Every community I care about is dead
How would Amazon track a voucher? It’s a physical scratch off code, sealed by Mullvad before they send it over to Amazon. More importantly, if you think that was possible why would Mullvad be unaware of it and/or lie about it? Just go with the vouchers if you want untraceability. They’re also cheaper in USD than other methods IIRC, at $29/6 months and $57/12 months.
“Vote with your wallet” applies to any sort of purchase. By giving someone money you are giving them the strongest possible encouragement to continue doing what they’re doing. If you purchase something that you end up not liking, they will still receive your initial vote loud and clear. The gaming industry especially has shown us that companies will happily take both the money and the negative review and say ‘thank you’.
I feel piracy for demo purposes is fully justified if you buy it after you like it. People always say vote with your wallet but it’s more like gambling with your wallet if you don’t get to see and touch the product before you make the purchase. Giving proper demos should be more common with digital media.
Semi-related, can these automations really be relied upon for quality, or is there a system to help find the best copy? When I’m downloading books I do it manually through Anna’s Archive, and I always download as many unique versions of a book as I can find, then open them all and compare their internals to decide which one to keep. Often only a few of the ebooks are suitable for using my own fonts and layouts in KOReader.
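For the comparison step, the most I’d trust an automation to do is surface rough signals and leave the final call to me. Here’s a toy sketch of that idea (the folder path and the EPUB-only assumption are mine, not part of any existing tool):

```python
# Toy helper for comparing several downloaded copies of the same book.
# Assumes the candidates are EPUBs sitting in one folder; prints rough
# signals (size, embedded fonts, DRM marker) to decide which to open first.
import zipfile
from pathlib import Path

def summarize(epub: Path) -> str:
    with zipfile.ZipFile(epub) as z:
        names = z.namelist()
        fonts = [n for n in names if n.lower().endswith((".ttf", ".otf"))]
        has_drm = any(n.endswith("encryption.xml") for n in names)
    size_mb = epub.stat().st_size / 1_000_000
    return f"{epub.name}: {size_mb:.1f} MB, {len(fonts)} embedded fonts, DRM marker: {has_drm}"

if __name__ == "__main__":
    for candidate in sorted(Path("~/books/incoming").expanduser().glob("*.epub")):
        print(summarize(candidate))
```

It still comes down to opening the survivors in KOReader, but it saves a bit of clicking around.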
The readme links to this one. It will automatically update after manual installation.
Are NPCs silent when talking? If so you’ll need to install faudio into your Wine prefix with Winetricks. Running it through Steam may also help a little, but I don’t remember if Proton includes faudio by default.
As for the cart crashing, that’s probably just Skyrim. The opening cutscene is notoriously buggy.
Yeah I wouldn’t bother. It intends for you to have a duplicate copy on every device, which is probably not what you want. Syncthing is really good for things like synchronizing notes, calendars, password databases, music, etc to your devices. Things that you want to access in both places, even though the devices are disconnected from each other from time to time.
Conduit is also licensed under Apache 2.0, so it could also be taken closed source at any point in time. The reason this wouldn’t impact Conduit as much is that there are other contributors, whereas Synapse and Dendrite are developed almost exclusively by Element.
Right. The current perspective is based on the idea that if Synapse/Dendrite went closed source right now, an open source version would be as good as dead. Element is responsible for 95% of Synapse/Dendrite, and I’m sure a community fork would have to play a lot of catch-up to figure out how to keep it going. If the community were more involved in Synapse/Dendrite development (and if Element let them) there would be less cause for alarm, as closing the source would just mean an immediate community fork and putting Element on ignore. Also, to reiterate, the Matrix Foundation is not going along with Element on this move, and even if Element pulled something shady the Matrix core spec etc. would still remain open and under the Foundation’s control, so the most we have to lose is Synapse/Dendrite and all of Element’s developers.
As for the rest I agree, and I do actually trust that Element is simply playing the only card it has. These maneuvers are all required for Element to survive as a company at all, but they also unfortunately leave this backdoor open as a consequence. Matthew has pinky-promised over and over that they are only acting in good faith and would never use the backdoor, but it’s understandable that its mere presence is making everyone uneasy. Best case scenario, we take this as a warning sign that if Element drops dead tomorrow then Matrix is also dead. If people don’t want Matrix to be practically owned by Element then we should diversify and prepare escape plans.
It depends on what your workflow/usecase for putting documents on the drive currently is. Syncthing is usually intended to be put on two separate devices, and then a folder on each device gets synchronized - meaning you have a folder of your documents on each device. Is there any reason not to just mount the network drive’s folder and drag the documents in that way?
This is actually quite a controversial change, mainly because of their switch to a CLA. This indirectly gives them the opportunity to take the code closed source whenever they feel like it in the future. Semi-controversially, they are also primarily making this AGPL change in order to begin selling dual licensing to companies. The Matrix Foundation itself does not support this change from Element, though Element is within its rights to make it.
You can read some more thoughts on this from the pessimistic folks at HackerNews. My main takeaway is that I don’t trust Element because I don’t trust anyone. I’m sure they’re doing this in good faith but I don’t like the power they have at the moment. I hope this is what’s needed to begin focusing efforts on alternative homeserver implementations like Conduit.
Syncthing - No introduction needed. Couldn’t live without it.
Healthchecks.io (you can self host this) - Dead man’s switch monitoring for all my automation. Most of my automated scripts hit up a Healthchecks endpoint when they run, and if they fail to hit the endpoint on a regular schedule I get notified. Mandatory for my anxiety.
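As an illustration, the pattern in most of my scripts boils down to something like this minimal sketch. The UUID is a placeholder, and the success/fail URLs follow Healthchecks’ public ping API:

```python
# Minimal dead man's switch pattern: do the work, then ping Healthchecks.
# If the ping stops arriving on schedule, Healthchecks sends an alert.
import urllib.request

PING_URL = "https://hc-ping.com/your-check-uuid-here"  # placeholder UUID

def do_backup() -> None:
    ...  # the actual automation goes here

if __name__ == "__main__":
    try:
        do_backup()
        urllib.request.urlopen(PING_URL, timeout=10)            # success ping
    except Exception:
        urllib.request.urlopen(PING_URL + "/fail", timeout=10)  # failure ping
        raise
```

If the script dies so hard that even the fail ping never goes out, the missed schedule still triggers a notification, which is the whole point.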
Did you actually apply the crack over top of the original files? You say you’re running the executable in the CODEX directory, but the CODEX directory would be where the temporary crack files are, not where the game is expecting them. You can also try installing the latest repack from KaOs, which you can find on 1337x by searching for “skyrim repack kaos”. The repack will auto-crack everything so you can be sure that at least that is done correctly.
The health is listed next to each piece at this URL: https://annas-archive.org/torrents
Usually torrents remain seeded because private tracker users are encouraged to seed everything forever. In addition, if a private tracker has a bonus system it will often offer extra bonus points for seeding low-seed torrents, and some even automatically mark torrents as freeleech when they drop below ~5 seeds, encouraging people to revive their seed counts in a targeted manner.
One potential advantage is that many private trackers are meticulously curated. The more people there are on a tracker, the harder it is to quality-control every single upload. Most of the top-tier trackers aren’t just a dumping ground for data; they have tons of categories and slots for each potential piece of data to go in, and if a better piece of data can fill a slot then the previous occupant needs to be reviewed, deleted, and replaced.
Another reason is that private trackers often have many rules to protect the overall health of the tracker and its swarms, e.g. minimum quality for uploads, minimum seed times, required ratios, etc. If anyone could get an account, a banned user could simply sign up again and break the rules over and over.
I prefer recertified ones if they’re significantly cheaper, but that’s up to you. Recertified will likely fail faster but when they’re close to ~60% of the cost it makes sense to gamble.
As for which RAID, that’s up to you and how you’re setting up your array. If you’re running ZFS then mirrored pairs are somewhat flexible, since you can add a pair of any size disks whenever you want, but they will cost you 50% of your disk space in redundancy. For RAID5/6 you want the disk sizes to match, and with ZFS you won’t be able to add disks to an existing RAIDZ1/2 vdev for about a year - the code that adds that feature (RAIDZ expansion) is only coming in the next release.
You should submit some issues for those sites with these steps - maybe they’ll add them to the list. The addon supports 453 sites at the moment by my count, so I’m sure they’d love for you to tell them about more sites that haven’t been bypassed yet.
Do you have any examples? I have never seen a paywall while using this. They have a list of supported sites, though I’m not sure if all of them are guaranteed to work 24/7 or if they need frequent updates.
The link I posted is for Firefox. The Chrome version is here, and it looks like it should continue working with MV3. (Obviously, the better solution is to stop using Chrom*. Mozilla is modifying Manifest V3 so adblockers/etc will continue to work in a post-MV3 world).
Edit: Added dev’s comments/issue-link on MV3
Bypass Paywalls Clean will let you read them, as an alternative to hiding. I think you have to manually install it but it will auto-update after that.
It was probably me. I use these two places + eBay primarily but I’m sure there are other good ones out there.
Edit: also for posterity this is a cool site but shucking drives hasn’t been viable for a long time as far as I’ve seen: https://shucks.top/
FYI: RAIDZ expansion just got merged: https://github.com/openzfs/zfs/pull/15022
Estimated timeline is about a year from now for OpenZFS 2.3 which will include it.
You can also use MergerFS+SnapRAID over individual BTRFS disks which will give you a pseudo-RAID5/6 that is safe. You dedicate one or more disks to hold parity, and the rest will hold data. At a specified time interval, parity will be calculated by SnapRAID and stored on the parity disk (not realtime). MergerFS will scatter your files across the data disks without using striping, and present them under one mount point. Speed will be limited to the disk that has the file. Unmitigated failure of a disk will only lose the files that were assigned to that disk, due to lack of striping. Disks can be pulled and plugged in elsewhere to access the files they are responsible for.
It’s a bit of a weird-feeling solution if you’re used to traditional RAID but it’s very flexible because you can add and remove disks and they can be any size, as long as your parity disks are the largest.
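If it helps to picture the non-realtime part, the scheduled piece is usually just a cron job or systemd timer wrapping the real snapraid commands. A minimal sketch of that wrapper (the script itself and the 5% scrub figure are my own choices, not part of SnapRAID):

```python
# Rough scheduled-job sketch for the SnapRAID side of a MergerFS+SnapRAID setup.
# "snapraid sync" recalculates parity for files changed since the last run,
# "snapraid scrub" re-reads part of the array and checks it against parity.
import subprocess

def run(cmd: list[str]) -> None:
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run(["snapraid", "sync"])              # update parity to match the data disks
    run(["snapraid", "scrub", "-p", "5"])  # verify ~5% of the array each run
```

Files written since the last sync are unprotected until the next run, which is the trade-off you accept for the flexibility.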
Yeah. What repackers do is they source the game, source all the updates, source the DLC, and source the cracks, then put it all together, make sure it works, compress it into an installer, and distribute it. As an end-user you run the installer to decompress the game and then click play. Repackers don’t actually do any of the cracking themselves; they just put it all in one package and make it easy. The installer will hammer your CPU while it undoes that heavy compression, so don’t be alarmed by that.
They’re available as an option. You can source from any of Fitgirl, DODI, or KaOs if they have the game you want. Other repackers do exist and they’re all generally trustworthy, but those 3 put out a lot of content and have a good track record. ElAmigos is another good one that puts out a lot of releases.
Mirrored vdevs allow growth by adding a pair at a time, yes. Healing works with mirrors because each of the two disks in a mirror is supposed to hold the same data. When a read or scrub happens, if there are any checksum failures ZFS will replace the failed block on Disk1 with Disk2’s copy of that block.
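Conceptually, the per-block logic during a read or scrub looks something like the toy sketch below. This is just an illustration of the idea, not ZFS internals:

```python
# Toy model of mirror self-healing: each block has a stored checksum;
# if one copy fails verification, it is rewritten from the good copy.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def read_with_heal(disk1: dict, disk2: dict, block_id: int, expected: str) -> bytes:
    a, b = disk1[block_id], disk2[block_id]
    if checksum(a) == expected:
        if checksum(b) != expected:
            disk2[block_id] = a          # heal the bad copy from the good one
        return a
    if checksum(b) == expected:
        disk1[block_id] = b              # heal disk1's copy instead
        return b
    raise IOError("both copies failed checksum; unrecoverable block")

if __name__ == "__main__":
    good = b"hello"
    d1, d2 = {0: good}, {0: b"hxllo"}    # disk2 holds a silently corrupted copy
    assert read_with_heal(d1, d2, 0, checksum(good)) == good
    assert d2[0] == good                 # corruption repaired on read
```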
Many ZFS’ers swear by mirrored vdevs because they give you the best performance, they’re more flexible, and resilvering from a failed mirror disk is an order of magnitude faster than resilvering from a failed RAIDZ - leaving less time for a second disk failure. The big downside is that they eat 50% of your disk capacity. I personally run mirrored vdevs because it’s more flexible for a small home NAS, and I make up for some of the disk inefficiency by being able to buy any-size disks on sale and throw them in whenever I see a good price.
Yeah, ECC RAM is great in general, but there’s nothing about ZFS that benefits from ECC any more than anything else you do on your computer. You are not totally safe from bit flips unless every machine in the transaction has ECC RAM. Your workstation could flip a bit in a file as it’s sending it to your ZFS pool, and your ECC’d ZFS pool will hold that bit flip as gospel.
The main problem with self-healing is that ZFS needs to have access to two copies of data, usually solved by having 2+ disks. When you expose an mdadm device ZFS will only perceive one disk and one copy of data, so it won’t try to store 2 copies of data anywhere. Underneath, mdadm will be storing the two copies of data, so any healing would need to be handled by mdadm directly instead. ZFS normally auto-heals when it reads data and when it scrubs, but in this setup mdadm would need to start the healing process through whatever measures it has (probably just scrubbing?)
Any sort of malicious activity that’s not bitcoin mining can be disguised easily within the background noise of normal CPU usage. If you’re getting pwned, assume you’re getting pwned for longer than the duration of the install program - any strenuous pwning could be spread out across hours or days if they really wanted. If you’re ultra-worried about whether repackers are trustworthy, you can always source clean game files and crack the game yourself.
ZFS can grow if it has extra space on the disk. The obvious answer is that you should really be using RAIDZ2 instead if you are going with ZFS, but I assume you don’t like the inflexibility of RAIDZ resizing. RAIDZ expansion has been merged into OpenZFS, but it will probably take a year or so to actually land in the next release. RAIDZ2 could still be an option if you aren’t planning on growing before it lands. I don’t have much experience with mdadm, but my guess is that with mdadm+ZFS, features like self-healing won’t work because ZFS isn’t aware of the RAID at a low-level. I would expect it to be slightly janky in a lot of ways compared to RAIDZ, and if you still want to try it you may become the foremost expert on the combination.
ZFS without redundancy is not ideal, in the sense that redundancy is preferable in every scenario, but it’s still a modern filesystem with a lot of good features, just like BTRFS. The main limitation is that it can detect data corruption but not heal it automatically. Transparent compression, snapshotting, data checksums, copy-on-write (power-loss resiliency), and reflinking are modern features of both ZFS and BTRFS, and BTRFS additionally offers offline deduplication, meaning you can deduplicate any data block that exists twice in your pool without incurring the massive resource cost of ZFS deduplication. ZFS is the more mature of the two, and I would use it if you’ve already got ZFS tooling set up on your machine.
Note that the TrueNAS forums spread a lot of FUD about ZFS, but ZFS without redundancy is OK. I would take anything alarmist from there with a grain of salt. BTRFS and ZFS both store 2 copies of all metadata by default, so metadata bitrot will be auto-healed at the filesystem level when it’s read or scrubbed.
Edit: As for write amplification, just use ashift=12 and don’t worry too much about it.
ZFS doesn’t eat your SSD endurance. If anything it is the best option since you can enable ZSTD compression for smaller reads/writes and reads will often come from the RAM-based ARC cache instead of your SSDs. ZFS is also practically allergic to rewriting data that already exists in the pool, so once something is written it should never cost a write again - especially if you’re using OpenZFS 2.2 or above which has reflinking.
My guess is you were reading about SLOG devices, which do need heavier endurance as they replicate every write coming into your HDD array (every synchronous write, anyway). SLOG devices are only useful in HDD pools, and even then they’re not a must-have.
IMO just throw in whatever is cheapest or has your desired performance. Modern SSD write endurance is way better than it used to be and even if you somehow use it all up after a decade, the money you save by buying a cheaper one will pay for the replacement.
I would also recommend using ZFS or BTRFS on the data drive, even without redundancy. These filesystems store checksums of all data so you know if anything has bitrot when you scrub it. XFS/Ext4/etc store your data but they have no idea if it’s still good or not.
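For comparison, this is roughly what you’d have to bolt on yourself to get even detection on a plain filesystem. A hypothetical manifest script, just to show what built-in checksumming buys you (ZFS/BTRFS do this per block, automatically, at write and scrub time); the manifest location and mount point are placeholders:

```python
# DIY bitrot check for filesystems without data checksums: record a SHA-256
# manifest once, then re-run later and compare to spot silently changed files.
import hashlib
import json
from pathlib import Path

MANIFEST = Path("manifest.json")   # placeholder location for the manifest
DATA_DIR = Path("/mnt/data")       # placeholder mount point for the data drive

def hash_file(p: Path) -> str:
    h = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan() -> dict[str, str]:
    return {str(p): hash_file(p) for p in DATA_DIR.rglob("*") if p.is_file()}

if __name__ == "__main__":
    current = scan()
    if MANIFEST.exists():
        old = json.loads(MANIFEST.read_text())
        for path, digest in old.items():
            if path in current and current[path] != digest:
                print("possible bitrot (or edit):", path)
    MANIFEST.write_text(json.dumps(current, indent=2))
```

Even then you only find out the file is bad; without a second copy or parity there’s nothing to heal it from.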
I’ll add that if you want to archive games forever, storing repacks is a good idea because of their extreme compression. From what I’ve observed, Fitgirl trends towards heavier compression while DODI trends towards faster install times.
KaOsKrew is another respected repacking group that you can trust.
If you’re familiar with What.CD and its shutdown, the power users and their local copy of that music archive moved over to redacted.ch (stats) and orpheus.network (stats). They’re private torrent trackers so they’re invite-only, but TMK they both still offer interviewing as an entry option: RED, OPS. The interviews mostly consist of technical audio information and private tracker rules. The main downside is that these trackers expect you to seed an equal amount back, so you don’t get a free pass to download everything without limits. Of the two, Orpheus is a lot easier to maintain “ratio” on since it gives you incremental credit just for having a large seedbase (even if no one is downloading from you). Ideally you should be on both if you’re serious about music collecting, but these days they are largely just mirrors of each other.
If you don’t want to get dirty in the private tracker world, I’d recommend Soulseek and RuTracker.
I would prefer 1. Restoring a failed ZFS mirror is easy, and you can continue to operate while a new drive arrives.
2 will get you more space in theory but you’ll have downtime with any problem like you said, and you’ll also have slower speeds without the mirror.
3 is unnecessary unless you have a good reason.
I don’t see any disadvantages with Proxmox and VMs on the same disk, as Proxmox shouldn’t have much activity going on.
My suggestion is to set up Proxmox under a VM and give it some virtual disks to replicate these setups and then yank a disk and try to recover. Write down the steps it takes to get back to a normal system and see if that affects your decision.
You probably have a higher attack surface from the gremlins in your walls. OTOH, Amazon knowing that you use Mullvad is a tangible downside, as they will probably use that to stick you in a marketing group or something. Monero is still an easy solution with the ~same cost if you’re concerned about that.