• 0 Posts
  • 34 Comments
Joined 1Y ago
Cake day: Jun 27, 2023


Whatever you get for your NAS, make sure it’s CMR and not SMR. SMR drives do not perform well in NAS arrays.

I just want to follow this up and stress how important it is. This isn’t “oh, it kinda sucks but you can tolerate it” territory. It’s actually unusable after a certain point. I inherited a Synology NAS at my current job which is used for backup storage, and my job was to figure out why it wasn’t working anymore. After investigation, I found out the guy before me populated it with cheapo SMR drives, and after a certain point they just become literally unusable due to the ripple effect of rewrites inherent to shingled drives. I tried to format the array of five 6TB drives and start fresh, and it told me it would take 30 days to run whatever “optimization” process it performs after a format. After leaving it running for several days, I realized it wasn’t joking. During this period, I was getting around 1MB/s throughput to the system.

Do not buy SMR drives for any parity RAID usage, ever. It is fundamentally incompatible with how parity RAID (RAID5/6, ZFS RAID-Z, etc) writes across multiple disks. SMR should only be used for write-once situations, and ideally only for cold storage.


Refurbished drives get their SMART data reset during the refurbishment process; they absolutely had more than that originally.


The games will still be designed by humans. Generative AI will only be used as a tool in the workflow for creating certain assets faster, or for creating certain kinds of interactivity on the fly. It’s not good enough to wholesale create large sets of matching assets, and despite what folks may think, it won’t be for a long time, if ever. Not to mention, people just don’t want that. People want art to have intentional meaning, not computer generated slop.


This is no different from anything else: we naturally appreciate the skill it takes to create something entirely by hand, even if mass production is available.


If you’re waiting for Jellyfin to run some kind of relay like Plex, you’ll be waiting a long time. That takes a lot of money to maintain, and the demand from people who self-host FOSS and then want to depend on an external service is very minimal, certainly not enough to sustain such a service. I’d recommend just spending a weekend afternoon learning how to set up Nginx Proxy Manager and being done with it; the GUI makes it very easy.
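For reference, the container side is genuinely one command; a rough sketch (I believe the standard image is jc21/nginx-proxy-manager, and the volume paths here are just examples):

```bash
# Minimal sketch, not an official guide: ports 80/443 carry the proxied
# traffic, 81 is the admin web UI; volume paths are placeholders.
docker run -d \
  --name nginx-proxy-manager \
  --restart unless-stopped \
  -p 80:80 -p 443:443 -p 81:81 \
  -v "$PWD/npm/data:/data" \
  -v "$PWD/npm/letsencrypt:/etc/letsencrypt" \
  jc21/nginx-proxy-manager:latest
# Everything else (proxy hosts, Let's Encrypt certs) is done in the web UI on port 81.
```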


I chose Bookstack for the same situation. It’s dead simple in usage and maintenance. No issues yet!


I have an OG Xiaomi Mi Box, and it’s absurd how over the years it went from a purely functional media device to a complete shit show covered in ads. It genuinely disgusted me every time I turned the TV on. I couldn’t stand it anymore, so I had to tear out the launcher with ADB and replace it with FLauncher.
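For anyone else stuck with one of these, the ADB part is only a couple of commands; a rough sketch from memory (the stock launcher package name may differ by firmware, so verify it first):

```bash
# Rough sketch from memory; the launcher package below is the usual Android TV
# one and may differ on your firmware, so confirm it before disabling anything.
adb connect 192.168.1.50:5555                     # your box's IP, with ADB debugging enabled
adb shell pm list packages | grep -i launcher     # confirm the actual package name
adb shell pm disable-user --user 0 com.google.android.tvlauncher
# With the stock launcher disabled, FLauncher (installed from the Play Store
# or sideloaded with "adb install") takes over as the home app.
```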

I wish Kodi wasn’t such a pain in the ass to deal with, especially for YouTube. We really need a new FOSS media center application. Until then, at least FLauncher works for now as a simple app switcher for a handful of Android apps.


Recently started using Tempo with Navidrome. Haven’t had more than a few days of use yet, but everything has worked exactly as expected! Can’t ask for much more than that.


When the corporation wars start over the remaining arable land and drinkable water, I’ll be joining the Steam Corps


I very recently started using borgbackup. I’m extremely impressed with how much it compressed the data before sending, and how well it detects changes and only sends the difference. I have not yet attempted a proper restore from backup, though.

I have much less data I’m currently securing (~50 GB) and much more uplink bandwidth (~115 Mbps), so my situation isn’t nearly as dire. But it was able to compress that down to less than 25 GB before sending, and after the initial upload, the next week’s backup only required about 100 MB of data transfer.

If you can find a way to seed your data from a faster location, reduce the amount you need to back up, and/or break it up into multiple smaller transfers, this might be an effective solution for you.
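For anyone curious what that looks like in practice, a minimal sketch (the repo path, compression level, and retention settings below are just example choices):

```bash
# Minimal sketch; repo location, compression, and retention are examples.
export BORG_REPO='ssh://user@backup.example.com/./backups/myrepo'

borg init --encryption=repokey-blake2     # one-time repository setup

# First run uploads everything (deduplicated + compressed); later runs only
# transfer the chunks that actually changed.
borg create --stats --compression zstd,3 ::'{hostname}-{now}' /home/me/data

# Keep the repository from growing forever.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```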

BorgBase’s highest plan has an upper limit of 8 TB, which you would be brushing right up against, but Hetzner Storage Boxes go up to 20 TB and officially support Borg.

Outside of that, if you don’t expect the data to change often, you might be looking for some sort of cheap S3 storage from AWS or other similar large datacenter company. But you’ll still need to find a way to actually get them the data safely, and I’m not sure if they support differential uploads like Borg does.


The goal here is to make it difficult to link to things uploaded to Discord from outside of Discord. The malware reason is BS. If they wanted to curb malware, it would be as easy as making it a Nitro feature. What that doesn’t fix is all the people piggybacking on Discord as a free CDN.

Discord isn’t even wrong for doing this. I just resent their dishonesty.



Convincing argument, but unfortunately a cursory Google search will reveal he was right. There is very little CPU overhead. The only real consideration is a bit of extra storage and RAM to store and load the redundant dependencies of the container.


While that isn’t false, defaults carry immense weight. Also, very few have the means to host at scale like Docker Hub; if the goal is to not just repeat the same mistake later, each project would have to host their own, or perhaps band together into smaller groups. And unfortunately, being a good programmer does not make you good at devops or sysadmin work, so now we need to involve more people with those skillsets.

To be clear, I’m totally in favor of this kind of fragmentation. I’m just also realistic about what it means.


Never trust corporations. If you’re not profitable, they will abandon you. Only trust community-driven projects with a true open source commitment.


Proxmox is completely different from Docker. Proxmox is focused on VMs and, to a lesser extent, LXC containers. If you think you will have a need to run VMs (for example, a Windows VM for a game server that doesn’t support Linux), Proxmox is great for that.

I run Docker on a dedicated VM inside Proxmox, and then I spin up other specialized VMs on the same system when needed. The Docker VM only does Docker and nothing else at all.


because of the check against darkweb leaks or whatever type feature when you pay. That’s seems like an anti privacy thing. I understand it’s a good idea albeit seems to expose a lot of information about you

For the password leak checks, your passwords are never transmitted. They are one-way hashed locally, and then only the first few characters of the hash are checked against the API provided at https://haveibeenpwned.com which is run and designed by Troy Hunt, one of the most respected people in the cybersecurity industry. He collects major password breaches and makes them available to check against without actually exposing the data. It’s perfectly safe and secure.
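You can even reproduce the check by hand to see how little leaves your machine; a rough sketch against the public Pwned Passwords range API (the password is obviously just an example, and this is the public API rather than whatever code path your password manager uses internally):

```bash
# k-anonymity lookup by hand; 'hunter2' is just an example password.
HASH=$(printf '%s' 'hunter2' | sha1sum | awk '{print toupper($1)}')
PREFIX=${HASH:0:5}   # only these 5 characters are ever sent over the network
SUFFIX=${HASH:5}

# The API returns every known hash suffix for that prefix along with a breach
# count; the actual match happens locally on your machine.
curl -s "https://api.pwnedpasswords.com/range/$PREFIX" | grep "$SUFFIX"
```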


I use Portainer a lot and have no issues with it. There’s very little you can’t do without Portainer though, it’s just a convenient web frontend to access Docker tools. It’s helpful if you manage a lot of stuff or multiple hosts. I also use it at work to expose basic management to members of my team who aren’t Linux or Docker savvy.


We all go down this hole at the start. The truth is, you should only reserve IPs if you actually need them to stay the same. You don’t need to check IPs as often as you think, I promise. The only segmentation and planning you should do for a home network is for subnets/VLANs: LAN, Guest, IoT, Server, etc.

Instead of managing the IP addresses, just manage hostnames. Make sure every device with a customizable hostname is easily identifiable. This will help you so much more in the long run.


That’s what I do. All my IoT stuff that I can’t get wired or connected via Zigbee/Z-Wave goes on a separate VLAN along with my Home Assistant server. I have an mDNS repeater for ease of access to TV stuff via apps (I might spin the TVs off into their own VLAN, I just haven’t gotten around to it), but a one-way firewall rule only allows the main network to initiate connections. Certain devices that don’t need internet at all get static IPs and are completely firewalled.


It’s a docker container that runs an OpenVPN/Wireguard client in order to provide a connection for other containers, yes.
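In case it helps to see the shape of it, a rough sketch (gluetun is one popular image for this; the provider credentials/env vars it needs are image-specific and omitted here):

```bash
# Rough sketch; gluetun is just one example image, and the VPN provider
# credentials it requires (passed as env vars) are omitted here.
docker run -d --name vpn --cap-add=NET_ADMIN qmcgaw/gluetun

# Other containers share the VPN container's network namespace, so all their
# traffic goes through the tunnel; if the VPN drops, they lose connectivity
# instead of leaking out the normal interface.
docker run -d --name downloader --network=container:vpn some/download-client
```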


I don’t want to spend 30 minutes traveling from one side of a map to the next

I’m not talking 30 minutes. There should be options that let the player do it in a few, depending on the scale.

Just let me get there immediately so I can talk to this single person and get this item I will never use.

You’re encouraging bad design in order to facilitate bad content. There also shouldn’t be much, if any, mailman content either; that’s just filler.


I strongly dislike in-game teleporting and pause-menu quick travel. I’d much rather the game have more ways for me to get to where I’m going than simply materializing wherever I want to be.

Let the travel itself be part of the game instead of just a way to link the “real” parts of the game together. Make it fun and fast to move around, add unlockable shortcuts, add more in-universe traveling options. Let me get to where I’m going myself instead of doing it for me, and make it fun to do so.

This is especially true in open world games, which are the worst offenders. Literally, what is the point of making an open world and then letting people skip it? You see everything once and that’s it. If you make an open world full of opportunities to wander and explore, and then players want to avoid it as much as possible via teleportation, you have failed as a designer.


This is a completely valid option and one that more people should consider. You don’t have to self-host everything, even if you can. I actually prefer to support existing instances of stuff in a lot of cases.

I use https://disroot.org for email and cloud, and I’m more than happy to kick them a hundred bucks a year to help support a community. Same with https://fosstodon.org for Mastodon. I’m fully capable of self-hosting these things, but I actively choose to support them instead so that their services can be extended to more than just myself. I chose those two because they send excess funds upstream to FOSS projects. I’m proud to rep those domains.


Cheers for this, I just bought a stack of new hard drives myself and this is exactly what I didn’t know I needed.


You can absolutely attach each VM, and even the host, to separate NICs that each connect back to the switch and have their own VLAN. You can also attach everything to one NIC and just use a virtual bridge (or several) on the host to connect everything. Or any combination thereof. You have complete freedom in how you want to do it to suit your needs. How this is done depends on what you’re using as a hypervisor on the host, though, so I can’t give you exact directions.

One thing I should have thought of before: if two NICs are on a single PCI card, you probably can’t pass them through to the VM independently of one another. That would limit you to virtual networking if you want to split them.
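Purely as an illustration of the virtual-networking route (the exact steps depend on your hypervisor, and the interface names here are examples), on a plain Linux/KVM host it boils down to a bridge the VMs attach to:

```bash
# Illustration only; interface names are examples, and many hypervisors
# (e.g. Proxmox with its vmbr0 bridges) manage this for you.
ip link add name br0 type bridge
ip link set eth0 master br0      # enslave the physical NIC to the bridge
ip link set br0 up
# VMs then attach virtual NICs to br0, and VLAN tagging can be handled on the
# bridge or inside the guests.
```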


Having tried both, I found it far easier and less troublesome to just add a PCI passthrough than to worry about managing the network both on the host and in the VM. As long as FreeBSD supports the driver, I strongly recommend passthrough over virtualized NICs.


Yeah, this is perfectly doable. I ran a very similar setup for a while. I’d recommend passing one of the NICs directly through to the VM and using one for the host to keep it simple, but you can also virtualize the networking if you need something more complex. If you do pass through a single NIC, you’ll need a switch capable of handling VLANs and a bit of knowledge on how to set up what’s called a “router on a stick” with everything trunked over one connection and only separated by VLANs.

Keep in mind, while this is a great way to save resources, it also means these systems are sharing resources. If you need to reboot, you’re taking everything down. If you have other users, that might be annoying for everyone involved.


Heroes of Newerth was the most toxic community I’ve ever been a part of. Nothing comes even close. It was rotten from top to bottom and made me quit a game I otherwise loved to play. I’m talking “the CEO frequently calls people slurs in all chat” level of bad.


I’ve used both, each for a long stretch of time; they are fundamentally very similar, and you’ll be fine with either. I switched to AdGuard Home entirely because I could run it directly on my OPNsense router instead of a second machine. There isn’t anything else major I’ve noticed that differs between them, but my usage is fairly basic. AdGuard’s interface felt a bit more mature and clean, but that’s it.

If you’re happy with your PiHole, there’s no reason I’m aware of to switch.


How the fuck is Reddit closing their API behind a ridiculous paywall only the SECOND stupidest social media move of the day?


This doesn’t really bother me because FSR is open source and platform neutral.


Once per day I enable light mode for two minutes


Almost nothing Valve has worked on is only for SteamOS, other than packaging and distributing SteamOS itself. They’ve upstreamed kernel patches, RADV patches, KDE patches, etc., which affect all desktops. Not to mention the open source tools like Gamescope and Fossilize, the latter of which is used automatically on all Linux PCs playing Steam games, and their contributions and funding to Wine and other projects. Even the new Steam Big Picture UI, which was initially only available on SteamOS, is now broadly available.

It’s no exaggeration to say that Valve is carrying Linux gaming these past few years. It has been a downright renaissance.