

If you don’t want to be on the bleeding edge and want a distro with longer support, CentOS Stream isn’t bad. Sure, there was some controversy when Red Hat killed the old CentOS. But setting that aside, the distro itself is pretty good and stable.


You can’t hard link across docker volumes. In the second example, you need to remove the /media/movies and /media/downloads volumes and keep only /media.

After fixing this, only future downloads will be hard-linked. Use a deduplication tool like jdupes to hard-link the files you’ve already downloaded.
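
For reference, a rough sketch of the cleanup (the path is an example; this only works because both directories sit under the same volume/file system):

```
# List duplicates first, to sanity-check what would be linked
jdupes -r /media

# Then replace the duplicates with hard links (-L modifies files, so check the list above)
jdupes -r -L /media
```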


Much better. SSDs and HDDs do monitor their own health (you can see many parameters through SMART), while pen drives and SD cards don’t.

Of course, they have their limits, which is why RAID exists. File systems like ZFS are built on the premise that drives are unreliable. It’s up to you if you want that redundancy. The most important thing for not losing data is backups. Ideally at least 3 copies, 1 off-site (e.g. in the cloud, or on a disk at some place other than your home).
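
If you want to check those SMART parameters yourself, smartmontools is the usual tool (the device name below is just an example):

```
sudo smartctl -H /dev/sda   # overall health self-assessment (PASSED/FAILED)
sudo smartctl -A /dev/sda   # attributes: reallocated sectors, pending sectors, wear, etc.
```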


PhotoRec and TestDisk are probably the best, but they don’t recover file structure.
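
Both are interactive and ship together in the testdisk package; usage is something like this (device name is an example):

```
sudo photorec /dev/sdb   # carves known file types; names and directory structure are lost
sudo testdisk /dev/sdb   # repairs partition tables, can undelete on some file systems
```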


Fuck up #1: no backups

Fuck up #2: using SD cards for data storage. SD cards and USB drives are ephemeral storage devices, not to be relied on. Most of the time they use file systems like FAT32 which are far less safe than NTFS or ext4. Use reliable storage media, like hard drives.

Fuck up #3: no backups.


I honestly prefer the bottom one to the modern 50-step wizards that take 10 seconds to load each page and pull in an ungodly amount of JS scripts.

A company I worked for was using an ancient bug tracking tool (called Pivotal) that looked like a 90s site. It was so fast and responsive. Later, we moved to something modern. It was 10 times worse, significantly slower and overly complex.


When you release something, your work is not done. You have to maintain it, fix bugs, release patches, and, probably the worst part, keep it up to date.

For example, Apple decides to deprecate some API, or switches CPU architectures, or changes how app signing works for the millionth time, or adds some new security feature that breaks your app. Now you need to make your app work properly on the new platform, switch APIs, all the fun. Or there’s some critical vulnerability in a library you used, and customers are deleting your app from their computers (a lot of companies use automated scanners that check against published CVEs). It’s most fun when you learn that the new version that fixes the vulnerability completely breaks compatibility with the old one, and now you have to rewrite all the code that used that library.

Also, maintaining open source projects is not fun. It’s a lot of work, in most cases unpaid, thankless, and building a community around a project is really hard.


If you store them properly and create fresh backups on new discs every couple of years, they can last a long time.


The biggest disadvantage of physical media is DRM. With the exception of music, which usually isn’t locked, pretty much all optical discs have some form of region locking. Software and video games also typically have additional DRM schemes. Some are easy to bypass (e.g. no-CD cracks). Online activation is the worst, because it relies on the game publisher keeping the servers alive.


Google isn’t any better. And there aren’t a lot of phone operating system options you can choose from.


While in this particular case I agree with you, I’ve noticed a frustrating trend that just keeps getting worse. On one hand, search engines are failing to adapt to content farms. On pretty much any topic, you will find generic sites with poorly written articles that are hard to distinguish from AI output. Try searching for “best linux distro” to see what I mean. Even on programming topics, you will find many sites that simply copy content from Stack Overflow and GitHub.

On the other hand, people aren’t making websites and blogs anymore. More and more people are only using social media platforms, which aren’t being indexed by search engines. I hate seeing that so many discussions are now on Discord instead of forums. How many Twitter threads have you seen that should have been blog posts?


I wouldn’t recommend Optiplexes… HP, Dell, and Lenovo pre-builts use proprietary parts, making them a pain in the rear to work with. I recommend getting a PC made with standard parts.


Personally I prefer older PCs in standard form factors. I avoid HP, Dell, and Lenovo pre-builts because they use proprietary power supplies and motherboards, making them difficult to upgrade. Laptops aren’t really upgradable, they don’t have enough SATA ports, and USB isn’t reliable enough for storage. Raspberry Pis, while power efficient, are too underpowered. Old server hardware is also an option, but it’s generally too noisy.


Containers are very useful because they isolate the application from the rest of your server.

This solves a lot of problems: no dependency conflicts with your operating system, you can upgrade/downgrade any time you want, no state gets stored on your main system which makes resetting the application when it misbehaves as easy as deleting and recreating the container.

Before containers, changing my host OS (e.g. because ZFS wasn’t properly supported on the distro I was using) meant reinstalling and configuring a lot of shit, which could take days. With docker, I can migrate in 1-2 hours… Just install docker on the new OS, copy over the files, docker compose up a few times, and done. The only things left to set up are samba, ssh, and a few cron jobs.
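
Roughly, the migration looks like this (hostname and paths are made up; I keep everything under one directory):

```
# On the new OS: install docker, then pull the data and compose files over
rsync -a oldhost:/srv/containers/ /srv/containers/

# Bring each stack back up; images are re-downloaded, state comes from the copied files
cd /srv/containers/jellyfin && docker compose up -d
cd /srv/containers/nextcloud && docker compose up -d
```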


My experience wasn’t as bad, but after the third time the database got corrupted during an upgrade, I stopped using it.


Yes!! I enjoy playing with retro tech and was actually surprised by how much you can do with an ancient Pentium 2 machine, and by how responsive the software of that era was.

I really dislike how inefficient modern software is. Like stupid chat apps that use more RAM while sitting in the background than computers had 15-20 years ago…


Jellyfin or Plex media server on the NAS.

To view content, there are several options. Both servers have client apps for various platforms; these usually provide the most features and the best experience. Another option is a browser, since both come with an integrated web server. The third option is DLNA, a media streaming protocol that many players already support, though it may be a bit more limited.
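
If you go the Jellyfin route, a minimal containerized setup looks something like this (paths are examples; jellyfin/jellyfin is the official image):

```
# 8096 is the web UI / client API port; /config holds server state; media is read-only
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin
```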


Do people actually use non-English keyboards? Every computer I’ve ever owned used the standard US layout. If I want to type in my language, I switch layouts; I’ve simply learned where the characters are. But 99% of the time, I’m using the US layout.


I honestly don’t think it’s so bad. There are some things that make it look ugly: the Hungarian notation, the fact that it’s a C API (which means everything is plain functions, with the limitations that come with that), and a lot of legacy stuff kept for backwards compatibility. There is a lot of “we did it this way before we knew the right way of doing it, but now we’re stuck with it because of backwards compatibility.”

I think MFC is a lot worse. It’s basically a C++ API that wraps a lot of things from the Win32 API. It heavily relies on macros, and I really dislike it in general. And don’t get me started on COM.


You can’t run a debugger on a customer’s machine.


I wish there were more Sims-like games. I feel that under EA the games aren’t living up to their full potential, and they could be so much more.


Wobbly Life is like that. It’s targeted more towards children, but my kids absolutely love it.


Yes, it’s called a text file. If you put .sh at the end of the file name (and mark it executable), you can even run it directly from the command line.

You can even copy full curl commands from the browser’s network tab, so you can reproduce the exact request.
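
As a trivial example (the file name, URL, and header are placeholders), a pasted “Copy as cURL” command saved as a script:

```
#!/usr/bin/env bash
# get-items.sh - replay a request captured from the browser's network tab
curl 'https://example.com/api/items?page=1' \
  -H 'Accept: application/json' \
  --compressed
```

chmod +x get-items.sh and it runs like any other command.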



From a career perspective, think of languages and frameworks as tools. Knowing how to work with more tools broadens your horizons: what you can achieve, and how efficiently. Sure, you can specialize in certain tools, but tools come and go.


Something a lot of open source projects lack is designers and UX experts. Translation is also something a lot of people can help with. Documentation writing too.

For the programming community at large, sharing knowledge is a great thing to do. There are many channels available: blogs, wikis, even videos on YouTube.


In terms of performance and flexibility, building your own is better. It depends on what you want out of it.

If all you want is an easy-to-set-up NAS with no bells and whistles, get a Synology. If you want to build a server that also acts as a NAS, or if you want to be in control of the software, build your own.

You don’t even need server hardware. I used an older desktop computer with an HBA card. It’s also less noisy and much smaller.


Self hosting basically means you are running the server application yourself. It doesn’t matter if it’s at home, on a cloud service or anywhere else.

I wouldn’t recommend hosting a social network like lemmy, because you would be legally responsible for all the content served from your servers. That means a lot of moderation work. Also, these types of applications are very demanding in terms of data storage; you end up with an ever-growing dataset of posts, pictures, etc.

But self hosting is very interesting and empowering. There are a lot of applications you can self host: media servers (Plex, Jellyfin), a personal cloud (like Google Drive) with Nextcloud, ad blocking with Pi-hole, sync servers for various apps like Obsidian, the password manager Bitwarden, etc. You can even make your own website by coding it, or by using a CMS platform like WordPress.

Check the Awesome Self-hosted list on GitHub; it has a ton of great stuff.

And in terms of hardware, any old computer or laptop can be used; just install your favorite server OS (Linux, FreeBSD/OpenBSD, even Windows Server). You can play with virtualization too, if you have enough horsepower and memory, with ESXi or Proxmox, so you can run multiple servers at once on the same computer.
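
To get a feel for it on whatever old box you have, something like this is enough (package name is the Debian/Ubuntu one; it varies by distro):

```
sudo apt install docker.io
sudo docker run --rm hello-world   # prints a confirmation message if containers work
```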


Also renamed XML, renamed JSON, and renamed SQLite.



Look at commercial TVs, the ones used by businesses. Some even come with an RPi slot.


If you only need Nextcloud on your local network, a quick and dirty way of assigning hostnames to machines is the hosts file. Obviously, this has to be done on every computer from which you wish to access Nextcloud. Also, non-rooted mobile OSes don’t let you edit the hosts file.
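
For example (IP and hostname are made up):

```
# /etc/hosts on each client machine
192.168.1.50    nextcloud.lan
```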

Alternatively, you can set up a local DNS server. Pi-hole also has that capability (I personally had mixed results with Pi-hole, not sure if I did something wrong). Some routers may have that too.

If you need it public on the internet, yes, you need a domain name. Some providers offer free domains (though it will be a subdomain of the provider). Something to keep in mind is that your IP is probably dynamic: when you connect to the Internet, the ISP assigns you a random IP address from its pool. To keep the domain up to date, you will need to set up a dynamic DNS solution. This is a simple script/program that periodically checks your IP and, if it changed, updates the domain automatically.
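
A bare-bones sketch of such a script (the update URL and token are hypothetical; real providers like DuckDNS or Cloudflare each have their own API):

```
#!/usr/bin/env bash
# Compare the public IP with what DNS currently says; push an update on change.
DOMAIN="myhome.example.com"
CURRENT=$(curl -s https://ifconfig.me)
RECORDED=$(dig +short "$DOMAIN" | tail -n 1)

if [ "$CURRENT" != "$RECORDED" ]; then
  curl -s "https://ddns.example.com/update?domain=${DOMAIN}&ip=${CURRENT}&token=SECRET"
fi
```

Run it from cron every few minutes and the domain follows your IP.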



I would recommend just using a server Linux distro with docker. It does require a bit of learning, but it is well worth it. I would personally go with AlmaLinux, but Debian, Rocky Linux, CentOS Stream, and Ubuntu Server are also fine choices.

You can find docker images on Docker Hub for pretty much everything, and even if you don’t, creating Dockerfiles isn’t that hard. This is very convenient because you know where the configuration and data for everything is, you can easily control access (file system, ports, permissions), and it’s easy to update. And if you need to reinstall the OS, migrating docker containers is as easy as copying the data and config files.
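
For example, with one directory per service (the layout is just how I happen to organize it), updating is two commands:

```
cd /srv/containers/jellyfin   # each service: docker-compose.yml plus its data/config dirs
docker compose pull           # grab newer images
docker compose up -d          # recreate the container with the same config and data
```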