Cake day: Jun 30, 2023

A lot of other people who took the test got largely the same result as when they joined the company — my results had worsened (by the HR Manager’s standards) — she later told me that I was anti-authoritarian and more likely to do what I thought was right rather than what I had been instructed to do. […]

She mentioned that my chances of securing the job upon re-interviewing at the company were slim due to my psychometric profile.

What a nice thing to say to one of your senior employees. HR people really are something else. They could’ve easily lost him that day because of some random bullshit.


Use something like pgAdmin, DBeaver or the psql CLI to connect to your Postgres instance. Then run the command from the changelog as a SQL query.
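If you go the CLI route, it's a one-liner. A sketch, assuming psql is installed and your server runs locally; host, user and database name are placeholders:

```shell
# Run a single query non-interactively; replace host/user/db with your own.
# psql prompts for the password, or reads it from PGPASSWORD / ~/.pgpass.
psql -h localhost -U myuser -d mydb -c "SELECT version();"
```

For a longer changelog snippet you can also put it in a file and run `psql ... -f changelog.sql`.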


Maybe you could install a local mail client like Thunderbird and connect it to your Gmail via POP3? POP will download the mails and delete them from the server. Then you’ll just have to figure out how to export the mails from Thunderbird/your client of choice.

EDIT: This article contains relevant information.

EDIT 2: Alternatively, you could just use IMAP instead of POP to download everything and then delete the mails from the server manually.


You can get a quick overview via DSM, I think in the Storage Manager. For more details you could jump into a terminal and use smartctl.
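A quick smartctl sketch; the device names are assumptions, so adjust them to your system (you'll likely need sudo):

```shell
# List detected drives, then query SMART data for one of them.
smartctl --scan
smartctl -H /dev/sda   # overall health self-assessment (PASSED/FAILED)
smartctl -A /dev/sda   # attribute table: reallocated sectors, pending sectors, etc.
```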


Have you checked the SMART values of your drives? Do they give you a reason for your concerns?

Anyhow, you should never be in a position where you need to worry about drive failure. If the data is important, back it up separately. If it isn’t, well, don’t sweat it then.


Why would you buy something new if your current solution works and your requirements don’t change? Just keep it.


Wasabi S3 is nice and cheap. You’ll only pay what you use, so probably just a few cents in your case.

Oops, nevermind:

If you store less than 1 TB of active storage in your account, you will still be charged for 1 TB of storage based on the pricing associated with the storage region you are using.


I recently upgraded three of my Proxmox hosts with SSDs to make use of Ceph. While researching I faced the same question - everyone said you need an enterprise SSD, or Ceph would eat it alive. The feature that apparently matters most in my case is Power Loss Protection (PLP). It’s not even primarily there to protect against a possible outage: with PLP the drive can safely acknowledge sync writes from its cache, which is where the performance gain comes from.

There are some SSDs marketed for data center usage; these are generally enterprisey. Often they are classified for “Mixed Use” (read and write) or “Read Intensive”. Other interesting metrics are Drive Writes Per Day (DWPD) and, obviously, TBW and IOPS.

In the end I went with used Samsung PM883s.

But before you fall into this rabbit hole, check if you really need an enterprise SSD. If all you’re doing is running a few VMs in a homelab, I would expect consumer SSDs to work just fine.



No, the registrar just registers the domain for you (duh). You can then change the DNS records for this domain and these records will propagate to other DNS servers all around the world. Your clients will use some of these DNS servers to look up the IP address of your server and then connect to this IP.

The traffic between your clients and server has nothing to do with your domain registrar.


Yep, spotDL is nice. It doesn’t download from Spotify directly, though, for legal reasons. Instead, it searches for the songs on other sites, for example YouTube Music, and downloads them from there. So YMMV depending on which songs you’re trying to get.




You could look into mainboards with IPMI. They give you a web based interface to fully control your server, including power management, shell, sensor readings, etc.


Also not a fan of the closed-source thing, but what I like about Obsidian is that it’s all just Markdown. If I ever need to ditch it, I can keep and use my existing files as they are.

Would this also be possible with Zettlr or Logseq?



I would have thought that building an automated warehouse starts with designing robots and warehouses that complement each other. Using humanoid robots seems strange - I doubt that evolution gave us the optimal shape to work in a warehouse.

He denied this would lead to job cuts, however, claiming that it “does not” mean Amazon will require fewer staff.

Sure thing. As if Amazon’s endgame isn’t always to reduce costs and increase profits. They don’t give a shit about their employees or people in general.




I love Jellyfin but I would absolutely not make it accessible over the public internet. A VPN is the way to go.



Yeah, tail would be the more obvious choice instead of negating head.

Fuck, I need coffee. @klay@lemmy.world is right (again).



This line seems to list all dumps and then delete all but the two most recent ones.

In detail:

  • ls -1 /backup/*.dump lists all files ending with .dump alphabetically inside the /backup directory
  • head -n -2 prints every line except the last two, i.e. everything but the two newest dumps (assuming the filenames sort chronologically)
  • xargs rm -f passes the filenames to rm -f to delete them
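Here’s the whole pipeline reconstructed so it’s safe to try, using a temp directory instead of /backup; the timestamped filenames are an assumption (the “most recent” logic only works if names sort chronologically):

```shell
# Demo of the cleanup line using a scratch directory instead of /backup.
dir=$(mktemp -d)
touch "$dir/db-2024-01-01.dump" "$dir/db-2024-01-02.dump" "$dir/db-2024-01-03.dump"

# List dumps alphabetically, drop the last two lines (the newest files),
# and delete whatever is left. -r stops xargs from running rm on empty input.
ls -1 "$dir"/*.dump | head -n -2 | xargs -r rm -f

ls -1 "$dir"   # only the two newest dumps remain
```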

Take a look at explainshell.com.


Yeah, the quality is really good. It’s also not cheap. I bought this case mostly because it’s rather shallow and fit into my previous server rack.

I’m now at a point where I should buy another drive cage but I’m a bit hesitant to spend 150€ for it. Well…

Edit: Any reason you decided to go with a non-server mainboard without IPMI and ECC support?


Fun! I used the exact same chassis for my NAS. Thanks for sharing!


I don’t use rclone at all; restic is perfectly capable of backing up to remote storage on its own.


I’ve been working in IT for about 6/7 years now and I’ve been selfhosting for about 5. And in all this time, in my work environment or at home, I’ve never bothered about backups.

That really is quite a confession to make, especially in a professional context. But good for you to finally come around!

I can’t really recommend a solution with a GUI, but I can tell you a bit about how I back up my homelab. Like you, I have a Proxmox cluster with several VMs and a NAS. I’ve mounted some storage from my NAS into Proxmox via NFS, and that’s where I let Proxmox store backups of all VMs.

On my NAS I use restic to back up to two targets: an offsite NAS that holds full backups, plus Wasabi S3 for the stuff I really don’t want to lose. I like restic a lot and found it rather easy to use (also coming from borg/borgmatic). It supports many different storage backends and multithreading (looking at you, borg).
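For reference, a restic-to-S3 run boils down to a few commands. A minimal sketch, assuming the S3 backend: the bucket name, password file and paths are placeholders, and the AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY variables must be set for your Wasabi credentials:

```shell
# Repository URL and paths are placeholders - adjust to your own setup.
export RESTIC_REPOSITORY="s3:https://s3.wasabisys.com/my-backup-bucket"
export RESTIC_PASSWORD_FILE="/root/.restic-password"

restic init                       # only needed once per repository
restic backup /mnt/tank/important # incremental, deduplicated snapshot
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```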

I run TrueNAS, so I make use of ZFS Snapshots too. This way I have multiple layers of defense against data loss with varying restore times for different scenarios.


One simple way to pull the new image into your cluster is to overwrite the latest tag, specify imagePullPolicy: Always in your deployment and then use kubectl rollout restart deployment my-static-site from within your pipeline. Kubernetes will then terminate all pods and replace them with new ones that pull the latest image.

You can also work with versioned tags and kubectl set image deployment/my-static-site site=my/image:version. This might be a bit nicer and allows imagePullPolicy: IfNotPresent, but you have to pass your version number into your pipeline somehow, e.g. with git tags.
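For the first approach, the relevant bit of the deployment might look like this; a minimal fragment reusing the names from above (`my-static-site`, `my/image`), everything else is assumed boilerplate:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-static-site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-static-site
  template:
    metadata:
      labels:
        app: my-static-site
    spec:
      containers:
        - name: site
          image: my/image:latest
          imagePullPolicy: Always   # re-pull the tag on every pod start
```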


You’d need to post your complete docker-compose.yaml, otherwise nobody knows what you’re doing.

Also (and I don’t want to sound rude) you should probably start learning docker with a less critical service. If you just learned how volumes work you should not store your passwords in one. Yet.


While some argue this is unethical, others justify it since Rutkowski’s art has already been widely used in Stable Diffusion 1.5.

What kind of argument is that supposed to be? We’ve stolen his art before so it’s fine? Dickheads. This whole AI thing is already sketchy enough, at least respect the artists that explicitly want their art to be excluded.


That’s a very specific problem and I don’t know if there is an existing solution that does exactly what you want.

paperless-ngx does a lot of what you’re asking for: it lets you upload PDFs, does OCR and gives you full-text search via a web UI. It’s just not made specifically for manuals, and it doesn’t highlight the search hits or scroll to them.


I have no experience with terraform but Bitwarden has an API and CLI, so you might be able to script something with it?


Check out Porkbun; they have low prices and an API that’s supported by ddclient.


Just wanted to add that you can get Jeff Geerling’s book “Ansible for DevOps” for free right now:

https://leanpub.com/ansible-for-devops/c/CTVMPCbEeXd3


I assume you’re not really experienced with storage servers? Then I would likely recommend a Synology NAS. They give you great software that you can easily configure without deeper knowledge of the inner workings. I started with a Synology and didn’t regret it. It just worked and gave me reliable storage so I could concentrate on the other parts of my homelab. It comes at a price, though; you mostly pay for the software.

If you aren’t afraid to get your hands dirty or prefer to use an open source storage solution from the beginning, you might consider Unraid or TrueNAS. The latter is more “enterprisey”, the former seems to be more beginner friendly (but I haven’t used it personally).