
Okay, two things:

  1. I haven’t heard about any security issues with Rustdesk. Can you link to what you’re referring to?
  2. It’s strange to see someone concerned about security recommend RDP for remote access, since exposing RDP directly to the internet is a terrible idea. Or maybe you mean to expose a VPN and then connect to RDP over it? (See the sketch below for what I mean.)

Edit: I see you have some info on the Rustdesk point elsewhere in the thread. I’ll read up on that part so don’t feel like you have to repeat yourself here.
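
For what it’s worth, here’s a minimal sketch of that VPN-then-RDP option, assuming WireGuard; every key, address, and hostname below is a placeholder. Only the VPN’s UDP port faces the internet, and RDP is reachable only through the tunnel.

# Client-side WireGuard config (sketch)
[Interface]
Address = 10.0.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820   # the only port open to the internet
AllowedIPs = 10.0.0.0/24

You’d then point your RDP client at the server’s tunnel address (e.g. 10.0.0.1:3389) instead of anything publicly routable.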


No Rustdesk, but you recommend RDP for remoting?

I’m confused by both recommendations.


Yeah that was fun times.

Luckily, thanks to Docker, it was easy enough to pin a known-good image version in the compose file while I figured out what had broken.
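
For anyone who hasn’t had to do that: pinning just means swapping the floating tag for a fixed one in the compose file. A minimal sketch, assuming linuxserver’s image; the exact version tag here is hypothetical, so check the image’s published tags:

services:
  qbittorrent:
    # was: image: lscr.io/linuxserver/qbittorrent:latest
    image: lscr.io/linuxserver/qbittorrent:4.5.2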

For everyone’s reference, here’s my fstab to give you an idea of what works with linuxserver.io’s qbittorrent:

## Media disks setup for mergerfs and snapraid

# Map cache to 1TB SSD
/dev/disk/by-id/ata-Samsung_SSD_860_EVO_1TB_S3Z8NB0K820469N-part1 /mnt/ssd1 xfs defaults 0 0

# Map storage and parity. All spinning disks.
/dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK39X4N-part1 /mnt/par1         xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK3TY5N-part1 /mnt/disk01       xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK4806N-part1 /mnt/disk02       xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK4H0RN-part1 /mnt/disk03       xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4XFT0TS-part1 /mnt/disk04 xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4XFT1YS-part1 /mnt/disk05 xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4XFT3EK-part1 /mnt/disk06 xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N6CKJJ6P-part1 /mnt/disk07 xfs defaults 0 0

# Setup mergerfs backing pool
/mnt/disk* /mnt/stor fuse.mergerfs defaults,nonempty,allow_other,use_ino,inodecalc=path-hash,cache.files=off,moveonenospc=true,dropcacheonclose=true,link_cow=true,minfreespace=1000G,category.create=pfrd,fsname=mergerfs 0 0

# Setup mergerfs caching pool
/mnt/ssd1:/mnt/disk* /mnt/cstor fuse.mergerfs defaults,nonempty,allow_other,use_ino,inodecalc=path-hash,cache.files=partial,moveonenospc=ff,dropcacheonclose=true,minfreespace=10G,category.create=ff,fsname=cachemergerfs 0 0

I do this with mergerfs.

I then periodically use their prewritten scripts to move things off the cache and to the backing drives.

I should say it’s not really caching, but it effectively takes care of this issue. Bonus: all that storage isn’t just used for cache but also for long-term storage. For me, that’s a better value proposition.
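
For the curious, the mover amounts to something like this sketch, using the same paths as the fstab above; the prewritten scripts are more careful about edge cases than this:

#!/bin/bash
# Move files untouched for 30+ days from the SSD cache to the backing pool.
CACHE=/mnt/ssd1
BACKING=/mnt/stor
find "$CACHE" -type f -atime +30 -printf '%P\n' | \
  rsync -a --files-from=- --remove-source-files "$CACHE/" "$BACKING/"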


Transfer charges are not the same as restore charges, which you pay when bringing files out of Glacier.

Something to keep in mind.


For testing, do you just use one of the listed public instances? I guess ultimately it would be best to self-host.


You’ve just made me realize I haven’t really evaluated any others.

IIRC Startpage is one of them? I might have used that once a long while ago but I can’t say I’ve given the others a fair shake yet.


And you can even abuse AWS support by trying to get them to troubleshoot your code for you.


I have a strong suspicion that’s already happening.


I’m currently trying out the first 300 free searches with Kagi. It’s only been a day but it’s already looking like I’m going to subscribe.

Remember when you got good at Google and started to notice that you could find what you needed better than most other people? It’s a bit like that, and it’s refreshing.


My use case is basically the same as yours.

I do restic to Wasabi.

I’ve been on restic for a few years now and have never had an issue. I started out using Google Drive as the backend, but that was through my college and eventually went away, so I swapped over to Wasabi. I’m also considering B2.

It’s actively maintained and encrypted.

It supports a handful of backends natively and can be extended to many more by going through an rclone backend.
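
In case it helps anyone, the Wasabi side is just restic’s s3 backend pointed at their endpoint. A rough sketch; the bucket name is made up and the endpoint depends on your region:

export AWS_ACCESS_KEY_ID=<wasabi-access-key>
export AWS_SECRET_ACCESS_KEY=<wasabi-secret-key>
export RESTIC_REPOSITORY=s3:https://s3.wasabisys.com/my-backups
export RESTIC_PASSWORD_FILE=~/.restic-pass

restic init                                          # one time: creates the encrypted repo
restic backup /home /etc                             # deduplicated, incremental snapshot
restic forget --keep-daily 7 --keep-weekly 4 --prune # trim old snapshots per retention policy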


I do the same but I just use a script that runs periodically to update CloudFlare with my current IP with their native API.
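
The script is nothing fancy; it boils down to this sketch. The IDs and token are placeholders you’d look up once via the dashboard or the API:

#!/bin/bash
ZONE_ID=<your-zone-id>
RECORD_ID=<your-record-id>
TOKEN=<api-token-with-dns-edit-permission>
NAME=home.example.com

IP=$(curl -fsS https://api.ipify.org)   # current public IP

curl -fsS -X PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"$NAME\",\"content\":\"$IP\",\"ttl\":1,\"proxied\":false}"

Cron it every few minutes and the record follows your IP around. (A ttl of 1 means “automatic” on Cloudflare.)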


Ubuntu Server with docker/docker-compose on top.

So many guides target Ubuntu specifically that reading up on something is a lot easier, and it works just fine.


Important info here. Definitely not the impression I got from the OP.


Just set a rate limit? This could have been a code change and a blog post.


Yep, the 2nd part is important.

The way I solved this was to leave my regular browser setup alone, make a new Firefox profile just for YouTube, install ONLY uBlock Origin in it, and create a shortcut to that profile on my desktop.

Now I only use that profile for YouTube. I haven’t seen one ad since this thing started.
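
If you’ve never set up a second profile, it’s roughly this; the profile name is whatever you choose:

firefox -ProfileManager
    # create a profile named e.g. "YouTube", install only uBlock Origin in it
firefox --no-remote -P YouTube https://www.youtube.com
    # what the desktop shortcut runs; --no-remote lets it run alongside your main profile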


To manage your library and transfer to the reader: https://calibre-ebook.com/

There’s also a plugin to de-DRM books if you want.
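
If you’d rather script the library-management side, calibre also ships a CLI; a quick sketch with example paths (sending to the actual reader is easiest from the GUI with the device plugged in):

calibredb add ~/Downloads/some-book.epub --with-library ~/CalibreLibrary
calibredb list --fields title,authors --with-library ~/CalibreLibrary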


I’m using restic and Wasabi.


I built the server I’m using now over a decade ago, with much worse specs, for way more money.

This will last you a long while depending on your requirements.


If I controlled a paper, I’d enforce a git-style version control system with publicly viewable edits for anything changed after publication.

Imagine the goodwill and trust that would instill in the public toward your paper.

Edit: I’ve thought the same thing about proposed legislation for a long time.
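
To make it concrete, a reader could pull up something like this for any article; the repo, file, and history here are entirely hypothetical:

git log --oneline -- articles/2023-06-11-warehouse-fire.md
a1b2c3d Correct injury count per the fire marshal’s updated report
9f8e7d6 Fix spelling of a council member’s name
1234abc Initial publication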


Shit. I’d have moved to Jellyfin already if they had an Apple TV client. If they go under, I might have to get a second set-top box just to run JF.