• 1 Post
  • 21 Comments
Joined 1Y ago
Cake day: Sep 26, 2023


There’s always Charity Navigator; they rate charities on financial health and accountability, at least. I haven’t heard of any controversy with it, for what it’s worth.


How tested are the proprietary protocols for data safety?


To the point where a lot of gamers have paid for more games than they’d ever have time to play.


Great point about paid plans. I didn’t look closely at the project today; they didn’t have any paid plans when I first tried it in 2020 (and I ultimately decided downloading the preferred source was good enough and abandoned transcoding).

This is a more script-based solution I’ve tried in the past for ongoing ISOs with decent results. Good luck!


I’m pretty certain the two options I can think of are just front-ends for ffmpeg: Handbrake and Tdarr. Tdarr runs as a service and monitors your folders for files that don’t match spec, then converts them, which is handy if you plan to continue acquiring ISOs.
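For reference, a minimal sketch of the kind of ffmpeg call those front-ends generate under the hood (the filenames, codec, and CRF value here are placeholder assumptions for illustration, not Tdarr’s actual defaults):

```shell
# Re-encode video to H.265 at a fixed quality level, copying audio unchanged.
# input.mkv / output.mkv are placeholders; CRF 22 is an arbitrary example.
ffmpeg -i input.mkv -c:v libx265 -crf 22 -c:a copy output.mkv
```

The front-ends mostly add queueing, folder watching, and codec-matching rules on top of invocations like this.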


It is, but there are some barriers not present on a *nix OS: Docker on Windows requires the Windows Subsystem for Linux (WSL) as a prerequisite.
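As a rough sketch of the extra setup on Windows (run in an elevated PowerShell; exact steps may vary by Windows version, so treat this as the general flow rather than a definitive recipe):

```shell
# Install WSL2 (typically requires a reboot); Docker Desktop
# then uses it as its backend engine.
wsl --install

# After installing Docker Desktop, verify the client can reach the engine:
docker version
docker run hello-world
```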


Ha, Rentals on Plex sure didn’t evoke their intended reaction from me. Agreed on the Verge article, too. Thanks for sharing!


the hacker obtained and used the member’s credentials to authenticate the requests to the server as a member library.

Hacking is the act of breaking into a computer system without authorization or exceeding authorized access.

This part could count as hacking. Not that I care; I think this is frivolous anyway.

requiring around-the-clock efforts from November 2022 to March 2023 to attempt to limit service outages and maintain the production systems’ performance for customers.

Doesn’t major hosting require 24/7 monitoring anyway? Like they should have been doing this for more than just 11/22 to 3/23.



It’s not just the prices, people are lazy.

Or they just demand good value for their service. Netflix hugely curtailed piracy in their early days. Same with Valve’s Steam.


Awesome to see, good luck to you!

If you’re looking for tips, I’d try to set up Prowlarr first if you intend to use it; it’ll save some reconfiguration down the line.

Though I don’t find anything as complex as mounting and permissions in the *arrs, haha.

But my favorite part about tinkering with home servers is just learning a little at a time, expanding naturally. It’s easy to find guides that are the “ultimate, best server configs”, but unless you understand what benefits they’re offering, you can’t really determine what fits best for YOUR needs.

I started with CouchPotato on Windows years ago and now have *arrs running through docker on headless boxes and keep adding on fun services.


I just bought a few 18TB drives from serverpartsdeals (via eBay) and they’re working well for now. YMMV of course.


kill -9

Just tested, thanks for the suggestion! It killed a few instances of rsync, but two are apparently stuck open. I issued reboot and the system seemed to hang while waiting for rsync to be killed, and it failed to unmount the zpool.
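For what it’s worth, processes that ignore even SIGKILL are usually stuck in uninterruptible sleep (state “D”, waiting on kernel I/O, here most likely the hung zpool), which you can check with ps. A quick sketch:

```shell
# Show the state of any rsync processes: a STAT of 'D' means
# uninterruptible sleep, which no signal (not even kill -9) can interrupt.
ps -o pid=,stat=,cmd= -C rsync || echo "no rsync processes running"

# Demo on a process that definitely exists (this shell itself);
# prints a state code such as 'S' (sleeping) or 'R' (running).
ps -o stat= -p $$
```

If the stuck rsyncs show “D”, only clearing the underlying I/O hang (or a reboot) will release them.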

Syslog errors:

Dec 31 16:53:34 halnas kernel: [54537.789982] #PF: error_code(0x0002) - not-present page
Jan  1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.
Jan  1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.
Jan  1 12:57:19 halnas kernel: [    1.119609] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 12:57:19 halnas kernel: [    1.120020] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 12:57:19 halnas kernel: [    1.120315] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 22:59:08 halnas kernel: [    1.119415] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 22:59:08 halnas kernel: [    1.119814] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 22:59:08 halnas kernel: [    1.120112] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.
Jan  1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.
Jan  2 02:23:18 halnas kernel: [12293.792282] gdbus[2809399]: segfault at 7ff71a8272e8 ip 00007ff7186f8045 sp 00007fffd5088de0 error 4 in libgio-2.0.so.0.7200.4[7ff718688000+111000]
Jan  2 02:23:22 halnas kernel: [12297.315463] unattended-upgr[2810494]: segfault at 7f4c1e8552e8 ip 00007f4c1c726045 sp 00007ffd1b866230 error 4 in libgio-2.0.so.0.7200.4[7f4c1c6b6000+111000]
Jan  2 03:46:29 halnas kernel: [17284.221594] #PF: error_code(0x0002) - not-present page
Jan  2 06:09:50 halnas kernel: [25885.115060] unattended-upgr[4109474]: segfault at 7faa356252e8 ip 00007faa334f6045 sp 00007ffefed011a0 error 4 in libgio-2.0.so.0.7200.4[7faa33486000+111000]
Jan  2 07:07:53 halnas kernel: [29368.241593] unattended-upgr[4109637]: segfault at 7f73f756c2e8 ip 00007f73f543d045 sp 00007ffc61f04ea0 error 4 in libgio-2.0.so.0.7200.4[7f73f53cd000+111000]
Jan  2 09:12:52 halnas kernel: [36867.632220] pool-fwupdmgr[4109819]: segfault at 7fcf244832e8 ip 00007fcf22354045 sp 00007fcf1dc00770 error 4 in libgio-2.0.so.0.7200.4[7fcf222e4000+111000]
Jan  2 12:37:50 halnas kernel: [49165.218100] #PF: error_code(0x0002) - not-present page
Jan  2 19:57:53 halnas kernel: [75568.443218] unattended-upgr[4110958]: segfault at 7fc4cab112e8 ip 00007fc4c89e2045 sp 00007fffb4ae2d90 error 4 in libgio-2.0.so.0.7200.4[7fc4c8972000+111000]
Jan  3 00:54:51 halnas snapd[1367]: stateengine.go:149: state ensure error: Post "https://api.snapcraft.io/v2/snaps/refresh": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

I believe there’s another issue. ZFS has been using nearly all the RAM (which is fine, I only need RAM for the system and ZFS anyway; nothing else is running on this box), but when I looked, I was pretty sure I don’t have dedup turned on. Thanks for your suggestions and links!
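In case it helps anyone else, a sketch of how to confirm dedup is off and inspect/cap ARC memory use on Linux ZFS (the pool name “tank” is a placeholder; requires a box with OpenZFS installed):

```shell
# Confirm dedup is off for the pool and all its datasets:
zfs get -r dedup tank

# Current ARC size ('size') and target ('c'), in bytes:
awk '/^size|^c / {print}' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 16 GiB until reboot (value is in bytes);
# persist it via a zfs module option if it helps.
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
```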


I did, great suggestion. It never recovered.


Thank you! I ended up connecting them directly to the main board and had the same result with rsync: eventually the zpool becomes inaccessible until reboot (of course, there may be other ways to recover it without rebooting).


Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.

Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64GB of RAM and then extend with an L2ARC SSD, assuming no other hardware errors.
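For later reference, adding an L2ARC device is a one-liner (pool and device names below are placeholders). Worth noting that L2ARC headers themselves consume ARC RAM, so adding RAM first is the right order:

```shell
# Attach an SSD as a cache (L2ARC) device to the pool:
zpool add tank cache /dev/disk/by-id/ata-EXAMPLE-SSD

# Verify it shows up under a 'cache' section:
zpool status tank
```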



Question - ZFS and rsync
Hey fellow selfhosters! I need some help, I think, and searching isn't yielding what I'm hoping for.

I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I had previously been using an external 12TB USB hard drive attached to a different machine. I've been attempting to use rsync to copy the 12TB drive over to the new pool, and things go great for the first 30-45 minutes. At that point, the copy speed diminishes and the four files currently in progress sit at 100% done. Eventually I've had to reboot the machine, because the zpool no longer appears accessible. After a reboot, the pool looks fine with no faults, and I can resume rsync for a while.

EDIT: Of note, the rsync process seems to stall and I can't get it to respect SIGINT or Ctrl+C. I can SSH in separately, and running `zpool status` hangs with no output.

While the workaround has been partially successful, the point of using rsync was to make this fairly hands-free, and it's been a week-long process to copy the 3TB I have so far. I don't think my zpool should be disappearing like that! It makes me nervous about the long-term viability. I don't think I'm ready to drop money on Unraid.

rsync is being initiated from the NAS to copy from the old server; am I better off "pushing" than "pulling"? I can't imagine it'd make much difference. Could my drives be bad? How could I tell? They're attached to a 10-port SATA card; could that be defective? How would I tell?

Thanks for any help! I've dabbled in Linux for a long time, but I'm far from proficient, so I don't really know the intricacies of dmesg et al.
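On the "could my drives be bad" question, a hedged starting point is the drives' SMART data plus ZFS's own error counters (device names below are placeholders; smartmontools must be installed, and drives behind some SATA controllers need an extra `-d` flag):

```shell
# Overall SMART health self-assessment for one drive:
smartctl -H /dev/sda

# Detailed attributes; watch Reallocated_Sector_Ct, Current_Pending_Sector,
# and UDMA_CRC_Error_Count (CRC errors often point at cabling/controller):
smartctl -A /dev/sda

# Kick off an extended self-test (takes hours on an 18TB drive);
# read results later with 'smartctl -l selftest /dev/sda':
smartctl -t long /dev/sda

# ZFS's view of read/write/checksum errors per device:
zpool status -v
```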


You can show more ads alongside a greater quantity of content; any ad-driven platform will trend that way.


Or maybe exclusively time to game as we live in our caves waiting for the fallout to settle. How many watts is a potato?