• 3 Posts
  • 7 Comments
Joined 1Y ago
Cake day: Jun 11, 2023


It just seemed the easiest route, but I may give using the GPU a go.


Jellyfin hardware acceleration Docker issues.
Having a bit of trouble getting hardware acceleration working on my home server. The server's CPU is an i7-10700, and it also has a discrete GPU, an RTX 2060. I was hoping to use Intel Quick Sync for the hardware acceleration, but I'm not having much luck.

Following the guide on the Jellyfin site, https://jellyfin.org/docs/general/administration/hardware-acceleration/intel, I got the render group ID using `getent group render | cut -d: -f3`. The guide mentions that on some systems the group might not be `render` but `video` or `input`, so I tried those group IDs as well.

When I run `docker exec -it jellyfin /usr/lib/jellyfin-ffmpeg/vainfo` I get back:

```
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib/jellyfin-ffmpeg/lib/dri/nvidia_drv_video.so
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/nvidia_drv_video.so
libva info: Trying to open /usr/lib/dri/nvidia_drv_video.so
libva info: Trying to open /usr/local/lib/dri/nvidia_drv_video.so
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit
```

I feel like I need to do something on the host system, since it's trying to use the discrete card? But I am unsure.

This is the compose file, just in case I am missing something:

```
version: "3.8"
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: 1000:1000
    ports:
      - 8096:8096
    group_add:
      - "989" # Change this to match your "render" host group id and remove this comment
      - "985"
      - "994"
    # network_mode: 'host'
    volumes:
      - /home/hoxbug/Docker/jellyfin/config:/config
      - /home/hoxbug/Docker/jellyfin/cache:/cache
      - /mnt/External/Movies:/Movies
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
networks:
  external:
    external: true
```

Thank you for the help.
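In case it helps, this is what I was planning to check next on the host, though I'm not sure it's the right direction. The idea is to confirm which render node actually belongs to the Intel iGPU (renderD128 might be the RTX 2060) and whether VA-API works on the host at all when forced to the Intel iHD driver. This assumes `vainfo` (from libva-utils) and the intel-media-driver package are installed on the host:

```
# Sketch only — assumes vainfo (libva-utils) and intel-media-driver are installed on the host.
# See which PCI device each render node points at; 0000:00:02.0 is normally the Intel iGPU.
ls -l /dev/dri/by-path/

# Force the Intel iHD driver and test VA-API against a specific render node.
LIBVA_DRIVER_NAME=iHD vainfo --display drm --device /dev/dri/renderD128
```

If the iGPU turns out to be a different node (for example renderD129), I assume I would need to change the `devices:` mapping in the compose file to match.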

Thank you, I will have to look into that particular forum. Though my problem has been solved for now, I will definitely have more things come up in the future, and that seems like just the place for this sort of thing.


Thank you so much! Where did you find this bit of information? I have been trying to solve this problem for the past week or so, but my Google-fu failed me on this one. I was about to give up.


Question about mounting ZFS pool
Hello, I have had two hard drives in a mirror pool for some time, but the OS got corrupted and I reinstalled it. I am now on Linux Mint and my pool does not appear any more. When I run `zpool import` it says there are no pools available to import. I looked around online and found that you can import a pool by specifying the drives, so I used `zpool import -f -d /dev/sda1 -f -d /dev/sdb1 internal` and I get my pool back:

```
  pool: internal
 state: ONLINE
  scan: scrub repaired 0B in 00:08:39 with 0 errors on Tue Jun 18 18:38:40 2024
config:

        NAME        STATE     READ WRITE CKSUM
        internal    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sda     ONLINE       0     0     0

errors: No known data errors
```

But I am unable to mount the pool. `zfs mount internal` returns:

```
cannot mount 'internal': legacy mountpoint
use mount(8) to mount this filesystem
```

I tried using mount, but I am not having any success; it says `mount: internal: can't find in /etc/fstab.` Is there any chance to get this pool back on the computer, or is it a lost cause? Thank you for the help.
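For reference, this is what I was planning to try next based on the legacy-mountpoint message, though I'm not sure the syntax is right, and `/mnt/internal` is just a placeholder path:

```
# Sketch — /mnt/internal is just an example mountpoint; not sure this is the right approach.
sudo mkdir -p /mnt/internal
sudo mount -t zfs internal /mnt/internal

# Or, alternatively, move the dataset off the legacy mountpoint so `zfs mount` works again:
sudo zfs set mountpoint=/mnt/internal internal
sudo zfs mount internal
```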

Yeah, I definitely want the redundancy. Most of what I will be storing would not be life-changing to lose, just a big inconvenience, and the life-changing stuff I plan to have backed up to cloud storage as well.


That is a good point about stress testing them. If memory serves, one of the 12TB drives would disconnect a while back, maybe two or so years ago, when I was using Windows and doing large backups. The consensus here seems to be to mirror the 6TB drives and mirror the 12TB drives separately, which is probably what I will end up doing: in the end I am tripling my storage, and it lets me lose two drives (albeit two different drives) before any data loss. I feel I may be getting a bit greedy and should just be happy with that. I am looking at an upgrade in a year or two either way.


That’s definitely I have to look into, the nixos page on ZFS had a link to a ZFS cheat sheet of sorts that I have been trying to wrap my head around, thanks for pointing it out though.


Thank you for that, I will have to have a look into it, since I am quite new and not completely sure how to go about things in a way I won't regret half a year or so down the line.


Beginner questions about ZFS and how to use my drives.
Hello, I currently have a home server, mainly for media, with an SSD for the system and two 6TB hard drives set up in RAID 1 using mdadm; that's the most I can fit in the case. I have been getting interested in ZFS and want to expand my storage since it's getting pretty full. I also have two 12TB external hard drives.

My question is: can I create a pool (I think that's what they are called) using all four of these drives in a raidz configuration, or is this a bad idea? (6TB+6TB) + 12TB + 12TB should give me 24TB, and it should keep working even if one of the 6TB or 12TB drives fails, if I understand this correctly. How would one go about doing this? Would you mdadm the two 6TB drives into a RAID 0 and then create the pool over that? There's a rough sketch of what I was picturing at the end of the post.

I am also just dipping my toes into NixOS, so a resource that covers that would be useful, since the home server currently runs Debian. The server will be left at my parents' house, and I'd like it to need minimal on-site support; my parents just need to be able to turn the screen on and use the browser. Thank you
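Here is that rough sketch of what I was picturing, in case it makes the question clearer. The device names and the pool name are just examples, and I genuinely don't know whether layering ZFS on top of an mdadm stripe like this is a sane thing to do:

```
# Sketch only — device names and the pool name "tank" are just examples.
# Stripe the two 6TB drives into one ~12TB md device...
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
# ...then build a raidz vdev out of that plus the two 12TB externals.
sudo zpool create tank raidz /dev/md0 /dev/sdc /dev/sdd
```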