• 17 Posts
  • 92 Comments
Joined 1Y ago
Cake day: Aug 15, 2023


Just came to say thanks…Yeah eventually after copy-pasting it from scratch again, I got it running. Seems to be working now. Thanks again!


Thanks, I appreciate your reply… I'm a bit concerned about an unprivileged container having firewall limitations (I seem to remember reading that this was…finicky), but I'm going to give it a shot.


Nginx in LXC/Proxmox…how to Fail2ban?
Hi guys! Back in the day I used to have a VM holding nginx and all the crap exposed...and I did set it up with fail2ban. I moved away from it, as the OS upgrade was turning messy, and rebuilt onto an LXC container. How should I use fail2ban/iptables in order to protect/harden my LXC container/server? Do the same conditions apply, or will I have any limitations/issues due to the container itself? Thanks!
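Not an answer to the container question, but for reference, a minimal jail sketch for nginx auth failures (untested here; the log path assumes a stock Debian/Ubuntu nginx, and the `nginx-http-auth` filter ships with fail2ban):

```ini
# /etc/fail2ban/jail.local -- minimal sketch, adjust paths/limits to taste
[nginx-http-auth]
enabled  = true
port     = http,https
filter   = nginx-http-auth
logpath  = /var/log/nginx/error.log
maxretry = 5
bantime  = 1h
```

Inside an unprivileged container, the bans only take effect if fail2ban's iptables/nftables action can actually program the firewall from within the container, so that's the part worth testing first.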

 services:
   jellystat-db:
     image: postgres:16-alpine
     container_name: jellystat-db
     restart: unless-stopped
     environment:
       POSTGRES_USER: ${POSTGRES_USER}
       POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
     volumes:
       - postgres-data:/var/lib/postgresql/data
     networks:
       - jellystat
   jellystat:
     image: cyfershepard/jellystat:latest
     container_name: jellystat
     restart: unless-stopped
     environment:
       POSTGRES_USER: ${POSTGRES_USER}
       POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
       POSTGRES_IP: jellystat-db
       POSTGRES_PORT: 5432
       JWT_SECRET: ${JWT_SECRET}
       TZ: Europe/Paris # timezone (ex: Europe/Paris)
       JS_BASE_URL: /
     volumes:
       - jellystat-backup-data:/app/backend/backup-data
     depends_on:
       - jellystat-db
     networks:
       - traefik
       - jellystat
     labels:
       - traefik.enable=true
       - traefik.docker.network=traefik
       - traefik.http.routers.jellystat.entrypoints=https
       - traefik.http.routers.jellystat.rule=Host(`${HOSTNAME}`)
       - traefik.http.routers.jellystat.tls.certresolver=http
       - traefik.http.routers.jellystat.service=jellystat
       - traefik.http.services.jellystat.loadbalancer.server.port=3000
       - traefik.http.services.jellystat.loadbalancer.server.scheme=http
 networks:
   jellystat: {}
   traefik:
     external: true
 volumes:
   postgres-data: null
   jellystat-backup-data: null

Hmmm thanks but I’m not using traefik…Is it part of the needed setup?


Huh…so the log is just an almost infinite loop of these:

jellystat-1     | Error: getaddrinfo ENOTFOUND jellystat-db
jellystat-1     |     at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26)
jellystat-1     | [JELLYSTAT] Database exists. Skipping creation
jellystat-1     | FS-related option specified for migration configuration. This resets migrationSource to default FsMigrations
jellystat-1     | FS-related option specified for migration configuration. This resets migrationSource to default FsMigrations
jellystat-1     | node:internal/process/promises:391
jellystat-1     |     triggerUncaughtException(err, true /* fromPromise */);
jellystat-1     |     ^
jellystat-1     | 
jellystat-1     | Error: getaddrinfo ENOTFOUND jellystat-db
jellystat-1     |     at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26) {
jellystat-1     |   errno: -3008,
jellystat-1     |   code: 'ENOTFOUND',
jellystat-1     |   syscall: 'getaddrinfo',
jellystat-1     |   hostname: 'jellystat-db'
jellystat-1     | }

Just for clarity’s sake, here’s my docker-compose.yml:

version: '3'
services:
  jellystat-db:
    image: postgres:15.2
    environment:
      POSTGRES_DB: 'jfstat'
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: mypassword
    volumes:
    - /postgres-data:/var/lib/postgresql/data # Mounting the volume
  jellystat:
    image: cyfershepard/jellystat:latest
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: MyJellystat
      POSTGRES_IP: jellystat-db
      POSTGRES_PORT: 5432
      JWT_SECRET: 'my-secret-jwt-key'
    ports:
      - "3000:3000" #Server Port
    volumes:
      - /backup-data:/app/backend/backup-data # Mounting the volume

    depends_on:
      - jellystat-db
    restart: unless-stopped
networks:
  default:

I literally haven’t changed anything from default as it was a test, even the password fields.
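For what it's worth, `getaddrinfo ENOTFOUND jellystat-db` usually means the two containers don't share a Docker network, so the embedded DNS can't resolve the service name. A hedged way to check (container names assumed to follow the default `<project>-<service>-1` pattern from the output above):

```shell
# List the networks each container actually joined; they should have one in common
docker inspect -f '{{range $net, $cfg := .NetworkSettings.Networks}}{{$net}} {{end}}' jellystat-jellystat-1
docker inspect -f '{{range $net, $cfg := .NetworkSettings.Networks}}{{$net}} {{end}}' jellystat-jellystat-db-1
```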


Thanks…I don't think I had considered rTorrent before. But this one doesn't have a remote GUI client the way Deluge and Transmission allow their UIs to connect to a remote daemon, right?

Regarding all the troubleshooting steps, thanks a lot. I'm going to enable logging on the service; it's currently disabled by default, which definitely doesn't help. I'm also considering rebuilding the whole thing, since it's running off an older Ubuntu 20.04 container. I might as well take the chance to do it on 24.04. We'll see.


Anyone else having constant trouble with Deluge and brand new torrent releases?
Hi guys! It's been a long while, and I still struggle with Deluge catching brand new releases of movies that just about everyone's downloading. A bit of background: I have a 1Gbps connection, and Deluge in headless mode (that's why I chose Deluge; for headless you get either Deluge or Transmission...AFAIK those are the only two supporting it). So, whenever my -arr servers catch the latest release of the very latest movie or TV show, Deluge grabs it, and faceplants with a download error immediately. I can either "force check" or "resume". Either way (it doesn't matter which), it will error again in a second or two. This struggle of resume/error/resume continues for a while, until it finally starts to download a larger chunk...only to error again a minute or two later, after downloading several hundred MB. And then another stretch of constant errors. Finally, it will get stuck at the end at 99%, where it really needs a "force check" to find whatever data was corrupted, redownload that, and finish. Any idea why this happens? Any way to fix/avoid it? I wonder if Deluge is connecting to fake seeders that feed it corrupted data, and it fails to catch/fix it. Any help would be very welcome. Thanks!

Sorry, I don't have experience checking Docker logs… How do I go about that?
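In case it helps, a couple of common commands (the service name here matches the compose file in this thread; swap in yours):

```shell
# From the directory with docker-compose.yml: follow a service's logs live
docker compose logs -f jellystat

# Or target a container by name and show the last 100 lines
docker logs --tail 100 jellystat
```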


Yeah…I copied the whole of it into my docker-compose.yml. But after running docker compose up, and after getting:

docker-compose.yml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion 
[+] Running 3/3
 ✔ Network jellystat_default           Created                                                                                                                         0.1s 
 ✔ Container jellystat-jellystat-db-1  Started                                                                                                                         0.9s 
 ✔ Container jellystat-jellystat-1     Started       

I still can't connect on http://myIP:3000; I get nothing, just an "unable to connect" Firefox error. Is there anything I should set up/modify in the docker-compose.yml?
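One thing worth noting: the compose file suggested above publishes no ports at all (it only attaches Traefik labels), so without a reverse proxy nothing listens on the host. Some hedged checks:

```shell
# Is anything on the host actually listening on 3000?
ss -tlnp | grep 3000

# Which ports does each container publish?
docker ps --format '{{.Names}}\t{{.Ports}}'
```

If nothing is published, adding a `ports:` entry such as `"3000:3000"` to the jellystat service (as in the project's stock compose) should expose it directly.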


Depends on your judgement of other people, I guess. I have thousands of movies taking TBs of space on my NAS and lots of users. I'd like to have easy reports such as "movies never watched in a year with a low IMDb score", so I know what I can delete if needed. But to each their own.


Any reliable group making AV1 releases consistently?
So...yeah. Looking at file size, it clearly beats older x264 or even x265. I don't mind if my server is going to have to transcode for most clients; I think the difference in size might be worth it. But I'm not sure which groups I could look to for these AV1 releases; they seem quite scarce still?

Thanks…Yeah, I saw it. I have a few Docker things deployed. But the "getting started" section completely ignores setting up the PostgreSQL DB, which it very clearly seems to want. It's not listed as a requirement, but it's still hinted at casually whenever the docs mention the user/pass, environment variables, etc.

So…is there anywhere mentioned how to get the whole thing up and running, including docker and postgresql?


Jellystat…guide or instructions?
Hi! I'm currently looking into perhaps running Jellystat. But the instructions seem to be a bit...lacking? Is there a step-by-step guide on how to get it up and running? Thanks!

Guitar Hero clone…with characters/venues?
Hi guys! So...just that. I'm looking for some recent developments in the GH clones scene...maybe someone made some GH variation that includes venues and characters? It kinda gave the whole vibe to it. I installed the mods for GHWT that allow me to play a bunch of extra songs, and that's neat and all, but it plays with an insufferable delay on Linux, way over the limit of what you can adjust/correct in game. It plays at perfect 30/60fps, but it just has intolerable input delay, which forces me to play it on Windows. So...I was wondering if there's anything newer, with more songs etc. And characters, and venues. And blackjack, and hookers. Thanks!

Yeah, I agree with that. I was giving it a spin. They produced a release with source code attached on GitHub, but I'm not sure how much of the source is in there, and that release seems to be a bit outdated compared to the one I have running on my NanoKVM right now.


Sometimes…and sometimes they have a rather good UI. But usually it gets pretty messed up when translated. I've found the network speed to be pretty decent for image transfer, even with the inefficient MJPEG format they're currently using. They said they're working on better encoding. Today I found that the remote keyboard/mouse work on certain desktops, but sometimes stop in text mode or in the BIOS. And then you continue booting, and they work again. Not sure what's going on with the hardware identifier they're using…

So…yeah, once they fix the keyboard/mouse issue, and add the ability to remotely load ISOs (not only the ones on its own storage), it's going to be golden. Since it has SSH, I think in theory you should be able to upload ISOs remotely using SFTP or similar, but I haven't tested that just yet.
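If anyone wants to try the SFTP route, a hypothetical sketch (the hostname and the ISO partition's mount point are guesses, not documented values; check where the ISO storage is actually mounted first):

```shell
# Find where the ISO partition is mounted on the device
ssh root@nanokvm.local 'mount'

# Then copy an ISO into that path (placeholder path shown)
scp ubuntu-24.04.iso root@nanokvm.local:/mnt/iso/
```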


Posted on their GitHub. All they have is a Chinese forum. And the wiki is…rough at the moment. Chinese only (not a problem with a translation extension) and a lot of "Todo" sections. Basically the UI right now has no configuration options, besides "checking for updates", which didn't even tell you which version you're on. While I was testing I saw the check for updates had a blue dot, so I guess it did manage to reach their servers, and after checking and installing an update…it seems that menu got a slight improvement, and now it does show the current running version. But that's it.

But there's no denying the huge potential of this tiny device. It's way cheaper and smaller, and consumes way less power. The physical limitations I can see are that the NIC is only 10/100 (no gigabit connection), and there's no WiFi. Everything else is software, which I reckon they'll keep working on.


Anyone trying the Sipeed NanoKVM?
Hi guys! When I saw [this tiny little guy](https://www.hackster.io/news/sipeed-turns-the-lichee-nano-into-a-risc-v-powered-ip-kvm-the-22-lichee-nanokvm-e08a0da66d14), I had to go in and get it. And so I received it today. My first experience is...the software is a bit rough at the moment. And now I'm having trouble with the keyboard detection. It's no longer working, and I'm not sure what's wrong. Basically, it worked initially, but after I unplugged it to dump some ISOs onto it*, the USB keyboard emulation seems to no longer work. And since I'm one of the very first users...I have no documentation to go on (yay). I see there's a Chinese forum where more people mention a USB keyboard issue, but I don't think this is sorted. Anyone else tried it? How are your experiences so far? Any ideas how to fix the keyboard issues? Still, for all its initial wonkiness, I clearly see this as the future for a KVM device, instead of a full-blown Raspberry Pi board, which I think is a bit overkill. *: The 'full' version comes with an embedded 32GB microSD, of which 8GB is for the OS, but the remainder is a separate partition for ISOs...you connect it as USB storage to a PC and drop your ISOs there. At the moment you don't seem to be able to mount a random file from your PC via the browser UI, only ISO files it already has in its own storage.


For file handling Seafile has been pretty efficient for me. No multimedia though.


Yeah…Overkill indeed. I was considering dropping Proton Calendar altogether and just migrating to Nextcloud…but it seems this might work much more easily.


ICSDroid

Duuuude. I just wish I saw your comment BEFORE spinning up and fighting with a Nextcloud container. Well…at least I didn't go all the way in just yet. Just found out ICSx5 does exactly this (it popped up when searching for ICSDroid on F-Droid). My calendar is now populated with the Proton Calendar. For my use, I can create events with Proton Calendar, and Android gets them into the local calendar via ICSx5. Thanks man!



Thanks…one option is sharing your data from Proton back to Google, which I was trying to get away from. The other involves a closed-source paid app, which I'd also avoid. I'm guessing I'll have to set up my own CalDAV sync container/server to sync from.


Yes, it gives you notifications for events about to happen (or for which you have set a timed notification ahead of time). But can you get a week overview? Or a day overview? Do you have a calendar on the watch? Because I do, and mine is empty because it can't sync with Proton (mind you, I still receive notifications for events coming in 30 mins, or a day ahead if I set it that way in the Proton Calendar app…but I can't view the event itself, just the notification of it!).

Android itself (GrapheneOS in my case) isn't getting calendar events, because Proton Calendar isn't an Android calendar app. If you open your Permission Manager, you can see the different kinds of permissions specific apps can request. As in, access to the phone, to the cameras, to SMS, to files…to the CALENDAR. Guess which app doesn't even bother to use the Android calendar infrastructure laid out for them? Because it's not officially an Android calendar app, at least not according to its manifest.


Sure. But you should be able to decrypt it, so you can use your favored application, if you so choose. Or your favored OS, which in my case is GrapheneOS. So I'm already in a kinda private environment. I can trust the internal OS not to talk to GOS. And I can trust my watch to do the same, because it's locked down completely thanks to Gadgetbridge. The official companion app only saw the watch during the initial pairing/key exchange, in a throwaway separate profile that doesn't hold any useful data, and it was removed immediately after. As you can see, my scenario is full data lockdown, and yet I can't choose my favored app to use in my trusted zone.


Yeah… I'm afraid I might need something like that in the end. Can you hold events in different colors for different categories of things?


While they're reinventing the wheel at every step, the default email protocols involve your email being unencrypted at every hop until it reaches its destination. While their solution in effect has the same issue, they allow for sending encrypted emails you can only open by clicking a link to decrypt them, or similar. And everything at their end is fully encrypted, which is why I bought in…But it's getting old how everything is a closed ecosystem not playing nice with anything else on any OS.


Proton Calendar to…Android calendar? Via CalDAV, perhaps?
Hi guys! So, I have Proton Mail, and this also gives me the Calendar. I love that I have an encrypted private calendar, but it bothers me that it doesn't play well with any other app, as it's not officially a "calendar" to Android. This bothers me because I use GrapheneOS, with mostly no Google services, and I'd like my Gadgetbridge-connected smartwatch to be able to display calendar events, since they're not being shared with anyone else. But I can't, because Proton Calendar isn't really an Android calendar. There's a way in Proton to permanently share a link to your private calendar. In effect, it's an up-to-date .ics file, which I believe needs to be checked/downloaded every time there's an update. Is there a way to update this in Proton? Alternatively, I wouldn't mind creating some CalDAV system that imported this, but I'm not sure if there's already any guide for it? Thanks so much!
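One self-hosted sketch for the CalDAV angle: poll the shared .ics on a schedule and let a local server (or ICSx5 pointed at the local copy) re-serve it. The share URL and destination path below are placeholders:

```shell
# Cron-able one-liner (e.g. every 15 minutes): fetch the shared Proton .ics
curl -fsSL "https://calendar.proton.me/...your-share-link..." \
  -o /srv/calendar/proton.ics
```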

I…Think I found the issue. A classic case of “increase max upload size on your reverse proxy”. Which I thought I did… https://lemm.ee/comment/12234368

Thanks for all the help through the process!


O.M.G…so many hours wasted. One of my first searches already returned "you should increase client_max_body_size to something like 50000M", and I was like I aLreAdy iNCreAseD mY client_max_body_size tO zERo sO its uNLimIteD DuH <spongebob.jpg> Well, turns out my client_max_body_size 0 parameter was in a section defining parameters for a different container/server. So of course it wasn't applying to Immich. I just added the same line to the Immich section too, restarted nginx…and the backed-up asset count is already way ahead of the ceiling it would always hit at 180ish assets. I think I might have found my issue.
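For anyone landing here with the same symptom, the gotcha is scoping: `client_max_body_size` only applies to the `server`/`location` block it sits in. A sketch (hostname and upstream port are placeholders):

```nginx
server {
    server_name immich.example.com;        # placeholder host
    location / {
        client_max_body_size 0;            # 0 = unlimited; must be in THIS block
        proxy_pass http://127.0.0.1:2283;  # adjust to your Immich port
    }
}
```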

Thanks for all the help and following up!


You mean docker logs immich_server? I think this is about the only kind of error it's outputting, once in a blue moon.

[Nest] 6 - 05/29/2024, 2:40:17 PM WARN [ImmichServer] [ExpressAdapter] Content-Type doesn’t match Reply body, you might need a custom ExceptionFilter for non-JSON responses

Everything else is Websocket Connect, Websocket Disconnect.


You mean, without defining additional paths for thumbs, profile, etc.? Will it work without declaring them?

Thanks!


Thanks for all the help. I changed the paths in the .env as I mentioned, so they matched the ones in the docker-compose.yml. But no dice. I think it gets stuck at the same picture, although I'm not 100% sure which one. After I rebuilt the container, the number of assets increased by two, but I also realized that I took a couple of pics earlier. So it added those two and crashed a while later at the same spot as before…Can a single picture/video corrupt the whole backup?? Also, I'm not sure how to properly track which one is messing it up, because the backup seems to have skipped a lot of pictures in what it copied.


Yeah, that was also my thought. Seems it's still not working. I've seen it repeat the uploads multiple times, and I still have quite a limited number of pictures on the server.


I mean, the NAS is already mounted. It’s an NFS share, it gets mounted at boot, and it should be just another regular folder, transparent to docker (or anything else).


I mean, my concern is…it would seem from the docker-compose.yml that these paths should match:

${THUMB_LOCATION} = ${UPLOAD_LOCATION}/thumbs
${ENCODED_VIDEO_LOCATION} = ${UPLOAD_LOCATION}/encoded-video
${PROFILE_LOCATION} = ${UPLOAD_LOCATION}/profile

, which might explain why I'm getting a thumbs folder inside my /media/MyNAS/Immich/immich-files…same with profile and encoded-video. The thumbs folder inside Immich/immich-files/thumbs is clearly being used, while the root-located one (Immich/thumbs) is not. The root-based folders might be created due to the .env file, but then not used…and maybe that's confusing Immich? Can I remove those entries from the .env, leaving just the UPLOAD_LOCATION one? Or am I making a mess for myself this way? Maybe I should make them point in the .env to the same sub-paths inside immich-files, so they match the structure in the docker-compose.yml? Sorry…kinda new to Docker Compose.


The Docker container where the DB lives still has another 2GB free to go (I'll increase this). The picture storage goes to a NAS drive with nearly 10TB of storage left. The Docker container has 4GB RAM to run, and only Immich is running on it at this moment…I haven't seen it running too high just yet.


Good point about media storage. I seem to have some looping-folders issue. My Immich folder has encoded-video (empty), profile (empty), thumbs (with stuff), and immich-files…and immich-files has encoded-video (again, empty), library (with all the stuff), profile (empty), thumbs (empty) and upload (empty). This is my .env:

UPLOAD_LOCATION=/media/MyNAS/Immich/immich_files
THUMB_LOCATION=/media/MyNAS/Immich/thumbs
ENCODED_VIDEO_LOCATION=/media/MyNAS/Immich/encoded-video
PROFILE_LOCATION=/media/NASdata/MyNAS/Immich/profile

And…this is my docker-compose.yml:

    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - ${THUMB_LOCATION}:/usr/src/app/upload/thumbs
      - ${ENCODED_VIDEO_LOCATION}:/usr/src/app/upload/encoded-video
      - ${PROFILE_LOCATION}:/usr/src/app/upload/profile
      - /etc/localtime:/etc/localtime:ro

Is there anything obviously wrong that would make Immich place stuff in weird folders? Should I remove all the additional paths from .env, leaving only UPLOAD_LOCATION?


Thanks! I never walked through that screen yet…I’ll try that!

EDIT: Seems…not to have done much. Backup keeps tracking and uploading a ton of files, many more than what the server actually displays; as in, I've seen the backup go through pictures with dates the server doesn't seem to show. I've seen it slowly go through at least 15-20 videos, for example, but the server only acknowledges a grand total of 2 video assets.

EDIT2: Oops, seems the screen eventually turned off and backup stalled…Let’s see what it does now… …aaaand back to May 25th. Yup, still the same problem here :(


[SOLVED] Immich keeps restarting the backup
Hi guys! I'm having my first attempt at Immich (...and Docker, since I'm at it). So I have successfully set it up (I think), connected the phone, and it started uploading. I have enabled foreground and background backup, and I have only chosen the camera album from my Pixel/GrapheneOS phone. Thing is, after a while (when the screen turns off for a while, even though the app is unrestricted in Android/GrapheneOS, or whenever changing apps...or whenever it feels like it), the backup seems to start again from scratch, uploading again and again the first videos from the album (the latest ones, from a couple of days ago), and working its way until somewhere in December 2023...which is where at some point it decides to go back and re-do May 2024. It's been doing this a bunch of times. I've seen it mentioned a bunch of times that I should set client_max_body_size on nginx to something large like 5000MB. However, in my case it's set to 0, which should read as unrestricted. It doesn't skip large videos of several hundred megs, and it does seem to go through the upload process...but then it keeps redoing them after a while. Any idea what might be failing? Why does it keep restarting the backup? By the way, I took a screenshot of the backup a couple of days ago, and both the backed-up asset number and the remainder have stayed the same since (total 2658, backup 179, remainder 2479). That's a couple of days now going through what I'd think are the same files over and over? SOLVED: So it was about adding the `client_max_body_size` value to my nginx server. I thought I had, so I was ignoring this even though I saw it mentioned multiple times. Mine is set to value 0, not 50000M as suggested in other threads, but I thought it should work. But then again, it was in the wrong section, applying to a different service/container, not Immich. Adding it to the Immich section too (with 0, in my case, which should set it to "unlimited") worked immediately after restarting the nginx service.
Thanks everyone for all the follow ups and suggestions!

Thanks. I've been noticing this A LOT. Deluge will stop and error out the download. You're forced to manually check the file before being able to resume the download. Only, it will error out again a few seconds later. Keeping a very up-to-date block filter seems to help, but only a little bit.


Sadly no luck…and re-syncing these manually would be a royal PITA. I'd be happy to re-encode them myself from source DVDs if I were able to find them (I'd assume the DVDs would have multiple languages?). But so far no luck :(

EDIT: Holy cow, I think I might have found them. You can search "Simpsons MULTi DSNP RondoBYM" on sites such as 1337. These encodes are…big, taking 25-30GB per season, but that's because they include about 10 audio languages? This is still untested; I'm trying to download these at the moment. Spanish, Latin Spanish, French and Portuguese are included (and many others: Czech, Turkish, Japanese?). I plan to run some remuxer or something later to cut out the audio tracks I'm not interested in. Sadly (or obviously) this guy didn't manage to go any further than seasons 1-8. The classics, I guess.
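For the remux step, one hedged option is ffmpeg's metadata stream specifiers, keeping only chosen audio languages while stream-copying everything else (filenames are placeholders, and language tags vary per release, so check them with `ffprobe` first):

```shell
# Keep all video and subtitle streams, plus only English and Spanish audio;
# -c copy remuxes without re-encoding
ffmpeg -i input.mkv \
  -map 0:v -map '0:s?' \
  -map 0:a:m:language:eng -map 0:a:m:language:spa \
  -c copy output.mkv
```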


Thanks…but I don't want to block all other users/devices from using x265 just because of that one single device :(


I didn't see an option to disable x265, sadly. There's a bunch of transcoding options, but it seems this case wasn't considered, unless the device truly doesn't support it.


Thanks…I was looking into that, but I don't think I've seen an option to always transcode x265 back to x264…unless the device truly can't. If it can decode it, that's preferred, and I'm not sure there's an option to change that.


Jellyfin: Can I disable HEVC playback on ONE device?
So, the issue at hand is, I have a Chromecast 4K with Jellyfin Android TV on it. And most of my library is x265/HEVC. But whenever playing from this specific device, it will natively take HEVC, but with the ExoPlayer library it plays kinda like a slideshow, at about 5-10FPS. Choosing VLC should be ok, and forcing a transcode will result in perfectly playable x264 at 24-30-60FPS or whatever is needed. But x265 with the default ExoPlayer seems to be a struggle. Is there a way, either in Jellyfin Android TV or on the server, to specifically disable x265 playback, but only on this device?

Thanks…I haven't tried Tailscale yet; I think I'll get the "easier" commercial version for now. Still learning on this.


Thanks…I think I now have the containers on both the home and remote servers within the Tailscale network, showing their own IPs. This will make the site accessible via SSH, if I get this right. I'm not sure though how to route whichever remote server I build to the home network.
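If the goal is reaching the whole remote LAN rather than just the Tailscale nodes themselves, the usual pattern is a subnet router on the CGNAT side (the subnet below is an example; the route also has to be approved in the Tailscale admin console):

```shell
# On the remote (CGNAT) machine: advertise its LAN
sudo tailscale up --advertise-routes=192.168.1.0/24

# On machines that should reach that LAN through it
sudo tailscale up --accept-routes
```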


Server behind CGNAT - Reverse VPN? Or how to bypass?
So...in a short sentence...the title. I have a server in a remote location which also happens to be behind CGNAT. I only get to visit this location once a year at best, so if anything goes off...it stays off for the rest of the year until I can go and troubleshoot. I have a main location/home where everything works, I get a fixed IP and I can connect multiple services as desired. I'd like to set this up so I could publish internal servers such as HA or similar at this remote location, and reach them in a way easy enough that I could install the apps for non-tech users and they could just use them through a normal URL. Is this possible? I already have a PiVPN running WireGuard at the main location, and I just tested an LXC container from the remote location; it connects via WireGuard to the main location just fine, and can ping/ssh machines correctly. But I can't reach this VPN-connected machine from the main location. Alternatively, I'm happy to listen to alternative solutions/ideas on how to connect this remote location to the main one somehow. Thanks!

Basic docker networking?
Hi guys! I'm making my first Docker attempt...and I'm doing it in Proxmox. I created an LXC container, in which I installed Docker and Portainer. Portainer seems happy to work, and shows its admin page on port 9443 correctly. Next I tried running the Immich image, following the steps detailed in their own guide. This...doesn't seem to open the admin website on port 2283. But then again, it seems to run in its own Docker internal network (172.16.0.x). How should I reach the Immich admin page from another computer on the same network? I'm new to Docker, so I'm not sure how containers are supposed to communicate with the normal computer network...Thanks!
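On the general question: containers on Docker's default bridge aren't reachable from other machines until a port is published onto the host. A sketch (the image name is a placeholder; Immich's own compose normally handles this via its `ports:` entries):

```shell
# Map container port 2283 to host port 2283, reachable at http://<LXC-IP>:2283
docker run -d -p 2283:2283 --name web-test <image>

# Verify the host is listening
ss -tlnp | grep 2283
```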

Trying to get the Simp- series…with multiple dub languages. Is there any release like this? Or where/how could I rip it myself?
So, yeah...trying to get a multi-language rip of this series. Ideally with English and Latin Spanish, maybe additionally Spain Spanish, French and Italian...But I'm not sure how to go about this. How could I get a multi-dub version? Thanks!

How does the Office activation work for Macs? Any KMS equivalent?
Hi guys! Trying to get Office running on a Mac...How does one get it activated on a Mac? On Windows it has become so effortless, but I wonder if there's a similar KMS method on Mac? Thanks!

Any way to accelerate traffic between two very distant homes?
Hi guys! Just...that. My parents live in Europe, while my servers are at my home in Asia. The Europe home has fiber at about 200Mbps symmetric, and the Asia home has 1Gbps symmetric. But due to the distance, it's hard to get them to reach high speeds at all, being capped at about 1MB/s when transferring files or watching Jellyfin. Is there any way to do some sort of faster static routing for these specific home IPs?
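Worth checking whether the cap is per-TCP-connection throughput over the long round trip (bandwidth-delay product) rather than the route itself; a quick hedged test is parallel streams (the hostname is a placeholder):

```shell
# If 8 parallel streams go much faster than 1, single-connection TCP
# over the high-latency path is the bottleneck, not the links
iperf3 -c europe-home.example.net -P 8
```

If that's the case, tools that multiplex transfers (parallel rsync jobs, multi-connection download clients) or tuned TCP window sizes tend to help more than routing changes.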

Is duckdns out at the moment? Anyone else having issues?
Hi guys! Just that...I'm not sure there's news about this anywhere, but it seems duckdns has been out...maybe for 3-4h already?

Sonarr and Frontline…how to name it?
Hi guys! So, Sonarr seems to completely miss my Frontline episodes. I manually downloaded the whole 2023 season. It still fails to find it in its "Season 2023" folder. What should I be doing? Thanks!

Help deciding PC upgrades
So...I've been increasingly struggling to run the latest games, as the age of my 6-year-old desktop is starting to show, and Starfield denying my GPU just pissed me off. I know it's a bug and I can probably play it, but it's outright at the minimum for this game, so I'd like a refresh of the worst parts...or should I consider a full new desktop? I know the GPU is starting to show its age, but I'm not sure whether the CPU is salvageable or you'd advise a new one... Here's a quick summary of the computer:
-Mobo Gigabyte Z170 K3
-CPU i7 6700
-2x8GB DDR4 2133
-MSI Nvidia 1070 8GB
-SSD 1TB on the SATA port (I believe I can install an m.2 instead)
-EVGA G2 750W
My questions...I believe these days an AMD card would be cheaper than an Nvidia, correct? What would be an equivalent to a 3070, or a 4070? More importantly...are they bigger in size (would it fit)? Do they take more power than my 1070 (will it roast my power supply)? Power would be a bit important, as I'd rather not replace all the wiring for the power supply, and electricity is becoming kinda pricey these days... I'm basically considering upgrading the GPU and RAM, and wondering whether this would be a good upgrade or the CPU would then be a bottleneck (hence throw it all out and go for a full new desktop...I'd rather not). Thanks!