services:
  jellystat-db:
    image: postgres:16-alpine
    container_name: jellystat-db
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - jellystat

  jellystat:
    image: cyfershepard/jellystat:latest
    container_name: jellystat
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_IP: jellystat-db
      POSTGRES_PORT: 5432
      JWT_SECRET: ${JWT_SECRET}
      TZ: Europe/Paris # timezone (ex: Europe/Paris)
      JS_BASE_URL: /
    volumes:
      - jellystat-backup-data:/app/backend/backup-data
    depends_on:
      - jellystat-db
    networks:
      - traefik
      - jellystat
    labels:
      - traefik.enable=true
      - traefik.docker.network=traefik
      - traefik.http.routers.jellystat.entrypoints=https
      - traefik.http.routers.jellystat.rule=Host(`${HOSTNAME}`)
      - traefik.http.routers.jellystat.tls.certresolver=http
      - traefik.http.routers.jellystat.service=jellystat
      - traefik.http.services.jellystat.loadbalancer.server.port=3000
      - traefik.http.services.jellystat.loadbalancer.server.scheme=http

networks:
  jellystat: {}
  traefik:
    external: true

volumes:
  postgres-data: null
  jellystat-backup-data: null
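The ${POSTGRES_USER}, ${POSTGRES_PASSWORD}, ${JWT_SECRET} and ${HOSTNAME} references above get expanded by Compose from an .env file sitting next to the docker-compose.yml. A minimal sketch, where every value is a placeholder to replace with your own:

```ini
# .env — placeholder values only, replace with your own
POSTGRES_USER=jellystat
POSTGRES_PASSWORD=change-me
JWT_SECRET=some-long-random-string
HOSTNAME=jellystat.example.com
```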
Hmmm, thanks, but I’m not using Traefik… is it part of the needed setup?
Huh…so the log is just an almost infinite loop of these:
jellystat-1 | Error: getaddrinfo ENOTFOUND jellystat-db
jellystat-1 | at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26)
jellystat-1 | [JELLYSTAT] Database exists. Skipping creation
jellystat-1 | FS-related option specified for migration configuration. This resets migrationSource to default FsMigrations
jellystat-1 | FS-related option specified for migration configuration. This resets migrationSource to default FsMigrations
jellystat-1 | node:internal/process/promises:391
jellystat-1 | triggerUncaughtException(err, true /* fromPromise */);
jellystat-1 | ^
jellystat-1 |
jellystat-1 | Error: getaddrinfo ENOTFOUND jellystat-db
jellystat-1 | at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26) {
jellystat-1 | errno: -3008,
jellystat-1 | code: 'ENOTFOUND',
jellystat-1 | syscall: 'getaddrinfo',
jellystat-1 | hostname: 'jellystat-db'
jellystat-1 | }
Just for clarity’s sake, here’s my docker-compose.yml:
version: '3'
services:
  jellystat-db:
    image: postgres:15.2
    environment:
      POSTGRES_DB: 'jfstat'
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: mypassword
    volumes:
      - /postgres-data:/var/lib/postgresql/data # Mounting the volume

  jellystat:
    image: cyfershepard/jellystat:latest
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: MyJellystat
      POSTGRES_IP: jellystat-db
      POSTGRES_PORT: 5432
      JWT_SECRET: 'my-secret-jwt-key'
    ports:
      - "3000:3000" # Server Port
    volumes:
      - /backup-data:/app/backend/backup-data # Mounting the volume
    depends_on:
      - jellystat-db
    restart: unless-stopped

networks:
  default:
I literally haven’t changed anything from default as it was a test, even the password fields.
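For what it’s worth, getaddrinfo ENOTFOUND jellystat-db means the jellystat container can’t resolve the database’s service name through Docker’s embedded DNS, which normally happens when the two containers don’t end up on the same network (the empty top-level networks: default: block at the end of the quoted file looks suspect). A minimal sketch, assuming the service names from the thread, that joins both services to one explicitly named network (everything else trimmed):

```yaml
services:
  jellystat-db:
    image: postgres:15.2
    networks:
      - jellystat

  jellystat:
    image: cyfershepard/jellystat:latest
    environment:
      POSTGRES_IP: jellystat-db   # resolved via Docker DNS on the shared network
    depends_on:
      - jellystat-db
    networks:
      - jellystat

networks:
  jellystat: {}
```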
Thanks… I don’t think I have considered rTorrent before. But this one doesn’t have a remote GUI client the way Deluge and Transmission allow their UIs to connect to a remote daemon, right?
Regarding all the troubleshooting steps, thanks a lot. I’m going to enable logging by default on the service; it’s currently disabled, which definitely doesn’t help. I’m also considering rebuilding the whole thing, since it’s running off an older Ubuntu 20.04 container. I might as well take the chance to do it on 24.04. We’ll see.
Yeah… I copied the whole thing into my docker-compose.yml. But after running docker compose up and getting:
docker-compose.yml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
[+] Running 3/3
✔ Network jellystat_default Created 0.1s
✔ Container jellystat-jellystat-db-1 Started 0.9s
✔ Container jellystat-jellystat-1 Started
I still can’t connect on http://myIP:3000; I get nothing, just an “unable to connect” Firefox error. Is there anything I should set up or modify in the docker-compose.yml?
Thanks… yeah, I saw it. I have a few Docker things deployed. But the “getting started” section completely ignores setting up the PostgreSQL DB, which it very clearly seems to want. It’s not listed as a requirement, but still casually hinted at whenever it mentions the user/pass, environment variables etc.
So… is there anywhere that explains how to get the whole thing up and running, including Docker and PostgreSQL?
Sometimes… and sometimes they have a rather good UI. But usually it gets pretty messed up when translated. I’ve found the network speed to be pretty decent for image transfer, even with the inefficient MJPEG format they’re currently using. They said they’re working on better encoding. Today I found that the remote keyboard/mouse work on certain desktops, but sometimes stop in text mode or in the BIOS. And then you continue booting, and they work again. Not sure what’s going on with the hardware identifier they’re using…
So… yeah, once they fix the keyboard/mouse issue and add the ability to remotely load ISOs (not only the ones on its own storage), it’s going to be golden. Since it has SSH, in theory you should be able to upload ISOs remotely using SFTP or similar, but I haven’t tested that just yet.
Posted on their GitHub. All they have is a Chinese forum. And the wiki is… rough at the moment. Chinese only (not a problem with a translation extension) and full of “Todo” sections. Basically the UI right now has no configuration options, besides “check for updates”, which didn’t even tell you which version you’re on anyway. While I was testing I saw the check for updates had a blue dot, so I guess it did manage to reach their servers, and after checking and installing an update… it seems that menu got a slight improvement, and now it does show the current running version. But that’s it.
But there’s no denying the huge potential of this tiny device. It’s way cheaper and smaller, and consumes way less power. The physical limitations I can see are that the NIC is only 10/100 (no gigabit connection), and there’s no wifi. Everything else is software, which I reckon they’ll be working on.
ICSDroid
Duuuude. I just wish I had seen your comment BEFORE spinning up and fighting with a Nextcloud container. Well… at least I didn’t go all the way in just yet. Just found out ICSx5 does exactly this (it popped up when searching for ICSDroid on F-Droid). My calendar is populated with Proton Calendar. For my use, I can create events with Proton Calendar, and Android gets them into the local calendar via ICSx5. Thanks man!
Nope, not yet. Also very demanded by the users, but nothing released on that end yet.
Yes, it gives you notifications on events about to happen (or for which you have set a timed notification ahead of time). But can you get a week overview? Or a day overview? Do you have a calendar on the watch? Because I do, and mine is empty because it can’t sync with Proton (mind you, I still receive notifications for events coming in 30 mins, or a day ahead if I set it that way in the Proton Calendar app… but I can’t view the event itself, just the notification of it!).
Android itself (GrapheneOS in my case) isn’t getting calendar events, because Proton Calendar isn’t an Android calendar app. If you open your Permission Manager, you can see the different kinds of permissions specific apps can request: access to the phone, to the cameras, to the SMS, to the files… to the CALENDAR. Guess which app doesn’t even bother to use the Android calendar infrastructure laid out for it? Because it’s not officially an Android calendar app, at least not according to its manifest.
Sure. But you should be able to decrypt it, so you can use your preferred application, if you so choose. Or your preferred OS, which in my case is GrapheneOS. So I’m already in a fairly private environment. I can trust the internal OS not to talk to GOS. And I can trust my watch to do the same, because it’s locked down completely thanks to Gadgetbridge. The official companion app only saw the watch during the initial pairing/key exchange, in a throwaway separate profile that doesn’t hold any useful data, and it was removed immediately after. As you can see, my scenario is full data lockdown, and yet I can’t choose my preferred app to use in my trusted zone.
While they’re reinventing the wheel at every step, the default email protocols involve your email being unencrypted at every hop until it reaches its destination. While their solution in effect has the same issue, they allow for sending encrypted emails you can only open by clicking on a link to decrypt them, or similar. And everything at their end is fully encrypted, which is why I bought in… But it’s getting old how everything is a closed ecosystem that doesn’t play nice with anything else on any OS.
I… think I found the issue. A classic case of “increase max upload size on your reverse proxy”. Which I thought I did… https://lemm.ee/comment/12234368
Thanks for all the help through the process!
O.M.G… so many hours wasted. One of my first searches already returned “you should increase client_max_body_size to something like 50000M”, and I was like I aLreAdy iNCreAseD mY client_max_body_size tO zERo sO its uNLimIteD DuH <spongebob.jpg>
Well, turns out my client_max_body_size 0 parameter was in a section defining parameters for a different container/server. So of course it wasn’t applying to Immich. I just added the same line to the Immich section too, restarted nginx… and the backed-up asset count is already way ahead of the ceiling it would always hit at 180-ish assets. I think I might have found my issue.
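For anyone landing here with the same symptom: nginx’s client_max_body_size only applies inside the http, server, or location context where it’s written, so a value set in one server block does nothing for a different vhost. A sketch of the situation described above (hostnames and upstream ports are made up):

```nginx
http {
    server {
        server_name other-app.example.com;
        client_max_body_size 0;          # this copy only covers THIS server block
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }

    server {
        server_name immich.example.com;
        client_max_body_size 0;          # the Immich vhost needs its own directive (0 = unlimited)
        location / {
            proxy_pass http://127.0.0.1:2283;
        }
    }
}
```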
Thanks for all the help and following up!
You mean docker logs immich_server? I think this would be about the only error of any kind it’s outputting once in a blue moon.
[Nest] 6 - 05/29/2024, 2:40:17 PM WARN [ImmichServer] [ExpressAdapter] Content-Type doesn’t match Reply body, you might need a custom ExceptionFilter for non-JSON responses
Everything else is Websocket Connect, Websocket Disconnect.
Thanks for all the help. I changed the paths in the .env as I mentioned, so they matched the ones in the docker-compose.yml. But no dice. I think it gets stuck at the same picture, although I’m not 100% sure which one. After I rebuilt the container, the number of assets increased by two, but I also realized that I took a couple of pics earlier. So it added those two and crashed a while later at the same spot as before… Is a single picture/video capable of corrupting the whole backup?? Also, I’m not sure how to properly track which one is messing it up, because the backup seems to have skipped a lot of pictures in what it copied.
I mean, my concern is… it would seem from the docker-compose.yml that these paths should match:
${THUMB_LOCATION} = ${UPLOAD_LOCATION}/thumbs
${ENCODED_VIDEO_LOCATION} = ${UPLOAD_LOCATION}/encoded-video
${PROFILE_LOCATION} = ${UPLOAD_LOCATION}/profile
…which might explain why I’m getting a thumbs folder inside my /media/MyNAS/Immich/immich-files… same with profile and encoded-video. The thumbs folder inside Immich/immich-files is clearly being used, while the root-located one (Immich/thumbs) is not. The root-level folders might be created due to the .env file but then not used… and maybe that’s confusing Immich? Can I remove the entries from the .env, leaving just the UPLOAD_LOCATION one? Or am I making a mess this way? Maybe I should point them in the .env to the same sub-paths inside immich-files, so they match the structure in the docker-compose.yml? Sorry… kinda new to docker compose.
Good point about media storage. I seem to have a duplicated-folders issue. My Immich folder has encoded-video (empty), profile (empty), thumbs (with stuff), and immich-files… and immich-files has encoded-video (again, empty), library (with all the stuff), profile (empty), thumbs (empty) and upload (empty). This is my .env:
UPLOAD_LOCATION=/media/MyNAS/Immich/immich_files
THUMB_LOCATION=/media/MyNAS/Immich/thumbs
ENCODED_VIDEO_LOCATION=/media/MyNAS/Immich/encoded-video
PROFILE_LOCATION=/media/NASdata/MyNAS/Immich/profile
And…this is my docker-compose.yml:
volumes:
  - ${UPLOAD_LOCATION}:/usr/src/app/upload
  - ${THUMB_LOCATION}:/usr/src/app/upload/thumbs
  - ${ENCODED_VIDEO_LOCATION}:/usr/src/app/upload/encoded-video
  - ${PROFILE_LOCATION}:/usr/src/app/upload/profile
  - /etc/localtime:/etc/localtime:ro
Is there anything obviously wrong that would make Immich place stuff in weird folders? Should I remove all the additional paths from .env, leaving only UPLOAD_LOCATION?
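A guess based on the two snippets above: all four mounts land inside /usr/src/app/upload in the container, so when THUMB_LOCATION, ENCODED_VIDEO_LOCATION and PROFILE_LOCATION aren’t the matching subfolders of UPLOAD_LOCATION on the host, you get one set of folders at the Immich root (created for the mounts, mostly unused) and another set inside immich_files (created by the app). One way to make the host layout mirror the container layout, keeping the base path from the .env quoted above:

```ini
# .env — extra locations as subfolders of UPLOAD_LOCATION
UPLOAD_LOCATION=/media/MyNAS/Immich/immich_files
THUMB_LOCATION=/media/MyNAS/Immich/immich_files/thumbs
ENCODED_VIDEO_LOCATION=/media/MyNAS/Immich/immich_files/encoded-video
PROFILE_LOCATION=/media/MyNAS/Immich/immich_files/profile
```

Alternatively, dropping the three extra mounts from the compose file and keeping only the UPLOAD_LOCATION one gives the same on-disk layout with less to keep in sync (any existing files would need to be moved to match before restarting).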
Thanks! I’ve never walked through that screen yet… I’ll try that!
EDIT: Seems… not to have done much. Backup keeps scanning and uploading a ton of files, many more than the server actually displays; as in, I’ve seen the backup go through pictures with dates the server doesn’t seem to show. I’ve seen it slowly go through at least 15-20 videos, for example, but the server only acknowledges a grand total of 2 video assets.
EDIT2: Oops, seems the screen eventually turned off and backup stalled…Let’s see what it does now… …aaaand back to May 25th. Yup, still the same problem here :(
Sadly no luck… and re-syncing these manually would be a royal PITA. I’d be happy to re-encode them myself from the source DVDs if I were able to find them (I’d assume the DVDs would have multilanguage audio?). But so far no luck :(
EDIT: Holy cow, I think I might have found them. You can search “Simpsons MULTi DSNP RondoBYM” on sites such as 1337. These encodes are… big, taking 25-30GB per season, presumably because they include about 10 audio languages. This is still untested; I’m trying to download them at the moment. Spanish, Latin Spanish, French and Portuguese are included (and many others: Czech, Turkish, Japanese?). I plan to run a remuxer or something later to cut out the audio tracks I’m not interested in. Sadly (or obviously) this guy didn’t manage to go any further than seasons 1-8. The classics, I guess.
Just came to say thanks…Yeah eventually after copy-pasting it from scratch again, I got it running. Seems to be working now. Thanks again!