Just had Nextcloud rejecting my credentials (not for the first time). I know they weren’t wrong because I use a password manager. The logs didn’t say much. I was about to reinstall (again, not the first time Nextcloud went bonkers on me) before I tried a docker compose down && docker compose up. Lo and behold, after the restart the credentials worked again.
This stuff is just way too flaky for something so important.
Is OwnCloud good again? My main use case is saving photos, but I don’t want them locked away in a database, so Seafile is out.
Edit: I’m going to take the time to reply to you all; I’m suddenly a bit busy with work and family. But a little update: I’ve quickly set up Immich and fired up the CLI to import my library. AFAIK the files are still stored on disk somewhere and only the metadata is in a database. I didn’t realize this before; knowing that, my mind is made up and Immich is the best solution. Thanks everyone!
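For anyone curious, the import boils down to a few CLI commands. A rough sketch below; the URL, API key and path are placeholders, and the exact commands/flags depend on the CLI version, so check the Immich docs:

```sh
npm install -g @immich/cli                              # the Immich command-line client
immich login https://photos.example.com YOUR_API_KEY    # placeholder instance URL and API key
immich upload --recursive /path/to/photo-library        # walks the directory and uploads everything
```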
Maybe Immich would be a fit
https://github.com/immich-app/immich
I’m giving this a try now. It does still save the files as plain files on disk somewhere, right? AFAIK it does, which fits my requirements.
The installation instructions walk you through downloading and editing a YAML and an env file; in one of those you specify the explicit path where your files go.
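Roughly, the quick start looks like this (the download URLs and the example path may have changed, so double-check against the current docs):

```sh
mkdir immich-app && cd immich-app
wget https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env
# In .env, UPLOAD_LOCATION is the plain directory on disk where originals end up, e.g.:
#   UPLOAD_LOCATION=/srv/photos/immich
docker compose up -d
```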
Yes it does, you can back up the files externally and everything if needed. You can also import external directories of existing photos.
I haven’t had this kind of issue with Nextcloud. I’m pretty sure you can reset your password using occ via the CLI.
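For the official Docker image it’s something like this (container name and username are placeholders); for the LSIO image I believe occ lives under /config/www/nextcloud instead:

```sh
# Prompts for the new password interactively
docker exec -it -u www-data nextcloud php occ user:resetpassword your_username
```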
I’m using the LSIO docker image and I couldn’t locate the occ file to fire off the reset, but even then, I didn’t need to reset my password anyway…
That’s your problem, right there: you deployed a one-size-fits-all black box of a container that, by definition, on top of pulling in all the inefficiencies and redundancies of Docker, isn’t tuned for your specific hardware and operational needs. I get the appeal of containers, but if you want to self-host responsibly, you’ve got to be in control of what’s running and how.
Sorry if this sounds harsh.
I honestly don’t see how my issues are related to Docker. Sure, the occ app was missing (or I just couldn’t find it), but the conclusion was that I didn’t even need it.
I’m running Linux, so there aren’t really any resource inefficiencies AFAIK; it’s just namespaces and cgroups.
I could give you plenty of reasons why you’d be worse off deploying from Docker without a deep understanding of what’s going on, but to list only a few out of the obvious pile:
- your container ships a bunch of things that you do not need and that take up significant server resources. Not just Nextcloud apps that you will never need but that get loaded nonetheless, but also things like Redis and a full-fledged Collabora server that only make sense for large-scale instances.
- your container isn’t tuned for your server, because whoever made the container had no way of knowing your setup in advance. For instance, php-fpm might fork beyond your CPU or IO capacity, your application cache might not be sized for your system’s RAM, etc.
- your containers duplicate functionality from each other and from the operating system. You don’t need more than one HTTP server, database, application process manager, interpreter, …, but they add up nonetheless and shrink the pool of resources available to the rest of the system and containers. (A quick way to see what a container actually runs is sketched below.)
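Two commands go a long way here; the container/service names are just examples from a typical compose stack:

```sh
docker top nextcloud                            # every process the image actually runs
docker stats --no-stream nextcloud db redis     # RAM/CPU footprint of each service
```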
I also stopped using Nextcloud after it broke a couple of times. As a consequence, I also never use the :latest tag on any Docker container anymore; manual updates only.
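Roughly, the manual flow is: bump the pinned tag in docker-compose.yml by hand, then (service name is an example):

```sh
docker compose pull nextcloud
docker compose up -d nextcloud
docker image prune -f     # drop the superseded image afterwards
```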
When I switched to an Android phone, I also switched to Syncthing. If you have enough storage on your phone, it is amazing! Never looked back.
When I first started selfhosting I put latest on everything AND used watchtower. Quickly learned my lesson.
Are you exposing it to the Internet? Weirdness like that might be from someone exploiting your instance.
Yeah, I don’t see any evidence of that in the logs. Plus, why would it work again after a restart?
How do you have your auth set up? Is it basic user/password managed in Nextcloud (with an external database connected?), is it external auth against something like Okta, or is it a user/pass that you define in docker-compose?
If it’s via docker-compose, then a restart would clear anything an attacker had done and it would reload from the docker-compose definition, I think? I’m not too familiar with the specifics, as I’m not a security researcher, but generally some attacks are resident in memory only, and a restart can clear them, only for the problem to crop up again later, either because a running process was set to rerun the exploit or because someone monitoring your system externally retries the exploit remotely.
Or it could just be some bug in Nextcloud or unique to your environment. Personally I’m only hosting things that are internally accessible via VPN anymore. Tailscale makes that super easy these days.
I’ve been running two NC instances for over five years (linuxserver docker images)—one has been issue-free, and the other had sporadic issues like OP is describing… but not for the last year or so, so I assumed the issue had been fixed in an update. Or maybe the problem was the network configuration instead of NC.
Would be interesting to hear a little more about your setup. I had some issues when I had Nextcloud installed directly on Debian (though nothing this major), have since switched to running it on Docker and it’s been very solid.
It’s the LSIO image hooked up to a separate (but also dockerized) Postgres DB that’s also used for other apps. The data and config directories are bind mounts to the local filesystem. It connects to a Samba share via the External Storage plugin. It is exposed to the internet through a Caddy reverse proxy, though (the database isn’t).
Do you have 2 redis containers by any chance? I’m asking because you mentioned Immich, and that one has redis as part of its stack
I have been using Nextcloud for years now with Postgres, Redis and tuned PHP settings, but I installed it on the host. Never had any problems, and performance is awesome… Almost every time I read about problems, it’s with the Docker images. The new AIO image is supposed to be bad too, but I can’t say anything about that since I don’t use it.
I really like Docker, but sometimes it is better to install on the host directly, or use an LXC if you need isolation. MinIO is the same… I would not want it in a container.
Maybe seafile could be an option for you 🤔
Been running multiple Nextcloud instances for years on a bog-standard Debian + Apache + php-fpm install, as documented in the official docs, which do not even mention Docker. Upgrades were never a problem. Some apps may suffer from bugs from time to time, but Nextcloud itself works flawlessly. I wrote an Ansible role to install, manage and update it. The only thing that deviates from the “recommended” setup is Postgres instead of MariaDB. People need to start following the actual documented, well-supported installation options and stop trying to stick containers everywhere…
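For reference, the CLI upgrade path on that kind of install is only a handful of commands. A rough sketch, with paths assumed and updater.phar prompting for the remaining steps:

```sh
cd /var/www/nextcloud
sudo -u www-data php updater/updater.phar       # fetches and unpacks the new release
sudo -u www-data php occ upgrade                # runs the migration steps
sudo -u www-data php occ maintenance:mode --off
```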
The docs are very good and there are a lot of tutorials for Nextcloud, but mostly they only scratch the surface. They show you how to install it, and if you are lucky you see how to set up HTTPS…
But then? Start Nextcloud, go to the admin overview, and everything is red: warnings about the phone region, the PHP OPcache… 😁 Most tutorials end there. It is a pity.
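For example, two of the usual warnings come down to this (region code and PHP paths/values are examples; check the admin docs for the recommended numbers):

```sh
# "default phone region" warning:
sudo -u www-data php occ config:system:set default_phone_region --value="DE"
# OPcache warning: raise the values in your PHP config, e.g. /etc/php/8.2/fpm/conf.d/10-opcache.ini
#   opcache.memory_consumption=128
#   opcache.interned_strings_buffer=16
```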
I just don’t see how Docker can fuck something like this up, honestly. The only thing that can be screwy is permissions when dealing with filesystem mounts, but once you’ve got that working it should be pretty static.
Maybe it is permissions, or the image doesn’t start correctly. Maybe it tries to read from the database which isn’t up at that moment, or something similar 🤔
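If it’s a startup race with the database, the logs usually show it (service names are just examples):

```sh
docker compose ps                          # is the db container actually up/healthy?
docker compose logs --since 10m db nextcloud | grep -iE 'refused|not ready|SQLSTATE'
```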
Bare metal club! :D
That’s how I ran my nextcloud for about a decade and never had problems. On my new server I’m running it in docker and so far it seems to work ok.
Good to hear that it is running ☺️
Did you follow a specific guide or did you migrate yourself? Which image are you using? Maybe this could help others
TBH I’m migrating manually by syncing files. I still wonder if it’s worth the hassle to somehow export/import contacts & calendars instead of recreating them by hand. I thought about feeding MariaDB the psql dump I used to create for backups, but that’s probably more work than doing things by hand.
One reason for me to try Docker is “easier” backups. I just throw the whole data directory of the DB container into restic. Restoring the backup would just be starting a container with that saved directory. I hope that way I don’t have to argue with the database about reading a huge SQL dump.
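Something like this, where the paths and repo are placeholders, the repo has been initialized with restic init, and stopping the DB briefly keeps the copied files consistent:

```sh
docker compose stop db
restic -r /mnt/backup/nextcloud-repo backup ./db-data
docker compose start db
```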
Unfortunately the documentation is a bit weird, I think. There’s the official all-in-one container that starts a container that starts more containers, but that was a bit too much “magic” for my taste. I used the community-maintained images and documentation and ended up with a compose file I can manage in Portainer. It runs Nextcloud (with Apache), MariaDB and Redis. I also had to add a final bit for the cron job. This way I can point my reverse proxy at the local IP of the nextcloud_apache container and be done with it.
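If you don’t want a cron sidecar in the compose file, one common alternative is a host crontab entry that calls Nextcloud’s cron.php inside the container (container name is an example, path matches the official Apache image):

```sh
# crontab -e on the host:
*/5 * * * * docker exec -u www-data nextcloud php -f /var/www/html/cron.php
```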
Thanks for the input. Copying over the files is the “clean” way, without struggle, yes. For backups I use rclone and copy the userdata folder encrypted to external storage. Works fine for me. Maybe your Docker files will help others, thanks for sharing :)
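The rclone side is basically one command once a crypt remote is configured with rclone config (remote name and paths are placeholders):

```sh
rclone copy /srv/nextcloud/data crypt-backup:nextcloud-data --transfers 4 --progress
```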
Docker has its use cases, but I don’t need everything in there. Like I said, MinIO for example is just a short one-liner to start.
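That one-liner being roughly (data path and credentials are examples):

```sh
MINIO_ROOT_USER=admin MINIO_ROOT_PASSWORD=changeme \
  minio server /srv/minio-data --console-address ":9001"
```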
The most important thing is backup ☺️
I tried to run it on Debian, and on each update it was always complaining that the PHP version was too old. Maybe it can be OK on a distro that doesn’t come with ancient packages…
I guess this is only a problem on an older Debian server. Then you could use the PHP “PPA” (the Sury repo, in Debian’s case). Some people still run PHP 7.4 or even 5.6, but those are end of life 😳
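Roughly like this on Debian; the key/repo setup changes occasionally, so follow the current instructions at https://packages.sury.org/php/ (the PHP version below is just an example):

```sh
sudo apt install -y lsb-release ca-certificates curl
sudo curl -sSLo /usr/share/keyrings/deb.sury.org-php.gpg https://packages.sury.org/php/apt.gpg
echo "deb [signed-by=/usr/share/keyrings/deb.sury.org-php.gpg] https://packages.sury.org/php/ $(lsb_release -sc) main" \
  | sudo tee /etc/apt/sources.list.d/php.list
sudo apt update && sudo apt install -y php8.2-fpm php8.2-pgsql
```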
Just wanted to +1 your comment. Installing on the bare-metal host is higher risk, but higher reward as well in terms of stability and performance. In my case I’m using MariaDB, Redis, PHP, and Apache, and it’s been solid for years now.
I used it with MariaDB before; converting to Postgres gave a performance boost. Don’t ask me why, but it ran faster.
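For reference, Nextcloud has a built-in converter for this. A rough sketch, where the DB user/host/name are placeholders and the command prompts for the Postgres password:

```sh
sudo -u www-data php occ maintenance:mode --on
sudo -u www-data php occ db:convert-type --all-apps pgsql nextcloud_user 127.0.0.1 nextcloud_db
sudo -u www-data php occ maintenance:mode --off
```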
If you’re interested, here is a guide 😊
I’m interested; it’s on the list but pretty far down. Postgres is hands-down better IMHO, but I followed Nextcloud’s recommendations at the time I set things up and just never switched. Thanks for the guide!!
I use Seafile… can give a partial recommendation
Maybe try https://github.com/kd2org/karadav if you want to continue using the NC apps for photo backups.
Is it measurably better? From a quick look, it’s PHP again (and if your NC runs slow, there’s a fair chance it’s not properly set up/tuned), and SQLite (so rather small scale).
That’s really cool. I’m giving Immich a try now but I saved your comment.
I’m not done, but I’m so tired of stupid error messages from developers that don’t help. I love the open source community, but for god’s sake devs, handle your errors in a format that makes sense.
Nextcloud or others, it’s always the same. I either get a 200-line stack trace that means absolutely nothing to me because the dev didn’t bother to handle the exception (like you submit a form and get a null reference back; it sure would be nice to know which field was null), or of course the infamous “Exception occurred” and nothing else.
My favorite was when I tried to submit a fix to Jellyfin for one of their very opaque exceptions: keep the stack trace but rewrite the error message to something like “X exception occurred, do you have permissions to do that?”, and the PR was rejected. I just can’t even with that.
Out of interest, which PR was that?
It’s uncommon to rewrite exception messages to be user-friendly; they are for developers. The exception shouldn’t be thrown in the first place if it’s a common issue, or the error message should be more generic for unhandled problems.
Ehh, I try to keep my account here and my real GitHub separate. I’m all for exception messages being for developers, especially in logs, but things also shouldn’t fail silently either. This was a case where something about the OS I was running was different, and I wanted to surface an error pointing at a common reason for that exception being thrown. This was years ago though, so I don’t remember the details.
I strongly disagree with this, any error message shown to the user should be helpful to the user
I think you misunderstood, this is about exceptions, which shouldn’t be shown to users unless they ask for it.
Exceptions are not helpful to users most of the time, as shown above. They need instructions on how to report issues instead since they most likely can’t fix an unhandled exception by themselves.
Underrated comment.
To put it into user perspective:
Exception X with error code xxx means Y. Y should be shown to the user via a modal dialog, and the application state has to be reverted to a valid state as part of the error handling.
The exception/error gets logged; the user doesn’t see the exception itself, but the interpretation of the error is shown to them via the UI.
I’m also a developer, and my philosophy is that stack traces are for the developers, but they should be translated into informative error messages for the user. Otherwise you’re doing security through obscurity.
Mine has randomly done that for the last few versions now. I also noticed it now maintains several cookies that I have to clear before I can log in successfully again.
I do have Redis configured with it, have never used their AIO image, and previously, the session ID was the only cookie. Haven’t kept quite up to date with NC’s development, but maybe it’s no longer using PHP’s session store in favor of its own mechanism?
Unfortunately, I’m too invested in NC to start switching everything to discrete apps, so I guess I just have to put up with it. :shrug:
For this exact reason I’m using Nextcloud as a service. You can even install plugins.
It’s a trade-off ofc, but it has been rock solid so far.
I’m not affiliated with that particular provider though.
Use Immich for photos.
ownCloud oCIS works but is very young. It is literally just file hosting with something to open office files online.
https://github.com/simone-viozzi/my-server
Those are my configs; you have both Immich and ownCloud there.
I just implemented authentik SSO for Nextcloud and other apps and it’s made my life easier.
In my experience, Immich is way better for Photos.
Giving this a shot, importing everything through the CLI now
I love Immich. It’s such an amazing system that it still shocks me that it’s FOSS.