
Vaultwarden itself is actually one of the easiest Docker apps to deploy…if you already have the foundation of your home lab set up correctly.

The foundation has a steep learning curve.

Domain name, dynamic DNS updates, port forwarding, reverse proxy. It's not easy to get all of this working perfectly, but once you do, you can use the same foundation to install any app. Once the foundation is in place, additional apps take only a few minutes.
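To give a flavour of one of those pieces, the dynamic DNS update is usually just a small cron job. This is only a sketch, and the update URL, hostname, and credentials are hypothetical placeholders since every provider has its own API:

```
#!/usr/bin/env bash
# Hypothetical DDNS updater: look up the current public IP, then push it to the
# provider's update endpoint. Run it from cron every few minutes.
CURRENT_IP=$(curl -fsS https://ifconfig.me)
curl -fsS -u "user:api-token" \
  "https://dyndns.example.com/update?hostname=home.example.com&myip=${CURRENT_IP}"
```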

Want ebooks? Calibre takes 10 mins. Want link archiving? Linkwarden takes 10 mins.

And on and on

The foundation of your server makes a huge difference. Well worth getting it right at the start and then building on it.

I use this setup: https://youtu.be/liV3c9m_OX8

Local only websites that use https (Vaultwarden) and then external websites that also use https (jellyfin).


See my comment above:

https://lemmy.ca/comment/11490137

I don't like that Obsidian isn't fully open source, but the plugins can't be beat if you use them. Check out some YouTube videos for top 20 plugins etc. They take the app to a whole new level.


I could never get NextCloud on Android to sync files back to the server.


The real power of Obsidian is similar to why the Raspberry Pi is so popular: it has such a large community that the plugins are amazing and hard to duplicate.

That being said, I use this to live-sync between all my devices. It works with almost the same latency as Google Docs, but it's not meant for multiple people editing the same file at the same time:

https://github.com/vrtmrz/obsidian-livesync


This is the correct answer for the selfhosted crowd


And borgmatic makes retention rules with automatic runs super easy. It's basically a wrapper that runs Borg on the client side.
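A minimal sketch of what that looks like, assuming borgmatic reads its config from /etc/borgmatic/config.yaml; the paths, repo, and retention numbers are placeholders, and the exact YAML layout depends on your borgmatic version:

```
# Write a minimal borgmatic config with retention rules (illustrative only).
cat > /etc/borgmatic/config.yaml <<'EOF'
source_directories:
  - /srv/appdata
repositories:
  - path: ssh://user@backup-host/./backups.borg
keep_hourly: 24
keep_daily: 7
keep_monthly: 6
EOF

# Run once by hand to verify, then schedule it via cron or a systemd timer.
borgmatic --verbosity 1 --stats
```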



Last I checked, there is an open PR for the PWA Android app to expose the share function. That will allow this to work; however, you will have to install the PWA via Chrome since the share feature for PWAs is proprietary. Sucks because I use Firefox with a bunch of privacy features.


HTTPS already encrypts traffic between client and server, so that doesn't need to be on their roadmap.

Encryption at rest could be an option, but seeing how many other projects have trouble with it (NextCloud), it's probably best to handle this at the file system level with disk encryption.
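For example, a rough LUKS setup on a dedicated data disk gives you encryption at rest below the application; the device name and mount point here are just examples:

```
# One-time setup: encrypt the partition, open it, create a filesystem, mount it.
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 vaultdata
mkfs.ext4 /dev/mapper/vaultdata
mount /dev/mapper/vaultdata /srv/vaultwarden
```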


Same with jellyfin.

They basically don't accept recurring donations on purpose.

https://forum.jellyfin.org/t-we-re-good-seriously


I've got multiple apps using LDAP, OAuth, and proxy auth on Authentik, and I've not had this happen.

I also use Traefik as my reverse proxy.

I didn't manually create an outpost. Not sure what advantage there is unless you have a huge organization and run multiple redundant containers. Regardless, there might be some bug here because I otherwise have the same setup as you.

I would definitely try updating everything to the latest container version first.


For people wanting a very versatile setup, follow this video:

https://youtu.be/liV3c9m_OX8

Apps that are accessed from outside the network (Jellyfin) get jellyfin.domain.com.

Apps that are internal only (Vaultwarden), or reached via WireGuard as extra security, get vaultwarden.local.domain.com.
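A rough sketch of how that internal-only split can work, assuming a local DNS resolver with dnsmasq-style config (IPs and domain names are placeholders): point the internal subdomain at your reverse proxy on the LAN and simply never publish it externally.

```
# On the internal DNS server (Pi-hole / dnsmasq style), resolve the whole
# local subdomain to the reverse proxy's LAN address:
echo 'address=/local.domain.com/192.168.1.10' >> /etc/dnsmasq.d/10-internal.conf
systemctl restart dnsmasq
# Externally, only create public DNS records for the hosts you actually expose
# (e.g. jellyfin.domain.com); nothing under local.domain.com exists on the internet.
```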

Add on Authentik to get single sign-on. Apps like Sonarr that don't have good built-in security can be put behind proxy auth and also only accessed locally or over WireGuard.

Apps that have OAuth integration (Seafile etc.) get single sign-on as well at seafile.domain.com (make this external so you can share links with others; same for Immich etc.).

With this setup you will be super versatile and can expand to any apps you could ever want in the future.


Does anyone know if Dockge allows you to connect directly to a git repo to pull compose files?

This is what I like most about Portainer. I work on the compose files from an IDE and then check them into my self-hosted git repo.

Then in Portainer, the stack is connected to the repo, so I only press a button to pull the latest compose, and there is a checkbox to decide whether I want the Docker image to update or not.

Works really well and makes it very easy to roll back if needed.
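I haven't used Dockge for this, but the bare-bones equivalent of that button press is just a pull followed by compose reconciling the stack (the folder name is an example):

```
cd /opt/stacks/jellyfin     # the stack's folder with its docker-compose.yml
git pull                    # grab the latest compose from the repo
docker compose pull         # optional: also pull newer images
docker compose up -d        # recreate only the services whose config changed
```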


Bitwarden lets you upload files (key files) and save all your passwords.


I don't remember all the details. They never went closed source; there was a difference of opinion between the primary devs on the direction the project should take.

It's possible that was related to corporate funding, but I don't know that.

Regardless, it was a fork where some devs stayed with OwnCloud and most went with NextCloud. I moved to NextCloud at that time as well.

OwnCloud now seems to have the resources to completely rewrite it from the ground up, which seems like a great thing.

If the devs have a disagreement again then the code can just be forked again AFAIK just like any other open source project.


I only read the beginning but it says you can use it for private deployments but can’t use it commercially. Seems reasonable. Any specific issues?


I have no problem supporting devs, but locking what should be core features behind a paywall is unacceptable to me.


I mean, software that's actively being developed can't be called DOA. Even if it's garbage now (and I don't know if it is), that doesn't mean it can't become useful at a future date.

It's not like a TV show, where once it's released it can never be changed.


Oh never mind, I saw this funding announcement for $6M and assumed it was the same company. Looks like they have many corporate investors…doesn't inspire too much confidence.

Although they are still using the Apache 2 license and you can see they are very active on GitHub. On the surface it does look like a good FOSS project.

https://owncloud.com/news/muktware-owncloud-gets-another-round-6-3-million-funding-releases-owncloud-6-enterprise-edition/


Ya, it was bought by Kiteworks, which provides document management services for corporations (which explains why they mention traceable file access in their features a lot).

That being said, I initially thought they were bought in 2014 and that it had been a decade already. Correcting myself: they were bought very recently; they have been accepting corporate funding for more than a decade, however. That's not bad in and of itself.


Thank you for providing a first-hand perspective. I'll probably try to spin up a Docker deployment for testing.

I don't really plan to use many of the plugins since I think that was the downfall of NextCloud: trying to do everything instead of doing its core job well.


Also, looking through some of the issues and comments on GitHub about there being no plans to implement basic features (file search in the Android app) does not inspire confidence at all. It's one of the reasons I'm hoping the OwnCloud rewrite is good.


Did not know this. Thanks!

Looks like Kiteworks invested in OwnCloud in 2014 and they still seem to be going strong with the OSS development, which is a good sign.

This probably explains why there are so many active devs on the project and how they got a full rewrite into version 4 relatively quickly.

Already seems to have more features than Seafile.


I know, I did as well.

The point of the post is that there is a very active full rewrite of the whole thing trying to ditch all the tech debt that NextCloud inherited from the OG OwnCloud (PHP, Apache, etc.).


I had NextCloud on a Ryzen 3600 with an NVMe ZFS array. While that was faster than my previous Intel Atom with HDD + SSD cache, Seafile blows it away in terms of speed and resiliency. It feels much more reliable with updates etc.


Exactly, Seafile is the best I've found so far, but a clean rewrite of the basic sync features would be great.

Seafile, for example, has full-text search locked behind a paywall even though tools like Elasticsearch could be integrated into it for free. Even the Android app has filename search locked behind a paywall. You have to log into the website on your phone if you need to search.

Pathetic state of affairs.


The topic of self-hosted cloud software comes up often, but I haven't seen anyone mention ownCloud Infinite Scale (the rewrite in Go).

I started my cloud experience with owncloud years ago. Then there was a schism and almost all the active devs left for the nextcloud fork. I used nextcloud from its inception until last year, but like many others it always felt brittle (easy to break something) and half-baked (features always seemed to be at 75% of what you want). As a result I decided to go with Seafile and stick to the Unix philosophy: get an app that does one thing very well rather than a mega app that tries to do everything. Seafile does this very well. Super fast, works with single sign-on etc. No bloat etc.

Then just the other day I discovered that owncloud has a full rewrite. No PHP, no Apache etc. Check the GitHub: multiple active devs with lots of activity over the last year. The project seems stronger than ever and aims to fix the primary issues of nextcloud/owncloud PHP. It is also designed for cloud deployment, so it works well with docker and should be easy to configure via docker variables instead of config files mapped into the container (rough sketch below).

Anyways, the point of this thread is:

1. If, like me, you had never heard of it, check it out.
2. If you have used it, please post your experiences compared to NextCloud, Seafile, etc.
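For anyone who wants to poke at it, this is roughly the Docker quickstart as I understand it from their docs; treat the image name, subcommands, port, and env vars here as assumptions to verify upstream rather than a tested recipe:

```
# Generate config and an admin password into named volumes (one-time step).
docker run --rm -it \
  -v ocis-config:/etc/ocis -v ocis-data:/var/lib/ocis \
  owncloud/ocis init

# Start the server; OCIS_URL should be the public HTTPS URL behind your reverse proxy.
docker run -d --name ocis -p 9200:9200 \
  -v ocis-config:/etc/ocis -v ocis-data:/var/lib/ocis \
  -e OCIS_URL=https://cloud.example.com \
  owncloud/ocis
```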

When I was starting out I almost went down the same path. In the end, Docker secrets are mainly useful when the same key needs to be distributed across multiple nodes.

Storing the keys locally in an env file that is only accessible to the docker user is close enough to the same thing for home use and greatly simplifies your setup.

I would suggest using a folder for each stack that contains one docker compose file and one env file. The env file contains passwords; the rest of the env variables are defined in the docker compose itself. Exclude the env files from your git repo (if you use one for version control) so you never check a secret into git (in practice I have one folder for compose files that is in git, and my env files are stored in a different folder that is not).
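A rough sketch of that layout (names are just examples; the .env file stays out of git):

```
# ~/stacks/vaultwarden/
# ├── docker-compose.yml   # checked into git
# └── .env                 # secrets only, excluded via .gitignore
#
# .env might contain something like:
#   ADMIN_TOKEN=changeme
#
# docker-compose.yml references it as ${ADMIN_TOKEN}. Compose picks up .env
# automatically when you run it from inside the stack folder:
cd ~/stacks/vaultwarden && docker compose up -d
```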

I do this all via Portainer; it will set up the above folder structure for you. Each stack is a compose file that Portainer pulls from my self-hosted Gitea (on another machine). Portainer creates an env file itself when you add the env variables from the GUI.

If someone gets access to your system and is able to read the env file, they already have high-level access and your system is compromised regardless of whether you have the secrets encrypted via Swarm or not.


True, but the downside of Cloudflare is that they act as a reverse proxy and can see all your HTTPS traffic unencrypted.


I like finamp as my android music client for jellyfin


I would strongly suggest a second device like an RPi with Gitea. That's what I have.

I use portainer to pull straight from git and deploy


Not to mention the advantage of infrastructure as code. All my docker configs are just a dozen or so text files (compose). I can recreate my server apps from a bare VM in just a few minutes then copy the data over to restore a backup, revert to a previous version or migrate to another server. Massive advantages compared to bare metal.
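As a rough sketch of what that rebuild looks like on a fresh VM (the repo URL and paths are made up):

```
# Clone the compose files, then bring every stack up; data is restored from backup afterwards.
git clone https://git.example.com/me/homelab-stacks.git /opt/stacks
for d in /opt/stacks/*/; do
  (cd "$d" && docker compose up -d)
done
```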


Yes, you should use something that makes sense to you but ignoring docker is likely going to cause more aggravation than not in the long term.


There is an issue with your database persistence. The file is being uploaded but it’s not being recorded in your database for some reason.

Describe in detail what your hardware and software setup is, particularly the storage and OS.

You can probably check this by trying to upload something and then checking the database files to see the last modified date.
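For example, something along these lines shows whether the database files are actually being written when you upload (the volume name here is a placeholder for whatever your compose file uses):

```
# Find where the database volume lives, then look at recent modification times.
docker volume ls
docker volume inspect app-db --format '{{ .Mountpoint }}'
ls -lt "$(docker volume inspect app-db --format '{{ .Mountpoint }}')" | head
```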


Thanks! Makes sense if you can’t change file systems.

For what it's worth, ZFS lets you dedup on a per-dataset basis, so you can easily choose to have some files deduped and not others. Same with compression.

For example, without building anything new, the setup could have been to copy the data from the actual Minecraft server to the backup server that has ZFS, using rsync or some other tool. Then the backup server just runs a snapshot every 5 mins or whatever. You now have a backup on another system with snapshots at whatever frequency you want, with dedup.

Restoring an old backup just means you rsync from a snapshot back to the Minecraft server.

Rsync is only needed if both servers don't have ZFS. If they both have ZFS, the send and receive commands are built into ZFS and are designed for exactly this use case. You can easily send a snapshot to another server if they both have ZFS.
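A rough sketch of that flow (pool and dataset names are made up):

```
# Take a snapshot on the source, then replicate it to the backup box over SSH.
zfs snapshot tank/minecraft@2024-01-01_0500
zfs send tank/minecraft@2024-01-01_0500 | ssh backup-host zfs receive backup/minecraft

# Later runs only ship the delta between two snapshots (incremental send).
zfs snapshot tank/minecraft@2024-01-01_0510
zfs send -i @2024-01-01_0500 tank/minecraft@2024-01-01_0510 | \
  ssh backup-host zfs receive backup/minecraft
```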

ZFS also has Samba and NFS export built in if you want to share the filesystem with another server.


I use ZFS so I'm not sure about others, but I thought all CoW file systems have deduplication already? ZFS supports it natively and lets you enable it per dataset. Why make your own file deduplication system instead of just using a ZFS filesystem and letting that do the work for you?

Snapshots are also extremely efficient on CoW filesystems like ZFS, as they only store the diff between the previous state and the current one, so taking a snapshot every 5 mins is not a big deal for my homelab.

I can easily explore any of the snapshots and pull any file from any of them.
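For reference, roughly what that looks like on ZFS (pool and dataset names are examples):

```
zfs set dedup=on tank/backups          # per-dataset dedup (RAM hungry, so weigh the cost)
zfs set compression=lz4 tank/backups   # compression is also per dataset
zfs snapshot tank/backups@test
zfs list -t snapshot tank/backups      # list snapshots for the dataset
ls /tank/backups/.zfs/snapshot/test/   # browse files inside a snapshot directly
```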

I'm not trying to shit on your project, just trying to understand its use case, since it seems to me ZFS provides all the benefits already.


Start with this to learn how snapshots work

https://fedoramagazine.org/working-with-btrfs-snapshots/

Then read this to learn how to make automatic snapshots with retention:

https://ounapuu.ee/posts/2022/04/05/btrfs-snapshots/

I do something very similar with ZFS snapshots and deduplication on. I take one every 5 mins and keep an hour's worth, then keep 24 hourly snapshots each day and daily snapshots for a month, etc.
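A bare-bones sketch of the 5-minute tier with pruning, run from cron; the dataset name and retention count are examples, and tools like sanoid or zfs-auto-snapshot handle this more robustly:

```
#!/usr/bin/env bash
# Take a 5-minute snapshot and keep only the newest 12 (one hour's worth).
DATASET=tank/documents
zfs snapshot "${DATASET}@5min-$(date +%Y%m%d-%H%M)"

# List this dataset's 5-minute snapshots oldest-first, then drop all but the last 12.
zfs list -H -t snapshot -o name -s creation "${DATASET}" \
  | grep "@5min-" \
  | head -n -12 \
  | xargs -r -n1 zfs destroy
```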

For backups to remote locations you can send a snapshot offsite.


This is really amazing! In theory, can you use 2GB with 4 different VMs?



The proper way of doing this is to have two separate systems in a cluster such as Proxmox. The system with GPUs runs certain workloads and the non-GPU system runs other workloads.

Each system can be connected (or not) to a UPS, shut down during a power outage, and then boot back up when power is restored.

Don't try hot-plugging a GPU; it will never be reliable.

Run a Proxmox or Kubernetes cluster; they are designed for this type of application but will add a fair amount of complexity.


A Story of Silent Data Corruption with Seafile
Technically this isn't actually a Seafile issue, however the upload client really should have the ability to run checksums to compare the original file to the file that is being synced to the server (or other device).

I run docker in a VM that is hosted by Proxmox. Proxmox manages a ZFS array which contains the primary storage that the VM uses. Instead of making the VM disk 1TB+, the VM disk is relatively small since it's only the OS (64GB), and the docker containers mount a folder on the ZFS array itself, which is several TBs.

This had all been going really well with no issues, until yesterday when I tried to access some old photos and the photos would only load half way. The top part would be there but the bottom half would be grey/missing. This seemed to be randomly present on numerous photos; some were normal and others had missing sections. Digging deeper, some files were also corrupt and would not open at all (PDFs, etc). Badness alert...

All my backups come from the server. If the server data has been corrupt for a long time, then all the backups would be corrupt as well. All the files on the Seafile server were originally synced from my desktop, so when I open a file locally on the desktop it all works fine; only when I try to open the file on Seafile does it fail. Also, not all the files were failing, only some. Some old, some new. Even the file sizes didn't seem to consistently predict whether a file would work or not.

It got to the point where I could take a photo from my desktop, drag it into a Seafile library via the browser and it would show a successful upload, but then trying to preview the file wouldn't work, and downloading that very same file back again showed a file size of about 44kb regardless of the original file size. Google/DDG...can't find anyone that has the same issue...very bad.

Finally I noticed an error in MariaDB: "memory pressure can't write to disk" (paraphrased). Ok, that's odd. The RAM was fine, which is what I assumed it was. HD space can't be the issue since the ZFS array is only 25% full, and both MariaDB and Seafile only have volumes that are on the ZFS array. There are no other volumes...or are there???

Finally, checking out the volumes in Portainer, Seafile only has the two expected ones, data and database. Then I see hundreds of unused volumes. A quick google reveals `docker volume prune`, which deletes many GBs worth of volumes that were old and unused. By this point I had already created and recreated the Seafile docker containers a hundred times with test data and simplified the docker compose as much as possible etc, but it started working right away. MariaDB starts working, and I can now copy a file from the web interface or the client and it will work correctly.

Now I'm going through the process of setting up my original docker compose with all the extras I had, remaking my user account (luckily it's just me right now), setting up the sync client and then copying the data from my desktop back to my server.

I've got to say, this was scary as shit. My setup uploads files from desktop, laptop, phone etc to the server via Seafile; from there Borg takes incremental backups of the data and sends them remotely. The second I realized that the local data on my computer was fine but the server data was unreliable, I immediately knew that even my backups were now unreliable.

IMHO this is a massive problem. Seafile will happily 'upload' a file and say success, but then trying to redownload the file results in an error since it doesn't exist.
**Things that really should be present to avoid this:**

1. The client should have the option to run a quick checksum on each file after it uploads and compare the original to the uploaded one to ensure data consistency. There should probably be an option to run this afterwards as a check too, outputting a list of files that are inconsistent.
2. The default docker compose should run with health checks on MariaDB, so that when it starts throwing errors while the interface still runs, someone can be alerted.
3. There needs to be some kind of reminder to check in on unused docker volumes.
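For point 2, a sketch of what that healthcheck could look like, shown as a compose override fragment (the service name `db` is an example; recent official MariaDB images ship a healthcheck.sh helper, older ones may need mysqladmin ping instead), plus the cleanup command that actually resolved things here:

```
# Compose override adding a healthcheck to the database service.
cat > docker-compose.override.yml <<'EOF'
services:
  db:
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 30s
      timeout: 5s
      retries: 3
EOF

# And the cleanup that ultimately fixed it: remove old, unused volumes.
docker volume prune
```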

Self hosted YouTube player with automatic yt-dlp downloader
Looking for a self-hosted YouTube front end with an automatic downloader. So you would subscribe to a channel, for example, and it would automatically download all the videos and any new uploads. Jellyfin might be able to handle the front-end part, but I'm not sure about automatic downloads and proper file naming and metadata.
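The download half can be covered by plain yt-dlp on a cron schedule (the channel URL and paths below are placeholders); a purpose-built app would then just add the web front end on top:

```
# Fetch everything from a channel, skip what's already been archived, embed metadata,
# and keep a tidy folder-per-channel layout for the media server to index.
yt-dlp \
  --download-archive /srv/youtube/archive.txt \
  --embed-metadata --embed-thumbnail \
  -o "/srv/youtube/%(uploader)s/%(title)s [%(id)s].%(ext)s" \
  "https://www.youtube.com/@SomeChannel/videos"
```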

Anyone tried this 4x 10gbe + 5x 2.5gbe router?
Very solid price, the cheapest I've seen for something like this. Has anyone tried it with OPNsense or other software? The linked thread talks about someone getting 60C load temps, but the ambient air was 37C and they were using an RJ45 transceiver, which is known to use a lot of power. Wondering if anyone else has experience with this. Seems like a big advancement in what's possible at home scale with non-second-hand equipment. Another article about this: https://liliputing.com/this-small-fanless-pc-is-built-for-networking-with-four-10-gbe-and-five-2-5-gb-ethernet-ports/