• 2 Posts
  • 50 Comments
Joined 1Y ago
Cake day: Jun 15, 2023


I’ve set up Tailscale in the past week and fallen in love with the ease of use, so this has my vote too. But if I were doing this, I would chop the file into, say, 500 MB parts using 7z or WinZip and then transfer them over the Tailscale IPs with SCP (WinSCP if using Windows).
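Roughly what I mean, in case it helps; the file names and the Tailscale IP below are just placeholders:

```
# Split the file into 500 MB volumes with 7z (store-only, no re-compression)
7z a -v500m -mx0 backup.7z big-file.mkv

# Copy all the parts to the other machine over its Tailscale IP
scp backup.7z.* user@100.64.0.2:/home/user/incoming/

# On the receiving end, stitch the parts back together
7z x backup.7z.001
```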



Your point is valid. I’ll apply the learnings from this thread to other, more robust services first and keep an eye on Immich’s progress in terms of security.


Thank you, I’ll work it out based on what you’ve told me.


Thanks, I’ll figure out the best way based on the responses.

And lol, I didn’t know GoDaddy was this bad; this was the first time I purchased a domain. Is it possible to move a domain from one provider to another, or do I have to wait for it to expire and then register it with the other provider?


I read about Funnel and it is really cool. But it seems to expose services only through a *.ts.net type of URL, and what I want is to use the domain that I’ve acquired.


I have used a reverse proxy in an office setup where my local IP was NATed to a dedicated public IP. But in my home lab I don’t have a dedicated public IP, so I need to figure a way around that.


For now only Immich, but on a subdomain like I said in the PS. And yes, Immich is installed using Docker.


How do I make my immich available publicly?
I have self-hosted Immich on Debian on my homelab, and I have set up Tailscale to be able to access it outside my home. Some time ago I was able to purchase a domain of my choice from GoDaddy.

While I am used to hosting stuff on Linux, I’ve never exposed it for public access. I want to do that now. Is it something I can do within Tailscale, or do I need to set up something like Cloudflare? What should I be searching for to learn and implement? What precautions should I take? I would like to keep the Tailscale setup too.

PS: I would like to host Immich on a subdomain like photos.mydomain.com. Thanks!

I’m scared of these ‘breaking changes’ even though I’m not exactly a self-hosting newbie. That’s because I don’t have a proper 3-2-1 backup, and I’m afraid I might lose my photos or settings. I’ve been exploring setting up Immich through a home-server management tool like Runtipi that takes separate backups which can be reverted to in case something goes wrong. Is anyone aware of any negatives to that?


Yes, the other answer also suggests this and I think this will do the trick. Thank you for your response.


Phew! I almost believed I was asking for something beyond the scope of Linux-fu. English not being my first language may be part of the reason, but I still think I covered everything that was relevant.

Yes, that’s exactly what I want and your post has given me the clarity I needed. M.2 wifi slots don’t support disks so that option is definitely out. I’m going to boot with the latest Ubuntu live OS on a USB and attempt what you’ve outlined.

I don’t have anything really critical on the zfs that is not backed up separately so I’m definitely going to attempt this and learn in the process.

Thank you for taking the time to respond!


The SSD is 256 GB while the two HDDs are 4 TB each. What kind of ZFS config/array do you suggest I create from them?


I get the part that the cloning software does not care about the underlying OS. My worry is that I’ll run the cloning software/command from a live USB that will not detect the ZFS mirror on my backup drives on its own, and will thus break the mirror, with bad consequences for the existing data. I could not find any commands to make the live USB OS discover and respect the existing ZFS configuration.
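Edit: for posterity, a live session can apparently discover an existing pool once the ZFS tools are installed; something like this is what I was looking for (the pool name is a placeholder):

```
# In the Ubuntu live session, install the ZFS utilities first
sudo apt update && sudo apt install -y zfsutils-linux

# List the pools the live system can see, without importing anything yet
sudo zpool import

# Import the existing pool (add -o readonly=on to be extra safe)
sudo zpool import -f tank
```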


I’ll definitely take this route if the wifi slot will not support the m.2 drive. Thanks for the suggestion.


Why do I need to add my nvme to the zfs pool? That doesn’t really make sense.

If the wifi slot does support M.2 drives (I was just looking for some confirmation/documentation), it’ll solve my problem. Thanks nonetheless.


I would like to avoid buying additional hardware if possible.


How to image a Debian system on a zfs mirror?
I have set up a refurbished PC as a media PC with storage. The OS, Debian, is on an M.2 NVMe disk of 256 GB, and I have connected 2x4 TB disks in ZFS mirror mode to store my media.

Lately, while booting, I’ve noticed some messages suggesting that the health of the NVMe disk is not good. Searching the error, I realised that I should not rely on it. I’ve made a number of tweaks to set up my system the way I like, and I want to preserve them by creating an image of the OS drive on a fresh NVMe disk of the same size that I have. How do I go about doing it? I could boot using a live USB and create the image on the HDDs, but the live USB OS won’t recognise my ZFS, right? Is using another external disk or another PC my only option here?

Thanks and cheers!

PS: The machine is an HP EliteDesk 800 G3, which has a wifi port that I’ve heard can be used as an additional port for M.2 drives. Is it true?
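To make the question concrete, the kind of clone I have in mind, run from a live USB, is roughly this (device names and paths are examples and must be double-checked with lsblk first):

```
# Identify the OS disk first; getting this wrong is destructive
lsblk -o NAME,SIZE,MODEL

# Dump the whole 256 GB NVMe into an image file on some mounted storage
sudo dd if=/dev/nvme0n1 of=/mnt/backup/debian-os.img bs=4M status=progress conv=fsync

# Later, write the image onto the fresh NVMe of the same size
sudo dd if=/mnt/backup/debian-os.img of=/dev/nvme0n1 bs=4M status=progress conv=fsync
```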

I would like to replicate your setup in the future. How do you connect the two machines, using Tailscale or something like that?


Do Portainer, and Docker in turn, allow taking/accessing something like point-in-time snapshots of containers, the way VM software does? Snapshots make it easy to tinker with stuff, knowing that if I mess up, I can go back to one and be good again.
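Edit: from what I’ve read since asking, the closest Docker equivalents seem to be something like this (container and volume names are made up):

```
# Freeze a container's filesystem into a tagged image (volumes are NOT included)
docker commit immich_server immich_server:snapshot-2024-06-01

# Back up a named volume by tarring it from a throwaway container
docker run --rm -v immich_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/immich_data.tgz -C /data .

# Restore the volume later the same way, in reverse
docker run --rm -v immich_data:/data -v "$PWD":/backup alpine \
  tar xzf /backup/immich_data.tgz -C /data
```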


I don’t have anything to offer, but I have a question since I’m on a similar quest. I’m also a n00b when it comes to Docker, even though I have the best intentions to learn it eventually. In the coming days I want to set up a similar server for photo backup, and I was hoping to go with Immich too.

I have heard that Immich occasionally ships updates that are breaking in nature. How does Tipi handle such circumstances?


I am in the same situation as OP and this is the route I’m moving along. I got myself a used 800 G3 SFF based on a comment in a similar thread a few days ago.

It feels like a fine machine that consumes little power and supports up to 64 GB of RAM (I got 2x16 with future upgrades in mind). But the most noticeable thing about it is that it has space for 2x3.5" HDDs, which no other machine this size has, on top of the NVMe disk I put in. I intend to put 2x4 TB disks in it in RAID1, mainly to store family photos and videos. I’m learning Docker to set up Immich properly; I don’t want to lose anything due to my stupidity in updating things.

I have about a dozen questions regarding the same, but I’m scared to put them all in a single post. I’m trying to follow the advice in https://www.youtube.com/watch?v=WCDmHljsinY and hoping things will turn out fine. Based on their recommendation, I’ve also obtained a used G4560 processor to replace the existing dual-core processor it came with (I think it is a 4400 or something like that). I hope it’ll be sufficient for the Stremio/Torrentio/RD stack that everyone is so pleased with.

All in all, I’m enjoying the journey of setting it up. My only fear is not losing the data itself, but curating it to my liking and then losing the customisations in a fuckup.


Or they could connect their Pi as well as their laptop to a hotspot created on their pocket computer masquerading as a phone. They won’t lose internet on the laptop or the Pi that way.


This scares me to an extent, but as long as Immich provides some instructions on what to do to get back on track, this should be okay.

Also, what happens if one skips multiple such breaking updates? Will it be my responsibility to hunt down the changes and make the corresponding amends?

And finally, while I understand that Immich is not supposed to be a photo backup solution, does it allow export and import of metadata, tags, etc.? I ask because I intend to set it up, and I may skip a few of these updates and instead do a fresh install a year or so later. If I can simply export my settings, face ID info, and albums from the old setup and import them into the new one, that makes things very easy.

Otherwise the phrase ‘breaking changes’ does sound really scary.


For someone freshly interested in self-hosting, what do breaking changes really mean with respect to Immich? Does it mean that if I upgrade to this version, I have to rebuild my library, face tags, etc.? Or does it mean that things might stop working, some files might need to be changed or upgraded manually, and things may go awry in doing so?


Cool, I’ll try this next time. So far the least damaging way I’ve tried is putting the thing in hot water. The magnet and the base expand by different amounts and it is relatively easy to pry the magnet off. But the thing cools down quickly so it takes a few tries.


Sites like Anna’s Archive should let users flag books that lack OCR and submit OCRed versions of them.



Another application that I wish had an easy implementation of what you call base URLs is Apache Superset. Such a great application, yet I’m unable to use it in my setup.


I stand with you on the subdomain and bare-metal thing. There are many great applications that I have trouble deploying since I don’t have control over the domain’s A records within my setup, whereas setting up mysite.xyz/something is trivial and something I have full control over. The Docker thing I can understand to some extent, but I wish it were as simple as a Python venv kind of thing.

I’m sure people will come after me saying this or that is very easy, but your post proves that I’m not alone. Maybe someone will come to the rescue of us novices too.


If they are not copyrighted material, upload them to YouTube and make the videos private.

Edit: I just noticed the sub. Oops.


I am still figuring it out since it is a hobby and I’m unable to devote much time to it. But I think it will be something like the Ubuntu live discs that let you try Ubuntu by running it from a DVD. You could run anything, like a web server, and save files, settings, etc.; only they would not persist after a reboot, since everything was saved in RAM. Here it’ll be a write-locked SD card instead of a DVD.

I’m also sure there must be a name for this and a step-by-step tutorial somewhere. If only Google was not so bad these days…


It might not be applicable to you, but in many cases single-board computers are used where there are minimal changes to files on a day-to-day basis, for example when used for displaying stuff. For such cases it is useful to know that, after installing all the required stuff, the SD card can be switched to read-only mode, which prolongs its life enormously. Temporary files can still be generated in RAM, and if needed you can push them to external storage/FTP through a cron job or something. I have built a digital display with weather/photos/news where, beyond the initial install, everything is pulled from the internet. I’m working towards implementing what I’ve suggested above.
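Roughly, the pieces I’m planning look like this (the partition, paths, and the rsync target are placeholders for my setup):

```
# Make scratch space live in RAM so the SD card stops taking writes
echo 'tmpfs /tmp     tmpfs defaults,noatime 0 0' | sudo tee -a /etc/fstab
echo 'tmpfs /var/log tmpfs defaults,noatime 0 0' | sudo tee -a /etc/fstab

# Then add ",ro" to the root line in /etc/fstab by hand, e.g.:
#   /dev/mmcblk0p2  /  ext4  defaults,ro  0 1

# Cron job: push anything worth keeping off the RAM disk periodically
( crontab -l 2>/dev/null; \
  echo '*/15 * * * * rsync -a /tmp/display-data/ backup@nas.local:/srv/display/' ) | crontab -
```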


I’ve heard good things about H2O AI if you want to self host and tweak the model by uploading documents of your own (so that you get answers based on your dataset). I’m not sure how difficult it is. Maybe someone more knowledgeable will chime in.


I think I’m not aware of the exporting/publishing part, and that’s the cause of my woes. I get everything running on the machine with unrestricted access, move to the machine with restricted access, run “docker compose up”, and get stuck. I’ll read up on exporting/publishing, thank you.
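Edit: for anyone else in the same boat, the export/import workflow seems to be something like this (the image name is just an example):

```
# On the machine with internet access: pull or build, then export to a tarball
docker compose pull
docker save -o myapp-images.tar myapp:latest

# Move the tarball across (USB stick, scp, whatever), then on the restricted machine:
docker load -i myapp-images.tar
docker compose up -d
```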


Thank you for these links, they look just right. Most tutorials I come across these days are videos; maybe they are easier to make. Tutorials like these, which let you tinker at your own pace, seem better to me. Would you mind if I reach out to you over DM if I get stuck on something while learning and can’t find the right answer easily?


I’m not much into new year resolutions, but I think I’ll make a conscious effort to learn Docker in the coming months. Any suggestions for good guides for someone coming from the VM end would be appreciated.


I hate it very much. I am sure it is due to my limited understanding, but I’ve been stuck on things that were very easy for me with VMs.

We have two networks, one of which has very limited internet connectivity, behind a proxy. With VMs, I used to configure everything (code, files, settings) on a machine with no restrictions; shut it down; move the VM files to the restricted network; boot; and be happily on my way.

I’m unable to make this work with Docker. Getting my Ubuntu server to fetch its updates behind the proxy is easy enough; setting it up for Python pip is another level; realising that specific Python libraries need special keys to work around proxies is yet another; figuring out how to get all that done for Docker, and for Python inside it, is where I gave up. Why can it not be as simple as the VM!
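For the daemon side at least, the documented approach seems to be a systemd drop-in (the proxy address below is a placeholder):

```
# The Docker daemon reads proxy settings from a systemd drop-in; create it,
# then reload and restart:
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf >/dev/null <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.internal:3128"
Environment="HTTPS_PROXY=http://proxy.internal:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

That still leaves pip inside the containers, which is its own adventure.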

Maybe I’m not searching with the right terms, or maybe I should go and learn Docker “properly”, but there is no doubt that for my use case Docker is much more difficult than VMs.


Just want to add, for anyone who might attempt this: my IRC client of choice is Pidgin. It’s open source and works on Windows as well as Linux (not sure about iOS). Most tutorials suggest the good old mIRC, but using it after the trial period becomes an increasing pain with its timed wait screen.


Sounds good for setting up the events and getting notifications. A good calendar would also let you see the upcoming events in a week or month at a glance, while cron entries are just unsorted lists. Is there a cron visualizer, like the visualizers they have for logs?


The algo of our overlords has decided that your life will be better with cats. You just don’t know it yet. Get one. Resistance is futile.


I’m having a good experience with Samsung DeX on a 7-year-old TV. It gives me a trackpad with a keypad on my phone to navigate a full OS on a large screen. In a pinch, I’ve connected a Bluetooth keyboard to my phone and even used it as a document editor with Google Docs without any hitch.