• 0 Posts
  • 19 Comments
Joined 1Y ago
Cake day: Jul 22, 2023


Not sure exactly how well this would work for your use case of all traffic, but I use autossh and SSH reverse tunneling to forward a few local ports/services from my local machine to my VPS, where I can then proxy those ports in nginx or Apache on the VPS. It might take a bit of extra configuration to go this route, but it’s been reliable for me for years. WireGuard is probably the “newer, right way” to do what I’m doing, but personally I find SSH tunnels a bit simpler to wrap my head around and manage.
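For anyone curious, the core of the setup is just an autossh invocation with one or more `-R` reverse forwards. Everything here (hostname, ports, user) is a placeholder, not my actual config:

```shell
# Keep a reverse tunnel up from the local machine to the VPS.
# -M 0 disables autossh's monitor port in favor of ssh's own keepalives.
autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 127.0.0.1:8080:localhost:8080 \
    tunnel-user@vps.example.com
```

On the VPS side, an nginx or Apache vhost can then `proxy_pass` to `http://127.0.0.1:8080` to expose the service publicly.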

Technically WireGuard would have a touch less latency, but most of the latency comes from the round-trip distance between you and your VPS, and the difference between the protocols is comparatively negligible.


Maybe I’ll give it another go soon to see if things have improved for what I need since I last tried. I do have a couple of aging servers that will probably need upgrading soon anyway, and I’m sure the Python scripts I’ve used in the past to help automate server migration will need updating since I last used them.


I think my skepticism, and my desire to have docker get out of my way, have more to do with already knowing the underlying mechanics: I was used to managing services before docker was a thing, and then docker came along and said “just learn docker instead.” Which would be fine if it didn’t mean not only a shift from what I already know, but a separation from it, with extra networking and docker configuration to fuss with. If I weren’t already used to managing servers pre-docker, then yeah, I totally get it.


That’s a big reason I actively avoid docker on my servers: I don’t like running a dozen instances of my database software. And considering how much work it would take to go through and configure each docker container to use an external database, to me it’s just as easy to learn to configure each piece of software yourself and know what’s going on under the hood, rather than relying on a bunch of defaults chosen by whoever made the docker image.
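For what it’s worth, pointing a container at an external database usually comes down to a few environment variables, at least for images that support it. A hypothetical compose file for the official Nextcloud image (the `MYSQL_*` variable names are the ones that image documents; host and credentials are placeholders):

```yaml
services:
  nextcloud:
    image: nextcloud:stable
    ports:
      - "8080:80"
    environment:
      # Point the container at a database on the host
      # instead of a bundled db container.
      MYSQL_HOST: host.docker.internal
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me
    extra_hosts:
      # Needed on Linux for host.docker.internal to resolve.
      - "host.docker.internal:host-gateway"
```

The database itself also has to be configured to accept connections from the docker bridge network, which is exactly the kind of extra fussing I mean.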

I hope a good amount of my issues have been solved since I last seriously tried to use docker (back when they were literally giving away free tee shirts to get people to try it). But the times I’ve peeked at it since, it seems to me that docker gets in the way more often than it solves problems.

I don’t mean to yuck other people’s yum, though, so if you like docker and it works for you, don’t let me stop you from enjoying it. I just can’t justify the overhead for myself, both in system resources and in the personal time of inserting an additional layer of configuration between me and my software.



Source has been posted on the Internet Archive (along with the latest builds for a bunch of platforms). Something will likely rise from the ashes of YuZu, but it wouldn’t surprise me if it takes a few years. Nintendo is probably gonna be extra litigious this year (even more than usual): they likely won’t have the Switch’s successor ready this year, they don’t really have a full slate of games lined up, and with Switch sales projected to be down, it’s best to lay low on anything that might get Nintendo’s attention for a while.


I just use public trackers and search for “VR180” - more than half the results are usually porn. If you want non-porn 3D movies, “HSBS” (half side-by-side) is a good term to use, as it’s probably the most common format for 3D Blu-rays.


I have a similar setup. Even for hard drives and slower SSDs on a NAS, 10g has been beneficial. 2.5 gig would probably be sufficient for most of what I do, but even a few years ago when I bought my used Mellanox SFP+ cards on eBay it was basically just as cheap to go full 10g (although 2.5 gig Ethernet ports are a bit more common to find built in these days, so depending on your hardware, that might be a cheaper place to start). But even from a network congestion standpoint, having my own private link to my NAS is really nice.


I’ve dabbled with some monitoring tools in the past, but never really stuck with anything proper for very long; I usually notice issues myself. I self-host a custom new-tab page that I use across all my devices, and between that, the Nextcloud clients, and my home-assistant reverse proxy on the same VPS, when I do have unexpected downtime I usually notice within a few minutes.

Other than that, I run fail2ban and have my VPS send me a text message/notification whenever someone successfully logs in to a shell via SSH, just in case.
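The login notification is easy to wire up with PAM’s `pam_exec` module: add a line like `session optional pam_exec.so /usr/local/bin/ssh-login-notify.sh` to `/etc/pam.d/sshd` and have the script fire off the alert. A sketch of such a script - the ntfy endpoint is just a stand-in for whatever SMS/push service you actually use:

```shell
#!/bin/sh
# Called by pam_exec; PAM passes PAM_TYPE, PAM_USER, and PAM_RHOST
# in the environment.
if [ "$PAM_TYPE" = "open_session" ]; then
    curl -fsS -d "SSH login: $PAM_USER from $PAM_RHOST on $(hostname)" \
        "https://ntfy.example.com/ssh-logins" >/dev/null 2>&1 || true
fi
exit 0   # never block the login itself
```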

Based on the logs over the years, most bots try to log in with usernames like admin or root. I have root login disabled for SSH, and the one account that can be used over SSH has a non-obvious username that an attacker would have to guess before they could even start trying passwords. On top of that, fail2ban does a good job of blocking IPs after a few failed attempts.
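That hardening boils down to a couple of stock sshd_config directives plus a fail2ban jail; the account name is a placeholder and the exact limits are a matter of taste:

```
# /etc/ssh/sshd_config
PermitRootLogin no
AllowUsers my-nonobvious-user
MaxAuthTries 3

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3
bantime  = 1h
```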

If I used containers, I would probably want a way to monitor them, but I personally dislike containers (for myself, I’m not here to “yuck” anyone’s “yum”) and deliberately avoid them.


Doing just a single pass of a single value, like all zeroes, often still leaves the original data recoverable. Doing one or more passes of random data and then zeroing the drive lowers the chance that the original data can be recovered.
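With GNU coreutils, this is exactly what `shred` does. Demonstrated here on a scratch file; on a real disk you’d point it at the block device (e.g. `/dev/sdX`) instead:

```shell
# Make a scratch file full of random data.
f=$(mktemp)
head -c 4096 /dev/urandom > "$f"

# Two passes of random data, then a final pass of zeros (-z).
shred -n 2 -z "$f"

# Everything now reads back as zero bytes.
hex=$(od -An -tx1 "$f" | tr -d ' \n0')
[ -z "$hex" ] && echo "all zeros"
```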


I’m not saying there aren’t any benefits to docker; easy migration to a different host distro and avoiding dependency conflicts are the big two. But for me they’re kinda the only two. For what I do, it’s just as easy to write a shell script that downloads and unpacks the software and copies my own config files into place as it is to do basically the same thing with docker. I could use Ansible or something similar for that, but for me, shell scripts are easier to manage.
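A toy version of the kind of script I mean. In real use the tarball would come from a `curl`/`wget` download and the paths would be things like `/var/www/app`, but the flow - unpack the release, then lay your own configs over its defaults - is the same:

```shell
#!/bin/sh
set -eu

work=$(mktemp -d)
web_root="$work/www"           # stand-in for the real web root
config_dir="$work/my-configs"  # my own config files, kept outside the app

# Fake a downloaded release tarball (normally: curl -fsSL "$url" -o ...).
mkdir -p "$work/src/app/config" "$config_dir" "$web_root"
echo "app v1.2.3" > "$work/src/app/index.html"
tar -czf "$work/release.tar.gz" -C "$work/src" app
echo "db_host=localhost" > "$config_dir/config.ini"

# Unpack the release, then copy my own configs over its defaults.
tar -xzf "$work/release.tar.gz" -C "$work"
cp -a "$work/app/." "$web_root/"
cp -a "$config_dir/." "$web_root/config/"

cat "$web_root/config/config.ini"   # -> db_host=localhost
```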

Don’t get me wrong, docker has its place. I just find that it gets in my way with its own quirks almost as much as it helps in other areas, especially for web apps like Nextcloud that are already just a single folder under the web root plus a database.

One additional benefit I get from not using docker, is that I can do more with a lower-powered server, since I’m not running multiple instances of PHP and nginx across multiple containers.


I’ve been self-hosting since before docker and containers were a thing, and even though Nextcloud kinda pushes their container images these days, I still refuse to use them; I use the community archive releases or the web installer when reconfiguring my system or setting up a new system to migrate to. Maybe it’s just Nextcloud and the other software I use, or maybe it’s that I’m not really trying to build scalable server infrastructure with a lot of users, but I generally find that docker causes more problems than it solves, and it does my head in when I see projects recommend containers as the primary install method.

Totally agree with your assessment of the plugins/apps system. It feels like you need to stick to official “apps” and hope they don’t get abandoned to have anything close to a good experience, because even minor updates can break all the third-party apps thanks to a version-compatibility check; you end up waiting for the app developer to release an “update” that only bumps the compatible-version number.
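For reference, that check is driven by the version range an app declares in its `appinfo/info.xml`; a hypothetical app pinned like this breaks on every major upgrade until the developer bumps `max-version`:

```xml
<dependencies>
    <nextcloud min-version="28" max-version="28"/>
</dependencies>
```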


Yeah, unless you need the GPIO or the lower power consumption of a Pi, mini PCs are better for 90% of the projects people use single-board computers for. Plus you usually get upgradeable RAM and more resilient storage.


I’ve never heard any confirmation that GoDaddy snaps up searched-for domains to resell at a premium, but I definitely experienced that kind of thing with them years ago. It wasn’t really a “few-minutes” thing, but if you left a domain in your cart, within a day or so it would be sold and only available at a premium. Not sure if it was a GoDaddy tactic, or if their data was being sold to and used by resellers.


I used to have a self-built, locally hosted power strip with individual outlet control that served its own web interface, powered by a Model B+. I’ve since moved to home-assistant and Zigbee plugs since my self-built solution was pretty bulky, but it was by far my longest-lived Pi project.


Looks interesting, although the comments about other git repo services being bloated, complicated, and resource-heavy, followed by a paragraph about the AI features that have been added (with more planned), seems a touch ironic to me.


I’m using Plex for all my self-hosted streaming (movies, TV, and music). I’ve tried to move to Jellyfin for video streaming in the past, but for music I’ve not found anything that works as well as Plex. I have tried things like Ampache and Navidrome, but they didn’t fit my needs that well.

As far as finding new music I like enough to add to my server, I generally just use YouTube or a paid streaming service. There are technically ways to download songs and albums straight from YouTube, if you’re okay with the Opus format, but I normally try to find FLAC or physical media I can rip to put into Plex.
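If anyone’s wondering about the mechanics: YouTube’s best audio stream is usually Opus in a WebM container, and yt-dlp can extract it without re-encoding (URL is a placeholder):

```shell
yt-dlp -f bestaudio -x "https://www.youtube.com/watch?v=VIDEO_ID"
```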


Skim coat a heart with your initials in it, so when the light hits the wall just right you can see the texture difference through the paint.


Yeah, for lower-end hardware I’d recommend rspamd over SpamAssassin. It might be a bit more work to set up, mostly because it’s less popular and there are fewer tutorials, but it doesn’t have the overhead of running on Perl like SpamAssassin does. That said, while there are people running rspamd on systems with 512MB of RAM, those are usually smaller personal setups that aren’t dealing with hundreds of emails a day.