I’m Hunter Perrin. I’m a software engineer.

I wrote an email service: https://port87.com

I write free software: https://github.com/sciactive

  • 7 Posts
  • 199 Comments
Joined 1Y ago
Cake day: Jun 14, 2023


The cheapest one I know of is about $8 a month, so it should be affordable, even on a tight budget.


You can buy a super cheap cloud VM, use a self-hosted VPN so it can reach your own PC, and run a reverse proxy on the VM to forward all incoming requests to your PC behind your school’s network.

It’s arguable whether this would violate their policy: you are technically hosting something, but it’s not accessible on the internet from their IP. So if you wanna be safe, don’t do this; otherwise, it could help you get started.
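The cloud VM idea can be sketched with a plain SSH reverse tunnel (hostnames and ports here are made up; a self-hosted VPN like WireGuard works the same way, just more robustly):

```shell
# On the PC behind the school's network, keep a tunnel open to the VPS.
# -N: no remote shell; -R: expose local port 80 on the VPS as port 8080.
ssh -N -R 8080:localhost:80 user@cheap-vps.example.com

# On the VPS, a reverse proxy (nginx, Caddy, etc.) listens on 80/443
# and forwards to localhost:8080, which the tunnel carries back home.
```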


Yes, but then you’re not using IMAP.


If you’re using IMAP, the emails aren’t completely downloaded by Thunderbird, just the headers.


If it’s just hours, that’s fine. I’ve spent months on a system before that ultimately got scrapped. When I was at Google, they accidentally had two teams working on basically the same project. The other team, with about 40 engineers who had worked on it for about a year, had their project scrapped. My team, with about 23 engineers, was meant to do the same work. So if you’re ever wondering why Hangouts Chat launched kinda half baked, that’s why.


What I use for a lot of my sites is SvelteKit. It has a static site generator. If you like writing the HTML by hand, it’s great. Also, HTML5 UP is where I get my templates. I made the https://nymph.io website this way. And https://sveltematerialui.com.


Backups and rollbacks should be your next endeavor.


If it doesn’t, I would consider that a bug in the router.

Routers are not particularly known for being free of bugs.


If you want cheap encrypted storage you can run a Nephele server with encryption and something like Backblaze B2.


The way I’ve done it is Ubuntu Server with a bunch of Docker Compose stacks for each service I run. Then they all get their own subdomain which all runs through the Nginx Proxy Manager service to forward to the right port. The Portainer service lets me inspect things and poke around, but I don’t manage anything through it. I want it all to be super portable, so if Ubuntu Server becomes too annoying, I can pack it all up and plop it into something like Fedora Server.
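Roughly, the layout looks like this (directory and service names are just illustrative):

```shell
# ~/stacks/nginx-proxy-manager/docker-compose.yml
# ~/stacks/nextcloud/docker-compose.yml
# ~/stacks/portainer/docker-compose.yml
cd ~/stacks/nextcloud
docker compose up -d     # start or update just this one stack
docker compose logs -f   # poke at it
# Moving hosts: stop the stacks, copy ~/stacks (each stack's volumes are
# bind-mounted subdirectories inside its own folder), start them again.
```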


They can’t do 4K video. The best they can do is 1080p30.


I didn’t say Raspberry Pi Zero. Those are niche machines. They’re not fast enough to do general purpose computing.



No. They emulate a keyboard and use keyboard shortcuts to do things in Windows. So they won’t work out of the box in Linux, but you can bind each of the keys as a keyboard shortcut, and then they’ll work.
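On GNOME, for example, each remote key can be bound as a custom shortcut (the schema paths are real; the key combo and command are placeholders):

```shell
KB=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['$KB']"
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KB name 'Remote play/pause'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KB binding '<Ctrl><Alt>p'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KB command 'playerctl play-pause'
```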


I’d recommend the Pepper Jobs Windows 10 gyro remote. I’ve got two of them because they’re so great.


Exactly. N100 mini PCs are like the Swiss Army Knife of computers. Almost as compact as a Raspberry Pi, and compatible with a lot more things.


Any camera that uses the V4L2 system on Linux. So, mostly webcams.

One important note is that IP cams are not supported yet, but I’d like to add support for them.


I’m working on one called Soteria. It’s still early in development, but I’m focusing on both privacy and cloud availability.

It uses any WebDAV store to upload footage, but it’s designed to work best with my own WebDAV server Nephele. This lets it upload footage to any S3 compatible blob storage, end to end encrypted.

That way if your cameras go offline, you can watch the last footage they were able to upload.

Like I said, it’s in early development, so it’s not yet ready to use, but I’m going to be putting more work into it soon to get it to a place where you can use it.

It works with any V4L2 compatible camera, so laptops, webcams, and Raspberry Pi cameras should all work.


What about the one that should be in the top left, both high power and easy: interns.


USB-A male to USB-A male isn’t part of any USB standard (compliant A-to-A cables technically exist, but they’re very rare and don’t connect the voltage pins), and if you plug one into a device it’s not meant for, the behavior is entirely unspecified. It will probably do nothing, but it might fry a USB controller that isn’t expecting to receive voltage.

USB-C to USB-C is in the spec, and if you plug in two host devices, they won’t hurt each other. You can actually charge a host device over USB-C, unlike USB-A.

That’s why it isn’t ok. It’s not the same thing, it’s not in the standard, and it can even be dangerous (to the device).


The original 3, “.cum”, “.nut”, and “.orgasm”.



Why are you running servers with a data store on a partition that you mount on multiple operating systems?


O(n!n!)

It works really well, until n=3, which takes a while. Don’t ask about n=4.


I don’t think it’s (just) that. Writing documentation is also a different skill set from writing code, and in these kinds of open source projects, the people who write the code end up writing the documentation. Even in some commercial projects, the engineers end up writing the docs, because the higher-ups don’t see that they’re different skill sets.


Sounds great to me. Us software devs need to eat, so I totally get trying to turn this into a profitable business model. I’m very happy that they’re not paywalling any features, but honestly, I’d be fine if they did. I’m probably going to pay either way. Immich has been awesome, and it’s gotten me off of my second to last Google app, Photos. If only there were a good alternative to YouTube…


Nothing is safe to run unless you write it yourself. You just have to trust the source. Sometimes that’s easy, like Red Hat, and sometimes that’s hard. Sometimes it bites you in the ass, and sometimes it doesn’t.

Docker is a good way to sandbox things, just be aware of the permissions and access you give a container. If you give it access to your network, that’s basically like letting the developer connect their computer to your wifi. It’s also not perfect, so again, you have to trust the source. Do some research, make sure they’re trustworthy.


And it was basically just Google and Microsoft that took away our ability to run our own mail servers.


If you’re only looking for 1TB, go with an SSD. It’s about the same price. It’s only when you’re looking for >1TB that HDD starts to get substantially cheaper.


Well I almost have a solution for you, but it’s not ready yet. I have a WebDAV server called Nephele, but I haven’t finished writing the CardDAV and CalDAV extensions for it. I should be done with it in a few months. (My priorities are on my commercial project right now, then back to open source stuff in a couple months.)


Because you have to manage it on your server and on all of your own machines, and it doesn’t provide any value if your server is hacked. It actually makes you less safe in that case, because then you have to consider every machine that trusts that CA compromised. There’s no reason to use HTTPS if you’re the one running the CA. If you don’t trust your router, you shouldn’t trust anything you do on your network; just use HTTP, or an SSH port forward to localhost, if you don’t trust your own network.

You don’t have to pay anyone to use HTTPS at home. Just use a free subdomain and HTTP validation for certbot.
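For example (the subdomain is a placeholder; this assumes port 80 is forwarded to the machine running certbot):

```shell
sudo certbot certonly --standalone -d myhome.example-dyndns.net
# Certificates land in /etc/letsencrypt/live/myhome.example-dyndns.net/,
# and renewal is handled automatically by certbot's timer/cron job.
```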



A reverse proxy makes setup a lot easier and more versatile, and can manage SSL certs for you.


The easiest way to do it is to do it the right way with LetsEncrypt. The hardest way to do it is the wrong way, where you create your own CA, import it as a root CA into all of the machines you’ll be accessing your servers from, then create and sign your own certs using your CA to use in your servers.
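For reference, the “hard way” looks roughly like this with openssl (names and lifetimes here are arbitrary):

```shell
# 1. Create a CA key and self-signed root certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=My Home CA"

# 2. Create a server key and a certificate signing request.
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=myserver.home"

# 3. Sign the CSR with the CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 825

# 4. Check the chain; ca.crt is what you'd import as a root CA everywhere.
openssl verify -CAfile ca.crt server.crt
```

And that’s before dealing with subject alternative names, which modern browsers require, so Let’s Encrypt really is the easier path.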


So your “friend’s” unethical business hired unethical workers and now you’ve come here to ask for advice on running your unethical business without paying anyone. Got it.


You are not a good person if this is how you want to get through life.


Your “friend’s” business is very unethical. Maybe your friend should think about what they’re doing with their life, and quit doing this.


Maybe just write the academic works yourself, then they should pass.


My setup is pretty safe. Every day it copies the root file system to its RAID, into folders named after the day of the week, so I always have 7 days of root fs backups. From there, I manually back up the RAID to a PC at my parents’ house every few days. That transfer is initiated from the remote PC, so if any sort of malware infects my server, it can’t infect the backups.
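The weekday rotation can be done with rsync and the day name (the destination path is an assumption; run something like this from a daily cron job as root):

```shell
day=$(date +%A)                          # "Monday" ... "Sunday"
dest="/mnt/raid/rootfs-backups/$day"     # each folder is overwritten weekly
mkdir -p "$dest"
rsync -aAX --delete \
  --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*"} \
  / "$dest"
```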


Yeah, that could work if I could switch to ZFS. I’m also using Crafty’s built-in backup feature, which just makes zip files in a directory. I like it because I can run commands inside the Minecraft server before the backup to tell anyone who’s on the server that a backup is happening, but I’m sure there’s a way to do that from a shell script too. It’s the need to import years’ worth of old backups that makes my use case need something very specific, though.

In the future I’m planning on making this work with S3 as the blob storage rather than the file system, so that’s something else that would make this stand out compared to FS based deduplication strategies (but that’s not built yet, so I can’t say that’s a differentiating feature yet). My ultimate goal is to have all my Minecraft backups deduplicated and stored in something like Backblaze, so I’m not taking up any space on my home server.


https://hub.docker.com/r/sciactive/nephele

In the latest version of Nephele, you can now create a WebDAV server that deduplicates files that you add to it.

I created this feature because every night at midnight, my Minecraft world that my friends and I play on gets backed up. Our world has grown to about 5 GB, but every night, the same files get backed up over and over. It's a waste of space to store the same files again and again, but I want the ability to roll back our world to any day in the past.

So with this new feature of Nephele, I can upload the Minecraft backup and only the files that have changed will take up additional space. It's like having infinite incremental backups that never need a full backup after the first time, and can be accessed instantly. Nephele will only delete a file from the file storage once all copies that share the same file contents have been deleted, so unlike with most incremental backup solutions, you can delete previous backups easily and regain space.

Edit: I think my post is causing some confusion, so I should make it clear that my use case is specific to me. This is a general-purpose deduplicating file server: it will take any files you give it and deduplicate them in its storage. It's not a backup system, and it's not a versioning system. My use case is only one of many for a deduplicating file server.
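The core idea, content-addressed storage with reference counting, can be illustrated with a toy shell sketch (this is not Nephele's actual implementation):

```shell
mkdir -p store
dedup_put() {   # dedup_put <file>: store the file's contents once, by hash
  hash=$(sha256sum "$1" | cut -d' ' -f1)
  [ -e "store/$hash" ] || cp "$1" "store/$hash"  # first copy stores the blob
  ln -f "store/$hash" "$1.stored"  # later copies are hard links; the blob's
}                                  # space is freed once every link is deleted

printf 'same bytes' > a.txt
printf 'same bytes' > b.txt
dedup_put a.txt
dedup_put b.txt
ls store | wc -l   # -> 1: two logical files, one stored blob
```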

Recommendations for a bug tracker/forum?
Does anyone have any recommendations for bug trackers with a forum feature? Basically something where users can report issues, request features, and ask questions, all about a specific service. Preferably, I’d like something that integrates with GitHub issues, but that’s not a requirement. Also I’d like something like a public roadmap or project tracker.

cross-posted from: https://lemmy.world/post/12284817 > There's a new version of [Nephele WebDAV server](https://github.com/sciactive/nephele) (also on [Docker Hub](https://hub.docker.com/r/sciactive/nephele)) that supports using an S3 compatible server as storage and encrypting filenames and file contents. > > This essentially means you can build your own cloud storage server leveraging something like Backblaze B2 for $6/TB/month, and that data is kept private through encryption. That's cheaper than Google Drive, _and_ no one can snoop on your files.

Question: Best UI to manage VMs and containers?
At this point, I’ve got a lot of containers already running on my system, all in separate directories in my home directory. They’re each set up with a docker-compose file, and all of the volumes are just directories within those directories. I don’t really want to change this setup, because it allows me to easily rip it all out and transplant it to a new system. What I’d like is a web UI to see all of these containers, view their status, and potentially reboot them. It would also be great to be able to spin up VMs (not containers, but actual VMs) with it. I’ve heard of Portainer, but haven’t had any experience with it. What are your suggestions, and why do you recommend them?

You all remember just a few weeks ago when Sony ripped away a bunch of movies and TV shows people “owned”? This ad is on Amazon. You can’t “own” it on Prime. You can just access it until they lose the license. How can they get away with lying like this?

Nephele WebDAV server for Docker
After a lot of work (cause I'm new to it), I published **my first Docker image**! Nephele is an open source WebDAV server written by yours truly. I've been using it for about a year now on my own home server. It basically acts as my self hosted cloud storage and all of my PCs and my family's PCs back up to it. It's FOSS, so use it for your own project. :)

PSA: The Docker Snap package on Ubuntu sucks.
I spent two hours today trying to figure out why Nextcloud couldn’t read my data directory. Docker wasn’t mounting my data directory. Moved everything into my data directory. Docker couldn’t even see the configuration file. Turns out the Docker Snap package only has access to files under the `/home` directory. Moral of the story: never trust a Snap package.