🇨🇦

  • 3 Posts
  • 217 Comments
Joined 1Y ago
Cake day: Jul 01, 2023


I found my work’s wifi blocks most outbound ports, but switching my VPN to a more ‘standard’ port like 80, 443, or 22 gets through just fine.

Now I’ve got a couple of port forwarding rules I can switch on as needed, which take one of those ports and route it to my VPN host.
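
For anyone curious, a minimal sketch of what such a rule can look like with iptables, assuming the VPN host sits at 192.168.1.10 and actually listens on 1194 (the interface name, address, and ports are all placeholders):

```sh
# Redirect inbound TCP 443 on the WAN interface to the VPN host's real port.
# Interface, address, and ports here are assumptions; adjust for your network.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
  -j DNAT --to-destination 192.168.1.10:1194
# Allow the forwarded traffic through the FORWARD chain.
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 1194 -j ACCEPT
```

On most consumer routers the same thing is just a port-forwarding entry in the web UI, no shell required.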


It’s honestly baffling how many people are willfully ignorant of things they depend on.

I know far too many people who know nothing about cars beyond ‘turn key, engine turns on’. I’m no mechanic either, but I can at least identify some parts and perform basic maintenance.



Because many, many people know absolutely nothing about Ethernet or the actual hardware behind their wifi connection; quite often it was set up by a technician from their ISP. When it comes to getting internet, a wifi name + password is all they’ve ever experienced.



Yeah, 2.5+ years since the last release?

Somehow I don’t think this has survived YouTube’s client war…



Bit of a different solution:

If Paperless-NGX is one of the things you self-host, it has options to import emails based on criteria you specify, and you can have it delete each piece of mail it imports. You can also just have it move mail to folders on the mail server, or tag/flag mail instead of deleting it (for you to then manually delete at your leisure).

I use this to automatically import receipts, bills, work documents, and any other regular mail instead of dealing with it manually every week/month.


Yeah, “dd if=/dev/mmcblk0 of=$HOSTNAME.$(date +%Y.%m.%d).img”, and it’s done while the system is still running. (!!! Make sure the output is NOT going to the SD card you are backing up…)

I deliberately chose a time when it’s not very active to perform the backup. Never had an issue, going on 6 years now.


I used to wonder why porn sites aren’t required to use ‘.cum’ instead of ‘.com’…


I’ve always used dd + sshfs to back up the entire SD card daily at midnight to an SSH server, retaining 2 weeks of backups.

Should the card die, I’ve just gotta write the last backup to a new card and pop it in. If that one’s not good, I’ve got 13 others I can try.

I’ve only had to use it once and that went smoothly. I’ve tested half a dozen backups though and no issues there either.
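
A rough sketch of that kind of nightly job, assuming an sshfs mount point at /mnt/backups and a host called backup-host (all names, paths, and the device are placeholders for illustration):

```sh
#!/bin/sh
# Nightly SD card image backup over sshfs, keeping 14 days of images.
# Host, mount point, and device names are assumptions; adapt to your setup.
sshfs backup-user@backup-host:/backups /mnt/backups
dd if=/dev/mmcblk0 of="/mnt/backups/$HOSTNAME.$(date +%Y.%m.%d).img" bs=4M status=progress
# Drop images older than 14 days.
find /mnt/backups -name "$HOSTNAME.*.img" -mtime +14 -delete
umount /mnt/backups
```

Run from cron at midnight (0 0 * * *) and it takes care of itself.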


Pretty sure the media itself is stored in RAM, or similar volatile memory, so it wipes automatically on power loss.


Last time I looked at the topic (several years ago, in a now-deleted reddit post), someone had posted info on the projector system.

The media is delivered on a battery-backed rack-mount PC with proprietary connectors and a dozen anti-tamper switches in the case. If it detects meddling, it wipes itself. You’re not likely to grab a copy from there.

As the other commenter mentioned, the projector and media are heavily protected with DRM, encrypting the stream all the way up to the projector itself. You can pull an audio feed off the sound board, but you’re stuck with a camera for video.


I haven’t actually tried it myself, but I’ve read that many DRM schemes can be defeated by simply running the service in a VM, then screen recording the VM from the host.

Try to screen capture Netflix directly, for example, and the webpage will appear as a solid black box in the recording; but not if the capture is done from outside the VM.


Yeah… Becoming a public-facing file host for others to use seems rather irresponsible.

If/when a user’s given a means of uploading files to my server, there’s no method/permissions for them to share those files with others; it’s really just for them to send files to me. (Filebrowser is pretty good for that)

That and almost nothing is public access; auth or gtfo.




Configuring input/output paths is only really necessary when you have multiple systems that don’t see the media at the same paths, such as a Linux server and a Windows node working together.

Honestly, I just wish I’d known about it and set it up sooner:



https://home.tdarr.io/

I used to use the built-in convert options in Emby server, but recently switched to Tdarr to manage all my conversions. It’s got far more control/configurability to encode your files exactly how you’d like.

It can also ‘health check’ files by transcoding them without saving the output, checking for errors during that process to ensure the file can actually be played through successfully. With 41k+ files to manage, that made it much easier to find and replace the dozen or so broken files I had, rather than discovering them when they failed to play.

Fair warning: this is a long and intensive process. Converting my entire library to HEVC using an RTX 2080 took me over 2 months non-stop (not including health checks).
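
For a sense of what’s happening under the hood, the work being scheduled boils down to ffmpeg runs along these lines (a rough sketch only, not Tdarr’s actual plugin arguments; the preset and quality values are assumptions):

```sh
# GPU HEVC re-encode, roughly what a conversion job does (values are examples).
ffmpeg -hwaccel cuda -i input.mkv \
  -c:v hevc_nvenc -preset slow -cq 23 \
  -c:a copy -c:s copy \
  output.mkv

# A "health check" is essentially a full decode with the output thrown away,
# watching for decoder errors along the way.
ffmpeg -v error -i input.mkv -f null - 2> errors.log
```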


4451 movies

398 series / 36130 episodes

Taking up 25.48 TB after conversion to HEVC, which compressed the library by ~40%.

Every series is monitored for new episodes, which download automatically, and there are a dozen or so public IMDb lists being monitored for new movies from studios/categories I like. Anything added to the lists gets downloaded automatically.

Then there’s Ombi gathering media requests from my friends/family to be passed to sonarr/radarr and downloaded.

At this point, the library continuously grows on its own, and I have to do little more than just tell it what I want to watch.


Did… Did you just ask why creating photo-realistic sexually explicit material of real children should be illegal?


As far as I understand, it’s not the tools used that make this illegal, but the realism/accuracy of the final product, regardless of how it was produced.

If you had a high proficiency with manual Photoshop and produced similar-quality fakes, you’d be committing the same crime(s):

> creating child sex abuse images

and

> offenses against their victims’ moral integrity

The thing is, AI tools are becoming more and more accessible to teens. Time, effort, and skill are no longer roadblocks to creating these images, which leaves very, very little in an irresponsible teenager’s way…


I tend to drop the link into yt1s.com

Sometimes just for audio, sometimes for the full vid.

I’m rarely grabbing more than one video at a time though.

[re-commenting as I meant this to be a top-level comment, not a reply]


127.0.0.1 is the loopback address, i.e. it’s your computer trying to reach a service it’s hosting (Prowlarr), talking to itself. Sonarr/Radarr should be using this address as long as Prowlarr and Sonarr/Radarr are on the same machine.

Prowlarr is either not running, or port 8080 is blocked by your firewall.
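
A couple of quick checks can narrow it down (assuming Prowlarr is expected on port 8080, as in your error):

```sh
# Is anything actually listening on 8080, and can the local machine reach it?
ss -tlnp | grep 8080
curl -I http://127.0.0.1:8080

# If these run in Docker, 127.0.0.1 inside one container is NOT the other
# container; check the published ports and use the container/service name
# or the host's IP instead.
docker ps --format '{{.Names}}\t{{.Ports}}'
```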


A paid Plex share is a Plex server that someone is running and selling access to.

This is against Plex’s terms and gets Plex accounts banned; in some cases, Plex (the company) has taken rather drastic action by blocking entire VPS providers from reaching plex.tv, so Plex server software no longer functions on those VPSes at all.

Naturally, people selling shares want to maximize profit, so they use VPS providers on the cheaper end, resulting in cheaper VPS options being blocked for everyone.


Drink less paranoia smoothie…

I’ve been self-hosting for almost a decade now; never bothered with any of the giants. Just a domain pointed at me, and an open port or two. Never had an issue.

Don’t expose anything you don’t share with others; monitor the things you do expose with tools like fail2ban. VPN into the LAN for access to everything else.
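
If you haven’t set fail2ban up before, a minimal jail is only a few lines (a sketch for SSH; the thresholds and paths are example values, and you’d add a jail per exposed service):

```sh
# Minimal fail2ban jail for SSH; values are examples, tune to your exposure.
cat > /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
systemctl restart fail2ban
```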


and using DDNS

As in, running software to update your DNS records automatically based on your current system IP. Great for dynamic IPs, or just moving location.
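
The update itself can be as small as one scheduled curl call. Here’s a sketch against the Cloudflare API (the zone/record IDs, token, and hostname are all placeholders; most registrars and many routers offer an equivalent built-in client):

```sh
# Look up the current public IP and push it to an existing A record.
IP=$(curl -s https://ifconfig.me)
curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300}"
```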


Sure, cloudflare provides other security benefits; but that’s not what OP was talking about. They just wanted/liked the plug+play aspect, which doesn’t need cloudflare.

Those ‘benefits’ are also really not necessary for the vast majority of self hosters. What are you hosting, from your home, that garners that kind of attention?

The only things I host from home are private services for myself or a very limited group, which, as far as ‘attacks’ go, just get the occasional script kiddie looking for exposed endpoints. Nothing that needs mitigation.


Unless you are behind CGNAT, you would have had the same plug+play experience by using your own router instead of the ISP-supplied one, and using DDNS.

At least, I did.


The middle-man provides plausible deniability in this case. PornHub can genuinely say they don’t see connections from age-verification states atm. That stops being true if they host the VPN, making them aware of actual client locations.


If they are injecting ads into the actual video stream, it won’t matter what client you use. You request the next video chunk for playback and get served a chunk filled with advertising video instead. The clients won’t be able to tell the difference unless they start analyzing the actual video frames. That’s an entirely server-side decision that clients can’t bypass.


Only if the ads are a fixed length and always in the same place for each playback of the same video.

Inserting ads of various lengths in varying places throughout the video will alter all the time stamps for every playback.

The 5th minute of the video might happen 5min after starting playback, or it could be 5min+a 2min ad break after starting. This could change from playback to playback; so basing ad/sponsor blocking on timestamps becomes entirely useless.


I have one more thought for you:

If downtime is your concern, you could always use a mixed approach. Run a daily backup like I described, somewhat haphazardly, with everything still running. Then once a month at 4 am or whatever, perform a more comprehensive backup, looping through each docker project and shutting it down before running the backup and bringing it all back online again.
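
That monthly ‘cold’ pass could look something like this (the /srv/docker layout and the repo path are assumptions; swap in whatever backup command you actually use):

```sh
# Stop every compose project, take a consistent backup, start everything again.
for project in /srv/docker/*/; do
  (cd "$project" && docker compose down)
done
borg create --stats /path/to/repo::monthly-{now} /srv/docker
for project in /srv/docker/*/; do
  (cd "$project" && docker compose up -d)
done
```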


I set up borg around 4 months ago using option 1. I’ve messed around with it a bit, restoring a few backups, and haven’t run into any issues with corrupt/broken databases.

I just used the example script provided by borg, but modified it to include my docker data, and write info to a log file instead of the console.

Daily at midnight, a new backup of around 427 GB of data is taken. At the moment that takes 2-15 min to complete, depending on how much data has changed since yesterday, though the initial backup was closer to 45 min. Then old backups are trimmed: backups <24 hr old are kept, along with 7 dailies, 3 weeklies, and 6 monthlies. Anything outside that scope gets deleted.

With the compression and de-duplication borg does, the 15 backups I have so far (5.75 TB of data) currently take up 255.74 GB of space. 10/10 would recommend on that aspect alone.

/edit, one note: I’m not backing up Docker volumes directly, though you could just fine. Anything I want backed up lives in a regular folder that’s then bind mounted to a docker container (including things like paperless-ngx’s databases).
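
For reference, the retention described above maps to a prune call roughly like this (the repo path is a placeholder):

```sh
# Keep everything from the last 24h, then 7 daily, 3 weekly, 6 monthly archives.
borg prune --keep-within 24H --keep-daily 7 --keep-weekly 3 --keep-monthly 6 /path/to/repo
```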



Lmao, yeah… You can make a can so secure a bear definitely won’t get in; but will people go to the effort of using it then?

Definitely some overlap there.


Companies like to keep posts/comments/other data, but they rarely keep the history of changes you’ve made to that data.

So before deleting the account, replace all the data in it with garbage. Then it won’t matter if they keep it.



They weren’t storing your name in the first place; they’ve acquired a new service, ‘blowfish’, for which an account is automatically created for you if you currently use, or have in the past used, Glassdoor. Blowfish requires a real name to be used at all (including to delete your account).

On top of this, after linking the two services on your behalf, Glassdoor will now automatically populate your real name and any other information it can glean from Blowfish, your resumes, and any other sources it can find, regardless of whether the information is correct (users have reported lots of incorrect changes). This is new.


What are your favorite tools for monitoring Linux and individual docker containers?
CPU/GPU/RAM/Disk usage, logs, errors, network usage, overall status, etc. What do you use/prefer? Mainly looking for self-hosted, web-based tools, stuff I can view from a browser; but desktop and CLI apps are welcome too :)

SquareSpace dropping the ball.
After almost a year of repeated emails stating the transition from Google Domains will have no effect on customers and that no action is required, I just got this email:

> Update Dynamic DNS records
>
> Hi there, As previously communicated, Squarespace has purchased all domain name registrations and related customer accounts from Google Domains. Customers are in the process of being moved to Squarespace Domains, but before we migrate your domain [redacted] we wanted to inform you that a feature you use, Dynamic DNS (DDNS), will not be supported by Squarespace.

So apparently SquareSpace will be entirely useless to me and I've got "as soon as 30 days" to move. Got any suggestions for good registrars to migrate to? (it's a .pw domain if that matters)

/edit: I'm a moron. I already use Cloudflare as my name server; Google/SquareSpace only handles the registration. I'll be fine. Thanks for the help everyone!

Google -> SquareSpace?
I've only ever had my domain registered via Google Domains (~7 years), mostly because it was cheap and convenient, and Google already had my billing info. Google has, however, sold its domain registration services to SquareSpace and will soon be transitioning customers there.

Not upset to be removing one more bit of Google from my life, but I don't know much about SquareSpace and I'm not sure if I should just go with the transition to them or perhaps move to a different registrar… If I were to move, where to?

Curious what others think about the situation and company. Are you a Google Domains customer? What's your plan? Why?