I’m the administrator of kbin.life, a general purpose/tech orientated kbin instance.
There’s a certbot plugin that works with nginx directly to renew the certificate (so you don’t need to stop the web server to renew). If you install the plugin, you use the same certbot commands but add --nginx, and it will perform the renewal without interfering with web server operation.
You then just make sure the cron job to renew also includes --nginx and you’re done.
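A minimal sketch of what that looks like (the crontab timing and certbot path are assumptions; --dry-run lets you test a renewal without actually consuming one):

```shell
# one-off: confirm the nginx plugin can renew without stopping nginx
certbot renew --nginx --dry-run

# crontab entry -- a twice-daily check is the commonly recommended cadence;
# verify the certbot path on your system with `which certbot`
0 0,12 * * * certbot renew --nginx --quiet
```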
I mean, while they can block most things, to give people a usable experience they’re going to allow HTTP and HTTPS traffic through, and they can’t really inspect HTTPS because of the TLS layer.
So for the best chance of success, running OpenVPN over TCP on port 443 is the most likely to get past this level of blocking. I guess they could block suspicious traffic in the session before TLS is established (in order to block certain domains). OpenVPN does support traversing a proxy, but it might only work if you specify it explicitly. If their network advertises a proxy via DHCP, maybe you could see that and work around it.
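A client-side sketch of that setup (the server name, proxy host, and ports are placeholders; the http-proxy line is only needed if the network forces an explicit proxy):

```
# client.ovpn fragment -- OpenVPN over TCP on 443, optionally via a proxy
client
proto tcp
remote vpn.example.com 443
# uncomment if the network hands out an explicit HTTP proxy:
;http-proxy proxy.example.net 3128
```

The server side needs `proto tcp` and to listen on 443 as well, which usually means not sharing that port with a web server (or using OpenVPN’s port-share directive).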
I did have fun working around an ex gf’s university network many years ago to get a VPN running over it. They were very, very serious about blocking non-standard services. A similar “through” the proxy method was the last resort they didn’t seem to bother trying to stop.
The service that provides both web protection and logging installs its own root certificate, then creates certs for sites on demand and routes web traffic through its own proxy, yes.
It’s why I don’t do anything personal at all on the work laptop. I know they have logs of everything everyone does.
What if I told you, businesses routinely do this to their own machines in order to make a deliberate MitM attack to log what their employees do?
In this case, it’d be a really targeted attack: breaking into their locally hosted server to steal the CA key, and also installing a forced VPN/reroute in order to serve up MitM attacks or similar. And to what end? Maybe if you’re a billionaire, I’d suggest not doing this. Otherwise, I’d wonder why you’d (as in the average user) be the target of someone who would need to spend a lot of time and money on the reconnaissance needed to break in and do anything bad.
I find anything with that coated plastic over time gets crappy. I still have an old X52 pro I’ve had for probably around 15 years now. In the end I just completely took off the flaking rubber style coating they put over it and it’s now shiny plastic and still going strong.
I also have a G502 that’s 6 years old. It has some worn areas where it’s actively held and on the buttons. I replaced the skates last year and have a spare set. Otherwise, still going strong.
Really not sure why I’d subscribe for something that lasts so long and isn’t THAT expensive to replace.
I’m going to blame the cloud for this. SaaS has got pretty much most software companies into the idea that they can have their cake and eat it with recurring revenue from cloud hosting their services.
This seems to have overflowed into every other market, where they want a piece of that pie.
I’m hoping it’s a fad that goes away. You know how we can make it a fad that goes away? Don’t buy into this shit.
Well, good news: IPv6 has a thing called privacy extensions, which has been switched on by default on every device I’ve used.
That generates random IPv6 addresses (which are regularly rotated) that are used for outgoing connections. Your router should block incoming connections to those IPs, but the OS will too. The permanent IP address isn’t used for outgoing connections, and the address space allocated to each user makes a brute-force scan far more prohibitive than scanning the whole IPv4 Internet.
So I’m going to say that using routable IPv6 addresses with privacy extensions is more secure than a single IPv4 NAT address with DNAT.
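The scan-resistance point is easy to sanity-check: a single /64 (the typical per-LAN IPv6 allocation) holds 2^64 addresses, dwarfing the entire IPv4 space:

```python
ipv4_internet = 2 ** 32   # every possible IPv4 address
one_slash_64 = 2 ** 64    # hosts in a single IPv6 /64 subnet
ratio = one_slash_64 // ipv4_internet
print(ratio)  # 4294967296 -- one /64 is ~4.3 billion IPv4-Internets wide
```

So even scanning one user’s subnet at IPv4-Internet speed would take billions of times longer, before rotation of the temporary addresses is even considered.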
I think people’s experience with powerline Ethernet (PLE) will always be subjective. In the old flat we were in, where I needed it, it would drop the connection all the time; it was unusable.
But I’ve had them run totally fine in other places. Noisy power supplies that aren’t even in your place can cause problems. Any kind of impulse noise (bad contacts on an old style thermostat for example) and all kinds of other things can and will interfere with it.
Wifi is always a compromise too. But, I guess if wiring direct is not an option, the OP needs to choose their compromise.
I’m in the ntppool.org pool for the UK. It randomly assigns servers, which could be any stratum really (but there is quality control on the time provided). I also have stratum 2 servers in .fi and .fr (which are dedicated servers I also use for other things, rather than a Raspberry Pi).
Well, I run an NTP stratum 1 server handling 2800 requests a second on average (3.6 Mbit/s total average traffic), and a Flightradar24 reporting station, plus some other rarely used services.
The fan only comes on during boot; I’ve never heard it in normal operation. Load averages 0.3–0.5, most of that from FR24. Chrony takes <5% of a single core usually.
It’s pretty capable.
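Those numbers are roughly self-consistent: a standard NTP packet is 48 bytes of payload plus 28 bytes of UDP/IP headers, and each client transaction is one request plus one response. A back-of-envelope sketch (Ethernet framing would add a little more, which closes most of the gap to the observed figure):

```python
ntp_payload = 48            # bytes in a standard NTPv4 packet
udp_ip_headers = 8 + 20     # UDP + IPv4 header bytes
wire_bytes = ntp_payload + udp_ip_headers   # 76 bytes per packet
requests_per_sec = 2800
# request + response per transaction, converted to megabits per second
mbit = wire_bytes * 8 * 2 * requests_per_sec / 1e6
print(round(mbit, 2))  # 3.4 -- same ballpark as the observed 3.6 Mbit/s
```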
Well, I run forgejo for my own stuff. So, let’s say I decided to host something that is subject to a copyright complaint. As soon as people start using your repo and their lawyers get a whiff of it, they’ll just take the IP of your server and DMCA the owner of the IP. Whether it be me, or the host. It’s an entity they can go after and will need to yield to appropriate law. The effect would be the same as the DMCA going to Github.
But on tor, it hides the entity operating and running the server. Making it a lot harder to find the person to even send the DMCA to, let alone start the legal wheels turning, if it were ignored.
But isn’t that the point? You pay a low fee for inconvenient access to storage in the hope you never need it. If you have a drive failure you’d likely want to restore it all, in which case the bulk restore isn’t terribly priced, and the other option is losing your data.
I guess the question of whether this is a service for you is how often you expect a NAS (that likely has redundancy) to fail, be stolen, destroyed, etc. I would expect it to be less often than once every 5 years. If the price to store 12TB for 5 years and then restore 12TB after 5 years is less than the storage on other providers, then that’s a win, right? The bigger thing to consider is whether you’re happy to wait for the data to become available. But for a backup of data you want back and can wait for, it’s probably still good value. Using the 12TB example:
Backblaze, simple cost: $6 × 12TB = $72/month, which over a 5-year period would be $4,320. Per-operation fees incurred during backup and restore might push that up a bit, but not by any noticeable amount, I think.
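The arithmetic is straightforward to check (assuming the ~$6/TB/month B2 rate quoted above; the Glacier totals that follow are the author’s own pricing, not re-derived here):

```python
tb_stored = 12
price_per_tb_month = 6.00    # assumed Backblaze B2 rate, $/TB/month
months = 5 * 12              # 5-year comparison window
monthly = tb_stored * price_per_tb_month
total = monthly * months
print(monthly, total)  # 72.0 4320.0
```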
For Amazon Glacier I priced up two modes (I think correctly; their pricing is overly complicated): Flexible Retrieval and Deep Archive. The latter is probably suitable for a NAS backup, although of course you can only really add to it, not easily remove/adjust files. So over time, your total stored would likely exceed the amount you actually want to keep. Some complex “diff” techniques could probably be utilised here to minimise this waste.
Total: $1379.95 / $1594.99
Total: $3150.65
In my mind, if you just want to push large files from a high-capacity NAS somewhere they can be restored on some rainy day in the future, Deep Archive can work for you. I do wonder, though, if they’re storing this stuff offline on tape or something similar, how they bring back all your data at once. But that seems to me to be their problem and not the user’s.
Do let me know if I got any of the above wrong. This is just based on the tables on the S3 pricing site.
Yep. The ISP doesn’t offer it any more; they stopped, I think, when RIPE officially “ran out” of new netblocks. But I’ve moved address twice so far and have kept the allocation. Well, on the last move they messed up and gave me a new single IP. I complained, and they asked why it mattered so much to have my old IP. I pointed out I had a netblock, and they fixed it up pretty quickly.
Pretty soon, full fibre will be in my area and available on the same ISP. So, hoping for a smooth transition to keep it for a bit longer.
I think that’s the main problem. You could make a Linux distro that works like Android and other embedded setups. But it would be locked down to only allow installations from an app store, and custom hardware likely wouldn’t be supported, with no way to get a kernel update until the distro provides one.
That would totally alienate the current Linux userbase, who are used to taking a distro, adding their own install sources, compiling some things from source, and upgrading the kernel or perhaps recompiling it themselves. Sure, an upgrade might break things, but they know how to fix it.
The two types of user are worlds apart. I think snap/Flatpak etc. come closer to a way to get Windows-esque setups. But again, for many experienced users those also sacrifice too much in favour of convenience.
But I’ve never played smash. What does that mean? Oh! Oh.