Plex was how LastPass got hacked. https://www.howtogeek.com/147554/lastpass-data-breach-shows-why-plex-updates-are-important/
You still need to keep on top of it, even if it is Plex.
I don’t think that works on my Samsung TV, or my partner’s iPad, though. :)
Although not especially effective on the YouTube front, it actually increases network security just by blocking API access to ad networks on those kinds of IoT and walled-garden devices. Ironically, my partner loves it not for YouTube but for all her Chinese drama streaming websites. So when we travel and she’s subjected to those ads, she’s much more frustrated than when she’s at home, lol.
So the little joke, while not strictly true, is pretty true if you just say “streaming content provider”.
How are they placing this data? An API? Is it not possible to align disk tiers to API requests per minute? E.g. API responses limited to one every 1 ms for some clients, one every 0.1 ms for others?
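To show what I mean, here’s a minimal sketch of a per-tier throttle; the tier names, intervals, and the idea that you can intercept each request are all my assumptions, not anything you’ve described:

```python
import time
from dataclasses import dataclass

# Hypothetical per-tier limits: minimum spacing between accepted requests.
TIER_INTERVAL_S = {
    "standard": 0.001,   # one request per 1 ms (invented number)
    "premium": 0.0001,   # one request per 0.1 ms (invented number)
}

@dataclass
class Throttle:
    tier: str
    last_accepted: float = 0.0

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.last_accepted >= TIER_INTERVAL_S[self.tier]:
            self.last_accepted = now
            return True
        return False  # caller would return HTTP 429 with a Retry-After header

t = Throttle(tier="standard")
print([t.allow() for _ in range(3)])  # first True, rapid retries False
```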
You’re pretty forthcoming about the problems, so I do genuinely hope you get some talking points, since this issue affects app & DB design, sales, and maintenance teams at a minimum. Considering all aspects will give the business more chance to realise there’s a problem that affects customer experience.
From handling tickets, I think maybe add a process to auto-respond to rate-limited/throttled customers with: “Your instance has been rate limited as it has reached the {tier limit} of your performance tier. This limit applies until {rate limit block time expiry}. Support tickets related to performance or limits will be capped at P3 until this rate limit expires.”
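Something like this could fill that template in; the function name, fields, and example values are all invented for illustration:

```python
# Hypothetical auto-responder; the limit string and expiry are placeholders.
from datetime import datetime, timedelta, timezone

def throttle_notice(tier_limit: str, expiry: datetime) -> str:
    return (
        f"Your instance has been rate limited as it has reached the "
        f"{tier_limit} of your performance tier. This limit applies until "
        f"{expiry:%Y-%m-%d %H:%M %Z}. Support tickets related to performance "
        f"or limits will be capped at P3 until this rate limit expires."
    )

print(throttle_notice(
    "10,000 requests/minute ceiling",  # invented tier limit
    datetime.now(timezone.utc) + timedelta(hours=1),
))
```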
Work with your sales and contracts team to update the SLA to exclude rate-limited customers from the priority SLA.
I guess I’m still on the “maybe there’s more you can do to get your feet out of the fire for customer self-inflicted injury” side, like classifying customer tickets correctly. It’s bad when one customer can misclassify an issue and harm another customer by jumping the queue and delaying responses to real issues, when the system is working as intended.
If a customer was warned and did it anyway, it can’t be a top-priority issue, which is your argument, I guess. Customers who need more but pay for less, and then expect more than they get: that’s really not your fault or problem. But if it’s affecting you, I’m wondering how to make it affect you less.
If it’s possible to do, and it causes a user-experience issue, especially one as jarring as “stop accepting writes”, you should add rate limits and validate inputs, with the limits expressed to the user before they hit the error.
To me, you should already be sanitising input anyway, and this would just be part of that logic. If a user is trying to upload more than X, warn (with a link to documentation of the limit). If the user has gone past the rate limit, then error, as in the sketch below.
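A minimal sketch of that warn-then-error flow, assuming you can see the user’s current usage at upload time; the thresholds, docs URL, and function name are invented:

```python
# Hypothetical warn-then-error check run before accepting an upload.
DOCS_URL = "https://docs.example.com/limits"  # placeholder link
LIMIT_BYTES = 100 * 1024**2                   # invented hard limit
WARN_AT = 0.8                                 # warn at 80% of the limit

def check_upload(current_usage: int, upload_size: int) -> str:
    projected = current_usage + upload_size
    if projected > LIMIT_BYTES:
        raise ValueError(f"Upload rejected: over plan limit, see {DOCS_URL}")
    if projected > WARN_AT * LIMIT_BYTES:
        return f"Warning: approaching your plan limit, see {DOCS_URL}"
    return "ok"

print(check_upload(current_usage=70 * 1024**2, upload_size=20 * 1024**2))
```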
I’m not an SRE or a dev, just a sysadmin, though. Users expect guard rails: if it’s possible, it’s permitted.
There have been a few cases where ports are blocked. For example, on many residential connections port 25 is blocked. If you pay for a static IP, this often gets unblocked. Same with port 10443 on a few residential services. There are probably more, but these are issues I’ve seen.
These are trivial to bypass, but bypassing often aligns with fixing the reason they’re blocked in the first place. IIRC, port 10443 was abused by malicious actors when home routers accepted NAT-PMP requests from, say, an unpatched QNAP: inbound traffic on 10443 was automatically forwarded to the NAS, which had terrible security flaws and was part of a widespread botnet. If you’ve changed the web port, you’re probably also maintaining the QNAP. Likewise, port 25 can be bypassed by using authenticated submission on 587 (STARTTLS) or 465, which means you aren’t relaying outbound spam from infected local computers.
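For reference, this is roughly what authenticated submission on 587 looks like; the server, addresses, and credentials are placeholders:

```python
# Hypothetical example: authenticated submission on 587 with STARTTLS,
# which residential ISPs generally leave open (unlike port 25).
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@example.com"       # placeholder addresses
msg["To"] = "friend@example.net"
msg["Subject"] = "test"
msg.set_content("hello")

with smtplib.SMTP("smtp.example.com", 587) as smtp:
    smtp.starttls()                   # upgrade to TLS before authenticating
    smtp.login("you@example.com", "app-password")  # placeholder credentials
    smtp.send_message(msg)
```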
Overall fair enough.
Being free on Cloudflare makes it likely to be widely adopted quickly.
It’s also going to break all the firewalls at work, which will no longer be able to do DNS and HTTP filtering based on set categories like phishing, malware, gore, and porn. I wish I didn’t need to block these things, but users can’t be trusted, and not everyone is happy seeing porn and gore on their co-workers’ screens!
The malware and other malicious-site blocking, though, is for me. At every turn users will click the Google-promoted ad sites, just like the KeePass one this week.
Anyway, all of that is likely to stop working now! I guess all that’s left is to break encryption by doing true MITM: installing certificates on everyone’s machines and making the firewall a proxy. Something I was loath to do.
After I followed the instructions, and with 15 years of system administration experience. I was willing to help, but I guess you’d rather quip.
From my perspective, unless there’s something you haven’t yet disclosed, if WireGuard can reach the public domain, like a VPS, then Tailscale would work, since it’s mechanically doing the same thing: WireGuard with a GUI and a VPS hosted by Tailscale.
If your ISP is blocking ports and destinations, maybe there are other factors in play, but usually ones that can be overcome. Your answer, though, is to pay for mechanically the same thing. Which is fine, but I suspect there’s a knowledge gap.
Not possible without a domain, even just “something.xyz”.
The way it works is this: the client resolves your host name to an IP, connects, and the web server presents a certificate. The browser is happy when two conditions are met: the name on the certificate matches the host name you browsed to, and the certificate chains up to an authority the machine already trusts.
Now, to get that experience you need to meet those conditions. The machine trying to browse to your website needs to trust the certificate that’s presented. So you have a few ways, as I previously described.
Note there’s no reverse proxy here. But it’s also not a toggle on a Web server.
So you don’t need a reverse proxy. Reverse proxies allow some cool things, but here are two problems they solve that you may need solved: serving multiple sites or services from a single IP and port by routing on host name, and terminating TLS in one place so the certificates live on one box.
But in this case you don’t really need one, since you have lots of IPs: you’re not offering this publicly, you’re offering it over Tailscale, and both web servers can be accessed directly.
It’s possible to host a DNS server for your domain inside your tailnet and serve DNS responses like: yourwebserver.yourdomain.com = tailnet IP.
Then, using certbot (Let’s Encrypt) with a DNS-01 challenge against your public DNS provider’s API, you can get a trusted certificate and automatically bind it.
Your tailnet users, if they use your internal DNS server, will resolve your hosted service to your private tailnet IP, the bound certificate name will match the host name, and everyone is happy.
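To make that concrete, here’s a rough way to sanity-check the setup from a tailnet client. The DNS server address and host name are placeholders, and it assumes the dnspython package:

```python
# Hypothetical check: ask the internal DNS server for the record, then
# confirm the certificate presented on 443 matches the host name.
import socket
import ssl
import dns.resolver  # pip install dnspython

HOSTNAME = "yourwebserver.yourdomain.com"   # placeholder
INTERNAL_DNS = "100.100.100.100"            # placeholder tailnet DNS IP

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [INTERNAL_DNS]
ip = resolver.resolve(HOSTNAME, "A")[0].to_text()
print("resolved to", ip)                    # should be the tailnet IP

ctx = ssl.create_default_context()          # system trust store, so a real
with socket.create_connection((ip, 443)) as sock:  # Let's Encrypt cert passes
    with ctx.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        print("certificate ok for", HOSTNAME)
```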
There’s more than one way, though; that’s just how I’d do it. If you don’t own a domain, then you’ll need to host your own private certificate authority and install the root authority certificate on each machine if you want them to trust the certificate chain.
If your family can click the “Advanced > Continue anyway” button, then you don’t need to do anything but use a locally generated cert.
It’s totally fine to bulk-replace specifically sensitive information with “replace all”, as long as it doesn’t break parsing, which happens with inconsistency. Like, if you have a server named “Lewis-Hamiltons-Dns-sequence”, maybe bulk-rename it to something that’s still clear, like “customer-1112221-appdata”.
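A rough sketch of the kind of consistent rename pass I mean; the names and alias format are invented:

```python
# Hypothetical sanitiser: replace each sensitive name with a stable alias
# so the logs still parse consistently everywhere the name appears.
import re

ALIASES = {}  # real name -> stable placeholder

def sanitise(line: str, sensitive_names: list[str]) -> str:
    for name in sensitive_names:
        alias = ALIASES.setdefault(name, f"customer-{len(ALIASES) + 1:07d}-appdata")
        line = re.sub(re.escape(name), alias, line)
    return line

log = "Lewis-Hamiltons-Dns-sequence failed lookup; retrying Lewis-Hamiltons-Dns-sequence"
print(sanitise(log, ["Lewis-Hamiltons-Dns-sequence"]))
```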
But try to differentiate between “am I ashamed?” and “this is sensitive, and leaking it would cause either a PII-exfiltration risk or a security risk”, since only one of those is legitimate.
Note: if I can find that information with a DNS lookup and DNS scraping, it’s not sensitive. And if you’re my customer and you’re hiding your name, one I already invoice, that’s probably only making me suspicious about whether those logs are even yours.
Just FYI, as a sysadmin, I never want logs tampered with. I import them, filter them, and the important parts will be analysed no matter how much filler debug- and info-level stuff is in there.
Same with network captures. Modified pcaps are worse than garbage.
Just include everything.
Sorry you had a bad experience. The customer service side is kind of unrelated to the technical practice side though.
So I think you may not know about Quick Sync, an Intel transcoding-acceleration feature of the GPUs built into Intel CPUs.
https://handbrake.fr/docs/en/latest/technical/video-qsv.html
There’s information about it for Plex, HandBrake, and ffmpeg in general, I think. This is how some people do real-time transcoding for media servers. But I’m not an expert; I just hope you can be guided with easier search terms.
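For instance, a hedged sketch of what a Quick Sync transcode looks like through ffmpeg; the file names are placeholders and exact flags vary by ffmpeg build:

```python
# Hypothetical one-shot QSV transcode via ffmpeg's h264_qsv encoder.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-hwaccel", "qsv",        # decode with Quick Sync where possible
        "-i", "input.mkv",        # placeholder source file
        "-c:v", "h264_qsv",       # encode H.264 on the Intel iGPU
        "-c:a", "copy",           # pass audio through untouched
        "output.mp4",             # placeholder destination
    ],
    check=True,
)
```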
The bypass is to run your own router and distribute locally hosted DNS servers (either the router itself or a Pi-hole), with those DNS servers doing their upstream lookups over DNS over HTTPS (port 443). Your provider can’t intercept that, since it looks like regular encrypted web traffic, just like they shouldn’t be able to inspect your net banking.
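This is what a DoH lookup boils down to; it uses Cloudflare’s public DNS-over-HTTPS JSON endpoint, and the queried name is just an example:

```python
# DoH lookup: to the ISP this is ordinary HTTPS traffic to port 443.
import requests  # pip install requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```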
Australia is different, but the ISPs that do this generally have a +$5-per-month plan that gets you a static, publicly routable IP (instead of CGNAT) and unfiltered Internet. They’re usually more about letting mum and dad filter the web so their kids can’t get too far off track. Maybe just double-check your ISP portal settings, but I’m going to assume you’re not in Aus.
100%. Or set hosts-file entries on each endpoint to resolve mail.domain.com to your internal IP that’s only reachable over the VPN, e.g. a line like “10.0.0.25 mail.domain.com”. Not going to be easy on mobiles, though.
There is an assumption, though, that the mail server has an internal IP address wherever you are hosting. That might not be true. I would always put the public IP on the firewall and then NAT port 25 in to the private IP of the server, but who knows what this particular OP has done.
Sure was! You need to be on top of paid, free, and open-source software from a security standpoint. There’s no shortcut, no matter what you think you’re paying for. Your threat model might be better when the service automates a web proxy for you, but that’s only one facet. You trade problems, but you should never feel like you can “set and forget”. Sometimes it’s better to do it yourself, because there’s no lying about responsibilities that way.