• 0 Posts
  • 25 Comments
Joined 1Y ago
Cake day: Jun 06, 2023


In my very limited experience with my 5400rpm SMR WD disk, it’s perfectly capable of writing at over 100 MB/s until its cache runs out, then it pretty much dies until it has time to properly write the data, rinse and repeat.

40 MB/s sustained is weird (but maybe it’s just different firmware? I think my disk could actually sustain 60 MB/s for a few hours once I limited the write speed, so 40 could just be a conservative setting that doesn’t even slowly fill the cache).
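
For context, limiting the write rate can be as simple as throttling in the copying tool (rsync’s --bwlimit does this). Here’s a rough stdlib-only sketch of the same idea; the paths and the 60 MB/s target are just placeholders:

```python
# Minimal sketch: copy a file while capping the write rate so an SMR drive's
# cache never fills up. Paths and the target rate are placeholders.
import time

CHUNK = 1024 * 1024          # 1 MiB per write
LIMIT = 60 * 1024 * 1024     # target ~60 MB/s sustained

def throttled_copy(src_path: str, dst_path: str) -> None:
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        start, written = time.monotonic(), 0
        while chunk := src.read(CHUNK):
            dst.write(chunk)
            written += len(chunk)
            # Sleep just long enough to stay at or below the target rate.
            expected = written / LIMIT
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)

if __name__ == "__main__":
    throttled_copy("/path/to/source.bin", "/mnt/smr-disk/dest.bin")
```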


Your mileage may vary - your experience might be different for one reason or another


Vista’s problem was just the terrible third-party drivers and the fact that it was preinstalled on machines it had no business running on. 7 didn’t improve much on it (except fixing the UAC prompt so that it no longer made you feel like you were using Linux with a misconfigured sudo timeout), but it had the benefit of already having working drivers from Vista and proper hardware capable of running Vista/7.


Zig didn’t come to my mind when I was writing my comment and I agree that it’s probably a decent option (the only issue I can think of is its somewhat small community, but that’s not a technical issue with the language).

My argument against Go and Java is garbage collection - even if Java’s infamous GC pause can apparently be worked around with a specialized JVM, I’m pretty sure it still comes at the cost of higher memory usage and wasted CPU cycles compared to some kind of reference counting or Rust’s ownership mechanism (not sure about the proper term for that). And higher memory usage is definitely not something I want to see in my browser, they’re hungry enough as is.
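
As a loose illustration of the difference (this is CPython behaviour, which frees most objects by reference counting and only falls back to a tracing collector for cycles; it’s not a claim about how any particular JVM performs):

```python
# Loose illustration: reference counting frees objects the moment the last
# reference goes away, while a tracing collector only reclaims them on a
# later collection pass. CPython behaviour, not a JVM benchmark.
import gc
import weakref

class Blob:
    pass

# Refcounted case: dropping the only reference frees the object immediately.
a = Blob()
a_ref = weakref.ref(a)
del a
print("freed immediately by refcount:", a_ref() is None)   # True

# Cyclic case: the refcount never reaches zero, so the object lingers
# until the cycle-detecting (tracing) collector runs.
b = Blob()
b.self_ref = b
b_ref = weakref.ref(b)
del b
print("freed before collection:", b_ref() is None)          # False
gc.collect()
print("freed after gc.collect():", b_ref() is None)         # True
```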


Why not just say Rust? There isn’t really anything else that would provide good enough performance for a browser engine with modern heavy webpages while also fixing some major pain points of C/C++.


Right, now get a borderline computer-illiterate person to connect to your network, ensure their firewall isn’t misconfigured to block all incoming traffic (with TeamViewer, this configuration would still work because the device just connects to the TV server) and open and set up a completely separate screen sharing program.

I know none of these steps are difficult if you have any idea what you’re doing, but I’ve met plenty of people who would most likely need assistance going through the motions. Funnily enough, the best way to do it remotely would probably be to get them to install TeamViewer to then set this up for them remotely.

By the way, as far as networking goes, Tailscale does the same thing TeamViewer does, just for a VPN instead of a screen sharing application - it will try all the NAT punch-through techniques and direct IPv6 connections, and fall back to tunneling through relay servers if all else fails. It’s not any more of a direct connection than TV.


Convenience (after you install it, all you have to do is enter the code and you’re connected, no other setup required), familiarity (it’s the default name people will think of or find if they want remote access - that alone means they can get away with pushing their users slightly more) and - IMHO most importantly - connectivity: if two computers can connect to the TeamViewer servers, they will be able to connect to each other.

That’s huge in the world of the broken Internet, where peer-to-peer networking feels like rocket science - pretty much every consumer device will be sitting behind a NAT, which means “just connecting” is not possible. You can set up port forwarding (either manually or automatically using UPnP, which is its own bag of problems), or you can use IPv6 (which appears to be currently available to roughly 40% of users globally; to use it, both sides need functional IPv6), or you can try various NAT traversal techniques (which only work with certain kinds of NAT and always require a coordinating server to pull off - this is one of the functions provided by TeamViewer’s servers). Oh, and if you’re behind CGNAT (a kind of NAT used by internet providers; apparently it’s moderately common), then neither port forwarding nor NAT traversal is possible. So if both sides are behind CGNAT and at least one doesn’t have IPv6, establishing a direct link is impossible.
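
As a quick illustration of the “sitting behind a NAT” part, here’s a rough stdlib-only check: if the address your OS picks for outbound traffic is a private (RFC 1918) one, nothing on the internet can reach you directly. The 1.1.1.1 target is arbitrary; connecting a UDP socket sends no packets, it just makes the OS choose a source address.

```python
# Rough check: a private (RFC 1918) local address means you're behind at least
# one layer of NAT and can't be reached directly from the internet.
import ipaddress
import socket

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("1.1.1.1", 53))   # no packets are sent; this only picks a route
    local_ip = ipaddress.ip_address(s.getsockname()[0])

print(f"local address: {local_ip}")
print("behind NAT (private address):", local_ip.is_private)
```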

With a relay server (like TeamViewer provides), you don’t have to worry about being unable to connect - it will try to get you a direct link, but if that fails, it will just act as a tunnel and pass the data between both devices.
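
To make the relay idea concrete, here’s a bare-bones sketch of what that fallback amounts to (the port is arbitrary, and TeamViewer’s actual protocol is obviously far more involved): a server on a publicly reachable host that accepts two clients and blindly shuffles bytes between them.

```python
# Bare-bones relay: accept two clients and blindly forward bytes between them.
# A real relay (TeamViewer's servers, a TURN server, etc.) adds authentication,
# encryption and multiplexing on top of this idea.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def main() -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 5000))   # arbitrary port on a reachable host
    listener.listen(2)
    a, _ = listener.accept()           # first device connects *out* to the relay
    b, _ = listener.accept()           # second device does the same
    # Neither device can reach the other directly, but both reached this
    # server, so it just passes traffic in both directions.
    threading.Thread(target=pipe, args=(a, b), daemon=True).start()
    pipe(b, a)

if __name__ == "__main__":
    main()
```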

Sure, you can self host all this, but that takes time and effort to do right. And if your ISP happens to use CGNAT, that means renting a VPS because you can’t host it at home. With TeamViewer, you’re paying for someone else to worry about all that (and pay for the servers that coordinate NAT traversal and relay data, and their internet bandwidth, neither of which is free).


If it doesn’t come at the expense of battery wear, then sure, lower charge time is just better. But that would make phone batteries the only batteries that don’t get excessively stressed when fast charging. Yeah, phone manufacturers generally claim that fast charging is perfectly fine for the battery, but I’m not sure I believe them too much when battery degradation is one of the main reasons people buy new phones.

I have no clue how other manufacturers do it (so for all I know they could all be doing it right and actually use slow charging), but Google has a terrible implementation of battery conservation - Pixels just fast charge to 80%, then wait until some specific time before the alarm, then fast charge the rest. Compare that to a crappy Lenovo IdeaPad laptop I have, whose battery conservation feature sets a charge limit AND a power limit (60% with 25W charging), because it wouldn’t make sense to limit the charge and still use the full 65W to charge.


It doesn’t slow charge, at least not on Pixel 7a. Well, you could argue whether 20W is slow charging, but it’s all this phone can do.

It just charges normally to 80%, stops, and then resumes charging about an hour or two before the alarm. And last time I used it, it had a cool bug where if it fails to reach 80% by the time it’s supposed to resume charging, it will just stop charging no matter what the current charge level is. Since that experience, I’ve turned the feature off and just charge it whenever it starts running low.


Cheap Bluetooth might have connection hitches

Fair enough, but I’ve only ever seen this happen with cheap wireless cards / chipsets that do both Bluetooth and WiFi and don’t properly avoid interference between the two (for example, I can get perfectly functional Bluetooth audio out of my laptop with its shitty Realtek wireless card, but only if I completely disable WiFi, not just disconnect). I think this is less of an issue for dedicated Bluetooth devices.

Bluetooth doesn’t work with airplane mode, although I think most airplanes these days aren’t actually affected or we’d have planes dropping out of the sky daily.

Yeah, that’s true. As for the second part, AFAIK there was never an issue with 2.4 GHz radios (which is the frequency band Bluetooth uses) interfering with planes; it was more of a liability / legal thing - the plane manufacturer never explicitly said that these radios are safe (so the airline just banned them to be on the safe side) and/or regulations didn’t allow non-certified radios to operate on planes.

Also, does Bluetooth get saturated the way WiFi does?

Eventually yes, but it’s much more resilient than WiFi - 2.4 GHz WiFi only has three non-overlapping channels to work with (and there’s a whole thing with the in-between channels being even worse for everyone involved than everyone just using the same correct three channels, which I won’t get into), while Bluetooth slices the same spectrum into 79 fully usable channels. It also uses much lower transmission power, so the signal travels a shorter distance. And unlike WiFi, it can dynamically hop from channel to channel (in fact, it does this constantly, even without any interference). 100 people actually seeing each other’s devices might be a problem, but I don’t think that’s a realistic scenario - Bluetooth will use the lowest transmit power at which it can get a reliable link, so if everyone’s devices are only transmitting over a meter or so, there shouldn’t be any noticeable interference on the other side of the plane.
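
Just to put numbers on that comparison (these figures are for classic Bluetooth and the 2.4 GHz WiFi band as I understand them):

```python
# Rough numbers behind the comparison (classic Bluetooth vs 2.4 GHz WiFi).
# Classic Bluetooth: 79 channels of 1 MHz starting at 2402 MHz, hopped constantly.
bt_channels = [2402 + k for k in range(79)]

# 2.4 GHz WiFi: channels are 5 MHz apart but ~20 MHz wide, so only 1/6/11
# can coexist without overlapping each other.
wifi_non_overlapping = {1: 2412, 6: 2437, 11: 2462}

print(f"Bluetooth: {len(bt_channels)} channels, "
      f"{bt_channels[0]}-{bt_channels[-1]} MHz")
print(f"WiFi non-overlapping channel centres (MHz): {wifi_non_overlapping}")
```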


I don’t really see the big problem here? Like sure, it’s silly that it’s cheaper to make wireless headphones than wired ones (I assume - the manufacturers are clearly not too bothered by trademarks and such if they put the Lightning logo on it, so they wouldn’t avoid a wired solution just because of licensing fees), but what business does Apple have in cracking down on this? Other than the obvious trademark issues, but those would be present even if they were true wired earphones. It’s just a knockoff manufacturer.

The cheapest possible wired earphones won’t sound much better than the cheapest possible wireless ones, so sound quality probably isn’t a factor. And on the plus side, you don’t have multiple batteries to worry about, or you could do something funny like plugging the earphones into a power bank in your pocket and having freak “hybrid” earphones with multi-day battery life (they’re not wireless, but also not tethered to your phone). On the downside, you do waste some power on the wireless link, which isn’t great for the environment in the long run (the batteries involved will see marginally more wear).

Honestly the biggest issue in my mind is forcing people to turn on Bluetooth, but I don’t think this will change anyone’s habits - people who don’t know what Bluetooth is will definitely just leave it on anyway (it’s the default state), and people technical enough to want to turn it off will recognize that there’s something fishy about these earphones.


Are you sure you didn’t set DNS directly on some/all of your devices? Because in that case they won’t care about the router settings and will use whatever you set them to.

Also as the other commenter said, DNS changes might not propagate to other devices on the network until the next time they connect - a reboot or unplugging the cable from your computer for a few seconds is a dirty but pretty OS agnostic way to do that.


I feel like the ingest system will be sophisticated enough to throw away pieces of text that begin with a message like “ChatGPT says”. Probably even stuff that follows the “paragraph with assumptions and clarifications followed by a list followed by a brief conclusion” structure - everything old has been ingested already, and most of the new stuff containing this is probably AI generated.
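
Something like this, for the first part at least (a toy sketch of the kind of filter I mean; the patterns are just example phrasings that give away copy-pasted chatbot output):

```python
# Toy sketch of an ingest filter that drops text which starts with obvious
# "pasted from a chatbot" phrasing. Patterns are just illustrative examples.
import re

TELLTALE_PATTERNS = [
    re.compile(r"^\s*chatgpt\s+says", re.IGNORECASE),
    re.compile(r"^\s*as an ai language model", re.IGNORECASE),
]

def looks_like_pasted_ai(text: str) -> bool:
    return any(p.search(text) for p in TELLTALE_PATTERNS)

samples = [
    "ChatGPT says the answer is 42.",
    "I measured it myself and got 40 MB/s.",
]
for s in samples:
    print(looks_like_pasted_ai(s), "-", s)
```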


Yeah, it’s not practical right now, but in 10 years? Who knows, we might finally have some built-in AI accelerator capable of running big neural networks on consumer CPUs by then (we do have AI accelerators in a large chunk of current CPUs, but they’re not up to the task yet). The system memory should also go up now that memory-hungry AI is inching closer to mainstream use.

Sure, Internet bandwidth will also increase, meaning this compression will be less important, but on the other hand, it’s not like we’ve stopped improving video codecs after H.264 because it was good enough - there are better codecs now even though we have the resources to handle bigger H.264 videos.

The technology doesn’t have to be useful right now - for example, neural networks capable of learning have been studied since the 1940s, even though there would be no way to run them for many decades, and it would take even longer to run them in a useful capacity. But now that we have the technology to do so, they enjoy rapid progress building on top of that original foundation.


I’m pretty sure all of those entries are in the same /12 network - 172.16.0.0/12. Apparently there’s nothing wrong with that, but I think you can significantly simplify the config by just removing all the extra entries.
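
Quick way to double-check that for any given entry (the sample addresses below are made up; substitute the ones from the config):

```python
# Quick check that some addresses all fall inside 172.16.0.0/12.
# The sample addresses are made up; substitute the entries from the config.
import ipaddress

supernet = ipaddress.ip_network("172.16.0.0/12")
samples = ["172.16.0.1", "172.20.10.5", "172.31.255.254", "172.32.0.1"]

for addr in samples:
    inside = ipaddress.ip_address(addr) in supernet
    print(f"{addr}: {'inside' if inside else 'outside'} {supernet}")
```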


Because of the built-in SSD, I could also sell the external SSD and buy an 8-12tb HDD instead.

If you’re going for a 3.5" HDD, then you’ll most likely have to look for a bit bigger form factor than TinyMiniMicro (Lenovo Tiny / HP Mini / Dell Micro series) - these computers can’t fit a 3.5" HDD.

If size isn’t a major concern, I’d go for the SFF variants of these computers - they are often cheaper than the minis for the same specs, but probably have a slightly higher idle power draw and take up more space. As a bonus, you get a few small PCIe slots, so yay for expansion options.


On the other hand, it’s also worth noting that newer RAM generations are less and less susceptible to this kind of attack. Not because of any countermeasures - they simply lose data much more quickly without constant refreshing, even when chilled or frozen, so the attack becomes impractical.

So from DDR4 up, you’re probably safe.


I think the idea at the time was that if /usr is unavailable, you won’t be doing much with the system anyway (other than fixing the configuration).

Never mind, apparently the original meaning had nothing to do with a network (TIL), so our discussion is kinda moot. See section 0.24 in this 2.9BSD (1983) installation guide:

Locally written commands that aren’t distributed are kept in /usr/src/local and their binaries are kept in /usr/local. This allows /usr/bin, /usr/ucb, and /bin to correspond to the distribution tape (and to the manuals that people can buy). People wishing to use /usr/local commands are made aware that they aren’t in the base manual.


No comment on sensibility, but technically both are equally difficult - mount the parent filesystem, then mount the child filesystem into an empty directory in the parent. Doesn’t matter which one is where, it’s all abstracted away at this level anyway.
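
A minimal sketch of that order of operations, assuming hypothetical device names and mount points (this needs to run as root):

```python
# Minimal sketch of the order of operations; device names and mount points are
# hypothetical, and this needs to run as root on Linux.
import os
import subprocess

PARENT_DEV, CHILD_DEV = "/dev/sdb1", "/dev/sdc1"      # hypothetical devices
PARENT_MNT = "/mnt/parent"
CHILD_MNT = os.path.join(PARENT_MNT, "child")

subprocess.run(["mount", PARENT_DEV, PARENT_MNT], check=True)   # parent first
os.makedirs(CHILD_MNT, exist_ok=True)                           # empty dir inside it
subprocess.run(["mount", CHILD_DEV, CHILD_MNT], check=True)     # then the child
```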


I do not get paid every time it runs for the rest of my life, so why should you?

Sorry if I misunderstood you, but this feels rather easy to answer: because you are being paid to write the code. Spotify doesn’t pay anyone to write music (well maybe they technically do for some ads or something, but it’s definitely not how they acquire more music to add to the library), they just pay for streaming rights on music that was somehow already independently produced. And tiny unknown musicians have no leverage to negotiate better terms than what Spotify offers.


Yeah, OP literally said that they weren’t blocked when using Vivaldi with uBlock Origin; you were the first one to mention the built-in adblocker (which is detected by YouTube).

Again: to use YT, you have to disable the built-in adblocker and use only uBO. That’s in line with OP’s statement.


That’s cool, but YouTube detects Vivaldi’s built-in adblocker, so it’s kinda irrelevant whether it’s affected by extension policies.

To use YT in Vivaldi, you have to properly configure uBlock Origin (avoid extra filters that interfere with YT) and disable the built-in adblocker for YT. And given that Vivaldi relies on the Chrome Web Store for its extensions, there will still be some friction to getting Mv2 extensions after Google pulls the plug on them.


I mean, there is the whole 127.0.0.0/8 for localhost, kinda hard to beat that for crazy allocations. And OP still has another /12 and a /16 network available even if they refuse to further divide them.


JavaScript. Your browser downloads and runs it automatically and the vast majority of people either don’t consider it a problem at all or just accept that they can’t choose what software they run on their computer. This person apparently wants to avoid websites with proprietary JavaScript if possible.