Seeing the runaway success of others like Nvidia, Apple, and MediaTek, do you think any meaningful new entrants are going to deviate from their playbook?
Being a good citizen with regard to firmware transparency and Linux support is not a proven differentiator for these vendors, and has been shown time and time again not to be a requirement for success.
I don’t understand what you mean. Why does ARM hardware become obsolete after a few years? Is it the lack of ongoing software support and mainline Linux?
What does that have to do with the instruction set license? If you think RISC-V implementors who actually make the damn chips won’t ship locked hardware that only runs signed and encrypted binary blobs, you are in for a disappointing ride.
Major adopters like WD and Nvidia didn’t pick RISC-V over ARM for our freedoms. They were testing the waters to see if they could stop paying the ARM tax. All the other stuff will stay the same.
While C is certainly better for some problems in my experience, it too is very hard to use in large projects with a mix of developers, and it is unsuitable for most higher level applications in most companies.
I think C still has its place in the world, mostly confined to low-level embedded, kernel space, and malware. I do believe that the market segment that used to rely on C++ is today better served by either Go or Rust, depending on the project.
That said, while I LOVE working with Rust, it suffers from many of the same issues I mentioned for C++ in my comment above when working in a mixed skillset team.
Everything is fine within the scope of a college course or project.
Where C++ breaks down is in large, complicated projects where you collaborate with other developers over multiple years.
I worked in C++ for almost a decade, and while there were a few good projects I encountered, most suffered from one or more of the following problems:
Agreed, but for many services 2 or 3 nines is acceptable.
For the cloud storage system I worked on it wasn’t. That product had different setups for different customers, from a simple 3-node system (the smallest setup, mostly for customers trialing the solution) to a 3-geo setup with at least 9 nodes across 3 different datacenters.
For the financial system, we run a live/live/live setup, where we’re running a cluster in 3 different cloud operators, and the client is expected to know all of them and do failover. That obviously requires a little more complexity on the client side, but in many cases developers or organisations control both anyway.
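To make the client-side failover concrete, here is a minimal sketch in Go of the pattern I mean. The endpoints and the `/healthz` path are made up, and a real client would add retries with backoff and some stickiness to the nearest healthy cluster:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// Hypothetical endpoints for the three independent clusters; a real
// deployment would load these from configuration.
var clusters = []string{
	"https://eu.example-ledger.invalid",
	"https://us.example-ledger.invalid",
	"https://ap.example-ledger.invalid",
}

// doWithFailover tries each cluster in order and returns the first
// successful response, so a full cluster outage only costs the caller
// an extra round trip instead of downtime.
func doWithFailover(path string) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	var lastErr error
	for _, base := range clusters {
		resp, err := client.Get(base + path)
		if err != nil {
			lastErr = err
			continue // this cluster is down or unreachable, try the next one
		}
		if resp.StatusCode >= 500 {
			resp.Body.Close()
			lastErr = fmt.Errorf("%s returned %d", base, resp.StatusCode)
			continue
		}
		return resp, nil
	}
	return nil, errors.Join(errors.New("all clusters failed"), lastErr)
}

func main() {
	if _, err := doWithFailover("/healthz"); err != nil {
		fmt.Println("no cluster reachable:", err)
	}
}
```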
Netflix is obviously at another scale, I can’t comment on what their needs are, or how their solution looks, but I think it’s fair to say they are an exceptional case.
Sorry, yes, that was durability. I got it mixed up in my head. Availability had lower targets.
But I stand by the gist of my argument - you can achieve a lot with a live/live system, or a 3 node system with a master election, or…
High availability doesn’t have to equate to high cost or complexity, if you take it into account when designing the system.
I used to work on an on-premises object storage system where we required double digits of “nines” availability. High availability is not rocket science. Most scenarios are covered by having 2 or 3 machines.
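The back-of-the-envelope math shows why. Assuming each machine is independently up 99% of the time (an illustrative number, and real failures are rarely fully independent), two or three machines already buy you a lot of nines:

```go
package main

import "fmt"

func main() {
	// Illustrative only: assume each node is independently up 99% of the time.
	perNode := 0.99
	for n := 1; n <= 3; n++ {
		// The service is only down when every node is down at the same time.
		downAll := 1.0
		for i := 0; i < n; i++ {
			downAll *= 1 - perNode
		}
		fmt.Printf("%d node(s): %.6f%% available\n", n, (1-downAll)*100)
	}
}
```

With those assumptions, 2 nodes give 99.99% and 3 nodes give 99.9999%; the hard part in practice is correlated failures (shared power, network, bad deploys), not the arithmetic.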
I’d also wager that using the cloud properly is a different skillset than properly managing or upgrading a Linux system, not necessarily a cheaper or better one from a company point of view.
Got to agree with @Zushii@feddit.de here, although it depends on the scope of your service or project.
Cloud services are good at getting you up and running quickly, but they are very, very expensive to scale up.
I work for a financial services company, and we are paying 7-digit monthly AWS bills for an amount of work that could realistically be done on one really big dedicated server. And now that some of our customers require us to support multiple cloud providers, we’ve spent a TON of effort untangling ourselves from SQS/SNS and other AWS-specific technologies (a rough sketch of the kind of abstraction that helps is at the end of this comment).
Clouds like to tell you:
- it’s cheaper than running your own hardware
- it’s simpler to maintain
- it gets you up and running quickly
The last item is true, but the first two are only true if you are running a small service. Scaling up on a cloud is not cost effective, and maintaining a sprawling cloud architecture can be FAR more complicated than managing a similar centralized architecture.
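For illustration, the kind of decoupling I mean is putting a thin interface in front of the queueing layer so SQS/SNS becomes one implementation among several. A rough sketch in Go, with made-up names rather than our actual code:

```go
package main

import (
	"context"
	"fmt"
)

// Publisher is the only thing application code sees; SQS/SNS, Pub/Sub,
// or a plain in-memory queue are just implementations behind it.
// (The names here are illustrative, not a real library.)
type Publisher interface {
	Publish(ctx context.Context, topic string, payload []byte) error
}

// memoryPublisher is a trivial implementation for local development and
// tests; a production build would swap in an SQS- or Kafka-backed one.
type memoryPublisher struct {
	messages map[string][][]byte
}

func newMemoryPublisher() *memoryPublisher {
	return &memoryPublisher{messages: make(map[string][][]byte)}
}

func (m *memoryPublisher) Publish(_ context.Context, topic string, payload []byte) error {
	m.messages[topic] = append(m.messages[topic], payload)
	return nil
}

func main() {
	var pub Publisher = newMemoryPublisher()
	// Application code only ever depends on the Publisher interface,
	// so switching cloud providers becomes a wiring change, not a rewrite.
	if err := pub.Publish(context.Background(), "payments", []byte(`{"id": 42}`)); err != nil {
		fmt.Println("publish failed:", err)
	}
	fmt.Println("published without touching any AWS-specific API")
}
```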
I have a Ryzen+Radeon Zephyrus G14 from 2022. It’s been great, both battery-life- and performance-wise. I run Linux, but I’m sure Windows is no worse in this regard.
The only thing I can say is that I misjudged the 14” form factor and regret not getting a 16” model, and the mechanism that lifts the laptop off the table with the lid works great on a desk but makes the laptop largely unusable on your lap on the couch.
The problem that the courts haven’t really answered yet is: How much human input is needed to copyright something? 5%? 20%? 50%? 80%? If some AI wrote most of a script and a human writer cleaned it up, is that enough?
Or perhaps even coming up with and writing a prompt is considered enough human input by some.
Yes, because AI and automation will definitely not be on the side of big capital, right? Right?
Be real. The cost of building these models means they’re always going to favour the wealthy. At best, right now we’re running public copies of older and smaller models. Local AI will always be running behind the state-of-the-art proprietary models, which will always be in the hands of the richest moguls and companies in the world.
Surely Elon would prefer the old Lucid fork, https://www.xemacs.org/
This article is trying to conflate two different things:
Antitrust regulation of big tech, which is trying to rein in the power of these companies. This is happening everywhere - including the US, which is currently starting a big antitrust case against Alphabet. The same is happening in the EU and probably the UK.
The UK online safety bill trying to ban private and encrypted communication
These are not the same. Portraying them as two branches of the same tree, and the tech companies as upset bullies because someone is standing up to them, is disingenuous.
Of course they don’t particularly like either, but most of them are threatening to leave over the online safety bill and over the UK trying to puff its chest and show it can regulate these forces post-Brexit.
I don’t see this going well for the UK honestly.
Even if you don’t end up using it, if it enables more users to find their way to Lemmy, we all benefit.
I never really clicked with Sync for reddit, and trying it for Lemmy, all the acknowledgements and agreeing to a privacy policy really rub me the wrong way for a Fediverse client. But if it works for others, I’m all for it.
I’m not convinced. I think a lot more people are susceptible to getting distracted than are susceptible to extreme acts of violence.
The good use cases you mention can easily happen after or outside of classes. And in this day and age, I would say they should be part of assignments, homework, and studying at the high school level, to guide and educate young people in filtering, identifying, and assessing source material. But that’s asking a lot from teachers, who are not experts at this, either.
I don’t see how any of this discussion relates to funding though.
Just FYI: when your drive is encrypted and the system is up and running, the encryption keys are in memory and thus recoverable. And even if they were magically protected by something like SGX or some secure enclave, you can still interact with the machine and the filesystem while it is running.
So full disk encryption is NOT a silver bullet to data protection when being raided.
For many reasons. One is Nvidia requiring secure boot in this case, which is not available for all distros or kernels on all computers.
Another is depending on a working kernel module and userspace component from Nvidia, which means that as soon as Nvidia deprecates your hardware, you’re stuck with legacy drivers, legacy kernels, or both.
Nvidia also has its own separate userspace stack, meaning it doesn’t integrate with the DRM and Mesa stack everyone else uses. For the longest time that meant no Wayland support, and it still means you’re limited to GNOME only on Wayland when using Nvidia, AFAIK.
Another issue is switchable graphics. Systems with switchable graphics typically combine a Mesa-based driver stack (i.e. everyone but Nvidia; in practice AMD or Intel integrated graphics) with an Nvidia one, which means swapping out the entire library chain (OpenGL, Vulkan, and so on). This is typically done with ugly hacks (wrapper scripts using LD_PRELOAD, for example) and is prone to failure. Symptoms range from everything silently running on the integrated graphics, to the discrete GPU never sleeping and causing poor battery life or high power consumption, to booting to a black screen some or all of the time.
If these things don’t bother you, or you have no idea what they mean, or you don’t care about them or about your hardware lasting more than 3-5 years, then it probably isn’t a big deal to you. But none of the above issues exist when using Intel, AMD, or a mix of the two.
In my experience over the past twenty years, proprietary drivers have been the root cause of, I would say, 90% of my issues using Linux.
Literally buy anything but Nvidia. Intel and AMD have upstream drivers that work regardless of secure boot. Various ARM platforms also have free drivers.
It used to be that there were only bad choices; now there really is only one bad choice left.
Intel Arc still has some teething problems, particularly with power management on laptops, but AMD has been smooth sailing for almost a decade now.
I don’t mind this. It’s unreasonable to expect them to provide a free service forever without any kind of monetization.