• 0 Posts
  • 55 Comments
Joined 1Y ago
Cake day: Jun 18, 2023


I don’t mind this. It’s unreasonable to expect them to provide a free service forever without any kind of monetization.


In a lot of modern workflows this is incompatible with the development pattern.

For example, at my job we have to roll a test release through CI and then deploy it to a test Kubernetes cluster. You can’t even do that if the build is failing because of linting issues.


Depends heavily on the market segment. I also work in Europe and in my 15 years as a software developer (the first 6-7 as C/C++ developer) I’ve never seen anyone use Visual Studio.


What would you do if you had a million dollars?


Odd to see the gap between C and Pascal this big. Is this a matter of lacking optimization effort for less popular languages?



They’ve been talking about removing the GIL since I was in primary school. My children are in primary school now. I’ll believe it when I see it.


This is Indian English, same as “I have a doubt”. It’s not something you commonly hear outside India.


Why not? It costs nothing, apart from transforming the old format into something the current site can work with, or more likely, having the old site keep supporting the old format.


Seeing the runaway success of others like Nvidia, Apple, and MediaTek, do you think any meaningful new entrants are going to deviate from their playbook?

Being a good citizen with regard to firmware transparency and Linux support is not a proven differentiator for these vendors, and it has been shown time and time again not to be a requirement for success.


I don’t understand what you mean. Why does ARM hardware become obsolete after a few years? Lacking ongoing software support and no mainline Linux?

What does that have to do with the instruction set license? If you think RISC-V implementors who actually make the damn chips won’t ship locked hardware that only runs signed and encrypted binary blobs, you are in for a disappointing ride.

Major adopters like WD and Nvidia didn’t pick RISC-V over ARM for our freedoms. They were testing the waters to see if they could stop paying the ARM tax. All the other stuff will stay the same.


You give him far too much credit. Never attribute to malice what can adequately be explained by stupidity.


Given how little Spotify pays artists, I can’t imagine this being a cost-effective way to launder your money at all.


While C is certainly better for some problems in my experience, it too is very hard to use in large projects with a mixed group of developers, and it is unsuitable for most higher-level applications at most companies.

I think C still has its place in the world, mostly confined to low-level embedded, kernel space, and malware. I do believe that the market segment that used to rely on C++ is today better served by either Go or Rust, depending on the project.

That said, while I LOVE working with Rust, it suffers from many of the same issues I mentioned for C++ in my comment above when working in a mixed skillset team.


Everything is fine within the scope of a college course or project.

Where C++ breaks down is large, complicated projects where you collaborate with other developers over multiple years.

I worked in C++ for almost a decade, and while there were a few good projects I encountered, most suffered from one or more of the following problems:

  • C++ has so many parts that everyone picks a subset they think is “good”, but no one seems to fully agree on what that subset is.
  • A side effect of the many possibilities C++ offers to compose or abstract your project is that it allows for developers to be “clever”. However, this often results in code that is hard to maintain or understand, especially for other developers.
  • Good C++ is very hard. Not everyone is a C++ veteran who has read dozens of books or has a robust body of knowledge of all its quirks and pitfalls, and those people too are often assigned to your project and contribute to it. I was certainly never an expert, despite a lot of time and effort spent learning and using C++.

Back when I was still in school, I ran a few tests on real-world LISP and Java code (the then-dominant language; this was in the late days of Sun Microsystems’ success).

Turns out most LISP programs had fewer parentheses than Java had braces, parens, and brackets.
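The comparison was crude: just count structural delimiters per file. A minimal Python sketch of the idea (the snippets below are illustrative stand-ins, not the original test corpus):

```python
def count_delimiters(source: str, delimiters: str) -> int:
    """Count occurrences of the given delimiter characters in source text."""
    return sum(source.count(ch) for ch in delimiters)

lisp_snippet = "(defun square (x) (* x x))"
java_snippet = "int square(int x) { return (x * x); }"

# LISP only uses parentheses; Java mixes parens, braces, and brackets.
print(count_delimiters(lisp_snippet, "()"))      # 6
print(count_delimiters(java_snippet, "(){}[]"))  # 6
```

Run that over whole codebases instead of one-liners and you get the comparison I described.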


Well, if your MAC address changes every time you connect to a different network, Unity would be detecting and billing a lot of false positives, so it would be a bad method for identifying unique devices.


Except iOS will randomize its MAC address at each boot / after a while to prevent users from being tracked by rogue WiFi networks, which is actually a technique used to track consumers in commercial spaces and the like. So that wouldn’t work.
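For what it’s worth, randomized addresses are deliberately distinguishable: they have the locally administered bit set in the first octet. A rough sketch of how such an address can be generated (illustrative only, not how any particular OS implements it):

```python
import random

def random_private_mac() -> str:
    """Generate a randomized, locally administered, unicast MAC address,
    the same flavour of per-network address phones hand out."""
    first = random.randrange(256)
    first |= 0x02   # set the locally-administered bit
    first &= ~0x01  # clear the multicast bit (keep it unicast)
    rest = [random.randrange(256) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

print(random_private_mac())  # e.g. "da:3f:07:91:2c:55"
```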


I’ve been using Firefox since it was called Phoenix. Mozilla, for all its flaws, has been our first and only line of defense for an open web for a long time.


Agreed, but for many services 2 or 3 nines is acceptable.

For the cloud storage system I worked on it wasn’t. That system had different setups for different customers, from a simple 3-node system (the smallest setup, mostly for customers trialing the solution) to a 3-geo setup with at least 9 nodes across 3 different datacenters.

For the financial system, we run a live/live/live setup, with a cluster at each of 3 different cloud operators; the client is expected to know all of them and do failover. That obviously requires a little more complexity on the client side, but in many cases developers or organisations control both ends anyway.

Netflix is obviously at another scale, I can’t comment on what their needs are, or how their solution looks, but I think it’s fair to say they are an exceptional case.


Sorry, yes, that was durability. I got it mixed up in my head. Availability had lower targets.

But I stand by the gist of my argument - you can achieve a lot with a live/live system, or a 3 node system with a master election, or…

High availability doesn’t have to equate to high cost or complexity, if you take it into account when designing the system.


If you really need the scale of 2000 physical machines, you’re at a scale and complexity level where it’s going to be expensive no matter what.

And I think if you need that kind of resources, you’ll still be cheaper off with DIY.


I used to work on an on-premises object storage system, where we required double digits of “nines” availability. High availability is not rocket science. Most scenarios are covered by having 2 or 3 machines.
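The back-of-the-envelope math behind that claim, assuming independent node failures (a generous assumption — correlated failures are exactly why you also spread nodes across geos):

```python
def combined_availability(node_availability: float, nodes: int) -> float:
    """Probability that at least one of `nodes` independent replicas is up."""
    return 1 - (1 - node_availability) ** nodes

# Three mediocre 99%-available nodes already give you roughly six nines:
print(round(combined_availability(0.99, 3), 9))  # 0.999999
```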

I’d also wager that using the cloud properly is a different skillset than properly managing or upgrading a Linux system, not necessarily a cheaper or better one from a company point of view.


Got to agree with @Zushii@feddit.de here, although it depends on the scope of your service or project.

Cloud services are good at getting you up and running quickly, but they are very, very expensive to scale up.

I work for a financial services company, and we are paying 7-digit monthly AWS bills for an amount of work that could realistically be done on one really big dedicated server. And now that some of our customers require us to support multiple cloud providers, we’ve spent a TON of effort trying to untangle from SQS/SNS and other AWS-specific technologies.

Clouds like to tell you:

  • Using the cloud is cheaper than running your own server
  • Using cloud services requires less manpower / labour to maintain and manage
  • It’s easier to get up and running and scale up later using cloud services

The last item is true, but the first two are only true if you are running a small service. Scaling up on a cloud is not cost effective, and maintaining a complicated cloud architecture can be FAR more complicated than managing a similar centralized architecture.


I have a Ryzen+Radeon Zephyrus G14 from 2022. It’s been great, battery-life and performance wise. I run Linux, but I’m sure Windows is no worse in this regard.

The only thing I can say is that I misjudged the 14" form factor and regret not getting a 16" model, and the mechanism that lifts the laptop off the table with the lid works great on a table but makes the laptop largely unusable on your lap on the couch.


Crash his car into the wall and break his wrist :(


The problem that the courts haven’t really answered yet is: How much human input is needed to copyright something? 5%? 20%? 50%? 80%? If some AI wrote most of a script and a human writer cleaned it up, is that enough?

Or perhaps even coming up and writing a prompt is considered enough human input by some.


Yes, because AI and automation will definitely not be on the side of big capital, right? Right?

Be real. The cost of building them means they’re always going to favour the wealthy. At best, right now we’re running public copies of the older and smaller models. Local AI will always be running behind the state-of-the-art big proprietary models, which will always be in the hands of the richest moguls and companies in the world.



This article is trying to conflate two different things:

  • Antitrust regulation of big tech, which is trying to rein in the power of these companies. This is happening everywhere - including the US, which is currently starting a big antitrust case against Alphabet. The same is happening in the EU and probably the UK.

  • The UK online safety bill trying to ban private and encrypted communication

These are not the same. Portraying them as two branches of the same tree, and the tech companies as upset bullies because someone is standing up to them, is disingenuous.

Of course they don’t particularly like either, but most of them are threatening to leave over the online safety bill and over the UK trying to puff its chest and show it can regulate these forces post-Brexit.

I don’t see this going well for the UK honestly.


They got nothing on Crusader Kings patch notes anyway.


These days there are also lithium-ion AA batteries, with different voltages. You can get them regulated down to anything from 1.5 to 1.8 V.

The ones over 1.5 V are commonly used in applications with electric motors, since the extra voltage effectively lets you overdrive the toy or whatever it is you’re powering.


Even if you don’t end up using it, if it enables more users to find their way to Lemmy, we all benefit.

I never really clicked with Sync for Reddit, and trying it for Lemmy, all the acknowledgements and agreeing to the privacy policy really rubs me the wrong way for a Fediverse client. But if it works for others, I’m all for it.


I’m not convinced. I think a lot more people are susceptible to getting distracted than there are susceptible to extreme acts of violence.

Your stated good use cases can easily be performed after/outside of classes. And I would say that, in this day and age, filtering, identifying, and assessing source material should be part of assignments/homework/studying at the high school level, to guide and educate young people. But that’s asking a lot from teachers, who are not experts at this either.

I don’t see how any of this discussion relates to funding though.


Just FYI: when your drive is encrypted and the system is up and running, the encryption keys are in memory and thus recoverable. And even if they were magically protected by something like SGX or some secure enclave, you can still interact with the machine and the filesystem while it is running.

So full disk encryption is NOT a silver bullet for data protection when being raided.


For many reasons. One is Nvidia requiring Secure Boot in this case, which is not available with all distros or kernels on all computers.

The other is requiring a workable kernel module and user space component from Nvidia, which means that as soon as Nvidia deprecates your hardware, you’re stuck with legacy drivers, legacy kernels, or both.

Nvidia also has its own separate userspace stack, meaning it doesn’t integrate with the DRM & Mesa stack everyone else uses. For the longest time that meant no Wayland support, and it still means you’re limited to GNOME only on Wayland when using Nvidia, AFAIK.

Another issue is switchable graphics. Since systems with switchable graphics typically combine a Mesa-based driver stack (i.e. everyone but Nvidia, but typically AMD or Intel integrated graphics) with an Nvidia one, switching involves swapping out the entire library chain (OpenGL, Vulkan, and whatever other libraries). This is typically done with ugly hacks (wrapper scripts using LD_PRELOAD, for example) and is prone to failure. Symptoms range from everything running on the integrated graphics, or the discrete graphics never sleeping and causing poor battery life or high power consumption, to booting to a black screen all or some of the time.

If these things don’t bother you, or you have no idea what they mean, or you don’t care about them or about your hardware lasting more than 3-5 years, then it probably isn’t a big deal to you. But none of the above exist when using Intel, AMD, or a mix of the two.

In my experience over the past twenty years, proprietary drivers are the root cause of, I would say, 90% of my issues using Linux.


Literally buy anything but Nvidia. Intel and AMD have upstream drivers that work regardless of Secure Boot. Various ARM platforms also have free drivers.

It used to be that there were only bad choices; now there really is only one bad choice left.

Intel Arc still has some teething problems, particularly with power management on laptops, but AMD has been smooth sailing for almost a decade now.


Pro tip if you want to use Linux: don’t rely on non-free drivers.


Vote with your wallet. I recently increased my monthly donation to Mozilla.


Did you buy it through the iOS app? Because they are 30% more.