Pretty much the exact reason containerized environments were created.

Terrasque

Yep, I usually make Docker environments for CUDA workloads because of these things. Much more reliable.

You can’t run a different Nvidia driver in a container though

When you hit that config need, the next step is a lightweight VM + PCIe passthrough.

Uranium3006

Some numbnut pushed Nvidia driver code with compilation errors, and now I have to use an old kernel until it’s fixed.

Presi300

Insert JavaScript joke here

spoiler: Error: joke is undefined

@Gabu@lemmy.ml

I prefer ROCM:
R -
O -
C -
M -

Fuck me, it didn’t work again

I don’t know what any of this means, upvoted everything anyway.

baltakatei

Nvidia: I have altered the deal, pray I do not alter it further.

THCDenton

Oh cool, I got the wrong Nvidia driver installed. Guess I’ll reinstall Linux 🙃

Yum downgrade.
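(Aside, a hedged sketch of my own rather than anything from the thread: before downgrading or reinstalling anything, you can confirm a driver/runtime mismatch from CUDA itself.)

```cuda
// check_cuda.cu -- compile with: nvcc check_cuda.cu -o check_cuda
// Illustrative diagnostic sketch (not from the thread): compares the installed
// driver's CUDA support level against the runtime this binary was built with.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVer = 0, runtimeVer = 0;
    cudaDriverGetVersion(&driverVer);    // highest CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtimeVer);  // CUDA runtime version this binary links against

    printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVer / 1000, (driverVer % 1000) / 10,
           runtimeVer / 1000, (runtimeVer % 1000) / 10);

    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // A driver older than the runtime usually shows up here as
        // "CUDA driver version is insufficient for CUDA runtime version".
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("found %d CUDA device(s)\n", count);
    return 0;
}
```

If the driver number comes out lower than the runtime number, downgrading the toolkit (or upgrading the driver) is usually a lot less drastic than reinstalling the OS.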

Avid Amoeba

Not a hot dog.

Gamma

Related to D: today VS Code released an update that made it so you can’t use the remote tools with Ubuntu 18.04 (which is supported with security updates until 2028) 🥴 The only fix is to downgrade.

Skull giver

deleted by creator

Gamma

For sure, I’m on the latest LTS! The problem here is that the Remote SSH tools don’t work on older servers either, so you can no longer use the same workflow you had yesterday if you’re trying to connect to an Ubuntu 18.04 server.

Skull giver

deleted by creator

Skull giver

deleted by creator

Yeah. Fuck stable dev platforms, amirite?

You can cure yourself of that shiny-things addiction, but you have to go attend the meetings yourself.

deleted by creator

I’ve been working with CUDA for 10 years and I don’t feel it’s that bad…

I started working with CUDA at version 3 (so maybe around 2010?) and it was definitely more than rough around the edges at that time. Nah, honestly, it was a nightmare - I discovered bugs and deviations from the documented behavior on a daily basis. That kept up for a few releases, although I’ll mention that NVIDIA was/is really motivated to push CUDA for general-purpose computing, so the support was top notch - it still was in no way pleasant to work with.

That being said, our previous implementation was using OpenGL and did in fact produce computational results as a byproduct of rendering noise on a lab screen, so there’s that.

I don’t know wtf cuda is, but the sentiment is pretty universal: please just fucking work I want to kill myself

CUDA turns a GPU into a very fast processor for specific operations. It won’t replace the CPU, it just assists it.

Graphics are just math: plenty of operations to display beautiful 3D models with beautiful lights, shadows, and shine.

The same math used to display 3D can be used to calculate other stuff, like ChatGPT’s engine.
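To make the “GPU assists the CPU with math” point concrete, here is a minimal CUDA sketch of my own (an illustration, not something from the thread): the CPU allocates the data and launches a kernel, and the GPU adds a million numbers in parallel, one element per thread.

```cuda
// vec_add.cu -- compile with: nvcc vec_add.cu -o vec_add
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles exactly one element of the arrays.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // ~1 million elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the sketch short; explicit cudaMemcpy is the usual route.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);   // CPU launches, GPU computes
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);               // expect 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same pattern, a huge number of tiny identical operations, is what both 3D rendering and neural-network engines boil down to.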

MustrumR

I program 2-3 layers above (TensorFlow) and those words reverberate all the way up.

Bipta

I program and those words reverberate.

Pyro

I reverberate.

Scew

be.

Recently, I’ve just given up trying to use CUDA for machine learning. Instead, I’ve been using (relatively) CPU-intensive activation functions & architectures to make up the difference. It hasn’t worked, but I can at least consistently inch forward.
