this post was submitted on 12 Jun 2024
11 points (100.0% liked)

Memes

top 10 comments
[–] JPAKx4@lemmy.blahaj.zone 2 points 2 months ago (1 children)

Now where is the shovel head maker, TSMC?

[–] kautau@lemmy.world 1 points 2 months ago

And then China popping their head out claiming Taiwan is part of China because they want to seize TSMC

[–] Venator@lemmy.nz 2 points 2 months ago

Edited the price to something more nvidiaish: 1000009536

[–] zakobjoa@lemmy.world 1 points 2 months ago

They will eat massive shit when that AI bubble bursts.

[–] phoenixz@lemmy.ca 0 points 2 months ago (2 children)

Serious Question:

Why is Nvidia AI king and I see nothing of AMD for AI?

[–] Naz@sh.itjust.works 1 points 2 months ago* (last edited 2 months ago) (1 children)

I'm an AI Developer.

TLDR: CUDA.

Getting ROCM to work properly is like herding cats.

You need a custom implementation for your specific operating system; the driver version must be locked and compatible (especially with a Workstation / WRX card, where the Pro drivers are particularly prone to breaking); you need the specific dependencies compiled for your variant of hipBLAS, or ZLUDA; if that doesn't work, you need ONNX transition graphs, but then you find out PyTorch doesn't support ONNX unless it's 1.2.0, which breaks another dependency of X-Transformers, which then breaks because that version of hipBLAS is incompatible with the older version of Python, and ..

Inhales

And THEN MAYBE it'll work at 85% of the speed of CUDA. If it doesn't crash first due to an arbitrary error such as CUDA_UNIMPLEMENTED_FUNCTION_HALF.

You get the picture. On Nvidia it's: click, open, CUDA working? Yes? Done. You don't spend 120 hours fucking around and recompiling for your specific use case.

[–] barsoap@lemm.ee 1 points 2 months ago* (last edited 2 months ago)

Also, you need a supported card. I have a potato going by the name RX 5500, not on the supported list. I have the choice between three rocm versions:

  1. An age-old prebuilt. Generally works, but occasionally crashes the graphics driver, unrecoverably so: Linux tries to re-initialise everything, but that fails and the machine needs a proper reset. I also need to tell it to pretend I have a different card.
  2. A custom-built one, which I fished out of a docker image I found on the net because I can't be arsed to build that behemoth. It's dog-slow, due to using all generic code and no specialised kernels.
  3. A newer prebuilt. Works fine for some, or should I say very few, workloads (mostly just BLAS stuff); otherwise it simply hangs. Presumably because they updated the kernels and now they're using instructions that my card doesn't have.

#1 is what I'm actually using. I can deal with a random crash every other day to every other week or so.
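The "pretend I have a different card" trick above is, in the usual community recipe, the `HSA_OVERRIDE_GFX_VERSION` environment variable, which makes the ROCm runtime treat an unsupported arch as a supported one (the RX 5500 is gfx1012; `10.3.0`, i.e. gfx1030, is the override people commonly use for RDNA1 cards). A minimal sketch, with `spoofed_rocm_env` as a hypothetical helper; the variable has to be set before the first ROCm/PyTorch import in the process:

```python
import os

def spoofed_rocm_env(gfx_version: str = "10.3.0") -> dict:
    """Copy of the current environment with the ROCm arch override set.

    HSA_OVERRIDE_GFX_VERSION makes ROCm treat an unsupported card
    (e.g. RX 5500, gfx1012) as the given arch; "10.3.0" (gfx1030) is
    the common community spoof target for RDNA1 cards. Unsupported
    and unofficial, which matches the crashes described above.
    """
    env = dict(os.environ)
    env["HSA_OVERRIDE_GFX_VERSION"] = gfx_version
    return env
```

You would then launch the workload in that environment, e.g. `subprocess.run(["python", "train.py"], env=spoofed_rocm_env())`, rather than mutating the environment after PyTorch is already imported.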

It really would not take much work for them to offer a fourth version: one that's not "supported-supported" but "we're making sure this thing runs": current rocm code, kernels written for other cards if they happen to work, generic code otherwise.

Seriously, rocm is making me consider Intel cards. Price/performance is decent, there's plenty of VRAM (at least for its class), and apparently their API support is actually great. I don't need cuda or rocm, after all; what I need is pytorch.
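The "what I need is pytorch" point can be sketched in a few lines: the ROCm build of PyTorch reuses the `torch.cuda` namespace, so one code path covers Nvidia, AMD, and CPU. `pick_device` is a hypothetical helper, and the sketch assumes only a stock PyTorch install (it falls back to CPU if none is present):

```python
def pick_device() -> str:
    """Return "cuda" if a GPU backend is usable, else "cpu".

    torch.cuda.is_available() returns True on both the CUDA build
    (Nvidia) and the ROCm build (AMD); torch.version.hip is non-None
    only on the ROCm build, if the two ever need to be told apart.
    """
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch installed at all
    return "cuda" if torch.cuda.is_available() else "cpu"
```

The same script then runs unchanged on either vendor's card, which is exactly why the backend (cuda vs rocm) stops mattering once PyTorch itself works.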

[–] morrowind@lemmy.ml 1 points 2 months ago

Simple Answer:

Cuda

[–] Meowie_Gamer@lemmy.world -1 points 2 months ago (1 children)

Nvidia's being pretty smart here ngl

This is the ai gold rush and they sell the tools.

[–] Meltrax@lemmy.world 1 points 2 months ago

Yes that's the meme.