this post was submitted on 25 Jul 2024
655 points (100.0% liked)

196

[–] PriorityMotif@lemmy.world 7 points 4 months ago (2 children)

You can probably find a used workstation/server capable of taking 256 GB of RAM for a few hundred bucks, and fit at least a few GPUs in it. You'll probably spend a few hundred on top of that to max out the RAM. Performance doesn't improve much past 4 GPUs because the CPU has a hard time keeping up with the traffic. So for a budget build you're looking at around $2k, unless you have a cheap/free local source.
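As a rough sanity check on whether 256 GB of system RAM is enough to hold a large model, here is a back-of-envelope sizing sketch. All figures (parameter counts, 4-bit quantization, ~20% overhead for KV cache and activations) are illustrative assumptions, not benchmarks:

```python
def model_ram_gb(params_billion: float, bits_per_param: float,
                 overhead: float = 0.2) -> float:
    """Approximate resident size of a quantized model in GB.

    overhead: assumed extra fraction for KV cache / activations.
    """
    weight_gb = params_billion * 1e9 * (bits_per_param / 8) / 1e9
    return weight_gb * (1 + overhead)

RAM_BUDGET_GB = 256  # the maxed-out workstation described above

for params in (70, 180, 405):
    need = model_ram_gb(params, bits_per_param=4)
    print(f"{params}B @ 4-bit: ~{need:.0f} GB -> fits: {need <= RAM_BUDGET_GB}")
```

By this estimate even a ~400B-parameter model at 4-bit just squeezes into 256 GB, which is why the RAM cap matters more than GPU count for this kind of build.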

[–] areyouevenreal@lemm.ee 3 points 4 months ago

Without sufficient VRAM it probably couldn't be GPU-accelerated effectively. Regular RAM is for the CPU. You can swap data between the two pools, and some AI engines do this to run larger models, but it's a slow process and you probably wouldn't gain much from it unless you're using huge GPUs with lots of VRAM: PCIe just isn't as fast as local RAM or VRAM. In practice it would still run on the CPU, just very slowly.
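A quick numeric sketch of why swapping weights over PCIe hurts. The bandwidth figures are rough assumptions (PCIe 4.0 x16 ~32 GB/s, GDDR6X VRAM ~1000 GB/s, dual-channel DDR4 ~50 GB/s), and the layer size is hypothetical:

```python
PCIE_GBPS = 32.0    # assumed PCIe 4.0 x16 throughput
VRAM_GBPS = 1000.0  # assumed high-end GPU memory bandwidth
DDR4_GBPS = 50.0    # assumed dual-channel DDR4 bandwidth

def read_time_ms(size_gb: float, bandwidth_gbps: float) -> float:
    """Time to stream `size_gb` of weights at a given bandwidth."""
    return size_gb / bandwidth_gbps * 1000

LAYER_GB = 0.5  # hypothetical size of one transformer layer's weights

print(f"VRAM: {read_time_ms(LAYER_GB, VRAM_GBPS):.2f} ms")
print(f"DDR4: {read_time_ms(LAYER_GB, DDR4_GBPS):.2f} ms")
print(f"PCIe: {read_time_ms(LAYER_GB, PCIE_GBPS):.2f} ms")
```

Under these assumptions, pulling a layer across PCIe takes ~30x longer than reading it from VRAM, so any layer that has to be shuttled in from system RAM dominates the step time.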

[–] AdrianTheFrog@lemmy.world 1 points 4 months ago

PCIe will probably be the bottleneck long before the number of GPUs is, if you're planning to keep the model in RAM. Probably better to get a high-end server CPU.
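The reasoning above can be sketched numerically: when the weights live in system RAM, generating each token requires streaming roughly the whole model through the memory bus, so throughput is bounded by bandwidth divided by model size. The bandwidth numbers below are illustrative assumptions, not measurements:

```python
def tokens_per_sec(model_gb: float, mem_bandwidth_gbps: float) -> float:
    """Upper-bound token rate if every token reads all weights once."""
    return mem_bandwidth_gbps / model_gb

MODEL_GB = 40  # e.g. a ~70B model at roughly 4-bit quantization

# Assumed aggregate memory bandwidths for two classes of machine.
for name, bw in [("desktop, 2-ch DDR4", 50), ("server, 12-ch DDR5", 460)]:
    print(f"{name}: ~{tokens_per_sec(MODEL_GB, bw):.1f} tok/s")
```

That roughly 9x gap in memory bandwidth is the argument for a high-end server CPU: more memory channels raise the ceiling far more than adding GPUs that are starved behind the same PCIe links.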