this post was submitted on 28 Apr 2025
39 points (97.6% liked)

LocalLLaMA

[–] brucethemoose@lemmy.world 3 points 5 days ago* (last edited 5 days ago) (1 children)

I'm actually more lukewarm on this one!

  • Only 32K context without YaRN, and with YaRN Qwen 2.5 was kinda hit or miss (rough config sketch below the list).

  • No 32B base model. Is that a middle finger to the DeepSeek distills?

  • It really feels like "more of Qwen 2.5/1.5" architecture-wise. I was hoping for better attention mechanisms, QAT, a bitnet test, logit distillation... something new beyond training-data optimizations and more scale.
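
For context on the YaRN point: Qwen's 32K-native models are usually stretched to longer contexts by adding a rope_scaling entry to the model config. Here's a minimal sketch of what that looks like with Hugging Face transformers, assuming a hypothetical Qwen/Qwen3-32B checkpoint and a 4x scaling factor (the exact key names and values are from the Qwen model cards and vary between transformers versions, so treat this as illustrative, not gospel):

```python
from transformers import AutoConfig, AutoModelForCausalLM

MODEL_ID = "Qwen/Qwen3-32B"  # illustrative model id

# Load the stock config (32K native context) and bolt on YaRN rope scaling.
# Key-name assumption: older transformers releases use "type", newer ones
# accept "rope_type"; check the model card for the exact form.
config = AutoConfig.from_pretrained(MODEL_ID)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,                              # 32K * 4 ≈ 128K tokens
    "original_max_position_embeddings": 32768,  # pre-extension limit
}
config.max_position_embeddings = 131072

# Load the weights with the extended-context config.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, config=config)
```

The catch, as with Qwen 2.5, is that static YaRN scaling applies even to short prompts, so quality on normal-length inputs can take a hit, which is why it's "hit or miss."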

[–] pebbles@sh.itjust.works 2 points 5 days ago (1 children)

There actually is a 32B dense model.

[–] brucethemoose@lemmy.world 3 points 5 days ago (1 children)

Yeah, but only an Instruct version. They didn't release a 32B base model like they did for the 30B MoE.

That could be intentional, to stop anyone from building on their 32B dense model.

[–] pebbles@sh.itjust.works 3 points 4 days ago* (last edited 4 days ago) (1 children)

Huh, I didn't realize that. Thanks. Lame that they would hold back the biggest size most consumers would ever actually run.

[–] brucethemoose@lemmy.world 3 points 4 days ago* (last edited 4 days ago)

It could be an oversight; no one has answered yet. Not many people asking, either, heh.