this post was submitted on 06 Oct 2024
111 points (88.8% liked)

Technology

59554 readers
3048 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
top 8 comments
[–] remotelove@lemmy.ca 68 points 1 month ago

This kind of skill might help developers build AI agents that identify buttons or fields on a webpage ~~to handle tasks like making a reservation at a restaurant.~~

... to improve efficiency of click farms and to bypass captchas.

[–] simple@lemm.ee 41 points 1 month ago* (last edited 1 month ago)

This reads like an ad. They claim to use 1,000 times less data than proprietary models, but nobody actually knows how much data those proprietary models use or how big they really are. Also, there's a giant asterisk they fail to mention: Molmo outperforms the competition at visual benchmarks, not actual text chat.

[–] pennomi@lemmy.world 14 points 1 month ago

Daaaang, Apache license AND open dataset + training tools.

[–] homoludens 11 points 1 month ago (1 children)

but an order of magnitude smaller

I'm pretty sure that would be three orders of magnitude.
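To make the arithmetic explicit (the 1,000× figure comes from the article quote below, not from me):

```python
import math

# An "order of magnitude" is a factor of ten, so a 1,000x reduction
# in training data is three orders of magnitude.
reduction = 1000
orders = round(math.log10(reduction))
print(orders)  # → 3
```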

[–] FaceDeer@fedia.io 16 points 1 month ago (1 children)

They're not talking about the same thing.

Last week, researchers at the Allen Institute for Artificial Intelligence (Ai2) released a new family of open-source multimodal models competitive with state-of-the-art models like OpenAI’s GPT-4o—but an order of magnitude smaller.

That's in reference to the size of the model itself.

They then compiled a more focused, higher quality dataset of around 700,000 images and 1.3 million captions to train new models with visual capabilities. That may sound like a lot, but it’s on the order of 1,000 times less data than what’s used in proprietary multimodal models.

That's in reference to the size of the training data that was used to train the model.

Minimizing both of those things is useful, but for different reasons. A smaller training set makes the model cheaper to train, and a smaller model is cheaper to run.
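A rough back-of-envelope sketch of why the two sizes matter differently, using the common ≈6·N·D FLOPs rule of thumb for transformer training (N parameters, D training tokens) and ≈2·N FLOPs per generated token for inference. The parameter and token counts below are illustrative, not figures from the article:

```python
# Training compute scales with BOTH model size and dataset size;
# per-token inference compute scales with model size only.
# 6*N*D and 2*N are standard transformer rules of thumb, not exact costs.

def train_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

def infer_flops_per_token(params: float) -> float:
    return 2.0 * params

# Illustrative: a 70B vs. 7B model trained on the same 1T tokens.
big = train_flops(params=70e9, tokens=1e12)
small = train_flops(params=7e9, tokens=1e12)
print(big / small)                 # ~10: 10x smaller model, ~10x cheaper to train
print(infer_flops_per_token(70e9) / infer_flops_per_token(7e9))  # ~10x cheaper to run
```

Shrinking the dataset by 1,000× instead would cut training cost by ~1,000× but leave inference cost unchanged, which is why the two claims shouldn't be conflated.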

[–] General_Effort@lemmy.world 1 points 1 month ago

After a quick skim, it seems like the article has several errors. Molmo is trained on top of Qwen; the smallest variants are trained on models from the same company that makes Molmo.

[–] lunarul@lemmy.world 11 points 1 month ago

Instead of writing captions, the team asked annotators to record 60- to 90-second verbal descriptions answering a list of questions about each image. They then transcribed the descriptions—which often stretched across several pages—and used other large language models to clean up, crunch down, and standardize them.

So those other LLMs are needed to train this one?
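The pipeline the quote describes can be sketched roughly like this. `llm_rewrite` is a hypothetical stand-in for the cleanup LLM, not an API named in the article:

```python
# Sketch of the captioning pipeline from the quote: annotators speak a
# description, it gets transcribed, then another LLM condenses and
# standardizes it. llm_rewrite is a placeholder stub, not a real API.

def llm_rewrite(transcript: str, max_words: int = 50) -> str:
    """Stand-in for the cleanup LLM; here it just normalizes and truncates."""
    words = transcript.split()
    return " ".join(words[:max_words])

def build_caption(verbal_description: str) -> str:
    transcript = verbal_description  # real pipeline: speech-to-text happens here
    return llm_rewrite(transcript)

print(build_caption("A  red bicycle leaning against a brick wall"))
```

So yes: per the article, existing LLMs sit in the data-preparation stage, turning long spoken transcripts into clean training captions.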

[–] chemical_cutthroat@lemmy.world 1 points 1 month ago

And a modern calculator has more computing power than the Apollo program did... This is how tech works.