I’ve been exploring MariaDB 11.8’s new vector search capabilities for building AI-driven applications, particularly with local LLMs for retrieval-augmented generation (RAG) over fully private data that never leaves the machine. I’m curious how others in the community are using these features in their projects.

I’m especially interested in using it with local LLMs (like Llama or Mistral) to keep data on-premise and avoid cloud-based API costs or security concerns.

Does anyone have experience to share? In particular, which LLMs are you using to generate the embeddings you store in MariaDB?
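
For concreteness, here is roughly the setup I have in mind. It's a minimal sketch, not tested end to end: it assumes the VECTOR column type, VECTOR INDEX, and the VEC_FromText() / VEC_DISTANCE_COSINE() functions as described in the MariaDB 11.7+/11.8 vector documentation, the `mariadb` Python connector, and sentence-transformers' all-MiniLM-L6-v2 for local 384-dimensional embeddings.

```python
import mariadb                                          # pip install mariadb
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # runs fully locally, 384-dim output

conn = mariadb.connect(host="localhost", user="rag", password="rag", database="rag")
cur = conn.cursor()

# Vector column plus a vector index; check your server's docs for the exact
# index-option syntax (M=..., DISTANCE=...).
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id INT AUTO_INCREMENT PRIMARY KEY,
        content TEXT NOT NULL,
        embedding VECTOR(384) NOT NULL,
        VECTOR INDEX (embedding) DISTANCE=cosine
    )
""")

def to_vec_text(vec) -> str:
    """Serialize a float sequence into the '[x,y,...]' text VEC_FromText() parses."""
    return "[" + ",".join(f"{x:.6f}" for x in vec) + "]"

def add_document(text: str) -> None:
    """Embed a chunk of text locally and store it alongside its vector."""
    cur.execute(
        "INSERT INTO docs (content, embedding) VALUES (?, VEC_FromText(?))",
        (text, to_vec_text(model.encode(text))),
    )
    conn.commit()

def search(query: str, k: int = 5):
    """Return the k chunks closest to the query by cosine distance."""
    cur.execute(
        """SELECT content,
                  VEC_DISTANCE_COSINE(embedding, VEC_FromText(?)) AS dist
           FROM docs
           ORDER BY dist
           LIMIT ?""",
        (to_vec_text(model.encode(query)), k),
    )
    return cur.fetchall()

add_document("MariaDB 11.8 ships a VECTOR type and vector indexes usable for RAG.")
for content, dist in search("How do I do RAG with MariaDB?"):
    print(f"{dist:.4f}  {content}")
```

The retrieved chunks would then be pasted into the prompt of a local Llama or Mistral model as RAG context, so nothing leaves the machine.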

[–] AddiXz@feddit.nl 2 points 22 hours ago

I used all-minilm, which worked well for documents, but it doesn't work for images (or didn't the last time I used it). I was using it in combination with N8N and Qdrant, though.

[–] swelter_spark@reddthat.com 1 point 20 hours ago

MixedBread is nice.
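
Assuming that refers to mixedbread's embedding models, swapping one into the sketch above would be a one-line change via sentence-transformers; note that the mxbai-embed-large-v1 checkpoint outputs 1024-dimensional vectors, so the column would need to be VECTOR(1024) instead of VECTOR(384).

```python
from sentence_transformers import SentenceTransformer

# mixedbread-ai/mxbai-embed-large-v1 (Hugging Face) in place of all-MiniLM-L6-v2;
# its embeddings are 1024-dimensional, so the table definition must use VECTOR(1024).
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
vec = model.encode("MariaDB vector search with local embeddings")
print(len(vec))  # expected: 1024
```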