LocalLLaMA

Hi, I'm not too informed about LLMs, so I'd appreciate any corrections to what I might be getting wrong. I have a collection of books I would like to train an LLM on, so I could use it as a quick source of information on the topics the books cover. Is this feasible?

[–] will_a113@lemm.ee 7 points 2 days ago (2 children)

The easiest option for a layperson is retrieval-augmented generation, or RAG. Basically, you encode your books as embeddings, load them into a special kind of database (a vector database), and then tell a regular base-model LLM to consult that data when answering. I know ChatGPT has a built-in UI for this (and maybe Anthropic does too), but you can also build something out with Langchain or Open WebUI and the model of your choice.
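
To make that idea concrete, here's a minimal hand-rolled sketch of the retrieval step, assuming a local sentence-transformers embedding model (the model name, the sample passages, and the `retrieve` helper are all illustrative, not a recommendation):

```python
# Minimal RAG retrieval sketch: embed passages, find the most similar
# ones to a question, and paste them into the prompt as context.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

# In practice you'd split whole books into overlapping chunks.
passages = [
    "Chapter 3 covers soil preparation for tomatoes.",
    "Chapter 7 explains crop rotation schedules.",
]
passage_vecs = embedder.encode(passages, normalize_embeddings=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = passage_vecs @ q            # cosine similarity (vectors are unit-length)
    top = np.argsort(scores)[::-1][:k]   # indices of the k best matches
    return [passages[i] for i in top]

context = "\n".join(retrieve("How do I prepare soil for tomatoes?"))
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
# `prompt` then goes to whatever LLM you're running; a real setup would use
# a proper vector database instead of an in-memory array.
```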

The next step up from there is fine-tuning, where you partially retrain a base model on your books. This is more complex and time-consuming, but it can give more nuanced answers. It's often done in combination with RAG for particularly large bodies of information.
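
If you do go the fine-tuning route, the common budget-friendly approach is a parameter-efficient method like LoRA. Here's a hedged sketch using Hugging Face's `peft` and `transformers`; the model name, hyperparameters, and `target_modules` are illustrative and depend on the model you pick:

```python
# Hedged LoRA fine-tuning sketch: attach small trainable adapter matrices
# to a frozen base model instead of retraining all of its weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"      # example; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(base)  # needed to tokenize your books
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                                  # adapter rank: small = cheap to train
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention layers; names vary by model
)
model = get_peft_model(model, lora)       # only the adapters will train
model.print_trainable_parameters()        # typically well under 1% of the model
# ...then train with transformers.Trainer on text chunks from your books.
```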

[–] ryedaft@sh.itjust.works 3 points 1 day ago* (last edited 1 day ago) (1 children)

Umm, fine-tuning the model that makes the embeddings, right? Or is there an API for messing with the generative model somewhere? Or are we assuming the newbie has a lot of compute resources? And they would have to use the generative model to create queries for their passages as well, right?

I would try something like

Guides | RAGFlow - https://ragflow.io/docs/dev/category/guides

or a similar tool.

Edit: not for fine-tuning, just to get started. Local models, RAG, your books as your knowledge base.

[–] will_a113@lemm.ee 1 points 1 day ago

Making your own embeddings is for RAG. Most base-model providers have standardized on OpenAI's embedding scheme, but there are many alternatives. Typically you embed a few hundred tokens' worth of data at a time and store each chunk in your vector database. This lets your AI later do some vector math (usually a cosine-similarity search) to see how similar (related) the embeddings are to each other and to what you asked about. There are fine-tuning schemes where you make embeddings before the tuning as well, but most people today use whatever fine-tuning service their base-model provider offers, which usually adds some layers of abstraction.
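
Here's a hedged sketch of that chunk-embed-store flow using Chroma as the vector database (the file name, chunk size, and query are illustrative, and Chroma's default embedder stands in for any embedding scheme):

```python
# Chunk a book, embed and store the chunks, then run a similarity search.
import chromadb  # pip install chromadb

client = chromadb.Client()  # in-memory instance, fine for a demo
books = client.create_collection(
    "books",
    metadata={"hnsw:space": "cosine"},  # use cosine similarity for search
)

text = open("my_book.txt").read()        # hypothetical input file
chunk_size = 1000                        # characters per chunk; tune to taste
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
books.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])

# The query text is embedded the same way, then compared against the store.
hits = books.query(query_texts=["How is crop rotation scheduled?"], n_results=3)
print(hits["documents"][0])              # the three most similar chunks
```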

[–] hendrik@palaver.p3x.de 1 points 2 days ago* (last edited 2 days ago)

And as far as I know, people also do fine-tuning so the model picks up on the style of writing and things like that, for example to mimic an author or the specifics of a genre. To just fetch facts from a pile of text, I'd say RAG is the easier approach. It depends on the use case and the collection of books, however. Fine-tuning is definitely a thing people do as well.