this post was submitted on 04 May 2025

LocalLLaMA


Hi, I'm not too informed about LLMs, so I'd appreciate any corrections to what I might be getting wrong. I have a collection of books I would like to train an LLM on, so I could use it as a quick source of information on the topics covered by the books. Is this feasible?

[–] mitram2@lemm.ee 1 points 1 day ago* (last edited 1 day ago) (1 children)

Would you recommend fine-tuning over RAG to improve domain-specific performance? My end goal would be a small, efficient, and very specialised LLM to help get info on the contents of the books (all of them are about the same topic, from different POVs and authors).

[–] Smokeydope@lemmy.world 1 points 6 hours ago* (last edited 6 hours ago)

I would recommend you read over the work of the person who fine-tuned a Mistral model on many US Army field guides, to understand what fine-tuning on a lot of books to bake in knowledge looks like.
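
To give a rough idea of what that route looks like in practice, here is a minimal sketch of LoRA fine-tuning a causal LM on raw book text with Hugging Face transformers + peft. The model name, file path, and hyperparameters are placeholders I picked for illustration, not what that field-guide project actually used:

```python
# Sketch: LoRA fine-tuning on raw book text to "bake in" knowledge.
# pip install transformers peft datasets
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # assumed base model, swap for your own
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# One big plain-text dump of the book(s); each line becomes a training example.
dataset = load_dataset("text", data_files="books.txt")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"],
)

model = AutoModelForCausalLM.from_pretrained(base)
# Train small LoRA adapters instead of all the base weights to keep cost down.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="book-lora", num_train_epochs=1,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("book-lora")
```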

If you are a newbie just learning how this technology works, I would suggest trying to get RAG working with a small model and one or two books converted to a big text file, just to see how it works. It's cheap/free to just do some tool calling and fill up a model's context; a rough sketch of that is below.
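
Here is roughly what that cheap RAG route can look like: chunk the text file, pick the most relevant chunks for a question with TF-IDF, and stuff them into the prompt of a small local model served over an OpenAI-compatible API (llama.cpp's llama-server, Ollama, etc.). The file name, server URL, and chunking numbers are assumptions; adjust them to your setup:

```python
# Sketch: fill a local model's context with the most relevant book chunks.
# pip install scikit-learn requests
import requests
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CHUNK_CHARS = 1500  # rough chunk size; tune to your model's context window

with open("books.txt", encoding="utf-8") as f:  # your converted book(s)
    text = f.read()
chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]

vectorizer = TfidfVectorizer(stop_words="english")
chunk_vectors = vectorizer.fit_transform(chunks)

def ask(question: str, top_k: int = 4) -> str:
    # Retrieve the top_k chunks most similar to the question.
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, chunk_vectors)[0]
    best = scores.argsort()[::-1][:top_k]
    context = "\n---\n".join(chunks[i] for i in best)

    # Stuff the retrieved passages into the prompt of the local model.
    prompt = (
        "Answer the question using only the excerpts below.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # assumed local server URL
        json={"model": "local", "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(ask("What does the book say about <your topic>?"))
```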

Once you have a little more experience, and if you are financially well off enough that one to two thousand dollars to train a model is who-cares play money to you, then go for fine-tuning.