this post was submitted on 19 Apr 2025

Large Language Models

I'm running ollama with llama3.2:1b, smollm, all-minilm, moondream, and more. I've integrated it with coder/code-server, VS Code, VSCodium, Page Assist, and the CLI, and I've also created a Discord AI user.

I'm an infrastructure and automation guy, not so much a developer, although my field is technically DevOps.

Now, I hear that some llms have "tools." How do I use them? How do I find a list of tools for a model?

I don't think I can simply prompt "Hi llama3.2, list your tools." Is this part of prompt engineering?

What, do you take a model and retrain it or something?

Anybody able to point me in the right direction?

[–] 0x01@lemmy.ml 2 points 2 weeks ago (2 children)

Tools are usually sent along with your request to the provider. A tool is nothing more than an identifier that your code can handle if the llm requests it.

An example: say you want to allow the llm to turn the light in your bedroom on or off. You pass a tool (literally just the word toggle_light in this case) in the llm request along with whatever your input is (part of a chat, a voice command, whatever). If the llm determines that the input warrants toggling the light (say the user says "turn on the light"), it will "use" the tool. Using the tool means literally sending a response back to the code that initiated the request that says "hey, run the light-toggle code with these parameters." Your code is responsible for actually executing that action and telling the llm the result.
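To make the round trip concrete, here's a minimal sketch of that loop in Python. The "model response" is faked so the example is self-contained; in practice it would come back from your llm provider's chat endpoint, and all the names (toggle_light, TOOL_HANDLERS) are just illustrative:

```python
# Sketch of the tool-use round trip: the model doesn't run anything,
# it just returns the name of a tool it wants called; YOUR code
# executes it and reports the result back.

light_on = False

def toggle_light():
    """The actual code the llm can ask us to run."""
    global light_on
    light_on = not light_on
    return {"light_on": light_on}

# Map tool names (the identifiers we advertised) to real functions.
TOOL_HANDLERS = {"toggle_light": toggle_light}

# Pretend the llm saw "turn on the light" and decided to use the tool.
model_response = {"tool_calls": [{"name": "toggle_light", "arguments": {}}]}

# Our code, not the model, executes the call and collects the result,
# which would then be sent back to the llm in a follow-up message.
results = []
for call in model_response.get("tool_calls", []):
    handler = TOOL_HANDLERS[call["name"]]
    results.append(handler(**call["arguments"]))

print(results)  # [{'light_on': True}]
```

The key point: the llm only ever emits the request; the dispatch table and the actual side effect live entirely in your code.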

Tool use is the single most useful thing llms do today imo

[–] chicken@lemmy.dbzer0.com 2 points 2 weeks ago (1 children)

What I'm wondering is, is there a standard format for instructing models to give outputs using the tool? They're specifically trained to be better at doing this, right?

[–] 0x01@lemmy.ml 1 points 1 week ago (1 children)

Ah, for training a new model from scratch? Yes, there is a specific format; you can look at the ollama source code, or at any of the big models that accept tool use (like llama4), for the format both to and from the model. However, unless you're secretly a billionaire, I doubt you could compete with these pretrained models at tool calling.

Ollama's model list on their website has a filter for tool-using models. To be honest, all open source models suck at tool use compared to the big players: OpenAI, Anthropic, Google. To be fair, I don't have any hardware capable of running DeepSeek's newest models, so I haven't tested those for tool use.
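On the format question: tool-capable chat APIs, including ollama's /api/chat endpoint, generally accept an OpenAI-style "tools" array of JSON function schemas rather than free-text prompt instructions. Roughly like this (a sketch; the toggle_light tool is a hypothetical example, and exact field support varies by provider and model):

```python
# Rough sketch of the OpenAI-style function schema that tool-capable
# chat APIs accept in a "tools" array. The runtime/client library
# renders these into the model's chat template for you.
tools = [
    {
        "type": "function",
        "function": {
            "name": "toggle_light",
            "description": "Toggle the bedroom light on or off.",
            "parameters": {
                "type": "object",
                "properties": {},  # this tool takes no arguments
                "required": [],
            },
        },
    }
]

# The request body would then look something like:
request_body = {
    "model": "llama3.2:1b",
    "messages": [{"role": "user", "content": "turn on the light"}],
    "tools": tools,
}
print(request_body["tools"][0]["function"]["name"])  # toggle_light
```

Passing the schema this way, instead of describing tools in the prompt yourself, lets the runtime format them in whatever template the model was actually trained on.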

[–] chicken@lemmy.dbzer0.com 1 points 1 week ago* (last edited 1 week ago)

No, I meant for prompting tool-supporting models to be aware of the functions you're making available to them. I've tried arbitrary prompts to tell them this, and it sort of works, but yeah, the models I've tried don't seem very good at it. I was mainly wondering if using a specific format in the prompt would improve performance.

[–] deckerrj05@lemmy.world 1 points 1 week ago

Love it. Very clear and to the point. Thank you.