this post was submitted on 19 Apr 2025

Large Language Models


A place to discuss large language models.

Rules

  1. Please tag [not libre software] and [never on-device] services as such (those not green in the License column here).
  2. Be useful to others

Resources

github.com/ollama/ollama
github.com/open-webui/open-webui
github.com/Aider-AI/aider
wikipedia.org/wiki/List_of_large_language_models


I'm running ollama with llama3.2:1b, smollm, all-minilm, moondream, and more. I've integrated it with coder/code-server, VS Code, VSCodium, Page Assist, and the CLI, and I've also created a Discord AI user.

I'm an infrastructure and automation guy, not so much a developer, although my field is technically DevOps.

Now, I hear that some LLMs have "tools." How do I use them? How do I find a list of tools for a model?

I don't think I can simply prompt "Hi llama3.2, list your tools." Is this part of prompt engineering?

What, do you take a model and retrain it or something?

Anybody able to point me in the right direction?
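For context on what "tools" means mechanically: they aren't baked into the model weights, and you don't retrain anything. Your client sends a list of tool schemas along with the prompt, the model may reply with a structured "tool call," and your own code executes the function and feeds the result back. A minimal sketch of that loop (the tool name, schema, and reply shape here are illustrative examples, not ollama's exact wire format — check the ollama API docs for the real `tools` field):

```python
import json

def get_weather(city: str) -> str:
    """Example tool: in real use this would call a weather API."""
    return f"Sunny in {city}"

# Registry mapping tool names to callables; this is what YOUR code consults
# when the model asks for a tool -- the model never runs anything itself.
TOOLS = {"get_weather": get_weather}

# The schema you send alongside the prompt so the model knows the tool exists.
TOOL_SCHEMAS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Run the function the model asked for and return its result as text."""
    name = tool_call["function"]["name"]
    args = tool_call["function"]["arguments"]
    if isinstance(args, str):  # some APIs return arguments as a JSON string
        args = json.loads(args)
    return TOOLS[name](**args)

# Pretend the model replied with this structured tool call:
fake_reply = {"function": {"name": "get_weather", "arguments": {"city": "Paris"}}}
result = dispatch(fake_reply)  # -> "Sunny in Paris"
# You would then send `result` back to the model as a "tool"-role message.
```

So "finding a list of tools" mostly means deciding what functions you want to expose; a model card will only tell you whether the model was trained to emit tool calls at all.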

[–] deckerrj05@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

Very nice rough answer, I'll check it out. Thank you!

Of my many projects, the current one is for VS Code (to actively exclude Microsoft from my infrastructure): I'm using containerized code-server with the vscode-llama extension and a Docker client. On my Win 10 laptop (sorry, moving off Windows at the end of Win10 support) I mapped the extension's FIM "on" toggle to a nested WSL call, which wraps a `docker run --rm` command, which launches an ephemeral llama-cpp container for the AI calls. Docker Engine runs in WSL, with ports forwarded from the host to the WSL VM.
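Roughly, the nesting looks like this (a sketch only: the image name, model path, and port are my assumptions, not the exact setup; the function just builds the command string so it can be inspected before launching anything):

```shell
#!/bin/sh
# Hypothetical wrapper for the FIM "on" toggle: a Windows keybinding shells into
# WSL, which runs an ephemeral llama.cpp server container (removed on exit).
build_fim_cmd() {
    model="$1"   # path to a GGUF model visible inside the container (assumed)
    port="$2"    # port forwarded from the WSL VM back to the Windows host
    echo "wsl -e docker run --rm -p ${port}:${port}" \
         "ghcr.io/ggml-org/llama.cpp:server -m ${model} --port ${port} --host 0.0.0.0"
}

# Uncomment to actually launch (requires WSL, Docker Engine in WSL, and the image):
# eval "$(build_fim_cmd /models/coder.gguf 8012)"
```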

Why? Portable, cross-platform and partially ephemeral dev environment with minimal dependencies.

I might even write a Terraform/OpenTofu module for this eventually.

I have a small attention span 💀.