1
4
submitted 9 months ago* (last edited 3 months ago) by autonomoususer@lemmy.world to c/llm@lemmy.world

Rules

  1. Please tag [not libre software] and [never on-device] services as such (those not green in the License column here).
  2. Be useful to others

Resources

github.com/ollama/ollama
github.com/danny-avila/LibreChat
github.com/Aider-AI/aider
wikipedia.org/wiki/List_of_large_language_models

2
4
submitted 1 month ago by sommerset@thelemmy.club to c/llm@lemmy.world
3
4
submitted 1 month ago by sommerset@thelemmy.club to c/llm@lemmy.world
4
7
submitted 1 month ago by monica_b1998@lemmy.world to c/llm@lemmy.world
5
3
submitted 1 month ago by monica_b1998@lemmy.world to c/llm@lemmy.world
6
5
submitted 1 month ago by vimmiewimmie@slrpnk.net to c/llm@lemmy.world

Hello,

I have been looking into a new laptop and keep coming across ones with NPUs heavily advertised. From what I've read, they don't seem very functional at this stage.

They top out at around 45-50 TOPS, it seems. I found some articles and comments suggesting that 'could' be useful for running smaller models locally, but also statements conflicting with that. On top of that, most, if not all, technical use of them seems locked into the Windows environment. Even AMD's GAIA, a program enabling local LLM use, requires a Windows server for it to communicate with, iirc.

So, is there currently any technical use for these that makes it worth grabbing a device with one for tinkering?

I'd considered experimenting with smaller models and seeing what comes of those (if small-model improvements come through as DeepSeek proponents might suggest).

I'm also just generally new to the technology, but intrigued by the potential to run things locally, not least because of the potential to limit the environmental impact of large data centers.

Any comments, ideas, suggestions, or general pointing in a direction is very appreciated.

Thank you for taking the time. Have a good day!

7
13
submitted 2 months ago by mapto@masto.bg to c/llm@lemmy.world

Hallucinations are destroying under-resourced languages

These were abundant even before #GenAI, when they were generated by machine translation. And, for whatever motivation, naive users have flooded crowdsourced resources with such hallucinations.

https://www.technologyreview.com/2025/09/25/1124005/ai-wikipedia-vulnerable-languages-doom-spiral/

@llm

8
5
submitted 4 months ago* (last edited 4 months ago) by VoxAliorum@lemmy.ml to c/llm@lemmy.world

Yesterday I had a brilliant idea: why not parse the wiki of my favorite tabletop roleplaying game into YAML via an LLM? I had tried the same with BeautifulSoup a couple of years ago, but the page is very inconsistent, which makes it quite difficult to parse using traditional methods.

However, my attempts with a local Mistral model (the one you get with ollama pull mistral) were not very successful: it first insisted on writing more than just the YAML code, and later had trouble with more complex pages like https://dsa.ulisses-regelwiki.de/zauber.html?zauber=Abvenenum So I thought I had to give it some examples in the system prompt, but while one example helped a little, when I included more, it sometimes started to just return one of the examples I had given it via the system prompt.

To give some idea: the bold stuff should become keys in the YAML structure, and the part that follows becomes the value. Sometimes values need a bit more parsing, like separating page numbers from book names; I would give examples for all of that.

Any idea what model to use for that or how to improve results?
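One approach that might help is pinning the schema down in the system prompt and sending each page's text through Ollama's HTTP API with the temperature at 0, so the model stops improvising. A rough sketch (the instruction wording is illustrative, not tested):

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "system": "Convert the given wiki page text to YAML. Output only YAML, no commentary. Bold labels become keys; the text that follows each label becomes its value.",
  "prompt": "<page text here>",
  "options": { "temperature": 0 },
  "stream": false
}'

A larger instruction-following model such as qwen2.5:14b may also track the schema better than Mistral 7B, if the hardware allows.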

9
9
submitted 4 months ago by HelloRoot@lemy.lol to c/llm@lemmy.world

Since OpenAI removed access to GPT-4.5, I am looking for something comparable from any other company.

Personally, I used it when 4o was not good enough; 4.5 was way better at research and at more complex programming tasks.

What is comparably good in your experience?

10
4
submitted 4 months ago by vermaterc@lemmy.ml to c/llm@lemmy.world
11
6
submitted 6 months ago* (last edited 6 months ago) by autonomoususer@lemmy.world to c/llm@lemmy.world

They cry when companies profit from their work, while ignoring the most blatant solution from the start: the AGPL.

Now, its libre software license text file has been replaced with a fake, banning us users from freely forking new versions.

Open WebUI v0.6.6+ ... now adds a ... branding ... clause.

The original BSD-3 license continues to apply for all contributions made to the codebase up to and including release v0.6.5.

12
4
submitted 8 months ago* (last edited 8 months ago) by autonomoususer@lemmy.world to c/llm@lemmy.world

Open WebUI lets you download and run large language models (LLMs) on your device using Ollama.

Install Ollama

See this guide: https://lemmy.world/post/27013201

Install Docker (recommended Open WebUI installation method)

  1. Open Console, type the following command and press return. This may ask for your password but not show you typing it.
sudo pacman -S docker
  2. Enable the Docker service [on-device and runs in the background] to start with your device and start it now.
sudo systemctl enable --now docker
  3. Allow your current user to use Docker.
sudo usermod -aG docker $(whoami)
  4. Log out and log in again, for the previous command to take effect.
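To confirm Docker works without sudo once you are logged back in, a quick test that pulls a tiny image from Docker Hub:

docker run --rm hello-world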

Install Open WebUI on Docker

  1. Check whether your device has an NVIDIA GPU.
  2. Use only one of the following commands.

Your device has an NVIDIA GPU:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

Your device has no NVIDIA GPU:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
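Either command publishes Open WebUI on host port 3000 and keeps its data in the open-webui volume. To check that the container came up:

docker ps --filter name=open-webui
docker logs open-webui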

Configure Ollama access

  1. Edit the Ollama service file. This uses the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  2. Add the following, save and exit.
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
  3. Restart the Ollama service.
sudo systemctl restart ollama
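To confirm Ollama is now listening on all interfaces rather than only loopback (assuming ss from iproute2 is installed):

ss -tln | grep 11434

The output should show the port bound to 0.0.0.0 (or *) instead of 127.0.0.1. Note this also exposes Ollama to other devices on your network, not only Docker.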

Get automatic updates for Open WebUI (not models, Ollama or Docker)

  1. Create a new service file to get updates using Watchtower once every time Docker starts.
sudoedit /etc/systemd/system/watchtower-open-webui.service
  2. Add the following, save and exit.
[Unit]
Description=Watchtower Open WebUI
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
  3. Enable this new service to start with your device and start it now.
sudo systemctl enable --now watchtower-open-webui
  4. (Optional) Get updates at regular intervals instead. Without --run-once, Watchtower keeps running and checks every 24 hours by default (tune with --interval).
docker run -d --name watchtower --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower open-webui
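Alternatively, to keep scheduling in systemd rather than running a persistent Watchtower container, a timer can re-trigger the oneshot service above. A minimal sketch, saved as /etc/systemd/system/watchtower-open-webui.timer (this assumes RemainAfterExit=true is removed from the service, otherwise repeat activations are skipped):

[Unit]
Description=Update Open WebUI daily via Watchtower

[Timer]
OnBootSec=15min
OnUnitActiveSec=1d

[Install]
WantedBy=timers.target

Enable it with sudo systemctl enable --now watchtower-open-webui.timer instead of enabling the service directly.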

Use Open WebUI

  1. Open localhost:3000 in a web browser.
  2. Create an on-device Open WebUI account as shown.
13
8
submitted 8 months ago by deckerrj05@lemmy.world to c/llm@lemmy.world

I'm running Ollama with llama3.2:1b, smollm, all-minilm, moondream, and more. I've been able to integrate it with coder/code-server, VS Code, VSCodium, Page Assist, the CLI, and I even created a Discord AI user.

I'm an infrastructure and automation guy, not so much a developer, although my field is technically DevOps.

Now, I hear that some LLMs have "tools". How do I use them? How do I find a list of tools for a model?

I don't think I can simply prompt "Hi llama3.2, list your tools." Is this part of prompt engineering?

What, do you take a model and retrain it or something?

Anybody able to point me in the right direction?
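For what it is worth, tools are not something you retrain into a model or list from inside a chat. Models that support tool calling (the Ollama library tags them with "tools") accept tool definitions in each API request and answer with a structured call for your own code to execute. A minimal sketch against Ollama's /api/chat, where get_weather is a made-up function the calling code would have to implement:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
  "stream": false,
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
      }
    }
  }]
}'

If the model supports tools, the reply contains a message.tool_calls entry naming get_weather and its arguments instead of plain text; the calling code runs the function and sends the result back as a follow-up message.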

14
7
A2A protocol (lemmy.world)

Did any of you already take a look at the A2A protocol page on GitHub?

15
5
submitted 8 months ago by autonomoususer@lemmy.world to c/llm@lemmy.world

cross-posted from: https://lemmy.dbzer0.com/post/41844010

The problem is simple: consumer motherboards don't have that many PCIe slots, and consumer CPUs don't have enough lanes to run 3+ GPUs at full PCIe gen 3 or gen 4 speeds.

My idea was to buy 3-4 computers for cheap, slot a GPU into each of them and run them in tandem. I imagine this will require some sort of agent running on each node, connected through a 10GbE network. I can get a 10GbE network running for this project.

Does Ollama or any other local AI project support this? Getting a server motherboard with CPU is going to get expensive very quickly, but this would be a great alternative.
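The closest thing I have found is llama.cpp's RPC backend, which can spread a model's layers across machines over the network. A rough sketch, assuming each node runs a llama.cpp build compiled with GGML_RPC=ON (the IP addresses are placeholders):

# on each GPU node: expose its backend on the LAN
rpc-server --host 0.0.0.0 --port 50052

# on the controlling node: split the model across the workers
llama-cli -m model.gguf --rpc 10.0.0.2:50052,10.0.0.3:50052 -ngl 99

Expect lower throughput than a single box, since each token has to cross the network between layer groups.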

Thanks

16
3
submitted 8 months ago by autonomoususer@lemmy.world to c/llm@lemmy.world
17
13
submitted 9 months ago* (last edited 9 months ago) by autonomoususer@lemmy.world to c/llm@lemmy.world

This is an update to a previous post found at https://lemmy.world/post/27013201


Ollama uses the AMD ROCm library, which can work with many AMD GPUs not listed as compatible by forcing an LLVM target.

The original Ollama documentation is wrong: the following cannot be set for individual GPUs, only for all or none, as shown at github.com/ollama/ollama/issues/8473

AMD GPU issue fix

  1. Check your GPU is not already listed as compatible at github.com/ollama/ollama/blob/main/docs/gpu.md#linux-support
  2. Edit the Ollama service file. This uses the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  3. Add the following, save and exit. You can try different versions as shown at github.com/ollama/ollama/blob/main/docs/gpu.md#overrides-on-linux
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
  4. Restart the Ollama service.
sudo systemctl restart ollama
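To see which LLVM target your GPU actually reports before picking an override value, rocminfo from the ROCm tooling can help, assuming it is installed:

rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u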
18
7
submitted 9 months ago* (last edited 8 months ago) by autonomoususer@lemmy.world to c/llm@lemmy.world

Ollama lets you download and run large language models (LLMs) on your device.

Install Ollama on Arch Linux

  1. Check whether your device has an AMD GPU, NVIDIA GPU, or no GPU. A GPU is recommended but not required.
  2. Open Console, type only one of the following commands and press return. This may ask for your password but not show you typing it.
sudo pacman -S ollama-rocm    # for AMD GPU
sudo pacman -S ollama-cuda    # for NVIDIA GPU
sudo pacman -S ollama         # for no GPU (for CPU)
  3. Enable the Ollama service [on-device and runs in the background] to start with your device and start it now.
sudo systemctl enable --now ollama

Test Ollama alone

  1. Open localhost:11434 in a web browser and you should see Ollama is running. This shows Ollama is installed and its service is running.
  2. Run ollama run deepseek-r1 in a console and ollama ps in another, to download and run the DeepSeek R1 model while seeing whether Ollama is using your slow CPU or fast GPU.
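You can also test the HTTP API directly, which is what Open WebUI and other frontends use:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'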

AMD GPU issue fix

https://lemmy.world/post/27088416

Use with Open WebUI

See this guide: https://lemmy.world/post/28493612

19
4
submitted 9 months ago by autonomoususer@lemmy.world to c/llm@lemmy.world
20
2
submitted 1 year ago by graphito@sopuli.xyz to c/llm@lemmy.world
21
1
submitted 1 year ago by drawerair@lemmy.world to c/llm@lemmy.world

y2u.be/aVvkUuskmLY

Llama 3.1 (405B) seems 👍. It and Claude 3.5 Sonnet are my go-to large language models. I use chat.lmsys.org. OpenAI may be scrambling now to release ChatGPT 5?

22
1
submitted 2 years ago* (last edited 2 years ago) by drawerair@lemmy.world to c/llm@lemmy.world

I'm an avid Marques fan, but for me, he didn't have to make that vid. It was just a set of comparisons. No new info. No interesting discussion. Instead he should've just shared that Wired podcast episode on his X.

I wonder if Apple is making their own large language model (LLM) and whether it'll be released this year or next. Or are they still musing over the cost-benefit analysis? If they think an Apple LLM won't earn much profit, they may not make one.

23
2
submitted 2 years ago by TehBamski@lemmy.world to c/llm@lemmy.world
24
1
DALL-E 3 Release (openai.com)
submitted 2 years ago by mojo@lemm.ee to c/llm@lemmy.world
25
1
submitted 2 years ago by Blaed@lemmy.world to c/llm@lemmy.world

Click Here to be Taken to the Megathread!

from !fosai@lemmy.world

Vicuna v1.5 Has Been Released!

Shoutout to GissaMittJobb@lemmy.ml for catching this in an earlier post.

Given Vicuna was a widely appreciated member of the original Llama series, it'll be exciting to see this model evolve and adapt with fresh datasets and new training and fine-tuning approaches.

Feel free to use this megathread to chat about Vicuna and any of your experiences with Vicuna v1.5!

Starting off with Vicuna v1.5

TheBloke is already sharing models!

Vicuna v1.5 GPTQ

7B

13B


Vicuna Model Card

Model Details

Vicuna is a chat assistant fine-tuned from Llama 2 on user-shared conversations collected from ShareGPT.

  • Developed by: LMSYS

  • Model type: An auto-regressive language model based on the transformer architecture
  • License: Llama 2 Community License Agreement
  • Finetuned from model: Llama 2

Model Sources

Uses

The primary use of Vicuna is for research on large language models and chatbots. The target userbase includes researchers and hobbyists interested in natural language processing, machine learning, and artificial intelligence.

How to Get Started with the Model
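As a minimal sketch, Vicuna v1.5 can be run with LMSYS's FastChat (assuming a GPU with enough VRAM for the 7B weights; the fschat package is on PyPI):

pip install fschat
python -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.5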

Training Details

Vicuna v1.5 is fine-tuned from Llama 2 with supervised instruction fine-tuning. The model was trained on approximately 125K conversations collected from ShareGPT.com.

For additional details, please refer to the "Training Details of Vicuna Models" section in the appendix of the linked paper.

Evaluation Results

Vicuna Evaluation Results

Vicuna is evaluated using standard benchmarks, human preferences, and LLM-as-a-judge. For more detailed results, please refer to the paper and leaderboard.
