this post was submitted on 31 Dec 2024
80 points (71.5% liked)

Firefox


A place to discuss the news and latest developments on the open-source browser Firefox

[–] [email protected] 21 points 1 month ago (1 children)

In that scenario you need to host your LLM of choice locally.

[–] [email protected] 5 points 1 month ago (1 children)

Does the add-on support usage like that?

[–] [email protected] 7 points 1 month ago (1 children)

No, but the “AI” option on the Firefox Labs tab in Settings allows you to integrate with a self-hosted LLM.

I've had this setup running for a while now.
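
In case it's useful: a minimal sketch (Python, standard library only) for confirming that the local endpoint Firefox will point at is actually answering. It assumes Ollama's default port (11434) and that the llama3.1 model has already been pulled; adjust the URL and model tag for your own setup.

```python
# Minimal sanity check: ask the local Ollama instance for a completion.
# Assumes Ollama listens on its default port (11434) and `ollama pull llama3.1` was run.
import json
import urllib.request

payload = {
    "model": "llama3.1",                 # model tag to query
    "prompt": "Reply with one short sentence.",
    "stream": False,                     # single JSON response instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=120) as resp:
    body = json.loads(resp.read())

print(body["response"])                  # generated text; any output means the stack is up
```

If that prints a reply, the browser-side integration only needs to be pointed at the same host.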

[–] [email protected] 4 points 1 month ago (1 children)

Which model are you running? How much RAM?

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago)

My (Docker-based) configuration, with a quick client-side test at the end:

Software stack: Linux > Docker Container > Nvidia Runtime > Open WebUI > Ollama > Llama 3.1

Hardware: Intel i5-13600K, NVIDIA RTX 3070 Ti (8 GB VRAM), 32 GB RAM

Docker: https://docs.docker.com/engine/install/

NVIDIA Container Toolkit (GPU runtime for Docker): https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Open WebUI: https://docs.openwebui.com/

Ollama: https://hub.docker.com/r/ollama/ollama
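
If anyone wants to script against the same stack, here's a small sketch using Ollama's OpenAI-compatible endpoint. It assumes Ollama is reachable on localhost:11434 and llama3.1 is pulled; the API key is a placeholder because Ollama doesn't validate it, and the `openai` Python package is an extra install.

```python
# Sketch: chat with the self-hosted model through Ollama's OpenAI-compatible /v1 API.
# Assumptions: Ollama on localhost:11434, model "llama3.1" already pulled, `pip install openai`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # placeholder; Ollama ignores the value
)

reply = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Summarize what Open WebUI adds on top of Ollama."}],
)

print(reply.choices[0].message.content)
```

Open WebUI is just the front end here; anything that speaks the OpenAI API can talk to the same Ollama backend.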