Open WebUI

A self-hosted, feature-rich web interface for running local LLMs via Ollama. Supports multiple models, conversation history, RAG, and a plugin system — all running on your own hardware.

Install:

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
Last updated: February 1, 2025
Tags: llm, ollama, local-ai, self-hosted, rag, workflow

Open WebUI is the tool that makes running local LLMs feel like using a real product instead of a research prototype. If you have tried Ollama and found the CLI limiting, this is what you install next.

What it does

The interface mirrors the experience of ChatGPT or Claude.ai but runs entirely on your machine. You get conversation history, model switching, system prompt management, document upload with RAG, and a plugin architecture for connecting to external tools. The model management panel lets you pull, update, and delete Ollama models without touching the command line.
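
The panel is convenient, but the same operations are available from the standard Ollama CLI if you prefer scripting them. A few representative commands (model names are examples):

    ollama pull llama3.2:3b        # download a model
    ollama list                    # list installed models
    ollama rm mistral:7b-instruct  # delete a model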

The RAG (retrieval-augmented generation) system is the feature most relevant to this audience. You can upload PDFs — papers, internal documents, your dissertation — and ask questions against them. It is not perfect, but for working scientists who need to query a corpus of literature without sending it to an external API, it is a genuinely useful starting point.
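
Nothing in this flow leaves your machine. As a rough sketch of what happens under the hood: Open WebUI retrieves relevant chunks from your uploaded documents and sends them, along with your question, to Ollama's local API. The request below is illustrative, not the exact payload Open WebUI constructs:

    # Ollama serves on port 11434 by default; Open WebUI assembles the
    # retrieved document chunks into the prompt for you.
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.2:3b",
      "prompt": "Context: <retrieved chunks>\n\nQuestion: What sample size did the study use?",
      "stream": false
    }'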

Who it is for

ML engineers and research scientists who want a private LLM workspace. Biotech R&D professionals who cannot send proprietary sequences or patient-adjacent data to cloud APIs. Quants who want to run models on financial data locally.

Testing notes

Tested on a MacBook Pro (M3 Max) running llama3.2:3b and mistral:7b-instruct. Performance is usable for document Q&A. RAG quality depends heavily on the chunking settings, though the defaults work for most documents. The model management interface is notably cleaner than alternatives such as LM Studio. Memory footprint with a 7B model loaded is approximately 8 GB of RAM.
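
To check the equivalent numbers on your own hardware (the container name matches the install command above):

    ollama ps                            # models currently loaded in memory, with footprint
    docker stats open-webui --no-stream  # resource usage of the Open WebUI container itself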

Setup

Requires Docker and Ollama to be installed first. After running the install command, open http://localhost:3000; the first account you create becomes the admin account.
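
A quick way to confirm everything is wired up before you sign in (endpoints are the defaults; adjust if you changed the ports):

    docker ps --filter name=open-webui       # container running?
    curl -s http://localhost:11434/api/tags  # Ollama reachable? Lists installed models.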