
Open WebUI with Ollama

by ATH Infosystems

Version 0.7.2 + Free Support on Ubuntu 24.04

Open WebUI with Ollama is a modern, web-based interface for running, managing, and interacting with AI language models locally. It provides an intuitive interface for chatting with models, switching between them, and adjusting settings, with all inference running on your own server.

Features of Open WebUI with Ollama:

  • User-friendly web interface for interacting with AI models.
  • Real-time AI chat with conversation history.
  • Local management of multiple AI models served via Ollama (see the quick check after this list).
  • Supports CPU and GPU acceleration for faster inference.
  • Container-based deployment using Docker for portability and easy updates.
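As a quick check of the Ollama side, you can list the models currently available locally through Ollama's HTTP API, which listens on port 11434 by default (the response is JSON):

$ curl http://localhost:11434/api/tags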

Start Open WebUI and Ollama:

$ sudo systemctl enable --now ollama
$ ollama --version
$ sudo docker pull ghcr.io/open-webui/open-webui:v0.7.2
$ sudo docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:v0.7.2
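To confirm the container is running and the UI is answering on the mapped port (3000, as configured above), a quick check:

$ sudo docker ps --filter name=open-webui
$ curl -I http://localhost:3000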

Pull and Run a Model (Example):

$ ollama pull tinyllama
$ ollama run tinyllama

Note: ollama run opens an interactive chat session; type /bye to exit. If you later edit the Ollama systemd unit (for example, to change the address it listens on), reload and restart the service:

$ sudo systemctl daemon-reload
$ sudo systemctl restart ollama
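You can also verify the model responds without opening an interactive session by querying Ollama's HTTP API directly; tinyllama here matches the model pulled above:

$ curl http://localhost:11434/api/generate \
  -d '{"model": "tinyllama", "prompt": "Why is the sky blue?", "stream": false}'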

Access Web Interface:
http://your-server-ip:3000
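If the server's firewall is enabled (Ubuntu 24.04 ships with ufw, though it may be inactive by default), you may first need to open port 3000:

$ sudo ufw allow 3000/tcp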

Admin Setup:
The first account created during the initial in-browser setup automatically becomes the administrator account.

Disclaimer:
Open WebUI and Ollama are open-source software projects. Users are responsible for reviewing and complying with the licensing terms of Ollama and any AI models used. The software is provided "as is" without warranties of any kind.