AI@Home, Part 1: Overview and Ollama
Starting with this post, I will describe my home AI setup. It consists of several components:
- Ollama : This platform serves as the backbone of the setup, hosting several Large Language Models (LLMs) locally and making them available to the other tools.
- Open WebUI : A user-friendly web interface for interacting with various language models, both local and remote, directly from your browser.
- ComfyUI : This tool uses diffusion models to generate images, comparable to popular platforms like Midjourney and DALL-E. I will demonstrate how ComfyUI can be integrated with Open WebUI for an enhanced creative workflow.
- SearXNG : A free internet metasearch engine that aggregates results from various search services and databases. Users are neither tracked nor profiled.
- Obsidian : Renowned for its versatility across major platforms, Obsidian is a fantastic app for crafting notes and building knowledge bases. By integrating it with the AI chat capabilities of Open WebUI, users can achieve new levels of productivity and insight.
- VSCode : Microsoft's popular code editor, here enhanced by continue.dev. This combination brings AI-assisted coding to life, using local large language models such as Qwen2.5-Coder for smarter and more efficient development.
As a sneak peek, have a look at the video snippet below. It shows prompting (“generate a prompt for … “) in Open WebUI and image generation based on the given response.
Let’s start digging into the various components.
Ollama
Ollama is an open-source platform designed to simplify the process of running Large Language Models (LLMs) on your local machine. For more information, please visit the site, but in short:
- Tool for Running LLMs Locally : Allows users to run large language models (LLMs) on personal computers.
- Democratizes AI Technology : Makes advanced AI tools accessible without relying on cloud services.
- Supports Various OS : Compatible with Windows, Linux, and macOS.
- Enhanced Performance with GPUs : Performs best with discrete NVIDIA or AMD GPUs.
- Python Integration : Offers a Python library for integrating Ollama’s capabilities into Python applications (see the sketch after this list).
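As a first taste of that Python integration, here is a minimal sketch using the official ollama package (pip install ollama). It assumes the Ollama server is running locally and that the llama3.1 model has already been pulled:

```python
import ollama

# Send a single chat message to a locally hosted model.
# Assumes the Ollama server is reachable on localhost and
# `ollama pull llama3.1` has been run beforehand.
response = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```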
Requirements
Ollama is designed to leverage the power of GPUs for enhanced performance, particularly when dealing with large-scale machine learning models. While it can technically operate with any compatible GPU, using an NVIDIA GPU is highly recommended due to its superior support and optimization for deep learning tasks.
In my personal experience, I use an older RTX 3080 with 12GB of VRAM. It is quite a powerful piece of hardware, capable of running smaller models, specifically those up to about 14B parameters. A good example of this is Phi-4 from Microsoft.
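To see why 14B parameters is roughly the ceiling for a 12GB card, here is a back-of-the-envelope estimate (my own rule of thumb, not an official formula), assuming 4-bit quantization at roughly half a byte per parameter plus some overhead for the KV cache and activations:

```python
# Rough VRAM estimate for a quantized model.
# Assumptions: Q4 quantization (~0.5 bytes per parameter) and
# about 2 GB of overhead for the KV cache and activations.
params = 14e9            # e.g. Phi-4 with 14B parameters
bytes_per_param = 0.5    # 4-bit quantization
overhead_gb = 2.0
vram_gb = params * bytes_per_param / 1e9 + overhead_gb
print(f"~{vram_gb:.0f} GB of VRAM")   # ~9 GB, which fits into 12 GB
```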
Installation
My local home server is an older PC with an AMD Ryzen 3700 processor and 32GB of RAM. It runs Unraid (highly recommended!) as the operating system, but any other system capable of running Docker will do.
Since Unraid comes with optional NVIDIA drivers, the installation is very easy: one does not have to fiddle around with driver installation on Linux, which can be a bit messy.
Given the drivers are already installed, Ollama can be installed via the Community Applications.

Otherwise, Docker will do the job:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
Ollama now provides you with plenty of room to explore and experiment: you can download various models and query them, for example over the HTTP API exposed on port 11434, as the sketch below shows.
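Once a model has been pulled (with the Docker setup above: docker exec -it ollama ollama pull llama3.1), a minimal sketch in Python using the requests package could look like this (the prompt and model name are just examples):

```python
import requests

# Query the Ollama REST API exposed on the mapped port (11434).
# "stream": False returns the whole completion as a single JSON object.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Explain in one sentence what Ollama does.",
        "stream": False,
    },
)
print(resp.json()["response"])
```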
If installed without Docker, using one of the installation methods described here, Ollama can also run completely standalone: simply execute ollama run llama3.1 (or the name of any other model available for Ollama).
However, I will stop here and continue with Open WebUI, which is a user-friendly interface not only for working with Ollama but also for fronting commercial models such as GPT-4, given you have an API key or a commercial subscription.
Stay tuned for part 2.