Local RAG Stack Configurator
Generate a production-ready private AI stack tailored to your specific hardware.
Configure Your Stack
Target Model: llama3.1:8b
Based on your hardware, we recommend this model for the best balance of speed and intelligence.
Deployment Script (docker-compose.yml)
version: '3.8'
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - ./open-webui:/app/backend/data
    depends_on:
      - ollama

CLI Setup
# 1. Install Ollama (https://ollama.com)
# 2. Pull the chat model and the embedding model
ollama pull llama3.1:8b
ollama pull nomic-embed-text
# 3. Start the stack
docker-compose up -d
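Once the stack is up, the retrieval half of RAG comes down to embedding your documents and the query with `nomic-embed-text`, then ranking documents by cosine similarity. A minimal sketch, assuming the Ollama container from the compose file is serving on localhost:11434 (the `embed` helper and the sample documents are illustrative, not part of the stack itself):

```python
import json
import math
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port, as mapped in the compose file


def embed(text: str) -> list[float]:
    """Request an embedding for `text` from the local Ollama server."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/embeddings",
        data=json.dumps({"model": "nomic-embed-text", "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_k(query_vec: list[float], doc_vecs: list[list[float]], k: int = 2) -> list[int]:
    """Return the indices of the k documents most similar to the query."""
    ranked = sorted(
        range(len(doc_vecs)),
        key=lambda i: cosine(query_vec, doc_vecs[i]),
        reverse=True,
    )
    return ranked[:k]


if __name__ == "__main__":
    # Illustrative corpus; in a real pipeline these would be your chunked files.
    docs = [
        "Ollama serves language models locally.",
        "Open WebUI provides a chat frontend.",
        "The volumes persist data across restarts.",
    ]
    doc_vecs = [embed(d) for d in docs]
    query_vec = embed("What runs the models?")
    for i in top_k(query_vec, doc_vecs, k=2):
        print(docs[i])
```

The retrieved chunks would then be prepended to the prompt sent to `llama3.1:8b`. For a larger corpus you would swap the linear scan in `top_k` for a vector store, but the similarity logic stays the same.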