LocalAI is a free, open-source alternative to OpenAI, Anthropic, and other hosted AI services, acting as a drop-in replacement REST API for local inference. It lets you run LLMs, generate images, and produce audio, all locally or on-premises on consumer-grade hardware, with support for multiple model families and architectures.

Quickstart

Using the Bash Installer

  # Basic installation
curl https://localai.io/install.sh | sh
  

See the Installer documentation for all supported options.

Run with Docker:

  # CPU only image:
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-cpu

# Nvidia GPU:
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12

# CPU and GPU image (bigger size):
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest

# AIO image (pre-downloads a set of models ready for use, see https://localai.io/basics/container/)
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
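
# Once a container is running, the WebUI is available at http://localhost:8080 and the
# OpenAI-compatible API can be verified (assuming the default port mapping used above):
curl http://localhost:8080/v1/models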
  

Load models:

# From the model gallery (see available models with `local-ai models list`, in the WebUI model tab, or by visiting https://models.localai.io)
local-ai run llama-3.2-1b-instruct:q4_k_m
# Start LocalAI with the phi-2 model directly from Hugging Face
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
# Install and run a model from the Ollama OCI registry
local-ai run ollama://gemma:2b
# Run a model from a configuration file
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
# Install and run a model from a standard OCI registry (e.g., Docker Hub)
local-ai run oci://localai/phi-2:latest
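
# Once a model is loaded it can be queried through the OpenAI-compatible API, for example
# (assuming the default port 8080 and the gallery model from the first line above):
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.2-1b-instruct:q4_k_m", "messages": [{"role": "user", "content": "Say hello in one sentence."}]}'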
  

For a full list of options, refer to the Installer Options documentation.

Binaries can also be manually downloaded.
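
As a rough sketch for a Linux x86_64 machine (the release asset name here is an assumption; check the GitHub releases page for the exact names per platform and version):

  # Download the latest release binary (asset name is illustrative, verify on the releases page)
curl -Lo local-ai https://github.com/mudler/LocalAI/releases/latest/download/local-ai-Linux-x86_64
chmod +x local-ai
./local-ai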

Using Homebrew on macOS

You can install LocalAI with Homebrew using the following command:

  brew install localai
  

Using Container Images or Kubernetes

LocalAI is available as a container image compatible with various container engines such as Docker, Podman, and Kubernetes. Container images are published on quay.io and Docker Hub.

For detailed instructions, see Using container images. For Kubernetes deployment, see Run with Kubernetes.
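
As a minimal sketch, a Helm-based install can look like the following (the chart repository and release names are assumptions here; the Run with Kubernetes page has the authoritative steps):

  # Add the LocalAI Helm chart repository and install a release (names assumed, see the Kubernetes docs)
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update
helm install local-ai go-skynet/local-ai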

Running LocalAI with All-in-One (AIO) Images

Already have a model file? Skip to Run models manually.

LocalAI’s All-in-One (AIO) images come pre-configured with a set of models and backends that cover nearly all of LocalAI’s features. If pre-configured models are not required, you can use the standard images instead.

These images are available for both CPU and GPU environments. AIO images are designed for ease of use and require no additional configuration.

It is recommended to use AIO images if you prefer not to configure the models manually or via the web interface. For running specific models, refer to the manual method.

The AIO images come pre-configured with the following features:

  • Text to Speech (TTS)
  • Speech to Text
  • Function calling
  • Large Language Models (LLM) for text generation
  • Image generation
  • Embedding server

For instructions on using AIO images, see Using container images.
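
Because the AIO images ship with models mapped to familiar OpenAI-style names, the pre-configured features can be exercised right away. For example, an embeddings request could look like this (the model alias is an assumption; the AIO image documentation lists the exact names each image exposes):

  # Request embeddings from a running AIO container (model alias assumed, see the AIO docs)
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-ada-002", "input": "LocalAI is running locally."}'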

Using LocalAI and the full stack with LocalAGI

LocalAI is part of a local-first stack that also includes LocalAGI and LocalRecall.

LocalAGI is a powerful, self-hostable AI agent platform designed for maximum privacy and flexibility, built on top of the full LocalAI software stack. It provides a complete drop-in replacement for OpenAI’s Responses API with advanced agentic capabilities, running entirely locally on consumer-grade hardware (CPU and GPU).

Quick Start

  # Clone the repository
git clone https://github.com/mudler/LocalAGI
cd LocalAGI

# CPU setup (default)
docker compose up

# NVIDIA GPU setup
docker compose -f docker-compose.nvidia.yaml up

# Intel GPU setup (for Intel Arc and integrated GPUs)
docker compose -f docker-compose.intel.yaml up

# Start with a specific model (see available models at models.localai.io, or localai.io for how to use any model from Hugging Face)
MODEL_NAME=gemma-3-12b-it docker compose up

# NVIDIA GPU setup with custom multimodal and image models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=minicpm-v-2_6 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up
  

Key Features

  • Privacy-Focused: All processing happens locally, ensuring your data never leaves your machine
  • Flexible Deployment: Supports CPU, NVIDIA GPU, and Intel GPU configurations
  • Multiple Model Support: Compatible with various models from Hugging Face and other sources
  • Web Interface: User-friendly chat interface for interacting with AI agents
  • Advanced Capabilities: Supports multimodal models, image generation, and more
  • Docker Integration: Easy deployment using Docker Compose

Environment Variables

You can customize your LocalAGI setup using the following environment variables:

  • MODEL_NAME: Specify the model to use (e.g., gemma-3-12b-it)
  • MULTIMODAL_MODEL: Set a custom multimodal model
  • IMAGE_MODEL: Configure an image generation model

For more advanced configuration and API documentation, visit the LocalAGI GitHub repository.
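
Because these variables are consumed by Docker Compose, a convenient pattern is to keep them in a .env file next to the compose files (a sketch of the standard Compose mechanism, not a LocalAGI-specific feature):

  # .env file picked up automatically by `docker compose up`
MODEL_NAME=gemma-3-12b-it
MULTIMODAL_MODEL=minicpm-v-2_6
IMAGE_MODEL=flux.1-dev-ggml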

What’s Next?

There is much more to explore with LocalAI! You can run any model from Hugging Face, generate video, and even clone voices. For a comprehensive overview, check out the features section.

Explore additional resources and community contributions.
