
This comprehensive guide provides detailed instructions for installing and running Ollama, an open-source tool for running large language models locally, on Ubuntu 25.04 as a non-root user. The goal is to enable individual users to set up and manage their own isolated Ollama instances, ensuring that each user’s configuration, models, and runtime environment are independent and stored within their home directory. This approach avoids the need for root privileges and prevents conflicts between multiple users on the same system.
Table of Contents
- Overview
- Prerequisites
- Step-by-Step Installation
  - Step 1: Log in as the Target User
  - Step 2: Download the Ollama Binary
  - Step 3: Make the Binary Executable
  - Step 4: Set Up Environment Variables
  - Step 5: Set Up Ollama Data Directory
  - Step 6: Run Ollama Manually
  - Step 7: Pull and Run a Model
  - Step 8: Optional – Run Ollama as a User-Level Systemd Service
  - Step 9: Verify the Installation
- Additional Configuration
- Troubleshooting
- References
Overview
Ollama is a lightweight, extensible platform for running large language models (LLMs) locally, such as LLaMA, Mistral, or Gemma. Unlike system-wide installations that require root access, this guide focuses on a per-user installation, allowing each user on an Ubuntu 25.04 system to run their own instance of Ollama. This setup ensures:
- Isolation: Each user’s models and configurations are stored in their home directory (`~/.ollama`).
- No Root Privileges: The installation avoids `sudo` and system-wide changes.
- Flexibility: Users can manage their own models, update Ollama independently, and customize settings like ports.
This guide is tailored for Ubuntu 25.04 (Plucky Puffin), assuming compatibility with the latest Ollama binaries and the standard Linux tools available in Ubuntu’s ecosystem as of July 2025.
Prerequisites
Before starting, ensure the following:
- Operating System: Ubuntu 25.04 (64-bit, AMD64 or ARM64 architecture).
- Disk Space: At least 10GB of free space in the user’s home directory for models (e.g., the `llama3` 8B model requires ~4.7GB).
- RAM: Minimum 8GB, with 16GB or more recommended for larger models.
- Internet Access: Required to download the Ollama binary and models.
- Tools: `curl` for downloading the binary and `systemd` (optional) for running Ollama as a user service.
- Permissions: The user must have write access to their home directory (`~`).
To install `curl`, run:

```bash
sudo apt update
sudo apt install -y curl
```
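As a quick sanity check before installing, the prerequisites above can be verified from the shell. A minimal sketch (the 10GB and 8GB thresholds come from the list above; the warnings are advisory only):

```shell
# Check free space in $HOME and total RAM against the guide's minimums.
free_gb=$(df -BG --output=avail "$HOME" | tail -n 1 | tr -dc '0-9')
ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "Free space in \$HOME: ${free_gb}GB, RAM: ${ram_gb}GB"
[ "$free_gb" -ge 10 ] || echo "Warning: less than 10GB free for models"
[ "$ram_gb" -ge 8 ] || echo "Warning: less than 8GB RAM"
```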
Step-by-Step Installation
Step 1: Log in as the Target User
Each user must perform the installation in their own account to ensure isolation. Log in to the user’s account via a terminal or SSH:
```bash
whoami  # Verify the current user
```
All subsequent steps are executed as this non-root user.
Step 2: Download the Ollama Binary
Ollama provides a precompiled binary for Linux, which can be downloaded and used without root access.
- Create a directory in the user’s home directory to store the Ollama binary:

  ```bash
  mkdir -p ~/ollama
  cd ~/ollama
  ```

- Download the latest Ollama binary for Linux (AMD64). Adjust the URL for ARM64 if needed:

  ```bash
  curl -L https://ollama.com/download/ollama-linux-amd64 -o ollama
  ```

  Check the Ollama GitHub releases page for the latest binary or alternative architectures (e.g., `ollama-linux-arm64` for ARM systems).
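Rather than editing the URL by hand, the architecture can be detected automatically. A sketch, assuming the `ollama-linux-amd64`/`ollama-linux-arm64` naming used above (the helper function `pick_ollama_url` is illustrative, not part of Ollama):

```shell
# Map `uname -m` output to the matching download URL.
pick_ollama_url() {
  case "$1" in
    x86_64)        echo "https://ollama.com/download/ollama-linux-amd64" ;;
    aarch64|arm64) echo "https://ollama.com/download/ollama-linux-arm64" ;;
    *)             return 1 ;;  # unknown architecture
  esac
}

if url=$(pick_ollama_url "$(uname -m)"); then
  echo "Download with: curl -L $url -o ~/ollama/ollama"
else
  echo "Unsupported architecture: $(uname -m)" >&2
fi
```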
Step 3: Make the Binary Executable
Grant execute permissions to the downloaded binary:
```bash
chmod +x ~/ollama/ollama
```
Step 4: Set Up Environment Variables
To make the `ollama` command accessible from any directory, add the binary’s location to the user’s `PATH`.
- Edit the user’s shell configuration file (e.g., `~/.bashrc` for Bash or `~/.zshrc` for Zsh):

  ```bash
  nano ~/.bashrc
  ```

- Append the following line to add `~/ollama` to the `PATH`:

  ```bash
  export PATH="$HOME/ollama:$PATH"
  ```

- Apply the changes:

  ```bash
  source ~/.bashrc
  ```

- Verify the binary is accessible:

  ```bash
  ollama --version
  ```
This should display the installed Ollama version.
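If the version check fails, the following sketch helps distinguish a missing binary from a `PATH` problem (`location` and `status` are local illustrative names, not Ollama conventions):

```shell
# Report where `ollama` resolves, or hint at a PATH problem if it doesn't.
if location=$(command -v ollama); then
  status="ollama found at: $location"
else
  status="ollama not on PATH; check that \$HOME/ollama appears in: $PATH"
fi
echo "$status"
```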
Step 5: Set Up Ollama Data Directory
Ollama stores models and configuration in the `~/.ollama` directory by default, which suits per-user setups well. Create this directory if it doesn’t exist:

```bash
mkdir -p ~/.ollama
```
No further configuration is needed, as Ollama automatically uses this directory for model storage and logs.
Step 6: Run Ollama Manually
Start the Ollama server in the background to handle model requests:
```bash
ollama serve &
```
- The `&` runs the server in the background. To keep it running after closing the terminal, consider using `tmux` or `screen`:

  ```bash
  tmux new -s ollama
  ollama serve
  ```

  Detach from `tmux` with `Ctrl+B`, then `D`. Reattach later with `tmux attach -t ollama`.
Step 7: Pull and Run a Model
To test Ollama, download and run a model like `llama3`:

- Pull the model:

  ```bash
  ollama pull llama3
  ```

  Models are stored in `~/.ollama/models`, isolated to the user. The `llama3` 8B model requires approximately 4.7GB of disk space.

- Run the model interactively:

  ```bash
  ollama run llama3
  ```

  This starts a command-line interface where you can interact with the model. Type queries and press `Enter` to receive responses.
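Beyond the interactive CLI, the server also answers HTTP requests; `/api/generate` is part of Ollama’s documented REST API. A hedged sketch that probes the default port first, so it degrades gracefully when no server is running (it assumes the default port 11434 and the `llama3` model pulled above):

```shell
# Query the local REST API if a server is listening on the default port.
if curl -fsS --max-time 2 http://localhost:11434/ >/dev/null 2>&1; then
  response=$(curl -s http://localhost:11434/api/generate \
    -d '{"model": "llama3", "prompt": "Hello, world!", "stream": false}')
else
  response="Ollama server not reachable on port 11434; start it with: ollama serve"
fi
echo "$response"
```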
Step 8: Optional – Run Ollama as a User-Level Systemd Service
For persistent operation without keeping a terminal open, set up a user-level `systemd` service.
- Create a systemd service file for the user:

  ```bash
  mkdir -p ~/.config/systemd/user
  nano ~/.config/systemd/user/ollama.service
  ```

- Add the following configuration:

  ```ini
  [Unit]
  Description=Ollama Service for %u
  After=network.target

  [Service]
  ExecStart=%h/ollama/ollama serve
  Restart=always
  Type=simple

  [Install]
  WantedBy=default.target
  ```

- Reload the user unit files, then enable and start the service:

  ```bash
  systemctl --user daemon-reload
  systemctl --user enable ollama
  systemctl --user start ollama
  ```

- Check the service status:

  ```bash
  systemctl --user status ollama
  ```
The service starts automatically when the user logs in and restarts if it crashes. Note that user services normally stop when the user’s last session ends; to keep the service running without an active login, lingering must be enabled for the account (`loginctl enable-linger <user>`), which may require administrator rights.
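If the service should listen on a non-default port (see Managing Port Conflicts below), the variable can be set in the unit file itself rather than in `~/.bashrc`. A sketch of the extra line for the `[Service]` section of `ollama.service`; `Environment=` is standard systemd syntax, and the port `11435` is an example, not a requirement:

```ini
[Service]
Environment=OLLAMA_HOST=127.0.0.1:11435
```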
Step 9: Verify the Installation
Confirm that Ollama is installed and running correctly:

- Check the version:

  ```bash
  ollama --version
  ```

- Verify the server is running:

  ```bash
  curl http://localhost:11434
  ```

  A response like `Ollama is running` indicates success.

- Test model interaction:

  ```bash
  ollama run llama3 "Hello, world!"
  ```

  This should return a response from the `llama3` model.
Additional Configuration
Managing Port Conflicts
Ollama’s default port is `11434`. If multiple users run Ollama on the same machine, port conflicts may occur. To assign a unique port for a user:
- Set the `OLLAMA_HOST` environment variable:

  ```bash
  export OLLAMA_HOST=127.0.0.1:11435
  ```

- Add it to `~/.bashrc` for persistence:

  ```bash
  echo 'export OLLAMA_HOST=127.0.0.1:11435' >> ~/.bashrc
  source ~/.bashrc
  ```

- Restart the Ollama server:

  ```bash
  ollama serve &
  ```

Each user should choose a unique port (e.g., `11435`, `11436`, etc.). Because the `ollama` client also reads `OLLAMA_HOST`, the variable must be set in any shell that runs commands such as `ollama run`, so client and server use the same address.
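To avoid hand-assigning ports, each account can derive one from its numeric UID. A sketch; the `11434 + UID % 1000` scheme is an illustrative convention, not an Ollama feature (UIDs that differ by a multiple of 1000 would still collide):

```shell
# Derive a per-user port from the UID so accounts rarely collide.
port=$((11434 + $(id -u) % 1000))
export OLLAMA_HOST="127.0.0.1:${port}"
echo "OLLAMA_HOST=$OLLAMA_HOST"
```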
Updating Ollama
To update Ollama to a newer version:
- Stop the running Ollama server or service:

  ```bash
  systemctl --user stop ollama   # If using systemd
  pkill ollama                   # If running manually
  ```

- Download the new binary:

  ```bash
  cd ~/ollama
  curl -L https://ollama.com/download/ollama-linux-amd64 -o ollama
  chmod +x ollama
  ```

- Restart the server or service:

  ```bash
  systemctl --user start ollama  # If using systemd
  ollama serve &                 # If running manually
  ```
Storage Considerations
- Model Size: Models vary in size (e.g., `llama3` 8B is ~4.7GB, while larger models like `llama3` 70B require ~38GB). Ensure sufficient disk space in `~/.ollama/models`.
- Multiple Models: Users can pull multiple models, but this increases storage needs.
- Backup: To save space or migrate, back up `~/.ollama/models` and restore it later.
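The backup suggestion above can be done with a plain tarball; a sketch assuming the default store location `~/.ollama/models` (the archive name is arbitrary):

```shell
# Archive the per-user model store for backup or migration.
backup="$HOME/ollama-models-backup.tar.gz"
mkdir -p "$HOME/.ollama/models"        # ensure the path exists
tar -czf "$backup" -C "$HOME/.ollama" models
tar -tzf "$backup" | head -n 5         # show the first few archived entries
```

Restore later (or on another machine) with `tar -xzf ollama-models-backup.tar.gz -C ~/.ollama`.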
Troubleshooting
- Port Conflict: If `curl http://localhost:11434` fails with a connection error, check for port conflicts:

  ```bash
  ss -tuln | grep 11434
  ```

  (`ss` ships with Ubuntu by default; the older `netstat` requires the `net-tools` package.) Assign a new port as described in Managing Port Conflicts.

- Permission Issues: Verify that the `ollama` binary and `~/.ollama` are owned by the user:

  ```bash
  ls -ld ~/ollama ~/.ollama
  ```

  If they are owned by another account, an administrator must fix ownership (e.g., `sudo chown -R $USER:$USER ~/ollama ~/.ollama`), since a non-root user cannot change file ownership.

- Service Fails to Start: Check logs for user-level systemd services:

  ```bash
  journalctl --user -u ollama
  ```

- Model Download Issues: Verify internet connectivity and sufficient disk space:

  ```bash
  df -h ~
  ```

- Binary Incompatibility: Ensure the correct binary (AMD64 or ARM64) is downloaded for your system’s architecture:

  ```bash
  uname -m
  ```
For additional help, consult the Ollama GitHub issues page or search for community discussions on platforms like X.
References
This guide was last updated on July 29, 2025, based on the latest available information for Ubuntu 25.04 and Ollama.