Installing & Training AI Locally
Using Docker, Open WebUI, and Ollama on macOS
Artificial Intelligence is rapidly evolving, and having a local AI model can be a powerful tool for development, experimentation, and personal use. This guide walks you through installing and setting up a local AI using Ollama and Docker on macOS.
One thing you need before you start this project is a computer with a capable GPU and plenty of memory: the more GPU memory (VRAM) available, the bigger and better the models you can run. On Apple silicon Macs, the GPU shares the machine's unified memory, so total RAM is the figure that matters.
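As a rough rule of thumb (an approximation, not an official Ollama figure), a model needs about half a gigabyte of memory per billion parameters at 4-bit quantization, plus some overhead. A quick back-of-the-envelope check:

```shell
# Rough rule of thumb (approximate, not an official Ollama figure):
# ~0.5 GB of memory per billion parameters at 4-bit quantization, plus ~1 GB overhead.
params_b=7   # e.g. a 7B-parameter model such as mistral-openorca
awk -v p="$params_b" 'BEGIN { printf "Approx. memory needed: %.1f GB\n", p * 0.5 + 1 }'
```

So a 7B model needs roughly 4-5 GB free; a 70B model is out of reach for most laptops.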
Step 1: Install Ollama
Ollama is a lightweight framework that allows you to run AI models locally. To install it:
Visit Ollama.com and download the macOS version.
Follow the installation instructions to complete the setup.
Step 2: Update Your System
Before proceeding, ensure your system is up to date. Note that the commands below are for Debian/Ubuntu Linux; macOS has no apt, so on a Mac update through System Settings > General > Software Update instead (or run sudo softwareupdate -i -a in Terminal):
sudo apt update
sudo apt upgrade -y
This ensures you have the latest security patches and software updates.
Step 3: Verify Ollama is Running
To check if Ollama is active, open Safari (or any browser) and go to:
http://localhost:11434
If you see the message 'Ollama is running', you're ready to proceed. You can run the same check from Terminal with curl http://localhost:11434.
Step 4: Install an AI Model
To pull and install an AI model, use the following command in Terminal:
ollama pull <model-name>
For example, to get the Mistral-OpenOrca model, run:
ollama pull mistral-openorca
This downloads the specified model for local use.
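Once downloaded, ollama list shows the installed models and ollama run mistral-openorca opens an interactive chat. Ollama also exposes a REST API on port 11434; the snippet below builds a request for its documented /api/generate endpoint (the prompt text here is just an example, and the curl call is left commented out because it needs a running Ollama server):

```shell
# Build a JSON request for Ollama's /api/generate REST endpoint.
model="mistral-openorca"
payload=$(printf '{"model": "%s", "prompt": "Why is the sky blue?", "stream": false}' "$model")
echo "$payload"
# With Ollama running, send it like this:
# curl -s http://localhost:11434/api/generate -d "$payload"
```

Setting "stream": false returns the whole answer in one JSON response instead of token-by-token chunks.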
Step 5: Download and Set Up Docker
Docker lets you run containerized applications efficiently. Note: Steps 5 and 6 use Ubuntu's apt package manager and apply to Linux machines; on macOS, install Docker Desktop from docker.com instead, then skip ahead to Step 7. On Ubuntu, first add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
Then, add the Docker repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
Step 6: Install Docker
Now, install Docker Engine and its plugins by running:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Afterwards, confirm the installation with docker --version.
Step 7: Pull an Image for a Docker Container
Run the following command to set up Open WebUI, a web-based interface for interacting with your local AI model (this form, using --network=host, works on Linux):
sudo docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
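On macOS, Docker Desktop runs containers inside a VM and does not support --network=host, so you publish the port and reach the host's Ollama through the host.docker.internal hostname instead. The snippet below builds the macOS-friendly command as a string and prints it for review; paste the printed line into Terminal to actually run it (requires Docker Desktop):

```shell
# macOS variant: publish port 8080 and point Open WebUI at the host's Ollama
# via host.docker.internal (Docker Desktop does not support --network=host).
cmd="sudo docker run -d \
  -p 8080:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main"
echo "$cmd"
```

With -p 8080:8080, the verification step below (localhost:8080) works the same as on Linux.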
Step 8: Verify Open WebUI
To check if Open WebUI is working, open Safari (or any browser) and go to:
http://localhost:8080
If you see the Open WebUI login page, you've set up everything correctly. Sign up to start using your local AI model.
Step 9: Use Your Local AI Model
Once logged into Open WebUI:
Select the local AI model you installed earlier from the model selector at the top.
Start interacting with it!
Step 10: Train Your Own AI
Customizing your AI involves:
Uploading files as a knowledge base. Before responding to a prompt, the AI searches these files for relevant context (an approach known as retrieval-augmented generation).
To create a model:
Navigate to Model Files > Create Model File.
Assign a name, description, and guidelines.
You can also link a knowledge base here; knowledge bases can likewise be created outside this setup window.
Save, create, and test your AI model!
For example, you might name your AI Alfred, and configure it to start every conversation with:
"Hello Master Bruce, how can I assist you today?"
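Under the hood, the knowledge-base feature works like retrieval: before answering, the most relevant uploaded file is found and fed to the model as extra context. A toy sketch of that idea using only grep (the files and query here are made up for illustration):

```shell
# Toy retrieval: find which knowledge-base file mentions the query term,
# roughly how a knowledge base surfaces relevant documents before prompting.
mkdir -p kb
echo "Master Bruce Wayne resides at Wayne Manor." > kb/alfred.txt
echo "The Batmobile is parked in the Batcave."    > kb/garage.txt
query="Wayne"
best=$(grep -l "$query" kb/*.txt | head -n 1)
echo "Most relevant file: $best"
```

Real systems rank documents by semantic similarity rather than exact keyword matches, but the flow (retrieve first, then answer) is the same.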
Conclusion
Congratulations! You’ve successfully set up a local AI model on macOS. This setup lets you interact with and customize your AI privately, without relying on online services. Enjoy exploring AI on your local machine, and who knows, you might create your very own intelligent assistant!