Disclaimer:
The information provided in this guide is for educational and informational purposes only. It is provided “as is” without any representations or warranties, express or implied. You are using this information at your own risk. The author assumes no responsibility for any loss, damage, or issues arising from the use or misuse of the content provided.
What is Ollama?
You can get more information directly from the source:
ollama.com
TL;DR:
Ollama is a lightweight, open-source platform that lets you run large language models (LLMs) directly on your local computer—no internet or cloud connection required. This gives developers, researchers, and privacy-conscious users more control over their data, as well as faster, offline access to powerful AI capabilities.
Prerequisite:
Install Ollama via Homebrew (this assumes Homebrew is already installed):
- Open Terminal (found in Applications > Utilities)
- Run: brew install ollama
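Note: after installing, the Ollama server may need to be running before the CLI can reach it. A minimal sketch, assuming the Homebrew formula's built-in service definition:
- Run: brew services start ollama (starts the Ollama server in the background)
- Run: ollama --version (confirms the install succeeded)
Alternatively, running ollama serve in a separate Terminal window starts the server in the foreground.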
Install a Large Language Model (LLM).
Before you can use Ollama, you need at least one LLM installed locally.
What is an LLM?
A large language model (LLM) is a type of artificial intelligence trained to understand and generate human-like text. It learns by studying massive amounts of written content—like books, websites, and articles—so it can answer questions, write sentences, summarize information, or even have conversations. You can think of it as a very advanced autocomplete that tries to predict and generate the most helpful or logical response based on what you type.
Instructions:
- Open Terminal (found in Applications > Utilities)
- Run: ollama pull deepseek-coder
- To start an interactive session with the model, run: ollama run deepseek-coder
- After a moment of loading, you should be able to converse with the model directly from the command line.
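Beyond the interactive CLI, Ollama also exposes a local HTTP API (by default on port 11434), so other programs on your machine can send prompts to the model. A minimal sketch using curl, assuming the server is running and the deepseek-coder model pulled above:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder",
  "prompt": "Write a function that reverses a string in Python.",
  "stream": false
}'

Setting "stream" to false returns the whole answer as a single JSON object instead of streaming it token by token.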