Build Ollama from source on Windows

Posted on Feb 06, 2025
#Ollama

How to build Ollama from source to run large language models on Windows.

Install prerequisites

Install the required tools:

  • Go
  • C/C++ compiler, e.g. TDM-GCC (Windows amd64) or llvm-mingw (Windows arm64); Clang on macOS; GCC/Clang on Linux. You can verify both tools with the quick check below.

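Before building, it's worth confirming both tools resolve from your shell (the compiler binary name depends on which toolchain you installed; TDM-GCC ships gcc, llvm-mingw ships clang):

go version
gcc --version
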
Then build and run Ollama from the root directory of the repository:

go run . serve
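
With the server running, you can sanity-check it from a second terminal. Ollama listens on 127.0.0.1:11434 by default, and the /api/version endpoint reports the running build:

curl http://127.0.0.1:11434/api/version
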
Configure and build the project, then run Ollama

Install prerequisites:

  • CMake
  • (Optional, for NVIDIA GPU builds) the CUDA Toolkit and Visual Studio

[Notice] Ensure prerequisites are in PATH before running CMake.

[Notice] CUDA is only compatible with Visual Studio CMake generators.
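
A quick way to satisfy both notices is to check, from the same shell that will run the build, that the tools resolve on PATH (nvcc is only present if the CUDA Toolkit is installed):

where cmake
where nvcc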

Then, configure and build the project:

cmake -B build
cmake --build build --config Release
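
Since CUDA only works with Visual Studio generators, you may need to select one explicitly if CMake defaults to something else. The generator name below assumes Visual Studio 2022 is installed:

cmake -B build -G "Visual Studio 17 2022"
cmake --build build --config Release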

Lastly, run Ollama:

go run . serve
Build Ollama.exe

Get the source code:

git clone https://github.com/tanwubin/ollama.git
cd ollama
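
For a reproducible build you can pin the checkout to a release tag rather than the default branch; the tag below is only an illustrative example, so list the available tags first:

git tag --list
git checkout v0.5.7   # example tag only; pick one from the list above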

Compile the source into ollama.exe:

go generate ./...
go build .
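
If the build succeeds, ollama.exe is produced in the repository root; you can verify the binary and start the server with it:

.\ollama.exe --version
.\ollama.exe serve
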
Pull Models

Pull the models:

cd ollama
ollama.exe pull llama3.2
ollama.exe pull deepseek-r1:7b
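
Pulling can take a while depending on the model size; once both downloads finish, you can confirm the models are available locally:

ollama.exe list
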
Use curl to test
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

If you see a response from the llama3.2 model, everything is working!
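
Note that the single-quoted JSON body above is shell-dependent and will not parse in cmd.exe. From PowerShell, an equivalent request (assuming the server is still on the default port) is:

Invoke-RestMethod -Method Post -Uri http://127.0.0.1:11434/api/generate -ContentType "application/json" -Body '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'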