Install prerequisites:
- Go
- A C/C++ compiler, e.g. Clang on macOS, TDM-GCC (Windows amd64) or llvm-mingw (Windows arm64), or GCC/Clang on Linux
Then build and run Ollama from the root directory of the repository:
```shell
go run . serve
```
Configure and build the project and run Ollama (Windows)
Install prerequisites:
- CMake
- Visual Studio 2022 including the Native Desktop Workload
- (Optional) NVIDIA GPU support
[Notice] Ensure prerequisites are in `PATH` before running CMake.
[Notice] CUDA is only compatible with Visual Studio CMake generators.
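Because CUDA requires a Visual Studio generator, it can help to select one explicitly rather than rely on CMake's default. A minimal sketch (the generator name below assumes Visual Studio 2022; run `cmake --help` to list the generators available on your machine):

```shell
# Explicitly select the Visual Studio 2022 generator for an x64 build
cmake -B build -G "Visual Studio 17 2022" -A x64
```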
Then, configure and build the project:
```shell
cmake -B build
cmake --build build --config Release
```
Lastly, run Ollama:
```shell
go run . serve
```
Build Ollama.exe
Clone the source code:

```shell
git clone https://github.com/ollama/ollama.git
cd ollama
```
Then compile the source to ollama.exe:

```shell
go build .
```
Pull Models
Pull a model (the server must be running):

```shell
ollama pull llama3.2
```
Use curl to test
```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?"
}'
```
If you see a response from the llama3.2 model, the build works.