Ollama works - running a local LLM (Llama 3)

Do this:

curl -fsSL https://ollama.com/install.sh | sh
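
That should put the ollama command on your path; a quick check:

ollama --version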

ollama pull llama3

This downloads the llama3 model, which is about 4.7 GB.
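
You can check it arrived with:

ollama list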

To run it:

ollama run llama3 
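
Run on its own like that, it drops you into an interactive chat prompt. From memory there are a few built-in commands inside it:

/?      list the available commands
/bye    exit the session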

If you want it to read and summarise a text file called textfile.txt:

ollama run llama3 "Read the text file $(cat textfile.txt) and summarise the findings in one paragraph of no more than 300 words" 

And here's a GUI for it:

https://github.com/amithkoujalgi/ollama-pdf-bot

On Linux the system runs as a service (daemon). CPU usage is high while it's generating but acceptable, and response time is much better than localGPT.
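
It's a normal systemd unit, so the usual commands apply, and the daemon also listens on a local REST API (port 11434 by default):

# check or stop the daemon (the unit name is "ollama" with the standard install script)
systemctl status ollama
sudo systemctl stop ollama

# talk to the local REST API directly
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'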

It stores the model files (large LLM blobs) in:

/usr/share/ollama/.ollama/models
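
To see how much disk they're taking up (sudo because the directory belongs to the ollama service user):

sudo du -sh /usr/share/ollama/.ollama/models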

Models can be downloaded from huggingface.co or https://ollama.com/library
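
Pulling another model from the library works the same way, e.g. mistral, and ollama rm frees the disk space again:

ollama pull mistral
ollama run mistral
ollama rm mistral      # delete it when you're done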
