Ollama
Ollama allows you to run open-source large language models, such as Llama 2, locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage. For a complete list of supported models and model variants, see the Ollama model library.
See this guide for more details on how to use Ollama with LangChain.
Installation and Setup
Follow these instructions to set up and run a local Ollama instance.
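Once installed, the Ollama server listens on port 11434 by default. As a quick sanity check before wiring it into LangChain, you can verify the local instance is reachable; this is a minimal sketch that assumes the default port (adjust the URL if you changed it):

```python
import urllib.request

# Ollama's HTTP server listens on port 11434 by default (assumption:
# you have not overridden the host or port).
OLLAMA_URL = "http://localhost:11434"

try:
    with urllib.request.urlopen(OLLAMA_URL, timeout=5) as resp:
        # The root endpoint returns a short status string when the server is up.
        print(resp.read().decode())
except OSError:
    print("Could not reach Ollama -- is the local instance running?")
```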
LLM
from langchain_community.llms import Ollama
API Reference: Ollama
See the notebook example here.
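A minimal sketch of calling the LLM wrapper, assuming you have already pulled the llama2 model locally (any model from your Ollama library works):

```python
from langchain_community.llms import Ollama

# Assumes `ollama pull llama2` has been run and the local server is up.
llm = Ollama(model="llama2")

# .invoke() sends a single prompt and returns the completion as a string.
print(llm.invoke("Tell me a one-sentence fact about llamas."))
```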
Chat Models
Chat Ollama
from langchain_community.chat_models import ChatOllama
API Reference: ChatOllama
See the notebook example here.
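A short sketch of the chat interface, again assuming a locally pulled llama2 model; chat models take a list of messages rather than a bare prompt string:

```python
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

# Assumes the llama2 model has been pulled locally.
chat = ChatOllama(model="llama2")

# Chat models accept a list of messages and return an AIMessage.
response = chat.invoke([HumanMessage(content="What is LangChain?")])
print(response.content)
```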
Ollama functions
from langchain_experimental.llms.ollama_functions import OllamaFunctions
API Reference: OllamaFunctions
See the notebook example here.
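A sketch of binding a function schema so the model returns structured arguments instead of free text; it assumes a locally pulled llama2 model, and the `get_current_weather` function here is a hypothetical example schema, not a real tool:

```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions

model = OllamaFunctions(model="llama2")

# Bind an OpenAI-style function schema; `get_current_weather` is a
# hypothetical example, not a built-in tool.
model = model.bind(
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. Boston",
                    }
                },
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "get_current_weather"},
)

# The returned message carries the selected function call and its arguments.
print(model.invoke("What is the weather in Boston?"))
```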
Embedding models
from langchain_community.embeddings import OllamaEmbeddings
API Reference: OllamaEmbeddings
See the notebook example here.
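A minimal sketch of generating embeddings, assuming a locally pulled llama2 model; `embed_query` embeds a single string while `embed_documents` embeds a batch:

```python
from langchain_community.embeddings import OllamaEmbeddings

# Assumes the llama2 model has been pulled locally.
embeddings = OllamaEmbeddings(model="llama2")

# Embed one query string and a small batch of documents.
vector = embeddings.embed_query("What is the capital of France?")
vectors = embeddings.embed_documents(["First document.", "Second document."])
print(len(vector), len(vectors))
```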