Upstash
Upstash offers developers serverless databases and messaging platforms to build powerful applications without having to worry about the operational complexity of running databases at scale.
One significant advantage of Upstash is that its databases support HTTP, and all of its SDKs use HTTP. This means you can run them on serverless platforms, at the edge, or on any other platform that does not support TCP connections.
Currently, there are two Upstash integrations available for LangChain: Upstash Vector as a vector embedding database and Upstash Redis as a cache and memory store.
Upstash Vector
Upstash Vector is a serverless vector database that can be used to store and query vectors.
Installation
Create a new serverless vector database at the Upstash Console. Select your preferred distance metric and dimension count according to your model.
Install the Upstash Vector Python SDK with pip install upstash-vector.
The Upstash Vector integration in LangChain is a wrapper for the Upstash Vector Python SDK, which is why the upstash-vector package is required.
Integrations
Create an UpstashVectorStore object using credentials from the Upstash Console.
You also need to pass in an Embeddings object which can turn text into vector embeddings.
from langchain_community.vectorstores.upstash import UpstashVectorStore
from langchain_openai import OpenAIEmbeddings
import os

os.environ["UPSTASH_VECTOR_REST_URL"] = "<UPSTASH_VECTOR_REST_URL>"
os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "<UPSTASH_VECTOR_REST_TOKEN>"

# Any LangChain Embeddings implementation works here; OpenAIEmbeddings is
# used as an example, matching the insertion example further below.
embeddings = OpenAIEmbeddings()

store = UpstashVectorStore(
    embedding=embeddings
)
An alternative way of creating an UpstashVectorStore is to pass embedding=True. This is a unique feature of the UpstashVectorStore, made possible by the ability of Upstash Vector indexes to have an associated embedding model. In this configuration, the documents we want to insert or the queries we want to search for are simply sent to Upstash Vector as text. In the background, Upstash Vector embeds these texts and executes the request with the resulting embeddings. To use this feature, create an Upstash Vector index by selecting a model and simply pass embedding=True:
from langchain_community.vectorstores.upstash import UpstashVectorStore
import os

os.environ["UPSTASH_VECTOR_REST_URL"] = "<UPSTASH_VECTOR_REST_URL>"
os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "<UPSTASH_VECTOR_REST_TOKEN>"

store = UpstashVectorStore(
    embedding=True
)
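With such a store, raw text can be inserted and queried directly; the embedding happens server-side. A minimal sketch using the standard vector store methods (the sample sentence is purely illustrative):
# With embedding=True, raw text is sent to Upstash Vector,
# which embeds it using the index's associated model.
store.add_texts(["Paris is the capital of France."])
result = store.similarity_search("What is the capital of France?", k=1)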
See Upstash Vector documentation for more detail on embedding models.
Namespaces
You can use namespaces to partition the data in your index. Namespaces are useful when you want to query over a huge amount of data, and partitioning it makes the queries faster. Since results within a namespace don't need post-filtering, query results are also more precise.
from langchain_community.vectorstores.upstash import UpstashVectorStore
import os

os.environ["UPSTASH_VECTOR_REST_URL"] = "<UPSTASH_VECTOR_REST_URL>"
os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "<UPSTASH_VECTOR_REST_TOKEN>"

store = UpstashVectorStore(
    embedding=embeddings,
    namespace="my_namespace"
)
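All operations on such a store are then scoped to that namespace. For example (a minimal sketch reusing the store above):
# Both the insert and the search below only touch vectors in "my_namespace".
store.add_documents(docs)
result = store.similarity_search("The United States of America", k=5)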
Inserting Vectors
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_openai import OpenAIEmbeddings

# Load the document and split it into chunks
loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

# Create a new embeddings object
embeddings = OpenAIEmbeddings()

# Create a new UpstashVectorStore object
store = UpstashVectorStore(
    embedding=embeddings
)

# Insert the document embeddings into the store
store.add_documents(docs)
When inserting documents, they are first embedded using the Embeddings object.
Most embedding models can embed multiple documents at once, so the documents are batched and embedded in parallel. The size of each batch can be controlled using the embedding_chunk_size parameter.
The embedded vectors are then stored in the Upstash Vector database. When they are sent, multiple vectors are batched together to reduce the number of HTTP requests. The size of that batch can be controlled using the batch_size parameter. Upstash Vector has a limit of 1000 vectors per batch in the free tier.
store.add_documents(
    docs,
    batch_size=100,
    embedding_chunk_size=200
)
Querying Vectors
Vectors can be queried using a text query or another vector.
The returned value is a list of Document objects.
result = store.similarity_search(
    "The United States of America",
    k=5
)
Or using a vector:
vector = embeddings.embed_query("Hello world")
result = store.similarity_search_by_vector(
    vector,
    k=5
)
When searching, you can also use the filter parameter, which allows you to filter by metadata:
result = store.similarity_search(
    "The United States of America",
    k=5,
    filter="type = 'country'"
)
See Upstash Vector documentation for more details on metadata filtering.
Deleting Vectors
Vectors can be deleted by their IDs.
store.delete(["id1", "id2"])
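The IDs to delete are the ones returned when vectors are inserted; add_documents and add_texts return the list of inserted IDs. A minimal sketch:
# add_documents returns the IDs of the inserted vectors,
# which can later be passed to delete().
ids = store.add_documents(docs)
store.delete(ids)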
Getting information about the store
You can get information about your database, like the distance metric and dimension, using the info function.
When an insert happens, the database indexes the new vectors. While this is happening, the new vectors cannot be queried yet. pendingVectorCount represents the number of vectors that are currently being indexed.
info = store.info()
print(info)
# Output:
# {'vectorCount': 44, 'pendingVectorCount': 0, 'indexSize': 2642412, 'dimension': 1536, 'similarityFunction': 'COSINE'}
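If you need to make sure newly inserted vectors are queryable, one option is to poll until nothing is pending. A rough sketch, assuming info() exposes the pending count as a pending_vector_count attribute (as in the upstash-vector SDK's InfoResult; adjust the field name to whatever your version returns):
import time

# Assumption: info() returns an object with a pending_vector_count field.
while store.info().pending_vector_count > 0:
    time.sleep(1)  # wait for indexing to finish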
Upstash Redis
This page covers how to use Upstash Redis with LangChain.
Installation and Setup
- The Upstash Redis Python SDK can be installed with pip install upstash-redis
- A globally distributed, low-latency and highly available database can be created at the Upstash Console
Integrations
All Upstash-LangChain integrations are wrappers around the upstash-redis Python SDK.
The SDK connects to your Upstash Redis database using the UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN parameters from the console.
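The client can be constructed either with explicit credentials or from those environment variables; a minimal sketch:
from upstash_redis import Redis

# Explicit credentials from the Upstash Console
redis = Redis(url="<UPSTASH_REDIS_REST_URL>", token="<UPSTASH_REDIS_REST_TOKEN>")

# Or read UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN from the environment
redis = Redis.from_env()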
Cache
Upstash Redis can be used as a cache for LLM prompts and responses.
To import this cache:
from langchain.cache import UpstashRedisCache
To use with your LLMs:
import langchain
from upstash_redis import Redis
URL = "<UPSTASH_REDIS_REST_URL>"
TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"
langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN))
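Once the cache is set, repeated identical LLM calls are served from Upstash Redis instead of hitting the model again. A usage sketch (OpenAI is just an example model here; any LangChain LLM works):
from langchain_openai import OpenAI

llm = OpenAI()
llm.invoke("Tell me a joke")  # computed by the model and written to the cache
llm.invoke("Tell me a joke")  # identical prompt, served from Upstash Redis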
Memory
See a usage example.
from langchain_community.chat_message_histories import (
    UpstashRedisChatMessageHistory,
)
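A minimal sketch of storing and reading back a conversation (the session ID and credentials are placeholders):
history = UpstashRedisChatMessageHistory(
    url="<UPSTASH_REDIS_REST_URL>",
    token="<UPSTASH_REDIS_REST_TOKEN>",
    session_id="my-session",
)

history.add_user_message("hello llm!")
history.add_ai_message("hello user!")

print(history.messages)  # the stored conversation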