Qdrant
Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage vectors with additional payload and extended filtering support. This makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications.
This documentation demonstrates how to use Qdrant with LangChain for dense (i.e., embedding-based), sparse (i.e., text-search-based), and hybrid retrieval. The QdrantVectorStore class supports multiple retrieval modes via Qdrant's new Query API, which requires you to run Qdrant v1.10.0 or above.
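As a quick orientation, the retrieval mode is selected via the retrieval_mode argument of QdrantVectorStore, which takes a RetrievalMode value. A minimal sketch that just lists the available options (the rest of this page uses the default, dense mode):
from langchain_qdrant import RetrievalMode
# The three modes accepted by QdrantVectorStore(retrieval_mode=...)
for mode in (RetrievalMode.DENSE, RetrievalMode.SPARSE, RetrievalMode.HYBRID):
    print(mode)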
Setup
There are various ways to run Qdrant, and depending on the one you choose, there will be some subtle differences. The options include:
- Local mode, no server required
- Docker deployments
- Qdrant Cloud
Please see the installation instructions here.
pip install -qU langchain-qdrant
Credentials
There are no credentials needed to run the code in this notebook.
If you want to get best-in-class automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Initialization
Local mode
The Python client provides the option to run the code in local mode without running the Qdrant server. This is great for testing things out and debugging, or for storing just a small number of vectors. The embeddings can be kept fully in memory or persisted on disk.
In-memory
For some testing scenarios and quick experiments, you may prefer to keep all the data in-memory only, so it gets removed when the client is destroyed - usually at the end of your script/notebook.
pip install -qU langchain-openai
import getpass
import os
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
from langchain_qdrant import QdrantVectorStore
from qdrant_client import QdrantClient
from qdrant_client.http.models import Distance, VectorParams
client = QdrantClient(":memory:")
client.create_collection(
    collection_name="demo_collection",
    # 3072 is the dimensionality of text-embedding-3-large vectors
    vectors_config=VectorParams(size=3072, distance=Distance.COSINE),
)
vector_store = QdrantVectorStore(
    client=client,
    collection_name="demo_collection",
    embedding=embeddings,
)
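To sanity-check the store, you can add a couple of documents and run a similarity search. A minimal sketch; the texts and query below are invented for illustration:
from langchain_core.documents import Document

docs = [
    Document(page_content="Qdrant stores vectors alongside JSON payloads"),
    Document(page_content="LangChain connects embeddings to vector stores"),
]
vector_store.add_documents(documents=docs)

results = vector_store.similarity_search("How are vectors stored?", k=1)
print(results[0].page_content)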
On-disk storage
Local mode, without using the Qdrant server, may also store your vectors on-disk so they persist between runs.
client = QdrantClient(path="/tmp/langchain_qdrant")
client.create_collection(
    collection_name="demo_collection",
    vectors_config=VectorParams(size=3072, distance=Distance.COSINE),
)
vector_store = QdrantVectorStore(
    client=client,
    collection_name="demo_collection",
    embedding=embeddings,
)
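Because the collection now persists between runs, calling create_collection a second time will raise an "already exists" error. One way to make the snippet re-runnable, assuming a qdrant-client version that provides collection_exists, is to guard the call:
# Only create the collection if it doesn't exist on disk yet
if not client.collection_exists("demo_collection"):
    client.create_collection(
        collection_name="demo_collection",
        vectors_config=VectorParams(size=3072, distance=Distance.COSINE),
    )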
On-premise server deployment
Whether you launch Qdrant locally with a Docker container or opt for a Kubernetes deployment with the official Helm chart, the way you connect to such an instance is identical: you'll need to provide a URL pointing to the service.
url = "<---qdrant url here --->"
docs = [] # put docs here
qdrant = QdrantVectorStore.from_documents(
    docs,
    embeddings,
    url=url,
    prefer_grpc=True,
    collection_name="my_documents",
)
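Connecting to a Qdrant Cloud cluster follows the same pattern, with an API key added. A sketch with placeholder values:
qdrant = QdrantVectorStore.from_documents(
    docs,
    embeddings,
    url="<---qdrant cloud cluster url here --->",
    api_key="<---qdrant cloud api key here --->",
    prefer_grpc=True,
    collection_name="my_documents",
)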