The Vespa sample applications are created to run both self-hosted and on Vespa Cloud.
You can easily deploy the sample applications to Vespa Cloud without changing the files -
just follow the same steps as for
Managed Vector Search using Vespa Cloud,
adding security credentials.
First-time users should go through the getting-started guides first.
Explore the examples for smaller applications that help you get started with a particular feature, and see operations for operational examples.
Album Recommendations is the intro application to Vespa.
Learn how to configure the schema for simple recommendation and search use cases.
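As a taste of what schema configuration looks like, here is a minimal pyvespa sketch in the spirit of that application (the field and profile names are illustrative assumptions, not the app's actual schema):

```python
from vespa.package import ApplicationPackage, Field, RankProfile

# Minimal sketch of a schema for album search/recommendation
package = ApplicationPackage(name="albums")
package.schema.add_fields(
    Field(name="album", type="string", indexing=["index", "summary"]),
    Field(name="year", type="int", indexing=["attribute", "summary"]),
)
package.schema.add_rank_profile(
    RankProfile(name="rank_albums", first_phase="attribute(year)")
)
```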
Pyvespa: Hybrid Search - Quickstart and
Pyvespa: Hybrid Search - Quickstart on Vespa Cloud
create a hybrid text search application combining traditional keyword matching with semantic vector search (dense retrieval).
They also demonstrate the Vespa native embedder functionality.
These are intro-level applications for Python users that also exercise more advanced Vespa features.
Use
Pyvespa: Authenticating to Vespa Cloud for Vespa Cloud credentials.
Pyvespa: Querying Vespa
is a good start for Python users, exploring how to query Vespa using the Vespa Query Language (YQL).
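As a hedged sketch, a minimal YQL query with pyvespa looks roughly like this (the endpoint and the title field are assumptions for a local deployment):

```python
from vespa.application import Vespa

# Connect to a running application; endpoint is an assumption
app = Vespa(url="http://localhost", port=8080)

# Match the user query against indexed text fields and print the top hits
response = app.query(
    yql="select * from sources * where userQuery()",
    query="what is vespa",
    hits=5,
)
for hit in response.hits:
    print(hit["relevance"], hit["fields"].get("title"))
```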
Pyvespa: Read and write operations
documents ways to feed, get, update, and delete data;
using a context manager to manage resources efficiently;
and feeding streams of data with
feed_iterable,
which can feed from streams, Iterables, Lists, and files through generators.
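A minimal sketch of feeding from a generator, assuming a local endpoint and a schema named doc with a title field:

```python
from vespa.application import Vespa
from vespa.io import VespaResponse

app = Vespa(url="http://localhost", port=8080)  # assumed local endpoint

def documents():
    # A generator yielding documents in the pyvespa feed format
    for i in range(1000):
        yield {"id": str(i), "fields": {"title": f"Document {i}"}}

def callback(response: VespaResponse, doc_id: str):
    if not response.is_successful():
        print(f"Feeding {doc_id} failed: {response.get_json()}")

app.feed_iterable(documents(), schema="doc", callback=callback)
```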
Pyvespa: Application packages
is a good intro to the concept of application packages in Vespa.
Try
Pyvespa: Advanced Configuration for advanced configuration of Vespa services.
Pyvespa: Examples
is a repository of small snippets and examples, e.g., really simple vector distance search applications.
The News and Recommendation Tutorial
demonstrates basic search functionality and is a great place to start exploring Vespa features.
It creates a recommendation system where approximate nearest neighbor search in a shared user/item embedding space
retrieves recommended content for a user.
This app also demonstrates using parent-child relationships.
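To illustrate, a recommendation query of this kind might look as follows in pyvespa (the field, rank profile, and vector size are assumptions, not the tutorial's exact names):

```python
from vespa.application import Vespa

app = Vespa(url="http://localhost", port=8080)  # assumed local endpoint

user_embedding = [0.1] * 50  # the user's vector in the shared embedding space

# Retrieve the 10 news items nearest to the user embedding
response = app.query(
    body={
        "yql": "select * from sources * where "
               "{targetHits:10}nearestNeighbor(embedding, user_embedding)",
        "ranking.profile": "recommendation",
        "input.query(user_embedding)": user_embedding,
        "hits": 10,
    }
)
```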
The Text Search Tutorial
demonstrates traditional text search using
BM25/Vespa nativeRank,
and is a good starting point for using the MS Marco dataset.
There is a growing interest in AI-powered vector representations of unstructured multimodal data
and searching efficiently over these representations.
Managed Vector Search using Vespa Cloud
describes how to unlock the full potential of multimodal AI-powered vector representations using Vespa Cloud.
Vespa Multi-Vector Indexing with HNSW and
Pyvespa: Multi-vector indexing with HNSW
demonstrate how to index multiple vectors per document field for semantic search over longer documents.
These are more advanced than the Hybrid Search examples in the Getting Started section.
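In pyvespa terms, such a multi-vector field can be sketched like this (the dimension and names are assumptions):

```python
from vespa.package import Field, HNSW

# One mapped dimension ("chunk") holds multiple vectors per document;
# a single HNSW index covers all of them.
embedding = Field(
    name="embedding",
    type="tensor<float>(chunk{}, x[384])",
    indexing=["attribute", "index"],
    ann=HNSW(distance_metric="angular"),
)
```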
Vector Streaming Search
uses vector streaming search for naturally partitioned data; see the
blog post for details.
Multilingual Search with multilingual embeddings
demonstrates multilingual semantic search with multilingual text embedding models.
Simple hybrid search with SPLADE
uses the Vespa splade-embedder for
semantic search using sparse vector representations,
and is a good intro to SPLADE and sparse learned weights for ranking.
Customizing Frozen Data Embeddings in Vespa
demonstrates how to adapt frozen embeddings from foundational embedding models -
see the blog post.
Frozen data embeddings from foundational models are an emerging industry practice
for reducing the complexity of maintaining and versioning embeddings.
The frozen data embeddings are re-used for various tasks, such as classification, search, or recommendations.
Pyvespa: Using Cohere Binary Embeddings in Vespa
demonstrates how to use Cohere binary vectors with Vespa,
including a re-ranking phase that uses the float query vector version for improved accuracy.
Pyvespa: Billion-scale vector search with Cohere binary embeddings in Vespa
uses the Cohere int8 & binary Embeddings
with a coarse-to-fine search and re-ranking pipeline,
which reduces cost while offering the same retrieval accuracy (nDCG).
The packed binary vector representation is stored in memory,
with an optional HNSW index using
hamming distance.
The
int8
vector representation is stored on disk
using Vespa’s paged option.
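Sketched as pyvespa field definitions (dimensions and names are assumptions, not the notebook's exact schema):

```python
from vespa.package import Field, HNSW

# Packed binary vectors kept in memory, searchable with hamming-distance HNSW
binary_vector = Field(
    name="binary_vector",
    type="tensor<int8>(x[128])",  # 1024 bits packed into 128 int8 values
    indexing=["attribute", "index"],
    ann=HNSW(distance_metric="hamming"),
)

# int8 vectors for re-ranking, stored on disk via the paged attribute option
int8_vector = Field(
    name="int8_vector",
    type="tensor<int8>(x[1024])",
    indexing=["attribute"],
    attribute=["paged"],
)
```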
Pyvespa: Multilingual Hybrid Search with Cohere binary embeddings and Vespa
demonstrates:
- Building a multilingual search application over a sample of the German split of Wikipedia using binarized Cohere embeddings.
- Indexing multiple binary embeddings per document without having to split the chunks across multiple retrievable units.
- Hybrid search, combining the lexical matching capabilities of Vespa with Cohere binary embeddings.
- Re-scoring the binarized vectors for improved accuracy.
Pyvespa: BGE-M3 - The Mother of all embedding models
demonstrates how to use the BGE-M3 embeddings
and how to represent all three of its embedding representations in Vespa.
This code is inspired by the BAAI/bge-m3 README.
Pyvespa: Evaluating retrieval with Snowflake arctic embed
shows how different rank profiles in Vespa can be set up and evaluated.
For the rank profiles that use semantic search,
we will use the small version of Snowflake’s arctic embed model series for generating embeddings.
Pyvespa: Exploring the potential of OpenAI Matryoshka 🪆 embeddings with Vespa
demonstrates the effectiveness of using the recently released (as of January 2024) OpenAI
text-embedding-3
embeddings with Vespa.
Specifically, we are interested in the Matryoshka Representation Learning technique used in training,
which lets us "shorten embeddings (i.e., remove some numbers from the end of the sequence) without the embedding losing its concept-representing properties".
This allows us to trade off a small amount of accuracy in exchange for much smaller embedding sizes,
so we can store more documents and search them faster.
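A minimal sketch of the shortening idea (the target dimension is a free choice):

```python
import numpy as np

def shorten(embedding: np.ndarray, dim: int = 256) -> np.ndarray:
    # Keep the first `dim` values of a Matryoshka-trained embedding,
    # then re-normalize so dot products remain cosine similarities.
    shortened = embedding[:dim]
    return shortened / np.linalg.norm(shortened)
```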
Pyvespa: Using Mixedbread.ai embedding model with support for binary vectors
shows how to use the mixedbread-ai/mxbai-embed-large-v1 model
with support for binary vectors with Vespa.
The notebook example also includes a re-ranking phase that uses the float query vector version for improved accuracy.
The re-ranking step makes the model perform at 96.45% of the full float version,
with a 32x decrease in storage footprint.
Retrieval Augmented Generation (RAG) in Vespa
is an end-to-end RAG application where all the steps are run within Vespa.
This application focuses on the generation part of RAG,
with a simple text search using BM25,
and comes in three versions:
- Using an external LLM service to generate the final response.
- Using local LLM inference to generate the final response.
- Deploying to Vespa Cloud and using GPU-accelerated LLM inference to generate the final response. This includes using Vespa Cloud's Secret Store to save the OpenAI API key.
Pyvespa: Building cost-efficient retrieval-augmented personal AI assistants
uses streaming mode
for cost-efficient retrieval for applications that store and retrieve personal data.
This notebook connects a custom LlamaIndex Retriever
with a Vespa app using streaming mode to retrieve personal data.
Pyvespa: Turbocharge RAG with LangChain and Vespa Streaming Mode for Partitioned Data
uses streaming mode
to build cost-efficient RAG applications over naturally sharded data - also available as a blog post:
Turbocharge RAG with LangChain and Vespa Streaming Mode for Sharded Data.
This example surfaces elementSimilarity
in search results, making it easy to inspect each chunk's closeness to the query embedding.
Also try Pyvespa: Chat with your pdfs with ColBERT, LangChain, and Vespa -
this demonstrates how you can now use ColBERT ranking natively in Vespa,
which handles the ColBERT embedding process with no custom code.
Pyvespa: Visual PDF RAG with Vespa - ColPali demo application
is an end-to-end demo application for visual retrieval of PDF pages, including a frontend web application -
try vespa-engine-colpali-vespa-visual-retrieval.hf.space for a live demo.
The main goal of the demo is to make it easy to create your own PDF Enterprise Search application using Vespa!
Pyvespa: Vespa 🤝 ColPali: Efficient Document Retrieval with Vision Language Models
demonstrates how to retrieve PDF pages using the embeddings generated by the ColPali model.
ColPali is a powerful Vision Language Model (VLM) that can generate embeddings for images and text.
This notebook uses ColPali to generate embeddings for images of PDF pages and store them in Vespa.
We also store the base64-encoded image of the PDF page and some metadata like title and url.
Pyvespa: Scaling ColPALI (VLM) Retrieval
demonstrates how to represent ColPali in Vespa and how to scale to large collections.
Also see the Scaling ColPali to billions of PDFs with Vespa blog post.
Pyvespa: ColPali Ranking Experiments on DocVQA
shows how to reproduce the ColPali results on DocVQA with Vespa.
The dataset consists of PDF documents with questions and answers.
We demonstrate how to binarize the patch embeddings
and replace the float MaxSim scoring
with a hamming-based MaxSim,
with little loss in ranking accuracy, a significant speedup (close to 4x), and a 32x reduction in memory (and storage) requirements.
Pyvespa: PDF-Retrieval using ColQWen2 (ColPali) with Vespa
is a continuation of the notebooks related to the ColPali models (above) for complex document retrieval,
and demonstrates use of the ColQWen2 model checkpoint.
Billion-Scale Image Search
demonstrates billion-scale image search using a CLIP model
exported to ONNX format for retrieval.
It features separation of compute from storage and query-time vector similarity de-duping.
It uses PCA to reduce from 768 to 128 dimensions.
Text-video search is a notebook that
downloads a set of videos, converts them from .avi to .mp4, creates CLIP embeddings,
feeds them to Vespa, and lets you query the videos with text in a Streamlit application.
It is a good start for creating a video search application using Vespa!
Video Search and Retrieval with Vespa and TwelveLabs is a notebook
showcasing the use of TwelveLabs' state-of-the-art generation and embedding models
for video processing. It demonstrates how to generate rich metadata (including summaries and keywords) for videos
using TwelveLabs' technology, and how to embed video chunks for efficient retrieval. The notebook processes three
sample videos, segments them into chunks, and stores their embeddings along with metadata in Vespa's multi-vector
tensors. You can perform hybrid searches to find specific video scenes based on natural language descriptions.
This serves as an excellent starting point for implementing advanced video retrieval with Vespa!
MS Marco Passage Ranking
shows how to represent state-of-the-art text ranking using Transformer (BERT) models.
It uses the MS Marco passage ranking datasets and features
bi-encoders, cross-encoders, and late-interaction models (ColBERT):
- Simple single-stage sparse retrieval accelerated by the WAND dynamic pruning algorithm with BM25 ranking.
- Dense (vector) search retrieval for efficient candidate retrieval using Vespa's support for approximate nearest neighbor search.
- Re-ranking using the late contextual interaction over BERT (ColBERT) model.
- Re-ranking using a cross-encoder with cross attention between the query and document terms.
- Multiphase retrieval and ranking combining efficient retrieval (WAND or ANN) with re-ranking stages.
- Using Vespa embedder functionality.
- Hybrid ranking.
With Vespa’s phased ranking capabilities,
doing cross-encoder inference for a subset of documents at a later stage in the ranking pipeline
can be a good trade-off between ranking performance and latency.
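As a hedged pyvespa sketch of such a profile (the ONNX cross-encoder model name and expression are assumptions):

```python
from vespa.package import RankProfile, GlobalPhaseRanking

profile = RankProfile(
    name="bm25_with_rerank",
    first_phase="bm25(title) + bm25(body)",  # cheap scoring over all matches
    global_phase=GlobalPhaseRanking(
        # Hypothetical expression reading the output of an onnx-model
        # named cross_encoder declared in the schema
        expression="sum(onnx(cross_encoder))",
        rerank_count=100,  # cross-encoder inference only for the top 100 hits
    ),
)
```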
Pyvespa: Using Mixedbread.ai cross-encoder for reranking in Vespa.ai
shows how to use the Mixedbread.ai
cross-encoder for global-phase reranking in Vespa.
Pyvespa: Standalone ColBERT with Vespa for end-to-end retrieval and ranking
illustrates using the colbert-ai package to produce token vectors,
instead of using the native Vespa ColBERT embedder.
The guide illustrates how to feed and query using a single passage representation:
- Compress token vectors using binarization compatible with Vespa's unpack_bits used in ranking; the binarization of the token-level vectors is implemented with numpy (sketched after this list).
- Use the Vespa hex feed format for binary vectors.
- Query examples.
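A minimal numpy sketch of that binarization step, assuming float token vectors where positive dimensions become 1-bits (np.packbits uses the most-significant-bit-first order that Vespa's unpack_bits expects):

```python
import numpy as np

def binarize_token_vectors_hex(vectors: np.ndarray) -> list[str]:
    # Threshold each dimension at 0 and pack 8 bits into each int8 value,
    # then hex-encode for the Vespa hex feed format.
    packed = np.packbits(np.where(vectors > 0, 1, 0), axis=1).astype(np.int8)
    return [token.tobytes().hex() for token in packed]
```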
As a bonus, this also demonstrates how to use ColBERT end-to-end with Vespa for both retrieval and ranking. The retrieval step searches the binary token-level representations using hamming distance. This uses 32 nearestNeighbor operators in the same query, each finding 100 nearest hits in hamming space. Then, the results are re-ranked using the full-blown MaxSim calculation.
ColBERT token-level embeddings:
Simple hybrid search with ColBERT uses a single vector embedding model for retrieval and ColBERT (multi-token vector representation) for re-ranking. This semantic search application demonstrates the colbert-embedder and the tensor expressions for ColBERT MaxSim. It also features reciprocal rank fusion to fuse different rankings.
Long-Context ColBERT demonstrates Long-Context ColBERT (multi-token vector representation) with extended context windows for long-document retrieval, as announced in Vespa Long-Context ColBERT. The app demonstrates the colbert-embedder and the tensor expressions for performing two types of extended ColBERT late-interaction for long-context retrieval. This app uses trec-eval for evaluation using nDCG.
Pyvespa: Standalone ColBERT + Vespa for long-context ranking is a guide on how to use the ColBERT package to produce token-level vectors, as an alternative to using the native Vespa ColBERT embedder. It illustrates how to feed multiple passages per Vespa document (long-context):
- Compress token vectors using binarization that is compatible with Vespa's unpack_bits.
- Use the Vespa hex feed format for binary vectors with mixed Vespa tensors.
- Query Vespa with the ColBERT query tensor representation.
Pyvespa: LightGBM: Training the model with Vespa features
deploys and uses a LightGBM model in a Vespa application.
The tutorial runs through how to:
- Train a LightGBM classification model with variable names supported by Vespa.
- Create Vespa application package files and export them to an application folder.
- Export the trained LightGBM model to the Vespa application folder.
- Deploy the Vespa application using the application folder.
- Feed data to the Vespa application.
- Assert that the LightGBM predictions from the deployed model are correct.
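For the training step in the first item above, a hedged sketch (the feature names follow Vespa rank-feature syntax, and the data is synthetic):

```python
import json
import lightgbm as lgb
import numpy as np
import pandas as pd

# Synthetic training data; column names use Vespa rank-feature syntax
# so the deployed model can resolve them directly.
features = pd.DataFrame({
    "query(value)": np.random.random(1000),
    "attribute(field_a)": np.random.random(1000),
})
labels = (features.sum(axis=1) > 1.0).astype(int)

model = lgb.train(
    params={"objective": "binary"},
    train_set=lgb.Dataset(features, label=labels),
    num_boost_round=10,
)

# Vespa reads LightGBM models as JSON from the application package's
# models/ directory (path here assumes that layout)
with open("app/models/lightgbm_model.json", "w") as f:
    json.dump(model.dump_model(), f)
```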
Pyvespa: LightGBM: Mapping model features to Vespa features
shows how to deploy a LightGBM model with feature names that do not match Vespa feature names.
In addition to the steps in the app above, this tutorial:
- Trains a LightGBM classification model with generic feature names that will not be available in the Vespa application.
- Creates an application package and includes a mapping from Vespa feature names to LightGBM model feature names.
Pyvespa: Feeding performance
compares the different modes of feeding documents to Vespa, looking at four methods:
- Using VespaSync
- Using VespaAsync
- Using feed_iterable()
- Using the Vespa CLI
Use Feeding to Vespa Cloud
to test feeding using Vespa Cloud.
The e-commerce application is an end-to-end shopping engine,
using the Amazon product data set.
This use case bundles a front-end application.
It demonstrates building next-generation E-commerce Search using Vespa,
and is a good intro to using the Vespa Cloud CI/CD tests.
Data in e-commerce applications is structured,
so Gradient Boosted Decision Trees (GBDT)
models are popular in this domain.
Try Vespa Product Ranking for using
learning-to-rank (LTR) techniques (using XGBoost and LightGBM)
for improving product search ranking.
In Vespa, faceting (aggregating over attribute values) is called grouping.
Grouping Results
is a quick intro to implementing faceting/grouping in Vespa.
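A minimal grouping query via pyvespa might look like this (the brand attribute is an assumption for illustration):

```python
from vespa.application import Vespa

app = Vespa(url="http://localhost", port=8080)  # assumed local endpoint

# Count matching documents per brand value - the facet counts
response = app.query(
    body={
        "yql": "select * from sources * where userQuery() "
               "| all(group(brand) each(output(count())))",
        "query": "running shoes",
        "hits": 0,  # only the group results are needed
    }
)
```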
Recommendations are integral to e-commerce applications.
The recommendation tutorial is a good starting point.
Finally, search-as-you-type and query suggestions let users quickly create good queries.
Incremental Search shows search-as-you-type functionality,
retrieving matching documents for each keystroke.
It also demonstrates search suggestions (query auto-completion).
Stateless model evaluation demonstrates
using Vespa as a stateless ML model inference server
where Vespa takes care of distributing ML models to multiple serving containers,
offering horizontal scaling and safe deployment.
It features model versioning and a feature processing pipeline,
as well as using custom code in Searchers,
Document Processors and
Request Handlers.
Vespa Documentation Search
is the search application that powers search.vespa.ai -
refer to this for GitHub Actions automation.
This sample app is a good start for automated deployments,
as it has system, staging and production test examples.
It uses the Document API
both for regular PUT operations and for UPDATE with create-if-nonexistent.
It also has Vespa Components
for custom code.
cord19.vespa.ai is a full-featured application,
based on the Covid-19 Open Research Dataset:
- cord-19: frontend
- cord-19-search: search backend
This application uses embeddings to implement "similar documents" search.
Note: Applications with pom.xml are Java/Maven projects and must be built before deployment. Refer to the Developer Guide for more information.
Contribute to the Vespa sample applications.