
Commit

New Feat: Added HNSWLib and in-memory self-query retriever (langchain-ai#1543)

* added hnswlib and memory self-query

* Refactor documentation

* Run format, small docs edit

* Update tests

* replaced neq with ne

* Test flake

* Removed in and nin as comparator

* File structure and entrypoint changes

* Fix entrypoints, docs

* Fix generated file

* Docs update

* Fix docs

---------

Co-authored-by: jacoblee93 <[email protected]>
ppramesi and jacoblee93 authored Jun 7, 2023
1 parent aba00ae commit d543fb9
Showing 24 changed files with 763 additions and 67 deletions.
14 changes: 0 additions & 14 deletions docs/docs/modules/indexes/retrievers/chroma-self-query.mdx

This file was deleted.

14 changes: 0 additions & 14 deletions docs/docs/modules/indexes/retrievers/pinecone-self-query.mdx

This file was deleted.

@@ -0,0 +1,10 @@
# Chroma Self Query Retriever

This example shows how to use a self query retriever with a Chroma vector store.

## Usage

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/chroma_self_query.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>
@@ -0,0 +1,10 @@
# HNSWLib Self Query Retriever

This example shows how to use a self query retriever with an HNSWLib vector store.

## Usage

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/hnswlib_self_query.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>
10 changes: 10 additions & 0 deletions docs/docs/modules/indexes/retrievers/self_query/examples/index.mdx
@@ -0,0 +1,10 @@
---
sidebar_label: Examples
hide_table_of_contents: true
---

import DocCardList from "@theme/DocCardList";

# Examples: Self Query Retrievers

<DocCardList />
@@ -0,0 +1,10 @@
# Memory Vector Store Self Query Retriever

This example shows how to use a self query retriever with a basic, in-memory vector store.

## Usage

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/memory_self_query.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>
@@ -0,0 +1,10 @@
# Pinecone Self Query Retriever

This example shows how to use a self query retriever with a Pinecone vector store.

## Usage

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/pinecone_self_query.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>
16 changes: 16 additions & 0 deletions docs/docs/modules/indexes/retrievers/self_query/index.mdx
@@ -0,0 +1,16 @@
---
sidebar_label: Self Query Retrievers
sidebar_position: 1
---

# Self Query Retrievers

Self Query Retrievers have the ability to query themselves.

Specifically, given an arbitrary natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to its underlying vector store.
This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documents, but to also extract filters from the user query on the metadata of stored documents and to execute those filters.

They require a translator class that translates LLM-generated queries into a filter format that the vector store can understand.
If you don't see an example for your vector store in the docs, you can create your own translator by extending the [BaseTranslator](/docs/api/retrievers_self_query/classes/BaseTranslator) abstract class.

The vector store also needs to support filtering on the metadata attributes you want to query on.
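The translation step described above can be sketched in plain TypeScript. This is an illustrative toy under stated assumptions, not the real API: the `Comparison` shape and the `translateToFilter` helper are hypothetical stand-ins for what `BaseTranslator` subclasses such as `FunctionalTranslator` do internally.

```typescript
// Hypothetical shapes for illustration only; the real classes live in
// "langchain/retrievers/self_query" and differ in detail.
type Comparator = "eq" | "ne" | "gt" | "gte" | "lt" | "lte";

interface Comparison {
  attribute: string;
  comparator: Comparator;
  value: string | number;
}

type Metadata = Record<string, string | number | undefined>;

// Turn each LLM-extracted comparison into a predicate over one document's
// metadata, then AND the predicates together into a single filter function.
function translateToFilter(comparisons: Comparison[]) {
  const test = (c: Comparison, metadata: Metadata): boolean => {
    const actual = metadata[c.attribute];
    if (actual === undefined) return false;
    switch (c.comparator) {
      case "eq":
        return actual === c.value;
      case "ne":
        return actual !== c.value;
      default:
        // Ordering comparators only make sense for numbers in this sketch.
        if (typeof actual !== "number" || typeof c.value !== "number") {
          return false;
        }
        if (c.comparator === "gt") return actual > c.value;
        if (c.comparator === "gte") return actual >= c.value;
        if (c.comparator === "lt") return actual < c.value;
        return actual <= c.value; // "lte"
    }
  };
  return (metadata: Metadata) => comparisons.every((c) => test(c, metadata));
}

// "movies rated higher than 8.5" might be parsed by the query chain into:
const filter = translateToFilter([
  { attribute: "rating", comparator: "gt", value: 8.5 },
]);

console.log(filter({ year: 1979, rating: 9.9 })); // true
console.log(filter({ year: 1993, rating: 7.7 })); // false
```

An in-memory store can apply such a predicate directly to each candidate document's metadata, which is why a functional-style translator pairs naturally with stores like HNSWLib and MemoryVectorStore.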
9 changes: 3 additions & 6 deletions examples/src/retrievers/chroma_self_query.ts
@@ -84,8 +84,6 @@ const attributeInfo: AttributeInfo[] = [
 
 /**
  * Next, we instantiate a vector store. This is where we store the embeddings of the documents.
- * We use the Pinecone vector store here, but you can use any vector store you want.
- * At this point we only support Chroma and Pinecone, but we will add more in the future.
  * We also need to provide an embeddings object. This is used to embed the documents.
  */
 const embeddings = new OpenAIEmbeddings();
@@ -102,10 +100,9 @@ const selfQueryRetriever = await SelfQueryRetriever.fromLLM({
 /**
  * We need to create a basic translator that translates the queries into a
  * filter format that the vector store can understand. We provide a basic translator
- * translator here (which works for Chroma and Pinecone), but you can create
- * your own translator by extending BaseTranslator abstract class. Note that the
- * vector store needs to support filtering on the metadata attributes you want to
- * query on.
+ * translator here, but you can create your own translator by extending BaseTranslator
+ * abstract class. Note that the vector store needs to support filtering on the metadata
+ * attributes you want to query on.
  */
 structuredQueryTranslator: new BasicTranslator(),
 });
124 changes: 124 additions & 0 deletions examples/src/retrievers/hnswlib_self_query.ts
@@ -0,0 +1,124 @@
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { AttributeInfo } from "langchain/schema/query_constructor";
import { Document } from "langchain/document";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { FunctionalTranslator } from "langchain/retrievers/self_query/functional";
import { OpenAI } from "langchain/llms/openai";

/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];

/**
* Next, we define the attributes we want to be able to query on.
* In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];

/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const embeddings = new OpenAIEmbeddings();
const llm = new OpenAI();
const documentContents = "Brief summary of a movie";
const vectorStore = await HNSWLib.fromDocuments(docs, embeddings);
const selfQueryRetriever = await SelfQueryRetriever.fromLLM({
llm,
vectorStore,
documentContents,
attributeInfo,
/**
* We need to use a translator that translates the queries into a
* filter format that the vector store can understand. We provide a basic
* translator here, but you can create your own translator by extending BaseTranslator
* abstract class. Note that the vector store needs to support filtering on the metadata
* attributes you want to query on.
*/
structuredQueryTranslator: new FunctionalTranslator(),
});

/**
* Now we can query the vector store.
* We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?".
* We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?".
* The retriever will automatically convert these questions into queries that can be used to retrieve documents.
*/
const query1 = await selfQueryRetriever.getRelevantDocuments(
"Which movies are less than 90 minutes?"
);
const query2 = await selfQueryRetriever.getRelevantDocuments(
"Which movies are rated higher than 8.5?"
);
const query3 = await selfQueryRetriever.getRelevantDocuments(
"Which movies are directed by Greta Gerwig?"
);
const query4 = await selfQueryRetriever.getRelevantDocuments(
"Which movies are either comedy or drama and are less than 90 minutes?"
);
console.log(query1, query2, query3, query4);
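For a sense of what sits between the natural-language question and the final filter, here is a hedged sketch of a nested structured query for the last question above. The `Node` shapes and the `evaluate` helper are hypothetical illustrations of the idea, not the actual classes emitted by the query-constructing chain.

```typescript
// Hypothetical structured-query shapes, loosely modeled on what the
// query-constructing chain emits; the real implementation differs in detail.
type Comparison = {
  comparator: "eq" | "lt" | "gt";
  attribute: string;
  value: string | number;
};
type Operation = { operator: "and" | "or"; args: Node[] };
type Node = Comparison | Operation;

const isOperation = (n: Node): n is Operation => "operator" in n;

// "Which movies are either comedy or drama and are less than 90 minutes?"
const structuredQuery: Node = {
  operator: "and",
  args: [
    {
      operator: "or",
      args: [
        { comparator: "eq", attribute: "genre", value: "comedy" },
        { comparator: "eq", attribute: "genre", value: "drama" },
      ],
    },
    { comparator: "lt", attribute: "length", value: 90 },
  ],
};

type Metadata = Record<string, string | number | undefined>;

// FunctionalTranslator-style evaluation: recursively reduce the query tree
// to a boolean against a single document's metadata.
function evaluate(node: Node, metadata: Metadata): boolean {
  if (isOperation(node)) {
    const results = node.args.map((arg) => evaluate(arg, metadata));
    return node.operator === "and"
      ? results.every(Boolean)
      : results.some(Boolean);
  }
  const actual = metadata[node.attribute];
  if (actual === undefined) return false;
  if (node.comparator === "eq") return actual === node.value;
  // Ordering comparators only make sense for numbers in this sketch.
  if (typeof actual !== "number" || typeof node.value !== "number") {
    return false;
  }
  return node.comparator === "lt" ? actual < node.value : actual > node.value;
}

console.log(evaluate(structuredQuery, { genre: "comedy", length: 85 })); // true
console.log(evaluate(structuredQuery, { genre: "drama", length: 120 })); // false
```

A store that can only run predicates like this one over metadata (rather than a native filter DSL) is exactly the case the functional translator targets.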
124 changes: 124 additions & 0 deletions examples/src/retrievers/memory_self_query.ts
@@ -0,0 +1,124 @@
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { AttributeInfo } from "langchain/schema/query_constructor";
import { Document } from "langchain/document";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { FunctionalTranslator } from "langchain/retrievers/self_query/functional";
import { OpenAI } from "langchain/llms/openai";

/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];

/**
* Next, we define the attributes we want to be able to query on.
* In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];

/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const embeddings = new OpenAIEmbeddings();
const llm = new OpenAI();
const documentContents = "Brief summary of a movie";
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);
const selfQueryRetriever = await SelfQueryRetriever.fromLLM({
llm,
vectorStore,
documentContents,
attributeInfo,
/**
* We need to use a translator that translates the queries into a
* filter format that the vector store can understand. We provide a basic
* translator here, but you can create your own translator by extending BaseTranslator
* abstract class. Note that the vector store needs to support filtering on the metadata
* attributes you want to query on.
*/
structuredQueryTranslator: new FunctionalTranslator(),
});

/**
* Now we can query the vector store.
* We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?".
* We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?".
* The retriever will automatically convert these questions into queries that can be used to retrieve documents.
*/
const query1 = await selfQueryRetriever.getRelevantDocuments(
"Which movies are less than 90 minutes?"
);
const query2 = await selfQueryRetriever.getRelevantDocuments(
"Which movies are rated higher than 8.5?"
);
const query3 = await selfQueryRetriever.getRelevantDocuments(
"Which movies are directed by Greta Gerwig?"
);
const query4 = await selfQueryRetriever.getRelevantDocuments(
"Which movies are either comedy or drama and are less than 90 minutes?"
);
console.log(query1, query2, query3, query4);
9 changes: 3 additions & 6 deletions examples/src/retrievers/pinecone_self_query.ts
@@ -85,8 +85,6 @@ const attributeInfo: AttributeInfo[] = [
 
 /**
  * Next, we instantiate a vector store. This is where we store the embeddings of the documents.
- * We use the Pinecone vector store here, but you can use any vector store you want.
- * At this point we only support Chroma and Pinecone, but we will add more in the future.
  * We also need to provide an embeddings object. This is used to embed the documents.
  */
 if (
@@ -120,10 +118,9 @@ const selfQueryRetriever = await SelfQueryRetriever.fromLLM({
 /**
  * We need to create a basic translator that translates the queries into a
  * filter format that the vector store can understand. We provide a basic translator
- * translator here (which works for Chroma and Pinecone), but you can create
- * your own translator by extending BaseTranslator abstract class. Note that the
- * vector store needs to support filtering on the metadata attributes you want to
- * query on.
+ * translator here, but you can create your own translator by extending BaseTranslator
+ * abstract class. Note that the vector store needs to support filtering on the metadata
+ * attributes you want to query on.
  */
 structuredQueryTranslator: new BasicTranslator(),
 });