This guide explains how to reproduce experiments with OpenAI-ada2 embeddings on the MS MARCO passage ranking task. In these experiments, we use pre-encoded queries (i.e., cached results of query embeddings).
Let's start off by downloading the corpus. To be clear, the "corpus" here refers to the embedding vectors generated by OpenAI's ada2 embedding endpoint.
Download the tarball containing the embedding vectors and unpack it into collections/:
wget https://rgw.cs.uwaterloo.ca/pyserini/data/msmarco-passage-openai-ada2.tar -P collections/
tar xvf collections/msmarco-passage-openai-ada2.tar -C collections/
The tarball is 109 GB and has an MD5 checksum of a4d843d522ff3a3af7edbee789a63402.
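To sanity-check the download before unpacking, verify the checksum (a quick check, assuming md5sum is available on your system):

# Should print a4d843d522ff3a3af7edbee789a63402
md5sum collections/msmarco-passage-openai-ada2.tar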
Indexing is a bit tricky because the HNSW implementation in Lucene restricts vectors to 1024 dimensions, which is not sufficient for OpenAI's 1536-dimensional embeddings.
This issue is described here.
The resolution is to make vector dimensions configurable on a per-Codec basis, as in this patch in Lucene.
However, as of early August 2023, there is no public release of Lucene that has these features folded in, which means that no public release of Lucene can directly index OpenAI's ada2 embedding vectors.
However, we were able to hack around this limitation in this pull request. Our workaround is incredibly janky, which is why we're leaving it on a branch and not merging it into trunk. The sketch of the solution is as follows: we copy relevant source files from Lucene directly into our source tree, and when we build the fatjar, the class files of our "local versions" take precedence, and hence override the vector size limitations.
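Concretely, pulling the branch and building the fatjar might look something like the sketch below. The branch name is a placeholder (substitute the actual branch from the pull request above), and we assume the standard Maven build, which packages the fatjar under target/:

# Placeholder branch name: substitute the branch from the pull request above.
git fetch origin
git checkout <branch-with-vector-size-workaround>
# Build the fatjar (tests skipped to save time); it ends up under target/.
mvn clean package -Dmaven.test.skip=true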
With the fatjar built from that branch, we can index with the following command:
java -cp target/anserini-0.21.1-SNAPSHOT-fatjar.jar io.anserini.index.IndexHnswDenseVectors \
-collection JsonDenseVectorCollection \
-input collections/msmarco-passage-openai-ada2 \
-index indexes/lucene-hnsw.msmarco-passage-openai-ada2/ \
-generator LuceneDenseVectorDocumentGenerator \
-threads 16 -M 16 -efC 100 \
>& logs/log.msmarco-passage-openai-ada2 &
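Since the command runs in the background and redirects its output, progress can be monitored by tailing the log:

tail -f logs/log.msmarco-passage-openai-ada2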
Note that we're not using target/appassembler/bin/IndexHnswDenseVectors; instead, we rely directly on the fatjar.
The indexing job takes around three hours on our orca server.
Upon completion, we should have an index with 8,841,823 documents.
Other than the indexing trick, retrieval and evaluation are straightforward.
Topics and qrels are stored here, which is linked to the Anserini repo as a submodule.
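If tools/ is empty (e.g., after a fresh clone), initialize the submodule first:

git submodule update --init --recursive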
After indexing has completed, you should be able to perform retrieval as follows using HNSW indexes, replacing {SETTING} with the desired setting out of [msmarco-passage.dev-subset.openai-ada2, dl19-passage.openai-ada2, dl20-passage.openai-ada2, dl19-passage.openai-ada2-hyde, dl20-passage.openai-ada2-hyde]:
target/appassembler/bin/SearchHnswDenseVectors \
-index indexes/lucene-hnsw.msmarco-passage-openai-ada2/ \
-topics tools/topics-and-qrels/topics.{SETTING}.jsonl.gz \
-topicreader JsonIntVector \
-output runs/run.{SETTING}.txt \
-querygenerator VectorQueryGenerator -topicfield vector -threads 16 -hits 1000 -efSearch 1000 &
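For example, with {SETTING} set to msmarco-passage.dev-subset.openai-ada2, the command expands to:

target/appassembler/bin/SearchHnswDenseVectors \
-index indexes/lucene-hnsw.msmarco-passage-openai-ada2/ \
-topics tools/topics-and-qrels/topics.msmarco-passage.dev-subset.openai-ada2.jsonl.gz \
-topicreader JsonIntVector \
-output runs/run.msmarco-passage.dev-subset.openai-ada2.txt \
-querygenerator VectorQueryGenerator -topicfield vector -threads 16 -hits 1000 -efSearch 1000 &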
Evaluation can be performed using trec_eval. For msmarco-passage.dev-subset.openai-ada2:
tools/eval/trec_eval.9.0.4/trec_eval -c -M 10 -m recip_rank tools/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt runs/run.msmarco-passage.dev-subset.openai-ada2.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt runs/run.msmarco-passage.dev-subset.openai-ada2.txt
Otherwise, set {QRELS} to dl19-passage or dl20-passage according to the {SETTING} and run:
tools/eval/trec_eval.9.0.4/trec_eval -c -l 2 -m map tools/topics-and-qrels/qrels.{QRELS}.txt runs/run.{SETTING}.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.{QRELS}.txt runs/run.{SETTING}.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -l 2 -m recall.1000 tools/topics-and-qrels/qrels.{QRELS}.txt runs/run.{SETTING}.txt
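For instance, for dl19-passage.openai-ada2-hyde, {QRELS} is dl19-passage:

tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.dl19-passage.openai-ada2-hyde.txt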
With the above commands, you should be able to reproduce the following results:
# msmarco-passage.dev-subset.openai-ada2
recip_rank all 0.3434
recall_1000 all 0.9841
# dl19-passage.openai-ada2
map all 0.4786
ndcg_cut_10 all 0.7035
recall_1000 all 0.8625
# dl20-passage.openai-ada2
map all 0.4771
ndcg_cut_10 all 0.6759
recall_1000 all 0.8705
# dl19-passage.openai-ada2-hyde
map all 0.5124
ndcg_cut_10 all 0.7163
recall_1000 all 0.8968
# dl20-passage.openai-ada2-hyde
map all 0.4938
ndcg_cut_10 all 0.6666
recall_1000 all 0.8919
Note that due to the non-deterministic nature of HNSW indexing, results may differ slightly across experimental runs. Nevertheless, scores are generally stable to the third digit after the decimal point.