A CLI tool over the top 1218 Python libraries, used for library Q&A and code generation with all available OpenAI models.
Website | Data Visualizer | PyPI | @fleet_ai
Install the package and run `context` to ask questions about the most up-to-date Python libraries. You will have to provide your OpenAI API key to start a session.

```shell
pip install fleet-context
context
```
If you'd like to run the CLI tool locally, you can clone this repository, `cd` into it, then run:

```shell
pip install -e .
context
```
If you have an existing package that already uses the keyword `context`, you can also activate Fleet Context by running:

```shell
fleet-context
```
You can download any library's embeddings and load them into a dataframe by running:

```python
from context import download_embeddings

df = download_embeddings("langchain")
```

```
100%|███████████████████████████████████████████████████████| 901k/901k [00:00<00:00, 2.64MiB/s]
                                     id                                   dense_embeddings                                           metadata                                      sparse_values
0  91cd9f22-b3b6-49e1-8672-e1e42a1cf766  [-0.014795871, -0.013938751, 0.02374646, -0.02...  {'id': '91cd9f22-b3b6-49e1-8672-e1e42a1cf766',...  {'indices': [4279915734, 3106554626, 771291085...
1  80cd620e-7408-4649-aaa7-3fe3c719b4ed  [-0.0027519625, 0.013772411, 0.0019546314, -0....  {'id': '80cd620e-7408-4649-aaa7-3fe3c719b4ed',...  {'indices': [1497795724, 573857107, 2203090375...
2  87a406ad-e413-42fc-8813-6fa042f80f6a  [-0.022883521, -0.0036436971, 0.0026068306, 0....  {'id': '87a406ad-e413-42fc-8813-6fa042f80f6a',...  {'indices': [1558403699, 640376310, 358389376,...
3  8bdd8dae-8384-414d-87d2-4390ca29d857  [-0.024882555, -0.0041470923, -0.011419726, -0...  {'id': '8bdd8dae-8384-414d-87d2-4390ca29d857',...  {'indices': [1558403699, 3778951566, 274301652...
4  8cc5eb61-317a-4196-8099-51c47ef70406  [-0.036361936, 0.0027855083, -0.013214805, -0....  {'id': '8cc5eb61-317a-4196-8099-51c47ef70406',...  {'indices': [3586802366, 1110127215, 161253108...
```
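Once downloaded, the dense embeddings can be used directly for similarity search. A minimal sketch (not part of the `context` package; `top_k_rows` and the toy data are hypothetical), assuming the dataframe shape shown above with a `dense_embeddings` column of float lists:

```python
import numpy as np
import pandas as pd


def top_k_rows(df: pd.DataFrame, query_embedding: list, k: int = 3) -> pd.DataFrame:
    """Return the k rows whose dense embeddings are most cosine-similar to the query."""
    matrix = np.array(df["dense_embeddings"].tolist())
    query = np.array(query_embedding)
    sims = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))
    # argsort is ascending, so reverse it and take the first k indices
    return df.iloc[np.argsort(sims)[::-1][:k]]


# Toy 2-dimensional data standing in for a real download_embeddings() result;
# real embeddings would come from the same model used to embed your query.
toy = pd.DataFrame({
    "id": ["a", "b", "c"],
    "dense_embeddings": [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]],
})
print(top_k_rows(toy, [1.0, 0.1], k=2)["id"].tolist())  # → ['a', 'c']
```

Note that the dataframe also carries `sparse_values`, so the same rows could feed a hybrid dense/sparse index; the sketch above covers only the dense side.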
You can see a full list of supported libraries & search through them on our website at the bottom of the page.
You can use the `-l` or `--libraries` flag followed by a list of libraries to limit your session to those libraries. Defaults to all. View a list of all supported libraries on our website.

```shell
context -l langchain pydantic openai
```
You can select a different OpenAI model by using `-m` or `--model`. Defaults to `gpt-4`. You can set your model to `gpt-4-1106-preview` (gpt-4-turbo), `gpt-3.5-turbo`, or `gpt-3.5-turbo-16k`.

```shell
context -m gpt-4-1106-preview
```
Local model support is powered by LM Studio. To use local models, you can use `--local` or `-n`:

```shell
context --local
```
You need to download your local model through LM Studio. To do that:

- Download LM Studio. You can find the download link here: https://lmstudio.ai
- Open LM Studio and download your model of choice.
- Click the ↔ icon on the very left sidebar.
- Select your model and click "Start Server".
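Once the server is running, LM Studio exposes an OpenAI-compatible API on localhost, which is what the CLI talks to in `--local` mode. A stdlib-only sketch of such a request (the port `1234`, model name, and endpoint path are assumptions based on LM Studio's defaults, not taken from this project's code):

```python
import json
import urllib.request

# Build an OpenAI-style chat completions request against the local server.
# LM Studio serves whichever model you loaded, so the "model" field is nominal.
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "How do I merge two dataframes in pandas?"}],
    "temperature": 0.7,
}
request = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would return the completion once the
# "Start Server" step above is running; we only build the request here.
print(request.full_url)
```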
The context window defaults to 3000. You can change this by using `--context_window` or `-w`:

```shell
context --local --context_window 4096
```
You can control the number of retrieved chunks by using `-k` or `--k_value` (defaults to 15), and you can toggle whether the model cites its sources by using `-c` or `--cite_sources` (defaults to true).

```shell
context -k 25 -c false
```
We saw a 37-point improvement in `gpt-4` generation scores and a 34-point improvement in `gpt-4-turbo` generation scores across a randomly sampled set of 50 libraries. We attribute the `gpt-4` gains to its lack of knowledge of the most up-to-date versions of libraries, and the `gpt-4-turbo` gains to being supplied with relevant, up-to-date information to generate from.
Check out our visualized data here.
You can download all embeddings here.