Commit
Merge branch 'main' into fastembed-support
Anush008 authored May 6, 2024
2 parents f47a304 + d7dace6 commit bfdf533
Showing 28 changed files with 1,125 additions and 602 deletions.
13 changes: 9 additions & 4 deletions README.md
@@ -105,6 +105,7 @@ The DSPy documentation is divided into **tutorials** (step-by-step illustration
- Interviews: [Weaviate Podcast in-person](https://www.youtube.com/watch?v=CDung1LnLbY), and you can find 6-7 other remote podcasts on YouTube from a few different perspectives/audiences.
- **Tracing in DSPy** with Arize Phoenix: [Tutorial for tracing your prompts and the steps of your DSPy programs](https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/tracing/dspy_tracing_tutorial.ipynb)
- [DSPy: Not Your Average Prompt Engineering](https://jina.ai/news/dspy-not-your-average-prompt-engineering), why it's crucial for future prompt engineering, and yet why it is challenging for prompt engineers to learn.
- **Tracing & Optimization Tracking in DSPy** with Parea AI: [Tutorial on tracing & evaluating a DSPy RAG program](https://docs.parea.ai/tutorials/dspy-rag-trace-evaluate/tutorial)
### B) Guides
@@ -136,24 +137,28 @@ You can find other examples tweeted by [@lateinteraction](https://twitter.com/la

**Some other examples (not exhaustive, feel free to add more via PR):**


- [DSPy Optimizers Benchmark on a bunch of different tasks, by Michael Ryan](https://github.com/stanfordnlp/dspy/tree/main/testing/tasks)
- [Sophisticated Extreme Multi-Class Classification, IReRa, by Karel D’Oosterlinck](https://github.com/KarelDO/xmc.dspy)
- [Haize Lab's Red Teaming with DSPy](https://blog.haizelabs.com/posts/dspy/) and see [their DSPy code](https://github.com/haizelabs/dspy-redteam)
- Applying DSPy Assertions
- [Long-form Answer Generation with Citations, by Arnav Singhvi](https://colab.research.google.com/github/stanfordnlp/dspy/blob/main/examples/longformqa/longformqa_assertions.ipynb)
- [Generating Answer Choices for Quiz Questions, by Arnav Singhvi](https://colab.research.google.com/github/stanfordnlp/dspy/blob/main/examples/quiz/quiz_assertions.ipynb)
- [Generating Tweets for QA, by Arnav Singhvi](https://colab.research.google.com/github/stanfordnlp/dspy/blob/main/examples/tweets/tweets_assertions.ipynb)
- [Compiling LCEL runnables from LangChain in DSPy](https://github.com/stanfordnlp/dspy/blob/main/examples/tweets/compiling_langchain.ipynb)
- [AI feedback, or writing LM-based metrics in DSPy](https://github.com/stanfordnlp/dspy/blob/main/examples/tweets/tweet_metric.py)
- [DSPy Optimizers Benchmark on a bunch of different tasks, by Michael Ryan](https://github.com/stanfordnlp/dspy/tree/main/testing/tasks)
- [DSPy Optimizers Benchmark on a bunch of different tasks, by Michael Ryan](https://github.com/stanfordnlp/dspy/tree/main/testing/README.md)
- [Indian Languages NLI with gains due to compiling by Saiful Haq](https://github.com/saifulhaq95/DSPy-Indic/blob/main/indicxlni.ipynb)
- [Sophisticated Extreme Multi-Class Classification, IReRa, by Karel D’Oosterlinck](https://github.com/KarelDO/xmc.dspy)
- [DSPy on BIG-Bench Hard Example, by Chris Levy](https://drchrislevy.github.io/posts/dspy/dspy.html)
- [Using Ollama with DSPy for Mistral (quantized) by @jrknox1977](https://gist.github.com/jrknox1977/78c17e492b5a75ee5bbaf9673aee4641)
- [Using DSPy, "The Unreasonable Effectiveness of Eccentric Automatic Prompts" (paper) by VMware's Rick Battle & Teja Gollapudi, and interview at TheRegister](https://www.theregister.com/2024/02/22/prompt_engineering_ai_models/)
- [Using DSPy, "The Unreasonable Effectiveness of Eccentric Automatic Prompts" (paper) by VMware's Rick Battle & Teja Gollapudi](https://arxiv.org/abs/2402.10949), and [interview at TheRegister](https://www.theregister.com/2024/02/22/prompt_engineering_ai_models/)
- [Optimizing Performance of Open Source LM for Text-to-SQL using DSPy and vLLM, by Juan Ovalle](https://github.com/jjovalle99/DSPy-Text2SQL)
- Typed DSPy (contributed by [@normal-computing](https://github.com/normal-computing))
- [Using DSPy to train Gpt 3.5 on HumanEval by Thomas Ahle](https://github.com/stanfordnlp/dspy/blob/main/examples/functional/functional.ipynb)
- [Building a chess playing agent using DSPy by Franck SN](https://medium.com/thoughts-on-machine-learning/building-a-chess-playing-agent-using-dspy-9b87c868f71e)

TODO: Add links to the state-of-the-art results on Theory of Mind (ToM) by Plastic Labs, the results by Haize Labs for Red Teaming with DSPy, and the DSPy pipeline from Replit.

TODO: Add links to the state-of-the-art results by the University of Toronto on Clinical NLP, on Theory of Mind (ToM) by Plastic Labs, and the DSPy pipeline from Replit.

There are also recent cool examples at [Weaviate's DSPy cookbook](https://github.com/weaviate/recipes/tree/main/integrations/dspy) by Connor Shorten. [See tutorial on YouTube](https://www.youtube.com/watch?v=CEuUG4Umfxs).
4 changes: 3 additions & 1 deletion docs/api/language_model_clients/AzureOpenAI.md
@@ -14,6 +14,8 @@ lm = dspy.AzureOpenAI(api_base='...', api_version='2023-12-01-preview', model='g

The constructor initializes the base class `LM` and verifies the provided arguments like the `api_provider`, `api_key`, and `api_base` to set up OpenAI request retrieval through Azure. The `kwargs` attribute is initialized with default values for relevant text generation parameters needed for communicating with the GPT API, such as `temperature`, `max_tokens`, `top_p`, `frequency_penalty`, `presence_penalty`, and `n`.

Azure also requires that the deployment ID of the Azure deployment be provided, using the `deployment_id` argument.
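
A minimal sketch of constructing the client with a deployment, assuming placeholder endpoint, key, and deployment values (substitute your own Azure resource details):

```python
import dspy

# Placeholder Azure values; replace with your resource endpoint, key, and deployment name.
lm = dspy.AzureOpenAI(
    api_base="https://<your-resource>.openai.azure.com/",
    api_version="2023-12-01-preview",
    api_key="<your-azure-openai-key>",
    model="gpt-3.5-turbo",
    deployment_id="<your-deployment-name>",
)
dspy.configure(lm=lm)
```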

```python
class AzureOpenAI(LM):
def __init__(
@@ -53,4 +55,4 @@ After generation, the completions are post-processed based on the `model_type` p
- `**kwargs`: Additional keyword arguments for completion request.

**Returns:**
- `List[Dict[str, Any]]`: List of completion choices.
- `List[Dict[str, Any]]`: List of completion choices.
35 changes: 22 additions & 13 deletions docs/api/modules/ChainOfThought.md
@@ -13,23 +13,20 @@ class ChainOfThought(Predict):

self.activated = activated

signature = self.signature
*keys, last_key = signature.kwargs.keys()

DEFAULT_RATIONALE_TYPE = dsp.Type(prefix="Reasoning: Let's think step by step in order to",
desc="${produce the " + last_key + "}. We ...")

rationale_type = rationale_type or DEFAULT_RATIONALE_TYPE

extended_kwargs = {key: signature.kwargs[key] for key in keys}
extended_kwargs.update({'rationale': rationale_type, last_key: signature.kwargs[last_key]})

self.extended_signature = dsp.Template(signature.instructions, **extended_kwargs)
signature = ensure_signature(self.signature)
*_keys, last_key = signature.output_fields.keys()

rationale_type = rationale_type or dspy.OutputField(
prefix="Reasoning: Let's think step by step in order to",
desc="${produce the " + last_key + "}. We ...",
)

self.extended_signature = signature.prepend("rationale", rationale_type, type_=str)
```

**Parameters:**
- `signature` (_Any_): Signature of predictive model.
- `rationale_type` (_dsp.Type_, _optional_): Rationale type for reasoning steps. Defaults to `None`.
- `rationale_type` (_dspy.OutputField_, _optional_): Rationale type for reasoning steps. Defaults to `None`.
- `activated` (_bool_, _optional_): Flag for activated chain of thought processing. Defaults to `True`.
- `**config` (_dict_): Additional configuration parameters for model.

@@ -64,3 +61,15 @@ pred = generate_answer(question=question)
print(f"Question: {question}")
print(f"Predicted Answer: {pred.answer}")
```

The following example shows how to specify a custom rationale. Here `answer` corresponds to the last key to produce; it may differ in your case.

```python
# Define a custom rationale
rationale_type = dspy.OutputField(
prefix="Reasoning: Let's think step by step in order to",
desc="${produce the answer}. We ...",
)
# Pass the signature and custom rationale to the ChainOfThought module
generate_answer = dspy.ChainOfThought(BasicQA, rationale_type=rationale_type)
```
27 changes: 9 additions & 18 deletions docs/api/modules/ChainOfThoughtWithHint.md
@@ -8,32 +8,23 @@ The constructor initializes the `ChainOfThoughtWithHint` class and sets up its a
class ChainOfThoughtWithHint(Predict):
def __init__(self, signature, rationale_type=None, activated=True, **config):
super().__init__(signature, **config)

self.activated = activated

signature = self.signature
*keys, last_key = signature.kwargs.keys()

DEFAULT_HINT_TYPE = dsp.Type(prefix="Hint:", desc="${hint}")

DEFAULT_RATIONALE_TYPE = dsp.Type(prefix="Reasoning: Let's think step by step in order to",
desc="${produce the " + last_key + "}. We ...")

rationale_type = rationale_type or DEFAULT_RATIONALE_TYPE

extended_kwargs1 = {key: signature.kwargs[key] for key in keys}
extended_kwargs1.update({'rationale': rationale_type, last_key: signature.kwargs[last_key]})
*keys, last_key = signature.fields.keys()
rationale_type = rationale_type or dspy.OutputField(
prefix="Reasoning: Let's think step by step in order to",
desc="${produce the " + last_key + "}. We ...",
)
self.extended_signature1 = self.signature.insert(-2, "rationale", rationale_type, type_=str)

extended_kwargs2 = {key: signature.kwargs[key] for key in keys}
extended_kwargs2.update({'hint': DEFAULT_HINT_TYPE, 'rationale': rationale_type, last_key: signature.kwargs[last_key]})

self.extended_signature1 = dsp.Template(signature.instructions, **extended_kwargs1)
self.extended_signature2 = dsp.Template(signature.instructions, **extended_kwargs2)
DEFAULT_HINT_TYPE = dspy.OutputField()
self.extended_signature2 = self.extended_signature1.insert(-2, "hint", DEFAULT_HINT_TYPE, type_=str)
```

**Parameters:**
- `signature` (_Any_): Signature of predictive model.
- `rationale_type` (_dsp.Type_, _optional_): Rationale type for reasoning steps. Defaults to `None`.
- `rationale_type` (_dspy.OutputField_, _optional_): Rationale type for reasoning steps. Defaults to `None`.
- `activated` (_bool_, _optional_): Flag for activated chain of thought processing. Defaults to `True`.
- `**config` (_dict_): Additional configuration parameters for model.
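
A small usage sketch, assuming a simple question-answering signature; the hint is supplied as an extra keyword argument at call time:

```python
import dspy

class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")

# Declare the module with the signature, then call it with a question and a hint.
generate_answer = dspy.ChainOfThoughtWithHint(BasicQA)
pred = generate_answer(
    question="What is the color of the sky?",
    hint="It is what you often see on a clear, sunny day.",
)
print(pred.answer)
```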

2 changes: 1 addition & 1 deletion docs/docs/building-blocks/1-language_models.md
@@ -10,7 +10,7 @@ Let's first make sure you can set up your language model. DSPy support clients f

## Setting up the LM client.

You can just call the constructor that connects to the LM. Then, use `dspy.configure` to declare this as the dexfault LM.
You can just call the constructor that connects to the LM. Then, use `dspy.configure` to declare this as the default LM.

For example, to use OpenAI language models, you can do it as follows.
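
A minimal sketch of that setup (the model name is only an example):

```python
import dspy

# Connect to an OpenAI model, then register it as the default LM for all modules.
turbo = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.configure(lm=turbo)
```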

4 changes: 2 additions & 2 deletions docs/docs/building-blocks/3-modules.md
@@ -6,7 +6,7 @@ sidebar_position: 3

A **DSPy module** is a building block for programs that use LMs.

- Each built-in module abstracts a **prompting technique** (like chain of thought or ReAct). Crucially, they are generalized to handle any [DSPy Signature].
- Each built-in module abstracts a **prompting technique** (like chain of thought or ReAct). Crucially, they are generalized to handle any [DSPy Signature](https://dspy-docs.vercel.app/docs/building-blocks/signatures).

- A DSPy module has **learnable parameters** (i.e., the little pieces comprising the prompt and the LM weights) and can be invoked (called) to process inputs and return outputs.

@@ -17,7 +17,7 @@ A **DSPy module** is a building block for programs that use LMs.

Let's start with the most fundamental module, `dspy.Predict`. Internally, all other DSPy modules are just built using `dspy.Predict`.

We'll assume you are already at least a little familiar with [DSPy signatures], which are declarative specs for defining the behavior of any module we use in DSPy.
We'll assume you are already at least a little familiar with [DSPy signatures](https://dspy-docs.vercel.app/docs/building-blocks/signatures), which are declarative specs for defining the behavior of any module we use in DSPy.

To use a module, we first **declare** it by giving it a signature. Then we **call** the module with the input arguments, and extract the output fields!
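
As a quick sketch of that declare-then-call pattern (the inline `"question -> answer"` signature and the question are just illustrative):

```python
import dspy

# Declare a module with a signature, call it with the input field, then read the output field.
qa = dspy.Predict("question -> answer")
response = qa(question="Who wrote 'Pride and Prejudice'?")
print(response.answer)
```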

2 changes: 1 addition & 1 deletion docs/docs/building-blocks/7-assertions.md
@@ -30,7 +30,7 @@ Specifically, when a constraint is not met:
- Past Output: your model's past output that did not pass the validation_fn
- Instruction: your user-defined feedback message on what went wrong and what possibly to fix

If the error continues past the `max_backtracking_attempts`, then `dspy.Assert` will halt the pipeline execution, altering you with an `dspy.AssertionError`. This ensures your program doesn't continue executing with “bad” LM behavior and immediately highlights sample failure outputs for user assessment.
If the error continues past the `max_backtracking_attempts`, then `dspy.Assert` will halt the pipeline execution, alerting you with a `dspy.AssertionError`. This ensures your program doesn't continue executing with “bad” LM behavior and immediately highlights sample failure outputs for user assessment.

- **dspy.Suggest vs. dspy.Assert**: `dspy.Suggest` on the other hand offers a softer approach. It maintains the same retry backtracking as `dspy.Assert` but instead serves as a gentle nudger. If the model outputs cannot pass the model constraints after the `max_backtracking_attempts`, `dspy.Suggest` will log the persistent failure and continue execution of the program on the rest of the data. This ensures the LM pipeline works in a "best-effort" manner without halting execution.
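
As a rough sketch, a soft constraint inside a module's `forward` might look like this (the tweet-length check and field names are hypothetical):

```python
import dspy

class TweetQA(dspy.Module):
    def __init__(self):
        super().__init__()
        self.generate = dspy.ChainOfThought("question -> tweet")

    def forward(self, question):
        pred = self.generate(question=question)
        # Soft constraint: on failure, backtrack with this feedback, but never halt the run.
        dspy.Suggest(
            len(pred.tweet) <= 280,
            "The tweet should be at most 280 characters.",
        )
        # A hard constraint would use dspy.Assert instead and raise after max_backtracking_attempts.
        return pred
```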

4 changes: 2 additions & 2 deletions docs/docs/building-blocks/8-typed_predictors.md
@@ -67,8 +67,8 @@ prediction = predictor(input=doc_query_pair)
Let's see the output and its type.

```python
answer = prediction.answer
confidence_score = prediction.confidence
answer = prediction.output.answer
confidence_score = prediction.output.confidence

print(f"Prediction: {prediction}\n\n")
print(f"Answer: {answer}, Answer Type: {type(answer)}")
2 changes: 1 addition & 1 deletion docs/docs/cheatsheet.md
@@ -233,7 +233,7 @@ class FactJudge(dspy.Signature):
context = dspy.InputField(desc="Context for the prediciton")
question = dspy.InputField(desc="Question to be answered")
answer = dspy.InputField(desc="Answer for the question")
factually_correct = dspy.OutputField(desc="Is the answer factually correct based on the context?", prefix="Facual[Yes/No]:")
factually_correct = dspy.OutputField(desc="Is the answer factually correct based on the context?", prefix="Factual[Yes/No]:")

judge = dspy.ChainOfThought(FactJudge)

10 changes: 6 additions & 4 deletions docs/docs/quick-start/minimal-example.mdx
@@ -12,7 +12,7 @@ We make use of the [GSM8K dataset](https://huggingface.co/datasets/gsm8k) and th

## Setup

Before we delve into the example, let's ensure our environment is properly configured. We'll start by importing the necessary modules and configuring our language model:
Before we jump into the example, let's ensure our environment is properly configured. We'll start by importing the necessary modules and configuring our language model:

```python
import dspy
@@ -33,7 +33,7 @@ Let's take a look at what `gsm8k_trainset` and `gsm8k_devset` are:
print(gsm8k_trainset)
```

The `gsm8k_trainset` and `gsm8k_devset` datasets contain a list of Examples with each example having `question` and `answer` field. We'll use these datasets to train and evaluate our model.
The `gsm8k_trainset` and `gsm8k_devset` datasets contain a list of Examples with each example having `question` and `answer` field.
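
For instance, an individual example can be inspected like this (a minimal sketch; the field names follow the description above):

```python
# Each entry is a dspy.Example with `question` and `answer` fields.
example = gsm8k_trainset[0]
print(example.question)
print(example.answer)
```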

## Define the Module

@@ -51,7 +51,7 @@ class CoT(dspy.Module):

## Compile and Evaluate the Model

With our simple program in place, let's move on to optimizing it using the [`BootstrapFewShot`](/api/optimizers/BootstrapFewShot) teleprompter:
With our simple program in place, let's move on to compiling it with the [`BootstrapFewShot`](/api/optimizers/BootstrapFewShot) teleprompter:

```python
from dspy.teleprompt import BootstrapFewShot
@@ -61,9 +61,11 @@ config = dict(max_bootstrapped_demos=4, max_labeled_demos=4)

# Optimize! Use the `gsm8k_metric` here. In general, the metric is going to tell the optimizer how well it's doing.
teleprompter = BootstrapFewShot(metric=gsm8k_metric, **config)
optimized_cot = teleprompter.compile(CoT(), trainset=gsm8k_trainset, valset=gsm8k_devset)
optimized_cot = teleprompter.compile(CoT(), trainset=gsm8k_trainset)
```

Note that BootstrapFewShot is not an optimizing teleprompter, i.e., it simply creates and validates examples for steps of the pipeline (in this case, the chain-of-thought reasoning) but does not optimize the metric. Other teleprompters like `BootstrapFewShotWithRandomSearch` and `MIPRO` will apply direct optimization.
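
As a sketch of what switching to a search-based teleprompter might look like (the parameter values here are illustrative, not tuned):

```python
from dspy.teleprompt import BootstrapFewShotWithRandomSearch

# Same metric as before, but additionally search over several candidate few-shot programs.
teleprompter = BootstrapFewShotWithRandomSearch(
    metric=gsm8k_metric,
    max_bootstrapped_demos=4,
    max_labeled_demos=4,
    num_candidate_programs=8,  # illustrative value
)
optimized_cot = teleprompter.compile(CoT(), trainset=gsm8k_trainset)
```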

## Evaluate

Now that we have a compiled (optimized) DSPy program, let's move to evaluating its performance on the dev dataset.
3 changes: 2 additions & 1 deletion docs/docs/tutorials/other_tutorial.md
@@ -24,4 +24,5 @@ sidebar_position: 99999
- [DSPy webinar with MLOps Learners](https://www.youtube.com/watch?v=im7bCLW2aM4), a bit longer with Q&A.
- Hands-on Overviews of DSPy by the community: [DSPy Explained! by Connor Shorten](https://www.youtube.com/watch?v=41EfOY0Ldkc), [DSPy explained by code_your_own_ai](https://www.youtube.com/watch?v=ycfnKPxBMck), [DSPy Crash Course by AI Bites](https://youtu.be/5-zgASQKkKQ?si=3gnmVouT5_rpk_nu)
- Interviews: [Weaviate Podcast in-person](https://www.youtube.com/watch?v=CDung1LnLbY), and you can find 6-7 other remote podcasts on YouTube from a few different perspectives/audiences.
- **Tracing in DSPy** with Arize Phoenix: [Tutorial for tracing your prompts and the steps of your DSPy programs](https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/tracing/dspy_tracing_tutorial.ipynb)
- **Tracing in DSPy** with Arize Phoenix: [Tutorial for tracing your prompts and the steps of your DSPy programs](https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/tracing/dspy_tracing_tutorial.ipynb)
- **Tracing & Optimization Tracking in DSPy** with Parea AI: [Tutorial on tracing & evaluating a DSPy RAG program](https://docs.parea.ai/tutorials/dspy-rag-trace-evaluate/tutorial)
5 changes: 4 additions & 1 deletion dsp/modules/aws_models.py
@@ -49,6 +49,9 @@ def __init__(
self._max_context_size: int = max_context_size
self._max_new_tokens: int = max_new_tokens

# make it consistent with equivalent LM::max_token
self.kwargs["max_tokens"] = max_new_tokens

self.kwargs = {
**self.kwargs,
**kwargs,
@@ -63,7 +66,7 @@ def _call_model(self, body: str) -> str | list[str]:
"""Call model, get generated input without the formatted prompt."""

def _estimate_tokens(self, text: str) -> int:
return len(text)/CHARS2TOKENS
return len(text) / CHARS2TOKENS

def _extract_input_parameters(
self,
9 changes: 8 additions & 1 deletion dsp/modules/azure_openai.py
@@ -56,11 +56,14 @@ def __init__(
model: str = "gpt-3.5-turbo-instruct",
api_key: Optional[str] = None,
model_type: Literal["chat", "text"] = "chat",
system_prompt: Optional[str] = None,
**kwargs,
):
super().__init__(model)
self.provider = "openai"

self.system_prompt = system_prompt

# Define Client
if OPENAI_LEGACY:
# Assert that all variables are available
Expand Down Expand Up @@ -132,7 +135,11 @@ def basic_request(self, prompt: str, **kwargs):
kwargs = {**self.kwargs, **kwargs}
if self.model_type == "chat":
# caching mechanism requires hashable kwargs
kwargs["messages"] = [{"role": "user", "content": prompt}]
messages = [{"role": "user", "content": prompt}]
if self.system_prompt:
messages.insert(0, {"role": "system", "content": self.system_prompt})

kwargs["messages"] = messages
kwargs = {"stringify_request": json.dumps(kwargs)}
response = chat_request(self.client, **kwargs)
