Merge pull request stanfordnlp#6 from stanfordnlp/main
Merge from main
Anindyadeep authored Jun 12, 2024
2 parents da0958b + a009cfd commit fded0f5
Showing 9 changed files with 30 additions and 27 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -72,11 +72,11 @@ Or open our intro notebook in Google Colab: [<img align="center" src="https://co

By default, DSPy installs the latest `openai` from pip. However, if you install an old version from before OpenAI changed their API (`openai~=0.28.1`), the library will use that just fine. Both are supported.

For the optional (alphabetically sorted) [Chromadb](https://github.com/chroma-core/chroma), [Qdrant](https://github.com/qdrant/qdrant), [Marqo](https://github.com/marqo-ai/marqo), Pinecone, [Snowflake](https://github.com/snowflakedb/snowpark-python) [Weaviate](https://github.com/weaviate/weaviate),
For the optional (alphabetically sorted) [Chromadb](https://github.com/chroma-core/chroma), [Groq](https://github.com/groq/groq-python), [Qdrant](https://github.com/qdrant/qdrant), [Marqo](https://github.com/marqo-ai/marqo), Pinecone, [Snowflake](https://github.com/snowflakedb/snowpark-python) [Weaviate](https://github.com/weaviate/weaviate),
or [Milvus](https://github.com/milvus-io/milvus) retrieval integration(s), include the extra(s) below:

```
pip install dspy-ai[chromadb] # or [qdrant] or [marqo] or [mongodb] or [pinecone] or [snowflake] or [weaviate] or [milvus]
pip install dspy-ai[chromadb] # or [groq] or [qdrant] or [marqo] or [mongodb] or [pinecone] or [snowflake] or [weaviate] or [milvus]
```

## 2) Documentation
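As a point of reference for the installation notes above, a minimal sketch of configuring DSPy against OpenAI (not part of this diff; the `dspy.OpenAI` client and the model name are assumptions based on the DSPy API of this period):

```python
import dspy

# Hedged sketch: configure DSPy's default LM. Per the README above, this
# works with either the legacy (`openai~=0.28.1`) or the current client.
turbo = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=turbo)
```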
4 changes: 2 additions & 2 deletions docs/api/functional/dspy_cot.md
@@ -8,7 +8,7 @@ sidebar_position: 4

#### `def cot(func) -> dspy.Module`

The `@cot` decorator is used to create a Chain of Thoughts module based on the provided function. It automatically generates a `dspy.TypedPredictor` and from the function's type annotations and docstring. Similar to predictor, but adds a "Reasoning" output field to capture the model's step-by-step thinking.
The `@cot` decorator is used to create a Chain of Thoughts module based on the provided function. It automatically generates a `dspy.TypedPredictor` from the function's type annotations and docstring. Similar to predictor, but adds a "Reasoning" output field to capture the model's step-by-step thinking.

* **Input**: Function with input parameters and return type annotation.
* **Output**: A dspy.Module instance capable of making predictions.
@@ -27,4 +27,4 @@ def generate_answer(self, context: list[str], question) -> str:
    pass

generate_answer(context=context, question=question)
```
```
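For context, a minimal sketch of the decorator this page documents, assuming an LM has already been configured via `dspy.settings.configure`; the function name and docstring below are illustrative, not part of this diff:

```python
from dspy.functional import cot

# Hedged sketch: @cot builds a TypedPredictor from the annotations and
# docstring, adding a "Reasoning" output field for step-by-step thinking.
@cot
def classify_sentiment(text: str) -> str:
    """Classify the sentiment of the text as 'positive' or 'negative'."""
    pass

# prediction = classify_sentiment(text="I love this library!")
```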
2 changes: 1 addition & 1 deletion docs/api/local_language_model_clients/Ollama.md
@@ -29,7 +29,7 @@ Here is the list of other models you can download:

Run model: `ollama run`

You can test a model by running the model with the `ollama run` command.
You need to start the model server with the `ollama run` command.

```bash
# run mistral
ollama run mistral
```
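Once the server is running, a minimal sketch of pointing DSPy at it (an assumption based on the `dspy.OllamaLocal` client of this DSPy version, not part of this diff):

```python
import dspy

# Hedged sketch: assumes `ollama run mistral` is serving on the default
# local endpoint (http://localhost:11434).
mistral = dspy.OllamaLocal(model="mistral")
dspy.settings.configure(lm=mistral)
```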
4 changes: 2 additions & 2 deletions docs/docs/cheatsheet.md
@@ -280,7 +280,7 @@ fewshot_optimizer = BootstrapFewShot(metric=your_defined_metric, max_bootstrappe
your_dspy_program_compiled = fewshot_optimizer.compile(student = your_dspy_program, trainset=trainset)
```

#### Compiling a compiled program - bootstrapping a bootstraped program
#### Compiling a compiled program - bootstrapping a bootstrapped program

```python
your_dspy_program_compiledx2 = teleprompter.compile(
```
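The snippet above is truncated by the collapsed diff; a hedged reconstruction of the pattern, assuming `teleprompter` is the BootstrapFewShot optimizer defined earlier in the cheatsheet and that the compiled program is passed as `teacher` to seed the second bootstrapping round (argument names are assumptions, not shown in this diff):

```python
# Hedged sketch: re-compile with the already-compiled program as teacher,
# so the second round bootstraps from its demonstrations.
your_dspy_program_compiledx2 = teleprompter.compile(
    student=your_dspy_program,
    teacher=your_dspy_program_compiled,
    trainset=trainset,
)
```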
@@ -363,7 +363,7 @@ from dspy.teleprompt import COPRO

eval_kwargs = dict(num_threads=16, display_progress=True, display_table=0)

copro_teleprompter = COPRO(prompt_model=model_to_generate_prompts, task_model=model_that_solves_task, metric=your_defined_metric, breadth=num_new_prompts_generated, depth=times_to_generate_prompts, init_temperature=prompt_generation_temperature, verbose=False, log_dir=logging_directory)
copro_teleprompter = COPRO(prompt_model=model_to_generate_prompts, metric=your_defined_metric, breadth=num_new_prompts_generated, depth=times_to_generate_prompts, init_temperature=prompt_generation_temperature, verbose=False)

compiled_program_optimized_signature = copro_teleprompter.compile(your_dspy_program, trainset=trainset, eval_kwargs=eval_kwargs)
```
2 changes: 1 addition & 1 deletion docs/docs/faqs.md
@@ -81,7 +81,7 @@ Exporting DSPy programs is simply saving them as highlighted above!

- **How do I search my own data?**

Open source libraries such as [RAGautouille](https://github.com/bclavie/ragatouille) enable you to search for your own data through advanced retrieval models like ColBERT with tools to embdeed and index documents. Feel free to integrate such libraries to create searchable datasets while developing your DSPy programs!
Open source libraries such as [RAGatouille](https://github.com/bclavie/ragatouille) enable you to search for your own data through advanced retrieval models like ColBERT with tools to embed and index documents. Feel free to integrate such libraries to create searchable datasets while developing your DSPy programs!

- **How do I turn off the cache? How do I export the cache?**

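As a hedged illustration of the answer above (not part of this diff), indexing and searching your own documents with RAGatouille might look like the following; the checkpoint name and sample document are assumptions:

```python
from ragatouille import RAGPretrainedModel

# Hedged sketch: embed and index documents with ColBERTv2, then search.
rag = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
rag.index(
    collection=["DSPy separates program logic from prompts and weights."],
    index_name="my_docs",
)
print(rag.search("What does DSPy separate?", k=1))
```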
23 changes: 9 additions & 14 deletions dsp/modules/lm.py
@@ -46,13 +46,14 @@ def inspect_history(self, n: int = 1, skip: int = 0):
            prompt = x["prompt"]

            if prompt != last_prompt:
                if (
                    provider == "clarifai"
                    or provider == "google"
                    or provider == "groq"
                    or provider == "Bedrock"
                    or provider == "Sagemaker"
                    or provider == "premai"
                if provider in (
                    "clarifai",
                    "cloudflare",
                    "google",
                    "groq",
                    "Bedrock",
                    "Sagemaker",
                    "premai",
                ):
                    printed.append((prompt, x["response"]))
                elif provider == "anthropic":
@@ -66,8 +67,6 @@ def inspect_history(self, n: int = 1, skip: int = 0):
                    printed.append((prompt, x["response"].text))
                elif provider == "mistral":
                    printed.append((prompt, x["response"].choices))
                elif provider == "cloudflare":
                    printed.append((prompt, [x["response"]]))
                elif provider == "ibm":
                    printed.append((prompt, x))
                else:
@@ -87,12 +86,10 @@ def inspect_history(self, n: int = 1, skip: int = 0):
            printing_value += prompt

            text = ""
            if provider == "cohere" or provider == "Bedrock" or provider == "Sagemaker":
            if provider in ("cohere", "Bedrock", "Sagemaker", "clarifai", "claude", "ibm", "premai"):
                text = choices
            elif provider == "openai" or provider == "ollama":
                text = " " + self._get_choice_text(choices[0]).strip()
            elif provider == "clarifai" or provider == "claude":
                text = choices
            elif provider == "groq":
                text = " " + choices
            elif provider == "google":
@@ -101,8 +98,6 @@ def inspect_history(self, n: int = 1, skip: int = 0):
                text = choices[0].message.content
            elif provider == "cloudflare":
                text = choices[0]
            elif provider == "ibm" or provider == "premai":
                text = choices
            else:
                text = choices[0]["text"]
            printing_value += self.print_green(text, end="")
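For context, the method being refactored here is the one users call to debug prompts; a minimal usage sketch (the client and model name are assumptions, not part of this diff):

```python
import dspy

# Hedged sketch: after running a program, print the last recorded
# prompt/completion pair via the method patched above.
lm = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=lm)
# ... run a DSPy program ...
lm.inspect_history(n=1)
```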
2 changes: 1 addition & 1 deletion dspy/predict/multi_chain_comparison.py
@@ -42,7 +42,7 @@ def forward(self, completions, **kwargs):
f"«I'm trying to {rationale} I'm not sure but my prediction is {answer}»",
)

assert len(attempts) == self.M, len(attempts)
assert len(attempts) == self.M, f"The number of attempts ({len(attempts)}) doesn't match the expected number M ({self.M}). Please set the correct value for M when initializing MultiChainComparison."

kwargs = {
**{
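To see where the improved assertion fires, a minimal sketch of initializing `MultiChainComparison` with a matching `M` (the signature string and completion variables are assumptions):

```python
import dspy

# Hedged sketch: M must equal the number of completions passed to forward.
compare = dspy.MultiChainComparison("question -> answer", M=3)
# final = compare(completions=[c1, c2, c3], question="...")
```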
15 changes: 11 additions & 4 deletions dspy/teleprompt/bootstrap.py
@@ -112,10 +112,17 @@ def _prepare_predictor_mappings(self):

        for (name1, predictor1), (name2, predictor2) in zip(student.named_predictors(), teacher.named_predictors()):
            assert name1 == name2, "Student and teacher must have the same program structure."
            assert predictor1.signature.equals(
                predictor2.signature,
            ), (f"Student and teacher must have the same signatures. "
                f"{type(predictor1.signature)} != {type(predictor2.signature)}"
            if hasattr(predictor1.signature, "equals"):
                assert predictor1.signature.equals(
                    predictor2.signature,
                ), (f"Student and teacher must have the same signatures. "
                    f"{type(predictor1.signature)} != {type(predictor2.signature)}"
                )
            else:
                # fallback in case .equals is not implemented (e.g., dsp.Prompt)
                assert predictor1.signature == predictor2.signature, (
                    f"Student and teacher must have the same signatures. "
                    f"{type(predictor1.signature)} != {type(predictor2.signature)}"
                )
            assert id(predictor1) != id(predictor2), "Student and teacher must be different objects."

1 change: 1 addition & 0 deletions setup.py
@@ -34,6 +34,7 @@
"google-vertex-ai": ["google-cloud-aiplatform==1.43.0"],
"snowflake": ["snowflake-snowpark-python"],
"fastembed": ["fastembed"],
"groq": ["groq~=0.8.0"],
},
classifiers=[
"Development Status :: 3 - Alpha",
