Commit 7020cd3

Fixed a minor typo in the README
hmoazam committed Apr 3, 2023
1 parent 7861f1d commit 7020cd3
Showing 1 changed file with 1 addition and 1 deletion.
README.md (2 changes: 1 addition & 1 deletion)

@@ -2,7 +2,7 @@

The **DSP** framework provides a programming abstraction for _rapidly building sophisticated AI systems_. It's primarily (but not exclusively) designed for tasks that are knowledge intensive (e.g., answering user questions or researching complex topics).

-You write a **DSP program** in a few lines of code, describing at high level how the problem you'd like to solve should be _decomposed_ into smaller _transformations_. Transformations generate text (by invoking a language model; LM) and/or search for information (by invoking a retrieval model; RM) in high-level steps like `generate a search query to find missing information` or `answer this question using the supplied context`. Our [research paper](https://arxiv.org/abs/2212.14024) show that building NLP systems with **DSP** can easily outperform GPT-3.5 by up to 120%.
+You write a **DSP program** in a few lines of code, describing at high level how the problem you'd like to solve should be _decomposed_ into smaller _transformations_. Transformations generate text (by invoking a language model; LM) and/or search for information (by invoking a retrieval model; RM) in high-level steps like `generate a search query to find missing information` or `answer this question using the supplied context`. Our [research paper](https://arxiv.org/abs/2212.14024) shows that building NLP systems with **DSP** can easily outperform GPT-3.5 by up to 120%.

**DSP** programs invoke LMs in a declarative way: you focus on the _what_ (i.e., the algorithmic design of decomposing the problem) and delegate _how_ the transformations are mapped to LM (or RM) calls to the **DSP** runtime. In particular, **DSP** discourages "prompt engineering", which we view much the same way as hyperparameter tuning in traditional ML: a final and minor step that's best done _after_ building up an effective architecture (and which could be delegated to automatic tuning).

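The changed paragraph describes decomposing a task into small transformations that call a language model (LM) or a retrieval model (RM). For illustration only, here is a minimal Python sketch of that kind of decomposition; the helpers (`call_lm`, `call_rm`, `answer_question`) are hypothetical placeholders for this example, not the DSP API.

```python
# A minimal, illustrative sketch of the LM/RM decomposition described in the
# changed paragraph. The helpers below are hypothetical placeholders for this
# example only; they are NOT part of the DSP library.

def call_lm(prompt: str) -> str:
    """Stand-in for a language model (LM) call; returns a dummy completion."""
    return f"[LM output for: {prompt[:40]}...]"

def call_rm(query: str, k: int = 3) -> list[str]:
    """Stand-in for a retrieval model (RM) call; returns dummy passages."""
    return [f"[passage {i + 1} retrieved for: {query[:40]}...]" for i in range(k)]

def answer_question(question: str) -> str:
    # Transformation 1: generate a search query to find missing information.
    query = call_lm(f"Write a search query for: {question}")
    # Transformation 2: search for supporting passages with the RM.
    passages = call_rm(query)
    # Transformation 3: answer the question using the supplied context.
    context = "\n".join(passages)
    return call_lm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

print(answer_question("Who wrote the DSP research paper?"))
```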
