# Frequently Asked Questions (FAQ)

- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Should I drop method `<X>` and use LLMs for regression?](#should-i-drop-method-x-and-use-llms-for-regression)
- [Why didn't you cite `<X>`?](#why-didnt-you-cite-x)
- [Why didn't you try method `<X>`?](#why-didnt-you-try-method-x)
- [Can you try other datasets?](#can-you-try-other-datasets)
- [How to add a new dataset?](#how-to-add-a-new-dataset)
- [Do you explain why this happens?](#do-you-explain-why-this-happens)
- [Hasn't previous research already demonstrated that Transformers can do regression?](#hasnt-previous-research-already-demonstrated-that-transformers-can-do-regression)
- [What if examples of Friedman #1, #2, #3 are already on the Web?](#what-if-examples-of-friedman-1-2-3-are-already-on-the-web)
- [Why does Linear Regression perform so well in Figure 1, for example, outperforming Gradient Boosting?](#why-does-linear-regression-perform-so-well-in-figure-1-for-example-outperforming-gradient-boosting)
- [How can I try it?](#how-can-i-try-it)

## Should I drop method `<X>` and use LLMs for regression?

No, the message of the paper is not that LLMs are better than all traditional supervised methods (e.g., Gradient Boosting) and that they should be used from now on. Instead, it highlights the surprisingly powerful in-context learning capabilities of pre-trained LLMs like GPT-4 and Claude 3. That is, despite no parameter updates, LLMs can (sometimes) outperform the aforementioned methods, at least in the small-data regime (we tested with at most $500$ examples, as per Appendix O).
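
For intuition, here is a minimal sketch of how (x, y) pairs can be serialized into a prompt that asks the model to complete the output value for a new input. The numbers are made up and this is not necessarily the exact prompt format from the paper; see `data/prompts` for the real prompts.

```python
# Minimal sketch of an in-context regression prompt. The numbers below are
# made up; the exact prompt format used in the paper lives in `data/prompts`.
train_examples = [([0.21, 0.73], 1.84), ([0.55, 0.10], 0.92), ([0.80, 0.64], 2.31)]
test_input = [0.42, 0.37]

lines = []
for x, y in train_examples:
    features = ", ".join(f"x{i + 1}: {v:.2f}" for i, v in enumerate(x))
    lines.append(f"{features}, y: {y:.2f}")

features = ", ".join(f"x{i + 1}: {v:.2f}" for i, v in enumerate(test_input))
lines.append(f"{features}, y:")  # the LLM is asked to complete this value

prompt = "\n".join(lines)
print(prompt)
```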

## Why didn't you cite `<X>`?

Apologies for any oversight. Missing citations will be added in the next revision.


## Why didn't you try method `<X>`?

I am open to trying more methods. I mostly tried some of the most popular ones available in `sklearn`; a sketch of that setup is shown below.
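
As a reference point, here is a minimal sketch of fitting a few popular scikit-learn baselines on a small synthetic dataset; the exact models, hyperparameters, and data splits used in the paper may differ.

```python
# Minimal sketch of fitting a few popular scikit-learn baselines on a small
# synthetic dataset; not the exact setup used in the paper.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_friedman1(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in [LinearRegression(),
              RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0)]:
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{type(model).__name__}: MAE = {mae:.3f}")
```

Trying an additional method mostly amounts to appending another regressor to that list.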

## Can you try other datasets?

Sure.

## How to add a new dataset?

Please refer to [how_to_add_dataset.md](./how_to_add_dataset.md) on how to add a new dataset.

## Do you explain why this happens?

## What if examples of Friedman #1, #2, #3 are already on the Web?

We considered this and discuss it in Appendix N. One approach we used was to wri…

## Why does Linear Regression perform so well in Figure 1, for example, outperforming Gradient Boosting?

Because that dataset is linear. On non-linear datasets (e.g., see the heatmap in this repository), Linear Regression performs worse.
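
To make the contrast concrete, here is an illustrative sketch comparing Linear Regression and Gradient Boosting on a linear versus a non-linear synthetic dataset; these are not the exact datasets from Figure 1.

```python
# Illustrative sketch (not the datasets from Figure 1): Linear Regression is hard
# to beat when the underlying function is linear, and falls behind when it is not.
from sklearn.datasets import make_friedman1, make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

datasets = {
    "linear": make_regression(n_samples=200, n_features=5, noise=1.0, random_state=0),
    "non-linear (Friedman #1)": make_friedman1(n_samples=200, random_state=0),
}

for name, (X, y) in datasets.items():
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    for model in [LinearRegression(), GradientBoostingRegressor(random_state=0)]:
        model.fit(X_train, y_train)
        mae = mean_absolute_error(y_test, model.predict(X_test))
        print(f"{name:>25} | {type(model).__name__}: MAE = {mae:.3f}")
```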

## How can I try it?

The simplest way is to copy-paste some of the examples from `data/prompts`. I also provide links to chats in [data/prompts/README.md](./data/prompts/README.md). Note, however, that I used the API, not the chat interface, for the experiments. I included links to chat conversations because they are easy to share and view.
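
For example, here is a hypothetical sketch of sending one of those prompts through the OpenAI API. The file path and model name are placeholders, and this is not the exact script used for the paper's experiments; it assumes the `openai` package (v1+) is installed and `OPENAI_API_KEY` is set.

```python
# Hypothetical sketch of querying the API with a copied prompt; assumes the
# `openai` package (v1+) is installed and OPENAI_API_KEY is set. Not the exact
# script used for the experiments in the paper.
from openai import OpenAI

client = OpenAI()

# Placeholder path: substitute any prompt file from `data/prompts`.
with open("data/prompts/example_prompt.txt") as f:
    prompt = f.read()

response = client.chat.completions.create(
    model="gpt-4",   # or another model from the paper
    temperature=0,   # near-deterministic decoding for a numeric answer
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # the model's predicted value
```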
