feat: add new notebook for Gemini 2.0 Flash Thinking mode #1563
Conversation
Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.
Hi @polong-lin and reviewers,
I'm currently reviewing this pull request and will post my detailed review in a few minutes. In the meantime, here's a quick summary to help everyone get up to speed:
This pull request adds a new Jupyter notebook, `intro_gemini_2_0_flash_thinking_mode.ipynb`, to the `generative-ai` repository. The notebook provides a tutorial on using Gemini 2.0 Flash Thinking mode.
Here's a breakdown of the changes:
- **New Notebook:** A complete Jupyter notebook (`gemini/getting-started/intro_gemini_2_0_flash_thinking_mode.ipynb`) has been added. It is approximately 579 lines long and demonstrates how to use Gemini 2.0 Flash Thinking mode for various complex tasks.
- **Content:** The notebook includes sections on installation, authentication (Colab-specific), importing libraries, setting up a Google Cloud project, and several examples showcasing the capabilities of Gemini 2.0 Flash Thinking mode. These examples cover code simplification, geometry problems (using image input), understanding tables from images, generating questions for specific knowledge levels, statistics problems, and mathematical brain teasers.
- **Model Usage:** The notebook explicitly uses the `gemini-2.0-flash-thinking-exp-1219` model and provides comparisons with the standard `gemini-2.0-flash-exp` model to highlight the differences in reasoning and thought processes.
- **Sharing:** The notebook includes links to open it in Google Colab, Google Colab Enterprise, and Vertex AI Workbench, and to view it on GitHub. It also provides share buttons for LinkedIn, Bluesky, X (formerly Twitter), Reddit, and Facebook.
- **Authors:** The notebook credits Guillaume Vernade and Eric Dong as authors.
Disclaimer: Based on the provided information, I believe this is a comprehensive summary. However, I haven't executed the code within the notebook, so I cannot guarantee the functionality of the examples.
Here's a little haiku to lighten the mood:
New code takes its flight,
Thinking mode, a clearer view,
Reviews make it shine.
This pull request introduces a new notebook demonstrating Gemini 2.0 Flash Thinking mode. The notebook is well-structured and provides clear examples of how to use the model. The examples showcase the model's thinking process, which is a valuable addition for understanding and debugging. The use of markdown and HTML for formatting and linking enhances readability. The PR description follows the template and includes a checklist, which is helpful. However, the "Your account is listed in `CODEOWNERS`" item is unchecked. Please ensure this is addressed before merging. Also, the tests and linter have not been run yet. Please run `nox -s format` and ensure they pass. Finally, consider adding a brief mention of this new notebook to the main `README.md` file to increase visibility.
Here's a summary of the style guide elements I referenced in this review (none were explicitly provided, so I'm defaulting to common best practices for Python in Jupyter notebooks):
- Docstrings: Encouraged the use of docstrings to explain the purpose and usage of functions.
- Comments: Recommended adding comments to clarify complex logic or non-obvious code.
- Markdown Formatting: Suggested improvements to markdown formatting for better readability.
"source": [ | ||
"## Overview\n", | ||
"\n", | ||
"Gemini 2.0 Flash with Thinking, is an experimental model that explicitly showcases its thoughts. Built on the speed and performance of Gemini 2.0 Flash, this model is trained to use thoughts in a way that leads to stronger reasoning capabilities.\n", |
Add an inline link to Documentation for this feature: https://cloud.google.com/vertex-ai/generative-ai/docs/thinking-mode
"source": [ | ||
"## Use the Gemini 2.0 Flash Thinking Mode\n", | ||
"\n", | ||
"The following examples are some complex tasks of what the Gemini 2.0 Flash Thinking mode can solve. In each of examples you can try using different models to see how this new model compares to other models. In some cases, you'll still get the good answer from the other models, in that case, re-run it a couple of times and you'll see that Gemini 2.0 Thinking mode is more consistent thanks to its thinking step." |
"The following examples are some complex tasks of what the Gemini 2.0 Flash Thinking mode can solve. In each of examples you can try using different models to see how this new model compares to other models. In some cases, you'll still get the good answer from the other models, in that case, re-run it a couple of times and you'll see that Gemini 2.0 Thinking mode is more consistent thanks to its thinking step." | |
"The following examples are some complex tasks of what the Gemini 2.0 Flash Thinking mode can solve. In each of examples you can try using different models to see how this new model compares to other models. In some cases, you'll still get a good answer from other models; however, on re-runs you'll see that Gemini 2.0 Flash Thinking mode is more consistent because of its thinking step." |
The suggested revision is good. It improves clarity and flow. The phrasing "In some cases, you'll still get a good answer from other models; however, on re-runs you'll see that Gemini 2.0 Flash Thinking mode is more consistent because of its thinking step" is more precise and avoids potential ambiguity.
"source": [ | ||
"response = client.models.generate_content(\n", | ||
" model=MODEL_ID,\n", | ||
" contents=\"How can I simplify this? `(Math.round(radius/pixelsPerMile * 10) / 10).toFixed(1);`\",\n", |
Can you bring this prompt out into a variable with multiline strings to make it easier to read?
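As a rough illustration of this suggestion (the variable name and formatting below are hypothetical, not taken from the notebook), the prompt could be pulled out like this:

```python
# Hypothetical refactor of the inline prompt into a named multiline variable,
# as suggested in the review; PROMPT is an assumed name, not from the notebook.
PROMPT = """\
How can I simplify this?
`(Math.round(radius/pixelsPerMile * 10) / 10).toFixed(1);`
"""

# The call would then read (assuming the notebook's `client` and `MODEL_ID`):
# response = client.models.generate_content(model=MODEL_ID, contents=PROMPT)
print(PROMPT)
```

This keeps the `generate_content` call short and makes longer prompts easier to edit and diff.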
"id": "d6cOmdVPC9nn" | ||
}, | ||
"source": [ | ||
"The model response has multiple parts. While you could use `response.text` to get all of it right away as usual it's actually more interesting to check each of them separately when using the thinking mode.\n", |
"The model response has multiple parts. While you could use `response.text` to get all of it right away as usual it's actually more interesting to check each of them separately when using the thinking mode.\n", | |
"The model response has multiple parts. While you could use `response.text` to get the full text right away, it's actually more interesting to check each of them separately when using the thinking mode.\n", |
},
"outputs": [],
"source": [
"Markdown(response.candidates[0].content.parts[0].text)"
Can we put this in the previous cell? Or add a description of it?
"source": [ | ||
"As a comparison here's what you'd get with the \"classic\" [Gemini 2.0 Flash](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2) model.\n", | ||
"\n", | ||
"Unlike thinking mode, the normal model does not articulate its thoughts and tries to answer right away which can lead to more simpler answers to complex problems." |
"Unlike thinking mode, the normal model does not articulate its thoughts and tries to answer right away which can lead to more simpler answers to complex problems." | |
"Unlike thinking mode, the normal model does not articulate its thoughts and tries to answer right away which can lead to more simple answers to complex problems." |
"!wget https://storage.googleapis.com/generativeai-downloads/images/geometry.png -O geometry.png -q\n", | ||
"\n", | ||
"im = Image.open(\"geometry.png\").resize((256, 256))\n", | ||
"im" |
You don't need to download this image; you can send the URI directly using `Part.from_uri()`, and the image can be displayed using `IPython.display.Image(url)`.
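To sketch what this suggestion amounts to without assuming the SDK is installed, the snippet below hand-builds the REST-style content part that `types.Part.from_uri(...)` represents. The camelCase field names follow the public REST API's JSON shape and are an assumption here, not taken from the notebook.

```python
# Sketch of the reviewer's suggestion: reference the image by URI instead of
# downloading it with wget. This dict mirrors the REST-style request fragment
# that the SDK's Part.from_uri(...) helper stands for (field names assumed).
IMAGE_URI = "https://storage.googleapis.com/generativeai-downloads/images/geometry.png"

image_part = {"fileData": {"fileUri": IMAGE_URI, "mimeType": "image/png"}}

# In the notebook, this would replace the download step, roughly (assumed SDK call):
#   contents=[types.Part.from_uri(file_uri=IMAGE_URI, mime_type="image/png"), prompt]
# and the image can be shown inline with: IPython.display.Image(url=IMAGE_URI)
print(image_part["fileData"]["fileUri"])
```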
},
"outputs": [],
"source": [
"!wget https://storage.googleapis.com/generativeai-downloads/images/nfl.png -O nfl.png -q\n",
Same comment as above about URL
"source": [ | ||
"### **Example 4**: Generating question for a specific level of knowledge\n", | ||
"\n", | ||
"This time, the questions require a few types of knowledge, including what is relevant to the Physics C exam. The questions generated are not the interesting part, but the reasoning to come up with them shows they are not just randomly generated.\n" |
"This time, the questions require a few types of knowledge, including what is relevant to the Physics C exam. The questions generated are not the interesting part, but the reasoning to come up with them shows they are not just randomly generated.\n" | |
"This time, the questions require a few types of knowledge, including what is relevant to the AP Physics C exam. The questions generated are not the interesting part, but the reasoning to come up with them shows they are not just randomly generated.\n" |
Add an in-line link to the pages for the AP Physics C Exam such as https://apcentral.collegeboard.org/courses/ap-physics-c-mechanics/exam
Note: There are actually 2 different AP Physics C exams. The prompt should be adjusted to be about one specific exam.
"source": [ | ||
"response = client.models.generate_content(\n", | ||
" model=MODEL_ID,\n", | ||
" contents=\"Add mathematical operations (additions, substractions, multiplications) to get 746 using these numbers only once: 8, 7, 50, and 4\",\n", |
Similar to previous feedback, I'd recommend putting the prompt in a separate variable.
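As a side note, the brain-teaser prompt can be sanity-checked offline. This brute-force sketch (not from the notebook) evaluates candidate expressions left to right over permutations of the numbers, which is enough to confirm that a solution such as (8 + 7) * 50 - 4 = 746 exists:

```python
import operator
from itertools import permutations, product

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def search(target=746, numbers=(8, 7, 50, 4)):
    """Brute-force a left-to-right +, -, * expression using each number once."""
    for nums in permutations(numbers):
        for ops in product(OPS, repeat=len(nums) - 1):
            value = nums[0]
            for op, n in zip(ops, nums[1:]):
                value = OPS[op](value, n)
            if value == target:
                return nums, ops
    return None

# Finds a left-to-right sequence equivalent to (8 + 7) * 50 - 4.
print(search())
```

Left-to-right evaluation is a simplification (it does not explore all parenthesizations), but it suffices for this particular puzzle.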
…1566) # Description Fixes comments in earlier PR #1563 from @holtskinner --------- Co-authored-by: Holt Skinner <[email protected]>
Description
Thank you for opening a Pull Request!
Before submitting your PR, there are a few things you can do to make sure it goes smoothly:
- Follow the CONTRIBUTING Guide.
- Your account is listed in `CODEOWNERS` for the file(s).
- Run `nox -s format` from the repository root to format.