To use gen, you will need a valid Gemini API key set in the GEMINI_API_KEY environment variable.
Note
If you don't already have one, go to Google AI Studio to create a key.
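For example, the key can be exported in your current shell session before invoking gen (the key value below is a placeholder, not a real key):

```shell
# Make the key available to gen for this shell session.
# "your-api-key-here" is a placeholder; paste the key created in Google AI Studio.
export GEMINI_API_KEY="your-api-key-here"
```

To persist it across sessions, add the export line to your shell profile (e.g. ~/.bashrc or ~/.zshrc).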
Plain text generation
$ gen dix noms de fleurs
At the command line, you may have to put the prompt inside double quotes to avoid confusing the shell.
$ gen "a scheme function to compute the Levenshtein distance between two strings; only the code"
Pipe content into gen and save it in a text file
$ cat cmd.go | gen what does this code do? | tee report.txt
Obtain the token count
$ gen -t how many tokens would this prompt consume?
Parameterize prompts
$ gen -p a=1 -p b=2 "complete this sentence: replace {a} apple with {a} banana and {b} oranges for a good ..."
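Before the prompt is sent, gen replaces every occurrence of {key} with the corresponding val. Purely as an illustration (gen does this substitution internally; sed is used here only to show the effect):

```shell
# Illustrative only: emulate gen's {key} -> val substitution with sed.
prompt="replace {a} apple with {a} banana and {b} oranges for a good ..."
echo "$prompt" | sed -e 's/{a}/1/g' -e 's/{b}/2/g'
```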
Set a system instruction from stdin and prompt from argument
$ echo "you understand english but always reply in french" | gen -s ten names for flowers
Set a system instruction from file option and prompt from argument
$ gen -s -f french.prompt ten names for flowers
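The french.prompt file is assumed to hold the system instruction as plain text (its contents here are an assumption, modeled on the stdin example above):

```shell
# Write a one-line system instruction into french.prompt.
# The instruction text is an assumed example, not a file shipped with gen.
cat > french.prompt <<'EOF'
you understand english but always reply in french
EOF
```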
Set a system instruction from argument and enter chat
$ gen -c -s you understand english but always answer in German
Attach a file to the prompt and return total token count
$ gen -t -f ../twitter/img/123497680.jpg is this picture showing a face or a logo?
Enter chat mode to generate various SQL statements
$ cat classicmodels.sql | gen -c
Generate an architecture decision record using a parameterized template
$ cat adr.prompt | gen -p certified="certified AWS Solution Architect Professional" -s an architecture decision record to help my organization decide between storage technologies for storing and accessing 10TB worth of time series data - please provide concrete examples and figures where possible
List known Gemini models by invoking tool
$ gen -tool known models
Note
The -tool flag relies on the Gemini API's Function Calling feature, which is in Beta release.
Extract entities from text
$ w3m -dump https://lite.cnn.com/2024/07/27/asia/us-austin-trilateral-japan-south-korea-intl-hnk/index.html | gen -json extract entities
System instruction and prompts as files from the iterated Prisoner's Dilemma paper
$ cat pd_system.prompt | gen -json -s -f pd.prompt
Generate a brief using an adapted version of Ali Abassi's prompt
$ cat brief_system.prompt | gen -c -s -f brief.prompt -p role="Sr. Business Analyst" -p department="ACME Technology Solutions" -p task="create a project brief" -p deliverable="project brief"
Usage: gen [options] <prompt>
Command-line interface to Google Gemini large language models
Requires a valid GEMINI_API_KEY environment variable set.
Content is generated by a prompt and an optional system instruction.
Additionally, supports stdin and .prompt files as valid inputs.
Options:
-V output model details and chat history
details include model name | maxInputTokens | maxOutputTokens | temp | top_p | top_k
-c enter chat mode after content generation
type two consecutive blank lines to exit
not supported on windows when stdin used
-f string
attach file to prompt where string is the path to the file
file with extension .prompt treated as prompt or system instruction
-h show this help message and exit
-json
response in JavaScript Object Notation
-m string
generative model name (default "gemini-1.5-flash")
-p value
zero or more prompt parameter values in format key=val
replaces all occurrences of {key} in prompt with val
-s treat first of stdin, file option or argument as system instruction
-t output total number of tokens
-temp float
changes sampling during response generation [0.0,2.0] (default 1)
-tool
invoke one of the tools {KnownModels,QueryPostgres}
-top_p float
changes how the model selects tokens for generation [0.0,1.0] (default 0.95)
-unsafe
force generation when gen aborts with FinishReasonSafety
-v show version and exit
This project is licensed under the MIT License.