Documentation
ggozad committed Sep 24, 2024
1 parent cc62b9f commit 387261a
Showing 8 changed files with 41 additions and 24 deletions.
5 changes: 4 additions & 1 deletion CHANGES.txt
@@ -1,9 +1,12 @@
Changelog
=========

0.5.3 -
0.6.0 -
------------------

- Add support for tools/function calling.
[ggozad]

- Fix newline insertion in multi-line widget.
[ggozad]

18 changes: 16 additions & 2 deletions README.md
@@ -74,6 +74,21 @@ While Ollama is inferring the next message, you can press <kbd>Esc</kbd> to cancel.

Note that some of the shortcuts may not work in certain contexts, for example pressing <kbd>↑</kbd> while the prompt is in multi-line mode.

### Tools

Since version `0.6.0`, `oterm` supports integration with tools. Tools are special "functions" that can provide the model with external information it does not otherwise have access to.

The following tools are currently supported:

* `date_time` - provides the current date and time in ISO format.
* `current_location` - provides the current location of the user (longitude, latitude, city, region, country). Uses [ipinfo.io](https://ipinfo.io) to determine the location.
* `current_weather` - provides the current weather in the user's location. Uses [OpenWeatherMap](https://openweathermap.org) to determine the weather.
* `shell` - allows you to run shell commands and use the output as input to the model. Obviously this can be dangerous, so use with caution.

The tooling API in Ollama does not currently support streaming. When using tools, you will have to wait for the tools and the model to finish before you see the response.

Note that tool integration is **experimental** and may change in the future. I particularly welcome contributions of new tools, but please bear in mind that any additional dependencies or paid-for API usage should be kept to a minimum.
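Under the hood, a tool callable is essentially just an async Python function whose string result is handed back to the model. As a rough sketch (not `oterm`'s exact implementation), the `date_time` tool could be as small as:

```python
from datetime import datetime


async def date_time() -> str:
    # Return the current date and time in ISO format, to be used
    # as the tool result the model sees.
    return datetime.now().isoformat()
```

The model decides when to invoke the tool based on its declared name and description; the callable itself only has to produce a string.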

### Copy / Paste

It is difficult to properly support copy/paste in terminal applications. You can copy blocks to your clipboard as follows:
@@ -86,10 +101,9 @@ For most terminals there exists a key modifier you can use to click and drag to
* `Gnome Terminal` <kbd>Shift</kbd> key.
* `Windows Terminal` <kbd>Shift</kbd> key.


### Customizing models

When creating a new chat, you may not only select the model, but also customize the `system` instruction as well as the `parameters` (such as context length, seed, temperature etc.) passed to the model. For a list of all supported parameters refer to the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values). Checking the `JSON output` checkbox will force the model to reply in JSON format. Please note that `oterm` will not (yet) pull models for you; use `ollama` to do that. All the models you have pulled or created will be available to `oterm`.
When creating a new chat, you may not only select the model, but also customize the `system` instruction, the `tools` used, and the `parameters` (such as context length, seed, temperature etc.) passed to the model. For a list of all supported parameters refer to the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values). Checking the `JSON output` checkbox will force the model to reply in JSON format. Please note that `oterm` will not (yet) pull models for you; use `ollama` to do that. All the models you have pulled or created will be available to `oterm`.

You can also "edit" the chat to change the system prompt, parameters or format. Note that the model cannot be changed once the chat has started.

Binary file modified screenshots/model_selection.png
8 changes: 4 additions & 4 deletions src/oterm/tools/__init__.py
@@ -10,9 +10,9 @@
)

from oterm.tools.date_time import DateTimeTool, date_time
from oterm.tools.location import LocationTool, get_current_location
from oterm.tools.location import LocationTool, current_location
from oterm.tools.shell import ShellTool, shell_command
from oterm.tools.weather import WeatherTool, get_current_weather
from oterm.tools.weather import WeatherTool, current_weather


class ToolDefinition(TypedDict):
@@ -23,6 +23,6 @@ class ToolDefinition(TypedDict):
available: Sequence[ToolDefinition] = [
    {"tool": DateTimeTool, "callable": date_time},
    {"tool": ShellTool, "callable": shell_command},
    {"tool": LocationTool, "callable": get_current_location},
    {"tool": WeatherTool, "callable": get_current_weather},
    {"tool": LocationTool, "callable": current_location},
    {"tool": WeatherTool, "callable": current_weather},
]
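Adding a tool means appending another `ToolDefinition` entry to `available`, pairing a tool schema with the coroutine that implements it. A self-contained sketch of that pattern (the `Tool` stand-in and the `echo` tool are illustrative, not part of `oterm`):

```python
from typing import Awaitable, Callable, Sequence, TypedDict


# Stand-in for the real tool schema class used by oterm.
class Tool(TypedDict):
    name: str
    description: str


class ToolDefinition(TypedDict):
    tool: Tool
    callable: Callable[..., Awaitable[str]]


async def echo(text: str) -> str:
    # Trivial illustrative tool: return the input unchanged.
    return text


available: Sequence[ToolDefinition] = [
    {
        "tool": {"name": "echo", "description": "Echo the given text."},
        "callable": echo,
    },
]
```

Keeping the schema and the callable together in one registry entry is what lets the LLM client both advertise the tool to the model and dispatch the model's tool call to the right function.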
4 changes: 2 additions & 2 deletions src/oterm/tools/location.py
@@ -7,7 +7,7 @@
LocationTool = Tool(
    type="function",
    function=ToolFunction(
        name="get_current_location",
        name="current_location",
        description="Function to return the current location, city, region, country, latitude, and longitude.",
        parameters=Parameters(
            type="object",
@@ -18,7 +18,7 @@
)


async def get_current_location():
async def current_location():
    async with httpx.AsyncClient() as client:
        try:
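`current_location` resolves the location via [ipinfo.io](https://ipinfo.io), whose JSON response carries the coordinates as a single `loc` string (`"lat,lon"`). A hedged sketch of turning that payload into the separate `latitude`/`longitude` fields the weather tool consumes (the helper name is illustrative, not from `oterm`):

```python
import json


def parse_ipinfo(payload: str) -> dict:
    # ipinfo.io returns e.g. {"city": "Oslo", "region": ..., "country": ...,
    # "loc": "59.91,10.75", ...}; split "loc" into numeric coordinates.
    data = json.loads(payload)
    lat, lon = data.pop("loc").split(",")
    data["latitude"] = float(lat)
    data["longitude"] = float(lon)
    return data
```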
4 changes: 2 additions & 2 deletions src/oterm/tools/weather.py
@@ -8,7 +8,7 @@
WeatherTool = Tool(
    type="function",
    function=ToolFunction(
        name="get_weather_info",
        name="current_weather",
        description="Function to return the current weather for the given location in Standard Units.",
        parameters=Parameters(
            type="object",
@@ -26,7 +26,7 @@
)


async def get_current_weather(latitude: float, longitude: float) -> str:
async def current_weather(latitude: float, longitude: float) -> str:
    async with httpx.AsyncClient() as client:
        try:
            api_key = envConfig.OPEN_WEATHER_MAP_API_KEY
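`current_weather` queries [OpenWeatherMap](https://openweathermap.org) with the key from `envConfig.OPEN_WEATHER_MAP_API_KEY`. A hedged sketch of the request URL it likely builds (endpoint and parameter names follow OpenWeatherMap's public current-weather API; `oterm`'s actual call may differ). The API returns temperatures in Kelvin, which is why the tests subtract 273.15:

```python
from urllib.parse import urlencode

OPENWEATHERMAP_URL = "https://api.openweathermap.org/data/2.5/weather"


def build_weather_url(latitude: float, longitude: float, api_key: str) -> str:
    # Current-weather query; the response's main.temp field is in Kelvin.
    query = urlencode({"lat": latitude, "lon": longitude, "appid": api_key})
    return f"{OPENWEATHERMAP_URL}?{query}"
```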
6 changes: 3 additions & 3 deletions tests/tools/test_location_tool.py
@@ -3,19 +3,19 @@
import pytest

from oterm.ollamaclient import OllamaLLM
from oterm.tools.location import LocationTool, get_current_location
from oterm.tools.location import LocationTool, current_location


@pytest.mark.asyncio
async def test_location_tool():
    llm = OllamaLLM(
        model="mistral-nemo",
        tool_defs=[
            {"tool": LocationTool, "callable": get_current_location},
            {"tool": LocationTool, "callable": current_location},
        ],
    )
    res = await llm.completion(
        "In which city am I currently located? Reply with no other text, just the city."
    )
    current_location = json.loads(await get_current_location()).get("city")
    city = json.loads(await current_location()).get("city")
    assert city in res
20 changes: 10 additions & 10 deletions tests/tools/test_weather_tool.py
@@ -3,18 +3,18 @@
import pytest

from oterm.ollamaclient import OllamaLLM
from oterm.tools.location import LocationTool, get_current_location
from oterm.tools.weather import WeatherTool, get_current_weather
from oterm.tools.location import LocationTool, current_location
from oterm.tools.weather import WeatherTool, current_weather


@pytest.mark.asyncio
async def test_weather():
    llm = OllamaLLM(
        tool_defs=[
            {"tool": WeatherTool, "callable": get_current_weather},
            {"tool": WeatherTool, "callable": current_weather},
        ],
    )
    weather = json.loads(await get_current_weather(latitude=59.2675, longitude=10.4076))
    weather = json.loads(await current_weather(latitude=59.2675, longitude=10.4076))
    temperature = weather.get("main").get("temp") - 273.15

    res = await llm.completion(
@@ -29,15 +29,15 @@ async def test_weather():
async def test_weather_with_location():
    llm = OllamaLLM(
        tool_defs=[
            {"tool": LocationTool, "callable": get_current_location},
            {"tool": WeatherTool, "callable": get_current_weather},
            {"tool": LocationTool, "callable": current_location},
            {"tool": WeatherTool, "callable": current_weather},
        ],
    )
    current_location = json.loads(await get_current_location())
    location = json.loads(await current_location())
    weather = json.loads(
        await get_current_weather(
            latitude=current_location.get("latitude"),
            longitude=current_location.get("longitude"),
        await current_weather(
            latitude=location.get("latitude"),
            longitude=location.get("longitude"),
        )
    )
    temperature = weather.get("main").get("temp") - 273.15
