Home
This project consists of two parts that work together: the Thistle Gulch Simulation, which runs in a 3D game engine we call the "Runtime", and a Python Bridge (referred to simply as the "Bridge" from here on) that acts as a client for the Runtime. While each CAN be run independently, you will need both parts for the system to function properly.
Before you get started, make sure you have all the necessary dependencies: Python 3.10 or later, poetry, an OpenAI API key (by default), the itch.io app, and access to the Runtime (Beta users only for now).
The README contains a Quickstart section that will set up the default SAGA demo using GPT-3.5-turbo.
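The setup above boils down to a few terminal commands. This is a hypothetical sketch only (the repository path and entry point are illustrative); the README's Quickstart is the authoritative reference:

```shell
# Hypothetical quickstart sketch -- see the README for the authoritative steps.
# Assumes Python 3.10+ and poetry are already installed.
export OPENAI_API_KEY="sk-..."   # your OpenAI key; keep it out of source control
cd thistle-gulch                 # your clone of this repository
poetry install                   # installs the Bridge and its dependencies
poetry run python main.py        # hypothetical entry point; check the README
```

With the Bridge running, launch the Runtime via the itch.io app and the two will connect.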
Thistle Gulch is currently under heavy development. Check here if you experience issues.
Runtime Documentation
The Thistle Gulch Runtime is a standalone application that provides a 3D world to visualize and execute the actions chosen by SAGA and/or the thistle-gulch Bridge.
Bridge Documentation
The Bridge also leverages our open-source SAGA python library to generate actions and conversations. The Bridge allows many aspects of the simulation to be customized or overridden by manipulating the metadata and/or prompts that are sent to SAGA.
API Documentation
The Bridge sends and receives messages from the Runtime via the API to control the overall behavior of the simulation.
At this time, Fable is not charging for access to the Runtime, and we don't intend to do so any time soon. We plan to make more of it available as open source, but because we use third-party models and libraries, we can't provide source-level access right now. The Runtime is currently available for free under our standard Fable Studio EULA. All the Bridge code, however, is available in this repo under a non-commercial-use open-source license.
Fable doesn't offer any LLM services yet, so for LLM functionality you will need either a local model or an LLM service provider such as OpenAI, Anthropic, Google, or Mistral. By default, the SAGA demo uses OpenAI with GPT-3.5-turbo, which you can override to one of their other models. As a convenience, we try to track token counts and show an estimated cost in the upper-right corner. We DO NOT provide any guarantees around that estimate, but we try to keep it up to date for the OpenAI models. GPT-4 produces much better results, but costs roughly 10x more. Make sure you set sensible limits and validate costs with your provider.
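The cost estimate is simple arithmetic over token counts. A minimal sketch of that calculation, using illustrative per-token prices (these are assumptions for the example, NOT current OpenAI pricing -- always validate against your provider's price sheet):

```python
# Illustrative per-1K-token prices in USD. These figures are assumptions
# for this sketch, NOT current OpenAI pricing -- validate with your provider.
PRICES = {
    "gpt-3.5-turbo": {"prompt": 0.0005, "completion": 0.0015},
    "gpt-4": {"prompt": 0.03, "completion": 0.06},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Rough dollar estimate for one request, given its token counts."""
    price = PRICES[model]
    return (prompt_tokens / 1000) * price["prompt"] + \
           (completion_tokens / 1000) * price["completion"]

# The same token counts cost far more on GPT-4 than on GPT-3.5-turbo.
cheap = estimate_cost("gpt-3.5-turbo", 2000, 500)
pricey = estimate_cost("gpt-4", 2000, 500)
print(f"gpt-3.5-turbo: ${cheap:.4f}, gpt-4: ${pricey:.4f}")
```

Even at these illustrative prices, the GPT-4 request is well over 10x the GPT-3.5-turbo one, which is why sensible limits matter.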
If you want to use another model, check the Demos. We already have one for Ollama (local Llama and other models) that you can leverage right away, and we plan to include more over time, but you can always write your own class to handle any provider. We build on LangChain, so swapping providers can be very easy; just make sure yours supports asyncio Python functions. See our Ollama demo for an example of writing your own async-capable provider.
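The asyncio requirement mainly means your provider class must expose awaitable calls so the Bridge can issue requests concurrently. A hypothetical skeleton (the class and method names here are illustrative, not the actual SAGA interface -- see the Ollama demo for the real one):

```python
import asyncio

class MyModelProvider:
    """Hypothetical async wrapper around a custom LLM backend.
    The real interface is defined by SAGA -- see the Ollama demo."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    async def generate(self, prompt: str) -> str:
        # A real implementation would await an async client call here
        # (e.g. a LangChain chat model's async invocation).
        # We simulate network latency instead of calling a real backend.
        await asyncio.sleep(0.01)
        return f"[{self.model_name}] response to: {prompt}"

async def main():
    provider = MyModelProvider("my-local-model")
    # Because generate() is a coroutine, multiple requests can run concurrently.
    replies = await asyncio.gather(
        provider.generate("Describe the saloon."),
        provider.generate("What does the sheriff do next?"),
    )
    for reply in replies:
        print(reply)

asyncio.run(main())
```

Blocking, synchronous calls would stall the whole event loop, which is why async support is the one hard requirement for a custom provider.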