# Local LLM (lllm)

A series of experiments with local LLMs.

## TODOs

### Exp (Using Ollama)

- Run mistral 7b locally
- Run llama 2 locally
- Run custom prompts with llama 2
- Run API calls against llama/mistral (see the API sketch below)

### Exp (Get Twinny to work in VSCode)

- Done, with caveats

### Exp (Using Ollama for a Chat UI)

- Create a chat UI similar to this (see the chat-loop sketch below)

### Exp (Create your own version of an open source LLM)

- Custom prompts + temps (see the customization sketch below)

### Exp (Train Mistral on your own data - finetuning or use RAG)

- See the minimal RAG sketch below for a possible starting point
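## Sketches

A minimal sketch of the "run locally / API calls" experiments, assuming Ollama is running on its default port (11434) and the models have been pulled first (e.g. `ollama pull mistral` and `ollama pull llama2`). The prompt text is just a placeholder.

```python
# Prompt a local model through Ollama's /api/generate endpoint.
import json
import urllib.request

def generate(model: str, prompt: str) -> str:
    """Send one prompt to a local Ollama model and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # Same custom prompt against both models pulled above.
    for model in ("mistral", "llama2"):
        print(model, "->", generate(model, "Summarize what a local LLM is."))
```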
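For the chat UI experiment, a bare-bones terminal loop against Ollama's /api/chat endpoint, keeping the full message history so the model sees prior turns. The model name is an assumption, and a real UI would replace the input()/print() shell.

```python
# Terminal chat loop: accumulate history, send it to /api/chat each turn.
import json
import urllib.request

MODEL = "llama2"  # assumed; any locally pulled chat model works
URL = "http://localhost:11434/api/chat"

def chat(messages):
    """POST the running history; returns the assistant message dict."""
    payload = json.dumps({"model": MODEL, "messages": messages, "stream": False}).encode("utf-8")
    req = urllib.request.Request(URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]

history = []
while True:
    user = input("you> ")
    if user.strip() in ("exit", "quit"):
        break
    history.append({"role": "user", "content": user})
    reply = chat(history)            # {"role": "assistant", "content": ...}
    history.append(reply)            # keep the reply in context for next turn
    print("llm>", reply["content"])
```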
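For the "custom prompts + temps" experiment, one way to customize a model per request is /api/generate's "system" and "options" fields, shown below with illustrative values; the Ollama-native alternative is to bake the same settings into a Modelfile and register the result with `ollama create`.

```python
# Per-request customization: system prompt + sampling temperature.
import json
import urllib.request

payload = json.dumps({
    "model": "llama2",
    "system": "You are a terse assistant that answers in one sentence.",  # illustrative
    "prompt": "What is a local LLM?",
    "options": {"temperature": 0.2},  # lower = more deterministic output
    "stream": False,
}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```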
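For the last experiment, finetuning changes the model's weights, while RAG leaves them alone and retrieves relevant text at query time. Below is a minimal RAG sketch on top of Ollama's /api/embeddings endpoint; the embedding model (`nomic-embed-text`), the two toy documents, and the question are all assumptions for illustration, and the embedding model must be pulled first.

```python
# Toy RAG: embed docs, pick the one closest to the question, stuff it
# into the prompt. No vector DB; cosine similarity in plain Python.
import json
import math
import urllib.request

BASE = "http://localhost:11434"

def post(path: str, body: dict) -> dict:
    req = urllib.request.Request(BASE + path, data=json.dumps(body).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def embed(text: str) -> list:
    # Assumed embedding model; pull it with `ollama pull nomic-embed-text`.
    return post("/api/embeddings", {"model": "nomic-embed-text", "prompt": text})["embedding"]

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

docs = [  # placeholder corpus standing in for "your own data"
    "Ollama serves local models over a REST API on port 11434.",
    "Twinny is a VS Code extension that talks to a local model.",
]
doc_vecs = [embed(d) for d in docs]

question = "What port does Ollama listen on?"
q_vec = embed(question)
best = max(range(len(docs)), key=lambda i: cosine(q_vec, doc_vecs[i]))

answer = post("/api/generate", {
    "model": "mistral",
    "prompt": f"Answer using this context:\n{docs[best]}\n\nQuestion: {question}",
    "stream": False,
})["response"]
print(answer)
```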