- Seattle, WA, United States
- https://nojvek.com
- @nojvek
Stars
This is the latest version of the internal repository from Pebble Technology providing the software to run on Pebble watches. Proprietary source code has been removed from this repository and it wi…
OpenHands: Code Less, Make More
llama3 implementation one matrix multiplication at a time
Fully private LLM chatbot that runs entirely with a browser with no server needed. Supports Mistral and LLama 3.
An open source quadruped robot pet framework for developing Boston Dynamics-style four-legged robots that are perfect for STEM, coding & robotics education, IoT robotics applications, AI-enhanced r…
We write your reusable computer vision tools.
A framework to enable multimodal models to operate a computer.
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
An AI for playing NES Tetris at a high level. Based primarily on search & heuristic, with high quality board evaluation through value iteration.
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
An Open Source Playground with Agent Datasets and APIs for building and testing your own Autonomous Web Agents
A platform for creating interactive data visualizations
Learn how to design systems at scale and prepare for system design interviews
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention
This repository contains demos I made with the Transformers library by HuggingFace.
Nightly release of ControlNet 1.1
Build context-aware reasoning applications
antimatter15 / alpaca.cpp
Forked from ggml-org/llama.cpp
Locally run an Instruction-Tuned Chat-Style LLM
CLI based natural language queries on local or remote data
Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!)
The simplest, fastest repository for training/finetuning medium-sized GPTs.