Explore 🔮 Instill Core, a full-stack AI infrastructure tool for data, model and pipeline orchestration, designed to streamline every aspect of building versatile AI-first applications. Accessing 🔮 Instill Core is straightforward, whether you opt for ☁️ Instill Cloud or self-hosting via the instill-core repository. Please consult the documentation for more details.
💧 Instill VDP - Pipeline orchestration for unstructured data ETL
💧 Instill VDP, also known as VDP (Versatile Data Pipeline), serves as a powerful pipeline orchestration tool tailored to address unstructured data ETL challenges.
⚗️ Instill Model - Model orchestration for MLOps/LLMOps
⚗️ Instill Model is an advanced MLOps/LLMOps platform focused on seamless model serving, fine-tuning, and monitoring to maintain persistent performance for unstructured data ETL.
💾 Instill Artifact (coming soon) - Data orchestration for unified unstructured data representation
💾 Instill Artifact orchestrates unstructured data to transform documents (e.g., HTML, PDF, CSV, PPTX, DOC), images (e.g., JPG, PNG, TIFF), audio (e.g., WAV, MP3) and video (e.g., MP4, MOV) into a unified AI-ready format. It ensures your data is clean, curated, and ready for extracting insights and building your Knowledge Base.
⚙️ Instill Component - An extensible integration framework for 💧 Instill VDP
⚙️ Instill Component enhances 💧 Instill VDP, unlocking limitless possibilities. Please visit the component repository for details.
Not quite into self-hosting? We've got you covered with ☁️ Instill Cloud. It's a fully managed public cloud service that gives you access to all the features of 🔮 Instill Core without the burden of infrastructure management. All you need to do is sign up with one click and start building your AI-first applications.
- macOS or Linux - 🔮 Instill Core works on macOS or Linux.
- Windows - 🔮 Instill Core works on Windows through Windows Subsystem for Linux (WSL2):
  - Install the latest version of yq from its GitHub repository, as the yq package is not installed on Ubuntu WSL2 by default.
  - Install the latest version of Docker Desktop on Windows and enable the WSL2 integration following the tutorial by Docker.
  - (Optional) Install cuda-toolkit on WSL2 following the tutorial by NVIDIA.
- Docker and Docker Compose - 🔮 Instill Core requires Docker Engine v25 or later and Docker Compose v2 or later to run all services locally. Please install the latest stable Docker and Docker Compose (see the verification sketch after this list).
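Before launching anything, a quick sanity check of the prerequisites can save a round trip. The sketch below is only an illustration: the Docker commands are standard Docker CLI, while the yq step assumes an amd64 Ubuntu WSL2 shell and the mikefarah/yq release binary, so adjust it to your setup.
# Verify Docker Engine (v25 or later) and Docker Compose (v2 or later)
$ docker version --format '{{.Server.Version}}'
$ docker compose version
# WSL2 only: install yq from its GitHub releases (assumes amd64 Ubuntu)
$ sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/local/bin/yq
$ sudo chmod +x /usr/local/bin/yq
$ yq --version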
Use stable release version
Execute the following commands to pull pre-built images with all the dependencies to launch:
$ git clone -b v0.40.1-beta https://github.com/instill-ai/instill-core.git && cd instill-core
# Launch all services
$ make all
Note
We have restructured our project repositories. If you need to access 🔮 Instill Core projects up to version v0.13.0-beta, please refer to the instill-ai/deprecated-core repository.
Use the latest version for local development
Execute the following commands to build images with all the dependencies to launch:
$ git clone https://github.com/instill-ai/instill-core.git && cd instill-core
# Launch all services
$ make latest PROFILE=all
Important
Code in the main branch tracks under-development progress towards the next release and may not work as expected. If you are looking for a stable alpha version, please use the latest release.
🎉 That's it! Once all the services are up with health status, the UI is ready to go at http://localhost:3000. Please find the default login credentials in the documentation.
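If you prefer to confirm the health status from the terminal before opening the UI, a minimal check could look like this (plain Docker CLI plus curl; the container names vary with the release you launched):
# List running containers with their health status
$ docker ps --format 'table {{.Names}}\t{{.Status}}'
# The Console should respond once the services are ready
$ curl -I http://localhost:3000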
To shut down all running services:
$ make down
Explore the documentation to discover all available deployment options.
To access 🔮 Instill Core and ☁️ Instill Cloud, you have a few options:
- 📺 Instill Console
- ⌨️ Instill CLI
- 📦 Instill SDK (install sketch below):
- Python SDK
- TypeScript SDK
- Stay tuned, as more SDKs are on the way!
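As a rough guide, installing an SDK is a one-liner. The package names below are assumptions rather than something documented here, so double-check the Python and TypeScript SDK READMEs before relying on them:
# Python SDK (assumed PyPI package name)
$ pip install instill-sdk
# TypeScript SDK (assumed npm package name)
$ npm install instill-sdk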
For comprehensive guidance and resources, explore our documentation website and delve into our API reference.
We welcome contributions from the community! Whether you're a developer, designer, writer, or user, there are multiple ways to contribute:
We foster a friendly and inclusive environment for issue reporting. Before creating an issue, check if it already exists. Use clear language and provide reproducible steps for bugs. Accurately tag the issue (bug, improvement, question, etc.).
Please refer to the Contributing Guidelines for more details. Your code-driven innovations are more than welcome!
We are committed to providing a respectful and welcoming atmosphere for all contributors. Please review our Code of Conduct to understand our standards.
We have implemented a streamlined Issues Triage Process aimed at swiftly categorizing new issues and pull requests (PRs), allowing us to take prompt and appropriate actions.
Head over to our Discussions for engaging conversations:
- General: Chat about anything related to our projects.
- Polls: Participate in community polls.
- Q&A: Seek help or ask questions; our community members and maintainers are here to assist.
- Show and Tell: Showcase projects you've created using our tools.
Alternatively, you can join our vibrant Discord community and direct your queries to the #ask-for-help channel. We're dedicated to supporting you every step of the way.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!
See the LICENSE file for licensing information.