Welcome to Groq! 🚀
Are you tired of feeling like your AI and HPC inference is stuck in neutral? Do you want to take your developer velocity to the next level? Well, buckle up, because Groq is here to turbocharge your workflow!
At Groq, we've developed the world's first Language Processing Unit™, or LPU. The Groq LPU has a deterministic, single-core streaming architecture that lays the foundation for Groq Compiler™'s unique ability to know the exact performance results and compute time for any given workload, even before running on hardware. For developers like you (and us), this means models and applications can be deployed with confidence, knowing they'll perform exactly as expected, every run. 🤝
Beyond the architecture, our open-source software is designed to give developers like you the tools you need to create innovative, powerful AI and HPC applications. With Groq as your engine, you can:
- Achieve uncompromised low latency and performance for real-time AI and HPC inference 🔥
- Know the exact performance and compute time for any given workload 🔮
- Take advantage of our cutting-edge technology to stay ahead of the competition 💪
Check out the following open-source resources to get started on your Groq journey:
- GroqFlow: For the easiest way to get started with building and running your models (see the first sketch after this list).
- Groq API: For integrating state-of-the-art LLMs, such as Llama-2, into your low-latency applications (see the second sketch after this list).
- MLAgility: For benchmarking thousands of models on CPU, GPU, and LPU.
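
Here's a minimal sketch of the GroqFlow build-and-run workflow, following its public hello-world pattern. The toy PyTorch model and input shape below are illustrative assumptions, and actually executing the built model requires access to Groq hardware:

```python
# Minimal GroqFlow sketch: build a PyTorch model for the LPU, then run it.
# Assumes `pip install groqflow`; the TinyModel below is purely illustrative.
import torch
from groqflow import groqit


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 5)

    def forward(self, x):
        return self.fc(x)


model = TinyModel()
inputs = {"x": torch.rand(10)}

gmodel = groqit(model, inputs)  # compile the model for Groq hardware
outputs = gmodel(**inputs)      # run the built model on the LPU
print(outputs)
```

Because the compiler knows the workload's compute time before execution, this same two-step build-and-run pattern delivers consistent latency on every run.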
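
And here's a hedged sketch of calling a Llama-2 model through the Groq API with the Python client. The usage follows the familiar chat-completions pattern; the model ID shown is an illustrative assumption, so check the portal for the exact names available to your account:

```python
# Hedged sketch: query a Llama-2 model through the Groq API.
# Assumes `pip install groq` and a GROQ_API_KEY environment variable;
# the model ID below is illustrative, not guaranteed to be current.
import os

from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    model="llama2-70b-4096",  # assumed model ID; see the portal for current options
    messages=[
        {"role": "user", "content": "Explain low-latency inference in one sentence."},
    ],
)
print(completion.choices[0].message.content)
```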
For more information about Groq, check out our website. For support and access to all of our software, register on our portal.
Happy exploring!