- Meltwater AB, Stockholm
- Stockholm, Sweden
- https://sites.google.com/site/mmihaltz/
Stars
Ray is an AI compute engine, consisting of a core distributed runtime and a set of AI libraries for accelerating ML workloads.
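A minimal sketch of Ray's core task API (the local `ray.init()` call and the `square` function are illustrative examples, not taken from this page):

```python
import ray

ray.init()  # start a local Ray runtime (connects to a cluster if one is configured)

@ray.remote
def square(x):
    # an ordinary Python function turned into a distributed task
    return x * x

# launch the tasks in parallel across available workers and block until all results are ready
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]
```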
Efficient few-shot learning with Sentence Transformers
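The few-shot Sentence Transformers description matches SetFit; assuming that library and a recent version where `SetFitModel.predict` is available, a minimal inference sketch (the checkpoint name is a placeholder, not something referenced here):

```python
from setfit import SetFitModel

# checkpoint name is an assumption; any SetFit checkpoint from the Hugging Face Hub works
model = SetFitModel.from_pretrained("lewtun/my-awesome-setfit-model")

# predict labels for raw text without any task-specific preprocessing
preds = model.predict([
    "i loved the spiderman movie!",
    "pineapple on pizza is the worst",
])
print(preds)
```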
Lightning-fast serving engine for any AI model of any size. Flexible. Easy. Enterprise-scale.
A simple and efficient tool to parallelize Pandas operations on all available CPUs
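The description matches the pandarallel library; assuming that is the tool meant, a minimal sketch of its drop-in `parallel_apply` replacement for `Series.apply`:

```python
import pandas as pd
from pandarallel import pandarallel

pandarallel.initialize()  # spawns one worker per available CPU core

df = pd.DataFrame({"x": range(1_000_000)})
# parallel_apply mirrors Series.apply / DataFrame.apply but fans the work out to all cores
df["y"] = df["x"].parallel_apply(lambda v: v ** 2)
print(df.head())
```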
Fast inference engine for Transformer models
Platform to experiment with the AI Software Engineer. Terminal based. NOTE: Very different from https://gptengineer.app
🦙 Integrating LLMs into structured NLP pipelines
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
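A minimal sketch of the GPT4All Python bindings; the model filename is an assumption, and any GGUF model from the GPT4All catalog can be substituted:

```python
from gpt4all import GPT4All

# the model file is downloaded on first use; this particular filename is an assumption
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

with model.chat_session():
    # generate a completion fully locally, no API key or network call required
    reply = model.generate("Name three uses of running LLMs locally.", max_tokens=128)
    print(reply)
```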
Code and documentation to train Stanford's Alpaca models, and generate the data.
A playbook for systematically maximizing the performance of deep learning models.
Task-based datasets, preprocessing, and evaluation for sequence models.
An open-source, cloud-native, unified time series database for metrics, logs and events, supporting SQL/PromQL/Streaming. Available on GreptimeCloud.
Tools for shrinking fastText models (in gensim format)
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
Google Research
Model analysis tools for TensorFlow
A curated list of NLP resources for Hungarian
Train machine learning models within a 🐳 Docker container using 🧠 Amazon SageMaker.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
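This is JAX's tagline; a minimal sketch of composing its `grad`, `jit`, and `vmap` transformations (the `loss` function is a made-up example):

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    # toy scalar-valued function of the parameters w
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.grad(loss)                              # differentiate with respect to w
fast_grad = jax.jit(grad_loss)                          # JIT-compile for CPU/GPU/TPU
batched_grad = jax.vmap(fast_grad, in_axes=(None, 0))   # vectorize over a batch of x

w = jnp.ones(3)
xs = jnp.ones((8, 3))
print(batched_grad(w, xs).shape)  # (8, 3)
```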
The easiest way to serve AI apps and models: build model inference APIs, job queues, LLM apps, multi-model pipelines, and more!
A library for debugging/inspecting machine learning classifiers and explaining their predictions
The Learning Interpretability Tool: Interactively analyze ML models to understand their behavior in an extensible, framework-agnostic interface.
The home repository of the NerKor corpus, a gold-standard Hungarian corpus of 1 million tokens with named-entity annotations.