Stars
Pretrain, finetune and serve LLMs on Intel platforms with Ray
xwu99 / llm-on-ray
Forked from intel/llm-on-ray. Pretrain, finetune and serve LLMs on Intel platforms with Ray
A high-throughput and memory-efficient inference and serving engine for LLMs
RayLLM - LLMs on Ray (Archived). Read README for more info.
An open-source, low-code machine learning library in Python
xwu99 / xgboost
Forked from dmlc/xgboost. Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
Stanford CoreNLP wrapper for Apache Spark
A playground for experimenting with ideas that may apply to Spark SQL/Catalyst
A simple Spark standalone cluster for your testing environment purposes
Repo for counting stars and contributing. Press F to pay respect to glorious developers.
A curated list of awesome frameworks, libraries, and software for the Java programming language.
Sample projects showcasing Scrapinghub tech
Java port (JNI) of the Snappy and LZ4 compression codecs
Benchmark suite for data compression libraries on the JVM
MPI-oriented extension of the Spark computational model
Intel® Nervana™ reference deep learning framework committed to best performance on all hardware
Java monitoring for the command-line, profiler included
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discr…