Starred repositories

7 stars written in Cuda

LLM training in simple, raw C/CUDA

Cuda · 25,165 stars · 2,884 forks · Updated Oct 2, 2024

Instant neural graphics primitives: lightning fast NeRF and more

Cuda · 16,227 stars · 1,949 forks · Updated Jan 27, 2025

[ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl

Cuda · 1,694 stars · 447 forks · Updated Oct 9, 2023

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.

Cuda · 1,256 stars · 146 forks · Updated Nov 12, 2024

Quantized attention that achieves speedups of 2.1-3.1x and 2.7-5.1x over FlashAttention2 and xformers, respectively, without losing end-to-end metrics across various models.

Cuda · 899 stars · 53 forks · Updated Jan 28, 2025

NeRFshop: Interactive Editing of Neural Radiance Fields

Cuda · 456 stars · 24 forks · Updated Mar 27, 2023

Cuda extensions for PyTorch

Cuda · 10 stars · 2 forks · Updated Jan 28, 2025