
Hi, I'm Girin 👋

  • 🔭 I’m currently working on Computer Vision models for Automatic Defect Recognition (ADR).
  • 📫 How to reach me: [email protected]

Achievements & Recognitions

Out of a pool of over 2,100 data scientists competing for more than four months in the largest industrial AI data science competition in the world, I achieved Global Rank 6 in the BMW SORDI.ai Hackathon 2022. This intense and prestigious competition was organized by some of the biggest names in tech, including Microsoft, NVIDIA, the BMW Group + QUT Design Academy, and idealworks. It was an incredible experience to showcase my skills and compete at such a high level.

The Writer Verification task began as a way to detect potential fraud in the banking sector by verifying signatures. The task is difficult because a person's handwriting can vary significantly, so the model must learn these variations. It becomes even more complex in offline settings, where the dynamic information about the writing process that is captured when writing on electronic devices is not available. The challenge task was: given a pair of handwritten text images, automatically determine whether they were written by the same writer or by different writers.
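To make the pairwise setup concrete, here is a minimal sketch of how such a verification decision is often made with a learned embedding model. The encoder, preprocessing, and threshold are illustrative placeholders and not the actual challenge solution.

```python
# Illustrative sketch of pairwise writer verification (not the actual challenge solution).
# Assumes a trained embedding network `encoder` that maps a handwriting image to a vector.
import torch
import torch.nn.functional as F

def same_writer(encoder, img_a, img_b, threshold=0.5):
    """Return True if the two handwriting images likely share a writer.

    img_a, img_b: preprocessed image tensors of shape (1, C, H, W).
    threshold: similarity cutoff tuned on a validation split (placeholder value).
    """
    encoder.eval()
    with torch.no_grad():
        emb_a = F.normalize(encoder(img_a), dim=-1)
        emb_b = F.normalize(encoder(img_b), dim=-1)
    similarity = (emb_a * emb_b).sum(dim=-1).item()  # cosine similarity
    return similarity >= threshold
```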

Solution Codebase

Computer Vision Sandbox

ODII is a simple Python package that provides a unified, streamlined interface for running inference with multiple object detection models. It enables seamless interaction with a range of popular models, including YOLOX, YOLOv3, YOLOv4, YOLOv6, and YOLOv7, without the need to manage multiple codebases or installation processes.
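ODII's actual API is not reproduced here; purely as an illustration of the unified-interface idea, a registry-based dispatcher of the following kind is one common way to hide multiple backends behind one call (all names below are hypothetical placeholders, not ODII's real functions).

```python
# Hypothetical sketch of a unified detection-inference interface
# (placeholder names, not ODII's actual API).

# Registry mapping a model-family name to a loader that returns an inference callable.
_BACKENDS = {}

def register_backend(name):
    def decorator(loader):
        _BACKENDS[name] = loader
        return loader
    return decorator

@register_backend("dummy")
def _load_dummy(weights_path):
    # A real backend would load YOLOX / YOLOv7 / ... weights here.
    def infer(image, score_threshold=0.5):
        # Returns detections in a common format: list of (box, score, class_id).
        return []
    return infer

def load_detector(model_family, weights_path):
    """Single entry point: same call signature regardless of the underlying model."""
    return _BACKENDS[model_family](weights_path)

# Usage: detector = load_detector("dummy", "weights.pth"); detections = detector(image)
```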

This repository provides a user-friendly solution for training a Faster R-CNN model on any custom COCO-format dataset. Faster R-CNN is a widely used object detection framework known for its efficiency and accuracy in localizing and classifying objects within images. With this repository, you can seamlessly tailor the model to your specific needs; its primary focus is streamlining the process of training Faster R-CNN with any custom COCO dataset.
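The repository's own training script is not reproduced here; as a minimal sketch of the standard torchvision recipe it builds on, swapping the detection head for a custom class count looks like this (the class count is a placeholder).

```python
# Minimal sketch of adapting torchvision's Faster R-CNN head to a custom class count.
# Follows the standard torchvision recipe; num_classes is a placeholder.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_model(num_classes):
    # num_classes includes the background class (e.g. 3 object classes -> 4).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_model(num_classes=4)
```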

A pip package designed for seamlessly integrating and running the Segment Anything Model (SAM), requiring only minimal dependencies.
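The package's own API is not shown here; as a generic illustration, running any exported SAM ONNX graph with onnxruntime follows the usual session pattern below (the model path, shapes, and input handling are placeholders, not the package's actual interface).

```python
# Generic onnxruntime inference sketch; the model path and tensor shapes are
# placeholders, not the package's actual API.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("sam_model.onnx", providers=["CPUExecutionProvider"])

# Inspect the exported graph to find the real input names and shapes.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)

# If the exported graph takes a single image input, inference is one call:
if len(session.get_inputs()) == 1:
    dummy = np.zeros((1, 3, 1024, 1024), dtype=np.float32)  # illustrative shape
    outputs = session.run(None, {session.get_inputs()[0].name: dummy})
```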

This project uses a CNN to recognize Indian currency notes of different denominations. I have also implemented a Streamlit app for easy inference with the trained model (a minimal sketch of such an app follows the list below). Some practical use cases of the model include:

  1. Currency recognition system for visually impaired/blind people
  2. A foundation for developing a currency verification system
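As referenced above, here is a rough sketch of what a Streamlit inference page of this kind can look like. The model file, input size, framework, and class labels are assumptions for illustration, not the repository's actual artifacts.

```python
# Illustrative Streamlit inference page; model path, input size, and labels are placeholders.
import numpy as np
import streamlit as st
import tensorflow as tf
from PIL import Image

LABELS = ["10", "20", "50", "100", "200", "500", "2000"]  # hypothetical denomination classes

@st.cache_resource
def load_model():
    return tf.keras.models.load_model("currency_cnn.h5")  # placeholder path

st.title("Currency Denomination Recognition")
uploaded = st.file_uploader("Upload a currency note image", type=["jpg", "jpeg", "png"])

if uploaded is not None:
    image = Image.open(uploaded).convert("RGB")
    st.image(image, caption="Uploaded image")
    x = np.array(image.resize((224, 224)), dtype=np.float32)[None] / 255.0
    probs = load_model().predict(x)[0]
    st.write(f"Predicted denomination: {LABELS[int(np.argmax(probs))]}")
```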

Publications

Abstract:

Recognition of human activities plays a pivotal role in recent times for surveillance and security. Convolutional neural network (CNN) based models are increasingly used to classify human activities from micro-Doppler (μD) signatures. However, the large number of parameters in these CNN models increases both the computational cost and the model size. The present work introduces a novel lightweight model, LW-μDCNN, to classify human activities. The LW-μDCNN architecture has 438,998 parameters across 7 layers. A total of six human activities are recorded in the FMCW radar dataset in the form of μD signatures. These μD signatures are converted into spectrogram images, which serve as input for the experiments. The LW-μDCNN model is only 5.2 MB in size; it is further optimized with quantization-aware training, and the resulting QAT-LW-μDCNN has a size of 0.43 MB with minimal loss of accuracy. Extensive analysis shows that the LW-μDCNN model achieves 97% classification accuracy with a higher F1-score for every class than the other state-of-the-art models. The paper also presents two transfer learning approaches, InceptionV3 and MobileNetV1, in the experimental studies on classifying human activities.
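The abstract does not state the training framework; purely as a generic illustration of how quantization-aware training shrinks a small CNN like this, here is a minimal PyTorch-style sketch. The network and layer choices are placeholders, not the LW-μDCNN architecture from the paper.

```python
# Generic quantization-aware-training sketch in PyTorch (eager mode);
# the network is a placeholder, not the LW-μDCNN architecture.
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert,
)

class TinyCNN(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        self.quant = QuantStub()      # marks where fp32 -> int8 conversion happens
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)
        self.dequant = DeQuantStub()  # back to fp32 for the loss / outputs

    def forward(self, x):
        x = self.quant(x)
        x = self.features(x).flatten(1)
        x = self.classifier(x)
        return self.dequant(x)

model = TinyCNN()
model.qconfig = get_default_qat_qconfig("fbgemm")
model_prepared = prepare_qat(model.train())   # inserts fake-quant observers

# ... normal training loop on spectrogram images goes here ...

model_int8 = convert(model_prepared.eval())   # produces the smaller int8 model
```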


Pinned Repositories

  1. FasterRCNN-Torchvision-FineTuning

     Train Torchvision FasterRCNN model with custom COCO dataset (Jupyter Notebook)

  2. IndCurr

     Indian Currency Recognition using AI (Jupyter Notebook)

  3. Object-Detection-Inference-Interface

     Object Detection Inference Interface (ODII): a common interface for object detection model inference (Jupyter Notebook)

  4. SAM_ONNX

     A simple package for installing & running the Segment Anything (SAM) model in ONNX format (Jupyter Notebook)

  5. LiveCell-Segmentation

     CANet (Chained Context Aggregation Network) for semantic segmentation on the LIVECell dataset (Jupyter Notebook)

  6. YoloX-Openvino

     YOLOX OpenVINO Inference (Jupyter Notebook)