
nnfw

A high-performance, on-device neural network inference framework

Goal

The nnfw project aims to provide a high-performance, on-device neural network (NN) inference framework that runs a given NN model on processors such as the CPU, GPU, or NPU of the target platform, for example a Linux-kernel-based OS such as Tizen.
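For a concrete sense of the workflow this enables, below is a minimal sketch of a single inference pass using the nnfw C API declared in nnfw.h. The model package path and the 4-element FLOAT32 input/output tensors are placeholder assumptions for illustration; consult the Getting started document for the exact API of your release.

```c
#include <stdio.h>
#include "nnfw.h"

int main(void)
{
  nnfw_session *session = NULL;

  // Create a session and load a packaged model (path is a placeholder).
  nnfw_create_session(&session);
  nnfw_load_model_from_file(session, "path/to/nnpackage");

  // Prepare (compile and schedule) the model for the selected backend.
  nnfw_prepare(session);

  // Bind input and output buffers. The 4-element FLOAT32 tensors here
  // are assumptions for illustration; real sizes depend on the model.
  float input[4] = {0.f, 1.f, 2.f, 3.f};
  float output[4] = {0.f};
  nnfw_set_input(session, 0, NNFW_TYPE_TENSOR_FLOAT32, input, sizeof(input));
  nnfw_set_output(session, 0, NNFW_TYPE_TENSOR_FLOAT32, output, sizeof(output));

  // Run inference synchronously, then release the session.
  nnfw_run(session);
  printf("output[0] = %f\n", output[0]);
  nnfw_close_session(session);
  return 0;
}
```

Error handling is omitted for brevity; each call returns an NNFW_STATUS that should be checked in real code.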

Project Documents

  • Getting started
  • Maintainers
  • Committers

Feature Request (NEW)

You can request nnfw features that are not yet available.

Features requested so far are tracked in the popular feature request list.

  • If the feature you want is already on the list, add a 👍 reaction to the body of the issue. The feature with the most 👍 reactions is placed at the top of the list, and we will use this ranking when prioritizing new development. Adding a comment that describes your request in detail is also welcome.

  • For features not on the list, create a new issue. A maintainer will apply the FEATURE_REQUEST label, and your request will then appear on the list.

We expect operator kernel implementations to be among the most frequent feature requests. Making a request is fine, but contributing the implementation yourself is even better. See the guide How to Implement Operator Kernel for help.

We look forward to your participation. Thank you in advance!

nncc

Re-targetable neural network (NN) model compilation framework

Goals

nncc, which stands for neural network compiler collection, aims to provide a general framework for compiling a given NN model to an artifact that runs on various target devices such as CPU, GPU, or NPU.

  • Maintainers
  • Committers

How to Contact

  • Please post questions, issues, or suggestions in Issues.
