
MACE


Documentation | FAQ | Release Notes | Roadmap | MACE Model Zoo | Demo | Chinese (中文)

Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing platforms. The design focuses on the following targets:

  • Performance
    • The runtime is highly optimized with NEON, OpenCL, and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Beyond fast inference, initialization is also heavily optimized.
  • Power consumption
    • Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU hints are exposed as advanced APIs.
  • Responsiveness
    • Keeping the UI responsive is sometimes mandatory while running a model. Mechanisms such as automatically breaking OpenCL kernels into small units are introduced to allow better preemption by the UI rendering task.
  • Memory usage and library footprint
    • Graph-level memory allocation optimization and buffer reuse are supported. The core library keeps external dependencies to a minimum so the library footprint stays small.
  • Model protection
    • Model protection has been a top-priority feature from the beginning of the design. Various techniques are used, such as converting models to C++ code and literal obfuscation.
  • Platform coverage
    • Good coverage of recent Qualcomm, MediaTek, Pinecone, and other ARM-based chips. The CPU runtime is also compatible with most POSIX systems and architectures, with limited performance.
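The Winograd speedup mentioned under Performance can be illustrated with the classic 1-D F(2,3) transform: it computes two outputs of a 3-tap convolution with 4 multiplications instead of the 6 a direct computation needs, which is where the savings for 3x3 convolutions come from. This is a hypothetical sketch of the idea, not MACE's actual implementation (the function names are made up for illustration):

```python
def winograd_f23(d, g):
    """1-D Winograd F(2,3): 4 inputs d, 3 filter taps g -> 2 outputs.

    Uses 4 multiplications; a direct sliding window would use 6.
    """
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # The four Winograd products (input and filter transforms folded in):
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    # Output transform: combine the products into the two results.
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    """Reference sliding-window correlation, for comparison."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d = [1.0, 2.0, 3.0, 4.0]
g = [5.0, 6.0, 7.0]
print(winograd_f23(d, g))  # -> [38.0, 56.0], same as direct(d, g)
```

Real implementations apply the 2-D analogue F(2x2, 3x3) tile by tile over feature maps, trading multiplications for cheap additions, which is especially profitable on mobile CPUs and GPUs.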

Getting Started

Performance

MACE Model Zoo contains several common neural networks and models, which are built daily against a list of mobile phones. The benchmark results can be found on the CI result page (choose the latest passed pipeline, click the release step, and you will see the benchmark results).

Communication

Contributing

All kinds of contributions are welcome. For bug reports and feature requests, please open an issue without hesitation. For code contributions, it is strongly suggested to open an issue for discussion first. For more details, please refer to the contribution guide.

License

Apache License 2.0.

Acknowledgement

MACE depends on several open source projects located in the third_party directory. In particular, we learned a lot from the following projects during development:

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams for their help.
