ML Kit Vision Quickstart Sample App

Introduction

This ML Kit Quickstart app demonstrates how to use and integrate various vision based ML Kit features into your app.
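
Each vision feature ships as its own on-device ML Kit artifact, so integrating a feature mostly comes down to adding its Gradle dependency. Below is a minimal sketch of an app-level build.gradle.kts; the artifact IDs are the public ML Kit ones, but the version numbers are illustrative and not necessarily the ones this sample pins:

```kotlin
dependencies {
    // On-device ML Kit vision APIs -- versions shown are illustrative.
    implementation("com.google.mlkit:object-detection:17.0.0")
    implementation("com.google.mlkit:face-detection:16.1.5")
    implementation("com.google.mlkit:barcode-scanning:17.2.0")
    implementation("com.google.mlkit:image-labeling:17.0.7")
    implementation("com.google.mlkit:pose-detection:18.0.0-beta3")
    implementation("com.google.mlkit:segmentation-selfie:16.0.0-beta4")
}
```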

Feature List

Features that are included in this Quickstart app:

  • Object detection & tracking (including custom object detection)
  • Face Detection
  • Text Recognition
  • Barcode Scanning
  • Image Labeling
  • Pose Detection
  • Selfie Segmentation

Getting Started

  • Run the sample code on your Android device or emulator
  • Try extending the code to add new features and functionality

How to use the app

This app supports three usage scenarios: Live Camera, Static Image, and CameraX-enabled live camera.

Live Camera scenario

This scenario uses the camera preview as input and covers these API workflows: Object detection & tracking, Face Detection, Text Recognition, Barcode Scanning, Image Labeling, and Pose Detection. There is also a settings page that lets you configure several options (a sketch of how some of them map to ML Kit's option builders follows the list):

  • Camera
    • Preview size -- Manually set the preview size of the rear/front camera (a default size is chosen based on the screen size)
    • Enable live viewport -- Choose whether the camera preview is blocked by API processing and result rendering
  • Object detection / Custom Object Detection
    • Enable multiple objects -- Enable multiple objects to be detected at once
    • Enable classification -- Enable classification for each detected object
  • Face Detection
    • Landmark mode -- Show all facial landmarks or none
    • Contour mode -- Show all facial contours or none
    • Classification mode -- Show all classifications (smiling, eyes open/closed) or none
    • Performance mode -- Toggle between two operating modes (Fast or Accurate)
    • Face tracking -- Enable or disable face tracking
    • Minimum face size -- Choose the proportion of the head width to the image width
  • Pose Detection
    • Performance mode -- Switch between the "Fast" and "Accurate" operating modes
    • Show in-frame likelihood -- Displays InFrameLikelihood score for each landmark
    • Visualize z value -- Uses different colors to indicate z difference (red: smaller z, blue: larger z)
    • Rescale z value for visualization -- Maps the smallest z value to the most red and the largest z value to the most blue, which makes z differences more obvious
    • Run classification -- Classify squat and pushup poses. Count reps in streaming mode.
  • Selfie Segmentation
    • Enable raw size mask -- Asks the segmenter to return the raw size mask, which matches the model output size.
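
The toggles above correspond to options on ML Kit's client builders. As a rough illustration (not the sample's actual settings plumbing), here is how the Face Detection, Pose Detection, and Selfie Segmentation choices might be expressed in Kotlin; the concrete values are placeholders:

```kotlin
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions
import com.google.mlkit.vision.segmentation.Segmentation
import com.google.mlkit.vision.segmentation.selfie.SelfieSegmenterOptions

// Face Detection: landmark/contour/classification modes, performance mode,
// face tracking, and minimum face size from the settings page.
val faceDetector = FaceDetection.getClient(
    FaceDetectorOptions.Builder()
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_NONE)
        .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
        .setMinFaceSize(0.1f)          // proportion of head width to image width
        .enableTracking()              // "Face tracking" toggle
        .build()
)

// Pose Detection: STREAM_MODE corresponds to live-camera (streaming) input.
val poseDetector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build()
)

// Selfie Segmentation: streaming mode plus the "raw size mask" toggle.
val segmenter = Segmentation.getClient(
    SelfieSegmenterOptions.Builder()
        .setDetectorMode(SelfieSegmenterOptions.STREAM_MODE)
        .enableRawSizeMask()
        .build()
)
```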

Static Image scenario

The static image scenario offers the same workflows as the live camera scenario, but runs them on images picked from the device gallery instead of the camera preview.
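
On the ML Kit side the only difference is how the InputImage is built. A minimal sketch, assuming a content Uri obtained from the gallery picker (image labeling is shown here, but any of the detectors configured above accept the same InputImage):

```kotlin
import android.content.Context
import android.net.Uri
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions
import java.io.IOException

fun labelGalleryImage(context: Context, uri: Uri) {
    // Build an InputImage directly from the picked gallery Uri.
    val image: InputImage = try {
        InputImage.fromFilePath(context, uri)
    } catch (e: IOException) {
        return
    }

    // Run the labeler asynchronously and print each label with its confidence.
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    labeler.process(image)
        .addOnSuccessListener { labels ->
            labels.forEach { label -> println("${label.text}: ${label.confidence}") }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```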

CameraX Live Preview scenario

The CameraX live preview scenario is very similar to the native live camera scenario, but uses the CameraX library to provide the live preview. Note: CameraX requires API level 21+.
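
With CameraX, frames arrive through an ImageAnalysis.Analyzer rather than the legacy camera preview callback. A rough sketch of the bridging code, assuming a faceDetector like the one configured earlier (this is not the sample's exact analyzer class):

```kotlin
import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetector

class FaceAnalyzer(private val detector: FaceDetector) : ImageAnalysis.Analyzer {

    @ExperimentalGetImage
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close()
            return
        }

        // Wrap the CameraX frame for ML Kit, preserving its rotation.
        val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

        detector.process(input)
            .addOnSuccessListener { faces -> /* draw overlays here */ }
            // Close the frame only after processing so CameraX can deliver the next one.
            .addOnCompleteListener { imageProxy.close() }
    }
}
```

The analyzer would then be attached to the ImageAnalysis use case via setAnalyzer(executor, FaceAnalyzer(faceDetector)).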

Support

License

Copyright 2020 Google, Inc.

Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.