This app demonstrates how to build an end-to-end user experience with Google ML Kit APIs, following the new Material for ML design guidelines.
The goal is to make it as easy as possible to integrate ML Kit into your app with an experience that has been user-tested:
- Real-time translation using the on-device Text Recognition, Language ID, and Translate APIs: an end-to-end solution from text recognition to translation in the live camera view (sketched below).
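Under the hood, those APIs chain together: recognize text from a camera frame, identify its language, then translate it. Below is a minimal sketch of the first two steps, assuming the Firebase ML Kit Vision and Natural Language APIs this 2019-era app is built on; `recognizeAndIdentify` is a hypothetical helper, not code from the app (the translation step is sketched further below).

```swift
import UIKit
import Firebase

// Hypothetical helper: recognize text in a frame, then identify its language
// on-device. The real app feeds live camera frames and drives the bottom sheet.
func recognizeAndIdentify(in image: UIImage) {
  let visionImage = VisionImage(image: image)
  let textRecognizer = Vision.vision().onDeviceTextRecognizer()

  textRecognizer.process(visionImage) { result, error in
    guard error == nil, let result = result, !result.text.isEmpty else { return }
    let recognizedText = result.text

    // Identify the language of the recognized text (returns "und" if undetermined).
    let languageId = NaturalLanguage.naturalLanguage().languageIdentification()
    languageId.identifyLanguage(for: recognizedText) { languageCode, error in
      guard error == nil, let languageCode = languageCode, languageCode != "und" else { return }
      print("Recognized \"\(recognizedText)\" (language: \(languageCode))")
    }
  }
}
```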
- Clone this repo locally:

  ```
  git clone https://github.com/googlecodelabs/mlkit-ios
  ```

- Find a `Podfile` in the folder and install all the dependency pods by running the following commands:

  ```
  cd mlkit-ios
  cd translate
  pod cache clean --all
  pod install --repo-update
  ```
- Open the generated `TranslateDemo.xcworkspace` file.
- Create a Firebase project in the Firebase console, if you don't already have one.
- Add a new iOS app into your Firebase project with a bundle ID like com.google.firebase.ml.md.
- Download the `GoogleService-Info.plist` from the newly added app and add it to the ShowcaseApp project in Xcode. Remember to check `Copy items if needed` and select `Create folder references` (see the initialization sketch after these steps).
- Select the project in Xcode, uncheck the `Automatically manage signing` option in the `General` tab, and choose your own provisioning file.
- Build and run it on a physical device (the simulator isn't recommended, as the app needs to use the camera on the device).
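With `GoogleService-Info.plist` in place, Firebase must be initialized at app launch before any ML Kit call. The showcase app already does this; the sketch below only shows the standard `FirebaseApp.configure()` call for orientation.

```swift
import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
  var window: UIWindow?

  func application(_ application: UIApplication,
                   didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    // Reads GoogleService-Info.plist and wires up Firebase (and ML Kit) for the app.
    FirebaseApp.configure()
    return true
  }
}
```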
This app demonstrates live text translation using the camera:
- Open the app and point the camera's bounding box at the text of interest. The recognized text and its detected language will show up in the top part of the bottom sheet.
- Keep the camera steady on the text, and its translation will appear at the bottom of the sheet in real time, produced by the on-device Translate API (sketched below).
- You can also switch the target language using the chips below the translation, or tap the "More" chip to search for and select any of the 59 available languages.
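Switching a language chip boils down to creating a `Translator` for the new source/target pair, downloading that model if it isn't on the device yet, and re-running the translation. A minimal sketch, assuming the Firebase ML Kit Translate API this app ships with; `translate(_:from:to:)` is a hypothetical helper and the language pair at the end is just an example.

```swift
import Firebase

// Hypothetical helper: translate text on-device for a given language pair.
// The source language would come from the Language ID step; the target from the chips.
func translate(_ text: String, from source: TranslateLanguage, to target: TranslateLanguage) {
  let options = TranslatorOptions(sourceLanguage: source, targetLanguage: target)
  let translator = NaturalLanguage.naturalLanguage().translator(options: options)

  // Download the translation model for this pair if it isn't on the device yet.
  translator.downloadModelIfNeeded { error in
    guard error == nil else { return }
    translator.translate(text) { translatedText, error in
      guard error == nil, let translatedText = translatedText else { return }
      print(translatedText)
    }
  }
}

// Example: translate recognized English text to Spanish.
translate("Hello", from: .en, to: .es)
```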
© Google, 2019. Licensed under the Apache 2.0 License.