Adding machine learning to your Android apps: Recognize text, faces, and landmarks

  • September 24, 2019

When machine learning first emerged, it was practical only on powerful machines in the cloud. The processing power required to run ML-based solutions far exceeded what the mobile devices of the time could offer.

However, things have changed, and machine learning capabilities have improved considerably. Mobile processors are now powerful enough to run ML algorithms directly on the device, paving the way for a new generation of apps that can assist you on the go.

With time, on-device machine learning capabilities will become a standard part of the apps you use, changing the way you interact with this technology.

If you have heard of on-device machine learning, here is a glimpse into how it works and some of its applications.

With on-device machine learning, your apps run ML models directly on the device, without depending on the cloud. Embedding these models within apps has become easier, and they run on both Android and iOS hardware without issue.

Here are a few applications of on-device machine learning that might interest you:

  • One of the major ways to use on-device ML is by integrating your apps with smart assistants such as Cortana, Siri, or Google Assistant. The idea is to let the assistants complement your app's ML capabilities.
  • Snapchat filters rely on on-device ML. The app detects human faces through its machine learning models and overlays various filters on them, making for an engaging experience.
  • Gmail has built this solution into its email workflow. You can now use smart replies to answer an email instead of composing your next response from scratch. The smart reply feature can also be incorporated into various chat and messaging apps.
  • Machine learning can also be used within apps to identify and label objects that you see through Google Lens.

SEE ALSO: Reproduce machine learning results with pre-trained models in PyTorch Hub

If you have been planning your own ML-based app, here are a few things to consider when adding ML to Android apps.

You can use the Firebase ML Kit to add machine learning capabilities to your Android app. It ships with several built-in APIs that cover the most common use cases.
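As a starting point, ML Kit is pulled into your app through Gradle. The sketch below shows a typical dependency setup for the vision APIs; the artifact version is illustrative, so check the Firebase release notes for the current one.

```groovy
// app/build.gradle — assumes the google-services plugin is configured
// in the project-level build.gradle
dependencies {
    // ML Kit vision APIs (text, face, labeling, landmarks);
    // version number is a placeholder, not the latest release
    implementation 'com.google.firebase:firebase-ml-vision:24.0.1'
}

apply plugin: 'com.google.gms.google-services'
```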


Using the Firebase API

  • Create an Android Studio project and a Firebase project, then connect the two so that your app can communicate with Firebase through the API you want to use.
  • You can use the on-device models to add ML to your app, or take one of the pre-trained models, customize it, and adapt it to the purpose of the app.
  • A third option is to bundle both kinds of models and let the app decide which one to use at runtime, for example falling back to the on-device model when there is no network connection.

Basic APIs that you can use to create ML apps

Recognize text

You will need the text recognition API for this purpose. It analyses an image, identifies any text it contains, and processes that text for your app. If an image contains text that your users want to copy, this type of ML-based app can extract the text from the image and keep a record of the information. The API has many applications, depending on your needs.
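As a rough sketch of how this looks with ML Kit's on-device text recognizer (assuming you already have a Bitmap of the image, here called photoBitmap):

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun extractText(photoBitmap: Bitmap) {
    // Wrap the bitmap so ML Kit can process it
    val image = FirebaseVisionImage.fromBitmap(photoBitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // result.text holds the full recognized string;
            // textBlocks gives per-block text with bounding boxes
            for (block in result.textBlocks) {
                println(block.text)
            }
        }
        .addOnFailureListener { e ->
            // Model download or processing failed
            e.printStackTrace()
        }
}
```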

Recognize faces

Snapchat, as discussed earlier, is an excellent example of an on-device ML app that uses face detection. Apart from that, you can use a facial recognition feature as a security key to unlock your phone, or use face detection to blur the background while on a video call. The idea is to detect and track the face while keeping everything else out of focus. It is an excellent feature for certain applications.
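A minimal face-detection sketch with ML Kit might look like this; the option values are assumptions to tune for your use case:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions

fun detectFaces(photoBitmap: Bitmap) {
    // FAST mode favors latency over precision — a reasonable default
    // for live-camera effects such as filters or background blur
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.FAST)
        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build()

    val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
    val image = FirebaseVisionImage.fromBitmap(photoBitmap)

    detector.detectInImage(image)
        .addOnSuccessListener { faces ->
            for (face in faces) {
                // boundingBox marks the face region — e.g. the area
                // to exclude from a background blur
                println("Face at ${face.boundingBox}, smiling p=${face.smilingProbability}")
            }
        }
}
```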

Recognize images

The image recognition API allows you to identify places and objects in a photo. Let's say you receive a photo and don't know where it was taken: you can use the image recognition API to learn more about the place. Museums have been using image recognition to identify the items on display and then present their history to visitors.
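With ML Kit this maps to the image labeling API. A short sketch, again assuming a Bitmap called photoBitmap:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun labelImage(photoBitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(photoBitmap)
    // The on-device labeler covers a few hundred general categories;
    // the cloud variant (onCloudImageLabeler) covers far more, at the
    // cost of a network round trip
    val labeler = FirebaseVision.getInstance().onDeviceImageLabeler

    labeler.processImage(image)
        .addOnSuccessListener { labels ->
            for (label in labels) {
                // Each label pairs a category name with a confidence score
                println("${label.text}: ${label.confidence}")
            }
        }
}
```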

Landmarks

Have you ever looked at a photo a friend shared and wished you knew where it was taken? As a developer, you can add a landmark feature to an ML app you are building by introducing this API. It helps users identify famous landmarks and places within the image.

This API will allow the app to automatically tag the place identified within the photo, thus giving an insight into the tourist attractions in a city or country. The idea is to enhance the user experience through machine learning.
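A sketch of landmark recognition with ML Kit. Note that, unlike the detectors above, landmark recognition runs in the cloud, so the device needs a network connection:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun recognizeLandmarks(photoBitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(photoBitmap)
    // Landmark detection is a cloud-based API — there is no
    // on-device variant
    val detector = FirebaseVision.getInstance().visionCloudLandmarkDetector

    detector.detectInImage(image)
        .addOnSuccessListener { landmarks ->
            for (landmark in landmarks) {
                // landmark.locations carries latitude/longitude, which
                // the app could use to tag the photo automatically
                println("${landmark.landmark} (confidence ${landmark.confidence})")
            }
        }
}
```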

Smart reply (coming soon)

Google is constantly working on enhancing the experience and continues to add new APIs for machine learning-based Android apps. One such API is Smart Reply.

SEE ALSO: The Limitations of Machine Learning

If you have been using Gmail, you are already aware of Smart Reply and how it works. It suggests short snippets that you can send as your reply, and the suggestions are contextual: they are generated from an understanding of the recent messages in the conversation.
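The Smart Reply API is still in preview at the time of writing, but its shape is roughly as follows; the conversation contents here are made up for illustration:

```kotlin
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.smartreply.FirebaseTextMessage
import com.google.firebase.ml.naturallanguage.smartreply.SmartReplySuggestionResult

fun suggestReplies() {
    val smartReply = FirebaseNaturalLanguage.getInstance().smartReply

    // Recent conversation history, oldest first; the text is illustrative
    val conversation = listOf(
        FirebaseTextMessage.createForRemoteUser(
            "Are we still meeting tomorrow?", System.currentTimeMillis(), "friend-1"
        )
    )

    smartReply.suggestReplies(conversation)
        .addOnSuccessListener { result ->
            if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
                // Up to three short suggested replies
                result.suggestions.forEach { println(it.text) }
            }
        }
}
```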

How do you add ML features to a mobile app?

Here are the steps that you need to follow to create an on-device ML app for Android:

  • First, create a project in the Firebase console at https://console.firebase.google.com.
  • Next, create a mobile app project in Android Studio, click Tools > Firebase, select ML Kit, and follow the prompts to connect your app.
  • If you want machine learning to work offline, modify AndroidManifest.xml so the required models are downloaded in advance.
  • Depending on the feature and API you want to add, you may need to integrate hardware. For the face detection API, for example, you need access to the camera.
  • Configure the hardware, then capture the image, text, or other input that the app will process with ML.
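For the offline step above, ML Kit reads a meta-data entry in the manifest that tells Google Play services which models to download at install time. A sketch, assuming your app uses text recognition and face detection:

```xml
<!-- AndroidManifest.xml — inside the <application> element -->
<meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="ocr, face" />
```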

Finally, use the models you have created, or the pre-trained ones, to have the app recognize the input you captured.

Before you begin the project, there are two things you might want to consider: 

  • When you are building an ML app, you are solving a problem at hand. Be sure that the problem you have identified actually requires ML to solve.
  • Once you are sure ML is the right tool, make sure you have assessed all the requirements for developing the app. Thorough research will help you arrive at the ideal app solution.

Summing up

Adding on-device machine learning capabilities to your app will enhance your solution and give you a highly capable mobile app. To add ML to your Android app, use the Firebase SDK, which comes with a set of ready-made APIs.

Depending on the capability you want to integrate, add the corresponding API, use a pre-trained or customized model, and launch your own ML application. It is important to know what you are creating and how you plan to build it before you move ahead with the launch.

The post Adding machine learning to your Android apps: Recognize text, faces, and landmarks appeared first on JAXenter.
