
Three ways your smartphone is programmed to use AI and machine learning

Posted February 7, 2021

Apple transformed the cell phone industry with the release of the first-generation iPhone in 2007. At the time, Steve Jobs described the iPhone as an “iPod, a phone and an internet communicator.” The first-generation device included the basic functions of calling and text messaging, browsing the internet and listening to music — but it did not have a video recording function, front-facing camera or GPS. The most popular smartphone applications of today, such as Instagram and Uber, had not yet been created.

Since the first-generation iPhone, Apple and its competitors have continued to add innovative and helpful features and applications to each new smartphone model. A major factor driving this evolution is progress in AI and machine learning. Here are three ways that AI and machine learning are built into your smartphone.

Face ID

Apple started using deep learning for face detection in iOS 10. Now, instead of typing in a passcode or using your thumbprint, you can conveniently unlock your phone simply by looking at it. This feature, called Face ID on Apple devices, can also be used to confirm payments. Modern iPhones use a special camera system for Face ID, called the TrueDepth camera, which maps your face in 3D and uses that map to authenticate you. The TrueDepth camera relies on three tools to complete the facial recognition process:

  • Flood illuminator: The flood illuminator produces infrared light, part of the electromagnetic spectrum that’s invisible to the naked eye, to see your face better in dim lighting.
  • Dot projector: This tool projects roughly 30,000 infrared dots onto your face, which help outline your features and build a 3D map of your face. When setting up Face ID for the first time, the phone asks you to rotate your head slightly so that the dot projector can map your face from different angles.
  • Infrared camera: The infrared camera compares its view of your face with the 3D map previously stored in the phone’s memory. If enough features match, Face ID unlocks your device. If not, you can try again from a better angle or somewhere with better lighting.

Face ID uses machine learning so that the system can continue to adapt to changes in your expression, weight, hairstyle and accessories, and recognize your face more quickly. Even if you wear a scarf or grow a beard, the system will still learn to recognize you.
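
Apple has not published Face ID’s internals, but the match-and-adapt loop described above can be pictured with a toy sketch: treat each scan as a numeric “embedding” of the face, unlock when the similarity to the enrolled template clears a threshold, and blend successful scans back into the template so it keeps up with gradual changes. Everything here, from the threshold to the variable names, is illustrative rather than Apple’s actual method.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # hypothetical cutoff; Apple's real criterion is unpublished
ADAPT_RATE = 0.05      # how quickly the stored template drifts toward new scans

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def try_unlock(stored_template: np.ndarray, new_scan: np.ndarray) -> bool:
    """Compare a fresh scan against the enrolled template.

    If enough features match (similarity above the threshold), unlock and
    nudge the template toward the new scan so the system adapts to gradual
    changes like a new beard, glasses or hairstyle.
    """
    score = cosine_similarity(stored_template, new_scan)
    if score >= MATCH_THRESHOLD:
        # Adapt: blend the new scan into the stored template (in place).
        stored_template += ADAPT_RATE * (new_scan - stored_template)
        return True
    return False  # try again from a better angle or in better lighting

# Example: an enrolled face and a slightly changed version of the same face.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
todays_face = enrolled + rng.normal(scale=0.1, size=128)  # small day-to-day change
print(try_unlock(enrolled, todays_face))  # True: still recognizably the same face
```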

Siri and Google Assistant

Natural language processing (NLP) refers to a machine’s ability to take in spoken language, work out its meaning, determine the appropriate action and respond in a language the user will understand. Built-in, voice-controlled personal assistants like Siri and Google Assistant are based on speech recognition and natural language processing. Siri is actually an acronym, standing for Speech Interpretation and Recognition Interface.

For these personal assistant apps, the first step is simply to hear the words you are saying. The next, more complex step is to use statistics and machine learning to decipher what those words mean. Siri and Google Assistant predict your questions and requests based on the keywords you use, along with your general speaking habits and language choices. This also means the apps can assist you better and provide more personalized results the more often you use them.
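
As a rough illustration of that keyword step, here is a minimal sketch of intent matching: score each known request type by how many of its keywords appear in the utterance and pick the best match. Real assistants use far more sophisticated statistical models; the intents and keywords below are invented for the example.

```python
# Hypothetical intents and trigger keywords, invented for this sketch.
INTENT_KEYWORDS = {
    "set_alarm": {"wake", "alarm", "remind"},
    "get_weather": {"weather", "rain", "temperature", "forecast"},
    "play_music": {"play", "song", "music"},
}

def guess_intent(utterance: str) -> str:
    """Pick the intent whose keywords overlap most with the user's words."""
    words = set(utterance.lower().replace("?", "").split())
    scores = {intent: len(words & kw) for intent, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(guess_intent("Will it rain tomorrow?"))  # get_weather
print(guess_intent("Play my favorite song"))   # play_music
```

A production assistant would also weight these signals by your history and phrasing habits, which is why results get more personalized with use.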

Google Maps

Google Maps is preinstalled on most Android phones. If you’re an iPhone user, you can download it from the App Store to take advantage of features such as the street-parking difficulty indicator, which lets you check how challenging it will be to find parking at your intended destination before you even leave the house.

Google used a combination of crowdsourced location data and machine learning algorithms to create the parking difficulty indicator. Put simply, the algorithm measures how long it took for users to find parking once they reached their destination. For example, users who circled around or drove up and down the street after arriving at their destination probably could not find a parking spot right away. The parking difficulty indicator is for users looking for street parking, so the algorithm also filters out users who parked in a private lot or traveled via taxi or public transportation, which might otherwise fool the system into thinking that parking was easily available. With this machine learning application, Google Maps can also predict parking difficulty for a certain location at a certain date and time, based on past data.
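
Google hasn’t published the exact model, but the core signal described above is easy to sketch: measure how long drivers hunted before parking on the street, filter out transit and taxi trips and private-lot parkers, and bucket the typical hunt time into a difficulty rating. The field names and thresholds below are hypothetical.

```python
from statistics import median

def parking_difficulty(sessions: list[dict]) -> str:
    """Rate street parking from crowdsourced trips to one destination.

    Each session records how the user traveled and the minutes between
    arriving near the destination and actually parking. Trips that didn't
    involve hunting for street parking (transit, taxis, private lots) are
    filtered out so they don't make parking look easier than it is.
    """
    street_times = [
        s["minutes_to_park"]
        for s in sessions
        if s["mode"] == "drive" and s["parked"] == "street"
    ]
    if not street_times:
        return "unknown"
    typical = median(street_times)
    if typical < 2:
        return "easy"
    if typical < 7:
        return "medium"
    return "hard"

# Example crowdsourced sessions for one destination on a Friday evening.
sessions = [
    {"mode": "drive", "parked": "street", "minutes_to_park": 9},
    {"mode": "drive", "parked": "street", "minutes_to_park": 12},
    {"mode": "drive", "parked": "private_lot", "minutes_to_park": 1},  # filtered out
    {"mode": "transit", "parked": "none", "minutes_to_park": 0},       # filtered out
]
print(parking_difficulty(sessions))  # hard
```

Grouping sessions like these by day of week and hour is what would let such an indicator predict difficulty for a future date and time.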

With the prevalence of smartphone ownership and the ongoing development of artificial intelligence and machine learning, there is great promise for future innovation. If you’re looking to innovate and need help, start by contacting our AI Data solutions team today.

