Google has started a new conversation around hardware, moving from a narrative about specs to one about artificial intelligence and machine learning. The development of AI, hardware and software together has manifested itself in the Pixel smartphone line with Android 8.0 Oreo—a product distinct enough to make Google an instant contender alongside the likes of Samsung and Apple.
On the Pixel 2, Google’s use of AI and machine learning points to the company’s ability to use cloud-based computing and software to augment the capabilities of handheld devices. Software offerings like Assistant and Lens continue to improve through years of research and development, and are what really set this phone apart from the competition.
A quick squeeze on the sides of the phone fires up the Google Assistant from anywhere, or you can say "OK Google" to invoke it by voice. The conversational UI performs tasks inside apps and surpasses Siri when it comes to holding a natural, contextual conversation.
If you opt in to have Google Assistant always listening, the Pixel 2 also can identify a song playing around you on the radio or in the elevator. When Google Assistant identifies a song, the results appear at the bottom of the always-on display, and a quick double tap offers more information via Google Assistant. If you can set aside the privacy implications of a phone that is always listening, the blend of audio hardware and software to enable machine learning processes right on your device is pretty cool.
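The song-identification idea described above boils down to matching an audio fingerprint against a compact database stored on the device. Here is a minimal sketch of that matching step, under heavy simplifying assumptions: the "fingerprint" is just the dominant FFT bin per window (real systems hash constellations of spectral peaks), and the database is a plain dictionary. The function names and parameters are hypothetical, not Google's.

```python
import numpy as np

def fingerprint(samples, win=1024):
    """Very rough audio fingerprint: the dominant FFT bin in each
    window of the signal. A stand-in for the peak-hashing real
    recognizers use; it is enough to show the matching idea."""
    n = len(samples) // win
    bins = []
    for i in range(n):
        chunk = samples[i * win:(i + 1) * win]
        spectrum = np.abs(np.fft.rfft(chunk))
        bins.append(int(np.argmax(spectrum[1:]) + 1))  # skip the DC bin
    return bins

def best_match(query, database):
    """Return the song whose stored fingerprint agrees with the
    query in the most windows -- a stand-in for the on-device
    lookup against a small local catalog."""
    def score(ref):
        return sum(q == r for q, r in zip(query, ref))
    return max(database, key=lambda name: score(database[name]))
```

Because both the fingerprinting and the lookup are cheap, this kind of pipeline can run continuously on the phone without sending audio to the cloud, which is the privacy trade-off the paragraph above alludes to.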
Google Lens is basically Google search for your eyes. You can open Google Assistant and then tap the Lens button to search for things with the camera. Through object recognition and machine learning, you receive clickable links to learn more about landmarks or to look up books, movies, albums and artwork. It's not an indispensable tool yet, but with time it could prove an insightful way to learn about the world around you.
You may have also noticed there is only one camera on the back, not two. Many new smartphones have moved to a dual-camera setup to enable Portrait mode. The Pixel 2 pulls off Portrait mode with a single camera by generating the background blur in software, so no second lens is needed. What is particularly impressive is that Portrait mode should keep improving as the underlying machine learning models are refined.
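The software-blur Portrait effect described above comes down to separating subject from background and compositing a sharp foreground over a blurred backdrop. Below is a minimal sketch of that compositing step, assuming the foreground mask is simply given as input (on the Pixel 2 it would come from a learned segmentation model plus dual-pixel depth cues); the function names and the naive box blur are illustrative, not Google's pipeline.

```python
import numpy as np

def box_blur(img, k=7):
    """Naive box blur: average each pixel over a k x k window.
    A stand-in for the depth-varying lens blur a real pipeline uses."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait_composite(img, fg_mask, k=7):
    """Blend a sharp foreground over a blurred background.

    fg_mask is a float array in [0, 1]: 1 where the subject is.
    In the real pipeline this mask would come from a segmentation
    network; here it is just an input to keep the sketch small.
    """
    blurred = box_blur(img.astype(float), k)
    mask = fg_mask[..., None]  # broadcast the mask over color channels
    return mask * img + (1.0 - mask) * blurred
```

The key design point is that everything after the mask is ordinary image math, which is why a single camera suffices: the second lens other phones carry exists mostly to estimate that mask via stereo depth.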
Like many other phones, the Pixel 2 uses electronic image stabilization (EIS) to help maintain consistent framing. What is unique about the Pixel 2 is that it does all of this stabilization automatically and in real time. A "frame look ahead" feature analyzes each individual frame of video for movement; machine learning then compares the dominant motion from one frame to the next and stabilizes the footage accordingly.
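The look-ahead stabilization described above can be sketched as a two-step process: smooth the measured camera trajectory using both past and future frames, then shift each frame by the gap between its measured offset and the smoothed path. The sketch below is a deliberately simplified, hypothetical version (horizontal motion only, a moving-average filter standing in for Google's learned motion model):

```python
import numpy as np

def smooth_trajectory(offsets, window=5):
    """Look-ahead smoothing: average each frame's measured camera
    offset with its neighbors, including *future* frames -- a simple
    stand-in for the frame-look-ahead model described in the text."""
    offsets = np.asarray(offsets, dtype=float)
    pad = window // 2
    padded = np.pad(offsets, (pad, pad), mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

def stabilize(frames, offsets, window=5):
    """Shift each frame by the difference between the smoothed
    trajectory and its measured offset, cancelling the
    high-frequency shake while preserving intended panning."""
    smooth = smooth_trajectory(offsets, window)
    corrections = np.rint(smooth - np.asarray(offsets)).astype(int)
    # np.roll is a crude warp; a real pipeline resamples with crop margins
    return [np.roll(f, c, axis=1) for f, c in zip(frames, corrections)]
```

The reason look-ahead matters is visible in `smooth_trajectory`: knowing where the camera is about to move lets the filter distinguish a deliberate pan (which should be kept) from hand shake (which should be cancelled).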
For Google, making hardware is about selling products, but it's also about learning how hardware can better integrate AI. Every company involved in machine learning should be asking the question "how do we apply AI to rethink our products?" Rather than make AI just another feature, we need AI to fundamentally alter what each device is.