Google I/O is around the corner, and while the company has already teased us with some revamped services, we’re expecting to see a lot more. Personally, I’m hoping to see improvements to Google Lens, the image recognition tool the company has put into a few of its apps. Because right now, my friend, it ain’t great.
Case in point: it can’t even recognise Google’s own devices. Redditor Prudhvi4p pointed out an interesting oversight in the object identification feature baked into Google Assistant. When they took photos of their grey Google Home Mini, Google incorrectly identified it as either a ball or a tennis racket. Looking at the image, it’s understandable how Google could’ve dropped the ball on this one. The device’s proximity to a power strip, the uneven lighting, and the additional objects around the Mini don’t help.
So I decided to try it out myself, using the Lens feature in the Google Photos app for iOS on the Google Home devices I have at home. I also tested it on a newly planted succulent on my desk, which it correctly identified as a chalk dudleya.
Lens, from various angles, failed to identify any of the three Google Home devices I have. The explanation? “Hmm, not seeing this clearly yet,” with a link to what Google Lens can and can’t identify. Google doesn’t claim Lens can identify consumer goods (it only lists a few categories, like business cards, paintings, landmarks, plants, and animals), but being unable to identify its own products highlights the limitations of the feature in its current state.
It also highlights how nascent and unfinished many of Google’s consumer-facing products feel, with Google Home and Google Lens being prime examples. Lens couldn’t identify the dracaena plant I have, one of the few things it claims to be able to identify, nor could it identify a fairly traditional-looking thermometer I had in my drawer. With Google’s Home devices, it’s often a crapshoot whether I’ll be able to send music or podcasts to them without restarting my app, the device, or my router at least once a week.
My Google Home Mini, which Google Lens could not identify.
I understand Lens is built on top of Google’s computer vision research, which uses enormous databases of images to train computers to identify objects, so there might not be many Google Home photos to work with. But this example just reminds me of how sick I am of being a beta tester.
Sure, it’s a developer’s conference, so Google is almost certainly going to focus on the advances it has made in computer vision and machine learning and talk about how it’s making it easier for developers to apply its Material Design aesthetic to their apps. Personally, I’d prefer Google fix what’s already out there, or at least stop releasing half-baked products.