Google Goggles was an image recognition app released by Google in 2010. Users could take a picture of something they wanted to know more about, such as a piece of art, a famous landmark, or a barcode, and the app would run a search and return more information about the subject of the picture.
Back in the late 2000s, things such as image search and augmented reality were only just starting to become more than sci-fi plot devices. Still, people like David Petrou, an engineer at Google, had ambitious dreams of what the future of mobile technology could look like.
In 2008, while still working on Google’s Bigtable database project, Petrou started spending his free time on what would become Goggles. Even though he had no experience in computer vision or Java, he promptly taught himself both, and within a month he had a working prototype. It was quite rudimentary, with limited image-detection capability and slow, not especially accurate, search results.
Nonetheless, Petrou and other specialists at Google saw a lot of potential in the technology, long before Google’s biggest rivals, Apple and Microsoft, started working on similar projects such as ARKit and HoloLens.
Petrou was joined by the team from Neven Vision, a computer vision company Google had acquired earlier, whose technology had powered Picasa’s image recognition features and was among the best available at the time. Together they built a better version of Goggles, one that could identify widely known images such as album art, famous paintings, landmarks, and book covers.
Goggles was released in 2010 as part of Google Labs. Despite its limitations (it recognized only certain types of images, and the whole cycle of uploading, processing, and searching took what seems today a staggering 20 seconds), the tool was well liked.
Furthermore, the developers had grand plans for what the app could become, including features such as simply pointing your camera at an object and getting a pop-up with relevant information, or even integrating AR so the app could overlay all kinds of information on the objects it identified.
While the overall idea behind Goggles was quite innovative and had almost limitless potential where AR was concerned, the actual application left a lot to be desired. The technology available to the app’s developers was years, even decades, behind what was needed to put their ideas into practice.
Mobile device cameras were still low quality, and users were still learning to use them in everyday life, let alone for anything beyond what they would use a normal camera for. A lot of research remained to be done in computer vision and AI, and coveted features such as facial recognition were still far away.
There was talk of integrating the technology with non-handheld devices like glasses and even contact lenses. But again, this was planning far into the future, without the hardware or software needed for such applications.
Over the years, the developers added various features to Goggles to attract more users and teach them broader uses for their cameras. Goggles could scan barcodes to provide more information about products, or detect text and process it for translation.
However, as time went by, Google seemed to spend less and less time and fewer resources on the app’s development. Initially, there had been plans for a full iOS version. When Goggles finally became available on iOS 4, however, it was only a feature within the Google app, and it was removed a few years later with a 2014 update.
It’s little surprise, then, that Google stopped updating the application around that time. It wasn’t until 2018 that they released another update, one that discontinued support for Goggles and prompted users to download Google Lens instead.
Goggles’ apparent replacement among Google’s image-recognition tools, Google Lens, was released in 2017 as part of Google Assistant. The app could do pretty much everything Goggles did, only with better cameras and software, and Google still seems to have big plans for it. Considering the market’s growing interest in AR and image recognition, this time Google might end up implementing more of the ideas it has been working on for more than a decade.
Lately, Google (like most other tech giants) has been building far more advanced AI technology into all of its products. Combined with the considerable power of Google Search and the company’s research into natural language processing and understanding the real world more broadly, that could prove a very powerful foundation for Lens.
Lens has already been integrated directly into the Android camera on Google Pixel devices and on some newer Motorola, Xiaomi, and other phones. This lets it work directly with various Google services, which is far easier for users than opening a separate app.
In addition to the image detection and translation Goggles offered, Lens can now identify the style of clothes or furniture, helping with outfit planning and home decoration, and it works together with Google Maps, with AR integration planned for the future.
The tool also benefits from far better hardware and software, which leads to more accurate search results. That, in turn, puts the app on par with newer tools such as Samsung’s Bixby Vision and Huawei’s HiVision.