
Google: what’s wrong with a lack of jaw-dropping announcements?

In short: Nothing.

With an emphasis on incremental improvements to Google's photo organizing, sharing, and search solutions, Google's maturing offerings are becoming so easy, efficient, and helpful that many consumers will feel they can't live without them. Perhaps equally important, Google is opening up more and more of its supporting AI technologies to developers, a boon for companies that can't afford to hire an army of AI PhDs.

The Google I/O announcements important to mobile imaging vendors

To put the photo-related announcements in perspective, note that Google CEO Sundar Pichai proclaimed that we are shifting from a mobile-first to an AI-first world. That sounds like a big paradigm shift, but last week's AI announcements felt to me more like increments to what was already in progress (at Google or elsewhere) than revolutionary, no matter how much publicity the Google Photos and Lens news received in the last few days.

Let’s start with Google Photos, which now has 500 million monthly active users who upload 1.2 billion photos every day.

Google Photos now enables users (in the US so far) to create and order simple (one photo per page) photo books, not unlike the printed photo products that Flickr, Apple, Microsoft, and Amazon already offer in their respective photo cloud storage solutions. It’s an approach I’ve advocated before: offering photo products as a feature in an environment where users already engage with their photos has compelling benefits over requiring users to install a dedicated app for a relatively infrequent use case.

But Google goes further than the tried and proven. It will also use its AI algorithms to suggest that users create a book after a particular event (think holiday or birthday party), as well as to recommend which photos to include.

We expect that over time this will also tie into another new feature called Suggested Sharing, which suggests which digital photos the user might want to share and with whom. Suggested Sharing could be ad hoc (say, after a sports game) or semi-permanent through Google Photos’ new Shared Libraries (say, for families).

For photobooks, the next logical step is to not just suggest which photos to include in a certain book, but also which of your friends or family members would most likely appreciate receiving this book as a gift.


The most exciting area where Google Photos’ incremental improvements are starting to pay off is face recognition: as we are inundated with photos, knowing who is in a picture is consumers’ most important photo-organization need.

So let me expand on this for a moment. Image recognition, in general, has quietly been getting better and better over the last few years. In fact, Pichai claims that Google’s deep-learning-based image recognition is now better than that of humans. (Partly because users simply can’t know as many objects by name as a computerized system might.)

The evolution of Image Recognition

And what is most important to recognize in photos? The people in them.

Incidentally, I played a bit with Google’s face recognition a few weeks ago, prior to Google I/O. As is probably typical for most consumers, it only occurred to me to do so after Google Photos proactively prompted me with something along the lines of: “Do you know this person?” (showing a photo of my late dad).


I clicked on this one photo and 100 or so other photos with my dad in them appeared, including scanned black-and-white photos from 60+ years ago, when he was in his early twenties. No false positives, and all based on a single image that triggered this selection! If I search for “dad food” I find all the photos of him taken during meals. Of course, after this zero-effort exercise, I couldn’t resist trying out other friends or family members whom Google Photos highlighted as frequently featured in my photos.

Is Google’s face recognition really only based on image analysis, or does it smartly also draw conclusions from the photo’s metadata or the gazillion things Google knows about me, which I have, for better or worse, stopped worrying about?  I don’t know how Google does it.  And as a consumer, I don’t really care, as long as it works and requires virtually no effort on my end.
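Whatever signals Google actually uses, the common textbook approach to face recognition is to map each detected face to a numeric embedding vector and match faces by vector similarity, which is how a single labeled photo can pull in a whole set of matches. Below is a minimal conceptual sketch of that idea in plain Python; the embedding values and labels are entirely made up (a real system would compute embeddings with a deep neural network), and nothing here describes Google's actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(query, labeled_embeddings, threshold=0.8):
    """Return the best-matching label, or None if nothing is close enough."""
    best_label, best_score = None, threshold
    for label, embedding in labeled_embeddings.items():
        score = cosine_similarity(query, embedding)
        if score >= best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical embeddings; a real system derives these from face images.
known = {"dad": [0.9, 0.1, 0.3], "friend": [0.1, 0.8, 0.5]}
print(match_face([0.88, 0.12, 0.31], known))  # prints "dad"
```

The single-photo experience described above falls out of this design: once one face is labeled, every photo whose face embedding lands close enough to that label's embedding is matched automatically.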

Compare Google Photos’ face recognition to the time-consuming and error-prone solutions offered in past desktop programs such as iPhoto or Picasa: they required multiple rounds of training images and constant user correction of the program’s mistakes, and still produced too many false positives to make face recognition even remotely useful. Now it just works.

In short: there is nothing wrong with incremental improvements, especially those that turn promising technologies into products or services that are good enough to use in real life.  In fact, Google Photos has evolved so much that for many iOS and Android users it has become their de facto camera roll replacement app (note that Google Photos is not even the official default photo app on Android or Google’s own Pixel smartphones).

Back to the Google I/O photo announcements. Google also introduced its Google Lens technology, which lets your smartphone camera interpret in real time what it sees, then turns its findings into proactive suggestions to the user. Here, too, the concept is far from new: the Google Translate app translates, in real time, text your camera is pointed at; Google Goggles, initially released in 2010 (!), is a visual search app that provides information about whatever painting, landmark, or barcoded product your camera sees.

What starts making Google Lens compelling is how it turns knowing what the user is looking at into actionable, useful suggestions, in the proactive and intelligent tradition of Google Now and Google Assistant. Point your camera at a restaurant, and Google Lens could proactively provide the opening hours, a menu, or tell you whether a table is available tonight when there’s nothing on your calendar.
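Conceptually, the recognition-to-suggestion step described above is a mapping from a recognized entity plus user context to a candidate action. The toy Python sketch below makes that pattern concrete; the entity types, context keys, and suggestion strings are all hypothetical stand-ins, and a production system would rank many candidate actions with learned models rather than hand-written rules.

```python
def suggest_action(entity_type, context):
    """Map a recognized entity plus user context to a proactive suggestion.

    A deliberately simple rule-based stand-in for the recognition-to-action
    step; entity types and context keys here are illustrative only.
    """
    if entity_type == "restaurant":
        # Context (e.g. a free calendar slot) changes which action is useful.
        if context.get("calendar_free_tonight"):
            return "Check table availability for tonight"
        return "Show opening hours and menu"
    if entity_type == "landmark":
        return "Show history and visiting information"
    if entity_type == "barcode":
        return "Look up product reviews and prices"
    return "Search the web for this"

print(suggest_action("restaurant", {"calendar_free_tonight": True}))
# prints "Check table availability for tonight"
```

The interesting design point is the second argument: the same recognized object yields different suggestions depending on what the system knows about the user's situation, which is what separates Lens-style assistance from plain visual search.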

Google Lens will be integrated into most Google properties, starting with Google Photos and Assistant.

Photo app developers wanted – get the red carpet treatment as an Early Bird VIP Networking attendee at Mobile Photo Connect and submit a proposal for demoing your app at no extra charge!

A few more things… 

Google Instant Apps.  This technology, which provides app-like functionality inside the mobile browser rather than through an app the user must download and install, is finally out of closed beta. Instant Apps could be a great solution for apps that cater to low-frequency use cases, such as photo print products or occasion-specific collages.

As Ching-Mei Chen, co-founder of PicCollage, explains, “Instant Apps could be a great way to attract new customers by offering a low-threshold solution. A consumer could, for instance, use a Mother’s Day sticker pack without needing to install a full-sized app, which they might not use regularly or which might require too much device storage space.”

Google Tensor Processing Units.  At Google I/O last year, Google announced its tensor processing unit (TPU), a custom chip built specifically for machine learning and tailored for its open source TensorFlow machine learning library.  This year, it announced the second generation of these TPUs and made them available to external developers who want to leverage all this machine learning power without needing to develop it all themselves.  Machine learning is becoming the next battleground between Google, Amazon, and Microsoft, who are all vying to become the industry’s dominant cloud computing platform.

Google Kotlin. Google announced that it will promote Kotlin, a modern programming language for the Java Virtual Machine, as an alternative language for writing Android apps. Kotlin is interoperable with Java, which until now was Google’s primary language for writing Android apps (besides C++). Google will ship Android Studio 3.0 with Kotlin support and back a non-profit foundation for Kotlin.

Google Daydream.  Google announced it is partnering with HTC and Lenovo to develop standalone Daydream headsets that won’t require inserting a smartphone.  In addition, Samsung announced that its Galaxy devices will work with Daydream as well, i.e. they will no longer work exclusively with Gear VR headsets.

Google Notifications in Android O.  Also announced at Google I/O, Android O will feature color-coded notification sorting by priority and notification type – important changes for most photo and video app developers, who rely on notifications to engage with their users.

Vivid-Pix.  Memorial Day is coming up – How do you get great vacation photos?  Check out the free white paper review of various cameras and software programs from our friends at Vivid-Pix, and enjoy their tips for taking underwater photos.

Author: Paul Melcher

Paul Melcher is the founder of Kaptur. He is an entrepreneur, advisor, and consultant with a rich background in visual tech, content licensing, business strategy, and technology, and more than 20 years of experience developing world-renowned photo-based companies, including two successful exits.
