The photo app developers I spoke with are overwhelmingly excited about a range of WWDC announcements, though one announcement left several of them dumbfounded. We’ll get to that one later; let’s start with the four most exciting announcements.
ARKit: making mixed reality easier to create and more realistic to view
Apple’s iOS 11 ARKit framework enables developers to build mixed reality apps that interpret the imagery from the user’s iPhone camera: it identifies surfaces, estimates scale and ambient lighting, and provides fast and stable motion tracking.
What does that mean? If your camera sees a table in your living room, a mixed reality app can now let you place virtual objects, such as a lamp and a steaming cup of coffee, on the table’s surface. If you move the virtual lamp, the shadows on the table will automatically change direction.
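The table-and-lamp scenario above maps directly onto ARKit’s plane detection. The sketch below shows the basic pattern using API names from the shipping iOS 11 SDK (ARKit plus SceneKit); the box geometry standing in for the “lamp” and all sizes are illustrative assumptions, not anything from a real app.

```swift
import ARKit
import SceneKit

class ARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Ask ARKit to track the device's motion and detect horizontal surfaces.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects a surface, e.g. the table in your living room.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // Place a virtual object (a 10 cm box standing in for the lamp)
        // at the center of the detected surface.
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                           length: 0.1, chamferRadius: 0))
        box.position = SCNVector3(planeAnchor.center.x, 0.05, planeAnchor.center.z)
        node.addChildNode(box)
    }
}
```

Because SceneKit renders the scene, lighting and shadows on placed objects can follow ARKit’s ambient light estimates without extra work from the developer.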
The good thing? There are actually three:
- Building AR apps for viewing on iPhones or iPads will be easier than ever. Good news for iOS app developers.
- Mixed reality scenes will look much more real than we’re used to from apps like Pokémon Go, which basically superimpose virtual objects onto “flat” backgrounds. So the next generation of Pokémon Go or Snapchat Lenses could look much more impressive on iOS (whether these vendors actually care to offer apps optimized for just iOS is another question – they were conspicuously absent at the ARKit announcement).
- AR (similar to what we’ve described for VR) has a great future if users start adopting AR through low-threshold solutions, i.e. solutions that don’t require AR glasses or fancy goggles. With iOS 11, consumers will get a great AR experience using just their iPhone’s camera and a great AR app. And if they want more? There is a whole range of other AR viewing/camera devices on the market, all the way up to HoloLens and Magic Leap.
Depth API: boost to visual machine learning
Developers will now be able to take advantage of the dual-camera Portrait mode on the iPhone 7 Plus through a new depth API: in addition to the standard RGB channels, they now have an extra depth channel at their disposal.
Not only will this API make it easier to separate foreground from background objects in photo or video apps, it will also make developers’ machine learning more accurate and/or require smaller training sets, similar to how ML is easier to do with color photos than with grayscale photos – the ML systems will have more data points from the get-go.
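In practice, the depth channel ships inside the Portrait-mode photo file as auxiliary data. A minimal sketch of reading it back, assuming a local file URL and using the iOS 11 `AVDepthData` and ImageIO APIs:

```swift
import AVFoundation
import ImageIO

// Read the depth channel embedded in a Portrait-mode photo.
// `url` is an assumed local file URL to a dual-camera capture.
func depthData(forImageAt url: URL) -> AVDepthData? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any]
    else { return nil }
    // AVDepthData wraps the per-pixel disparity map that sits
    // alongside the photo's RGB channels.
    return try? AVDepthData(fromDictionaryRepresentation: info)
}
```

With the disparity map in hand, separating foreground from background becomes a per-pixel thresholding problem rather than a pure computer-vision guess.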
These gains could power many photo features, including intelligent photo editing in apps like Enlight, a closed-beta adopter of the Depth API.
Under the hood: Apple’s push in machine learning
Apple is fully riding the wave of running more machine learning processes locally on the device rather than in the cloud, which not only cuts down on the latency of device-cloud traffic but also has security and privacy advantages. With Core ML, machine learning will run faster on Apple hardware thanks to Apple’s advanced graphics technology that optimizes ML processing. In addition, Apple is making it much easier for developers to write ML code for its platforms, including the ability to port existing open source models to the Apple platform with just a few lines of code.
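Once a model has been converted, the on-device inference side really is only a few lines. A hedged sketch using Core ML together with the Vision framework; `MyClassifier` is a hypothetical class that Xcode would generate from a converted `.mlmodel` file, not a real model:

```swift
import CoreML
import Vision

// Classify an image entirely on-device.
// MyClassifier is a placeholder for an Xcode-generated model class.
func classify(_ image: CGImage, completion: @escaping (String?) -> Void) {
    guard let model = try? VNCoreMLModel(for: MyClassifier().model) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Top label from the local inference; nothing leaves the device.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }
    try? VNImageRequestHandler(cgImage: image).perform([request])
}
```

The conversion step itself (from frameworks like Caffe or Keras) happens ahead of time on the developer’s machine, so the app ships with the model baked in.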
New photo and video file formats: saving disk space
Apple introduced the High-Efficiency Image Format (HEIF) for photos and HEVC (H.265) for video. Both formats aim to significantly reduce file sizes without a discernible loss in quality. Smaller file sizes and more storage (plus Apple’s assurance that HEIF is auto-transcoded to JPEG when needed): who wouldn’t applaud that, since most of us never have enough disk space?
For photo app developers and their customers, there’s one other reason why a new file format like HEIF, which compresses photos to roughly half the size of standard JPEG files, is important. When users run out of space, their normal reaction is to delete infrequently used apps – and because many photo apps store edited photos inside the app bundle, they may inadvertently delete the photos edited in those apps as well. That’s not a good thing for consumers, or for developers – so it’s great that we’ll see some space freed up thanks to more efficient compression. [you can read more about the HEIF format here]
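For developers who want to write HEIF files themselves rather than rely on the system camera, iOS 11’s ImageIO supports it directly. A minimal sketch, assuming a `CGImage` in hand; the 0.8 quality setting is an illustrative value, not a recommendation:

```swift
import AVFoundation
import ImageIO

// Save a CGImage as HEIF/HEIC, typically around half the size
// of an equivalent-quality JPEG.
func saveAsHEIC(_ image: CGImage, to url: URL) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, AVFileType.heic as CFString, 1, nil) else { return false }
    // Lossy compression quality works the same way as with JPEG;
    // 0.8 here is an assumed example value.
    let options = [kCGImageDestinationLossyCompressionQuality: 0.8] as CFDictionary
    CGImageDestinationAddImage(destination, image, options)
    return CGImageDestinationFinalize(destination)
}
```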
Apple Photos Projects: leaving iOS photo app developers in the cold
Now, the most puzzling announcement: in High Sierra, Apple’s next macOS, developers will be able to access a new Photos API to create Projects directly inside the desktop version of Apple’s Photos program – but not the iPhone or iPad versions of that program.
Examples of Projects include photobooks destined for print, slideshows, and photo-rich websites. Among the initial adopters highlighted during the keynote are three of our friends in the printed photo product space (iFolor, Shutterfly and WhiteWall), along with video slideshow maker Animoto and website builder Wix.
As a Photos extension developer, you can fully leverage Photos’ beefed-up AI, which can auto-create photo collections (called Memories) around certain occasions, filtered for the most interesting photos – even for the most interesting areas within photos, such as faces. Developers can leverage not just the metadata behind these Memories but also many other Photos features by using them inside their Projects, without the need to develop these functions themselves. Plus, they can offer their applications directly inside the Mac version of Photos, a widely used photo organizing program. No wonder a web/mobile-based provider like Shutterfly is going back in time to develop desktop photo product ordering software (for the first time in its history, as far as I know!).
What’s puzzling about the Projects API? For one, Apple is exposing its own photobook, calendar, postcard, and photo printing product services inside Photos to direct competition by any developer who invests in creating a Photos extension. I’m still scratching my head trying to understand Apple’s business rationale.
But the most puzzling aspect of the Photos Project API is that it runs only in the desktop version of Apple Photos. Photos is a great program and Projects will make it even more useful. In fact, this desktop-only approach would have been a wonderful thing if I didn’t own a smartphone! But I do have a smartphone and I no longer live in the old days when all my photos came from my DSLR’s memory card and my Mac was the central hub of things. The world has moved on, not just to an “any device photo viewing” world, but also to an “any device photo organizing” and an “any device authoring” world. Think Adobe Creative Cloud, Dropbox, and even Apple’s own iCloud photo edit syncing features. Today’s consumer expects to be able to enhance their photos directly on their phone, edit a little more later at work in a web app, and change things again at night on their home computer. Or do it all on their phone.
The Photos Project API extensions are not mobile-first, not mobile-second, not even mobile-on-the-horizon – effectively leaving the developers of the currently available 69K iOS photo apps in the cold. Hopefully, that’s a short-term priorities issue that will be fixed in the near future. Still, Apple’s priorities with Projects are puzzling indeed.
A few more things…
Enlight. Lightricks’ best-selling photo editing app Enlight just won a prestigious 2017 Apple Design Award at WWDC, being the only photo app among the 12 winners. Hear more about Enlight at Mobile Photo Connect, October 24-25 in San Francisco, where Lightricks co-founder Itai Tsiddon will share his thoughts on his company’s innovative move towards a subscription-based model with Facetune 2, its next-generation facial image enhancement app. Plus we’ll get a glimpse of whatever else Lightricks might have in store for us!
***Photo app developers wanted – get the red carpet treatment as an Early Bird VIP Networking attendee at Mobile Photo Connect and submit a proposal for demoing your app at no extra charge! Sign up by June 30.***
Skype. As messaging platforms have morphed into photo platforms and photo platforms have morphed into messaging platforms, the folks behind Skype must have thought: why not? – and turned Skype into something that’s hard to summarize. The new Skype includes Snapchat-inspired Stories, access to the phone’s camera, and ephemeral visual content (lasting one week, that is). Not enough features? For good measure, Microsoft threw in chat bots based on its Cortana AI. Too little too late? Or too much too soon for those who just want to make a free call?
Author: Hans Hartman
Hans Hartman is president of Suite 48 Analytics, the leading research and analysis firm for the mobile photography market and organizer of Mobile Visual 1st, a yearly industry conference about mobile photography.