Outside of Instagram, the biggest success in the world of photo apps is certainly Facetune. With millions of paid downloads, it was the number one paid app in over 120 countries. Created by five Israeli friends, it certainly benefited from Instagram’s success and especially its insatiable appetite for selfies. But unlike Instagram, Lightricks, the company behind Facetune, is a real software development company. It currently has six different photo/video editing apps in the App Store, all very successful. We spoke with co-founder Itai Tsiddon ahead of his appearance at the 5th annual Mobile Photo Connect, where he will hold a fireside chat session…
– A little bit about you. What is your background?
I was born in the US but grew up in Israel. After serving four years in the IDF as an intelligence officer in an infantry unit, I enrolled in law school. I worked very briefly on the investment side in Israel before co-founding Lightricks with my four co-founders in 2013, while I was clerking at the Supreme Court of Israel. Since then, Lightricks has been the majority of my career. For a few months during the early bootstrapped days of Lightricks, I held a day job doing M&A at Davis Polk in NYC, but otherwise I haven’t had much in the way of a regular career.
– Since Facetune‘s huge success, Lightricks has been making new photo editors, one after the other. How do you keep users motivated?
By offering best-in-class products across different use cases, creativity verticals, and general user needs. Note that the different products we make have very different user bases – Facetune 2, Enlight Photofox, and Enlight Videoleap are good examples. Yet in each vertical, we aim to apply research-grade innovation. We continuously innovate with our team, now around 90 people, a sizable chunk of whom come from hardcore research backgrounds, to create wow experiences for users and push the boundaries of what is thought possible on mobile. All with a product-first, user-centric outlook that aims to delight users and abstract away from them the underlying complexities of building such powerful tools.
– With each new hardware release, the camera and software support delivered by Apple or Samsung is more and more sophisticated. Is there any room left for independent app developers to add photo-related features?
The dominant platforms on mobile, be they on the hardware level or the software one, by nature of their business model must aim to serve the billions, or at least hundreds of millions. While they can dabble on the side in more focused creativity applications, that is far from the core of their business. We have identified this as the field we want to address. If even 5% of those billions lean towards creativity and wish to do more than consume content on their mobile devices, there is substantial room for a mobile tools company. In our view, for a variety of reasons too complex to go into here, this will require a company with a suite of tools and the advantages of scale in different areas, from research to user acquisition.
– As a mobile photo editor app developer, Lightricks has built a niche of its own. Who is your biggest threat: mobile phone manufacturers, small upstart app developers, or big software companies like Adobe?
In the space of companies focused on creative tools, Adobe is certainly the company with the most resources, and it is not an old-school Goliath you can make fun of. It is a very nimble company relative to its size, they are doing amazing things in a lot of different spaces, we have a ton of respect for what they are doing, and we are obviously keeping close tabs on their efforts.
– How does Lightricks integrate artificial intelligence in its offering?
We invested a lot of effort creating DNN infrastructure that allows us to move rapidly from prototyping in research ML tools like TensorFlow and Keras to actual shippable code. So at the moment, more and more of the algorithms we use, in basically all our products, are moving to this architecture. From things like foreground/background separation in Facetune to semantic segmentation and automatic layer creation in Photofox, it is all part of the big push to DNNs that we are making internally.
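The research-to-production path described here can be sketched in miniature. The following is an illustrative example only, not Lightricks’ actual pipeline: a toy Keras model for per-pixel foreground/background scoring is converted to TensorFlow Lite, the kind of step needed to turn a research prototype into code that can ship inside a mobile app. The model architecture and sizes are arbitrary assumptions for the sketch.

```python
# Hypothetical sketch: prototype in Keras, then convert to a mobile-
# deployable TensorFlow Lite model. Not Lightricks' actual pipeline.
import numpy as np
import tensorflow as tf

# A toy fully-convolutional model producing a per-pixel 0..1 "foreground" mask.
inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")(x)
mask = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, mask)

# Convert the (here untrained) prototype to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. weight quantization
tflite_bytes = converter.convert()

# Run the converted model with the TFLite interpreter, as a phone would.
interp = tf.lite.Interpreter(model_content=tflite_bytes)
interp.allocate_tensors()
inp = interp.get_input_details()[0]
out = interp.get_output_details()[0]
interp.set_tensor(inp["index"], np.zeros((1, 128, 128, 3), dtype=np.float32))
interp.invoke()
pred = interp.get_tensor(out["index"])
print(pred.shape)  # one mask value per pixel: (1, 128, 128, 1)
```

In a real product, the converted model would be bundled with the app and invoked through the platform’s TFLite runtime rather than from Python; the point of the sketch is only that the prototyping framework and the shipping artifact are different representations of the same network.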
– Do you see VR or AR taking a predominant role in consumer mobile photography (or will it be limited to games and shopping apps)?
Historically, every new computational platform, and even mid-cycle iterations, had a significant impact on what people are creating and considering art. The advent of mobile really democratized photo and video tools, and VR/AR is probably going to add something to the mix. One of the promises of VR is creating infinite working spaces, since you are no longer restricted to a screen of fixed size. So it is kind of straightforward to imagine, for example, photo galleries where artists can showcase their work in a virtual environment customized for that. So we think we will see these fancy VR photo viewers pretty early on. After that, as more people experiment with the platform, we will see more experimental stuff, and hopefully something there will emerge as the next big thing in our field.
– Your company has its ears very close to how consumers use photography on a daily basis. What are your main observations?
– What area of computer vision research do you feel is most exciting?
No big surprises here: that is obviously DNNs. Many folks at the company have a classical image processing and computer vision education and were skeptical at first, but as time goes by, more and more classical CV and IP problems are becoming tractable with these architectures, achieving state-of-the-art results. The exciting thing here is that all of these neural networks share the same underlying architecture, which hopefully will allow us to significantly reduce the time between prototyping and production-level code.
– What do you look forward to the most from the Mobile Photo Connect?
Connecting with other people in the industry.
– What would you love Lightricks to offer that technology cannot yet deliver?
Without going into the realm of science fiction (5-10 years from now), there are certain image processing algorithms that can’t run in real time and are thus not very applicable to video and in-camera live editing at the moment. The advent of DNNs allows us to do such diverse tasks as separation of foreground from background, estimation of depth from a single image, control (to a degree) of a scene’s illumination, in-painting, etc. with basically the same underlying architecture. Ideally, we would like to apply all these effects live in camera and on video, but we will need another cycle or two of hardware in order to get there.
Kaptur is a proud media sponsor of Mobile Photo Connect
Photo by Alex Matravers
Author: Paul Melcher
Paul Melcher is the founder of Kaptur and Managing Director of Melcher System, a consultancy for visual technology firms. He is an entrepreneur, advisor, and consultant with a rich background in visual tech, content licensing, business strategy, and technology, with more than 20 years’ experience in developing world-renowned photo-based companies, including two successful exits.