It used to be dead simple: a dark box and a lens. As technology evolved, so did the camera, growing ever more sophisticated and powerful, at least inside the box. The lens, while clearer, lighter, and more efficient than before, is still a piece of glass with its limitations (size, weight, accuracy). That matters less and less: with computational imaging, what is captured by the lens is only the beginning of what the camera can do. Paul Green is one of the speakers at the upcoming LDV Vision Summit, May 19 & 20, in New York.

Tell us a little about yourself. What is your background?
Paul Green, Co-founder and CTO at Algolux – Computational Optics

I’m currently the CTO and a co-founder of Algolux, a computational imaging startup based in Montreal.

I’m originally from California, where I did my undergrad at UC Berkeley (I worked as an undergrad researcher for another panelist Serge Belongie while he was a grad student there). I’ve always been interested in startups, so after Berkeley I worked for 2 years trying to build video codecs for mobile phones. Unfortunately we were a bit ahead of our time (remember your first Verizon flip phone in 2001?) so I decided to head to grad school at MIT.

I earned a PhD in CS, focusing on Computational Photography (my co-panelist Ramesh Raskar was on my PhD committee), which is something like the intersection of Computer Vision, Computer Graphics and Optics. My passion for photography started in grad school and I was particularly influenced by my office mate Eugene Hsu and my advisor Fredo Durand. After graduating from MIT, I worked for 2 years at a computer vision startup in Cambridge, MA. Eventually, academia lured my wife to Montreal and I followed. I like to joke that every time I move it gets colder.

Bridging the gap: what computational imaging brings to photography
What is the driving reason for the creation of Algolux?

Personally, it was a passion for photography and computer vision. On the business side, there is massive demand for photography. It’s estimated that 1 trillion photos will be taken in 2015, mostly with mobile cameras, and that number is growing 16% per year (source). Mobile cameras are ubiquitous and coupled with ever-increasing computational resources. I think we are reaching a critical mass of amazing algorithms and techniques being developed in academia that are ripe for use in the wild. On the other hand, the existing smartphone supply chain is fairly optimized when we think about components like SoC vendors, ISPs, and camera modules. Disrupting this established and complex ecosystem is a big challenge, but we think we have a good path to success. This is being validated by the response we have been getting from handset OEMs and camera module makers.

Is this the end of complex lens systems?

I don’t think so. We are certainly well past the end of “optics only” imaging systems. There is already a lot of computation going on behind the scenes to create a good quality image using today’s tiny cameras, and the lens system is only one part. Traditional optical design is a mature discipline with a proven method for delivering high quality imaging systems, but the first thing an optical designer will tell you is that everything in optical design is a tradeoff: size and weight against image quality, aperture size, and so on.

Left: Standard Image Capture; Right: With Virtual Lens

At Algolux we are working towards enabling a new paradigm of coupling optical design with software processing, which can help mitigate the tradeoffs in the design space. Now we can trade computational time, power, etc. for image quality, new features, etc. The great thing is that computational power is still growing while the physical constraints of optics haven’t really changed in decades, so it’s only a matter of time before computational photography / computational optics supplants the existing ecosystem. It’s definitely a bit of “software eats hardware”, where the software we write is much more complex than the optical system. So it may be the beginning of the end, although we still have a ways to go.
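To make the “virtual lens” idea concrete, here is a minimal sketch of the simplest form of this kind of processing: non-blind deconvolution, which removes a known, measured lens blur in software. This is an illustrative textbook Wiener filter, not Algolux’s actual pipeline; the PSF and the noise parameter are assumptions made for the example.

```python
import numpy as np

def wiener_deblur(blurred, psf, noise_power=0.01):
    """Illustrative 'virtual lens': undo a known lens blur in software.

    blurred     -- 2D grayscale image degraded by the lens PSF
    psf         -- point spread function of the optics (same shape as the
                   image, center shifted to [0, 0], e.g. via np.fft.ifftshift)
    noise_power -- assumed noise-to-signal ratio (regularizes the inverse)
    """
    H = np.fft.fft2(psf)          # optical transfer function
    G = np.fft.fft2(blurred)
    # Wiener filter: approximate inverse of H, damped where H is weak
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(W * G))
```

In a real system the blur varies across the field of view and with depth, and the simple Wiener regularizer is replaced by much stronger image priors, but the principle is the same: measured optical aberrations become a deconvolution problem that software can solve.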

Can we imagine a future with no lenses, or just a simple piece of plastic?

We already see signs of this in academic research and it’s quite promising. Naturally, it will take more time for this to penetrate mainstream photographic applications but I think you will see simple lensing for other applications like computer vision, health diagnostics, automotive, gesture, etc. For example, the human eye is actually a pretty simple lens system. Even the eyes of people with the best vision have many inherent aberrations (e.g. chromatic), yet our brain is able to compensate for many of them. This is where deep learning may come in.

What is the toughest challenge for computational photography?

Within the smartphone market, I would say the toughest challenge is access. The camera hardware and software stack is a very closed ecosystem, where components of the supply chain are commoditized and each OEM has its own proprietary flavors. There has been some progress recently with the latest Android OS (Lollipop), but there is still a long way to go until full adoption by the OEMs.

Many people say the holy grail is packaging the quality and versatility of SLR cameras with the availability and convenience of your mobile device. There have been some computational photography successes in mobile photography (e.g. panorama and HDR modes), and some of the recent features that Google has developed, like HDR+ and Lens Blur, are very cool. But today when I really want great image quality I still turn to my SLR, assuming I have it with me, which of course is the downside. What Lytro has done is great, but they are still niche because they haven’t yet been able to get their technology into a convenient form factor (i.e. your phone). Companies like Light, Pelican, and Core Photonics all seem to have that goal, but it’s yet to be proven and there always seem to be tradeoffs. This is why having a more ubiquitous approach to processing images within any optics/hardware combination is a very powerful value proposition.

Another example of computational imaging. Left: Standard Image Capture; Right: With Virtual Lens
Can computational photography embrace deep learning and computer vision, and if so, what can we expect to see?

Absolutely. I see computational photography as a natural blending of many related areas but particularly computer vision. In our work at Algolux, we often use so-called natural image priors. These are mathematical descriptions of what makes the images you capture with your camera look “natural” (as opposed to synthetic, noisy, or unnatural). This is something that the brain can do pretty well, so I think deep learning could help us learn better models (priors) for natural images, which will improve our algorithms.
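As a rough illustration of what such a prior looks like, the sketch below scores an image under a simple heavy-tailed (hyper-Laplacian) gradient prior: natural images have mostly near-zero gradients plus a few large ones at edges. The exponent value is a common choice in the literature, not a figure from Algolux.

```python
import numpy as np

def gradient_prior_score(image, alpha=0.8):
    """Negative log-probability of an image under a simple
    heavy-tailed (hyper-Laplacian) gradient prior.

    Natural images tend to have many near-zero gradients and a few
    large ones (edges); alpha < 1 captures that heavy tail. Lower
    scores mean the image looks more 'natural' under this model.
    """
    dx = np.diff(image, axis=1)   # horizontal gradients
    dy = np.diff(image, axis=0)   # vertical gradients
    return np.sum(np.abs(dx) ** alpha) + np.sum(np.abs(dy) ** alpha)
```

In restoration, a term like this is added to the data-fidelity objective so the optimizer prefers plausible images; a learned prior, e.g. from a deep network, plays the same role with a far richer model of what “natural” means.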

I also think down the line true scene understanding and recognition could really revolutionize photography.

Replacing the conventional lens. Expected growth of computational cameras in smartphones.
With the LDV Summit coming up, what do you hope to get from it?

Evan has assembled an amazing group of experts in imaging, computer vision, and video and I’m very thrilled to take part in it, but I’m even more excited as an attendee of the summit and to hear from the other panelists and speakers. I expect to get a glimpse into the future of these exploding fields from many of the very people that are working to create it. Lastly, running a startup, I take every possible opportunity to do some recruiting. We’re always searching for exceptional engineers and researchers.

Tell us what is interesting and unique about the LDV Vision Summit Entrepreneurial Computer Vision Challenges? Who should compete, and why?

First, I think that having a successful computer vision focused event of this kind is very exciting. We are used to conferences that are more research-oriented, but the fact that the LDV Vision Summit brings together people from academia, industry, and the investment world is truly unique. The Entrepreneurial Challenges take this one step further by having computer vision experts work on real-world problems and have their efforts evaluated by world-class judges. I think this is an amazing opportunity for scientists who want to show off their skill set, and in fact it can serve as the most practical kind of job interview with a Tier 1 set of employers.

[Editor’s note: Kaptur is a proud media partner of the LDV Vision Summit. Get up to 50% off the ticket price if you purchase before Thursday, April 30, 2015. Use promo code: KAPTUR]

 

Photo by ifindkarma

Author: Paul Melcher

Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Stipple, and more. Melcher received a Digital Media Licensing Association Award, is a board member of Plus Coalition, Clippn, and Anthology, and has been named among the “100 most influential individuals in American photography”.
