
LDV Vision Summit: Computational Photography and Video

As a refresher for those who did make it and a discovery for those who couldn’t, we will be publishing a series of transcripts from last year’s LDV Vision Summit. First on our list is this conversation around computational photography and video from some of the top experts in the field:

Moderator: Evan Nisselson, LDV Capital

Panelists: Michael Cohen, Principal Researcher, Microsoft Research*; Paul Green, CTO & Co-Founder, Algolux; and Ramesh Raskar, Associate Professor, MIT Media Lab

(From L to R): Michael Cohen, Principal Researcher, Microsoft Research; Paul Green, CTO & Co-Founder, Algolux; Ramesh Raskar, Associate Professor, MIT Media Lab; and Evan Nisselson, LDV. Photo © Robert Wright/LDV Vision Summit

Evan Nisselson: I’d like to invite up our panelists—Michael Cohen from Microsoft Research, Paul Green from Algolux, and Ramesh Raskar from MIT. We’re going to have a 35-minute conversation and the last five minutes will be questions from all of you and there will be mics in the audience.

Thank you guys for coming. I’d like to kick it off with one or two minutes each, at the most: Who are you, and what should the audience know, in a couple of sentences? Then we’re going to dig into real details about that. Why don’t you kick it off?

Michael Cohen: Sure. I’m Michael Cohen. I’m at Microsoft Research, where I’ve been for 20 years, but I’m really an academic at heart. MSR is certainly academic. I’ve been working primarily in computer graphics and now computational photography.

Evan Nisselson: Fantastic. I like concise.

Paul Green: Yeah, good morning. I’m Paul Green. I’m a co-founder and CTO of a Montreal-based computational photography startup called Algolux. At Algolux, our focus is on providing amazing image quality to today’s smartphones and next-generation smartphones. We enable things like redesigning the optics to make the camera thinner, adding new features like optical zoom, or, more practically, making the camera more manufacturable. Before that, I did my PhD at MIT, where I knew Ramesh well.

Evan Nisselson: Cool. Ramesh.

Ramesh Raskar: Hello, I’m Ramesh Raskar. I’m a faculty member at the MIT Media Lab. Our group is called Camera Culture. In the group we try to make the invisible visible, and also create new ways to capture and share visual information. Within that, we look at some extreme technologies, like cameras that can see around corners or cameras that can create videos of light in motion at a trillion frames per second. We also look at medical devices, medical imaging, and computer vision. Millions of pictures are online, and this will change our visual experience.

Evan Nisselson: One of the fascinating things here is the mixing. We did it last year and we did it this year, and I think it’s very critical. Mixing entrepreneurs, executives, researchers, and professors from universities and corporations gives us a lot of interesting dynamics. We’ve got two research labs, one at a big company and one at a university, and a startup. Polar opposites, potentially, or many similarities.

There’s been a revolution at Microsoft in your group. Tell us a little bit about what’s changed.

Michael Cohen: Sure. A lot has changed over the last 20 years, as I’m sure you know. Probably one of the biggest things is that for the first 10 years at least, the way we would have impact was by trying to transfer technology out of Microsoft Research into the larger business organizations like Windows or Office, etc. Most recently, we’ve become our own little startup within the company, I would say. We’re able to build some technology, build it into an app (we have a designer in our group now who does beautiful designs), and get things shipped out. I don’t know if any of you saw that last week we shipped the Hyperlapse app on Android phones, on Windows phones, on desktop, and actually as a cloud service, all around one vertical, with our team of about 8 to 10 people within a company of 100,000. That’s a unique undertaking.

Evan Nisselson: It sounds like you highlighted the “get things shipped out.” Is that probably the major change over the last 20 years?

Michael Cohen: Absolutely.

Evan Nisselson: You actually shipped a product.

Michael Cohen: We as a small group can actually ship things out to the public.

Evan Nisselson: Fantastic. Ramesh, why don’t you take it from your perspective? How is it similar or different from the lab that Michael works in and the projects that you’re doing? You’ve got tons of fascinating projects, but how does the structure work? How do you choose which projects to work on?

Ramesh Raskar: As Michael said, MSR is the academic arm of Microsoft. I consider the Media Lab the industry arm of MIT, because we work with 80 Fortune 500 companies who are our main sponsors. We are very close to the real-world action. As much as we do very fundamental and applied research at the Media Lab, we are also very, very close to the passion of getting ideas out there in the real world. Right from the e-ink that you see in the Amazon Kindle to technologies that are far out there, including many visual technologies, [they are all] part of the Media Lab.

 

Evan Nisselson: Paul, how are you different from that? It’s a little obvious but give me your perspective. You used to be at MIT…

Paul Green: Obviously our mandate is a lot different. We’re a startup, we need to sell these things as opposed to doing…

Evan Nisselson: Make money.

Paul Green: Make money, as opposed to doing interesting, head-in-the-clouds kind of research. Big ideas. We still try and do some of that. I have that background, of course.

Evan Nisselson: That’s interesting, because I would say this panel—maybe not in this audience as much—but if we had a survey of the world and asked them what computational photography was, they would say you’re doing something far out there. It sounds unrealistic, but for us it’s exciting and it’s real. Give us an example of more details of what you’re actually going to ship, to the extent that you can.

Paul Green: I said we’re focused on providing amazing image quality and that’s really what the end goal is. We realized pretty early that it’s really hard to sell image quality alone. There needs to be some business case around it. Something that the OEMs and the camera module makers really latched onto was the idea of yield. What we’re going to ship is basically software that can make the modules more manufacturable.

Paul Green, CTO & Co-Founder, Algolux. Photo © Robert Wright/LDV Vision Summit

Evan Nisselson: So you’re not making hardware, just software.

Paul Green: Just software. It’s really tightly integrated, of course.

Evan Nisselson: For a little foundation, in one sentence each of you—one sentence—define computational photography and video.

Michael Cohen: I’m going to give you two.

Evan Nisselson:  Now come on.

Michael Cohen:  One of them that I’ve repeated for the last 10 years is: “Capture, edit, share, and view as a single experience.” It’s very hard to wrap our heads around that. I think people begin to accept that, but you really need those four things in a single user/consumer experience.

Paul Green: I’d say you know it when you see it. I view it as the intersection of a lot of other areas like computer vision, optics, signal processing, image processing, machine learning.

Ramesh Raskar: In 2005 we taught that computational photography is about creating completely new form factors, like thin cameras or a compact flash that can be used as a studio light; new experiences, like interactive photo frames; or impossible photos: pictures that can see around corners, or pictures that freeze time at very different scales. Somehow we lost that dream of computational photography as we defined it 10 years ago. Most of us have been busy for the last 10 years just making the damn camera phone do slightly better. Hopefully for the next 10 years we’ll go back to the dream and create amazing new ways to capture and share information.

Evan Nisselson: That’s a great lead. It’s interesting: you have three different perspectives, but they are all really the same. There are different ways of looking at things, which is typical for humans, obviously. You saw my presentation earlier about my first camera phone in 2003, which was a Sony Ericsson P800, and how I took a picture and started using MMS to send it to people… That second is when I wrote that article. I couldn’t believe what was going to happen in our industry, since my camera bag before held a Nikon F, a Rolleiflex, a Nikon FM—I’ve been photographing since I was 13 years old. I quickly said, “Okay, shitty pictures, but the ability to communicate is unbelievable. How long until this camera phone has 80% of the features of the DSLR so that there is absolutely no need for a DSLR?” We’re trending in that direction. People are using camera phones more, but that’s my question. Not only for humanity, but from a business perspective. Ramesh, what do you think about how long… Let’s say 80%. The reality is that the majority of people who photograph only use 1% of the features on their DSLR, so it’s irrelevant. The computation, the quality, the output, the quality output—how long is it going to take us to have a camera phone that’s the same size, not one of these jumbo crap… same size as a phone?

Ramesh Raskar: I’m going to look at the question a slightly different way. You’re asking me, how can I build a horse that can run faster and faster? I’m going to say, I’ll just give you a car. It’s like calling a car a horseless carriage. I think we ought to change the conversation now and forget about the SLR, like we forgot about LP records, and get into a completely new domain if we are to bring in a revolution in computational photography. You talked about connectivity as being a very important aspect, which the SLR frankly didn’t care about. If you look at this generation, the parameters that we thought were very important, like depth of field or bokeh or megapixels, are completely irrelevant. Even post-capture control, which is manipulation of raw and so on, is somewhat irrelevant. I think we ought to change the question. Instead of saying “80% of an SLR,” can we say “a 10 times better visual experience with new technologies”?

Evan Nisselson: Than what?

Ramesh Raskar: Than what we do now. If we’re living in the age of horses…

Evan Nisselson: With a camera phone or with a camera?

Ramesh Raskar: Camera phone. Your Narrative Clip.

Evan Nisselson: So anything, it should be 10 times. The challenge there is that we don’t need 10 times the DSLR. 99.9% of the world doesn’t need that. I’d love to change the discussion, but I think whatever the goal is, it’s really to communicate. Right? 10 times better communication could be faster. And I question… There are roadblocks. You’re saying, if you can’t build that, I’ll give you a car if you want it. Right?

Ramesh Raskar: Yes.

Evan Nisselson: Then translate it before we go to the rest of the panel. Michael, you want to add? What are your thoughts? Add what you’re about to say.

Michael Cohen: I’m going to pile on as well and say that you’re asking: When will camera phones have 80% of the capabilities of DSLRs? I would say they already have about 800%. It really comes back to that notion: capture, edit, share, and view. Phones can do all of those things.

Evan Nisselson: Let me ask the question better.

Michael Cohen, Principal Researcher, Microsoft Research. Photo © Robert Wright/LDV Vision Summit

Michael Cohen: Let me add one more point. If we look at the value of the pixels that are captured, and we plotted that value over time, I think most people in here would draw a graph that starts out very high, right at that moment, drops precipitously, and then just wiggles along. If those pixels have 99% of their value in the first five minutes these days, then again, the DSLR is not the place to capture those pixels. Quality is an expectation from a consumer, but that quality expectation also cuts exactly the opposite way. We’re willing to watch live things in horrible quality because of the excitement of watching it happen. We’re willing to look at things and consume things that are five minutes old because it’s very fresh. Again, as time goes on our expectations go up. I really think the question is…

Evan Nisselson: How would you ask the question better?

Michael Cohen: When will DSLRs have the capabilities of camera phones?

Evan Nisselson: That’s a good way to phrase it. I, however, can’t—I only want one device. I am a minimalist, and many people like carrying fewer things around. Some people like carrying huge bags around. I want to be able to have, maybe this is just me and not the majority of people, all the zoom lenses and all the tools that I used to use as a professional photographer on the same device that I use as my phone and my computer. My portable computer is also my high-end camera. When will we be using a camera phone to replace that? Ever, Paul?

Paul Green: I don’t think so. I think it’s going to be a while. There are inherent advantages to that form factor that are very hard to overcome in the cell phone. There are some startups even today that are making some buzz. You highlighted one, LinX, which was bought by Apple. There are a couple of others going in that direction, Pelican and Corephotonics, and Light is making some noise. As they both said, I think the question is a bit the problem, I guess.

Evan Nisselson: Okay.

Ramesh Raskar: We are making a business case as opposed to a research case now.

Prototype angle sensitive pixel camera (left). The data recorded by the camera prototype can be processed to recover a high-resolution 4D light field (center). As seen in the close-ups on the right, parallax is recovered from a single camera image.

Evan Nisselson: Actually, I disagree with that. You might be right in that case, but my view is that within 10, 15, or 20 years, whatever it is, 80% of the potential quality output and features—to do zoom and all the other features—will be on my camera phone and I will not need another camera. I might want another camera, or another car, or another horse. I might want to hold a different form factor, and all those things are right. But I disagree that it’s not going to happen.

Michael Cohen: I’ll try to answer your question a little more directly. It fundamentally is counting photons. You’re asking: When will this little device be able to capture as many photons as this big device ostensibly in the same amount of time? The answer to that is maybe never, but computation can do a lot for you. As we saw in the last keynote and as I’m sure we’ll see again, by combining not just this instant but maybe a little piece of time that I can capture over, or combining the gazillion photos out on the Internet, I’ll be able to do that. I’ll be able to go take a picture of my family in front of the Eiffel Tower on a horrible, miserable, misty day and it will come out like a beautiful clear day that I wish I was there on. Is that better than a DSLR? I don’t know.

Evan Nisselson: That’s another question. I don’t know if it is, but I guess for the use case that I mentioned of having one device that’s as good as that, there’s nobody saying “Well, you don’t need anything else.” I think that’s potential.

Michael Cohen: I was just going to add that the phone is a very constrained device in terms of computation and power. The problem is that a lot of people just don’t care that much about image quality. They can’t detect it. Look at some of the Nokia phones that came out: they actually had pretty good cameras, but they weren’t commercially successful.

Evan Nisselson: I think that was less the camera and more the rest of the operating system. It brings up a point: you sent me a great synopsis of a speech you did recently, Ramesh. It relates to the photons. It’s superhuman vision, hacking physics. I don’t know how it’s going to be done. I don’t really understand photons. I know what the word means, but that’s why I’m happy to have you guys here for this discussion. Hopefully it’s interesting for the rest. I’ll take questions in about 10 minutes. Or if anybody raises their hand for questions… We want this to be as interactive as possible. So hacking physics sounds very similar, directionally, to what Michael was saying. Well, maybe we could one day do something like that. Is that what you’re talking about, or something different?

Ramesh Raskar: It goes back to the question you asked about being SLR-like. What we have been doing right now is called pixel hacking. We have been trying to take the camera phone and make it look like it’s an SLR. But I think our ultimate goal has to be a superhuman experience. Michael talks about being able to see through fog. I want to talk about being able to see around corners. Roger, by the way… I don’t know if you realized, but he had big gear taking our pictures and he took out his iPhone and created a panorama. That’s why I gave him a thumbs up. I think he was giving his own vote on which one he likes. To get there, I think pixel hacking is not going to take us to this superhuman photography, these superhuman experiences. We have to start doing some photon hacking to get there. “Photons,” of course, means particles of light. When we start thinking about the four standard elements that we have in a camera, which are optics, sensors, processing, and illumination… All of them involve photons in a certain way, of course… The processing is more on electrons and bits. We have to think about them at the time of capture. We can’t just say, “Let’s capture good raw and do everything post-capture.” Do certain things at the time of capture, and so on. As you know, some of these companies are going in the direction of choosing a camera array, or multi-spectral, but you’ve got to go well beyond that. In our group, we can read a book without opening it. We can read through the pages using other wavelengths. We are able to create pictures of people behind a wall. For that we are using other spectrums like microwave and terahertz.

Evan Nisselson: Tell us a little bit about that, and tell us about photographing around corners.

Ramesh Raskar: If you are to photograph around corners… If I could look outside this door, it seems like it violates the definition of a camera, because a camera is supposed to see what’s in front of it. But I can just flash light on the floor. It will bleed into what I cannot see, it will bleed back, and a very tiny fraction of the light will come back to the camera. By analyzing these multiple bounces of light, basically the chatter of photons going back and forth, very much like the chatter and reverberation you have in audio, the reverberation of photons allows you to actually see around corners. We have demonstrated that, and it’s difficult to do for photography because these things still cost $1 million. But we’re using it for endoscopy and so on. The same thing with seeing through walls. We’re able to use radio frequency signals to create full shapes of people who are completely behind walls. Now how will that play as photography? It’s not an SLR-like experience. It’s going to be a very unique experience where you will be able to see things that are superhuman, literally. When you watch movies with superheroes, they always show imagery that looks like something an average person can understand. I don’t think superheroes need to see things the same way we appreciate colors and motion and so on. We need to put ourselves in that mindset of, “Hey, listen, visual photography was great, but the technological revolution is much faster than the biological revolution. We have to adapt how we experience these things.” I think when phones came around, we understood that we could talk to somebody who is across the ocean. When it comes to photography, we are still stuck in the mindset that I’d rather see a picture of what’s in front of me.
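[Raskar’s actual demonstrations use femtosecond lasers and far richer reconstruction, but the underlying geometry of timing multiple bounces can be illustrated with a small, purely hypothetical 2-D toy: a co-located laser and camera at the origin fire at visible wall spots, each round-trip time yields the distance from that wall spot to the hidden point, and trilateration recovers the point. The wall coordinates and units below are invented for illustration.]

```python
import numpy as np

C = 1.0  # speed of light in toy units

def hidden_point_from_times(wall_pts, times):
    """Light travels origin -> wall spot -> hidden point -> wall spot -> origin,
    so each round-trip time yields |wall_spot - hidden|; trilaterate from those."""
    wall_pts = np.asarray(wall_pts, float)
    d = (C * np.asarray(times) - 2 * np.linalg.norm(wall_pts, axis=1)) / 2
    # Linearize |w_i - p|^2 = d_i^2 by subtracting the first equation.
    A = 2 * (wall_pts[1:] - wall_pts[0])
    b = (np.sum(wall_pts[1:] ** 2, axis=1) - np.sum(wall_pts[0] ** 2)
         - d[1:] ** 2 + d[0] ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Simulate: one hidden point, three illuminated wall spots
# (deliberately non-collinear so the trilateration is well-posed).
hidden = np.array([3.0, 1.5])
wall = np.array([[2.0, 0.0], [3.0, 0.0], [0.0, 2.0]])
times = [(2 * np.linalg.norm(w) + 2 * np.linalg.norm(w - hidden)) / C for w in wall]
recovered = hidden_point_from_times(wall, times)  # ≈ [3.0, 1.5]
```

[The real problem is far harder: photons from many hidden surface patches mix in each measurement, which is why the lab systems Raskar mentions cost so much.]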

Evan Nisselson: I totally agree, and I love separating those two things. One was just a use case about that. I didn’t mean it to be the whole panel. Exactly like you’re saying, and I want to talk more about it. That’s just one use case I think is fascinating because of its history, since 2003 when people started doing it, or 2002… Obviously there were the big satellite phones. In 15 years, when Algolux is continuing to grow and do fantastic: where do you see the company in several years? Vision-wise.

Paul Green: Ramesh spoke about how, when you design these systems, you really have to think about all the different parts of them. Not just pixel hacking. Our longer-term vision is really around taking a more holistic approach to image reconstruction: having a whole model of image generation, modeling the physics of your optics, etc., and incorporating those things into our algorithms. It’s basically recasting imaging as an optimization problem that models the statistics of natural images and the physics of the process. That’s the longer-term vision and where we’re trying to head.
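[Algolux has not published its pipeline, so the following is only a hypothetical, minimal sketch of what “recasting imaging as an optimization problem” can mean: a known point-spread function stands in for the lens physics, and a simple quadratic (Tikhonov) regularizer stands in for a real natural-image prior.]

```python
import numpy as np

def conv_matrix(psf, n):
    """Matrix form of circular convolution with a known point-spread function."""
    A = np.zeros((n, n))
    c = len(psf) // 2
    for i in range(n):
        for j, w in enumerate(psf):
            A[i, (i + j - c) % n] = w
    return A

def deblur(blurred, psf, lam=1e-3):
    """Solve min_x ||A x - b||^2 + lam ||x||^2: invert the modeled optics,
    with a small regularizer to keep the inversion stable."""
    A = conv_matrix(psf, len(blurred))
    return np.linalg.solve(A.T @ A + lam * np.eye(len(blurred)), A.T @ blurred)

psf = np.array([0.25, 0.5, 0.25])       # toy optical blur kernel
sharp = np.zeros(12)
sharp[4:8] = 1.0                        # a bright bar on a dark background
blurred = conv_matrix(psf, 12) @ sharp  # simulated capture through the "lens"
restored = deblur(blurred, psf)         # close to the original bar
```

[A production system would replace the quadratic prior with learned natural-image statistics and model the full sensor pipeline, but the structure, a data term from the optics model plus a prior, is the same.]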

Evan Nisselson: Michael, you’ve just released this new iteration of Microsoft Research. You’re now releasing products directly. What surprised you the most, that you didn’t expect, prior to releasing that you learned after releasing?

Michael Cohen: So it’s not the first product we’ve released. We have a set of them, and I encourage you to come and play with them and give us some feedback. The real surprises are in just what people do with it. The most fun thing is to put something out there and just watch what comes back. There’s all kinds of comments that come back and things like that, but the really exciting thing is to see the media that people capture and come back with. You really never know what they’re going to do with it.

Evan Nisselson: Are there some examples that you can share with us? What surprised you the most? Like, everybody all of a sudden saw it in the office and everybody started laughing?

Michael Cohen: I think more of the ones that are “ooh” and “wow.”

Evan Nisselson: Give us an example of “ooh.”

Michael Cohen: The hyperlapse one we put out there was one that came back from some beautiful beach, where somebody was walking for miles and miles and miles down this beach in hyperlapse. It’s just one of these super smooth, silky experiences that a) really makes me want to go to that beach instead of sitting in our lab, and b) I can just watch this thing forever. It’s one of those just wonderful experiences. Then the other thing is basically the kinds of things we see people do. The reception is very interesting. Probably our biggest complaint is that we’re not on every Android platform. I can’t even count how many there are. The desire for more of those platforms has been really encouraging as well.

Evan Nisselson: What keeps you up at night? Paul? In regards to work.

Paul Green: Obviously I’m in the day-to-day of just making this stuff work, and shipping it.

Evan Nisselson: Is there a certain thing, obviously say what you can, but are there certain things that are the most challenging? Is it hiring? Obviously, each of the three panelists has different challenges as far as hiring and resources. It’s obviously harder for a startup. What’s the biggest challenge?

Paul Green: Finding amazing people is I think a challenge for all startups. Definitely we’re no different. That’s one of the reasons we’re here.

Evan Nisselson: Hopefully you’re here for that.

Paul Green: For sure, yeah, for that.

Ramesh Raskar: Me too.

Evan Nisselson: Are you looking to hire?

Ramesh Raskar: I’m always hiring.

Evan Nisselson: Okay.  

Paul Green: That’s number one for anything you do, at a startup, a research lab, wherever—it’s the people that are there. More existentially, I think, just in the field of photography, it’s that nobody looks at their photographs anymore. They are so throwaway, I guess.

Evan Nisselson: People don’t look at photographs? After they’re created, you mean?

Paul Green: Yeah, they take them, they share them, and they disappear. They stay there I guess but no one looks at them. I think that’s why the hyperlapse stuff is actually really cool.

Evan Nisselson: I agree. One of the things I think is going to be a huge opportunity is as more and more smart solutions analyze content… There was a story recently of someone uploading 20,000 images to an online cloud service, at which point, I think it was a Google product, it brought together images that told a story somebody had forgotten. That was another “ooh aah” moment. I think that’s going to drastically change the experience.

Paul Green: There was an interesting thing about a grocery store, I think it was in Sweden or Denmark actually, doing face recognition and seeing all these connections between people who didn’t know each other; by tracking the same time and space, you could plot who overlapped.

Evan Nisselson: That’s what happened when I was looking for pictures in my horribly organized archive for this presentation. There’s a question out here. Yes, right over there, please stand up.

Audience member: Myron Kassaraba of MJK Partners. There’s a lot of cool stuff that can be enabled by computational photography—better pet photos, panoramic selfies—but what about significant benefits that computational photography could bring to either business or healthcare? Being able to tell that a motor or a compressor is about to fail, or not only telling if a person is happy or sad, but telling whether they’re sick. I guess part two is: Do you see computational photography and drones intersecting in interesting applications there?

Evan Nisselson: Great questions, Myron. Ramesh, do you want to take that?

Ramesh Raskar: I think Myron brings up a very interesting conversation. We think about fusions, like photo and video fusing, or camera and phone converging. I think what you bring up is a very important debate, which is: Are computational photography and so-called computational imaging also converging? Because so far, cameras have been mostly used for visual memory: taking photos and seeing them as they are. Computational imaging is more like A to I, analog to information. The goal eventually is not pixels or photos, but actually some kind of information. You’re talking about the motor or a health condition and so on. I think we’re going to have a very interesting tension between these two: how can the same device serve both purposes? If it’s more about A to I (analog to information), then you will make it richer in capture, thermal, and other modalities, as opposed to what we are talking about. I think that’s really exciting. I hope more of that happens, because the computational imaging world has actually moved very fast when it comes to medical or scientific imaging or military or satellites and so on. Many of those technologies will start coming into the photographic world, and we won’t even realize that these are becoming part of our experiences and also influencing businesses in so many ways. Barcode reading is one of the few examples of how we are treating computational imaging. Recognition is improving, but we still think of recognition for the sake of photos. Recognition for the sake of tasks is also going to creep in.

Evan Nisselson: I think that’s a great example. Michael is going to take it, too.

Michael Cohen: I’ll try to answer your question a little more directly, but I echo that. What’s going to happen is that when you take a picture, there will be understanding of that picture in the device, which offers all sorts of capabilities for business. On your point about health, we saw the work that came out of MIT where you can take a video and actually see somebody’s pulse rate by exaggerating the change in color. Many of you probably saw that. It’s an amazing thing that we can actually see somebody’s heart rate just by watching a video. The next step is watching that pulse happening in your hand versus your face, literally the delay in time. The phase difference between the pulse in your hand and the pulse in your face will tell you the blood pressure. It just continues from there. I think there is an amazing ability for computational photography plus understanding, etc., to help out in both business and… I think health is a really, really exciting area right now.
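[The MIT work Cohen refers to is Eulerian video magnification, which amplifies subtle temporal color changes. A stripped-down cousin of that idea, sketched below as a hypothetical toy on synthetic frames rather than the actual MIT algorithm, reads a pulse by finding the dominant frequency of the per-frame green-channel average within the plausible heart-rate band.]

```python
import numpy as np

def estimate_pulse_bpm(frames, fps):
    """Estimate heart rate from video frames of skin: blood absorption
    faintly modulates the green channel, so the per-frame green mean
    carries a periodic signal at the heart rate."""
    trace = np.array([f[..., 1].mean() for f in frames])  # green-channel mean
    trace -= trace.mean()                                 # drop the DC component
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)                  # 42-240 beats per minute
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic check: 10 s of 30 fps "video" whose brightness pulses at 1.2 Hz.
fps, t = 30, np.arange(300) / 30
frames = [np.full((4, 4, 3), 100.0) + 2.0 * np.sin(2 * np.pi * 1.2 * ti) for ti in t]
print(round(estimate_pulse_bpm(frames, fps)))  # → 72
```

[On real video you would first stabilize the frames and pick a skin region of interest; the band limits here are assumptions, not part of the MIT method.]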

Evan Nisselson: I totally agree. We’re going to have a couple of companies speak at our Summit tomorrow, Zebra Imaging and a bunch of others, talking about the medical side, the business applications, also satellite imaging. You bring up an interesting point in talking about what could improve my semantic use of images. That’s funny. Words! When I think of photography, I think of anything visual. You’re bringing up a point of what photography might be. Correct me if I’m wrong: photography might be when you make a picture, and imaging might be anything that’s not personally captured. Is that the way you define it?

Ramesh Raskar: The way it’s typically distinguished is: computational photography is about creating a photo out of video, to be consumed by humans. And computational imaging is understanding information from these images.

Evan Nisselson: But when you say imaging and photography, just those two words, are they the same or they’re different in your mind?

Ramesh Raskar: Imaging and photography can be confused but computational imaging and computational photography…

Evan Nisselson: Are different.

Ramesh Raskar: It’s just semantics.

Evan Nisselson: Yeah, I know, but I’m just trying to understand because I want to make sure it’s clear for the audience. If I’m having this trouble, I’m sure others might be having questions. There is a question in the back of the audience.

Audience member: Hi there. I am Julian Green from Google. What do you think will be the next sensors added to camera phones?

Paul Green: I think there’s a lot of really cool stuff happening with time of flight sensors. For example, the work that you talked about, seeing around corners, there’s work that uses similar sensors, or the same sensors, which are much lower cost. This is the sensor you’re seeing now in the Kinect, I think. You can do a lot of 3D and depth and whatever else. I think there’s even the Project Tango from Google that’s using that.

Evan Nisselson: Any other questions from the audience? We’ve got about two minutes left. Yes, right here. Stand up, we don’t have to wait for them. Stand up. Speak up, Erik.

Audience member: Hey, I’m Erik Erwitt. I was just wondering if any of the panelists have an opinion on if graphene-based photo sensors may one day replace CMOS-based photon sensors in consumer electronics and why?

Ramesh Raskar: I think graphene is a very, very interesting development. It goes back to the conversation we had. We have done pixel hacking, we’ve done photon hacking now with cameras and so on, and now we are moving into physics hacking, and that’s where the sensors come in. Time of flight is an example of that transformation, and changing the sensors themselves—whether it’s black silicon or whether it’s graphene-based—is very exciting, as long as we’re improving the quantum efficiency, which means you want to convert most of the photons into electrons. Another interesting thing about graphene is that you can create non-planar sensors. As you know, the human eye, for example, has really simple optics. If you could just use curved sensors as opposed to flat sensors, all the complexity, all the weight that Roger is carrying, and the lenses would disappear, and you could create one, two, or three gigapixel cameras that are literally two centimeters by two centimeters by two centimeters in cubic form factor, if you just use monocentric lenses (lenses like the human eye, two concentric spheres) and a curved sensor. I think all the sensor technologies are going to change that. Right now you should invest in Paul’s company because he is solving some of those problems. Very soon, on a 15-year timeline I would say, we’ll see new types of silicon and new types of graphene that create curved sensors.

Evan Nisselson: This is great. We’re unfortunately out of time. We could talk for hours here, but we’ve got another 70 speakers and hopefully in the end we’ll have more interaction to learn more and spend more time during the breaks. We talked about different use cases, personal, business and other different kind of opportunities. What’s the one use case, or scenario, or “ohh ahh”—however we want to define it—that you can’t wait to happen? You don’t obviously have to say when, but you can’t wait for this to happen in regards to computational imaging. Ready? Who’s ready? Come on. Who’s ready? It’s a tough question. I didn’t send that in the email. Go ahead, Paul.

Paul Green: Go ahead.

Evan Nisselson: I like asking tough questions. There have to be surprises. It’s not a political campaign.

Paul Green: This was the age of information technology. The next revolution I guess is biology, biotech, so maybe bionics. When you can integrate sensors and become superhuman as Ramesh wants.

Evan Nisselson: Is that going to happen in our lifetime?

Paul Green: I think so.

Evan Nisselson: Great. Michael?

Michael Cohen: Very short term I would say it is being able to really tell my story very efficiently and with just the wonderful design that goes into carefully crafted stories.

Evan Nisselson: Great. Ramesh?

Ramesh Raskar: Superhuman experience. We really want to see the world like we haven’t ever imagined. The movies, as I said, are showing how superheroes would see it. Whether it starts with Harry Potter-like photo frames that I can really appreciate, changing viewpoint and lighting and seeing the future… On the other extreme, it creates deep emotional connections with people across the world—especially with cultures we don’t understand—to turn some of the anxiety and tension into empathy, to allow me to learn completely new things.

Evan Nisselson: Fantastic. I can’t wait for my retina camera. I just made a picture of all you guys. Thank you very much for that conversation, you guys.

You can purchase a full transcript of all of the 2015 LDV Vision Summit panels:

The next LDV Vision Summit is scheduled for May 24 and 25. Tickets are available here

*(Michael Cohen has since left Microsoft Research for Facebook)


Author: Paul Melcher

Paul Melcher is the founder of Kaptur. He is an entrepreneur, advisor, and consultant with a rich background in visual tech, content licensing, business strategy, and technology, with more than 20 years of experience developing world-renowned photo-based companies, including two successful exits.

