Our relationship with the camera, the lens and the image is evolving. The static act of capturing a moment in a photograph remains, but it is rapidly changing. With the help of emerging technologies, photography is becoming ubiquitous and pervasive. The advent of machine vision, deep learning, semantic search, augmented reality and virtual reality, among other exponential technologies, will inspire us to discover, create and experience more great imagery than ever before in human history.
Intelligent algorithms will inspire and teach us to make better pictures, and even help us sort and curate them. Ultimately, they will help us tell our stories in more powerful ways. Taking pictures has also been augmented: the camera is now able to see and think with us (… or even for us).
We no longer crave the image itself; we crave its significance, its context. Compare the video above, of men talking silently, with the implications of that imagery: the AI reads the context of the scene; it doesn’t care about the image itself. The image is merely the carrier of the value.
This intelligent way of seeing is becoming our third eye. Just as our own eyes build an image and its context through our minds, this third eye creates extra context, building an augmented view through an external mind powered by an intelligent grid of sensors and data.
An Imaging Mind.
Photo Hack Day 4
This new intelligent era of imaging is the main focal point of Photo Hack Day 4.
Photo Hack Day 4 is a 24-hour coding marathon where developers, designers, makers and visual artists come together to rapidly prototype new photo applications. The event is presented by Imaging Mind, EyeEm and Canon.
During Photo Hack Day 4 we’ll deep dive into this new intelligent era of photography: by bringing together photos, videos and other visual forms with hardware, interfaces, apps and services, we can unlock the next step in visual culture and gain new perspectives on our world. We do this by ‘hacking’: finding low-key methods to create, combine, manipulate and transfer data across platforms, services, software and hardware. Challenges will be framed around four main themes: discover, inspire, create and experience. We’ll present each of these challenges in separate blog posts leading up to the event.
But don’t be scared off by all this talk of algorithms, intelligence and data: you’re free to work on whatever hack you like, and we’ve even made sure there is a cool prize for the best analog hack!
So many images, so little of lasting use. The cost of making pictures is dwindling to next to zero. In an economic, cultural and political sense, this signals disruption, and new business models are emerging. In its early days, photography was accessible only to a privileged few; today it’s a commodity. The image itself is often no longer the value. The real value now lies in the underlying metadata: the context encapsulated within the image. New experiences built around this metadata will be very popular.
Age of Context
We now live in the age of context — of personalised experiences.
Virtual Assistants will be instrumental in the intelligent era of photography. An intelligent personal assistant is a software agent performing tasks or services for an individual. These tasks or services are based on context, user input and the ability to access information from a variety of online sources.
Examples of such agents are Apple’s Siri, Google Now, Amazon’s Alexa, Microsoft’s Cortana and Facebook’s M.
Jibo, the world’s first family robot, gives us an early glimpse at how the virtual assistant could change photography.
What happens if we challenge ourselves to envision how such a software agent would adapt to photography and other visual forms? What if we could start having conversations with our cameras and pictures, and the (visual) world around us at large? How would this work in practice, what new possibilities does it open up, and what does this tell us about the possible futures of photography and storytelling?
In order for such a software agent to work well for photography, it needs a deeper understanding of our images. It also requires a full understanding of our intentions, so experiences can be tailored to our individual needs. This is why context is so important: our smartphones simply know our context better than any other device we carry with us. If we want to unlock the future of photography, we first have to gain a better understanding of what we see at every step of the creative process.
Some of the guiding questions for the event:
- What if we could talk to our images and cameras? What kind of conversations would we be having with them? What questions would we ask? What answers are we looking for?
- How can we fill in the context of any given scene, so the photographer can use this to create a more meaningful picture?
- How can we help people develop their photography skills around the moment of capture — on the go?
- How can we inspire people to make great photography based on their personal context?
- How do we help people manage their vast photo collections?
- How do we surface only those pictures that are relevant to us in the context in which we interact with them?
- How can machines evaluate and assess the aesthetics of our images using computer vision and machine learning?
- How can we connect the wide variety of cameras in the world around us in a compelling user experience?
- How will photography make the transition to the virtual realm?
We hope to find answers to these and many more questions during Photo Hack Day 4.
This might all sound very intimidating, and in many ways it is. That’s exactly the charm of a hackathon! In our experience, the simplest hacks (those that do a single thing very well) are often the most successful.
To give you an idea, here are some examples of previous winners:
Previous Photo Hack Day winners include Photoration, an app that uses GPS, Google Maps, Foursquare and the EyeEm API to read the exact position from which a photo was taken and show it to you. Or take Tourist Eraser, an app designed to remove tourists and other non-static objects from scenes. By snapping multiple photographs of a scene, it can composite them into a single photo that only shows the static background — voila, clutter gone.
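The trick behind a hack like Tourist Eraser is surprisingly simple: if you take several aligned shots of the same scene, any pixel is showing the static background in most frames, so a per-pixel median across the stack keeps the background and drops whatever moved. A minimal sketch of that idea (the function name and the synthetic test scene are ours, not from the actual app):

```python
import numpy as np

def remove_transients(frames):
    """Composite a stack of aligned photos into one image that keeps
    only the static background: for each pixel, take the median value
    across all frames, so anything that moves between shots drops out."""
    stack = np.stack(frames, axis=0)
    return np.median(stack, axis=0).astype(stack.dtype)

# Simulate five aligned shots of one scene: a constant grey background
# with a "tourist" (a bright square) wandering to a new spot each frame.
background = np.full((60, 80), 100, dtype=np.uint8)
shots = []
for i in range(5):
    frame = background.copy()
    frame[10 + i * 8 : 20 + i * 8, 10:20] = 255  # moving foreground object
    shots.append(frame)

clean = remove_transients(shots)
print(np.array_equal(clean, background))  # the tourist is gone
```

Because the square never occupies the same pixels in a majority of the frames, the median recovers the untouched background everywhere. A real app would first have to align the handheld shots (and would work on colour images), but the core compositing step is just this one line of math.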
At our own Hack The Visual event in London, Splatmap took its inspiration from Nintendo’s Splatoon game: two teams compete to take over the majority of the environment. Snap a picture of a building and it is added to a crowd-sourced 3D map of the local area and claimed for your team’s side, effectively turning the 3D scanning of architecture into a competitive game.
Almost all of these hacks are very simple in nature but also feature an intelligent component (e.g. location services, image synthesis or 3D mapping).
What can you add?
Having said all that, only one question remains: What will you add to the future of photography?
Come and join us, we can always use an extra pair of eyes!
There are only a few spots left for the event. RSVP today, either as a participant or for the demos on Sunday afternoon. All the latest information about the event, including an overview of the challenges, prizes and our international jury, can be found on photohackday.com.
Kaptur is a proud media partner of Photo Hack Day 4
Author: Floris van Eck
Floris van Eck is a technology strategist, visual culture anthropologist and speaker on emerging technologies. He is the co-founder of Imaging Mind and Notilde. Imaging Mind is a visual culture community and futurist agency dedicated to uncovering the future of imaging and how it manifests itself in technology and society, aiming to build an ‘Imaging Mind’ of connected nodes. Notilde helps organisations explore and navigate new technological frontiers at the intersection of culture and technology, through a combination of investigative journalism and experiential content. Their narratives provide insights outside the radar of traditional R&D.