We shop with our eyes. Whether buying fruit at the supermarket or a car, looking for a new pair of shoes or a hotel to stay in, it is our vision that ultimately makes the final decision. It is no surprise, then, that visual tech is an emerging force in the $1.9 trillion worldwide retail e-commerce market. Built on the latest advances in deep learning, startups are offering ingenious solutions that empower online retailers to dramatically increase their sales. Germany-based Picalike, founded in 2010, is one of them. We had the opportunity to catch up with co-founder and CEO Sebastian Kielmann:

– A little about you, what is your background?

Sebastian Kielmann, CEO and co-founder, Picalike

I developed the first version of the Picalike system after spending more than 8 years on R&D while working for SAP and several e-commerce companies. In 2010 we decided to launch Picalike as a recommendation engine based on computer vision, because images are key for e-commerce. Now we are training our machines to understand product images the way humans do. By adding customer behavior data in real time, we are leading the way to superior personalization in online shops.

– What does Picalike solve? How does it work?

Picalike solves the “paradox of choice” for online shops. To attract a broad audience, online shops have to offer a very broad and deep catalog. Shoppers get lost among tens of thousands of products, and too many items never get displayed.

Picalike analyzes every single image of a given product feed and builds similarity indices between the images.

By adding the behavior of the user in our client’s online shop (clickstream) and the user’s reactions to the images, we are able to display the most relevant products to each visitor based on their actual preferences. That improves conversion rates, boosts average order size, and significantly reduces bounces. At the same time, it increases customer satisfaction.
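The core mechanic of a similarity index can be sketched roughly as follows. This is not Picalike’s actual pipeline; the sketch assumes image embeddings have already been extracted (for example by a convolutional network) and simply ranks other products by cosine similarity to a query product:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_id, embeddings, top_k=3):
    """Rank all other products by visual similarity to the query product."""
    query = embeddings[query_id]
    scores = [
        (pid, cosine_similarity(query, vec))
        for pid, vec in embeddings.items()
        if pid != query_id
    ]
    scores.sort(key=lambda s: s[1], reverse=True)
    return scores[:top_k]

# Toy 3-dimensional embeddings standing in for real CNN image features.
embeddings = {
    "red-dress-1": [0.9, 0.1, 0.0],
    "red-dress-2": [0.8, 0.2, 0.1],
    "blue-jeans":  [0.1, 0.9, 0.3],
}
print(most_similar("red-dress-1", embeddings, top_k=2))
```

In a production system the embeddings would have hundreds of dimensions and the ranking would come from a precomputed index rather than a brute-force scan; clickstream signals could then re-rank these visual candidates per visitor.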

The integration is easy (via a simple REST API) and currently serves these use cases:

  1. Recommending similar items (also taking user behavior into account)
  2. Recommending next best offers for sold-out products
  3. “Shop the look”: scaling curated styles to large numbers of outfits representing the curated style
  4. Delivering personalized category listing pages
  5. Providing the most relevant products for retargeting and newsletters
  6. Returning data extracted from images, such as category, attributes, colors, etc.
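Use case 2, the “next best offer” for a sold-out product, can be illustrated with a minimal sketch. The product IDs and similarity index here are invented for illustration; the idea is simply to fall back to the most visually similar item that is still in the current feed:

```python
def next_best_offer(product_id, similar, in_stock):
    """Return the most similar product that is still available.

    `similar` maps a product to other products, pre-sorted by visual
    similarity (e.g. from an image-embedding index); `in_stock` is the
    set of products in the latest feed.
    """
    for candidate in similar.get(product_id, []):
        if candidate in in_stock:
            return candidate
    return None  # nothing comparable is available

# Hypothetical similarity index and availability feed.
similar = {"boot-42": ["boot-17", "boot-99", "sneaker-3"]}
in_stock = {"boot-99", "sneaker-3"}

print(next_best_offer("boot-42", similar, in_stock))  # boot-99
```

Instead of a dead “sold out” page, the shop can immediately show the closest available alternative, which is where the reduced bounce rates mentioned below come from.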

– There are other visual similarity search products on the market. How is Picalike different?

There are different types of visual search companies, and it’s difficult to comment on other companies. We, however, take the image as the key to our analysis but use all other available information (e.g. price, size, availability) as well as user behavior to learn what is important to each individual customer. Having realized that there is a correlation between similarity and what is commonly referred to as “taste”, we try to learn each user’s “taste” in any given situation and match it with the products available from the latest feed. This is true not only for fashion but also for furniture, jewelry, accessories, paintings, photos, and shoes.

– What is the typical upsell your customers experience after implementing Picalike? 

Most of our clients see a double-digit increase in conversion and shopping cart value, along with reduced frustration and lower bounce rates on sold-out products. Furthermore, our clients experience higher interaction rates and more returning visitors on inspirational pages. To date, none of our 40 clients has uninstalled our technology: 100% customer satisfaction, 0% churn. That’s the best proof of excellent results and reasonable pricing.

– How many reference points do you use to find similars?

Fashion websites like FashionHype use Picalike’s visual tech to personalize a category listing page, ranked by similarity to the selected product

There is no fixed value for this. Most of the information we need, we extract from the image and from user behavior, employing artificial intelligence (AI) and deep learning. We don’t use a fixed set of reference points, since each product category has its own important information that cannot be quantified by reference points, so our system changes its focus from image to image and product to product. To us, this is the next logical level, one we are currently approaching, and it will enable real semantic understanding of images.

– Do you see industry-wide patterns emerge (which ones) and are those fed back into the algorithm?

This is still a very young development in e-commerce. The opportunities and the potential are largely unknown to shop managers, even as these systems continue to prove better than conventional recommendation engines. However, employing AI to learn patterns is something many companies claim to do and few do well. You could consider that a “pattern”.

We learn concepts from a vast amount of data we look at daily. Across our customer base, we learned a lot about the importance of attributes in certain categories. We generated a lot of knowledge throughout the years while performing our calculations and are now able to calibrate the weights of reference points and attributes related to the category.
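Calibrating attribute weights per category could look something like the following sketch. The categories, attributes, and weight values are made up for illustration; the point is simply that “similar” is computed differently for dresses than for sofas:

```python
# Hypothetical per-category weights: which attributes matter most
# when judging similarity within that category.
CATEGORY_WEIGHTS = {
    "dresses": {"color": 0.5, "pattern": 0.3, "shape": 0.2},
    "sofas":   {"shape": 0.5, "material": 0.3, "color": 0.2},
}

def weighted_similarity(category, attrs_a, attrs_b):
    """Weighted attribute overlap; the weights depend on the category."""
    weights = CATEGORY_WEIGHTS[category]
    return sum(
        w for attr, w in weights.items()
        if attr in attrs_a and attrs_a[attr] == attrs_b.get(attr)
    )

a = {"color": "red", "pattern": "floral", "shape": "a-line"}
b = {"color": "red", "pattern": "plain",  "shape": "a-line"}
print(weighted_similarity("dresses", a, b))  # color + shape match: 0.7
```

In practice such weights would be fitted from clickstream data rather than set by hand, which is presumably what the calibration described above amounts to.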

– How does Picalike leverage deep learning to increase its value?

We use machine learning to extract information from the images, learn the connections between them, and predict trends, user expectations, and labels. While labeling, we can also estimate the accuracy of each label. So yes, machine learning has made a great step forward, but there is still a lot to learn, experiment with, and research.

– Besides fashion, can Picalike be used by other industries and how?

Furniture is the second most relevant industry for our product, followed by sports equipment, jewelry, accessories, decoration, glasses, etc. But we are working hard to allow all types of online shops to use our system to increase their conversion, including electronics, domestic appliances, and so on.

Besides e-commerce, we also work on delivering solutions to the security and marketing industries. In the end, the “understanding brain” we create is universal; the algorithm does not care what type of data is used to train the system.

– Let’s talk technology. Obviously, visual matching is involved but what else did you need to develop?

We had to develop our own machine learning library and build our own GPU cluster (at the time one of the first in Europe), since many of the techniques we use are not implemented in frameworks such as TensorFlow, Caffe, or Deeplearning4j. To return complex calculations in real time (15-20 ms), we developed our own database, called Lilly, as well as our own backend that lets customers calibrate our system, create curated styles, and assist the system during training.

– What would you like to see Picalike offer that technology cannot yet deliver?

There are two major developments we are working on and hope to bring to high quality in the near future. The first is part of classification, one of the major subjects in deep learning. To increase the knowledge of the system, it has to learn to recognize “known unknowns”. In standard classification problems, a system will always assign an object to a known category; if the object in the image is unknown to the system, it will still believe it knows it. For example: if a system can classify cars into different types, showing it an image of an airplane will lead it to classify the airplane as one of the known car categories. This is obviously wrong. So the system has to learn to understand that it is seeing something it has not encountered before. Once we achieve this, we can train the system to ask for more data whenever it finds something it hasn’t learned yet.

The second important subject is to correct the colors and illumination of an image taken by a camera under sub-optimal lighting conditions. This will then increase the quality of results for content-based image retrieval with photos taken by smartphones in real life (for example at a party).
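A classic baseline for this kind of illumination correction is the gray-world assumption: the average color of a scene should be neutral gray, so each channel is rescaled toward the mean of all channel averages. A minimal sketch (again, not Picalike’s actual method), operating on a tiny list of RGB pixels:

```python
def gray_world_balance(pixels):
    """White-balance RGB pixels under the gray-world assumption:
    scale each channel so its average matches the overall gray level."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]  # per-channel means
    gray = sum(avg) / 3
    gains = [gray / a if a else 1.0 for a in avg]
    return [
        tuple(min(255, round(p[c] * gains[c])) for c in range(3))
        for p in pixels
    ]

# A warm-tinted image: the red channel runs hot, blue runs low.
image = [(200, 120, 60), (180, 100, 40)]
print(gray_world_balance(image))
```

After the correction, the per-channel averages are nearly equal, which removes a uniform color cast such as the orange tint of party lighting.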

Photo by Thiago Fernandes Marinho



Author: Paul Melcher

Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Stipple, and more. Melcher received a Digital Media Licensing Association Award, is a board member of Plus Coalition, Clippn, and Anthology, and has been named among the “100 most influential individuals in American photography”.
