Manually checking planogram compliance, processing photos from merchandisers and preparing reports are error-prone tasks that require double-checking. To improve these processes, we offer Goods Checker — a merchandising automation solution.

What is Goods Checker and Its Purpose

What is Goods Checker

Goods Checker is a merchandising IT solution powered by computer vision, which consists of three modules. Each module automates its own aspect of merchandising:

  1. Planogram management: Plano Creator
  2. Checking display in the store by merchandisers: Check & Go
  3. Product recognition and analysis: Shelf Eye

A few words about each module. 

Plano Creator is a design tool to create planograms. In a few minutes, a manager can create a planogram that gives a realistic view of the goods and commercial equipment. Planograms can be created for SKUs of any size.

Check & Go is a mobile app designed to assist merchandisers at a retail outlet. The application helps field employees navigate tasks, view visited and unvisited retail outlets, compare displays with planograms, identify problems on shelves, etc. 

Shelf Eye is a neural network-based module that is responsible for recognizing products on shelves, comparing the layout against a planogram and generating analytics. 

Analytics is generated with the required breakdown: by SKU, store, category, merchandiser, etc. Reports are prepared automatically based on the results of computer vision operations and completed assignments of merchandisers. Examples of KPIs that can be tracked with Goods Checker: shelf share of a brand/sub-brand/SKU, both for the company itself and its competitors, reasons for the display not matching the planogram, omissions of goods, etc.  
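To make the KPI idea concrete, here is a minimal sketch of how shelf share can be computed from recognition results. The field names (`store`, `brand`, `sku`) and the sample rows are illustrative assumptions, not the actual Goods Checker data schema.

```python
from collections import Counter

# Hypothetical recognition results: one row per detected facing in a shelf photo.
detections = [
    {"store": "S1", "brand": "A", "sku": "A-orange-1L"},
    {"store": "S1", "brand": "A", "sku": "A-apple-1L"},
    {"store": "S1", "brand": "B", "sku": "B-orange-1L"},
    {"store": "S2", "brand": "A", "sku": "A-orange-1L"},
]

def shelf_share(rows, by="brand"):
    """Share of facings per group -- e.g. a brand's share of the shelf."""
    counts = Counter(r[by] for r in rows)
    total = sum(counts.values())
    return {key: n / total for key, n in counts.items()}

brand_share = shelf_share(detections)            # breakdown by brand
store_share = shelf_share(detections, by="store")  # same data, by store
```

The same grouping function covers any breakdown mentioned above (SKU, store, category, merchandiser) by changing the `by` key.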

We’ll describe Shelf Eye in more detail. 

Goods Checker Architecture: How the Solution is Built Inside

Goods Checker architecture

Goods Checker is built on a microservice architecture. Application microservices, their libraries, interactions and dependencies are packaged in Docker containers. Docker makes delivering and moving code faster and more efficient, standardizes how applications run, and saves money by optimizing resource use.

Microservice architecture allows individual parts of an application to operate independently of each other, which also makes the application easy to customize. This independence brings reliability, easy updates, quick start, cost-effectiveness and ease of development.

Reliability. If one microservice fails, all other components of the system continue to work stably.

Easy update. One module is updated, the rest of the application modules work as usual. 

Quick start. If a container is not operational after an update, it is quickly rolled back. Startup takes 1–2 minutes.

Cost-effectiveness. There is no need to maintain multiple operating systems, so fewer resources are needed for development and operation. 

Ease of development. The container in the test and production environments is the same, so it behaves the same everywhere. There are no situations where something fails due to a different environment, for example, a different OS version.
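As an illustration of the container layout, here is a minimal Docker Compose sketch. The service names, images and ports are hypothetical — they show the shape of a microservice deployment, not the actual Goods Checker configuration.

```yaml
# Illustrative compose file: three independent services in separate containers.
services:
  plano-creator:            # planogram design module
    image: goods-checker/plano-creator:1.0
    ports: ["8081:8080"]
  shelf-eye:                # recognition and analytics module
    image: goods-checker/shelf-eye:1.0
    depends_on: [queue]
  queue:                    # message broker between modules
    image: rabbitmq:3-management
```

Because each service is its own container, updating `shelf-eye` or rolling it back does not touch `plano-creator` — which is exactly the reliability and easy-update property described above.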

Four Project Stages: from Creating a Dataset to Putting into Commercial Operation

The project to implement Goods Checker, or any computer vision-powered solution, consists of four stages: data preparation; selection of the neural network architecture and model training; evaluation and fine-tuning; and transition to commercial operation.

The project to implement Goods Checker

Data Preparation

At the first stage, it is required to prepare data — photos of SKUs to be used in neural network training. We divide all data into three sets: training, validation and test. The neural network is trained using the training set, the quality of recognition is checked with the validation set during the training process, and the quality of the neural network is checked using the test dataset before launching into commercial operation. 
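The three-way split described above can be sketched in a few lines. The 70/15/15 proportions and the file names are illustrative assumptions; real projects choose ratios based on dataset size.

```python
import random

def split_dataset(items, train=0.7, val=0.15, seed=42):
    """Shuffle labeled photos and split them into training, validation
    and test sets; whatever remains after train+val goes to test."""
    rng = random.Random(seed)          # fixed seed keeps the split reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

photos = [f"sku_{i:04d}.jpg" for i in range(1000)]   # hypothetical photo list
train_set, val_set, test_set = split_dataset(photos)
```

The key property is that the three sets are disjoint: the test set stays unseen until the final quality check before commercial operation.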

Photos should be as diverse as possible and close to reality. This means that in addition to perfect photos, there should also be photos with hot spots, glare, blur, etc. The process of adding such distortions is called augmentation. Augmentation helps improve recognition accuracy.
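A minimal augmentation sketch using Pillow is shown below. The specific transforms (slight rotation, brightness shift, blur) and their ranges are illustrative assumptions standing in for the distortions mentioned above.

```python
import random
from PIL import Image, ImageEnhance, ImageFilter

def augment(img, seed=None):
    """Apply random, realistic distortions to one photo:
    slight camera tilt, lighting change, optional blur."""
    rng = random.Random(seed)
    out = img.rotate(rng.uniform(-5, 5))                               # tilt
    out = ImageEnhance.Brightness(out).enhance(rng.uniform(0.6, 1.4))  # lighting
    if rng.random() < 0.5:                                             # blur half the time
        out = out.filter(ImageFilter.GaussianBlur(radius=rng.uniform(0, 2)))
    return out

base = Image.new("RGB", (128, 128), "white")     # stand-in for a shelf photo
variants = [augment(base, seed=i) for i in range(10)]
```

Each original photo thus yields many imperfect variants, pushing the training set closer to what merchandisers actually shoot in stores.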

Architecture Selection and Model Training

The next steps are to select the model architecture and train the neural network. Several neural networks are involved in the recognition process, each responsible for its own function. 

All functions can be divided into three groups:

  1. Detection: identify an object in the image. For example, recognizing a pack of juice in a photo.
  2. Classification: classify the identified object (crop) as a product group. For example, determine that an identified pack of juice is brand A orange juice. 
  3. Segmentation: differentiate some objects from others and process only necessary ones. For example, the image shows equipment with several products: water and juices. We need to process juices only. The neural network recognizes goods, ignores water and detects only juices.  
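The three steps above can be sketched as a pipeline. The functions below are simplified stand-ins, not real neural networks: detection returns crops, classification labels them, and the segmentation step is reduced to filtering for the target category.

```python
from dataclasses import dataclass

@dataclass
class Crop:
    box: tuple          # (x, y, w, h) of the detected object
    label: str = None   # filled in by the classifier

def detect(image):
    """Detection: find candidate objects (e.g. juice packs) in the photo."""
    return [Crop(box=b) for b in image["boxes"]]

def classify(crop):
    """Classification: assign the crop to a product group."""
    crop.label = "brand_A_orange_juice"   # placeholder prediction
    return crop

def keep_category(crops, wanted="juice"):
    """Simplified segmentation: process only the needed category,
    e.g. keep juices and ignore water."""
    return [c for c in crops if wanted in c.label]

photo = {"boxes": [(0, 0, 50, 120), (60, 0, 50, 120)]}   # hypothetical input
results = keep_category(classify(c) for c in detect(photo))
```

In the real system each stage is a separate trained model; this sketch only shows how their outputs feed into each other.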

Model architectures can be of different types, for example, Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), transformers (Vision Transformer, ViT), etc.  

For each architecture we write a configuration — instructions on how training occurs. In the configuration we set the parameters for training the neural network: 

  • number of training cycles,
  • metrics to be achieved for training to stop,
  • minimum crop size,
  • the need for image preprocessing, etc.
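A configuration covering the parameters listed above might look like this. The field names and values are illustrative assumptions, not Goods Checker's actual configuration schema.

```python
# Hypothetical training configuration for one model architecture.
train_config = {
    "epochs": 100,                   # number of training cycles
    "early_stopping": {
        "metric": "val_accuracy",    # stop training once this metric...
        "target": 0.95,              # ...reaches this value
    },
    "min_crop_size": 32,             # ignore detections smaller than 32 px
    "preprocessing": {
        "resize": [640, 640],        # bring all photos to one input size
        "normalize": True,
    },
}
```

Keeping these settings in one configuration per architecture makes training runs reproducible and easy to compare.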

Often the model is not trained from scratch; instead, the weights (checkpoints) of already pretrained models are used. This approach speeds up the learning process and shortens project timelines.

After the architecture is ready, we launch the neural network using the training data set. During training, the neural network goes through several cycles — iterations. At the end of each iteration, we check the neural network's quality on the validation data set. Thus, the model gradually improves its parameters during the training process.
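The cycle of "train, then validate, then stop when good enough" can be shown on a toy model. This is a deliberately simplified stand-in (one learnable weight, plain gradient descent), not the actual training code.

```python
def train_model(train_data, val_data, epochs=200, lr=0.05, target_val_loss=0.01):
    """Toy training loop: update the weight on training data each cycle,
    then check quality on the validation set and stop early if it's good."""
    w = 0.0                                        # single learnable weight
    val_loss = float("inf")
    for epoch in range(epochs):
        for x, y in train_data:                    # one training iteration
            grad = 2 * (w * x - y) * x             # d/dw of squared error
            w -= lr * grad
        # validation check at the end of each iteration
        val_loss = sum((w * x - y) ** 2 for x, y in val_data) / len(val_data)
        if val_loss < target_val_loss:             # early stopping on val metric
            break
    return w, val_loss

data = [(x, 3.0 * x) for x in [0.1, 0.5, 1.0, 1.5]]   # ground truth: w == 3
w, loss = train_model(data[:3], data[1:])
```

Real training replaces the toy weight update with backpropagation over a deep network, but the structure — iterate, validate, stop on a metric — is the same.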

Model Evaluation and Fine-Tuning

When the model has been trained, we check its performance on the test data set. This helps us understand how accurately the neural network recognizes new data and whether there is overfitting. If the recognition accuracy is below 95%, we analyze the image labeling, determine which SKUs the neural network recognizes poorly and which features matter most for correcting this, and then retrain the model.

This is repeated multiple times until we achieve the required recognition accuracy. 
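The per-SKU error analysis described above can be sketched as follows. The (true, predicted) pairs are fabricated sample data for illustration only.

```python
from collections import defaultdict

# Hypothetical test-set results: (true SKU, predicted SKU) pairs.
results = [
    ("juice_A", "juice_A"), ("juice_A", "juice_A"),
    ("juice_B", "juice_A"), ("juice_B", "juice_B"),
    ("water_C", "water_C"), ("water_C", "water_C"),
]

def per_sku_accuracy(pairs):
    """Accuracy broken down by true SKU, to spot poorly recognized products."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred in pairs:
        totals[truth] += 1
        hits[truth] += (pred == truth)
    return {sku: hits[sku] / totals[sku] for sku in totals}

acc = per_sku_accuracy(results)
overall = sum(t == p for t, p in results) / len(results)
weak = [sku for sku, a in acc.items() if a < 0.95]   # SKUs to relabel/retrain
```

SKUs that fall into `weak` are the ones whose labeling gets reviewed before the next training run.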

Putting into Commercial Operation

We run the neural network on unlabeled “live” photos; this process is called inference. Once the model consistently recognizes SKUs with an accuracy of 95% or higher, it is considered ready for use in the field. We integrate Goods Checker into the customer’s IT infrastructure as a stand-alone application or via an API.

Synthetic Data Using Blender, or How to Reduce Project Time Severalfold

How to reduce project time with Goods Checker

In the course of neural network training, the most time- and labor-consuming stage is the creation of the three data sets. On average, about 100 crops (labeled objects) are needed for each SKU type. Photos are often labeled manually: people highlight the required SKUs in each photo.

To quickly assemble a dataset, we use Blender, where we create a 3D model of each SKU and “replicate” the dataset with the desired parameters. With Blender you can quickly create hundreds of images, both high-quality and distorted. Such a dataset is called synthetic. Back in 2019, an approach to training a neural network with a synthetic dataset was described and promoted by artificial intelligence experts from Google Cloud AI. 

By contrast, the manual process of creating a dataset can take several weeks, while a synthetic dataset can be prepared in a few hours.
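The “replicate with the desired parameters” idea can be sketched without Blender itself: below, a 2D product crop is composited onto plain backgrounds at random positions, each paste producing an image plus a ready-made bounding-box label. All names and sizes are illustrative; the real pipeline renders 3D models in Blender.

```python
import random
from PIL import Image

def synthesize(product, n=5, canvas=(320, 240), seed=0):
    """Generate n synthetic training samples: paste the product crop at a
    random spot on a random-gray background, returning (image, bbox) pairs."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        bg = Image.new("RGB", canvas, (rng.randint(0, 255),) * 3)  # plain shelf
        x = rng.randint(0, canvas[0] - product.width)
        y = rng.randint(0, canvas[1] - product.height)
        bg.paste(product, (x, y))
        # the label comes for free -- no manual highlighting needed
        samples.append((bg, (x, y, product.width, product.height)))
    return samples

pack = Image.new("RGB", (40, 80), "orange")   # stand-in for a rendered 3D model
dataset = synthesize(pack, n=100)
```

This is why synthetic data is so much faster: every generated image is born already labeled, while manual labeling requires a person per photo.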

Three Cases of Goods Checker Benefits for Companies

How Goods Checker benefits companies

Let’s share three success stories of our customers: a merchandising agency, a premium chocolate distributor and a gas station chain.

Merchandising Agency Made Audits of Retail Outlets Faster

According to our customer, as a result of using Goods Checker, outlet audit time was reduced by 50% and reporting time was reduced by 70%. With the time saved, an employee can visit several more retail outlets, and a manager can monitor the situation more promptly. In addition, the use of IT tools minimizes human error. This means that agency employees provide clients with accurate, complete and up-to-date information on time. Their clients, in turn, can see the actual picture on store shelves, quickly respond to changes, make decisions based on current data and push up sales of their goods.

Premium Chocolate Distributor is Able to Monitor Live Situation on Retail Shelves

A company distributing chocolate products in the CIS countries tested a similar solution. The IT tool compares product displays against planograms. As a result, the time required to audit a retail outlet was reduced, and it became more convenient for supervisors to work. The managers, in turn, learned about actual distortions of the display in the retail chain in real time. Now managers have a transparent display monitoring system and are able to improve their cooperation with the retail chain regarding product representation.

A Gas Station Chain Increased Sales of Complementary Products

A major chain of gas stations uses Goods Checker to monitor displays and track items that are running low. The IT solution increased sales of complementary products in stores through compliance with the planogram developed by experts. In addition, thanks to good order on the shelves, the company was able to test different types of display and obtain reliable results, determine the optimal arrangement of goods and increase the average purchase size.

Product recognition accuracy in photos is 95%, and planogram compliance rates have improved from 50% to 90%.
