
Real-time and In-session: state-of-the-art recommender system for Ahlsell

The Modulai team worked together with Ahlsell’s Applied AI team to develop a real-time, session-based recommender system. The system takes a customer’s intent in a given session on Ahlsell’s website or smartphone app into account to provide contextually relevant recommendations.

Background
Many of Ahlsell’s customers are engaged in project-based work, which means that the products relevant to them can vary greatly from day to day. The same customer may be looking for completely different categories of products from one session to the next, because they are ordering for different projects. Elevating Ahlsell’s recommendations to the next level therefore requires online (or real-time) inference, where the recommendation engine can make decisions based on the user’s in-session behavior.

Solution
In a case like this, carefully balancing model complexity against predictive performance is key to meeting Ahlsell’s needs. Higher predictive performance means customers get recommendations better suited to their needs, which indirectly increases Ahlsell’s revenue. However, real-time, low-latency inference in the cloud is not free, and customer experience suffers from slower load times. Focusing too heavily on predictive performance may yield larger and more complex models whose operational costs cannot justify their accuracy advantage. Several models of varying architecture and complexity were therefore evaluated against each other using metrics reflecting the criteria described above.

The models generate vector representations (embeddings) that encode information about the user’s current session. These vectors are used to score products based on how likely they are to be of interest to the user.
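As an illustration, the sketch below shows what such a scoring step can look like: item IDs are embedded, the session is summarized into a single vector, and every product is scored against that vector. The `SessionRecommender` module, the GRU encoder, the embedding dimension, and the example item IDs are assumptions made for this example, not Ahlsell’s production architecture.

```python
# A minimal sketch of session-based scoring with embeddings (hypothetical model,
# not the production architecture).
import torch
import torch.nn as nn


class SessionRecommender(nn.Module):
    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)   # one vector per product
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def forward(self, session_items: torch.Tensor) -> torch.Tensor:
        # session_items: (batch, session_length) of product indices viewed so far
        embedded = self.item_emb(session_items)        # (batch, length, dim)
        _, hidden = self.encoder(embedded)             # hidden: (1, batch, dim)
        session_vec = hidden.squeeze(0)                # (batch, dim) session embedding
        # Score every product against the session embedding.
        return session_vec @ self.item_emb.weight.T    # (batch, num_items)


model = SessionRecommender(num_items=10_000)
scores = model(torch.tensor([[12, 7, 431, 8]]))        # one session of four viewed products
top_k = scores.topk(5).indices                         # indices of the 5 highest-scoring products
```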

How we did it
The models were created with PyTorch and designed to be fully compatible with TorchScript for performance and portability. An Azure Machine Learning pipeline will be set up to ingest data, process it, and train the final model on an appropriate schedule. The model will be deployed as an Azure Machine Learning endpoint for seamless integration with the other Azure resources Ahlsell uses, and can be configured to scale up automatically based on the workload. An API will be set up within Ahlsell’s existing API management service, minimizing the work required by front-end developers to integrate the new model on the website.
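To illustrate the TorchScript compatibility, the sketch below traces the hypothetical `SessionRecommender` from the previous example and saves it as a standalone artifact that a serving endpoint could load without the Python model code. The file name and example input are made up for the illustration.

```python
# A minimal sketch of exporting the (hypothetical) model to TorchScript for
# low-latency serving; tracing records the forward pass on an example session.
import torch

example_session = torch.tensor([[12, 7, 431, 8]])
scripted = torch.jit.trace(model, example_session)   # record the graph with an example input
scripted.save("session_recommender.pt")              # artifact to register and deploy

# At serving time, the endpoint only needs the saved artifact:
loaded = torch.jit.load("session_recommender.pt")
scores = loaded(example_session)
```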

Do you want to talk more about how we can help your company?