The Elitist Shuffle And Recommenders

Rodrigo Agundez shows us a way of displaying fresh recommendations without retraining the recommender system:

Suppose you have 10,000 items in total that can be recommended to your user. You run the recommendation system over all the items, and those 10,000 items get ranked in order of relevance.

The application shows 5 items on the entry screen. The first time the user opens the application after the re-scoring process, the top 5 ranked items are shown. It is decided (based on user control groups, investigation, A/B testing, etc.) that, from now on until the next re-scoring process, the entry screen should not be the same every time while still remaining relevant for the user.

Based on an investigation by the data scientist, it turns out that somewhat relevant items appear up to item 100. The idea, then, is to somehow shuffle those 100 items such that the top 5 items shown are still relevant but not always the same.

Click through for an example in Python and for how it compares favorably to a couple of other shuffling algorithms.
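
To give a rough feel for the idea (this is a minimal sketch, not Agundez's exact implementation), one way to get this behavior is a rank-biased shuffle: each item keeps a weight that decays with its rank, and a weighted draw without replacement produces a new ordering whose head is still dominated by highly ranked items. The item IDs and the `inequality` parameter below are made up for illustration.

```python
import numpy as np

def elitist_shuffle(items, inequality=10):
    """Shuffle items so higher-ranked entries tend to stay near the front.

    Larger `inequality` hews closer to the original ranking.
    """
    # Weights decay with rank: position 0 gets weight 1, later positions less.
    weights = np.power(np.linspace(1, 0, num=len(items), endpoint=False), inequality)
    weights /= weights.sum()
    # Weighted sampling without replacement yields a permutation of the items.
    return list(np.random.choice(items, size=len(items), replace=False, p=weights))

# Hypothetical usage: reshuffle the top 100 "somewhat relevant" items and
# show the first 5 on the entry screen.
ranked_items = [f"item_{i}" for i in range(100)]
entry_screen = elitist_shuffle(ranked_items)[:5]
print(entry_screen)
```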
