We propose a model for learning user preference rankings for the purpose of making product recommendations. The model allows us to learn from pairwise preference statements or from (incomplete) rankings over more than two items. We present two algorithms for performing inference in this model, both of which scale well in the number of users and items. The superior predictive performance of the new method is demonstrated on the well-known sushi preference data set. In addition, we show how the model can be used effectively in an active learning setting, where we select only a small number of informative items for learning.

Active learning, Approximate inference, Bayes, Collaborative learning, Preferences, Ranking, Recommendation
dx.doi.org/10.1145/2365952.2366009, hdl.handle.net/1765/81528
6th ACM Conference on Recommender Systems, RecSys 2012
Erasmus School of Economics

Salimans, T., Paquet, U., & Graepel, T. (2012). Collaborative learning of preference rankings. Presented at the 6th ACM Conference on Recommender Systems, RecSys 2012. doi:10.1145/2365952.2366009