Date on Master's Thesis/Doctoral Dissertation
8-2017
Document Type
Doctoral Dissertation
Degree Name
Ph.D.
Department
Computer Engineering and Computer Science
Degree Program
Computer Science and Engineering, PhD
Committee Chair
Nasraoui, Olfa
Committee Co-Chair (if applicable)
Altiparmak, Nihat
Committee Member
Lauf, Adrian
Committee Member
Sanders, Scott
Committee Member
Zurada, Jacek
Author's Keywords
recommender systems; machine learning; explanation; interpretable models; web mining
Abstract
Websites and online services offer vast amounts of online information, products, and choices that are available but exceedingly difficult to find and discover. This has prompted two major paradigms to help sift through information: information retrieval and recommender systems. The broad family of information retrieval techniques has given rise to modern search engines, which return relevant results in response to a user's explicit query. The broad family of recommender systems, on the other hand, works in a more subtle manner and does not require an explicit query to provide relevant results. Collaborative Filtering (CF) recommender systems are based on algorithms that provide suggestions to users based on what they like and what other, similar users like. Their strength lies in their ability to make serendipitous, social recommendations about what books to read, songs to listen to, movies to watch, courses to take, or generally any type of item to consume. Another strength is that they can recommend items of any type or content, because they focus on modeling the preferences of the users rather than the content of the recommended items. Although recommender systems have made great strides over the last two decades, with significant algorithmic advances that have made them increasingly accurate in their predictions, they suffer from a few notorious weaknesses. These include the cold-start problem, which arises when new items or new users enter the system, and the lack of interpretability and explainability of powerful black-box predictors such as the Singular Value Decomposition (SVD) family of recommenders, including, in particular, the popular Matrix Factorization (MF) techniques. The absence of explanations to justify their predictions reduces the transparency of recommender systems and thus adversely impacts the user's trust in them. In this work, we propose machine learning approaches for multi-domain Matrix Factorization (MF) recommender systems that can overcome the new-user cold-start problem. We also propose new algorithms to generate explainable recommendations using two state-of-the-art models: Matrix Factorization (MF) and Restricted Boltzmann Machines (RBM). Our experiments, based on rigorous cross-validation on the MovieLens benchmark data set and on real user tests, confirm that the proposed methods succeed in generating explainable recommendations without a major sacrifice in accuracy.
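To make the abstract's setting concrete, the sketch below shows a generic Matrix Factorization recommender trained with stochastic gradient descent, paired with a simple neighborhood-style explanation score (the fraction of a user's most similar users who rated the recommended item highly). This is only an illustrative example under assumed hyperparameters and toy data; it is not the dissertation's specific algorithm, the MovieLens benchmark, or the RBM-based model mentioned in the abstract.

```python
import numpy as np

# Toy user-item rating matrix (0 = unobserved). Illustrative data only,
# not the MovieLens benchmark used in the dissertation.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                      # number of latent factors (assumed for illustration)
lr, reg = 0.01, 0.02       # learning rate and L2 regularization (assumed)
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors

# Plain matrix factorization: minimize squared error on observed ratings
# with stochastic gradient descent.
for epoch in range(200):
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

predictions = P @ Q.T   # predicted ratings for every (user, item) pair

def explanation_support(R, user, item, n_neighbors=2, like_threshold=4):
    """Simple neighborhood-style explanation: the fraction of the user's
    most similar users (by cosine similarity on raw ratings) who rated
    the item at or above the 'like' threshold."""
    sims = np.array([
        R[user] @ R[v] / (np.linalg.norm(R[user]) * np.linalg.norm(R[v]) + 1e-9)
        for v in range(R.shape[0])
    ])
    sims[user] = -np.inf                       # exclude the user themself
    neighbors = np.argsort(sims)[-n_neighbors:]
    liked = (R[neighbors, item] >= like_threshold).sum()
    return liked / n_neighbors

print(predictions.round(2))
print(explanation_support(R, user=1, item=2))
```

The explanation score is one common way to justify a CF recommendation ("users similar to you liked this item"); the dissertation's own explainable MF and RBM algorithms are described in the full text.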
Recommended Citation
Abdollahi, Behnoush, "Accurate and justifiable: new algorithms for explainable recommendations." (2017). Electronic Theses and Dissertations. Paper 2744.
https://doi.org/10.18297/etd/2744