Date on Master's Thesis/Doctoral Dissertation


Document Type

Doctoral Dissertation

Degree Name

Ph.D.


Computer Engineering and Computer Science

Degree Program

Computer Science and Engineering, PhD

Committee Chair

Nasraoui, Olfa

Committee Co-Chair (if applicable)

Popa, Dan

Committee Member

Frigui, Hichem

Committee Member

Altiparmak, Nihat

Committee Member

Zhang, Hui

Author's Keywords

Fairness; bias; explainability; accuracy; machine learning; recommender system


Machine Learning (ML) algorithms are widely used in our daily lives. The drive to increase the accuracy of ML models has led to increasingly powerful and complex algorithms, known as black-box models, which provide no explanation of the reasons behind their output. White-box ML models, on the other hand, are inherently interpretable but tend to have lower accuracy than black-box models. For a practical and productive algorithmic decision system, accurate predictions may not be sufficient: the system may also need to be transparent and able to provide explanations, especially in safety-critical applications such as medicine, aerospace, robotics, and self-driving vehicles, or in socially sensitive domains such as credit scoring and predictive policing. Transparency helps explain why a certain decision was made, which in turn can reveal biases that lead to discrimination against individuals or groups of people. Fairness and bias are therefore additional aspects that need to be considered when evaluating ML models. Depending on the application domain, accuracy, explainability, and fairness may all be necessary to build a practical and effective algorithmic decision system; in practice, however, it is challenging to build a model that optimizes all three simultaneously.

In this work, we study ML criteria that go beyond accuracy in two different problems: 1) collaborative filtering recommendation, where we study explainability and bias in addition to accuracy; and 2) robotic grasp failure prediction, where we study explainability in addition to prediction accuracy.