Date on Master's Thesis/Doctoral Dissertation


Document Type

Master's Thesis

Degree Name

Master of Science

Department

Computer Engineering and Computer Science

Degree Program

Computer Science, MS

Committee Chair

Nasraoui, Olfa

Committee Co-Chair (if applicable)

Frigui, Hichem

Committee Member

Amini, Amir A.

Author's Keywords

machine learning; interpretability; decision trees; classification; exceptional predictions; decision rules

Abstract

Algorithmic decision-making systems are now widely used in fields such as medicine, banking, university admissions, and network security. However, many machine learning algorithms are notorious for their complex internal mathematics, which turns them into black boxes whose decision-making process is difficult to understand even for experts. In this thesis, we develop a methodology that exploits the interpretability of the decision tree classifier to explain why a machine-learned model made a particular exceptional, incorrect decision. Our approach can provide insight into potential flaws in feature definition or completeness, as well as into incorrect training data and outliers. It also promises to help uncover the stereotypes learned by machine learning algorithms that lead to incorrect predictions and, in particular, to prevent discrimination in socially sensitive decisions, such as credit decisions and crime-related or policing predictions.
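The central idea above relies on the fact that a decision tree yields a human-readable rule for every prediction. The sketch below is not the thesis's actual method; it is a minimal, hypothetical illustration using scikit-learn's DecisionTreeClassifier on the Iris dataset, recovering the root-to-leaf decision rule that produced one misclassified ("exceptional") prediction. The dataset, tree depth, and helper name `rule_for` are assumptions made for this example.

```python
# Illustrative sketch only: extract the decision rule behind a
# misclassified decision-tree prediction (dataset and depth are arbitrary).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X, y = iris.data, iris.target

# A deliberately shallow tree so that its rules stay readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
pred = tree.predict(X)

# Indices of training samples the tree gets wrong: the "exceptional" cases.
wrong = [i for i in range(len(y)) if pred[i] != y[i]]

def rule_for(i):
    """Return the root-to-leaf decision rule the tree applied to sample i."""
    node_ids = tree.decision_path(X[i:i + 1]).indices  # nodes visited, in order
    feat = tree.tree_.feature
    thresh = tree.tree_.threshold
    clauses = []
    for node in node_ids[:-1]:  # the last node is the leaf, which has no test
        name = iris.feature_names[feat[node]]
        if X[i, feat[node]] <= thresh[node]:
            clauses.append(f"{name} <= {thresh[node]:.2f}")
        else:
            clauses.append(f"{name} > {thresh[node]:.2f}")
    return " AND ".join(clauses)

if wrong:
    i = wrong[0]
    print(f"sample {i}: true class = {y[i]}, predicted class = {pred[i]}")
    print("decision rule applied:", rule_for(i))
```

Inspecting the printed rule shows which feature thresholds drove the wrong prediction, which is the kind of insight the thesis uses to diagnose flawed features, mislabeled training data, or learned stereotypes.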