Date on Master's Thesis/Doctoral Dissertation
5-2019
Document Type
Doctoral Dissertation
Degree Name
Ph.D.
Department
Electrical and Computer Engineering
Degree Program
Electrical Engineering, PhD
Committee Chair
Zurada, Jacek
Committee Co-Chair (if applicable)
Inanc, Tamer
Committee Member
Inanc, Tamer
Committee Member
Rouchka, Eric
Committee Member
Zeng, Huacheng
Author's Keywords
deep learning; neural networks; artificial intelligence; nonnegativity constraints; diverse feature extraction; interpretability
Abstract
In both supervised and unsupervised learning settings, deep neural networks (DNNs) are known to learn hierarchical and discriminative representations of data. They are capable of automatically extracting a rich hierarchy of features from raw data without the need for manual feature engineering. Over the past few years, the general trend has been toward deeper and larger DNNs, amounting to huge numbers of parameters and highly nonlinear cascades of features, thus improving the flexibility and accuracy of the resulting models. To account for the scale, diversity, and difficulty of the data DNNs learn from, architectural complexity and an excessive number of weights are often deliberately built into their design. This flexibility and performance usually come with high computational and memory demands during both training and inference. In addition, insight into the mappings DNN models perform, and the human ability to understand them, remains very limited. This dissertation addresses some of these limitations by balancing three conflicting objectives: computational/memory demands, interpretability, and accuracy. The dissertation first introduces unsupervised feature learning methods in the broader context of dictionary learning. It also sets the tone for deep autoencoder learning and for constraints on data representations aimed at removing some of the aforementioned bottlenecks, such as improving the feature interpretability of deep learning models through nonnegativity constraints on receptive fields. In addition, the two main classes of solutions to the drawbacks of overparameterization/overcomplete representation in deep learning models are presented. Subsequently, two novel methods, one from each solution class, are presented to address the problems arising from the overcomplete representations exhibited by most deep learning models.
The first method achieves inference-cost-efficient models by eliminating redundant features with negligible deterioration of prediction accuracy. This is especially important for deploying deep learning models on resource-limited portable devices. The second method diversifies the features of DNNs during the learning phase to improve their performance without reducing their size or capacity. Lastly, feature diversification is applied to stabilize adversarial learning, and extensive experimental results show that these methods have the potential to advance the current state of the art on a range of learning tasks and benchmark datasets.
Recommended Citation
Ayinde, Babajide Odunitan, "Receptive fields optimization in deep learning for enhanced interpretability, diversity, and resource efficiency." (2019). Electronic Theses and Dissertations. Paper 3243.
https://doi.org/10.18297/etd/3243
Included in
Computational Engineering Commons, Other Electrical and Computer Engineering Commons, Signal Processing Commons