About


Hi there! My name is Joshua Michalenko, and I am a third-year graduate student in Rice University's electrical engineering program. I am a Ph.D. student advised by Rich Baraniuk in the DSP group, and my current research spans a broad range of machine learning techniques, including graphical modeling and inference and deep learning for natural language processing.

My probabilistic graphical modeling research aims to answer questions such as, “What are the best graph-based statistics to model how humans answer questions in free-form text?” Our DSP group leverages student data collected via an online learning platform run by OpenStax, a Rice-affiliated non-profit that publishes low-cost educational materials and textbooks. This data includes student interactions with online course materials, free-form written responses, and quantitative responses. One current project I am working on is data-mining textual responses to uncover misconception patterns. We model the generative process by which a student constructs a free-form response to a question, along with a corresponding Markov chain Monte Carlo inference algorithm, to explicitly model (and detect) common misconceptions that students make on a particular question, section, or chapter. In the future, this program will allow educators to home in on common misconceptions, thereby increasing both the efficiency and efficacy of teacher-student interventions.
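To give a flavor of this style of modeling, here is a toy sketch only, with an invented physics question, made-up responses, and a deliberately simple model rather than the project's actual one: a Bayesian mixture of multinomials over bag-of-words responses, where each latent cluster plays the role of a "misconception" and a Gibbs sampler (one basic form of Markov chain Monte Carlo) infers the cluster assignments.

```python
# Toy sketch (invented data, not the project's actual model): a Bayesian
# mixture of multinomials over bag-of-words student responses, where each
# latent cluster stands in for a "misconception", inferred with Gibbs sampling.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical free-form responses to a physics question.
responses = [
    "heavier objects fall faster",
    "gravity pulls heavier objects harder so they fall faster",
    "all objects fall at the same rate in a vacuum",
    "air resistance not mass changes how fast objects fall",
]
vocab = sorted({w for r in responses for w in r.split()})
word_id = {w: i for i, w in enumerate(vocab)}
D, V, K = len(responses), len(vocab), 2          # responses, vocab size, clusters
counts = np.zeros((D, V), dtype=int)             # bag-of-words count matrix
for d, r in enumerate(responses):
    for w in r.split():
        counts[d, word_id[w]] += 1

alpha, beta = 1.0, 0.5                           # symmetric Dirichlet priors
z = rng.integers(K, size=D)                      # latent cluster per response

for _ in range(500):                             # Gibbs sweeps
    # 1) sample mixing weights and per-cluster word distributions given z
    n_k = np.bincount(z, minlength=K)
    pi = rng.dirichlet(alpha + n_k)
    theta = np.vstack([rng.dirichlet(beta + counts[z == k].sum(axis=0))
                       for k in range(K)])
    # 2) resample each response's latent misconception label given pi and theta
    log_p = np.log(pi) + counts @ np.log(theta + 1e-12).T   # shape (D, K)
    p = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=p[d]) for d in range(D)])

print("inferred misconception clusters:", z)
```

The actual generative model and inference scheme in the project are considerably richer; the sketch is only meant to show the basic pattern of alternating between sampling the latent labels and the model parameters.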

Within the last several years, a new paradigm of algorithmic modeling has emerged as the state of the art in almost all modern classification tasks. An explosion of available data and computational resources, combined with layers of classical neural networks stacked consecutively and trained with backpropagation, has led to the reemergence of AI algorithms collectively called deep learning. A particular architecture that can be viewed as a feed-forward network deep in time, the recurrent neural network (RNN), is the current state of the art in sequential modeling tasks such as language translation, speech recognition, and activity recognition. Despite the impressive performance of neural networks, there remains a fundamental lack of understanding of how they actually work; there is little to no theory that describes how RNNs encode the noisy grammars of human language. My current research aims to open the black box that RNNs currently are, ultimately shedding light on more efficient model selection techniques and avenues to improve upon current architectures. We have some highly surprising and non-intuitive results from this project in the works, and they will be published soon.
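For readers unfamiliar with the architecture, the sketch below shows only the basic recurrence that makes an RNN "deep in time": a single Elman-style cell with random, untrained weights, arbitrary layer sizes, and a made-up two-symbol input string. It illustrates how the hidden state carries the sequence history; it is not part of our analysis.

```python
# Minimal illustration (arbitrary sizes and input, untrained weights): the
# Elman RNN recurrence, showing how one hidden state summarizes a sequence.
import numpy as np

rng = np.random.default_rng(0)

alphabet = {"a": 0, "b": 1}
string = "aabba"                      # a toy input sequence
n_in, n_hidden = len(alphabet), 4

W_xh = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input-to-hidden weights
W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # hidden-to-hidden weights
b_h = np.zeros(n_hidden)

h = np.zeros(n_hidden)                # hidden state carries the sequence history
for ch in string:
    x = np.zeros(n_in)
    x[alphabet[ch]] = 1.0             # one-hot encode the current symbol
    # the same weights are reused at every time step
    h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    print(ch, np.round(h, 3))
```

Unrolling this loop over the five input symbols yields an ordinary feed-forward network five layers deep whose layers all share the same weights, which is exactly why these models are described as deep in time.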