About Me
My name is Austin Moreau. I hold Bachelor's degrees in Computer Science, with a focus on Machine Learning, and in Fine Arts, with a concentration in Painting, as well as a minor in Mathematics. I previously served as Chairman of the Board for a nonprofit dedicated to providing affordable art studio space in Houston, Texas. I have worked as a research assistant in a neuroscience lab and as a research lead for academic cybersecurity research. I have also spent many years in the service industry and have held internship positions developing Internet of Things technology for automated natural resource extraction.
My Art
Painting
Through my painting I seek to develop a symbolic vocabulary representing an individual's identity, through which I speak while attempting to create art for that individual. I do this through an interview process in which I build an understanding of the individual. Using the knowledge gleaned from the interview, I distill a subconscious aesthetic yearning to understand the image that would represent that individual. Below are samples of this process. The images are those created through this process, while the audio is the raw, unedited interview. Upon completion, instead of a signature, a QR code is placed on the finished image that leads to the recording of the interview.
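As a minimal sketch of how such a QR-code "signature" could be produced, the snippet below generates a scannable code pointing at a hosted interview recording. The URL, filename, and use of the qrcode Python package are illustrative assumptions, not the exact workflow behind these paintings.

```python
import qrcode

# Hypothetical URL where the raw interview recording is hosted.
interview_url = "https://example.com/interviews/portrait-001.mp3"

# Use high error correction so the code stays scannable once transferred
# onto the painted surface.
qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,
    box_size=10,
    border=2,
)
qr.add_data(interview_url)
qr.make(fit=True)

img = qr.make_image(fill_color="black", back_color="white")
img.save("portrait-001-signature.png")
```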
Not Painting
Self Conscience/Material Memory prototype demo reel from Eric Todd on Vimeo.
Self-Conscience/Physical Memory: An immersive, kinetic art installation driven by real-time and archival EEG signals
Self-Conscience/Physical Memory is a brain-controlled robotic sculpture in the University of Houston's Noninvasive Brain Machine Interface Laboratory. Motorized and illuminated acrylic ceiling tiles shift the architecture of the space itself in response to EEG data. The height of the panels is driven by alpha power suppression in the central cortical areas, and the tiles' color shifts with alpha power changes in the occipital and frontal lobes. The EEG data can be input in real time by a single participant, or, in the absence of user input, the work serves as a playback device for archival EEG recordings: a physical manifestation of a past experience, of a moment in someone's life, a person both absent and present in that new moment.
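Below is a minimal sketch of how alpha-band power could be mapped to panel height and tile hue in this kind of installation. The band edges, baselines, channel groupings, and the normalized output commands are illustrative assumptions, not the installation's actual signal-processing chain.

```python
import numpy as np
from scipy.signal import welch

FS = 250          # assumed EEG sampling rate (Hz)
ALPHA = (8, 13)   # assumed alpha band (Hz)

def alpha_power(window, fs=FS, band=ALPHA):
    """Mean alpha-band power across the channels in `window` (channels x samples)."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean()

def map_to_tiles(central_win, occipital_win, baseline_central, baseline_occipital):
    """Map alpha suppression/changes to illustrative tile commands in [0, 1]."""
    # Panel height: stronger alpha suppression over central areas -> panels move more.
    suppression = 1.0 - alpha_power(central_win) / baseline_central
    height = float(np.clip(suppression, 0.0, 1.0))

    # Tile hue: relative alpha change over occipital/frontal areas shifts the color.
    change = alpha_power(occipital_win) / baseline_occipital - 1.0
    hue = float(np.clip(0.5 + 0.5 * change, 0.0, 1.0))
    return height, hue
```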
Chapter to be included in the upcoming book Brain-Computer Interfaces for Artistic Expression by Anton Nijholt.
Bayou Shrine
The Bayou Shrine is an interactive audio sequencer installed at Mason Park in Houston, TX from April to June 2018. It plays recorded sound clips sourced from interviews with residents of the neighborhood surrounding Mason Park, along with selections from a sound collage composed by a collaborating audio engineer. The arrangement of the sound clips is controlled by the participants within the gazebo: the tempo of the eight steps in the audio sequence is controlled by rotating the top half of the central mirror-polished hourglass sculpture, and the sound clip played at each step is determined by the position of the eight wheels affixed to the columns attached to the legs of the gazebo, with each column controlling one of the eight steps of the sequence.
By Matt Fries, Julian Luna, and Austin Moreau
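The eight-step sequencer logic described above can be sketched roughly as follows, assuming the hourglass rotation and wheel positions arrive as simple sensor readings. The callback names, timing scheme, and clip bank are illustrative, not the installation's actual code.

```python
import time

NUM_STEPS = 8

def run_sequencer(read_tempo_bpm, read_wheel_positions, play_clip, clip_bank):
    """Loop through eight steps, playing the clip selected by each column's wheel.

    read_tempo_bpm:       returns a tempo derived from the hourglass rotation (assumed)
    read_wheel_positions: returns a list of 8 wheel indices, one per column (assumed)
    play_clip:            triggers playback of a recorded sound clip
    clip_bank:            list of available interview/collage clips
    """
    step = 0
    while True:
        bpm = read_tempo_bpm()
        wheels = read_wheel_positions()
        clip_index = wheels[step] % len(clip_bank)
        play_clip(clip_bank[clip_index])
        time.sleep(60.0 / max(bpm, 1))  # wait one step at the current tempo
        step = (step + 1) % NUM_STEPS
```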
Research
Sony Focused Research Proposal: Non-verbal Interaction between a Virtual Human and the Real World
Summer 2018 REU Program

Using Machine Vision to Autonomously Segment Video For Research Purposes
Abstract: Expedient documentation of the actions performed by subjects under recorded observation is necessary in many fields of research in order to build a reference database correlating actions and stimuli. This research sought to develop a program capable of documenting the actions a research subject performed while wearing a cap designed to capture electroencephalography (EEG) signals. The subject was under observation for several months in total, resulting in hundreds of hours of video that required documentation to understand how the subject's activities would appear when represented in EEG. Documenting this amount of video by hand would typically require weeks of work by a team of volunteers. By taking advantage of recent techniques in machine vision for object detection and recognition, this research produced a rudimentary system that accomplishes the same goal of video annotation automatically, without any supervision from a human. The program was developed and then used to segment 11 sample videos to quantify a practical use case, measuring classification accuracy across five classes of actions. The program classified 88% of frames correctly across these sample videos.
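A minimal sketch of the kind of per-frame annotation pipeline the abstract describes is shown below: sampled frames are classified and runs of identical labels are merged into time segments. The action labels and the `classify_frame` callable (for example, an object-detection model plus a rule mapping detected objects to actions) are illustrative assumptions.

```python
import cv2

ACTION_CLASSES = ["typing", "eating", "walking", "resting", "other"]  # illustrative labels

def segment_video(path, classify_frame, stride=30):
    """Annotate a video by classifying sampled frames and merging runs of equal labels.

    classify_frame: callable(frame) -> one of ACTION_CLASSES; treated as given here.
    """
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    segments, current_label, start_frame, frame_idx = [], None, 0, 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % stride == 0:                 # classify every `stride`-th frame
            label = classify_frame(frame)
            if label != current_label:
                if current_label is not None:
                    segments.append((start_frame / fps, frame_idx / fps, current_label))
                current_label, start_frame = label, frame_idx
        frame_idx += 1

    if current_label is not None:
        segments.append((start_frame / fps, frame_idx / fps, current_label))
    cap.release()
    return segments  # list of (start_seconds, end_seconds, action_label)
```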

Using The Deep Learning Architecture Developed by Schirrmeister et al. To Classify EEG Signals Captured During The Creative Process
Abstract: Classifying EEG signals using state-of-the-art deep learning techniques still leaves much to be desired. EEG is known for issues related to statistical noise as well as variance between subjects and even between sessions. The seminal research paper by Schirrmeister et al., “Deep Learning With Convolutional Neural Networks for EEG Decoding and Visualization,” described a deep learning architecture shown to be capable of classifying EEG with an accuracy of 92.4%. The EEG in question was captured during the movement of research subjects’ hands and feet. The architecture developed by Schirrmeister et al. was also used to extract feature meaningfulness: by perturbing the data and analyzing the subsequent change in classification accuracy, the researchers were able to discover the EEG channels and neuronal activation frequency bands most important for each class. This research was replicated with the intent of classifying actions related to the art production process while preserving the architecture developed in the original paper.
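For context, here is a minimal sketch of a temporal-then-spatial convolution model in the spirit of the Schirrmeister et al. shallow architecture. The layer sizes, input dimensions, and class count are chosen for illustration and are not the exact values from the paper or from this replication.

```python
import torch
import torch.nn as nn

class ShallowEEGNet(nn.Module):
    """Temporal conv -> spatial conv -> square -> mean pool -> log -> classify.
    Layer sizes are illustrative, not taken verbatim from Schirrmeister et al."""
    def __init__(self, n_channels=64, n_classes=5, n_times=500):
        super().__init__()
        self.temporal = nn.Conv2d(1, 40, kernel_size=(1, 25))          # filter over time
        self.spatial = nn.Conv2d(40, 40, kernel_size=(n_channels, 1))  # filter over electrodes
        self.bn = nn.BatchNorm2d(40)
        self.pool = nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15))
        with torch.no_grad():
            n_feat = self._features(torch.zeros(1, 1, n_channels, n_times)).shape[1]
        self.classifier = nn.Linear(n_feat, n_classes)

    def _features(self, x):
        x = self.bn(self.spatial(self.temporal(x)))
        x = torch.log(torch.clamp(self.pool(x ** 2), min=1e-6))  # square -> pool -> log
        return x.flatten(1)

    def forward(self, x):  # x: (batch, 1, channels, time samples)
        return self.classifier(self._features(x))
```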
Testing The Effectiveness Of The Capsule Network When Classifying EEG
Abstract: The Capsule Network neural network architecture was recently developed by Hinton et al. as an attempt to imitate brain functionality and produce a more effective neural network architecture. The Capsule Network has been shown to be effective at classifying the MNIST handwritten digit data set, reaching 99.7% accuracy while requiring fewer than 30 training epochs. It has been theorized that the capsule network will be resistant to issues affecting classification accuracy caused by noise present in the input data, which, if true, could be a massive benefit for EEG classification. This research documents the use of the Capsule Network for classifying EEG data alongside better-understood techniques.
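The core of the Capsule Network is routing-by-agreement between layers of vector-valued capsules, sketched minimally below. The capsule counts, dimensions, and iteration count are illustrative choices and are not tuned for EEG or taken from this study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1):
    """Capsule non-linearity: shrink vector length into [0, 1) while keeping direction."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + 1e-8)

class RoutingCapsules(nn.Module):
    """Dynamic routing-by-agreement from input capsules to output (class) capsules."""
    def __init__(self, in_caps=128, in_dim=8, out_caps=5, out_dim=16, iters=3):
        super().__init__()
        self.iters = iters
        # One transformation matrix per (input capsule, output capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(1, in_caps, out_caps, out_dim, in_dim))

    def forward(self, u):                       # u: (batch, in_caps, in_dim)
        u = u[:, :, None, :, None]              # (batch, in_caps, 1, in_dim, 1)
        u_hat = (self.W @ u).squeeze(-1)        # predictions: (batch, in_caps, out_caps, out_dim)
        b = torch.zeros(*u_hat.shape[:3], 1, device=u.device)
        for _ in range(self.iters):
            c = F.softmax(b, dim=2)             # routing coefficients over output capsules
            v = squash((c * u_hat).sum(dim=1))  # (batch, out_caps, out_dim)
            b = b + (u_hat * v[:, None]).sum(dim=-1, keepdim=True)  # agreement update
        return v                                # capsule lengths act as class scores
```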
Information Security Research and Education Program

Developing a Better Static Code Analysis Tool
Abstract: The aim of this project is to utilize machine learning methods developed for genomic sequencing to more effectively analyze binary executables for features in code that reliably indicate security vulnerabilities. First, by translating the binary to a DNA base analogue, then uploading this information to the online genome sequencing tool mVista, and finally processing the results, the researchers were able to discern a set of features. With this information, they were then able to identify the rates of occurrence of these features in a set of 10 safe pieces of code and a set of 10 unsafe pieces of code. Through data analytics, the researchers were then able to distinguish the unsafe code from the safe code with 92% accuracy.
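A minimal sketch of the binary-to-DNA translation step is shown below, mapping each pair of bits in the executable's bytes onto one of the four bases and writing the result in FASTA format for upload to an alignment tool. The 2-bit encoding and the FASTA output are illustrative assumptions about the pipeline, not the project's exact method.

```python
# Map each 2-bit pair of an executable's bytes onto a DNA base (illustrative encoding).
BASES = "ACGT"

def binary_to_dna(path):
    """Return a DNA-alphabet string representing the bytes of a binary executable."""
    with open(path, "rb") as f:
        data = f.read()
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):                 # walk the byte two bits at a time
            bases.append(BASES[(byte >> shift) & 0b11])
    return "".join(bases)

def write_fasta(sequence, out_path, label="binary_sample", width=70):
    """Write the sequence in FASTA format for upload to an alignment tool such as mVista."""
    with open(out_path, "w") as f:
        f.write(f">{label}\n")
        for i in range(0, len(sequence), width):
            f.write(sequence[i:i + width] + "\n")
```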