I work as a Graduate Research Assistant in the Adaptive Digital Media (ADAM) Lab with Dr. Brian Magerko. Currently, I am working as part of a team on LuminAI, an interactive installation piece in which a human participant and an AI agent can dance together. The AI agent learns from the user and uses the Viewpoints movement method to inform its dance movements.
My work on the project involves exploring how to design for socially interactive systems, developing methods for evaluating such systems (specifically in the context of museum exhibits), and investigating issues relating to material construction and embodied design practice. I spent the summer of 2017 furthering research on LuminAI as a guest artist at the Children's Museum of Pittsburgh.
LuminAI Demo Video
LuminAI at The Children's Museum of Pittsburgh
As part of my job as a Graduate Research Assistant for Dr. Brian Magerko, I work on the TuneTable project. TuneTable is an interactive tangible tabletop experience in which users collaboratively create sample-based music compositions using computer science coding concepts. The experience is designed to spark excitement about computer science and encourage users to learn more about it.
My work on the project focuses on better understanding how to evaluate co-creative experiences like the TuneTable in informal learning environments such as museums. As part of the evaluation team, I am investigating new ways of assessing whether the table is an engaging and effective tool for informal science learning, using qualitative video analysis and building on existing frameworks used in museum studies.
I graduated in May 2016 with Highest Honors in Computer Science from UNC Chapel Hill. My senior honors thesis (advised by Dr. Prasun Dewan) involved developing a program that predicts when students are facing difficulty while writing an essay and puts them in contact with real-time assistance. This was an extension of a previous project that you can read more about here.
Many students face difficulty when writing documents, for reasons ranging from language barriers to content misunderstandings to a lack of formal writing education, and they are often too shy or too busy to visit a writing center or speak with a professor during office hours. Existing technology also falls short in this arena. Asynchronous collaboration systems require students to self-report when they are struggling, and many students under-report difficulty. Synchronous collaboration systems eliminate the need for self-reporting, but require teachers to constantly monitor their students.

By combining the synchronous and asynchronous collaboration paradigms, this project creates a mixed-focus collaborative writing system in which students and teachers engage in collaboration only when triggered by an automatically or manually generated event indicating that the student is facing difficulty. The mixed-focus system combines two existing architectures: 1) the EclipseHelper difficulty architecture for inferring programming difficulty, and 2) the Google Docs collaborative writing environment. The new, combined architecture automatically notifies teachers when a student is facing difficulty so that they can intervene and offer remote assistance.

A user study was conducted to evaluate the new architecture. Students used the system to complete a two-page paper assigned in a class they were taking, and data were recorded during the writing and help-giving process. The data were then evaluated using both qualitative and quantitative analysis. Overall, students found the help-giving model easy to use and appreciated the feedback they received. However, difficulty was predicted infrequently, likely as a result of inherent differences between writing and programming. Future work will involve further analysis of the data in order to improve the difficulty prediction algorithm.
The architecture I developed tries to automatically infer when a student is facing difficulty, but it also allows the student to manually indicate that they are struggling by pressing a button. In either case, a teacher is notified via email and can then help the student in real time.
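As a rough illustration of that flow, here is a minimal sketch of an event-triggered notification. The class and method names (DifficultyNotifier, EmailService, and so on) are hypothetical stand-ins for illustration, not the actual thesis code:

```java
// Hypothetical sketch of the difficulty-triggered notification flow.
// DifficultyEvent, DifficultyNotifier, and EmailService are illustrative
// names, not the thesis classes.
public class DifficultyNotifier {

    public enum Source { AUTOMATIC, MANUAL }  // inferred vs. button press

    public static class DifficultyEvent {
        final String studentId;
        final Source source;
        DifficultyEvent(String studentId, Source source) {
            this.studentId = studentId;
            this.source = source;
        }
    }

    public interface EmailService {
        void send(String to, String subject, String body);
    }

    private final EmailService email;

    public DifficultyNotifier(EmailService email) {
        this.email = email;
    }

    // Called when the algorithm predicts difficulty or the student
    // presses the "I'm stuck" button.
    public void onDifficulty(DifficultyEvent event, String teacherAddress) {
        String subject = "Student " + event.studentId + " may need help";
        String body = (event.source == Source.MANUAL)
                ? "The student reported difficulty manually."
                : "The system inferred difficulty from the student's edits.";
        email.send(teacherAddress, subject, body);
    }
}
```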
The architecture I developed in my thesis combined the difficulty inference architecture of EclipseHelper (a tool that predicts when student programmers are facing difficulty) with the collaboration and communication architecture of Google Docs. By combining these two architectures, I was able to create a new architecture that facilitates a difficulty-triggered collaborative writing environment. In this new architecture, user commands are collected from Google Docs, mapped to EclipseHelper command categories, and passed into the EclipseHelper difficulty inference algorithm. Once a prediction is generated, it is sent back to Google Docs, where the student can choose to correct it or continue with their work.
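The sketch below illustrates the shape of that pipeline under stated assumptions: the command strings, category names, and DifficultyInference interface are invented for illustration and do not reflect the real EclipseHelper or Google Docs APIs.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of the command-mapping pipeline described above.
// Command strings and category names are assumptions for illustration.
public class CommandPipeline {

    // EclipseHelper-style buckets; DEBUG has no natural analogue in
    // writing, one of the differences between the two domains.
    enum Category { EDIT, NAVIGATION, DEBUG, OTHER }

    // Map a raw document command type onto a command category.
    static Category mapCommand(String docsCommandType) {
        switch (docsCommandType) {
            case "insert_text":
            case "delete_text":
                return Category.EDIT;
            case "scroll":
            case "move_cursor":
                return Category.NAVIGATION;
            default:
                return Category.OTHER;
        }
    }

    interface DifficultyInference {
        // Returns true if the command stream suggests the writer is stuck.
        boolean predict(List<Category> recentCommands);
    }

    // Collect commands from the document, map them, and ask for a prediction.
    static boolean checkForDifficulty(List<String> docsCommands,
                                      DifficultyInference inference) {
        List<Category> mapped = docsCommands.stream()
                .map(CommandPipeline::mapCommand)
                .collect(Collectors.toList());
        return inference.predict(mapped);
    }
}
```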
In my job as a research assistant for the UNC Computer Science department, I worked as part of a team under Dr. Prasun Dewan on a project that predicts when programmers are facing difficulty and offers a number of interactive teaching and learning tools to help them overcome it.
My contribution to the project was a graphical visualization of the difficulty prediction algorithm. This visualizer tool is part of a testbed that allows researchers to analyze user data in order to improve the algorithm. The GUI shows the values of different features over time (e.g., what fraction of the user's actions are debugging actions, what fraction are navigation actions, and so on), visualizes whether the predicted difficulty matches the actual difficulty, shows the type of difficulty, and displays the number of web links visited in an effort to surmount that difficulty.
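To make the feature idea concrete, here is a minimal sketch of one such feature, the fraction of recent commands in a given category. The windowing and category names are assumptions for illustration, not the testbed's actual code:

```java
import java.util.List;

// Sketch of the kind of feature the visualizer plots over time:
// the share of recent commands falling into each category.
public class FeatureWindow {

    enum Category { EDIT, NAVIGATION, DEBUG, OTHER }

    // Fraction of commands in `window` that belong to `target`,
    // e.g. the share of debugging actions in the last N commands.
    static double ratio(List<Category> window, Category target) {
        if (window.isEmpty()) return 0.0;
        long count = window.stream().filter(c -> c == target).count();
        return (double) count / window.size();
    }
}
```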
You can watch a video demo of the project to the left (my GUI is described at 1:10 in the first video, and again at 3:44 in the second video), as well as view an image of my visualization tool.
You can also check out the full project on GitHub.
A Testbed for Automatic Detection of Collaborators' Status
My contribution is described at 1:10.
Tracking Interaction Commands and Incremental Programmer Difficulty Status
My contribution is described at 3:44.
Algorithm Visualization GUI
Arena Tracker 2.0
I worked for a year as a research developer for the UNC Biology department. As part of this job, I developed an application to assist with research into the migration patterns of sea turtles. Arena Tracker 2.0 (in combination with custom USB4 hardware) allows the user to run simultaneous trials on multiple swimming sea turtles, measuring the angle at which each turtle is swimming and exporting the data to a .txt file that the biologists then analyze.
You can check out the code for Arena Tracker 2.0 on my GitHub.
Arena Tracker 2.0 GUI
This UI, built using Java's Swing toolkit, allows four turtles to be tracked simultaneously. It visually displays the angle at which each turtle is swimming, lets the researcher calibrate the tracker using the "Reset" button, supports timed trials, and calculates summary angle statistics upon trial completion.
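The exact statistics computed aren't specified here, but a standard choice for angular data like swim headings is the circular mean, which averages angles through their sine and cosine components so that, for example, 359° and 1° average to 0° rather than 180°. A minimal sketch:

```java
// Sketch of a circular-mean computation for headings in degrees.
// Whether Arena Tracker computes exactly this statistic is an assumption;
// the circular mean is simply the standard way to average angles, since
// the arithmetic mean of 359 and 1 would wrongly give 180.
public class AngleStats {

    static double circularMeanDegrees(double[] headings) {
        double sinSum = 0.0, cosSum = 0.0;
        for (double h : headings) {
            double r = Math.toRadians(h);
            sinSum += Math.sin(r);
            cosSum += Math.cos(r);
        }
        double mean = Math.toDegrees(Math.atan2(sinSum, cosSum));
        return (mean < 0) ? mean + 360.0 : mean;  // normalize to [0, 360)
    }
}
```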
Arena Tracker Output
Timestamps (left column) and angle values (right column) are output in a .txt file that can then be fed into other analysis programs used by researchers in the department.
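For illustration, here is a minimal sketch of emitting that two-column format. The tab delimiter and raw timestamp values are assumptions, since the exact file layout isn't specified here:

```java
import java.io.IOException;
import java.io.PrintWriter;

// Sketch of writing the two-column trial output: timestamp, then angle.
// Tab separation and the timestamp representation are assumptions.
public class TrialWriter {

    static void writeTrial(String path, long[] timestamps, double[] angles)
            throws IOException {
        try (PrintWriter out = new PrintWriter(path)) {
            for (int i = 0; i < timestamps.length; i++) {
                out.println(timestamps[i] + "\t" + angles[i]);
            }
        }
    }
}
```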