I work as a Graduate Research Assistant in the Expressive Machinery Lab with Dr. Brian Magerko. Currently, I am working as part of a team on LuminAI, an interactive installation piece in which a human participant and an AI agent can dance together. The AI agent learns from the user and uses the Viewpoints movement method to inform its dance movements.
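At a very high level, the interaction loop can be thought of as the sketch below: the agent banks gestures it observes and responds with variations along Viewpoints dimensions such as tempo and repetition. This is a hypothetical illustration only; the class, method, and parameter names are my own, not LuminAI's actual code.

```python
import random

# Hypothetical sketch of a dance-improv interaction loop: the agent remembers
# gestures it observes and replies with a Viewpoints-style variation.
# All names and parameters here are illustrative, not LuminAI's actual API.

class ImprovAgent:
    def __init__(self):
        self.gesture_memory = []  # gestures learned from participants

    def observe(self, gesture):
        """Store an observed gesture (e.g., a sequence of skeleton poses)."""
        self.gesture_memory.append(gesture)

    def respond(self, gesture):
        """Pick a remembered gesture and vary it along Viewpoints dimensions."""
        base = random.choice(self.gesture_memory) if self.gesture_memory else gesture
        variation = {
            "tempo": random.uniform(0.5, 2.0),   # perform faster or slower
            "repetition": random.randint(1, 3),  # repeat the movement phrase
            "mirrored": random.random() < 0.5,   # spatial reflection
        }
        return base, variation
```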
My work on the project involves exploring how to design for socially interactive systems, developing methods for evaluating such systems (specifically in the context of museum exhibits), and investigating issues relating to material construction and embodied design practice. I spent the summer of 2017 furthering research on LuminAI as a guest artist at the Children's Museum of Pittsburgh.
LuminAI Demo Video
LuminAI at The Children's Museum of Pittsburgh
As part of my job as a Graduate Research Assistant for Dr. Brian Magerko, I work on the TuneTable project. TuneTable is an interactive tangible tabletop experience in which users collaboratively create sample-based music compositions using computer science coding concepts. The experience is designed to spark users' excitement about computer science and encourage them to learn more.
My work on the project focuses on better understanding how to evaluate co-creative experiences like TuneTable in informal learning environments such as museums. As part of the evaluation team, I am investigating new ways of assessing whether the table is an engaging and effective tool for informal science learning, using qualitative video analysis and building on existing frameworks from museum studies.
TuneTable Demo Video
Personal Equity Index
As part of a class on community-based research and design, my team and I worked in collaboration with the City of Atlanta to develop a participatory workshop called the Personal Equity Index. The City of Atlanta is exploring how to represent and address issues of equity in the city, and the Personal Equity Index is designed to help humanize statistics and quantitative data with qualitative stories about (in)equity. The workshop, developed through an iterative design research process, is centered on the idea that equity is personal. In the workshop, participants use workbooks and card decks that we designed to explore how they feel about issues of equity and inequity in Atlanta.
The insights from this workshop will enable the City of Atlanta to examine the qualitative stories and personal experiences of Atlanta residents on both an individual and a community-wide scale.
We structured the workshop around three key ideas: intersectional identity, personal experiences, and emotions/feelings. Participants are first asked to share three words that best describe who they are, as a way of expressing their own personal, intersectional identity. They are also asked to identify the Atlanta neighborhood in which they live. They are then asked to discuss their experiences living in Atlanta, focusing both on things they are grateful for and things they wish they could change about the city. Finally, they are asked to discuss their overall feelings about living in Atlanta, and share a story of a time the city made them feel a certain way (e.g. included, vulnerable).
The workbook prompts participants to use the cards to explore their personal stories of equity in Atlanta. Participants use the Experiences cards to explore what they are grateful for and what they wish they could change in Atlanta, and they use the Feelings cards to explore a time they have felt a certain way (e.g. vulnerable, included) in Atlanta.
Feelings and Experiences Card Decks
The Feelings card deck consists of emotion words like vulnerable or included. The Experiences card deck is broken down into six sub-decks, each of which focuses on a different aspect of the City of Atlanta's agenda: Transportation, Safety, Health and Wellbeing, Empowerment, Inclusivity, and Housing.
City of Atlanta Workshop
We held a pilot workshop with a group of City of Atlanta officials, who participated in a one-hour workshop and offered feedback on our work.
The Shape of Story
The Shape of Story is an interactive story circle experience in which participants collectively create a story line by line. Artificial intelligence for narrative understanding, combined with a symbolic visual language, visualizes the story in real time. The result is a communally created narrative art piece.
You can check out the Github repository for the project here.
Every element of the visualization, from color to shape to the speed at which lines are drawn, corresponds to an element of the story that is told.
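As a rough illustration of that kind of mapping, the hypothetical sketch below translates features extracted from a single line of the story into drawing parameters. The feature names and value ranges are invented for illustration; they are not the project's actual mapping.

```python
# Illustrative sketch: map features extracted from one story line to
# visual parameters. Feature names and ranges are hypothetical.

def visual_params(line_features):
    """Map story-line features to drawing parameters for the visualization."""
    sentiment = line_features["sentiment"]    # -1.0 (negative) .. 1.0 (positive)
    intensity = line_features["intensity"]    # 0.0 .. 1.0 emotional intensity
    n_entities = line_features["n_entities"]  # characters mentioned in the line

    return {
        # warm hue for positive sentiment, cool hue for negative
        "hue": 30 if sentiment > 0 else 220,
        # emotionally intense lines are drawn faster
        "draw_speed": 0.5 + intensity,
        # more characters -> more sides on the drawn shape
        "shape_sides": 3 + n_entities,
    }

# Example: a tense line mentioning two characters
print(visual_params({"sentiment": -0.6, "intensity": 0.8, "n_entities": 2}))
```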
Physical Installation Design
Fabric panels (right) delineate a circular room (center) where participants sit in a circle on pillows (left top) surrounding a circular projection screen (left bottom). As participants collectively tell a story, the visualization is projected in the center of the circle.
With the help of local Atlanta artist Jessica Brooke Anderson, we constructed a sculptural speaking device. This device is passed around the circle, and participants use it to share their piece of the story.
Sound Happening is an interactive installation in which participants collaboratively create music by moving colorful balls around a defined interaction space. A webcam tracks the location and color of the balls and generates music according to those parameters. I work as part of a team on this project, focusing primarily on installation design and evaluation.
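For a sense of how such a pipeline might work, the sketch below uses OpenCV to threshold each ball color in HSV space, find blob centroids, and map position to sound parameters. The color ranges and the position-to-sound mapping are assumptions for illustration, not the installation's actual values.

```python
import cv2
import numpy as np

# Illustrative color ranges in HSV space; the real installation's
# calibration values would differ.
COLOR_RANGES = {
    "red": (np.array([0, 120, 120]), np.array([10, 255, 255])),
    "blue": (np.array([100, 120, 120]), np.array([130, 255, 255])),
}

def track_balls(frame):
    """Return (color, x, y) for each detected ball centroid in a BGR frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    detections = []
    for color, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, lo, hi)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 500:  # ignore tiny blobs
                detections.append((color, m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return detections

def to_sound(color, x, y, width, height):
    """Hypothetical mapping: x controls pitch, y controls volume, color picks the voice."""
    return {"voice": color, "pitch": x / width, "volume": 1.0 - y / height}
```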
I installed and studied Sound Happening as part of an artist residency at the Children’s Museum of Pittsburgh during the summer of 2017. Sound Happening has also recently been accepted for presentation at the 2019 ACC Creativity and Innovation Festival at the Smithsonian.
Sound Happening at the Clough Art Crawl, Georgia Tech
Sound Happening Demo
In my job as a research assistant for the UNC Computer Science department, I worked on a team under Dr. Prasun Dewan building a system that predicts when programmers are facing difficulty and then offers interactive teaching and learning tools to help them surmount it.
My contribution to the project was a graphical visualization of the algorithm. This visualizer tool is part of a testbed that allows researchers to analyze user data in order to improve the algorithm. The GUI shows the values of different features at different times (e.g. how many of the user's actions are debugging actions, how many are navigation actions), provides a visualization of whether the predicted difficulty matches the actual difficulty, shows the type of difficulty, and displays the number of web links visited in an effort to surmount this difficulty.
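To give a flavor of the features involved, the hypothetical sketch below computes the proportion of each command category within a sliding window of user actions, the kind of value the GUI plots over time. The category names and window size are illustrative, not the algorithm's actual feature set.

```python
from collections import Counter

# Illustrative command categories; the real feature set is defined by the
# difficulty-prediction algorithm.
CATEGORIES = ["debug", "navigation", "edit", "search"]

def feature_vector(actions, window=50):
    """Proportion of each category among the most recent `window` actions."""
    recent = actions[-window:]
    counts = Counter(recent)
    total = max(len(recent), 1)
    return {cat: counts.get(cat, 0) / total for cat in CATEGORIES}

# Example: a recent burst of debugging actions may signal difficulty.
print(feature_vector(["edit"] * 10 + ["debug"] * 30 + ["navigation"] * 10))
```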
You can watch a video demo of the project to the left (my GUI is described at 1:10 in the first video, and again at 3:44 in the second video), as well as view an image of my visualization tool.
My work on this project has led to several papers, including a demo paper at the 2015 ACM Conference on Intelligent User Interfaces and a short paper and poster paper at the 2018 IEEE Conference on Visual Languages and Human Centered Computing (see Publications).
You can also check out the full project on Github.
Algorithm Visualization GUI
A Testbed for Automatic Detection of Collaborators' Status
My contribution is described at 1:10.
Tracking Interaction Commands and Incremental Programmer Difficulty Status
My undergraduate honors thesis (advised by Dr. Prasun Dewan) involved developing a program that predicts when students are facing difficulty while writing an essay and connects them with real-time assistance. This was an extension of a previous project that you can read more about here.
Many students face difficulty when writing documents for reasons such as language barriers, misunderstanding of content, or a lack of formal writing education, and they are often too shy or busy to visit a writing center or speak with a professor during office hours. Technology also falls short in this arena: asynchronous collaboration systems require students to self-report when they are struggling, and many students under-report difficulty, while synchronous collaboration systems eliminate the need for self-reporting but require teachers to constantly monitor their students.

By combining the synchronous and asynchronous collaboration paradigms, this project creates a mixed-focus collaborative writing system in which students and teachers engage in collaboration only when triggered by an automatically or manually generated event indicating that the student is facing difficulty. The mixed-focus system combines two existing architectures: 1) the EclipseHelper difficulty architecture for inferring programming difficulty, and 2) the Google Docs collaborative writing environment. The new, combined architecture allows teachers to intervene and offer remote assistance when they are automatically notified that a student is facing difficulty.

A user study was conducted to evaluate this new architecture. Students used the system to complete a two-page paper assigned in a class they were taking, and data were recorded during the writing and help-giving process. The data were evaluated using both qualitative and quantitative analysis. Overall, students found the help-giving model easy to use and appreciated the feedback they received. However, difficulty was predicted infrequently, likely as a result of inherent differences between writing and programming. Future work will involve further analysis of the data in order to improve the difficulty prediction algorithm.
The architecture I developed tries to automatically infer when a student is facing difficulty, but it also allows the student to manually indicate that they are struggling by pressing a button. When difficulty is detected, a teacher is notified via email and can then help the student in real time.
The architecture I developed in my thesis combined the difficulty inference architecture of EclipseHelper (a tool that predicts when student programmers are facing difficulty) with the collaboration and communication architecture of Google Docs. By combining these two architectures, I was able to create a new architecture that facilitates a difficulty-triggered collaborative writing environment. In this new architecture, user commands are collected from Google Docs, mapped to EclipseHelper command categories, and passed into the EclipseHelper difficulty inference algorithm. Once a prediction is generated, it is sent back to Google Docs, where the student can choose to correct it or continue with their work.
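A minimal sketch of that command-mapping step might look like the following, assuming a simple lookup table from document events to command categories; the table entries and function names are hypothetical, not the thesis code.

```python
# Hypothetical mapping from Google Docs editing events to the command
# categories the difficulty-inference algorithm expects.
DOC_TO_CATEGORY = {
    "insert_text": "edit",
    "delete_text": "edit",
    "move_cursor": "navigation",
    "open_comment": "help_seeking",
}

def on_doc_command(command, infer_difficulty, notify_teacher):
    """Map one document command and feed it to the inference algorithm."""
    category = DOC_TO_CATEGORY.get(command, "other")
    prediction = infer_difficulty(category)  # e.g. "having difficulty" / "making progress"
    if prediction == "having difficulty":
        notify_teacher()   # teacher is emailed, as described above
    return prediction      # surfaced to the student, who can correct it
```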