The abstract for my proposed dissertation work investigating how to design AI literacy interventions for public spaces is included below. Photos to come as my research progresses!
Artificial intelligence (AI) is becoming increasingly prevalent in our everyday lives, in places as personal as our social media news feeds, cars, and homes. However, many misconceptions remain regarding what exactly AI is, what it is capable of, and how it works. Recent initiatives have begun to investigate how to communicate high-level ideas about AI, such as the idea that computers learn from data, to non-technical learners through the development of K-12 course curricula and technology that can be used in these contexts. Public spaces such as museums can serve as an alternative venue for AI literacy interventions, broadening access to individuals who may not have AI devices in their homes or schools. Experiences involving co-creative, embodied interaction with AI may be particularly well suited to AI literacy interventions in public, informal learning spaces, given the open-ended, social nature of these interactions and research showing that creativity can be a powerful inroad to learning about computing. My proposed research will take a step toward encouraging more widespread AI literacy by exploring how to design public-facing installations that allow non-technical learners to interact with and learn about AI in co-creative, embodied contexts and that foster interest in AI. In this proposal, I describe my completed work designing co-creative AI for public spaces, discuss my ongoing research on defining AI literacy, and present plans for future design research investigating how to design and evaluate co-creative, embodied AI literacy interventions for public spaces.
I lead a team of undergraduate and master's students working on the LuminAI project in the Expressive Machinery Lab. LuminAI is an interactive installation in which a human participant and an AI agent improvise movement together. The AI agent learns from the user and draws on dance theory to inform its choice of movement responses.
Currently, we are exploring new modes of representing and understanding human movement in LuminAI. We recently developed a pipeline for clustering unlabeled motion data based on feature vectors (accepted as a forthcoming paper at the 2019 ACM International Conference on Movement and Computing) and are developing a visualization tool for better understanding and refining the clustering algorithm. We are also investigating how to use Laban's theory of space in dance to guide the agent's movement understanding and generation. Finally, we have explored a variety of design questions through LuminAI, including how to design socially interactive systems and co-creative agents for public spaces. Under my leadership, LuminAI has been shown at a variety of museums, festivals, and art events, including the Children's Museum of Pittsburgh, the CODAME Art + Tech Festival in San Francisco, and the ACC Creativity and Innovation Festival at the Smithsonian's National Museum of American History in Washington, DC.
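To make the clustering step concrete, here is a minimal sketch of the general approach, assuming each motion segment is an array of 3D joint positions over time. The feature choices, cluster count, and random stand-in data are illustrative, not the actual pipeline from our paper.

```python
# A minimal sketch of clustering unlabeled motion segments by feature vector.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def motion_features(segment):
    """Summarize one (frames, joints, 3) motion segment as a feature vector.

    Illustrative features only: mean joint position, mean joint speed,
    and the spatial extent of the movement.
    """
    speed = np.linalg.norm(np.diff(segment, axis=0), axis=2)  # per-frame joint speeds
    return np.concatenate([
        segment.mean(axis=(0, 1)),                            # average position (3,)
        [speed.mean()],                                       # overall speed (1,)
        segment.max(axis=(0, 1)) - segment.min(axis=(0, 1)),  # movement extent (3,)
    ])

# Stand-in for real skeletal capture data: 100 clips of 60 frames x 25 joints.
segments = [np.random.rand(60, 25, 3) for _ in range(100)]
features = StandardScaler().fit_transform(
    np.stack([motion_features(s) for s in segments]))
labels = KMeans(n_clusters=8, n_init=10).fit_predict(features)
print(labels[:10])
```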
LuminAI Demo Video
LuminAI at The Children's Museum of Pittsburgh
LuminAI at the CODAME Art + Tech Festival
TuneTable is an interactive tangible tabletop experience in which users can collaboratively create sample-based music compositions using computer science coding concepts. The experience is designed to broaden perceptions of CS, encourage interest formation in CS, and facilitate learning of some introductory computing concepts.
I work on the evaluation team for TuneTable, where we focus on better understanding how to evaluate co-creative experiences in informal learning spaces. We are investigating new ways of assessing whether the table is an engaging and effective tool for informal science learning, using qualitative video analysis and building on existing frameworks used in museum studies. A paper on our work studying participants’ physical engagement with TuneTable was recently published at ACM Creativity and Cognition 2019.
TuneTable Demo Video
Tangible blocks are connected on the tabletop surface to create and manipulate music. Participants can place sample blocks (top left) to play sound samples, which can then be manipulated with other blocks, such as the loop block (top middle), which causes a sample to repeat continuously.
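As a rough illustration of the underlying idea, the toy interpreter below reads a chain of blocks as a tiny program over sound samples. The block kinds and the example chain are hypothetical; the real table tracks physical blocks on the surface and plays audio.

```python
# A toy interpreter for a chain of tangible blocks (hypothetical block kinds).
from dataclasses import dataclass

@dataclass
class Block:
    kind: str      # "sample" or "loop"
    value: object  # a sample name, or a number of extra repeats

def interpret(chain):
    """Flatten a chain of blocks into the list of samples to play."""
    playlist = []
    for block in chain:
        if block.kind == "sample":
            playlist.append(block.value)
        elif block.kind == "loop" and playlist:
            playlist.extend([playlist[-1]] * block.value)  # repeat the last sample
    return playlist

print(interpret([Block("sample", "drum"), Block("loop", 3)]))
# ['drum', 'drum', 'drum', 'drum']
```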
Interacting with TuneTable
Bee a Pollinator!
Bee a Pollinator! is an exhibit that a colleague and I designed for the Children's Museum of Atlanta. The installation is designed to teach kids about pollination through a playful, interactive experience. Families are invited to use stuffed felt bee wands to pick up colorful velcro "pollen" balls. When a player moves a pollen ball to the flower hole of the matching color, the stem of the standing flower begins to light up. After three balls have been added, a fruit in the center of the flower moves up and down. The experience was designed to communicate four key learning goals about pollination: 1) bees pollinate flowers; 2) pollen sticks to bees; 3) flowers are only pollinated when a bee leaves behind the "matching" pollen; and 4) pollinated flowers produce fruit. The installation uses USB cameras and a color recognition program to detect the color of the pollen balls, and an Arduino with NeoPixels and servo attachments to power the lights and fruit movement.
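For a sense of how the color recognition might work, here is a minimal sketch using OpenCV HSV thresholding. The color ranges, camera index, and minimum area are assumptions, not the exhibit's actual code, and the installed version also signals the Arduino that drives the lights and servo.

```python
# A minimal sketch of color recognition via HSV thresholding (illustrative ranges).
import cv2
import numpy as np

# Hypothetical HSV ranges for two pollen-ball colors.
COLOR_RANGES = {
    "yellow": ((20, 100, 100), (35, 255, 255)),
    "pink": ((150, 80, 80), (175, 255, 255)),
}

def detect_pollen(frame, min_area=500):
    """Return the pollen color covering the most pixels, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    best, best_area = None, min_area  # ignore tiny specks of color
    for name, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo, dtype=np.uint8),
                           np.array(hi, dtype=np.uint8))
        area = cv2.countNonZero(mask)
        if area > best_area:
            best, best_area = name, area
    return best

cap = cv2.VideoCapture(0)  # USB camera watching one flower hole
ok, frame = cap.read()
if ok:
    print(detect_pollen(frame))
```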
Players move colorful pollen balls to matching flower holes in order to make the stems of the flowers light up. After three balls are added, a fruit pops up from the center of the flower.
We custom-sewed stuffed bees out of felt. When participants stuck one of the bees in the hole in the installation, velcro “pollen” balls would stick to it.
The installation structure was constructed from a combination of plywood and cardboard.
The petals of the standing flowers were constructed by wrapping custom-made aluminum and steel wire frames with colorful stocking material.
We used a rack-and-pinion mechanism (laser-cut from acrylic using designs from Paper Mech) attached to a servo motor to move the plastic fruit in the center of the flower up and down. The servo was controlled by an Arduino board.
Personal Equity Index
The City of Atlanta is exploring how to represent and address issues of equity in the city, and the Personal Equity Index is a participatory workshop designed to humanize statistics and quantitative data with qualitative stories about (in)equity. The workshop, developed through an iterative design research process in collaboration with the City of Atlanta, is centered on the idea that equity is personal. In the workshop, participants use workbooks and card decks that we designed to explore how they feel about issues of equity and inequity in Atlanta.
The insights from this workshop will enable the City of Atlanta to examine the qualitative stories and personal experiences of Atlanta residents on both an individual and a community-wide scale.
We structured the workshop around three key ideas: intersectional identity, personal experiences, and emotions/feelings. Participants are first asked to share the three words that best describe who they are, as a way of expressing their own personal, intersectional identity, and to identify the Atlanta neighborhood in which they live. They are then asked to discuss their experiences living in Atlanta, focusing both on things they are grateful for and things they wish they could change about the city. Finally, they are asked to discuss their overall feelings about living in Atlanta and to share a story of a time the city made them feel a certain way (e.g., included or vulnerable).
The workbook prompts participants to use the cards to explore their personal stories of equity in Atlanta: the Experiences cards to reflect on what they are grateful for and what they wish they could change, and the Feelings cards to recall a time they felt a certain way (e.g., vulnerable or included) in the city.
Feelings and Experiences Card Decks
The Feelings card deck consists of emotion words like vulnerable or included. The Experiences card deck is broken down into six sub-decks, each of which focuses on a different aspect of the City of Atlanta's agenda: Transportation, Safety, Health and Wellbeing, Empowerment, Inclusivity, and Housing.
City of Atlanta Workshop
We held a pilot workshop with a group of City of Atlanta officials, who participated in a one-hour workshop and offered feedback on our work.
The Shape of Story
The Shape of Story is an interactive story circle experience in which participants collectively create a story line by line. AI-based narrative understanding is used in conjunction with a symbolic visual language to visualize the story in real time. The result is a communally created narrative art piece.
You can check out the GitHub repository for the project here, or read our paper on the project, published at the Workshop on Intelligent Narrative Technologies at AAAI's AIIDE 2017.
Every element of the visualization, from color to shape to the speed at which lines are drawn, corresponds to an element of the story that is told.
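As a rough illustration of this mapping, the sketch below derives a few visual parameters from a single story line. The tiny sentiment lexicon and the parameter names are hypothetical stand-ins for the system described in our paper.

```python
# An illustrative mapping from one story line to visual parameters.
SENTIMENT = {"storm": -1.0, "lost": -0.6, "home": 0.8, "found": 0.7}

def visual_parameters(line):
    """Map one story line to (hue, curviness, draw_speed)."""
    words = line.lower().split()
    sentiment = sum(SENTIMENT.get(w, 0.0) for w in words) / max(len(words), 1)
    hue = 200 if sentiment < 0 else 40       # cool hues for darker lines
    curviness = min(len(words) / 20, 1.0)    # longer lines wander more
    draw_speed = 1.0 + 2.0 * abs(sentiment)  # emotional lines draw faster
    return hue, curviness, draw_speed

print(visual_parameters("the storm rolled in and we were lost"))
# (200, 0.4, 1.4)
```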
Physical Installation Design
Fabric panels (right) delineate a circular room (center) where participants sit in a circle on pillows (left top) surrounding a circular projection screen (left bottom). As participants collectively tell a story, the visualization is projected in the center of the circle.
With the help of local Atlanta artist Jessica Brooke Anderson, we constructed a sculptural speaking device. This device is passed around the circle, and participants use it to share their piece of the story.
Sound Happening is an interactive installation in which participants collaboratively create music by moving colorful balls around a defined interaction space. A webcam tracks the location and color of the balls and generates music from those parameters. I work as part of a team on this project, focusing primarily on installation design and evaluation.
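To illustrate the idea, here is a minimal sketch of how a tracked ball's position and color might be mapped to a note. The instrument table and the MIDI-style mapping are assumptions, not the installation's actual code.

```python
# A hypothetical mapping from a tracked ball to a MIDI-style note event.
def ball_to_note(x, y, color, width=640, height=480):
    """Map a ball's camera coordinates and color to (instrument, pitch, velocity).

    Horizontal position picks the pitch, vertical position the volume,
    and color selects the instrument.
    """
    instruments = {"red": 0, "blue": 11, "green": 24}  # piano, vibraphone, nylon guitar
    pitch = 48 + int(x / width * 24)             # two octaves, left to right
    velocity = 40 + int((1 - y / height) * 80)   # louder near the top of the frame
    return instruments.get(color, 0), pitch, velocity

print(ball_to_note(320, 120, "blue"))  # (11, 60, 100)
```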
I installed and studied Sound Happening as part of an artist residency at the Children's Museum of Pittsburgh during the summer of 2017. This experience led to a late-breaking work paper at the 2018 ACM Interaction Design and Children (IDC) conference on understanding parent-child play with Sound Happening. Sound Happening was also recently shown as a performance at the 2019 ACC Creativity and Innovation Festival at the Smithsonian's National Museum of American History.
Sound Happening at the Clough Art Crawl, Georgia Tech
Sound Happening Demo
Sound Happening Diagram
As an undergraduate research assistant in the UNC Computer Science department, I worked on a team led by Dr. Prasun Dewan on a project that predicts when programmers are facing difficulty and then offers a number of interactive teaching and learning tools to help them overcome it.
My contribution to the project was a graphical visualization of the algorithm. This visualizer tool is part of a testbed that allows researchers to analyze user data in order to improve the algorithm. The GUI shows the values of different features over time (e.g., how many of the user's actions are debugging actions versus navigation actions), provides a visualization of whether the predicted difficulty matches the actual difficulty, shows the type of difficulty, and displays the number of web links visited in an effort to overcome that difficulty.
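As a simple illustration of the kind of per-interval features the GUI displays, the sketch below computes each command category's share of a window of logged commands. The category names and the example window are hypothetical, not the project's actual feature set.

```python
# A sketch of per-window feature ratios over categorized programmer commands.
from collections import Counter

def feature_ratios(commands, categories=("debug", "navigation", "edit")):
    """Return each category's share of the commands in one time window."""
    counts = Counter(commands)
    total = max(sum(counts.values()), 1)  # avoid division by zero
    return {c: counts[c] / total for c in categories}

window = ["edit", "debug", "debug", "navigation", "debug"]
print(feature_ratios(window))
# {'debug': 0.6, 'navigation': 0.2, 'edit': 0.2}
```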
You can watch video demos of the project to the left (my GUI is described at 1:10 in the first video and at 3:44 in the second), as well as view an image of my visualization tool.
My work on this project has led to several papers, including a demo paper at the 2015 ACM Conference on Intelligent User Interfaces and a short paper and a poster paper at the 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (see Publications).
You can also check out the full project on GitHub.
Algorithm Visualization GUI
A Testbed for Automatic Detection of Collaborators' Status
My contribution is described at 1:10.
Tracking Interaction Commands and Incremental Programmer Difficulty Status
My undergraduate honors thesis (advised by Dr. Prasun Dewan) involved developing a program that attempts to predict when students are facing difficulty while writing an essay and connects them with real-time assistance. This was an extension of a previous project that you can read more about here.
Many students face difficulty when writing documents, for reasons such as language barriers, misunderstanding of content, or lack of formal writing education. They are often too shy or busy to visit a writing center or speak with a professor during office hours. Technology also falls short in this arena: asynchronous collaboration systems require students to self-report when they are struggling, and many students tend to under-report difficulty, while synchronous collaboration systems eliminate the need for self-reporting but require teachers to constantly monitor their students. By combining the synchronous and asynchronous collaboration paradigms, this project creates a mixed-focus collaborative writing system in which students and teachers engage in collaboration only when triggered by an automatically or manually generated event indicating that the student is facing difficulty. The mixed-focus system was created by combining two existing architectures: 1) the EclipseHelper difficulty architecture for inferring programming difficulty, and 2) the Google Docs collaborative writing environment. The new, combined architecture allows teachers to intervene and offer remote assistance when they are automatically notified that a student is facing difficulty. A user study was conducted to evaluate the architecture: students used the system to complete a two-page paper assigned in a class they were taking, and data were recorded during the writing and help-giving process. The data were evaluated using both qualitative and quantitative analysis. Overall, students found the help-giving model easy to use and appreciated the feedback they received. However, difficulty was predicted infrequently, likely as a result of inherent differences between writing and programming. Future work will involve further analysis of the data in order to improve the difficulty prediction algorithm.
The architecture I developed tries to automatically infer when a student is facing difficulty, but it also allows the student to manually indicate that they are struggling by pressing a button. In either case, a teacher is notified via email and can then help the student in real time.
The architecture I developed in my thesis combined the difficulty inference architecture of EclipseHelper (a tool that predicts when student programmers are facing difficulty) with the collaboration and communication architecture of Google Docs. By combining these two architectures, I was able to create a new architecture that facilitates a difficulty-triggered collaborative writing environment. In this new architecture, user commands are collected from Google Docs, mapped to EclipseHelper command categories, and passed into the EclipseHelper difficulty inference algorithm. Once a prediction is generated, it is sent back to Google Docs, where the student can choose to correct it or continue with their work.
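As a rough illustration of that pipeline, the sketch below maps hypothetical Google Docs event names to EclipseHelper-style command categories and applies a toy threshold-based predictor. The event names, categories, and predictor are all illustrative, not my thesis code.

```python
# A toy version of the command-mapping and difficulty-inference step.
COMMAND_MAP = {
    "insert_text": "edit",
    "delete_text": "edit",
    "scroll": "navigation",
    "comment": "help",
}

def predict_difficulty(events, threshold=0.5):
    """Flag difficulty when too few recent events are productive edits."""
    categories = [COMMAND_MAP.get(e, "other") for e in events]
    edit_ratio = categories.count("edit") / max(len(categories), 1)
    return "having difficulty" if edit_ratio < threshold else "making progress"

recent = ["scroll", "scroll", "delete_text", "scroll", "comment"]
print(predict_difficulty(recent))  # "having difficulty" (edit ratio 0.2)
```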