Since late 2005, Christian Ziegler, an assistant professor in Arizona State University's School of Arts, Media and Engineering, has been working on an interactive stage technology that creates an immersive environment for theater performances using artificial intelligence, hanging light structures and physical movements on the stage.
“Connecting human movement to light as a pas de deux was the goal for my first prototypes,” Ziegler said.
Ziegler is now working on his latest production, which centers on ODO, an AI chatbot.
The new stage technology uses AI algorithms to conduct natural conversations with the audience and deep learning to interact with the environment. The system has sensors to hear and see the audience, using facial recognition algorithms and crowd-clustering tools to understand emotions and physical behavior. The interaction between the AI and the audience happens primarily through a chatbot in a mobile application Ziegler's team created.
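The loop described above — sensors read the audience, the readings are reduced to an emotional state, and that state shapes the chatbot's reply — can be sketched in a few lines. This is a hypothetical illustration only; the function names, emotion labels and tone mapping are assumptions for the example, not details of Ziegler's system.

```python
# Hypothetical sketch of a sensing-to-response loop: per-face emotion
# readings are reduced to a dominant audience mood, which then steers
# the tone of the chatbot's reply. All names here are illustrative.

from collections import Counter

def dominant_emotion(face_emotions):
    """Pick the most common emotion among the detected faces."""
    if not face_emotions:
        return "neutral"
    return Counter(face_emotions).most_common(1)[0][0]

def choose_response(emotion, message):
    """Shape a reply according to the audience's dominant mood."""
    tone = {"happy": "playful", "sad": "gentle", "neutral": "curious"}
    return f"[{tone.get(emotion, 'curious')}] ODO hears: {message}"

# Example: three faces read as happy, one as sad.
mood = dominant_emotion(["happy", "happy", "sad", "happy"])
print(choose_response(mood, "Who are you?"))
```

In a real system the emotion labels would come from a facial-recognition model and the reply from a language model, but the control flow — sense, aggregate, respond — is the same.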
“ODO uses a 3D system of lights and robotic movement to express him/her/itself as a virtual character in the real world,” he said. “ODO is designed as an actor to lead a play, communicating with (at least) five audience members.”
WATCH: View a full demonstration of the AI lighting system.
Here, Ziegler answers a few questions about his technology.
Question: You mentioned that this performance, “No Body lives here (ODO),” is based on Antoine de Saint-Exupéry’s "Little Prince" and Stanley Kubrick’s HAL 9000 in “2001: A Space Odyssey.” How did this inspire the design for ODO?
Answer: The play is a journey through a poetic cosmos, like the one in the novel "The Little Prince." As in Saint-Exupéry’s original, the people living on the planets in ODO represent philosophical, or sometimes more ordinary, ways of expressing life, or what it is to be alive. ODO is an AI chatbot and is aware that she/he is not human. So she/he invites us on a virtual journey to get to know us, but also to test us.
Q: The audience is a critical part of the performance. Could you describe how the performance will work?
A: Each participant gets a special phone with an app to talk or text with ODO. There are also scenes with movement games: the up-and-down movements of the phones can change the vertical movement of the lights or bring energy to water to generate waves. ODO also senses the presence of the audience and the clustering of groups in space. ODO even looks right into your face and can read your emotions!
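The movement game described here — a phone's up-and-down motion driving the vertical position of a hanging light — amounts to smoothing a noisy accelerometer signal and mapping it onto a height range. The sketch below is a hypothetical illustration; the function names, smoothing constant and height range are assumptions, not Ziegler's implementation.

```python
# Hypothetical sketch: smooth a phone's vertical accelerometer
# readings, then map the smoothed value onto the target height of a
# hanging light. Names and ranges are illustrative assumptions.

def smooth(samples, alpha=0.3):
    """Exponential moving average over raw sensor samples."""
    value = samples[0]
    for s in samples[1:]:
        value = alpha * s + (1 - alpha) * value
    return value

def phone_to_light_height(vertical_motion, low=0.5, high=3.0):
    """Map a normalized up/down reading (-1..1) to a light height in meters."""
    clamped = max(-1.0, min(1.0, vertical_motion))
    return low + (clamped + 1.0) / 2.0 * (high - low)

readings = [0.1, 0.4, 0.8, 0.9]  # phone moving steadily upward
print(round(phone_to_light_height(smooth(readings)), 2))
```

Smoothing matters here because raw phone accelerometer data jitters; without it, the lights would twitch rather than follow the audience's gestures.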
Q: What are you hoping the audience takes away from this performance?
A: The audience is positioned like the chorus in ancient Greek theater, between the act at the center of the stage and the audience world. The chorus translates the play or comments on it – an active role between passive watching and acting. In ODO, the audience is connected to the play as well. The play doesn’t exist without the audience communicating with an AI chatbot. I think theater can’t survive without developing new forms of play and communication on stage.
Q: What draws you to working at the intersection of hybrid live performance, movement and technology?
A: I have collaborated with choreographers and theater directors since 1995 on research into digital tools for theater. New technologies like AI, AR, robotics, machine learning and free access to gaming engines bring new ideas to theater. The question is: What can we do with them to develop interesting content and new, challenging ways to communicate on stage, to keep theater interesting? I think theater is the most important art form for practicing social interaction, and also for having a platform to discuss how we want to live together in the future. Experiments in theater mirror society and its ability to adapt and change.
Q: How has your background in performance art helped you during this project/performance?
A: Before I studied architecture and media arts, I also performed on stage. Being in front of an audience changes your perception of presence and physicality. I developed a strong sense of physical and nonverbal communication, and of how important it is to design a theater piece with all aspects of human communication beyond the mere audiovisual senses. My projects are postdramatic, meaning the spoken and written text is as powerful as all the other media on stage.
Q: Is there anything else you would like to mention about this project?
A: Due to the coronavirus, I had to adapt my piece to an audience of five-plus members. I am trying to see that as a challenge, and not as an obstacle to the way art projects can be produced in the times of a pandemic. It is sad to see that gathering with others in a confined space is something one should rather avoid these days. I am very concerned that theater, as a fascinating social event – a gathering of strangers – may remain a risk even after the pandemic.
Ziegler enlisted help from various professors and students in the School of Arts, Media and Engineering to develop and test his AI character for the performance, including Assistant Professor Suren Jayasuriya, research specialist Connor Rawls, Research Professor Brandon Mechtley, Instructor Tejaswi Linge Gowda, students Ravi Bhushan, Karthik Kulkarni and Vishal Pandey, and former School of Arts, Media and Engineering student Chris Zlaket. He also received support from ZKM Karlsruhe (Hertz Laboratory), Arizona State University's Synthesis Center and the support program of the Endowment for Independent Theater, as well as funds from the Ministry of Science, Research and Art Baden-Württemberg, the Department of Culture of the City of Munich and the special program "Configurations" of the Endowment for Performing Arts in Berlin, Germany.