Research

Participatory Steering of Complex Adaptive Systems

At Synthesis @ ASU Phoenix, we’ve built techniques for composing responsive media spaces for improvisatory events in which people can collectively steer richly structured visual, sonic, or tangible physical media.

Synthesis @ ASU and its predecessor, the Topological Media Lab (2001–2013), have created rich movement-modulated media spaces in which we could freely invent and coordinate sense-making activity via gesture, voice, images, sounds, or physical activity generally, in spaces both large and intimate.

We have built a kit of theater-grade scenographic technologies with realtime media and are developing techniques not for composing pre-scripted events, but for conditioning live events in which people can act freely, without constraint on their intention or mode of expression. The goal is to augment and enrich the range of expression available to the “inhabitants” of these events. It’s important to add that our free-form multimodal environments include all the usual “off-the-shelf” tools of representation and interaction: web browsers, Skype, Matlab, whiteboards, charcoal and paper, post-its, role-playing games, body-storming, and so on.

In collaboration with CECAN’s international partners, we want to evolve our techniques and systems to meet different models and scenarios of participatory engagement with social impact around the world.


Collaborators 

Brandon Mechtley, Ph.D., Synthesis

Sha Xin Wei, Director, Synthesis

Ariane Middel, Ph.D., AME & CIDSE

Garrett L. Johnson, Ph.D., Synthesis

Jonathan Bratt, Ph.D., Geography

Connor Rawls, Synthesis