Telematic Embodied Learning


How can we design learning occasions that allow students and teachers in different locations to use the affordances of their respective physical spaces and bodily abilities (Rajko 2017, Sheets-Johnstone 2011, 2013, 2016) together with the usual array of “foveal” technologies like videoconferencing for learning?

Synthesis Center combines techniques that allow people to freely move about, manipulate physical objects, and use their senses of touch and proprioception. To that we adjoin portals (multi-channel live video wormholes) that can be placed in various parts of a physical space, mated tabletops (Montpellier et al 2015) to provide common work surfaces shared across locations, and mated objects as multivalent physical props that can serve in either improvised or prepared scenarios. (Tangible, Embedded and Embodied Interaction)

Why Telematic Embodied Learning?

When imagining the future of education, especially higher education, we prefigure a post-carbon (post-jet-travel) world where highly connected, highly diverse learners and mentors can benefit from telematic capabilities. At the same time, the importance of in-person teaching cannot be overstated, as confirmed by students coping with pandemic-related disruptions to their academic careers. (Inside Higher Ed 2020)

Online tools tend to overlook the importance of the body for social, mental, and physical health, as well as of the physical environment in which learners and instructors situate their experiences (Stolz 2014; Wagner & Shahjahan 2015; Kontra et al 2012; Schmidt et al 2019; Sheets-Johnstone 2009, 2013; Mangen & Velay 2010). The opportunities for sociality that emerge in face-to-face learning are thus deeply restricted when interactions are moved online, especially when carried out asynchronously.

We propose the design and practice of new learning tools that do not ignore the importance of the body, the mind, and the physical environment for distance learning. (Skulmowski & Rey 2018) Furthermore, we emphasize that these tools are not designed to replace in-person learning, which remains vital; instead, we aim to augment synchronous learning experiences for either remote or face-to-face classroom settings when this would result in significant benefits to instructors and learners. (Robbins & Aydede, Gill 2015, Dewey 1997)


The Synthesis Center at Arizona State University is working to address these issues by designing multimedia environments that take advantage of collective, embodied, tangible, and spatial affordances.

This is a multidisciplinary effort that prioritizes:

  • Shared experience in live settings, paying attention to spatial, corporeal and social affordances to enable ensemble activities.
  • Tools that are gesture-based, minimizing fatigue due to extensive use of screen-based technology.
  • Accommodating unanticipated pedagogical practices invented by teachers (or students) for specialized needs, which may have distinct gestural idioms and techniques.
  • Minimizing content development costs and user cognitive load without compromising the social aspects of learning: body language, a sense of physical presence, tangible affordances, and synchronous interaction between learners and teachers.
  • Imbuing media objects (image, video, sound, text, freehand squiggle) with a tangibility that is analogous to that of physical experience.
  • Easy incorporation of ad hoc bodily action and movement, and of the affordances of physical surroundings and objects, into pedagogical activity without requiring additional engineering or apps.

Seed projects

Diagrammatic: A gestural, multimedia note-taking tool for students to track spontaneous conceptual connections across diverse learning materials and for educators to design collaborative exercises.

Media Choreography and Playful Environments: In this responsive media studio course, students with no programming background will create persuasive performances and installations using pre-built media-processing software instruments and accessories. In each module we will furnish students with one or more standalone applications built from Max/MSP/Jitter (decisions still to be made on cross-platform support). No coding is necessary to create events, installations, games, performances, or augmented environments. The emphasis is on experiential design, ensemble activity, and, where possible, augmenting the physical environment beyond screens and keyboards.

Virtual Classroom: A telematic workspace where students can work on individual assignments in a synchronous setting; it aims to build a sense of mutual presence and to foster interaction between students and teachers.


Sutured rooms: Portals on tabletops and walls that extend the physical environment to enhance the sense of shared space between distant interlocutors.

Mated tabletops: Common work surfaces shared across locations, which participants can synchronously modify.
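Keeping two distant work surfaces in step requires some policy for reconciling concurrent edits, so that both tabletops converge on the same state. The following is a minimal sketch of one such policy, last-writer-wins ordered by Lamport timestamp, in Python; the class and field names are illustrative assumptions, not the Center's actual implementation, which may well use a different synchronization scheme.

```python
from dataclasses import dataclass


@dataclass
class SurfaceEdit:
    obj_id: str    # which media object on the shared surface
    payload: dict  # e.g. position, annotation, media reference
    stamp: int     # Lamport timestamp of the edit
    site: str      # which tabletop made the edit


class MatedSurface:
    """One site's replica of the shared work surface."""

    def __init__(self, site: str):
        self.site = site
        self.clock = 0    # Lamport clock, advanced by local and remote edits
        self.objects = {}  # obj_id -> latest winning SurfaceEdit

    def local_edit(self, obj_id: str, payload: dict) -> SurfaceEdit:
        """Apply an edit locally; return it for broadcast to mated sites."""
        self.clock += 1
        edit = SurfaceEdit(obj_id, payload, self.clock, self.site)
        self.objects[obj_id] = edit
        return edit

    def receive(self, edit: SurfaceEdit) -> None:
        """Merge a remote edit: higher (stamp, site) wins, so replicas converge."""
        self.clock = max(self.clock, edit.stamp)
        mine = self.objects.get(edit.obj_id)
        if mine is None or (edit.stamp, edit.site) > (mine.stamp, mine.site):
            self.objects[edit.obj_id] = edit
```

Because every replica applies the same deterministic ordering rule, the two tabletops agree on the final state of each object no matter in which order the edits arrive, at the cost of silently discarding the losing concurrent edit.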

For more technical resources, see Synthesis Center techniques:


  • Gesture tracking
  • Systems chart tracing
  • Augmented Reality
  • Streaming multi-channel video and audio
  • Custom hardware for physical interaction
  • Real-time complex systems simulation (e.g. weather patterns)


Garrett L Johnson (AME MAS PhD): Diagrammatic research lead, experience design

Gabriele Carotti-Sha (San Francisco): External projects outreach coordinator, experience design

Muindi F Muindi (U Washington, Seattle): Performative, experience design

Tianhang Liu (AME Digital Culture): Augmented reality; research assistant

Andrew Robinson (Synthesis, Weightless, AME Digital Culture): Realtime media, user interfaces

Ivan Mendoza (AME Digital Culture): Gesture tracking, 3D graphics


Connor Rawls (Synthesis): Media choreography systems, network media, responsive environments

Pete Weisman (AME Technical Director): Audio-visual systems, responsive environments

Omar Faleh (Weightless, Morscad, Montreal): Augmented reality and urban spaces, architecture, interactive media.

Sha Xin Wei PhD (Synthesis, AME): Director, experience design, external projects