News
Event
November 1, 2025

In Corpore: Suono, Luce e Gesto
Kneading Media: Rhythm workshop with gestural media
https://www.instagram.com/p/DP6JueijKEJ
Sha Xin Wei + Gabriele Carotti Sha | 18:00 – 19:30 CET 1 November 2025 | Porto Burci, Vicenza, Italy
OVERVIEW
Three phases: Introduction, Experiment / Studio, Showcase
1. INTRODUCTION (15 min)
We will show how to use whole-body movement to continuously shape video and sound using gestural media software instruments from Synthesis and the Topological Media Lab. We create responsive environments in which people, free of screens and keyboards, can improvise fresh, meaningful movement as part of collective, improvised sense-making. In this workshop, we’ll introduce gestural media instruments that concentrate and layer rhythms continuously across video, sound, and lighting in any kind of ad hoc physical activity.
https://synthesiscenter.net/techniques
We’ll show you how to use two families of gestural media instruments: an audio delaysequencer and a videosounder. The delaysequencer takes a sound and streams up to 16 delayed copies of it, letting you design thicker streams and rhythms. The videosounder maps activity in portions of a video stream to sound, so you can match a “soundscape” to the movement the camera detects. These gestural instruments are a kind of multimodal “AI” a couple of generations ahead of large language models and neural nets. In the studio, you can try tailoring these gestural instruments yourself to sense and thicken rhythms in your own improvisatory situations.
2. EXPERIMENT (30 min)
Participants break into teams of 3 or 4. Each team should have one laptop that can run the Max/MSP/Jitter code that we supply. People can specialize if they like to:
(1) Gather (or record) up to 64 short samples of sound or music;
(2) Find good scenes and camera positions to track activity in a location chosen by the team;
(3) Find a good location in the studio for movements that yield interesting textures of sound / music;
(4) Vary the settings on the delaysequencer to create a custom sound-to-sound instrument;
(5) Vary the settings on the videosounder to create a custom video-to-multichannel-sound instrument.
Try your hand at creating custom instruments. Musicians and sound-oriented people can try designing a suite of delaysequencer instruments, each with its own characteristic textures of delays and processing. You introduce a sound through your mic, or a sound file of your choice, then apply delay, feedback, and pitch-shifting to it. You can layer in up to 16 delayed versions of the sound. Try clapping and singing through your instrument. Playing with the delays, gains, feedbacks, and re-pitchings creates a myriad of different textural effects, each one a live sound-processing instrument.
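To see what the layering amounts to, here is a minimal Python sketch of the delay-and-feedback idea — an illustration only, not the supplied Max/MSP patch, with pitch-shifting omitted for brevity and all names hypothetical:

```python
import numpy as np

def delay_sequence(signal, sr, delay_s, gain, feedback, taps=16):
    """Layer up to 16 delayed, gain-scaled copies of a signal.

    Each successive tap is delayed by another `delay_s` seconds and
    attenuated by `feedback`, mimicking the delay/gain/feedback
    controls described for the delaysequencer (pitch-shifting left out).
    """
    offset = int(delay_s * sr)                     # delay in samples
    out = np.zeros(len(signal) + offset * taps)
    out[: len(signal)] += signal                   # dry signal first
    g = gain
    for k in range(1, taps + 1):
        start = offset * k
        out[start : start + len(signal)] += g * signal
        g *= feedback                              # each copy decays further
    return out

# A single impulse through the instrument yields a decaying echo train.
echoes = delay_sequence(np.array([1.0]), sr=10, delay_s=0.1,
                        gain=0.5, feedback=0.5)
```

Varying `delay_s`, `gain`, and `feedback` per tap is what makes each configuration its own instrument; in the workshop this happens live inside the Max patch rather than offline as here.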
Movement- or gesture-oriented people can create a soundscape from their own banks of audio, associated with different parts of a physical space. Use either a separate camera connected to your laptop or your laptop’s built-in camera to sense where and when people move across parts of a space. Plan the sounds associated with each sub-region of the camera view. The sounds will be played louder or softer according to how much activity is in the corresponding region of the camera. If there’s movement across many regions, you will hear concurrent sounds, so try to compose the layout of sounds to work together as textures. (See the demo video of the videosounder.) We’ll supply different banks of sounds: ocean, forest, human voices, foley effects, songs, etc. But bring your own collections of short sound files (AIFF), or come prepared to gather collections of music or sounds online.
3. SHOWCASE
We’ll walk around from station to station to see what each team has created — either a composite five-minute performance or a prototype installation.
THANKS
Thanks to Todd Ingalls, Connor Rawls, Julian Stein, and Andrew Robinson for the SC realtime media software kit. Thanks to Megahub.it for local support.