Multi-Node Shell is a performance that uses biosensors to explore sound as a language grounded in the perception of body movement. Neural networks process the sensor data in a dual role: they learn expressive body-sound associations from the performer and recognize behavioral states that affect the software and the sound. When the software encounters new gestures, distinct from its previous training, it produces unexpected sound results, rendering the machine “creative” in its own right. The project is an inquiry into learning methods between humans and machines: if the initial research question was what we can teach an artificial intelligence, the interest has now shifted to what, and how much, artificial intelligence can teach us.
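The performance software itself is not published; purely as an illustration of the two roles described above, the sketch below assumes a hypothetical setup in which 8-channel biosensor frames are mapped to a few sound-synthesis parameters with a small neural network (scikit-learn's MLPRegressor), and a simple distance-based check flags gestures unlike the training material so the output can drift into less predictable territory.

```python
# Illustrative sketch only, not the artist's actual software.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: 8-channel biosensor frames paired with
# 3 sound parameters (e.g. pitch, grain density, amplitude).
sensor_frames = rng.normal(size=(500, 8))
sound_params = rng.uniform(size=(500, 3))

# 1) Teach the machine an expressive body-to-sound association.
mapper = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mapper.fit(sensor_frames, sound_params)

# 2) Recognize gestures that differ from the training material
#    with a simple distance-to-centroid threshold.
centroid = sensor_frames.mean(axis=0)
threshold = np.percentile(np.linalg.norm(sensor_frames - centroid, axis=1), 99)

def respond(frame: np.ndarray) -> np.ndarray:
    """Return sound parameters; perturb them when the gesture looks novel."""
    params = mapper.predict(frame.reshape(1, -1))[0]
    if np.linalg.norm(frame - centroid) > threshold:
        # Unfamiliar gesture: let the system produce unexpected results.
        params = params + rng.normal(scale=0.5, size=params.shape)
    return params

print(respond(rng.normal(size=8)))        # gesture within the trained range
print(respond(rng.normal(size=8) * 5.0))  # out-of-distribution gesture
```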
Multi-Node Shell is a performance by Luca Pagan, presented by Umanesimo Artificiale.
Luca Pagan (1993, Venice) is a sound artist, performer and independent researcher focusing on the relationship between humans and emerging technologies. Through performances, he explores the interplay of physical movement, sound and environment. His approach involves creating wearable technology and using biotechnology to capture performative expressiveness, with an emphasis on machine learning techniques.
He has collaborated with Istituto Italiano di Tecnologia (IIT), Umanesimo Artificiale, MAEID Studio, LOREM and Giorgio Sancristoforo. His work has been exhibited at Ars Electronica (Linz), Biennale di Architettura (Venice), Iklectik Art Lab (London), Museo MAXXI (Rome), Fundación Princesa de Asturias (Oviedo), JRC Joint Research Center (Ispra), Transmedia Research Institute (Fano), PARC Performing Arts Research Center (Florence), Fondazione A.Pini (Milan), Zone Digitali Festival (Bergamo), Cosmo (Venice) and Milan Fashion Week (Milan).
Website
www.lucapagan.info
Instagram
www.instagram.com/_lucapagan_