From Noise to Form: Ambisonic Growth Simulation
Design Researchers: Lia Coleman, Meredith Binnette, Yime Hu, Zack Davey, Qihang Li
Science Collaborators: Dr. Jennifer Bissonnette, Ben Gagliardi
As part of the Hyundai Motor Group’s Learning From Nature research, I led a team of design researchers focusing on one fascinating question: how does form emerge when the boundary between the natural, the artificial, and the automated becomes obsolete? The Posthuman Mobility Lab has critically explored the process of Nature Simulation in Art, Design, and Technology and developed a design methodology for Cyborg Natures. We explored the evolutionary development processes and environmental adaptations that result in unique capabilities and morphologies in plants, insects, and animals. In our research, we employed the style-transfer and image-synthesis capabilities of Generative Adversarial Network (GAN) algorithms to analyze and combine the visual data into custom-generated models.
What does it mean to be inspired by Nature when we live in artificial and cyborg landscapes? What new design methodologies can we develop in this techno-ecology?
Inspired by tropism, we began looking at electrical sound as a potential source of stimuli: a force that connects us across species and landscapes, firing up our brains, bodies, organisms, and plants, and traveling through the fiber-optic cables that link us across continents and send signals back every time we run an internet search. Our sound design consisted of two coexisting sounds: manipulated white noise and self-produced electrical sounds. The electrical sounds were generated from sine waves in Ableton Live, then altered and morphed via effects until they matched a sound we associate with electricity.
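As a rough illustration of these two source sounds (not the actual Ableton Live patch), the material can be sketched in a few lines of Python; the FM-style modulation standing in for the effects chain, and all frequency values, are assumptions.

```python
import numpy as np

SR = 44100          # sample rate in Hz
DUR = 2.0           # duration in seconds
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

# White-noise bed (uniform noise, scaled to a safe amplitude)
white_noise = np.random.uniform(-0.5, 0.5, t.shape)

# "Electrical" tone: a sine wave roughened by frequency modulation,
# a stand-in for the effects chain described above (assumed values)
carrier_hz, mod_hz, mod_depth = 220.0, 50.0, 80.0
electric = np.sin(2 * np.pi * carrier_hz * t
                  + mod_depth * np.sin(2 * np.pi * mod_hz * t))
```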
Our sample of white noise was shaped by ambisonic (spatial) field recordings of the ocean. In Max/MSP, we programmed the amplitude data of the ocean field recording to control the amplitude of the white noise. The result was then translated back into ambisonic audio so that the white noise would spatially mimic the sound of the ocean itself. To connect this material to our StyleGAN models, we first took the amplitude data from our recordings, along with the X, Y, Z coordinates of the spatial audio, to determine the size and direction of our model’s growth. This data was then entered manually into Cinema 4D to produce that growth.
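The amplitude-transfer step can be sketched outside Max/MSP as follows. This is a minimal mono approximation of the ambisonic workflow, and the file names, hop size, and RMS windowing are assumptions rather than the patch’s actual parameters.

```python
import numpy as np
import soundfile as sf   # pip install soundfile

HOP = 1024  # analysis window in samples (assumed value)

# Load the ocean field recording (mono stand-in for the ambisonic mix);
# the file name is hypothetical
ocean, sr = sf.read("ocean_field_recording.wav")
if ocean.ndim > 1:
    ocean = ocean.mean(axis=1)

# Per-window RMS amplitude envelope of the ocean recording
n_win = len(ocean) // HOP
env = np.array([np.sqrt(np.mean(ocean[i * HOP:(i + 1) * HOP] ** 2))
                for i in range(n_win)])
env = np.repeat(env, HOP)                # stretch back to sample rate

# Impose that envelope on white noise, as the Max/MSP patch does
noise = np.random.uniform(-1, 1, env.shape)
shaped = noise * (env / env.max())
sf.write("ocean_shaped_noise.wav", shaped, sr)
```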
The organic form responds to its environment. The points to the left are the (x, y, z) coordinates of where the audio source moves over time. By turning those points into keyframes and using them as attractors, we let the form grow toward this spatial soundscape.
Controlled growth simulations were created by adding gravity fields to each sound source and then releasing a particle emitter to travel naturally through the space. From there, the particle paths were traced and a volume was built along them.
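A minimal sketch of this particle-and-attractor logic, with invented sound-source positions, gravity strength, and step counts; the actual simulation was built in Cinema 4D, not in code.

```python
import numpy as np

# Hypothetical sound-source positions acting as gravity fields (x, y, z)
attractors = np.array([[ 2.0,  1.0, 0.0],
                       [-1.5,  2.5, 1.0],
                       [ 0.0, -2.0, 2.0]])

G = 0.8        # assumed gravity strength per field
DT = 0.05      # integration time step
STEPS = 400

# Emit particles from the origin with small random velocities
rng = np.random.default_rng(0)
pos = np.zeros((50, 3))
vel = rng.normal(0.0, 0.1, pos.shape)

paths = [pos.copy()]
for _ in range(STEPS):
    acc = np.zeros_like(pos)
    for a in attractors:
        d = a - pos                                   # vector toward each field
        r = np.linalg.norm(d, axis=1, keepdims=True) + 1e-6
        acc += G * d / r**3                           # inverse-square pull
    vel += acc * DT
    pos = pos + vel * DT
    paths.append(pos.copy())

# Stack the traced paths: shape (STEPS + 1, particles, 3). In Cinema 4D,
# splines traced along such paths are then given volume to build the form.
trajectory = np.stack(paths)
```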
INFORMATION
Anastasiia Raina is an Associate Professor at RISD. In her research-based practice, she explores the aesthetics of technologically mediated nature and the environment through machine vision, evolutionary biology, and the incorporation of biotechnology into design practice. She draws upon scientific inquiry and works with scientists to generate new methodologies in design.