Google NSynth - Sounds Like Nothing You've Heard Before

Slowly but surely, technology's involvement in the world of music has been increasing! So much so that just last year, in 2016, the very first tracks composed entirely by Artificial Intelligence (a.k.a. AI) were released. From "Daddy's Car", created by the Paris-based Flow Machines project, to Google Magenta's own 90-second piano melody, completely machine-generated music is freakishly close to becoming a reality!

Magenta, a project from the Google Brain team, has been all about putting the 'art' back into Artificial Intelligence since it was announced last year. It aims to explore how machine learning can be used to create compelling art and music. The ultimate goal of Magenta is to advance AI-generated music and art and to build a community of artists around it. Reaching that goal means building generative systems that plug into the tools artists already work with - first by developing algorithms that can learn how to generate art and music, and potentially create artistic content on their own.

Taking concrete steps in this direction, last month the team at Magenta announced NSynth (Neural Synthesizer), “a novel approach to music synthesis designed to aid the creative process” as they state on their official website. The site also goes on to explain, “Unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples. Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”
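To make that contrast concrete, here is a minimal sketch of the "traditional" side of the comparison: a note built from a hand-designed oscillator, where the character of the sound comes entirely from formulas the designer chose rather than from learned data. The sample rate, pitch, and envelope below are illustrative choices, not anything taken from NSynth itself.

```python
import numpy as np

SAMPLE_RATE = 16000  # samples per second (illustrative choice)

def oscillator_note(freq_hz, duration_s):
    """Generate a note the classic way: a hand-designed sine oscillator
    shaped by a simple exponential decay envelope. Every property of the
    sound comes from these explicit formulas, not from data."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * freq_hz * t)  # the oscillator
    envelope = np.exp(-3.0 * t)             # hand-tuned decay
    return tone * envelope

# A 440 Hz "A" lasting two seconds - the kind of hand-designed building
# block NSynth replaces with samples predicted by a neural network.
note = oscillator_note(440.0, 2.0)
```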


Wired offered perhaps the best breakdown of how NSynth works, explaining that it all begins with a massive database of sounds. A wide range of notes from about a thousand different instruments was fed into a neural network. By analyzing these notes, the neural net learned the audible characteristics of each instrument and created a mathematical "vector" for each one. Using these vectors, a machine can mimic the sound of each instrument. But what's important to note here is that it can also blend the vectors of more than one instrument, creating combinations - never-before-heard sounds.
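Roughly, that "vector" idea can be sketched in a few lines: each instrument's note is summarized as a list of numbers, and a new sound is made by blending two of those lists before a decoder turns the result back into audio. The encode/decode steps are only hinted at here - the embeddings and the blend_embeddings helper below are hypothetical placeholders standing in for NSynth's neural networks, just to show the mixing step itself.

```python
import numpy as np

def blend_embeddings(flute_vector, organ_vector, mix=0.5):
    """Blend two instrument 'vectors' by linear interpolation.
    mix=0.0 keeps the flute, mix=1.0 keeps the organ, and values in
    between produce a hybrid that is neither instrument alone."""
    flute_vector = np.asarray(flute_vector, dtype=float)
    organ_vector = np.asarray(organ_vector, dtype=float)
    return (1.0 - mix) * flute_vector + mix * organ_vector

# Hypothetical 16-number embeddings for two instruments playing the same
# note. In NSynth these would come from a trained neural network, not
# from random values as they do in this sketch.
rng = np.random.default_rng(0)
flute = rng.normal(size=16)
organ = rng.normal(size=16)

hybrid = blend_embeddings(flute, organ, mix=0.5)
# A decoder network (not shown) would then turn `hybrid` back into audio,
# yielding a sound that is part flute, part organ - and partly neither.
```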


As audio samples shared by Wired demonstrate, NSynth has already produced some solid hybrids in its exploratory phase, combining the sounds of flute & organ, organ & bass, and bass & flute.


The team from Google will publicly demonstrate and expand on the workings of this technology later this week at Moogfest - the annual multi-day art, music, and technology festival by Moog, held from May 18th to 21st in Durham, North Carolina.


H/T: Fact Mag

Tags: Artificial Intelligence, Google, NSynth
Melody Siganporia, Contributing Writer

When it comes to music, it's all about the Melody. And no, it's not just for the namesake! Digital marketer by day, front row raver by night!
