Aodyo Instruments shared this set of videos, demonstrating the possibilities of using their new Loom controller with the Waldorf Iridium Core synthesizer.
The Loom is a multidimensional multitouch surface that supports independent control of pitch (X), timbre (Y), and dynamics (Z) for each finger. It can be used to control any MIDI instrument, but these demos are designed to highlight how it can be used with an MPE-compatible synth like the Iridium Core.
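If you're curious how per-finger X/Y/Z control typically reaches a synth over MIDI, here's a minimal sketch of the conventional MPE mapping: each finger gets its own member channel, with pitch bend carrying X, CC74 (the MPE "slide"/timbre controller) carrying Y, and channel pressure carrying Z. This is a generic illustration using the mido Python library, not Aodyo's actual firmware; the channel layout and value scaling here are assumptions.

```python
# Generic sketch of MPE-style per-finger expression over MIDI (not Aodyo's code).
# Assumes an MPE lower zone: master on channel 0, members on channels 1-15.
import mido

out = mido.open_output()  # default MIDI output (requires a backend, e.g. python-rtmidi)

def send_finger(member_channel, note, x_bend, y_timbre, z_pressure):
    """Send one finger's state on its own MPE member channel.

    member_channel: 1-15 (channel 0 is reserved as the MPE master channel)
    x_bend:     -1.0..1.0 -> 14-bit per-note pitch bend (X axis)
    y_timbre:    0.0..1.0 -> CC74, the MPE timbre/"slide" controller (Y axis)
    z_pressure:  0.0..1.0 -> channel pressure / aftertouch (Z axis)
    """
    out.send(mido.Message('note_on', channel=member_channel,
                          note=note, velocity=100))
    out.send(mido.Message('pitchwheel', channel=member_channel,
                          pitch=int(x_bend * 8191)))
    out.send(mido.Message('control_change', channel=member_channel,
                          control=74, value=int(y_timbre * 127)))
    out.send(mido.Message('aftertouch', channel=member_channel,
                          value=int(z_pressure * 127)))

# Two fingers sounding at once, each with fully independent expression:
send_finger(1, 60, x_bend=0.10, y_timbre=0.5, z_pressure=0.8)
send_finger(2, 67, x_bend=-0.25, y_timbre=0.9, z_pressure=0.4)
```

Because each finger lives on its own channel, bending or pressing one note leaves the others untouched, which is what separates an MPE controller like the Loom from a conventional keyboard sending everything on a single channel.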
The first video, embedded above, demonstrates using the Loom’s Ribbon and Strum modes with the Iridium. The second video, below, demonstrates Ribbon, Drumkit and Pad modes:
Production of the Aodyo Instruments Loom is being funded via a Kickstarter project, and it is available to project backers starting at €259, 35% off the planned retail price. The project has already more than doubled its funding goal.
*Note: Crowdfunded projects can involve risk. See the project website for more information.
ZAK Sound has introduced Atmospheres, a free reverb plugin for macOS and Windows that’s designed to create immersive spaces and dreamy textures.
Here’s what they say about it:
“Atmospheres is a one-knob plugin for crafting dreamy and atmospheric textures. It was created using its big brother, the Deep Waters processor, renowned for its water-based approach to creating distinctive reverbs.
This free plugin allows you to create textures and effects without thinking too much, with just a knob and a button. The Randomizer function allows you to change many internal parameters and reverb types with a single touch.”
Stability AI’s music generator can now edit and transform your samples and produce full-length, structured tracks. With this update, Stable Audio 2.0 is edging closer to becoming a revolutionary tool for music-makers.
Last year, we reported on the launch of Stable Audio, a text-to-audio AI music generator from the makers of Stable Diffusion capable of producing musical clips of up to 90 seconds based on text prompts provided by the user.
This week, Stability AI launched Stable Audio 2.0, a substantial update to its AI-powered platform that brings the ability to generate “high-quality” and fully structured tracks of up to three minutes in length, in addition to melodies, backing tracks, stems and sound effects.
What’s more interesting than the increase in track length and structural complexity, though, is that Stable Audio 2.0 can now process and transform audio uploaded by the user.
This can be used for all sorts of musical applications; in a demo shared on Stable Audio’s YouTube channel, a synth bassline is transformed into the sound of a bass guitar playing the same melody, a snippet of scat singing is used to determine the pitch and rhythm of a drum part, and a beatboxing recording is morphed into a lo-fi hip-hop beat.
Stable Audio says that its audio-to-audio capabilities are intended to help its users “transform ideas into fully produced samples”. While the feature is still in its early stages, the possibilities here are endless, and it could become a hugely useful tool in a music production workflow – especially if it were integrated into a DAW or a plugin.
We asked Stability AI if this kind of integration was something they might explore in the future; while they didn’t comment specifically on any future projects, they said that “enhancing current production workflows by providing more granular control and editing capabilities” is a focus for the company.
Stable Audio 2.0’s ability to generate sound effects has also been improved, giving users the potential to add AI-generated imitations of real-world sounds to their creations.
Like its first iteration, Stable Audio 2.0 was trained on a dataset of 800,000 audio files from the AudioSparx sound library. All of the artists behind the music in the dataset were given the option to ‘opt out’ and stop their music from being used in the model training process.
In November last year, Stability AI’s VP of Audio, Ed Newton-Rex, resigned from his post, describing the practices used in training the model as “exploitative”.
“Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works,” Newton-Rex said in a statement. “I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.”
While Stability AI declined to comment on Newton-Rex’s statement, they reiterated that the Stable Audio model is trained solely on data from their partners at AudioSparx. “All of AudioSparx’s artists are compensated and were given the option to ‘opt out’ of the model training,” they told us.
We also asked Stability AI whether they believe that tools such as Stable Audio have the potential to impact the livelihoods of working musicians as they continue to advance in sophistication.
“Our mission at Stability AI is to build the foundation to amplify humanity’s potential, which includes artists,” they replied. “Our aim is to expand the creative toolkit for artists and musicians with our cutting-edge technology, thereby enhancing their creativity.”