Adobe Research has launched a new experimental AI tool designed to revolutionize music creation and audio editing.
Project Music GenAI Control will allow creators to generate music from text prompts and edit it to suit their needs.
Adobe's Project Music GenAI Control as Music Co-Creator
Adobe Research explained that the new project aims to help people craft music for various projects. Broadcasters, podcasters, and professionals in other audio-related fields can take advantage of the AI-powered tool.
"One of the exciting things about these new tools is that they aren't just about generating audio - they're taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It's a kind of pixel-level control for music," Adobe Research senior research scientist Nicholas Bryan stated.
The model works much like Adobe's Firefly. Users input a text prompt such as "sad jazz" or "powerful dance" to generate the music they want. From there, fine-grained editing tools integrated directly into the workflow let them refine the result.
Project Music GenAI Control Eliminates Manual Work
Users can also supply a reference melody and ask the model to adjust its tempo, structure, and repeating patterns. They can increase or decrease the audio's intensity, extend the length of a clip, remix a section, or create a seamlessly repeatable loop.
Project Music GenAI Control could eliminate the labor of manually cutting existing music to create different segments, such as intros, outros, and background audio.
Adobe's new generative AI tool is still in development. The project is a collaboration with experts from the University of California, San Diego and the School of Computer Science at Carnegie Mellon University.
The tool is not yet available to the public and has no specific release date.