YouTube has unveiled two new AI music experiments, developed in partnership with Google DeepMind and built on its new AI music generation model, ‘Lyria’: one that generates tracks in the voices of participating artists, and another that offers co-creation tools for musicians.
The experiments, which are part of YouTube’s AI Experiments program, are designed to showcase the potential of AI for music creation and collaboration, as well as to gather feedback from users and artists to shape future products.
The first experiment, called ‘Dream Track’, lets users generate 30-second AI vocal tracks from text prompts. It is powered by Lyria, Google DeepMind’s music generation model, which produces expressive music across a range of genres and styles. Users enter a text prompt and choose a participating artist’s voice to create a unique AI song. Participating artists include T-Pain, Charlie Puth, Sia, Demi Lovato, and John Legend.
The second experiment, called ‘Music AI Tools’, lets users hum a melody to generate an instrumental, or transform the style of an existing track, using audio processing and synthesis techniques. Users can also customize and fine-tune the results through interactive interfaces.
The experiments are currently limited to a small test group, but YouTube has released a demo video showcasing the tools’ results and features. The video also includes testimonials from artists who collaborated with YouTube and Google DeepMind on the project, such as T-Pain, who said: “AI is the future of music. It’s the future of everything.”
YouTube’s new AI music experiments are part of its ongoing efforts to explore the intersection of AI and creativity, and to empower users and artists to discover new ways of expressing themselves through music. YouTube hopes that these experiments will inspire new genres and styles of music, as well as foster new collaborations and communities.