Google has released Magenta RealTime, a new neural network for generating music in real time. Instead of producing entire tracks at once, the model generates music in 2-second chunks, which makes it usable even on modest hardware.
The model was trained on about 190,000 hours of instrumental music from open sources (no vocals) and has roughly 800 million parameters. Each new chunk is conditioned on the last 10 seconds of generated audio, which keeps the music coherent over time. Generating one 2-second chunk takes about 1.25 seconds on the free tier of Google Colab, faster than real time.
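The chunked design is easy to picture as a loop: at each step the model sees only the most recent 10 seconds of audio and appends 2 more. The sketch below is a conceptual illustration of that sliding-context loop, not the actual Magenta RealTime API; the model handle and the generate_chunk call are hypothetical placeholders.

```python
# Conceptual sketch of chunked real-time generation with a sliding context
# window. The names (model, generate_chunk) are hypothetical placeholders,
# not the real Magenta RealTime API; see the official repo for actual usage.
import numpy as np

SAMPLE_RATE = 48_000      # assumed output sample rate
CHUNK_SECONDS = 2         # the model emits audio in 2-second chunks
CONTEXT_SECONDS = 10      # each chunk is conditioned on the last 10 seconds

def generate_stream(model, num_chunks: int) -> np.ndarray:
    """Generate audio chunk by chunk, feeding back a rolling 10-second context."""
    context = np.zeros(CONTEXT_SECONDS * SAMPLE_RATE, dtype=np.float32)
    output = []
    for _ in range(num_chunks):
        # Hypothetical call: the model only ever sees the trailing context,
        # so memory and latency stay constant no matter how long the stream runs.
        chunk = model.generate_chunk(context)  # shape: (CHUNK_SECONDS * SAMPLE_RATE,)
        output.append(chunk)
        # Slide the window: drop the oldest 2 seconds, append the new chunk.
        context = np.concatenate([context[len(chunk):], chunk])
    return np.concatenate(output)
```

Because each 2-second chunk takes about 1.25 seconds to produce, the loop stays ahead of playback, which is what makes live, interactive use possible.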