arxiv:2410.00344
Integrating Text-to-Music Models with Language Models: Composing Long Structured Music Pieces
Published on Oct 1, 2024
Abstract
Recent transformer-based music generation methods have a context window of up to about a minute, and the music they generate is largely unstructured beyond that window. Simply extending the context window does not help: learning long-scale structure directly from musical data is prohibitively challenging. This paper proposes integrating a text-to-music model with a large language model to generate music with form, and discusses solutions to the challenges that such integration raises. Experimental results show that the proposed method can generate 2.5-minute-long music that is highly structured, well organized, and cohesive.
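The abstract only sketches the integration, so the following is a minimal illustrative sketch of the plan-then-render idea rather than the authors' implementation: an LLM plans the piece's form as a sequence of per-section text prompts, and a text-to-music model renders each section, which are then crossfaded into one long piece. The use of MusicGen, the llm_plan_sections placeholder (standing in for an actual LLM call), and the specific prompts and crossfade length are all assumptions made for the example.

```python
import numpy as np
import torch
from transformers import AutoProcessor, MusicgenForConditionalGeneration


def llm_plan_sections(theme: str) -> list[str]:
    """Placeholder for the LLM call: return one text prompt per section.

    In the paper's setting, a large language model would produce a
    structured plan (e.g., intro / theme / development / recap / coda);
    here the plan is hardcoded for illustration.
    """
    return [
        f"{theme}, quiet sparse intro",
        f"{theme}, main theme, full arrangement",
        f"{theme}, development, rising tension",
        f"{theme}, recapitulation of the main theme",
        f"{theme}, slow fading outro",
    ]


def crossfade(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Linearly crossfade two mono clips over `overlap` samples."""
    fade = np.linspace(0.0, 1.0, overlap)
    mixed = a[-overlap:] * (1.0 - fade) + b[:overlap] * fade
    return np.concatenate([a[:-overlap], mixed, b[overlap:]])


processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
sr = model.config.audio_encoder.sampling_rate

piece = None
for prompt in llm_plan_sections("solo piano nocturne in E minor"):
    inputs = processor(text=[prompt], padding=True, return_tensors="pt")
    with torch.no_grad():
        # MusicGen returns audio of shape (batch, channels, samples);
        # ~512 new tokens is roughly 10 seconds per section.
        audio = model.generate(**inputs, max_new_tokens=512)
    clip = audio[0, 0].cpu().numpy()
    # Half-second crossfade between consecutive sections.
    piece = clip if piece is None else crossfade(piece, clip, overlap=sr // 2)
```

The paper's actual mechanism for coupling the two models (and for keeping sections thematically coherent) may differ; this sketch only shows the division of labor the abstract describes, with the LLM supplying the form and the text-to-music model supplying the audio.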