Open Access

      PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network

      Preprint

          Abstract

          Music creation typically consists of two parts: composing the musical score, and then performing the score with instruments to make sounds. While recent work has made much progress in automatic music generation in the symbolic domain, few attempts have been made to build an AI model that can render realistic music audio from musical scores. Directly synthesizing audio with sound sample libraries often leads to mechanical and deadpan results, since musical scores do not contain performance-level information, such as subtle changes in timing and dynamics. Moreover, while the task may sound like a text-to-speech synthesis problem, there are fundamental differences, since music audio has rich polyphonic sounds. To build such an AI performer, we propose in this paper a deep convolutional model that learns in an end-to-end manner the score-to-audio mapping between a symbolic representation of music called the piano roll and an audio representation of music called the spectrogram. The model consists of two subnets: the ContourNet, which uses a U-Net structure to learn the correspondence between piano rolls and spectrograms and to give an initial result; and the TextureNet, which further uses a multi-band residual network to refine the result by adding the spectral texture of overtones and timbre. We train the model to generate music clips of the violin, cello, and flute, with a dataset of moderate size. We also present the result of a user study that shows our model achieves higher mean opinion scores (MOS) in naturalness and emotional expressivity than a WaveNet-based model and two commercial sound libraries. We open-source our code at https://github.com/bwang514/PerformanceNet.
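
          The abstract's input representation, the piano roll, is a pitch-by-time matrix indicating which notes sound in each time frame. The following sketch shows one plausible way to build such a matrix from note events; the pitch range, frame rate, and function name are illustrative assumptions, not the paper's actual preprocessing.

```python
import numpy as np

# Assumed constants for illustration; PerformanceNet's real settings differ.
N_PITCHES = 128   # full MIDI pitch range
FPS = 16          # time frames per second (assumption)

def piano_roll(notes, duration_sec, n_pitches=N_PITCHES, fps=FPS):
    """Build a binary piano roll from (midi_pitch, onset_sec, offset_sec) events."""
    n_frames = int(duration_sec * fps)
    roll = np.zeros((n_pitches, n_frames), dtype=np.float32)
    for pitch, onset, offset in notes:
        start = int(onset * fps)
        end = max(start + 1, int(offset * fps))  # every note gets >= 1 frame
        roll[pitch, start:end] = 1.0
    return roll

# Two overlapping notes (C4 and E4) over a 2-second clip; overlap
# illustrates the polyphony that distinguishes this task from TTS.
notes = [(60, 0.0, 1.0), (64, 0.5, 2.0)]
roll = piano_roll(notes, duration_sec=2.0)
print(roll.shape)  # (128, 32)
```

          A model in the spirit of the paper would then map each such (pitch x time) matrix to a (frequency x time) spectrogram of the performed audio.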


              Author and article information

              Date: 11 November 2018
              Article type: Preprint
              arXiv ID: 1811.04357
              Record ID: dba164b3-4904-4958-bc22-51b21e69e763
              License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
              Custom metadata: 8 pages, 6 figures, AAAI 2019 camera-ready version
              Subject classes: cs.SD, cs.MM, eess.AS
              Disciplines: Electrical engineering, Graphics & Multimedia design
