
      AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models

      Preprint


          Abstract

          Despite recent advancements in learning-based motion in-betweening, a key limitation has been overlooked: the requirement for character-specific datasets. In this work, we introduce AnyMoLe, a novel method that addresses this limitation by leveraging video diffusion models to generate motion in-between frames for arbitrary characters without external data. Our approach employs a two-stage frame generation process to enhance contextual understanding. Furthermore, to bridge the domain gap between real-world and rendered character animations, we introduce ICAdapt, a fine-tuning technique for video diffusion models. Additionally, we propose a "motion-video mimicking" optimization technique, enabling seamless motion generation for characters with arbitrary joint structures using 2D and 3D-aware features. AnyMoLe significantly reduces data dependency while generating smooth and realistic transitions, making it applicable to a wide range of motion in-betweening tasks.
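
          The abstract outlines three components: two-stage frame generation with a video diffusion model, ICAdapt fine-tuning, and a "motion-video mimicking" optimization. As a rough illustration only, the Python sketch below mirrors that pipeline shape; every function name, signature, and stub objective here is hypothetical and stands in for the authors' actual models and losses, which the abstract does not specify.

              import numpy as np

              def icadapt_finetune(video_model, character_renders):
                  # ICAdapt (stub): the paper fine-tunes the video diffusion model
                  # on renders of the target character to bridge the real-vs-rendered
                  # domain gap; here it is a no-op placeholder.
                  return video_model

              def two_stage_inbetween(video_model, key_a, key_b, n_frames):
                  # Two-stage generation (stub): the paper first generates sparse
                  # context frames, then densifies conditioned on them; a linear
                  # blend between the two keyframe images stands in for both stages.
                  ts = np.linspace(0.0, 1.0, n_frames + 2)[1:-1]
                  return [key_a + (key_b - key_a) * t for t in ts]

              def motion_video_mimicking(frames, n_joints, steps=200, lr=0.05):
                  # Mimicking optimization (stub): fit per-frame joint parameters so
                  # a rendered skeleton would track the generated video; the paper
                  # uses 2D and 3D-aware features, here mean intensity is the stand-in.
                  params = np.zeros((len(frames), n_joints))
                  target = np.array([f.mean() for f in frames])[:, None]
                  for _ in range(steps):
                      params -= lr * (params - target)  # grad of 0.5*||params-target||^2
                  return params

              # Toy usage: two 8x8 images stand in for rendered keyframes.
              key_a, key_b = np.zeros((8, 8)), np.ones((8, 8))
              model = icadapt_finetune(video_model=None, character_renders=[key_a, key_b])
              frames = two_stage_inbetween(model, key_a, key_b, n_frames=6)
              motion = motion_video_mimicking(frames, n_joints=24)
              print(motion.shape)  # (6, 24)

          The stubs only preserve the data flow described in the abstract (keyframes, then generated in-between video frames, then per-frame joint parameters); the actual method replaces each stub with a fine-tuned diffusion model and feature-based optimization.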


          Author and article information

          Preprint posted: 11 March 2025
          Article ID: 2503.08417 (arXiv)
          Record ID: c6e94305-b5a4-417b-b611-57afc22edbc9
          License: CC BY-NC-SA 4.0 (http://creativecommons.org/licenses/by-nc-sa/4.0/)

          MSC class: 68U05
          Comments: 11 pages, 10 figures, CVPR 2025
          arXiv categories: cs.GR, cs.AI, cs.CV, cs.LG, cs.MM

          Subjects: Artificial intelligence, Graphics & Multimedia design
