MIVE: New Design and Benchmark for Multi-Instance Video Editing

¹Korea Advanced Institute of Science and Technology, South Korea
²Adobe Research, California
³Chung-Ang University, South Korea

*Equal Contribution, Co-Corresponding Authors

TL;DR: MIVE is a general-purpose zero-shot multi-instance video editing framework that uses novel sampling and probability redistribution techniques to enable faithful instance edits while preventing unintended changes in diverse video scenarios.

Abstract

Recent AI-based video editing has enabled users to edit videos through simple text prompts, significantly simplifying the editing process. However, recent zero-shot video editing techniques primarily focus on global or single-object edits, which can lead to unintended changes in other parts of the video. When multiple objects require localized edits, existing methods face challenges, such as unfaithful editing, editing leakage, and lack of suitable evaluation datasets and metrics. To overcome these limitations, we propose a zero-shot Multi-Instance Video Editing framework, called MIVE. MIVE is a general-purpose mask-based framework, not dedicated to specific objects (e.g., people). MIVE introduces two key modules: (i) Disentangled Multi-instance Sampling (DMS) to prevent editing leakage and (ii) Instance-centric Probability Redistribution (IPR) to ensure precise localization and faithful editing. Additionally, we present our new MIVE Dataset featuring diverse video scenarios and introduce the Cross-Instance Accuracy (CIA) Score to evaluate editing leakage in multi-instance video editing tasks. Our extensive qualitative, quantitative, and user study evaluations demonstrate that MIVE significantly outperforms recent state-of-the-art methods in terms of editing faithfulness, accuracy, and leakage prevention, setting a new benchmark for multi-instance video editing.
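The Cross-Instance Accuracy (CIA) Score mentioned above can be illustrated conceptually: given a similarity score between each edited instance region and each target instance caption, an instance counts as correctly edited when it matches its own caption more strongly than any other instance's caption, so leakage shows up as off-diagonal wins. The following is a minimal illustrative sketch, not the paper's exact protocol; the similarity matrix is assumed to come from a generic image-text scorer such as CLIP:

```python
import numpy as np

def cia_score(sim):
    """Conceptual Cross-Instance Accuracy sketch.

    sim[i, j] is the similarity between edited instance i's region and
    instance j's target caption. Instance i is counted as leak-free when
    its own caption wins (argmax over j equals i); the score is the
    fraction of such instances.
    """
    own_caption_wins = sim.argmax(axis=1) == np.arange(sim.shape[0])
    return float(own_caption_wins.mean())
```

For example, a similarity matrix whose diagonal dominates yields a score of 1.0, while a row whose maximum lies off the diagonal (an edit that resembles another instance's caption) lowers the score.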

Method

In this work, we present MIVE, a general-purpose, zero-shot Multi-Instance Video Editing framework that disentangles the multi-instance editing process to achieve faithful edits with reduced attention leakage. MIVE accomplishes this through two modules: (i) Disentangled Multi-instance Sampling (DMS), which reduces editing leakage, and (ii) Instance-centric Probability Redistribution (IPR), which enhances editing localization and faithfulness.

In our DMS (shown below), each instance is edited independently via latent parallel sampling (blue box); the resulting denoised instance latents are then harmonized through noise parallel sampling (green box), preceded by latent fusion (yellow box) and re-inversion (orange box).
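The data flow of one DMS iteration can be sketched as follows. This is a toy illustration only: the functions `denoise_step` and `reinvert` are linear stand-ins for a text-conditioned diffusion model's denoising and DDIM-style inversion steps, which the real framework uses on video latents:

```python
import numpy as np

def denoise_step(latent, target, rng):
    # Stub: pull the latent slightly toward its "prompt" target.
    # A real implementation would run one diffusion denoising step.
    return latent - 0.1 * (latent - target)

def reinvert(latent, noise_scale, rng):
    # Stub: push the fused latent back to the current noise level.
    return latent + noise_scale * rng.standard_normal(latent.shape)

def dms_step(latent, masks, prompt_targets, noise_scale=0.05, seed=0):
    """One toy DMS iteration:
    (i)   latent parallel sampling: denoise each instance independently,
          so instance edits cannot leak into one another;
    (ii)  latent fusion: each instance contributes only inside its mask;
    (iii) re-inversion of the fused latent;
    (iv)  noise parallel sampling: one joint pass to harmonize the result.
    """
    rng = np.random.default_rng(seed)
    # (i) independent per-instance denoising
    instance_latents = [denoise_step(latent, t, rng) for t in prompt_targets]
    # (ii) fuse instances into a background/global pass via their masks
    fused = denoise_step(latent, 0.0, rng)
    for inst, m in zip(instance_latents, masks):
        fused = np.where(m, inst, fused)
    # (iii) re-invert the fused latent
    fused = reinvert(fused, noise_scale, rng)
    # (iv) harmonize with a shared denoising pass
    return denoise_step(fused, fused.mean(), rng)
```

The key design point this sketch captures is that step (i) never mixes instances, while steps (ii)-(iv) reconcile the independently edited regions into one coherent latent.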

Since our DMS requires that the edited objects appear within their masks, we propose our IPR (shown below) to ensure that this condition is consistently met.
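One way to picture IPR is as a reweighting of cross-attention rows: for pixels inside an instance's mask, probability mass is shifted from unrelated text tokens onto that instance's own tokens, which encourages the edit to materialize inside the mask. The sketch below is a conceptual reconstruction under that assumption, with a hypothetical redistribution ratio `lam`; it is not the paper's exact formulation:

```python
import numpy as np

def ipr(attn_probs, mask, instance_token_idx, lam=0.8):
    """Toy Instance-centric Probability Redistribution.

    attn_probs: (num_pixels, num_tokens) rows of softmaxed cross-attention.
    mask: (num_pixels,) boolean, True inside the instance's region.
    Inside the mask, a fraction `lam` of the mass on non-instance tokens
    is moved onto the instance's tokens, keeping each row a distribution.
    """
    probs = attn_probs.copy()
    inst = np.zeros(probs.shape[1], bool)
    inst[instance_token_idx] = True
    for r in np.where(mask)[0]:
        moved = lam * probs[r, ~inst].sum()
        probs[r, ~inst] *= (1.0 - lam)          # suppress unrelated tokens
        w = probs[r, inst]
        if w.sum() > 0:
            probs[r, inst] = w + moved * w / w.sum()   # proportional boost
        else:
            probs[r, inst] = w + moved / inst.sum()    # uniform fallback
    return probs
```

Because the mass removed from non-instance tokens is exactly the mass added to instance tokens, each attention row still sums to one after redistribution.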

MIVE Dataset

We also present our new MIVE Dataset specifically designed for multi-instance video editing tasks. MIVE Dataset features 200 diverse videos sourced from the VIPSeg dataset.

We generated the source captions with LLaVA and summarized them with Llama 3. We then manually inserted tags into the source captions to establish instance-to-mask correspondence. Finally, we generated the target edit captions using Llama 3. A sample input video with its source and target captions is shown below; the target instance captions are color-coded to match the color of the masks.

Source Video:
Source Caption: In a domestic setting, a person in a gray hoodie stands in front of washing machine A and washing machine B against a blue wall, with a blue recycling trash can to the left.

Masked Source Video:
Target Caption: In a domestic setting, an alien stands in front of oven and yellow washing machine against a blue wall, with a blue recycling trash can to the left.

MIVE Editing Results

Hover over the videos to see the original video and text prompts.

Single-Instance Editing: MIVE Dataset
"a Pembroke Welsh Corgi"

Multi-Instance Editing: MIVE Dataset
"an astronaut in blue suit", "a sorceress with black mask", "a red metallic robot"

Multi-Instance Editing: Video-in-the-Wild
"a white rabbit", "a colorful parrot"

Partial Instance Editing: Video-in-the-Wild
"a pink dress with flowers"

Comparison

Attention leakage examples are indicated by green arrows, and unfaithful editing examples by red arrows. The target instance captions are color-coded to match the color of the masks.

BibTeX

@article{teodoro2024mive,
  title={MIVE: New Design and Benchmark for Multi-Instance Video Editing},
  author={Samuel Teodoro and Agus Gunawan and Soo Ye Kim and Jihyong Oh and Munchurl Kim},
  journal={arXiv preprint arXiv:2412.12877},
  year={2024}
}