FMA-Net++: Motion- and Exposure-Aware
Real-World Joint Video Super-Resolution and Deblurring

1Korea Advanced Institute of Science and Technology (KAIST),
2Chung-Ang University
Co-corresponding authors

Interactive Real-World Demo

Captured with Samsung Galaxy S23+ (via Pro Video Mode) at Gangnam-daero, Seoul

Interactive slider comparing the Blurry LR Input against the FMA-Net++ output.

Abstract

Real-world video restoration is plagued by complex degradations in which motion couples with dynamically varying exposure, a common artifact of auto-exposure and low-light capture that prior work has largely overlooked.

We present FMA-Net++, a framework for joint video super-resolution and deblurring that explicitly models this coupled effect of motion and dynamically varying exposure. FMA-Net++ adopts a sequence-level architecture built from Hierarchical Refinement with Bidirectional Propagation blocks, enabling parallel, long-range temporal modeling. Within each block, an Exposure Time-aware Modulation layer conditions features on per-frame exposure, which in turn drives an exposure-aware Flow-Guided Dynamic Filtering module to infer motion- and exposure-aware degradation kernels.
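The paper does not spell out the internals of the Exposure Time-aware Modulation (ETM) layer here, but conditioning features on a per-frame scalar is commonly realized as FiLM-style modulation: a small MLP maps the exposure value to per-channel scale and shift. The sketch below is a minimal NumPy illustration of that idea under these assumptions; the function name, MLP shape, and the `1 + gamma` residual form are illustrative choices, not the released implementation.

```python
import numpy as np

def exposure_modulation(feat, exposure, w1, b1, w2, b2):
    """FiLM-style sketch of exposure-conditioned feature modulation.

    feat:     (C, H, W) feature map for one frame
    exposure: scalar, normalized per-frame exposure time
    w1, b1:   weights/bias of a tiny hidden layer, shapes (hidden,), (hidden,)
    w2, b2:   weights/bias producing scale+shift, shapes (2C, hidden), (2C,)
    """
    h = np.maximum(w1 * exposure + b1, 0.0)   # hidden activation (ReLU)
    out = w2 @ h + b2                         # (2C,) -> per-channel gamma, beta
    C = feat.shape[0]
    gamma, beta = out[:C], out[C:]
    # residual modulation: exposure scales and shifts each channel
    return feat * (1.0 + gamma[:, None, None]) + beta[:, None, None]
```

With zero biases, an exposure of 0 leaves the features untouched, so the modulation smoothly "turns on" as exposure deviates, which is one reason the residual `1 + gamma` form is a popular conditioning choice.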

FMA-Net++ decouples degradation learning from restoration: the former predicts exposure- and motion-aware priors to guide the latter, improving both accuracy and efficiency. To evaluate under realistic capture conditions, we introduce REDS-ME (multi-exposure) and REDS-RE (random-exposure) benchmarks. Trained solely on synthetic data, FMA-Net++ achieves state-of-the-art accuracy and temporal consistency on our new benchmarks and GoPro, outperforming recent methods in both restoration quality and inference speed, and generalizes well to challenging real-world videos.

Proposed Framework

The overall architecture of FMA-Net++. We propose flow-guided dynamic filtering (FGDF) and iterative feature refinement with multi-attention (FRMA).
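The core idea behind flow-guided dynamic filtering is that each pixel gets its own predicted kernel, and the kernel taps sample at positions shifted by the predicted flow rather than a fixed local window. The sketch below is a deliberately naive NumPy rendering of that idea; the function name, integer flow offsets, and grayscale single-frame setup are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def flow_guided_dynamic_filter(img, kernels, flow, k=3):
    """Naive sketch of flow-guided dynamic filtering.

    img:     (H, W) frame (grayscale for simplicity)
    kernels: (H, W, k*k) per-pixel filter weights (assumed normalized)
    flow:    (H, W, 2) integer offsets (dy, dx) that shift where each
             pixel's kernel taps sample, instead of a fixed window
    """
    H, W = img.shape
    r = k // 2
    out = np.zeros_like(img, dtype=np.float64)
    for y in range(H):
        for x in range(W):
            idx = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    # every tap is displaced by the flow at (y, x),
                    # with clamping at the image border
                    sy = np.clip(y + dy + int(flow[y, x, 0]), 0, H - 1)
                    sx = np.clip(x + dx + int(flow[y, x, 1]), 0, W - 1)
                    out[y, x] += kernels[y, x, idx] * img[sy, sx]
                    idx += 1
    return out
```

A real implementation would vectorize this (e.g. via gather/warp ops) and use sub-pixel flow with bilinear sampling; the loops here only make the sampling pattern explicit.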


Figure 1. Architecture of FMA-Net++.


Figure 2. Structure of the Hierarchical Refinement with Bidirectional Propagation (HRBP) block.

Quantitative Results

Comparison with state-of-the-art methods on REDS-ME and REDS-RE benchmarks. FMA-Net++ achieves superior performance while maintaining high efficiency.


Table 1. Quantitative comparison on the REDS4-ME-5:4 and REDS4-ME-5:5 datasets.


Figure 3. Performance vs. Runtime (GoPro dataset).


Table 2. Quantitative comparison on the REDS-RE and GoPro datasets.

Qualitative Results


Figure 4. Visual comparisons on challenging real-world videos (NIQE↓ / MUSIQ↑).