MoBluRF: Motion Deblurring Neural Radiance Fields for Blurry Monocular Video

IEEE TPAMI

Minh-Quan Viet Bui*      Jongmin Park*      Jihyong Oh     Munchurl Kim
*Co-first authors (equal contribution)
Co-corresponding authors
KAIST, South Korea        Chung-Ang University, South Korea

Dynamic Deblurring NVS Comparisons on Blurry iPhone

Dynamic Deblurring NVS Comparisons on Stereo Blur (Test-view)

Abstract

We propose MoBluRF, a novel motion deblurring NeRF framework for blurry monocular video, consisting of a Base Ray Initialization (BRI) stage and a Motion Decomposition-based Deblurring (MDD) stage. In the BRI stage, we coarsely reconstruct dynamic 3D scenes and jointly initialize the base rays, which are later used to predict latent sharp rays, from the inaccurate camera pose information of the given blurry frames. In the MDD stage, we introduce a novel Incremental Latent Sharp-rays Prediction (ILSP) approach for blurry monocular video frames that decomposes the latent sharp rays into global camera motion and local object motion components.
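The global/local decomposition in ILSP can be sketched as follows. This is a minimal illustration, not the paper's exact parameterization: we assume the global camera motion acts as a rigid transform on the base ray and the local object motion as a small additive residual on the ray direction; all function and argument names are hypothetical.

```python
import numpy as np

def decompose_sharp_rays(base_o, base_d, cam_rot, cam_trans, local_offset):
    """Hypothetical sketch of a latent sharp ray: the base ray warped by a
    global camera-motion transform (cam_rot, cam_trans), plus a local
    object-motion residual (local_offset) on the ray direction."""
    # global camera motion: rigidly transform origin and direction
    o = cam_rot @ base_o + cam_trans
    d = cam_rot @ base_d
    # local object motion: additive residual, then renormalize the direction
    d = d + local_offset
    return o, d / np.linalg.norm(d)

# With identity camera motion and no local residual, the base ray is unchanged.
o, d = decompose_sharp_rays(
    np.zeros(3), np.array([0.0, 0.0, 1.0]),
    np.eye(3), np.zeros(3), np.zeros(3),
)
```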

Framework Architecture

network

Overview of our MoBluRF framework. To effectively optimize the sharp radiance field with the imprecise camera poses from blurry video frames, we design our MoBluRF with two main procedures (Algo. 2): (a) the Base Ray Initialization (BRI) Stage (Sec. III-C and Algo. 1) and (b) the Motion Decomposition-based Deblurring (MDD) Stage (Sec. III-D).

Quantitative Comparisons with Other Methods

On the Blurry iPhone dataset

We utilize the co-visibility masked image metrics mPSNR, mSSIM, and mLPIPS, following the approach introduced by Dycheck. These metrics mask out the regions of the test video frames that are not observed by the training camera. We further utilize tOF to measure the temporal consistency of the reconstructed video frames.
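The masking idea behind these metrics can be sketched for mPSNR: the error is averaged only over pixels marked co-visible, so regions the training camera never observes do not penalize the reconstruction. This is a simplified illustration, not Dycheck's exact implementation; the function name and boolean-mask convention are assumptions.

```python
import numpy as np

def masked_psnr(pred, gt, mask, max_val=1.0):
    """PSNR computed only over co-visible pixels (mask == True).

    pred, gt: (H, W, 3) float arrays in [0, max_val].
    mask:     (H, W) boolean co-visibility mask.
    """
    mask = mask.astype(bool)
    mse = np.mean((pred[mask] - gt[mask]) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# toy example: the only erroneous pixel lies outside the co-visible region
gt = np.zeros((4, 4, 3))
pred = gt.copy()
pred[0, 0] = 1.0                      # error at one pixel
mask = np.ones((4, 4), dtype=bool)
mask[0, 0] = False                    # that pixel is not co-visible
```

With the erroneous pixel masked out, the masked PSNR is infinite, whereas an all-ones mask would report a finite, penalized score.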

On the Stereo Blur dataset

We utilize PSNR, SSIM, and LPIPS to compare the performance of different methods, following the approach of DyBluRF. We reproduce the performance of DyBluRF using the official code.

BibTeX

@ARTICLE{11017407,
      author={Bui, Minh-Quan Viet and Park, Jongmin and Oh, Jihyong and Kim, Munchurl},
      journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
      title={MoBluRF: Motion Deblurring Neural Radiance Fields for Blurry Monocular Video}, 
      year={2025},
      pages={1-18},
      keywords={Neural radiance field;Dynamics;Cameras;Rendering (computer graphics);Optimization;Trajectory;Geometry;Training;Three-dimensional displays;Kernel;Motion Deblurring NeRF;Dynamic NeRF;Video View Synthesis},
      doi={10.1109/TPAMI.2025.3574644}}