BeSplat: Gaussian Splatting from a Single Blurry Image and Event Stream

WACV 2025 Workshop
Overview

Starting from a single motion-blurred image and its corresponding event stream, BeSplat jointly learns the scene representation through Gaussian Splatting and recovers the camera motion trajectory within the exposure time. It demonstrates state-of-the-art performance on both synthetic and real datasets, excelling in deblurring, view-consistent novel view synthesis, and rendering sharp images, while training faster and consuming less GPU memory than prior radiance-field approaches.
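The forward model underlying this setup can be sketched simply: a motion-blurred image is the temporal average of sharp images rendered along the camera trajectory during the exposure. A minimal sketch is below; `toy_render` is a hypothetical stand-in for the Gaussian Splatting rasterizer, not the paper's actual renderer.

```python
import numpy as np

def synthesize_blurry(render, poses):
    """Approximate the physical blur model B = (1/T) * integral of L(t) dt
    by averaging sharp renders at poses sampled along the trajectory."""
    frames = [render(pose) for pose in poses]
    return np.mean(frames, axis=0)

# Toy usage: the stand-in renderer shifts a bright square by the pose's
# horizontal translation, mimicking blur from lateral camera motion.
def toy_render(pose_tx):
    img = np.zeros((8, 8))
    img[2:5, 2 + pose_tx:5 + pose_tx] = 1.0
    return img

blurry = synthesize_blurry(toy_render, poses=[0, 1, 2])
```

Minimizing the difference between this synthesized blur and the captured blurry image is what ties the sharp scene representation to a single blurred observation.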

Abstract

Novel view synthesis has been greatly enhanced by the development of radiance field methods. The introduction of 3D Gaussian Splatting (3DGS) has effectively addressed key challenges, such as long training times and slow rendering speeds, typically associated with Neural Radiance Fields (NeRF), while maintaining high-quality reconstructions. In this work (BeSplat), we demonstrate the recovery of sharp radiance field (Gaussian splats) from a single motion-blurred image and its corresponding event stream.

Our method jointly learns the scene representation via Gaussian Splatting and recovers the camera motion through a Bézier SE(3) formulation, minimizing discrepancies between the synthesized and real-world measurements of both the blurry image and the corresponding event stream. We evaluate our approach on both synthetic and real datasets, showcasing its ability to render view-consistent, sharp images from the learned radiance field and the estimated camera trajectory. To the best of our knowledge, ours is the first work to address this highly challenging, ill-posed problem in a Gaussian Splatting framework with the effective incorporation of temporal information captured by the event stream.
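A common way to realize such a Bézier SE(3) trajectory is to place control points in the Lie algebra (twists) and blend them with Bernstein weights. The sketch below follows that pattern under two stated assumptions: cubic (4-control-point) curves, and rotation and translation mapped separately via the rotation exponential rather than the full SE(3) exponential (which couples them through the V matrix); the paper's exact parameterization may differ.

```python
import numpy as np
from math import comb
from scipy.spatial.transform import Rotation

def bezier_pose(ctrl, t):
    """Evaluate a cubic Bézier camera trajectory at time t in [0, 1].

    ctrl: (4, 6) array of control points, each a twist [omega | v], where
    omega is an axis-angle rotation and v a translation.
    Returns a (3, 3) rotation matrix and a (3,) translation vector.
    """
    n = len(ctrl) - 1
    # Bernstein basis weights for a degree-n Bézier curve.
    w = np.array([comb(n, i) * (1 - t) ** (n - i) * t ** i for i in range(n + 1)])
    xi = w @ ctrl                                # blended twist
    R = Rotation.from_rotvec(xi[:3]).as_matrix() # exp map on the rotation part
    return R, xi[3:]

# Sample poses across the exposure; rendering at each and averaging the
# frames yields the synthesized blurry image compared against the input.
ctrl = np.zeros((4, 6))
ctrl[3] = [0.0, 0.0, np.pi / 8, 0.1, 0.0, 0.0]  # end pose: small yaw + shift
poses = [bezier_pose(ctrl, t) for t in np.linspace(0.0, 1.0, 9)]
```

Because the trajectory is differentiable in the control points, the photometric and event losses can be backpropagated to refine the camera motion jointly with the Gaussians.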

Keywords: Gaussian Splatting, Event Stream, Pose Estimation, Deblurring, Novel View Synthesis, 3D from a Single Image

Pipeline

Pipeline Overview

Results

Qualitative evaluation results are presented for both synthetic and real datasets. They show that our method performs on par with BeNeRF while offering significant advantages: faster training, real-time rendering, and reduced GPU memory usage. Notably, prior learning-based methods struggle to generalize, whereas our method maintains reconstruction quality, and it achieves competitive results even on real, noisy datasets.


Real-World Dataset Results

(Five image/GIF result panels.)


Synthetic Dataset Results

(Five image/GIF result panels.)

Comparison

To assess the effectiveness of our method for image deblurring, we compare it with state-of-the-art deep learning-based single-image deblurring techniques (DeblurGANv2, MPRNet, NAFNet, and Restormer), the event-enhanced single-image deblurring method EDI, and BeNeRF.
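For context, EDI (the Event-based Double Integral model) relates a blurry image to its sharp latent frame through the event stream: L(t) = L_f · exp(c·E(t)), so B = L_f · mean_t(exp(c·E(t))), and the latent frame follows by dividing the blurry image by the averaged exponentiated event integral. A rough sketch, assuming per-pixel signed event counts E(t) sampled at discrete times and a known contrast threshold c:

```python
import numpy as np

def edi_deblur(blurry, event_integral, c):
    """Recover the latent sharp frame L_f from a blurry image B via EDI:
    B = L_f * mean_t(exp(c * E(t)))  =>  L_f = B / mean_t(exp(c * E(t))).

    blurry:         (H, W) blurry image.
    event_integral: (K, H, W) signed event counts E(t), accumulated from the
                    latent timestamp to each of K sample times.
    c:              event contrast threshold.
    """
    denom = np.mean(np.exp(c * event_integral), axis=0)
    return blurry / denom

# Sanity check: with no events the scene was static during the exposure,
# so the "blurry" image is already the sharp latent frame.
blurry = np.full((4, 4), 0.5)
latent = edi_deblur(blurry, np.zeros((5, 4, 4)), c=0.2)
```

Unlike EDI, which operates purely in image space, BeSplat uses the same event information to constrain a 3D scene representation, which is what enables view-consistent novel view synthesis on top of deblurring.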

Real-World Dataset

(Qualitative comparison figure on the real-world dataset.)


Synthetic Dataset

(Qualitative comparison figure on the synthetic dataset.)

BibTeX

@misc{matta2024besplatgaussiansplatting,
      title={BeSplat: Gaussian Splatting from a Single Blurry Image and Event Stream}, 
      author={Gopi Raju Matta and Reddypalli Trisha and Kaushik Mitra},
      year={2024},
      eprint={2412.19370},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.19370}, 
}