Starting from a single motion-blurred image and its corresponding event stream, BeSplat jointly learns the scene representation through Gaussian Splatting and effectively recovers the camera motion trajectory.
It achieves state-of-the-art performance on both synthetic and real datasets, excelling at deblurring and view-consistent novel view synthesis while rendering sharp images with faster training and lower GPU memory consumption.
Novel view synthesis has been greatly enhanced by the development of radiance field methods. The introduction of 3D Gaussian Splatting (3DGS) has effectively addressed key challenges, such as long training times and slow rendering speeds, typically associated with Neural Radiance Fields (NeRF), while maintaining high-quality reconstructions.
In this work (BeSplat), we demonstrate the recovery of a sharp radiance field (Gaussian splats) from a single motion-blurred image and its corresponding event stream.
Our method jointly learns the scene representation via Gaussian Splatting and recovers the camera motion through a Bézier SE(3) formulation, minimizing the discrepancies between synthesized and real measurements of both the blurry image and the corresponding event stream. We evaluate our approach on synthetic and real datasets, showcasing its ability to render view-consistent, sharp images from the learned radiance field and the estimated camera trajectory.
To the best of our knowledge, ours is the first work to address this highly challenging, ill-posed problem within a Gaussian Splatting framework by effectively incorporating the temporal information captured in the event stream.
Keywords: Gaussian Splatting, Event Stream, Pose Estimation, Deblurring, Novel View Synthesis, 3D from a Single Image
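To make the trajectory formulation concrete, the following is a minimal sketch (not the authors' implementation) of the idea summarized above: camera motion over the exposure is interpolated with a cubic Bézier curve, here simplified to interpolation of se(3) twist vectors, and the blurry image is modeled as the average of sharp renderings along that trajectory. The `render` function stands in for a Gaussian Splatting renderer and is purely illustrative.

```python
import numpy as np

def bezier_se3(control_xis, t):
    """Cubic Bézier over four 6-D se(3) control twists at time t in [0, 1].

    Simplified sketch: interpolation is done directly on twist vectors rather
    than on full SE(3) poses via exp/log maps.
    """
    b = np.array([(1 - t) ** 3,
                  3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t),
                  t ** 3])
    return (b[:, None] * control_xis).sum(axis=0)  # interpolated 6-D twist

def synthesize_blur(control_xis, render, n_samples=16):
    """Model the blurry image as the mean of sharp renderings along the trajectory."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_samples):
        xi = bezier_se3(control_xis, t)   # virtual camera pose (as a twist) at time t
        frames.append(render(xi))         # hypothetical sharp Gaussian Splatting render
    return np.mean(frames, axis=0)        # physical image-formation model of motion blur

# Training would minimize the difference between synthesize_blur(...) and the
# observed blurry image, plus an event-stream term comparing log-brightness
# changes between sampled frames to the accumulated events.
```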