Differentiable Voxel-based X-ray Rendering Improves Sparse-View 3D CBCT Reconstruction
We present DiffVox, a self-supervised framework for Cone-Beam Computed Tomography (CBCT) reconstruction that directly optimizes a voxel-grid representation using physics-based differentiable X-ray rendering. Further, we investigate how different implementations of the X-ray image formation model in the renderer affect the quality of 3D reconstruction and novel view synthesis. When combined with our regularized voxel-based learning framework, we find that using an exact implementation of the discrete Beer-Lambert law for X-ray attenuation in the renderer outperforms both widely used iterative CBCT reconstruction algorithms and modern neural field approaches, particularly when given only a few input views. As a result, we reconstruct high-fidelity 3D CBCT volumes from fewer X-rays, potentially reducing ionizing radiation exposure and improving diagnostic utility.
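To illustrate the attenuation model named in the abstract, the following is a minimal sketch (not the authors' implementation) of the discrete Beer-Lambert law for a single ray traversing a voxel grid. It assumes attenuation coefficients `mu` sampled at the voxels the ray intersects and per-segment path lengths `dl`; the function name and variables are illustrative only. Because every operation is differentiable, gradients flow from the rendered pixel intensity back to the voxel values, which is what makes direct voxel-grid optimization possible.

```python
import torch

def beer_lambert_intensity(mu: torch.Tensor, dl: torch.Tensor, i0: float = 1.0) -> torch.Tensor:
    """Transmitted X-ray intensity: I = I0 * exp(-sum_i mu_i * dl_i)."""
    return i0 * torch.exp(-(mu * dl).sum(dim=-1))

# Toy example: a ray crossing four voxels with equal intersection lengths.
mu = torch.tensor([0.02, 0.05, 0.03, 0.01], requires_grad=True)  # attenuation per voxel
dl = torch.full_like(mu, 0.5)                                     # path length per voxel (mm)
intensity = beer_lambert_intensity(mu, dl)
intensity.backward()  # gradients w.r.t. the voxel attenuations
print(intensity.item(), mu.grad)
```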
Keywords: Cone-Beam Computed Tomography, 3D Reconstruction, Self-Supervised Learning, Voxel-Grid Optimization
Topic(s): Computer Science, Physics, Mathematics
Presentation Type: Oral Presentation
Session: 108-5
Location: MG 1000
Time: 9:45