Self-supervised Dual-domain Swin Transformer for Sparse-view CT Reconstruction

Yadav B, Raghunath A, Weber F, Maier A (2026)


Publication Type: Conference contribution

Publication year: 2026

Publisher: Springer

Pages Range: 19–25

Conference Proceedings Title: Bildverarbeitung für die Medizin 2026

Event location: Lübeck, DE

ISBN: 978-3-658-51100-5

DOI: 10.1007/978-3-658-51100-5_4

Abstract

Sparse-view computed tomography (CT) reconstruction suffers from streak artifacts and loss of fine detail when filtered back-projection (FBP) is applied to undersampled data. To alleviate these issues, we propose a self-supervised dual-domain Swin Transformer (DuDoSwin) that performs sinogram angular super-resolution and image-domain refinement, connected via a differentiable FBP bridge for end-to-end optimization. On the AAPM Low-Dose CT dataset, DuDoSwin achieves superior reconstruction quality and perceptual fidelity compared to existing learning-based and interpolation-based methods, improving PSNR, SSIM, and LPIPS under severe angular undersampling (4×, 8×, 16×). By jointly modeling the projection and image domains, the proposed dual-domain design restores sharp anatomical structures and enhances perceptual quality, contributing to higher-quality low-dose CT reconstruction. The implementation is available at https://github.com/bipin-y-lab/DuDoSwin.
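The pipeline the abstract describes (sinogram angular upsampling, then an FBP bridge, then image-domain refinement) can be illustrated with a toy, non-learned sketch. This is not the paper's implementation: the function names are hypothetical, linear interpolation stands in for the Swin Transformer stages, and the FBP here is a plain NumPy parallel-beam version rather than the differentiable bridge (which in practice would be built in an autograd framework so gradients flow from the image loss back into the sinogram network).

```python
import numpy as np

def angular_upsample(sinogram, factor):
    """Stand-in for the sinogram super-resolution stage: linear
    interpolation along the angular (view) axis. The paper uses a
    learned Swin Transformer here; this is only a placeholder."""
    n_views, n_det = sinogram.shape
    x_old = np.arange(n_views)
    x_new = np.linspace(0, n_views - 1, n_views * factor)
    return np.stack(
        [np.interp(x_new, x_old, sinogram[:, d]) for d in range(n_det)],
        axis=1,
    )

def fbp(sinogram, angles, size):
    """Minimal parallel-beam filtered back-projection:
    ramp-filter each projection in the frequency domain,
    then smear it back across the image grid."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))          # ramp filter |f|
    filtered = np.real(
        np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1)
    )
    xs = np.arange(size) - size / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((size, size))
    for proj, theta in zip(filtered, angles):
        # detector coordinate hit by each pixel at this view angle
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(size, size)
    return recon * np.pi / len(angles)
```

A sparse-view reconstruction sketch under these assumptions would then be `fbp(angular_upsample(sparse_sino, 4), dense_angles, size)`, followed by an image-domain refinement network, which is omitted here.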


How to cite

APA:

Yadav, B., Raghunath, A., Weber, F., & Maier, A. (2026). Self-supervised Dual-domain Swin Transformer for Sparse-view CT Reconstruction. In Bildverarbeitung für die Medizin 2026 (pp. 19–25). Lübeck, DE: Springer.

MLA:

Yadav, Bipin, et al. "Self-supervised Dual-domain Swin Transformer for Sparse-view CT Reconstruction." Proceedings of the German Conference on Medical Image Computing, Lübeck, Springer, 2026, pp. 19–25.
