Published March 18, 2022 | Version 1.2
Journal article | Open Access

Multi-domain Integrative Swin Transformer network for Sparse-View CT Reconstruction

  • Sun Yat-Sen University

Description

# MIST-net (https://arxiv.org/abs/2111.14831)

V1.2 update

Added fbp2proj.py to generate sparse-view projections and images, so you can use your own data for training. We also provide an example input (./fbp/train/label/0000.mat), which contains 880×2200 projection data. Note that you need to convert your data (DICOM/.dcm, NIfTI/.nii, ...) into MATLAB format (.mat) first.
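
A minimal conversion sketch is shown below (it is not part of the release); the helper names, file paths, and the "data" key written into the .mat file are illustrative assumptions, so match the key to whatever fbp2proj.py actually reads:

```
# Hypothetical helpers: convert a DICOM slice or a NIfTI volume into a
# MATLAB .mat file for use with fbp2proj.py.
import numpy as np
import scipy.io as sio
import pydicom          # pip install pydicom
import nibabel as nib   # pip install nibabel

def dicom_to_mat(dcm_path, mat_path):
    """Read one DICOM slice and save its pixel array as a .mat file."""
    ds = pydicom.dcmread(dcm_path)
    img = ds.pixel_array.astype(np.float32)
    sio.savemat(mat_path, {"data": img})  # the "data" key is an assumption

def nifti_to_mat(nii_path, mat_path):
    """Read a NIfTI volume and save it as a .mat file."""
    vol = nib.load(nii_path).get_fdata().astype(np.float32)
    sio.savemat(mat_path, {"data": vol})

if __name__ == "__main__":
    dicom_to_mat("example.dcm", "./fbp/train/label/converted.mat")
```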

### Citation
If you use this code for your research, please cite our paper:

```

@article{pan3991087multi,
  title={Multi-Domain Integrative Swin Transformer Network for Sparse-View Tomographic Reconstruction},
  author={Pan, Jiayi and Wu, Weiwen and Gao, Zhifan and Zhang, Heye},
  journal={Available at SSRN 3991087}
}

```

Issue: Sparse-view CT reconstruction is a typical underdetermined inverse problem; reconstructing high-quality CT images from only dozens of projections remains a challenge in practice.

Goal: To address this challenge, we propose a Multi-domain Integrative Swin Transformer network (MIST-net).

Features: 

(1) MIST-net incorporates rich features from the data, residual-data, image, and residual-image domains through a flexible network architecture. The residual-data and residual-image network components act as data-consistency modules that suppress interpolation errors in both residual domains and thus preserve image details.

(2) To detect image features and better protect image edges, a trainable Sobel filter is incorporated into the network to strengthen its encode-decode ability (a minimal sketch is given after this list).

(3) Building on the classical Swin Transformer, we designed a high-quality reconstruction transformer (Recformer) to improve reconstruction performance. The Recformer inherits the Swin Transformer's ability to capture both global and local features of the reconstructed image.
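
The following is a minimal sketch (not the released implementation) of how a trainable Sobel filter can be realized in PyTorch: a depthwise convolution whose weights are initialized with the classical horizontal/vertical Sobel kernels and then updated during training. The class name and tensor shapes are illustrative only.

```
# Trainable Sobel filter: initialized with Sobel kernels, weights stay learnable.
import torch
import torch.nn as nn

class TrainableSobel(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        # Depthwise 3x3 conv producing an x- and a y-edge map per input channel.
        self.conv = nn.Conv2d(channels, 2 * channels, kernel_size=3,
                              padding=1, groups=channels, bias=False)
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        kernels = torch.stack([sobel_x, sobel_y]).unsqueeze(1)   # (2, 1, 3, 3)
        self.conv.weight.data = kernels.repeat(channels, 1, 1, 1)

    def forward(self, x):
        return self.conv(x)

# Usage: edge features of a batch of single-channel CT slices.
edges = TrainableSobel(channels=1)(torch.randn(4, 1, 256, 256))  # (4, 2, 256, 256)
```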

### Install dependencies
CUDA 11.1, Python 3.8, PyTorch 1.8
Before training, install the CTLIB library developed by Wenjun Xia at Sichuan University (https://github.com/xwj01/CTLIB).

Because the CTLIB library has since been updated, we provide the older version of the package here. Install it with:

```
python setup.py install
```
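
After the build finishes, a quick sanity check like the one below can confirm that the extension imports correctly; note that the module name `ctlib` is an assumption, so adjust the import if the package exposes a different name.

```
# Sanity check for the CTLIB install (module name "ctlib" is an assumption).
import torch
import ctlib  # raises ImportError if the extension failed to build

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```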


###  Train and Test
Run

```
python mainMISTnet.py
```


### Results
See the [paper](https://arxiv.org/ftp/arxiv/papers/2111/2111.14831.pdf).
 

Files

MISTv1.2.zip (24.2 MB, md5:865dfb7a6d37465d6548c7f3cedb16c9)