Abstract:
Multi-modality medical images can provide relevant or complementary information about a target (organ, tumor, or tissue). Registering multi-modality images to a common space can fuse this complementary information and facilitate clinical applications. Recently, neural networks have been widely investigated to improve registration methods. However, developing a multi-modality registration network remains challenging due to the lack of robust criteria for network training. In this work, we propose a multi-modality registration network (MMRegNet), which can perform registration between multi-modality images. Meanwhile, we present spatially encoded gradient information to train MMRegNet in an unsupervised manner. The proposed network was evaluated on the public dataset from MM-WHS 2017. Results show that MMRegNet achieves promising performance on left ventricle registration tasks. To demonstrate the versatility of MMRegNet, we further evaluated the method on a liver dataset from CHAOS 2019. Our source code is publicly available (https://github.com/NanYoMy/mmregnet). © 2022, Springer Nature Switzerland AG.
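The abstract describes training with spatially encoded gradient information as an unsupervised criterion for multi-modality registration. As a minimal sketch of how image gradients can act as a modality-independent similarity measure, the following normalized-gradient-field (NGF) style function compares edge orientations rather than raw intensities; the function name `ngf_similarity` and all implementation details are illustrative assumptions, not the exact loss used by MMRegNet.

```python
import numpy as np

def ngf_similarity(fixed, moving, eps=1e-5):
    """Normalized-gradient-field similarity between two images.

    Compares the orientation of intensity gradients, which tends to be
    preserved across imaging modalities even when absolute intensities
    differ or invert. Returns a value in [0, 1]; higher means the edge
    structures are better aligned. Illustrative sketch only.
    """
    # Central-difference gradients along each spatial axis.
    gf = np.stack(np.gradient(np.asarray(fixed, dtype=np.float64)))
    gm = np.stack(np.gradient(np.asarray(moving, dtype=np.float64)))
    # Regularized gradient magnitudes (eps suppresses flat/noisy regions).
    nf = np.sqrt((gf ** 2).sum(axis=0) + eps ** 2)
    nm = np.sqrt((gm ** 2).sum(axis=0) + eps ** 2)
    # Squared cosine of the angle between gradient directions;
    # squaring makes the measure insensitive to contrast inversion.
    dot = (gf * gm).sum(axis=0)
    return float(np.mean((dot / (nf * nm)) ** 2))
```

In an unsupervised registration network, a measure of this kind can be maximized between the warped moving image and the fixed image, avoiding the need for ground-truth deformations.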
ISSN: 0302-9743
Year: 2022
Volume: 13131 LNCS
Page: 151-159
Language: English
Impact Factor: 0.402 (JCR@2005)
SCOPUS Cited Count: 4
ESI Highly Cited Papers on the List: 0