Two MICCAI 2024 Papers
Cache-Driven Spatial Test-Time Adaptation for Cross-Modality Medical Image Segmentation.
Diffusion-Enhanced Transformation Consistency Learning for Retinal Image Segmentation.
In Cache-Driven Spatial Test-Time Adaptation for Cross-Modality Medical Image Segmentation, we propose Spatial Test-Time Adaptation (STTA), a method that addresses the gap between source and target domains in medical image segmentation by integrating inter-layer spatial information into test-time adaptation (TTA). To mitigate error accumulation, STTA builds a multi-head ensemble from augmented inputs and enforces consistency by minimizing the entropy of the aggregated outputs. In addition, STTA introduces a cache mechanism during iterative adaptation that restores the source model weights, preventing catastrophic forgetting.
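The core adaptation loop can be pictured as follows. This is a minimal PyTorch sketch, not the paper's implementation: it assumes intensity-style augmentations (geometric augmentations would need to be inverted before aggregation), a generic `augment_fns` list, and a simple stochastic weight-restoration rule standing in for the paper's cache mechanism; `stta_step` and `restore_prob` are illustrative names.

```python
import copy
import torch
import torch.nn.functional as F

def entropy(p, eps=1e-8):
    """Mean pixel-wise entropy of softmax predictions with shape (B, C, H, W)."""
    return -(p * (p + eps).log()).sum(dim=1).mean()

def stta_step(model, optimizer, x, augment_fns, source_state, restore_prob=0.01):
    """One hedged test-time adaptation step:
    1) aggregate softmax outputs over augmented views of the test batch,
    2) minimize the entropy of the aggregated prediction,
    3) stochastically restore a fraction of weights from the cached source model
       to limit error accumulation and catastrophic forgetting.
    """
    model.train()  # keep normalization layers adaptable at test time
    probs = torch.stack([F.softmax(model(aug(x)), dim=1) for aug in augment_fns])
    ensemble = probs.mean(dim=0)

    loss = entropy(ensemble)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Cache mechanism (illustrative rule, not the paper's exact one):
    # randomly reset a small fraction of parameters to their cached source values.
    with torch.no_grad():
        for name, p in model.named_parameters():
            mask = (torch.rand_like(p) < restore_prob).float()
            p.copy_(mask * source_state[name].to(p.device) + (1 - mask) * p)

    return ensemble.argmax(dim=1)  # adapted segmentation prediction

# Example wiring (hypothetical model / loader; optimizer often covers only norm layers):
# source_state = copy.deepcopy(model.state_dict())          # cache source weights
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# augment_fns = [lambda t: t, lambda t: t + 0.01 * torch.randn_like(t)]
# for x in test_loader:
#     pred = stta_step(model, optimizer, x, augment_fns, source_state)
```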
In Diffusion-Enhanced Transformation Consistency Learning for Retinal Image Segmentation, we present Diffusion-Enhanced Transformation Consistency Learning (DiffTCL), a semi-supervised segmentation method aimed at making more efficient use of limited labels. The method begins with self-supervised diffusion pretraining to establish a strong initial model, which improves the accuracy of early pseudo-labels in the subsequent consistency training and reduces error accumulation. We then develop a Transformation Consistency Learning (TCL) framework tailored to retinal images that effectively exploits unlabeled data: predictions on affine-transformed images are used to supervise the outputs for elastic and pixel-level transformations.
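The transformation-consistency term can be sketched as below. This is an illustrative PyTorch snippet under stated assumptions, not the paper's exact loss: `affine_fn`, `elastic_fn`, and `pixel_fn` are assumed transform callables operating on tensors, the pseudo-label is taken as the argmax of the affine-view prediction, and spatial (elastic) transforms are also applied to the pseudo-label so that predictions and targets stay aligned.

```python
import torch
import torch.nn.functional as F

def tcl_consistency_loss(model, x, affine_fn, elastic_fn, pixel_fn):
    """Hedged sketch of transformation consistency on an unlabeled batch:
    the prediction on an affine view serves as the pseudo-label for
    elastic- and pixel-level-transformed views of the same image.
    """
    x_aff = affine_fn(x)                         # affine view of the unlabeled image
    with torch.no_grad():
        pseudo = F.softmax(model(x_aff), dim=1)  # teacher-style target
        target_hard = pseudo.argmax(dim=1)       # (B, H, W) pseudo-label

    # Elastic branch: warp both the input and the pseudo-label
    # (nearest-neighbour interpolation is assumed inside elastic_fn for labels).
    x_el = elastic_fn(x_aff)
    target_el = elastic_fn(target_hard.unsqueeze(1).float()).squeeze(1).long()
    loss_el = F.cross_entropy(model(x_el), target_el)

    # Pixel-level branch: intensity perturbation only, geometry unchanged,
    # so the pseudo-label is reused as-is.
    x_px = pixel_fn(x_aff)
    loss_px = F.cross_entropy(model(x_px), target_hard)

    return loss_el + loss_px
```

In practice this unsupervised term would be added to a standard supervised loss on the labeled subset, with the diffusion-pretrained weights used to initialize the segmentation model.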