Deep learning methods have shown remarkable success in many medical imaging tasks over the past few years. However, current deep learning models are usually data-hungry, requiring massive amounts of high-quality annotated data to achieve high performance.
Firstly, collecting large-scale medical imaging datasets is usually expensive and time-consuming, and regulatory and governance requirements raise additional challenges for practical healthcare applications. Moreover, domain shifts in medical data, caused by factors such as different medical devices, subject cohorts, and scanning configurations and conditions, make it challenging to deploy AI models in real-world applications. Secondly, acquiring data annotations is even more challenging, as experienced and knowledgeable clinicians are required to provide high-quality annotations. The annotation process is labour-intensive and time-consuming for segmentation tasks, especially for 3D medical data such as CT, OCT and MRI scans. Clinicians may need minutes to hours to annotate a single image, given the complexity of the segmentation task. Thirdly, it is often infeasible to deploy large deep learning models on edge devices for various medical tasks in low-resource settings, especially under the hardware constraints of practical clinical applications in the era of telehealth and the Metaverse.
Vanilla deep learning models usually have limited ability to learn from sparse training samples. Consequently, to enable efficient and practical deep learning models for medical imaging, research is needed on methods that can cope with limited training data, limited labels, and limited hardware at deployment time. To address the limited-data challenge, data-efficiency methods such as transfer learning and domain adaptation, which can mitigate the domain-shift problem in medical imaging, have been proposed in the medical image analysis field. In addition, label-efficiency methods such as partially-supervised learning, annotation-efficient learning, and weakly-supervised learning, including semi-supervised, unsupervised, self-supervised, and contrastive learning, have been widely studied in this field, including recently published work at MICCAI conferences. However, hardware-efficiency topics such as neural network compression and neural architecture search have not been fully explored in the field. Therefore, this workshop encourages submissions on this topic to discuss the research problems raised by hardware efficiency, paving the way for more AI applications in telemedicine and the Metaverse.
Prof. Russ Greiner: "Learning Models that Predict Objective, Actionable Labels"

Abstract: Many medical researchers want a tool that "does what a top medical clinician does, but does it better". This presentation explores this goal. This requires first defining what "better" means, leading to the idea of outcomes that are "objective" and then to ones that are actionable, with a meaningful evaluation measure. We will discuss some of the subtle issues in this exploration: what does "objective" mean, the role of the (perhaps personalized) evaluation function, multi-step actions, counterfactual issues, distributional evaluations, etc. Collectively, this analysis argues we should learn models whose outcome labels are objective and actionable, as that will lead to tools that are useful and cost-effective.

Bio: Prof. Russ Greiner worked in both academic and industrial research before settling at the University of Alberta, where he is now a Professor in Computing Science and the founding Scientific Director of the Alberta Machine Intelligence Institute. He has been Program/Conference Chair for various major conferences and has served on the editorial boards of a number of journals. He was elected a Fellow of the AAAI, has been awarded a McCalla Professorship and a Killam Annual Professorship, and in 2021 received the CAIAC Lifetime Achievement Award and became a CIFAR AI Chair. In 2022, the Telus World of Science museum honored him with a panel and he received the (UofA) Precision Health Innovator Award; in 2023, he received the CS-Can | Info-Can Lifetime Achievement Award. For his mentoring, he received a 2020 FGSR Great Supervisor Award and, in 2023, the Killam Award for Excellence in Mentoring. He has published over 300 refereed papers, most in the areas of machine learning and, recently, medical informatics, including 5 that have been awarded Best Paper prizes. The main foci of his current work are (1) bio- and medical informatics; (2) survival prediction; and (3) formal foundations of learnability.
Xinxing Xu, Institute of High Performance Computing (IHPC), A*STAR, Singapore. xuxinx@ihpc.a-star.edu.sg

Xiaomeng Li, The Hong Kong University of Science and Technology, Hong Kong, China. eexmli@ust.hk

Dwarikanath Mahapatra, Inception Institute of Artificial Intelligence, Abu Dhabi, UAE. dwarikanath.mahapatra@inceptioniai.org

Li Cheng, ECE Dept., University of Alberta, Canada. lcheng5@ualberta.ca

Caroline Petitjean, LITIS, University of Rouen, France. caroline.petitjean@univ-rouen.fr

Benoît Presles, ImViA laboratory, University of Burgundy, Dijon, France. benoit.presles@u-bourgogne.fr

Huazhu Fu, Institute of High Performance Computing (IHPC), A*STAR, Singapore. hzfu@ieee.org