Nordic Machine Intelligence https://journals.uio.no/NMI <p>Nordic Machine Intelligence (NMI) is a non-commercial, peer-reviewed, open-access journal. The journal will publish original research articles, literature reviews, conference articles related to NORA's Norwegian and Nordic conferences, articles related to the <a href="https://www.nora.ai/Competition/">NMI Challenge</a>, statements, and other educational material within all aspects of artificial intelligence.</p> en-US anne.hakansson@uit.no (Anne Håkansson) b.j.singstad@fys.uio.no (Bjørn-Jostein Singstad) Mon, 01 Nov 2021 17:36:23 +0100 OJS 3.2.1.4 http://blogs.law.harvard.edu/tech/rss 60 Polyp and Surgical Instrument Segmentation with Double Encoder-Decoder Networks https://journals.uio.no/NMI/article/view/9107 <p>This paper describes a solution for the MedAI competition, in which participants were required to segment both polyps and surgical instruments from endoscopic images. Our approach relies on a double encoder-decoder neural network which we have previously applied for polyp segmentation, but with a series of enhancements: a more powerful encoder architecture, an improved optimization procedure, and post-processing of the segmentations based on tempered model ensembling. Experimental results show that our method produces segmentations in good agreement with manual delineations provided by medical experts.</p> Adrian Galdran Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9107 Mon, 01 Nov 2021 00:00:00 +0100 Dual Parallel Reverse Attention Edge Network: DPRA-EdgeNet https://journals.uio.no/NMI/article/view/9116 <ul> <li class="show"><img src="https://journals.uio.no/public/site/images/debayanbh/network-arch-final-2.png" alt="DPRA-EdgeNet" width="500" height="339" /> In this paper, we propose the Dual Parallel Reverse Attention Edge Network (DPRA-EdgeNet), an architecture that jointly learns to segment an object and its edge. 
Specifically, the model uses two cascaded partial decoders to form two initial estimates of the object segmentation map and its corresponding edge map. This is followed by a series of object decoders and edge decoders which work in conjunction with dual parallel reverse attention modules. The dual parallel reverse attention (DPRA) modules repeatedly prune the features at multiple scales to emphasize the object segmentation and the edge segmentation, respectively. Furthermore, we propose a novel decoder block that uses spatial and channel attention to combine features from the preceding decoder block and reverse attention (RA) modules for object and edge segmentation. We compare our model against popular segmentation models such as U-Net, SegNet and PraNet and demonstrate through a five-fold cross-validation experiment that our model significantly improves segmentation accuracy on the Kvasir-SEG and Kvasir-Instrument datasets.</li> </ul> Debayan Bhattacharya, Christian Betz, Dennis Eggert, Alexander Schlaefer Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9116 Mon, 01 Nov 2021 00:00:00 +0100 T-MIS: Transparency Adaptation in Medical Image Segmentation https://journals.uio.no/NMI/article/view/9120 <div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>We often find ourselves in a trade-off between what a predictive model predicts and understanding why it made that prediction. This high-risk medical segmentation task is no different: we try to interpret how well the model has learned from the image features, irrespective of its accuracy. We propose image-specific fine-tuning to make a deep learning model adaptive to specific medical imaging tasks. 
Experimental results reveal that: a) the proposed model is more robust in segmenting previously unseen objects (negative test dataset) than state-of-the-art CNNs; b) image-specific fine-tuning with the proposed heuristics significantly enhances segmentation accuracy; and c) our model leads to accurate results with fewer user interactions and less user time than conventional interactive segmentation methods. The model successfully classified ’no polyp’ or ’no instruments’ images despite the absence of negative data in the training samples from the Kvasir-SEG and Kvasir-Instrument datasets.</p> </div> </div> </div> Ayush Somani, Divij Singh, Dilip K. Prasad, Alexander Horsch Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9120 Mon, 01 Nov 2021 00:00:00 +0100 EM-Net: An Efficient M-Net for segmentation of surgical instruments in colonoscopy frames https://journals.uio.no/NMI/article/view/9122 <p>This paper addresses the Instrument Segmentation Task, a subtask of the “MedAI: Transparency in Medical Image Segmentation” challenge. To accomplish the subtask, our team “Med_Seg_JU” proposed a deep learning-based framework, namely “EM-Net: An Efficient M-Net for segmentation of surgical instruments in colonoscopy frames”. The proposed framework is inspired by the M-Net architecture. In this architecture, we incorporated the EfficientNet B3 module with U-Net as the backbone. Our proposed method obtained a JC of 0.8205, DSC of 0.8632, PRE of 0.8464, REC of 0.9005, F1 of 0.8632, and ACC of 0.9799 as evaluated by the challenge organizers on a separate test dataset. 
These results demonstrate the efficacy of our proposed method in the segmentation of surgical instruments.</p> Debapriya Banik, Kaushiki Roy, Debotosh Bhattacharjee Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9122 Mon, 01 Nov 2021 00:00:00 +0100 Automatic Polyp and Instrument Segmentation in MedAI-2021 https://journals.uio.no/NMI/article/view/9125 <p>Polyp and instrument segmentation plays a vital role in the early diagnosis of colorectal cancer (CRC), in which physicians visually inspect the bowel with an endoscope to identify polyps. However, recent works focus only on the accuracy of predictions in the positive samples while omitting the False-Positive (FP) predictions in the negative samples that might mislead physicians. Here, we propose a novel Dual Model Filtering (DMF) strategy, which efficiently removes FP predictions in negative samples with metrics-based threshold setting. To better adapt to high-resolution input with various distributions, we embed the PVTv2 backbone into the SINetV2 framework as our model, since polyp segmentation is a downstream task of camouflaged object detection (COD). Experiments on the challenging MedAI datasets demonstrate that our method achieves excellent performance. We also conduct extensive experiments to study the effectiveness of the DMF.</p> YuCheng Chou Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9125 Mon, 01 Nov 2021 00:00:00 +0100 Explainable Medical Image Segmentation via Generative Adversarial Networks and Layer-wise Relevance Propagation https://journals.uio.no/NMI/article/view/9126 <p>This paper contributes to automating medical image segmentation by proposing generative adversarial network-based models to segment both polyps and instruments in endoscopy images. 
A main contribution of this paper is providing explanations for the predictions using the layer-wise relevance propagation approach, showing which pixels in the input image are most relevant to the predictions. The models achieved Jaccard indices of 0.46 and 0.70 and accuracies of 0.84 and 0.96 on polyp segmentation and instrument segmentation, respectively.</p> Awadelrahman M. A. Ahmed, Leen A. M. Ali Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9126 Mon, 01 Nov 2021 00:00:00 +0100 More Birds in the Hand - Medical Image Segmentation using a Multi-Model Ensemble Framework https://journals.uio.no/NMI/article/view/9128 <p>In this paper, we introduce a multi-model ensemble framework for medical image segmentation. We first collect a set of state-of-the-art models in this field and further improve them through a series of architecture refinement moves and a set of specific training techniques. We then integrate these fine-tuned models into a more powerful ensemble framework. Preliminary experimental results show that the proposed multi-model ensemble framework performs well on the given polyp and instrument datasets.</p> Yung-Han Chen, Pei-Hsuan Kuo, Yi-Zeng Fang, Wei-Lin Wang Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9128 Mon, 01 Nov 2021 00:00:00 +0100 Kvasir-Instruments and Polyp Segmentation Using UNet https://journals.uio.no/NMI/article/view/9130 <p>This paper describes the methodology used to develop, fine-tune and analyze a UNet model for creating masks for two datasets: Polyp Segmentation and Instrument Segmentation, which are part of the MedAI challenge. 
For training and validation, we used the same methodology on both tasks. On the hidden test dataset, the model achieved an accuracy of 0.9721 and a Dice score of 0.7980 on the instrument segmentation task, and an accuracy of 0.5646 and a Dice score of 0.4100 on the polyp segmentation task.</p> Sumit Pandey, Arvind Keprate Copyright (c) 2021 Nordic Machine Intelligence https://creativecommons.org/licenses/by/4.0 https://journals.uio.no/NMI/article/view/9130 Thu, 09 Dec 2021 00:00:00 +0100 Employing GRU to combine feature maps in DeeplabV3 for a better segmentation model https://journals.uio.no/NMI/article/view/9131 <p>In this paper, we aim to enhance the segmentation capabilities of DeeplabV3 by employing a Gated Recurrent Unit (GRU). A 1-by-1 convolution in DeeplabV3 was replaced by a GRU after the Atrous Spatial Pyramid Pooling (ASPP) layer to combine the input feature maps. The convolution and the GRU have sharable parameters, though the latter has gates that enable/disable the contribution of each input feature map. Experiments on unseen test sets demonstrate that employing a GRU instead of a convolution produces better segmentation results. The datasets used are the public datasets provided by the MedAI competition.</p> Mahmood Haithami, Amr Ahmed, Iman Yi Liao, Hamid Jalab Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9131 Mon, 01 Nov 2021 00:00:00 +0100 Transfer Learning in Polyp and Endoscopic Tool Segmentation from Colonoscopy Images https://journals.uio.no/NMI/article/view/9132 <p>Colorectal cancer is one of the deadliest and most widespread types of cancer in the world. Colonoscopy is the procedure used to detect and diagnose polyps in the colon, but current detection shows a significant error rate that affects diagnosis and treatment. An automatic image segmentation algorithm may help doctors to improve the detection rate of pathological polyps in the colon. 
Furthermore, segmenting endoscopic tools in images taken during colonoscopy may contribute towards robot-assisted surgery. In this study, we trained and validated both pre-trained and non-pre-trained segmentation models on two different datasets containing images of polyps and endoscopic tools. Finally, we applied the models to two separate test sets: the best polyp model achieved a Dice score of 0.857 and the best instrument model a Dice score of 0.948. Moreover, we found that pre-training the models increased their performance in segmenting polyps and endoscopic tools.</p> Bjørn-Jostein Singstad, Nefeli Panagiota Tzavara Copyright (c) 2021 Nordic Machine Intelligence https://creativecommons.org/licenses/by/4.0 https://journals.uio.no/NMI/article/view/9132 Tue, 16 Nov 2021 00:00:00 +0100 Improving Polyp Segmentation in Colonoscopy using Deep Learning https://journals.uio.no/NMI/article/view/9136 <p>Colorectal cancer is one of the major causes of cancer-related deaths globally. Although colonoscopy is considered the gold standard for the examination of colon polyps, there is a significant miss rate of around 22-28%. Deep learning algorithms such as convolutional neural networks can aid in detecting and describing abnormalities in the colon that clinicians might miss during endoscopic examinations. The "MedAI: Transparency in Medical Image Segmentation" competition provides an opportunity to develop accurate and automated polyp segmentation algorithms on the same dataset provided by the challenge organizer. We participate in the polyp segmentation task of the challenge and provide a solution based on the dual decoder attention network (DDANet), an encoder-decoder-based architecture with dual decoder attention. Our experimental results on the organizers' dataset showed a Dice coefficient of 0.7967, a Jaccard index of 0.7220, a recall of 0.8214, a precision of 0.8359, and an accuracy of 0.9557. 
Our results on unseen datasets suggest that deep learning and computer vision-based methods can effectively solve automated polyp segmentation tasks.</p> Saurab, Vabesh, Ritika, Debesh, Ashish Rauniyar Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9136 Mon, 01 Nov 2021 00:00:00 +0100 Iterative deep learning for improved segmentation of endoscopic images https://journals.uio.no/NMI/article/view/9137 <div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>Iterative segmentation is a unique way to prune segmentation maps initialized by faster inference techniques or even by unsupervised traditional thresholding methods. We used our previous feedback attention-based method for this work and demonstrate that, with an optimal iterative procedure, our method can reach competitive accuracies in endoscopic imaging. For this work, we applied this segmentation strategy to polyps and instruments.</p> </div> </div> </div> Sharib Ali, Nikhil K Tomar Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9137 Mon, 01 Nov 2021 00:00:00 +0100 Explainable U-Net model for Medical Image Segmentation https://journals.uio.no/NMI/article/view/9142 <p>In a nutshell, we propose a simple, efficient, and explainable deep learning-based U-Net algorithm for the MedAI challenge, focusing on precise segmentation of polyps and instruments and on the transparency of algorithms. We develop a straightforward encoder-decoder-based algorithm for the task above. We make an effort to keep the network as simple as possible. Specifically, we focus on the input resolution and the width of the model to find the optimal settings for the network. 
We perform ablation studies to cover this aspect.</p> Sahadev Poudel, Sang-Woong Lee Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9142 Mon, 01 Nov 2021 00:00:00 +0100 Segmentation of Polyp Instruments using UNet based deep learning model https://journals.uio.no/NMI/article/view/9145 <p>In this paper, we present a UNet architecture-based deep learning method used to segment polyps and instruments from the image dataset provided in the MedAI Challenge 2021. For the polyp segmentation task, we developed a UNet-based algorithm for segmenting polyps in images taken from endoscopies. The main focus of this task is to achieve high segmentation metrics on the supplied test dataset. Similarly, for the instrument segmentation task, we developed a UNet-based algorithm for segmenting instruments present in colonoscopy videos.</p> Rishav Kumar Rajak, Ashar Mirza Copyright (c) 2021 Nordic Machine Intelligence https://creativecommons.org/licenses/by/4.0 https://journals.uio.no/NMI/article/view/9145 Wed, 01 Dec 2021 00:00:00 +0100 Attention U-Net ensemble for interpretable polyp and instrument segmentation https://journals.uio.no/NMI/article/view/9157 <p>The difficulty associated with screening and treating colorectal polyps, alongside other gastrointestinal pathology, presents an opportunity to incorporate computer-aided systems. This paper develops a deep learning pipeline that accurately segments colorectal polyps and the various instruments used during endoscopic procedures. To improve transparency, we leverage the Attention U-Net architecture, enabling visualisation of the attention coefficients to identify salient regions. 
Moreover, we improve performance by incorporating transfer learning using a pre-trained encoder, together with test-time augmentation, softmax averaging, softmax thresholding and connected component labeling to further refine predictions.</p> Michael Yeung Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9157 Mon, 01 Nov 2021 00:00:00 +0100 Transformer Based Multi-model Fusion for Medical Image Segmentation https://journals.uio.no/NMI/article/view/9171 <p>We present our solutions to MedAI for all three tasks: the polyp segmentation task, the instrument segmentation task, and the transparency task. We use the same framework to process the two segmentation tasks of polyps and instruments. The key improvement over last year is new state-of-the-art vision architectures, especially transformers, which significantly outperform ConvNets on medical image segmentation tasks. Our solution consists of multiple segmentation models, each using a transformer as the backbone network. We obtained the best IoU scores of 0.915 on the instrument segmentation task and 0.836 on the polyp segmentation task after submitting. Meanwhile, we provide our complete solutions at <a href="https://github.com/dongbo811/MedAI-2021">https://github.com/dongbo811/MedAI-2021</a>.</p> Bo Dong, Wenhai Wang, Jinpeng Li Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9171 Mon, 01 Nov 2021 00:00:00 +0100 MedAI: Transparency in Medical Image Segmentation https://journals.uio.no/NMI/article/view/9140 <p>MedAI: Transparency in Medical Image Segmentation is a challenge held for the first time at the Nordic AI Meet that focuses on medical image segmentation and transparency in machine learning (ML)-based systems. We propose three tasks to meet specific gastrointestinal image segmentation challenges collected from experts within the field, including two separate segmentation scenarios and one scenario on transparent ML systems. 
The latter emphasizes the need for explainable and interpretable ML algorithms. We provide a development dataset for the participants to train their ML models, which are then evaluated on a concealed test dataset.</p> Steven Hicks, Debesh Jha, Vajira Thambawita, Pål Halvorsen, Bjørn-Jostein Singstad, Sachin Gaur, Klas Pettersen, Morten Goodwin, Sravanthi Parasa, Thomas de Lange, Michael Riegler Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9140 Mon, 01 Nov 2021 00:00:00 +0100