https://journals.uio.no/NMI/issue/feed Nordic Machine Intelligence 2021-11-01T17:36:23+01:00 Anne Håkansson anne.hakansson@uit.no Open Journal Systems <p>Nordic Machine Intelligence (NMI) is a non-commercial, peer-reviewed, open-access journal. The journal will publish original research articles, literature reviews, conference articles related to NORA's Norwegian and Nordic conferences, articles related to the <a href="https://www.nora.ai/Competition/">NMI Challenge</a>, statements, and other educational material within all aspects of artificial intelligence.</p> https://journals.uio.no/NMI/article/view/9107 Polyp and Surgical Instrument Segmentation with Double Encoder-Decoder Networks 2021-10-19T11:23:53+02:00 Adrian Galdran agaldran@gmail.com <p>This paper describes a solution for the MedAI competition, in which participants were required to segment both polyps and surgical instruments from endoscopic images. Our approach relies on a double encoder-decoder neural network which we have previously applied for polyp segmentation, but with a series of enhancements: a more powerful encoder architecture, an improved optimization procedure, and post-processing of the segmentations based on tempered model ensembling.
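The abstract does not detail the tempered ensembling step. One plausible reading, sketched below purely as an illustration (the function names and the temperature rule are assumptions, not the authors' code), is that per-pixel foreground probabilities from several models are temperature-sharpened before averaging and thresholding:

```python
# Hypothetical sketch of tempered model ensembling as post-processing:
# per-pixel probabilities from several models are sharpened with a
# temperature t before averaging, then thresholded into a binary mask.

def temper(p, t):
    """Sharpen (t < 1) or soften (t > 1) a probability via temperature."""
    a = p ** (1.0 / t)
    return a / (a + (1.0 - p) ** (1.0 / t))

def ensemble_masks(prob_maps, t=0.5, threshold=0.5):
    """Average temperature-adjusted probabilities from several models."""
    n = len(prob_maps)
    fused = [
        sum(temper(m[i], t) for m in prob_maps) / n
        for i in range(len(prob_maps[0]))
    ]
    return [1 if p >= threshold else 0 for p in fused]

# Three models voting on a 4-pixel strip:
masks = ensemble_masks([[0.9, 0.6, 0.2, 0.1],
                        [0.8, 0.7, 0.4, 0.2],
                        [0.95, 0.4, 0.3, 0.05]])
```

With t < 1 the averaging rewards pixels where individual models are confident, which is one way tempering could suppress borderline responses before the final threshold.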
Experimental results show that our method produces segmentations in good agreement with manual delineations provided by medical experts.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9116 Dual Parallel Reverse Attention Edge Network: DPRA-EdgeNet 2021-10-19T11:44:39+02:00 Debayan Bhattacharya debayan.bhattacharya@tuhh.de Christian Betz cbetz@mailinator.com Dennis Eggert deggert@mailinator.com Alexander Schlaefer aschlaefer@mailinator.com <ul> <li class="show"><img src="https://journals.uio.no/public/site/images/debayanbh/network-arch-final-2.png" alt="DPRA-EdgeNet" width="500" height="339" /> In this paper, we propose the Dual Parallel Reverse Attention Edge Network (DPRA-EdgeNet), an architecture that jointly learns to segment an object and its edge. Specifically, the model uses two cascaded partial decoders to form two initial estimates of the object segmentation map and its corresponding edge map. This is followed by a series of object decoders and edge decoders which work in conjunction with dual parallel reverse attention modules. The dual parallel reverse attention (DPRA) modules repeatedly prune the features at multiple scales to emphasize object segmentation and edge segmentation, respectively. Furthermore, we propose a novel decoder block that uses spatial and channel attention to combine features from the preceding decoder block and reverse attention (RA) modules for object and edge segmentation.
We compare our model against popular segmentation models such as U-Net, SegNet, and PraNet, and demonstrate through a five-fold cross-validation experiment that our model significantly improves segmentation accuracy on the Kvasir-SEG and Kvasir-Instrument datasets.</li> </ul> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9120 T-MIS: Transparency Adaptation in Medical Image Segmentation 2021-10-14T18:40:54+02:00 Ayush Somani ayush.somani@uit.no Divij Singh divij.singh.cse20@itbhu.ac.in Dilip Prasad dilip.prasad@uit.no Alexander Horsch alexander.horsch@uit.no <div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>We often find ourselves in a trade-off situation between what is predicted and understanding why the predictive model made such a prediction. This high-risk medical segmentation task is no different: we try to interpret how well the model has learned from the image features, irrespective of its accuracy. We propose image-specific fine-tuning to make a deep learning model adaptive to specific medical imaging tasks. Experimental results reveal that: a) the proposed model is more robust in segmenting previously unseen objects (negative test dataset) than state-of-the-art CNNs; b) image-specific fine-tuning with the proposed heuristics significantly enhances segmentation accuracy; and c) our model leads to accurate results with fewer user interactions and less user time than conventional interactive segmentation methods.
The model successfully classified 'no polyp' or 'no instrument' images despite the absence of negative examples in the training samples from the Kvasir-SEG and Kvasir-Instrument datasets.</p> </div> </div> </div> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9122 EM-Net: An Efficient M-Net for segmentation of surgical instruments in colonoscopy frames 2021-10-19T12:08:15+02:00 Debapriya Banik debu.cse88@gmail.com Kaushiki Roy kroy@mailinator.com Debotosh Bhattacharjee debotoshbhattacharjee@mailinator.com <p>This paper addresses the Instrument Segmentation Task, a subtask of the “MedAI: Transparency in Medical Image Segmentation” challenge. To accomplish the subtask, our team “Med_Seg_JU” has proposed a deep learning-based framework, namely “EM-Net: An Efficient M-Net for segmentation of surgical instruments in colonoscopy frames”. The proposed framework is inspired by the M-Net architecture. In this architecture, we have incorporated the EfficientNet B3 module with U-Net as the backbone. Our proposed method obtained a JC of 0.8205, DSC of 0.8632, PRE of 0.8464, REC of 0.9005, F1 of 0.8632, and ACC of 0.9799 as evaluated by the challenge organizers on a separate test dataset. These results demonstrate the efficacy of our proposed method for segmenting surgical instruments.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9125 Automatic Polyp and Instrument Segmentation in MedAI-2021 2021-10-19T11:40:06+02:00 YuCheng Chou johnson111788@gmail.com <p>Polyp and instrument segmentation plays a vital role in the early diagnosis of colorectal cancer (CRC), since physicians visually inspect the bowel with an endoscope to identify polyps.
However, recent works only focus on prediction accuracy in positive samples while ignoring the False-Positive (FP) predictions in negative samples that might mislead physicians. Here, we propose a novel Dual Model Filtering (DMF) strategy, which efficiently removes FP predictions in negative samples using metric-based thresholding. To better adapt to high-resolution input with various distributions, we embed the PVTv2 backbone into the SINetV2 framework as our model, since polyp segmentation is a downstream task of camouflaged object detection (COD). Experiments on the challenging MedAI datasets demonstrate that our method achieves excellent performance. We also conduct extensive experiments to study the effectiveness of the DMF.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9126 Explainable Medical Image Segmentation via Generative Adversarial Networks and Layer-wise Relevance Propagation 2021-10-19T12:26:25+02:00 Awadelrahman M. A. Ahmed awadrahman@gmail.com Leen A. M. Ali leenama@mailinator.com <p>This paper contributes to automating medical image segmentation by proposing generative adversarial network-based models to segment both polyps and instruments in endoscopy images. A main contribution of this paper is providing explanations for the predictions using the layer-wise relevance propagation approach, showing which pixels in the input image are most relevant to the predictions.
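The core idea of layer-wise relevance propagation can be illustrated with the standard epsilon rule on a single dense layer. This is a minimal toy sketch (all names hypothetical, not the authors' implementation, which propagates relevance through a full GAN-based segmentation model):

```python
# Minimal sketch of the LRP epsilon-rule on a single dense layer:
# output relevance is redistributed to inputs in proportion to each
# input's contribution to the pre-activation.

def lrp_epsilon(activations, weights, relevance_out, eps=1e-6):
    """Redistribute relevance: R_j = sum_k (a_j * w_jk / z_k) * R_k."""
    n_in, n_out = len(weights), len(weights[0])
    # Pre-activations z_k = sum_j a_j * w_jk, stabilized by eps.
    z = [sum(activations[j] * weights[j][k] for j in range(n_in)) + eps
         for k in range(n_out)]
    return [
        sum(activations[j] * weights[j][k] / z[k] * relevance_out[k]
            for k in range(n_out))
        for j in range(n_in)
    ]

# Two inputs, one output neuron: relevance is (approximately) conserved
# across the layer, and the larger contributor receives more relevance.
R_in = lrp_epsilon([1.0, 3.0], [[0.5], [0.5]], [2.0])
```

Repeating this update layer by layer, from the output back to the input, yields the pixel-level relevance maps the abstract describes.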
The models achieved Jaccard indices of 0.46 and 0.70 and accuracies of 0.84 and 0.96 on the polyp segmentation and instrument segmentation tasks, respectively.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9128 More Birds in the Hand - Medical Image Segmentation using a Multi-Model Ensemble Framework 2021-10-19T11:57:24+02:00 Yung-Han Chen g0410440@gmail.com Pei-Hsuan Kuo psh09018@gmail.com Yi-Zeng Fang objdoctor891213a@gmail.com Wei-Lin Wang molenchuchu0214@gmail.com <p>In this paper, we introduce a multi-model ensemble framework for medical image segmentation. We first collect a set of state-of-the-art models in this field and further improve them through a series of architecture refinements and a set of specific training techniques. We then integrate these fine-tuned models into a more powerful ensemble framework. Preliminary experimental results show that the proposed multi-model ensemble framework performs well on the given polyp and instrument datasets.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9130 Kvasir-Instruments and Polyp Segmentation Using UNet 2021-10-18T14:57:22+02:00 Sumit Pandey spandey@mailinator.com Arvind Keprate arvind.keprate@oslomet.no <p>This paper describes the methodology used to develop, fine-tune, and analyze a UNet model for creating masks for two datasets: Polyp Segmentation and Instrument Segmentation, which are part of the MedAI challenge.
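Dice (DSC) and Jaccard (IoU/JC) overlap scores are reported throughout these abstracts. As a generic reference (not any team's code), they can be computed on flattened binary masks as follows:

```python
# Dice (DSC) and Jaccard (IoU) overlap scores for two 0/1 masks,
# with the usual convention that two empty masks score 1.0.

def dice_jaccard(pred, target):
    """Return (Dice, Jaccard) for two flattened binary masks."""
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

dsc, jc = dice_jaccard([1, 1, 0, 0], [1, 0, 0, 0])
# Dice = 2*1/(2+1) = 2/3, Jaccard = 1/2; in general DSC = 2*JC/(1+JC).
```

This identity explains why the Dice scores quoted in this issue are always at least as high as the corresponding Jaccard scores.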
For training and validation, we used the same methodology on both tasks. On the hidden test dataset, the model achieved an accuracy of 0.9721 and a Dice score of 0.7980 on the instrument segmentation task, and an accuracy of 0.5646 and a Dice score of 0.4100 on the polyp segmentation task.</p> 2021-12-09T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9131 Employing GRU to combine feature maps in DeeplabV3 for a better segmentation model 2021-10-19T10:34:51+02:00 Mahmood Haithami mss3331@hotmail.com Amr Ahmed amrahmed@mailinator.com Iman Yi Liao imanliao@mailinator.com Hamid Jalab hamidj@mailinator.com <p>In this paper, we aim to enhance the segmentation capabilities of DeeplabV3 by employing a Gated Recurrent Unit (GRU). A 1-by-1 convolution in DeeplabV3 was replaced by a GRU after the Atrous Spatial Pyramid Pooling (ASPP) layer to combine the input feature maps. The convolution and the GRU have shareable parameters; however, the latter has gates that enable or disable the contribution of each input feature map. Experiments on unseen test sets demonstrate that employing a GRU instead of a convolution produces better segmentation results. The datasets used are public datasets provided by the MedAI competition.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9132 Transfer Learning in Polyp and Endoscopic Tool Segmentation from Colonoscopy Images 2021-10-19T11:28:40+02:00 Bjørn-Jostein Singstad b.j.singstad@fys.uio.no Nefeli Panagiota Tzavara tzavaranefeli@ieee.org <p>Colorectal cancer is one of the deadliest and most widespread types of cancer in the world. Colonoscopy is the procedure used to detect and diagnose polyps in the colon, but today's detection shows a significant error rate that affects diagnosis and treatment.
An automatic image segmentation algorithm may help doctors improve the detection rate of pathological polyps in the colon. Furthermore, segmenting endoscopic tools in images taken during colonoscopy may contribute towards robot-assisted surgery. In this study, we trained and validated segmentation models, both with and without pre-training, on two different datasets containing images of polyps and endoscopic tools. Finally, we applied the models to two separate test sets: the best polyp model achieved a Dice score of 0.857, and the best instrument model achieved a Dice score of 0.948. Moreover, we found that pre-training the models increased their performance in segmenting polyps and endoscopic tools.</p> 2021-11-16T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9136 Improving Polyp Segmentation in Colonoscopy using Deep Learning 2021-10-19T10:49:40+02:00 Saurab Rauniyar rauniyar9@gmail.com Vabesh Kumar Jha jhavabesh@gmail.com Ritika Kumari Jha ritikajha972@gmail.com Debesh Jha debesh@simula.no Ashish Rauniyar ashish.rauniyar@sintef.no <p>Colorectal cancer is one of the major causes of cancer-related deaths globally. Although colonoscopy is considered the gold standard for examination of colon polyps, there is a significant miss rate of around 22-28%. Deep learning algorithms such as convolutional neural networks can aid in detecting and describing abnormalities in the colon that clinicians might miss during endoscopic examinations. The "MedAI: Transparency in Medical Image Segmentation" competition provides an opportunity to develop accurate and automated polyp segmentation algorithms on the same dataset provided by the challenge organizer. We participate in the polyp segmentation task of the challenge and provide a solution based on the dual decoder attention network (DDANet). The DDANet is an encoder-decoder architecture built around a dual decoder attention mechanism.
Our experimental results on the organizers' dataset showed a Dice coefficient of 0.7967, a Jaccard index of 0.7220, a recall of 0.8214, a precision of 0.8359, and an accuracy of 0.9557. Our results on unseen datasets suggest that deep learning and computer vision-based methods can effectively solve automated polyp segmentation tasks.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9137 Iterative deep learning for improved segmentation of endoscopic images 2021-10-19T10:26:31+02:00 Sharib Ali sharib.ali@eng.ox.ac.uk Nikhil K Tomar tomar@mailinator.com <div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>Iterative segmentation is a unique way to prune segmentation maps initialized by faster inference techniques or even by unsupervised traditional thresholding methods. We used our previous feedback attention-based method for this work and demonstrate that, with an optimal iterative procedure, our method can reach competitive accuracies in endoscopic imaging. For this work, we have applied this segmentation strategy to polyps and instruments.</p> </div> </div> </div> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9142 Explainable U-Net model for Medical Image Segmentation 2021-10-19T12:28:23+02:00 Sahadev Poudel sahadevp093@gmail.com Sang-Woong Lee slee@gachon.ac.kr <p>In a nutshell, we propose a simple, efficient, and explainable deep learning-based U-Net algorithm for the MedAI challenge, focusing on precise segmentation of polyps and instruments and on the transparency of algorithms. We develop a straightforward encoder-decoder-based algorithm for the task above. We keep the network as simple as possible. Specifically, we focus on the input resolution and the width of the model to find the optimal settings for the network.
We perform ablation studies to cover this aspect.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9145 Segmentation of Polyp Instruments using UNet based deep learning model 2021-10-19T16:46:22+02:00 Rishav Kumar Rajak rajak@mailinator.com Ashar Mirza ashar1.iitd@gmail.com <p>In this paper, we present a UNet architecture-based deep learning method used to segment polyps and instruments from the image dataset provided in the MedAI Challenge 2021. For the polyp segmentation task, we developed a UNet-based algorithm for segmenting polyps in images taken from endoscopies. The main focus of this task is to achieve high segmentation metrics on the supplied test dataset. Similarly, for the instrument segmentation task, we developed a UNet-based algorithm for segmenting instruments present in colonoscopy videos.</p> 2021-12-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9157 Attention U-Net ensemble for interpretable polyp and instrument segmentation 2021-10-25T18:33:33+02:00 Michael Yeung mjyy2@cam.ac.uk <p>The difficulty associated with screening and treating colorectal polyps, alongside other gastrointestinal pathology, presents an opportunity to incorporate computer-aided systems. This paper develops a deep learning pipeline that accurately segments colorectal polyps and various instruments used during endoscopic procedures. To improve transparency, we leverage the Attention U-Net architecture, enabling visualisation of the attention coefficients to identify salient regions.
Moreover, we improve performance by incorporating transfer learning using a pre-trained encoder, together with test-time augmentation, softmax averaging, softmax thresholding, and connected component labeling to further refine predictions.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9171 Transformer Based Multi-model Fusion for Medical Image Segmentation 2021-10-29T18:15:38+02:00 Bo Dong bodong.cv@gmail.com Wenhai Wang wangwenhai362@smail.nju.edu.cn Jinpeng Li jipadam@gmail.com <p>We present our solutions to the MedAI challenge for all three tasks: the polyp segmentation task, the instrument segmentation task, and the transparency task. We use the same framework to process the two segmentation tasks of polyps and instruments. The key improvement over last year is new state-of-the-art vision architectures, especially transformers, which significantly outperform ConvNets on medical image segmentation tasks. Our solution consists of multiple segmentation models, and each model uses a transformer as the backbone network. We obtained the best IoU scores of 0.915 on the instrument segmentation task and 0.836 on the polyp segmentation task.
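The ensemble refinement steps named in the Attention U-Net abstract above (softmax averaging, thresholding, connected component labeling) can be sketched roughly as follows. This is a hedged illustration with assumed parameters and 4-connectivity, not the author's pipeline:

```python
# Hypothetical post-processing sketch: average per-pixel foreground
# probabilities across models, threshold, then drop tiny connected
# components that are likely spurious predictions.

def postprocess(prob_maps, h, w, threshold=0.5, min_size=2):
    """Fuse h-by-w probability maps from several models into one mask."""
    n = len(prob_maps)
    mask = [[1 if sum(m[r][c] for m in prob_maps) / n >= threshold else 0
             for c in range(w)] for r in range(h)]
    # 4-connected component labeling via iterative flood fill.
    seen = [[False] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_size:  # remove speckle components
                    for y, x in comp:
                        mask[y][x] = 0
    return mask
```

In practice the component filter would use a much larger `min_size` (or keep only the largest component), and libraries such as scikit-image provide the labeling step directly.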
Our complete solutions are available at <a href="https://github.com/dongbo811/MedAI-2021">https://github.com/dongbo811/MedAI-2021</a>.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence https://journals.uio.no/NMI/article/view/9140 MedAI: Transparency in Medical Image Segmentation 2021-10-27T12:48:27+02:00 Steven Hicks steven@simula.no Debesh Jha debesh@simula.no Vajira Thambawita vajira@simula.no Pål Halvorsen paalh@simula.no Bjørn-Jostein Singstad b.j.singstad@fys.uio.no Sachin Gaur sachin.gaur@nora.ai Klas Pettersen k.h.pettersen@nora.ai Morten Goodwin morten.goodwin@uia.no Sravanthi Parasa vaidhya209@gmail.com Thomas de Lange thomas.de.lange@gu.se Michael Riegler michael@simula.no <p>MedAI: Transparency in Medical Image Segmentation is a challenge held for the first time at the Nordic AI Meet, focusing on medical image segmentation and transparency in machine learning (ML)-based systems. We propose three tasks to meet specific gastrointestinal image segmentation challenges collected from experts within the field, including two separate segmentation scenarios and one scenario on transparent ML systems. The latter emphasizes the need for explainable and interpretable ML algorithms. We provide a development dataset for the participants to train their ML models, which are then evaluated on a concealed test dataset.</p> 2021-11-01T00:00:00+01:00 Copyright (c) 2021 Nordic Machine Intelligence