Clinical Evidence
Mandible AI

Link to paper

Layered deep learning for automatic mandibular segmentation in cone-beam computed tomography

Pieter-Jan Verhelst a b, Andreas Smolders c, Thomas Beznik c, Jeroen Meewis a b, Arne Vandemeulebroucke b, Eman Shaheen a b, Adriaan Van Gerven c, Holger Willems c, Constantinus Politis a b

a Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium

b OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium

c Relu BV, Kapeldreef 60, BE-3001, Leuven, Belgium

d Department of Dental Medicine, Karolinska Institutet, Box 4064, 141 04 Huddinge, Sweden

Abstract

Objective

To develop and validate a layered deep learning algorithm that automatically creates three-dimensional (3D) surface models of the human mandible from cone-beam computed tomography (CBCT) imaging.

Materials & methods

Two convolutional networks using a 3D U-Net architecture were combined and deployed in a cloud-based artificial intelligence (AI) model. The AI model was trained in two phases and iteratively improved to optimize the segmentation result, using 160 anonymized full-skull CBCT scans of orthognathic surgery patients (70 preoperative and 90 postoperative scans). The final AI model was then tested by assessing timing, consistency, and accuracy on a separate testing dataset of 15 pre- and 15 postoperative full-skull CBCT scans. The AI model was compared to user-refined AI segmentation (RAI) and to semi-automatic segmentation (SA), the current clinical standard. The time needed for segmentation was measured in seconds. Intra- and inter-operator consistency were assessed to check whether the segmentation protocols delivered reproducible results, using the following metrics: intersection over union (IoU), Dice similarity coefficient (DSC), Hausdorff distance (HD), absolute volume difference, and root mean square (RMS) distance. To evaluate how closely the AI and RAI results match those of the SA method, their accuracy was measured with the same metrics: IoU, DSC, HD, absolute volume difference, and RMS distance.
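
For reference, these consistency and accuracy metrics can all be computed from two binary voxel masks along the following lines. This is a minimal Python sketch, not the authors' implementation: the function name and voxel spacing are illustrative, and the Hausdorff and RMS distances here are computed over the voxel sets rather than over extracted surface meshes.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def segmentation_metrics(a, b, spacing=(0.4, 0.4, 0.4)):
    """Overlap and distance metrics between two binary masks (illustrative).

    a, b    : 3D boolean arrays of identical shape
    spacing : voxel size in mm (hypothetical value), used for volumes/distances
    """
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    iou = inter / union                          # intersection over union
    dsc = 2.0 * inter / (a.sum() + b.sum())      # Dice similarity coefficient

    voxel_volume = float(np.prod(spacing))       # mm^3 per voxel
    vol_diff = abs(int(a.sum()) - int(b.sum())) * voxel_volume

    # Distance from every voxel of one mask to the nearest voxel of the
    # other; a set-based variant, not a surface-mesh distance.
    dist_to_b = distance_transform_edt(~b, sampling=spacing)[a]
    dist_to_a = distance_transform_edt(~a, sampling=spacing)[b]
    hausdorff = max(dist_to_b.max(), dist_to_a.max())
    rms = np.sqrt(np.mean(np.concatenate([dist_to_b, dist_to_a]) ** 2))

    return {"IoU": iou, "DSC": dsc, "volume_diff_mm3": vol_diff,
            "HD_mm": hausdorff, "RMS_mm": rms}
```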

Results

On average, SA took 1218.4 s. RAI showed a significant drop (p < 0.0001) in timing to 456.5 s (a 2.7-fold decrease). The AI method took only 17 s (a 71.3-fold decrease). The average intra-operator IoU for RAI was 99.5%, compared to 96.9% for SA. For inter-operator consistency, RAI scored an IoU of 99.6%, compared to 94.6% for SA. The AI method, being fully automatic, was always consistent by default. In both the intra- and inter-operator assessments, RAI outperformed SA on all metrics, indicating better consistency. With SA as the ground truth, AI and RAI scored an IoU of 94.6% and 94.4%, respectively. All accuracy metrics were similar for AI and RAI, meaning that both methods produce 3D models that closely match those produced by SA.

Conclusion

A layered deep learning algorithm with a 3D U-Net architecture, with or without additional user refinements, improves time-efficiency, reduces operator error, and provides excellent accuracy when benchmarked against the clinical standard.

Clinical significance

Semi-automatic segmentation of CBCT imaging is time-consuming and prone to user-induced errors. Layered convolutional neural networks using a 3D U-Net architecture allow direct segmentation of high-resolution CBCT images. This approach creates 3D mandibular models in a more time-efficient and consistent way, and it is accurate when benchmarked against semi-automatic segmentation.

Graphical abstract

[Graphical abstract image]

Introduction

Cone-Beam Computed Tomography (CBCT) is a well-established imaging modality for the head and neck region [1]. With its lower radiation dose, higher spatial resolution, seated patient positioning, and lower machine investment cost, CBCT has become a prominent tool for imaging the craniomaxillofacial (CMF) bones and the dental apparatus [1], [2], [3]. It has also initiated an era of virtual treatment planning [4]. This planning relies on three-dimensional (3D) surface models, acquired through segmentation of the exported Digital Imaging and Communications in Medicine (DICOM) data. Such models are imported into virtual treatment planning software suites, where surgical treatments are simulated. To transfer the simulation to the patient, the surgeon can choose to produce surgical guides or patient-specific implants through computer-aided manufacturing methods. Surgical guides help transfer the surgical plan to the actual patient by indicating an osteotomy site or the correct position and angulation of a drilling sequence. They are used in oncological surgery, dental implant surgery, and orthognathic surgery [5]. 3D surface models of CMF structures and the dental apparatus are also used to produce patient-specific implants. These implants have seen a surge in use because they offer a good fit to a patient's existing anatomy without extensive sculpting of the recipient site. Furthermore, they are tailor-made to deliver a specific and predictable result that is sometimes hard to achieve with stock implants. Patient-specific implants have been introduced for osteosynthesis plates [6], temporomandibular joint replacement [6], custom-made meshes for bone regeneration and bone augmentation [7,8], root-analogue dental implants [9,10], and subperiosteal implants [11]. These applications require highly accurate surface models. For multi-slice CT (MSCT), this is usually achieved with thresholding and region growing, as sketched below. In CBCT data, however, the intrinsically low image contrast, the lack of Hounsfield Units, and increased noise and artifacts make this difficult, requiring substantial manual edits [12,13]. This semi-automatic (SA) approach, the current clinical standard, is characterized by long processing times and a high risk of user-induced error.
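
To make that baseline concrete, thresholding plus region growing can be sketched as follows. The function name and threshold values are illustrative only; the point is that the approach assumes calibrated intensities (Hounsfield Units), which CBCT gray values do not provide, so no fixed threshold works reliably there.

```python
import numpy as np
from scipy.ndimage import label

def threshold_and_grow(volume, seed, lower=300, upper=3000):
    """Classical MSCT-style bone segmentation: global threshold + region growing.

    volume       : 3D array of intensities (Hounsfield Units for MSCT)
    seed         : (z, y, x) voxel index known to lie inside the target bone
    lower, upper : illustrative HU window for bone
    """
    mask = (volume >= lower) & (volume <= upper)  # global intensity threshold
    labels, _ = label(mask)                       # connected-component labelling
    if labels[seed] == 0:
        raise ValueError("seed voxel lies outside the thresholded region")
    return labels == labels[seed]                 # keep only the seed's component
```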

Artificial intelligence (AI) models for image segmentation, more specifically deep learning (DL) algorithms, promise to overcome the aforementioned caveats [14,15]. AI has been defined by the American National Standard Dictionary of Information Technology as 'the capability of a device to perform functions that are normally associated with human intelligence, such as reasoning, learning and self-improvement' [16]. Machine learning is a type of AI technology frequently used in medical image analysis. It allows computers to learn the inherent statistical patterns of pairs of data (e.g., DICOM data) and annotated labels (e.g., image segmentations). This type of supervised learning allows the computer to eventually predict how a specific anatomical structure should be segmented in new cases [17]. DL is a subtype of machine learning that takes the computer's autonomy even further by layering its algorithms into artificial neural networks, mimicking the human brain. Through this layering, DL networks become even more powerful [14]. Convolutional neural networks (CNNs), the type of DL algorithm most commonly applied to image analysis, have shown promising results in image segmentation in recent years [18]. These networks were already introduced in the 1980s [19,20]. However, CNNs rely on large amounts of data, and training requires extensive computational resources, which long impeded their practical application. Starting with AlexNet in 2012 [21], the increase in available data and the surge in computing power have drawn much attention back to this field. In many computer vision applications, CNNs now outperform more classical approaches. Specifically for biomedical image segmentation, the U-Net architecture has achieved state-of-the-art performance in many applications [22]. This has led to the development of many DL algorithms based on U-Net. For CMF CBCT scans, U-Nets have been successfully applied to the segmentation of different structures [23], [24], [25]. However, current limits on graphics processing unit (GPU) memory restrict the image size that can be processed, making accurate direct segmentation of high-resolution skull images impossible and creating difficulties for large structures such as the mandible. This work proposes a novel two-step approach in which one U-Net operating on a full-size, low-resolution image is combined with a second U-Net segmenting high-resolution regions of interest.
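
At inference time, that two-step idea can be sketched roughly as follows. This is a PyTorch sketch under stated assumptions, not the authors' published code: coarse_net and fine_net stand in for the two trained 3D U-Nets, and the low-resolution size and crop margin are illustrative.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def layered_segment(volume, coarse_net, fine_net,
                    low_res=(128, 128, 128), margin=8):
    """Two-step 'layered' inference sketch.

    volume     : float tensor of shape (1, 1, D, H, W), the full CBCT scan
    coarse_net : 3D U-Net trained on downsampled full volumes (assumed)
    fine_net   : 3D U-Net trained on high-resolution crops (assumed)
    """
    # Step 1: localize the mandible on a GPU-friendly low-resolution volume.
    small = F.interpolate(volume, size=low_res, mode="trilinear",
                          align_corners=False)
    coarse = torch.sigmoid(coarse_net(small)) > 0.5

    # Map the coarse mask back onto the original grid (nearest neighbour).
    coarse_full = F.interpolate(coarse.float(), size=volume.shape[2:],
                                mode="nearest")

    # Bounding box of the detected structure, padded by a safety margin.
    idx = coarse_full[0, 0].nonzero()
    lo = (idx.min(0).values - margin).clamp(min=0).tolist()
    hi = (idx.max(0).values + margin).tolist()

    # Step 2: segment the high-resolution region of interest.  In practice
    # the crop would be padded to a size the network accepts.
    crop = volume[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = torch.sigmoid(fine_net(crop)) > 0.5

    # Paste the fine result back into a full-size binary mask.
    out = torch.zeros(volume.shape, dtype=torch.bool)
    out[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine
    return out
```

The coarse pass only needs to be good enough to find the mandible, so its resolution loss is harmless; all boundary detail comes from the fine pass, which fits in GPU memory because it sees only a crop.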

The aim of this study was to develop and validate a layered deep learning algorithm that automatically creates 3D surface models of the human mandible from a CBCT scan. The hypothesis was that such an AI model could provide accurate 3D surface models of the mandible in a more reliable and time-efficient way than the current clinical standard, SA segmentation.

Section snippets

Study design

The study design is illustrated in Figure 1. As the aim was to develop and validate an AI model for automatic mandibular segmentation, this study consisted of a development, training, and testing phase. The study was performed in accordance with the artificial intelligence in dental research checklist by Schwendicke et al. (Supplementary material S1) [17].

The development of the AI model is described in Section 2.3 Model Architecture. Training of the model was subdivided into two stages. In the …

Timing

Table 2 provides an overview of the required timing. The SA method took on average 20 minutes and 18 seconds. The AI model required 17 seconds on average to produce a binary segmentation result, a 71.3-fold decrease compared to the SA method. When the AI model was combined with user refinements and STL generation (the RAI method), mean timing increased to 456.16 seconds, or 7 minutes and 36 seconds, a 2.7-fold decrease compared to the SA method. Analysis of the studentized residuals showed …

Discussion

3D models of CMF structures play a crucial role in diagnostics, treatment planning, and patient communication. Historically, two main approaches for creating these models have been used: volume rendering and surface rendering [28]. If 3D visualization is the goal, volume rendering is the method of choice due to its higher accuracy at mixed tissue interfaces and its time-efficiency [29]. However, when interaction with the 3D model is required, as is the case in 3D printing or virtual …
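
For completeness, turning a binary segmentation into a surface model suitable for 3D printing or planning software is typically done with marching cubes. A minimal sketch follows, using scikit-image and trimesh as stand-ins; the paper does not specify its meshing toolchain, and the voxel spacing and file name are illustrative.

```python
import numpy as np
import trimesh
from skimage import measure

def mask_to_stl(mask, spacing=(0.4, 0.4, 0.4), path="mandible.stl"):
    """Convert a binary voxel mask into a triangulated STL surface model.

    mask    : 3D boolean array from the segmentation step
    spacing : voxel size in mm, so the mesh is in physical coordinates
    """
    # Marching cubes extracts the iso-surface at the mask boundary.
    verts, faces, _normals, _values = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces)
    mesh.export(path)  # output format inferred from the file extension
    return mesh
```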

CRediT authorship contribution statement

Pieter-Jan Verhelst: Conceptualization, Methodology, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Project administration. Andreas Smolders: Methodology, Software, Validation, Writing – original draft. Thomas Beznik: Methodology, Software, Validation, Formal analysis, Visualization, Writing – original draft. Jeroen Meewis: Data curation, Validation, Investigation, Writing – review & editing. Arne Vandemeulebroucke: Data curation, Validation, Writing – …

Declaration of Competing Interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

The data used for this study were retrieved from the LORTHOG Register, Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, Belgium (B322201526790). The development of the Virtual Patient Creator was made possible in part by a Development Project grant from VLAIO (Flanders Innovation & Entrepreneurship).

References (34)

  • B. Schlueter et al., Cone beam computed tomography 3D reconstruction of the mandibular condyle, Angle Orthod. (2008)
  • A. Cucchi et al., Clinical and volumetric outcomes after vertical ridge augmentation using computer-aided-design/computer-aided manufacturing (CAD/CAM) customized titanium meshes: a pilot study, BMC Oral Health (2020)
  • F. Luongo et al., Custom-made synthetic scaffolds for bone reconstruction: a retrospective, multicenter clinical study on 15 patients, Biomed Res. Int. (2016)
  • T. Dantas et al., Customized root-analogue implants: a review on outcomes from clinical trials and case reports, Materials (Basel) (2021)

Would you like to learn more?

Feel free to schedule a meeting with us.