AI-powered prostate cancer detection: a multi-centre, multi-scanner validation study

Francesco Giganti, Nadia Moreira da Silva, Michael Yeung, Lucy Davies, Amy Frary, Mirjana Ferrer Rodriguez, Nikita Sushentsev, Nicholas Ashley, Adrian Andreou, Alison Bradley, Chris Wilson, Giles Maskell, Giorgio Brembilla, Iztok Caglic, Jakub Suchánek, Jobie Budd, Zobair Arya, Jonathan Aning, John Hayes, Mark De Bono, Nikhil Vasdev, Nimalan Sanmugalingam, Paul Burn, Raj Persad, Ramona Woitek, Richard Hindley, Sidath Liyanage, Sophie Squire, Tristan Barrett, Steffi Barwick, Mark Hinton, Anwar R. Padhani, Antony Rix, Aarti Shah, Evis Sala

Research output: Contribution to journal › Article › peer-review

Abstract

Objectives: Multi-centre, multi-vendor validation of artificial intelligence (AI) software to detect clinically significant prostate cancer (PCa) using multiparametric magnetic resonance imaging (MRI) is lacking. We compared a new AI solution, validated on a separate dataset from different UK hospitals, to the original multidisciplinary team (MDT)-supported radiologist’s interpretations.
Materials and methods: A Conformité Européenne (CE)-marked deep-learning (DL) computer-aided detection (CAD) medical device (Pi) was trained to detect Gleason Grade Group (GG) ≥ 2 cancer using retrospective data from the PROSTATEx dataset and five UK hospitals (793 patients). The separate validation dataset comprised 252 patients scanned on six machines from two manufacturers across six sites. MRI scans included in the study were performed between August 2018 and October 2022. Patients with a negative MRI who did not undergo biopsy were assumed to be negative (90.4% had a prostate-specific antigen density < 0.15 ng/mL²). ROC analysis was used to compare Pi against radiologists, who used a 5-category suspicion score.
Results: GG ≥ 2 prevalence in the validation set was 31%. Evaluated per patient, Pi was non-inferior to radiologists (considering a 10% performance difference as acceptable), with an area under the curve (AUC) of 0.91 vs. 0.95. At the predetermined risk threshold of 3.5, the AI software's sensitivity was 95% and specificity 67%, while radiologists at Prostate Imaging-Reporting and Data System (PI-RADS)/Likert ≥ 3 identified GG ≥ 2 cancer with a sensitivity of 99% and specificity of 73%. Pi performed well per site (AUC ≥ 0.83) at the patient level, independent of scanner age and field strength.
Conclusion: Real-world data testing suggests that Pi matches the performance of MDT-supported radiologists in GG ≥ 2 PCa detection and generalises to multiple sites, scanner vendors, and models.
Key Points
Question: The performance of artificial intelligence-based medical tools for prostate MRI has yet to be evaluated on multi-centre, multi-vendor data to assess generalisability.
Findings: A dedicated AI medical tool matches the performance of multidisciplinary team-supported radiologists in prostate cancer detection and generalises to multiple sites and scanners.
Clinical relevance: This software has the potential to support the MRI process for biopsy decision-making and target identification, but future prospective studies, where lesions identified by artificial intelligence are biopsied separately, are needed.
Original language: English
Pages (from-to): 1-10
Number of pages: 10
Journal: European Radiology
Early online date: 28 Feb 2025
DOIs
Publication status: E-pub ahead of print - 28 Feb 2025
