TY - JOUR
T1 - TEMSET-24K: Densely Annotated Dataset for Indexing Multipart Endoscopic Videos using Surgical Timeline Segmentation
AU - Bilal, Muhammad
AU - Alam, Mahmood
AU - Bapu, Deepashree
AU - Korsgen, Stephan
AU - Lal, Neeraj
AU - Bach, Simon
AU - Hajiyavand, Amir M.
AU - Ali, Muhammed
AU - Soomro, Kamran
AU - Qasim, Iqbal
AU - Capik, Paweł
AU - Khan, Aslam
AU - Khan, Zaheer
AU - Vohra, Hunaid
AU - Caputo, Massimo
AU - Beggs, Andrew D.
AU - Qayyum, Adnan
AU - Qadir, Junaid
AU - Ashraf, Shazad Q.
N1 - © The Author(s). This is an open access article distributed under the Creative Commons Attribution License. To view a copy of the license, see: https://creativecommons.org/licenses/by/4.0/
PY - 2025/8/14
Y1 - 2025/8/14
N2 - Indexing endoscopic surgical videos is vital in surgical data science, forming the basis for systematic retrospective analysis and clinical performance evaluation. Despite its significance, current video analytics rely on manual indexing, a time-consuming process. Advances in computer vision, particularly deep learning, offer automation potential, yet progress is limited by the lack of publicly available, densely annotated surgical datasets. To address this, we present TEMSET-24K, an open-source dataset comprising 24,306 trans-anal endoscopic microsurgery (TEMS) video microclips. Each clip is meticulously annotated by clinical experts using a novel hierarchical labeling taxonomy encompassing “phase, task, and action” triplets, capturing intricate surgical workflows. To validate this dataset, we benchmarked deep learning models, including transformer-based architectures. Our in silico evaluation demonstrates high accuracy (up to 0.99) and F1 scores (up to 0.99) for key phases like “Setup” and “Suturing.” The STALNet model, tested with ConvNeXt, ViT, and SWIN V2 encoders, consistently segmented well-represented phases. TEMSET-24K provides a critical benchmark, propelling state-of-the-art solutions in surgical data science.
U2 - 10.1038/s41597-025-05646-w
DO - 10.1038/s41597-025-05646-w
M3 - Article
SN - 2052-4463
VL - 12
JO - Scientific Data
JF - Scientific Data
IS - 1
M1 - 1424
ER -