

Standard

The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain. / Ragusa, Francesco; Furnari, Antonio; Livatino, Salvatore; Farinella, Giovanni.

2021. Paper presented at 2021 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, Hawaii, United States.

Research output: Contribution to conference › Paper › peer-review

Harvard

Ragusa, F, Furnari, A, Livatino, S & Farinella, G 2021, 'The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain.', Paper presented at 2021 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, Hawaii, United States, 5/01/21 - 9/01/21. <https://arxiv.org/abs/2010.05654>

APA

Ragusa, F., Furnari, A., Livatino, S., & Farinella, G. (2021). The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain. Paper presented at 2021 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, Hawaii, United States. https://arxiv.org/abs/2010.05654

Vancouver

Ragusa F, Furnari A, Livatino S, Farinella G. The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain. 2021. Paper presented at 2021 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, Hawaii, United States.

Author

Ragusa, Francesco ; Furnari, Antonio ; Livatino, Salvatore ; Farinella, Giovanni. / The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain. Paper presented at 2021 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, Hawaii, United States.

Bibtex

@conference{f28b5f0fea2941a399d94a7aec45e716,
title = "The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain.",
abstract = "Wearable cameras allow us to collect images and videos of humans interacting with the world. While human-object interactions have been thoroughly investigated in third-person vision, the problem has been understudied in egocentric settings and in industrial scenarios. To fill this gap, we introduce MECCANO, the first dataset of egocentric videos to study human-object interactions in industrial-like settings. MECCANO has been acquired by 20 participants who were asked to build a motorbike model, for which they had to interact with tiny objects and tools. The dataset has been explicitly labeled for the task of recognizing human-object interactions from an egocentric perspective. Specifically, each interaction has been labeled both temporally (with action segments) and spatially (with active object bounding boxes). With the proposed dataset, we investigate four different tasks: 1) action recognition, 2) active object detection, 3) active object recognition and 4) egocentric human-object interaction detection, which is a revisited version of the standard human-object interaction detection task. Baseline results show that the MECCANO dataset is a challenging benchmark to study egocentric human-object interactions in industrial-like scenarios. We publicly release the dataset at https://iplab.dmi.unict.it/MECCANO.",
author = "Francesco Ragusa and Antonio Furnari and Salvatore Livatino and Giovanni Farinella",
year = "2021",
month = jan,
day = "9",
language = "English",
note = "2021 IEEE Winter Conference on Applications of Computer Vision, WACV ; Conference date: 05-01-2021 Through 09-01-2021",
url = "http://wacv2021.thecvf.com/home",
}

RIS

TY - CONF

T1 - The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain.

AU - Ragusa, Francesco

AU - Furnari, Antonio

AU - Livatino, Salvatore

AU - Farinella, Giovanni

PY - 2021/1/9

Y1 - 2021/1/9

N2 - Wearable cameras allow us to collect images and videos of humans interacting with the world. While human-object interactions have been thoroughly investigated in third-person vision, the problem has been understudied in egocentric settings and in industrial scenarios. To fill this gap, we introduce MECCANO, the first dataset of egocentric videos to study human-object interactions in industrial-like settings. MECCANO has been acquired by 20 participants who were asked to build a motorbike model, for which they had to interact with tiny objects and tools. The dataset has been explicitly labeled for the task of recognizing human-object interactions from an egocentric perspective. Specifically, each interaction has been labeled both temporally (with action segments) and spatially (with active object bounding boxes). With the proposed dataset, we investigate four different tasks: 1) action recognition, 2) active object detection, 3) active object recognition and 4) egocentric human-object interaction detection, which is a revisited version of the standard human-object interaction detection task. Baseline results show that the MECCANO dataset is a challenging benchmark to study egocentric human-object interactions in industrial-like scenarios. We publicly release the dataset at https://iplab.dmi.unict.it/MECCANO.

AB - Wearable cameras allow us to collect images and videos of humans interacting with the world. While human-object interactions have been thoroughly investigated in third-person vision, the problem has been understudied in egocentric settings and in industrial scenarios. To fill this gap, we introduce MECCANO, the first dataset of egocentric videos to study human-object interactions in industrial-like settings. MECCANO has been acquired by 20 participants who were asked to build a motorbike model, for which they had to interact with tiny objects and tools. The dataset has been explicitly labeled for the task of recognizing human-object interactions from an egocentric perspective. Specifically, each interaction has been labeled both temporally (with action segments) and spatially (with active object bounding boxes). With the proposed dataset, we investigate four different tasks: 1) action recognition, 2) active object detection, 3) active object recognition and 4) egocentric human-object interaction detection, which is a revisited version of the standard human-object interaction detection task. Baseline results show that the MECCANO dataset is a challenging benchmark to study egocentric human-object interactions in industrial-like scenarios. We publicly release the dataset at https://iplab.dmi.unict.it/MECCANO.

M3 - Paper

T2 - 2021 IEEE Winter Conference on Applications of Computer Vision

Y2 - 5 January 2021 through 9 January 2021

ER -