Scaling up SoccerNet with multi-view spatial localization and re-identification

Anthony Cioppa, Adrien Deliège, Silvio Giancola, Bernard Ghanem, Marc Van Droogenbroeck

Research output: Contribution to journal › Article › peer-review



Soccer videos are a rich playground for computer vision, involving many elements, such as players, lines, and specific objects. Hence, to capture the richness of this sport and allow for fine automated analyses, we release SoccerNet-v3, a major extension of the SoccerNet dataset, providing a wide variety of spatial annotations and cross-view correspondences. SoccerNet’s broadcast videos contain replays of important actions, allowing us to retrieve the same action from different viewpoints. We annotate those live and replay action frames showing the same moments with exhaustive local information. Specifically, we label lines, goal parts, players, referees, teams, salient objects, and jersey numbers, and we establish player correspondences between the views. This yields 1,324,732 annotations on 33,986 soccer images, making SoccerNet-v3 the largest dataset for multi-view soccer analysis. These annotations benefit derived tasks such as camera calibration, player localization, team discrimination, and multi-view re-identification, which can in turn sustain practical applications in augmented reality and soccer analytics. Finally, we provide Python code to easily download our data and access our annotations.
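Since the abstract mentions Python code for downloading the data, here is a minimal, hedged sketch of what that might look like, assuming the publicly available SoccerNet pip package ("pip install SoccerNet") and its SoccerNetDownloader class; the exact file names and argument values shown (e.g., "Labels-v3.json", the split names) are assumptions and may differ from the official instructions.

```python
# Minimal illustrative sketch; assumes the SoccerNet pip package ("pip install SoccerNet").
from SoccerNet.Downloader import SoccerNetDownloader

# Point the downloader at a local directory where the dataset will be stored.
downloader = SoccerNetDownloader(LocalDirectory="path/to/SoccerNet")

# Download the SoccerNet-v3 spatial annotations for each split.
# The file name "Labels-v3.json" and the split names are assumptions here.
downloader.downloadGames(
    files=["Labels-v3.json"],
    split=["train", "valid", "test"],
)
```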
Original language: English (US)
Journal: Scientific Data
Issue number: 1
State: Published - Jun 21 2022

Bibliographical note

KAUST Repository Item: Exported on 2022-06-23
Acknowledgements: This work was supported by the Service Public de Wallonie (SPW) Recherche under the DeepSport project and Grant No. 2010235 (ARIAC), by the FRIA, and by the KAUST Office of Sponsored Research through the Visual Computing Center (VCC) funding.


