http://opendata.unex.es/recurso/ciencia-tecnologia/investigacion/publicaciones/Publicacion/2024-1476

Literals

  • ou:urlOrcid
  • dcterms:title
    • Multi-person 3D pose estimation from unlabelled data
  • dcterms:publisher
    • Machine Vision and Applications
  • bibo:eissn
    • 1432-1769
  • dcterms:creator
    • Rodriguez-Criado D.
  • fabio:hasPublicationYear
    • 2024
  • ou:eid
    • 2-s2.0-85189472584
  • bibo:doi
    • 10.1007/s00138-024-01530-6
  • bibo:issn
    • 0932-8092
  • dcterms:contributor
    • Daniel Rodriguez-Criado, Pilar Bachiller-Burgos, Luis J. Manso, George Vogiatzis
  • ou:tipoPublicacion
    • Article
  • vivo:identifier
    • 2024-1476
  • ou:bibtex
    • @article{dfe497eb3dcb42b58883bbb7d9e86394,
        title = {Multi-person 3D pose estimation from unlabelled data},
        abstract = {Its numerous applications make multi-human 3D pose estimation a remarkably impactful area of research. Nevertheless, it presents several challenges, especially when approached using multiple views and regular RGB cameras as the only input. First, each person must be uniquely identified in the different views. Secondly, it must be robust to noise, partial occlusions, and views where a person may not be detected. Thirdly, many pose estimation approaches rely on environment-specific annotated datasets that are frequently prohibitively expensive and/or require specialised hardware. In this work, we address these three challenges with the help of self-supervised learning. Specifically, this is the first multi-camera, multi-person data-driven approach that does not require an annotated dataset. In particular, we present a three-staged pipeline and a rigorous evaluation providing evidence that our approach performs faster than other state-of-the-art algorithms, with comparable accuracy, and, most importantly, does not require annotated datasets. The pipeline is composed of a 2D skeleton detection step, followed by a Graph Neural Network to estimate cross-view correspondences of the people in the scenario, and a Multi-Layer Perceptron that transforms the 2D information into 3D pose estimations. Our proposal comprises the last two steps, and it is compatible with any 2D skeleton detector as input. These two models are trained in a self-supervised manner, thus avoiding the need for datasets annotated with 3D ground-truth poses.},
        keywords = {3D multi-pose estimation, Skeleton matching, Deep learning, Graph neural networks, Self-supervised learning},
        author = {Daniel Rodriguez-Criado and Pilar Bachiller-Burgos and Manso, {Luis J.} and George Vogiatzis},
        note = {Copyright {\textcopyright} The Author(s), 2024. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article{\textquoteright}s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article{\textquoteright}s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.},
        year = {2024},
        month = apr,
        day = {6},
        doi = {10.1007/s00138-024-01530-6},
        language = {English},
        volume = {35},
        journal = {Machine Vision and Applications},
        issn = {0932-8092},
        publisher = {Springer},
      }
  • vcard:url
  • ou:urlScopus
  • ou:vecesCitado
    • 0
  • bibo:volume
    • 35
