Fully end-to-end composite recurrent convolution network for deformable facial tracking in the wild

  • Authors
  • Aspandi D, Martínez O, Sukno FM, Binefa X
  • UPF authors
  • SUKNO, FEDERICO; MARTINEZ PUJOL, ORIOL; BINEFA VALLS, XAVIER; LATIF, DECKY ASPANDI
  • Authors of the book
  • -
  • Book title
  • 14th IEEE International Conference on Automatic Face & Gesture Recognition, FG 2019
  • Publisher
  • IEEE
  • Publication year
  • 2019
  • Pages
  • 115-122
  • ISBN
  • 978-1-7281-0089-0
  • Abstract
  • Human facial tracking is an important task in computer vision which has recently lost pace compared to other facial analysis tasks. Most currently available trackers have two major limitations: limited use of temporal information and widespread reliance on handcrafted features, without taking full advantage of the large annotated datasets that have recently become available. In this paper we present a fully end-to-end facial tracking model based on current state-of-the-art deep architectures that can be effectively trained from the available annotated facial landmark datasets. We build our model on the recently introduced general object tracker Re3, which models short- and long-term temporal dependencies between frames by means of its internal Long Short-Term Memory (LSTM) layers. Facial tracking experiments on the challenging 300-VW dataset show that our model produces state-of-the-art accuracy and far lower failure rates than competing approaches. We specifically compare the performance of our approach modified to work in tracking-by-detection mode and show that, as such, it produces results comparable to state-of-the-art trackers. However, upon activation of our tracking mechanism, the results improve significantly, confirming the advantage of taking temporal dependencies into account.
  • Complete citation
  • Aspandi D, Martínez O, Sukno FM, Binefa X. Fully end-to-end composite recurrent convolution network for deformable facial tracking in the wild. In: 14th IEEE International Conference on Automatic Face & Gesture Recognition, FG 2019. Lille: IEEE; 2019. p. 115-122.
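The abstract's central claim, that carrying temporal state across frames improves on independent per-frame detection, can be illustrated with a toy sketch. This is not the paper's Re3/LSTM model: it simulates noisy per-frame landmark positions and uses a simple exponential moving average as a crude, hypothetical stand-in for recurrent memory; all names and parameters below are illustrative assumptions.

```python
import random

def per_frame_detections(true_pos, noise=5.0, seed=0):
    # Simulated per-frame detector output: true landmark x-coordinate
    # plus independent uniform noise on every frame.
    rng = random.Random(seed)
    return [p + rng.uniform(-noise, noise) for p in true_pos]

def track_by_detection(detections):
    # Tracking-by-detection baseline: no temporal state, each frame's
    # output is just that frame's raw detection.
    return list(detections)

def track_with_memory(detections, alpha=0.3):
    # Toy "recurrent" tracker: a hidden state carried across frames
    # (exponential moving average), loosely mimicking how an LSTM
    # accumulates temporal context. Not the authors' architecture.
    state = detections[0]
    out = []
    for d in detections:
        state = alpha * d + (1 - alpha) * state
        out.append(state)
    return out

def mean_abs_error(pred, truth):
    # Average absolute deviation from the true trajectory.
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)
```

On a static target, the stateful tracker averages out per-frame noise and yields a lower error than the frame-independent baseline, which is the qualitative effect the paper reports when its tracking mechanism is activated.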
Bibliometric indicators
  • Cited 8 times in Scopus
  • Scimago index: 0