EUTOPIA-SIF Post-Doctoral Fellowship – Call for applications 2021/2022

The EUTOPIA European University has launched the second call for applications of the MSCA COFUND “EUTOPIA Science and Innovation Post-Doctoral Fellowship Programme” (EUTOPIA-SIF), for the recruitment of 21 post-doctoral fellows by the EUTOPIA universities.

Extensive research mobility is integral to the fellowships, with two compulsory secondment periods: one at another EUTOPIA university (co-host university) and one with an external academic or non-academic partner institution, with the aim of fostering the fellows’ entrepreneurial spirit, tangible research impact and innovation. Furthermore, fellows will have access to a rich training programme, career guidance and academic supervision.
 
Application platform (deadline: 10 January 2022, 13:00 CET)
 

New research support position:
  • Research support position on deep learning-based sports video analysis:

Open Call (deadline: 11 November 2020)

  • Job Description:

Research, develop, test and validate methods for the analysis of sports video.

The group is seeking a research assistant who will collaborate in a national project (MICINN/FEDER U project, reference PGC2018-098625-B-I00) in which we work towards the automatic understanding of a 3D dynamic scene from a video sequence of it. The goal is to segment the dynamic scene into its different objects, their individual sounds and their trajectories, infer the depth ordering of the scene over time (and thus the occlusions and disocclusions), complete the occluded objects, and generate new dynamic content.

The tasks associated with this position lie in the context of semantic sport analysis: in particular, the review, implementation (or adaptation where needed) and evaluation of state-of-the-art deep learning and computer vision methods to tackle several problems in the automatic analysis of soccer videos.
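
As a purely illustrative sketch (not a task specified in the call), one common building block for this kind of analysis is an off-the-shelf person detector applied to video frames to localize players. The snippet below uses a pretrained torchvision Faster R-CNN; the clip filename, frame sampling rate and score threshold are placeholders.

    # Hypothetical illustration only: localizing players in soccer frames with an
    # off-the-shelf torchvision detector. Clip path, sampling and threshold are placeholders.
    import torch
    import torchvision
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    # read_video returns the frames as a (T, H, W, C) uint8 tensor.
    frames, _, _ = torchvision.io.read_video("match_clip.mp4", pts_unit="sec")

    with torch.no_grad():
        for t in range(0, frames.shape[0], 30):               # sample roughly one frame per second
            img = frames[t].permute(2, 0, 1).float() / 255.0  # to (C, H, W) in [0, 1]
            det = model([img])[0]
            keep = (det["labels"] == 1) & (det["scores"] > 0.8)  # COCO class 1 = person
            print(f"frame {t}: {int(keep.sum())} players", det["boxes"][keep].tolist())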


New research support position:
  • Research support position on deep learning-based computer vision:

Open Call (deadline: 15 October 2020)

  • Job Description:

Research, develop, test and validate methods for the joint inpainting of motion and dynamic shapes in video.

The group is seeking a research assistant who will collaborate in a national project (MICINN/FEDER U project, reference PGC2018-098625-B-I00) in which we work towards the automatic understanding of a 3D dynamic scene from a video sequence of it. The goal is to segment the dynamic scene into its different objects, their individual sounds and their trajectories, infer the depth ordering of the scene over time (and thus the occlusions and disocclusions), complete the occluded objects, and generate new dynamic content.

The first task is the review, implementation (or adaptation where needed) and evaluation of state-of-the-art deep learning methods that tackle the problem of joint inpainting of motion and dynamic shapes. A second, related task is the further research and development of a method developed within the group for the automatic inpainting of an incomplete optical flow by means of a deep learning strategy.
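
For illustration only, and not the group’s method: the sketch below shows what a deep-learning-based flow inpainting component could look like, namely a small PyTorch encoder-decoder that takes an incomplete flow field plus a validity mask and regresses the completed flow, trained with an endpoint error restricted to the missing pixels. The architecture, loss and tensor shapes are all assumptions.

    # Minimal, hypothetical sketch of deep-learning-based optical flow inpainting.
    # Not the group's method: architecture, loss and shapes are assumptions.
    import torch
    import torch.nn as nn

    class FlowInpainter(nn.Module):
        """Encoder-decoder mapping (incomplete flow, validity mask) -> completed flow."""
        def __init__(self):
            super().__init__()
            # Input: 2 flow channels (u, v) + 1 binary mask channel.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),  # completed (u, v)
            )

        def forward(self, flow, mask):
            # flow: (B, 2, H, W), unreliable where mask == 0; mask: (B, 1, H, W)
            x = torch.cat([flow * mask, mask], dim=1)
            return self.decoder(self.encoder(x))

    def masked_epe_loss(pred, target, mask):
        """Average endpoint error, evaluated only on the missing (mask == 0) pixels."""
        missing = 1.0 - mask
        epe = torch.norm(pred - target, dim=1, keepdim=True)  # per-pixel flow error
        return (epe * missing).sum() / missing.sum().clamp(min=1.0)

    # Toy usage on random data, just to show the tensor shapes involved.
    model = FlowInpainter()
    flow = torch.randn(1, 2, 64, 64)
    mask = (torch.rand(1, 1, 64, 64) > 0.3).float()   # 1 = observed, 0 = missing
    completed = model(flow, mask)
    loss = masked_epe_loss(completed, flow, mask)
    loss.backward()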


Open PhD position:
  • New PhD INPhINIT “la Caixa” fellowship on Multimodal Geometric and Semantic Scene Understanding:

Open Call (deadline: 4 February 2020)

  • Research Description:

Multimodal information is everywhere. Moreover, the population is no longer a mere spectator or consumer of this type of information, but has become a producer of digital contents that are frequently captured with mobile devices and shared over the Internet. Technology users expect to have at their disposal automatic tools that analyse the contents of a video not only at the geometric level but also at the semantic one. Furthermore, video contains multiple modalities of information, such as text, audio and visual data, and each modality complements or reinforces the semantic content of the others. This project is aligned with this challenge.

The aim of the project is the automatic understanding of a 3D dynamic scene from a video sequence of it, and the use of this understanding in different applications; in particular, obtaining the geometric and semantic multimodal description of the recorded scene. We are particularly interested in the self-supervised learning of spatio-temporal features from a video sequence, using multimodal information such as audio, text and visual data.

This research has multiple applications in different scenarios, such as cross-modal transfer and cross-modal search (i.e. transferring one modality, e.g. audio or text, from one video to another, or searching for video content related to a specific audio query); the recovery of lost parts of one modality by leveraging the information in the other modalities; the detection of anomalous events in any of the data modalities or in the whole video; and the anticipation of future actions.
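
As a hedged sketch of one common route to such self-supervised multimodal features (not a method prescribed by the project), the snippet below aligns audio and visual clip embeddings in a shared space with an InfoNCE-style contrastive loss; cross-modal search then reduces to nearest-neighbour ranking in that space. The encoders, embedding dimension and temperature are assumptions, and random tensors stand in for real features.

    # Hypothetical sketch of a contrastive audio-visual alignment objective,
    # one common route to self-supervised multimodal features. All choices
    # (encoders, embedding size, temperature) are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ClipEncoder(nn.Module):
        """Stand-in encoder: maps flattened clip features to a unit-norm embedding."""
        def __init__(self, in_dim, emb_dim=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

        def forward(self, x):
            return F.normalize(self.net(x), dim=-1)

    def contrastive_loss(vis_emb, aud_emb, temperature=0.07):
        """InfoNCE-style loss: the matching audio/visual pair in the batch is the positive."""
        logits = vis_emb @ aud_emb.t() / temperature          # (B, B) similarity matrix
        targets = torch.arange(vis_emb.size(0))               # diagonal entries are true pairs
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

    # Toy usage with random features standing in for real video frames and audio spectrograms.
    vis_enc, aud_enc = ClipEncoder(in_dim=512), ClipEncoder(in_dim=128)
    vis_emb = vis_enc(torch.randn(8, 512))
    aud_emb = aud_enc(torch.randn(8, 128))
    loss = contrastive_loss(vis_emb, aud_emb)
    loss.backward()

    # Cross-modal search: rank video clips by similarity to an audio query embedding.
    query = aud_emb[0]
    scores = vis_emb @ query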

Applications are managed through the programme website. Start your application here and search by project title / job position title:

Multimodal Geometric and Semantic Scene Understanding (Dr. Coloma Ballester)

 

Postdoc positions: 

  • Ramón y Cajal: Call

Deadline: 14 January 2020.

  • Beatriu de Pinós: Call

Deadline: 1 February 2020.

  • Contact us for additional information.

 

Research support position: 

  • Our group is seeking a research assistant who will collaborate in a national project (MICINN/FEDER U project, reference PGC2018-098625-B-I00) in which we work towards the automatic understanding of a 3D dynamic scene from a video sequence of it. The goal is to segment the dynamic scene into its different objects, their individual sounds and their trajectories, infer the depth ordering of the scene over time (and thus the occlusions and disocclusions), complete the occluded objects, and generate new dynamic content.
  • Call and job details

Deadline: 7 February 2020.