Postdoc position in deep learning-based methods and tools for video matting, funded by the EMERALD European project

 
IPR (Postdoctoral Project Researcher): research, develop, test and validate deep learning-based methods and tools for video matting, video content analysis and understanding
 

We are looking for a qualified post-doctoral fellow to perform research in computer vision and deep learning models applied to video post-production. This research is part of an EU-funded project on Artificial Intelligence and Process Automation for Sustainable Entertainment and Media, namely the EMERALD project, Grant agreement ID 101119800, call HORIZON-CL4-2022-DIGITAL-EMERGING-02 (https://cordis.europa.eu/project/id/101119800).

The goal is to develop deep-learning-based tools for high-quality automatic video matting under non-optimal capture conditions and settings, without the need for a trimap. The tools will allow the integration of remote presenters or performers into virtual scenes and sets for broadcast/streaming media. The candidate will work in close collaboration with researchers at the university and at a leading company that produces real-time 3D graphics and virtual set solutions for broadcasters and film producers. The contract duration is two years, and the research will be conducted at Universitat Pompeu Fabra, Barcelona, Spain.
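At its core, matting rests on the compositing equation I = αF + (1 − α)B; a trimap-free deep model must predict the alpha matte α (and often the foreground F) directly from the input frame. As a hedged illustration of the compositing step only (the function name and array shapes are our own, not part of the project):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Alpha-compositing equation at the core of video matting:
    I = alpha * F + (1 - alpha) * B, applied per pixel.
    fg, bg: (H, W, 3) float arrays in [0, 1]; alpha: (H, W) matte in [0, 1]."""
    a = alpha[..., None]          # broadcast the matte over the colour channels
    return a * fg + (1.0 - a) * bg

# A matting network predicts `alpha` (and optionally F) from the input frame;
# compositing then places the extracted presenter into a virtual set.
```

In practice the hard part is estimating a clean α for hair, motion blur and shadows under uncontrolled lighting; the compositing itself is this one line of arithmetic.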

Application (deadline: February 28, 2024, 13:00 CET)
 

 

EUTOPIA-SIF Post-Doctoral Fellowship – Call for applications 2021/2022

 


The EUTOPIA European University has launched the second call for applications of the MSCA COFUND “EUTOPIA Science and Innovation Post-Doctoral Fellowship Programme” (EUTOPIA-SIF), for the recruitment of 21 post-doctoral fellows by the EUTOPIA universities.

Extensive research mobility is integral to the fellowships, with two compulsory secondment periods: one at another EUTOPIA university (co-host university) and one with an external academic or non-academic partner institution, with the aim of fostering the fellows’ entrepreneurial spirit, tangible research impact and innovation. Furthermore, fellows will have access to a rich training programme, career guidance and academic supervision.
 
Application platform (deadline: 10 January 2022, 13:00 CET)


New research support position:

  • Research support position on deep learning-based sports video analysis:

Open Call (deadline: November 11, 2020)

  • Job Description:

Research, develop, test and validate methods for the analysis of sports video.

The group is seeking a research assistant to collaborate in a national project (MICINN/FEDER U project, reference PGC2018-098625-B-I00) in which we work towards the automatic understanding of a 3D dynamic scene from a video sequence of it. The goal is to segment the dynamic scene into different objects, their individual sounds and their trajectories; infer the depth ordering of the scene over time (and thus the occlusions and disocclusions); complete the occluded objects; and generate new dynamic content.

The tasks associated with this position are in the context of semantic sports analysis. In particular, they comprise the review, implementation (or adaptation where needed), and evaluation of state-of-the-art deep learning and computer vision methods to tackle several problems in the automatic analysis of soccer videos.


New research support position:

  • Research support position on deep learning-based computer vision:

Open Call (deadline: October 15, 2020)

  • Job Description:

Research, develop, test and validate methods for the joint inpainting of motion and dynamic shapes in video.

The group is seeking a research assistant to collaborate in a national project (MICINN/FEDER U project, reference PGC2018-098625-B-I00) in which we work towards the automatic understanding of a 3D dynamic scene from a video sequence of it. The goal is to segment the dynamic scene into different objects, their individual sounds and their trajectories; infer the depth ordering of the scene over time (and thus the occlusions and disocclusions); complete the occluded objects; and generate new dynamic content.

The first task is the review, implementation (or adaptation where needed), and evaluation of state-of-the-art deep learning methods that tackle the problem of joint inpainting of motion and dynamic shapes. A second, related task is the further research and development of a method created within the group for the automatic inpainting of an incomplete optical flow by means of a deep learning strategy.
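To make the problem setting concrete, here is a minimal non-learned baseline for optical-flow inpainting: unknown flow vectors are filled by diffusing the known neighbouring values. This is a toy sketch of the task (names and the diffusion scheme are our own choices), not the group's deep learning method:

```python
import numpy as np

def inpaint_flow(flow, mask, iters=200):
    """Toy diffusion-based inpainting of an incomplete optical flow field.
    flow: (H, W, 2) flow array; mask: (H, W) bool, True where flow is known.
    Unknown values are filled by repeatedly averaging the 4-neighbourhood,
    a classical baseline that a learned method would aim to improve on."""
    out = flow.copy()
    out[~mask] = 0.0                      # initialise the holes
    for _ in range(iters):
        # average of the four neighbours, with replicated borders
        p = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        out[~mask] = avg[~mask]           # only unknown pixels are updated
    return out
```

A deep learning strategy replaces this hand-crafted diffusion with a network trained to predict plausible motion inside the hole, jointly with the dynamic shapes it belongs to.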


Open PhD position:

  • New PhD INPhINIT “la Caixa” fellowship on Multimodal Geometric and Semantic Scene Understanding:

Open Call (deadline: February 4, 2020)

  • Research Description:

Multimodal information is everywhere. Moreover, the population is no longer a mere spectator or consumer of this type of information but has become a producer of digital content that is frequently captured with mobile devices and shared over the Internet. Technology users expect to have at their disposal automatic tools that analyse the contents of a video not only at the geometric level but also at the semantic one. Furthermore, video contains multiple modalities of information, such as text, audio and visual data, and each modality can complement or reinforce the semantic content of the others. This project is aligned with this challenge.

The project aims at the automatic understanding of a 3D dynamic scene from a video sequence of it, and at using this understanding for different applications; in particular, at obtaining the geometric and semantic multimodal description of the recorded scene. We are particularly interested in the self-supervised learning of spatio-temporal features from a video sequence using multimodal information such as audio, text and visual data.

This research has multiple applications in different scenarios, such as cross-modal transfer and cross-modal search, i.e., transferring one modality (e.g., audio or text) from one video to another, or searching for video content related to a specific audio query; the recovery of lost parts of one modality by leveraging the information in the others; the detection of anomalous events in any of the data modalities or in the whole video; and the anticipation of future actions.
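Cross-modal search of this kind typically reduces to nearest-neighbour retrieval in a shared embedding space. A minimal sketch of the retrieval step, assuming encoders that map audio/text queries and videos into the same space already exist (the function and variable names are illustrative, not from the project):

```python
import numpy as np

def cross_modal_search(query_emb, video_embs):
    """Rank videos by cosine similarity between a query embedding (e.g. from
    an audio or text encoder) and video embeddings living in a shared
    multimodal space learned with self-supervision.
    query_emb: (D,) vector; video_embs: (N, D) matrix, one row per video."""
    q = query_emb / np.linalg.norm(query_emb)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    scores = v @ q                 # cosine similarity of each video to the query
    return np.argsort(-scores)     # video indices, best match first
```

The research challenge lies in learning the shared space itself, e.g. with self-supervised contrastive objectives across the audio, text and visual streams; the retrieval on top of it is as simple as shown.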

Applications are managed through the programme website. Start your application here and search by project title / job position title:

Multimodal Geometric and Semantic Scene Understanding (Dr. Coloma Ballester)

 

Postdoc positions: 

  • Ramón y Cajal: Call

Deadline: 14 January 2020.


  • Beatriu de Pinós: Call

Deadline: 1 February 2020.

  • Contact us for additional information.

 

Research support position: 

  • Our group is seeking a research assistant to collaborate in a national project (MICINN/FEDER U project, reference PGC2018-098625-B-I00) in which we work towards the automatic understanding of a 3D dynamic scene from a video sequence of it. The goal is to segment the dynamic scene into different objects, their individual sounds and their trajectories; infer the depth ordering of the scene over time (and thus the occlusions and disocclusions); complete the occluded objects; and generate new dynamic content.
  • Call and job details

Deadline: 2 March 2020.