Past Research Seminars
February  
February, 21st
 
15:30 h

 

Room 55.309
Invited Research Seminar
 
Grades in the Machine: What Machine Learning Means for Cognitive Models
 
By Charles Lang, Columbia University 
 
Abstract
 
Regardless of the specific algorithmic methodology employed, building an intelligent agent involves reducing complex data to a lower-dimensional representation that the machine can use to make predictions about the world. For an autonomous vehicle, the machine must reduce complex sensor inputs into representations of its physical surroundings. Within education technology applications, this means creating representations of students from student data. As these models become more sophisticated and begin modeling cognitive ability and other psycho-social constructs, it is important that we ask critical questions about the use and meaning of these machine representations within educational contexts. Machine representations have substantial similarities with older data representations of students, such as grades, standardized tests and rubric scoring, but differ in one important way: the number of dimensions on which inferences may be based. High-dimensional representations may create problems for educational organizations (for example, they are not human-interpretable), while at the same time the new representations do not solve any of the well-documented social, cultural, political and pedagogical tensions inherent in older formats.
 
Biography
 
Charles Lang is Visiting Assistant Professor in Learning Analytics at Teachers College, Columbia University, where he is co-Director of the Master of Science in Learning Analytics. His research interests center on the use of big data in education and the role of online assessment data in understanding student learning. Specifically, Charles studies innovative methodologies for assessing student learning (predictive analytics, personalization and graphical models of knowledge) and how these new tools can be incorporated into instructional workflows.
 
Host: Davínia Hernández-Leo
February, 22nd
 
14:30 h

 

Room 55.309

PhD Research Seminar

euCanSHare: Bypassing the long and winding road towards “true” big data in biology, medicine and beyond

By Karim Lekadir

Abstract

The big data revolution continues to have a transformative effect on research and innovation in a wide range of scientific and societal domains. In computer vision, for example, databases such as ImageNet now include tens of millions of images from tens of thousands of semantic categories, leading every year to important methodological advances and technological applications. However, in other domains such as biomedicine, the promise of big data faces ethical/legal, operational and financial constraints, which have made it very difficult to establish large-scale research databases covering multiple data types and populations. In this talk, I will first present the euCanSHare H2020 project, which aims to address the lack of large heterogeneous databases (including biological, imaging and clinical data) by developing the information technology tools and data science algorithms needed to integrate and co-analyse multiple smaller databases, thus totalling an unprecedented 1,000,000 records. I will also describe the computational challenges, and the solutions under investigation, for enabling automated and robust large-scale, multi-type and multi-cohort data analysis at an unprecedented scale. I will list emerging opportunities that the euCanSHare project will offer for personalised medicine and translational research. I will conclude with future perspectives on integrative data science at UPF for bypassing the long and winding road towards “true” big data in biology, medicine and beyond.

Biography

Dr Karim Lekadir is a Ramón y Cajal researcher at the Barcelona Centre for New Medical Technologies, Universitat Pompeu Fabra, Barcelona. He received a PhD in Computing from Imperial College London and was a postdoctoral researcher at Stanford University, USA. An algorithm for cardiac functional quantification that he developed during his PhD has been FDA/CE marked and is used in more than 250 clinical centres worldwide. He has participated in several EU projects in the field of computational biomedicine, including the euHeart project on computational modelling of personalised interventions in cardiology. Through his work on statistical shape modelling using partial least squares, he finished in first place in the MICCAI 2015 Challenge on myocardial infarct classification. His current research focuses on the development of data science, machine learning and image computing approaches for the integrative analysis of large-scale biomedical data. He is currently the Project Coordinator of the euCanSHare H2020 project (2018-2022) funded by the European Commission (6 million Euros), leading a consortium of 16 institutions to address data sharing and big data approaches in cardiovascular personalised medicine. He is an Associate Editor of the IEEE Transactions on Medical Imaging and a Guest Associate Editor of the Frontiers Special Issue on Artificial Intelligence and Cardiac Imaging.

February, 26th  
 
12:30 h

 

Room 55.309

PhD Research Seminar

Facial Analysis for Emotions, Interaction and Beyond

By Federico Sukno

Abstract

In this talk I will present our research in facial analysis over the last four years. I will start by motivating the interest in facial analysis for diverse applications, briefly covering the more traditional ones related to identity recognition for law enforcement and to the automatic recognition of facial expressions. The latter has become especially relevant in recent years, given its importance for understanding human behavior and for advanced human-computer interaction. This will be illustrated with specific research work done in our group on automatic head pose estimation, emotion recognition (from both discrete and dimensional approaches) and automatic lip reading.

In the second part of the talk, I will discuss other emerging applications that go beyond the traditional analysis of identity and expressions to extract more subtle information from the face. Some of this information, however, may not be apparent, or may even be hidden from us, and can only be recovered by means of specialized techniques. This will be illustrated with applications in photoplethysmography, facial asymmetries, craniofacial dysmorphologies and human factors.

Biography

Dr Federico Sukno is a Ramón y Cajal Fellow at UPF. He received his degree in electrical engineering from La Plata National University (Argentina, 2000) and his Ph.D. in biomedical engineering from Zaragoza University (Spain, 2008). His research activity has been framed within the field of image analysis with statistical models of shape and appearance, targeting diverse applications, most of them related to facial analysis. He is the author or co-author of more than 60 peer-reviewed publications, including 21 journal publications (of which 16 are in Q1 journals and 10 are in the top-ranked journals in the fields of artificial intelligence and medical imaging, e.g. IEEE T Pattern Anal, IEEE T Med Imaging, IEEE T Image Process, IEEE T Cybernetics, Int J Comput Vision, Med Image Anal, Pattern Recogn). He has participated in 4 national and 5 international research projects (FP6, FP7, H2020 and Wellcome Trust), as well as in 5 technology transfer projects, acting as coordinator or PI in several of them. He was awarded a Marie Curie fellowship in 2012 and a Ramón y Cajal fellowship in 2015, two highly competitive and prestigious individual grants in the EU and Spanish systems, respectively.

February, 27th  
 
17:30 h
 
Auditorium
 

Invited Research Seminar

Deep Reinforcement Learning with demonstrations  

by Olivier Pietquin

Abstract

Deep Reinforcement Learning (DRL) has recently attracted increasing interest after its success at playing video games such as Atari, DotA or Starcraft, as well as at defeating grandmasters at Go and chess. However, many tasks remain hard to solve with DRL, even given almost unlimited compute power and simulation time. These tasks often share the common problem of being "hard exploration tasks". In this talk, we will show how using demonstrations (even sub-optimal ones) can help in learning policies that reach human-level or even super-human performance on some of these tasks, especially the remaining unsolved Atari games and human-machine dialogues.

Biography

Olivier Pietquin obtained an Electrical Engineering degree from the Faculty of Engineering, Mons (FPMs, Belgium) in June 1999 and a PhD degree in April 2004. In 2011, he received the Habilitation à Diriger des Recherches (French tenure) from the University Paul Sabatier (Toulouse, France). Between 2005 and 2013, he was a professor at the Ecole Superieure d'Electricite (Supelec, France), and subsequently joined the University Lille 1 as a full professor in 2013. In 2014, he was appointed a junior fellow of the Institut Universitaire de France. He is now on leave at Google, first at Google DeepMind in London and, since 2018, at Google Brain in Paris. His research interests include spoken dialog systems (evaluation, simulation and automatic optimisation), machine learning (especially direct and inverse reinforcement learning), and speech and signal processing.

Web: https://ai.google/research/people/105812 http://www.lifl.fr/~pietquin/

Host: Gergely Neu

February, 28th  
 
12:00 h

 

Room 55.309

Invited Research Seminar

A Theory of Regularized Markov Decision Processes 

by Matthieu Geist

Abstract

Many recent successful (deep) reinforcement learning algorithms make use of regularization, generally based on entropy or on Kullback-Leibler divergence. We propose a general theory of regularized Markov Decision Processes that generalizes these approaches in two directions: we consider a larger class of regularizers, and we consider the general modified policy iteration approach, encompassing both policy iteration and value iteration. The core building blocks of this theory are a notion of regularized Bellman operator and the Legendre-Fenchel transform, a classical tool of convex optimization. This approach allows for error propagation analyses of general algorithmic schemes of which (possibly variants of) classical algorithms such as Trust Region Policy Optimization, Soft Q-learning, Stochastic Actor Critic or Dynamic Policy Programming are special cases. This also draws connections to proximal convex optimization, especially to Mirror Descent.
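As a concrete illustration of the core building blocks mentioned above (a sketch in standard notation; the symbols Ω, T and q are common usage, not taken verbatim from the talk), the regularized Bellman operators replace the usual maximization over actions with the Legendre-Fenchel transform of the regularizer:

```latex
% Regularized evaluation operator for a policy \pi and a strongly
% convex regularizer \Omega over the action simplex:
[T_{\pi,\Omega} v](s) = \sum_a \pi(a|s)\Big(r(s,a)
    + \gamma\, \mathbb{E}_{s'\sim P(\cdot|s,a)}[v(s')]\Big)
    - \Omega\big(\pi(\cdot|s)\big)

% Regularized optimality operator, via the Legendre-Fenchel
% transform \Omega^* applied to the state-action values q_s:
[T_{*,\Omega} v](s) = \Omega^*(q_s), \qquad
q_s(a) = r(s,a) + \gamma\, \mathbb{E}_{s'\sim P(\cdot|s,a)}[v(s')]

% With \Omega the negative Shannon entropy scaled by \tau > 0,
% \Omega^* is the log-sum-exp (soft maximum), recovering the
% soft Q-learning update as a special case:
\Omega^*(q_s) = \tau \log \sum_a \exp\!\big(q_s(a)/\tau\big)
```

The unregularized case is recovered as Ω → 0, where Ω* reduces to the ordinary maximum over actions.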

Biography

Matthieu Geist obtained an Electrical Engineering degree and an MSc degree in Applied Mathematics in Sept. 2006 (Supélec, France), a PhD degree in Applied Mathematics in Nov. 2009 (University Paul Verlaine of Metz, France) and a Habilitation degree in Feb. 2016 (University Lille 1, France). Between Feb. 2010 and Sept. 2017, he was an assistant professor at CentraleSupélec, France. In Sept. 2017, he joined the University of Lorraine, France, as a full professor in Applied Mathematics (Interdisciplinary Laboratory for Continental Environments, CNRS-UL). Since Sept. 2018, he has been on secondment at Google Brain as a research scientist (Paris, France). His research interests include machine learning, especially reinforcement learning and imitation learning.

Host: Gergely Neu

February, 28th  
 
15:30 h

 

Room 55.309

PhD Research Seminar

Statistical course and Design of Experiments 

By Simone Tassani

Abstract

This statistics course aims to introduce a set of tools for master's and Ph.D. students and postdocs. The presented tools will help in planning many kinds of studies, properly analysing the results, and judging whether data analysed by other researchers are reliable.

The course will start with a brief overview of the impact that bad statistics has on the scientific community today, and of why every researcher should know the basic concepts behind a statistical analysis.
Then the first part of the course will introduce General Linear Modelling and its most common applications: the F-test and monofactorial Analysis of Variance (ANOVA).
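As a minimal illustration of the monofactorial ANOVA covered in the first class (the data below are hypothetical and purely for demonstration), `scipy.stats.f_oneway` performs a one-way F-test of whether several group means are equal:

```python
from scipy import stats

# Hypothetical measurements from three treatment groups
# (one factor with three levels).
group_a = [24.1, 25.3, 26.2, 24.8, 25.9]
group_b = [27.5, 28.1, 26.9, 28.4, 27.2]
group_c = [24.9, 25.1, 24.3, 25.6, 24.7]

# One-way (monofactorial) ANOVA: H0 is that all group means are equal.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: at least one group mean differs.")
```

Here group_b is visibly shifted upward, so the F statistic is large and the null hypothesis of equal means is rejected at the 5% level.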
March  
March, 5th  
 
15:30 h

 

Room 55.309

Invited Research Seminar

TBA

By Souneil Park

Abstract

TBA

Host: Carlos Castillo

March, 7th  
 
15:30 h

 

Room 55.309

PhD Research Seminar

Statistical course and Design of Experiments 

By Simone Tassani

Abstract

This statistics course aims to introduce a set of tools for master's and Ph.D. students and postdocs. The presented tools will help in planning many kinds of studies, properly analysing the results, and judging whether data analysed by other researchers are reliable.

In the second class, the non-parametric "equivalent" of ANOVA, the Kruskal-Wallis test, will be introduced.

Monofactorial and multifactorial analyses will be presented, together with the definitions of Type I and Type II errors, multiple-comparison errors, and tests for multiple comparisons.
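A minimal sketch of the Kruskal-Wallis test discussed in this class (hypothetical data, chosen only to illustrate the call): because the test works on ranks, it does not require the normality assumption behind ANOVA.

```python
from scipy import stats

# Hypothetical ordinal ratings from three groups.
group_a = [3, 4, 2, 5, 4, 3]
group_b = [6, 7, 6, 8, 7, 6]
group_c = [4, 3, 4, 2, 3, 4]

# Kruskal-Wallis H-test: the rank-based, non-parametric analogue of
# one-way ANOVA. H0 is that all groups come from the same distribution.
h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```

Since group_b's ratings dominate the pooled ranking, the H statistic is large and the null hypothesis is rejected at the usual 5% level.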

March, 14th  
 

15:30 h

Room 55.410

PhD Research Seminar

Statistical course and Design of Experiments 

By Simone Tassani

Abstract

This statistics course aims to introduce a set of tools for master's and Ph.D. students and postdocs. The presented tools will help in planning many kinds of studies, properly analysing the results, and judging whether data analysed by other researchers are reliable.

Projects with more than two factors will be presented. This will lead to examples of Design of Experiments (Latin and Graeco-Latin squares) for reducing the number of experiments, and to the concept of orthogonality.
The course will close with a description of linear regression.
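The linear regression that closes the course can be sketched in a few lines (synthetic data generated for illustration; the true slope and intercept of 2.0 and 1.0 are assumptions of this example): `scipy.stats.linregress` fits an ordinary least-squares line and reports the goodness of fit.

```python
import numpy as np
from scipy import stats

# Synthetic data: response y depends linearly on x plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=x.size)

# Ordinary least-squares fit of y = slope * x + intercept.
result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}, "
      f"R^2 = {result.rvalue ** 2:.3f}")
```

With the small noise level used here, the fitted slope lands close to the true value of 2.0 and R² is near 1, which is exactly the diagnostic a researcher would check before trusting the model.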
 

If the seminar is offered via streaming in:

- Room 55.309 or 55.410 follow this link

- Auditorium follow this link

If you are interested in giving a Research Seminar or would like to invite a speaker, please fill in the RSDetails Form.