Updates on deep learning activities are announced through a dedicated distribution list. If you would like to be included, please contact aurelio (dot) ruiz (at) upf (dot) edu

  • Deep Learning Study Group
  • Deep Learning Seminars. Several seminars on Machine Learning in general, and Deep Learning in particular, are organised within the DTIC Seminars, hosting activities of the BCN Machine Learning Study Group or organised ad hoc. Unless explicitly stated otherwise, they are open to anybody interested in joining, and they are often recorded.

Past seminars and associated materials

Deep Learning Study Group

- 2018-2019: new edition led by Carina Silberer (Dept. Language Sciences) and Guillermo Jiménez.

After November 7th, 2018, the meetings will take place on Wednesdays, 12:00h - 13:00h.

June 26, 16:00h, room 52.213. Guillem and Gerard Martí. Nie, D., Trullo, R., Lian, J., Petitjean, C., Ruan, S., Wang, Q., & Shen, D. (2017, September). Medical image synthesis with context-aware generative adversarial networks. https://arxiv.org/pdf/1612.05362

June 12, 16:00h, room 52.201. Jeremy Barnes. Unsupervised Machine Translation. https://arxiv.org/pdf/1710.11041

- 2016-17

Deep Learning and Data Science Seminars (selected)

  • 27/02/2019 17:30h, Auditorium

Deep Reinforcement Learning with demonstrations. Olivier Pietquin

Deep Reinforcement Learning (DRL) has recently experienced increasing interest after its success at playing video games such as Atari, DotA or StarCraft, as well as defeating grandmasters at Go and Chess. However, many tasks remain hard to solve with DRL, even given almost unlimited compute power and simulation time. These tasks often share the common problem of being "hard exploration tasks". In this talk, we will show how using demonstrations (even sub-optimal ones) can help in learning policies that reach human-level or even superhuman performance on some of these tasks, especially the remaining unsolved Atari games or human-machine dialogues.
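For readers who want a concrete handle on the idea, below is a minimal sketch (our illustration, not the speaker's code) of how demonstrations can be folded into deep Q-learning, in the spirit of Deep Q-Learning from Demonstrations (DQfD): a standard TD loss is combined with a large-margin imitation loss on demonstration transitions. The toy network, hyperparameters and names (q_net, dqfd_loss, lambda_e) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # toy Q-network

def dqfd_loss(state, action, reward, next_state, done, is_demo,
              gamma=0.99, margin=0.8, lambda_e=1.0):
    """TD loss plus a margin loss pushing Q(s, demo action) above other actions."""
    q = q_net(state)                                    # (batch, n_actions)
    q_sa = q.gather(1, action.unsqueeze(1)).squeeze(1)  # Q(s, a) actually taken
    with torch.no_grad():                               # bootstrapped one-step target
        target = reward + gamma * (1 - done) * q_net(next_state).max(1).values
    td_loss = F.mse_loss(q_sa, target)

    # Large-margin imitation loss, applied only to demonstration transitions:
    # max_a [Q(s, a) + margin * 1{a != a_demo}] - Q(s, a_demo)
    margins = torch.full_like(q, margin)
    margins.scatter_(1, action.unsqueeze(1), 0.0)       # no margin on the demo action
    imitation = ((q + margins).max(1).values - q_sa) * is_demo
    return td_loss + lambda_e * imitation.mean()

# Dummy batch: 32 transitions, half of them flagged as demonstrations.
s, s2 = torch.randn(32, 4), torch.randn(32, 4)
a = torch.randint(0, 2, (32,))
r, d = torch.randn(32), torch.zeros(32)
demo = (torch.arange(32) < 16).float()
dqfd_loss(s, a, r, s2, d, demo).backward()
```

The margin term is what lets sub-optimal demonstrations help: it biases the Q-function towards the demonstrated actions early on, while the TD term can still improve beyond the demonstrator.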

Biography

Olivier Pietquin obtained an Electrical Engineering degree from the Faculty of Engineering, Mons (FPMs, Belgium) in June 1999 and a PhD degree in April 2004. In 2011, he received the Habilitation à Diriger des Recherches (French tenure) from the University Paul Sabatier (Toulouse, France). Between 2005 and 2013, he was a professor at the Ecole Superieure d'Electricite (Supelec, France), and he joined the University Lille 1 as a full professor in 2013. In 2014, he was appointed a junior fellow of the Institut Universitaire de France. He is now on leave with Google, first at Google DeepMind in London and, since 2018, with Brain in Paris. His research interests include spoken dialog systems evaluation, simulation and automatic optimisation, machine learning (especially direct and inverse reinforcement learning), and speech and signal processing.

Web: https://ai.google/research/people/105812 and http://www.lifl.fr/~pietquin/

  • 09/11/2018 12:00h, room 52.213

Deep Learning and Music. Kyle McDonald. 

The last five years have seen incredible results from machine learning using modern neural networks, also called Deep Learning. Some of this research has presented completely new ways of working with digital media including sound and images. I will provide an informal survey of new approaches to music composition and sound synthesis based on neural networks, including some of my own work in this direction.

Biography: Kyle McDonald is an artist working with code. He is a contributor to open source arts-engineering toolkits like openFrameworks, and builds tools that allow artists to use new algorithms in creative ways. He has a habit of sharing ideas and projects in public before they're completed. He creatively subverts networked communication and computation, explores glitch and systemic bias, and extends these concepts to reversal of everything from identity to relationships. Kyle has been an adjunct professor at NYU's ITP, a member of F.A.T. Lab, community manager for openFrameworks, and artist in residence at the STUDIO for Creative Inquiry at Carnegie Mellon as well as at YCAM in Japan. His work is commissioned by and shown at exhibitions and festivals around the world, including NTT ICC, Ars Electronica, Sonar/OFFF, Eyebeam, Anyang Public Art Project, Cinekid, CLICK Festival, NODE Festival, and many others. He frequently leads workshops exploring computer vision and interaction.

http://kylemcdonald.net/


  • 14/03/2018 12:30h, auditorium

The statistical foundations of learning to control. Ben Recht (University of California, Berkeley)

Given the dramatic successes in machine learning and reinforcement learning over the past half decade, there has been a resurgence of interest in applying these techniques to continuous control problems in robotics, self-driving cars, and unmanned aerial vehicles. Though such control applications appear to be straightforward generalizations of standard reinforcement learning, few fundamental baselines have been established prescribing how well one must know a system in order to control it. In this talk, I will discuss how one might merge techniques from statistical learning theory with robust control to derive such baselines for continuous control. I will explore several examples that balance parameter identification against controller design and demonstrate finite sample tradeoffs between estimation fidelity and desired control performance. I will describe how these simple baselines give us insights into shortcomings of existing reinforcement learning methodology. I will close by listing several exciting open problems that must be solved before we can build robust, safe learning systems that interact with an uncertain physical environment.
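To make the identification side of this tradeoff concrete, here is a minimal sketch (our illustration, not material from the talk) of the least-squares system-identification baseline for a linear system x_{t+1} = A x_t + B u_t: excite the system with random inputs, then regress next states on current states and inputs. The dynamics, noise level and rollout counts are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.01, 0.1], [0.0, 1.01]])    # a slightly unstable toy system
B_true = np.array([[0.0], [1.0]])

X, U, Y = [], [], []
for _ in range(200):                              # many short rollouts
    x = rng.normal(size=2)
    for _ in range(10):
        u = rng.normal(size=1)                    # exploratory random input
        x_next = A_true @ x + B_true @ u + 0.01 * rng.normal(size=2)
        X.append(x); U.append(u); Y.append(x_next)
        x = x_next

Z = np.hstack([np.array(X), np.array(U)])         # regressors [x_t, u_t]
Theta, *_ = np.linalg.lstsq(Z, np.array(Y), rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]     # estimated dynamics
print("estimation error:", np.linalg.norm(A_hat - A_true))
```

How small this estimation error must be for a controller designed around (A_hat, B_hat) to stabilize the true system is exactly the kind of finite-sample baseline the talk addresses.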

Biography: Benjamin Recht is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Ben's research group studies the theory and practice of optimization algorithms with a focus on applications in machine learning and data analysis. They are particularly interested in busting machine learning myths and establishing baselines for data analysis. Ben is the recipient of a Presidential Early Career Award for Scientists and Engineers, an Alfred P. Sloan Research Fellowship, the 2012 SIAM/MOS Lagrange Prize in Continuous Optimization, the 2014 Jamon Prize, the 2015 William O. Baker Award for Initiatives in Research, and the NIPS Test of Time Award.

  • 15/06/2017 12:30h, room 55.410 (DTIC-UPF Invited Seminars)

Music and Art with Machine Learning. Generative Models of Drawing and Sound. Douglas Eck, Google

I'll give an overview talk about Magenta, a project investigating music and art generation using deep learning and reinforcement learning. I'll discuss some of the goals of Magenta and how it fits into the general trend of AI moving into our daily lives. I'll talk about two specific recent projects. First, I'll discuss our research on Teaching Machines to Draw with SketchRNN, an LSTM recurrent neural network able to construct stroke-based drawings of common objects. SketchRNN is trained on thousands of crude human-drawn images representing hundreds of classes. Second, I'll talk about NSynth, a deep neural network that learns to make new musical instruments via a WaveNet-style temporal autoencoder. Trained on hundreds of thousands of musical notes, the model learns to generalize in the space of musical timbres, allowing musicians to explore new sonic spaces, such as sounds that exist somewhere between a bass guitar and a flute. This will be a high-level overview talk with no need for prior knowledge of machine learning models such as LSTM or WaveNet.
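As a rough illustration of the stroke-based modelling idea behind SketchRNN, the sketch below (a heavily simplified assumption of ours, not Magenta's code) trains an LSTM to predict the next pen offset from the sequence so far; the actual SketchRNN is a sequence-to-sequence VAE with a mixture-density output rather than the plain MSE used here.

```python
import torch
import torch.nn as nn

class StrokeRNN(nn.Module):
    """Toy next-step predictor over (dx, dy, pen_state) stroke sequences."""
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)     # predict the next (dx, dy, pen_state)

    def forward(self, strokes):              # strokes: (batch, seq_len, 3)
        out, _ = self.lstm(strokes)
        return self.head(out)

model = StrokeRNN()
batch = torch.randn(8, 50, 3)                # 8 dummy sketches, 50 pen steps each
pred = model(batch[:, :-1])                  # predict step t+1 from steps <= t
loss = nn.functional.mse_loss(pred, batch[:, 1:])
loss.backward()
```

Sampling from such a model step by step, feeding each predicted offset back in as input, is what produces the generated drawings.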

Short bio

Doug is a Research Scientist at Google leading Magenta, a Google Brain project working to generate music, video, images and text using deep learning and reinforcement learning. A main goal of Magenta is to better understand how AI can enable artists and musicians to express themselves in innovative new ways. Before Magenta, Doug led the Google Play Music search and recommendation team. From 2003 to 2010, Doug was an Associate Professor in Computer Science at the University of Montreal's MILA Machine Learning lab, where he worked on expressive music performance and automatic tagging of music audio.

  • 15/06/2017 13:30h. Melià Sky Hotel. 22@ Network Agora (lunch seminars)

Machine learning, predict what you want. Ricardo Baeza-Yates, CTO NTENT and Prof. DTIC-UPF

  • 22/03/2017 19:00h, room 52.033 (BCN ML Study Group)

H2O Deep Water - Making Deep Learning Accessible to Everyone. Jo-fai Chow, data scientist at H2O.ai

More than words: Machine learning applied to Marketing. Cristina Aranda. Intelygenz & MujeresTech.

Machine learning in industry: textbooks vs real life. José A. Rodríguez. BBVA Big Data Research

  • 02/11/2016 19:00h (BCN ML study group). Room 52.023

What matters and what doesn't? The machine learning challenges in online ads. Nicolas Le Roux, Criteo.

Data and Algorithmic Bias in the Web. Ricardo Baeza-Yates (NTENT & DTIC-UPF Web Research Group)

  • 15/06/2016, 19:00h (BCN ML study group)

Big Crisis Data - an exciting frontier for applied computing. Carlos Castillo, Director of Research for Data Mining, Eurecat (Video)

  • 13/06/2016, 11:00h (room 55.410). Internal seminar

Experimenting with Musically Motivated Convolutional Neural Networks. Jordi Pons, Music Technology Group, DTIC-UPF

  • 24/05/2016, 19:00h (BCN ML study group)

The Good, the Bad and the Ugly in Deep Learning. Joan Bruna, UC Berkeley (Video and slides)

  • 10/05/2016, 15:30h (DTIC Seminar)

Deep Learning for Transition-based Natural Language Processing. Miguel Ballesteros, Natural Language Processing (TALN) group, DTIC-UPF

  • 29/04/2016, 11:00h (GRIB at CEXS-UPF Seminar)

Incremental Unsupervised Training of Deep Architectures. Davide Maltoni, University of Bologna

  • 25/04/2016, 15:30h (room 55.230, internal DTIC-UPF seminar)

Deep learning for learning optimal controllers. Vicenç Gómez, Artificial Intelligence group, DTIC-UPF

Suggested reading:

Real-Time Stochastic Optimal Control for Multi-agent Quadrotor Systems
http://arxiv.org/abs/1502.04548

  • 03/03/2016, 15:30h (DTIC Seminars)

Deep Learning. Alexandros Karatzoglou, Telefónica Research

  • 26/02/2016, 11:00h (GRIB at CEXS-UPF Seminar)

Deep Neural Networks and Reinforcement Learning for Building Intelligent Machines. Silvia Chiappa, Google DeepMind, UK.

  • 25/02/2016, 19:00h (BCN ML Study Group)

Lessons Learned from Building Real-Life ML Systems. Xavier Amatriain, Quora. (Video)