Transfer learning in NLP
Over the last two years, the field of Natural Language Processing (NLP) has witnessed the emergence of transfer learning methods and architectures that have significantly improved upon the state of the art on nearly every NLP task.
The wide availability and ease of integration of these transfer learning models are strong indicators that these methods will become a common tool in the NLP landscape as well as a major research direction.
In this talk, I'll present a quick overview of modern transfer learning methods in NLP and review examples and case studies on how these models can be integrated and adapted in downstream NLP tasks, focusing on open-source solutions.
- Leads the Science team at HuggingFace Inc.
- Has been programming since he was 10, writing video games and interactive software in Assembly and C/C++
- Got accepted for a PhD at MIT (Cambridge, USA) but ended up doing a PhD in Statistical/Quantum Physics at Sorbonne University and ESPCI (Paris, France)
- Has consulted for many Deep Learning/AI/ML startups
- Interested in Natural Language Processing, Deep Learning and Computational Linguistics
- Twitter, Github, Medium