
NLP with Transformers and the Hugging Face Ecosystem 🤗

Since their introduction in 2017, Transformers have become the de facto standard for tackling a wide range of NLP tasks in both academia and industry. In this workshop, we'll teach you the core concepts behind Transformers and how to train these models in the Hugging Face ecosystem.

October 22, 2021

7:00 pm - 10:00 pm (JST)

Workshop Summary

Natural language processing (NLP) is currently undergoing a period of rapid progress, with new benchmarks broken on a monthly basis. The main driver of these breakthroughs is the Transformer architecture, which relies on a self-attention mechanism to weigh the importance of each token in a sequence relative to every other token. When pretrained on a large corpus of unlabelled text, these architectures have proven to be an effective way to capture long-range dependencies, sentiment, and other syntactic and semantic structures in natural language. For example, the SQuAD benchmark for question answering is dominated by Transformer models, some of which achieve superhuman performance!
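To make the self-attention idea concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy; the function name, shapes, and random projection matrices are illustrative assumptions rather than any particular model's implementation.

```python
# Minimal sketch of scaled dot-product self-attention (single head).
# Shapes and names are illustrative only.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_head) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v                               # weighted mix of all tokens

# Toy usage: 4 tokens with 8-dimensional embeddings, one 8-dimensional head.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = self_attention(x, *(rng.normal(size=(8, 8)) for _ in range(3)))
print(out.shape)  # (4, 8)
# Note: self-attention alone is order-agnostic; Transformers add token order
# separately via positional encodings.
```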

Since Transformers are straightforward to parallelize across elements of the input sequence, they can be trained at much larger scales than recurrent or convolutional neural networks. With the advent of transfer learning for NLP, these large pretrained models can then be “fine-tuned” on a wide variety of tasks and languages, all with modest compute.
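As a rough illustration of that fine-tuning workflow, the sketch below adapts a pretrained checkpoint to binary sentiment classification with the Hugging Face Trainer API. The checkpoint ("distilbert-base-uncased"), the dataset ("imdb"), and the training settings are placeholder choices for illustration, not workshop material.

```python
# Hedged sketch: fine-tune a pretrained Transformer on text classification.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Load and tokenize a labelled dataset (IMDb movie reviews as an example).
dataset = load_dataset("imdb")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)
dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-model",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  # Train on a small subset to keep compute modest.
                  train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
                  tokenizer=tokenizer)
trainer.train()
```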

However, until recently it was not easy to integrate Transformers into practical applications. Each model would typically be associated with its own codebase and API, making it difficult to switch between models. To address this problem, the team at Hugging Face has developed the open-source Transformers library to make these powerful models accessible to the wider machine learning community. The library provides a unified API that makes it simple to fine-tune these models on downstream tasks, and provides a centralised model Hub for users to share and utilise pretrained models.
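For a taste of that unified API, the snippet below loads a model from the Hub and runs inference with the pipeline abstraction; the checkpoint name is just one of many publicly shared sentiment models and is used here purely for illustration.

```python
# Minimal sketch: download a model from the Hub and run inference in one call.
from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("Transformers make NLP pipelines remarkably simple!"))
# e.g. [{'label': 'POSITIVE', 'score': ...}]
```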

These developments in fundamental research and open-source software indicate that Transformers are likely to become a cornerstone of NLP. With a hands-on approach, this workshop will provide you with a foundation in how Transformers work, which tasks they are well suited for, and how to integrate them into your applications.

Prerequisites and difficulty level


This workshop is for data scientists and machine learning / software engineers who may have heard about the recent breakthroughs involving Transformers, but lack an in-depth guide to help them adapt these models to their own use cases.

We assume that participants are comfortable programming in Python and with common machine learning libraries such as pandas, NumPy, scikit-learn, and Matplotlib.


We also assume that participants have practical experience with deep learning, such as preparing data, training models on GPUs, and evaluating model performance.

Materials & communication channel

👉 GitHub repo: https://github.com/huggingface/workshops/tree/main/machine-learning-tokyo

👉 Discord: https://hf.co/join/discord

Instructors

Lewis Tunstall is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. A former theoretical physicist, he has over 10 years' experience translating complex subject matter for lay audiences and has taught machine learning to university students at both the graduate and undergraduate levels.

Leandro von Werra is a machine learning engineer at Hugging Face. He has several years of industry experience bringing NLP projects to production by working across the whole machine learning stack, and is the creator of a popular Python library called TRL that combines Transformers with reinforcement learning.

👉 Hugging Face
