Advanced · 6 weeks · 3 sessions/week

NLP & Transformer Models Masterclass

A comprehensive deep dive into Natural Language Processing and Transformer models. From attention mechanisms to BERT and GPT, build production NLP systems for classification, NER, QA, and summarisation.

£3,497 per person

What You'll Learn

01

Text Preprocessing

Robust text processing pipelines. Tokenisation strategies (BPE, WordPiece, SentencePiece), normalisation, handling multilingual text, and building preprocessing pipelines that work at scale.
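As a taste of the module, here is a minimal sketch of a single BPE merge step on a toy corpus. This is illustrative only (a production tokeniser handles pre-tokenisation, byte fallback, and a learned merge table), and the toy words and frequencies are made up:

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across a corpus of symbol sequences."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with the merged symbol."""
    a, b = pair
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)   # fuse the pair into one subword symbol
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: character-split word -> frequency
corpus = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2, ("n", "e", "w"): 3}
pair = most_frequent_pair(corpus)   # ("l", "o") occurs 7 times
corpus = merge_pair(corpus, pair)   # "lo" is now a single subword symbol
```

Repeating this merge loop a few thousand times is, in essence, how a BPE vocabulary is learned.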

02

Word Embeddings

From Word2Vec to contextual embeddings. Understand vector spaces, similarity measures, embedding visualisation, and how modern embedding models capture semantic meaning.
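The core idea of similarity in a vector space fits in a few lines. The 3-d vectors below are invented for illustration; real embedding models produce hundreds to thousands of dimensions:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-d "embeddings"; values are illustrative, not from a trained model
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.75, 0.2])
apple = np.array([0.1, 0.2, 0.9])

cosine_similarity(king, queen)   # high: nearby in the vector space
cosine_similarity(king, apple)   # low: semantically distant
```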

03

Attention Mechanism Deep Dive

Master attention from the ground up. Scaled dot-product attention, multi-head attention, cross-attention, and why attention is the key innovation that powers modern NLP.
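Scaled dot-product attention itself is only a few lines of maths. A NumPy sketch of the single-head case (multi-head attention runs several of these in parallel over projected inputs):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (seq_q, seq_k) scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
# out: one d_k-dimensional mixture of the values per query position
```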

04

BERT & Variants

Understanding BERT and its family: RoBERTa, ALBERT, DeBERTa, and distilled variants. Pre-training objectives, fine-tuning for downstream tasks, and choosing the right model size.
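BERT's masked-language-model pre-training objective can be sketched in plain Python. The 15% / 80-10-10 split below follows the original BERT recipe; the implementation itself is a simplified illustration, not the library code:

```python
import random

MASK, MASK_RATE = "[MASK]", 0.15

def mask_tokens(tokens, seed=0):
    """BERT-style masking: select ~15% of positions; of those, 80% become
    [MASK], 10% a random token, 10% stay unchanged. Labels are kept only
    at selected positions; everywhere else the loss is ignored."""
    rng = random.Random(seed)
    vocab = sorted(set(tokens))
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < MASK_RATE:
            labels.append(tok)               # model must recover the original
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK)
            elif r < 0.9:
                inputs.append(rng.choice(vocab))
            else:
                inputs.append(tok)           # kept, but still predicted
        else:
            inputs.append(tok)
            labels.append(None)              # position ignored by the loss
    return inputs, labels
```

The 10% random / 10% unchanged cases stop the model from only ever seeing `[MASK]` at prediction positions, which would create a train/fine-tune mismatch.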

05

GPT Architecture

Decoder-only Transformers in depth. Autoregressive generation, causal attention masks, temperature and sampling strategies, and understanding the design choices behind GPT-4 and similar models.
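Two of those ideas, the causal mask and temperature sampling, can be sketched directly. A minimal NumPy illustration (a real decoder applies the mask inside attention and samples from full vocabulary logits):

```python
import numpy as np

def causal_mask(seq_len):
    """Lower-triangular mask: position i may attend only to positions <= i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def sample_next_token(logits, temperature=1.0, rng=None):
    """Temperature sampling: T < 1 sharpens the distribution (more greedy),
    T > 1 flattens it (more diverse)."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

mask = causal_mask(4)   # row i is True only up to column i
```

At very low temperature, sampling collapses to picking the highest-logit token; at high temperature it approaches uniform choice.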

06

Text Classification & Named Entity Recognition

Build production classifiers and NER systems. Multi-label classification, few-shot classification, custom entity recognition, and handling noisy real-world text data.
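The key difference between multi-label and ordinary multi-class classification is an independent sigmoid per label rather than one softmax. A minimal sketch (the label names and logits are hypothetical, standing in for a trained model's output head):

```python
import math

LABELS = ["billing", "technical", "urgent"]   # hypothetical label set

def predict_labels(logits, threshold=0.5):
    """Multi-label prediction: independent sigmoid per label, keep those above
    the threshold. Unlike softmax classification, any number of labels
    (including zero) can fire at once."""
    probs = [1 / (1 + math.exp(-z)) for z in logits]
    return [label for label, p in zip(LABELS, probs) if p >= threshold]

predict_labels([2.1, -1.3, 0.4])   # "billing" and "urgent" exceed 0.5
```

In production the threshold is usually tuned per label on a validation set rather than fixed at 0.5.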

07

Sentiment Analysis & Question Answering

Advanced sentiment analysis beyond positive/negative. Aspect-based sentiment, extractive and generative QA systems, and building reliable text understanding pipelines.
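Extractive QA reduces to picking the best answer span from a model's per-token start and end scores. A simplified sketch of that decoding step (tokens and logits below are invented for illustration; a real system gets them from a fine-tuned encoder):

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=15):
    """Pick the (start, end) pair maximising start + end score, with
    end >= start and a cap on answer length."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

tokens = ["The", "Eiffel", "Tower", "is", "in", "Paris"]
start = np.array([0.1, 0.2, 0.1, 0.0, 0.3, 2.5])   # illustrative logits
end   = np.array([0.0, 0.1, 0.2, 0.1, 0.2, 2.8])
s, e = best_span(start, end)
answer = " ".join(tokens[s:e + 1])
```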

08

Summarisation

Extractive and abstractive summarisation. PEGASUS, BART, and LLM-based summarisation. Evaluation metrics (ROUGE, BERTScore), handling long documents, and building summarisation APIs.
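ROUGE-1, the most common of those metrics, is just unigram-overlap F1. A simplified sketch (real ROUGE implementations add stemming and stopword options; the example sentences are invented):

```python
from collections import Counter

def rouge_1_f(candidate, reference):
    """Unigram-overlap F1 between a candidate summary and a reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())     # clipped per-word overlap count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge_1_f("the cat sat on the mat", "the cat lay on the mat")
```

ROUGE-2 and ROUGE-L follow the same pattern over bigrams and longest common subsequences, and BERTScore replaces exact word matching with embedding similarity.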

Who Is This For

ML engineers who want to specialise in NLP and build production text processing systems

Software engineers building products that need text classification, extraction, or generation

Data scientists working with text data who want to go beyond basic bag-of-words approaches

Prerequisites

Python and basic deep learning knowledge required. You should be comfortable with PyTorch, understand neural network training, and have some exposure to Transformer concepts. Our Deep Learning Masterclass provides ideal preparation.

Course Format

Live Online Sessions

Interactive sessions with real-time Q&A and screen sharing

Recorded Replays

All sessions recorded and available for 12 months after the course

Hands-on Projects

Real-world projects that build your portfolio as you learn

1-on-1 Mentoring

Personal mentoring sessions to address your specific questions

Certificate of Completion

Industry-recognised certificate upon successful completion

Schedule & Pricing

£3,497
6 weeks · 3 sessions per week · 18 sessions total
  • Live interactive sessions
  • 12-month replay access
  • 1-on-1 mentoring
  • Certificate included

Your Instructors


PeusoPeupon Expert Team

Our instructors are seasoned practitioners with years of experience building production AI systems. They hold certifications across major cloud platforms and have trained thousands of professionals worldwide.

Frequently Asked Questions

How does this course differ from the LLM Fine-Tuning course?

This course covers the full NLP landscape: embeddings, BERT, classification, NER, QA, and summarisation. The LLM Fine-Tuning course focuses specifically on customising large generative models. Many students take both for complete NLP mastery.

Are specialised NLP models still worth learning in the era of LLMs?

Absolutely. LLMs are powerful but expensive and slow for many tasks. Specialised NLP models for classification, NER, and extraction are faster, cheaper, and often more accurate on their specific task. Understanding the full NLP toolkit makes you a more effective engineer.

Which languages does the course cover?

We primarily work with English but cover multilingual models (XLM-R, mBERT) and techniques for low-resource languages. The principles apply to any language.

Do I need prior NLP experience?

Basic NLP exposure is helpful but not required if you have deep learning experience. We build from foundations (tokenisation, embeddings) up to advanced architectures. The Deep Learning Masterclass is sufficient preparation.