Generative AI with Large Language Models

Unlock the potential of Generative AI with Large Language Models through this comprehensive course, gaining foundational knowledge, practical skills, and insights from expert AWS AI practitioners.

Why should I take this course?

Welcome to this exciting course on Generative AI with Large Language Models (LLMs). In this comprehensive program, we'll explore the power of LLMs as a developer's tool, capable of turning complex AI applications that once took months to build into projects of days or weeks. We'll take a deep dive into the technical details, covering model training, instruction tuning, fine-tuning, and the generative AI project life cycle framework.

Generative AI and LLMs are general-purpose technologies, useful across many industries, much as deep learning and electricity have been. Because few people yet know how to work with this cutting-edge technology, companies are actively seeking skilled individuals to build applications with LLMs.

This course, led by expert AWS instructors and developed with input from industry experts and scientists, is designed for AI enthusiasts, engineers, and data scientists familiar with Python and basic ML concepts.

What you will learn

Throughout the program, you'll gain a thorough understanding of LLMs, learning how to optimize, deploy, and integrate them into your applications:

Section 1 covers the transformer architecture and training of LLMs, while Section 2 focuses on instruction fine-tuning for specific tasks. In Section 3, you'll explore aligning model outputs with human values for safer and more helpful results. Get ready to embark on an enriching journey into the world of Generative AI and LLMs!
Section 1 Learning Objectives
  • Discuss model pre-training and the value of continued pre-training vs fine-tuning
  • Define the terms Generative AI, large language models, prompt, and describe the transformer architecture that powers LLMs
  • Describe the steps in a typical LLM-based generative AI model lifecycle and discuss the constraining factors that drive decisions at each step of the lifecycle
  • Discuss computational challenges during model pre-training and determine how to efficiently reduce memory footprint
  • Define the term scaling law and describe the laws that have been discovered for LLMs related to training dataset size, compute budget, inference requirements, and other factors
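To give a taste of the memory-footprint reasoning Section 1 covers, here is a back-of-the-envelope sketch in Python. It assumes the common rule of thumb of roughly 16 bytes per parameter for full-precision Adam training (weights, gradients, and two optimizer moments), ignoring activations; the function name and constants are illustrative, not course code.

```python
def training_memory_gb(n_params: float, bytes_per_param: int = 4) -> float:
    """Rough GPU memory (GB) to train a model with Adam.

    Per parameter at a given precision: weights + gradients + two Adam
    moment estimates = 4x the per-parameter byte size. Activations and
    framework overhead are ignored, so real usage is higher.
    """
    bytes_total = n_params * (bytes_per_param * 4)  # weights + grads + 2 moments
    return bytes_total / 1e9

# A 1-billion-parameter model in full fp32 precision (4 bytes/param):
print(f"{training_memory_gb(1e9):.0f} GB")  # ~16 GB before activations

# Storing everything in bf16 (2 bytes/param) roughly halves each component:
print(f"{training_memory_gb(1e9, bytes_per_param=2):.0f} GB")  # ~8 GB
```

Estimates like this explain why quantization and sharded training come up so quickly once models pass a few billion parameters.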
Section 2 Learning Objectives
  • Describe how fine-tuning with instructions using prompt datasets can improve performance on one or more tasks
  • Define catastrophic forgetting and explain techniques that can be used to overcome it
  • Define the term Parameter-efficient Fine Tuning (PEFT)
  • Explain how PEFT decreases computational cost and overcomes catastrophic forgetting
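As a preview of why PEFT cuts computational cost, here is a minimal sketch of LoRA-style parameter counting: instead of updating a full d×k weight matrix, LoRA freezes it and trains two low-rank factors of rank r. The function name and dimensions below are illustrative, not taken from the course materials.

```python
def lora_trainable_params(d: int, k: int, r: int) -> tuple[int, int]:
    """Compare trainable parameter counts for one d x k weight matrix.

    Full fine-tuning updates all d*k entries; LoRA trains only the
    low-rank factors B (d x r) and A (r x k), leaving W frozen.
    """
    full = d * k            # parameters updated by full fine-tuning
    lora = d * r + r * k    # parameters in the low-rank factors B and A
    return full, lora

# A single 4096 x 4096 attention weight with rank-8 adapters:
full, lora = lora_trainable_params(d=4096, k=4096, r=8)
print(full, lora, f"{100 * lora / full:.2f}%")  # 16777216 65536 0.39%
```

Training well under 1% of the parameters per adapted matrix is also why PEFT sidesteps much of catastrophic forgetting: the original weights are never overwritten.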
Section 3 Learning Objectives
  • Describe how RLHF uses human feedback to improve the performance and alignment of large language models
  • Explain how data gathered from human labelers is used to train a reward model for RLHF
  • Define chain-of-thought prompting and describe how it can be used to improve LLMs' reasoning and planning abilities
  • Discuss the challenges that LLMs face with knowledge cut-offs, and explain how information retrieval and augmentation techniques can overcome these challenges
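To give a flavor of the retrieval-augmentation idea in the last objective, here is a toy sketch in Python: rank a handful of documents by word-overlap cosine similarity to a query and prepend the best match to the prompt. Production systems use learned embeddings and vector stores; the documents, function names, and prompt format here are purely illustrative.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny stand-in for an external document store.
docs = [
    "The model's training data has a knowledge cut-off in 2022.",
    "LoRA freezes the base weights and trains low-rank adapters.",
]

def retrieve(query: str) -> str:
    """Return the document most similar to the query by word overlap."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

query = "What is the knowledge cut-off of the model?"
context = retrieve(query)
# Augment the prompt with retrieved context so the LLM can answer
# questions its frozen training data alone could not.
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
```

The point of the sketch is the workflow, not the similarity measure: fresh or proprietary information is fetched at query time and injected into the prompt, working around the model's knowledge cut-off without retraining.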


Gain foundational knowledge, practical skills, and a functional understanding of how generative AI works

Deep Dive

Dive into the latest research on Gen AI to understand how companies are creating value with cutting-edge technology

Use Cases

Instruction from expert AWS AI practitioners who actively build and deploy AI in business use cases today

Our students love us

Excellent course with engaging content and instructors. I have a much better understanding of how and what is going on under the hood of transformers and generative AI as a field.
A very good course covering many different areas, from use cases, to the mathematical underpinnings and the societal impacts.
This is an amazing course for anyone wanting to start with LLMs. Surprisingly it does not require any previous knowledge of NLP and anyone can get along with the course quite easily.
This course is a deep dive into the nitty-gritty of how large language models work. I've taken a few other courses on generative AI, and this one is by far the most comprehensive. It covers everything from the basics of LLMs to how to fine-tune them for specific tasks.
I highly recommend this (Intermediate level) course to start with LLMs. Having a bit of knowledge in RL (Reinforcement Learning) will help you grasp some concepts covered in the course quickly (and correctly).
A great foundational course.
This course teaches about things like: Instruction fine-tuning, FLAN, PEFT, LoRA, RLHF/RLAIF, PPO, PAL, RAG, ReAct, optimization and lifecycle of LLMs.
Meet the instructor


DeepLearning.AI is an education technology company that is empowering the global workforce to build an AI-powered future through world-class education, hands-on training, and a collaborative community.

Get in touch now and start improving your skills to achieve high performance.
