New Large Language Model Courses with edX
As Large Language Model (LLM) applications disrupt countless industries, generative AI is becoming an important foundational technology. The demand for LLM-based applications is skyrocketing, and so is the demand for engineers who can build them.
Today, we’re thrilled to announce the new Large Language Models program, the first expert-led online courses specifically focused on building and using language models in modern applications. Through dynamic lectures, demos, and hands-on labs taught by industry leaders and researchers, students will learn how to develop and productionize LLM applications. Students will also build an understanding of the theory and key innovations behind foundation models, including Databricks’ Dolly, and how to fine-tune them, enabling them to add value to their businesses easily and affordably through LLMs. The courses will cover the latest techniques in the LLM space, such as prompt engineering (using LangChain), embeddings, vector databases, and model tuning.
The LLMs program consists of two courses, LLMs: Application through Production and LLMs: Foundation Models from the Ground Up. Among the lecturers for the courses will be Stanford Professor Matei Zaharia, as well as the technical team that built the Databricks Dolly model. Consistent with our goal of democratizing AI, course materials will be free for anyone to audit. Learners can also pay a nominal fee for access to a managed compute environment for course labs, graded exercises, and a completion certificate.
The first course, LLMs: Application through Production, is aimed at developers, data scientists, and engineers looking to build LLM-centric applications with the latest and most popular frameworks. It will cover the following topics:
- How to apply LLMs to real-world problems in NLP using popular libraries, such as Hugging Face and LangChain.
- How to add domain knowledge and memory into LLM pipelines using embeddings and vector databases.
- How to navigate the nuances of pre-training, fine-tuning, and prompt engineering, and how to apply that knowledge to fine-tune a custom chat model.
- How to evaluate the efficacy and bias of LLMs.
- How to implement LLMOps and multi-step reasoning best practices for an LLM workflow.
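The embeddings-and-retrieval pattern behind the second topic above can be sketched in a few lines. This is a toy illustration only, not course material: the hand-made 4-dimensional vectors and document texts are invented for the example, and a real pipeline would use an embedding model for the vectors and a vector database in place of the Python dict.

```python
import numpy as np

# Toy "document store": invented embeddings for three support articles.
# In practice these vectors come from an embedding model, and the store
# would be a vector database rather than an in-memory dict.
docs = {
    "reset your password": np.array([0.9, 0.1, 0.0, 0.2]),
    "update billing info": np.array([0.1, 0.9, 0.3, 0.0]),
    "install the mobile app": np.array([0.0, 0.2, 0.9, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, store, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(store.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding that happens to sit near the "password" document.
query = np.array([0.85, 0.15, 0.05, 0.1])
print(retrieve(query, docs))  # ['reset your password']
```

The retrieved text is what gets injected into the LLM's prompt, which is how embeddings add domain knowledge the base model never saw during training.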
The second course, LLMs: Foundation Models from the Ground Up, is aimed at data scientists interested in diving into the details of foundation models and the key innovations that led to the proliferation of transformer-based models. It will cover:
- How the theory and innovations of foundation models, including attention, decoders, and encoders, led to GPT-4
- How to leverage transfer learning techniques, such as one-shot and few-shot learning and knowledge distillation, to reduce the size of LLMs while retaining performance
- Where this domain is headed with current LLM research and developments
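As a taste of the internals the second course covers, here is a minimal numpy sketch of scaled dot-product attention, the core operation inside transformer encoders and decoders. The toy shapes and random inputs are illustrative assumptions, not taken from the course materials.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V, weights          # weighted mix of values, plus the weights

# Toy example: 2 query positions attending over 3 key/value positions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)        # (2, 4): one output vector per query position
print(w.sum(axis=-1))   # each attention row sums to 1
```

Each output row is a convex combination of the value vectors, which is the mechanism that lets every token pull in information from every other token.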
By the end of the program, learners will have built their own end-to-end production-ready LLM workflows. Learners will receive a professional certificate upon successful completion of the program that can be shared on resumes and LinkedIn.