Objectives
After completing this course, the learner will be able to:
■ Compare and contrast types of AI
■ Outline AI application deployment choices
■ Sketch high-level LLM architecture
■ Provide an intuitive explanation of how LLMs work
■ List various popular LLMs and communities
■ Engineer prompts for various tasks
■ Sketch the process of enhancing an LLM with RAG and a vector database
Outline
1. Introduction to AI
1.1 Types of AI
1.2 AI use cases in 5G
1.3 Application deployment choices
2. Introduction to LLM
2.1 Architecture
2.2 How LLMs work
2.3 Popular LLMs
2.4 Communities
Exercise: Explain LLMs and how they work
3. Interacting with LLMs
3.1 Ways to customize LLM responses
3.2 Incorporating your own data
3.3 APIs for LLMs
Exercise: Access LLMs with APIs
4. Prompt Engineering with Playground
4.1 The Magic of Prompts
Exercise: Basic prompts with public LLM
4.2 Prompt Anatomy
4.3 Zero-shot Prompts
Exercise: Basic prompts
4.4 Few-shot Learning
Exercise: Few-shot prompts
4.5 Chain of Thought (CoT)
Exercise: Chain-of-thought prompting
4.6 Submission Cycle
Exercise: Submission Cycle
5. Engineering Prompts for Specific Tasks
5.1 Disabling a BGP peer
Exercise: Adding a LAG
6. Retrieval Augmented Generation (RAG)
6.1 Why do we need RAG?
6.2 RAG Architecture
6.3 VectorDB and tokenization
6.4 Importance of search criteria
Exercise: Using RAG with LLM
7. Fine-tuning
7.1 What is fine-tuning?
7.2 How to fine-tune?
7.3 When to use fine-tuning?
8. Putting It All Together
9. Ask us about using your platforms/data