
Quick Start Guide to Large Language Models: Addison-Wesley Data & Analytics Series

Author: Sinan Ozdemir
Language: English | Paperback – 13 Oct 2024
The Practical, Step-by-Step Guide to Using LLMs at Scale in Projects and Products
Large Language Models (LLMs) like ChatGPT are demonstrating breathtaking capabilities, but their size and complexity have deterred many practitioners from applying them. In Quick Start Guide to Large Language Models, pioneering data scientist and AI entrepreneur Sinan Ozdemir clears away those obstacles and provides a guide to working with, integrating, and deploying LLMs to solve practical problems.
Ozdemir brings together all you need to get started, even if you have no direct experience with LLMs: step-by-step instructions, best practices, real-world case studies, hands-on exercises, and more. Along the way, he shares insights into LLMs' inner workings to help you optimize model choice, data formats, parameters, and performance. You'll find even more resources on the companion website, including sample datasets and code for working with open- and closed-source LLMs such as those from OpenAI (GPT-4 and ChatGPT), Google (BERT, T5, and Bard), EleutherAI (GPT-J and GPT-Neo), Cohere (the Command family), and Meta (BART and the LLaMA family).
  • Learn key concepts: pre-training, transfer learning, fine-tuning, attention, embeddings, tokenization, and more
  • Use APIs and Python to fine-tune and customize LLMs for your requirements
  • Build a complete neural/semantic information retrieval system and attach it to conversational LLMs for retrieval-augmented generation (see the sketch after this list)
  • Master advanced prompt engineering techniques like output structuring, chain-of-thought, and semantic few-shot prompting
  • Customize LLM embeddings to build a complete recommendation engine from scratch with user data
  • Construct and fine-tune multimodal Transformer architectures using open-source LLMs
  • Align LLMs using Reinforcement Learning from Human and AI Feedback (RLHF/RLAIF)
  • Deploy prompts and custom fine-tuned LLMs to the cloud with scalability and evaluation pipelines in mind
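To give a flavor of the retrieval-augmented generation workflow referenced above (the book builds a semantic search system and then attaches it to a conversational LLM), here is a minimal sketch that is not taken from the book: it assumes the openai Python client and numpy are installed, an OPENAI_API_KEY is set in the environment, and the model names are illustrative placeholders.

```python
# Minimal retrieval-augmented generation sketch (illustrative, not from the book):
# embed a few documents, retrieve the closest one to a question, and ground a chat
# model's answer in that retrieved context.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available by email Monday through Friday, 9am-5pm.",
]

def embed(texts):
    # Turn each text into a dense vector using an embedding model.
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

question = "How long do I have to return an item?"
query_vector = embed([question])[0]

# Cosine similarity between the question and each document; keep the best match.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
context = documents[int(np.argmax(scores))]

# Ask the chat model to answer using only the retrieved context.
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(completion.choices[0].message.content)
```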
"By balancing the potential of both open- and closed-source models, Quick Start Guide to Large Language Models stands as a comprehensive guide to understanding and using LLMs, bridging the gap between theoretical concepts and practical application."
--Giada Pistilli, Principal Ethicist at HuggingFace
"A refreshing and inspiring resource. Jam-packed with practical guidance and clear explanations that leave you smarter about this incredible new field."
--Pete Huang, author of The Neuron
Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside book for details.

All formats and editions

Paperback (2 editions):
  Pearson – 11 Sep 2023: 26677 lei, delivery in 17-23 days (express delivery: +2428 lei, 5-11 days)
  Pearson Education – 13 Oct 2024: 29393 lei, delivery in 22-36 days

From the Addison-Wesley Data & Analytics Series

Price: 29393 lei

Previous price: 36741 lei
-20% New

Express points: 441

Estimated price in other currencies:
5627€ 5787$ 4668£

Book available

Economy delivery: 27 January to 10 February

Orders by phone: 021 569.72.76

Specifications

ISBN-13: 9780135346563
ISBN-10: 0135346568
Pages: 384
Dimensions: 176 x 230 x 21 mm
Weight: 0.63 kg
Edition: 2nd edition
Publisher: Pearson Education
Series: Addison-Wesley Data & Analytics Series


Contents

Foreword xv
Preface xvii
Acknowledgments xxi
About the Author xxiii

Part I: Introduction to Large Language Models 1

Chapter 1: Overview of Large Language Models 3
  What Are Large Language Models? 4
  Popular Modern LLMs 20
  Domain-Specific LLMs 22
  Applications of LLMs 23
  Summary 29

Chapter 2: Semantic Search with LLMs 31
  Introduction 31
  The Task 32
  Solution Overview 34
  The Components 35
  Putting It All Together 51
  The Cost of Closed-Source Components 54
  Summary 55

Chapter 3: First Steps with Prompt Engineering 57
  Introduction 57
  Prompt Engineering 57
  Working with Prompts Across Models 65
  Building a Q/A Bot with ChatGPT 69
  Summary 74

Part II: Getting the Most Out of LLMs 75

Chapter 4: Optimizing LLMs with Customized Fine-Tuning 77
  Introduction 77
  Transfer Learning and Fine-Tuning: A Primer 78
  A Look at the OpenAI Fine-Tuning API 82
  Preparing Custom Examples with the OpenAI CLI 84
  Setting Up the OpenAI CLI 87
  Our First Fine-Tuned LLM 88
  Case Study: Amazon Review Category Classification 93
  Summary 95

Chapter 5: Advanced Prompt Engineering 97
  Introduction 97
  Prompt Injection Attacks 97
  Input/Output Validation 99
  Batch Prompting 103
  Prompt Chaining 104
  Chain-of-Thought Prompting 111
  Revisiting Few-Shot Learning 113
  Testing and Iterative Prompt Development 123
  Summary 124

Chapter 6: Customizing Embeddings and Model Architectures 125
  Introduction 125
  Case Study: Building a Recommendation System 126
  Summary 144

Part III: Advanced LLM Usage 145

Chapter 7: Moving Beyond Foundation Models 147
  Introduction 147
  Case Study: Visual Q/A 147
  Case Study: Reinforcement Learning from Feedback 163
  Summary 173

Chapter 8: Advanced Open-Source LLM Fine-Tuning 175
  Introduction 175
  Example: Anime Genre Multilabel Classification with BERT 176
  Example: LaTeX Generation with GPT2 189
  Sinan's Attempt at Wise Yet Engaging Responses: SAWYER 193
  The Ever-Changing World of Fine-Tuning 206
  Summary 207

Chapter 9: Moving LLMs into Production 209
  Introduction 209
  Deploying Closed-Source LLMs to Production 209
  Deploying Open-Source LLMs to Production 210
  Summary 225

Part IV: Appendices 227

Appendix A: LLM FAQs 229
Appendix B: LLM Glossary 233
Appendix C: LLM Application Archetypes 239

Index 243

About the author

Sinan Ozdemir is currently the founder and CTO of Shiba Technologies. Sinan is a former lecturer of Data Science at Johns Hopkins University and the author of multiple textbooks on data science and machine learning. Additionally, he is the founder of the recently acquired Kylie.ai, an enterprise-grade conversational AI platform with RPA capabilities. He holds a master's degree in Pure Mathematics from Johns Hopkins University and is based in San Francisco, CA.