ChatGPT: Principles and Architecture
Author: Ge Chengen · Language: English · Paperback – June 2025
Sections cover the principles, architecture, pretraining, transfer learning, and middleware programming techniques of ChatGPT, providing a useful resource for the research and academic communities. The book is ideal for industry professionals, researchers, and students in AI and computer science who face daily challenges in understanding and implementing complex large language model technologies.
- Offers comprehensive insights into the principles and architecture of ChatGPT, helping readers understand the intricacies of large language models
- Details large language model technologies, covering key aspects such as pretraining, transfer learning, and middleware programming, and treating technical topics in an accessible manner
- Includes real-world examples and case studies, illustrating how large language models can be applied in various industries and professional settings
- Provides future developments and potential innovations in the field of large language models, preparing readers for upcoming changes and technological advancements
Price: 790.03 lei
Previous price: 987.54 lei (-20%) · New
Express points: 1185
Estimated price in foreign currency: 151.18€ • 158.56$ • 125.85£
Not yet published
Specifications
ISBN-13: 9780443274367
ISBN-10: 0443274363
Pages: 300
Dimensions: 152 x 229 mm
Publisher: ELSEVIER SCIENCE
Table of Contents
1. The New Milestone in AI - ChatGPT
2. In-Depth Understanding of Transformer Architecture
3. Generative Pretraining
4. Unsupervised Multi-task and Zero-shot Learning
5. Sparse Attention and Content-based Learning in GPT-3
6. Pretraining Strategies for Large Language Models
7. Proximal Policy Optimization Algorithms
8. Human Feedback Reinforcement Learning
9. Low-Compute Domain Transfer for Large Language Models
10. Middleware Programming
11. The Future Path of Large Language Models