Mastering Transformers: The Journey from BERT to Large Language Models and Stable Diffusion
eBook

$31.99 

Available on Compatible NOOK devices, the free NOOK App and in My Digital Library.

Product Details

ISBN-13: 9781837631506
Publisher: Packt Publishing
Publication date: 06/03/2024
Sold by: Barnes & Noble
Format: eBook
Pages: 462
File size: 19 MB
Note: This product may take a few minutes to download.

About the Author

Savaş Yıldırım graduated from the Department of Computer Engineering at Istanbul Technical University and holds a Ph.D. in Natural Language Processing (NLP). He is currently an associate professor at Istanbul Bilgi University, Turkey, and a visiting researcher at Ryerson University, Canada. He is a proactive lecturer and researcher with more than 20 years of experience teaching courses on machine learning, deep learning, and NLP. He has contributed significantly to the Turkish NLP community by developing numerous open-source software tools and resources. He also provides comprehensive consultancy to AI companies on their R&D projects. In his spare time, he writes and directs short films, and enjoys practicing yoga.
Meysam Asgari-Chenaghlu is an AI manager at Carbon Consulting and a Ph.D. candidate at the University of Tabriz. He has been a consultant for Turkey's leading telecommunications and banking companies, and has worked on various projects, including natural language understanding and semantic search.

Table of Contents

  1. From Bag-of-Words to the Transformer
  2. A Hands-On Introduction to the Subject
  3. Autoencoding Language Models
  4. Autoregressive Language Models
  5. Fine-Tuning Language Model for Text Classification
  6. Fine-Tuning Language Models for Token Classification
  7. Text Representation
  8. Boosting Your Model Performance
  9. Parameter Efficient Fine-Tuning
  10. Zero-Shot and Few-Shot Learning in NLP
  11. Explainable AI (XAI) for NLP
  12. Working with Efficient Transformers
  13. Cross-Lingual Language Modeling
  14. Serving Transformer Models
  15. Model Tracking and Monitoring
  16. Vision Transformers
  17. Tabular Transformers
  18. Multimodal Transformers