Description
Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs by Ben Auffarth, ISBN-13: 978-1835083468
[PDF eBook eTextbook] – Available Instantly
- Publisher: Packt Publishing (December 22, 2023)
- Language: English
- 360 pages
- ISBN-10: 1835083463
- ISBN-13: 978-1835083468
2024 Edition – Get to grips with the LangChain framework to develop production-ready applications, including agents and personal assistants.
Key Features
- Learn how to leverage LangChain to work around LLMs’ inherent weaknesses
- Delve into LLMs with LangChain and explore their fundamentals, ethical dimensions, and application challenges
- Get better at using ChatGPT and GPT models, from heuristics and training to scalable deployment, empowering you to transform ideas into reality
Book Description
ChatGPT and the GPT models by OpenAI have brought about a revolution not only in how we write and research but also in how we can process information. This book discusses the functioning, capabilities, and limitations of LLMs underlying chat systems, including ChatGPT and Gemini. It demonstrates, in a series of practical examples, how to use the LangChain framework to build production-ready and responsive LLM applications for tasks ranging from customer support to software development assistance and data analysis – illustrating the expansive utility of LLMs in real-world applications.
Unlock the full potential of LLMs within your projects as you navigate through guidance on fine-tuning, prompt engineering, and best practices for deployment and monitoring in production environments. Whether you’re building creative writing tools, developing sophisticated chatbots, or crafting cutting-edge software development aids, this book will be your roadmap to mastering the transformative power of generative AI with confidence and creativity.
What you will learn
- Create LLM apps with LangChain, like question-answering systems and chatbots
- Understand transformer models and attention mechanisms
- Automate data analysis and visualization using pandas and Python
- Grasp prompt engineering to improve performance
- Fine-tune LLMs and get to know the tools to unleash their power
- Deploy LLMs as a service with LangChain and apply evaluation strategies
- Privately interact with documents using open-source LLMs to prevent data leaks
Who this book is for
The book is for developers, researchers, and anyone interested in learning more about LangChain. Whether you are a beginner or an experienced developer, this book will serve as a valuable resource if you want to get the most out of LLMs using LangChain.
Basic knowledge of Python is a prerequisite, while prior exposure to machine learning will help you follow along more easily.
Table of Contents:
Preface
Who this book is for
What this book covers
To get the most out of this book
Work with notebooks and projects
Get in touch
What Is Generative AI?
Introducing generative AI
What are generative models?
Why now?
Understanding LLMs
How do GPT models work?
Transformers
Pre-training
Tokenization
Conditioning
How have GPT models evolved?
Model size
The GPT model series
PaLM and Gemini
Llama and Llama 2
Claude 1–3
Mixture of Experts (MoE)
How to use LLMs
What are text-to-image models?
What can AI do in other domains?
Summary
Questions
LangChain for LLM Apps
Going beyond stochastic parrots
What are the limitations of LLMs?
How can we mitigate LLM limitations?
What is an LLM app?
What is LangChain?
Exploring key components of LangChain
What are chains?
What are agents?
What is memory?
What are tools?
How does LangChain work?
LangChain package structure
Comparing LangChain with other frameworks
Summary
Questions
Getting Started with LangChain
How to set up the dependencies for this book
Exploring cloud integrations
Environment setup and API keys
OpenAI
Hugging Face
Google
Building blocks for LLM interaction
LLMs
Fake LLM
Chat models
Prompts
Chains
LangChain Expression Language
Text-to-Image
DALL-E
Replicate
Image understanding
Running local models
Hugging Face Transformers
llama.cpp
GPT4All
Prototyping an application for customer service
Sentiment analysis
Text classification
Document summarization
Applying map-reduce
Monitoring token usage
Summary
Questions
Building Capable Assistants
Answering questions with tools
Tool use
Defining custom tools
Tool decorator
Subclassing BaseTool
StructuredTool dataclass
Error handling
Implementing a research assistant with tools
Building a visual interface
Exploring agent architectures
Extracting structured information from documents
Mitigating hallucinations through fact-checking
Summary
Questions
Building a Chatbot Like ChatGPT
What is a chatbot?
From vectors to RAG
Vector embeddings
Embeddings in LangChain
Vector storage
Vector indexing
Vector libraries
Vector databases
Document loaders
Retrievers in LangChain
kNN retriever
PubMed retriever
Custom retrievers
Implementing a chatbot with a retriever
Document loaders
Vector storage
Conversation Memory: Preserving Context
ConversationBufferMemory
ConversationBufferWindowMemory
ConversationSummaryMemory
ConversationKGMemory
CombinedMemory
Long-term persistence
Moderating responses
Guardrails
Summary
Questions
Developing Software with Generative AI
Software development and AI
Code LLMs
Writing code with LLMs
Vertex AI
StarCoder
StarChat
Llama 2
Small local model
Automating software development
Implementing a feedback loop
Tool use
Error handling
Finishing touches to our developer
Summary
Questions
LLMs for Data Science
The impact of generative models on data science
Automated data science
Data collection
Visualization and EDA
Preprocessing and feature extraction
AutoML
Using agents to answer data science questions
Data exploration with LLMs
Summary
Questions
Customizing LLMs and Their Output
Conditioning LLMs
Methods for conditioning
Reinforcement learning with human feedback
Low-rank adaptation
Inference-time conditioning
Fine-tuning
Setup for fine-tuning
Open-source models
Commercial models
Prompt engineering
Prompt techniques
Zero-shot prompting
Few-shot learning
CoT prompting
Self-consistency
ToT
Summary
Questions
Generative AI in Production
How to get LLM apps ready for production
How to evaluate LLM apps
Comparing two outputs
Comparing against criteria
String and semantic comparisons
Running evaluations against datasets
How to deploy LLM apps
FastAPI web server
Ray
How to observe LLM apps
Tracking responses
Observability tools
LangSmith
PromptWatch
Summary
Questions
The Future of Generative Models
The current state of generative AI
Challenges
Trends in model development
Big Tech vs. small enterprises
Artificial General Intelligence
Economic consequences
Creative industries
Education
Law
Manufacturing
Medicine
Military
Societal implications
Misinformation and cybersecurity
Regulations and implementation challenges
The road ahead
Other Books You May Enjoy
Index
About the Author
Ben Auffarth is a full-stack data scientist with more than 15 years of work experience. With a background and Ph.D. in computational and cognitive neuroscience, he has designed and conducted wet lab experiments on cell cultures, analyzed experiments with terabytes of data, run brain models on IBM supercomputers with up to 64k cores, built production systems processing hundreds of thousands of transactions per day, and trained language models on a large corpus of text documents. He co-founded the Data Science Speakers community in London and is its former president.
What makes us different?
• Instant Download
• Always Competitive Pricing
• 100% Privacy
• FREE Sample Available
• 24/7 Live Customer Support