LEADER 00000cam a2200361Mu 4500 
003    OCoLC 
005    20240129213017.0 
006    m        d         
007    cr |n||||||||| 
008    201011s2020    xx      o     ||| 0 und d 
020    9781492083290|q(paperback) 
020    1492083291|q(paperback) 
035    (OCoLC)1202550935 
040    VT2|beng|cVT2|dEBLCP|dTOH|dOCLCQ|dLANGC|dOCLCQ|dOCLCL 
049    INap 
082 04 006.3/1 
082 04 006.3/1|qOCoLC|223/eng/20230216 
099    eBook O'Reilly for Public Libraries 
245 00 Introducing MLOps|h[electronic resource] /|cLéo Dreyfus-
       Schmidt ... [et al.].|h[O'Reilly electronic resource] 
260    [S.l.] :|bO'Reilly Media, Inc.,|c2020. 
300    1 online resource 
500    Title from content provider. 
505 0  Cover -- Copyright -- Table of Contents -- Preface -- Who 
       This Book Is For -- How This Book Is Organized -- 
       Conventions Used in This Book -- O'Reilly Online Learning 
       -- How to Contact Us -- Acknowledgments -- Part I. MLOps: 
       What and Why -- Chapter 1. Why Now and Challenges -- 
       Defining MLOps and Its Challenges -- MLOps to Mitigate 
       Risk -- Risk Assessment -- Risk Mitigation -- MLOps for 
       Responsible AI -- MLOps for Scale -- Closing Thoughts -- 
       Chapter 2. People of MLOps -- Subject Matter Experts -- 
       Data Scientists -- Data Engineers -- Software Engineers --
       DevOps -- Model Risk Manager/Auditor 
505 8  Machine Learning Architect -- Closing Thoughts -- Chapter 
       3. Key MLOps Features -- A Primer on Machine Learning -- 
       Model Development -- Establishing Business Objectives -- 
       Data Sources and Exploratory Data Analysis -- Feature 
       Engineering and Selection -- Training and Evaluation -- 
       Reproducibility -- Responsible AI -- Productionalization 
       and Deployment -- Model Deployment Types and Contents -- 
       Model Deployment Requirements -- Monitoring -- DevOps 
       Concerns -- Data Scientist Concerns -- Business Concerns --
       Iteration and Life Cycle -- Iteration -- The Feedback 
       Loop -- Governance -- Data Governance 
505 8  Process Governance -- Closing Thoughts -- Part II. MLOps: 
       How -- Chapter 4. Developing Models -- What Is a Machine 
       Learning Model? -- In Theory -- In Practice -- Required 
       Components -- Different ML Algorithms, Different MLOps 
       Challenges -- Data Exploration -- Feature Engineering and 
       Selection -- Feature Engineering Techniques -- How Feature
       Selection Impacts MLOps Strategy -- Experimentation -- 
       Evaluating and Comparing Models -- Choosing Evaluation 
       Metrics -- Cross-Checking Model Behavior -- Impact of 
       Responsible AI on Modeling -- Version Management and 
       Reproducibility -- Closing Thoughts 
505 8  Chapter 5. Preparing for Production -- Runtime 
       Environments -- Adaptation from Development to Production 
       Environments -- Data Access Before Validation and Launch 
       to Production -- Final Thoughts on Runtime Environments --
       Model Risk Evaluation -- The Purpose of Model Validation --
       The Origins of ML Model Risk -- Quality Assurance for 
       Machine Learning -- Key Testing Considerations -- 
       Reproducibility and Auditability -- Machine Learning 
       Security -- Adversarial Attacks -- Other Vulnerabilities --
       Model Risk Mitigation -- Changing Environments -- 
       Interactions Between Models -- Model Misbehavior 
505 8  Closing Thoughts -- Chapter 6. Deploying to Production -- 
       CI/CD Pipelines -- Building ML Artifacts -- What's in an 
       ML Artifact? -- The Testing Pipeline -- Deployment 
       Strategies -- Categories of Model Deployment -- 
       Considerations When Sending Models to Production -- 
       Maintenance in Production -- Containerization -- Scaling 
       Deployments -- Requirements and Challenges -- Closing 
       Thoughts -- Chapter 7. Monitoring and Feedback Loop -- How
       Often Should Models Be Retrained? -- Understanding Model 
       Degradation -- Ground Truth Evaluation -- Input Drift 
       Detection -- Drift Detection in Practice 
520    More than half of the analytics and machine learning (ML) 
       models created by organizations today never make it into 
       production. Instead, many of these ML models do nothing 
       more than provide static insights in a slideshow. If they 
       aren't truly operational, these models can't possibly do 
       what you've trained them to do. This book introduces 
       practical concepts to help data scientists and application 
       engineers operationalize ML models to drive real business 
       change. Through lessons based on numerous projects around 
       the world, six experts in data analytics provide an applied 
       four-step approach (Build, Manage, Deploy and Integrate, 
       and Monitor) for creating ML-infused applications within 
       your organization. You'll learn how to: fulfill data 
       science value by reducing friction throughout ML pipelines 
       and workflows; constantly refine ML models through 
       retraining, periodic tuning, and even complete remodeling 
       to ensure long-term accuracy; design the MLOps life cycle 
       to ensure that people-facing models are unbiased, fair, 
       and explainable; operationalize ML models not only for 
       pipeline deployment but also for external business systems 
       that are more complex and less standardized; and put the 
       four-step Build, Manage, Deploy and Integrate, and Monitor 
       approach into action. 
590    O'Reilly|bO'Reilly Online Learning: Academic/Public 
       Library Edition 
700 1  Dreyfus-Schmidt, Léo. 
856 40 |uhttps://ezproxy.naperville-lib.org/login?url=https://
       learning.oreilly.com/library/view/~/9781492083283/?ar
       |zAvailable on O'Reilly for Public Libraries 
938    ProQuest Ebook Central|bEBLB|nEBL6417152 
994    92|bJFN