Title LLM security workshop : tackling OWASP's top 10 risks head-on. [O'Reilly electronic resource]

Edition [First edition].
Publication Info. [Place of publication not identified] : Packt Publishing, 2023.
Description 1 online resource (1 video file (2 hr., 1 min.)) : sound, color.
Playing Time 02:01:00
Description digital (rdatr); video file (rdaft)
Performer Clint Bodungen, instructor.
Summary LLMs introduce new attack vectors that can compromise your AI systems. This intensive workshop, led by cybersecurity expert Clint Bodungen, equips you with hands-on skills to fortify your LLM stack against the OWASP Top 10 most critical risks for enterprise-grade LLM applications. Diving deep into the attack vectors unique to these powerful generative models, you will learn hands-on techniques to safeguard your apps built on large language models.

The workshop covers a wide range of practical methods to harden your LLM security posture. You will discover how to protect against supply chain attacks through vulnerable third-party code, libraries, models, and plugins. The session outlines processes to prevent unauthorized data access, theft of proprietary data, and poisoning of your training dataset. Through interactive examples and sample code, you will grasp approaches to filter malicious user input, sanitize model outputs, and implement robust validation mechanisms. The workshop focuses in particular on skill-building around prompt engineering as a powerful mechanism to keep generative models restricted within secure guardrails.

What you will learn
- How to safeguard your LLM apps from supply chain vulnerabilities
- Ways to prevent data poisoning, unauthorized access, and theft
- Techniques to filter malicious user input and sanitize model output
- Methods to block jailbreaking and misuse of your LLMs
- Tools and frameworks to automate security mechanisms in your stack

Audience
Developers, data scientists, and security professionals seeking to fortify their enterprise-grade large language model (LLM) applications against cybersecurity threats. This workshop is designed for individuals interested in hands-on learning to secure LLMs and mitigate the risks outlined in OWASP's Top 10.

About the Author
Clint Bodungen is a globally recognized cybersecurity authority with over a quarter-century of experience. A veteran of the United States Air Force and a seasoned professional at notable cybersecurity firms such as Symantec, Kaspersky Lab, and Booz Allen Hamilton, he is renowned for his innovative approaches in the field. Clint is the author of two insightful books, 'Hacking Exposed: Industrial Control Systems' and 'ChatGPT for Cybersecurity Cookbook,' works that underscore his wide-ranging knowledge and expertise and establish him as a thought leader in this ever-evolving field.
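As a taste of the input-filtering and output-sanitization techniques listed above, the following minimal Python sketch shows one naive way such checks might look. It is not taken from the workshop; the pattern list and function names are illustrative assumptions only.

    # Minimal sketch (not from the workshop): naive input filtering and
    # output sanitization for an LLM application; all names are illustrative.
    import html
    import re

    # Phrases often associated with prompt-injection attempts (toy list).
    SUSPICIOUS_PATTERNS = [
        re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
        re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
    ]

    def filter_user_input(text, max_len=2000):
        # Reject oversized input and flag likely prompt-injection phrases.
        if len(text) > max_len:
            raise ValueError("input too long")
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                raise ValueError("input rejected: possible prompt injection")
        return text

    def sanitize_model_output(text):
        # Escape HTML so model output cannot inject markup into a web page.
        return html.escape(text)

    if __name__ == "__main__":
        prompt = filter_user_input("Summarize today's security news.")
        print(sanitize_model_output("<b>Model reply for:</b> " + prompt))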
Subject Computer networks -- Security measures.
Artificial intelligence -- Computer programs -- Security measures.
Genre Instructional films.
Nonfiction films.
Internet videos.
Added Author Bodungen, Clint E., presenter.
Packt Publishing, publisher.
ISBN 9781835880746 (electronic video)
1835880746 (electronic video)