Webinar

Perspectives on AI, Hype and Security


Thursday, March 14, 2024

2:00 - 3:00 PM EDT

60 minutes, including Q&A


It’s 2024, and the AI hype continues to build. With countless articles and bold claims circulating, it can be hard to tell which statements reflect hope and which reflect reality, and that poses a challenge for anyone trying to make sense of the landscape and plan accordingly.

Join us for a grounded conversation that punctures the hype and focuses on the AI topics that matter to security professionals. We discuss the impact of generative AI on the security industry, its risks, the realities, and what you need to know to navigate the road ahead.

Sponsored by:

Snyk

Speakers

Nathan Hamiel

Senior Director of Research

Kudelski Security

Nathan Hamiel is Senior Director of Research at Kudelski Security, where he leads the fundamental and applied research team. As part of the Innovation group working to define the future of the company's products and services, his team focuses on privacy, advanced cryptography, emerging technologies, and special projects. He is also responsible for the research function across the company, connecting the dots between business units and fostering collaboration both inside and outside the organization. For over 20 years, he has helped customers worldwide solve complex security challenges and accelerate innovation.

Nathan focuses on emerging and disruptive technologies and their intersection with information security. His research includes new approaches to difficult security problems and the safety, security, and privacy of artificial intelligence. He is a proponent of agility and simplification and their application to solving security challenges. Nathan is a regular public speaker and has presented his research at global security events including Black Hat, DEF CON, HOPE, ShmooCon, SecTor, ToorCon, and many others. He is also a veteran member of the Black Hat review board, where he serves as track lead for the AI, ML, and Data Science track.


Ram Shankar Siva Kumar

Data Cowboy, Microsoft; Harvard

Ram Shankar Siva Kumar is a Data Cowboy working at the intersection of machine learning and security. At Microsoft, he founded the AI Red Team, bringing together an interdisciplinary group of researchers and engineers to proactively attack AI systems and defend them from attacks. His work on AI and security has appeared at industry conferences including RSA, Black Hat, DEF CON, BlueHat, DerbyCon, MIRCon, and Infiltrate, and at academic workshops at NeurIPS, ICLR, ICML, IEEE S&P, and ACM CCS. His work has been covered by Bloomberg, VentureBeat, Wired, and GeekWire. He founded the Adversarial ML Threat Matrix, an ATT&CK-style framework enumerating threats to machine learning. His work on adversarial machine learning was notably featured in the National Security Commission on Artificial Intelligence (NSCAI) Final Report presented to the United States Congress and the President. He is an affiliate at the Berkman Klein Center for Internet & Society at Harvard University and a Tech Policy Fellow at UC Berkeley.


Rich Harang

Principal Security Architect

NVIDIA

Rich Harang is a Principal Security Architect at NVIDIA specializing in ML/AI systems, with over a decade of experience at the intersection of computer security, machine learning, and privacy. He received his PhD in Statistics from the University of California, Santa Barbara in 2010. Prior to joining NVIDIA, he led the Algorithms Research team at Duo, led research at Sophos AI on using machine learning to detect malicious software, scripts, and web content, and served as a Team Lead at the US Army Research Laboratory. His research interests include adversarial machine learning, addressing bias and uncertainty in machine learning, and using machine learning to support human analysis. His work has been presented at USENIX, Black Hat, IEEE S&P workshops, and the DEF CON AI Village, among others, and has been featured in The Register and KrebsOnSecurity.


Ariel Herbert-Voss

CEO and Founder

RunSybil

Ariel Herbert-Voss is founder and CEO of RunSybil, a company focused on offensive security automation using large language models and other AI techniques. Previously, Ari was OpenAI's first security research scientist, developing algorithmic exploits for large language models and leading red team engagements for the GPT-3 and Codex model releases. Ari has published research at Black Hat, DEF CON, and NeurIPS, and co-founded the DEF CON AI Village community.


Randall Degges

Senior Director, Developer Relations

Snyk

 


Steve Paul

Moderator

Black Hat
