SECURITY EDITION

April 10, 2026

Computer History Museum

0 0 0

00

Days

00

Hours

00

Minutes

00

Seconds

About

Speakers

Agenda

Volunteers

Sponsors

Venue

Contact Us

Register

About

Speakers

Agenda

Volunteers

Sponsors

Venue

Contact Us

AWS Community Day

About AWS Community Day

The world would be such a better place if everyone took information security seriously. Simple misconfigurations and poor security hygiene can lead to catastrophic losses. Education and awareness are the keys to avoiding such disasters.

This Community Day, let's pledge to learn something new, or to make a new friend in the community, or find a security management tool to help us make the world a better place.

The AWS Community Day features expert-led talks, technical workshops, hands-on labs, and networking opportunities with industry leaders and fellow enthusiasts from around the globe. Whether you're an experienced professional or a newcomer in the world of AWS, come join us. Be part of the movement to create a better, smarter, and more connected world.

Topics at the AWS Community Day

Security Governance
Security Assurance
Identity And Access Management
Threat Detection
Vulnerability Management
Infrastructure Protection
Data Protection
Application Security
Incident Response

KEYNOTE SPEAKERS

Christopher Rae: Head of AI Security Go-to-Market, AWS

Christopher Rae: Head of AI Security Go-to-Market, AWS

Christopher Rae leads AI Security Go-to-Market for the AWS Worldwide Specialist Organization, where he defines global strategy for securing AI workloads and advancing AI-powered security capabilities. His work focuses on helping customers adopt AI on AWS securely by embedding secure-by-design and defense-in-depth principles across services such as Amazon Bedrock, Amazon SageMaker, Amazon Q, and open-source AI solutions.


With deep expertise spanning cybersecurity, artificial intelligence, and emerging technologies, Christopher brings a rare blend of technical architecture and business strategy. He is a frequent advisor, speaker, and thought leader on AI security, engaging with executive leadership, field teams, and the broader community to turn security into a competitive advantage while enabling innovation at scale.


https://www.linkedin.com/in/christopherrae/

Sarah Currey: Principal Practice Manager for AWS Security

Sarah Currey: Principal Practice Manager for AWS Security

Sarah Currey is a Principal Practice Manager for AWS Security, where she works closely with AWS leadership to shape and strengthen security practices, culture, and strategy across the organization. Partnering directly with the AWS Security VP, Sarah focuses on building long-term security programs that protect customers and internal teams while fostering a blame-free, learning-driven security culture.


Her work spans three core areas: developing forward-looking security strategy and leadership capability, building scalable mechanisms that improve security readiness and resilience, and driving meaningful community impact through security initiatives and sponsorships. With a deep commitment to continuous improvement and innovation, Sarah brings a practical, human-centered perspective to security that resonates far beyond technology alone.


https://www.linkedin.com/in/sarahcurrey/

DISTINGUISHED SPEAKERS

Betajob

Anton Babenko

Betajob

Streamlining Compliance: Leveraging Open-Source Terraform AWS modules [Advanced]

Are you navigating the complexities of compliance frameworks like SOC2, CIS, and HIPAA and seeking a more efficient path? This talk breaks down these frameworks simply and shows you a time-saving trick, making it perfect for anyone wanting to make their organization's compliance journey much easier. I'll start by outlining the basics of these frameworks and highlighting the challenges businesses face in implementing them. As the creator and maintainer of the terraform-aws-modules projects, I'll be excited to share how using these open-source Terraform AWS modules can streamline the compliance process. I'll walk you through real-life examples showing how such solutions significantly reduce the effort and time required for compliance. At the end of the talk, attendees will get actionable insights on using Terraform AWS modules for efficient compliance management.

View in Agenda
Principal Developer Advocate @ AWS

Gunnar Grosch

Principal Developer Advocate @ AWS

Build-time to runtime: Automated security testing in your pipeline [Advanced]

Discover how to implement comprehensive security testing throughout your CI/CD pipeline. This session demonstrates how to integrate static analysis with Amazon CodeGuru Reviewer, dependency scanning with Amazon Inspector, and container security with Amazon ECR scanning. Learn to orchestrate security gates using AWS CodePipeline, implement software composition analysis, and automate vulnerability remediation with Amazon Q Developer. Walk away with practical patterns for embedding security testing at every stage of your delivery pipeline while maintaining development velocity.

View in Agenda
Senior Solutions Architect @AWS

Ishneet Kaur Dua

Senior Solutions Architect @AWS

Securing Large Language Models: Best Practices for Prompt Engineering and Mitigating Prompt Injection Attacks [Beginner]

The rapid adoption of large language models (LLMs) in enterprise IT environments has introduced new challenges in security, responsible AI, and privacy. One critical risk is the vulnerability to prompt injection attacks, where malicious actors manipulate input prompts to influence the LLM's outputs and introduce biases or harmful outcomes. This guide outlines security guardrails for mitigating prompt engineering and prompt injection attacks. The authors present a comprehensive approach to enhancing the prompt-level security of LLM-powered applications, including robust authentication mechanisms, encryption protocols, and optimized prompt designs. These measures aim to significantly improve the reliability and trustworthiness of AI-generated outputs, while maintaining high accuracy for non-malicious queries. The proposed security guardrails are compatible with various model providers and prompt templates, but require additional customization for specific models. By implementing these best practices, organizations can instill higher trust and credibility in the use of generative AI-based solutions, maintain uninterrupted system operations, and enable in-house data scientists and prompt engineers to uphold responsible AI practices.

View in Agenda
Principal Technical Account Manager @ AWS

Manas Satpahti

Principal Technical Account Manager @ AWS

Simplify Security Events Log Analysis with Amazon Q [Advanced]

Discover how to build security-focused applications with Amazon Q to analyze AWS accounts for compliance and vulnerabilities. Use automation to centralize security logs and events from AWS services, partner solutions, and open-source tools, and analyze using an intuitive chatbot interface. Through practical examples, explore how Generative AI enhances security analysis, delivering a richer experience with queries in natural language.

View in Agenda
Sr AI/ML Architect @ AWS

Parth Girish Patel

Sr AI/ML Architect @ AWS

Securing Large Language Models: Best Practices for Prompt Engineering and Mitigating Prompt Injection Attacks [Beginner]

The rapid adoption of large language models (LLMs) in enterprise IT environments has introduced new challenges in security, responsible AI, and privacy. One critical risk is the vulnerability to prompt injection attacks, where malicious actors manipulate input prompts to influence the LLM's outputs and introduce biases or harmful outcomes. This guide outlines security guardrails for mitigating prompt engineering and prompt injection attacks. The authors present a comprehensive approach to enhancing the prompt-level security of LLM-powered applications, including robust authentication mechanisms, encryption protocols, and optimized prompt designs. These measures aim to significantly improve the reliability and trustworthiness of AI-generated outputs, while maintaining high accuracy for non-malicious queries. The proposed security guardrails are compatible with various model providers and prompt templates, but require additional customization for specific models. By implementing these best practices, organizations can instill higher trust and credibility in the use of generative AI-based solutions, maintain uninterrupted system operations, and enable in-house data scientists and prompt engineers to uphold responsible AI practices.

View in Agenda
AWS Community Hero @ Answers for AWS

Peter Sankauskas

AWS Community Hero @ Answers for AWS

Everything you didn't want to know about IAM [Beginner]

If you have used AWS, you have seen an error message stating "x is not authorized to perform y". This is a annoying fact. But how do you solve these? In this talk, Peter will walk though how IAM is designed, different types of policies and when they are useful. You will leave with techniques for understanding and debugging those access issues you wish didn't exist.

View in Agenda
Sr. Solutions Architect @ AWS

Sandeep Mohanty

Sr. Solutions Architect @ AWS

Simplify Security Events Log Analysis with Amazon Q [Advanced]

Discover how to build security-focused applications with Amazon Q to analyze AWS accounts for compliance and vulnerabilities. Use automation to centralize security logs and events from AWS services, partner solutions, and open-source tools, and analyze using an intuitive chatbot interface. Through practical examples, explore how Generative AI enhances security analysis, delivering a richer experience with queries in natural language.

View in Agenda
Security specialist at SNOW Upgrade

Satish Jipster

Security specialist at SNOW Upgrade

Securing Generative AI applications using AWS Services [Business Focused]

Securing generative AI applications using AWS services involves implementing robust strategies to protect data, models, and infrastructure. This presentation explores how AWS tools like Identity and Access Management (IAM), AWS Key Management Service (KMS), and Amazon SageMaker enable secure model development, training, and deployment. Topics include safeguarding sensitive data with encryption, ensuring network security through Virtual Private Clouds (VPCs), and mitigating threats using services like AWS Shield and AWS WAF. Best practices for monitoring AI workloads with Amazon CloudWatch and addressing compliance requirements through AWS Audit Manager will also be discussed. Attendees will gain actionable insights to build and maintain secure, scalable, and resilient generative AI applications on AWS.

View in Agenda
Technical Leader, AWS Solutions Architecture

Shivansh Singh

Technical Leader, AWS Solutions Architecture

Creating secure code with Amazon Q Developer [Beginner]

In this session you will learn how to use Amazon Q Developer to create secure code. Write unit tests, optimize code, and scan for vulnerabilities, and discover how Amazon Q Developer suggests remediations that help fix your code instantaneously. Also, learn how you can use Amazon Q Developer security scanning to outperform other publicly benchmarkable tools on detection across popular programming languages.

View in Agenda
Founder/ Principal Pentester, Researcher, Author

Teri Radichel

Founder/ Principal Pentester, Researcher, Author

Threat Modeling a Batch Job System on AWS [Advanced]

I’ve been blogging about building a batch job system on AWS for about two years now as time allows, documented at https://medium.com/cloud-security/automating-cybersecurity-metrics-890dfabb6198. Initially I was “just” going to quickly show how to use batch jobs to run tools to analyze security in AWS accounts. For example, I run Prowler and other proprietary tools on AWS penetration tests and I can run those tools as batch jobs. But it turned into a much bigger endeavor as I considered how to deploy and run those jobs ~ securely ~ in a production environment. In this presentation, I’ll walk through some of the threats, mitigations, and I’ll talk about some unpublished developments.

View in Agenda

AGENDA

TimeSession Details
Morning Sessions
08:00 AM - 4:00 PM
Badge pick up, Assisted Registration, Information Desk - Grand Lobby
08:30 AM - 09:20 AM
50 minutes
Breakfast and Networking - Grand Hall
Closes 10 minutes before Keynote.
09:30 AM - 10:00 AM
30 minutes
Welcome, Introductions and Sponsors Parade - John Varghese - AWS Hero - Hahn Auditorium
10:00 AM - 10:45 AM
45 minutes
Keynote - Everything starts with Security - Christopher Rae, Sarah Currey - Hahn Auditorium
10:45 AM - 11:15 AM
30 minutes
Tea/coffee break and Networking - Grand Hall Sponsored by AWS
Tracks
Hahn Auditorium
Lovelace
Boole
Glass rooms
11:15 AM - 11:45 AM
30 minutes
Use IAM Roles Anywhere to reduce the use of static IAM keys [Advanced]

--Mike Graff

Security Considerations for MLOps Infrastructure on AWS [Advanced]

--David Akuma

Deploy GenAI Apps on AWS with OPEA (Intel AI workshop)


Deploy a GenAI app on AWS EKS using OPEA and CloudFormation, apply guardrails, and optimize performance with AWS services including OpenSearch and Bedrock.

--Alex Sin

Builder cards:(not a talk)
AWS BuilderCards, is a fun and educational deckbuilding card game, designed to teach how different AWS services work together to build well-architected workloads, while having fun with other attendees.

--Shivansh Singh

11:50 AM - 12:30 PM
40 minutes
Optimizing GPU Usage in Amazon EKS: Improving Performance and Security in Kubernetes [Advanced]

--Natalie Serebryakoval

AWS Security for Front End Devs [Intermediate]

--Chris Miller

12:20 PM - 1:20 PM
1 hour
Lunch and Networking - Grand Hall SPONSORS WANTED!!
Also Brain Date
Post Lunch Sessions
Tracks
Hahn Auditorium
Lovelace
Boole
Brain Date topics
1:30 PM - 1:55 PM
25 minutes
Threat Modeling a Batch Job System on AWS [Advanced]

--Teri Radichel

Securing Generative AI applications using AWS Services [Business Focused]

--Satish Jipster

Commvault Ransomware Attack experience - Minutes to Meltdown (workshop)


Experience a realistic simulation of a complex cyber-attack scenario. Navigate the ransomware journey and leave this session with an actionable toolkit to protect your company.

--Chris Bevil

Brain Date is during breaks at round tables!!
- ⁠⁠Student and early career
- ⁠Foundational security culture
- ⁠Securing generative AI applications
- ⁠Building robust identity and access management
- ⁠Responsible AI practices

--Conference Attendees

2:00 PM - 2:35 PM
35 minutes
Streamlining Compliance: Leveraging Open-Source Terraform AWS modules [Advanced]

--Anton Babenko

Establish a Secured and Resilient Architecture[Business Focused]

--Mona Patel

2:30 PM - 2:55 PM
25 minutes
Afternoon Tea break SPONSORS WANTED!!
Also Brain Date
Tracks
Hahn Auditorium
Lovelace
Boole
Glass rooms
3:00 PM - 3:25 PM
25 minutes
Build-time to runtime: Automated security testing in your pipeline [Advanced]

--Gunnar Grosch

RCPs + SCPs: The Missing Link in AWS Access Control (With Matt Carle, Sonrai) [Intermediate]

--Chris Kirschke

Securing Large Language Models: Best Practices for Prompt Engineering and Mitigating Prompt Injection Attacks [Beginner]

--Parth Patel and Ishneet Dua

Nothing planned yet. Let’s see what happens!

--Conference Attendees

3:30 PM - 3:55 PM
25 minutes
Simplify Security Events Log Analysis with Amazon Q [Advanced]

--Manas Satpathi & Sandeep Mohanty

Everything you didn't want to know about IAM [Beginner]

--Peter Sankauskus

Creating secure code with Amazon Q Developer [Beginner]

--Shivansh Singh

Nothing planned yet. Let’s see what happens!

--Conference Attendees

3:55 PM - 4:05 PM
10 minutes
Raffle & Closing Note - Hahn Auditorium

VOLUNTEERS

I want to volunteer!

Platinum Sponsors

AWS

AWS

Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

Intel AI

Intel AI

Intel offers comprehensive AI solutions through its Tiber™ AI Cloud and Tiber™ AI Studio, providing cutting-edge hardware and software platforms for scalable AI development and deployment. These services enable enterprises to efficiently build, optimize, and manage AI models across various industries, leveraging Intel’s advanced CPUs, GPUs, and AI accelerators. With a focus on reducing complexity and enhancing productivity, Intel empowers organizations to harness AI’s full potential.

Gold Sponsors

Commvault

Commvault

Commvault provides industry-leading data protection and management solutions, ensuring cyber resilience and business continuity. Their AI-driven technologies safeguard critical data across diverse environments with automated risk scanning. Trusted by over 100,000 organizations, Commvault delivers robust security for evolving data challenges.

Sonrai Security

Sonrai Security

Sonrai Security delivers comprehensive cloud security solutions, offering unparalleled visibility and risk mitigation for enterprises. Their innovative Cloud Permissions Firewall enables one-click least privilege enforcement without disrupting DevOps, ensuring sensitive data remains protected. Trusted by industry leaders, Sonrai empowers organizations to innovate securely across AWS, Azure, and Google Cloud platforms.

Silver Sponsors

NOVAworks

NOVAworks

NOVAworks is at the heart of our community’s workforce success, offering free, personalized career navigation and training services to individuals 17 and up in San Mateo and northern Santa Clara counties. We don’t just connect people with jobs—we connect them with opportunities to thrive. We fund internships that spark careers, advanced training that empowers workers to reimagine their futures, and innovative workforce solutions that fuel local businesses and communities.


COMMUNITY PARTNERS

AWS Bay AreaBay Area InfracodersPublic Cloud SecurityAdvanced AWSAWS East Bay Official EventsData Science on AWS

Venue

Computer History Museum

1401 N Shoreline Blvd,

Mountain View, CA 94043