Leaked AWS Credentials? Your 60-Minute Incident Response Playbook
An AWS access key just hit a public GitHub repo. The clock started the moment the commit was pushed. Research from Detectify and Truffle Security shows leaked AWS keys are exploited in under 60 seconds, with automated Bitcoin miners and data exfil running within minutes. What you do in the next 60 minutes decides whether this is a Tuesday or a data breach headline.
This is the playbook. Minute by minute.
The Clock Starts Before You Notice
Public GitHub scrapers are industrialized. Legitimate tools like GitGuardian and Trufflehog watch the event firehose. Less friendly actors do the same. The median time to exploitation of a leaked AWS key in 2024 research was between 60 seconds and 5 minutes, depending on the key's permissions.
Assume the key is already being used. Design your response for that assumption, not the hopeful one.
Minutes 0–5: Deactivate, Do Not Delete
Your instinct will be to delete the key. Resist it. Deactivation cuts off the attacker just as completely, but it is reversible: if the key turns out to back a legitimate production workload, you can restore service while you rotate properly. Deletion is permanent. It also strips the key's IAM metadata (owning user, creation date), and you cannot reactivate if you made a mistake.
Deactivate first:
aws iam update-access-key \
--access-key-id AKIAIOSFODNN7EXAMPLE \
--status Inactive \
--user-name deploy-bot
If you do not know which user the key belongs to:
aws iam list-access-keys \
--query 'AccessKeyMetadata[?AccessKeyId==`AKIAIOSFODNN7EXAMPLE`]'
For cross-account scenarios, check every account you can reach.
Also check for sibling keys. If the leaked key was AKIA...ABC, the same IAM user might have a second active key. Deactivate both:
aws iam list-access-keys --user-name deploy-bot
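The two steps above can be combined into one sweep that deactivates every active key on the compromised user, siblings included. A minimal sketch (the `deactivate_all_keys` function name is ours, not an AWS CLI command):

```shell
# Hypothetical helper: deactivate every Active access key on a user.
# Catches sibling keys you might otherwise miss.
deactivate_all_keys() {
  USER=$1
  for KEY in $(aws iam list-access-keys --user-name "$USER" \
      --query 'AccessKeyMetadata[?Status==`Active`].AccessKeyId' \
      --output text); do
    echo "Deactivating $KEY on $USER"
    aws iam update-access-key --user-name "$USER" \
        --access-key-id "$KEY" --status Inactive
  done
}
# Usage: deactivate_all_keys deploy-bot
```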
Minutes 5–15: Forensic Query With CloudTrail
Now find out what happened while the key was live. CloudTrail is your friend. (The `date -d` syntax below is GNU; on macOS, use `date -u -v-48H +%Y-%m-%dT%H:%M:%S` instead.)
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAIOSFODNN7EXAMPLE \
--start-time "$(date -u -d '48 hours ago' +%Y-%m-%dT%H:%M:%S)" \
--max-results 500 \
--query 'Events[*].[EventTime,EventName,AwsRegion,SourceIPAddress]' \
--output table
Red flags to look for:
- Unusual regions — if your workload runs in `us-east-1` only and you see activity in `ap-south-1`, that is an attacker.
- Unexpected services — if the key is scoped to S3 and you see `ec2:RunInstances`, that is a crypto miner.
- IAM modifications — `CreateUser`, `AttachUserPolicy`, and `CreateAccessKey` all mean the attacker is establishing persistence.
- Source IP addresses you do not recognize — especially residential IPs, Tor exit nodes, or known VPN providers.
Save the output. This becomes evidence for your RCA, your compliance report, and possibly law enforcement.
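To make those red flags stand out, a frequency summary of regions and source IPs helps. A minimal sketch (the `suspect_activity_summary` name is ours; assumes GNU `date`):

```shell
# Hypothetical helper: count events per (region, source IP) pair for a key,
# most frequent first. Anomalous regions and IPs surface at a glance.
suspect_activity_summary() {
  KEY_ID=$1
  aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=AccessKeyId,AttributeValue="$KEY_ID" \
    --start-time "$(date -u -d '48 hours ago' +%Y-%m-%dT%H:%M:%S)" \
    --query 'Events[*].[AwsRegion,SourceIPAddress]' --output text \
    | sort | uniq -c | sort -rn
}
# Usage: suspect_activity_summary AKIAIOSFODNN7EXAMPLE
```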
Minutes 15–30: Quarantine and Audit
If the attacker created resources, you need to find them before rotating. Attackers commonly create:
- Large EC2 instances for mining (look for `p3.*`, `g4.*`, `c5.24xlarge`)
- New IAM users with administrative policies attached
- Access keys on existing users
- Lambda functions running periodic exfiltration
- S3 buckets with public read policies for stolen data staging
Quick audit commands:
# Recent EC2 launches (set the date to the start of your incident window)
aws ec2 describe-instances \
--filters Name=instance-state-name,Values=running \
--query 'Reservations[*].Instances[?LaunchTime >= `2026-04-19`].[InstanceId,InstanceType,LaunchTime]'
# New IAM users, last 7 days (adjust the date accordingly)
aws iam list-users \
--query 'Users[?CreateDate >= `2026-04-13`].[UserName,CreateDate]'
# All currently active access keys
aws iam get-account-authorization-details \
--filter User \
--query 'UserDetailList[*].[UserName,AccessKeys]'
Anything the attacker created: terminate it. Document the resource ID before termination.
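The document-then-destroy step can be wrapped so evidence capture is never skipped in the heat of the moment. A sketch for EC2 (the `quarantine_instance` helper and the example instance ID are ours):

```shell
# Hypothetical helper: snapshot an attacker-created instance's metadata
# to a local evidence file, then terminate it.
quarantine_instance() {
  ID=$1
  aws ec2 describe-instances --instance-ids "$ID" \
    > "evidence-${ID}.json"                 # document before destroying
  aws ec2 terminate-instances --instance-ids "$ID"
}
# Usage: quarantine_instance i-0abc123def456789
```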
Minutes 30–45: Secure Evidence Handoff
At this point you need to share findings with your security team, compliance officer, and potentially an external DFIR firm. The data you are sharing — CloudTrail exports, resource inventories, IAM change logs — is itself sensitive. It reveals your infrastructure layout and often contains the attacker's IP addresses (which you may need to treat as evidence for legal proceedings).
Do not email the JSON dumps. Do not drop them in the general Slack security channel. Use a zero-knowledge paste with a short TTL and a specific recipient list. That way:
- Evidence stays encrypted at rest and in transit
- Access is audited (you know exactly who opened it)
- The data self-destructs when the incident closes — no long-term copies scattered across mailboxes
Share Incident Evidence Without Leaking Again
CloudTrail dumps, forensic findings, rotated credentials — all sensitive, all needed urgently. SecureBin gives you encrypted, expiring URLs with audit logs so your IR workflow does not create its own incident.
Minutes 45–60: Rotate Downstream and Notify
Rotate Every Dependent Service
The key that leaked probably was not the only copy. It lives in:
- EC2 instance profiles or user data scripts (if someone hardcoded it)
- Other CI/CD pipelines
- Developer machines (check `~/.aws/credentials`)
- Third-party SaaS integrations using the same principal
- Kubernetes secrets — values are base64-encoded, so decode before searching: `kubectl get secrets -A -o json | jq -r '.items[].data[]? | @base64d' | grep AKIA`
- Terraform state files (always sensitive; see our guide on Kubernetes secrets management)
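For the filesystem locations in that list, a recursive sweep for key-shaped strings finds stray copies quickly. A minimal sketch (the `find_aws_keys` helper name is ours):

```shell
# Hypothetical helper: sweep a directory tree for AWS access key IDs.
# Access key IDs are "AKIA" followed by 16 uppercase alphanumerics.
find_aws_keys() {
  DIR=$1
  grep -rnE 'AKIA[0-9A-Z]{16}' "$DIR" 2>/dev/null
}
# Usage: find_aws_keys ~/.aws
#        find_aws_keys ~/projects
```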
Create the replacement credential. Then use a secure channel to distribute it to the services and humans who need it. Email is wrong. Slack is wrong. Use an encrypted paste with a TTL of hours, not days.
Notification Tree
Notify in this order:
- Immediate incident channel — your own IR team, status update only, no sensitive details.
- Security lead / CISO — full details via secure channel.
- Legal and compliance — if customer data was accessed, within the first hour.
- AWS Support — open a "compromised credentials" case. AWS can sometimes waive fraudulent charges if you catch it fast.
- External stakeholders — only after legal review, following your breach notification policy.
Our incident response plan template has the full communication tree and severity framework.
Common CloudTrail Queries (Save These)
Find IAM actions from a specific IP (replace X.X.X.X with the attacker's address)
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventSource,AttributeValue=iam.amazonaws.com \
--query 'Events[?SourceIPAddress==`X.X.X.X`]'
Find access keys created in the last 24 hours
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=CreateAccessKey \
--start-time "$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%S)"
Find all S3 bucket policy changes
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=PutBucketPolicy
Find assumed role activity from the compromised principal
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRole \
--start-time "$(date -u -d '48 hours ago' +%Y-%m-%dT%H:%M:%S)"
Attackers often use the leaked user key to assume a higher-privileged role. Track the chain.
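One way to scope that chain is to search role trust policies for the compromised principal. A rough sketch (the `roles_trusting` helper is ours, and the JMESPath `to_string`/`contains` trick is an approximation; verify matches manually):

```shell
# Hypothetical helper: list roles whose trust policy mentions a principal.
# Serializes each AssumeRolePolicyDocument and does a substring match,
# so review the results by hand before acting on them.
roles_trusting() {
  PRINCIPAL=$1   # e.g. arn:aws:iam::111122223333:user/deploy-bot
  aws iam list-roles \
    --query "Roles[?contains(to_string(AssumeRolePolicyDocument), '$PRINCIPAL')].RoleName" \
    --output text
}
```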
The Hard Part: Finding Out About the Leak
Most leaks are found hours or days after they happen. The fastest detection methods:
- GitHub Secret Scanning — enabled by default on public repos, sends alerts to AWS, which auto-quarantines the key. Free.
- AWS IAM Access Analyzer — flags overly permissive policies and unused credentials.
- GitGuardian / Trufflehog Enterprise — monitors your repos plus the public GitHub firehose for your patterns.
- CloudTrail + GuardDuty — alerts on anomalous API patterns (unusual region, unusual service, impossible travel).
If you are reading this playbook and do not have detection in place, set those up today. The playbook is for after. Detection is what determines whether you are reading the playbook at minute 5 or minute 500.
Post-Incident: The 24-Hour Checklist
- Confirm every resource created by the attacker is terminated.
- Confirm every access key the attacker created is deleted.
- Review every policy that was modified during the incident window.
- Force MFA on all human users (if not already required).
- Review every role the leaked principal could assume — and what those roles could do in turn.
- Audit CloudTrail to verify the attacker did not disable it (`StopLogging`, `DeleteTrail`).
- File your RCA with timeline, root cause, and concrete prevention steps.
- If customer data was accessed, start breach notification clocks (GDPR: 72 hours; varies by jurisdiction).
Common Mistakes
1. Deleting the key before querying CloudTrail. You lose the key's IAM metadata (owning user, creation date) and any chance to reverse course. Deactivate first, investigate, then delete once the investigation closes.
2. Missing sibling keys. Same user might have two active keys. Check list-access-keys.
3. Ignoring assumed role chains. The leaked key may have had sts:AssumeRole permissions. Audit every role it could assume.
4. Rotating downstream via insecure channels. Pasting the new key into Slack during rotation creates the next incident.
5. Forgetting CloudTrail itself. Sophisticated attackers will disable logging. Check CloudTrail status: aws cloudtrail get-trail-status --name my-trail.
6. Not reviewing IAM Access Analyzer after. The leak revealed what permissions the key had — most of which it probably did not need. Right-size the replacement.
Prevention: Do Not Have a Next Incident
- Use OIDC federation for CI/CD. No long-lived keys to leak. See our CI/CD secret leak prevention guide.
- Rotate credentials on a schedule even if they have not leaked. 90-day maximum for long-lived keys.
- Use IAM roles over users wherever possible. Roles have built-in credential lifetime limits.
- Enable MFA on root and all human users. Non-negotiable.
- Scope policies tightly. The leaked key should only have been able to do what it needed — nothing more.
- Enable secret scanning and GuardDuty. Detection speed determines incident severity.
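A rotation schedule is only enforceable if you can see key ages. The IAM credential report exposes them as CSV; a sketch (the `key_age_report` name is ours; the column numbers follow the documented report layout):

```shell
# Hypothetical helper: show each user's key status and last-rotated dates.
# Credential report CSV columns: 1=user, 9=access_key_1_active,
# 10=access_key_1_last_rotated, 14=access_key_2_active,
# 15=access_key_2_last_rotated.
key_age_report() {
  aws iam generate-credential-report >/dev/null   # may take a few seconds
  aws iam get-credential-report --query Content --output text \
    | base64 -d \
    | cut -d, -f1,9,10,14,15
}
```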
Frequently Asked Questions
How fast do attackers actually exploit leaked AWS keys?
Research from Truffle Security and independent researchers has demonstrated exploitation in under 60 seconds for keys pushed to public repos. Automated scanners watch the GitHub event firehose continuously.
Will AWS reimburse fraudulent charges from a compromised key?
Sometimes. Open a support case immediately when you detect the leak. AWS evaluates on a case-by-case basis and is more likely to waive charges when you demonstrate you responded quickly (minutes, not days) and had reasonable security hygiene. Long-tail fraud where you did not notice for weeks is harder to argue.
What if the leak is in git history but not the current HEAD?
The key is still live for anyone who cloned the repo. Deactivate the key regardless of where in history it appeared, then rewrite history with BFG or git filter-repo and force-push. Contact GitHub Support to purge cached views.
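The history rewrite can be scripted once the key is deactivated. A sketch using `git filter-repo` (the `scrub_key_from_history` wrapper is ours; this is destructive, so coordinate with the team first):

```shell
# Hypothetical helper: replace the key literal in every commit, then
# force-push. Requires git-filter-repo and a fresh clone; all commit
# hashes downstream of the rewrite will change.
scrub_key_from_history() {
  KEY=$1
  printf '%s==>REMOVED-LEAKED-KEY\n' "$KEY" > /tmp/replacements.txt
  git filter-repo --replace-text /tmp/replacements.txt
  git push --force --all && git push --force --tags
}
# Usage: scrub_key_from_history AKIAIOSFODNN7EXAMPLE
```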
What about leaked keys in Docker image layers?
Same rules. Deactivate, then rebuild the image from scratch with the secret moved to runtime injection. Published public images are essentially broadcast — assume the key is compromised the moment the image hit any public registry.
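To confirm whether a published image still carries the key, you can inspect both the build metadata and the layer contents. A rough sketch (the `image_contains_key` helper is ours; the `layer.tar` path assumes the classic `docker save` layout, so adapt for OCI-format archives):

```shell
# Hypothetical helper: look for AWS key IDs in an image's history
# (build args, ENV lines) and inside its exported layers.
image_contains_key() {
  IMG=$1
  docker history --no-trunc "$IMG" | grep -E 'AKIA[0-9A-Z]{16}' && return 0
  docker save "$IMG" \
    | tar -xO -f - --wildcards '*/layer.tar' 2>/dev/null \
    | strings | grep -E 'AKIA[0-9A-Z]{16}'
}
# Usage: image_contains_key myorg/api:latest
```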
Should I contact law enforcement?
If significant damage occurred (large fraudulent charges, customer data exfiltration), yes. In the US, file with the FBI at IC3.gov. Coordinate with legal counsel first — some disclosures can affect breach notification timelines.
How do I handle keys leaked by a third-party vendor?
Treat it like your own leak. Deactivate immediately on your side, then notify the vendor through their security contact. Do not wait for them to deactivate — your blast radius, your response.
Key Takeaways
- Assume exploitation within 60 seconds of a public leak. Design your response for that.
- Deactivate first, investigate with CloudTrail, then delete.
- Audit for attacker-created resources before rotating.
- Distribute rotated credentials through zero-knowledge channels, not Slack or email.
- The first 60 minutes determines the scale of the incident.
- Prevention (OIDC, MFA, scoped IAM, scanning) is cheaper than response.
Related reading: Prevent Secrets Leaks in CI/CD Logs, Share Production Logs Securely, Incident Response Plan Template, AWS Security Checklist for Production, API Key Rotation Best Practices.
Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.