
What Your SOC 2 Auditor Sees: 12 Failures (2025)

SOC 2 audits do not fail because your security is bad. They fail because the documentation does not match the controls, the controls do not match the systems, and auditors look at exactly the gap between what you wrote and what is actually happening. Here are the 12 specific findings that landed teams with control deficiencies in 2025 audits, ordered from most common to most expensive.

1. Offboarding tickets closed without verifying access removal

Your offboarding policy says HR closes a ticket within 24 hours of termination. The auditor pulls a sample of 10 terminated employees from the year and asks for evidence that access was actually removed within 24 hours. Three employees still had access to the production AWS console two weeks after their last day.

The control was not the policy. It was whether the closing of the ticket actually corresponded to access removal across every system. Most companies have 5+ systems where access lives (AWS, GitHub, Okta, production databases, monitoring tools, customer data platforms). Closing one ticket touches one system.

Pass it: have a single offboarding checklist that explicitly enumerates every system, has a checkbox per system with timestamp + initials, and gets attached to the offboarding ticket. Auditors love a checklist they can review.
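The "checkbox per system with timestamp" rule is easy to verify mechanically. A minimal sketch (the system names and ticket schema here are hypothetical stand-ins for your own inventory):

```python
from datetime import datetime, timedelta

# Hypothetical inventory of every system where access lives.
SYSTEMS = ["aws", "github", "okta", "prod_db", "monitoring"]

def verify_offboarding(ticket: dict, sla_hours: int = 24) -> list[str]:
    """Return the gaps an auditor would flag for one offboarding ticket.

    `ticket` is assumed to look like:
      {"terminated_at": datetime,
       "removals": {"aws": {"at": datetime, "by": "initials"}, ...}}
    """
    gaps = []
    deadline = ticket["terminated_at"] + timedelta(hours=sla_hours)
    for system in SYSTEMS:
        entry = ticket["removals"].get(system)
        if entry is None:
            gaps.append(f"{system}: no removal recorded")
        elif entry["at"] > deadline:
            gaps.append(f"{system}: removed {entry['at'] - deadline} past the SLA")
    return gaps
```

Run this against every ticket before closing it; a ticket only closes when the function returns an empty list, which is exactly the evidence the auditor's sample will test.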

2. Access reviews that everyone "completed" without actually changing anything

Your policy requires quarterly access reviews. The auditor pulls Q2 review evidence: signed forms saying "access reviewed and approved." Then they cross-reference against actual access changes in the period. Zero changes resulted from the "review."

This is the #1 finding in 2025 audits. Reviews that produce no changes look like rubber-stamping. Auditors expect to see some access removed every quarter as a result of review.

Pass it: design the access review to surface specific candidates (users who haven't logged in 90 days, users with admin who don't need it). Document each one as either "removed" or "kept, justification: X". A real review always produces some removals.
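Surfacing candidates is a filter, not a judgment call. A sketch of the two rules above, over an illustrative user export (field names are not from any real IdP API):

```python
from datetime import datetime, timedelta

def review_candidates(users, now, stale_days=90):
    """Surface the accounts a quarterly access review must decide on.

    `users` is a list of dicts: {"name", "last_login": datetime|None,
    "is_admin": bool, "needs_admin": bool} — an illustrative schema.
    """
    stale_cutoff = now - timedelta(days=stale_days)
    candidates = []
    for u in users:
        if u["last_login"] is None or u["last_login"] < stale_cutoff:
            candidates.append((u["name"], "no login in 90 days"))
        elif u["is_admin"] and not u["needs_admin"]:
            candidates.append((u["name"], "admin without documented need"))
    return candidates
```

Each returned candidate then gets a documented outcome: "removed" or "kept, justification: X". The candidate list plus outcomes is the evidence that the review was real.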

3. Production change tickets approved by the same person who made the change

Your change management policy says production changes require approval from someone other than the implementer. The auditor samples 30 production changes from the year. In four of them, the developer who opened the PR merged it without any independent review because "their teammate was on PTO and we needed to ship."

This is one of the most consistently flagged findings because, without branch protection rules, GitHub lets authors merge their own pull requests unreviewed by default.

Pass it: GitHub branch protection rules → require approval from someone other than the author. No exceptions, no break-glass without an audit log entry. Tell developers up front.
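If you manage branch protection as code, the payload for GitHub's REST endpoint (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`) looks roughly like this. Field names follow the public API, but verify against current GitHub documentation before relying on it:

```python
import json

# Sketch of a branch protection payload for GitHub's REST API.
# GitHub never counts the PR author's own review, so requiring one
# approval enforces "someone other than the author".
protection = {
    "required_pull_request_reviews": {
        "required_approving_review_count": 1,
        "dismiss_stale_reviews": True,   # new pushes invalidate old approvals
    },
    "enforce_admins": True,              # admins follow the same rules
    "required_status_checks": None,      # nullable but required by the endpoint
    "restrictions": None,
}
print(json.dumps(protection, indent=2))
```

Keeping this payload in version control also gives you change history on the control itself, which auditors like.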

4. Encryption at rest claimed but configurations show defaults

Your security policy says all customer data is encrypted at rest using AES-256. Auditor checks your S3 buckets, RDS instances, EBS volumes, and DynamoDB tables. Three S3 buckets are unencrypted (they were created in 2020 before you turned on default encryption). One RDS instance has encryption disabled (someone disabled it during a migration in 2023 and forgot).

The policy is true for new resources but not for the entire estate. Auditors check the entire estate.

Pass it: enable account-level S3 default encryption. Run AWS Config rules s3-bucket-server-side-encryption-enabled, rds-storage-encrypted, ebs-encrypted. Every flagged resource gets encrypted or scheduled for replacement. Document any exceptions with a risk acceptance.
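The sweep itself is trivial once you have an inventory export. A sketch over a hypothetical inventory shape (not a real AWS API response; build it from `get-bucket-encryption` / `describe-db-instances` output or AWS Config):

```python
def encryption_gaps(inventory, accepted_risks):
    """Resources an auditor would flag: unencrypted and not risk-accepted.

    `inventory` is an illustrative export:
      [{"id": str, "type": "s3"|"rds"|"ebs"|"dynamodb", "encrypted": bool}]
    `accepted_risks` is the set of resource ids with a signed risk acceptance.
    """
    return sorted(
        r["id"] for r in inventory
        if not r["encrypted"] and r["id"] not in accepted_risks
    )
```

Anything the function returns is either a remediation ticket or a missing risk-acceptance document; there is no third state.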

5. MFA "required for all users" but exceptions exist

Policy: MFA required on all production access. Auditor pulls IAM users with their MFA status. Three IAM users still have console access without MFA. They are "service accounts" used for occasional manual operations.

Auditors do not accept "service account" as a reason to skip MFA. If a human can log in, MFA is required.

Pass it: convert service accounts to IAM roles assumed with short-lived credentials. If an IAM user must remain, attach an IAM policy that denies all actions when MFA is not present. Both routes pass audit.
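The deny-without-MFA route is a standard IAM policy document using `BoolIfExists` on the `aws:MultiFactorAuthPresent` condition key; expressed here as a Python dict for clarity. A production version usually adds `NotAction` carve-outs so the user can still enroll an MFA device:

```python
import json

# "Deny everything when MFA is absent." BoolIfExists makes the deny also
# apply when the aws:MultiFactorAuthPresent key is missing entirely
# (e.g. requests signed with plain long-lived access keys).
deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWithoutMFA",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}
print(json.dumps(deny_without_mfa, indent=2))
```

Attach it to the user or group; the auditor can then verify the control by reading one policy instead of trusting a process.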

6. Backup restoration "tested annually" but the test was a single restore in January

Your backup policy says backups are tested annually. You did one restore in January. Auditor wants to see test results across the year, with documented restore times, integrity checks, and post-restore verification. You have one screenshot.

Pass it: schedule quarterly restoration tests. Document the date, what was restored, the elapsed time, who performed it, and whether the restored data passed integrity checks. Keep a log. Auditors love logs.
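A restore-test log is only useful if every entry is complete and every quarter is covered. A sketch of that check (field names are illustrative):

```python
from datetime import date

# Fields auditors ask for on each restore test (illustrative names).
REQUIRED_FIELDS = {"date", "restored", "elapsed_minutes", "performed_by", "integrity_ok"}

def restore_test_gaps(log: list[dict], year: int) -> list[str]:
    """Return gaps: incomplete entries, plus quarters with no test at all."""
    gaps = []
    quarters_seen = set()
    for entry in log:
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            gaps.append(f"{entry.get('date')}: entry missing {sorted(missing)}")
            continue
        d = entry["date"]
        if d.year == year:
            quarters_seen.add((d.month - 1) // 3 + 1)
    for q in range(1, 5):
        if q not in quarters_seen:
            gaps.append(f"Q{q} {year}: no documented restore test")
    return gaps
```

Run it at quarter-end; an empty result at year-end is exactly the evidence the "one screenshot" team was missing.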

7. Vendor risk assessments completed only at onboarding

Your vendor policy requires risk assessment of each subprocessor. You assessed each at onboarding. Some have been your vendor for 5 years. Their attestations have changed. Their breach history has changed. You have not refreshed the assessment.

Pass it: annual vendor reassessment cycle. Pull current SOC 2 Type II reports from each subprocessor every year. Document any breach disclosures or material changes. Track in a vendor inventory spreadsheet at minimum.
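Even a spreadsheet-level vendor inventory can be checked mechanically for staleness. A minimal sketch (the vendor record shape is a hypothetical stand-in for your export):

```python
from datetime import date, timedelta

def stale_vendor_assessments(vendors, today, max_age_days=365):
    """Vendors whose last risk assessment is older than the annual cycle.

    `vendors`: [{"name": str, "last_assessed": date}] — illustrative schema.
    """
    cutoff = today - timedelta(days=max_age_days)
    return [v["name"] for v in vendors if v["last_assessed"] < cutoff]
```

Every name it returns needs a fresh SOC 2 Type II report pulled and the assessment date updated; the five-year-old onboarding assessment is the finding.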

8. Incident response tested only on tabletop exercises that auditors recognize as scripted

Your IR policy requires annual testing. You ran a tabletop exercise where everyone read scripts. The auditor recognizes the scenario from a SANS template and asks: "have you tested IR on a real incident or a surprise drill?"

Tabletops are fine, but auditors increasingly want evidence of either: (a) a real incident handled per the IR plan, or (b) a surprise drill where participants did not know the scenario in advance.

Pass it: at least one surprise drill per year. Document who participated, what worked, what did not, and what the post-mortem recommended. Use a real IR plan template, not a generic one.

9. Vulnerability scanning runs but findings are never actually closed

You run vulnerability scans monthly. Auditor pulls the November report. There are 23 high-severity findings open since March. Some are accepted risk, some are "in progress," some are "we will get to it." None have remediation deadlines.

Auditors do not require you to fix every finding. They require a documented remediation timeline, a risk-acceptance process, and evidence that high-severity findings are not aging indefinitely.

Pass it: every high finding gets either a remediation date in the next 30 days or an explicit risk-accepted status with sign-off from someone with authority. SLA for criticals is 7 days. Track the aging in a dashboard auditors can review.
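The aging rule above reduces to a small function over your scan export. A sketch with an illustrative finding schema (not a real scanner format):

```python
from datetime import date

# Remediation SLAs in days, per the policy described above.
SLA_DAYS = {"critical": 7, "high": 30}

def overdue_findings(findings, today):
    """Findings past their SLA with no explicit risk acceptance.

    `findings`: [{"id": str, "severity": str, "opened": date,
    "risk_accepted": bool}] — illustrative schema.
    """
    overdue = []
    for f in findings:
        sla = SLA_DAYS.get(f["severity"])
        if sla is None or f["risk_accepted"]:
            continue  # lower severities and accepted risks are out of scope
        if (today - f["opened"]).days > sla:
            overdue.append(f["id"])
    return overdue
```

The output is your dashboard's red column: each id needs either a remediation date or a signed risk acceptance before the auditor pulls the November report.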

10. Code review evidence missing for emergency changes

You require code review on all production changes. There were 3 emergency hotfixes in the year that bypassed normal review. Auditor finds them in CI logs. You have no documented justification for the bypass and no post-deploy review.

Pass it: every emergency change gets a post-deploy review within 48 hours. Document the emergency reason, who approved the bypass, and the post-deploy review. Track these in a "break-glass" log auditors can sample.
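A break-glass log is auditable when each entry can be checked against the 48-hour rule automatically. A sketch (entry fields are illustrative):

```python
from datetime import datetime, timedelta

def breakglass_gaps(log, now, review_hours=48):
    """Emergency changes missing the required follow-up evidence.

    `log`: [{"change_id": str, "deployed_at": datetime, "reason": str|None,
    "bypass_approved_by": str|None, "reviewed_at": datetime|None}]
    — an illustrative break-glass log schema.
    """
    gaps = []
    for e in log:
        if not e["reason"] or not e["bypass_approved_by"]:
            gaps.append((e["change_id"], "missing justification or approver"))
        deadline = e["deployed_at"] + timedelta(hours=review_hours)
        if e["reviewed_at"] is None and now > deadline:
            gaps.append((e["change_id"], "post-deploy review overdue"))
        elif e["reviewed_at"] is not None and e["reviewed_at"] > deadline:
            gaps.append((e["change_id"], "post-deploy review late"))
    return gaps
```

An empty result for every hotfix in the year is what turns "3 bypasses found in CI logs" into a non-finding.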

11. Logging "centralized" but production database queries do not appear in the log aggregator

You ship application logs to Datadog. Auditor asks for evidence that database query activity is logged. RDS query logs are off by default. You assumed application logs covered it. They do not for production data access.

Pass it: enable RDS audit logs (Postgres pgaudit, MySQL general log, or RDS Database Activity Streams for Aurora). Ship to centralized logging with at least 1-year retention. Define log review cadence and document who reviews.
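For RDS Postgres, pgaudit is switched on through the DB parameter group. A minimal set of settings, expressed as name/value pairs you would apply via the console or `aws rds modify-db-parameter-group` (parameter names come from pgaudit; confirm values against your Postgres version's documentation):

```python
# Illustrative pgaudit settings for an RDS Postgres parameter group.
pgaudit_parameters = {
    "shared_preload_libraries": "pgaudit",  # requires a reboot to take effect
    "pgaudit.log": "ddl,write,role",        # statement classes to audit
    "pgaudit.log_parameter": "on",          # include bound parameters in entries
}
for name, value in pgaudit_parameters.items():
    print(f"{name} = {value}")
```

Auditing `read` as well captures SELECTs against customer data but is substantially noisier; decide based on your data-access monitoring requirements.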

12. Customer data deleted "on request" but the data is still in backups

Your privacy policy says customer data is deleted on request within 30 days. Auditor asks: do your backups still contain that data after deletion? Yes, backups go back 90 days. The deleted data persists in backup storage for 90 days after the deletion request.

Your policy promised something your retention does not deliver. This becomes a finding under the SOC 2 Privacy Trust Services Criteria.

Pass it: either reduce backup retention to match deletion SLAs, OR rewrite the privacy policy to say "deleted from production within 30 days, may persist in encrypted backups for up to 90 days, after which it is permanently deleted." Either is acceptable; the inconsistency is not.
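The arithmetic behind the inconsistency is worth making explicit: a backup taken the moment before deletion lives for the full retention window, so deleted data can exist somewhere for the deletion SLA plus the retention period. A tiny sketch:

```python
def worst_case_persistence_days(deletion_sla_days: int,
                                backup_retention_days: int) -> int:
    """Longest time data can survive anywhere after a deletion request:
    a backup taken just before deletion persists for the full window."""
    return deletion_sla_days + backup_retention_days

def policy_consistent(promised_max_days: int, deletion_sla_days: int,
                      backup_retention_days: int) -> bool:
    """True if what the privacy policy promises covers operational reality."""
    return promised_max_days >= worst_case_persistence_days(
        deletion_sla_days, backup_retention_days)
```

With a 30-day deletion SLA and 90-day backup retention, a policy promising complete deletion in 30 days fails this check; one promising deletion from backups within 120 days passes.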

The pattern across all 12

Every one of these findings has the same root cause: the documentation says one thing, the systems show another. SOC 2 auditors are paid to find that gap. They are good at it.

The fix in every case is to either update the documentation to match reality or update the systems to match the documentation. Both are valid. Wishful documentation is the failure mode.

The pre-audit dry run that catches all 12

Two months before your real audit, run an internal dry run with these specific tests:

  1. Pull 10 terminated employees from the past year. Verify each had access removed within 24 hours of termination across every system.
  2. Pull last quarter's access reviews. Count how many access changes resulted. If zero, redo.
  3. Audit GitHub branch protection. Confirm "require approval from non-author" is on for all production-bound branches.
  4. Run AWS Config and check encryption-at-rest for every S3 bucket, RDS instance, EBS volume, DynamoDB table.
  5. List all IAM users with console access. Confirm 100 percent have MFA.
  6. Pull backup restoration test history. Confirm at least 4 tests in the past year, each documented.
  7. Check vendor risk assessments. Refresh any older than 12 months.
  8. Schedule a surprise IR drill. Document the result.
  9. Open every vuln scan from the past year. Verify no high finding is older than 30 days without explicit risk acceptance.
  10. Pull all PRs merged to main. Identify any that bypassed review. Document each.
  11. Verify production database query logs are flowing to centralized logging.
  12. Read your privacy policy. Verify backup retention matches the deletion SLAs you promised customers.

Find the gaps now. The internal dry run is the cheapest part of audit prep.

Document and share audit evidence securely

SOC 2 audits require sharing sensitive evidence (vulnerability reports, IR plans, vendor SOC 2 reports) with auditors. Share through zero-knowledge encryption with auto-expiring links.


The bottom line

SOC 2 audits fail because the documentation does not match reality. Twelve specific patterns produce 80% of findings. Each one has a fix that takes hours, not weeks. The expensive part is rebuilding controls during the audit instead of doing the dry run that catches them in advance.

Related reading: SOC 2 Compliance Checklist for Startups, SOC 2 Secret Management Requirements, Cybersecurity Audit Checklist, SOC as a Service Guide, and Incident Response Plan Template.