
Fix AWS EFS Permission Denied, Mount Timeout, and Symlink Issues

Quick Fix

EFS mount hanging or permission denied? Check these three things first:

# 1. Verify NFS port 2049 is open in the mount target's security group
aws ec2 describe-security-groups --group-ids sg-xxxxxxxx \
  --query 'SecurityGroups[].IpPermissions[?ToPort==`2049`]'

# 2. Verify mount target exists in your AZ
aws efs describe-mount-targets --file-system-id fs-xxxxxxxx

# 3. Test NFS connectivity
nc -zv fs-xxxxxxxx.efs.us-east-1.amazonaws.com 2049

You ran sudo mount -t nfs4 and it hung indefinitely. Or it returned mount.nfs4: access denied by server while mounting. EFS mount failures are almost always one of three issues: security group misconfiguration, mount target placement, or IAM authorization. This guide covers each failure mode with the exact error you will see and the fix, plus EFS-specific gotchas with symlinks, performance modes, and throughput configuration.

Error 1: Mount Hangs / Timeout

The Error

$ sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs

# Command hangs indefinitely, no output
# After timeout (if using mount helper):
mount.nfs4: Connection timed out

Why It Happens

The EC2 instance cannot reach the EFS mount target on port 2049. This is a network-level failure, not an authentication failure. The three most common causes:

Cause A: Security Group Missing NFS Rule

The mount target's security group must allow inbound TCP on port 2049 from the EC2 instance.

# Check the mount target's security group
aws efs describe-mount-targets --file-system-id fs-0123456789abcdef0 \
  --query 'MountTargets[].{AZ:AvailabilityZoneName,SubnetId:SubnetId,SecurityGroups:[]}'

# Get the security group IDs for the mount target
aws efs describe-mount-target-security-groups \
  --mount-target-id fsmt-0123456789abcdef0

# Check if port 2049 is allowed
aws ec2 describe-security-groups --group-ids sg-efs-group-id \
  --query 'SecurityGroups[].IpPermissions'

The fix: Add an inbound rule to the mount target's security group:

# Allow NFS from the EC2 instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-efs-mount-target-sg \
  --protocol tcp \
  --port 2049 \
  --source-group sg-ec2-instance-sg

# Or from a CIDR range (less secure but simpler)
aws ec2 authorize-security-group-ingress \
  --group-id sg-efs-mount-target-sg \
  --protocol tcp \
  --port 2049 \
  --cidr 10.0.0.0/16

Cause B: No Mount Target in the Availability Zone

EFS mount targets are AZ-specific: the EFS DNS name resolves to the mount target IP in the same AZ as the client. If your EC2 instance is in us-east-1a but the only mount target is in us-east-1b, DNS resolution fails and the mount hangs or errors out (and mounting cross-AZ by IP adds latency and inter-AZ data transfer charges).

# List all mount targets and their AZs
aws efs describe-mount-targets --file-system-id fs-0123456789abcdef0 \
  --query 'MountTargets[].{AZ:AvailabilityZoneName,IP:IpAddress,State:LifeCycleState}'

# Check which AZ your EC2 instance is in (IMDSv2)
TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/availability-zone

The fix: Create a mount target in the same AZ:

# Find a subnet in the correct AZ
aws ec2 describe-subnets --filters "Name=availability-zone,Values=us-east-1a" \
  --query 'Subnets[].{SubnetId:SubnetId,VpcId:VpcId,AZ:AvailabilityZone}'

# Create the mount target
aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --security-groups sg-efs-mount-target-sg

Mount targets take 1-2 minutes to become available. Check state with describe-mount-targets until LifeCycleState is available.
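As a sketch, that wait can be scripted by polling describe-mount-targets until the state flips. The mount target ID below is a placeholder for the one returned by create-mount-target:

```shell
# Poll the new mount target until LifeCycleState is "available"
# (typically 1-2 minutes). The fsmt- ID is a placeholder.
while true; do
  state=$(aws efs describe-mount-targets \
    --mount-target-id fsmt-0123456789abcdef0 \
    --query 'MountTargets[0].LifeCycleState' --output text)
  [ "$state" = "available" ] && break
  echo "mount target state: $state"
  sleep 10
done
```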

Cause C: VPC DNS Resolution Disabled

The EFS DNS name (fs-xxx.efs.region.amazonaws.com) resolves to the mount target IP in your AZ. If VPC DNS resolution is disabled, the name does not resolve:

# Test DNS resolution
nslookup fs-0123456789abcdef0.efs.us-east-1.amazonaws.com

# Check VPC DNS settings
aws ec2 describe-vpc-attribute --vpc-id vpc-xxxxxxxx \
  --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-xxxxxxxx \
  --attribute enableDnsHostnames

Both enableDnsSupport and enableDnsHostnames must be true. If DNS is disabled, use the mount target IP directly:

sudo mount -t nfs4 -o nfsvers=4.1 10.0.1.23:/ /mnt/efs

Error 2: Permission Denied by Server

The Error

mount.nfs4: access denied by server while mounting
  fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/

Why It Happens

NFS connectivity works (no timeout), but the server rejected the mount request. This is an authorization-level failure, not a network failure.

Cause A: EFS File System Policy Denying Access

EFS supports resource-based policies (like S3 bucket policies). A restrictive policy can deny mount requests:

# Check the file system policy
aws efs describe-file-system-policy --file-system-id fs-0123456789abcdef0

If the policy requires IAM authorization ("Condition": {"Bool": {"elasticfilesystem:AccessedViaMountTarget": "true"}} combined with IAM conditions), you must use the EFS mount helper with IAM authentication.
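For reference, one common shape for such a policy is a Deny on non-TLS clients, which effectively forces the EFS mount helper. This is a sketch with placeholder account and file system IDs, not a policy to apply as-is:

```shell
# Sketch: a file system policy that denies any client not using TLS.
# Applying a policy like this breaks plain nfs4 mounts immediately.
aws efs put-file-system-policy \
  --file-system-id fs-0123456789abcdef0 \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "DenyNonTLSAccess",
      "Effect": "Deny",
      "Principal": {"AWS": "*"},
      "Action": "*",
      "Resource": "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0123456789abcdef0",
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }]
  }'
```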

Cause B: IAM Authorization Required

If the file system policy enforces IAM auth, standard NFS mount fails. You must use the EFS mount helper:

# Install EFS utilities (packaged in the Amazon Linux repos)
sudo yum install -y amazon-efs-utils        # Amazon Linux 2 / 2023
# On Debian/Ubuntu/RHEL, build from https://github.com/aws/efs-utils

# Mount with IAM authentication
sudo mount -t efs -o tls,iam fs-0123456789abcdef0:/ /mnt/efs

The EC2 instance's IAM role must have the following permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:ClientRootAccess"
      ],
      "Resource": "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0123456789abcdef0"
    }
  ]
}

ClientRootAccess is only needed if the mounting user needs root-level access. Omit it for least-privilege.
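To attach that policy to the instance's role, one option is an inline policy. The role and policy names below are placeholders:

```shell
# Save the policy JSON above as efs-client-policy.json, then attach it
# as an inline policy on the EC2 instance's IAM role.
aws iam put-role-policy \
  --role-name my-ec2-efs-role \
  --policy-name efs-client-access \
  --policy-document file://efs-client-policy.json
```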

Cause C: POSIX Permissions on the EFS Root

By default, the EFS root directory (/) is owned by root:root with mode 755. If you are mounting as a non-root user or your application runs as a non-root user, it cannot write to the root:

# Mount as root first and fix permissions
sudo mount -t efs fs-0123456789abcdef0:/ /mnt/efs

# Option 1: Change ownership
sudo chown 1000:1000 /mnt/efs

# Option 2: Create a subdirectory with correct permissions
sudo mkdir -p /mnt/efs/data
sudo chown 1000:1000 /mnt/efs/data

# Option 3: Use EFS Access Points (recommended)
aws efs create-access-point \
  --file-system-id fs-0123456789abcdef0 \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory 'Path=/data,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=755}'

EFS Access Points are the cleanest solution. They enforce a specific POSIX identity and can create the root directory automatically with correct permissions.
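Mounting through the access point then looks like this (the fsap- ID is a placeholder for the one returned by create-access-point). Every file operation runs as uid/gid 1000 and is rooted at /data:

```shell
# Mount via the access point; tls and iam are usually combined with it
sudo mount -t efs -o tls,iam,accesspoint=fsap-0123456789abcdef0 \
  fs-0123456789abcdef0:/ /mnt/efs
```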

Calculate Correct File Permissions

Use SecureBin's Chmod Calculator to determine the right permission values for your EFS directories.


Error 3: nfs-utils / amazon-efs-utils Not Installed

The Error

mount: /mnt/efs: bad option; for several filesystems (e.g. nfs, cifs)
  you might need a /sbin/mount.<type> helper program.

# Or:
mount.nfs4: No such device

The Fix

# Debian/Ubuntu (amazon-efs-utils is not in the distro repos;
# build the .deb from https://github.com/aws/efs-utils)
sudo apt-get update && sudo apt-get install -y nfs-common

# RHEL/CentOS (build amazon-efs-utils from the same repo)
sudo yum install -y nfs-utils

# Amazon Linux 2
sudo yum install -y nfs-utils amazon-efs-utils

# Amazon Linux 2023
sudo dnf install -y nfs-utils amazon-efs-utils

# Verify NFS module is loaded
lsmod | grep nfs
# If empty:
sudo modprobe nfs
sudo modprobe nfsv4

Setting Up /etc/fstab for Persistent Mounts

For the mount to survive reboots, add an entry to /etc/fstab:

Standard NFS Mount

# /etc/fstab entry
fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0

Key options explained:

  • nfsvers=4.1: Required for EFS. Do not use NFSv3.
  • rsize=1048576,wsize=1048576: 1MB read/write buffer. Maximum for EFS.
  • hard: Retry NFS operations indefinitely on timeout (default). Use soft only if you prefer I/O errors over hangs.
  • timeo=600: 60-second timeout before retrying (timeo is in tenths of a second).
  • noresvport: Do not use a reserved port. Required when reconnecting after a network disruption.
  • _netdev: Wait for network before mounting. Critical for cloud instances where EFS is not available during early boot.

EFS Mount Helper (with TLS and IAM)

# /etc/fstab entry using the EFS mount helper
fs-0123456789abcdef0:/ /mnt/efs efs _netdev,tls,iam 0 0

# With an Access Point
fs-0123456789abcdef0:/ /mnt/efs efs _netdev,tls,iam,accesspoint=fsap-0123456789abcdef0 0 0

Test the fstab entry without rebooting:

# Mount using fstab entry
sudo mount /mnt/efs

# Verify the mount
df -h /mnt/efs
mount | grep efs

Symlink Gotchas with EFS

Symlinks on EFS cause subtle issues, especially with applications like Magento that rely heavily on symlinks for the media/ and var/ directories.

The Problem: Absolute Symlinks Pointing to Local Paths

Consider this scenario: your application creates a symlink on the EFS volume:

# On Server A, EFS is mounted at /var/www/html/shared
ln -s /var/www/html/shared/media /var/www/html/pub/media

# On Server B, EFS is mounted at /mnt/efs
# The symlink /mnt/efs/media -> /var/www/html/shared/media
# This resolves LOCALLY on Server B, not within EFS

The symlink target /var/www/html/shared/media is an absolute path that gets resolved on the local filesystem. If Server B mounted EFS at a different path, the symlink is broken.

The Fix: Use Relative Symlinks

# Bad: absolute symlink
ln -s /var/www/html/shared/media/catalog /var/www/html/pub/media/catalog

# Good: relative symlink
cd /var/www/html/pub/media
ln -s ../../shared/media/catalog catalog

Magento Media Directory on EFS

Magento uses symlinks in pub/static/ and sometimes pub/media/. When the media directory is on EFS and shared across multiple web servers or Kubernetes pods, symlink behavior can cause "file not found" errors:

# Common Magento EFS setup
# EFS mounted at /var/www/html/pub/media

# Problem: Magento's static content deploy creates absolute symlinks
# in pub/static that reference the local filesystem

# Solution 1: Mount EFS at the exact path Magento expects
sudo mount -t efs fs-xxx:/ /var/www/html/pub/media

# Solution 2: Use Magento's remote storage (S3 + CloudFront)
# instead of EFS for media. This eliminates symlink issues entirely.
# In env.php:
# 'remote_storage' => [
#     'driver' => 'aws_s3',
#     'config' => [
#         'bucket' => 'your-media-bucket',
#         'region' => 'us-east-1'
#     ]
# ]

Kubernetes PVC + EFS Symlink Issues

When EFS is mounted as a PersistentVolume in Kubernetes, absolute symlinks created inside the PVC resolve against the container's filesystem, not the EFS volume:

# Pod A creates a symlink inside the PVC mounted at /data
# ln -s /data/uploads/2026 /data/current-uploads
# This resolves to /data/current-uploads inside the container

# Pod B mounts the same PVC at /data
# The symlink works because both pods mount at /data

# But if Pod C mounts at /mnt/shared instead of /data,
# the absolute symlink breaks

Best practice: use relative symlinks inside PVCs, and ensure all pods mount the PVC at the same path.
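A quick audit helps here. The sketch below defines two small helpers for finding risky symlinks under a mount; the /data path in the usage comment is the hypothetical PVC mount path from the example above:

```shell
# List symlinks under a directory whose target is an absolute path;
# these break whenever the mount path differs between hosts or pods.
find_abs_symlinks() {
  find "$1" -type l -lname '/*'
}

# List symlinks whose target does not currently resolve (broken links).
find_broken_symlinks() {
  find "$1" -xtype l
}

# Usage:
# find_abs_symlinks /data
# find_broken_symlinks /data
```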

EFS Performance Modes and Throughput

Slow EFS performance is not a mount error, but it is the most common complaint after a successful mount. Understanding the performance model prevents misdiagnosing application slowness as a mount issue.

Performance Modes

  • General Purpose (default): Lower latency (sub-millisecond for cached data). Supports up to 35,000 read IOPS and 7,000 write IOPS. Sufficient for most workloads. Cannot be changed after creation.
  • Max I/O: Higher latency but virtually unlimited IOPS. Required only for massively parallel workloads (thousands of concurrent NFS clients). Cannot be changed after creation.

For almost all use cases, General Purpose is the correct choice. Check your current mode:

aws efs describe-file-systems --file-system-id fs-0123456789abcdef0 \
  --query 'FileSystems[].PerformanceMode'

Throughput Modes

  • Elastic (recommended): Automatically scales throughput based on workload. Pay per GB transferred. Best for spiky workloads.
  • Provisioned: Fixed throughput (1-3072 MiB/s) regardless of file system size. Use when you need guaranteed throughput.
  • Bursting: Baseline throughput scales with the amount of data stored (roughly 50 MiB/s per TiB), and every file system can burst to at least 100 MiB/s using burst credits. Credits accrue while throughput stays below baseline; once exhausted, throughput is throttled to the baseline.

# Check current throughput mode and burst credits
aws efs describe-file-systems --file-system-id fs-0123456789abcdef0 \
  --query 'FileSystems[].{ThroughputMode:ThroughputMode,ProvisionedThroughput:ProvisionedThroughputInMibps}'

# Check burst credit balance via CloudWatch
aws cloudwatch get-metric-statistics \
  --namespace AWS/EFS \
  --metric-name BurstCreditBalance \
  --dimensions Name=FileSystemId,Value=fs-0123456789abcdef0 \
  --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --period 300 \
  --statistics Average
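To catch credit exhaustion before it throttles production, an alarm on that metric is a reasonable sketch. The SNS topic ARN and threshold are assumptions (1099511627776 bytes is 1 TiB of credits):

```shell
# Alarm when burst credits fall below ~1 TiB for 15 minutes.
# The SNS topic ARN is a placeholder for your own alerting target.
aws cloudwatch put-metric-alarm \
  --alarm-name efs-burst-credits-low \
  --namespace AWS/EFS \
  --metric-name BurstCreditBalance \
  --dimensions Name=FileSystemId,Value=fs-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 1099511627776 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:efs-alerts
```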

Switching Throughput Mode

# Switch to Elastic throughput (recommended)
aws efs update-file-system \
  --file-system-id fs-0123456789abcdef0 \
  --throughput-mode elastic

# Switch to Provisioned with 256 MiB/s
aws efs update-file-system \
  --file-system-id fs-0123456789abcdef0 \
  --throughput-mode provisioned \
  --provisioned-throughput-in-mibps 256

Throughput mode can be changed at any time without unmounting, but AWS requires waiting 24 hours after a throughput mode change (or a provisioned throughput decrease) before making another change.

Check Your AWS Network Configuration

Use SecureBin's Subnet Calculator to verify your VPC CIDR ranges and ensure proper subnet configuration for EFS mount targets.


EFS in Kubernetes (EFS CSI Driver)

For EKS clusters, mount EFS using the EFS CSI driver rather than manual NFS mounts:

# Install the EFS CSI driver
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --set controller.serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/efs-csi-role

Create a StorageClass and a PersistentVolumeClaim:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "755"
  uid: "1000"
  gid: "1000"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 20Gi

The CSI driver handles mount target selection, TLS encryption, and IAM authentication automatically. Common issues:

  • Pod stuck in ContainerCreating: Check the CSI driver pods in kube-system namespace. The efs-csi-node DaemonSet must be running on every node.
  • Mount timeout from CSI: Same root cause as manual mount timeouts. Check security groups and mount target AZ placement.
  • Permission denied in pod: Use Access Points with posixUser to match the container's UID/GID, or set fsGroup in the pod's security context.
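These checks translate into a few kubectl commands. The DaemonSet name, label, and container name below assume the default aws-efs-csi-driver Helm chart values:

```shell
# Confirm the CSI node plugin is running on every node
kubectl get daemonset efs-csi-node -n kube-system
kubectl get pods -n kube-system -l app=efs-csi-node -o wide

# Read mount errors reported by the node plugin
kubectl logs -n kube-system -l app=efs-csi-node -c efs-plugin --tail=50

# See why the claim or pod is stuck
kubectl describe pvc efs-claim
kubectl describe pod <stuck-pod-name>
```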

Troubleshooting Checklist

  1. Mount hangs? Security group on mount target must allow inbound TCP 2049 from your instance's SG.
  2. Permission denied? Check file system policy for IAM enforcement. Use mount -t efs -o tls,iam if IAM is required.
  3. mount.nfs4 not found? Install nfs-common (Debian) or nfs-utils (RHEL) and amazon-efs-utils.
  4. Works as root but not as user? Fix POSIX permissions on the EFS root directory or use Access Points.
  5. Symlinks broken? Use relative symlinks. Mount EFS at a consistent path across all instances.
  6. Slow performance? Check throughput mode and burst credit balance. Switch to Elastic throughput if bursting is exhausted.
  7. Kubernetes pod stuck? Verify EFS CSI driver is running, security groups are correct, and mount target exists in the node's AZ.

The Bottom Line

EFS mount failures break down into two categories: network issues (timeout = security group or AZ mismatch) and authorization issues (permission denied = IAM policy or POSIX permissions). Test network connectivity with nc -zv hostname 2049 first. If that works, the issue is authorization. Use amazon-efs-utils with -o tls,iam for IAM-enforced file systems, and use Access Points to manage POSIX permissions cleanly. For symlinks, always use relative paths and mount EFS at the same path on every instance. For performance issues, switch from Bursting to Elastic throughput mode and monitor burst credit balance.

Related Articles

Continue reading: Fix Docker OOM Killed, Fix Kubernetes CrashLoopBackOff, Fix Let's Encrypt Renewal Failed, Kubernetes Secrets Management, API Key Rotation Best Practices.

Written by Usman Khan
DevOps Engineer | MSc Cybersecurity | CEH | AWS Solutions Architect

Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.