
Magento PHP Memory Exhaustion: Why setup:upgrade Crashes and How to Fix It

After deploying Magento 2 on EKS with 200+ modules, I learned the hard way that PHP's default memory_limit is nowhere near sufficient. The "Allowed memory size exhausted" error is one of the most common and most misdiagnosed problems in the Magento ecosystem. Here is exactly how to fix it, from bare metal to Kubernetes.

TL;DR

Run php -d memory_limit=4G bin/magento setup:upgrade --keep-generated. PHP's default memory_limit of 128M, and even Magento's recommended 756M, is not enough for installs with 200+ modules. Worse, the "Allowed memory size exhausted" error often surfaces as a misleading AMQP topic configuration error, making it extremely difficult to diagnose unless you know what to look for.

Why Magento Eats Memory

Magento 2 is not a lightweight application. Its dependency injection (DI) framework loads and compiles metadata for every registered module at startup. When you run setup:upgrade or di:compile, PHP must hold the entire object graph in memory simultaneously. For a typical enterprise Magento instance with 200+ modules (including third-party extensions from Hyva, Amasty, MagePlaza, and others), this means peak memory consumption between 2 GB and 4 GB.

The problem compounds during static-content:deploy. This command generates pre-compiled frontend assets for every combination of area (frontend, adminhtml), theme, and locale. If you have 3 themes and 14 locales, that is 42 combinations. Each combination loads the full Magento framework, processes LESS/CSS compilation, and generates JavaScript bundles. Memory usage scales linearly with locale count.

Here is a rough breakdown of where the memory goes during setup:upgrade:

  • Module registration and dependency resolution: ~200 MB for 200+ modules
  • Database schema comparison (declarative schema): ~300-500 MB depending on table count
  • Data patch execution: Variable, but can spike to 1 GB+ for large data migrations
  • DI compilation (if not using --keep-generated): ~1.5-2 GB additional
  • Peak overhead (PHP internals, garbage collection): ~500 MB

The total easily reaches 3-4 GB for a production store. PHP's default memory_limit of 128M (or even Magento's recommended 756M in php.ini) cannot accommodate this.

Step 1: Find Your Current Memory Limit

Before fixing anything, you need to know what limit PHP is actually using. This is trickier than it sounds because PHP CLI and PHP-FPM often use different configuration files.

Check CLI memory limit

# Check the effective memory_limit for CLI
php -i | grep memory_limit
# Output: memory_limit => 128M => 128M

# Find which php.ini CLI is loading
php --ini
# Output shows: /etc/php/8.2/cli/php.ini

Check PHP-FPM memory limit

# Find the FPM config file
php-fpm -i | grep memory_limit
# Or check the FPM pool config directly
cat /etc/php/8.2/fpm/php.ini | grep memory_limit

# Also check pool-level overrides
cat /etc/php/8.2/fpm/pool.d/www.conf | grep php_admin_value

Check Magento's own override

# Magento ships a .user.ini in the pub/ directory
cat /var/www/html/pub/.user.ini
# Typically contains: memory_limit = 756M

# And a .htaccess override for Apache
grep memory_limit /var/www/html/.htaccess
# php_value memory_limit 756M

There are three common paths to check: /etc/php/8.x/cli/php.ini for CLI commands, /etc/php/8.x/fpm/php.ini for web requests, and any .user.ini or .htaccess overrides in the Magento root. These are often different values, which leads to the situation where web requests work fine but CLI commands crash (or vice versa).

Step 2: The Misleading Error Messages

This is the part that cost me hours of debugging. When setup:upgrade runs out of memory, it does not always produce a clean "Allowed memory size exhausted" error. Instead, PHP crashes mid-execution, and Magento's error handling catches whatever state it was in when memory ran out. The result is often a completely misleading error message.

The most infamous example:

In TopicConfigComposite.php line 47:
  Topic "async.operations.all" is not configured.

This is not an AMQP configuration issue. The topic is configured perfectly fine. What happened is that PHP ran out of memory while loading the message queue topic configuration, and the partially loaded config threw a "not configured" exception. I have seen engineers spend days debugging RabbitMQ connections, checking env.php queue settings, and reinstalling the AMQP module, all because of a memory limit that was 2 GB too low.

Other misleading errors caused by OOM mid-execution:

  • Area code is not set during setup:upgrade
  • Class Magento\Framework\App\Cache\Type\Config does not exist
  • Segmentation fault (core dumped) with no additional context
  • Silent exit with return code 255 and no error output at all
  • PHP Fatal error: Composer detected issues in your platform (truncated real error)

How to find the real error

# Check the kernel OOM killer log
dmesg | grep -i "oom\|killed"
# Output: Out of memory: Killed process 12345 (php) total-vm:4521000kB

# Check PHP error log
tail -100 /var/log/php8.2-fpm.log

# Check syslog for OOM events
grep -i "out of memory" /var/log/syslog

If dmesg shows an OOM kill event around the same time your command failed, you have your answer. The "AMQP topic" error, the "area code" error, or whatever Magento reported is a red herring. The real problem is insufficient memory.
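On systemd hosts, the dmesg ring buffer may have rotated by the time you investigate. journalctl keeps kernel messages with timestamps, which makes it easier to correlate an OOM kill with the exact moment your deploy failed (the pattern below matches the kernel's standard OOM wording):

```shell
# Kernel log via journald, with timestamps, scoped to the last hour
journalctl -k --since "1 hour ago" | grep -iE "out of memory|oom-killer"

# The same pattern works against raw dmesg output
dmesg | grep -iE "out of memory|oom-killer"
```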

Step 3: Fix for CLI Operations

The most reliable fix for CLI commands is to override memory_limit at invocation time using PHP's -d flag. This bypasses whatever is set in php.ini and gives you explicit control.

setup:upgrade

# The fix: override memory_limit at runtime
php -d memory_limit=4G bin/magento setup:upgrade --keep-generated

# Why --keep-generated?
# Without it, setup:upgrade deletes the generated/ directory,
# forcing you to run di:compile again afterward.
# With 200+ modules, di:compile alone takes 5-10 minutes
# and needs 3-4 GB of memory itself.
# Use --keep-generated when you know the generated code is still valid
# (e.g., no new plugins, preferences, or interceptors were added).

di:compile

# DI compilation is the single most memory-intensive Magento command
php -d memory_limit=4G bin/magento setup:di:compile

# This generates:
# - Interceptors (plugin classes)
# - Factories and proxies
# - Compiled dependency injection configuration
# For 200+ modules, expect 5-10 minutes and 2-3 GB peak memory

static-content:deploy

# Deploy with explicit locale list to minimize memory
php -d memory_limit=4G bin/magento setup:static-content:deploy \
  en_US en_CA fr_CA \
  --theme Vendor/theme \
  -j 4

# The -j flag enables parallel processing with 4 jobs
# Each job spawns a child process, so total memory = 4 x per-job usage
# With -j 4, ensure your server has at least 8 GB available

I recommend setting memory_limit=4G as a baseline for all Magento CLI operations. For stores with 300+ modules or complex data patches, you may need 6G. The -d flag approach is better than editing php.ini because it makes the memory requirement explicit and visible in your deployment scripts.

Step 4: Fix for PHP-FPM (Web Requests)

PHP-FPM memory configuration requires a different approach than CLI. Each FPM worker process serves one web request at a time, and multiple workers run simultaneously. The critical constraint is:

# The golden rule:
# pm.max_children x memory_limit <= Available Server RAM

# Example: Server with 16 GB RAM, 2 GB reserved for OS/MySQL/Redis
# Available for PHP-FPM: 14 GB
# memory_limit per worker: 756M (Magento default)
# Max safe workers: 14000 / 756 = ~18 workers

; /etc/php/8.2/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 18
pm.start_servers = 6
pm.min_spare_servers = 4
pm.max_spare_servers = 10
pm.max_requests = 1000

For the FPM php.ini, set a more conservative limit than CLI:

; /etc/php/8.2/fpm/conf.d/99-magento.ini
memory_limit = 756M
max_execution_time = 18000
realpath_cache_size = 10M
realpath_cache_ttl = 7200

The reason FPM can use a lower limit than CLI is that web requests do not run setup:upgrade or di:compile. They load the pre-compiled dependency injection configuration from the generated/ directory, which requires far less memory. A typical Magento page load uses 200-400 MB of memory. The 756M limit provides headroom for complex catalog pages, large cart operations, and admin panel requests.

After changing FPM configuration, always restart the service:

sudo systemctl restart php8.2-fpm
sudo systemctl status php8.2-fpm

Pro Tip: In Kubernetes, your container memory limit must account for PHP-FPM workers and CLI commands running simultaneously. A deploy job running setup:upgrade inside a pod needs 4Gi request and 6Gi limit. Regular FPM pods serving web traffic can run comfortably with 1Gi request and 2Gi limit. Do not set the FPM pod limit to 4Gi just because your deploy job needs it. Use separate pod specs for deploy jobs (Kubernetes Jobs or Argo CD PreSync hooks) with higher memory allocations, and keep your long-running FPM pods lean.
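Sketched as a standalone Kubernetes Job, that deploy pod looks roughly like this (the image name is a placeholder and the memory figures are the ones recommended above; treat it as a starting point, not a drop-in manifest):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: magento-setup-upgrade
spec:
  backoffLimit: 0            # a failed schema upgrade should not auto-retry
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: setup-upgrade
          image: your-registry/magento:latest
          workingDir: /var/www/html
          command: ["php", "-d", "memory_limit=4G", "bin/magento", "setup:upgrade", "--keep-generated"]
          resources:
            requests:
              memory: "4Gi"
            limits:
              memory: "6Gi"
```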


Step 5: Static Content Deploy Memory

Static content deployment is where memory problems become multiplicative. Each locale you deploy requires a separate compilation pass that loads the full Magento framework. I have seen this firsthand: our EU store originally deployed 14 locales (en_US, en_GB, de_DE, es_ES, fr_FR, it_IT, nl_NL, pl_PL, pt_PT, sv_SE, da_DK, fi_FI, nb_NO, cs_CZ). Memory usage spiked to 8 GB+, and the deploy took over 40 minutes.

The fix was simple: trim locales to only the ones actually used by active store views.

# Before: 14 locales, 8 GB+ memory, 40+ minutes
php -d memory_limit=10G bin/magento setup:static-content:deploy \
  en_US en_GB de_DE es_ES fr_FR it_IT nl_NL pl_PL pt_PT \
  sv_SE da_DK fi_FI nb_NO cs_CZ

# After: 4 locales, ~2 GB memory, 8 minutes
php -d memory_limit=4G bin/magento setup:static-content:deploy \
  en_US en_GB de_DE es_ES

To find which locales your store actually needs, check the active store views:

# Query Magento's store_website table
mysql -e "SELECT s.code, s.name, c.locale
  FROM store s
  JOIN core_config_data c ON c.scope = 'stores'
    AND c.scope_id = s.store_id
    AND c.path = 'general/locale/code'
  WHERE s.is_active = 1;" magento

Every locale you remove saves roughly 500 MB of peak memory and 2-3 minutes of deploy time. For a multi-region Magento deployment, this optimization alone can cut your deploy pipeline time in half.
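You can also cross-check against what is already on disk. Deployed locales show up as directories under pub/static/frontend/<Vendor>/<theme>/, so a find over that layout lists every locale a previous deploy produced:

```shell
# List every locale directory currently deployed under pub/static
# Layout: pub/static/frontend/<Vendor>/<theme>/<locale>
find pub/static/frontend -mindepth 3 -maxdepth 3 -type d -printf '%f\n' | sort -u
```

Any locale in this list that does not match an active store view from the query above is a candidate for removal.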

Step 6: Kubernetes and Docker Deployment Strategy

Running Magento in Kubernetes introduces unique memory challenges. The traditional approach of running static-content:deploy at runtime (during pod startup or as part of a deploy job writing to shared storage) has serious problems:

  • Shared EFS/NFS storage is slow. Writing 600 MB of static files to an EFS PVC takes 5-10x longer than local disk.
  • Stale assets persist across deploys. Old CSS and JS files remain on shared storage even after a new image is deployed, causing style regressions.
  • Memory spikes in the cluster. Running static-content:deploy in a pod requires 4 GB+ of memory for a one-time operation.

The better approach: bake static content into the Docker image during CI.

Dockerfile with baked static content

# Stage 1: Build static content
# (abbreviated: a real build also installs Magento's required PHP extensions)
FROM php:8.2-cli AS builder
# Bring in the Composer binary; it is not included in the base PHP image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /var/www/html
COPY . .
RUN composer install --no-dev --optimize-autoloader
RUN php -d memory_limit=4G bin/magento setup:di:compile
# -f forces deployment outside production mode (CI builds have no database)
RUN php -d memory_limit=4G bin/magento setup:static-content:deploy -f \
  en_US en_CA fr_CA \
  --theme Vendor/theme -j 4

# Stage 2: Production image
FROM php:8.2-fpm AS production
COPY --from=builder /var/www/html /var/www/html
# Static files are now part of the image, no runtime deploy needed

Nginx initContainer pattern

In a typical Magento Kubernetes setup, Nginx and PHP-FPM run in separate containers (or separate pods). Nginx needs access to static files but does not need the full Magento codebase. Use an initContainer to copy static files from the Magento image into a shared emptyDir volume:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: magento-nginx
spec:
  selector:
    matchLabels:
      app: magento-nginx
  template:
    metadata:
      labels:
        app: magento-nginx
    spec:
      initContainers:
        - name: copy-static
          image: your-registry/magento:latest
          command: ["cp", "-a", "/var/www/html/pub/static/.", "/static/"]
          volumeMounts:
            - name: static-files
              mountPath: /static
      containers:
        - name: nginx
          image: nginx:alpine
          volumeMounts:
            - name: static-files
              mountPath: /var/www/html/pub/static
              readOnly: true
      volumes:
        - name: static-files
          emptyDir: {}

The emptyDir volume uses the node's local disk (or memory if you specify medium: Memory), which is dramatically faster than EFS for serving static files. Nginx reads from local disk instead of NFS, cutting TTFB for static assets from 50-100ms to under 5ms.
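If you want the tmpfs variant, the only change is the volume definition. Note that a memory-backed emptyDir counts against the pod's memory limit, so cap it explicitly (the 1Gi figure is an assumption; size it to your actual pub/static footprint):

```yaml
volumes:
  - name: static-files
    emptyDir:
      medium: Memory    # tmpfs: backed by RAM, counted toward the pod memory limit
      sizeLimit: 1Gi    # cap it so static files cannot starve the containers
```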

Step 7: Cron and Indexer Memory

Magento's cron scheduler runs indexers, sends emails, processes message queues, and executes dozens of scheduled tasks. Each cron group spawns a separate PHP process, and indexers can be particularly memory-hungry.

# Check which indexers are configured
php bin/magento indexer:info

# Reindex with explicit memory override
php -d memory_limit=4G bin/magento indexer:reindex

# Reindex specific indexers (less memory)
php -d memory_limit=2G bin/magento indexer:reindex catalog_product_price catalog_category_product

Cron memory configuration

Magento's built-in cron runner uses the CLI php.ini memory limit. If your CLI limit is 128M, cron will fail silently when indexers exceed that limit. There are two approaches:

# Option 1: Set CLI php.ini to a safe default
# /etc/php/8.2/cli/php.ini
memory_limit = 2G

# Option 2: Override in the crontab entry
* * * * * /usr/bin/php -d memory_limit=2G /var/www/html/bin/magento cron:run >> /var/log/magento-cron.log 2>&1

For Kubernetes, configure the cron container with adequate memory:

# Kubernetes CronJob for Magento cron
apiVersion: batch/v1
kind: CronJob
metadata:
  name: magento-cron
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cron
              image: your-registry/magento:latest
              workingDir: /var/www/html
              command: ["php", "-d", "memory_limit=2G", "bin/magento", "cron:run"]
              resources:
                requests:
                  memory: "1Gi"
                limits:
                  memory: "3Gi"

Memory Requirements by Operation

After running these operations across multiple Magento instances (ranging from 150 to 300+ modules), here are the memory profiles I have observed:

For each operation: typical peak memory, recommended memory_limit, duration with 200+ modules, and whether it can run in parallel.

  • setup:upgrade: 2-3 GB peak, 4G limit, 3-8 minutes; cannot run in parallel (DB locks)
  • setup:di:compile: 2-4 GB peak, 4G limit, 5-12 minutes; cannot run in parallel (writes to generated/)
  • static-content:deploy (3 locales): 1.5-2 GB peak, 4G limit, 5-10 minutes; parallel with the -j flag
  • static-content:deploy (14 locales): 6-8 GB peak, 10G limit, 30-45 minutes; parallel with the -j flag
  • indexer:reindex (all): 1-2 GB peak, 2G limit, 5-20 minutes; partially parallel (some indexers lock)
  • cron:run: 512 MB-1.5 GB peak, 2G limit, continuous; parallel per cron group
  • Web request (FPM): 200-400 MB peak, 756M limit, per request; parallel per worker

Preventing Memory Exhaustion Long Term

Fixing the immediate OOM crash is only half the battle. To prevent memory exhaustion from recurring as your store grows, implement these practices:

1. Remove unused modules

Every module adds to the memory footprint during compilation. Audit your module list regularly:

# List all enabled modules
php bin/magento module:status --enabled | wc -l

# Disable modules you do not use
php bin/magento module:disable Magento_Wishlist Magento_SendFriend \
  Magento_Swatches Magento_GroupedProduct

Removing 20 unused modules can save 200-400 MB during compilation. For headless/API-only stores, you can safely disable most frontend modules.
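Third-party extensions are usually where the savings are. A quick way to build an audit shortlist is to filter out core Magento_* modules (run from the Magento root):

```shell
# List enabled modules that are NOT core Magento modules
php bin/magento module:status --enabled | grep -v "^Magento_"

# Count them
php bin/magento module:status --enabled | grep -c -v "^Magento_"
```

Walk that list with the business owners: any extension nobody can name a use for is a removal candidate.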

2. Trim locales aggressively

Only deploy locales that map to active store views. If you have a German store and a UK store, deploy en_GB and de_DE. Do not deploy en_US unless you have a US store view that specifically uses it.

3. Bake static content into the Docker image

As covered in Step 6, this eliminates the need for runtime static content deployment entirely. Your deploy job simplifies to setup:upgrade --keep-generated plus cache:flush.

4. Separate CLI and FPM configurations

Never set CLI memory_limit to the same value as FPM. CLI operations need 2-4 GB. FPM workers need 756M. Use separate php.ini files or the -d flag for CLI.

5. Monitor with APM tools

New Relic, Datadog, or even simple cron scripts can alert you when memory usage approaches limits:

#!/bin/bash
# Simple monitoring script: sum the RSS (kB) of all php-fpm processes
# The [p]hp-fpm pattern keeps grep from matching its own process
PHP_MEM=$(ps aux | grep '[p]hp-fpm' | awk '{sum += $6} END {print sum/1024}')
THRESHOLD=12000  # 12 GB warning threshold

if (( $(echo "$PHP_MEM > $THRESHOLD" | bc -l) )); then
  echo "PHP-FPM total memory: ${PHP_MEM}MB exceeds threshold" | \
    mail -s "Magento Memory Alert" ops@yourcompany.com
fi

Common Mistakes to Avoid

  1. Setting memory_limit in the wrong php.ini. CLI and FPM use different config files. Editing /etc/php/8.2/fpm/php.ini does nothing for CLI commands. Always verify with php --ini for CLI and php-fpm -i for FPM.
  2. Not using --keep-generated with setup:upgrade. Without this flag, Magento deletes the entire generated/ directory, forcing a full di:compile run afterward. If your deployment pipeline already handles compilation separately, always pass --keep-generated.
  3. Running static-content:deploy inside production pods. This wastes cluster resources (4 GB+ memory for a temporary operation), writes to shared storage (slow and prone to stale files), and blocks pod readiness. Bake it into the image instead.
  4. Deploying too many locales. Each locale adds ~500 MB of peak memory and 2-3 minutes of build time. Audit your active store views and remove any locale that does not map to a real storefront.
  5. Not checking dmesg for OOM killer events. When PHP crashes with a misleading error, the first thing to check is dmesg | grep -i oom. The kernel's OOM killer log is the definitive answer to whether memory was the root cause.

Frequently Asked Questions

Why does Magento show an AMQP error when the real issue is memory?

When PHP exhausts its memory limit, it terminates the current execution context immediately. If the process was in the middle of loading message queue topic configuration (which happens early in Magento's bootstrap), the partially loaded config triggers a "Topic is not configured" exception. The AMQP error is a symptom, not the cause. The real error is in dmesg or your PHP error log.

Is 4G memory_limit safe for production FPM workers?

Generally no. Setting FPM workers to 4G means each worker can consume up to 4 GB. With pm.max_children = 10, that is 40 GB of potential memory usage. FPM workers serving web requests rarely need more than 756M. Reserve the 4G limit for CLI operations only (deploy jobs, cron, manual commands).

Can I use memory_limit = -1 (unlimited) in production?

Never. Unlimited memory means a single runaway process (a bad database query, an infinite loop in a custom module) can consume all server RAM, triggering the kernel OOM killer and taking down other services including MySQL and Redis. Always set an explicit upper bound.

How do I calculate the right pm.max_children for my server?

Use the formula: pm.max_children = (Total RAM - Reserved for OS/DB/Cache) / memory_limit. For a 16 GB server running MySQL (2 GB) and Redis (1 GB), with 1 GB reserved for the OS: (16 - 4) / 0.756 = 15.8, so set pm.max_children = 15. Monitor actual usage with ps aux | grep php-fpm and adjust based on real consumption patterns.
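The formula is easy to script so it can live next to your provisioning code. Plugging in the 16 GB example from above (all figures illustrative):

```shell
# pm.max_children = (total RAM - reserved for OS/DB/cache) / per-worker memory_limit, in GB
awk -v ram=16 -v reserved=4 -v limit=0.756 \
  'BEGIN { printf "pm.max_children = %d\n", (ram - reserved) / limit }'
# pm.max_children = 15
```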

Do I need to increase memory for Magento Cloud (Adobe Commerce Cloud)?

Adobe Commerce Cloud manages PHP configuration through .magento.app.yaml. You can set runtime.extensions and some PHP settings, but memory_limit for CLI operations during the build phase is controlled by the platform. If you hit OOM during cloud builds, the fix is usually to reduce locale count, remove unused modules, or contact Adobe support to increase the build container resources.


The Bottom Line

Magento's PHP memory exhaustion problem comes down to one fundamental mismatch: PHP defaults are designed for lightweight web applications, and Magento is not lightweight. The fix is straightforward once you understand the root cause. Use php -d memory_limit=4G for CLI operations, keep FPM at 756M, trim your locale list, bake static content into Docker images, and always check dmesg before chasing misleading error messages.

The biggest time saver from this entire article: if you see "AMQP topic is not configured" during setup:upgrade, stop debugging AMQP. Increase memory_limit to 4G and run it again. You will save yourself hours.

Related reading: Free Website Security Scan, Website Security Headers Guide, Fix Docker Container OOM Killed, Fix Kubernetes Pod CrashLoopBackOff, and 70+ free developer tools.

Written by Usman Khan
DevOps Engineer | MSc Cybersecurity | CEH | AWS Solutions Architect

Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic Magento deployments on Kubernetes, and building zero-knowledge security tools. Read more about the author.