
MySQL Too Many Connections: The Production Fix That Actually Works

It is 2 AM. Your monitoring fires. Your application is returning 500 errors. You try to connect to MySQL and get hit with ERROR 1040 (HY000): Too many connections. You cannot even log in to fix it. Sound familiar? This is one of the most common MySQL production incidents, and most people fix it wrong. Let me walk you through the fix that actually holds up.

TL;DR - Quick Diagnosis and Fix

-- Check the limit
SHOW VARIABLES LIKE 'max_connections';

-- Check current usage
SHOW STATUS LIKE 'Threads_connected';

-- If those two numbers are equal, you are maxed out.
-- Quick fix (takes effect immediately):
SET GLOBAL max_connections = 500;
SET GLOBAL wait_timeout = 60;

If Threads_connected equals max_connections, your connection pool is full. The commands above buy you breathing room. Read on for the permanent fix.

Why This Happens (And Why the Default Settings Are Terrible)

MySQL ships with max_connections = 151 by default. That number was reasonable in 2005 when a busy site handled a few hundred requests per minute. In 2026, a single Magento or Laravel application can burn through 151 connections in seconds during a traffic spike.

But the connection limit is only half the problem. The real killer is wait_timeout, which defaults to 28800 seconds. That is 8 hours. When your PHP application opens a database connection and finishes the request, it does not always close the connection cleanly. MySQL keeps that connection alive in a Sleep state for 8 entire hours, waiting for the client to maybe send another query.

So here is what actually happens during an incident:

  1. Your application opens connections to MySQL on each request
  2. Some connections do not get closed properly (very common in PHP)
  3. MySQL holds those sleeping connections open for 8 hours
  4. Over time, sleeping connections pile up and eat your connection slots
  5. New requests cannot get a connection because all 151 slots are taken by sleeping ghosts
  6. Your application returns 500 errors and your pages start throwing the dreaded SQLSTATE[HY000] [1040] Too many connections

The frustrating part is that your server might have plenty of CPU and RAM available. It is not under real load. It is just choking on abandoned connections that nobody is using.

Step 1: Connect When You Are Locked Out

If you are completely locked out and mysql -u root -p gives you the "too many connections" error, do not panic. MySQL reserves one extra connection for users with the SUPER privilege (or CONNECTION_ADMIN in MySQL 8.0+). This means root can still get in even when all regular connection slots are full.

# Connect as root - this uses the reserved SUPER connection
mysql -u root -p

# If even that fails (rare), try with the socket directly
mysql -u root -p --socket=/var/run/mysqld/mysqld.sock

In MySQL 8.0.14 and later, you can also configure admin_address and admin_port in advance. This gives you a dedicated admin network interface that is completely separate from the regular connection pool. Worth setting up before you need it.
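A minimal my.cnf sketch of that setup (33062 is MySQL's default admin port; bind address shown for loopback only):

```ini
# my.cnf - dedicated admin interface (MySQL 8.0.14+)
[mysqld]
admin_address = 127.0.0.1    # listen for admin connections on loopback
admin_port    = 33062        # MySQL's default admin port
```

With that in place, mysql -h 127.0.0.1 -P 33062 -u root -p gets you in over the admin interface even when every regular slot is taken. Note that the connecting account needs the SERVICE_CONNECTION_ADMIN privilege to use this interface.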

Pro tip: If you are running on RDS or Aurora, you cannot use the SUPER connection trick. Skip ahead to Step 7 for the RDS-specific approach.

Step 2: Check the Current State

Once you are connected, the first thing to do is understand what is actually going on. Run these queries:

-- How many connections are allowed?
SHOW VARIABLES LIKE 'max_connections';

-- How many are currently in use?
SHOW STATUS LIKE 'Threads_connected';

-- What is the highest it has ever been since last restart?
SHOW STATUS LIKE 'Max_used_connections';

-- What is the current wait_timeout?
SHOW VARIABLES LIKE 'wait_timeout';

-- See all active connections
SHOW FULL PROCESSLIST;

The SHOW FULL PROCESSLIST output is where the real story is. You will likely see something like this:

+------+------+-----------+--------+---------+------+-------+------------------+
| Id   | User | Host      | db     | Command | Time | State | Info             |
+------+------+-----------+--------+---------+------+-------+------------------+
| 1042 | app  | 10.0.1.5  | mydb   | Sleep   | 3847 |       | NULL             |
| 1043 | app  | 10.0.1.5  | mydb   | Sleep   | 2901 |       | NULL             |
| 1044 | app  | 10.0.1.5  | mydb   | Sleep   | 5623 |       | NULL             |
| 1045 | app  | 10.0.1.6  | mydb   | Query   |    0 |       | SELECT * FROM... |
| 1046 | app  | 10.0.1.5  | mydb   | Sleep   | 7200 |       | NULL             |
+------+------+-----------+--------+---------+------+-------+------------------+

See all those rows with Command: Sleep and a Time value in the thousands? Those are zombie connections. They are not doing anything. They are just sitting there, taking up connection slots that active queries need.

Let us count them:

-- Count sleeping vs active connections
SELECT
  Command,
  COUNT(*) as count,
  AVG(Time) as avg_time_seconds
FROM information_schema.PROCESSLIST
GROUP BY Command
ORDER BY count DESC;

If you see 140 sleeping connections and 5 active queries, you have found your problem. Those 140 sleeping connections are the reason new requests are getting rejected.

Step 3: Kill the Sleeping Connections

Time to clear the dead weight. You do not want to kill active queries, just the ones that are sleeping and have been idle for a while. This query generates the KILL statements for you:

-- Generate KILL statements for connections sleeping > 60 seconds
SELECT CONCAT('KILL ', id, ';') AS kill_statement
FROM information_schema.PROCESSLIST
WHERE Command = 'Sleep'
AND Time > 60
AND User != 'system user'
ORDER BY Time DESC;

Review the output, then copy and paste those KILL statements to execute them. If there are hundreds, you can do it in bulk:

-- Kill all sleeping connections older than 60 seconds (be careful)
-- Note: GROUP_CONCAT truncates at group_concat_max_len (default 1024 bytes);
-- raise it first if you have hundreds of IDs:
--   SET SESSION group_concat_max_len = 100000;
SELECT GROUP_CONCAT(CONCAT('KILL ', id) SEPARATOR '; ')
FROM information_schema.PROCESSLIST
WHERE Command = 'Sleep'
AND Time > 60
AND User != 'system user';

Alternatively, from the command line:

# One-liner to kill sleeping connections older than 60s.
# Put credentials in ~/.my.cnf (or use --defaults-extra-file) first,
# otherwise mysql will prompt for the password on every single KILL.
mysql -N -e "SELECT id FROM information_schema.PROCESSLIST WHERE Command='Sleep' AND Time > 60" | while read id; do mysql -e "KILL $id"; done

After killing the zombie connections, check SHOW STATUS LIKE 'Threads_connected'; again. You should see the count drop dramatically, and your application should start responding immediately.

Step 4: Increase max_connections (The Right Way)

Now that the immediate fire is out, let us set a proper connection limit. The magic question everyone asks is: "What should I set max_connections to?" The answer depends on your available RAM.

Each MySQL connection consumes memory. The exact amount depends on your buffer settings, but here is a rough formula:

Per-connection memory =
  sort_buffer_size (default 256K)
  + read_buffer_size (default 128K)
  + read_rnd_buffer_size (default 256K)
  + join_buffer_size (default 256K)
  + thread_stack (default 256K or 1M)
  + binlog_cache_size (default 32K)
  = roughly 1-4 MB per connection (depends on workload)

So the calculation looks like this:

Available RAM for connections = Total RAM - OS needs - InnoDB buffer pool - other buffers
max_connections = Available RAM for connections / Per-connection memory

Example:
  Server: 16 GB RAM
  OS + other processes: 2 GB
  InnoDB buffer pool: 10 GB
  Remaining: 4 GB
  Per connection: ~4 MB (worst case)
  max_connections = 4096 MB / 4 MB = 1024

  A safe setting: 500-800 (leaving headroom)
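If you want to sanity-check the arithmetic for your own hardware, the calculation above is easy to script. The numbers below are the illustrative 16 GB example, not recommendations; plug in your own values:

```shell
# Back-of-envelope max_connections estimate (illustrative numbers)
TOTAL_RAM_MB=16384       # 16 GB server
OS_RESERVED_MB=2048      # OS + other processes
BUFFER_POOL_MB=10240     # innodb_buffer_pool_size (10 GB)
PER_CONN_MB=4            # worst-case per-connection memory

AVAILABLE_MB=$((TOTAL_RAM_MB - OS_RESERVED_MB - BUFFER_POOL_MB))
echo "Available for connections: ${AVAILABLE_MB} MB"
echo "Theoretical ceiling:       $((AVAILABLE_MB / PER_CONN_MB)) connections"
```

That prints a theoretical ceiling of 1024 for this example, which is why 500-800 with headroom is the sensible setting.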

Apply it immediately:

SET GLOBAL max_connections = 500;

Do not go crazy and set it to 10000. If you actually have 10000 simultaneous active queries, your server is going to collapse under the load regardless of the connection limit. The connection limit is a safety valve, not a performance tuning knob. If you think you need more than 1000 connections, you probably need connection pooling or read replicas instead.

Step 5: Lower wait_timeout (This Is the Real Fix)

Here is the thing most tutorials skip. Raising max_connections only buys you more runway before the same problem happens again. The real fix is lowering wait_timeout so sleeping connections get cleaned up automatically.

-- Set wait_timeout to 60 seconds (instead of the insane 28800 default)
SET GLOBAL wait_timeout = 60;

-- Also set interactive_timeout for CLI sessions
SET GLOBAL interactive_timeout = 300;

What does this do? Any connection that sits idle (sleeping) for more than 60 seconds gets automatically terminated by MySQL. No more zombie connections piling up for 8 hours.

How low should you go?

  • 30-60 seconds - Aggressive but works well for web applications. PHP scripts rarely need a connection open for more than a few seconds.
  • 120-300 seconds - Conservative. Good if you have long-running background jobs or batch processes.
  • 28800 seconds (8 hours) - The default. Almost never what you want in production.

I typically recommend 60 seconds for web-facing databases and 300 seconds for databases that also handle background workers. If you have a specific process that needs a long-lived connection, configure that at the session level rather than keeping the global timeout high.
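For example, if a nightly batch job genuinely needs a long-lived idle connection, have it raise its own limit right after connecting, while the global default stays low. A sketch:

```sql
-- Run inside the long-lived session only; other clients are unaffected
SET SESSION wait_timeout = 28800;
```

This is strictly better than a high global timeout: one known process gets its exception, and every abandoned web connection still dies after 60 seconds.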

Step 6: Fix the Application Layer

Tuning MySQL is treating the symptom. The root cause is almost always the application not managing connections properly. Here are the most common offenders and how to fix them.

PHP and Persistent Connections

PHP's persistent connections (the legacy mysql_pconnect(), mysqli's p: host prefix, and PDO's PDO::ATTR_PERSISTENT) reuse connections across requests. This sounds efficient, but in practice it means connections never get closed. Each Apache/PHP-FPM worker holds a connection indefinitely.

// BAD - persistent connections pile up
$pdo = new PDO($dsn, $user, $pass, [
    PDO::ATTR_PERSISTENT => true
]);

// GOOD - regular connections that close when the script ends
$pdo = new PDO($dsn, $user, $pass, [
    PDO::ATTR_PERSISTENT => false,
    PDO::ATTR_TIMEOUT => 5
]);

If you must use persistent connections, make sure your wait_timeout is low enough to clean up abandoned ones.

Connection Pooling

For high-traffic applications, connection pooling is the proper solution. Instead of each request opening and closing its own connection, a pool of connections is shared across all requests.

  • ProxySQL - A MySQL-aware proxy that handles connection pooling, query routing, and load balancing. This is the gold standard for MySQL connection management.
  • MySQL Router - Oracle's official proxy, built for InnoDB Cluster.
  • PHP-FPM process limit - Indirectly controls connections. If you have 50 PHP-FPM workers and each opens one connection, your max is 50.

ProxySQL is especially useful because it can multiplex hundreds of frontend connections onto a smaller pool of backend connections to MySQL. Your application thinks it has 500 connections, but ProxySQL only uses 50 real MySQL connections.
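As a sketch of that setup in ProxySQL's admin interface (connect to the admin port, 6032 by default; the backend hostname and the 50-connection cap here are illustrative, not recommendations):

```sql
-- Cap ProxySQL at 50 real backend connections to this MySQL server
INSERT INTO mysql_servers (hostgroup_id, hostname, port, max_connections)
VALUES (0, '10.0.1.10', 3306, 50);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```

The LOAD ... TO RUNTIME / SAVE ... TO DISK pair is ProxySQL's equivalent of "apply now" and "persist across restarts".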

WordPress Specific

WordPress uses wpdb for database access, which opens a single non-persistent mysqli connection per request and closes it when the request finishes, so core WordPress rarely exhausts connections on its own. There is no supported wp-config.php constant for capping database connections, so the server-side timeouts from Step 5 are your safety net here.

The biggest WordPress connection hog is usually plugins making separate database connections. Audit your plugins and remove any that open their own connections instead of using $wpdb.

Laravel Specific

In config/database.php, Laravel uses PDO by default with non-persistent connections, which is good. But check these settings:

// config/database.php
'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'options' => [
        PDO::ATTR_PERSISTENT => false,  // Keep this false
        PDO::ATTR_TIMEOUT => 5,         // Connection timeout
    ],
],

If you are using Laravel queues, each worker holds a connection. Set --timeout and use DB::disconnect() in long-running workers to prevent connection accumulation. Similar issues happen with Redis memory exhaustion in queue-heavy applications.

Magento Specific

Magento 2 is notorious for connection usage. Each cron job, each indexer, and each web request opens at least one connection. During a full reindex, Magento can easily consume 50+ connections.

# Check Magento's active MySQL connections
mysql -u root -p -e "SELECT User, Host, COUNT(*) as count FROM information_schema.PROCESSLIST WHERE db='magento' GROUP BY User, Host ORDER BY count DESC;"

Key settings in env.php:

'db' => [
    'connection' => [
        'default' => [
            'host' => 'localhost',
            'dbname' => 'magento',
            'username' => 'magento',
            'password' => '...',
            'active' => '1',
            'persistent' => null  // null = non-persistent (good)
        ]
    ]
],

Limit concurrent cron processes and stagger reindexing to avoid connection spikes.

Step 7: RDS and Aurora Specific Fixes

If you are running on AWS RDS or Aurora, you cannot SSH into the server and edit my.cnf. You also cannot use the SUPER privilege to connect when connections are maxed out. Here is what to do.

RDS max_connections Formula

RDS sets max_connections automatically based on instance class using this formula:

max_connections = {DBInstanceClassMemory/12582880}

Examples:
  db.t3.micro  (1 GB)  = ~85 connections
  db.t3.small  (2 GB)  = ~170 connections
  db.t3.medium (4 GB)  = ~341 connections
  db.r5.large  (16 GB) = ~1365 connections
  db.r5.xlarge (32 GB) = ~2730 connections

To override, create a custom parameter group in the RDS console:

  1. Go to RDS > Parameter Groups > Create Parameter Group
  2. Find max_connections and set your desired value
  3. Find wait_timeout and set it to 60
  4. Attach the parameter group to your instance
  5. Reboot the instance only if you changed any static parameters

Both max_connections and wait_timeout are dynamic parameters on RDS, meaning they take effect immediately without a reboot.

Aurora Specific

Aurora uses the same parameter group approach but has higher default limits because of its architecture. Aurora also handles connection management more efficiently than standard RDS MySQL. If you are still hitting limits on Aurora, look into RDS Proxy, which provides built-in connection pooling.

# Look up the instance class (which determines the default max_connections)
aws rds describe-db-instances \
  --db-instance-identifier my-instance \
  --query 'DBInstances[0].DBInstanceClass'

Step 8: Make It Permanent (The Part Everyone Forgets)

SET GLOBAL max_connections = 500; does not survive a MySQL restart. If MySQL restarts for any reason (crash, maintenance, server reboot), it goes right back to the default 151. You need to also update the configuration file.

# Edit the MySQL config file
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
# or
sudo nano /etc/my.cnf

# Add under [mysqld] section:
[mysqld]
max_connections = 500
wait_timeout = 60
interactive_timeout = 300

Then validate without restarting:

# Verify the config file syntax is correct
mysqld --validate-config

# The SET GLOBAL changes are already active, so no restart needed
# The config file ensures they persist across restarts

Important: Always run both the SET GLOBAL command (for immediate effect) and update my.cnf (for persistence). Doing only one or the other is the most common mistake.
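On MySQL 8.0 there is a cleaner option that does both steps at once: SET PERSIST applies the change immediately and writes it to mysqld-auto.cnf, so it survives restarts without touching my.cnf.

```sql
-- MySQL 8.0+: immediate effect AND persisted across restarts
SET PERSIST max_connections = 500;
SET PERSIST wait_timeout = 60;
```

If you manage config through my.cnf or configuration management, stick with SET GLOBAL plus the file edit so there is a single source of truth.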

Connection Handling: MySQL vs MariaDB vs PostgreSQL

If you are evaluating database options or running multiple databases, here is how the defaults compare:

Setting                   | MySQL 8.0               | MariaDB 10.11                | PostgreSQL 16
--------------------------+-------------------------+------------------------------+------------------------------
Default max connections   | 151                     | 151                          | 100
Idle connection timeout   | 28800s (8h)             | 28800s (8h)                  | 0 (never)
Connection model          | Thread per connection   | Thread pool (optional)       | Process per connection
Memory per connection     | ~1-4 MB                 | ~1-4 MB                      | ~5-10 MB
Built-in pooling          | No (use ProxySQL)       | Thread pool plugin           | No (use PgBouncer)
Reserved admin connection | Yes (SUPER privilege)   | Yes (extra_max_connections)  | Yes (superuser_reserved_connections)
Dynamic config change     | SET GLOBAL (no restart) | SET GLOBAL (no restart)      | ALTER SYSTEM (max_connections needs a restart)

PostgreSQL is actually worse by default because it uses a process-per-connection model that consumes more memory. That is why PgBouncer is essentially mandatory for any PostgreSQL production deployment. MySQL's thread-per-connection model is lighter, but you still need to manage timeouts properly.

Monitoring: Catch It Before It Becomes an Incident

The best way to handle "too many connections" is to never let it happen in the first place. Set up alerts on these metrics:

Key Metrics to Monitor

  • Threads_connected / max_connections ratio - Alert at 80%. If you are using 80% of your connection limit, you are one traffic spike away from an outage.
  • Threads_running - Active (not sleeping) threads. If this number is consistently above 50, you have a query performance problem.
  • Connection_errors_max_connections - Counts how many times a connection was refused because the limit was reached. This should always be 0.
  • Aborted_connects - Failed connection attempts. A spike here often means the app cannot get a connection.
  • Slow_queries - Slow queries hold connections longer, contributing to exhaustion.
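If you have not wired up an exporter yet, the same counters are one query away:

```sql
-- Spot-check the key connection counters directly
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Threads_connected', 'Threads_running',
   'Connection_errors_max_connections', 'Aborted_connects', 'Slow_queries');
```

These are cumulative since the last restart (except the Threads_* gauges), so for alerting you want the rate of change, not the raw value.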

Grafana Dashboard Query

# Prometheus queries for MySQL connection monitoring
# Connection usage percentage
mysql_global_status_threads_connected / mysql_global_variables_max_connections * 100

# Sleeping connections count
mysql_global_status_threads_connected - mysql_global_status_threads_running

# Connection errors (should be 0)
rate(mysql_global_status_connection_errors_total{error="max_connections"}[5m])

CloudWatch Metrics (RDS)

For RDS, set CloudWatch alarms on:

  • DatabaseConnections - Alert at 80% of your max_connections value
  • CPUUtilization - High CPU often correlates with connection issues
  • FreeableMemory - Low memory means you cannot safely increase connections

If you are running Nginx as your reverse proxy and it starts returning 502 errors because PHP-FPM cannot connect to MySQL, check our guide on fixing Nginx 502 Bad Gateway errors for that side of the problem.

Check If Your Database Is Exposed

MySQL should never be accessible from the internet. Use SecureBin Exposure Checker to verify your database ports are not publicly reachable.


Common Mistakes (Do Not Make These)

  1. Setting max_connections to 10000 and calling it fixed. If you actually use 10000 connections, your server needs ~40 GB of RAM just for connection buffers. You are papering over the real problem.
  2. Not lowering wait_timeout. Increasing max_connections without reducing wait_timeout means zombie connections just take longer to fill the bigger pool. You are buying time, not fixing anything.
  3. Using SET GLOBAL but forgetting my.cnf. Your fix vanishes on the next restart. Always update both.
  4. Enabling persistent connections in PHP without understanding the implications. Persistent connections + high wait_timeout + many PHP-FPM workers = guaranteed connection exhaustion.
  5. Killing all connections blindly. Always filter by Command = 'Sleep' and a reasonable Time threshold. Killing active queries causes rollbacks and can leave data in an inconsistent state.
  6. Ignoring the application layer. Database tuning is a band-aid if your application leaks connections. Find and fix the code that is not closing connections properly.

Frequently Asked Questions

Can I increase max_connections without restarting MySQL?

Yes. SET GLOBAL max_connections = 500; takes effect immediately with zero downtime. You do not need to restart MySQL. However, this change does not survive a restart, so you also need to update your my.cnf configuration file (or your RDS parameter group) to make it permanent.

Why does MySQL default to only 151 connections?

Each MySQL connection consumes memory for its thread stack, sort buffer, join buffer, read buffer, and temporary tables. On a server with limited RAM, 151 connections is a conservative default that prevents out-of-memory conditions. The right number for your server depends on available RAM, per-connection memory usage, and your workload patterns. A server with 16 GB of RAM can comfortably handle 500-1000 connections. A server with 1 GB of RAM probably cannot handle more than 100.

Is it safe to kill sleeping MySQL connections?

Sleeping connections that have been idle for more than a few minutes are almost always safe to kill. They represent abandoned connections from application processes that did not close them properly. If a connection is actively running a query (showing Query in the Command column, not Sleep), killing it will roll back any in-progress transaction. Only kill connections that are in Sleep state and have a Time value greater than your expected request duration.


The Bottom Line

MySQL "too many connections" is almost never a problem with MySQL itself. It is a problem with abandoned connections, bad timeout defaults, and applications that do not clean up after themselves. The fix is three things working together: a reasonable max_connections based on your available RAM, a low wait_timeout (60-300 seconds) to auto-kill zombie connections, and application-level connection management (pooling, non-persistent connections, proper cleanup).

Do all three and you will not see ERROR 1040 again.

Related reading: Redis Memory: Maxmemory Reached Fix, Nginx 502 Bad Gateway Fix, and 70+ free DevOps and security tools.

Written by Usman Khan
DevOps Engineer | MSc Cybersecurity | CEH | AWS Solutions Architect

Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.