Snowflake Cost Explosion: 5 Real Patterns That Burn Six Figures
A single Snowflake query can burn $10,000 in an hour if it lands wrong. Most teams discover this only when the monthly bill arrives. Five patterns are responsible for almost every runaway Snowflake account, and all of them are visible in QUERY_HISTORY if you know what to look for.
Quick refresher on what you actually pay for
Snowflake bills you for two things that matter at this scale: compute credits (warehouses running) and storage (data sitting in tables). Storage is cheap. Compute is where almost all of the bill comes from. Cloud services and data transfer also appear as line items, but they are usually small; pattern 4 below covers the exception.
Credits cost around $2.00 to $4.00 each in 2026, depending on edition, cloud provider, and contract; Standard sits at the low end, Business Critical at the high end. A Medium warehouse uses 4 credits per hour. An XLarge uses 16. A 4XLarge uses 128. A 6XLarge (the largest single-cluster size) uses 512 credits per hour, which is roughly $1,000 to $2,000 per hour just for one warehouse to be running.
That last sentence is where six-figure incidents start.
Pattern 1: a forgotten warehouse that auto-resumed and never auto-suspended
The cheapest Snowflake setup is "warehouse runs on demand, suspends after 60 seconds idle." The most expensive is "warehouse stays warm for 24 hours so dashboards are fast."
The pattern that burns money:
- An analyst sets up a warehouse with AUTO_SUSPEND = 3600 (suspend after 1 hour idle) and STATEMENT_TIMEOUT_IN_SECONDS = 0 (no query timeout).
- A scheduled job at 2am runs an expensive query that fails halfway through but does not error in a way that suspends the warehouse.
- The warehouse stays running for 1 hour idle after the failure, then auto-suspends.
- Repeat every night. Each idle hour at XLarge costs $32 to $64 just for being on.
- 10 warehouses doing this = $300 per night just in idle costs. $9,000 per month for nothing.
The diagnostic SQL:
SELECT
warehouse_name,
SUM(credits_used) AS credits,
SUM(credits_used) * 3.00 AS dollars_estimated  -- 3.00 is an assumed credit price; use your contracted rate
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits DESC;
If a warehouse near the top of that list has burned far more credits than its actual query workload can explain, the difference is idle burn.
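To make that comparison concrete, put metered credits next to the busy hours implied by QUERY_HISTORY. A minimal sketch, assuming a $3.00 credit price; credits and busy hours are not directly comparable without factoring in warehouse size, so treat a large gap as a signal rather than a measurement:
WITH metered AS (
    SELECT warehouse_name, SUM(credits_used) AS credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name
),
busy AS (
    -- execution_time is in milliseconds; convert to hours
    SELECT warehouse_name, SUM(execution_time) / 3600000.0 AS busy_hours
    FROM snowflake.account_usage.query_history
    WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name
)
SELECT
    m.warehouse_name,
    m.credits,
    m.credits * 3.00 AS dollars_estimated,
    COALESCE(b.busy_hours, 0) AS busy_hours
FROM metered m
LEFT JOIN busy b ON b.warehouse_name = m.warehouse_name
ORDER BY m.credits DESC;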
The fix (AUTO_SUSPEND is in seconds, and 60 is the practical floor because Snowflake bills a minimum of 60 seconds each time a warehouse resumes):
ALTER WAREHOUSE my_warehouse
SET AUTO_SUSPEND = 60
AUTO_RESUME = TRUE
STATEMENT_TIMEOUT_IN_SECONDS = 3600;
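To find every other warehouse still carrying a long suspend window, the SHOW output can be queried directly. A quick sketch using the RESULT_SCAN idiom; the quoted lowercase column names are how SHOW WAREHOUSES returns them:
SHOW WAREHOUSES;
-- A NULL auto_suspend means the warehouse never suspends on its own
SELECT "name", "size", "auto_suspend", "auto_resume"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
ORDER BY "auto_suspend" DESC NULLS FIRST;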
Pattern 2: a query that scans the entire table because the cluster key is wrong
Snowflake stores data in immutable micro-partitions and keeps min/max metadata for every column in every partition. Queries stay cheap when that metadata lets Snowflake prune partitions before scanning. If your query filters on a column that is not the cluster key (or a column correlated with the cluster key), the filter values are scattered across every partition, nothing is pruned, and every query becomes a full table scan.
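Before rewriting anything, you can check how well a candidate filter column lines up with the physical layout. SYSTEM$CLUSTERING_INFORMATION reports depth and overlap statistics for any column list; the table and column here are illustrative:
-- A high average_depth in the output means poor pruning on this column
SELECT SYSTEM$CLUSTERING_INFORMATION('orders', '(customer_id)');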
The pattern:
- Table has 50 TB of order data clustered by order_date.
- Analytics team writes a query that filters by customer_id.
- Each query scans 50 TB instead of one day's partitions.
- Query takes 45 minutes on Medium: 3 credits, roughly $9 to $12 per run.
- Multiplied across dashboards that auto-refresh every 30 seconds (120 overlapping runs per hour, absorbed by a multi-cluster warehouse scaling out), that is up to $1,440 per hour.
The diagnostic:
SELECT
query_id,
query_text,
warehouse_size,
bytes_scanned / POWER(1024, 4) AS tb_scanned,
total_elapsed_time / 1000 AS seconds,
-- execution_time is in milliseconds; hours times credits-per-hour for the size
credits_used_cloud_services + (execution_time / 3600000.0) * (CASE warehouse_size
WHEN 'X-Small' THEN 1
WHEN 'Small' THEN 2
WHEN 'Medium' THEN 4
WHEN 'Large' THEN 8
WHEN 'X-Large' THEN 16
ELSE 32 END) AS credits_estimated  -- extend the CASE if you run 2X-Large or above
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD(day, -1, CURRENT_TIMESTAMP())
AND bytes_scanned > 100 * POWER(1024, 3)
ORDER BY credits_estimated DESC
LIMIT 50;
Run that. If you see queries scanning 1 TB+ that should not be, fix the cluster key or add a more selective filter:
-- Re-cluster by the column the query actually uses
-- (note: automatic reclustering of a large table consumes serverless credits up front)
ALTER TABLE orders CLUSTER BY (customer_id, order_date);
-- Or, faster fix: rewrite the query to include the partition key
SELECT * FROM orders
WHERE customer_id = 12345
AND order_date >= DATEADD(day, -90, CURRENT_DATE());
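Once either fix ships, verify that pruning actually improved. QUERY_HISTORY exposes partitions_scanned and partitions_total, and a ratio near 1.0 means the table is still being scanned end to end. A sketch, filtering on an illustrative table name:
SELECT
    query_id,
    partitions_scanned,
    partitions_total,
    partitions_scanned / NULLIF(partitions_total, 0) AS scan_ratio
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD(day, -1, CURRENT_TIMESTAMP())
AND query_text ILIKE '%orders%'
ORDER BY scan_ratio DESC NULLS LAST
LIMIT 20;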
Pattern 3: dbt full-refresh that runs every night instead of incremental
dbt models run as either incremental (process new rows) or full-refresh (rebuild entire table). If somebody pushes a model with materialized: 'table' instead of materialized: 'incremental', every nightly run rebuilds the entire table from scratch.
For a fact table that has grown to 5 TB, full-refresh on Large warehouse = 8 hours = roughly $250 per run. Run nightly = $7,500 per month for one bad model.
How to find them:
SELECT
query_text,
COUNT(*) AS runs,
AVG(total_elapsed_time / 1000) AS avg_seconds,
AVG(bytes_scanned / POWER(1024, 4)) AS avg_tb_scanned
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
AND query_text ILIKE '%create or replace table%'
AND warehouse_name LIKE '%DBT%'
GROUP BY query_text
HAVING COUNT(*) >= 5
ORDER BY avg_tb_scanned DESC;
If you see create or replace table running daily and scanning multiple TB, switch the model to incremental (the first run after the switch still rebuilds the table once; every run after that processes only new rows):
{{ config(
materialized='incremental',
unique_key='order_id',
on_schema_change='append_new_columns'
) }}
SELECT * FROM source_table
{% if is_incremental() %}
WHERE updated_at > (SELECT MAX(updated_at) FROM {{ this }})
{% endif %}
Pattern 4: cross-cloud egress charges from external functions and Snowpipe streaming
Snowflake compute runs in the cloud region where your Snowflake account is deployed. If your S3 bucket is in us-east-1 and your Snowflake account is in us-west-2, every byte loaded crosses regions and bills at AWS inter-region transfer rates (about $0.02 per GB).
For a team loading 10 TB per day across regions, that is $200 per day, $6,000 per month, on top of Snowflake compute.
The diagnostic is in your AWS bill, not Snowflake. In Cost Explorer, group by usage type and look for inter-region transfer entries (they look like USE1-USW2-AWS-Out-Bytes; DataTransfer-Regional-Bytes is cross-AZ traffic within one region) attributed to the source bucket. If a bucket is feeding Snowflake and showing high inter-region transfer, you are crossing regions.
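Transfer that Snowflake itself bills (unloading, replication, cross-region external stages) is also visible from inside the account. A sketch against the ACCOUNT_USAGE view, assuming the documented DATA_TRANSFER_HISTORY columns:
SELECT
    source_cloud,
    source_region,
    target_cloud,
    target_region,
    transfer_type,
    SUM(bytes_transferred) / POWER(1024, 4) AS tb_transferred
FROM snowflake.account_usage.data_transfer_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY 1, 2, 3, 4, 5
ORDER BY tb_transferred DESC;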
The fix is to put your Snowflake account in the same region as your data, or to use Snowflake database replication to keep a copy in the data's region. Both require coordination with your account rep.
Pattern 5: query result cache misses caused by NOW() and CURRENT_TIMESTAMP() in dashboards
Snowflake persists query results for 24 hours (and extends that window each time a result is reused, up to 31 days). Any dashboard query that returns the same result within that window costs zero compute credits to re-run.
The cache is bypassed by any query containing a non-deterministic function, since its result could legitimately differ on every run. The most common offender:
-- This never hits the result cache: NOW() must be evaluated at execution time
SELECT *
FROM orders
WHERE order_date >= DATEADD(day, -7, NOW());
Even though the underlying data has not changed in the last 5 minutes, every dashboard refresh re-runs the full query.
The fix:
-- Anchor the filter to a value that changes once a day, so the cache holds for 24 hours
SELECT *
FROM orders
WHERE order_date >= DATEADD(day, -7, CURRENT_DATE());
For a busy BI tool with 50 active users refreshing the same dashboard every minute, this single change can drop credit usage by 70 percent.
Other functions that bust the cache: CURRENT_TIMESTAMP(), RANDOM(), UUID_STRING(), any user-defined function that is not marked IMMUTABLE, and any query against a non-deterministic external function.
Guardrails that prevent the next explosion
Five settings to apply across every warehouse and account:
- Resource monitors with a hard cap. Set a monthly credit cap per warehouse. Snowflake will suspend the warehouse automatically when the cap is hit. Better than learning about overages from your finance team. Note that a monitor does nothing until it is attached to a warehouse, hence the ALTER below.
CREATE RESOURCE MONITOR analytics_monitor
WITH CREDIT_QUOTA = 5000
FREQUENCY = MONTHLY
START_TIMESTAMP = IMMEDIATELY
TRIGGERS ON 75 PERCENT DO NOTIFY
ON 90 PERCENT DO SUSPEND
ON 100 PERCENT DO SUSPEND_IMMEDIATE;
ALTER WAREHOUSE my_warehouse SET RESOURCE_MONITOR = analytics_monitor;
- Statement timeout. Per warehouse, set a max query duration. Anything that runs longer than the timeout is killed automatically.
ALTER WAREHOUSE my_warehouse SET STATEMENT_TIMEOUT_IN_SECONDS = 1800;
- Query tagging. Force every connection to tag its queries with a project or team identifier. Then you can attribute spend.
ALTER USER analytics_user SET DEFAULT_NAMESPACE = 'analytics_db.public' QUERY_TAG = 'team=analytics';
- Cluster keys actually matched to query patterns. Audit the top 10 queries on each large table monthly (see the sketch after this list). If the WHERE clause does not include the cluster key, re-cluster.
- Warehouse sizing review. Quarterly review of warehouse sizes against actual usage. Most teams have at least one warehouse oversized by 2x.
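For the cluster-key audit, ACCESS_HISTORY can tie queries back to the tables they touch. A minimal sketch, assuming Enterprise edition or above (the view is not available on Standard) and an illustrative fully qualified table name:
WITH table_queries AS (
    SELECT a.query_id
    FROM snowflake.account_usage.access_history a,
         LATERAL FLATTEN(input => a.base_objects_accessed) f
    WHERE f.value:"objectName"::STRING = 'ANALYTICS_DB.PUBLIC.ORDERS'
      AND a.query_start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
)
SELECT
    q.query_text,
    COUNT(*) AS runs,
    SUM(q.total_elapsed_time) / 1000 AS total_seconds
FROM snowflake.account_usage.query_history q
JOIN table_queries t ON t.query_id = q.query_id
GROUP BY q.query_text
ORDER BY total_seconds DESC
LIMIT 10;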
Audit your queries before you ship them
Bad SQL is the most common cause of Snowflake bill spikes. Use our SQL Formatter to clean up complex queries, our Regex Tester to validate filter patterns, and our JSON Formatter to parse Snowflake's metadata exports.
The bottom line
Six-figure Snowflake bills are almost always caused by the same five patterns. Idle warehouses without auto-suspend. Full table scans because of missing cluster keys. dbt full-refresh on growing tables. Cross-region data transfer. Cache misses from non-deterministic functions in dashboards.
The diagnostic queries take 5 minutes to run. The fixes take an hour to apply. The savings are usually 50 to 80 percent of your current bill. The hard part is getting permission to actually change things in production, which is a different problem.
Related reading: SQL Index Optimization, Database Normalization Guide, AWS Bill Spike Incident Response, Datadog vs Grafana vs New Relic Cost Comparison, and SQL Joins Explained.