What’s New in dbt 1.10: We Read the Release Notes So You Don’t Have To
dbt Core 1.10 dropped on June 16, 2025, and it brings powerful capabilities: sample builds for faster iteration, robust validation, hybrid catalog support, micro-batch awareness, Python 3.13, and new behavior flags—all while paving the way for the next-gen engine.
We’ve distilled the changelog and docs to highlight what matters most—so you can upgrade confidently.
Who is this for: hands-on dbt developers.
Table of Contents
- 🎯 TL;DR
- 🔧 New Features & Enhancements
- ⚠️ Breaking Changes & Behavior Flags
- 🛠️ Upgrade Path & Tips
- 💡 Example Configurations & SQL
- 🚀 Why 1.10 Matters Now
- 🧪 Final Thoughts
🎯 TL;DR
Here’s your TL;DR of what dbt 1.10 delivers:
- Sample mode: `--sample` allows lightweight builds, ideal for devs and CI.
- Micro-batch awareness: new `batch` context and correct pre-/post-hook execution.
- Snapshot hard deletes: `hard_deletes="new_record"` mode to track deletions.
- Freshness via SQL and model-level config for adaptive jobs.
- Catalogs support: reads `catalogs.yml`, a milestone toward Iceberg/Unity.
- Validation & linting: YAML/JSON schema checks, duplicate-key detection, deprecations, and optional macro argument validation.
- Python 3.13 compatibility.
- Artifact enhancements: a new `invocation_started_at` timestamp and direct Cloud upload.
🔧 New Features & Enhancements
This section walks through the new features and enhancements dbt Labs is shipping in this release.
🔄 Sample Mode
dbt’s biggest UX improvement: sample mode for the `run` and `build` commands.
Relative time spec sampling
dbt run --select models/staging/stg_orders --sample="3 days"
dbt run --select models/staging/stg_orders --sample="6 hours"
Finer controls in sampling
dbt run --sample="{'start': '2025-01-01', 'end': '2025-01-02 18:00:00'}"
You can also prevent a ref from being sampled by using the `.render()` method:
with
source as (
select * from {{ ref('stg_customers').render() }}
),
...
Why sample?
Sampling creates a reduced, time-bound data slice for testing, making it perfect for dev and CI workflows where speed and cost matter.
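Under the hood, sampling narrows every ref and source that has an event_time configured down to the requested window. Here is a rough sketch of the effective filter, assuming stg_orders lives in an analytics schema and uses session_start as its event_time (illustrative only; the exact date function dbt generates depends on your adapter):
-- approximately what {{ ref('stg_orders') }} resolves to under --sample="3 days"
select *
from analytics.stg_orders
where session_start >= dateadd('day', -3, current_timestamp)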
🐣 Micro-Batch Hooks
For models configured with micro-batches, dbt introduces a `batch` Jinja object:
- `batch.first()` / `batch.last()` identify boundaries.
- Pre-hooks only run on the first batch, post-hooks on the final batch.
{{ config(
materialized='incremental',
incremental_strategy='microbatch',
event_time='session_start',
begin='2020-01-01',
batch_size='day'
) }}
{% if batch.first() %}
{{ log("🔔 Starting first micro-batch") }}
{% endif %}
select
sales_id,
transaction_date,
customer_id,
product_id,
total_amount
from {{ source('sales', 'transactions') }}
{% if batch.last() %}
{{ log("🔔 Starting first micro-batch") }}
{% endif %}
You can read more about microbatching incremental models here.
🚫 Snapshot Hard Deletes
Capture deletion history by recording new rows:
snapshots:
  - name: my_snapshot
    config:
      hard_deletes: new_record # options are: 'ignore', 'invalidate', or 'new_record'
      strategy: timestamp
      updated_at: updated_at
    columns:
      - name: dbt_valid_from
        description: Timestamp when the record became valid.
      - name: dbt_valid_to
        description: Timestamp when the record stopped being valid.
      - name: dbt_is_deleted
        description: Indicates whether the record was deleted.
{{
config(
unique_key='id',
strategy='timestamp',
updated_at='updated_at',
hard_deletes='new_record'
)
}}
Note that `new_record` will create a new metadata column, `dbt_is_deleted`, in the snapshot table.
When to use `hard_deletes: new_record`?
- Large-volume tables where explicitly tracking deleted records is needed and beneficial.
- Retaining continuous snapshot history without any gaps.
- Explicitly tracking deletions by adding new rows with a `dbt_is_deleted` column.
Whenever a record disappears from the upstream source, dbt inserts a new row marking deletion.
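To consume such a snapshot downstream, you will usually want to filter those deletion markers out. A minimal sketch, assuming dbt_is_deleted is materialized as the string 'False' for live rows (verify how your adapter types this column):
-- current, non-deleted records from the snapshot
select *
from {{ ref('my_snapshot') }}
where dbt_valid_to is null
  and dbt_is_deleted = 'False'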
⏰ Source Freshness via SQL & Model-Level Config
Freshness checks are more flexible:
version: 2
sources:
  - name: authors_source
    config:
      freshness:
        warn_after:
          count: 1
          period: day
        error_after:
          count: 3
          period: day
      # Use either a column expression...
      loaded_at_field: "convert_timezone('Australia/Sydney', 'UTC', created_at_local)"
      # ...or an arbitrary SQL query (they are mutually exclusive):
      loaded_at_query: |
        select max(last_seen) from raw.authors
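The query form is handy when the source table itself has no trustworthy timestamp column. A hedged sketch, assuming a hypothetical raw.load_audit table that logs each ingest:
-- hypothetical audit table recording every load into raw.authors
select max(finished_at)
from raw.load_audit
where table_name = 'authors'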
Plus, you can define freshness directly at the model level for adaptive jobs:
models:
  - name: stg_orders
    config:
      freshness:
        build_after:
          count: 1
          period: day
          updates_on: any
📋 Catalog Parsing & External Catalog Config
Initial support for catalogs.yml
enables hybrid setups and Iceberg integration. Create a catalogs.yml
at the root level of the dbt project. An example of Snowflake Horizon as the catalog is shown below:
catalogs:
  - name: catalog_horizon
    active_write_integration: snowflake_write_integration
    write_integrations:
      - name: snowflake_write_integration
        external_volume: dbt_external_volume
        table_format: iceberg
        catalog_type: built_in
Now you can apply the catalog configuration in the model config:
{{
    config(
        materialized='table',
        catalog='catalog_horizon'
    )
}}
select * from {{ ref('jaffle_shop_customers') }}
Read more about catalogs and supported configurations here.
🧹 YAML/JSON Schema Validation
With dbt Core 1.10 and above, we have duplicate key detection in YAML, warnings for deprecated custom keys, JSON schema validation for dbt_project.yml
and other YAMLs.
Before dbt 1.10, if your profiles.yml
file contains 2 profiles with same key, dbt just uses the last one. Or if you have a schema file with repeated top level keys like this one:
# models/schema.yml
version: 2
models:
  - name: my_model_a
    description: "This is a"
    columns:
      - name: user_name
        tests:
          - not_null
models:
  - name: my_model_b
    description: "This is b"
    columns:
      - name: user_name
        tests:
          - not_null
Only the last `models` key would apply when you executed dbt tasks that depend on them, like `test` or `docs generate`. This has been fixed: dbt now detects the duplicate and throws a warning, so you can merge the two lists under a single `models:` key. Nice!
⚙️ Artifact Metadata & Cloud Upload
dbt Core users can now upload artifacts, such as the run_results.json file, from local runs to dbt Cloud after the invocation finishes. This is especially beneficial for hybrid projects, fostering collaboration between dbt Cloud and dbt Core users for a more connected dbt experience.
As part of this change, an `invocation_started_at` field has been added alongside the `invocation_id` field to make certain types of runtime calculations easier.
🐍 Python 3.13 Support
Full compatibility with the latest Python 3.13 runtime. Learn more about Python 3.13 here.
⚠️ Breaking Changes & Behavior Flags
This section covers the breaking changes in this release.
🚫 Spaces in Resource Names Disallowed
By default, spaces in model/table names are now blocked, controlled by the `require_resource_names_without_spaces` behavior flag.
When the require_resource_names_without_spaces
flag is set to True
, dbt will raise an exception (instead of a deprecation warning) if it detects a space in a resource name.
🔁 Behavior Flag Updates
- `source_freshness_run_project_hooks` is now true by default; legacy workflows may need adjustment.
- The `warn_error` setting promotes warnings to errors so they fail builds.
- New toggles let you migrate away from deprecated behaviours gradually.
You can read more about behaviour flags here.
How to explicitly set the new flags? Behavior flags live under the top-level `flags:` key in dbt_project.yml:
flags:
  validate_macro_args: true
  warn_error: true
  require_resource_names_without_spaces: true
  source_freshness_run_project_hooks: true
🕒 Artifacts Change
The `invocation_started_at` addition in the run_results.json artifact may require updates in downstream integrations.
🛠️ Upgrade Guide
We recommend following the official upgrade guide. But in any case, here is our checklist if you're upgrading.
Step 1: Pin & Install
pip install --upgrade dbt-core dbt-postgres # Or your adapter
Or in dbt Cloud, use the “Latest” or “Compatible” release track.
Step 2: Dry Run & Lint
dbt compile
dbt build --sample="3 days"
Watch for deprecation warnings and validation failures. If you enable `validate_macro_args`, you'll catch invalid macro calls before runtime.
Step 3: Behavior Flags
Add settings in dbt_project.yml under the top-level `flags:` key, including the new behavior flags:
flags:
  validate_macro_args: true
  warn_error: true
  require_resource_names_without_spaces: true
  source_freshness_run_project_hooks: true
Step 4: Test Catalogs & Snapshots
Check:
- snapshots using `hard_deletes="new_record"`
- `catalogs.yml` parsing and reference integration
- micro-batch logic with Jinja `batch` behavior
Step 5: Update CI & Cloud
- Use `--sample` to speed up CI
- Pin environments/jobs to the correct release track (`latest`, `compatible`)
- Monitor artifact uploads if using hybrid Cloud deployments
Step 6: Production Rollout
- Once staging checks out, upgrade production environments
- Expect cleaner builds, faster iteration, and schema-sound metadata
- Continue toggling new flags—especially around warnings and naming
🚀 Why 1.10 Matters Now
- Iteration Speed & Cost-Savings: Development build times drop with sample mode.
- Configuration Hygiene: Early detection of errors with validation.
- Micro-batch Control: Precision hooks and logging in batch jobs.
- Hybrid & Catalog Readiness: Snowflake/BigQuery/Iceberg users are first in line.
- Stability for the Future: Python 3.13 support and artifacts geared for the new engine.
🧪 Final Thoughts
dbt 1.10 introduces game-changing features around speed, safety, and scale. From sample mode and batch context to improved automations and hybrid metadata support, this release positions your team for low-cost iteration and readiness for upcoming engine upgrades.
Adopt it gradually. Start with sampling and validation, tune behavior flags, test in staging, and roll out to prod. The payoff? Faster dev cycles, cleaner configs, more robust builds, and alignment with next-gen dbt.
Need help upgrading or auditing behavior-flag decisions? I’m here—drop me a note!
What's your favourite feature/bug fix in this release? Let me know in the comments.