So you are telling me that there was 6 TB of shared and deduplicated data in the most recent snapshot that was referenced in the snapshots from days 8–13?
I believe the answer is YES.
Your original statement has a clue in it:
"Last week we had 14 days worth of snapshots and due to some storage growth, we changed this to 7 days worth of snapshots." I believe you had some storage growth due to an unexpected change rate, which got captured in your "older" snapshots.
Those ~6 TB were blocks whose last references lived in the older snapshots. When you deleted the older snapshots (the ones 8–14 days old; see the sketch after this list):
- Those blocks either became unreferenced (and were freed) or remained only because they were coalesced/deduped with active or newer data.
- The array’s “space that would be freed if you delete the most-recent snapshot” metric shrank—from ~11 TB to ~4.6 TB.
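To make the reference-counting part concrete, here is a minimal sketch in Python (a toy model I made up, not your array's actual metadata or any vendor API): a block only gets freed when its last referencer, whether the live volume or a snapshot, goes away.

```python
# Toy model: a block stays allocated as long as at least one referencer
# (the live volume or any snapshot) still points at it.

def freed_blocks(refs: dict[str, set[str]], deleted: set[str]) -> set[str]:
    """Blocks whose only remaining references lived in the deleted snapshots."""
    return {blk for blk, holders in refs.items() if holders and holders <= deleted}

# Hypothetical blocks: "temp_6tb" is held only by the old snapshots S1-S7,
# while "churn" is also held by newer snapshots, so it survives the cleanup.
refs = {
    "temp_6tb": {"S1", "S2", "S3", "S4", "S5", "S6", "S7"},
    "churn":    {"S7", "S8", "S14"},
}
old_snaps = {f"S{i}" for i in range(1, 8)}   # S1..S7, the 8- to 14-day-old ones

print(freed_blocks(refs, old_snaps))         # {'temp_6tb'}
```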
Scenario
- Daily snapshot at 00:00.
- A 6 TB temp dataset exists on Days 1–7, then is deleted on Day 8.
- Other normal churn adds ~4–5 TB across the week.
Before: 14-day retention (S1–S14 kept)
- The array must retain those 6 TB of blocks because S1–S7 still reference them.
- Due to how “would-free” is attributed, a large chunk of that 6 TB can be charged to the newest snapshot’s metric (e.g., S14 shows ~11 TB would-free).
After: cut to 7-day retention (keep S8–S14; delete S1–S7)
- All references to the 6 TB dataset are gone (they only lived in S1–S7).
- Those blocks are fully freed.
- The “most-recent” snapshot’s would-free drops (e.g., from ~11 TB → ~4.6 TB), even though S8–S14 didn’t “contain” that data. The accounting changed because the old snapshots that forced those blocks to be retained no longer exist (see the sketch below).
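Here is a rough simulation of that before/after, with made-up chunk sizes and one big assumption about attribution (I am treating everything held only by snapshots as rolled up into the newest snapshot's reported number; your array's exact accounting almost certainly differs in the details), just to show where the ~11 TB → ~4.6 TB swing comes from:

```python
# Toy before/after simulation, sizes in TB. ASSUMED attribution rule for
# illustration only: space referenced by snapshots but not by the live volume
# is reported against the newest snapshot.

LIVE = "live"

def newest_snapshot_metric(chunks, kept_snapshots):
    """Total space held only by the kept snapshots (not by the live volume)."""
    kept = set(kept_snapshots) | {LIVE}
    total = 0.0
    for size_tb, holders in chunks:
        holders = holders & kept              # references from deleted snapshots are gone
        if holders and LIVE not in holders:   # snapshot-only space
            total += size_tb
    return total

s1_s7  = {f"S{i}" for i in range(1, 8)}       # oldest week
s8_s14 = {f"S{i}" for i in range(8, 15)}      # newest week

chunks = [
    (6.0, s1_s7),                             # temp dataset, deleted Day 8 -> only old snaps hold it
    (4.6, s8_s14),                            # normal churn captured by the newer week
    (50.0, s1_s7 | s8_s14 | {LIVE}),          # unchanged data still shared with the live volume
]

before = newest_snapshot_metric(chunks, s1_s7 | s8_s14)    # 14-day retention
after  = newest_snapshot_metric(chunks, s8_s14)            # trimmed to 7 days
print(f"before: ~{before:.1f} TB, after: ~{after:.1f} TB") # before: ~10.6 TB, after: ~4.6 TB
```

If you plug in your own chunk sizes and retention, the same shape should fall out: the big drop is the 6 TB whose only references were in the deleted week.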
Takeaway: Snapshot sizes are interdependent. Deleting older snapshots can shrink the reported size of the newest snapshot when those older snapshots held the last references to big, now-deleted data. For forecasting, model daily unique churn (including short-lived datasets) rather than summing per-snapshot numbers.
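And a minimal sketch of the forecasting idea, with made-up daily churn numbers and a hypothetical helper name, just to show the shape of the calculation:

```python
# Sketch: estimate snapshot space from daily unique churn over the retention
# window, rather than summing the per-snapshot numbers the array reports
# (those can double-count or misattribute shared space).

def forecast_snapshot_space(daily_unique_churn_tb: list[float], retention_days: int) -> float:
    """Sum the unique churn the snapshots inside the retention window must hold."""
    return sum(daily_unique_churn_tb[-retention_days:])

# Hypothetical two weeks: ~0.6-0.7 TB/day of normal churn, plus a short-lived
# 6 TB dataset written early in the window (it counts as churn the older
# snapshots have to keep until they age out).
churn = [0.6, 0.6, 6.6, 0.6, 0.7, 0.6, 0.7, 0.6, 0.7, 0.6, 0.7, 0.6, 0.7, 0.6]

print(f"14-day retention: ~{forecast_snapshot_space(churn, 14):.1f} TB")  # ~14.9 TB
print(f" 7-day retention: ~{forecast_snapshot_space(churn, 7):.1f} TB")   # ~4.5 TB
```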
Hope it helps,
Garry Ohanian