
If you’re moving from spreadsheets into a database-backed analytics setup, there’s one model I nearly always recommend first: One Big Table (OBT).
Why? Because the experience feels familiar. You can query, filter, and pivot almost exactly as you would in Excel or Tableau — but now you’re harnessing the power and scalability of the cloud. No extract size limits, no complex joins to remember. For most teams, it’s a seamless step up that enables faster insights with less technical friction.
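To make that concrete, here is a minimal sketch of the OBT workflow, using pandas as a stand-in for the warehouse. The table and column names (order_date, region, product, revenue) are hypothetical, purely for illustration:

```python
# Minimal sketch: filtering and pivoting a single denormalised "One Big Table",
# much like an Excel pivot table. Table and column names are illustrative.
import pandas as pd

# In practice this would come from a warehouse query such as SELECT * FROM sales_obt;
# here we build a tiny in-memory example instead.
sales_obt = pd.DataFrame({
    "order_date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-03", "2024-02-15"]),
    "region":     ["EMEA", "AMER", "EMEA", "AMER"],
    "product":    ["Widget", "Widget", "Gadget", "Gadget"],
    "revenue":    [1200.0, 950.0, 700.0, 1100.0],
})

# Filter: only EMEA orders, a single WHERE-style condition with no joins needed.
emea = sales_obt[sales_obt["region"] == "EMEA"]

# Pivot: revenue by month and product, the shape a spreadsheet user expects.
pivot = pd.pivot_table(
    emea.assign(month=emea["order_date"].dt.to_period("M")),
    values="revenue",
    index="month",
    columns="product",
    aggfunc="sum",
)
print(pivot)
```

The only modelling the analyst has to think about is which columns to pick; everything else is just filtering and aggregating one flat table.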
When I work with Tableau and Alteryx users taking their first steps into Snowflake or BigQuery, this model makes the transition intuitive. It aligns well with the analytical mindset they already have while introducing better practices like central data resources, automation, and governance.
That said, there’s a natural evolution. Once a company has a well-defined data warehouse or starts introducing dbt, the conversation becomes more nuanced. Here I tend to favour a Medallion architecture, separating raw, processed, and presentation layers, for the traceability and modularity it brings.
But even then, the presentation or “gold” layer often looks surprisingly like a One Big Table. Business users still benefit from having single, denormalised views optimised for analysis, even if they’re built on more engineered foundations beneath the surface.
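Here is a minimal sketch of that layering, again with hypothetical table and column names: raw inputs are cleaned into a processed layer, then joined into a single denormalised gold table that looks just like an OBT to the business user. In a dbt project the same idea would live in staging and mart models rather than DataFrames, but the shape is the same:

```python
# Minimal Medallion-style sketch: raw -> processed -> gold (OBT-like).
# Names are hypothetical; in a real setup each layer would be a warehouse schema
# or a set of dbt models, not in-memory DataFrames.
import pandas as pd

# Bronze / raw layer: data as it arrives, warts and all.
raw_orders = pd.DataFrame({
    "order_id": [1, 2, 2, 3],                       # note the duplicate row
    "customer_id": [10, 11, 11, 10],
    "amount": ["100.0", "250.5", "250.5", "80.0"],  # amounts arrive as strings
})
raw_customers = pd.DataFrame({
    "customer_id": [10, 11],
    "customer_name": ["Acme Ltd", "Globex"],
    "region": ["EMEA", "AMER"],
})

# Silver / processed layer: typed, deduplicated, still normalised.
orders = (
    raw_orders
    .drop_duplicates(subset="order_id")
    .assign(amount=lambda d: d["amount"].astype(float))
)
customers = raw_customers.copy()

# Gold / presentation layer: one denormalised table, ready to query or pivot.
orders_obt = orders.merge(customers, on="customer_id", how="left")
print(orders_obt)
```

The joins and cleaning live in the engineered layers underneath; the business user only ever touches the gold table.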
The takeaway:
For new database users or spreadsheet-heavy teams, start simple with OBT.
For mature setups or dbt-driven pipelines, structure your data flow along Medallion lines, but let your end users experience something OBT-like at the top.
Simplicity beats purity every time when it comes to adoption.
You can read my full data model comparison, with the reasoning behind it, in my complete post here → Medium
