Future of the Metrics Layer with Drew Banin (dbt) and Nick Handel (Transform)
Hot takes on what we get wrong about the metrics layer and where it fits in the modern data stack
The metrics layer has been all the rage in 2022. It’s just forming in the data stack, but I’m so excited to see it coming alive. Recently dbt Labs incorporated a metrics layer into their product, and Transform open-sourced MetricFlow (their metric creation framework).
A few weeks ago, I was lucky enough to chat about the metrics layer with two of the most prolific product thinkers in the space — Drew Banin (Co-founder of dbt Labs) and Nick Handel (Co-founder of Transform).
We covered everything from the basics of a metrics layer and what people get wrong about it to real-life use cases and its place in the modern data stack.
Before we begin… WTF actually is a metrics layer? Today metrics are often split across different data tools, and different teams or dashboards end up using different definitions for the same metric. The metrics layer aims to fix this by creating a common set of metrics and their definitions.
Drew and Nick dove more into this definition, so let’s jump right into all of their insights and fiery takes. We talked for over an hour, so this is a condensed, edited version of our discussion. (Check out the full recording here.)
How would you explain the metrics layer to a beginner data analyst?
Since it’s a new concept, there’s a lot of confusion about what the metrics layer really is. Drew and Nick cut through the confusion with succinct definitions centered on creating a common source of truth for metrics.
Drew Banin: “The shortest version I can think of is…
“Define your metrics once and reference them everywhere so that if your metrics ever change, you get updated results everywhere you look at data.”
Nick Handel: “The way that I’ve explained it to family and people who are totally out of the space is just, businesses have data. They use that data to measure their operations. The point of this software is basically to make it really easy for the data analysts (the people who are responsible for measuring that data) to define those metrics, and make it easy for the rest of the business to consume that single correct way to measure that data.”
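Drew’s “define once, reference everywhere” idea can be sketched in a few lines of Python. This is a hypothetical registry, not any real tool’s API — names like `METRICS` and `compile_sql` are illustrative — but it shows why a change to one definition propagates to every consumer:

```python
# A minimal, hypothetical metric registry: one definition, many consumers.
METRICS = {
    "weekly_active_projects": {
        "table": "analytics.project_runs",
        "agg": "COUNT(DISTINCT project_id)",
        "time_column": "run_at",
        "window_days": 7,
    },
}

def compile_sql(metric_name: str) -> str:
    """Every dashboard or notebook compiles SQL from the same definition,
    so changing the definition updates results everywhere at once."""
    m = METRICS[metric_name]
    return (
        f"SELECT {m['agg']} AS {metric_name} "
        f"FROM {m['table']} "
        f"WHERE {m['time_column']} >= CURRENT_DATE - INTERVAL '{m['window_days']} days'"
    )

print(compile_sql("weekly_active_projects"))
```

If the team later decides “active” means a 14-day window, only `window_days` changes, and every tool that compiles through the registry picks it up.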
What is the real problem the metrics layer is looking to solve?
Nick and Drew explained that the metrics layer is motivated by two key ideas: productivity and trust.
Nick: “I think we’re all pretty convinced about the value of data. We have all kinds of different, interesting things that we can do with data, and the cost of doing those things is fairly high. There’s a bunch of work to get the data into the place where we can go and do anything that’s really interesting and useful.
“Why does this matter? It’s supposed to make that whole process of getting the data ready for that delivery of value much easier and also more trustworthy.
“It comes down to those two things: productivity and trust. Is it easy to produce the metric, and is it the right metric? And can you put it into whatever application you’re trying to serve?”
Drew: “That’s really good framing. I just look inwards at our organization. The very first metric we ever created was weekly active projects — how many dbt projects were run in the previous seven days? Now we’re about 250 people and we’re measuring so many things across the business with lots of new people around.
“We’re trying to make sure that when someone says ‘weekly active accounts’ or ‘MRR’ or ‘MRR split by managed versus self-service’, we all mean exactly the same thing.”
Drew and Nick also emphasized change management as both a major challenge and use case for the metrics layer.
Drew: “I think so much about the change management part of it. If you get the right people together, you can precisely define a metric at that point in time. But inevitably your business or product will evolve. How do you keep it in sync in perpetuity? That’s the hard part.”
Nick: “I really agree with that. Especially if change management is happening when there are only a few people in the room, and other people who are depending on the same metrics weren’t a part of that conversation.”
How should we think about the metrics layer, and how should it interplay with other components of the modern data stack?
Nick broke the metrics layer down into four key components (semantics, performance, querying, and governance), while Drew focused on its role as a network connecting a diverse set of data tools.
Nick: “The way that I think about the metrics layer is basically four pieces. There are the semantics: How do I go and define this metric? This can range from ‘Here’s a SQL snippet’ or ‘This is the definition of the metric’ to a full semantic layer that has entities and measures and dimensions and relations.
“Then there’s performance. Great, now I have this semantic model. How do I go and build logic against it, executed against some compute environment (whether it’s a warehouse or just a compute engine on a data lake)?
“Then there’s, how do I query this thing? What are the interfaces that I use to pull it out of the data warehouse or data lake and resolve it into this quantitative object that I can then go and use in some analysis? That includes both broad ways of consuming data (like a Python interface or GraphQL or a SQL interface) as well as direct integrations (a tool that builds a custom wrapper around a REST or GraphQL API to create a really first-class experience).
“Then the last piece is governance. There’s organizational governance and technical governance. Organizational governance meaning, does the finance leader agree on the human-understandable definition of revenue in the same way that the technical person who’s defining the logic defines that code?”
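Nick’s “semantics” piece — entities, measures, and dimensions that queries are generated from — can be sketched as a toy semantic model. Everything here (the `SemanticModel` class, the `orders` example) is a hypothetical illustration of the pattern, not MetricFlow’s actual API:

```python
from dataclasses import dataclass, field

# A toy semantic model: measures and dimensions are declared once,
# and queries are generated from them (illustrative only).
@dataclass
class SemanticModel:
    table: str
    measures: dict = field(default_factory=dict)    # name -> SQL aggregate
    dimensions: list = field(default_factory=list)  # groupable columns

    def query(self, measure: str, group_by: list) -> str:
        unknown = [d for d in group_by if d not in self.dimensions]
        if unknown:
            # Governance in miniature: you can only slice by declared dimensions.
            raise ValueError(f"unknown dimensions: {unknown}")
        cols = ", ".join(group_by)
        return (
            f"SELECT {cols}, {self.measures[measure]} AS {measure} "
            f"FROM {self.table} GROUP BY {cols}"
        )

orders = SemanticModel(
    table="analytics.orders",
    measures={"revenue": "SUM(amount)"},
    dimensions=["country", "plan"],
)
print(orders.query("revenue", ["country"]))
```

The generated SQL then runs against whatever compute environment sits underneath — Nick’s “performance” piece — while the interfaces that call `query` are his “querying” piece.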
Drew: “Just to provide an alternate framing: We can think of it in terms of the experience for the person who wants to consume data to answer some question or solve some problem, and then also the people building the tools where these folks are consuming the data.
“The two are a little bit at odds with each other, because the business consumers want to see the exact same metric in every single tool and they want it all to update in real time. So you have this giant network of different tools that conceivably need to talk to each other. That’s a hard thing to organize and make happen in practice.
“That’s why the idea that we call this the ‘metrics layer’ makes sense. It is a single abstraction layer that everything can interface with so that you can get precise and consistent definitions in every single tool.
“To me, that’s where metadata really shines. Like, this is the metric, this is how it’s defined, this is its provenance, here’s where it’s used. This isn’t actually the data itself. It’s attributes of the data. That’s the information that can synchronize all these different tools together around shared data definitions.”
What metadata should we be tracking about our metrics, and why?
Nick and Drew shared that metadata is key for understanding metrics because companies lose important tribal knowledge about data outages and anomalies over time as staff changes.
Nick: “The metric is one of the most consistent objects in an organization’s life.
“Products change, tables change, everything changes. Even the definitions of these metrics evolve. But most businesses end up tracking the same North Star metrics from the very early days. If you can attach metadata to it, that is incredibly valuable.
“At Airbnb, we tracked nights booked. It was important from the very early days when BI was literally a printed-off graph that they put on the wall, and it’s still the most important metric that the company talks about in the public earnings calls. If we had been tracking important metadata through time of what was happening to that metric, there would be a wealth of knowledge that the company could use.”
They explained that these changes are why it’s crucial for the metrics layer to interact with both the data layer and the business layer — to capture context that affects data analysis and quality.
Nick: “Airbnb had a big product launch, and different metrics spiked in all different directions. Today, I’m not sure that a data scientist at Airbnb could really understand what happened. They’re trying to use historical data to understand things, and they just don’t have that context. If anything, they really only have context for the last two or three years, when there was somebody who’s in the business who remembers what happened, who did the analysis, etc.”
Drew: “There’s a lot of this that ends up being technical — in terms of how tools integrate with each other, and how you define the metrics and version them. But so much of it is indeed the social and business context.
“In practice, the people that have been around for the longest time have the most context and probably know more than any of our actual systems do.
“We had a period where we had a little bit of data loss for some events we were tracking. It looked like, I think it was, May 2021 was the worst month ever. But really it was just like, no, we didn’t collect the data.
“How would you know that? Where does that information live? Is it a property of the source dataset that propagates through to the metrics? Who’s responsible for encoding that?”
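One way to encode the context Drew describes is to attach incident annotations to a metric’s metadata, so anyone querying an affected window sees the caveat. This is a hypothetical sketch (the `ANNOTATIONS` store and `caveats` helper are invented for illustration), with the May 2021 data-loss example from above:

```python
from datetime import date

# Hypothetical incident annotations attached to a metric, so a consumer
# sees known data-quality caveats for the window they're analyzing.
ANNOTATIONS = {
    "weekly_active_projects": [
        {
            "start": date(2021, 5, 1),
            "end": date(2021, 5, 31),
            "note": "Partial event data loss; counts understate true usage.",
        },
    ],
}

def caveats(metric: str, start: date, end: date) -> list:
    """Return notes for any annotation overlapping the queried window."""
    return [
        a["note"]
        for a in ANNOTATIONS.get(metric, [])
        if a["start"] <= end and start <= a["end"]
    ]

print(caveats("weekly_active_projects", date(2021, 4, 15), date(2021, 5, 10)))
```

An analyst pulling April–May 2021 numbers would get the warning automatically, instead of depending on whoever happens to remember the outage.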
What are the real use cases for a metrics layer?
Drew and Nick called out a lot of potential applications for the metrics layer — e.g. improving BI and analytics for early-stage data teams, helping business and data people use data models in the same way, and making valuable but time-consuming applications (like experimentation, forecasting, and anomaly detection) possible for all companies.
Drew: “I think some of the use cases around BI and analytics are the most clear, obvious, and present for a lot of companies.
“Many companies out there are not at the data science and machine learning part of their journeys yet. Things that make business intelligence and reporting better (more precise and more consistent) cover 90% of the problems that they’re trying to solve with data.
“Casting our minds forward, I think that there could be a ton of benefits to leveraging metrics for data science use cases.
“Specifically, one of the things that we’ve seen people do with dbt that was really formative for me — they would build these data models and then use them both for BI reporting and also to power data science applications and modeling. The fact that the data scientist and the BI analysts are using the same data sets means that it’s a lot more likely that they are consuming the same data in the same way. When you extend it to metrics, there’s like a really natural way to make that happen too.”
Nick: “I do partly agree with that. But also there are a lot of data science and machine learning applications that require very different datasets than what a metric store produces.
“In analytics applications, you try and include as much relevant information as possible. If you have an ecommerce store, people can browse it logged out. So you try and dedupe users and identify them as they log into devices. There’s a whole practice of trying to figure out which entities are using your service. That is really important for analytics because it allows us to get a much clearer picture. But you don’t want to do that for machine learning, because that’s all information leakage and it will ruin your models.
“With machine learning, you try and get as close to the raw data sets as possible. With analytical applications, you try and process that information into the clearest and best picture of the world.
“One of the applications that I always think about is experimentation. The reason we built a metrics repo initially was experimentation.
“There were 15–20 people on the data team at the time. We were trying to run more product experiments, and we were doing everything manually. It was really time intensive to go and take assignment logs and metric definitions and join them together.
“Basically, we needed some programmatic way to go and construct metrics. It’s a hugely valuable application for companies that do it, but very few companies have the infrastructure or build the tooling to do this. I think that that’s really unfortunate. And it’s probably the thing that I’m most excited about with the metrics layer.
“If you think about every data application as having some cost and some benefit — the more you can reduce the cost of pursuing that application, the clearer the justification becomes to pursue some new application.
“I think experimentation is one of these examples. I also think about anomaly detection or forecasting. These are things that I think most companies don’t do — not because they’re not valuable, but just because producing the datasets to even get started on those applications is really hard.”
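The manual work Nick describes — joining assignment logs with metric definitions — reduces to a simple per-variant aggregation once metric values come from a shared layer. A toy version, with entirely illustrative data:

```python
# A toy version of the experiment-analysis join Nick describes:
# assignment logs say which variant each user saw, and per-user metric
# values come from the metrics layer (all data here is illustrative).
assignments = {"u1": "control", "u2": "treatment", "u3": "control", "u4": "treatment"}
metric_values = {"u1": 2.0, "u2": 5.0, "u3": 4.0, "u4": 7.0}  # e.g. orders per user

def per_variant_mean(assignments, metric_values):
    """Join assignments to metric values and average within each variant."""
    totals, counts = {}, {}
    for user, variant in assignments.items():
        if user in metric_values:
            totals[variant] = totals.get(variant, 0.0) + metric_values[user]
            counts[variant] = counts.get(variant, 0) + 1
    return {v: totals[v] / counts[v] for v in totals}

# control averages (2 + 4) / 2, treatment averages (5 + 7) / 2
print(per_variant_mean(assignments, metric_values))
```

The point is not the arithmetic but the inputs: when metric definitions live in a shared layer, every experiment reuses the same `metric_values` logic instead of hand-building it per analysis.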
Let’s jump into some questions about the metric layer and the modern data stack.
First, let’s talk bundling vs unbundling. Should the metrics layer even be a separate layer, or should it be part of an existing layer in the stack?
As with every debate in the data ecosystem, we ended up answering: it depends. Drew and Nick explained that how we solve this problem ultimately matters more than what we call the solution.
Drew: “I’m not in love with the way that we as an ecosystem talk about new tools as being layers, like the missing layer of the data stack. That’s the wrong framing.
“People that build applications don’t think about it that way. They have services, and the services can talk to each other. Some are internal services and some are SaaS services. It becomes a network of connected tools rather than exactly, say, four layers. No one runs an application anymore with exactly the Linux, Apache, MySQL, and PHP (LAMP) stack, right? We’re past that.
“The word ‘layer’ makes sense only insofar as it’s a layer of abstraction. But otherwise, I reject the terminology, although I can’t think of anything too much better than that.
“The last thing I’m going to say on bundling and unbundling… For this thing to work, it does need to be an intermediary between a very big network of different tools. Treating it as a boundary like that shapes which tools can build it and provide it. It’s not something you would see from a BI tool, because it’s not really in a BI tool’s interest to provide the layer to every other BI tool — which is the thing that you want from this.”
Nick: “I think I generally agree with that.
“Basically, people have problems, and companies build technologies to solve problems. If people have problems and there is a valuable technology to build, then I think it’s worth taking a shot and trying to build that technology and voicing those opinions.
“Ultimately, I think that there are good points there about the connection to different organizational workflows. This is not something that I think we’ve done a good job of explaining, but I think that the metrics store and the metrics layer are two different concepts.
“The metrics store extends the metrics layer to include this piece of organizational governance — how do you get a bunch of different business users involved in this conversation, and actually give them a role in something that, frankly, they have a huge stake in? I think that that is something that is not really caught in this conversation around the metrics layer, or headless BI, or any of these different words. But it’s really, really important.”
For a traditional company that already has a data warehouse and BI layer, where does the metrics layer fit into their stack?
Again, the answer is that it depends — sigh. The metrics layer would live between the data warehouse and BI tool. However, every BI tool is different and some are friendlier to this integration than others.
Nick: “The metrics layer sits on top of the data warehouse and basically wraps it with semantic information. It then allows different endpoints to be consumed from and basically pushes metrics to those different places, whether they’re generic or direct integrations to those tools.”
Drew: “It ends up being very BI tool–dependent. There are some BI tools where this is a very natural type of thing to do, and others where it’s actually pretty unnatural.”
If a company has already defined a ton of metrics within their BI tool, what should they do?
Nick and Drew explained that slow and steady wins the race when you aren’t starting from scratch. Instead of planning a huge overhaul, start with one team or tool, integrate a better metrics layer, and test how it works for your organization.
Nick: “I would advocate for not a huge ‘change everything all at once’. I would advocate for, define some metrics, push these through the APIs and integrations, build something new, potentially replace something old that was hard to manage, and then go from there once you’ve seen how it works and believe in that philosophy.”
Drew: “I’m with you. I think something domain-driven makes a lot of sense. You can validate it and then expand. I’d probably start with… it depends on your tolerance, but the executive dashboard that goes to the CEO. Is that the best place to kick the tires? Maybe not. But if it works there, it’ll work everywhere.”
Can’t a metrics layer just be part of a feature store?
Since Nick has built multiple feature stores and metrics layers, he had a strong opinion on this topic — while the metrics layer and feature store are similar, they are too fundamentally different to merge right now.
Nick: “I have a really strong opinion about this one because I’ve built two feature stores and three metrics layers. These two things are totally different.
“At the core, they are both derived data. But there are so many nuances to building feature stores and so many nuances to building metric stores. I’m not saying that these two things will never merge — the idea of a derived data repository or something like that sounds wonderful. But I just don’t see it happening in the short term.
“Everyone wants features to be specific to their model. Nobody wants metrics to be specific to their team or their consumption. People want metrics to be consistent. People want features to be unique and whatever benefits their model.
“Real-time versus batch — this is a super challenging problem in the feature space. Organizational governance is way more important for the metrics layer. The technical definitions are often different. The level of granularity is different too — you go way finer with features than you do with metrics.”
Do you believe a caching layer is critical for a metrics layer?
This was a resounding YES from both Drew and Nick. Caching makes the metrics layer fast, which is critical for ensuring that data practitioners actually use it. However, it’s important that this caching doesn’t replicate data.
Drew: “I think that the speed with which you can ask a question and get an answer back is really critical.
“The difference between something taking a minute plus to come back and not coming back at all is negligible in a lot of cases. So, conceptually, I’m very aligned with the idea of caching metric data and being able to serve it up really quickly.
“I will just say — and I think we’ve been open about this in the past — we probably won’t do that for V1 of metrics within dbt. But conceptually, I’m pretty aligned with that being an important part of the system long-term.”
Nick: “Caching is super important. Performance matters a ton, especially to business users. Even 10 seconds is a less-than-ideal experience.
“I think that there are two important nuances to caching. One is, what do I know ahead of time that I want, and how do I pre-compute that and make that really snappy? And then if I do compute something, how do I then reuse it so that it’s fast next time? I think that is the point of a caching layer.
“The other one is, I don’t think that caching needs to happen outside of the cloud data warehouse or the data lake. I think that you can use those systems. The replication of data, in my mind, is just so costly and so hard to manage.”
Finally, if you were handed a megaphone and could blast out a message for the entire data world, what would you say?
“There are a lot of problems in data that you can solve with technology, but some of the hardest and most important ones you must solve with conversations and people and alignment and sometimes whiteboards. Knowing which kind of problem you’re trying to solve at any given time is going to help you pick the right kind of solution.”
“I think the metrics layer is basically a semantic layer with an additional concept of a metric, which is super important. So I would just say, the metrics layer should be backed by a general-purpose semantic layer. The spec and the definition of that semantic layer and its abstractions are so unbelievably important.”
Side note: I’m personally super excited about how a metrics layer can interact with an active metadata platform to supercharge knowledge management for data teams. It’s been super exciting to see the metrics layer become more mainstream, which was a prediction I’d made at the start of this year.
Found this content helpful? I write weekly on active metadata, DataOps, data culture, and our learnings building Atlan at my newsletter, Metadata Weekly. Subscribe here.