Key Performance Indicator (KPI)

A KPI is a quantifiable metric tied to a specific business objective — not just any number you can measure, but one that signals whether the organization is moving toward or away from a strategic goal. Revenue growth, customer churn rate, deployment frequency, net promoter score: these qualify as KPIs because someone changes behavior based on them.

The challenge is not defining KPIs but governing them. When finance calculates "monthly revenue" one way, marketing calculates it another, and the CEO's dashboard shows a third number, the metric stops being useful. That is a metadata problem, and it is where KPIs intersect with data governance.

TL;DR

KPIs are metrics tied directly to strategic objectives — revenue growth, customer retention, operational efficiency. Leading KPIs predict future outcomes (pipeline value, NPS). Lagging KPIs confirm past results (quarterly revenue, churn rate). The biggest KPI problem is not choosing the wrong metrics — it is having the same metric defined differently across departments. When finance and marketing disagree on what "conversion" means, dashboards show conflicting numbers and trust erodes.

What Makes a Good KPI

Five criteria separate a genuine KPI from a number that sits in a dashboard and gathers dust.

Tied to a decision. If no one changes behavior based on this number, it is not a KPI. "Total website visitors" is a metric. "Visitor-to-trial conversion rate" is a KPI — when it drops, the marketing team investigates and acts.

Clearly defined. The calculation method, data sources, and exclusions are documented. Monthly recurring revenue (MRR) is a good KPI: it has a standard calculation (sum of all active subscription values), excludes one-time fees, and uses a specific source system as the system of record.

Consistently measurable. The same person measuring on two different days gets the same result. If the number depends on which spreadsheet someone pulled, or which dashboard they opened, it fails this test.

Owned. Someone is accountable for the metric and its trajectory. MRR is owned by finance. Customer churn is owned by customer success. If nobody owns a KPI, nobody acts on it.

Time-bound. Measured against a target within a specific period. "Reduce churn" is a wish. "Reduce monthly churn from 3.2% to 2.5% by Q4" is a KPI.
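
Taken together, the five criteria describe a structured record rather than a loose number. A minimal sketch in Python; the field names and example values are illustrative assumptions, not a standard schema:

```python
# Minimal sketch: the five KPI criteria encoded as one record.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Kpi:
    name: str          # clearly defined: one canonical name
    calculation: str   # documented method, sources, exclusions
    source: str        # consistently measurable: one system of record
    owner: str         # owned: who is accountable for the trajectory
    target: float      # time-bound: a target value...
    deadline: str      # ...within a specific period
    decision: str      # tied to a decision: who acts when it moves

monthly_churn = Kpi(
    name="Monthly churn rate",
    calculation="cancelled_subscriptions / active_subscriptions_at_month_start",
    source="billing.subscriptions",
    owner="Customer Success",
    target=2.5,        # percent, down from 3.2
    deadline="Q4",
    decision="A rising trend triggers a retention-program review",
)
```

Anything that cannot fill every field is a metric, not a KPI.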

[Figure: Leading vs. lagging indicators. Leading indicators predict what comes next (pipeline value, web traffic, NPS score); lagging indicators confirm what already happened (revenue, churn rate, profit margin).]

Leading vs. Lagging Indicators

Leading indicators predict future results. Lagging indicators confirm past performance. The distinction matters because organizations that only track lagging indicators are driving by looking in the rearview mirror.

Leading indicators signal what is likely to happen. Sales pipeline value predicts next quarter's revenue. Website traffic predicts conversion volume. Employee engagement scores predict attrition six months out. These metrics give teams time to act before the outcome is locked in.

Lagging indicators report what already happened. Quarterly revenue, annual profit margin, customer churn rate — these confirm whether the strategy worked. They are essential for accountability and reporting, but they arrive too late to change course.

The practical insight: every KPI dashboard should include both types. A SaaS company tracking only MRR (lagging) misses the pipeline signals that predict next month's MRR. A company tracking only pipeline value (leading) cannot confirm whether pipeline quality actually converts to revenue. Pairing them — pipeline value now, revenue next quarter — creates a feedback loop that validates or corrects assumptions.
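
As a sketch of that feedback loop (the quarterly figures are invented for illustration): shift the lagging series back one period so each leading reading lines up with the outcome it was supposed to predict, then check whether the two actually move together.

```python
# Minimal sketch: does the leading KPI (pipeline value) actually predict
# the lagging KPI (revenue) one quarter later? Figures are illustrative.
from statistics import correlation  # Python 3.10+

pipeline_value = [4.2, 4.8, 5.1, 4.5, 5.6, 6.0]  # $M, quarters Q1-Q6
revenue        = [2.0, 2.1, 2.4, 2.6, 2.3, 2.8]  # $M, same quarters

# Pair each quarter's pipeline with the NEXT quarter's revenue.
leading = pipeline_value[:-1]
lagging = revenue[1:]

r = correlation(leading, lagging)
print(f"pipeline -> next-quarter revenue correlation: {r:.2f}")
# A weak correlation means the leading indicator is not earning its place.
```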

Amazon tracks weekly inventory turns as a leading indicator for quarterly revenue. When turns slow, it signals softening demand before the revenue impact appears — giving merchandising teams weeks to adjust pricing, promotions, and replenishment.

— Kaplan & Norton, The Balanced Scorecard

The KPI Definition Problem

The most common KPI failure is not picking the wrong metric. It is having multiple conflicting definitions of the same metric.

Consider "active users." Marketing defines it as anyone who logged in within 30 days. Product defines it as anyone who performed a core action within 7 days. Finance defines it as any paid account with at least one session. Marketing reports 500,000 active users. Product reports 200,000. Finance reports 80,000. The CEO asks "how many active users do we have?" and gets three different answers.
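
The arithmetic behind the disagreement is easy to reproduce. A minimal sketch; the event log, helper, and counts are illustrative assumptions:

```python
# Minimal sketch: one event log, three internally consistent "active user"
# counts. The event tuples and helper names are illustrative assumptions.
from datetime import date, timedelta

TODAY = date(2024, 6, 30)

# (user_id, event_type, event_date, is_paid_account)
events = [
    ("u1", "login",       date(2024, 6, 5),  False),
    ("u2", "core_action", date(2024, 6, 28), True),
    ("u3", "login",       date(2024, 6, 29), True),
    ("u4", "core_action", date(2024, 6, 2),  True),
]

def within(days: int, d: date) -> bool:
    return TODAY - d <= timedelta(days=days)

# Marketing: any activity within 30 days.
marketing = {u for u, _, d, _ in events if within(30, d)}
# Product: a core action within 7 days.
product = {u for u, t, d, _ in events if t == "core_action" and within(7, d)}
# Finance: any paid account with at least one session.
finance = {u for u, _, _, paid in events if paid}

print(len(marketing), len(product), len(finance))  # 4 1 3: three "truths"
```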

This is not a measurement error. Each definition is internally consistent and reasonable for its context. The problem is that the organization never agreed on a single canonical definition. There is no governed business glossary entry for "active users" that specifies the calculation method, the data source, the inclusion/exclusion criteria, and the owner.

The fix is metadata, not meetings. KPI definitions belong in a governed glossary where the calculation, source, filters, and owner are documented and version-controlled. When someone changes the definition (say, switching from 30-day to 7-day active users), the change is tracked, downstream dashboards are flagged, and stakeholders are notified. Without this infrastructure, definition drift happens silently.
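
A minimal sketch of what governed means in practice, assuming a simple in-memory registry rather than any particular product's schema: the definition is one record, and revising it leaves a visible trail instead of silently rewriting history.

```python
# Minimal sketch of a governed glossary entry with version tracking.
# The schema and the notification hook are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GlossaryEntry:
    name: str
    calculation: str
    source: str
    owner: str
    version: int = 1
    history: list[tuple[int, str, date]] = field(default_factory=list)

    def revise(self, new_calculation: str, on: date) -> None:
        """Archive the old definition, bump the version, and (in a real
        system) flag downstream dashboards and notify stakeholders."""
        self.history.append((self.version, self.calculation, on))
        self.calculation = new_calculation
        self.version += 1
        print(f"{self.name} changed to v{self.version}: notify consumers")

active_users = GlossaryEntry(
    name="Active Users",
    calculation="Paid account with a core action in the last 30 days",
    source="product_events",
    owner="Product Analytics",
)
active_users.revise("Paid account with a core action in the last 7 days",
                    on=date(2024, 6, 1))
# Dashboards cite "Active Users v2"; v1 survives in history for audits.
```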

[Figure: The KPI definition problem. Marketing ("logged in within 30 days") reports 500K users, Product ("core action within 7 days") reports 200K, Finance ("paid account with any session") reports 80K, and the CEO dashboard shows "Active Users: ???". The governed solution is a single business glossary entry for "Active Users" (definition: paid account that performed a core action within 7 days; source: product_events table; owner: Product Analytics). One number, one definition, all dashboards agree.]

KPI Frameworks

Three frameworks help organizations structure their KPIs beyond ad-hoc selection.

Balanced Scorecard. Developed by Kaplan and Norton, this framework forces measurement across four perspectives: financial (revenue, margins), customer (satisfaction, retention), internal process (efficiency, quality), and learning and growth (employee development, innovation). The value is balance — it prevents the trap of optimizing financial KPIs while ignoring the customer and operational metrics that drive them.

OKRs (Objectives and Key Results). A qualitative objective ("Become the preferred vendor for mid-market SaaS companies") paired with 3-5 measurable key results ("Increase mid-market pipeline by 40%," "Win 15 new mid-market logos," "Achieve NPS 60+ in mid-market segment"). OKRs are common in technology companies and work well for goal alignment across teams; a sketch of the objective-plus-key-results shape follows this list. The risk: teams pick easy key results that look good but do not stretch performance.

North Star Metric. A single metric that captures the core value a product delivers. Spotify: time spent listening. Airbnb: nights booked. Slack: messages sent. The North Star anchors all team-level KPIs — every team's metrics should ultimately contribute to moving it. The weakness: a single metric can oversimplify complex businesses with multiple revenue streams.
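
As noted in the OKR entry above, a minimal sketch of the objective-plus-key-results shape; the names, targets, and scoring rule are illustrative assumptions:

```python
# Minimal sketch: an objective with measurable key results and a simple
# progress score. Names, targets, and the scoring rule are illustrative.
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    target: float
    current: float

    def progress(self) -> float:
        return min(self.current / self.target, 1.0)

objective = "Become the preferred vendor for mid-market SaaS companies"
key_results = [
    KeyResult("Increase mid-market pipeline (%)", target=40, current=22),
    KeyResult("Win new mid-market logos",         target=15, current=9),
    KeyResult("Mid-market NPS",                   target=60, current=48),
]

score = sum(kr.progress() for kr in key_results) / len(key_results)
print(f"{objective}: {score:.0%} complete")  # 65% with these numbers
```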

Google's OKR implementation requires that 60-70% of key results are measured by metrics that already exist in the organization's data systems. The remaining 30-40% drive new instrumentation — meaning OKR adoption is also a data governance initiative, forcing teams to define and standardize metrics they never formally measured.

— John Doerr, Measure What Matters

Governing KPIs

Ungoverned KPIs degrade over time. Here is how to prevent it.

Central glossary. Every KPI has one governed definition, one owner, one calculation method. The glossary is the authoritative source. When a BI tool displays "Monthly Revenue," it references the glossary definition — not a local formula someone built in a spreadsheet.

Lineage. Trace from the dashboard number back through transformations to source data. When the revenue number looks wrong, lineage shows which tables, joins, and calculations produced it — reducing investigation from days to minutes.

Change management. When a definition changes (say, switching from 30-day to 7-day active users), all downstream dashboards update, and stakeholders are notified. Without this, one team changes the definition, the dashboard updates silently, and a month later someone notices the trend line "broke."

Quality monitoring. Automated checks that KPI values are within expected ranges. If "monthly revenue" suddenly drops to zero or jumps 500%, a data observability alert fires before the number reaches a dashboard. This catches pipeline failures and source data issues before they corrupt decision-making.
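
The simplest useful version of such a check is a bounds test against the prior period. A minimal sketch, assuming month-over-month comparison and an invented threshold:

```python
# Minimal sketch: flag a KPI reading that is zero or that moves more than
# an expected bound versus the prior period. The threshold is illustrative.
def kpi_looks_sane(value: float, previous: float,
                   max_change_pct: float = 50.0) -> bool:
    if value == 0 or previous == 0:
        return False  # an exact zero usually means a broken pipeline, not reality
    change_pct = abs(value - previous) / previous * 100
    return change_pct <= max_change_pct

monthly_revenue = 1_840_000
if not kpi_looks_sane(monthly_revenue, previous=1_210_000):
    print("ALERT: monthly revenue moved >50% MoM; hold the dashboard refresh")
```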

Common KPI Failures

Vanity metrics. Measuring total registered users instead of active users because the number is bigger. Tracking total page views instead of conversion rate because it always goes up. Vanity metrics are designed to impress, not to inform. They feel good in a board deck but do not drive decisions.

Gaming. Call center agents measured on average call duration start hanging up on complex calls. Sales reps measured on deal count start splitting deals into smaller contracts. KPIs shape behavior — if the metric incentivizes the wrong behavior, the team will optimize for the metric and harm the outcome.

Metric proliferation. An organization tracking 200 KPIs is tracking zero effectively. When everything is a priority, nothing is. Executive dashboards should have 5-7 KPIs. Department dashboards 10-15. More than that, and the term "key" loses meaning.

Dashboard rot. Dashboards built for a specific initiative that nobody updates when business context changes. A year later, the dashboard still shows metrics for a product line that was discontinued. Users learn to distrust the entire analytics platform.

Definition drift. The analytics team updates the tracking code, changing how "conversion" is measured. The metric name stays the same. The dashboard updates. Nobody notices that the trend line now reflects a different calculation. Three months later, the marketing team reports a "20% improvement" that is actually a measurement change. A/B tests planned around the old definition no longer produce comparable results.

How Dawiso Supports KPI Management

Dawiso's business glossary is purpose-built for KPI governance. Each metric gets a formal definition, calculation method, data source documentation, and ownership assignment. When a BI tool displays "Monthly Revenue," it can reference Dawiso's glossary to show the canonical definition — preventing the "which revenue number is right?" debate.

Data lineage traces each KPI from the dashboard back through transformations to the source system. When the churn number looks off, a data analyst can follow the lineage from the dashboard widget to the transformation logic to the raw event table — pinpointing whether the issue is in the source data, the calculation, or the visualization layer.

Dawiso's data catalog lets teams discover which KPIs already exist before inventing new ones. Before a product team creates a new "engagement score," they can search the catalog to find that three similar metrics already exist — and pick the governed one instead of adding a fourth conflicting definition.

Through the Model Context Protocol (MCP), BI platforms and AI agents can programmatically retrieve KPI definitions from Dawiso. An AI-powered analytics assistant answering "what is our churn rate?" can look up the canonical definition, pull the correct data, and cite the source — rather than guessing which table to query.

Conclusion

The hard part of KPIs is not selecting them. It is governing them — ensuring every team, every dashboard, and every automated report uses the same definition, the same source, and the same calculation. Organizations that treat KPI management as a metadata problem — with a governed glossary, lineage, and change management — get metrics they can trust. Those that skip governance get three different answers to the same question.
