
Agile Development

Agile development is an iterative approach to software delivery where teams ship working increments in short cycles — typically two-week sprints — gather feedback, and adapt. It replaced waterfall's long planning-then-building sequence with continuous delivery and continuous learning. For data teams, agile methods shape how data products, governance policies, and catalog features are prioritized and delivered.

TL;DR

Agile development delivers software in short iterative cycles rather than long sequential phases. Teams plan in sprints (1-4 weeks), ship working increments, gather feedback, and adjust priorities continuously. The Agile Manifesto values working software over documentation, customer collaboration over contracts, and responding to change over following rigid plans. Data teams increasingly adopt agile to manage data product backlogs, governance rollouts, and catalog development.

Core Principles

The Agile Manifesto, published in 2001, defines four values:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Twelve supporting principles follow from these values. The practical ones: deliver working software frequently (weeks, not months). Welcome changing requirements, even late in development. Business people and developers work together daily. Build projects around motivated individuals, give them the environment and support they need, and trust them. At regular intervals, the team reflects on how to become more effective and adjusts.

These principles are not aspirational slogans. They are design constraints. A team that delivers every two weeks but ignores feedback is not agile. A team that holds daily standups but ships once a quarter is performing ceremony without substance.

[Diagram: the agile sprint cycle — a two-week sprint moves through Plan (select backlog items), Build (write code), Test (validate quality), Review (demo to stakeholders), and Retrospect (adapt process), ending with a working increment shipped.]

Scrum, Kanban, and XP

Scrum is the most widely adopted framework. Work is organized into time-boxed sprints (usually two weeks). Three roles define accountability: the product owner prioritizes the backlog, the scrum master removes impediments and coaches the team, and the development team delivers the increment. Four ceremonies structure each sprint: planning (what will we build?), daily standup (what is blocking progress?), sprint review (demo to stakeholders), and retrospective (how do we improve the process?).

Kanban drops the time-box. Work flows continuously through a visual board with columns (To Do, In Progress, Done). The key constraint is WIP limits — capping how many items can be in progress simultaneously. This prevents overcommitting and forces the team to finish work before starting new work. Kanban works well for teams handling unpredictable work like support requests or data incident response.
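
The WIP-limit mechanic is simple enough to sketch directly. This is a minimal illustration, not any particular tool's data model — the board, column names, and limits are all hypothetical:

```python
from collections import defaultdict

class KanbanBoard:
    """Minimal Kanban board that enforces a WIP limit per column."""

    def __init__(self, wip_limits):
        # wip_limits: e.g. {"In Progress": 2}; columns without a limit are unbounded
        self.wip_limits = wip_limits
        self.columns = defaultdict(list)

    def move(self, item, column):
        limit = self.wip_limits.get(column)
        if limit is not None and len(self.columns[column]) >= limit:
            # Refuse to start new work: something must finish first
            raise ValueError(f"WIP limit reached for '{column}' ({limit})")
        # Remove the item from whichever column currently holds it
        for items in self.columns.values():
            if item in items:
                items.remove(item)
        self.columns[column].append(item)

board = KanbanBoard(wip_limits={"In Progress": 2})
board.move("fix pipeline", "In Progress")
board.move("add quality rule", "In Progress")
# board.move("third task", "In Progress")  # would raise: limit is 2
```

The refusal is the point: the board makes "finish before you start" a hard rule rather than a guideline.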

Extreme Programming (XP) focuses on engineering practices rather than project management. Its core: test-driven development (write the test before the code), pair programming (two developers, one screen), continuous integration (merge and test code multiple times per day), and relentless refactoring. XP practices complement Scrum — many teams run Scrum ceremonies with XP engineering discipline underneath.
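
Test-driven development is the most concrete of these practices, and the red-green-refactor loop fits in a few lines. A sketch using a hypothetical `slugify` helper (not from the text above): the tests are written first against code that does not yet exist, then the simplest passing implementation follows.

```python
# Step 1 (red): write the tests first, against code that does not exist yet.
def test_slugify():
    assert slugify("Customer Dimension") == "customer-dimension"
    assert slugify("  Revenue  ") == "revenue"

# Step 2 (green): write the simplest implementation that passes.
def slugify(name: str) -> str:
    return "-".join(name.strip().lower().split())

# Step 3 (refactor): restructure freely — the tests catch regressions.
test_slugify()
```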

The 16th Annual State of Agile Report found that 87% of organizations practice agile in some form, with Scrum remaining the most popular framework. However, only 16% of those organizations apply agile practices beyond software development to areas like data, marketing, and operations.

— Digital.ai, State of Agile Report

Agile for Data Teams

Software teams adopted agile two decades ago. Data teams are catching up. The principles transfer directly, but the implementation looks different.

Data product backlog. Instead of user stories for application features, the backlog contains data deliverables: datasets to publish, pipelines to build, quality rules to implement, glossary terms to define. Each item has acceptance criteria — "the customer dimension table includes all active accounts, refreshes daily by 6am, and passes five quality checks" — not vague descriptions like "improve data quality."
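
Acceptance criteria like the one above can be made executable rather than left in prose. A sketch, assuming a hypothetical snapshot of table state — in practice these fields would come from the warehouse and the quality-check framework:

```python
from datetime import datetime, time

# Hypothetical state of the customer dimension table (illustrative values only)
table_state = {
    "last_refresh": datetime(2024, 5, 7, 5, 42),   # today, 05:42
    "quality_checks_passed": 5,
    "includes_inactive_accounts": False,
}

def acceptance_criteria_met(state) -> bool:
    """Executable acceptance criteria for the 'customer dimension' backlog item."""
    refreshed_by_6am = state["last_refresh"].time() <= time(6, 0)
    all_checks_pass = state["quality_checks_passed"] >= 5
    only_active = not state["includes_inactive_accounts"]
    return refreshed_by_6am and all_checks_pass and only_active

print(acceptance_criteria_met(table_state))  # True for this snapshot
```

The point is that "done" becomes a boolean the team can check, not a judgment call at sprint review.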

Sprint-based governance rollout. Rather than deploying a governance framework across all domains at once, teams govern one domain per sprint. Sprint 1: define and document the 30 most-used business terms in finance. Sprint 2: link those terms to source tables and add quality rules. Sprint 3: move to the next domain. Each sprint delivers measurable progress.

Cross-functional data squads. Data engineers, analysts, data stewards, and business users working in the same sprint toward a shared goal. The business user defines what "revenue" means, the engineer builds the pipeline, the steward documents it in the catalog, and the analyst validates the output. No handoffs across team boundaries.

Iterative catalog development. Ship catalog features incrementally — lineage for the top 10 pipelines first, then expand. A glossary with 50 well-defined terms is more useful than a glossary with 500 placeholder entries. BI consumers start getting value immediately instead of waiting for a "complete" solution.

Common Pitfalls

Agile theater. The team holds daily standups, runs sprints, and uses a board — but never actually adapts based on feedback. Requirements are locked at sprint start and never change. Retrospectives produce action items that nobody follows up on. The ceremonies happen; the mindset does not.

Backlog grooming neglect. An unmanaged backlog of 500+ items is not agile — it is a graveyard of good intentions. If no one has looked at a backlog item in three months, it should be deleted or archived. The backlog is a prioritized list of what to build next, not a wish list.

Velocity as a target. Velocity measures how much work a team completes per sprint. When management treats velocity as a performance metric ("increase velocity by 20%"), teams inflate story points. A team that completed 40 points last sprint now completes 50 — not because they delivered more, but because they re-estimated.

Missing technical practices. Doing Scrum ceremonies without continuous integration, automated testing, or refactoring is what Martin Fowler calls "flaccid Scrum." The ceremonies manage work; the engineering practices make the work shippable. Without both, sprints end with "almost done" increments that pile up.

The most common failure mode in agile adoption is implementing ceremonies without changing the underlying engineering practices. Teams that add sprints and standups without test automation, continuous integration, and incremental delivery see no improvement in outcomes.

— Martin Fowler, Flaccid Scrum

Measuring Agile Effectiveness

Delivery metrics popularized by the DevOps Research and Assessment (DORA) program provide a practical framework for measuring whether agile practices are actually working. (DORA's own four key metrics are deployment frequency, lead time for changes, change failure rate, and time to restore service; the set below swaps time to restore for cycle time, which many delivery teams track alongside it.)

Lead time — the time from idea to production. How long does it take from "we should build this" to "users can use it"? (DORA's formal definition measures commit to production; many teams extend the clock back to when the idea entered the backlog.) High-performing teams measure lead time in hours or days, not weeks.

Cycle time — the time from work started to work done. Unlike lead time, this excludes queue time. It measures how fast the team moves once they pick something up.

Deployment frequency — how often the team ships to production. Daily? Weekly? Monthly? Higher frequency means smaller batches, which means lower risk per release.

Change failure rate — what percentage of deployments cause an incident, rollback, or hotfix. This measures quality. Shipping fast only matters if what you ship works.

These four metrics function as KPIs for engineering teams. They are measurable, owned (by the team), tied to outcomes (not activity), and actionable. Teams that track all four avoid the trap of optimizing speed at the expense of quality, or vice versa. A/B testing process changes against these metrics helps teams distinguish real improvements from noise.
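
Given a log of deployments, three of the four metrics fall out of a few lines of arithmetic (lead time works the same way once each record carries an idea-created timestamp). The deployment log below is hypothetical, purely to show the calculations:

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment log: (deployed_at, work_started_at, caused_incident)
deploys = [
    (datetime(2024, 5, 1), datetime(2024, 4, 29), False),
    (datetime(2024, 5, 3), datetime(2024, 5, 2), False),
    (datetime(2024, 5, 6), datetime(2024, 5, 3), True),
    (datetime(2024, 5, 8), datetime(2024, 5, 7), False),
]

# Cycle time: work started -> shipped (median, in days)
cycle_days = median((shipped - started).days for shipped, started, _ in deploys)

# Deployment frequency: deploys per week over the observed window
window_days = (max(d for d, _, _ in deploys) - min(d for d, _, _ in deploys)).days
deploys_per_week = len(deploys) / (window_days / 7)

# Change failure rate: share of deploys that caused an incident
failure_rate = sum(1 for _, _, bad in deploys if bad) / len(deploys)

print(cycle_days, deploys_per_week, failure_rate)  # 1.5 4.0 0.25
```

Medians resist the occasional outlier deploy; averages let one six-week item mask a dozen one-day ones.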

[Diagram: waterfall vs. agile governance rollout — waterfall plans the framework for 6 months, builds for 4, deploys to all domains in 2, and adoption stalls at ~20%; agile governance adds one domain per sprint (finance, then sales, marketing, product), and adoption grows incrementally.]

Agile and Data Governance

Traditional data governance followed waterfall-style rollouts: 12-month planning cycles, "big bang" deployments of governance frameworks, and adoption rates that stalled around 20%. The governance team would spend months writing a comprehensive policy document, then wonder why nobody followed it.

Agile governance starts with one data domain, proves value, and expands. Instead of abstract deliverables ("governance framework document v3.2"), governance sprints deliver measurable outcomes: "100 business terms defined and linked to source tables," "data quality rules covering 95% of the finance pipeline," or "lineage mapped for the top 20 dashboards."

The sprint structure forces prioritization. A governance team cannot govern everything simultaneously, so they start with the data domain that causes the most pain — usually the one where conflicting definitions lead to conflicting dashboards. Fixing that domain first generates visible value, which builds organizational support for expanding to the next domain.

Retrospectives matter here too. After each governance sprint, the team asks: Did the business terms we defined actually get used? Did the quality rules catch real issues? Are the documented owners actually responding to alerts? If not, the team adjusts before repeating the same approach across five more domains.

How Dawiso Supports Agile Data Teams

Dawiso's data catalog is built for iterative adoption. Teams start by cataloging one data domain, add glossary terms sprint by sprint, and expand lineage coverage incrementally. There is no requirement to catalog everything before the platform becomes useful.

The platform tracks which data assets are documented, which lack owners, and which have quality rules — providing a natural backlog for governance sprints. A data steward can open Dawiso, filter for ungoverned assets in the sales domain, and generate a sprint's worth of work in minutes.

Through the Model Context Protocol (MCP), automation tools can query the catalog to identify ungoverned assets and generate backlog items programmatically. An AI agent can scan for tables without descriptions, columns without business terms, or pipelines without lineage — and create prioritized work items for the next sprint.
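
The triage logic such an agent would apply is straightforward. The sketch below uses hypothetical catalog records as plain dicts — Dawiso's actual MCP responses and field names will differ; only the gap-to-backlog-item pattern is the point:

```python
# Hypothetical catalog records (illustrative only, not Dawiso's schema)
assets = [
    {"name": "sales.orders",  "description": "Order facts",   "owner": "ana", "lineage": True},
    {"name": "sales.leads",   "description": "",              "owner": None,  "lineage": True},
    {"name": "sales.targets", "description": "Quota targets", "owner": None,  "lineage": False},
]

def backlog_items(assets):
    """Turn governance gaps into prioritized sprint backlog items."""
    items = []
    for a in assets:
        gaps = []
        if not a["description"]:
            gaps.append("add description")
        if a["owner"] is None:
            gaps.append("assign owner")
        if not a["lineage"]:
            gaps.append("map lineage")
        if gaps:
            items.append({"asset": a["name"], "tasks": gaps})
    # More gaps -> earlier in the backlog
    return sorted(items, key=lambda i: len(i["tasks"]), reverse=True)

for item in backlog_items(assets):
    print(item["asset"], "->", ", ".join(item["tasks"]))
```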

Conclusion

Agile development is not a set of ceremonies. It is a commitment to short feedback loops, continuous delivery, and adaptation based on evidence. The frameworks — Scrum, Kanban, XP — provide structure, but the principles matter more than the process. For data teams, agile methods offer a proven alternative to the waterfall governance rollouts that stall at 20% adoption. Start with one domain, deliver something real every two weeks, and expand based on what works.
