Most NZ organisations are data rich and insight poor. We fix the second half of that problem.

Make decisions based on what your data actually shows.

We turn the data you already have into forecasts, risk signals, and decision support that leaders can actually use.

Your organisation is probably generating more data than it ever has. Transaction records, service logs, financial reports, customer interactions, operational metrics. The data exists. What’s missing is the ability to turn it into signals that actually help leaders make better decisions, and data models only deliver that value when strategy sets the direction.

That gap — between having data and being able to act on it confidently — is what AI data modelling closes. Not by building a data warehouse or replacing your existing tools, but by applying analytical models to the data you already have to produce forecasts, risk signals, and performance insights that people understand and use.

Changeable designs and builds data models grounded in business context, not technical ambition. The measure of success isn’t model sophistication — it’s whether the people who need the insight are getting it, understanding it, and acting on it.

What data modelling actually means for your organisation

The terminology around data modelling and predictive analytics can make it sound like something that only large enterprises with dedicated data science teams can access. That’s not the reality — and it’s one of the reasons we’re direct about what this work involves.

At its core, data modelling for business means building systems that answer questions your organisation already has but can’t currently answer reliably:

  • Will we have enough capacity to meet demand next quarter?
  • Which customers, clients, or service users are most at risk of disengaging?
  • Where in our operation are costs increasing faster than output?
  • What’s the likely outcome if we change this pricing, staffing, or service configuration?
  • Which of our current projects are tracking toward a problem we can still prevent?
  • Where are the anomalies in our data that signal something worth investigating?

These are operational questions. The models we build are designed to answer them — in language and formats that decision-makers can work with, embedded into the workflows where decisions actually happen.

What we build

We work across the full range of data modelling and analytics use cases. The most common for NZ organisations:

Demand and capacity forecasting

Models that predict future demand based on historical patterns, seasonal factors, and leading indicators — giving operations, planning, and resourcing teams enough lead time to respond rather than react. Particularly valuable for councils managing service demand, health and social service providers, and businesses with seasonal or cyclical workloads.
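To make the idea concrete, here is a minimal sketch of seasonal forecasting in plain Python. The monthly figures and the `seasonal_forecast` helper are invented for the example; a real engagement would layer in leading indicators, holiday effects, and uncertainty ranges.

```python
# Minimal seasonal-naive forecast with a trend adjustment.
# Hypothetical monthly demand counts; illustrative only.
def seasonal_forecast(history, season=12, horizon=3):
    """Forecast `horizon` periods ahead: last season's value
    plus the average season-over-season change."""
    if len(history) < 2 * season:
        raise ValueError("need at least two full seasons of history")
    last = history[-season:]
    prev = history[-2 * season:-season]
    # Average change between the two most recent seasons
    trend = sum(l - p for l, p in zip(last, prev)) / season
    return [history[-season + i] + trend for i in range(horizon)]

history = [100, 120, 90, 110] * 6  # 24 months of illustrative demand
print(seasonal_forecast(history, season=12, horizon=3))
```

Even a baseline this simple gives planners something to react to; the value of more sophisticated models is measured against it.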

Risk and anomaly detection

Models that continuously monitor operational data and flag deviations, concentrations, or patterns that warrant attention. These sit in the background and surface the signals that manual reporting misses: budgets tracking toward overrun, compliance gaps forming, quality metrics drifting before they become incidents.
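The underlying idea can be illustrated in a few lines: flag values that sit too far from the historical norm. The spend figures and the two-standard-deviation threshold below are illustrative, not a production detection rule.

```python
import statistics

def flag_anomalies(values, threshold=3.0):
    """Return indices of points more than `threshold`
    standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)  # population std dev
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

weekly_spend = [100, 98, 103, 101, 99, 250, 102]  # one obvious spike
print(flag_anomalies(weekly_spend, threshold=2.0))
```

Production systems replace the fixed threshold with seasonally adjusted baselines, but the principle is the same: define normal, then surface what isn't.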

Customer and cohort analysis

Models that segment your customers, clients, or service users by behaviour, value, risk, or need — giving sales, service, and planning teams a clearer picture of who they’re serving and what different groups require. Includes churn prediction, lifetime value modelling, and engagement scoring.
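As a toy illustration of engagement scoring, the sketch below combines recency and frequency into a single score. The weights, caps, and reference date are invented for the example and are not drawn from any production churn model.

```python
from datetime import date

def engagement_score(last_active, visits_90d, today=date(2025, 1, 1)):
    """Toy engagement score in [0, 1]: recent, frequent users score higher.
    Weights are illustrative only."""
    recency_days = (today - last_active).days
    recency = max(0, 90 - recency_days) / 90   # 1.0 = active today
    frequency = min(visits_90d, 30) / 30       # capped at 30 visits
    return round(0.6 * recency + 0.4 * frequency, 2)

print(engagement_score(date(2024, 12, 20), visits_90d=12))
```

Scores like this become useful when they drive a decision, such as which accounts a service team contacts first.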

Performance and operational analytics

Models that measure what’s actually happening in your operation against what should be happening — surfacing the gaps between planned and actual performance, identifying the drivers of variance, and giving managers the visibility to intervene before small problems become large ones.
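The planned-versus-actual comparison at the heart of this work can be sketched in a few lines. The line items and figures below are hypothetical.

```python
def variance_report(planned, actual):
    """Planned-vs-actual variance by line item, sorted by impact.
    Positive variance means over plan."""
    rows = [(item, actual[item] - planned[item]) for item in planned]
    return sorted(rows, key=lambda row: abs(row[1]), reverse=True)

planned = {"staffing": 400_000, "contractors": 120_000, "travel": 30_000}
actual  = {"staffing": 410_000, "contractors": 165_000, "travel": 28_000}
print(variance_report(planned, actual))
```

Sorting by absolute impact is the point: managers see the largest gaps first, whichever direction they run.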

Scenario modelling and decision support

Models that let leaders test the likely outcomes of different decisions before committing to them. What happens to margin if input costs rise 15%? What’s the service impact of reducing headcount in this function? What’s the break-even point on this capital investment? Scenario models turn strategic guesswork into evidence-based planning.
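The first question above can be answered directly with a small sensitivity calculation. The revenue and cost figures are illustrative only.

```python
def margin_after_cost_rise(revenue, input_costs, other_costs, cost_rise=0.15):
    """Margin (as a percentage) after input costs rise by `cost_rise`.
    Illustrative figures; real scenario models carry many more levers."""
    new_costs = input_costs * (1 + cost_rise) + other_costs
    margin = (revenue - new_costs) / revenue
    return round(margin * 100, 1)

# Baseline: $2.0m revenue, $1.2m input costs, $0.5m other costs
print(margin_after_cost_rise(2_000_000, 1_200_000, 500_000, cost_rise=0.0))
print(margin_after_cost_rise(2_000_000, 1_200_000, 500_000, cost_rise=0.15))
```

Seeing margin compress from 15% to 6% under a single assumption is exactly the kind of evidence that turns guesswork into planning.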

Reporting and dashboard automation

Replacing manual report production with automated, accurate, real-time views of the metrics that matter. This isn’t just about saving time — it’s about making sure leaders are looking at current information when they make decisions, not last month’s numbers compiled by hand. For those interested, learn how to build your first AI-powered dashboard without spending a cent.

How we work

Data modelling work fails most often for one of two reasons: the model is technically sound but answers a question nobody was actually asking, or the data quality was never assessed before the model was built and the outputs can’t be trusted. Our methodology addresses both.

Phase 01

Business question definition and data audit

We start with the decisions, not the data. Before any modelling begins, we work with your leadership and operational teams to identify the specific questions the models need to answer and the decisions they need to support. Vague requirements produce vague outputs — precision at this stage determines the usefulness of everything that follows.

In parallel, we conduct a data audit: what data you have, where it lives, what quality issues exist, what’s missing, and what needs to be addressed before reliable modelling is possible. Perfect data is rare, and we don’t require it — but we won’t build models on data that will produce misleading outputs without telling you first.

Phase 02

Model design and validation approach

With clear questions and a realistic picture of your data, we design the model architecture — which analytical approach is appropriate for each use case, how the model will be validated against historical data, what the acceptable accuracy thresholds are, and how uncertainty will be communicated in the outputs.

We also design the governance layer: who owns the model, how its performance will be monitored over time, what triggers a review or recalibration, and how the outputs connect to your existing decision-making processes.

Phase 03

Build, test, and calibrate

Models are built iteratively and validated against real historical data before being presented as reliable. We communicate accuracy honestly — including the confidence intervals and limitations of each model — because a leader who over-trusts a model makes worse decisions than a leader who understands its boundaries and uses it accordingly.

We test outputs against scenarios your organisation recognises, so the people who will use the model can sense-check whether it’s producing results that align with their operational knowledge.
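Validation against held-out history can be illustrated with a simple backtest. The `naive_last_value` baseline and the figures are invented for the example; real validation compares several candidate models and reports confidence intervals alongside the error.

```python
def backtest_mape(history, forecast_fn, holdout=6):
    """Hold out the last `holdout` points, forecast them from the rest,
    and report mean absolute percentage error (MAPE)."""
    train, actual = history[:-holdout], history[-holdout:]
    preds = forecast_fn(train, holdout)
    errors = [abs(p - a) / a for p, a in zip(preds, actual)]
    return round(100 * sum(errors) / holdout, 1)

def naive_last_value(train, horizon):
    # Simplest possible baseline: repeat the last observed value
    return [train[-1]] * horizon

history = [100, 102, 105, 103, 108, 110, 112, 111, 115, 118]
print(backtest_mape(history, naive_last_value, holdout=3))
```

A candidate model only earns trust if it beats a baseline like this on data it has never seen.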

Phase 04

Integration and handover

A model that lives in a spreadsheet nobody opens isn’t delivering value. We integrate outputs into the workflows and tools where decisions actually get made — whether that’s a dashboard in your existing BI platform, an automated report delivered to the right people at the right cadence, or a signal embedded into an operational workflow.

Handover includes documentation, training for the people who will use and maintain the models, and a clear process for what happens when model performance degrades or business conditions change significantly.

What if our data isn't clean enough?

This is the question we hear most often — and it’s the right question to ask. The honest answer is: it depends on what you want to model and how much accuracy you need.

Most NZ organisations have enough data to build useful models in at least some areas of their operation, even with quality issues. We assess your data honestly in Phase 1 and tell you exactly what’s possible, what would improve with some data quality work, and what isn’t viable yet.

Where data quality investment is needed before reliable modelling is possible, we’ll tell you what that work involves and help you prioritise it. We don’t build models we don’t believe in and present them as reliable — that’s not useful to anyone.


What you receive

A clear assessment of your data readiness and gaps

  • A data audit report documenting quality, gaps, and readiness across your relevant data sources
  • A business question and model design document agreed with your leadership team before build begins
  • One or more working models validated against historical data, with documented accuracy and limitations
  • Integrated outputs — dashboards, automated reports, or workflow-embedded signals — in the formats your team actually uses
  • Model documentation covering design assumptions, validation results, and maintenance requirements
  • Training and guidance for the people who will use and maintain the models
  • A model governance plan including review cadence, recalibration triggers, and ownership
  • A roadmap for extending modelling capability to additional use cases over time
  • Guidance for leaders and teams on how to use the insights in real decisions

Who this is for

Leaders making high-stakes decisions without reliable foresight

If your organisation’s planning is based on last year’s numbers, gut feel, or reports that arrive after the decision window has closed, you’re operating with less information than you should be. Data modelling gives leaders the forward visibility to make better resource, investment, and operational decisions — with enough lead time to actually act on what they see.

SMBs that want data capability without a full data team

You don’t need a data scientist on staff to benefit from data modelling. We build models that your existing team can use and maintain, integrated into tools you already have, designed to answer the specific questions your business is facing right now. The investment is proportionate to the value — we start with the highest-impact use case and build from there.

Councils and public sector organisations

Local government and public sector organisations carry significant data — service demand records, financial performance, asset condition, community outcomes — much of which is underused for planning and decision-making. Models built on this data can improve service planning, budget forecasting, risk monitoring, and reporting to elected members. We build public sector data models with appropriate privacy controls and governance documentation aligned to NZ obligations. MOI provides operational truth for models to draw from.

Enterprises needing consistent decision support across teams

Larger organisations often have BI tools and reporting infrastructure in place but inconsistent use — some teams have good visibility, others are flying blind. Data modelling at enterprise scale means standardising the analytical foundation, connecting disparate data sources, and ensuring the metrics that matter to leadership are being measured consistently across the organisation.

Have a question about data modelling?

What is AI data modelling and how is it different from regular reporting?

Regular reporting shows you what happened. Data modelling shows you what’s likely to happen next — and why. A report tells you that demand was up 12% last month. A demand forecasting model tells you to expect a further 8% increase next quarter and identifies the service areas most likely to experience the spike. The difference is the ability to act before an event rather than after it.

Do we need large amounts of data to get started?

Not necessarily. The minimum viable data set depends on what you’re trying to model. For demand forecasting, two to three years of historical data in the relevant area is typically enough to build a reliable model. For anomaly detection, even smaller data sets can be effective. We assess your specific situation in Phase 1 and give you an honest picture of what’s possible before any build work begins.

What if our data is messy or inconsistent?

Messy data is the norm, not the exception. Perfect data quality is not a prerequisite for useful modelling — but understanding the quality issues is. Our data audit identifies exactly where the problems are, how they affect modelling accuracy, and which issues are worth fixing before you build versus which you can work around. We build models on the data that’s reliable and flag clearly where uncertainty is higher.

What tools and platforms do you use?

We are platform-agnostic. Tool selection is driven by your existing environment, your team’s capability to maintain the models, and the specific requirements of the use case. We work with Python-based analytical frameworks, Power BI, Tableau, and a range of cloud data platforms. Where you already have BI tooling in place, we build to integrate with it rather than replace it. Stats NZ’s open data resources are also valuable for populating models.

How do you ensure the models are accurate?

Every model we build is validated against historical data before delivery, with documented accuracy metrics and confidence intervals. We communicate the limitations of each model clearly — including the conditions under which it’s most and least reliable. We also establish a performance monitoring approach so accuracy is tracked over time and recalibration happens before drift becomes a problem. A model you understand the limits of is far more useful than one that feels like a black box.

How long does it take to see results?

For a focused single-model engagement, most clients have a working model in their hands within three to five weeks. The speed to value after that depends on how quickly the model is integrated into decision workflows — which is why integration and training are built into the engagement rather than treated as an afterthought.

What happens after the models are delivered?

Models need maintenance as your business and data change. We provide full documentation covering the model’s design assumptions, validation approach, and known limitations, along with a clear maintenance guide. We also establish a review cadence — typically quarterly — and define the triggers that should prompt a recalibration: significant changes in business conditions, sustained accuracy degradation, or major changes to the underlying data.