AI governance isn't about slowing down. It's about moving faster without the risk.

Use AI confidently without creating new risk.

AI governance consultancy.

Every organisation using AI is making decisions about data, accountability, and trust — whether they realise it or not. The question isn’t whether you need AI governance. It’s whether the governance you have is fit for purpose.

Changeable builds practical AI governance frameworks for New Zealand organisations. Not thick policy documents that sit in a drawer. Working governance structures with clear accountability, defined decision rights, and controls your teams can actually apply day to day, built to support your AI strategy rather than sit alongside it.

We work across NZ businesses, local government, and public sector organisations — environments where the stakes of getting this wrong are high, and where trust is as important as performance.

Why AI governance fails in most organisations

Most AI governance failures aren’t dramatic. They’re quiet. An AI tool gets adopted without a policy. A decision gets made without a record. A dataset gets shared without checking whether that’s allowed. Nobody intended to create risk — it accumulated incrementally.

The most common patterns we see:

  • Shadow AI use — staff adopting tools that haven’t been approved, assessed, or logged
  • No clear ownership — nobody knows who is responsible when something goes wrong
  • Policies borrowed from overseas templates that don’t map to NZ law (including obligations under the Privacy Act 2020) or to the organisation’s actual workflows
  • Governance designed for a snapshot in time, with no process for reviewing it as AI evolves
  • Public sector organisations caught between innovation pressure and accountability obligations, with no framework that satisfies both

The result is organisations that are either over-governed — paralysed by rules that prevent any useful AI adoption — or under-governed, running real legal, ethical, and reputational risk without realising it.

Good AI governance sits in neither place. It’s proportionate, practical, and built to last, and it works best when it follows a clear AI strategy.


The New Zealand regulatory context

New Zealand doesn’t yet have a single AI-specific law. But that doesn’t mean AI is unregulated — far from it. Organisations must already navigate a complex web of existing obligations, and the regulatory landscape is changing.

What already applies

Privacy Act 2020: The most significant existing constraint on AI use in NZ. Any AI system that collects, processes, or makes decisions about personal information must comply with the Information Privacy Principles. This includes automated decision-making, profiling, and AI-assisted communications.

Algorithm Charter for Aotearoa New Zealand: A voluntary commitment for government agencies to be transparent about how they use algorithms in decisions that affect people. If your organisation is a signatory or aspires to be, your governance framework needs to demonstrate this.

Human Rights Act and Employment Relations Act: AI used in hiring, performance management, or other staff decisions carries obligations under both Acts. Bias in AI-assisted HR decisions is a real and growing compliance risk.

Sector-specific obligations: Health, finance, utilities, and education all carry additional requirements around data handling, decision accountability, and audit trails that directly affect how AI can be used.

What’s coming

The NZ Government has signalled intent to develop AI-specific regulation, following similar moves in the EU, UK, and Australia. Organisations that have governance structures in place now will be significantly better positioned when that regulation arrives — both in terms of compliance and in terms of demonstrating readiness to regulators, boards, and the public. ISO/IEC 42001, the international standard for AI management systems, offers one useful benchmark for that readiness.

Changeable stays across this landscape so you don’t have to. Every governance framework we build is designed with current obligations in full view and structured to adapt as requirements evolve.

Working process

How we build your AI governance framework

Our approach draws on deep business analysis methodology and direct experience in regulated, high-accountability environments — including local government and public sector organisations in New Zealand. We follow four phases.

Phase 01

Governance assessment

Before designing anything, we need to understand what’s already in place. This means reviewing your existing policies, data practices, current AI tool use (including shadow use), and the accountability structures that govern decisions in your organisation.

We identify what’s already working, what’s missing, where risk is currently unmanaged, and which parts of your organisation face the greatest exposure. The output is a clear governance gap analysis — not a generic scorecard, but a specific map of your situation.

Phase 02

Framework design

The governance framework is the centrepiece of this engagement. It defines: what AI is approved for use, how new AI tools and use cases get assessed and approved, who owns AI governance decisions at each level, how risk is classified and what controls apply to each risk tier, and how incidents are identified, escalated, and resolved.

For public sector clients, this includes alignment with the Algorithm Charter and explicit mapping to Privacy Act obligations. For regulated industry clients, we align the framework to sector-specific requirements.

The framework is built to be proportionate to your organisation’s size and maturity. 

Phase 03

Policy and standards development

A framework without practical policy is a diagram on a wall. We translate the framework into working documents your teams can actually use: an AI use policy that staff understand and can follow, data classification standards that govern what can and can’t be fed into AI systems, a vendor assessment checklist for evaluating new AI tools, and decision-making guidelines for AI-assisted or AI-automated workflows.

Everything is written in plain language, tested against real scenarios in your organisation, and designed to hold up to external scrutiny — OIA requests, board review, or regulatory audit.

Phase 04

Implementation and embedding

Governance only works if people understand it and use it. We provide a structured implementation plan that includes a change communication approach for different stakeholder groups, training guidance for staff who interact with AI systems, a governance review calendar so the framework stays current, and accountability mechanisms so ownership doesn’t drift.

We can also provide ongoing advisory support as your AI capabilities grow and your governance needs evolve.


What you receive at the end

Every AI Governance engagement produces a complete, usable package:

  • A governance gap analysis documenting your current state and risk exposure
  • A tailored AI governance framework with defined risk tiers, decision rights, and controls
  • An AI use policy written for your organisation in plain language
  • Data classification and handling standards for AI contexts
  • A vendor/tool assessment checklist for evaluating new AI systems
  • Mapping to relevant NZ obligations: Privacy Act 2020, Algorithm Charter (where applicable), sector requirements
  • An implementation and rollout plan with stakeholder communication guidance
  • A governance review schedule with triggers for update
  • An executive summary suitable for board, leadership, or ministerial reporting

Engagements typically run three to six weeks depending on organisational size, sector, and the complexity of existing AI use. Public sector and regulated industry clients generally sit toward the longer end given the additional compliance mapping required.

Who is this for

Councils and local government

Local government operates under significant public scrutiny and faces unique accountability obligations around algorithmic decision-making and data use. AI governance for councils needs to satisfy elected members, staff, ratepayers, and central government — often simultaneously, balancing innovation with compliance. We have direct experience in this environment. Our frameworks are designed to be auditable, explainable to the public, and aligned to the Algorithm Charter.

Central government and public sector agencies

Central government agencies face the full weight of Privacy Act compliance, the Algorithm Charter, and increasing scrutiny from select committees and the public on AI use. If your agency is using AI in any decisions that affect New Zealanders, a governance framework isn’t optional — it’s how you demonstrate responsible stewardship of public trust.

Finance, health, and regulated industries

Financial services, health, and utilities operate under sector-specific obligations that create additional complexity for AI governance. We map your framework to the regulatory requirements specific to your sector, ensuring you’re not just compliant with general privacy law but with the full set of obligations your organisation carries.

Enterprises with growing AI capability

Larger private sector organisations — particularly those scaling AI across multiple departments — often find that informal, ad hoc governance doesn’t hold as AI use expands. A formal governance framework creates consistency, reduces risk accumulation, and gives leadership the visibility they need to make informed decisions about AI investment.

SMBs that want to get ahead of risk

You don’t need to be a large organisation to need AI governance. If you’re using AI tools with customer data, in hiring, or in any workflow that affects people’s outcomes, basic governance protects you from risks you may not even be aware of yet. We offer lightweight, right-sized governance engagements for smaller organisations that need structure without bureaucracy.

Have a question about AI Governance?

What is an AI governance framework?

An AI governance framework is a structured set of policies, standards, roles, and processes that define how AI is approved, used, monitored, and controlled in your organisation. It answers three core questions: what AI are we allowed to use, who is responsible for AI decisions, and what happens when something goes wrong. Without answers to those questions, AI initiatives struggle to earn and keep trust, in the public sector especially.

Does New Zealand have AI-specific legislation?

Not yet — there is no single AI Act in New Zealand. However, AI use is already subject to significant obligations under the Privacy Act 2020, the Human Rights Act, employment law, and sector-specific regulation. Government agencies are also expected to adhere to the Algorithm Charter for Aotearoa New Zealand. AI-specific regulation is expected in the coming years, and organisations with governance frameworks already in place will be much better positioned when it arrives.

What is the Algorithm Charter for Aotearoa New Zealand?

The Algorithm Charter is a voluntary commitment by New Zealand government agencies to be transparent about how they use algorithms in decisions that affect people. It covers requirements around bias testing, transparency, human oversight, and privacy. If your organisation is a central or local government agency, or if you’re working toward Charter alignment, your governance framework should explicitly address its requirements. We build Charter alignment into all public sector governance work.

Is AI governance the same as data governance or a privacy policy?

Related, but not the same. Data governance covers how data is collected, stored, and managed. A privacy policy covers how personal information is handled. AI governance addresses the specific risks and accountability questions that arise when AI systems are making or influencing decisions — including bias, explainability, human oversight, vendor accountability, and the ethics of automated decision-making. All three need to work together, and a good AI governance framework maps clearly to your existing data and privacy obligations.

What if we already have some governance in place?

We start from where you are. Our governance assessment will review what’s already in place, identify what’s working, and surface gaps. In many cases, organisations have partial governance — a privacy policy, some informal rules about which tools are used — but no coherent framework that ties it together. We build on what works rather than starting from scratch.

How long does an AI governance engagement take?

Most engagements run three to six weeks from kickoff to delivery. Smaller organisations with limited AI use can move faster. Public sector and regulated industry clients typically need longer given the compliance mapping and stakeholder consultation involved. We’ll give you a realistic estimate at the discovery session.

How do we keep the framework current as AI evolves?

This is one of the most important questions — and one that most governance frameworks don’t answer well. Our frameworks include a review calendar with defined triggers: new AI tools being adopted, changes in legislation, significant changes in how existing tools are used, and an annual review regardless. We also include a lightweight change log process so governance decisions are documented as they happen, not reconstructed after the fact.

Will governance slow down our AI adoption?

It has to work the other way. The goal isn’t to slow AI down — it’s to make AI adoption sustainable. Organisations with clear governance actually move faster in the medium term, because they’re not constantly managing the fallout from ungoverned decisions. The approval process we design is tiered: low-risk tools can be adopted quickly within a pre-approved framework; higher-risk use cases get more scrutiny. Speed is built into the design.