Designing and Implementing Responsible AI Governance

Overview

A large New Zealand government organisation engaged Changeable to help design and implement a practical, organisation-wide approach to AI governance, ethics, and responsible use. The organisation was already experimenting with AI tools across multiple business units, but leadership recognised growing risks around inconsistency, unmanaged experimentation, privacy, and public trust.

The challenge was not a lack of interest in AI. It was the absence of a clear, shared framework that would enable safe adoption while still allowing teams to move forward.

Changeable was brought in to bridge that gap.

Context and Drivers

The organisation operates in a high-trust public sector environment, with strong obligations under:

  • New Zealand privacy and public sector information management expectations
  • Government guidance on responsible and ethical AI use
  • Te Tiriti o Waitangi principles and cultural considerations
  • Public accountability, auditability, and transparency requirements

Several business units were already using or exploring AI for:

  • Document summarisation and analysis
  • Internal knowledge search
  • Decision support and reporting
  • Process efficiency and administrative relief

However, these efforts were largely uncoordinated. Leaders were concerned about:

  • Inconsistent use of AI tools and models
  • Unclear decision rights and approvals
  • Limited visibility of where AI was being used
  • Heightened risk around sensitive data
  • Staff uncertainty about what was allowed versus prohibited

The Challenge

The organisation needed to answer a deceptively simple question:

“How do we enable people to use AI safely and confidently, without slowing everything down or putting public trust at risk?”

Key challenges included:

  • Decentralised experimentation across business units
  • Fear-based hesitation from some teams, driven by uncertainty rather than policy
  • Overly theoretical guidance that did not translate into day-to-day decisions
  • Cultural risk, where blanket restrictions would push AI use underground
  • Change fatigue, with staff already navigating multiple transformation initiatives

Changeable’s Approach

Changeable focused on designing governance that was usable, human-centred, and embedded into existing ways of working, rather than creating a standalone AI policy that would sit on a shelf.

Business Unit Engagement and Elicitation

We began with structured engagement across a representative set of business units, including frontline, policy, operational, and enabling functions.

This involved:

  • Facilitated workshops to surface current and intended AI use cases
  • Eliciting perceived risks, constraints, and concerns from staff
  • Identifying informal workarounds already in place
  • Understanding where AI could genuinely reduce cognitive and administrative load

This step was critical. It built trust and ensured governance was grounded in real work, not assumptions.

Defining Practical AI Use Categories

Rather than treating all AI use as equal risk, Changeable worked with the organisation to define clear categories of use, such as:

  • Low-risk productivity support
  • Assisted analysis and summarisation
  • Decision support (human-in-the-loop)
  • Prohibited or high-risk use cases

Each category included:

  • What was allowed
  • What approvals were required
  • What data could or could not be used
  • What safeguards were expected

This removed ambiguity and gave staff confidence to proceed responsibly.
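
To make the category model concrete, the sketch below shows one way such categories could be encoded as a shared reference. This is a minimal illustration only, assuming a Python register: the category names, fields, and example rules are hypothetical, not the organisation's actual framework.

    from dataclasses import dataclass

    # Illustrative only: names and rules below are hypothetical examples,
    # not the organisation's actual categories.
    @dataclass
    class UseCategory:
        name: str
        allowed: list[str]       # what staff may do without escalation
        approvals: list[str]     # sign-offs required before use
        data_rules: list[str]    # what data may or may not be used
        safeguards: list[str]    # expected controls, e.g. human review

    CATEGORIES = {
        "low_risk_productivity": UseCategory(
            name="Low-risk productivity support",
            allowed=["drafting", "formatting", "brainstorming"],
            approvals=[],  # no sign-off needed at this level
            data_rules=["no personal or classified information"],
            safeguards=["staff review all output before use"],
        ),
        "decision_support": UseCategory(
            name="Decision support (human-in-the-loop)",
            allowed=["option analysis", "evidence summarisation"],
            approvals=["privacy assessment", "business owner sign-off"],
            data_rules=["approved datasets only"],
            safeguards=["a named person stays accountable for the decision"],
        ),
        "prohibited": UseCategory(
            name="Prohibited or high-risk use",
            allowed=[],
            approvals=["not permitted"],
            data_rules=["n/a"],
            safeguards=["n/a"],
        ),
    }

The value of this shape is that every category answers the same four questions, so staff can compare a proposed use against the register in seconds.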

Designing Governance That Fits Existing Assurance

Instead of creating a parallel AI governance structure, Changeable embedded AI considerations into existing assurance and decision processes, including:

  • Risk and privacy assessment workflows
  • Technology and architecture review points
  • Business case and initiative approval stages

This avoided duplication and reduced friction for teams already familiar with these processes.
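
One way to picture this "ride along with existing gates" pattern: AI-specific questions attach to the assurance checkpoints teams already pass through, rather than forming a separate AI approval path. The sketch below is a simplified assumption of how that routing might look; the gate names and questions are hypothetical.

    # Hypothetical gate names; AI questions attach to existing checkpoints
    # instead of creating a parallel approval path.
    EXISTING_GATES = ["privacy_assessment", "architecture_review", "business_case"]

    AI_QUESTIONS_BY_GATE = {
        "privacy_assessment": [
            "Does the tool process personal information?",
            "Is input data retained or used for model training?",
        ],
        "architecture_review": [
            "Which model or vendor is used, and where is it hosted?",
        ],
        "business_case": [
            "Who remains accountable for outcomes the AI informs?",
        ],
    }

    def assurance_questions(uses_ai: bool) -> dict[str, list[str]]:
        """Return the extra questions asked at each existing gate."""
        if not uses_ai:
            return {gate: [] for gate in EXISTING_GATES}
        return {gate: AI_QUESTIONS_BY_GATE.get(gate, []) for gate in EXISTING_GATES}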

Ethics and Responsible AI Principles

Changeable worked with leaders to articulate a plain-language set of responsible AI principles, aligned to New Zealand public sector expectations and organisational values.

These principles focused on:

  • Human accountability and oversight
  • Transparency and explainability
  • Fairness and bias awareness
  • Data protection and minimisation
  • Respect for people, communities, and cultural context

Importantly, principles were paired with examples and scenarios, helping staff understand how they applied in practice.

Enablement, Guidance, and Change Support

Governance alone was not enough. Changeable supported adoption through:

  • Role-based guidance explaining “what this means for me”
  • Practical decision checklists
  • Simple intake and logging mechanisms for AI use (see the sketch after this list)
  • Education sessions focused on confidence, not compliance

This shifted the tone from restriction to responsible enablement.
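
As a rough illustration of the intake and logging item above, the sketch below shows what a lightweight AI use register might look like. The field names and the example entry are assumptions for demonstration, not the organisation's actual template.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AIUseRecord:
        team: str
        tool: str                 # which AI tool is being used
        category: str             # one of the defined use categories
        purpose: str              # plain-language description of the use
        data_classification: str  # what data is involved
        human_reviewer: str       # named person accountable for outputs
        logged_on: date

    register: list[AIUseRecord] = []

    def log_ai_use(record: AIUseRecord) -> None:
        """Append a record so leadership keeps visibility of AI activity."""
        register.append(record)

    # Hypothetical example entry:
    log_ai_use(AIUseRecord(
        team="Policy",
        tool="document summarisation assistant",
        category="low_risk_productivity",
        purpose="summarise public consultation submissions",
        data_classification="unclassified",
        human_reviewer="team manager",
        logged_on=date.today(),
    ))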

Outcomes

The organisation achieved several tangible benefits:

  • Increased confidence among staff to use AI appropriately
  • Improved visibility of AI use across the organisation
  • Reduced risk exposure through clearer guardrails and approvals
  • Faster, safer experimentation, without shadow AI practices
  • Stronger alignment with government guidance and public expectations
  • Improved trust between leadership, technology teams, and business units

Most importantly, AI adoption became intentional rather than accidental.

What Made This Successful

Key success factors included:

  • Grounding governance in real business workflows
  • Treating AI adoption as a change and capability challenge, not a technology rollout
  • Avoiding fear-driven or overly restrictive policies
  • Respecting New Zealand’s public sector context and cultural responsibilities
  • Keeping humans clearly accountable at every stage

Changeable’s Perspective

Responsible AI is not about slowing innovation. It is about making innovation sustainable.

In public sector environments, trust is not optional. Governance must enable people to do their jobs better while protecting the communities they serve.

This engagement demonstrated that when AI governance is practical, transparent, and human-centred, organisations can move forward with confidence rather than caution.

Client details withheld due to confidentiality.

Have a question about Responsible AI Governance?

Why was AI governance needed if the organisation was already experimenting with AI?

Because experimentation alone created inconsistency and risk. Teams were using AI tools independently, without shared standards for privacy, data use, approvals, or accountability. Governance ensured AI could be adopted confidently and safely, rather than reactively or informally.

Did introducing governance slow down AI adoption?

No. When designed well, governance removes uncertainty and accelerates experimentation. By defining clear guardrails and approval paths, staff know what they can do, what requires approval, and what is out of bounds, which reduces hesitation and avoids shadow use.

How did Changeable's approach differ from a standard AI policy?

We focused on practical adoption rather than theoretical frameworks. Governance was built around actual business workflows, existing assurance processes, and real use cases surfaced through engagement with frontline, operational, policy, and enabling teams.

What public sector requirements shaped the framework?

We drew on requirements including privacy, security, record-keeping, transparency, and Te Tiriti o Waitangi considerations. The framework was designed to support trust, accountability, and public expectations, rather than simply meet technical compliance.

How were business units involved in designing the framework?

Workshops, interviews, and elicitation sessions helped uncover real needs, risks, and opportunities. This built buy-in and ensured governance supported day-to-day work instead of becoming a top-down policy no one used.

What did the governance framework include?

Clear categories of AI use, risk levels, allowed and prohibited activities, required approvals, data considerations, human-in-the-loop expectations, and responsible AI principles explained in plain language.

How was adoption supported beyond the framework itself?

Through enablement tools such as role-based guidance, decision checklists, simple logging processes, and education sessions that focused on confidence rather than compliance.

What outcomes did the organisation achieve?

Greater confidence to use AI responsibly, improved visibility of AI activity, reduced risk exposure, and faster experimentation without shadow AI. Leadership and frontline teams gained clearer alignment and trust.

How were ethics and cultural considerations addressed?

Respect for people, fairness, and community impacts were built into principles and decision-making. The approach emphasised transparency, accountability, and cultural awareness in line with public sector expectations.

Could this approach work for other organisations?

Yes. While tailored to the client’s context, the principles-based, category-driven model can be adapted across sectors where trust, transparency, and operational safety are critical.

What was the biggest factor in this engagement’s success?

Grounding governance in real work. Rather than fear-driven restrictions, the model focused on enablement, clarity, and human accountability, which empowered staff instead of constraining them.