Case Study

Designing and Implementing Responsible AI Governance

Helping a large New Zealand government organisation move from unmanaged AI experimentation to practical, organisation-wide AI governance, ethics and responsible use.

Service AI Governance & Responsible AI
Client Large New Zealand Government Organisation
Focus Public trust, ethics and safe adoption
What changed
01
Uncoordinated experimentation: AI use was spreading across business units
02
Practical use categories: Teams understood what was allowed and what was restricted
03
Existing assurance fit: AI controls were embedded into known processes
04
Responsible enablement: Staff gained confidence without creating shadow AI
Project overview

AI governance that enabled safe adoption instead of slowing it down.

A large New Zealand government organisation engaged Changeable to help design and implement a practical, organisation-wide approach to AI governance, ethics, and responsible use.

The organisation was already experimenting with AI tools across multiple business units, but leadership recognised growing risks around inconsistency, unmanaged experimentation, privacy, and public trust.

The challenge was not a lack of interest in AI. It was the absence of a clear, shared framework that enabled safe adoption while still allowing teams to move forward.

Clearer guardrails

Staff had better guidance on what was allowed, what required approval and what was out of scope.

Reduced risk exposure

AI use became easier to review, govern and align with privacy, accountability and public trust expectations.

Greater confidence

Teams could experiment responsibly without relying on fear, uncertainty or informal workarounds.

Governance embedded

AI considerations were built into existing assurance, approval and decision processes.

Context and drivers

Public sector AI adoption comes with higher expectations.

The organisation operates in a high-trust public sector environment, with strong obligations around:

  • New Zealand privacy and public sector information management expectations
  • Government guidance on responsible and ethical AI use
  • Te Tiriti o Waitangi principles and cultural considerations
  • Public accountability, auditability and transparency requirements

Several business units were already using or exploring AI for:

  • Document summarisation and analysis
  • Internal knowledge search
  • Decision support and reporting
  • Process efficiency and administrative relief

Leaders were concerned about

  • Inconsistent use of AI tools and models
  • Unclear decision rights and approvals
  • Limited visibility of where AI was being used
  • Heightened risk around sensitive data
  • Staff uncertainty about what was allowed versus prohibited
  • Alignment with New Zealand privacy principles and public expectations
The challenge

How do we enable people to use AI safely and confidently?

The organisation needed to answer a deceptively simple question: “How do we enable people to use AI safely and confidently, without slowing everything down or putting public trust at risk?”

Key challenges included

  • Decentralised experimentation across business units
  • Fear-based hesitation from some teams, driven by uncertainty rather than policy
  • Overly theoretical guidance that did not translate into day-to-day decisions
  • Cultural risk, where blanket restrictions would push AI use underground
  • Change fatigue, with staff already navigating multiple transformation initiatives

Changeable’s approach

Changeable focused on designing governance that was usable, human-centred and embedded into existing ways of working.

01

Business unit engagement and elicitation

Structured engagement across frontline, policy, operational and enabling functions surfaced current use cases, risks, workarounds and areas where AI could reduce cognitive and administrative load.

02

Practical AI use categories

Rather than treating all AI use as equal risk, Changeable helped define clear categories such as low-risk productivity support, assisted analysis and summarisation, decision support, and prohibited or high-risk use cases.
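To make the tiered idea concrete, a category scheme like this can be expressed as a simple lookup from use category to the guardrails that apply. The sketch below is purely illustrative: the tier names, approval roles and rules are hypothetical examples, not the client's actual framework.

```python
# Illustrative sketch of a tiered AI use-case classification.
# Tier names and approval rules are hypothetical, not the
# client's actual governance framework.

RISK_TIERS = {
    "low_risk_productivity": {"approval": "none", "human_review": False},
    "assisted_analysis": {"approval": "team_lead", "human_review": True},
    "decision_support": {"approval": "governance_board", "human_review": True},
    "prohibited": {"approval": None, "human_review": None},
}

def check_use_case(tier: str) -> str:
    """Return the guardrail that applies to a proposed AI use case."""
    rules = RISK_TIERS.get(tier)
    if rules is None:
        raise ValueError(f"Unknown tier: {tier}")
    if rules["approval"] is None:
        return "Not permitted under any circumstances."
    if rules["approval"] == "none":
        return "Allowed; follow standard responsible-use guidance."
    return f"Allowed with approval from: {rules['approval']} (human review required)."
```

The value of the structure is that staff can answer "am I allowed to do this?" with a lookup rather than a judgement call, which is what shifts behaviour away from shadow AI.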

03

Existing assurance alignment

AI considerations were embedded into risk and privacy assessment workflows, technology and architecture review points, and business case and initiative approval stages.

04

Ethics and responsible AI principles

Changeable worked with leaders to articulate plain-language responsible AI principles aligned to New Zealand public sector expectations and organisational values.

05

Enablement and guidance

Role-based guidance, decision checklists, intake and logging mechanisms, and education sessions helped shift the tone from restriction to responsible enablement.

06

Human accountability

The framework made clear where human judgement, review and accountability needed to remain in place, especially for decision support and higher-risk uses.

Responsible AI principles

Principles only worked because they were paired with real scenarios.

Changeable worked with leaders to articulate a plain-language set of responsible AI principles, aligned to New Zealand public sector expectations and organisational values.

Importantly, principles were paired with examples and scenarios, helping staff understand how they applied in practice.

These principles focused on

  • Human accountability and oversight
  • Transparency and explainability
  • Fairness and bias awareness
  • Data protection and minimisation
  • Respect for people, communities, and cultural context
  • Alignment with public-sector accountability expectations, including the Algorithm Charter for Aotearoa New Zealand

Outcomes

The organisation achieved several tangible benefits, and AI adoption became intentional rather than accidental.

Increased staff confidence

Staff had clearer guidance on how to use AI appropriately and responsibly.

Improved visibility

Leadership gained a clearer view of where AI was being used across the organisation.

Reduced risk exposure

Clearer guardrails and approvals helped reduce privacy, accountability and governance risk.

Safer experimentation

Teams could test ideas more safely without encouraging shadow AI practices.

Better public-sector alignment

The model aligned more strongly with government guidance and public expectations.

Improved trust

Leadership, technology teams and business units had a shared language for responsible AI adoption.

What made this successful

Governance worked because it was grounded in real work.

Responsible AI is not about slowing innovation. It is about making innovation sustainable.

In public sector environments, trust is not optional. Governance must enable people to do their jobs better while protecting the communities they serve.

This engagement demonstrated that when AI governance is practical, transparent and human-centred, organisations can move forward with confidence rather than caution.

Key success factors included

  • Grounding governance in real business workflows
  • Treating AI adoption as a change and capability challenge, not a technology rollout
  • Avoiding fear-driven or overly restrictive policies
  • Respecting New Zealand’s public sector context and cultural responsibilities
  • Keeping humans clearly accountable at every stage
Questions

Have a question about Responsible AI Governance?

Common questions about public-sector AI governance, responsible AI, human oversight and safe adoption.

Why was AI governance needed if the organisation was already experimenting with AI?

Because experimentation alone created inconsistency and risk. Teams were using AI tools independently, without shared standards for privacy, data use, approvals, or accountability. Governance ensured AI could be adopted confidently and safely, rather than reactively or informally.

Does responsible AI governance slow innovation or create barriers?

No. When designed well, governance removes uncertainty and accelerates experimentation. By defining clear guardrails and approval paths, staff know what they can do, what requires approval, and what is out of bounds, which reduces hesitation and avoids shadow use.

What was Changeable’s approach to designing governance?

We focused on practical adoption rather than theoretical frameworks. Governance was built around actual business workflows, existing assurance processes, and real use cases surfaced through engagement with frontline, operational, policy, and enabling teams.

How did Changeable ensure the governance model aligned with public sector obligations?

We drew on requirements including privacy, security, record-keeping, transparency, and Te Tiriti o Waitangi considerations. The framework was designed to support trust, accountability, and public expectations, rather than simply meet technical compliance.

How were staff involved in the process?

Workshops, interviews, and elicitation sessions helped uncover real needs, risks, and opportunities. This built buy-in and ensured governance supported day-to-day work instead of becoming a top-down policy no one used.

What did the governance framework include?

Clear categories of AI use, risk levels, allowed and prohibited activities, required approvals, data considerations, human-in-the-loop expectations, and responsible AI principles explained in plain language.

How was change management supported?

Through enablement tools such as role-based guidance, decision checklists, simple logging processes, and education sessions that focused on confidence rather than compliance.

What outcomes did the organisation achieve?

Greater confidence to use AI responsibly, improved visibility of AI activity, reduced risk exposure, and faster experimentation without shadow AI. Leadership and frontline teams gained clearer alignment and trust.

How did cultural context and Te Tiriti principles shape the work?

Respect for people, fairness, and community impacts were built into principles and decision-making. The approach emphasised transparency, accountability, and cultural awareness in line with public sector expectations.

Can this approach scale to other public or private organisations?

Yes. While tailored to the client’s context, the principles-based, category-driven model can be adapted across sectors where trust, transparency, and operational safety are critical.

What was the most important success factor?

Grounding governance in real work. Rather than fear-driven restrictions, the model focused on enablement, clarity, and human accountability, which empowered staff instead of constraining them.

Start with a free decision clarity session

A Decision Clarity Session is a no-obligation conversation where we listen to where you are, what you’re trying to achieve, and what’s getting in the way. By the end, you’ll have a clearer picture of the decisions in front of you, whether that means AI, process improvement, transformation, or a combination of all three, and whether a Changeable engagement is the right next step. Alternatively, if you don’t feel ready for an engagement yet, you can start by building your AI confidence before we implement anything together.