
Helping a large New Zealand government organisation move from unmanaged AI experimentation to practical, organisation-wide AI governance, ethics and responsible use.
A large New Zealand government organisation engaged Changeable to help design and implement a practical, organisation-wide approach to AI governance, ethics, and responsible use.
The organisation was already experimenting with AI tools across multiple business units, but leadership recognised growing risks around inconsistency, unmanaged experimentation, privacy, and public trust.
The challenge was not a lack of interest in AI. It was the absence of a clear, shared framework that enabled safe adoption while still allowing teams to move forward.
By the end of the engagement, staff had better guidance on what was allowed, what required approval and what was out of scope.
AI use became easier to review, govern and align with privacy, accountability and public trust expectations.
Teams could experiment responsibly without relying on fear, uncertainty or informal workarounds.
AI considerations were built into existing assurance, approval and decision processes.
The organisation operates in a high-trust public sector environment, with strong obligations around New Zealand privacy and public sector information management, government guidance on responsible and ethical AI use, Te Tiriti o Waitangi principles and cultural considerations, and public accountability, auditability and transparency requirements.
Several business units were already using or exploring AI for document summarisation and analysis, internal knowledge search, decision support and reporting, and process efficiency and administrative relief.
The organisation needed to answer a deceptively simple question: “How do we enable people to use AI safely and confidently, without slowing everything down or putting public trust at risk?”
Changeable focused on designing governance that was usable, human-centred and embedded into existing ways of working.
Structured engagement across frontline, policy, operational and enabling functions surfaced current use cases, risks, workarounds and areas where AI could reduce cognitive and administrative load.
Rather than treating all AI use as equal risk, Changeable helped define clear categories such as low-risk productivity support, assisted analysis and summarisation, decision support, and prohibited or high-risk use cases.
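As an illustration only, tiered categories like these lend themselves to a simple, explicit encoding that teams and reviewers can share. The sketch below is hypothetical (the tier names and approval rules are examples, not the client's actual framework):

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical AI-use risk tiers, loosely mirroring the categories above."""
    PRODUCTIVITY = "low-risk productivity support"        # e.g. drafting, formatting
    ASSISTED_ANALYSIS = "assisted analysis and summarisation"
    DECISION_SUPPORT = "decision support"                 # human review required
    PROHIBITED = "prohibited or high-risk"

# Illustrative approval rules: what each tier requires before use.
APPROVAL_RULES = {
    RiskTier.PRODUCTIVITY: "no approval; follow general guidance",
    RiskTier.ASSISTED_ANALYSIS: "team-lead sign-off; log the use case",
    RiskTier.DECISION_SUPPORT: "privacy/risk assessment; a named human stays accountable",
    RiskTier.PROHIBITED: "not permitted",
}

def required_approval(tier: RiskTier) -> str:
    """Return the (illustrative) approval path for a given risk tier."""
    return APPROVAL_RULES[tier]

print(required_approval(RiskTier.DECISION_SUPPORT))
```

The value of writing categories down this plainly is that the same definitions can sit in guidance documents, intake forms and review checklists without drifting apart.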
AI considerations were embedded into risk and privacy assessment workflows, technology and architecture review points, and business case and initiative approval stages.
Role-based guidance, decision checklists, intake and logging mechanisms, and education sessions helped shift the tone from restriction to responsible enablement.
The framework made clear where human judgement, review and accountability needed to remain in place, especially for decision support and higher-risk uses.
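A minimal sketch of what such an intake and logging record might capture, tying together the use case, its risk tier and the accountable human reviewer. All field names here are hypothetical, not the client's actual template:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    """Hypothetical intake/logging record for a proposed AI use case."""
    team: str
    description: str
    risk_tier: str            # e.g. "assisted analysis", "decision support"
    data_involved: str        # what data the tool will touch
    human_reviewer: str       # who stays accountable for outputs
    approved: bool = False
    logged_on: date = field(default_factory=date.today)

# Illustrative usage: a team registers a summarisation use case for review.
record = AIUseCaseRecord(
    team="Policy",
    description="Summarise public submissions for an internal briefing",
    risk_tier="assisted analysis",
    data_involved="public submissions (no personal details)",
    human_reviewer="Senior policy analyst",
)
```

Even a lightweight record like this gives leadership the visibility described below, because every experiment leaves a reviewable trace.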
Changeable worked with leaders to articulate a plain-language set of responsible AI principles, aligned to New Zealand public sector expectations and organisational values.
Importantly, principles were paired with examples and scenarios, helping staff understand how they applied in practice.
The organisation achieved several tangible benefits, and AI adoption became intentional rather than accidental.
Staff had clearer guidance on how to use AI appropriately and responsibly.
Leadership gained a clearer view of where AI was being used across the organisation.
Clearer guardrails and approvals helped reduce privacy, accountability and governance risk.
Teams could test ideas more safely without encouraging shadow AI practices.
The model aligned more strongly with government guidance and public expectations.
Leadership, technology teams and business units had a shared language for responsible AI adoption.
Responsible AI is not about slowing innovation. It is about making innovation sustainable.
In public sector environments, trust is not optional. Governance must enable people to do their jobs better while protecting the communities they serve.
This engagement demonstrated that when AI governance is practical, transparent and human-centred, organisations can move forward with confidence rather than caution.
Common questions about public-sector AI governance, responsible AI, human oversight and safe adoption.
Why did the organisation need AI governance at all?
Because experimentation alone created inconsistency and risk. Teams were using AI tools independently, without shared standards for privacy, data use, approvals, or accountability. Governance ensured AI could be adopted confidently and safely, rather than reactively or informally.

Did governance slow innovation down?
No. When designed well, governance removes uncertainty and accelerates experimentation. By defining clear guardrails and approval paths, staff know what they can do, what requires approval, and what is out of bounds, which reduces hesitation and avoids shadow use.

How was this different from a standard policy exercise?
We focused on practical adoption rather than theoretical frameworks. Governance was built around actual business workflows, existing assurance processes, and real use cases surfaced through engagement with frontline, operational, policy, and enabling teams.

Which public sector obligations shaped the framework?
We drew on requirements including privacy, security, record-keeping, transparency, and Te Tiriti o Waitangi considerations. The framework was designed to support trust, accountability, and public expectations, rather than simply meet technical compliance.

How were staff involved in shaping the approach?
Workshops, interviews, and elicitation sessions helped uncover real needs, risks, and opportunities. This built buy-in and ensured governance supported day-to-day work instead of becoming a top-down policy no one used.

What did the framework actually contain?
Clear categories of AI use, risk levels, allowed and prohibited activities, required approvals, data considerations, human-in-the-loop expectations, and responsible AI principles explained in plain language.

How were staff supported to apply it?
Through enablement tools such as role-based guidance, decision checklists, simple logging processes, and education sessions that focused on confidence rather than compliance.

What outcomes did the organisation see?
Greater confidence to use AI responsibly, improved visibility of AI activity, reduced risk exposure, and faster experimentation without shadow AI. Leadership and frontline teams gained clearer alignment and trust.

How were ethics and cultural considerations handled?
Respect for people, fairness, and community impacts were built into principles and decision-making. The approach emphasised transparency, accountability, and cultural awareness in line with public sector expectations.

Could this approach work outside government?
Yes. While tailored to the client’s context, the principles-based, category-driven model can be adapted across sectors where trust, transparency, and operational safety are critical.

What mattered most to the engagement’s success?
Grounding governance in real work. Rather than fear-driven restrictions, the model focused on enablement, clarity, and human accountability, which empowered staff instead of constraining them.
Other examples of practical AI governance, knowledge systems, behavioural simulation and secure transformation work.
Secure, scalable infrastructure support for business systems and long-term operational resilience.
AI-powered knowledge retrieval and operational support for a multi-site New Zealand retail environment.
Behavioural simulation and market validation using AI-supported analysis of trust, adoption and feasibility.

A Decision Clarity Session is a no-obligation conversation where we listen to where you are, what you’re trying to achieve, and what’s getting in the way. By the end, you’ll have a clearer picture of the decisions in front of you, whether that means AI, process improvement, transformation, or a combination of all three, and whether a Changeable engagement is the right next step. Alternatively, if you don’t feel ready yet, you can build your AI confidence with us before we implement anything.