Case Study

Beyond Feasibility: Market Validation with Behavioural AI

How Changeable used behavioural AI, simulation and human-centred analysis to test whether a national swap marketplace could work before a costly build decision was made.

Service: AI Strategy & Market Validation
Focus: Trust, fairness and behavioural feasibility
Outcome: Evidence-based no-go and local pilot recommendation
What changed
01
Assumption tested: Would people swap without money?
02
Behaviour simulated: Trust, effort and fairness thresholds were modelled
03
National risk exposed: Friction outweighed novelty appeal
04
Better path found: A localised pilot was recommended
Project overview

Testing what people would do, not just what they said they liked.

One of our clients had an idea to launch a national, consumer-to-consumer swap marketplace with no cash involved. Rather than relying on conventional feasibility methods alone, we sought evidence of how people would actually behave, validating and iterating until we could reach a defensible go / no-go decision.

We treated the concept as a set of behavioural hypotheses rather than a product build. Success would depend on whether real users could overcome trade-offs of time, trust, effort, and fairness.

Decision clarity

Behavioural friction outweighed novelty appeal, creating a no-go result for a national rollout.

Evidence-based pivot

The work suggested a localised pilot to test trust and fairness mechanics.

Reusable IP

Persona models, fairness bands and liquidity analytics now feed future behavioural validation work.

Strategic savings

The client avoided premature capital spend on an unvalidated market assumption.

Our thinking

Feasibility was not the real question. Behaviour was.

We examined the idea through multiple lenses, including people, exchanges, trust and context. The core question was whether the market could generate enough trust, fairness and liquidity to sustain repeat participation.

This made the project a strong fit for data modelling, behavioural simulation and governance-aware AI analysis rather than standard product feasibility alone.

We examined the idea through multiple lenses

  • People: Persona-based behavioural drivers, friction points, and motivations.
  • Exchanges: How bilateral trade without money changes fairness perception.
  • Trust: Assurance, reputation, and perceived risk.
  • Context: Micro and macro forces shaping feasibility.

Methodology overview

A structured validation method that combined persona modelling, market simulation, scenario testing and behavioural analytics.

01

Persona generation and calibration

We created a synthetic population of AI agents modelled on empirical New Zealand demographic and psychographic data, with behavioural parameters for risk tolerance, value sensitivity, effort bias and privacy preference.
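The calibration itself drew on proprietary data, but the shape of such a synthetic population can be sketched simply. In the illustrative snippet below, the `Persona` fields mirror the behavioural parameters named above; the uniform sampling is a placeholder for the real demographic and psychographic calibration.

```python
import random
from dataclasses import dataclass

@dataclass
class Persona:
    """One synthetic market participant (field names illustrative)."""
    risk_tolerance: float      # 0 = highly risk-averse, 1 = risk-seeking
    value_sensitivity: float   # how strongly perceived value gaps deter a swap
    effort_bias: float         # aversion to listing and negotiation effort
    privacy_preference: float  # reluctance to share personal details

def sample_population(n: int, seed: int = 0) -> list[Persona]:
    """Draw a toy population. Real calibration would fit these
    distributions to empirical demographic and psychographic data
    rather than sampling uniformly."""
    rng = random.Random(seed)
    return [
        Persona(rng.random(), rng.random(), rng.random(), rng.random())
        for _ in range(n)
    ]

population = sample_population(1000)
```

Seeding the generator keeps simulation runs reproducible, which matters when comparing scenarios against one another.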

02

Market simulation engine

We built a dynamic virtual economy where agents listed, browsed, negotiated and completed swaps, incorporating macro inputs, micro variables and reinforcement learning logic.
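The engine itself is far richer than any single rule, but the core decision each agent faces can be illustrated with a toy utility function. Everything here is a simplified stand-in: the weighting scheme, the 0.5 scaling and the field names are hypothetical, chosen only to show how fairness, trust and effort trade off in one accept/decline choice.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Minimal agent for the sketch; inputs below are all on a 0..1 scale."""
    risk_tolerance: float
    value_sensitivity: float
    effort_bias: float

def accepts_swap(agent: Agent, fairness: float, trust: float, effort: float) -> bool:
    """Toy rule: accept when the perceived fairness of the offer,
    discounted by distrust and by effort cost, clears the agent's
    personal value bar."""
    utility = (fairness
               - (1 - trust) * (1 - agent.risk_tolerance)
               - effort * agent.effort_bias)
    return utility > agent.value_sensitivity * 0.5

cautious = Agent(risk_tolerance=0.2, value_sensitivity=0.6, effort_bias=0.8)
print(accepts_swap(cautious, fairness=0.9, trust=0.9, effort=0.1))  # accepted
print(accepts_swap(cautious, fairness=0.9, trust=0.2, effort=0.1))  # declined
```

Even this crude rule reproduces the headline finding: the same offer flips from accepted to declined purely on the trust term, before price signals enter at all.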

03

Scenario development

We tested open swap markets, credit-based barter, AI-mediated fairness, assurance layers and regional density models.

04

Metrics captured

We collected transaction conversion rates, trust breaches, user drop-offs, negotiation cycles, perceived fairness and liquidity growth curves.
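As a minimal sketch of how such metrics roll up from simulated events (the event schema and field names here are illustrative, not the engine's actual data model):

```python
def liquidity_metrics(events: list[dict]) -> dict:
    """Summarise a batch of simulated swap attempts.
    Each event is a dict with 'completed' (bool), 'cycles' (int,
    negotiation rounds) and 'trust_breach' (bool)."""
    n = len(events)
    if n == 0:
        return {}
    completed = [e for e in events if e["completed"]]
    return {
        "conversion_rate": len(completed) / n,
        "drop_off_rate": 1 - len(completed) / n,
        "avg_negotiation_cycles": (
            sum(e["cycles"] for e in completed) / len(completed)
            if completed else 0.0
        ),
        "trust_breach_rate": sum(e["trust_breach"] for e in events) / n,
    }

sample = [
    {"completed": True,  "cycles": 2, "trust_breach": False},
    {"completed": False, "cycles": 5, "trust_breach": True},
    {"completed": True,  "cycles": 3, "trust_breach": False},
    {"completed": False, "cycles": 1, "trust_breach": False},
]
print(liquidity_metrics(sample))
```

Tracking these ratios per simulated week is what produces the liquidity growth curves referred to above.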

05

Learning extraction

AI analytics clustered successful transactions to identify behavioural levers, including trust, fairness, effort and emotional resonance.
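The production analytics used clustering; as a deliberately simpler stand-in, the idea of surfacing behavioural levers can be shown by tallying which lever scored highest in each completed swap. The lever names and scores below are hypothetical.

```python
from collections import Counter

LEVERS = ("trust", "fairness", "low_effort", "emotional_resonance")

def dominant_levers(successful_swaps: list[dict]) -> Counter:
    """For each completed swap (a dict of lever scores on 0..1),
    record which lever scored highest, then count how often each
    lever dominates across the dataset."""
    return Counter(max(LEVERS, key=swap.get) for swap in successful_swaps)

swaps = [
    {"trust": 0.9, "fairness": 0.7, "low_effort": 0.4, "emotional_resonance": 0.2},
    {"trust": 0.8, "fairness": 0.6, "low_effort": 0.5, "emotional_resonance": 0.3},
    {"trust": 0.3, "fairness": 0.9, "low_effort": 0.6, "emotional_resonance": 0.4},
]
print(dominant_levers(swaps))
```

A proper clustering pass groups whole transactions rather than single scores, but the output question is the same: which behavioural conditions were present when swaps actually completed?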

06

Validation layers

Reliability was strengthened through triangulation, sensitivity analysis, bias checks, benchmarking and cross-validation against wider market conditions.

Macro and micro context

The market was shaped by more than demand.

To ensure validity, we grounded the behavioural analysis in real-world trends, including sustainability, localism, digital fatigue, cost-of-living pressure, mature resale ecosystems, time scarcity and privacy sensitivity.

The work recognised that people may value circularity in principle while still defaulting to convenience in practice. That tension became central to the simulation design.

Context factors tested

  • Sustainability vs convenience paradox
  • Trust inflation and stronger assurance expectations
  • Localism resurgence and neighbourhood-scale trust
  • Cost-of-living pressure and risk aversion
  • Time scarcity and engagement friction
  • Privacy sensitivity, transparency and consent expectations

Outcomes and benefits

The strongest result was not a build recommendation. It was a clearer decision about what not to build too early.

Decision clarity

Behavioural friction outweighed novelty appeal, creating a defensible no-go for a national rollout.

Evidence-based pivot

A localised pilot was suggested to test trust and fairness mechanics before broader investment.

Reusable behavioural IP

Persona models, fairness bands and liquidity analytics now feed future validation frameworks.

Strategic savings

The client avoided premature capital spend and gained insight that could scale to governance and sustainability sectors.

Policy playbooks

Evidence-based guidance helped inform councils, sustainability groups and marketplaces interested in safer micro-economies.

Trust scaffolds

Reusable architectures were identified to simulate and stabilise low-trust environments.

Strategic significance

Changeable built a capability for modelling uncertainty before investment.

By using AI to replicate human market behaviour, Changeable built a unique capability: testing human-AI interaction dynamics in controlled, data-rich conditions, generating proprietary behavioural IP, and bridging AI systems design with behavioural economics and governance insight.

This positions Changeable as a leader in applied AI ethics, human-centred design and governance strategy, not just as a builder of tools, but as a builder of understanding.

What this set in motion

This project formed the conceptual foundation for the Ministry of Insights Behaviour-Led Validation Framework, integrating behavioural science with AI-based simulation.

  • Predictive trust modelling
  • Market ethics analysis
  • Real-world simulation of human-AI systems
  • Governance, sustainability and marketplace strategy support

Questions

Have a question about Behavioural AI?

Common questions about behavioural simulation, market validation, trust modelling and AI-supported decision-making.

Why didn’t Changeable use traditional feasibility or market research methods?

Traditional methods excel at measuring interest and intent, but they struggle to reveal how people actually behave when trust, fairness, and effort are involved. We used behavioural AI because swapping without money is driven by human psychology, not just preference surveys.

What made this challenge uniquely suited to behavioural simulation?

A swap-based market removes price signals and replaces them with subjective fairness and trust. That means success depends on behavioural thresholds rather than supply-demand modelling alone. Simulation let us observe what users would do, not just what they said they would do.

How realistic were the personas and simulations?

Personas were calibrated using New Zealand demographic and psychographic data, with behavioural traits like risk tolerance and effort bias. Simulations incorporated macroeconomic factors, location density, and reinforcement learning so agent behaviour approximated real-world trade dynamics.

What kinds of scenarios were tested?

We modelled multiple swap configurations, including open markets, credit-based barter, fairness-assisted trades, assurance features, and local versus national density. This allowed us to compare adoption and drop-off under different trust and friction conditions.

How did you measure success or failure?

We tracked transaction conversion rates, negotiation cycles, drop-offs, perceived fairness, trust breaches, and market liquidity curves. These metrics provided evidence for whether participation would grow or collapse over time.

Did the simulations show the marketplace could work nationally?

No. Behavioural friction consistently outweighed novelty appeal in a national model. Trust, effort, and fairness concerns prevented sustainable adoption at scale. A localised pilot proved more viable as an incremental strategy.

Why is a “no-go” outcome still valuable?

It prevents large capital spend on an unvalidated assumption. It also redirects investment toward models that do work, such as hyper-local trust networks, community marketplaces, or credit-supported swap systems. Insight beats optimism.

What did the client receive at the end of the engagement?

They received behavioural models, scenario analytics, trust scaffolding patterns, and strategic guidance. Alongside this were policy playbooks and communication assets that translate complex findings into clear decisions for stakeholders.

Can this approach be applied to other markets or products?

Yes. Any product where trust, fairness, risk, or effort play a central role, including marketplaces, sustainability schemes, public sector mechanisms and shared resources, can benefit from behavioural simulation before building.

How does this differentiate Changeable from other AI consultancies?

We don’t just build tools. We model human behaviour, ethics, and system dynamics to answer the hardest question early: Will this work once real people use it? Our approach combines AI, behavioural science, and governance to inform strategic decisions, not speculative builds.

When should an organisation consider behavioural validation?

When uncertainty is high, trust is pivotal, friction threatens engagement, or market novelty risks overconfidence. Behavioural AI provides clarity before investment and avoids costly optimism bias.

What did this project set in motion for Changeable?

It seeded the Ministry of Insights Behaviour-Led Validation Framework, our structured method for pressure-testing ideas against behavioural realities. The reusable IP now supports governance, sustainability, and marketplace strategy engagements.

Ready to test your ideas against human behaviour before you build?

Let’s design an anonymised, behaviour-first validation for your concept. A Decision Clarity Session is a no-obligation conversation where we listen to where you are, what you’re trying to achieve, and what’s getting in the way. Alternatively, if you don’t feel ready yet, start by building your AI confidence before we implement anything.