Our approach to AI

Switching on AI is not the same as having a strategy for AI.

AI is powerful, and that power brings opportunity and risk — to data, to decisions, to accountability, and to trust. Tooling decisions, pilots, and credit spend do not, on their own, constitute a strategy, and they do not address where AI creates value or exposure. The work that matters — and where the risk concentrates — sits in governance, operating model, data foundations, and the judgement required to use AI well. That is the work we do.

What we work on

The areas where we focus.

AI return on investment

AI spending is growing faster than the discipline around measuring what it returns. The costs are easier to hide than for traditional IT — licence sprawl, time spent on prompts and refinement, integration and governance overhead, and the compute that supports it — while the benefits are often diffuse: time saved, faster triage, decisions made better or sooner. We help organisations frame the question honestly — what is being spent, what is being avoided or gained, what should be measured — and put the framework in place to track it.

AI strategy

An AI strategy is a set of deliberate choices: where AI is worth applying, what outcomes it should support, what foundations need to be in place first, and where the organisation should not invest. The aim is not exhaustiveness but defensibility. We help executive teams and boards establish that position — assessing readiness, prioritising opportunities, and identifying where AI is unlikely to repay the investment.

AI governance

AI governance sits where data governance, technology risk, and operational accountability meet. The questions are familiar in shape but new in scope: who is accountable for decisions AI informs or makes; what data is being used and how it is controlled; how models, prompts, and outputs are reviewed, retained, and audited; where human judgement remains authoritative. Most organisations have policy gaps here because existing frameworks predate the technology. We help organisations establish AI governance that is consistent with their broader risk, data, and technology governance — and that holds up to board and regulatory scrutiny.

AI implementation

We are not a model-building firm. We work on the implementation decisions that determine whether AI delivers value or creates exposure: integration patterns, identity and access, data handling, private deployment options, vendor evaluation, operating model changes, capability uplift, and pilot design. The choices made here decide whether AI becomes part of how the organisation operates, or another shelf of tools.

AI adoption and the workforce

The hardest part of AI adoption is rarely the technology. Workforces respond unevenly: anxiety about role change and job security, varying willingness or ability to use the tools, shadow adoption that adds risk to data and process, and a quiet de-skilling as judgement once exercised by people becomes something the tool performs first. Trust in AI output is itself unevenly calibrated — over-trust and under-trust each carry their own problems. We help organisations work through the operating model and change implications: which roles are affected, how skills should evolve, what good management of AI-augmented work looks like, and how to pace adoption to what the workforce can actually absorb.

When to engage

When organisations come to us.

  • The organisation is making AI decisions through pilots and tooling choices, rather than from a deliberate position
  • Existing governance, risk, and data frameworks do not yet account for AI use
  • AI investment is increasing but the organisation cannot articulate what it is getting in return
  • A board or executive needs an independent position on AI strategy or risk
  • AI vendor proposals or platform decisions require independent review before commitment
  • Adoption is uneven, or the workforce implications of AI need to be worked through deliberately

Disclosure

Our own use of AI

We selectively use AI-assisted tools to support analysis, synthesis, and document review within engagements. The intent is to reduce manual overhead and direct our advisors' time toward judgement, stakeholder engagement, and the parts of advisory work where experience matters most. Accountability for the advice we provide rests with our advisors. Client information is handled in line with the confidentiality and data handling commitments agreed at the start of each engagement.

Ready to talk?

Cut through the complexity.

Whether you're facing a critical technology decision, a program in trouble, or a board that needs clarity, we can help.

Level 20, Tower 2  ·  201 Sussex Street, Sydney NSW 2000