research cooperative

Applied AI research, done through the work itself.

AI engineering, organisational design, and governance — done as one practice. Each engagement is a field site; what we learn funds methods the next one starts with.

Collective experience of the founding team

Most AI adoption looks like progress but leaves organisations more fragile than before.

The engineering team ships tools. The change team writes policies. Neither talks to the other until something breaks. The result is systems that work in the pilot but stall in practice. Speed goes up. The organisation's ability to adapt goes down.

New capability, without dismantling what already functions.

what most AI adoption produces

Locked in

Systems that look efficient but can't respond when conditions change. An organisation dependent on tools it doesn't understand.

what we work toward

Adaptive

Tools the team can inspect, override, and explain. New capability the organisation can actually steer.

How the work compounds.

The research and the practice are the same thing.

The same tools we ship for clients also help us study how the cooperative learns.

01

Engage

We work with you on real problems, under real conditions. A small team from the cooperative — engineering, organisational design, governance — drops into the engagement together. The research happens inside the work, not adjacent to it.

02

Learn

We instrument the work with consent. Every engagement produces decision traces, working notes, and artefacts the cooperative learns from. Some engagements go deeper — semantic patterns, biosignals, coupling-quality studies — when the research scope warrants it.

03

Improve

What we learn improves tools and sharpens methods. Some of it ships back into the open-source stack; some becomes methodology the cooperative writes up and publishes. Your next engagement starts where the last one finished.

Three forms the work takes

Trustworthy AI, transformation advisory, senior people who stay with the engagement — three on-ramps to the same practice.

Trustworthy AI systems

Agentic AI systems that explain their reasoning, flag uncertainty, and change course when the evidence changes. Learning by design.

Transformation advisory

Guidance on AI adoption that accounts for how change actually lands in organisations and on the people inside them. Not a playbook. A practice grounded in ongoing research.

Practitioner teams

Teams assembled from the cooperative for your specific engagement. You get experienced people who stay with the work and learn with you, rather than handing off to junior staff.

Tools we build and share

Everything we use in client work is built on free software we release. Patent-free. What matters is the knowledge of how to use these tools well, suited to your context.

See the full stack →

Released under the Earthian Stewardship License (ESL-A). Preserves study, modification, and redistribution freedoms while restricting deployment for surveillance, manipulation, or harm.

Three principles, built in.

These aren't values on a wall. They're built into how we coordinate, what we build, and how we pay each other.

Work is the coordination

We coordinate through the work itself. Shared documents, visible decisions, open code. Fewer meetings.

The practice teaches

We're our own first case study. Methods, tools, and governance get tested through our own practice. The cooperative is the experiment.

People aren't averages

No member's wellbeing gets sacrificed for the group's metrics. Individual paths matter. Our economics and governance protect them.

Founding members

Different disciplines, one conviction: that what we build should nourish people, communities, and the planet.

Hugo O'Connor

Trust engineering

R&D engineer and system architect with a background in applied cryptography and supply-chain integrity. Co-founder of Bit Trade (acquired by Kraken). Enjoys making things for and with other people, to good purpose.

Mathew Mytka

Transformative adaptation

Imagineer, tech ethicist, and designer. Lecturer at University of Wollongong on AI and Transformation. Studies how AI integration shapes adaptive capacity in people and organisations. Known to converse with ravens and occasionally rap.

Claire Barnes

Systems engineering

Software and systems wrangler. Often thinking about how we can better manage complexity in tech. Loves simple, well-crafted tools designed with humans in mind. Can be found foraging for mushrooms or making strange noises with synthesisers.

Dave Factor

Automation engineering

Specialises in designing and implementing automated systems to improve efficiency and reliability. A philosopher of machines and human interaction who also makes great sourdough.

Viveka Weiley

Strategic design

Designer and research convenor. Leads CSIRO's Concept Lab, where creative intelligence is grown alongside scientific discovery. Twenty-five years across participatory design, interactive geovisualisation, AI/ML, and XR. Keeps sharp tools for sashimono and the sea.

Tell us what you're working on

Whether you're navigating AI adoption and want a team that's honest about what works, a practitioner ready to do serious R&D in the open, or an organisation that wants a seat at the table rather than just a service contract, we'd like to hear from you.

hello@anuna.io