
AI: The Ultimate Solo Act or Future Team Player?

June 18, 2025 | by Mike Oliver


A recent article from the University of Exeter has stirred an interesting debate within the AI community, and it connects directly to some of the insights we uncovered on a recent Easol project.

The study, published in Science Advances, found that while generative AI can boost individual creativity, it often reduces variety across a group. In short, each person's output improves, but the group's collective output becomes less diverse. It highlights a growing challenge in AI integration: we've made huge strides in using AI as a personal assistant, but we're still figuring out how to turn it into a valuable, active member of a team.

That’s exactly the issue we explored during Soldier Assist / Robin, a project assessing how Large Language Models (LLMs) could support decision-making in complex military operations.

Lessons from the Frontline of AI Integration

Robin was designed to support multi-operator teams. In practice, users mostly interacted with the AI one-to-one, often ignoring group collaboration tools like team chat functions. AI became a personal assistant rather than a shared teammate, mirroring the Exeter findings in a completely different, high-pressure setting.

This leaves an important question on the table. How do we move beyond AI as a personal tool and start integrating it as a true part of the team?

Why Trust and Team Dynamics Matter

One of the biggest lessons from Soldier Assist / Robin was how quickly trust in AI systems can be built, and how quickly it can be lost. Pre-deployment training helped establish initial confidence, but that confidence proved fragile, especially when the AI delivered inaccurate or irrelevant outputs.

This is where the AI Handler role proved essential. Acting as a guide and monitor, the Handler helped manage trust in the system, making sure AI was supporting the team’s mission rather than distracting from it. But even with this role in place, achieving seamless coordination and shared awareness through AI was far from simple.

The bigger issue lies in how AI changes team dynamics. Existing organisational structures, decision-making processes and workflows aren’t built with digital teammates in mind. Introducing AI isn’t as easy as switching on a tool; it requires careful thought about how it fits within the team and how people interact with it, especially when decisions need to be made at pace.

The Future of Human-AI Teaming

Both the Exeter research and our own findings land in the same place. AI works brilliantly as a personal assistant, but integrating it properly into teams needs a new way of thinking. From military operations to disaster response, the challenge is about more than tech. It’s about people, process, trust and culture.

The next step for AI is not to get smarter in isolation, but to become better at working with people. That means supporting collaborative decision-making, improving shared awareness and enhancing group performance without pulling people into isolated AI interactions.

At Easol, we’re continuing to explore how AI can genuinely work as part of a team, not just sit on the sidelines. Further work is already underway, including person-versus-machine trials and new experiments with the AI Handler concept in complex, multi-agency environments.

The opportunity is huge. The question now is who will be bold enough to solve it.
