Published by Brilliant Noise
Written by Antony Mayfield. Edited by Stephanie Hubbard.
Contributors: Dr Jason Ryan, Katie St Laurence, Rachel Stubbs, Harriet Malina-Derben
© Brilliant Noise 2024. All rights reserved.
About Brilliant Noise
For 15 years, we’ve helped some of the world’s most ambitious brands navigate the complexities of digital change – building capability, shifting culture and supporting teams through real, sometimes messy, transformation.
That experience shaped us. It taught us how organisations adapt, how people respond to new technology, and what it really takes to do things differently.
When generative AI emerged, we didn’t wait and see. We rebuilt the business from the ground up – refocusing, retooling our methods, and rethinking our role with the brands we work with.
Because we believe AI isn’t just the next wave of technology. It’s the next transformation. And we know exactly what it takes to lead one.
A note on definitions and terms.
For ease and simplicity we will refer to generative artificial intelligence systems as AI throughout this paper. If we start talking about other types of technology we will make that clear. (If you prefer the terms Gen AI, gen AI, generative AI or GAI please contact us and we could easily send you a personalised version. It's no trouble, because: AI. Email: [email protected] for that free service.)
AI: What's the catch?
Imagine if you offered your colleagues a simple way to achieve an extra day's work each week without working longer hours. The response would be something along the lines of: "Sure, but what's the catch?"
Generative artificial intelligence (AI*) systems like ChatGPT offer those performance boosts to anyone, but the catch is this: using AI is easy to start but deeply challenging to develop.
It's easy because the tools are freely available, the interfaces are familiar and anyone who can ask a question can get started.
It's challenging because using the tools can quickly disappoint, raise difficult questions, or become baffling.
There's so much opportunity. And yet, organisational AI adoption is proving tricky.
The challenge lies in leading teams to develop AI skills that will fully unlock its potential.

Like learning a language
The reason that learning to use AI effectively is so hard is because it's more like learning a language than learning to use a computer system.
AI doesn't behave like any other computer system we've experienced before. Like language, AI is not consistent. It changes. It behaves much more like a human brain than a computer — constantly learning and adapting to its environment and stimuli.
And learning to use it is a lot like learning a language — we need to learn the words, but we also need constant exposure, not only to learn to speak in this new language but to think with it too. Just like learning a language, using AI effectively requires practice, immersion, and continuous learning.
It requires us to become AI literate.

*Dell'Acqua, F., McFowland, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F. and Lakhani, K. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Working Paper 24-013.
What is AI literacy?
A working definition
AI literacy is an evolving set of skills, including critical thinking, knowledge of the limitations of AI systems, the ability to assess their outputs, and an understanding of where they can complement or enhance human cognition and expertise in a given field.
It's the ability to understand, evaluate and use artificial intelligence systems and tools in a responsible, ethical and effective way.
AI has moved very quickly from an edge technology into a mainstream tool. Technical expertise isn't required in the same way as it once was in order to use this technology to achieve sophisticated results. However, it's still crucial to grasp the principles behind how AI is built and operates — this knowledge helps leaders make informed decisions about when and how to integrate AI into their processes.
AI literacy involves recognising AI's strengths and limitations, understanding that outputs may be flawed or biased, and knowing where AI can best complement human thinking. Critical thinking is key — not just in spotting errors, but in questioning how AI arrives at conclusions and how our own biases may influence the results. It's only through continued use and experimentation we gain the wisdom to make these kinds of judgments.
AI is not just a technical tool anymore; it's been irrevocably released into the world and therefore carries organisational and moral implications. Using it effectively means understanding its role as a tool that enhances — rather than replaces — human intelligence, so we must learn how to "dance with the system", as systems thinking pioneer Donella Meadows called it.*

*Meadows, D. 'Dancing with Systems'. The Academy for Systems Change (2014). https://donellameadows.org/archives/dancing-with-systems
A framework for AI literacy
Levels of literacy in AI can be understood as a progression from foundational knowledge, where individuals grasp basic concepts and automate simple tasks, to advanced literacy, where teams innovate and create new business models using AI.
As individuals and teams advance through these levels, they experience increasingly significant benefits, transforming not just efficiency but also their overall capabilities.
More sophisticated models of literacy will be necessary in the future, but for the purposes of today's teams and leaders making decisions about how to make use of AI, this one works well.
1
Pre-literacy
Occasional users and experimenters are in this category. They may be getting some benefits, but they are usually limited by a lack of knowledge or support to develop their skills. Frequently they use the free versions of services, and aren't aware of the performance differences between leading models, or of how structured prompts and dynamic conversations can get better results.
2
Foundational AI Literacy
Individuals understand what AI is and how it works at a conceptual level. Leaders and teams start to experiment with AI tools, gaining efficiency in task automation. Benefits at this stage are immediate but limited to incremental gains in productivity or decision-making efficiency. This level only scratches the surface of true innovation; however, it opens the mind to the possibilities.
3
Intermediate AI Literacy
Individuals and teams begin to integrate AI into existing workflows in more innovative ways. This level goes beyond mere usage; it's about creating new processes that leverage AI to optimise and rethink how work is done. The returns at this level are far greater, as AI shifts from a tool for small improvements to a catalyst for transforming processes. Organisations start to see compounding benefits across departments, from cost savings to faster innovation cycles.
4
Advanced AI Literacy
At the highest level, AI is used not only to optimise but to create entirely new business models, services, or products. Teams that reach this level understand AI deeply enough to push boundaries and design AI-driven strategies that weren't possible before. The benefits here are exponential, driving competitive advantage and opening up entirely new markets and opportunities.
Literacy levels are moving
Like everything to do with AI, the levels of literacy are moving and evolving. We think of them like three trains moving at different speeds, each 10 times faster than the one below.
Level one is easy to hop on to. Once you're able to get things done faster with good prompts and fluency — nudging and pushing the system to do what you need or do better — then you're able to develop your knowledge of AI faster. Once you transition to level two — developing bots and repeatable processes or even small apps with AI to speed up team work — you start to develop knowledge and skills at a faster rate, an order of magnitude faster.
This could explain another common puzzle for users of AI. Regardless of age or profession, we repeatedly hear variations of the question: "I just don't understand why everyone isn't using this!" We've heard it from bosses talking about their teams, creatives talking about technical colleagues, even undergraduates talking about their fellow students. The reason is that these users are moving quickly away from their previous point of knowledge, and it's hard to remember what it was like before.
Three phases of AI integration
AI literacy creates value. Knowledge opens doors to more knowledge. As teams learn about AI, they discover new gaps in their understanding, which they then fill. We've found this process leads to better work and business gains.
1
Do what you do now but better
We can speed up everyday tasks like writing emails, taking notes, creating reports, or writing newsletters. Often, with better results.
2
Do what you do in new ways
Work that requires multiple tasks in sequences – processes – with more than one person, can be organised differently to take advantage of AI. For example, using meeting conversations as raw data to create reports, product descriptions and proposals leads to different, faster, better ways of getting things done.
3
Find new things
New tech brings new markets and ways of doing things. It's hard to see these at first. For example, when the iPhone came out, no one had thought of Uber, Tinder, Instagram or TikTok yet. We're in the early days of AI. We can't even imagine what will be invented.
Measurable gains
The growing number of AI tools and apps can be overwhelming. However, the best way to develop literacy right now is to use the paid services of leading providers such as OpenAI, Anthropic, and Google, as they offer the most advanced and user-friendly systems available.
For now, we recommend OpenAI's ChatGPT for the relative stability and consistency of the user experience and its performance, which is important for foundational learning.
Investing time and money in one platform doesn't tie you down. The literacy is 'portable'. Once you learn to prompt and engage effectively with one model, you'll be able to apply those skills to others in the future.
Revolutions need leaders. And leaders need AI vertigo.
When considering who to train, the answer is: as many people as you can. But make sure your organisation's leaders are first. This can be a challenge with some, but it's always worth it.
AI can boost work, but it's not a cure-all. Real revolution happens when we push our teams to question and rethink how we work. We need to create a space where new ideas thrive and old limits fade.
This goes beyond new tech; it's about shifting our outlook. We should build a workplace that values bold attempts and new discoveries. Where the thrill of finding novel solutions outweighs the fear of mistakes.
This shift doesn't just happen. Leaders must drive it. They can't just nod from afar; they need to dive in. By getting hands-on with AI, they'll grasp its true power and inspire others to follow suit.
Easier said than done.
One way we've found to unlock a leader's understanding of how to use AI is one-on-one AI Power Hours, where a senior consultant works alongside them to tackle a difficult task using foundational AI tools and techniques. They often experience 'AI vertigo' — a dizzying moment when AI surprises you (Casey Newton of Platformer coined the phrase).
It's a 'show, don't tell' format where the power of thinking with AI tends to astound: as well as winning back several hours of time, leaders start to understand the expansive nature of what AI means for business.
Benefits of AI Literacy
The business case
AI is no longer a matter of competitive advantage. It's a matter of competitive necessity. It's a matter of being able to compete at all.
+$15.7 Trillion
PwC predicts that in just six years' time, generative AI will be contributing the equivalent of five UKs of additional GDP to the global economy.
Bottom line: a lot of value is about to be created.
40% quality increase
A Harvard Business School study shows that knowledge workers using basic generative AI tools in their daily work experience a 40% increase in the quality of their work.
+1 day per week
The same Harvard Business School study reported the equivalent of an extra day of work every week in terms of output (12.5% more tasks completed, 25% faster on each). We also have anecdotal reports of dramatic time savings, such as a board member of a public company completing 4 hours of work in just 20 minutes after an AI workshop.
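As a rough sanity check, the headline figures can be turned into a back-of-envelope calculation. This is our own illustrative arithmetic, not the study's method: it assumes the roughly 25% speed-up applies uniformly across a standard 40-hour week.

```python
# Back-of-envelope sketch (our assumption, not the study's method):
# apply the ~25% speed-up uniformly to a standard 40-hour week.
HOURS_PER_WEEK = 40
HOURS_PER_DAY = 8
SPEEDUP = 0.25  # tasks completed ~25% faster

hours_saved = HOURS_PER_WEEK * SPEEDUP      # hours reclaimed per week
days_saved = hours_saved / HOURS_PER_DAY    # expressed in working days

print(f"Hours reclaimed per week: {hours_saved:.1f}")  # 10.0
print(f"Equivalent working days: {days_saved:.2f}")    # 1.25
```

On these assumptions the saving comes out at roughly a day and a quarter per week, consistent with the 'extra day' framing above.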
Finding ROI for AI
Investing in AI literacy across an organisation yields substantial returns in the short term, but AI literacy will deliver even more value in the long-term.
The capabilities of AI systems are increasing by the week, and as new features and tech become available, companies with AI literate teams will be able to spot them sooner and exploit them faster than competitors.
There are so many platforms, programmes and models to take advantage of for huge organisational benefits now, but the challenge lies in assessing them and knowing how, when and where to implement them.
Increasing ROI
Think of it as compounding Return On Investment (ROI): each use of AI delivers a performance and productivity boost that also strengthens the skills of an AI-literate team, so the returns build on themselves.
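The compounding idea can be illustrated with toy numbers. The 5% quarterly gain below is a hypothetical figure chosen for illustration, not a measured result:

```python
# Toy illustration of compounding returns from AI literacy.
# Assumption (hypothetical): each quarter of practice lifts the
# team's productivity by a further 5% on top of previous gains.
QUARTERLY_GAIN = 0.05
QUARTERS = 8  # two years

compounded = (1 + QUARTERLY_GAIN) ** QUARTERS   # gains multiply
additive = 1 + QUARTERLY_GAIN * QUARTERS        # gains merely add

print(f"Compounded: {compounded:.2f}x baseline")  # 1.48x
print(f"Additive:   {additive:.2f}x baseline")    # 1.40x
```

The gap between the two lines is the point: a team whose skills build on each other pulls steadily ahead of one making the same fixed improvement each quarter.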
Metrics
There are many metrics to measure the ROI of AI literacy initiatives.
Quantitative Metrics
  • Productivity increases (e.g. tasks completed per hour/day/week)
  • Time reduction in how long employees spend on particular tasks (e.g. campaign planning, research or creative production)
  • Cost savings from AI-driven process improvements over previous solutions
  • Revenue generated from AI-enabled products or services
  • Reduction in errors or security incidents
Qualitative Metrics
  • Employee confidence in using AI tools
  • Pace of AI-driven innovation
  • Customer satisfaction with AI-enhanced offerings
So…what's stopping us?
Many of us understand the benefits. Or can imagine them. So why aren't people using AI to its full potential? There are many reasons, a lot of them complex, many of them emotional.
Stuff that slows us down
AI reluctancy
"I literally don't understand why my team isn't using it. I have found it to be incredible! The tools are there, I've told them to expense training. But it's still just me using it. "
- Marketing director, finance sector Source: Brilliant Noise stakeholder research (2024).
While AI literacy offers tremendous benefits, one of the primary obstacles to AI adoption is resistance to change. The scale of the change and the strangeness of the technology create emotional resistance to using the technology.
AI's image problem
"When I suggest to clients they use ChatGPT to help with an idea they seem... disgusted. It's a visceral reaction. They think it's cheating, or dangerous somehow."
- Senior content marketing consultant
Generative AI has been framed as a way to automate people out of a job. Of course, this makes people feel suspicious of AI technologies. Some marketing teams have already been decimated in the hope that AI will soon pick up the slack. In our experience, this is a mistake. AI is better as an accelerator, not a replacement for minds.
Tech Solutionism
"The board sounded very excited about AI, but the main action so far has been to cut jobs. We've lost 60% of our content production capability but there's been no training or tools to use AI to pick up the slack."
- Marketing director, B2B
The tech alone isn't the solution; the people using the tech effectively are. Equally, waiting for the perfect enterprise-approved AI solution to come along is a dangerous approach. It will arrive too late and already out of date.
AI learner types
Over the past two years, we've been working with big and small teams — in large global corporations, and independent agencies — and we've come to recognise the different characters you come across on an AI literacy journey.
Anxiety, fear, and uncertainty are common reactions when faced with new technologies. So it's useful to recognise these different characters and to understand what emotion is driving them in order to alleviate worry and concern, address the psychological aspects of technological change, and help them grow their expectations and ambition when it comes to AI.
These are some of the learner types we've come to recognise:
  • Open minders: Many people simply want to learn as much as they can and put it to work. What stops them is being careful not to get things wrong or step on toes. Frameworks and guidelines set them free to start exploring.
  • Nervous superheroes: Often the most advanced users before the process begins, they can be a little nervous. They have been championing AI use for a while and are confused about why so few have joined them in experimenting with new ways of working. Again, they're usually driven by a fear of being wrong.
  • Sceptical dynamos: Sceptics are there to let you know they disagree or already know everything. They are often vital members of learning cohorts: once they have a usable, non-technical definition of what generative AI can do, they become very active advocates and supporters of other learners. Scepticism is a strength in developing AI literacy — the value of critical thinking is huge in any organisation-level behaviour change.
  • Guilty dabblers: Very often embarrassed leaders will say things like "I know I should have worked this out by now but I just haven't been able to prioritise it". They've been intrigued, but breakthroughs and vertigo moments have eluded them.
  • Threatened thinkers: Deep experts in a field, often technical, who feel their status threatened by machines that seem to do some of what they can do. Sometimes, they're almost annoyed that finally everyone has 'got' AI. These types are amazing at understanding the possibilities and limitations of AI, so are great for building bespoke solutions, but might not understand or accept the very necessary process of experimentation to get to an optimal level of AI literacy.
How to build AI literacy in your team
Most teams are eager to start using AI to solve their problems. But you can’t solve business problems in-house without building AI literacy first. Once teams have built understanding, they’re not only able to implement AI solutions faster, but are also more likely to continue using AI tools effectively and responsibly over time. These are the optimal steps we have learned are essential for building AI literacy:
1
Get hands-on asap
Let people experience AI firsthand to spark curiosity and engagement.
2
Understand the machine
Provide context on AI history, types, and how large language models work.
3
Filter the noise
Explain the AI value stack, tech race, and power of prompts.
4
Beware the thinking traps
Encourage collaboration with AI, not avoidance.
5
Address security and ethics
Develop an AI policy covering safety, data usage, and bias mitigation.

1. Get hands-on asap
People learn faster when they get hands-on experience with AI systems.
We try to create a sense of ‘AI vertigo’ — the sudden expansion of possibilities and shift in problem-solving approaches — early on. Framed in the right way this can be exhilarating and motivating.
We get people working with AI as soon as possible, getting them to try different models and prompt structures so they can see the difference these things can make to the quality of their results. Once they’ve done it themselves, we explain why it works or doesn’t.
This ‘show then tell’ approach sparks curiosity and engagement, making subsequent explanations more meaningful and relatable. For example, instead of explaining how AI can assist in content creation, we demonstrate it by collaboratively writing a blog post with an AI tool.
A sense of surprise activates our brain’s capacity for learning. Find the right demo, and it usually doesn’t take long before even the most sceptical of sceptics are surprised and delighted by something AI can do for them.
Practical exercises might include experimenting with different prompts, exploring AI-assisted design or visualisation tools, or using AI for data analysis. The key is to create a safe space for learning where team members can explore, make mistakes, and gain confidence in their interactions with AI.
2. Understand the machine
To fully grasp the implications of AI, we’ve found it's important to show the timeline of human efforts and events that have created thinking machines and how recent innovations in generative AI came to be.
A brief look at the history of AI helps put today’s advancements in context, while recognising the differences between narrow AI, general AI, and other approaches provides a clearer understanding of what the technology can and cannot do.
At the heart of many AI applications are Large Language Models (LLMs), and understanding the basics of what they actually are and how they operate is essential since they form the foundation of generative AI systems. Exploring how these developments represent a fundamental shift in technology reveals the vast potential impact AI could have across industries.
Understanding the machine usually follows this structure:
  • History of AI: A brief overview of AI’s evolution helps put current developments in perspective.
  • Types of AI: The differences between narrow AI, general AI, and various AI approaches to provide a full picture of the technology’s capabilities and limitations.
  • Large Language Models (LLMs): The basics of how LLMs (the systems that power generative AI) work, as they form the backbone of many current AI applications.
  • The AI Revolution: Why generative AI represents a fundamental shift in technology, and its potential impacts across industries.
3. Filter the noise
Deep technical knowledge isn't essential for everyone. But we’ve found that our clients appreciate having a basic understanding of how AI systems work in terms of:
  • The companies involved and who owns the different AI models
  • The commercial value of things like the data and chips used to fuel AI models
  • The power dynamics and corporate agendas
They want to know this so they can filter the noise and make sound judgments. Understanding the competitive landscape, or ‘Tech Race’, also helps inform strategic decisions around AI adoption and investment.
And this involves exploring the AI value stack — which encompasses the layers of technology and processes, from data collection to application development, that drive AI functionality.
Learning this helps to explain why some prompts work better than others, e.g. how an effective prompt creates context that refines the AI's focus.
To summarise, our curriculum usually covers:
  • The AI Value Stack: The layers of technology, processes and companies that contribute to AI’s functionality, from data collection to application development.
  • The Tech Race: The competitive landscape in AI development that informs strategic decisions about adoption and investment.
  • The Power of Prompts: Why prompts work in directing AI behaviour and how to craft effective prompts for different purposes.
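To make the prompt point concrete, here is a minimal sketch of one common structured-prompt pattern. The pattern and the example values are our own illustration, not an official template: separating role, context, task and output format is one simple way to give a model the context that refines its focus.

```python
# Minimal structured-prompt sketch (illustrative pattern, not an
# official template). Separating role, context, task and format
# gives the model explicit context to work with.
def build_prompt(role: str, context: str, task: str, fmt: str) -> str:
    return (
        f"You are {role}.\n\n"
        f"Context: {context}\n\n"
        f"Task: {task}\n\n"
        f"Format the answer as: {fmt}"
    )

prompt = build_prompt(
    role="a senior B2B marketing strategist",
    context="We are launching a project-management tool for agencies.",
    task="Draft three positioning statements for the launch page.",
    fmt="a numbered list, one sentence each",
)
print(prompt)
```

The same text could be sent to any leading model; the skill being built (stating who, what, and in what form) transfers between platforms, which is why the literacy is 'portable'.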
4. Beware the thinking traps
‘Delegation dodging’ — the habit of avoiding delegation because “it’s faster to do it myself” — often stems from thinking traps like lack of trust or fear of failing at more complex tasks.
A similar dynamic occurs with AI adoption, where people hesitate to integrate AI into their workflow after initial training because they perceive that learning a new workflow will set them back.
The goal is to create a system of co-intelligence between you and AI, learning to collaborate rather than focusing on ‘correct’ use.
There’s no perfect way to use LLMs yet, but experimenting with smaller, low-stakes tasks — like drafting emails or reports — can help build AI literacy before applying it to high-stakes projects, strategic decisions or products.
Think of it as finding the right fit between your working style and the technology, understanding the trade-offs between adapting your methods or the tools themselves.
5. Address security and ethics
The era of blanket AI bans is over.
AI literacy must go beyond technical skills to cover ethics, including fairness, transparency, and accountability. Users need to understand the impact of AI decisions and actively mitigate risks. Addressing biases is key to ensuring fair and inclusive outcomes, making equity a central part of AI education.
Sustainability is also a growing concern, especially with the energy demands of large AI models. Raising awareness of AI's environmental impact can help organisations adopt more sustainable practices.
A strong AI policy should include safe practice guidelines, clear rules on data usage, and myth-busting around what AI does and doesn’t do with your data.
We’ve found teams want the following:
  • What an AI policy should include
  • Safe practice guidelines
  • What data can be used where and when
  • Myth-busting (what AIs do and don’t do with your data)

Conclusion: AI literacy is an essential skill
AI has moved very quickly from an edge technology into a mainstream tool. And now it’s irretrievably a part of our lives and our work.
It will continue to enhance and accelerate human thought and innovation in unprecedented ways. What the Industrial Revolution did for physical strength and effort, generative AI will do for thinking. It will cut the time it takes us to do simple, repetitive tasks, while boosting productivity and creativity.
For this reason, literacy in these technologies is no longer optional — it’s essential. What is needed right now is for organisations’ employees — and especially their leaders — to develop an ease not only in making decisions about AI but in using it themselves.
AI presents the biggest opportunity in the last 20 years. By investing in AI literacy now, organisations can unlock unprecedented productivity, innovation, and long-term success.

Ready to dive deeper? Let’s discuss how you can start applying these insights today. Email us at [email protected] to schedule a strategy session with one of our AI experts.
To get more of our analysis, subscribe via the links below to Brilliant Noise's BN Edition newsletter, and your author's newsletter, Antonym.

Subscribe to the BN Edition newsletter


Appendix: Sources & further reading
Recommended reading
These sources are highly influential in general and have been specifically useful to the writing of this paper.
The Jagged Frontier paper
Dell'Acqua, F., McFowland, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F. and Lakhani, K. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Working Paper 24-013. [online]
Co-Intelligence, by Ethan Mollick
Mollick, E. (2024) Co-Intelligence: Living and Working with AI. New York: WH Allen.
The Future is Digital, by George Rzevski
Rzevski, G. (2023) The Future is Digital: How Complexity and Artificial Intelligence will Shape Our Lives and Work. 1st ed. Southampton: WIT Press.
How To Have a Good Day, by Caroline Webb
Webb, C. (2016) How to Have a Good Day: Harness the Power of Behavioral Science to Transform Your Working Life. London: Bantam Press.
Right Kind of Wrong, by Amy Edmondson
Edmondson, A.C. (2023) Right Kind of Wrong: The Science of Failing Well. New York: Atria Books.