SPARK Catalyst
AI & Responsible Innovation

The AI Narrative Requires Crucial Executive Leadership

Leaders must understand the types of AI methods their organizations are using – and ensure humans always remain in control

By Dr Cindy Gordon
CEO, AI Data Scientist, AI Ethicist, Board Director, Author
Feb 4, 2026 · 12 min read

Major year-end summaries of 2025 reveal artificial intelligence as one of the most recurring themes in global news, intersecting politics, economics, and culture – highlighting AI’s pervasive influence on societal narratives and business strategies.

The latest metrics indicate that leading Generative AI chatbot platforms, such as OpenAI's ChatGPT, Anthropic's Claude, and Alphabet's Gemini, are reaching record worldwide user bases, reflecting vigorous adoption of AI by businesses and consumers and underscoring the accelerating pace of AI-driven transformation.

The genie is now out of the bottle, and tomorrow’s most successful organizations will be the ones that best learn to boldly exploit the power of AI, while carefully managing the risks. AI will be a difficult tool to master, but the rewards will be powerful for organizations that get this evolution right.

According to market leader OpenAI, there are an estimated 700 to 800 million weekly active users sending around 18 billion messages each week, and over 92% of Fortune 500 companies have integrated ChatGPT into their operating processes.

However, in our current AI world, we are losing our ability to know what is real or unreal. In just a few months, the Internet has become a slop pool of misinformation. Recently, Graphite reported that over 50% of web content is now AI-generated, and more than 75% of that content is inaccurate.

The past 12 months have seen a tenfold increase in Web-based “Deep Fakes” – videos, images, or other Web-based media in which a person’s face, body, or voice has been digitally altered to make them appear to be someone else, typically for malicious purposes or to spread false information. AI has contributed to an explosion in fraud, with some AI-driven attack methods seeing exponential growth. Deepfake fraud attempts in North America increased by 1,740% between 2022 and 2023, and global fraud losses due to generative AI are projected to reach $40 billion by 2027.

These grim realities demand that executives think harder about how to govern AI across their organizations and ensure that the AI models they use employ only trusted and verified sources. Without rigorous caution, AI initiatives regularly fail. Numerous studies (from EY, McKinsey, MIT, etc.) have found that more than 80% of AI initiatives are not sustained, and the global average return on AI projects languishes below 5%.

If these stats don't keep you up at night, try this: In late December 2025, Canadian computer scientist and AI pioneer Yoshua Bengio told The Guardian that AI models are now “showing signs of self-preservation, the capacity to evade guardrails, and to inflict harm on humans on purpose.” Without controls, he said, “AI systems will be free to operate like hostile extra-terrestrials.”

As AI penetrates deeper into organizations, we need to ensure these real risks lead to more thoughtful, responsible AI design considerations. It’s crucial that leaders deeply understand the types of AI methods being applied and ensure the AI use cases have guardrails where humans are “always maintaining control and managing risks.”

The recently updated International AI Safety Report 2025 (Second Key Update) highlights emerging challenges in AI risk management and the deployment of enhanced technical safeguards and governance frameworks by researchers and institutions around the world. The key message: executives must build Responsible AI practices across their enterprises and use effective data control systems to monitor the behaviours of AI and ensure “humans stay in the loop.”

Over the past two years, Cathy Cobey, EY’s AI Global Co-Lead, and I have co-authored a book, The AI Precipice: An Executive Guide to Balancing Innovation and Safety, which explores the strategies, risks, and controls needed to achieve sustainable AI outcomes.

How to Get AI Right?
  1. AI is never going away - but it must be managed. The alternative is AI managing humans, to the point where our cognitive skills deplete, often referred to as “cognitive debt.” We’ve already learned that humans' critical thinking skills can atrophy with the overuse of AI/ML methods. (See MIT's research on this effect.)
  2. Actively advance your knowledge of AI and Data. Without hands-on, day-to-day expertise, you cannot lead your organization forward.
  3. Develop an AI Centre of Excellence (COE) in your organization, leveraging a centralized (hub and spoke) organizational structure. Avoid more decentralized operating models. The risks of decentralization are too great until you've established appropriate data foundations and robust risk and control systems, with third-party audits. Once data-flow control and monitoring systems are established, audit trails become more secure.
  4. Develop a robust AI Strategy identifying hundreds of AI use cases, but filter them against Responsible AI criteria and ROI Metrics so resources can be stratified in areas that build confidence and momentum in the organization, as the AI journey is never-ending. Apply scenario-planning structures to examine the risks across ecosystem stakeholders.
  5. Invest heavily in Change Management programs to make the turn - at least 30% of all AI operating budgets require robust transformation enablements, such as effective communication, skill training, performance and KPI metrics, and reward and recognition systems. Most people are slow to change. Building new “atomic habits” to understand how to use AI effectively will take time and patience.
  6. Ensure you invest in Cybersecurity and AI Risk frameworks. Leading frameworks include the NIST AI Risk Management Framework, ISO/IEC 42001, and the MIT AI Risk Repository. There are numerous sources to help manage AI risk, but working with trusted experts will be crucial.
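The use-case filtering described in step 4 can be sketched as a simple scoring pass: gate each candidate on Responsible AI criteria first, then rank the survivors by projected return. The criteria names, fields, and thresholds below are illustrative assumptions, not part of the original article.

```python
# Hypothetical sketch of step 4: filter an AI use-case backlog against
# Responsible AI gates and an ROI bar, then stratify by projected return.
# All field names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    projected_roi: float        # e.g. 0.30 == 30% expected return
    has_human_oversight: bool   # a human stays in the loop
    uses_verified_data: bool    # grounded in trusted, verified sources

def prioritize(use_cases, min_roi=0.10):
    """Keep only use cases that pass the Responsible AI gates and the
    ROI threshold, then rank the survivors by projected return."""
    passed = [
        uc for uc in use_cases
        if uc.has_human_oversight
        and uc.uses_verified_data
        and uc.projected_roi >= min_roi
    ]
    return sorted(passed, key=lambda uc: uc.projected_roi, reverse=True)

backlog = [
    UseCase("Invoice triage", 0.30, True, True),
    UseCase("Autonomous outreach bot", 0.50, False, True),  # fails oversight gate
    UseCase("Churn prediction", 0.05, True, True),          # fails ROI bar
]
for uc in prioritize(backlog):
    print(uc.name, uc.projected_roi)
```

In practice the gates would be richer (bias review, data-lineage audit, regulatory exposure), but the design point stands: Responsible AI criteria act as hard filters before any ROI ranking, so no high-return use case can bypass the guardrails.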

About the Author:

Dr. Cindy Gordon is the CEO and Founder of SalesChoice, an AI SaaS and Services company that realizes revenue certainty for human advantage. She is a global AI thought leader with over 20 AI awards. A former executive at Accenture, Xerox, and Citicorp, Dr. Gordon has written more than 16 books and teaches at the United States AI Institute (USAII). Her newest book, The AI Precipice: An Executive Guide to Balancing Innovation and Safety, is available in April.
