The Human Nature of AI: Embracing Innovation While Managing Risk

Alan Gin, CEO, ZeroDown Software

Artificial Intelligence (AI) has rapidly become a focal point of discussion across industries, businesses, and everyday life. As AI technologies like ChatGPT revolutionize how we work and interact with data, the challenge lies not only in harnessing their power but also in managing the risks that come with them. In this comprehensive exploration, we delve into the human nature of AI adoption, drawing insights from Alastair Paterson, CEO and co-founder of Harmonic Security, a cybersecurity veteran who has witnessed AI’s transformative journey firsthand.

From his beginnings in the UK cyber startup scene to leading innovation in Silicon Valley, Alastair offers a unique perspective on how businesses can navigate the complex landscape of AI—balancing enthusiasm with caution, and innovation with security. This article unpacks the real-world challenges and opportunities companies face when integrating AI, especially under the pressures of competitive advantage, regulatory compliance, and the inevitable human behaviors that shape technology use.

Listen to the original podcast here: /the-human-nature-of-ai/

A Journey into AI and Cybersecurity

Alastair Paterson’s story starts in the UK, where he co-founded Digital Shadows, a pioneering company in threat intelligence. Over 11 years, he grew that venture from a small kitchen-table startup in London to a global player with hundreds of customers worldwide. His move to San Francisco in 2015 marked a new chapter in his career, placing him at the heart of the tech revolution in Silicon Valley.

When ChatGPT debuted in late 2022, Alastair saw it as a “transformational moment” — “the closest thing to magic I’ve seen as an adult.” The technology’s potential to change the world was immediately clear to him, sparking a renewed passion to build solutions that harness AI safely. This drive led to the founding of Harmonic Security, focused on managing AI’s risks in enterprise environments.

The Profound Shift: AI’s Impact on Business

AI’s impact on industries is massive and complex. According to Alastair, this shift rivals or even surpasses the influence of the cloud and mobile technology revolutions. Yet, the changes AI brings will be unevenly felt across sectors and organizations, due to the varied capabilities and limitations of current AI tools.

“This shift is probably bigger than cloud and mobile, at least from my perspective. It’s right up there in terms of impact on companies around the world.”

One of the key insights Alastair highlights is the widespread, often unregulated use of AI tools by employees, even when companies have yet to officially embrace them. “Even if you don’t know it’s happening, employees are going to be trying out and using these tools both at home and often at work,” he notes.

The Rise of Shadow AI

This phenomenon, sometimes called “Shadow AI,” presents a dual challenge. On one hand, it reflects employees’ eagerness to leverage AI for productivity gains. On the other, it exposes organizations to risks when sensitive data is processed through unapproved or unsecured AI platforms.

Alastair shares a telling example from a bank’s Chief Information Security Officer (CISO): an employee circumvented internal restrictions by emailing sensitive information to their personal account, then feeding it into AI tools outside the corporate network before returning the output to work systems. This risky workaround highlights the difficulty of outright blocking AI use.

“Employees find ways around security controls—disabling them or using their own devices—so blocking AI isn’t a practical solution.”

Industry Differences in AI Adoption

AI adoption varies dramatically by industry. Highly regulated sectors such as finance and healthcare tend to be risk-averse, defaulting to blocking or tightly controlling AI use. In contrast, startups and tech companies often embrace AI enthusiastically, viewing it as a key competitive advantage.

Alastair explains:

“Financial services and healthcare tend to be the most risk-averse and try to block tools like ChatGPT. Consumer tech companies and startups lean in more, enabling employees because the consequences of misuse are usually less severe.”

However, even companies that attempt strict controls often struggle to enforce them effectively. The rapid proliferation of AI applications—over 6,000 tracked by Harmonic, with an average company using 254 different AI apps—makes comprehensive blocking impractical.

The Explosion of AI Apps and Use Cases

AI applications now cover an astonishing range of business functions. For example, in the area of automated presentation generation alone, dozens of AI tools exist—such as Gamma, Slides AI, Presentify, and Beautiful AI—each offering users the ability to transform raw notes or corporate data into polished PowerPoint decks with minimal effort.

These tools save employees significant time and effort, driving adoption even when companies have not formally approved their use. But this proliferation also creates a sprawling, often unmanaged ecosystem of AI usage within organizations.

Managing AI Risk: Policies, Visibility, and Control

Given the inevitability of AI use, Alastair advocates for a proactive approach to managing AI adoption in business:

1. Establish Clear AI Use Policies

The first step is to create straightforward policies outlining acceptable and unacceptable AI use. For example, a common rule is to prohibit employees from inputting sensitive customer data into unapproved AI engines.

“Most companies have established policies by now, but the challenge is that nobody reads them, and people just carry on with their jobs.”

2. Gain Visibility into AI Usage

Policies alone are insufficient without visibility. Companies should leverage existing tools such as Secure Access Service Edge (SASE) or web gateways to monitor AI activity. Harmonic Security goes further by analyzing the actual prompts employees input into AI tools, providing detailed insights into use cases and user behavior.

This visibility can spark important business conversations about which AI tools are essential and merit formal enterprise agreements, versus those that pose risks.
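As a minimal illustration of this kind of visibility, the sketch below scans simplified web-gateway logs for traffic to known AI application domains and tallies usage per employee. The domain list and log format are illustrative assumptions; a real SASE or gateway deployment would export far richer records, and Harmonic's prompt-level analysis goes well beyond host matching.

```python
import re
from collections import Counter

# Illustrative sample of AI-app hostnames (assumption, not a full inventory).
AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "gamma.app",
}

# Assumed log format: "<user> <destination-host>" per line.
LOG_LINE = re.compile(r"^(?P<user>\S+)\s+(?P<host>\S+)")

def ai_usage_report(log_lines):
    """Count visits to AI-related hosts, keyed by (user, host)."""
    counts = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group("host") in AI_DOMAINS:
            counts[(m.group("user"), m.group("host"))] += 1
    return counts

logs = [
    "alice chat.openai.com",
    "bob gamma.app",
    "alice chat.openai.com",
    "carol intranet.example.com",
]
print(ai_usage_report(logs))
```

A report like this is often enough to start the business conversation the text describes: which tools see heavy use and deserve an enterprise agreement, and which are one-off experiments.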

3. Implement Controls to Protect Sensitive Data

Enforcing AI policies requires controls to prevent sensitive data leakage. Traditional Data Loss Prevention (DLP) solutions often generate many false positives and require significant security team effort. Data labeling solutions like Microsoft Purview attempt to tag sensitive data but are complex and resource-intensive.

Some companies try to block all AI except approved enterprise tools. Yet, as noted, employees often bypass these restrictions, so blocking is rarely a complete solution.
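To make the DLP idea concrete, here is a deliberately minimal sketch of prompt scanning: a few regex patterns flag common categories of sensitive data before a prompt leaves the network. The patterns and category names are assumptions for illustration; production DLP uses far richer detection (and, as noted above, still struggles with false positives).

```python
import re

# Illustrative detection patterns only; real DLP engines are far more thorough.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt):
    """Return the sorted list of sensitive-data categories found in a prompt."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(prompt))

def should_block(prompt):
    """Policy decision: block the prompt if any category matched."""
    return bool(scan_prompt(prompt))

print(scan_prompt("Summarize: contact jane@example.com, card 4111 1111 1111 1111"))
```

Even a crude filter like this illustrates the trade-off in the text: it catches obvious leaks but will miss rephrased data and flag benign numbers, which is why scanning is a complement to policy and visibility rather than a complete solution.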

The Hidden Risks of Free AI Tools

Another critical consideration is the risk posed by free AI tools. Many free or trial versions of AI applications use the data users submit to train their models. This means that proprietary company information, intellectual property, or customer data entered into these free versions could be stored and potentially used to improve the AI models accessible by others.

“If it’s the free version, your prompts can go into the training data that anyone can access later. That’s a real risk for critical IP or customer lists.”

This concern is a major reason why some organizations block free AI tools outright. However, a new generation of workers often treats AI tools like Google—ubiquitous and essential—without fully understanding the implications.

Frameworks for AI Risk Management

With AI adoption accelerating, businesses seek frameworks to guide safe integration. Alastair highlights the abundance of AI risk management frameworks—over 200 cataloged by the Cloud Security Alliance alone. These frameworks vary widely, many targeting AI developers rather than typical business users.

For most small and medium businesses, Alastair recommends applying existing risk management standards, such as the NIST Cybersecurity Framework (CSF), which has recently incorporated governance elements relevant to AI.

“The NIST risk management framework is actually a very good way to think about AI adoption, especially for organizations that aren’t building their own AI models.”

This approach focuses on fundamental governance questions: How does the organization want to use AI? What is acceptable? How do we control data movement and prevent leaks? These basics are critical to establishing a responsible AI strategy.

Looking Ahead: Embracing AI Responsibly

AI is no longer a futuristic concept—it is deeply intertwined with how businesses operate today. The key challenge is not whether AI will be used, but how it will be used safely and effectively.

Alastair’s insights underscore that the human element—employees’ behaviors, fears, and enthusiasm—plays a central role in shaping AI’s impact. Companies that ignore this risk the unintended consequences of unmanaged AI use, while those that embrace it intelligently can unlock tremendous value.

“There’s a tension in every enterprise: the business wants to move fast and adopt AI for competitive advantage, while legal and compliance teams worry about risk. Our goal at Harmonic is to enable safe AI use that balances these forces.”

For business owners ready to take the first step, the path is clear:

  1. Create and communicate clear AI usage policies.
  2. Invest in tools and processes to gain visibility into AI activity.
  3. Implement controls to safeguard sensitive data without stifling innovation.
  4. Leverage established risk management frameworks like NIST to guide governance.

By adopting this balanced approach, companies can empower their teams to harness AI’s power confidently and responsibly.

Final Thoughts and Invitation to Engage

AI is a fast-moving frontier with vast potential and complex challenges. As Alastair Paterson emphasizes, ongoing conversations and learning are essential. He welcomes continued dialogue and questions from businesses and individuals eager to navigate AI safely.

Whether you are a seasoned cybersecurity expert or a small business owner just starting to explore AI, the message is clear: AI is here to stay, and embracing it wisely is the key to resilience and success.

Remember Alastair’s closing words:

“I’m happy to come back and answer questions anytime. AI is a journey, and we’re all learning how to do it better together.”

For those interested in further resources, frameworks, and guidance on AI risk management, the NIST Cybersecurity Framework and Harmonic Security’s research provide excellent starting points.

Embrace AI with open eyes, thoughtful policies, and a commitment to safeguarding your organization’s most valuable assets. The future belongs to those who navigate the human nature of AI with intelligence and care.

For more information about the SafeHouse Initiative and how you can protect your organization, visit safehouseinitiative.org.