Before Scaling L&D AI Adoption, Don’t Overlook Security
Imagine you're walking down the street and a stranger approaches, asking for your ID, bank details, and home address.
Would you hand them over without hesitation? Probably not. Yet, in the rush to embrace AI-powered tools, many L&D professionals are doing something eerily similar.
L&D professionals post about AI adoption, lamenting that it isn't happening quickly enough, or share news of yet another tool.
They eagerly share LinkedIn profiles, personal data, and work histories just to receive an AI-generated summary of their skills, without questioning where that information goes or how it’s being used.
The Security Risks Are Real
- 80% of SaaS logins are invisible to IT departments, leading to security risks. (Forbes)
- The AI in cybersecurity market is expected to grow from $24 billion in 2023 to $134 billion by 2030. (Statista)
- 68% of IT audit teams anticipate high cybersecurity threat levels in the next 12 months due to AI vulnerabilities. (Protiviti)
- Many organisations have not yet included AI security concerns in their cybersecurity programs. (Schneier)
Shiny Tools, Forgotten Lessons
This isn’t just a minor oversight. It points to a larger issue: Have we abandoned the fundamental principles of IT security in our excitement for AI?
The same professionals who attend cybersecurity training, spot phishing scams, and use two-factor authentication seem to forget all of this the moment a flashy new AI tool appears. It’s a classic example of security amnesia, where basic precautions fly out the window because something looks useful, innovative, or convenient.
Scott Hewitt points out a deeper risk: model interference and data injection can manipulate AI outputs, but many organisations are failing at an even more basic level. They're not applying sound data and information security policies before AI implementation. Instead of proactively managing risks, they're reacting to them when something goes wrong.
The Irony of AI Adoption Struggles
L&D professionals often wonder why adoption isn’t happening as fast as they expect. The reality? Many businesses are simply applying the same scrutiny they’ve always used when evaluating new software.
Before rolling out a new tool, IT teams assess:
- Security risks: Where does the data go? Who owns it?
- Regulatory compliance: Does it align with privacy laws?
- Operational value: Does it genuinely solve a problem?
That's the baseline; enterprise-level organisations will typically apply an additional set of criteria covering site performance, load handling, technical performance, and integration capability (a rough sketch of such a checklist follows below).
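To make that evaluation concrete, here is a minimal sketch of how a pre-adoption review of an AI tool could be recorded and tracked. The check names, the questions, and the "every check must pass before rollout" rule are illustrative assumptions for this article, not a standard or any specific organisation's process.

```python
from dataclasses import dataclass, field


@dataclass
class AIToolAssessment:
    """Hypothetical pre-adoption checklist for an AI tool.

    The criteria and pass rule below are illustrative assumptions,
    not any organisation's actual evaluation process.
    """
    tool_name: str
    answers: dict = field(default_factory=dict)

    # Questions loosely mirroring the criteria discussed above.
    CHECKS = {
        "data_residency_known":    "Do we know where the data goes and who owns it?",
        "privacy_law_alignment":   "Does the tool align with applicable privacy laws?",
        "solves_real_problem":     "Does it genuinely solve a problem we have?",
        "integration_reviewed":    "Has integration with existing systems been reviewed?",
        "load_performance_tested": "Has performance under expected load been tested?",
    }

    def record(self, check: str, passed: bool) -> None:
        """Record the outcome of one check."""
        if check not in self.CHECKS:
            raise ValueError(f"Unknown check: {check}")
        self.answers[check] = passed

    def outstanding(self) -> list[str]:
        """Return the questions that are unanswered or have failed."""
        return [q for key, q in self.CHECKS.items() if not self.answers.get(key, False)]

    def ready_for_rollout(self) -> bool:
        # Simple rule for this sketch: every check must pass before rollout.
        return not self.outstanding()


if __name__ == "__main__":
    review = AIToolAssessment("Example AI summariser")  # hypothetical tool name
    review.record("data_residency_known", True)
    review.record("privacy_law_alignment", True)
    review.record("solves_real_problem", True)
    # Integration and load checks have not been completed yet.
    print("Ready for rollout:", review.ready_for_rollout())
    for question in review.outstanding():
        print("Outstanding:", question)
```

The point of the sketch is simply that the checks are explicit and auditable: a tool with unanswered questions stays flagged rather than quietly slipping into the stack.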
But there’s a growing challenge. AI is now being integrated into existing software that’s already ‘approved,’ making it harder to track where it’s embedded within an organisation’s tech stack. Scott Hewitt highlights this as a key issue: Companies are often unaware of how deeply AI is woven into their systems, leading to rushed adoption without a clear understanding of security risks.
This is particularly relevant to AI in L&D, where organisations are implementing AI-powered learning solutions without fully considering how they affect security, compliance, or learner data privacy.
Balancing Curiosity with Caution
AI has the potential to transform industries, but the fundamentals of software evaluation haven’t changed just because AI is involved. The same principles that apply to phishing scams, password management, and data protection should apply here, too.
Scott Hewitt stresses that organisations need to revisit their core policies, including information security, cybersecurity, privacy, and even HR policies. These aren’t static documents; they should be reviewed before AI implementation and continuously reassessed while in use.
Before signing up for the latest AI tool, ask yourself:
- Would I share this data if a human asked for it?
- Do I know where my information is going?
- Does this tool pass basic security and software evaluation checks?
With AI in L&D, these questions become even more critical as organisations introduce AI-driven training solutions that process vast amounts of employee data. Ensuring these tools align with corporate security policies is essential.
Final Thought
Adopting AI isn’t just about speed—it’s about smart, secure decision-making. Scott Hewitt offers a practical step: Think IT—have you reviewed your IT policies? Do you test your business continuity and disaster recovery plans? These fundamental processes should be part of the AI adoption conversation, not an afterthought.
Because sometimes, the biggest cybersecurity threat isn’t an advanced hacker—it’s our own willingness to trade security for convenience.
Q&A on AI and Information Security
What is the biggest challenge facing AI adoption?
The biggest challenge is trust. Businesses worry about data security, accuracy, and ethical risks. AI needs clear regulations, better transparency, and user understanding before companies feel confident adopting it fully.
What is the difference between information security and artificial intelligence?
Information security protects data from threats like hacking and misuse. Artificial intelligence is a tool that processes and analyses data. AI can help with security, but it also creates new risks if not used correctly.
Which is better to learn: AI or cybersecurity?
It depends on your career goals. AI is useful for automation and data analysis, while cybersecurity is essential for protecting systems from attacks. Both are in demand, but cybersecurity has more urgent job openings because of rising security threats.
What is the primary barrier to AI adoption?
The biggest barrier is lack of understanding and trust. Many companies don’t know how AI works or worry about security risks, ethical concerns, and job impact. Without proper education and policies, adoption slows down.