
Trusted And Dangerous As AI Empowers Insider Threats From Within


Cover image: The insider threat isn't human anymore

“With AI as an ally, it becomes easier to disguise, easier to coerce, easier to manipulate, and easier to misinform.”

– Boaz Fischer


In the world of cybersecurity, most defences are built to detect and repel attacks that come from the outside, such as viruses, phishing campaigns, distributed denial of service attacks, and advanced persistent threats launched by foreign actors.


These are the enemies at the gates. They are somewhat predictable, traceable, and often met with layered technical safeguards.


But what happens when the threat is already inside?


Unlike external cyberattacks, insider threats operate from within the walls of trust. They come in the form of a trusted employee, a contractor, or a team member who simply has access, intent, and the tools to exploit both. With the rise of artificial intelligence (AI), the determined insider becomes faster, stealthier, and more challenging to detect.


AI has made it easier for insiders, regardless of motive, to mimic legitimate behaviour, bypass controls, automate exploitation, and disappear into the digital noise.


What once required technical skill and careful planning can now be executed quickly and quietly with AI-powered assistance.


Artificial Intelligence is no longer just a tool for boosting productivity. It’s also enabling insiders to manipulate, conceal, and carry out actions that bypass traditional security measures.


From influencing others through realistic messages to quietly using unsanctioned AI tools, the line between legitimate activity and hidden risk is becoming increasingly blurred.


As organisations embrace hybrid work, adopt new technologies, and rely more on digital trust, the ability to detect subtle, AI-enabled insider behaviour is becoming a critical challenge.


These emerging risks aren’t limited to rogue actors or deliberate saboteurs. They can come from anyone with access, intent, or opportunity, especially when AI empowers them.


This article therefore explores six emerging and evolving insider threat challenges fuelled by artificial intelligence.


1. AI-Driven Social Engineering

Social engineering has always been a powerful tactic for manipulating employees into revealing sensitive information or granting access. However, with the rise of AI, this threat has taken on new levels of sophistication.


Malicious insiders or external actors can now feed AI with publicly available data, such as an employee’s social media activity, emails, and communication history, to create hyper-personalised messages that emulate internal language and behaviour.


For example, AI can generate a message from a senior executive asking an assistant to transfer funds or share restricted documents. Because the message is contextually accurate and written in a familiar tone, the recipient is less likely to question it. AI voice synthesis tools can even create audio messages that sound like trusted figures in the organisation, adding pressure and urgency.


Example: Finance Employee Defrauded By Deepfake CFO



What happened? A finance employee at a multinational firm’s Hong Kong branch was tricked into transferring approximately US$25.6 million. The employee participated in a video call where all other participants, including the company’s CFO and staff members, were AI-generated deepfakes. Convinced by the realistic appearances and voices, the employee followed instructions to transfer funds to multiple bank accounts.


These types of manipulations make social engineering harder to detect and more convincing than ever.


Traditional phishing filters and employee awareness campaigns are no longer enough.


Takeaway: Organisations must recognise that social engineering is no longer just a matter of gullibility or technical failure. It is now a battle of trust, timing, and highly focused persuasion, amplified by AI.

Detecting these threats requires a deeper understanding of behavioural patterns, communication anomalies, and subtle shifts in user interaction across platforms.
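
To make “behavioural patterns and communication anomalies” concrete, here is a minimal sketch of how an internal message could be scored on contextual signals before anyone acts on it. The field names, urgency phrases, and thresholds are illustrative assumptions, not a production detector.

```python
from dataclasses import dataclass

# Illustrative urgency phrases often seen in AI-crafted pressure tactics (assumed list).
URGENCY_PHRASES = {"immediately", "urgent", "before end of day", "do not tell", "confidential transfer"}

@dataclass
class Message:
    sender: str
    recipient: str
    sent_hour: int                   # 0-23, local time of the recipient
    mentions_payment: bool           # does the message request a transfer or credentials?
    body: str
    prior_payment_requests: int = 0  # how often this sender has made such requests before

def risk_score(msg: Message) -> int:
    """Return a simple additive risk score; higher means verify before acting."""
    score = 0
    if msg.sent_hour < 7 or msg.sent_hour > 19:            # outside normal working hours
        score += 1
    if msg.mentions_payment and msg.prior_payment_requests == 0:
        score += 2                                          # first-ever payment request from this sender
    body = msg.body.lower()
    score += sum(1 for phrase in URGENCY_PHRASES if phrase in body)
    return score

if __name__ == "__main__":
    m = Message(sender="ceo@example.com", recipient="assistant@example.com",
                sent_hour=22, mentions_payment=True,
                body="Please transfer the funds immediately and do not tell anyone yet.")
    print(risk_score(m))   # high score -> escalate for out-of-band verification
```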


2. Bring Your Own AI (BYOAI)

I’m sure you are well aware of Bring Your Own Device (BYOD). Now, there’s a new challenge on the rise:


Bring Your Own AI (BYOAI).


A new wave of productivity tools powered by AI is flooding the workplace.


Employees increasingly use personal AI assistants, writing tools, coding generators, and content enhancers to improve performance. However, these tools often fall outside organisational control and governance.


In many cases, these applications store processed information on third-party servers with limited security, no visibility, and no audit trails. Sensitive business content, like proprietary code, confidential strategies, or financial data, can unknowingly be shared with tools that mine the data to improve their algorithms.


Worse still, employees may use these tools to automate risky activities, such as generating code that manipulates internal systems or bypasses standard processes.


Example: Samsung Data Leak Via ChatGPT



What happened? In 2023, Samsung employees inadvertently leaked confidential information by using ChatGPT to review internal code and documents. As a result, Samsung decided to ban the use of generative AI tools across the company to prevent future breaches.


Takeaway: Organisations must treat BYOAI as a form of Shadow IT, requiring policy, visibility, and risk assessment. Without understanding what tools employees are using and how they’re using them, companies face increased data leakage risks and compliance blind spots.
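
A practical first step towards that visibility is simply knowing which AI services are being reached from corporate devices. The sketch below assumes a CSV web-proxy log with user and destination_host columns and an illustrative domain list; both are placeholders to adapt to your own environment.

```python
import csv
from collections import defaultdict

# Illustrative generative-AI service domains; extend to match your own risk register.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai",
              "gemini.google.com", "copilot.microsoft.com"}

def byoai_report(proxy_log_path: str) -> dict[str, set[str]]:
    """Map each user to the AI domains they contacted, based on an assumed CSV proxy
    log with columns: timestamp, user, destination_host."""
    usage: dict[str, set[str]] = defaultdict(set)
    with open(proxy_log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            host = row["destination_host"].lower()
            if host in AI_DOMAINS:
                usage[row["user"]].add(host)
    return usage

if __name__ == "__main__":
    for user, domains in byoai_report("proxy.csv").items():
        print(f"{user}: {', '.join(sorted(domains))}")
```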


3. Spoofing with AI

Spoofing is no longer a low-tech threat. AI now enables insiders to create real-time forgeries of digital identities with remarkable accuracy. From deepfake videos and voice recordings to emails that match a user’s tone and grammar, AI allows attackers to impersonate colleagues and executives seamlessly.


An insider might generate an email that mimics a CISO’s writing style, instructing IT staff to disable a security control or use voice cloning to call the help desk and request a password reset. But the threat doesn’t stop there. Deepfake video tools could impersonate executives in live video meetings, persuading staff to share sensitive access or approve transactions, as mentioned in AI-Driven Social Engineering.


AI can also craft messages on internal chat platforms like Slack or Microsoft Teams, mirroring the tone and phrasing of trusted colleagues to request files, credentials, or actions.


Spoofed alerts from fake IT support accounts or phony vendor communications can push employees to transfer funds or reveal sensitive information. Even internal voice memos or meeting transcripts can be manipulated to appear as legitimate follow-ups from leadership.


With AI’s ability to replicate not just what people say but how and when they say it, traditional authentication and detection systems are increasingly outmatched.


Example: Voice Deepfake Used To Scam A CEO



What happened? In 2019, the CEO of a UK-based energy firm received a call from someone he believed to be his German parent company’s chief executive. Using AI-based voice cloning technology, the caller instructed the CEO to transfer €220,000 (approximately $243,000) to a Hungarian supplier. The voice’s subtle German accent and tone convinced the CEO of its authenticity, leading to the successful fraud.


Takeaway: Organisations must now defend against attacks where the person initiating the action appears entirely legitimate on the surface. Identity validation must evolve beyond passwords and static policies to include behavioural and contextual verification.
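
As a hedged illustration of behavioural and contextual verification, the sketch below gates high-impact requests on signals that are hard to fake with a deepfake alone: the channel the request arrived on, its value, and whether it matches the requester’s history. The action names, thresholds, and fields are assumptions for the example, not a prescribed policy.

```python
from dataclasses import dataclass

@dataclass
class Request:
    requester: str
    channel: str            # "video_call", "voice", "email", "signed_workflow"
    action: str             # e.g. "funds_transfer", "disable_control", "password_reset"
    value_usd: float = 0.0
    matches_requester_history: bool = True   # does this look like something they normally ask for?

HIGH_IMPACT_ACTIONS = {"funds_transfer", "disable_control", "password_reset"}
SPOOFABLE_CHANNELS = {"video_call", "voice", "email"}   # easy to fake with deepfakes or cloned voices

def needs_out_of_band_verification(req: Request) -> bool:
    """Require a callback on a known-good number or an in-person check before acting."""
    if req.action not in HIGH_IMPACT_ACTIONS:
        return False
    if req.channel in SPOOFABLE_CHANNELS:
        return True                           # never act on impersonatable channels alone
    if req.value_usd > 10_000 or not req.matches_requester_history:
        return True
    return False

if __name__ == "__main__":
    req = Request(requester="cfo@example.com", channel="video_call",
                  action="funds_transfer", value_usd=25_600_000)
    print(needs_out_of_band_verification(req))   # True -> call back on a trusted number first
```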


4. AI-Powered Shadow IT

Shadow IT, the use of unauthorised applications, devices, or services outside the control of an organisation’s IT and security teams, has long been a serious risk. But artificial intelligence is now “supercharging” this challenge in ways that are faster, harder to detect, and far more dangerous than traditional Shadow IT ever was.


Today, many AI platforms are built for seamless integration. They offer APIs, automation tools, and user-friendly setups that allow employees to create workflows that move, analyse, and transform sensitive data without seeking IT approval or triggering security alerts.


These AI tools often bypass normal vetting processes because they are embedded in personal accounts, browser extensions, or cloud services that appear harmless on the surface.


Employees might use an AI content generator to rewrite confidential documents, a coding assistant to automate database queries, or a personal AI dashboard to pull real-time data from internal systems.


In many cases, these tools synchronise data directly to external servers beyond corporate control, often without encryption, oversight, or auditability. Once the data leaves, organisations may have no visibility, control, or awareness of what was accessed, modified, or exposed.


However, AI-powered Shadow IT is not limited to technology alone. It increasingly involves external people who were never authorised to handle the organisation’s data.


Knowingly or unknowingly, employees can outsource internal work to external freelancers, AI vendors, or human-in-the-loop AI services.


Sensitive data may be sent outside the company for document summarisation, data annotation, coding assistance, or project work, exposing confidential information to third parties who were never vetted, contracted, or governed by internal security policies.


Unlike traditional Shadow IT, where rogue devices or unapproved apps could be inventoried and blocked, AI-powered Shadow IT is fluid, adaptive, decentralised, and woven into everyday workflows.


Employees may genuinely believe they are increasing efficiency or solving problems without realising they are creating new risks that IT teams can’t see or control. Behind the scenes, sensitive information could be leaked through channels never intended for corporate use or accessed by people the organisation does not know.


The result is a “digital Wild West” – a fragmented, sprawling environment where data flows across unsanctioned systems, external hands touch critical information, and security teams have no reliable line of sight.


Insiders, whether careless or malicious, now have unprecedented tools to operate outside policy at the speed of AI.


Example: Samsung Bans ChatGPT Among Employees After Sensitive Code Leak



What happened? In 2023, Samsung Electronics experienced a significant incident that exposed the growing risks of AI-powered Shadow IT.

Several engineers, attempting to optimise and debug code, uploaded sensitive source code and confidential internal data into ChatGPT, a publicly available AI platform operating outside of Samsung’s security governance.

Because ChatGPT retains user input to improve its models unless otherwise restricted, the confidential data risked being stored, accessed, or even exposed in future AI interactions without Samsung’s visibility or control.

The discovery of multiple data exposure incidents led Samsung to swiftly ban the use of generative AI tools like ChatGPT for work purposes.


Takeaway: Organisations that fail to tackle AI-powered Shadow IT risk losing control over their most critical assets without realising a breach has occurred.

Visibility, governance, and strict policies around AI tools and third-party engagement must now be treated as urgent priorities before invisible damage is done.
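
One small, concrete piece of that governance is screening text before it leaves for an external AI service. The sketch below flags a few illustrative markers of confidential material, source code, or credentials; the patterns are assumptions, and a real DLP policy would be far broader and better tuned.

```python
import re

# Illustrative patterns; a real policy would be tuned to the organisation's data classes.
SENSITIVE_PATTERNS = {
    "classification marking": re.compile(r"\b(confidential|internal only|restricted)\b", re.I),
    "likely source code":     re.compile(r"(\bdef |\bclass |#include|\bimport |SELECT .+ FROM)"),
    "credential-like string": re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.I),
}

def outbound_violations(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in text destined for an external AI tool."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Please optimise this: def decrypt(key): ...  # INTERNAL ONLY"
    hits = outbound_violations(prompt)
    if hits:
        print("Blocked before upload:", ", ".join(hits))
```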


5. Offensive AI Used as a Weapon

Perhaps the most dangerous development is the use of AI not just for evasion but for active attacks.


Malicious insiders are beginning to leverage AI as a weapon to find vulnerabilities, exploit weaknesses, and escalate their activities rapidly, with little manual effort and minimal technical skill required.


Open-source AI agents and custom-trained models can now automate vulnerability scanning, identifying misconfigurations, weak credentials, exposed APIs, and cloud infrastructure gaps faster and more accurately than traditional manual techniques.


Instead of manually probing systems, an insider can instruct an AI to map internal networks, simulate attacks, and suggest specific steps to gain higher privileges or extract sensitive information. Some AI tools can even create customised malware payloads designed to exploit newly discovered weaknesses in real-time.


Even more concerning, AI doesn’t rely on a static playbook. It can adapt its behaviour dynamically. If a threat detection system blocks one method of access, AI can autonomously try alternative tactics, changing IPs, modifying payloads, and adjusting timing patterns until it finds a successful path through.


Example: Emerging Trend In Claude AI Misuse



What happened? A report by Anthropic (April 2025) detailed recent misuse of Claude AI. It revealed surprising and novel trends in how threat actors are abusing chatbots, and in the growing risks that generative AI poses.

In one case, Anthropic found that a “sophisticated actor” had used Claude to help scrape leaked credentials to access security cameras. In another case, an individual with “limited technical skills” was able to develop malware that normally required more expertise.


This self-adjusting resilience makes AI-driven attacks harder to spot, quicker to execute, and more scalable across multiple targets at once.


Takeaway: Organisations must respond by turning AI into their defensive ally. Machine learning models should be deployed to detect suspicious behavioural patterns, dynamically adjust risk scores, and correlate subtle signals that human analysts might miss.

Static security controls will no longer be enough. Defenders must match offensive AI’s automation, adaptability, and speed or risk being silently outmanoeuvred from within.
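
As a minimal sketch of that defensive use of machine learning, the example below trains scikit-learn’s IsolationForest on per-user activity features and flags days that deviate from the baseline. The features, figures, and contamination setting are illustrative assumptions rather than a tuned user-behaviour model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one user-day: [files_accessed, off_hours_logins, mb_uploaded_externally]
baseline = np.array([
    [40, 0, 5], [35, 0, 3], [50, 1, 8], [42, 0, 4], [38, 0, 6],
    [45, 0, 7], [41, 1, 5], [37, 0, 2], [44, 0, 6], [39, 0, 4],
])

# Fit an unsupervised anomaly detector on the user's normal behaviour.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

today = np.array([
    [43, 0, 5],      # looks like a normal day
    [220, 4, 900],   # heavy access, off-hours logins, large external upload
])

# decision_function: lower scores are more anomalous; predict: -1 flags an outlier.
for row, score, label in zip(today, model.decision_function(today), model.predict(today)):
    print(row, round(float(score), 3), "ANOMALY" if label == -1 else "ok")
```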


6. AI-Enabled Threats In Fragmented Work Environments

The shift to remote and hybrid work has fundamentally changed how, where, and through what devices employees interact with corporate systems.


No longer tied to a centralised office network, employees now access resources from a wide range of endpoints such as home Wi-Fi, personal smartphones, unmanaged laptops, virtual desktops, and third-party cloud services. Each environment brings its own set of access controls, logging capabilities, and visibility gaps.


The result is a digital workspace where tracking user actions consistently across systems is a significant challenge for security teams.


Artificial Intelligence adds a new layer of complexity and concealment to this distributed setup.


Users operating across multiple environments can now deploy AI not only for productivity but also for planning, executing, and concealing nefarious activities.


Example: North Korean Hackers Impersonate Remote IT Workers To Infiltrate European Companies



What happened? North Korea’s state-sponsored hacking operations have turned human resources into a proxy battleground.

Trained operatives are applying for legitimate remote IT roles at European companies. They craft fake résumés and forge academic records, some even claiming degrees from Belgrade University, and conceal their true identities behind AI-edited photos and profile data. Once hired, they gain access to internal infrastructure, mirroring the behaviour of real contributors.


For instance, an insider could use AI-powered scripts to mimic typical work patterns like browsing shared folders, querying databases, or generating reports while simultaneously initiating automated data transfers through personal cloud accounts or encrypted channels. These actions can be spaced, randomised, and contextualised to avoid triggering alerts.


This threat is even more concerning because many modern AI tools, especially open-source models, can be downloaded and run locally, meaning they don’t require an Internet connection to function. This allows the insider to analyse, summarise, or manipulate sensitive data entirely offline, away from corporate oversight. For example, an insider could use a locally installed AI model to:

  • Summarise confidential reports to extract key information quickly

  • Translate sensitive documents into another language

  • Classify or reorganise data for easier export or targeting


Across these environments, insiders can employ AI to script, schedule, or automate tasks that appear benign on the surface but conceal malicious intent. For example, an insider could use AI to replicate regular activity patterns while orchestrating slow data exfiltration from an unmanaged personal device. They could also use AI assistants or local models to interpret, transform, or encrypt data before sending it through non-corporate channels, completely outside the radar of standard monitoring tools.
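
Detecting that kind of slow, randomised exfiltration is less about any single alert and more about drift from a per-user baseline. Below is a minimal cumulative-sum style sketch over daily outbound volume for one user; the figures and thresholds are illustrative assumptions.

```python
import statistics

def cusum_alerts(daily_mb: list[float], baseline_days: int = 14,
                 slack: float = 1.0, threshold: float = 4.0) -> list[int]:
    """Flag days where cumulative upward drift in outbound volume exceeds the threshold.

    The first `baseline_days` establish the user's normal behaviour; drift is measured
    in standard deviations above the baseline mean, minus a small slack allowance.
    """
    base = daily_mb[:baseline_days]
    mean, stdev = statistics.mean(base), statistics.pstdev(base) or 1.0
    cumulative, alerts = 0.0, []
    for day, value in enumerate(daily_mb[baseline_days:], start=baseline_days):
        cumulative = max(0.0, cumulative + (value - mean) / stdev - slack)
        if cumulative > threshold:
            alerts.append(day)
    return alerts

if __name__ == "__main__":
    normal = [20, 22, 19, 25, 21, 23, 20, 24, 22, 21, 19, 23, 22, 20]
    slow_leak = [28, 27, 30, 29, 31, 28, 30]     # each day only slightly above normal
    print(cusum_alerts(normal + slow_leak))      # flags the sustained small deviations
```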


Takeaway: Organisations must regain visibility across this fragmented landscape. That means extending behavioural monitoring to unmanaged endpoints, personal devices, and third-party cloud services, baselining what normal activity looks like for each user, and correlating signals across environments so that slow, AI-assisted exfiltration stands out from everyday work.

Controls built for a centralised office network will no longer be enough when the “perimeter” is every home office, personal device, and cloud service an employee touches.


Key Takeaway:

These six challenges – AI-driven social engineering, Bring Your Own AI, spoofing with AI, AI-powered Shadow IT, offensive AI used as a weapon, and AI-enabled threats in fragmented work environments – reveal a fast-changing landscape where insider threats are stealthier, faster, easier to execute, and more damaging than ever before.


The traditional lines between trusted users and malicious actors are becoming blurred as AI equips individuals with unprecedented capabilities to deceive, exploit, and operate beyond the reach of conventional security measures.


As the nature of insider risk continues to evolve, organisations must be willing to rethink long-held assumptions about trust, access, and visibility in a world where human behaviour and artificial intelligence are increasingly intertwined.

