AI Agents Are the New Insider Threat—and Most Enterprises Aren’t Ready

Despite heavy AI adoption, a new report reveals gaping holes in how companies secure their digital coworkers


In the age of hyperautomation, AI agents have quietly become digital coworkers—logging in, pulling data, and even making executive decisions. But a new report warns these "virtual employees" may be the most overlooked insider threat in enterprise cybersecurity today.


BeyondID, a leading Managed Identity Solutions Provider, surveyed U.S. IT leaders and found a striking contradiction: while 85% of organizations say they’re prepared for AI in security, fewer than half actually monitor the behavior or access patterns of the AI systems they deploy.


“AI is no longer just a tool; it’s acting like a user. But most security teams aren’t treating it like one,” said Arun Shrestha, CEO of BeyondID. “This disconnect is creating a massive security vulnerability that’s hiding in plain sight.”


The report, titled “AI Agents: The New Insider Threat?”, paints a troubling picture. Organizations are increasingly leaning on AI for threat detection, but they often fail to recognize that these very systems can also become threats—especially when operating autonomously with deep access and little oversight.


Among the most revealing insights:


  • Only 30% of companies regularly map AI agents to the critical systems they interact with.


  • More than half use AI to detect external threats, yet apply virtually no access controls or behavioral monitoring to the AI itself.


  • Just 6% of security leaders cite securing non-human identities as a top challenge—even though AI impersonation tops their list of emerging threats.


Nowhere is this risk more urgent than in healthcare, where AI is being rapidly integrated into patient-facing services and operational systems.


According to the report:


  • 61% of healthcare organizations experienced an identity-related attack in the past year.


  • 42% failed a compliance audit tied to identity issues.


  • Despite this, only 17% list compliance as a priority concern.


  • And a mere 23% have implemented passwordless authentication—leaving legacy credential systems exposed to manipulation by human and machine actors alike.


“Healthcare is moving fast with AI, but often without the identity safeguards required to protect sensitive patient data,” the report notes. AI impersonation—where AI mimics legitimate users to gain unauthorized access—is becoming a particularly acute concern, named by 34% of healthcare respondents as their top emerging threat.


BeyondID’s advice? Start treating AI agents with the same rigor applied to high-risk human users. That means implementing least-privilege access policies, enforcing continuous monitoring, and integrating non-human identities into the identity and access management (IAM) lifecycle.
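

To make that recommendation concrete, here is a minimal sketch of what treating an AI agent as a governed identity could look like in practice. It is illustrative only: the Python names used here (AgentIdentity, grant_scope, authorize, the triage-bot example) are hypothetical, and are not drawn from the report or from any BeyondID product.

# Illustrative sketch: registering an AI agent as a first-class identity with
# least-privilege scopes and logged access checks. All names (AgentIdentity,
# grant_scope, authorize) are hypothetical, not from the report or any vendor API.
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("iam.agents")

@dataclass
class AgentIdentity:
    """A non-human identity tracked in the same IAM lifecycle as a human user."""
    agent_id: str
    owner: str                              # accountable human owner
    scopes: set[str] = field(default_factory=set)

    def grant_scope(self, scope: str) -> None:
        # Least privilege: scopes are granted explicitly, one at a time.
        self.scopes.add(scope)
        log.info("grant scope=%s agent=%s owner=%s", scope, self.agent_id, self.owner)

    def authorize(self, action: str) -> bool:
        # Continuous monitoring: every access decision is logged with a timestamp,
        # so anomalous behavior can be reviewed like any other insider activity.
        allowed = action in self.scopes
        log.info("authz agent=%s action=%s allowed=%s at=%s",
                 self.agent_id, action, allowed,
                 datetime.now(timezone.utc).isoformat())
        return allowed

# Usage: a triage agent may read tickets but is denied writes to patient records.
triage_bot = AgentIdentity(agent_id="triage-bot-01", owner="secops@example.com")
triage_bot.grant_scope("tickets:read")
triage_bot.authorize("tickets:read")        # True, and logged
triage_bot.authorize("patients:write")      # False, and logged for review

In a real deployment, checks like these would live in the organization's existing IAM or policy engine rather than in application code; the point is simply that the agent gets a named human owner, explicit scopes, and an audit trail, just as a high-risk human user would.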


“AI agents don’t need to be malicious to be dangerous,” the report cautions. “Left unchecked, they can become shadow users with far-reaching access and no accountability.”


As organizations scale AI deployment across industries, the message is clear: AI may be your best analyst—but if not secured, it could also be your biggest vulnerability.
