Artificial Intelligence is transforming software development, making coding faster, smarter, and more automated. AI coding assistants, also called AI code agents, are increasingly used to generate code, automate repetitive tasks, and optimize workflows.
However, new research from UpGuard, a leader in cybersecurity and risk management, highlights a significant hidden risk: 1 in 5 developers grant AI coding agents unrestricted access to their workstations.
These permissions allow AI agents to read, write, and delete files, and to execute code without human oversight. While this may speed up development, it introduces serious supply chain, data breach, and security risks. Organizations relying on AI without proper controls may unknowingly expose themselves to malicious attacks or system failures.
In this article, we dive deep into the findings from UpGuard, explain the risks, and explore ways organizations can mitigate exposure.
Key Findings from UpGuard Research
1. Unrestricted File Deletion
UpGuard’s analysis of over 18,000 AI agent configuration files on public GitHub repositories revealed that almost 20% of developers allow AI agents to delete files automatically.
This is extremely risky because even a small error or prompt injection could wipe entire projects or critical system files, leading to lost work, downtime, and potential financial loss.
2. Automated Changes in Main Repositories
Another alarming finding is that some AI agents automatically commit code changes directly to primary repositories without human review.
While this improves efficiency, it creates a potential backdoor for attackers to inject malicious code into production or open-source projects. Organizations using automated AI coding tools could unintentionally propagate vulnerabilities into widely used software systems.
3. Arbitrary Code Execution
UpGuard’s research also uncovered that:
- 14.5% of Python AI agent files
- 14.4% of Node.js AI agent files

grant AI agents the ability to execute arbitrary code. This level of access could allow attackers to take full control of a developer’s environment if they successfully exploit prompt injection or malicious configurations.
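Over-permissive settings like these can often be caught before they cause damage by auditing the configuration files themselves. Below is a minimal sketch that scans a directory tree of JSON agent configs for risky flags. The key names (`allow_delete`, `auto_execute`, `auto_commit`) are hypothetical stand-ins, since real key names vary from tool to tool; adapt the list to the configuration format your agent actually uses.

```python
import json
from pathlib import Path

# Hypothetical permission keys -- real key names differ per agent/tool,
# so adapt this mapping to the configuration format you actually use.
RISKY_KEYS = {
    "allow_delete": "agent may delete files without confirmation",
    "auto_execute": "agent may run arbitrary code",
    "auto_commit": "agent may commit directly to the main branch",
}

def audit_config(path: Path) -> list[str]:
    """Return a list of warnings for risky settings in one config file."""
    try:
        config = json.loads(path.read_text())
    except (OSError, json.JSONDecodeError):
        return [f"{path}: unreadable or not valid JSON"]
    warnings = []
    for key, reason in RISKY_KEYS.items():
        if config.get(key) is True:
            warnings.append(f"{path}: '{key}' enabled ({reason})")
    return warnings

def audit_tree(root: str) -> list[str]:
    """Scan a directory tree for JSON agent configs and collect warnings."""
    findings = []
    for path in Path(root).rglob("*.json"):
        findings.extend(audit_config(path))
    return findings
```

A check like this can run in CI, so a config that quietly re-enables deletion or auto-execution fails the build instead of reaching a developer workstation.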
4. MCP Typosquatting Threat
The Model Context Protocol (MCP) ecosystem, widely used for AI tools, has been found to contain numerous lookalike or impostor servers.
For every verified vendor server, there are up to 15 untrusted impostors, which can trick developers into using malicious packages. This is a classic example of typosquatting, where attackers exploit small mistakes in URLs or package names to compromise systems.
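One lightweight defense against lookalike servers is to compare any new server or package name against the list of names your team has actually verified: a name that is *similar* to a trusted one but not identical is the classic typosquatting pattern. A minimal sketch, assuming a hypothetical trusted-name list and using simple string similarity from the standard library as the heuristic:

```python
from difflib import SequenceMatcher

# Hypothetical trusted names -- replace with the MCP server / package
# names your organization has actually verified.
TRUSTED = {"upguard-mcp", "github-mcp", "postgres-mcp"}

def looks_like_typosquat(name: str, threshold: float = 0.85):
    """Return the trusted name this one closely resembles, else None.

    Exact matches are considered verified; near-matches (e.g. 'githb-mcp'
    vs 'github-mcp') are flagged as likely typosquats.
    """
    if name in TRUSTED:
        return None  # exact match: verified
    for trusted in TRUSTED:
        if SequenceMatcher(None, name, trusted).ratio() >= threshold:
            return trusted
    return None
```

The similarity threshold is a judgment call: too low and unrelated names trigger false alarms, too high and a one-character typo slips through. Production tooling would also check publisher identity and signatures, not just names.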
Implications for Organizations
These findings have far-reaching implications for organizations relying on AI coding tools:
- Governance Gaps – Broad AI permissions create blind spots for security teams. Without visibility into AI agent activity, organizations cannot fully control what code is executed or modified.
- Incident Response Delays – Unchecked AI activity can slow down threat detection and remediation, increasing the impact of attacks.
- Supply Chain Risks – Malicious code inserted through AI agents can propagate into third-party systems, open-source projects, or client-facing software.
- Credential Exposure – AI agents with unrestricted access can unintentionally leak sensitive credentials or configuration data, leaving the organization vulnerable to external attacks.
Even well-intentioned shortcuts designed to improve efficiency can become major security vulnerabilities if not properly monitored.
How to Mitigate AI Coding Risks
Organizations can take proactive steps to reduce exposure and enforce safe AI workflows:
- Enforce Permission Limits – Never grant AI agents full system access. Limit file read/write/delete permissions to only what is necessary for the task.
- Require Human Code Review – Always have developers review AI-generated code before committing changes to main repositories.
- Monitor AI Activity – Use monitoring tools to track AI agent operations, detect unusual behavior, and maintain an audit trail.
- Validate Sources – Avoid using unverified AI agent packages or servers. Implement strict vetting and package verification to prevent typosquatting exploits.
- Educate Developers – Train teams on AI security risks, safe workflows, and best practices to prevent accidental exposure.
Implementing these controls can significantly reduce the risk of data breaches, supply chain attacks, and operational disruptions.
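To make the permission-limiting and monitoring controls concrete, here is a minimal sketch of a deny-by-default workspace wrapper: the agent can read only inside one directory, writes must be explicitly enabled, deletion is simply not exposed, and every operation lands in an audit trail. The class name and API are illustrative, not taken from any real agent framework.

```python
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

class GuardedWorkspace:
    """Deny-by-default file access for an AI agent, with an audit trail.

    Illustrative sketch: restrict the agent to one directory, disable
    writes unless explicitly allowed, and log every operation.
    """

    def __init__(self, root: str, allow_write: bool = False):
        self.root = Path(root).resolve()
        self.allow_write = allow_write
        self.audit_trail: list[str] = []

    def _check(self, action: str, path: str) -> Path:
        # Resolve the path and refuse anything that escapes the workspace.
        resolved = (self.root / path).resolve()
        if self.root not in resolved.parents and resolved != self.root:
            raise PermissionError(f"{action} outside workspace: {path}")
        self.audit_trail.append(f"{action} {resolved}")
        log.info("%s %s", action, resolved)
        return resolved

    def read(self, path: str) -> str:
        return self._check("READ", path).read_text()

    def write(self, path: str, data: str) -> None:
        if not self.allow_write:
            raise PermissionError("writes are disabled for this agent")
        self._check("WRITE", path).write_text(data)
```

Note that there is no `delete` method at all: the safest way to prevent the unrestricted-deletion behavior UpGuard observed is to never expose the capability in the first place.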
The Bigger Picture: AI and Security
AI coding assistants are undoubtedly powerful tools that can revolutionize software development. They save time, reduce repetitive work, and enable developers to focus on creative problem-solving.
However, as UpGuard’s research shows, efficiency must not come at the expense of security. Organizations that ignore AI governance risk introducing vulnerabilities that could compromise systems, sensitive data, and even customer trust.
Cybersecurity teams must treat AI code agents like any other powerful tool: with proper oversight, clear policies, and robust monitoring.
About UpGuard
Founded in 2012, UpGuard is a leading cybersecurity and risk management company. Its platform helps organizations identify, monitor, and mitigate risks across vendors, workforces, and AI-powered workflows. UpGuard’s research into AI coding tools highlights hidden security threats, helping companies enforce governance and reduce exposure to cyber risks.
Conclusion
AI coding tools are transforming software development, but they also introduce new and unexpected security risks. The UpGuard study shows that 1 in 5 developers grant AI agents unrestricted access, creating supply chain vulnerabilities, credential exposure, and the potential for catastrophic data loss.
Organizations must adopt strict AI governance frameworks, limit permissions, monitor activity, and ensure human oversight. By taking proactive measures, developers can safely leverage AI while minimizing security risks.

