AI Agents Have Two Souls. You Only Control One

TL;DR

Researchers highlight that AI agents consist of a deterministic core and a probabilistic language model, and that only the core is under direct human control. This distinction shapes how AI systems can be secured, tested, and managed.

Researchers have identified that AI agents are composed of two distinct ‘souls’: a deterministic core that developers control and a probabilistic large language model (LLM) that behaves unpredictably. Because only one of the two is directly manageable, the split has direct consequences for security and reliability.

An AI agent’s deterministic half, often called the Agent Core, is conventional software: it orchestrates interactions with the environment and with external tools, and it can be analyzed, tested, and debugged like any other program. The large language model (LLM) is the agent’s probabilistic half, responsible for reasoning and generating outputs. Unlike traditional software, it can produce different results for identical inputs on each execution.

Experts from Microsoft and other sources describe this as the agent having two ‘souls’: the deterministic core and the probabilistic LLM. The duality introduces a security challenge: while the core can be analyzed and tested thoroughly, the LLM’s outputs are inherently unpredictable and cannot be fully secured or tested against all possible behaviors.
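The split can be sketched in a few lines of Python. This is a minimal illustration, not any vendor’s API: `llm_propose_action` is a stub standing in for a real model call, and it uses `random.choice` purely to imitate the LLM’s run-to-run variability. The tool names and the proposal format are illustrative assumptions.

```python
import random

def llm_propose_action(prompt: str) -> dict:
    """Probabilistic half (stub). A real agent would call an inference
    API here; same prompt, potentially different proposal each run."""
    tool = random.choice(["search", "calculator"])
    return {"tool": tool, "argument": prompt}

# Deterministic half: a fixed, testable dispatch table (the Agent Core).
TOOLS = {
    "search": lambda arg: f"results for {arg!r}",
    "calculator": lambda arg: f"evaluated {arg!r}",
}

def agent_step(prompt: str) -> str:
    """One agent iteration: the LLM proposes, the core executes."""
    proposal = llm_propose_action(prompt)   # unpredictable
    handler = TOOLS.get(proposal["tool"])   # fully deterministic
    if handler is None:
        return "rejected: unknown tool"
    return handler(proposal["argument"])
```

Note that every branch of `agent_step` can be unit-tested exhaustively; the only part that cannot is what `llm_propose_action` will say next time.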

Why It Matters

This distinction matters because it fundamentally alters how developers and security professionals approach AI system safety and control. Traditional security models assume predictable, testable behavior, an assumption the probabilistic LLM does not satisfy. Consequently, protecting AI agents means constraining the deterministic core to limit what the LLM can achieve, rather than trying to control the LLM directly. This has broad implications for deploying AI in sensitive environments, where unpredictable outputs could open security vulnerabilities or trigger unintended behaviors.
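One way to act on this, sketched below under stated assumptions: treat every LLM proposal as untrusted input and have the deterministic core validate it against a fixed policy before anything executes. The allowlist contents, field names, and size bound here are illustrative choices, not a prescribed standard.

```python
ALLOWED_TOOLS = {"search"}  # policy lives in the deterministic core

def constrained_dispatch(proposal: dict) -> str:
    """Validate an untrusted LLM proposal before executing it.
    The LLM may ask for anything; the core decides what runs."""
    tool = proposal.get("tool")
    if tool not in ALLOWED_TOOLS:
        return f"blocked: {tool!r} not in allowlist"
    # Bound the argument size so a runaway generation can't be abused.
    arg = str(proposal.get("argument", ""))[:200]
    return f"ran {tool} on {arg!r}"
```

Because this check is ordinary deterministic code, it can be reviewed and tested completely, regardless of what the model emits.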

CompTIA SecAI+ Study Guide: Comprehensive Exam-Focused AI Security Reference with Digital Tools for Smart Learning, Including PBQ Scenarios, Flashcards & Test Simulator

As an affiliate, we earn on qualifying purchases.

Background

The concept of AI agents has been loosely defined across the industry, often leading to confusion about what constitutes an agent. Recent technical discussions, including those from Microsoft and AI security experts, emphasize the architecture involving a deterministic core and a probabilistic reasoning engine. Historically, AI systems were rule-based and fully deterministic, but modern generative models introduce a probabilistic element that complicates security and control. The recognition of these two components as separate ‘souls’ is a recent development that clarifies the architecture and security challenges of AI agents.

“The two true components of an AI agent are a deterministic application and a probabilistic LLM, which cannot be fully secured because of their inherent nature.”

— Microsoft researcher

“You can analyze the deterministic core of an AI agent, but the probabilistic LLM introduces unpredictable behaviors that are beyond traditional security controls.”

— AI security expert

Agentic AI Architectural Patterns: Engineering Blueprint to Build 24/7 Autonomous Agents That Work While You Sleep | Master Production-Grade Automation, Build Deterministic Pipelines & Control Costs

As an affiliate, we earn on qualifying purchases.

What Remains Unclear

It remains unclear how best to design security protocols that effectively constrain the probabilistic LLM without compromising performance. The extent to which the LLM’s unpredictability can be managed or mitigated is still under discussion, and practical solutions are in development.

The Developer's Playbook for Large Language Model Security: Building Secure AI Applications

As an affiliate, we earn on qualifying purchases.

What’s Next

Researchers and developers will likely focus on creating architectures that better constrain the deterministic core to limit the influence of the probabilistic LLM. Further studies are expected to explore security frameworks that address this duality, with potential new standards emerging for AI safety and control.

Introduction to AI Safety, Ethics, and Society

As an affiliate, we earn on qualifying purchases.

Key Questions

What are the two ‘souls’ of an AI agent?

The two ‘souls’ refer to the deterministic core that developers control and the probabilistic large language model (LLM) that generates variable outputs.

Why does the probabilistic nature of LLMs matter for security?

Because the LLM’s outputs can vary unpredictably, traditional security measures cannot fully control or predict its behavior, posing risks in sensitive applications.

Can the behavior of the LLM be fully secured?

No, the probabilistic nature of LLMs prevents full security or predictability; focus shifts to constraining the deterministic core instead.

What are the implications for AI deployment in critical systems?

Developers must design architectures that limit the influence of the probabilistic LLM, ensuring safety and security despite inherent unpredictability.
