Keys to Reliability and Security in a World of LLM Integrations

Why Protocols Like MCP and AFC Matter
This technical post from Valo co-founder and engineering lead Hannu Varjoranta explores LLM-API integrations through the lens of structured communication frameworks and protocols, most notably MCP and AFC. Hannu examines how these enable secure interactions, with a detailed example centered on authorization. We hope this helps readers understand how these frameworks support reliable LLM execution through clear action definitions and precise parameter handling.
Introduction
Integrating Large Language Models (LLMs) with external systems via APIs presents significant challenges in ensuring both operational reliability and data security. Ad-hoc prompting for API interactions often leads to unpredictable results and potential security gaps.
The solution lies in structured communication frameworks and protocols designed specifically for these LLM-API interactions. These frameworks establish clear rules for how the LLM requests actions and receives information from external tools. Examples include Anthropic's Model Context Protocol (MCP) and conceptually similar features such as Google Gemini's Automatic Function Calling (AFC), both aiming to bring order to these integrations.
What are MCP Principles & Why Do They Matter?
At their core, protocols following MCP principles establish a standardized language for communication between an LLM and the external world – specifically, the tools, APIs, and contextual information it needs to perform useful tasks. Instead of relying solely on interpreting nuanced natural language prompts to guess how to interact with a tool, these protocols define a structured, predictable format for requests and responses.
Frameworks like Anthropic's MCP and Google's Gemini AFC both leverage schemas (typically JSON Schema) to rigorously define tool capabilities and required parameters. This shared foundation makes interactions predictable, ensuring the LLM knows precisely how to formulate a valid request for a specific action. While Gemini AFC centers on robust function calling, MCP is often framed as a broader protocol that aims to standardize the entire structured conversation between the LLM and its operational context, explicitly handling interaction states beyond simple tool success or failure.
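To make this concrete, here is a minimal sketch of a schema-driven tool definition and validation step in Python. The tool name, field names, and validator are illustrative assumptions, not the API of any specific framework; they simply show how a JSON-Schema-style definition lets a mediating layer reject a malformed LLM request before any API call is made:

```python
# Hypothetical tool definition in the style of MCP / Gemini function
# declarations: a JSON Schema describes the arguments the LLM must supply.
TOOL_DEFINITION = {
    "name": "get_account_summary",  # hypothetical tool name
    "description": "Fetch a summary of a Salesforce account.",
    "input_schema": {  # JSON Schema for the arguments
        "type": "object",
        "properties": {
            "account_id": {"type": "string", "description": "Salesforce record ID"},
            "include_contacts": {"type": "boolean", "default": False},
        },
        "required": ["account_id"],
    },
}

def validate_call(tool: dict, args: dict) -> list[str]:
    """Return a list of validation errors for an LLM-proposed tool call."""
    schema = tool["input_schema"]
    errors = [f"missing required parameter: {name}"
              for name in schema.get("required", [])
              if name not in args]
    errors += [f"unknown parameter: {name}"
               for name in args
               if name not in schema["properties"]]
    return errors

print(validate_call(TOOL_DEFINITION, {"account_id": "001xx000003DGb2"}))  # []
print(validate_call(TOOL_DEFINITION, {"acct": "001"}))  # missing + unknown parameter
```

A production system would use a full JSON Schema validator, but even this small check illustrates the point: the schema, not the prompt, decides whether a request is well-formed.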
Why does this structured approach matter? It moves critical operations away from the inherent ambiguity of pure natural language interpretation. The key benefits are tangible for building robust applications:
- Increased Reliability: Tool invocations become more predictable and significantly less prone to errors caused by LLM hallucination or misinterpretation of ambiguous requests.
- Enhanced Security: Explicit definitions for tools and parameters allow for finer-grained control over the actions an LLM can initiate, providing clear, enforceable boundaries and hooks for necessary authorization checks.
- Greater Control: Developers gain deterministic control points and validation layers within the LLM workflow, which is crucial for building complex, multi-step processes.
- Improved Predictability: Standardized requests and responses simplify the process of debugging, monitoring, and managing LLM interactions with external systems.
Deep Dive Use Case: Secure Authorization Handling
Nowhere are the benefits of structured LLM interaction more apparent than in handling authorization. Granting an LLM, or the user interacting via the LLM, the correct level of permission at the right time, especially temporary elevated access, demands precision and robust security measures. Consider a common enterprise scenario: an AI assistant integrated with Salesforce might need temporary administrator privileges to configure a setting, while for daily tasks, it operates correctly with standard, read-only permissions. Handling this transition securely is paramount.
A protocol-driven approach, following MCP principles, addresses this elegantly:
1. Schema Defines Access Needs: The process originates in the tool's definition schema. Alongside functional parameters, the schema explicitly defines the required access_level (e.g., standard user vs. elevated admin) needed to execute the specific action.
2. Permission Check Before Execution: When the LLM determines it needs to use a tool requiring elevated access, the mediating system first verifies the current session's permission level against the tool's requirement. If the session holds only standard credentials, the mismatch is detected before any attempt is made to call the restricted API.
3. Structured authorization_required Response: Instead of proceeding or failing cryptically, the tool execution layer returns a specific, structured response to the LLM, such as:
```json
{
  "type": "authorization_required",
  "authorization_url": "https://your-auth-service.com/elevate?session=...",
  "reason": "This action requires temporary admin rights to modify Salesforce settings."
}
```
This message unambiguously signals the need for elevation, provides the secure endpoint (authorization_url) where the user can grant consent, and includes necessary context (reason).
4. Clear User Prompt via LLM: The LLM receives this structured data but doesn't expose the raw JSON to the user. It leverages the information to formulate a clear, context-aware prompt: "To modify Salesforce settings as requested, I need temporary administrator access. Please authorize this request here: [Link]. This access will be valid for the next 30 minutes."
5. Secure Consent Flow: Clicking the link initiates the actual consent workflow, handled entirely by the application's secure backend or identity provider (operating outside the LLM's direct control). This flow typically involves verifying the user's identity (potentially requiring re-authentication or MFA), clearly displaying the requested permissions and their duration, capturing the user's explicit consent (Allow/Deny), and—only upon approval—securely obtaining and managing a short-lived, elevated access token specifically for that user's session.
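The gating logic in steps 1 through 3 can be sketched in a few lines of Python. The access_level field, the level ordering, and the elevation URL are assumptions for illustration; the point is that the mediating layer, not the LLM, enforces the check and emits the structured response:

```python
# Minimal sketch of the permission check from steps 1-3. Field names
# (access_level, authorization_required) are illustrative, not drawn
# from any specific protocol implementation.
SALESFORCE_CONFIG_TOOL = {
    "name": "update_org_setting",
    "access_level": "admin",  # declared in the tool's schema (step 1)
}

ACCESS_LEVELS = {"standard": 0, "admin": 1}

def execute_tool(tool: dict, session: dict) -> dict:
    """Mediating layer: verify the session's access level before any API call."""
    required = tool.get("access_level", "standard")
    if ACCESS_LEVELS[session["access_level"]] < ACCESS_LEVELS[required]:
        # Step 3: return a structured response instead of failing cryptically.
        return {
            "type": "authorization_required",
            "authorization_url": f"https://your-auth-service.com/elevate?session={session['id']}",
            "reason": f"This action requires temporary {required} rights.",
        }
    # Only reached once the session already holds sufficient credentials;
    # the real Salesforce API call would happen here.
    return {"type": "tool_result", "status": "ok"}

session = {"id": "abc123", "access_level": "standard"}
print(execute_tool(SALESFORCE_CONFIG_TOOL, session)["type"])  # authorization_required
```

Notice that the restricted API is never touched on the denial path; the LLM only ever sees the structured signal, which it then turns into the user-facing prompt described in step 4.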
This explicit, stateful approach provides robust control. Furthermore, the principles readily extend beyond simple elevation. The same structured interaction model can manage other critical authentication and authorization scenarios, such as handling initial user logins, requesting specific granular API scopes (e.g., 'permission to delete records' vs. 'permission to view records'), or navigating token expiry and mandatory refresh cycles.
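The same structured-response pattern extends naturally to the token-expiry case mentioned above. As a brief hedged sketch (the type name and URL are hypothetical), the tool layer hands the LLM a machine-readable signal instead of a raw HTTP 401:

```python
# Sketch of a structured response for an expired session, mirroring the
# authorization_required shape. Names are illustrative assumptions.
def handle_expired_token(session_id: str) -> dict:
    """Return a structured signal the LLM can turn into a clear user prompt."""
    return {
        "type": "reauthentication_required",
        "authorization_url": f"https://your-auth-service.com/login?session={session_id}",
        "reason": "Your Salesforce session has expired. Please sign in again.",
    }

print(handle_expired_token("abc123")["type"])  # reauthentication_required
```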
In my next post, I’ll continue this topic by examining the risks that remain even when using structured protocols. At Valo, we regularly analyze and apply the latest protocols and advances in AI as we deliver improved productivity, cost management and security for Salesforce managers and platform owners. To access our latest features, trial our AI-powered product here.
About Hannu
Hannu is Co-Founder and engineering leader at Valo, driving the development of an AI-powered Salesforce platform to enhance workflow efficiency and system security. He focuses on delivering scalable, secure, and innovative solutions by mentoring teams and streamlining product development. Hannu has held leadership and engineering roles at Spotify, Infrakit, and F‑Secure, where he built robust distributed systems, optimized data pipelines, and ensured high availability and security across diverse technology environments. His expertise spans software development (Python, Java, Go), cloud computing, databases, and cybersecurity, all relevant to today’s mission to deliver impactful solutions in the Salesforce ecosystem.
Hannu Varjoranta