| status | contact | date | deciders | consulted | informed |
|---|---|---|---|---|---|
Agent Tools
Context and Problem Statement
AI agents increasingly rely on diverse tools like function calling, file search, and computer use, but integrating each tool often requires custom, inconsistent implementations. A unified abstraction for tool usage is essential to simplify development, ensure consistency, and enable scalable, reliable agent performance across varied tasks.
Decision Drivers
- The abstraction must provide a consistent API for all tools to reduce complexity and improve developer experience.
- The design should allow seamless integration of new tools without significant changes to existing implementations.
- Robust mechanisms for managing tool-specific errors and timeouts are required for reliability.
- The abstraction should support a fallback approach to directly use unsupported or custom tools, bypassing standard abstractions when necessary.
Considered Options
Option 1: Use ChatOptions.RawRepresentationFactory for Provider-Specific Tools
Description
Utilize the existing ChatOptions.RawRepresentationFactory to inject provider-specific tools (e.g., for an AI provider like Foundry) without extending the AITool abstract class from Microsoft.Extensions.AI.
ChatOptions options = new()
{
RawRepresentationFactory = _ => new ResponseCreationOptions()
{
Tools = { ... }, // backend-specific tools
},
};
Pros
- No development work needed; leverages existing `Microsoft.Extensions.AI` functionality.
- Flexible for integrating tools from any AI provider without modifying `AITool`.
- Minimal codebase changes, reducing the risk of introducing errors.
Cons
- Requires a separate mechanism to register tools, complicating the developer experience.
- Developers must know the specific AI provider (via `IChatClient`) to configure tools, reducing abstraction.
- Inconsistent with the `AITool` abstraction, leading to fragmented tool usage patterns.
- Poor tool discoverability, as these tools are not integrated into the `AITool` ecosystem.
Option 2: Add Provider-Specific AITool-Derived Types in Provider Packages
Description
Create provider-specific tool types that inherit from the AITool abstract class within each AI provider’s package (e.g., a Foundry package could include Foundry-specific tools). The provider’s IChatClient implementation would natively recognize and process these AITool-derived types, eliminating the need for a separate registration mechanism.
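A minimal sketch of what such a package-level type might look like (the `FoundryBingGroundingTool` name and its `ConnectionId` property are hypothetical, shown only to illustrate the shape):
using Microsoft.Extensions.AI;

// Hypothetical tool type shipped in a Foundry provider package. The
// provider's IChatClient implementation would recognize this type in
// ChatOptions.Tools and translate it into the service's native
// "bing_grounding" tool definition; other providers would not accept it.
public sealed class FoundryBingGroundingTool : AITool
{
    public FoundryBingGroundingTool(string connectionId) => ConnectionId = connectionId;

    // Connection ID of the Bing grounding resource (hypothetical property).
    public string ConnectionId { get; }

    public override string Name => "bing_grounding";
}
It would then be passed like any other tool, e.g. `ChatOptions options = new() { Tools = [new FoundryBingGroundingTool("conn-id")] };`.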
Pros
- Integrates with the `AITool` abstract class, providing a consistent developer experience within `Microsoft.Extensions.AI`.
- Eliminates the need for a special registration mechanism like `RawRepresentationFactory`.
- Enhances type safety and discoverability for provider-specific tools.
- Aligns with the standardized-interface decision driver by leveraging `AITool` as the base class.
Cons
- Developers must know they are targeting a specific AI provider to select the appropriate `AITool`-derived types.
- Increases maintenance overhead, as each provider's package must support and update its own tool types.
- Leads to fragmentation, as each provider requires its own set of `AITool`-derived types.
- Potential for duplication if multiple providers implement similar tools with different `AITool` derivatives.
Option 3: Create Generic AITool-Derived Abstractions in M.E.AI.Abstractions
Description
Develop generic tool abstractions that inherit from the AITool abstract class in the M.E.AI.Abstractions package (e.g., HostedCodeInterpreterTool, HostedWebSearchTool). These abstractions map to common tool concepts across multiple AI providers, with provider-specific implementations handled internally.
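For example, a caller could request code interpretation and web search without knowing which provider will serve the request; a minimal sketch using the existing `Microsoft.Extensions.AI` types (the method name and prompt are placeholders):
using Microsoft.Extensions.AI;

// The same ChatOptions work against any IChatClient; each provider maps the
// generic tools to its native definitions (code_interpreter, bing_grounding,
// googleSearch, web_search, and so on).
async Task<string> RunWithHostedToolsAsync(IChatClient client)
{
    ChatOptions options = new()
    {
        Tools =
        [
            new HostedCodeInterpreterTool(), // generic code-execution concept
            new HostedWebSearchTool(),       // generic web-search concept
        ],
    };

    ChatResponse response = await client.GetResponseAsync(
        "Chart last quarter's sales and cite recent market news.", options);
    return response.Text;
}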
Pros
- Provides a standardized `AITool`-based interface across AI providers, improving consistency and developer experience.
- Reduces the need for provider-specific knowledge by abstracting tool implementations.
- Highly extensible, supporting new `AITool`-derived types for common tool concepts (e.g., server-side MCP tools).
Cons
- Complex mapping logic needed to support diverse provider implementations.
- May not cover niche or provider-specific tools, necessitating a fallback mechanism.
Option 4: Hybrid Approach Combining Options 1, 2, and 3
Description
Implement a hybrid strategy where common tools use generic AITool-derived abstractions in M.E.AI.Abstractions (Option 3), provider-specific tools (e.g., for Foundry) are implemented as AITool-derived types in their respective provider packages (Option 2), and rare or unsupported tools fall back to ChatOptions.RawRepresentationFactory (Option 1).
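A sketch of how the three mechanisms could compose in a single request (the commented-out Foundry tool type is hypothetical; `ResponseCreationOptions` is the backend-specific type from the Option 1 example):
using Microsoft.Extensions.AI;

ChatOptions options = new()
{
    Tools =
    [
        new HostedCodeInterpreterTool(),            // Option 3: generic abstraction
        // new FoundryBingGroundingTool("conn-id"), // Option 2: provider-package type (hypothetical)
    ],

    // Option 1: break-glass fallback for tools with no AITool representation.
    RawRepresentationFactory = _ => new ResponseCreationOptions()
    {
        Tools = { /* backend-specific tools */ },
    },
};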
Pros
- Balances developer experience and flexibility by using the best `AITool`-based approach for each tool type.
- Supports standardized `AITool` interfaces for common tools while allowing provider-specific and break-glass mechanisms.
- Extensible and scalable, accommodating both current and future tool requirements across AI providers.
- Addresses ancillary and intermediate content (e.g., MCP permissions) with generic types.
Cons
- Increases complexity by managing multiple `AITool` integration approaches within the same system.
- Requires clear documentation to guide developers on when to use each option.
- Potential for inconsistency if boundaries between approaches are not well-defined.
- Higher maintenance burden to support and test multiple tool integration paths.
More information
AI Agent Tool Types Availability
| Tool Type | Azure AI Foundry Agent Service | OpenAI Assistant API | OpenAI ChatCompletion API | OpenAI Responses API | Amazon Bedrock Agents | Google Gemini | Anthropic | Description |
|---|---|---|---|---|---|---|---|---|
| Function Calling | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Enables custom, stateless functions to define specific agent behaviors. |
| Code Interpreter | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | Allows agents to execute code for tasks like data analysis or problem-solving. |
| Search and Retrieval | ✅ (File Search, Azure AI Search) | ✅ (File Search) | ❌ | ✅ (File Search) | ✅ (Knowledge Bases) | ✅ (Vertex AI Search) | ❌ | Enables agents to search and retrieve information from files, knowledge bases, or enterprise search systems. |
| Web Search | ✅ (Bing Search) | ❌ | ✅ | ✅ | ❌ | ✅ (Google Search) | ✅ | Provides real-time access to internet-based content using search engines or web APIs for dynamic, up-to-date information. |
| Remote MCP Servers | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | Gives the model access to new capabilities via Model Context Protocol servers. |
| Computer Use | ❌ | ❌ | ❌ | ✅ | ✅ (ANTHROPIC.Computer) | ❌ | ✅ | Creates agentic workflows that enable a model to control a computer interface. |
| OpenAPI Spec Tool | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | Integrates existing OpenAPI specifications for service APIs. |
| Stateful Functions | ✅ (Azure Functions) | ❌ | ❌ | ❌ | ✅ (AWS Lambda) | ❌ | ❌ | Supports custom, stateful functions for complex agent actions. |
| Text Editor | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | Allows agents to view and modify text files for debugging or editing purposes. |
| Azure Logic Apps | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | Low-code/no-code solution to add workflows to AI agents. |
| Microsoft Fabric | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | Enables agents to interact with data in Microsoft Fabric for insights. |
| Image Generation | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | Generates or edits images using GPT image. |
API Comparison
Function Calling
Azure AI Foundry Agent Service
Source: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/function-calling?pivots=rest
Message Request:
{
"tools": [
{
"type": "function",
"function": {
"description": "{string}",
"name": "{string}",
"parameters": "{JSON Schema object}"
}
}
]
}
Tool Call Response:
{
"tool_calls": [
{
"id": "{string}",
"type": "function",
"function": {
"name": "{string}",
"arguments": "{JSON object}",
}
}
]
}
OpenAI Assistant API
Source: https://platform.openai.com/docs/assistants/tools/function-calling
Message Request:
{
"tools": [
{
"type": "function",
"function": {
"description": "{string}",
"name": "{string}",
"parameters": "{JSON Schema object}"
}
}
]
}
Tool Call Response:
{
"tool_calls": [
{
"id": "{string}",
"type": "function",
"function": {
"name": "{string}",
"arguments": "{JSON object}",
}
}
]
}
OpenAI ChatCompletion API
Source: https://platform.openai.com/docs/guides/function-calling?api-mode=chat
Message Request:
{
"tools": [
{
"type": "function",
"function": {
"description": "{string}",
"name": "{string}",
"parameters": "{JSON Schema object}"
}
}
]
}
Tool Call Response:
[
{
"id": "{string}",
"type": "function",
"function": {
"name": "{string}",
"arguments": "{JSON object}",
}
}
]
OpenAI Responses API
Source: https://platform.openai.com/docs/guides/function-calling?api-mode=responses
Message Request:
{
"tools": [
{
"type": "function",
"description": "{string}",
"name": "{string}",
"parameters": "{JSON Schema object}"
}
]
}
Tool Call Response:
[
{
"id": "{string}",
"call_id": "{string}",
"type": "function_call",
"name": "{string}",
"arguments": "{JSON object}"
}
]
Amazon Bedrock Agents
Source: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateAgentActionGroup.html#API_agent_CreateAgentActionGroup_RequestSyntax
CreateAgentActionGroup Request:
{
"functionSchema": {
"name": "{string}",
"description": "{string}",
"parameters": {
"type": "{string | number | integer | boolean | array}",
"description": "{string}",
"required": "{boolean}"
}
}
}
Tool Call Response:
{
"invocationInputs": [
{
"functionInvocationInput": {
"actionGroup": "{string}",
"function": "{string}",
"parameters": [
{
"name": "{string}",
"type": "{string | number | integer | boolean | array}",
"value": {}
}
]
}
}
]
}
Google Gemini
Message Request:
{
"tools": [
{
"functionDeclarations": [
{
"name": "{string}",
"description": "{string}",
"parameters": "{JSON Schema object}"
}
]
}
]
}
Tool Call Response:
{
"content": {
"role": "model",
"parts": [
{
"functionCall": {
"name": "{string}",
"args": {
"{argument_name}": {}
}
}
}
]
}
}
Anthropic
Source: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/overview
Message Request:
{
"tools": [
{
"name": "{string}",
"description": "{string}",
"input_schema": "{JSON Schema object}"
}
]
}
Tool Call Response:
{
"id": "{string}",
"model": "{string}",
"stop_reason": "tool_use",
"role": "assistant",
"content": [
{
"type": "text",
"text": "{string}"
},
{
"type": "tool_use",
"id": "{string}",
"name": "{string}",
"input": {
"argument_name": {}
}
}
]
}
Commonalities
- Standardized Tool Definition: All providers use a JSON-based structure for defining tools, including a `type` field (commonly `"function"`) and a `function` object with `name`, `description`, and `parameters` (often following JSON Schema).
- Tool Call Response Structure: Responses typically include a list of tool calls with an `id`, `type`, and details about the function called (e.g., `name` and `arguments`), enabling consistent handling of function invocations.
- JSON Schema for Parameters: Function parameters are defined using JSON Schema objects across most providers, facilitating a unified approach to parameter validation and processing.
- Extensibility: The structure allows for additional metadata or fields (e.g., `call_id`, `actionGroup`), suggesting potential for abstraction to support provider-specific extensions while maintaining core compatibility.
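These shared fields are what `Microsoft.Extensions.AI` already normalizes for function calling; as a sketch, `AIFunctionFactory` derives all three from a .NET method (the `GetWeather` method is a placeholder):
using System.ComponentModel;
using Microsoft.Extensions.AI;

// AIFunctionFactory reflects over the method to produce the fields every
// provider's request format shares: a name, a description, and a JSON Schema
// for the parameters.
[Description("Gets the current weather for a city.")]
static string GetWeather([Description("The city name.")] string city)
    => $"Sunny in {city}"; // stub implementation for illustration

AIFunction weatherTool = AIFunctionFactory.Create(GetWeather);
ChatOptions options = new() { Tools = [weatherTool] };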
Code Interpreter
Azure AI Foundry Agent Service
.NET Support: ✅
Message Request:
{
"tools": [
{
"type": "code_interpreter"
}
],
"tool_resources": {
"code_interpreter": {
"file_ids": ["{string}"],
"data_sources": [
{
"type": {
"id_asset": "{string}",
"uri_asset": "{string}"
},
"uri": "{string}"
}
]
}
}
}
Tool Call Response:
{
"tool_calls": [
{
"id": "{string}",
"type": "code_interpreter",
"code_interpreter": {
"input": "{string}",
"outputs": [
{
"type": "image",
"file_id": "{string}"
},
{
"type": "logs",
"logs": "{string}"
}
]
}
}
]
}
OpenAI Assistant API
.NET Support: ✅
Message Request:
{
"tools": [
{
"type": "code_interpreter"
}
],
"tool_resources": {
"code_interpreter": {
"file_ids": ["{string}"]
}
}
}
Tool Call Response:
{
"tool_calls": [
{
"id": "{string}",
"type": "code",
"code": {
"input": "{string}",
"outputs": [
{
"type": "logs",
"logs": "{string}"
}
]
}
}
]
}
OpenAI Responses API
Source: https://platform.openai.com/docs/guides/tools-code-interpreter
.NET Support: ❌ (currently in development: GitHub issue)
Message Request:
{
"tools": [
{
"type": "code_interpreter",
"container": { "type": "auto" }
}
]
}
Tool Call Response:
[
{
"id": "{string}",
"code": "{string}",
"type": "code_interpreter_call",
"status": "{string}",
"container_id": "{string}",
"results": [
{
"type": "logs",
"logs": "{string}"
},
{
"type": "files",
"files": [
{
"file_id": "{string}",
"mime_type": "{string}"
}
]
}
]
}
]
Amazon Bedrock Agents
Source: https://docs.aws.amazon.com/bedrock/latest/userguide/agents-enable-code-interpretation.html
.NET Support: ❌ (the AWS SDK provides an `IChatClient` implementation, but it does not support `ChatOptions.RawRepresentationFactory`)
CreateAgentActionGroup Request:
{
"actionGroupName": "{string}",
"parentActionGroupSignature": "AMAZON.CodeInterpreter",
"actionGroupState": "ENABLED"
}
Tool Call Response:
{
"trace": {
"orchestrationTrace": {
"invocationInput": {
"invocationType": "ACTION_GROUP_CODE_INTERPRETER",
"codeInterpreterInvocationInput": {
"code": "{string}",
"files": ["{string}"]
}
},
"observation": {
"codeInterpreterInvocationOutput": {
"executionError": "{string}",
"executionOutput": "{string}",
"executionTimeout": "{boolean}",
"files": ["{string}"],
"metadata": {
"clientRequestId": "{string}",
"endTime": "{timestamp}",
"operationTotalTimeMs": "{long}",
"startTime": "{timestamp}",
"totalTimeMs": "{long}",
"usage": {
"inputTokens": "{integer}",
"outputTokens": "{integer}"
}
}
}
}
}
}
}
Google Gemini
.NET Support: ❌ (the official SDK lacks an `IChatClient` implementation)
Message Request:
{
"contents": {
"role": "{string}",
"parts": {
"text": "{string}"
}
},
"tools": [
{
"codeExecution": {}
}
]
}
Tool Call Response:
{
"content": {
"role": "model",
"parts": [
{
"executableCode": {
"language": "{string}",
"code": "{string}"
}
},
{
"codeExecutionResult": {
"outcome": "{string}",
"output": "{string}"
}
}
]
}
}
Anthropic
Source: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/code-execution-tool
.NET Support: ❌
- Anthropic.SDK - uses `code_interpreter` instead of `code_execution` and provides no way to specify a file ID.
- Anthropic by tryAGI - has a `code_execution` implementation, but it is in beta and cannot be used as a tool.
Message Request:
{
"tools": [
{
"name": "code_execution",
"type": "code_execution_20250522"
}
]
}
Tool Call Response:
{
"role": "assistant",
"container": {
"id": "{string}",
"expires_at": "{timestamp}"
},
"content": [
{
"type": "server_tool_use",
"id": "{string}",
"name": "code_execution",
"input": {
"code": "{string}"
}
},
{
"type": "code_execution_tool_result",
"tool_use_id": "{string}",
"content": {
"type": "code_execution_result",
"stdout": "{string}",
"stderr": "{string}",
"return_code": "{integer}"
}
}
]
}
Commonalities
- Tool Type Specification: Providers consistently define a `code_interpreter` tool type within the `tools` array, indicating support for code-execution capabilities.
- Input and Output Handling: Requests include mechanisms to specify code input (e.g., `input` or `code` fields), and responses return execution outputs, such as logs or files, in a structured format.
- File Resource Support: Most providers allow associating files with the code interpreter (e.g., via `file_ids` or `files`), enabling data input/output for code execution.
- Execution Metadata: Responses often include metadata about the execution process (e.g., `status`, `logs`, or `executionError`), which can be abstracted for standardized error handling and result processing.
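The mapping logic Option 3 requires can be made concrete with a hypothetical normalized result shape (the type and its members are illustrative, not an existing API):
using System.Collections.Generic;

// Hypothetical normalized view of a hosted code-interpreter call, folding
// together the provider-specific response shapes shown above.
public sealed record CodeInterpreterCallResult(
    string CallId,                   // id / call_id / tool_use_id
    string Code,                     // input / code / executableCode.code
    string? Logs,                    // logs / executionOutput / stdout
    string? Error,                   // executionError / stderr
    IReadOnlyList<string> FileIds);  // file_ids / files / results[].file_id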
Search and Retrieval
Azure AI Foundry Agent Service
Source: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/file-search-upload-files?pivots=rest
File Search Request:
{
"tools": [
{
"type": "file_search"
}
],
"tool_resources": {
"file_search": {
"vector_store_ids": ["{string}"],
"vector_stores": [
{
"name": "{string}",
"configuration": {
"data_sources": [
{
"type": {
"id_asset": "{string}",
"uri_asset": "{string}"
},
"uri": "{string}"
}
]
}
}
]
}
}
}
File Search Tool Call Response:
{
"tool_calls": [
{
"id": "{string}",
"type": "file_search",
"file_search": {
"ranking_options": {
"ranker": "{string}",
"score_threshold": "{float}"
},
"results": [
{
"file_id": "{string}",
"file_name": "{string}",
"score": "{float}",
"content": [
{
"text": "{string}",
"type": "{string}"
}
]
}
]
}
}
]
}
Azure AI Search Request:
{
"tools": [
{
"type": "azure_ai_search"
}
],
"tool_resources": {
"azure_ai_search": {
"indexes": [
{
"index_connection_id": "{string}",
"index_name": "{string}",
"query_type": "{string}"
}
]
}
}
}
Azure AI Search Tool Call Response:
{
"tool_calls": [
{
"id": "{string}",
"type": "azure_ai_search",
"azure_ai_search": {} // From documentation: Reserved for future use
}
]
}
OpenAI Assistant API
Source: https://platform.openai.com/docs/assistants/tools/file-search
Message Request:
{
"tools": [
{
"type": "file_search"
}
],
"tool_resources": {
"file_search": {
"vector_store_ids": ["string"]
}
}
}
Tool Call Response:
{
"tool_calls": [
{
"id": "{string}",
"type": "file_search",
"file_search": {
"ranking_options": {
"ranker": "{string}",
"score_threshold": "{float}"
},
"results": [
{
"file_id": "{string}",
"file_name": "{string}",
"score": "{float}",
"content": [
{
"text": "{string}",
"type": "{string}"
}
]
}
]
}
}
]
}
OpenAI Responses API
Source: https://platform.openai.com/docs/api-reference/responses/create
Message Request:
{
"tools": [
{
"type": "file_search"
}
],
"tool_resources": {
"file_search": {
"vector_store_ids": ["string"]
}
}
}
Tool Call Response:
{
"output": [
{
"id": "{string}",
"queries": ["{string}"],
"status": "{in_progress | searching | incomplete | failed | completed}",
"type": "file_search_call",
"results": [
{
"attributes": {},
"file_id": "{string}",
"filename": "{string}",
"score": "{float}",
"text": "{string}"
}
]
}
]
}
Amazon Bedrock Agents
Source: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeAgent.html
Message Request:
{
"sessionState": {
"knowledgeBaseConfigurations": [
{
"knowledgeBaseId": "{string}",
"retrievalConfiguration": {
"vectorSearchConfiguration": {
"filter": {},
"implicitFilterConfiguration": {
"metadataAttributes": [
{
"description": "{string}",
"key": "{string}",
"type": "{string}"
}
],
"modelArn": "{string}"
},
"numberOfResults": "{number}",
"overrideSearchType": "{string}",
"rerankingConfiguration": {
"bedrockRerankingConfiguration": {
"metadataConfiguration": {
"selectionMode": "{string}",
"selectiveModeConfiguration": {}
},
"modelConfiguration": {
"additionalModelRequestFields": {
"string" : "{JSON string}"
},
"modelArn": "{string}"
},
"numberOfRerankedResults": "{number}"
},
"type": "{string}"
}
}
}
}
]
}
}
Tool Call Response:
{
"trace": {
"orchestrationTrace": {
"invocationInput": {
"invocationType": "KNOWLEDGE_BASE",
"knowledgeBaseLookupInput": {
"knowledgeBaseId": "{string}",
"text": "{string}"
}
},
"observation": {
"type": "KNOWLEDGE_BASE",
"knowledgeBaseLookupOutput": {
"retrievedReferences": [
{
"metadata": {},
"content": {
"byteContent": "{string}",
"row": [
{
"columnName": "{string}",
"columnValue": "{string}",
"type": "{BLOB | BOOLEAN | DOUBLE | NULL | LONG | STRING}"
}
],
"text": "{string}",
"type": "{TEXT | IMAGE | ROW}"
}
}
],
"metadata": {
"clientRequestId": "{string}",
"endTime": "{timestamp}",
"operationTotalTimeMs": "{long}",
"startTime": "{timestamp}",
"totalTimeMs": "{long}",
"usage": {
"inputTokens": "{integer}",
"outputTokens": "{integer}"
}
}
}
}
}
}
}
Google Gemini
Message Request:
{
"contents": [
{
"role": "user",
"parts": [
{
"text": "{string}"
}
]
}
],
"tools": [
{
"retrieval": {
"vertexAiSearch": {
"datastore": "{string}"
}
}
}
]
}
Tool Call Response:
{
"content": {
"role": "model",
"parts": [
{
"text": "{string}"
}
]
},
"groundingMetadata": {
"retrievalQueries": [
"{string}"
],
"groundingChunks": [
{
"retrievedContext": {
"uri": "{string}",
"title": "{string}"
}
}
],
"groundingSupport": [
{
"segment": {
"startIndex": "{number}",
"endIndex": "{number}"
},
"segment_text": "{string}",
"supportChunkIndices": ["{number}"],
"confidenceScore": ["{number}"]
}
]
}
}
Commonalities
- Vector Store Integration: Providers like Azure and OpenAI use `vector_store_ids` or similar constructs to reference vector stores for file search, suggesting a common approach to retrieval-augmented generation.
- Search Configuration: Requests include configurations for search (e.g., `vectorSearchConfiguration`, `ranking_options`), allowing customization of retrieval parameters like result count or ranking.
- Result Structure: Responses contain a list of search results with fields like `file_id`, `score`, and `content` or `text`, enabling consistent processing of retrieved data.
- Metadata Inclusion: Search responses often include metadata (e.g., `score`, `timestamp`, `usage`), which can be abstracted for unified analytics and performance tracking.
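As with code interpretation, a generic abstraction would fold these result shapes into one type; a hypothetical sketch (illustrative only):
// Hypothetical normalized retrieval hit spanning file_search results,
// Bedrock retrievedReferences, and Gemini groundingChunks.
public sealed record RetrievalHit(
    string? FileId,   // file_id (Azure AI Foundry / OpenAI)
    string? Uri,      // uri (Gemini retrievedContext)
    string? Title,    // file_name / filename / title
    double? Score,    // score / confidenceScore
    string Text);     // content[].text / text / byteContent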
Web Search
Azure AI Foundry Agent Service
Source: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/bing-code-samples?pivots=rest
Bing Search Message Request:
{
"tools": [
{
"type": "bing_grounding",
"bing_grounding": {
"search_configurations": [
{
"connection_id": "{string}",
"count": "{number}",
"market": "{string}",
"set_lang": "{string}",
"freshness": "{string}",
}
]
}
}
]
}
Bing Search Tool Call Response:
{
"tool_calls": [
{
"id": "{string}",
"type": "function",
"bing_grounding": {} // From documentation: Reserved for future use
}
]
}
OpenAI ChatCompletion API
Source: https://platform.openai.com/docs/guides/tools-web-search?api-mode=chat
Message Request:
{
"web_search_options": {},
"messages": [
{
"role": "user",
"content": "{string}"
}
]
}
Tool Call Response:
[
{
"index": 0,
"message": {
"role": "assistant",
"content": "{string}",
"annotations": [
{
"type": "url_citation",
"url_citation": {
"end_index": "{number}",
"start_index": "{number}",
"title": "{string}",
"url": "{string}"
}
}
]
}
}
]
OpenAI Responses API
Source: https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses
Message Request:
{
"tools": [
{
"type": "web_search_preview"
}
],
"input": "{string}"
}
Tool Call Response:
{
"output": [
{
"type": "web_search_call",
"id": "{string}",
"status": "{string}"
},
{
"id": "{string}",
"type": "message",
"status": "{string}",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "{string}",
"annotations": [
{
"type": "url_citation",
"start_index": "{number}",
"end_index": "{string}",
"url": "{string}",
"title": "{string}"
}
]
}
]
}
]
}
Google Gemini
Message Request:
{
"contents": [
{
"role": "user",
"parts": [
{
"text": "{string}"
}
]
}
],
"tools": [
{
"googleSearch": {}
}
]
}
Tool Call Response:
{
"content": {
"role": "model",
"parts": [
{
"text": "{string}"
}
]
},
"groundingMetadata": {
"webSearchQueries": [
"{string}"
],
"searchEntryPoint": {
"renderedContent": "{string}"
},
"groundingChunks": [
{
"web": {
"uri": "{string}",
"title": "{string}",
"domain": "{string}"
}
}
],
"groundingSupports": [
{
"segment": {
"startIndex": "{number}",
"endIndex": "{number}",
"text": "{string}"
},
"groundingChunkIndices": [
"{number}"
],
"confidenceScores": [
"{number}"
]
}
],
"retrievalMetadata": {}
}
}
Anthropic
Source: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-search-tool
Message Request:
{
"tools": [
{
"name": "web_search",
"type": "web_search_20250305",
"max_uses": "{number}",
"allowed_domains": ["{string}"],
"blocked_domains": ["{string}"],
"user_location": {
"type": "approximate",
"city": "{string}",
"region": "{string}",
"country": "{string}",
"timezone": "{string}"
}
}
]
}
Tool Call Response:
{
"role": "assistant",
"content": [
{
"type": "server_tool_use",
"id": "{string}",
"name": "web_search",
"input": {
"query": "{string}"
}
},
{
"type": "web_search_tool_result",
"tool_use_id": "{string}",
"content": [
{
"type": "web_search_result",
"url": "{string}",
"title": "{string}",
"encrypted_content": "{string}",
"page_age": "{string}"
}
]
},
{
"text": "{string}",
"type": "text",
"citations": [
{
"type": "web_search_result_location",
"url": "{string}",
"title": "{string}",
"encrypted_index": "{string}",
"cited_text": "{string}"
}
]
}
]
}
Commonalities
- Tool-Based Activation: Providers define web search as a tool (e.g., `web_search`, `bing_grounding`, `googleSearch`), typically within a `tools` array, allowing standardized activation of search capabilities.
- Query Input: Requests support passing a search query (e.g., via `input`, `content`, or `query`), enabling a unified interface for initiating searches.
- Result Annotations: Responses include search results with metadata like `url`, `title`, and sometimes `confidenceScores` or `citations`, which can be abstracted for consistent result presentation.
- Grounding Metadata: Most providers include grounding metadata (e.g., `groundingMetadata`, `annotations`), facilitating traceability and validation of search results.
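The annotation shapes differ per provider, but the recoverable fields overlap heavily; a hypothetical normalized citation type (illustrative only):
// Hypothetical normalized citation covering url_citation annotations (OpenAI),
// groundingSupports (Gemini), and web_search_result_location (Anthropic).
public sealed record WebSearchCitation(
    string Url,
    string? Title,
    int? StartIndex,    // start_index / segment.startIndex
    int? EndIndex,      // end_index / segment.endIndex
    string? CitedText); // cited_text / segment text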
Remote MCP Servers
OpenAI Responses API
Source: https://platform.openai.com/docs/guides/tools-remote-mcp
Message Request:
{
"tools": [
{
"type": "mcp",
"server_label": "{string}",
"server_url": "{string}",
"require_approval": "{string}"
}
]
}
Tool Call Response:
{
"output": [
{
"id": "{string}",
"type": "mcp_list_tools",
"server_label": "{string}",
"tools": [
{
"name": "{string}",
"input_schema": "{JSON Schema object}"
}
]
},
{
"id": "{string}",
"type": "mcp_call",
"approval_request_id": "{string}",
"arguments": "{JSON string}",
"error": "{string}",
"name": "{string}",
"output": "{string}",
"server_label": "{string}"
}
]
}
Google Gemini (ADK)
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, SseServerParams

async def get_agent_async():
    toolset = MCPToolset(
        connection_params=SseServerParams(url="http://remote-server:port/path", headers={...}),
        tool_filter=['read_file', 'list_directory']  # Optional: filter specific tools
    )
    # Use in an agent
    root_agent = LlmAgent(
        model='model',  # Adjust model name if needed based on availability
        name='agent_name',
        instruction='agent_instructions',
        tools=[toolset],  # Provide the MCP tools to the ADK agent
    )
    return root_agent, toolset
Anthropic
Source: https://docs.anthropic.com/en/docs/agents-and-tools/mcp-connector
Message Request:
{
"messages": [
{
"role": "user",
"content": "{string}"
}
],
"mcp_servers": [
{
"type": "url",
"url": "{string}",
"name": "{string}",
"tool_configuration": {
"enabled": true,
"allowed_tools": ["{string}"]
},
"authorization_token": "{string}"
}
]
}
Tool Use Response:
{
"type": "mcp_tool_use",
"id": "{string}",
"name": "{string}",
"server_name": "{string}",
"input": { "param1": "{object}", "param2": "{object}" }
}
Tool Result Response:
{
"type": "mcp_tool_result",
"tool_use_id": "{string}",
"is_error": "{boolean}",
"content": [
{
"type": "text",
"text": "{string}"
}
]
}
Commonalities
- Server Configuration: Providers specify remote servers via URL and metadata (e.g., `server_url`, `url`, `name`), enabling a standardized way to connect to external MCP services.
- Tool Integration: MCP tools are integrated into the `tools` or `mcp_servers` array, allowing agents to interact with remote tools in a consistent manner.
- Input/Output Structure: Requests and responses include structured input (e.g., `input`, `arguments`) and output (e.g., `output`, `content`), supporting abstraction for tool execution workflows.
- Authorization Support: Most providers include mechanisms for authentication (e.g., `authorization_token`, `headers`), which can be abstracted for secure communication with remote servers.
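These shared fields suggest a generic `AITool`-derived type along the following lines (a hypothetical sketch, not a shipped API):
using System;
using System.Collections.Generic;
using Microsoft.Extensions.AI;

// Hypothetical generic MCP tool carrying only the fields the providers above
// share. An IChatClient implementation would translate it into a
// "type": "mcp" tool (OpenAI Responses) or an mcp_servers entry (Anthropic).
public sealed class HostedMcpServerTool : AITool
{
    public required string ServerName { get; init; }   // server_label / name
    public required Uri ServerUrl { get; init; }       // server_url / url
    public IList<string>? AllowedTools { get; init; }  // allowed_tools / tool_filter
    public string? AuthorizationToken { get; init; }   // authorization_token / headers
    public override string Name => ServerName;
}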
Computer Use
OpenAI Responses API
Source: https://platform.openai.com/docs/guides/tools-computer-use
Message Request:
{
"tools": [
{
"type": "computer_use_preview",
"display_width": "{number}",
"display_height": "{number}",
"environment": "{browser | mac | windows | ubuntu}"
}
]
}
Tool Call Response:
{
"output": [
{
"type": "reasoning",
"id": "{string}",
"summary": [
{
"type": "summary_text",
"text": "{string}"
}
]
},
{
"type": "computer_call",
"id": "{string}",
"call_id": "{string}",
"action": {
"type": "{click | double_click | drag | keypress | move | screenshot | scroll | type | wait}",
// Other properties are associated with specific action type.
},
"pending_safety_checks": [],
"status": "{in_progress | completed | incomplete}"
}
]
}
Amazon Bedrock Agents
Source: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateAgentActionGroup.html#API_agent_CreateAgentActionGroup_RequestSyntax
Source: https://docs.aws.amazon.com/bedrock/latest/userguide/agent-computer-use-handle-tools.html
CreateAgentActionGroup Request:
{
"actionGroupName": "{string}",
"parentActionGroupSignature": "ANTHROPIC.Computer",
"actionGroupState": "ENABLED"
}
Tool Call Response:
{
"returnControl": {
"invocationId": "{string}",
"invocationInputs": [
{
"functionInvocationInput": {
"actionGroup": "{string}",
"actionInvocationType": "RESULT",
"agentId": "{string}",
"function": "{string}",
"parameters": [
{
"name": "{string}",
"type": "string",
"value": "{string}"
}
]
}
}
]
}
}
Anthropic
Source: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/computer-use-tool
Message Request:
{
"tools": [
{
"type": "computer_20250124",
"name": "computer",
"display_width_px": "{number}",
"display_height_px": "{number}",
"display_number": "{number}"
}
]
}
Tool Call Response:
{
"role": "assistant",
"content": [
{
"type": "tool_use",
"id": "{string}",
"name": "{string}",
"input": "{object}"
}
]
}
Commonalities
- Tool Type Definition: Providers define a computer use tool (e.g., `computer_use_preview`, `computer_20250124`, `ANTHROPIC.Computer`) within the `tools` array, indicating support for computer interaction capabilities.
- Action Specification: Responses include actions (e.g., `click`, `keypress`, `type`) with associated parameters, enabling standardized interaction with computer environments.
- Environment Configuration: Requests allow specifying the environment (e.g., `browser`, `windows`, `display_width`), which can be abstracted for cross-platform compatibility.
- Status Tracking: Responses include status indicators (e.g., `status`, `pending_safety_checks`), facilitating consistent monitoring of computer use tasks.
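A generic computer-use abstraction could capture the shared configuration in the same style (hypothetical sketch):
using Microsoft.Extensions.AI;

// Hypothetical generic computer-use tool capturing the configuration the
// providers above share; provider-only extras would flow through
// AdditionalProperties or the Option 1 break-glass mechanism.
public sealed class HostedComputerUseTool : AITool
{
    public required int DisplayWidth { get; init; }   // display_width / display_width_px
    public required int DisplayHeight { get; init; }  // display_height / display_height_px
    public string? Environment { get; init; }         // browser | mac | windows | ubuntu
    public override string Name => "computer_use";
}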
OpenAPI Spec Tool
Azure AI Foundry Agent Service
Source: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/openapi-spec-samples?pivots=rest-api
Source: https://learn.microsoft.com/en-us/rest/api/aifoundry/aiagents/run-steps/get-run-step?view=rest-aifoundry-aiagents-v1&tabs=HTTP#runstepopenapitoolcall
Message Request:
{
"tools": [
{
"type": "openapi",
"openapi": {
"description": "{string}",
"name": "{string}",
"auth": {
"type": "{string}"
},
"spec": "{OpenAPI specification object}"
}
}
]
}
Tool Call Response:
{
"tool_calls": [
{
"id": "{string}",
"type": "openapi",
"openapi": {} // From documentation: Reserved for future use
}
]
}
Amazon Bedrock Agents
Source: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateAgentActionGroup.html#API_agent_CreateAgentActionGroup_RequestSyntax
CreateAgentActionGroup Request:
{
"apiSchema": {
"payload": "{JSON or YAML OpenAPI specification string}"
}
}
Tool Call Response:
{
"invocationInputs": [
{
"apiInvocationInput": {
"actionGroup": "{string}",
"apiPath": "{string}",
"httpMethod": "{string}",
"parameters": [
{
"name": "{string}",
"type": "{string}",
"value": "{string}"
}
]
}
}
]
}
Commonalities
- OpenAPI Specification: Both providers support defining tools using OpenAPI specifications, either as a JSON/YAML payload or a structured `spec` object, enabling standardized API integration.
- Tool Type Identification: The tool is identified as `openapi` or via an `apiSchema`, providing a clear entry point for OpenAPI-based tool usage.
- Parameter Handling: Responses include parameters (e.g., `parameters`, `apiPath`, `httpMethod`) for API invocation, which can be abstracted for unified API call execution.
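Because both providers accept a raw OpenAPI document, a generic tool type could carry the specification verbatim (hypothetical sketch):
using Microsoft.Extensions.AI;

// Hypothetical generic OpenAPI tool; the specification text is passed through
// unmodified as either a structured spec (Azure) or a payload string (Bedrock).
public sealed class HostedOpenApiTool : AITool
{
    public required string ToolName { get; init; }      // name / actionGroupName
    public required string Specification { get; init; } // spec / apiSchema.payload (JSON or YAML)
    public override string Name => ToolName;
}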
Stateful Functions
Azure AI Foundry Agent Service
Source: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/azure-functions-samples?pivots=rest
Message Request:
{
"tools": [
{
"type": "azure_function",
"azure_function": {
"function": {
"name": "{string}",
"description": "{string}",
"parameters": "{JSON Schema object}"
},
"input_binding": {
"type": "storage_queue",
"storage_queue": {
"queue_service_endpoint": "{string}",
"queue_name": "{string}"
}
},
"output_binding": {
"type": "storage_queue",
"storage_queue": {
"queue_service_endpoint": "{string}",
"queue_name": "{string}"
}
}
}
}
]
}
Tool Call Response: Not specified in the documentation.
Amazon Bedrock Agents
Source: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateAgentActionGroup.html#API_agent_CreateAgentActionGroup_RequestSyntax
CreateAgentActionGroup Request:
{
"apiSchema": {
"payload": "{JSON or YAML OpenAPI specification string}"
}
}
Tool Call Response:
{
"invocationInputs": [
{
"apiInvocationInput": {
"actionGroup": "{string}",
"apiPath": "{string}",
"httpMethod": "{string}",
"parameters": [
{
"name": "{string}",
"type": "{string}",
"value": "{string}"
}
]
}
}
]
}
Commonalities
- API-Driven Interaction: Both providers use API-based structures (e.g., `apiSchema`, `azure_function`) to define stateful functions, enabling integration with external services.
- Parameter Specification: Requests include parameter definitions (e.g., `parameters` as a JSON Schema object), supporting standardized input handling.
Text Editor
Anthropic
Source: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/text-editor-tool
Message Request:
{
"tools": [
{
"type": "text_editor_20250429",
"name": "str_replace_based_edit_tool"
}
]
}
Tool Call Response:
{
"role": "assistant",
"content": [
{
"type": "tool_use",
"id": "{string}",
"name": "str_replace_based_edit_tool",
"input": {
"command": "{string}",
"path": "{string}"
}
}
]
}
Microsoft Fabric
Azure AI Foundry Agent Service
Source: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/fabric?pivots=rest
Message Request:
{
"tools": [
{
"type": "fabric_dataagent",
"fabric_dataagent": {
"connections": [
{
"connection_id": "{string}"
}
]
}
}
]
}
Tool Call Response: Not specified in the documentation.
Image Generation
OpenAI Responses API
Source: https://platform.openai.com/docs/guides/tools-image-generation
Message Request:
{
"tools": [
{
"type": "image_generation"
}
]
}
Tool Call Response:
{
"output": [
{
"type": "image_generation_call",
"id": "{string}",
"result": "{Base64 string}",
"status": "{string}"
}
]
}
Decision Outcome
TBD.