Tool Use
Agent Development Kit (ADK) Tools
What is a Tool?
In ADK, a tool is a capability an agent can invoke to act beyond the LLM's own text generation: typically a Python function, another agent, or an external service that performs an action or fetches information and returns a structured result to the model.
ADK Tool Utilization Process
In outline: the agent's LLM decides a tool call is needed and emits the tool name and arguments, the ADK framework executes the matching tool, and the tool's result is passed back to the LLM, which uses it to shape the final response.
Tool Types Supported in ADK
Function Tools
Wrap ordinary Python functions as tools. The function's name, docstring, parameters, and type hints become the schema the LLM uses to decide when and how to call it.
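Below is a minimal sketch, assuming the google-adk Python package and a Gemini model; the get_weather function, its return shape, and the model name are illustrative.

```python
from google.adk.agents import Agent


def get_weather(city: str) -> dict:
    """Returns a short weather report for the given city."""
    # A real tool would call a weather API; this result is hard-coded.
    return {"status": "success", "report": f"It is sunny in {city}."}


# Plain Python functions passed via `tools` are wrapped as function tools;
# their name, docstring, and type hints form the schema the LLM sees.
weather_agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash",
    instruction="Answer weather questions using the get_weather tool.",
    tools=[get_weather],
)
```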
Agent-as-a-Tool (AgentTool)
A particularly powerful pattern: one agent can invoke another agent as if it were a standard tool. This enables modular design, task delegation, and building hierarchical agent teams.
When Agent A calls Agent B as a tool, Agent A receives the result from Agent B and decides how to use it in its final response, retaining overall control.
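A hedged sketch of the pattern using ADK's AgentTool wrapper; the agent names, instructions, and model are illustrative.

```python
from google.adk.agents import Agent
from google.adk.tools.agent_tool import AgentTool

# The "inner" agent that will be exposed as a tool.
summarizer = Agent(
    name="summarizer",
    model="gemini-2.0-flash",
    instruction="Summarize the text you are given in two sentences.",
)

# The root agent calls the summarizer like any other tool and keeps
# control of how the summary is used in its final response.
root_agent = Agent(
    name="root_agent",
    model="gemini-2.0-flash",
    instruction="Use the summarizer tool when the user asks for a summary.",
    tools=[AgentTool(agent=summarizer)],
)
```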
Long Running Function Tool
ADK also provides LongRunningFunctionTool for tasks that take significant time to complete or that need to yield intermediate results (such as progress updates) before the final answer is ready.
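A hedged sketch assuming LongRunningFunctionTool from google.adk.tools; the progress-reporting generator follows one documented pattern, but exactly how intermediate results are surfaced varies by ADK version, and the file-processing logic is illustrative.

```python
from google.adk.agents import Agent
from google.adk.tools import LongRunningFunctionTool


def process_large_file(file_id: str):
    """Processes a large file, reporting progress along the way."""
    for pct in (25, 50, 75):
        # Intermediate results the agent can surface to the user.
        yield {"status": "pending", "progress": f"{pct}%"}
    # Final result once the long-running work is done.
    return {"status": "done", "file_id": file_id}


batch_agent = Agent(
    name="batch_agent",
    model="gemini-2.0-flash",
    instruction="Use the tool to process files and report progress.",
    tools=[LongRunningFunctionTool(func=process_large_file)],
)
```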
Built-in Tools
ADK provides several ready-to-use tools for common tasks, saving development time. Examples include GoogleSearchTool, CodeExecutionTool (for running Python code), and VertexAiSearchTool (for RAG over custom data stores).
Google Cloud Tools: specialized tools for interacting directly with Google Cloud Platform services, such as Apigee API Hub, Application Integration workflows, and various Google Cloud databases (often via the MCP Toolbox).
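A hedged sketch of attaching a built-in tool; it assumes the ready-made google_search instance exported by google.adk.tools (the same capability the prose calls GoogleSearchTool). Built-in tools may carry model or combination restrictions not shown here.

```python
from google.adk.agents import Agent
from google.adk.tools import google_search

search_agent = Agent(
    name="search_agent",
    model="gemini-2.0-flash",
    instruction="Answer questions, grounding them with Google Search when useful.",
    tools=[google_search],  # pre-built tool instance provided by ADK
)
```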
Third-Party Tools
Integrate tools from popular external libraries like LangChain (using LangchainTool) or CrewAI (using CrewaiTool) seamlessly into your ADK agents.
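A hedged sketch of wrapping a LangChain tool with LangchainTool; it assumes the langchain-community package (and its Wikipedia dependency) is installed, and the agent details are illustrative.

```python
from google.adk.agents import Agent
from google.adk.tools.langchain_tool import LangchainTool
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

# An off-the-shelf LangChain tool, wrapped so ADK can call it.
wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())

research_agent = Agent(
    name="research_agent",
    model="gemini-2.0-flash",
    instruction="Use Wikipedia to look up background facts.",
    tools=[LangchainTool(tool=wikipedia)],
)
```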
MCP & OpenAPI Tools
MCP tools adhere to the Model Context Protocol (MCP), an open standard for how models interact with external tools and data sources. This allows ADK agents to consume services from any MCP-compliant server, such as the MCP Toolbox for Databases.
OpenAPI tools are generated automatically: ADK can create tools that interact with any REST API described by an OpenAPI (Swagger) specification.
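A hedged sketch of consuming an MCP server with MCPToolset, loosely following the filesystem-server example in the ADK docs; import paths and connection-parameter classes have shifted between ADK versions, and the npx server plus target folder are illustrative. OpenAPIToolset works analogously, generating one tool per operation from a supplied spec.

```python
from google.adk.agents import Agent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

fs_agent = Agent(
    name="fs_agent",
    model="gemini-2.0-flash",
    instruction="Help the user inspect files using the MCP filesystem tools.",
    tools=[
        # Spawns the MCP filesystem server over stdio and exposes its tools.
        MCPToolset(
            connection_params=StdioServerParameters(
                command="npx",
                args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp/agent_files"],
            )
        )
    ],
)
```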
ToolContext
When the framework executes a tool function, the tool can receive a special ToolContext object (by declaring a tool_context parameter, which is hidden from the LLM's schema).
This context is a bridge, giving the tool access to the current state of the interaction. It allows tools to be more than just stateless functions; they can participate in the conversation's state and access shared resources.
What can ToolContext do?
- Read and write the current session's state (tool_context.state).
- Access services to manage Artifacts (tool_context.list_artifacts(), load_artifact(), save_artifact()) and Memory (tool_context.search_memory()).
- Get information about the specific tool invocation (function_call_id) and any prior authentication (auth_response).
- Influence the agent's next steps via tool_context.actions, as shown in the sketch below.
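A hedged sketch of a tool that uses ToolContext for state and actions, assuming google.adk.tools.ToolContext; the discount logic, state key, and return shapes are illustrative.

```python
from google.adk.tools import ToolContext


def apply_discount(code: str, tool_context: ToolContext) -> dict:
    """Validates a discount code and records it in session state."""
    # Read shared session state written by earlier turns or other tools.
    if tool_context.state.get("discount_code"):
        return {"status": "error", "message": "A discount was already applied."}

    # Write back to session state so later tools and turns can see it.
    tool_context.state["discount_code"] = code

    # Influence what happens next; skip_summarization is one of the
    # flags exposed on tool_context.actions.
    tool_context.actions.skip_summarization = True
    return {"status": "success", "code": code}
```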