Google Agent Development Kit (ADK): Building Your First AI Agent


Part 1 of 5 – From Zero to Production-Ready Agents

The race to build autonomous AI agents has intensified
dramatically in 2024-2025. While LLMs have proven their ability to generate text, the real transformation happens
when they can perceive their environment, reason about problems, and take actions autonomously.
Google’s Agent Development Kit (ADK) represents a fundamental shift from simple LLM wrappers to production-grade
agent architectures.

This series will take you from foundational concepts to
deploying multi-agent systems in production on Google Cloud. If you’re an architect, engineer, or technical leader
evaluating agent frameworks, this guide provides the technical depth and architectural patterns you need to make
informed decisions.

What is Google ADK?

The Agent Development Kit (ADK) is Google’s open-source Python framework specifically designed for
building autonomous AI agents. Unveiled at Google Cloud NEXT 2025, ADK addresses the gap between experimental AI
demos and production-ready agent systems.

🎯 Architectural Positioning

ADK is NOT a general-purpose LLM framework. It’s purpose-built for agent orchestration where
multiple autonomous components need to collaborate, use tools, maintain state, and operate reliably at scale.

ADK vs. Genkit vs. LangChain: When to Use What?

Google offers multiple AI frameworks. Here’s the architectural decision tree:

Framework | Primary Use Case | Best For | Production Maturity
----------|------------------|----------|--------------------
ADK | Multi-agent systems, complex orchestration | Enterprise automation, autonomous systems | ✅ GA (Generally Available)
Genkit | Full-stack AI apps (chatbots, RAG) | Rapid prototyping, Firebase integration | ✅ GA
LangChain | General LLM orchestration | Broad ecosystem, multi-model support | ⚠️ Rapidly evolving

Architecture Decision: Choose ADK when you need:

  • Multiple agents working collaboratively
  • Production-grade observability (Cloud Trace, BigQuery Analytics)
  • Deep Gemini integration with advanced features (Code Execution, Search)
  • Enterprise-level reliability and support

Source: Google Cloud Agent Builder Documentation

Core ADK Philosophy: Model-Agnostic by Design

While ADK is optimized for Gemini models, it’s architecturally model-agnostic. You can integrate:

  • Google Models: Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini 2.0
  • Third-Party Models: Claude (Anthropic), GPT-4 (OpenAI), Llama (Meta)
  • Specialized Models: Domain-specific fine-tuned models

This flexibility is critical for architects building systems that may need multi-model strategies or vendor
diversification.

Reference: ADK GitHub Repository – Model Integration Guide
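In practice, switching providers is intended to be a configuration change rather than a code change. The sketch below reuses the YAML schema from the configuration step later in this article; the anthropic provider value and the api_key_secret field are illustrative assumptions, not documented ADK keys:

# Hypothetical sketch: pointing the same agent at a third-party model.
# The "anthropic" provider and "api_key_secret" key are assumptions.
agent:
  model:
    provider: "anthropic"           # assumed provider identifier
    name: "claude-3-5-sonnet"       # third-party model in place of Gemini
    api_key_secret: "projects/my-adk-project/secrets/anthropic-key"
    parameters:
      temperature: 0.7
      max_output_tokens: 2048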

Understanding Agent Architecture: The Perception-Reasoning-Action Loop

Before diving into code, let’s establish the foundational architecture. An ADK agent operates in a continuous loop:

Figure 1: Agent Architecture – Perception-Reasoning-Action Loop (C4 Context Level). The diagram shows the agent's major components – Perception, Reasoning, Action, Observability, and Memory & State Management – and the flows between them.

Diagram Explanation: C4 Context Level

1. Perception: Receives user input and retrieves relevant context from memory/external sources.
2. Reasoning: The LLM (Gemini) processes the enriched context and decides what action to take.
3. Action: Executes tools (API calls, database queries, calculations) based on the reasoning output.
4. Observability: All components emit telemetry to Cloud Trace and BigQuery for monitoring.
5. Memory & State: A persistent layer that all components read from and write to in order to maintain conversation context.

This architecture pattern is based on the ReAct (Reasoning + Acting) framework introduced by Yao et al. (2022) and implemented natively in ADK.
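To make the loop concrete, here is a minimal, framework-free Python sketch of the same cycle. It is conceptual only – ADK implements this orchestration internally, and call_llm and execute_tool below are illustrative stubs, not ADK APIs:

# Conceptual sketch of the perception-reasoning-action loop.
# call_llm and execute_tool are illustrative stubs, not ADK APIs.

def call_llm(context: dict) -> dict:
    """Stub: a real implementation would send the context to Gemini."""
    if context["observations"]:
        return {"type": "final_answer", "text": f"Answer based on {context['observations']}"}
    return {"type": "tool_call", "tool": "google_search", "arguments": {"query": context["input"]}}

def execute_tool(tool: str, arguments: dict) -> str:
    """Stub: a real implementation would dispatch to a registered tool."""
    return f"search results for {arguments['query']!r}"

def run_agent_turn(user_input: str, memory: dict) -> str:
    # 1. Perception: enrich the raw input with stored context.
    context = {"input": user_input, "history": memory.get("history", []), "observations": []}
    while True:
        # 2. Reasoning: the model decides whether to act or to answer.
        decision = call_llm(context)
        if decision["type"] == "final_answer":
            # 5. Memory & State: persist the exchange for future turns.
            memory.setdefault("history", []).append((user_input, decision["text"]))
            return decision["text"]
        # 3. Action: run the requested tool and feed the observation back.
        context["observations"].append(execute_tool(decision["tool"], decision["arguments"]))
        # 4. Observability: ADK would emit a trace span around each iteration.

print(run_agent_turn("What is Google ADK?", memory={}))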

Setting Up Your ADK Development Environment

Prerequisites

Before installing ADK, ensure you have:

# System Requirements
- Python 3.10 or higher
- pip 23.0+
- Google Cloud Project with billing enabled
- gcloud CLI installed and configured

# Verify installations
python --version  # Should be 3.10+
pip --version
gcloud --version

Google Cloud Setup

ADK requires API access to Vertex AI. Set up your project:

# 1. Create a Google Cloud project (if you don't have one)
gcloud projects create my-adk-project --name="ADK Development"

# 2. Set the project
gcloud config set project my-adk-project

# 3. Enable required APIs
gcloud services enable aiplatform.googleapis.com
gcloud services enable cloudtrace.googleapis.com
gcloud services enable bigquery.googleapis.com

# 4. Set up authentication
gcloud auth application-default login

# 5. Set environment variable for your project
export GOOGLE_CLOUD_PROJECT="my-adk-project"
export GOOGLE_CLOUD_REGION="us-central1"  # Or your preferred region

🏗️ Architecture Best Practice: Use separate Google Cloud projects for development, staging, and production environments. This ensures:

  • Isolated billing and quota management
  • Clear separation of secrets and credentials
  • Independent deployment pipelines
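One practical way to keep these environments separate on a single workstation is gcloud's named configurations. A short sketch (the project IDs are placeholders):

# Create one named gcloud configuration per environment
gcloud config configurations create adk-dev
gcloud config set project my-adk-project-dev

gcloud config configurations create adk-prod
gcloud config set project my-adk-project-prod

# Switch environments without touching code or credentials
gcloud config configurations activate adk-dev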

Installing ADK

# Create a virtual environment (recommended)
python -m venv adk-env
source adk-env/bin/activate  # On Windows: adk-env\Scripts\activate

# Install ADK
pip install google-adk-core

# Install optional dependencies for Google Cloud integration
pip install google-adk-vertexai

# Verify installation
python -c "import adk; print(f'ADK version: {adk.__version__}')"

Reference: Official ADK Installation Guide

Project Structure Best Practices

Here’s a production-ready project structure for ADK applications:

my-adk-project/
├── agents/
│   ├── __init__.py
│   ├── search_agent.py         # Individual agent implementations
│   ├── analysis_agent.py
│   └── coordinator.py          # Multi-agent orchestrator
├── tools/
│   ├── __init__.py
│   ├── custom_search.py        # Custom tool implementations
│   └── database_tools.py
├── config/
│   ├── development.yaml        # Environment-specific configs
│   ├── staging.yaml
│   └── production.yaml
├── tests/
│   ├── test_agents.py
│   └── test_tools.py
├── deployment/
│   ├── Dockerfile
│   ├── cloudbuild.yaml         # CI/CD for Cloud Build
│   └── terraform/              # Infrastructure as Code
├── requirements.txt
├── pyproject.toml
└── README.md
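The tree above lists a requirements.txt; a minimal starting point might look like the following sketch (package names mirror the install step above – pin exact versions for reproducible builds):

# requirements.txt – minimal sketch; pin exact versions in real projects
google-adk-core
google-adk-vertexai
pyyaml

# Test dependencies (or keep these in a separate requirements-dev.txt)
pytest
pytest-asyncio
pytest-cov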

Building Your First Agent: Google Search Assistant

Let’s build a production-ready agent that can search Google and provide intelligent responses. This example
demonstrates the core ADK patterns you’ll use in all agents.

Step 1: Define the Agent Configuration

Create config/development.yaml:

# ADK Agent Configuration - Development Environment
agent:
  name: "google-search-assistant"
  description: "An AI agent that uses Google Search to answer questions"
  
  # Model configuration
  model:
    provider: "vertex-ai"
    name: "gemini-1.5-pro-002"  # Latest stable Gemini model
    parameters:
      temperature: 0.7          # Balance creativity and consistency
      top_p: 0.95
      max_output_tokens: 2048
      
  # Safety settings (important for production)
  safety:
    harm_category_hate_speech: "BLOCK_MEDIUM_AND_ABOVE"
    harm_category_dangerous_content: "BLOCK_MEDIUM_AND_ABOVE"
    harm_category_sexually_explicit: "BLOCK_MEDIUM_AND_ABOVE"
    harm_category_harassment: "BLOCK_MEDIUM_AND_ABOVE"
    
  # Observability
  observability:
    enable_cloud_trace: true
    enable_bigquery_analytics: true
    log_level: "INFO"

# Tool configurations
tools:
  google_search:
    enabled: true
    max_results: 5
    safe_search: true

Step 2: Implement the Agent

Create agents/search_agent.py:

"""
Google Search Assistant Agent
A tutorial implementation demonstrating core ADK patterns.
"""

import asyncio
from pathlib import Path
from typing import Any, Dict, List, Optional

import yaml

from adk import Agent, AgentConfig
from adk.tools import GoogleSearchTool
from adk.observability import trace_span
from adk.errors import AgentError, ToolExecutionError


class SearchAssistant:
    """
    An intelligent search assistant that uses Google Search 
    and Gemini to provide comprehensive answers.
    
    Architecture Pattern: Single-Agent with Tool Integration
    """
    
    def __init__(self, config_path: str = "config/development.yaml"):
        """
        Initialize the Search Assistant.
        
        Args:
            config_path: Path to YAML configuration file
            
        Raises:
            FileNotFoundError: If config file doesn't exist
            AgentError: If agent initialization fails
        """
        self.config = self._load_config(config_path)
        self.agent = self._initialize_agent()
        
    def _load_config(self, config_path: str) -> Dict[str, Any]:
        """Load and validate configuration."""
        config_file = Path(config_path)
        if not config_file.exists():
            raise FileNotFoundError(f"Config not found: {config_path}")
            
        with open(config_file, 'r') as f:
            config = yaml.safe_load(f)
            
        # Validate required sections
        required_sections = ['agent', 'tools']
        for section in required_sections:
            if section not in config:
                raise ValueError(f"Missing required config section: {section}")
                
        return config
    
    def _initialize_agent(self) -> Agent:
        """Initialize the ADK agent with tools and configuration."""
        
        # Create agent configuration from YAML
        agent_config = AgentConfig(
            name=self.config['agent']['name'],
            description=self.config['agent']['description'],
            model_config={
                'provider': self.config['agent']['model']['provider'],
                'model_name': self.config['agent']['model']['name'],
                'parameters': self.config['agent']['model']['parameters'],
            },
            safety_settings=self.config['agent']['safety'],
        )
        
        # Initialize Google Search tool
        search_tool = GoogleSearchTool(
            max_results=self.config['tools']['google_search']['max_results'],
            safe_search=self.config['tools']['google_search']['safe_search'],
        )
        
        # Create agent with tools
        agent = Agent(
            config=agent_config,
            tools=[search_tool],
        )
        
        return agent
    
    @trace_span(name="search_and_synthesize")
    async def ask(self, question: str, context: Optional[Dict[str, Any]] = None) -> str:
        """
        Ask a question and get an intelligent response using Google Search.
        
        This method demonstrates the core agent interaction pattern:
        1. Receive question
        2. Agent reasons about what tools to use
        3. Execute Google Search if needed
        4. Synthesize results into coherent answer
        
        Args:
            question: The user's question
            context: Optional additional context (e.g., user preferences)
            
        Returns:
            Synthesized answer from the agent
            
        Raises:
            AgentError: If agent processing fails
            ToolExecutionError: If search tool fails
            
        Example:
            >>> assistant = SearchAssistant()
            >>> answer = await assistant.ask("What is Google ADK?")
            >>> print(answer)
        """
        try:
            # Build the prompt with instructions
            system_prompt = self._build_system_prompt()
            
            # Call the agent
            response = await self.agent.run(
                user_message=question,
                system_message=system_prompt,
                context=context or {},
            )
            
            return response.content
            
        except ToolExecutionError as e:
            # Handle tool-specific errors
            return f"I encountered an error while searching: {str(e)}. Please try rephrasing your question."
            
        except AgentError as e:
            # Handle general agent errors; chain the original for debugging
            raise AgentError(f"Agent processing failed: {str(e)}") from e
    
    def _build_system_prompt(self) -> str:
        """
        Build the system prompt that guides agent behavior.
        
        This is a critical architectural decision point - the system prompt
        defines the agent's personality, capabilities, and constraints.
        """
        return """You are a helpful research assistant with access to Google Search.

Your capabilities:
- Search Google for current information
- Synthesize multiple sources into coherent answers
- Cite sources when providing information

Your constraints:
- Always search for current information rather than relying on training data
- Be honest about limitations - say "I don't know" if uncertain
- Cite sources for factual claims
- Refuse to help with harmful, illegal, or unethical requests

When answering:
1. Search for relevant information
2. Analyze and synthesize results
3. Provide a clear, well-structured answer
4. Include source citations
"""

    @trace_span(name="batch_questions")
    async def ask_batch(self, questions: List[str]) -> List[str]:
        """
        Process multiple questions in batch (useful for cost optimization).
        
        Args:
            questions: List of questions to process
            
        Returns:
            List of answers corresponding to each question
        """
        results = []
        for question in questions:
            answer = await self.ask(question)
            results.append(answer)
        return results


# Example usage and testing
async def main():
    """Demonstrate the Search Assistant in action."""
    
    # Initialize assistant
    assistant = SearchAssistant()
    
    # Example 1: Simple question
    print("=" * 80)
    print("Example 1: Current Events Question")
    print("=" * 80)
    
    question1 = "What are the latest developments in Google's Gemini AI models?"
    answer1 = await assistant.ask(question1)
    print(f"\nQ: {question1}")
    print(f"\nA: {answer1}\n")
    
    # Example 2: Technical question
    print("=" * 80)
    print("Example 2: Technical Documentation")
    print("=" * 80)
    
    question2 = "How do I deploy an ADK agent to Cloud Run?"
    answer2 = await assistant.ask(question2)
    print(f"\nQ: {question2}")
    print(f"\nA: {answer2}\n")
    
    # Example 3: With context
    print("=" * 80)
    print("Example 3: Question with Context")
    print("=" * 80)
    
    question3 = "What deployment options are available?"
    context = {
        "previous_questions": ["How do I deploy an ADK agent?"],
        "user_role": "architect",
    }
    answer3 = await assistant.ask(question3, context=context)
    print(f"\nQ: {question3}")
    print(f"Context: {context}")
    print(f"\nA: {answer3}\n")


if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Step 3: Run Your First Agent

# From your project root directory
python agents/search_agent.py

Expected Output:

================================================================================
Example 1: Current Events Question
================================================================================

Q: What are the latest developments in Google's Gemini AI models?

A: Based on current information, here are the latest developments:

**Gemini 2.0 Flash** (December 2024):
- Experimental release with enhanced multimodal capabilities
- 2x faster than Gemini 1.5 Pro
- Native tool use and code execution

**Gemini 1.5 Pro-002** (Stable):
- Production-grade model with 1M token context window
- Improved reasoning and coding capabilities
- Integrated with ADK for agent development

Sources:
- Google AI Blog: "Announcing Gemini 2.0" (December 2024)
- Vertex AI Documentation: Model Updates

[Additional examples would show similar structured responses]

Understanding the Request-Response Cycle

Let’s visualize what happens when you call assistant.ask():

Figure 2: ADK Request-Response Cycle (C4 Sequence Diagram – Component Level). The diagram traces a request from the application through the agent, the Gemini API, and the tools to the final response, with timing annotations.

What’s Happening Under the Hood

Steps 1-3: Your application calls the agent, which starts a trace span and sends the question to Gemini with the system prompt.

Steps 4-5: Gemini reasons that it needs current information and decides to call the Google Search tool. This is autonomous – you don't hard-code "if question about X, then search." The LLM makes this decision.

Steps 6-8: The ADK framework executes the Search tool, handles any errors, and returns the results to Gemini.

Steps 9-10: Gemini receives the search results and synthesizes them into a coherent, well-formatted answer.

Steps 11-13: The final answer is returned, and all telemetry is sent to Cloud Trace for observability.

Learn more about agent decision-making in the Vertex AI Function Calling documentation.
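Outside of ADK, you can reproduce this decision loop directly with the Vertex AI SDK. The following is a simplified sketch of one function-calling round trip – it is not ADK's internal implementation, and the google_search declaration is a hand-rolled stand-in for the real tool:

"""Simplified function-calling round trip with the Vertex AI SDK (not ADK internals)."""
import vertexai
from vertexai.generative_models import (
    FunctionDeclaration,
    GenerativeModel,
    Part,
    Tool,
)

vertexai.init(project="my-adk-project", location="us-central1")

# Declare the tool so the model knows it exists; the model decides when to call it.
search_decl = FunctionDeclaration(
    name="google_search",
    description="Search Google for current information.",
    parameters={
        "type": "object",
        "properties": {"query": {"type": "string", "description": "Search query"}},
        "required": ["query"],
    },
)

model = GenerativeModel(
    "gemini-1.5-pro-002",
    tools=[Tool(function_declarations=[search_decl])],
)
chat = model.start_chat()

# Steps 1-5: the model may answer with a function call instead of text.
response = chat.send_message("What are the latest developments in Gemini models?")
part = response.candidates[0].content.parts[0]

if part.function_call.name:
    # Steps 6-8: the application (in ADK, the framework) executes the tool...
    fake_results = {"results": ["<search results would go here>"]}
    # Steps 9-10: ...and returns them so the model can synthesize the final answer.
    response = chat.send_message(
        Part.from_function_response(name="google_search", response=fake_results)
    )

print(response.text)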

Testing and Debugging Your Agent

Production agents require robust testing. ADK provides built-in testing utilities:

"""
tests/test_search_agent.py
Unit tests for the Search Assistant
"""

import pytest
from unittest.mock import AsyncMock, mock_open, patch

from agents.search_agent import SearchAssistant


class TestSearchAssistant:
    """Test suite for Search Assistant agent."""
    
    @pytest.fixture
    def assistant(self):
        """Create a test instance with a mocked config file and stubbed framework classes."""
        config_yaml = """
agent:
  name: test-agent
  description: Test
  model:
    provider: vertex-ai
    name: gemini-1.5-pro-002
    parameters:
      temperature: 0.7
  safety:
    harm_category_hate_speech: BLOCK_MEDIUM_AND_ABOVE
tools:
  google_search:
    enabled: true
    max_results: 5
    safe_search: true
"""
        # mock_open feeds the YAML to _load_config; Agent and GoogleSearchTool
        # are stubbed so no real model client or search tool is constructed.
        with patch('agents.search_agent.Path.exists', return_value=True), \
             patch('builtins.open', mock_open(read_data=config_yaml)), \
             patch('agents.search_agent.GoogleSearchTool'), \
             patch('agents.search_agent.Agent'):
            return SearchAssistant()
    
    @pytest.mark.asyncio
    async def test_simple_question(self, assistant):
        """Test that agent can answer a simple question."""
        
        # Mock the agent.run method
        with patch.object(assistant.agent, 'run', new_callable=AsyncMock) as mock_run:
            mock_run.return_value.content = "ADK is Google's agent framework."
            
            answer = await assistant.ask("What is ADK?")
            
            assert "ADK" in answer
            assert "agent" in answer.lower()
            mock_run.assert_called_once()
    
    @pytest.mark.asyncio
    async def test_error_handling(self, assistant):
        """Test that agent handles search errors gracefully."""
        
        from adk.errors import ToolExecutionError
        
        with patch.object(assistant.agent, 'run', new_callable=AsyncMock) as mock_run:
            mock_run.side_effect = ToolExecutionError("Search API failed")
            
            answer = await assistant.ask("Test question")
            
            assert "error" in answer.lower()
            # Should not raise exception, should return error message
    
    @pytest.mark.asyncio
    async def test_batch_processing(self, assistant):
        """Test batch question processing."""
        
        with patch.object(assistant, 'ask', new_callable=AsyncMock) as mock_ask:
            mock_ask.return_value = "Answer"
            
            questions = ["Q1", "Q2", "Q3"]
            answers = await assistant.ask_batch(questions)
            
            assert len(answers) == 3
            assert mock_ask.call_count == 3


# Run tests with: pytest tests/test_search_agent.py -v

Run the tests:

# Install test dependencies
pip install pytest pytest-asyncio pytest-cov

# Run tests with coverage
pytest tests/ -v --cov=agents --cov-report=html

# Output:
# tests/test_search_agent.py::TestSearchAssistant::test_simple_question PASSED
# tests/test_search_agent.py::TestSearchAssistant::test_error_handling PASSED  
# tests/test_search_agent.py::TestSearchAssistant::test_batch_processing PASSED
# 
# Coverage: 87% (agents/)

Key Takeaways and Next Steps

🎓 What You’ve Learned

1. Architectural Foundation:

  • ADK is purpose-built for multi-agent systems, not general LLM apps
  • Agents operate in a Perception-Reasoning-Action loop
  • Observability is built-in from day one (Cloud Trace, BigQuery Analytics)

2. Development Patterns:

  • Configuration-driven design separates concerns
  • Agents autonomously decide when to use tools (no hard-coded if/else)
  • Async/await pattern enables scalable concurrent operations

3. Production Considerations:

  • Safety settings are configurable and critical for production
  • Comprehensive error handling prevents cascading failures
  • Testing with mocks enables rapid iteration

📚 Coming in Part 2: ADK Building Blocks

In the next article, we’ll dive deep into:

  • Custom Tool Development: Build domain-specific tools (database queries, API integrations)
  • Memory & State Management: Implement conversation history and long-term memory
  • Advanced Configuration: Model selection, cost optimization, prompt engineering
  • Real-World Case Study: Building a complete customer support agent

Ready to build production-grade AI agents?
Subscribe for the complete 5-part series on Google ADK.

Next: Part 2 – ADK Building Blocks: Tools, Memory, and State Management
Publishing: May 2025, Week 2


This site uses Akismet to reduce spam. Learn how your comment data is processed.