Amit Maraj for Google AI

Building a Multi-Agent Deep Research Tool with Google ADK, A2A, & Cloud Run

"Research" is a loaded word. It’s not just Googling a keyword. It’s reading papers, verifying facts, finding that one perfect diagram, and synthesizing it all into something coherent.

Asking a single AI agent to do all of that sequentially is not very efficient. It'll hallucinate, it'll get stuck, and it'll definitely be slow.

[Image: Deep Researcher Tool]

(TL;DR: Want the code? Check out the Deep Research Agent code on GitHub.)

I wanted a system that could take a topic—say, "The History of Recurrent Neural Networks"—and produce a comprehensive, illustrated report. Additionally, I wanted to learn how to build a Deep Research Tool from scratch.

The first attempt? A single loop. It researched, then it looked for images, then it checked its work. It took forever.

So I asked: Can I make this faster?

In this post, we’re going to build a Parallel Research Squad. Instead of one agent doing everything, we’ll spin up three specialized agents that run simultaneously, coordinated by a central Orchestrator. We’ll use Google’s Agent Development Kit (ADK) for the brains, the Agent-to-Agent (A2A) Protocol for communication, and Google's Cloud Run to let them scale infinitely.


[Image: Architecture]

Part 1: Agentic Design Patterns

We aren't just writing prompts anymore; we are doing System Engineering. To build a robust system, we leverage three key design patterns:

1. The Orchestrator Pattern

Instead of a "God Agent" that decides everything, we have a central Orchestrator. Think of it as the Editor-in-Chief. It doesn't write the articles; it assigns stories to reporters. It manages the state, handles errors, and ensures the final product meets the deadline.
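In ADK terms, the Orchestrator can be as simple as a SequentialAgent that runs each stage in order. Here's a minimal sketch; research_squad and review_loop are built in the next sections, and report_writer stands in for a hypothetical final writing agent:

# orchestrator/app/agent.py (sketch)
from google.adk.agents import SequentialAgent

# The Editor-in-Chief: research first, then review, then write.
# research_squad and review_loop are defined below; report_writer is
# a hypothetical LlmAgent that drafts the final report.
orchestrator = SequentialAgent(
    name="orchestrator",
    sub_agents=[research_squad, review_loop, report_writer],
)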

2. Parallelization

This is our speed hack. Most agent frameworks run sequentially (Step A -> Step B -> Step C). But "Reading Arxiv Papers" and "Searching for Images" are independent tasks. By running them in parallel, we reduce the total latency to the duration of the slowest task, not the sum of all tasks.
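The arithmetic is easy to demo in plain Python. This is just an illustration of the principle, not the agent code:

import asyncio, time

async def task(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)  # stand-in for "read papers" / "find images"
    return name

async def main() -> None:
    start = time.perf_counter()
    # Three independent tasks run concurrently:
    # total wall time ~= max(3, 2, 1) seconds, not 3 + 2 + 1.
    await asyncio.gather(task("papers", 3), task("images", 2), task("check", 1))
    print(f"elapsed: {time.perf_counter() - start:.1f}s")  # ~3.0s

asyncio.run(main())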

3. The Evaluator-Optimizer

We don't trust the first draft. Our system includes a Judge agent. The Orchestrator sends the research to the Judge, who returns a strict Pass/Fail grade with feedback. If it fails, the Orchestrator loops back (Optimizer) to fix the gaps.


[Image: Sequential Processing]

Part 2: The Need for Speed (Parallel Execution)

The biggest bottleneck in AI agents is latency. Waiting for a model to "think" and browse the web takes time.

With ADK, we implement a ParallelAgent. This isn't just a concept; it's a framework primitive that handles the async complexity for us. A ParallelAgent runs its sub-agents concurrently, and the Orchestrator waits for all of them to finish before moving on. It's a simple way to improve performance whenever your sub-agents don't depend on each other's output.

# orchestrator/app/agent.py
from google.adk.agents import ParallelAgent

# The "Squad" runs together
research_squad = ParallelAgent(
    name="research_squad",
    description="Runs the researcher, academic scholar, and asset gatherer in parallel.",
    sub_agents=[researcher, academic_scholar, asset_gatherer],
)

This one change cut our total processing time by 60%. While the Scholar is reading a dense PDF, the Asset Gatherer is already validating image URLs.


[Image: A2A Handshake]

Part 3: The Universal Language (A2A Protocol)

How do these agents talk? They are separate microservices. The Researcher might be on a high-memory instance, while the Orchestrator is on a tiny one.

We use the Agent-to-Agent (A2A) Protocol. It’s like a standardized API for AI agents, built on top of JSON-RPC.

Why A2A?

  1. Decoupling: The Orchestrator doesn't need to know how the Researcher works, just where it is.
  2. Interoperability: You could write the Researcher in Python and the Judge in Go. As long as they speak A2A, they can collaborate.
  3. Service Discovery: In development, we map agents to localhost ports. In production, we map them to Cloud Run URLs.

# orchestrator/app/agent.py
from google.adk.agents.remote_a2a_agent import RemoteA2aAgent

# The Orchestrator calls the remote Scholar service
academic_scholar = RemoteA2aAgent(
    name="academic_scholar",
    # In prod, this is an internal Cloud Run URL
    agent_card="http://scholar-service:8000/.well-known/agent.json",
    description="Searches for academic papers."
)
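Under the hood, "discovery" is nothing exotic: the agent card is just a JSON document served at a well-known path. You can fetch it yourself. A sketch using the requests library, with field names from the A2A agent card schema:

# Peek at a remote agent's card -- this is all "discovery" really is.
import requests

card = requests.get(
    "http://scholar-service:8000/.well-known/agent.json", timeout=5
).json()
print(card.get("name"), "-", card.get("description"))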

[Image: Scaling Graph]

Part 4: Infrastructure as a Superpower (Cloud Run)

We deploy this system on Google Cloud Run. This gives us the "Grocery Store" scaling model.

The "Grocery Store" Model

Imagine a grocery store with one checkout lane. If 50 people show up, the line goes out the door.
In our system, each agent is a checkout lane.

  • Monolith: One lane. 50 requests = 50x wait time.
  • Microservices on Cloud Run: 50 requests = Cloud Run automatically spins up more Researcher instances to match demand. Everyone gets checked out at once.

Scale to Zero

When no one is using the app, we have 0 instances running, and we pay $0. This is crucial for cost-effective AI applications. The trade-off: a service that has scaled to zero incurs a cold start on the next request. If that latency matters, you can keep instances warm by setting a minimum instance count (gcloud run deploy --min-instances 1).


Part 5: The Frontend (Next.js + Real-Time)

We didn't want a CLI tool. We wanted a product.

We built a Next.js frontend that connects to the Orchestrator. Because we know the architecture, we can visualize it. When the research_squad starts, our frontend shows three pulsing indicators side-by-side. You actually see the parallelism happening.

It creates a sense of "liveness" and transparency that builds user trust.
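On the backend, this is driven by ADK's event stream: the Runner yields an event for each agent turn, and the event's author tells the UI which agent is working. A sketch of the server side, assuming the orchestrator agent from earlier (session creation elided):

# Sketch: stream per-agent progress from the orchestrator to the frontend.
from google.adk.runners import InMemoryRunner
from google.genai import types

runner = InMemoryRunner(agent=orchestrator, app_name="deep-research")

async def run_and_stream(topic: str, user_id: str, session_id: str):
    # A session with this id must already exist in runner.session_service.
    message = types.Content(role="user", parts=[types.Part(text=topic)])
    async for event in runner.run_async(
        user_id=user_id, session_id=session_id, new_message=message
    ):
        # event.author is the name of the agent that produced the event --
        # exactly what drives the three pulsing indicators.
        yield {"agent": event.author, "done": event.is_final_response()}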


Conclusion

By breaking our monolith into a Parallel Research Squad, we built a system that is:

  1. Faster: Parallel execution cuts wait times by >50%.
  2. Better: Specialized agents (Scholar, Gatherer) do deeper work than one generalist.
  3. Scalable: Microservices on Cloud Run handle infinite load.

Want to build this yourself? Grab the Deep Research Agent code on GitHub and spin up your own squad.
