
Build
One agent. Reads the order, verifies against the ERP, writes the order.
ADK · Agent Studio · Model Garden
BLUEPRINT · LIGHT MANUFACTURING · DISTRIBUTION
Read orders, verify SKUs and pricing against the system of record, flag exceptions, write the order. Named scope, named timeline, named stack — ADK · Document AI · Model Armor · 6 weeks.

Design ceiling
Order entry and verification time down from 11 minutes to under 90 seconds. Data-entry errors down by roughly 85%. Fulfillment cycle time down two to three days.
The agent system is designed to read the incoming order across email, PDF, and EDI, verify every SKU and price against the ERP, write the order when it clears, and flag the exceptions to a named operator with the discrepancy already framed.
The problem
A multi-site light manufacturer carrying a steady B2B order book is paying inside-sales operators to retype the same orders into the ERP: read the PO PDF, match each line to the SKU catalog, check the contract price, key the order, fix the typo three steps later. The cost is the eleven minutes per order, the data-entry errors that surface as returns, and the two to three days of fulfillment slippage every error compounds into.
The job is not to replace the operator. The job is to read the order across whatever channel it arrived on, verify every SKU and price against the ERP, write the clean ones straight through, and hand the exceptions to a named operator with the line-item discrepancy already framed. One agent, Document AI on the inbound parse, Model Armor at the gateway on every ERP write, every order linked back to the operator who finalised it.
Agent architecture
The platform’s four pillars, mapped to the components this agent system actually exercises.

One agent. Reads the order, verifies against the ERP, writes the order.
ADK · Agent Studio · Model Garden

Order state and verification context live in Memory Bank across the read-verify-write pass.
Agent Runtime · Memory Bank · Document AI

Every ERP write policy-checked. Every order tied to the operator who finalised it.
Agent Registry · Model Armor · Gateway · Identity

Eval set walks the SKU-match edges and the historical operator corrections.
Evals · Observability · Agent Analytics
Engagement · 6 weeks
Fixed scope, fixed price, fixed timeline. Here is what happens, and when.
Week 1
Discovery and channel inventory.
Walk the current order-entry workflow with the operations lead and two senior operators. Inventory the inbound channels (email, PDF, EDI), the SKU catalog, the contract-price feed, the ERP write API. Shape the eval set against operator correction rate on written orders.
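The eval set in Week 1 can be sketched as a set of paired records: the raw inbound order next to the operator-finalised version it should resolve to, scored by line-level correction rate. Everything named here (EvalCase, correction_rate) is illustrative, not part of the delivered stack:

```python
from dataclasses import dataclass, field

@dataclass
class EvalCase:
    # One historical order: the raw inbound document and the
    # operator-finalised lines it should resolve to.
    order_id: str
    channel: str                      # "email" | "pdf" | "edi"
    raw_payload: bytes
    finalised_lines: list = field(default_factory=list)

def correction_rate(agent_lines: list, finalised_lines: list) -> float:
    """Fraction of finalised lines the agent got wrong (SKU or price)."""
    if not finalised_lines:
        return 0.0
    wrong = sum(1 for a, f in zip(agent_lines, finalised_lines) if a != f)
    # Lines the agent missed or invented entirely also count as corrections.
    wrong += abs(len(agent_lines) - len(finalised_lines))
    return wrong / len(finalised_lines)
```

The correction rate on a sampled month of historical orders becomes the baseline every later week is measured against.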
Week 2
Order reader and Document AI.
Stand up the ADK agent that reads the inbound order across channels using Document AI on the PDF path. First eval pass on a sampled month of historical orders against the operator-finalised version.
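The cross-channel read reduces to a dispatch problem: one parser per inbound channel, all normalising to the same line-item shape. A minimal sketch, with the Document AI call stubbed out since the real processor wiring is project-specific:

```python
from typing import Callable

# Registry of per-channel parsers, each returning normalised line items.
PARSERS: dict[str, Callable[[bytes], list]] = {}

def parser(channel: str):
    """Register a parser for one inbound channel."""
    def register(fn):
        PARSERS[channel] = fn
        return fn
    return register

@parser("pdf")
def parse_pdf(payload: bytes) -> list:
    # In the real agent this path wraps a Document AI processor call;
    # stubbed here to keep the sketch self-contained.
    return [{"sku_text": "WIDGET-A", "qty": 10, "unit_price": 9.50}]

def parse_inbound_order(channel: str, payload: bytes) -> list:
    """Route the raw order to the parser for its channel."""
    if channel not in PARSERS:
        raise ValueError(f"no parser registered for channel {channel!r}")
    return PARSERS[channel](payload)
```

Email and EDI parsers register the same way, so the verification step downstream never needs to know which channel an order arrived on.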
Week 3
SKU and price verification.
Layer the verification step against the SKU catalog and the contract-price feed. Every line carries the matched SKU, the verified price, and a confidence signal. Exceptions route to the operator with the discrepancy already framed.
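The per-line verification contract can be sketched as follows: every line comes back with a matched SKU, a verified price, a confidence signal, and, when something does not clear, the discrepancy already framed as operator-readable text. Names and thresholds are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VerifiedLine:
    sku: Optional[str]            # matched catalog SKU, or None on no match
    qty: int
    quoted_price: float
    contract_price: Optional[float]
    confidence: float             # 1.0 on an exact catalog match
    discrepancy: Optional[str]    # framed for the operator when set

def verify_line(line: dict, catalog: dict, prices: dict) -> VerifiedLine:
    """Match one parsed line against the SKU catalog and contract-price feed."""
    sku = catalog.get(line["sku_text"])
    if sku is None:
        return VerifiedLine(None, line["qty"], line["unit_price"], None, 0.0,
                            f"no catalog match for {line['sku_text']!r}")
    contract = prices.get(sku)
    if contract is not None and abs(contract - line["unit_price"]) > 0.01:
        return VerifiedLine(sku, line["qty"], line["unit_price"], contract, 1.0,
                            f"{sku}: quoted {line['unit_price']:.2f}, "
                            f"contract {contract:.2f}")
    return VerifiedLine(sku, line["qty"], line["unit_price"], contract, 1.0, None)
```

An order writes straight through only when every line comes back with no discrepancy; a single set discrepancy routes the whole order to the operator.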
Week 4
ERP write and governance.
Wire the ERP write API behind a Model Armor policy gate. Enrol the agent in Agent Registry against the operator pool. Stand up the audit trail so every written order links to the operator who finalised it.
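The agent-side invariant behind the gate can be sketched like this: refuse any write carrying an unverified line, and stamp every write that clears with the operator who finalised it. The Model Armor policy gate sits in front of this at the gateway; the ERP client and line shape here are hypothetical stand-ins:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Line:
    sku: Optional[str]
    qty: int
    price: float
    discrepancy: Optional[str] = None

class PolicyViolation(Exception):
    pass

def write_order_to_erp(lines, operator_id, erp_client, audit_log):
    """Agent-side guard on the ERP write path.

    Never writes an order with an unverified line; every write that
    clears is appended to the audit trail with the finalising operator.
    """
    unverified = [l for l in lines if l.sku is None or l.discrepancy is not None]
    if unverified:
        raise PolicyViolation(
            f"{len(unverified)} unverified line(s); route to operator")
    order_id = erp_client.create_order([(l.sku, l.qty, l.price) for l in lines])
    audit_log.append({
        "order_id": order_id,
        "operator_id": operator_id,
        "written_at": datetime.now(timezone.utc).isoformat(),
    })
    return order_id
```

The audit record is what lets every written order link back to a named operator, which is the governance claim the engagement makes.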
Week 5
Staging shadow run.
Run the agent in shadow against live inbound for a week. Compare written orders to the operator queue. Tune the verification thresholds and the eval set against any drift.
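The shadow comparison is a straight diff between what the agent would have written and what the operators actually finalised. A minimal sketch of the weekly report, with illustrative names:

```python
def shadow_report(agent_orders: dict, operator_orders: dict) -> dict:
    """Compare a week of shadow-run agent output to the operator queue.

    Returns match counts plus the order ids that diverged; the diverged
    set feeds back into the eval set and the verification thresholds.
    """
    diverged = [
        order_id
        for order_id, operator_lines in operator_orders.items()
        if agent_orders.get(order_id) != operator_lines
    ]
    total = len(operator_orders)
    matched = total - len(diverged)
    return {
        "total": total,
        "matched": matched,
        "match_rate": matched / total if total else 1.0,
        "diverged": diverged,
    }
```

Each diverged order becomes a new eval case, so threshold tuning in Week 5 is driven by the same metric the engagement started with.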
Week 6
Production cutover and handoff.
Deploy to Agent Runtime with the operator in the loop on every exception. Walk the runbook with the operations lead. Four-week post-launch support window for drift watch and operator feedback.
What it looks like in code
The actual shape of the code your team owns at engagement end. Real ADK, real tools, real instruction copy.
agents/order_entry/agent.py
python
from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool

from .tools import (
    parse_inbound_order,
    verify_sku_against_catalog,
    verify_contract_price,
    write_order_to_erp,
)

order_entry = LlmAgent(
    name="order_entry_verifier",
    model="gemini-2.0-pro",
    instruction=(
        "You read inbound orders across email, PDF, and EDI, verify "
        "every line against the SKU catalog and the contract-price feed, "
        "and write the order to the ERP when every line clears. Flag any "
        "exception to a named operator with the discrepancy framed; "
        "never write an order to the ERP with an unverified line."
    ),
    tools=[
        FunctionTool(parse_inbound_order),
        FunctionTool(verify_sku_against_catalog),
        FunctionTool(verify_contract_price),
        FunctionTool(write_order_to_erp),
    ],
)

What you walk away with
Every blueprint hands the engineering team a deployed agent and the artefacts to run it themselves. No black box, no lock-in.
Two weeks. Named scope. Working agent on Agent Runtime at the end.
Code
Lives in your Git org, owned from commit one.
Governance
Model Armor and Agent Registry on day one.
Speed
Two weeks to a runnable pilot. Eight to production.
Not ready to talk? Take the 4-min readiness assessment