BigQuery Agent Analytics — Roadmap

A tiered roadmap built from a survey of the 19 open issues on this repo, the
22 sections of the current SDK.md user manual, the 38 Python modules in src/bigquery_agent_analytics/, and the ADK plugin
(google/adk/plugins/bigquery_agent_analytics_plugin.py, ~3,500 LOC, 14
lifecycle callback hooks, BigQuery Storage Write API path with GCS offload).
Filed for discussion. Estimates are calibrated to a single experienced
SDK / plugin engineer; multiply for parallel streams. Impact and effort
are best-effort; the implementer of each item will refine them.
Ground rules for this doc. Impact = H/M/L on a "downstream user
adoption + DevX uplift" axis, not revenue. Effort = engineer-weeks
(full-time equivalent), confidence band in parens. Items marked strategic decision pending need a maintainer call before
sequencing.
TL;DR

Three workstreams are already mature (trace reconstruction, deterministic …) — keep them stable.

The loop workstream — quality scorecard (Proposal: Automated Agent Quality Scorecard #63), automated benchmarking (Data Science Features for BQ Agent Analytics #95), agent improvement cycle (already shipped as a demo), ReasoningBank (BigQuery Agent Analytics SDK — ReasoningBank #49) — is the user-facing differentiator over the next quarter.

One workstream is high-leverage but quietly under-invested: plugin telemetry. The cache-hit-rate metric (Improve token efficiency in ADK via BigQuery Agent Analytics #32) is a one-week win that unlocks the cost-optimization narrative and ties directly into the existing CodeEvaluator surface.

Ontology work is spread across many issues (V5 Context Graph: TTL import, mixed extraction, temporal lineage — implementation design #12; feat: implement inheritance (extends) compilation support #30; Migrate SDK ontology pipeline to bigquery_ontology package #38; Feat: SKOS import support alongside OWL (design proposal — feedback wanted) #57; Feat: Runtime entity resolution primitives — OntologyRuntime, concept index, EntityResolver protocol (design proposal — feedback wanted) #58; Epic: Compile-time code generation for structured trace extractors (scoped rework) #75; Feat: Ontology-aware validate_extracted_graph with fallback-scope classification (prerequisite for #75) #76; Epic: Address Remaining Ontology Platform Gaps (Live Agent Resolution, Advanced Resolvers, SHACL, Spanner/MAKO) #93). Multiple design proposals are gating implementation. Consolidate to one epic per quarter.

One workstream needs a strategic decision before any code: the ontology pipeline migration to upstream bigquery_ontology (Migrate SDK ontology pipeline to bigquery_ontology package #38) — a runtime contract migration, not a module swap; needs a go/no-go.
Method
How I built this:
1. Read each open issue's body + first 2-3 comments; categorized by workstream.
2. Surveyed src/bigquery_agent_analytics/ (38 modules) and SDK.md (22 sections, ~1700 lines) to understand what already ships.
3. Read the ADK plugin to understand the upstream telemetry contract the SDK consumes.
4. Tiered each item by impact × effort, then sequenced for parallel workstreams.
5. Flagged items that need a strategic call before scoping (marked strategic decision pending).
P0 — ship in ≤2 weeks (low-risk, asked-for)

Quote-escape fix for the feedback="..." snippet in evaluate --exit-code FAIL output (" and \)
Surfaced from the blog #3 live capture.

Context-cache hit-rate metric: CodeEvaluator.context_cache_hit_rate() (Improve token efficiency in ADK via BigQuery Agent Analytics #32)
Plugin: extract cached_content_token_count from Gemini usage_metadata. SDK: add a cache_hit_rate evaluator + threshold. The schema change requires a backward-compat plan (new column, default NULL). Strong tie-in to post #2's cost narrative.

P0 total: ~2.25 eng-weeks.
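The metric itself reduces to a ratio of cached to total prompt tokens. A minimal sketch, assuming the plugin persists per-event cached_content_token_count and prompt_token_count; the function name and event shape are illustrative, not the shipped evaluator:

```python
# Hypothetical cache_hit_rate evaluator: the fraction of prompt tokens
# served from the Gemini context cache across a session's events.
def cache_hit_rate(events: list[dict]) -> float:
    # Treat a missing or NULL column as zero cached tokens (backward compat).
    cached = sum(e.get("cached_content_token_count") or 0 for e in events)
    prompt = sum(e.get("prompt_token_count") or 0 for e in events)
    return cached / prompt if prompt else 0.0
```

A threshold check on this value is then a one-liner in the evaluator config.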
P1 — ship in 1 month (high leverage)

Quality Scorecard Phase 1: rubric library (evaluation_rubrics.py) over the existing CategoricalEvaluator (Proposal: Automated Agent Quality Scorecard #63)
Already in flight — Gayathri uploaded evaluation_rubrics.py for review. Locks in three pillars (response_usefulness, task_grounding, policy_compliance) using the existing categorical vocabulary. Additive; reuses existing dashboard views.

Quality Scorecard Phase 2: persist root_agent_name + region on categorical_results + new categorical_fleet_leaderboard view
Needs an ALTER TABLE migration plan. Bridges the eval-results → fleet-ranking gap the SDK doesn't cover today.

Client.triage_low_score_sessions(...) + hitl_triage_queue table
The genuinely net-new piece. Decide upfront: idempotent MERGE vs. append-only? A resolved_at lifecycle column? Worth a small design note before implementation.
Inheritance (extends) compilation in the ontology DDL compiler (feat: implement inheritance (extends) compilation support #30)
Three candidate strategies are named in the issue (fan-out / union view / label-referenced edges). Pick one, implement it, and ship it behind a gm compile --emit-extends-as=… flag.

validate_extracted_graph(spec, graph) (Feat: Ontology-aware validate_extracted_graph with fallback-scope classification (prerequisite for #75) #76)
evaluate --suggest-thresholds baseline helper (deferred from blog #2 polish)
Issue: none yet — file one. Impact: M. Effort: 1 wk (high confidence).
Reads the last N days of prod and prints suggested per-metric thresholds with a buffer. Halves the prose burden of blog #2's "how do I pick thresholds" sidebar.
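One plausible heuristic for the helper is a low quantile of recent production scores, relaxed by a buffer; this sketch assumes that rule and takes scores as a list, whereas the real helper would read them from the prod tables:

```python
# Hypothetical threshold suggestion: take a low quantile of recent
# production scores, then subtract a buffer so routine variance does
# not trip the evaluate --exit-code gate.
def suggest_threshold(scores: list[float], quantile: float = 0.1,
                      buffer: float = 0.05) -> float:
    if not scores:
        return 0.0
    ordered = sorted(scores)
    idx = int(quantile * (len(ordered) - 1))
    return max(0.0, ordered[idx] - buffer)
```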
P1 total: ~11 eng-weeks. With one engineer: about a calendar month if focused; with two engineers parallelizing the scorecard track and the ontology + helper tracks, ~3 weeks.
P2 — ship in 1 quarter (strategic, larger effort)
ReasoningBank: per-user/per-session memory of past distilled outcomes, loaded as initial agent context (BigQuery Agent Analytics SDK — ReasoningBank #49)
Storage layer (BQ table for memories), distillation pipeline (LLM-as-Judge + summarization), retrieval API (MemoryService.load_relevant_memories(...)), agent integration shape (callable from plugin or app). Needs a small design RFC first because memory shape affects every downstream consumer.
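As a concrete strawman for that RFC, the retrieval surface could look like the sketch below. Everything here except the MemoryService.load_relevant_memories(...) name is an assumption; the in-memory list stands in for the BQ memories table:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    user_id: str
    distilled_outcome: str  # LLM-distilled lesson from a past session
    score: float            # judge score, used for ranking

class MemoryService:
    """In-memory stand-in; the real service would query the BQ memories table."""

    def __init__(self, memories: list[Memory]):
        self._memories = memories

    def load_relevant_memories(self, user_id: str, limit: int = 5) -> list[Memory]:
        # Filter to the requesting user, return the best-scored memories first.
        mine = [m for m in self._memories if m.user_id == user_id]
        return sorted(mine, key=lambda m: m.score, reverse=True)[:limit]
```

Whether retrieval ranks by judge score, recency, or embedding similarity is exactly the kind of question the RFC should settle before the table schema freezes.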
Compile-time code generation for structured extractors (Phase 1) — only extract_bka_decision_event and the structured-event registry (Epic: Compile-time code generation for structured trace extractors (scoped rework) #75)
Gated on #76 landing. Phase 1 scope is deliberate: known structured event schemas only, no free-text. Server-side AI.GENERATE stays as semantic fallback until precision/recall is measured.
Ontology pipeline migration to the upstream bigquery_ontology package (Migrate SDK ontology pipeline to bigquery_ontology package #38) — strategic decision pending
Runtime contract migration, not a module swap. Maintainer needs to decide: full migration vs. keep SDK pipeline as a thin wrapper. Risk is high (consumed across 5+ modules); upside is dropping ~5K LOC of duplicate code.
SKOS import support alongside OWL (Feat: SKOS import support alongside OWL (design proposal — feedback wanted) #57)
Design proposal phase; needs the feedback round resolved. Follows #38 because, if the migration goes ahead, it should land in bigquery_ontology, not in this repo.
Runtime entity resolution primitives — OntologyRuntime, concept index lookups, EntityResolver protocol (Feat: Runtime entity resolution primitives — OntologyRuntime, concept index, EntityResolver protocol (design proposal — feedback wanted) #58)
Currently a design proposal. Quoted user feedback: ~85% of brief-validation value sits at runtime, not schema time. Big payoff for production agentic users; the design surface needs to land first.
Auto-benchmark from traces — extract high-signal success/failure pairs to seed eval suites
Builds on existing quality_report.py + agent_improvement_cycle. Generalize the cycle into a reusable extractor. Cross-links to Vertex AI Prompt Optimizer integration (post #4 in the blog series).
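The core selection step of such an extractor might look like the following sketch; the score field, cutoffs, and function name are illustrative assumptions, not the existing quality_report.py surface:

```python
# Hypothetical trace→benchmark seeding: pair clearly-good sessions with
# clearly-bad ones; the ambiguous middle band is excluded on purpose.
def seed_pairs(sessions: list[dict], lo: float = 0.3, hi: float = 0.9):
    failures = [s for s in sessions if s["score"] <= lo]
    successes = [s for s in sessions if s["score"] >= hi]
    return list(zip(successes, failures))  # truncated to the shorter side
```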
Streaming evaluation — Pub/Sub + continuous query path that scores sessions as events arrive
Partial scaffolding exists at _streaming_evaluation.py. Productization needs an architectural call: on-arrival vs. micro-batch, latency budgets, schema for "in-flight" partial sessions.
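To make the on-arrival vs. micro-batch call concrete, here is a minimal micro-batch buffer that flushes on size or age. The class name and defaults are illustrative, not what _streaming_evaluation.py already contains:

```python
import time

class MicroBatcher:
    """Buffer streamed events; emit a scoring batch on size or age."""

    def __init__(self, max_size: int = 100, max_age_s: float = 5.0):
        self.max_size, self.max_age_s = max_size, max_age_s
        self._buf: list = []
        self._t0 = time.monotonic()

    def add(self, event):
        # Returns a batch to score when a flush triggers, else None.
        self._buf.append(event)
        age = time.monotonic() - self._t0
        if len(self._buf) >= self.max_size or age >= self.max_age_s:
            batch, self._buf = self._buf, []
            self._t0 = time.monotonic()
            return batch
        return None
```

The latency budget then falls directly out of max_age_s, which is what makes the micro-batch option easy to reason about relative to pure on-arrival scoring.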
P2 total: ~23 eng-weeks. With two engineers parallel-streaming, ~12 weeks (one quarter).
P3 — research / future (defer until P0-P2 ships)
Auto-skills loop based on the AutoSkill paper (arXiv 2603.01145) — agents learn reusable skills from interaction history.

Embedding-based EntityResolver for live agents (extends #58)
Builds on the EntityResolver protocol: embedding store + cosine-similarity layer + LLM-disambiguation tier.

SHACL validation
Extends validate_extracted_graph from #76. Niche, but needed for governance-heavy verticals.

Spanner backend: alternative storage to BigQuery for ontology data
Significant effort — multiple concurrent backends multiply maintenance. Defer unless an ADK-team partner needs it. Follows the bigquery_ontology migration decision (#38).
Items to deprecate / explicitly close

_LEGACY_LLM_JUDGE_BATCH_QUERY (the ML.GENERATE_TEXT path inside the LLM-judge cascade).

--strict for API-fallback judge errors.

The gm compile --spec-path flag, replaced by --ontology PATH --binding PATH. Schedule removal for 0.4.x.

Strategic decisions pending (need maintainer call)

These three are gating P2 work. Recommend a single decision-doc PR that resolves all three:

bigquery_ontology migration (Migrate SDK ontology pipeline to bigquery_ontology package #38). Full migration vs. thin wrapper. The resolution affects sequencing for #57, #75, #76, and #93.

ReasoningBank shape (BigQuery Agent Analytics SDK — ReasoningBank #49). Storage table + distillation pipeline + retrieval API. The shape this lands in determines whether it can be reused by agent_improvement_cycle (the existing demo) or whether they're parallel systems.
Resource budget summary

| Engineer-weeks | What ships |
| --- | --- |
| 2 weeks (1 eng) | All of P0: quote-escape fix, cache hit-rate metric, blog series cleanup. |
| ~11 eng-weeks | All of P1 (see P1 total above). |
| ~23 eng-weeks | All of P2 (see P2 total above). |
Sequencing rationale (short version)

Quality Scorecard before everything else in P1 because it's already in flight (Gayathri's PR is coming) and it converts the existing categorical-evaluator surface from "I have to design my own metrics" to "here's a known-good rubric." Adoption boost.

Ontology consolidation in P2, not P1, because it's the most expensive single decision and we don't want to block the scorecard / extractor work behind it. The strategic-decision PR for Migrate SDK ontology pipeline to bigquery_ontology package #38 can run in parallel with P1 implementation.

ReasoningBank in P2, not P3, because it ties the agent_improvement_cycle demo to a real product surface. Without ReasoningBank, the demo is "look, agents can self-improve in this contained example"; with ReasoningBank, it's "your agent's memory, in a queryable BQ table."

Auto-skills, the V5 context graph, and Spanner backends in P3 because they're large bets that need a research arm or a partner team to justify the investment. Don't block production work on them.
What this roadmap deliberately doesn't do

No commitment to specific calendar dates. Sequencing is relative; absolute dates depend on engineer count + DevRel review cycles + GA gates.

No mention of internal-only Google Cloud product integrations (Vertex AI Agent Engine, etc.) beyond what the public surface already covers. Those would be a parallel internal roadmap.

No position on bigquery_ontology ↔ BigQuery-Agent-Analytics-SDK repo splits beyond the existing Migrate SDK ontology pipeline to bigquery_ontology package #38 decision. That's an organizational call.
How to use this issue
Maintainer: leave reactions on items whose priority you agree with; comment with re-rankings where you disagree; resolve the strategic decisions above so P2 can be sequenced.
Contributors: pick a P0 or P1 item that matches your interest, drop an "I can take this" comment, and the maintainer can hand the linked issue over.
Review cadence: this roadmap should be re-checked every ~6 weeks. The shape of evaluation work (scorecard, auto-benchmarking) is moving fastest right now and may push items between tiers.
Generated 2026-04-28 from a survey of the 19 open issues, the SDK surface
at target/main, and the ADK plugin at google/adk/plugins/bigquery_agent_analytics_plugin.py.
Open to revision; this is a starting point, not a contract.