Google ADK v2 shipped its graph workflow API recently. I wanted to push it end-to-end on a real use case, in the spirit of LangGraph.
Upload a CV and a job description. An agent pipeline runs the full analysis and produces an output tailored to whichever side of the hiring loop is asking, recruiter or candidate.
The graph encodes a deliberate separation between deterministic control flow and LLM-driven generation. Routing and synchronization stay in pure Python; only generation crosses an LLM boundary, and every output is constrained by a Pydantic schema before it leaves the node.
root_agent = Workflow(
    name="career_copilot",
    edges=[
        (
            "START",
            (cv_parser_agent, jd_parser_agent),  # parallel
            parse_join,
            mode_router,
            {
                "RECRUITER": fit_analyzer_agent,
                "CANDIDATE": (research_agent, cv_optimizer_agent),
            },
        ),
        (
            fit_analyzer_agent,
            verdict_router,
            {"OUTREACH": outreach_writer_agent, "GAP": gap_explainer_agent},
        ),
        (
            (research_agent, cv_optimizer_agent),
            candidate_join,
            interview_prep_agent,
        ),
    ],
)

Eight LLM agents, two FunctionNode routers, two JoinNodes. Every agent runs OpenAI gpt-5.4-mini via LiteLlm with low reasoning effort by default; the Outreach Writer escalates to medium reasoning for sharper, less generic copy. Each agent's output is a Pydantic schema enforced by ADK's output_schema, except the Research Agent, which uses Tavily tools and validates its JSON output at the API boundary.
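The two routers are where the deterministic control flow lives: plain Python over the workflow state, no LLM call. A minimal sketch, assuming the state behaves like a dict and that the Fit Analyzer stores its FitVerdict under a "fit_verdict" key (the exact FunctionNode signature depends on the ADK version):

```python
def mode_router(state: dict) -> str:
    # Deterministic branch on the request's mode -- no LLM involved.
    return "RECRUITER" if state["mode"] == "recruiter" else "CANDIDATE"


def verdict_router(state: dict) -> str:
    # fit / borderline -> outreach draft; no_fit -> gap report.
    return "GAP" if state["fit_verdict"].verdict == "no_fit" else "OUTREACH"
```

Because both routers are pure functions of state, they are trivially unit-testable without mocking a single model call.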
CV Parser → ParsedCV
Extracts skills, years of experience, achievements and languages into a typed ParsedCV.
JD Parser → ParsedJD
Extracts title, company, required and preferred skills, seniority and agency hints into a ParsedJD.
Fit Analyzer → FitVerdict
Compares the ParsedCV against the ParsedJD and emits a FitVerdict with calibrated confidence, matched strengths and gaps.
Outreach Writer → OutreachDraft
Writes a LinkedIn outreach draft that cites one specific CV achievement verbatim. Medium reasoning effort.
Gap Explainer → GapReport
On no_fit, explains the gaps in plain language and suggests adjacent roles worth pursuing.
Research Agent → CompanyIntelligence
Calls Tavily search and extract via MCP to gather company intelligence: funding, culture, Glassdoor signals.
CV Optimizer → CVOptimizationBundle
Suggests targeted CV edits to better match the JD, without lying about the candidate's experience.
Interview Prep → InterviewPrepBundle
Builds an interview prep bundle: probable questions, talking points and smart reverse questions.
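Each output_schema is an ordinary Pydantic model, which is what makes the "validated before it leaves the node" guarantee enforceable. A minimal sketch of two of them; the field names are illustrative, not the actual schemas:

```python
from typing import Literal

from pydantic import BaseModel, Field


class ParsedCV(BaseModel):
    """What the CV Parser is required to emit."""
    skills: list[str]
    years_experience: float
    achievements: list[str]
    languages: list[str]


class FitVerdict(BaseModel):
    """What the Fit Analyzer is required to emit."""
    verdict: Literal["fit", "borderline", "no_fit"]
    confidence: float = Field(ge=0.0, le=1.0)  # calibrated, clamped to [0, 1]
    matched_strengths: list[str]
    gaps: list[str]
```

If a model hallucinates a verdict outside the Literal set or a confidence above 1.0, validation fails at the node boundary instead of propagating garbage downstream.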
The agentic graph above only ships if it is exposed through a stateless API and reachable from a UI. Two clean layers do that without leaking complexity into the graph.
# Discriminated union: clients get one of three typed response
# shapes, picked by the requested mode.
AnalyzeResponse = Union[
    RecruiterFitResponse,    # recruiter mode, fit / borderline
    RecruiterNoFitResponse,  # recruiter mode, no_fit
    CandidateResponse,       # candidate mode
]
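The three shapes can be told apart by a literal discriminator field. A sketch with assumed field names; the real models carry the full agent outputs rather than bare strings:

```python
from typing import Literal

from pydantic import BaseModel


class RecruiterFitResponse(BaseModel):
    mode: Literal["recruiter"] = "recruiter"
    verdict: Literal["fit", "borderline"]
    outreach_draft: str


class RecruiterNoFitResponse(BaseModel):
    mode: Literal["recruiter"] = "recruiter"
    verdict: Literal["no_fit"] = "no_fit"
    gap_report: str


class CandidateResponse(BaseModel):
    mode: Literal["candidate"] = "candidate"
    interview_prep: list[str]
```

FastAPI then serializes whichever model the builder returns against response_model=AnalyzeResponse, so a client can switch on mode (and verdict) with full type information.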
@router.post("/v1/analyze", response_model=AnalyzeResponse)
async def analyze(request: AnalyzeRequest) -> AnalyzeResponse:
    """Run the agent graph against a CV + JD pair, return the typed result."""
    # ADK Workflow graph runs end-to-end. Parallel branches actually
    # execute in parallel thanks to the async runner.
    state = await run_agent(root_agent, initial_state)
    # Pydantic schemas at every node boundary guarantee state is typed
    # where it matters; we just pick the right response builder.
    return (
        _build_recruiter_response(state)
        if request.mode == "recruiter"
        else _build_candidate_response(state)
    )

Every request, traced end-to-end with Langfuse.
An LLM system you cannot trace is a system you cannot fix. Every /v1/analyze call opens a parent agent observation in Langfuse. Sub-agent and tool calls nest under it as child spans, with inputs, outputs, latency and token counts captured. Mode, model, version and input sizes are propagated as trace attributes for fast filtering in the Langfuse UI.
@router.post("/v1/analyze", response_model=AnalyzeResponse)
async def analyze(request: AnalyzeRequest) -> AnalyzeResponse:
    langfuse = get_client()
    with langfuse.start_as_current_observation(
        name=f"analyze.{request.mode}",
        as_type="agent",
        input={"mode": request.mode, "cv_text": ..., "jd_text": ...},
    ) as span, propagate_attributes(
        trace_name=f"career-copilot.analyze.{request.mode}",
        tags=["analyze", request.mode],
        metadata={
            "mode": request.mode,
            "model": PRIMARY_MODEL,
            "version": VERSION,
            "cv_chars": str(len(request.cv_text)),
            "jd_chars": str(len(request.jd_text)),
        },
    ):
        state = await run_agent(root_agent, initial_state)
        response = _build_response(state)
        span.update(output=response.model_dump())
        return response

The /v1/analyze handler opens a parent observation typed as "agent". ADK runs each sub-agent and tool call as a nested span under it, and every input, output, latency and token count is captured automatically.
propagate_attributes injects mode, model, version, cv_chars and jd_chars onto the active trace. Filter by mode in two clicks, compare confidence distributions across model versions, spot regressions early.
Agent errors surface as 502 with the full traceback in logs. Workflow state inconsistencies (missing FitVerdict, missing OutreachDraft) surface as 500 with actionable detail. Nothing is silently swallowed.
Paste a CV, paste a job description, pick a mode. Get a verdict and a draft in seconds.