The only thing missing now is your Lead Detective. This node will synthesize findings from the Appraiser and Evidence Analyst to identify the culprit and calculate the total value of the stolen items.
The Lead Detective receives results from both previous agents via shared state. Its system prompt (defined in agentConfigs.ts) injects the appraisal results and evidence analysis directly, giving the LLM all the information it needs to reason about the case.
👉 Open /project/JavaScript/starter-project/src/investigationWorkflow.ts
👉 Add the Lead Detective node method inside your InvestigationWorkflow class, after evidenceAnalystNode:
```typescript
private async leadDetectiveNode(state: AgentState): Promise<Partial<AgentState>> {
  console.log('\n🔍 Lead Detective analyzing all findings...')

  const userMessage = 'Analyze all the evidence and identify the culprit. Provide a detailed conclusion.'

  try {
    const response = await this.orchestrationClient.chatCompletion({
      messages: [
        {
          role: 'system',
          content: AGENT_CONFIGS.leadDetective.systemPrompt(
            state.appraisal_result || 'No appraisal result available',
            state.evidence_analysis || 'No evidence analysis available',
            state.suspect_names,
          ),
        },
        { role: 'user', content: userMessage },
      ],
    })

    const conclusion = response.getContent() || 'No conclusion could be drawn.'
    console.log('✅ Investigation complete')

    return {
      final_conclusion: conclusion,
      messages: [...state.messages, { role: 'assistant', content: conclusion }],
    }
  } catch (error) {
    const errorMsg = `Error during final analysis: ${error}`
    console.error('❌', errorMsg)
    return {
      final_conclusion: errorMsg,
      messages: [...state.messages, { role: 'assistant', content: errorMsg }],
    }
  }
}
```

💡 Understanding how the Lead Detective gets context:
The `AGENT_CONFIGS.leadDetective.systemPrompt()` function takes three arguments from `state`:

- `state.appraisal_result` — the RPT-1 predictions from the Appraiser node
- `state.evidence_analysis` — the grounded evidence findings from the Evidence Analyst node
- `state.suspect_names` — the original list of suspects

These are injected into the system prompt using template literals. The LLM receives the entire context in one message, making it a synthesis task: it reasons over structured inputs rather than searching for new information.
This is the TypeScript equivalent of CrewAI's `context=[self.appraise_loss_task(), self.analyze_evidence_task()]`, but instead of framework magic, you're explicitly passing data through state.
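For orientation, the config object might look something like this sketch. The structure — a `systemPrompt` function returning a template literal — matches the workshop setup, but the prompt wording below is illustrative, not the actual text in `agentConfigs.ts`:

```typescript
// Sketch only — the real agentConfigs.ts contains more detailed prompt text.
// What matters is the shape: systemPrompt is a function, not a static string.
const AGENT_CONFIGS = {
  leadDetective: {
    systemPrompt: (appraisal: string, evidence: string, suspects: string): string => `
You are the Lead Detective on an art theft case.

Insurance appraisal results:
${appraisal}

Evidence analysis:
${evidence}

Suspects: ${suspects}

Identify the culprit and calculate the total value of the stolen items.`.trim(),
  },
}
```

Because the prompt is a plain function, your IDE can jump to every call site and type-check the arguments — something YAML-based configs can't offer.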
👉 Update the buildGraph method to include the Lead Detective:
```typescript
private buildGraph(): StateGraph<AgentState> {
  const workflow = new StateGraph<AgentState>({
    channels: {
      payload: null,
      suspect_names: null,
      appraisal_result: null,
      evidence_analysis: null,
      final_conclusion: null,
      messages: null,
    },
  })

  workflow
    .addNode('appraiser', this.appraiserNode.bind(this))
    .addNode('evidence_analyst', this.evidenceAnalystNode.bind(this))
    .addNode('lead_detective', this.leadDetectiveNode.bind(this))
    .addEdge(START, 'appraiser')
    .addEdge('appraiser', 'evidence_analyst')
    .addEdge('evidence_analyst', 'lead_detective')
    .addEdge('lead_detective', END)

  return workflow
}
```

💡 The execution order is defined entirely by the edges:
- `START → appraiser` — the workflow begins with the Appraiser
- `appraiser → evidence_analyst` — after RPT-1 completes, the Evidence Analyst runs
- `evidence_analyst → lead_detective` — after grounding completes, the Lead Detective synthesizes
- `lead_detective → END` — the Lead Detective's conclusion becomes the final result
👉 Check your /project/JavaScript/starter-project/src/main.ts: it needs no changes from Exercise 04.
```typescript
import 'dotenv/config'
import { InvestigationWorkflow } from './investigationWorkflow.js'
import { payload } from './payload.js'

async function main() {
  console.log('═══════════════════════════════════════════════════════════')
  console.log(' 🔍 ART THEFT INVESTIGATION - MULTI-AGENT SYSTEM')
  console.log('═══════════════════════════════════════════════════════════\n')

  const workflow = new InvestigationWorkflow(process.env.MODEL_NAME!)
  const suspectNames = 'Sophie Dubois, Marcus Chen, Viktor Petrov'

  console.log('📋 Case Details:')
  console.log(` • Stolen Items: ${payload.rows.length} artworks`)
  console.log(` • Suspects: ${suspectNames}`)
  console.log(` • Investigation Team: 3 specialized agents\n`)

  const startTime = Date.now()
  const result = await workflow.kickoff({
    payload,
    suspect_names: suspectNames,
  })
  const duration = ((Date.now() - startTime) / 1000).toFixed(2)

  console.log('\n═══════════════════════════════════════════════════════════')
  console.log(' 📘 FINAL INVESTIGATION REPORT')
  console.log('═══════════════════════════════════════════════════════════\n')
  console.log(result)
  console.log('\n═══════════════════════════════════════════════════════════')
  console.log(` ⏱️ Investigation completed in ${duration} seconds`)
  console.log('═══════════════════════════════════════════════════════════\n')
}

main()
```

👉 Run your complete investigation workflow:
```shell
npx tsx src/main.ts
```

⏱️ This may take 2–5 minutes as your agents:
- Predict insurance values for stolen items using SAP-RPT-1
- Search evidence documents for each suspect using the Grounding Service
- Analyze all findings and identify the culprit
👉 Review the final output. Who does your Lead Detective identify as the thief?
👉 Call for the instructor and share your suspect.
If the Lead Detective identifies the wrong suspect, refine the system prompts in agentConfigs.ts.
Which prompts to adjust:
- Lead Detective's system prompt (`agentConfigs.ts → leadDetective.systemPrompt`)
  - Make it more specific about what evidence to prioritize
  - Example: Add "Focus on alibis, financial motives, and access to the museum on the night of the theft"
- Evidence Analyst's grounding query (`investigationWorkflow.ts → evidenceAnalystNode`)
  - Make the search query more specific
  - Example: `"Find evidence about ${suspect}'s alibi, financial records, and museum access on the night of the theft"`
Tips for improving prompts:
- ✅ Be specific about what to analyze (alibi, motive, opportunity)
- ✅ Ask the detective to cite specific documents
- ✅ Request cross-referencing of evidence
- ✅ Instruct the detective to explain reasoning step-by-step
- ❌ Avoid vague instructions like "solve the crime" without guidance
- ❌ Don't assume the LLM knows which evidence is most important
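Putting those tips together, a refinement pass might look like this. The wording is hypothetical — it only illustrates the direction of travel from vague to specific:

```typescript
// ❌ Vague: leaves the LLM to guess which evidence matters
const vagueInstruction = 'Solve the crime.'

// ✅ Specific: names the criteria, demands citations and step-by-step reasoning
const refinedInstruction = [
  'For each suspect, evaluate alibi, financial motive, and opportunity separately.',
  'Cite the specific evidence document supporting each claim.',
  'Cross-reference the appraisal values against any financial motives.',
  'Explain your reasoning step-by-step before naming the culprit.',
].join('\n')
```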
You completed a full multi-agent investigation system where:
- Appraiser Node — Calls SAP-RPT-1 to predict missing insurance values from structured data
- Evidence Analyst Node — Searches 8 evidence documents via the Grounding Service for each suspect
- Lead Detective Node — Synthesizes all findings using an LLM to identify the culprit and calculate total losses
- State — Flows through all nodes, accumulating results that later nodes build upon
```mermaid
flowchart TD
    A[START] --> B["Appraiser Node\nRPT-1 predictions → appraisal_result"]
    B --> C["Evidence Analyst Node\nGrounding searches × 3 suspects → evidence_analysis"]
    C --> D["Lead Detective Node\nLLM synthesis → final_conclusion"]
    D --> E[END]
```
The AGENT_CONFIGS object in agentConfigs.ts serves the same purpose as CrewAI's YAML files: it separates agent "personality" from orchestration logic. But as TypeScript objects:
- System prompts are functions that accept runtime data and return a string
- No YAML parsing, no indentation errors, no key synchronization issues
- Your IDE can trace exactly where a system prompt is used and refactor it
Multi-agent systems are powerful because they:
- Distribute Responsibilities across specialized agents with distinct roles
- Enable Collaboration through task delegation and information sharing
- Improve Reasoning by combining multiple expert perspectives
- Handle Complexity by breaking down large problems into manageable subtasks
- Scale Efficiently as new agents and tools can be added without disrupting existing ones
Benefits of multi-agent LangGraph systems:
- Specialization — Each node has exactly the tools and context it needs
- Different models per node — You could use GPT-4o for the detective and a cheaper model for search
- Explicit data flow — State fields make it clear what each node produces and consumes
- Debuggability — Every state transition is observable; add `console.log` to any node
- Extensibility — Adding a new agent is `.addNode()` + `.addEdge()` + a new node function
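To make the extensibility point concrete, here is a sketch of what a hypothetical fourth agent — say, a forensics specialist — would involve. The node function and field names are invented for illustration:

```typescript
// Hypothetical: the state gains one new optional channel for the new node's output
interface AgentState {
  evidence_analysis?: string
  forensics_report?: string
}

// A new agent is just another async function over state
async function forensicsNode(state: AgentState): Promise<Partial<AgentState>> {
  const input = state.evidence_analysis || 'No evidence analysis available'
  // A real node would call an LLM or tool here; this sketch only shows the contract
  return { forensics_report: `Forensics review of: ${input}` }
}

// Wiring (sketch — would go in buildGraph, rerouting the existing edge):
// workflow
//   .addNode('forensics', this.forensicsNode.bind(this))
//   .addEdge('evidence_analyst', 'forensics')
//   .addEdge('forensics', 'lead_detective')
// ...plus declaring forensics_report: null in channels.
```

No existing node needs to change: the new agent reads the state fields it cares about and writes one new field of its own.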
Real-world applications:
- Customer service: Routing agent → Specialist agents → Escalation agent
- Research: Data collection agent → Analysis agent → Report generation agent
- DevOps: Monitoring agent → Diagnosis agent → Remediation agent
- Multi-node LangGraph workflows decompose complex problems into specialized, sequential steps
- Shared state is how nodes communicate: earlier results flow to later nodes via state fields
- System prompts with runtime data (`AGENT_CONFIGS.leadDetective.systemPrompt(...)`) enable context-aware synthesis
- Edges define execution order: the Lead Detective waits for both predecessors to complete
- `state.appraisal_result || 'No appraisal result available'` — always provide fallbacks when reading optional state fields
- Prompt engineering is iterative: run, observe, refine until the detective identifies the right suspect
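The fallback pattern in that list deserves emphasis: optional state fields are `undefined` until their producing node has run, and without a fallback the literal text "undefined" would leak into the prompt. A minimal sketch:

```typescript
interface AgentState {
  appraisal_result?: string // only populated after the Appraiser node runs
}

function readAppraisal(state: AgentState): string {
  // || substitutes a safe default for undefined (or an empty string),
  // so downstream prompts never contain the text "undefined"
  return state.appraisal_result || 'No appraisal result available'
}
```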
You've successfully built a sophisticated multi-agent AI investigation system in TypeScript that can:
- Predict financial values using the SAP-RPT-1 structured data model
- Search evidence documents using the SAP Grounding Service (RAG)
- Synthesize findings across multiple agents using LangGraph state
- Solve complex problems through collaborative, code-based agent orchestration
- ✅ Understand Generative AI Hub
- ✅ Set up your development space
- ✅ Build a basic agent
- ✅ Add custom tools (RPT-1 model integration)
- ✅ Build a multi-agent workflow
- ✅ Integrate the Grounding Service
- ✅ Solve the museum art theft mystery (this exercise)
Issue: Lead Detective's conclusion doesn't include appraisal values
- Solution: Ensure the `leadDetective.systemPrompt` in `agentConfigs.ts` explicitly asks the LLM to "Summarise the insurance appraisal values" and "Calculate the total estimated insurance value". The LLM only includes what you ask for.
Issue: `state.appraisal_result` is undefined in the Lead Detective node
- Solution: Check that the Appraiser node returns the `appraisal_result` field in its return object. Add `console.log(state.appraisal_result)` at the start of `leadDetectiveNode` to debug.
Issue: `Cannot read properties of undefined (reading 'split')` in Evidence Analyst
- Solution: `state.suspect_names` is undefined. Ensure your `kickoff()` call passes `suspect_names` in the input and your `AgentState` interface includes it as a required field.
Issue: Investigation runs but final_conclusion is empty
- Solution: Verify the `kickoff()` method returns `result.final_conclusion`. The `leadDetectiveNode` must return `{ final_conclusion: conclusion }`, and the `final_conclusion` channel must be declared in `channels`.
Issue: Agent identifies the wrong suspect after multiple runs
- Solution: LLMs are non-deterministic by default. Lower the `temperature` in `model_params` (try `0.3`) for more consistent reasoning. Also refine the Lead Detective's system prompt to be more specific about how to weigh evidence.
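Where that temperature setting lives depends on your client configuration; a sketch of the likely shape (the exact option names vary by SDK version — check your orchestration client's documentation before copying):

```typescript
// Assumed shape — verify field names against your SDK before copying
const llmConfig = {
  model_name: 'gpt-4o', // illustrative model name
  model_params: {
    temperature: 0.3, // lower values → more consistent, less creative reasoning
  },
}
```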
Issue: Error during final analysis: Error: 429 Too Many Requests
- Solution: You've hit the API rate limit. Wait a moment and retry. If this happens consistently, consider reducing `max_chunk_count` in the grounding config to reduce token usage.