Now that you know how to build a single agent with LangGraph and the SAP Cloud SDK for AI, let's structure the code for a multi-agent system. LangGraph recommends keeping agent configuration close to code.
"LangGraph gives you full programmatic control over your agent graph. Configuration lives in code, making it type-safe, refactorable, and IDE-friendly." — LangGraph philosophy
Unlike CrewAI's YAML-first approach, in LangGraph you define agent behaviour directly in TypeScript. This means:
- No separate config files to keep in sync with your code
- Full TypeScript type checking on your agent definitions
- IDE autocomplete and refactoring across agent configuration
👉 Create the following new files in the src folder:
- `agentConfigs.ts` — agent system prompts
- `investigationWorkflow.ts` — the LangGraph workflow class
- `main.ts` — the entry point
Agent configurations in LangGraph are TypeScript objects. They can contain static strings or functions that generate system prompts dynamically based on runtime data.
👉 Open /project/JavaScript/starter-project/src/agentConfigs.ts and add:
```typescript
export const AGENT_CONFIGS = {
  evidenceAnalyst: {
    systemPrompt: (suspectNames: string) => `You are an Evidence Analyst.
You are a meticulous forensic analyst who specializes in connecting dots between various pieces of evidence.
You have access to document repositories and excel at extracting relevant information from complex data sources.
Your goal: Analyze all available evidence and documents to identify patterns and connections between suspects and the crime.
You have access to the call_grounding_service tool to search through evidence documents.
Analyze the suspects: ${suspectNames}
Search for evidence related to each suspect and identify connections to the crime.`,
  },
  leadDetective: {
    systemPrompt: (
      appraisalResult: string,
      evidenceAnalysis: string,
      suspectNames: string,
    ) => `You are the lead detective on this high-profile art theft case. With years of
experience solving complex crimes, you excel at synthesizing information from
multiple sources and identifying the culprit based on evidence and expert analysis.
Your goal: Synthesize all findings from the team to identify the most likely suspect and build a comprehensive case.
You have received the following information from your team:
1. INSURANCE APPRAISAL: ${appraisalResult}
2. EVIDENCE ANALYSIS: ${evidenceAnalysis}
3. SUSPECTS: ${suspectNames}
Based on all the evidence and analysis, determine:
- Who is the most likely culprit?
- What evidence supports this conclusion?
- What was their motive and opportunity?
- Summarise the insurance appraisal values of the stolen artworks.
- Calculate the total estimated insurance value of the stolen items based on the appraisal results.
- Provide a comprehensive summary of the case.
Be thorough and analytical in your conclusion.`,
  },
};
```

💡 Understanding the configuration:
- System prompts are functions (not static strings) so they can incorporate runtime data such as suspect names and prior agent results. TypeScript's template literals (`...${variable}...`) keep this clean and readable.
- This is the TypeScript equivalent of CrewAI's `agents.yaml` and `tasks.yaml` — but type-safe and co-located with the code that uses it.
- The `leadDetective.systemPrompt` function takes three arguments: the appraisal result, the evidence analysis, and the suspect names. This is how inter-agent communication works in LangGraph: one node's output becomes another node's input via shared state. These system prompts are read later by the agent implementations; keeping them in a separate file gives the project a clearer structure and follows the separation-of-concerns principle.
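To make the data flow concrete, here is a self-contained sketch (with made-up argument values and a trimmed-down prompt) of how a prompt-factory function bakes earlier agents' results into the next agent's prompt:

```typescript
// Hypothetical, simplified stand-in for the leadDetective prompt factory.
const leadDetectivePrompt = (
  appraisalResult: string,
  evidenceAnalysis: string,
  suspectNames: string,
) => `You are the lead detective.
1. INSURANCE APPRAISAL: ${appraisalResult}
2. EVIDENCE ANALYSIS: ${evidenceAnalysis}
3. SUSPECTS: ${suspectNames}`

// Outputs of earlier nodes become inputs of the next prompt (values made up):
const prompt = leadDetectivePrompt(
  'Total value: 2.4M EUR',
  'Fingerprints match suspect 2',
  'Sophie Dubois, Marcus Chen, Viktor Petrov',
)
console.log(prompt)
```

Because the factory is just a function, TypeScript checks the number and types of arguments at every call site, which a YAML file cannot do.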
The InvestigationWorkflow class encapsulates the entire LangGraph workflow: the graph definition, all agent nodes, and the execution logic.
👉 Update your types.ts file to include all state fields needed for the multi-agent workflow:
```typescript
export interface AgentState {
  payload: RPT1Payload;
  suspect_names: string;
  appraisal_result?: string;
  evidence_analysis?: string;
  final_conclusion?: string;
  messages: Array<{
    role: string;
    content: string;
  }>;
}
```

💡 The optional fields (`?`) start as `undefined`. Each agent node fills in its part, and LangGraph merges the partial updates into the full state. The `final_conclusion` field won't be set until the lead detective runs.
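Conceptually, the merge LangGraph performs after each node behaves like a shallow object merge. A minimal sketch in plain TypeScript (not the actual LangGraph internals):

```typescript
// Trimmed-down state type for illustration.
interface AgentState {
  suspect_names: string
  appraisal_result?: string
  evidence_analysis?: string
}

// A node returns only the fields it changed...
const update: Partial<AgentState> = { appraisal_result: 'Appraisal done' }

// ...and the framework merges that partial update into the full state.
const before: AgentState = { suspect_names: 'Sophie Dubois, Marcus Chen' }
const after: AgentState = { ...before, ...update }

console.log(after.appraisal_result) // 'Appraisal done'
console.log(after.evidence_analysis) // still undefined: that node has not run yet
```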
👉 Create /project/JavaScript/starter-project/src/investigationWorkflow.ts:
```typescript
import { StateGraph, END, START } from '@langchain/langgraph'
import { OrchestrationClient } from '@sap-ai-sdk/orchestration'
import type { AgentState, ModelParams } from './types.js'
import { callRPT1Tool } from './tools.js'
import { AGENT_CONFIGS } from './agentConfigs.js'

export class InvestigationWorkflow {
  private orchestrationClient: OrchestrationClient
  private graph: StateGraph<AgentState>

  constructor(model: string, model_params?: ModelParams) {
    this.orchestrationClient = new OrchestrationClient(
      {
        llm: { model_name: model, model_params: model_params ?? {} },
      },
      { resourceGroup: process.env.RESOURCE_GROUP },
    )
    this.graph = this.buildGraph()
  }
```

💡 The constructor:

- Takes `model` and optional `model_params`: this makes the workflow reusable with different LLMs
- Initializes `OrchestrationClient` once for the entire class; it's reused across all LLM-based nodes
- Calls `buildGraph()` immediately so the graph is ready when you call `kickoff()`
The appraiser node calls SAP-RPT-1 directly (no LLM involved). It takes the payload from state, runs the prediction, and stores the result.
👉 Add the appraiser node method to your class:
```typescript
  private async appraiserNode(state: AgentState): Promise<Partial<AgentState>> {
    console.log('\n🔍 Appraiser Agent starting...')
    try {
      const result = await callRPT1Tool(state.payload)
      const appraisalResult = `Insurance Appraisal Complete: ${result}
Summary: Successfully predicted missing insurance values and item categories for the stolen artworks.`
      console.log('✅ Appraisal complete')
      return {
        appraisal_result: appraisalResult,
        messages: [...state.messages, { role: 'assistant', content: appraisalResult }],
      }
    } catch (error) {
      const errorMsg = `Error during appraisal: ${error}`
      console.error('❌', errorMsg)
      return {
        appraisal_result: errorMsg,
        messages: [...state.messages, { role: 'assistant', content: errorMsg }],
      }
    }
  }
```

👉 Add the `buildGraph` method:
```typescript
  private buildGraph(): StateGraph<AgentState> {
    const workflow = new StateGraph<AgentState>({
      channels: {
        payload: null,
        suspect_names: null,
        appraisal_result: null,
        evidence_analysis: null,
        final_conclusion: null,
        messages: null,
      },
    })
    workflow
      .addNode('appraiser', this.appraiserNode.bind(this))
      .addEdge(START, 'appraiser')
      .addEdge('appraiser', END)
    return workflow
  }
```

💡 Understanding `.bind(this)`: When you pass a class method as a callback, JavaScript loses the `this` context; the function no longer knows it belongs to the class. `.bind(this)` creates a new function with `this` permanently set to the class instance. This is a standard JavaScript pattern when passing class methods as callbacks.
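A minimal, self-contained illustration of why the binding is needed (plain TypeScript, independent of LangGraph):

```typescript
class Greeter {
  name = 'appraiser'
  hello() {
    // `this` is whatever the function is called on, not necessarily the instance
    return `Hello from ${this.name}`
  }
}

const g = new Greeter()

const unbound = g.hello
// Calling `unbound()` throws: in strict mode `this` is undefined,
// so `this.name` fails with a TypeError.

const bound = g.hello.bind(g)
console.log(bound()) // 'Hello from appraiser'
```

LangGraph stores the node callback and invokes it later without any receiver, which is exactly the unbound case above.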
⚠️ Important — Chained API in LangGraph 0.2+: LangGraph 0.2 changed the API for building graphs. You must chain `.addNode()`, `.addNode()`, `.addEdge()` calls together rather than calling them separately. Separate calls cause TypeScript type errors because node names aren't known until all nodes are registered.

```typescript
// ✅ Correct (chained)
workflow
  .addNode("appraiser", this.appraiserNode.bind(this))
  .addEdge(START, "appraiser")
  .addEdge("appraiser", END);

// ❌ Incorrect (separate calls — TypeScript errors in LangGraph 0.2+)
workflow.addNode("appraiser", this.appraiserNode.bind(this));
workflow.addEdge(START, "appraiser");
```
👉 Add the kickoff method to run the workflow:
```typescript
  async kickoff(inputs: { payload: any; suspect_names: string }): Promise<string> {
    console.log('🚀 Starting Investigation Workflow...\n')
    const initialState: AgentState = {
      payload: inputs.payload,
      suspect_names: inputs.suspect_names,
      messages: [],
    }
    const app = this.graph.compile()
    const result = await app.invoke(initialState)
    return result.final_conclusion || 'Investigation completed but no conclusion was reached.'
  }
}
```

👉 Create /project/JavaScript/starter-project/src/main.ts:
```typescript
import "dotenv/config";
import { InvestigationWorkflow } from "./investigationWorkflow.js";
import { payload } from "./payload.js";

async function main() {
  const workflow = new InvestigationWorkflow(process.env.MODEL_NAME!);
  const suspectNames = "Sophie Dubois, Marcus Chen, Viktor Petrov";
  const result = await workflow.kickoff({
    payload,
    suspect_names: suspectNames,
  });
  console.log("\n📘 FINAL INVESTIGATION REPORT\n");
  console.log(result);
}

main();
```

👉 Run the workflow:

```shell
npx tsx src/main.ts
```

Now add the Evidence Analyst as a second agent. This agent will search evidence documents for each suspect (we'll connect it to real documents in the next exercise; for now it returns placeholder output).
👉 Add this method to your InvestigationWorkflow class:
```typescript
  private async evidenceAnalystNode(state: AgentState): Promise<Partial<AgentState>> {
    console.log('\n🔍 Evidence Analyst starting...')
    try {
      const suspects = state.suspect_names.split(',').map(s => s.trim())
      const evidenceResults: string[] = []
      for (const suspect of suspects) {
        console.log(`  Searching evidence for: ${suspect}`)
        // Placeholder — will be replaced with real grounding tool in Exercise 05
        evidenceResults.push(`Evidence for ${suspect}: No evidence documents connected yet.`)
      }
      const evidenceAnalysis = `Evidence Analysis Complete: ${evidenceResults.join('\n\n')}
Summary: Analyzed evidence for all suspects: ${state.suspect_names}`
      console.log('✅ Evidence analysis complete')
      return {
        evidence_analysis: evidenceAnalysis,
        messages: [...state.messages, { role: 'assistant', content: evidenceAnalysis }],
      }
    } catch (error) {
      const errorMsg = error instanceof Error ? error.message : String(error)
      console.error('❌ Evidence analysis failed:', errorMsg)
      return {
        evidence_analysis: `Error during evidence analysis: ${errorMsg}`,
        messages: [...state.messages, { role: 'assistant', content: `Error during evidence analysis: ${errorMsg}` }],
      }
    }
  }
```

💡 `for...of` with `await` is sequential. Unlike `Promise.all`, a `for...of` loop processes suspects one at a time. This is intentional: it makes logs readable and avoids overwhelming the external services. In Exercise 05, you'll call the real grounding service inside this loop.
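A small, self-contained sketch of the difference, using a made-up `fetchEvidence` function as a stand-in for the real grounding call:

```typescript
// Hypothetical async lookup standing in for the real grounding-service call.
async function fetchEvidence(suspect: string): Promise<string> {
  return `Evidence for ${suspect}`
}

const suspects = ['Sophie Dubois', 'Marcus Chen']

// Sequential: each await completes before the next call starts,
// so logs and side effects happen in a predictable order.
const sequential: string[] = []
for (const suspect of suspects) {
  sequential.push(await fetchEvidence(suspect))
}

// Parallel alternative: all calls start at once via Promise.all.
const parallel = await Promise.all(suspects.map(fetchEvidence))

console.log(sequential)
console.log(parallel)
```

Both produce the same results here; the difference shows up with real network calls, where the parallel version fires all requests at the same time.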
💡 `error instanceof Error ? error.message : String(error)` is a safe way to extract an error message. The `instanceof Error` check handles proper `Error` objects (which have `.message` and `.stack`). The `String(error)` fallback handles cases where someone throws a plain string or object.
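The same pattern can be wrapped in a small helper, sketched here as a stand-alone example (the `toErrorMessage` name is ours, not part of the exercise code):

```typescript
// Reusable version of the extraction pattern; accepts anything thrown.
function toErrorMessage(error: unknown): string {
  return error instanceof Error ? error.message : String(error)
}

try {
  throw new Error('grounding service unavailable')
} catch (e) {
  console.log(toErrorMessage(e)) // 'grounding service unavailable'
}

try {
  // Non-Error throws happen in real code, e.g. `throw 'oops'`
  throw 'oops'
} catch (e) {
  console.log(toErrorMessage(e)) // 'oops'
}
```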
👉 Update the buildGraph method:
```typescript
  private buildGraph(): StateGraph<AgentState> {
    const workflow = new StateGraph<AgentState>({
      channels: {
        payload: null,
        suspect_names: null,
        appraisal_result: null,
        evidence_analysis: null,
        final_conclusion: null,
        messages: null,
      },
    })
    workflow
      .addNode('appraiser', this.appraiserNode.bind(this))
      .addNode('evidence_analyst', this.evidenceAnalystNode.bind(this))
      .addEdge(START, 'appraiser')
      .addEdge('appraiser', 'evidence_analyst')
      .addEdge('evidence_analyst', END)
    return workflow
  }
```

👉 Run the workflow again:

```shell
npx tsx src/main.ts
```

💡 Note: The Evidence Analyst currently produces placeholder output because it doesn't have access to real evidence documents yet. You'll connect it to the Grounding Service in Exercise 05.
You've built a multi-agent LangGraph workflow with specialized roles working in sequence:
1. Code-Based Configuration
- Agent system prompts live in `agentConfigs.ts` — TypeScript objects instead of YAML files
- Configuration is type-safe, refactorable, and co-located with the code that uses it
2. Specialized Agent Nodes
- Appraiser Node — Calls the SAP-RPT-1 model directly to predict insurance values
- Evidence Analyst Node — Will search evidence documents for each suspect
3. Sequential Execution with Shared State
- LangGraph executes nodes in order following the edges you defined
- Each node reads from and writes to the shared `AgentState`
- The appraiser runs first, then the evidence analyst builds on that state
When you call workflow.kickoff(inputs):
- Initialization: `app.invoke(initialState)` starts the graph
- Appraiser runs: Calls RPT-1, stores the result in `appraisal_result`
- Evidence Analyst runs: Reads `suspect_names`, stores the result in `evidence_analysis`
- Graph ends: Returns the final state
| CrewAI (Python) | LangGraph (TypeScript) |
|---|---|
| `agents.yaml` + `tasks.yaml` | `agentConfigs.ts` (TypeScript objects) |
| `@CrewBase` class decorator | Plain TypeScript class (`InvestigationWorkflow`) |
| `@agent`, `@task`, `@crew` decorators | `buildGraph()` method with `.addNode()`/`.addEdge()` |
| `Process.sequential` | Edges define the execution order |
| `self.agents` auto-collected | Nodes registered explicitly with `.addNode()` |
| `crew.kickoff(inputs={})` | `app.invoke(initialState)` |
💡 LangGraph's approach gives you more explicit control. Every transition between agents is an edge you define. There's no magic collection of agents via decorators; the graph structure is transparent and debuggable.
Benefits of Multi-Agent Systems:
- Specialization — Each agent is an expert in one domain (valuation vs. investigation)
  - Different LLMs can be assigned per node (GPT-4o for the appraiser, Claude for the analyst)
  - Each agent has only the tools it needs (principle of least privilege)
- Scalability — Adding new agents is straightforward
  - Add a new node method, register it with `.addNode()`, and connect it with `.addEdge()`
  - No need to modify existing agents
- Collaboration — Agents can build upon each other's work
  - Sequential processing allows later nodes to use earlier results via shared state
  - The `context` pattern (used in Exercise 06) enables explicit data sharing
- Maintainability — Clear separation of concerns
  - Agent "personality" (goals, roles) lives in `agentConfigs.ts`
  - Tool integration lives in `tools.ts`
  - Orchestration logic lives in `investigationWorkflow.ts`
Real-World Applications:
- Customer service: Routing agent → Specialist agents → Escalation agent
- Research: Data collection agent → Analysis agent → Report generation agent
- DevOps: Monitoring agent → Diagnosis agent → Remediation agent
Notice each agent uses different tools:
- Appraiser Node uses `callRPT1Tool` — a structured prediction model
- Evidence Analyst Node uses placeholder logic now, but needs `callGroundingService` (Exercise 05)
This demonstrates tool specialization: agents only get the tools relevant to their role.
- LangGraph StateGraph connects agent nodes with typed edges — the execution order is explicit
- Code-based configuration in `agentConfigs.ts` replaces YAML files — type-safe and co-located
- `.bind(this)` is required when passing class methods as LangGraph node callbacks
- Chained API (`.addNode().addEdge()`) is required in LangGraph 0.2+
- `for...of` with `await` processes items sequentially — predictable, readable, no race conditions
- `Partial<AgentState>` return type means nodes only update the fields they changed
What's Next?
The Evidence Analyst can't access actual evidence yet. In Exercise 05, you'll integrate the Grounding Service to give it real document access.
- ✅ Understand Generative AI Hub
- ✅ Set up your development space
- ✅ Build a basic agent
- ✅ Add custom tools (RPT-1 model integration)
- ✅ Build a multi-agent workflow (this exercise)
- 📌 Add the Grounding Service: Give your Evidence Analyst access to real documents
- 📌 Solve the crime: Add a Lead Detective to combine findings and crack the case
Issue: TypeError: this is undefined inside node methods
- Solution: Ensure you're using `.bind(this)` when registering class methods as nodes: `.addNode('appraiser', this.appraiserNode.bind(this))`
Issue: TypeScript error on .addEdge(): node name not recognized
- Solution: Make sure you're chaining `.addNode().addNode().addEdge()` calls. In LangGraph 0.2+, calling `addEdge` before all nodes are registered causes type errors.
Issue: process.env.MODEL_NAME is undefined
- Solution: Ensure `import 'dotenv/config'` is at the top of `main.ts` (the first import). Without this, environment variables from `.env` aren't loaded.
Issue: Agent nodes run but state fields are undefined in later nodes
- Solution: Check that each node returns field names matching your `AgentState` interface. Typos in field names will silently result in `undefined` in downstream nodes.
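One way to let the compiler catch such typos is TypeScript's `satisfies` operator (available since TypeScript 4.9). A sketch with a trimmed-down state type:

```typescript
// Trimmed-down state type for illustration.
interface AgentState {
  suspect_names: string
  appraisal_result?: string
}

// ✅ Checked: the object must be a valid Partial<AgentState>.
const goodUpdate = { appraisal_result: 'done' } satisfies Partial<AgentState>

// ❌ Would not compile: 'appraisal_reslt' is not a known field, so the
// typo is caught at build time instead of surfacing as `undefined`
// in a downstream node.
// const badUpdate = { appraisal_reslt: 'done' } satisfies Partial<AgentState>

console.log(goodUpdate.appraisal_result)
```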