Releases: VoltAgent/voltagent
@voltagent/server-hono@2.0.9
Patch Changes
- #1199 b6813e9 Thanks @omeraplak! - fix: point A2A agent cards at the JSON-RPC endpoint

  A2A agent cards now advertise `/a2a/{serverId}` instead of the internal `/.well-known/{serverId}/agent-card.json` discovery document. When the card is served through the Hono or Elysia integrations, VoltAgent also resolves that endpoint to an absolute URL based on the incoming request.

- Updated dependencies [b6813e9]:
  - @voltagent/a2a-server@2.0.3
  - @voltagent/server-core@2.1.13
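The absolute-URL resolution described above can be sketched as follows. `resolveAgentCardUrl` is a hypothetical helper name for illustration, not part of the VoltAgent API:

```typescript
// Sketch of resolving the A2A agent-card endpoint to an absolute URL
// from the incoming request. `resolveAgentCardUrl` is an illustrative
// name; the real resolution happens inside the Hono/Elysia integrations.
function resolveAgentCardUrl(requestUrl: string, serverId: string): string {
  const origin = new URL(requestUrl).origin; // e.g. "https://api.example.com"
  // The card now advertises the JSON-RPC endpoint, not the
  // /.well-known/{serverId}/agent-card.json discovery document.
  return `${origin}/a2a/${serverId}`;
}

console.log(resolveAgentCardUrl("https://api.example.com/some/path?x=1", "assistant"));
// "https://api.example.com/a2a/assistant"
```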
@voltagent/server-elysia@2.0.7
Patch Changes
- #1199 b6813e9 Thanks @omeraplak! - fix: point A2A agent cards at the JSON-RPC endpoint

  A2A agent cards now advertise `/a2a/{serverId}` instead of the internal `/.well-known/{serverId}/agent-card.json` discovery document. When the card is served through the Hono or Elysia integrations, VoltAgent also resolves that endpoint to an absolute URL based on the incoming request.

- Updated dependencies [b6813e9]:
  - @voltagent/a2a-server@2.0.3
  - @voltagent/server-core@2.1.13
@voltagent/server-core@2.1.13
Patch Changes
- #1199 b6813e9 Thanks @omeraplak! - fix: point A2A agent cards at the JSON-RPC endpoint

  A2A agent cards now advertise `/a2a/{serverId}` instead of the internal `/.well-known/{serverId}/agent-card.json` discovery document. When the card is served through the Hono or Elysia integrations, VoltAgent also resolves that endpoint to an absolute URL based on the incoming request.
@voltagent/a2a-server@2.0.3
Patch Changes
- #1199 b6813e9 Thanks @omeraplak! - fix: point A2A agent cards at the JSON-RPC endpoint

  A2A agent cards now advertise `/a2a/{serverId}` instead of the internal `/.well-known/{serverId}/agent-card.json` discovery document. When the card is served through the Hono or Elysia integrations, VoltAgent also resolves that endpoint to an absolute URL based on the incoming request.
@voltagent/serverless-hono@2.0.10
Patch Changes
- #1191 a21275f Thanks @ravyg! - fix(serverless-hono): defer waitUntil cleanup to prevent tool crashes in Cloudflare Workers

  The `finally` block in `toCloudflareWorker()`, `toVercelEdge()`, and `toDeno()` was calling `cleanup()` immediately when the Response was returned, before streaming and tool execution completed. This cleared the global `___voltagent_wait_until` while tools were still using it, causing crashes with time-consuming tools.

  Cleanup is now deferred through the platform's own `waitUntil()` so it runs only after all pending background work has settled.

- Updated dependencies [19fa54b]:
  - @voltagent/server-core@2.1.12
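The before/after pattern behind this fix can be sketched with a mocked Cloudflare `ExecutionContext`; the mock, the `cleanup()` body, and the handler name are illustrative, not VoltAgent internals:

```typescript
// Sketch of the fix: chain cleanup behind pending background work via
// the platform's waitUntil() instead of running it in a finally block.
type ExecutionContext = { waitUntil(p: Promise<unknown>): void };

const pending: Promise<unknown>[] = [];
const ctx: ExecutionContext = { waitUntil: (p) => pending.push(p) };

let cleaned = false;
const cleanup = async () => { cleaned = true; };

function handleRequest(ctx: ExecutionContext): string {
  // Simulates a time-consuming tool still streaming after the Response is returned.
  const toolWork = new Promise<void>((resolve) => setTimeout(resolve, 10));
  // Before: cleanup() ran here, while toolWork was still in flight.
  // After: cleanup is deferred until the background work settles.
  ctx.waitUntil(toolWork.then(cleanup));
  return "response returned immediately";
}

handleRequest(ctx);
Promise.all(pending).then(() => console.log(cleaned)); // true
```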
@voltagent/server-core@2.1.12
@voltagent/core@2.7.0
Minor Changes
- #1192 0dc2935 Thanks @ravyg! - feat(core): add `prepareStep` to AgentOptions for per-step tool control

  Surfaces the AI SDK's `prepareStep` callback as a top-level `AgentOptions` property so users can set a default step preparation callback at agent creation time. A per-call `prepareStep` in method options overrides the agent-level default.

  This enables controlling tool availability, tool choice, and other step settings on a per-step basis without passing `prepareStep` on every call.
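The default-versus-override behavior can be sketched as follows; `PrepareStepFn` and `resolvePrepareStep` are illustrative names, and the real AI SDK callback receives richer step context than shown here:

```typescript
// Sketch of agent-level default vs per-call override for prepareStep.
type PrepareStepFn = (step: { stepNumber: number }) => { activeTools?: string[] };

function resolvePrepareStep(
  agentDefault?: PrepareStepFn,
  perCall?: PrepareStepFn,
): PrepareStepFn | undefined {
  // Per-call prepareStep in method options wins over the agent-level default.
  return perCall ?? agentDefault;
}

const agentDefault: PrepareStepFn = () => ({ activeTools: ["search"] });
const perCall: PrepareStepFn = () => ({ activeTools: ["calculator"] });

console.log(resolvePrepareStep(agentDefault, perCall)!({ stepNumber: 0 }).activeTools![0]);
// "calculator" — per-call override applies
console.log(resolvePrepareStep(agentDefault, undefined)!({ stepNumber: 0 }).activeTools![0]);
// "search" — agent-level default applies
```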
@voltagent/server-core@2.1.11
Patch Changes
- #1183 b48f107 Thanks @omeraplak! - feat: persist selected assistant message metadata to memory

  You can enable persisted assistant message metadata at the agent level or per request.

  ```ts
  const result = await agent.streamText("Hello", {
    memory: {
      userId: "user-1",
      conversationId: "conv-1",
      options: {
        messageMetadataPersistence: {
          usage: true,
          finishReason: true,
        },
      },
    },
  });
  ```

  With this enabled, fetching messages from memory returns assistant `UIMessage.metadata` with fields like `usage` and `finishReason`, not just stream-time metadata.

  REST API requests can enable the same behavior with `options.memory.options`:

  ```bash
  curl -X POST http://localhost:3141/agents/assistant/text \
    -H "Content-Type: application/json" \
    -d '{
      "input": "Hello",
      "options": {
        "memory": {
          "userId": "user-1",
          "conversationId": "conv-1",
          "options": {
            "messageMetadataPersistence": {
              "usage": true,
              "finishReason": true
            }
          }
        }
      }
    }'
  ```

- Updated dependencies [b48f107, 195155b]:
  - @voltagent/core@2.6.14
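With persistence enabled, an assistant message fetched back from memory carries the persisted metadata. A minimal sketch of that shape — the type is trimmed down and the field values are made up for illustration; only `usage` and `finishReason` come from the release notes:

```typescript
// Illustrative shape of a persisted assistant message read from memory
// with messageMetadataPersistence enabled. Token counts are invented.
type UIMessage = {
  role: "user" | "assistant";
  content: string;
  metadata?: {
    usage?: { inputTokens: number; outputTokens: number };
    finishReason?: string;
  };
};

const fetched: UIMessage = {
  role: "assistant",
  content: "Hi! How can I help?",
  metadata: {
    usage: { inputTokens: 12, outputTokens: 9 }, // persisted, not just stream-time
    finishReason: "stop",
  },
};

console.log(fetched.metadata?.finishReason); // "stop"
```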
@voltagent/core@2.6.14
Patch Changes
- #1183 b48f107 Thanks @omeraplak! - feat: persist selected assistant message metadata to memory

  You can enable persisted assistant message metadata at the agent level or per request.

  ```ts
  const result = await agent.streamText("Hello", {
    memory: {
      userId: "user-1",
      conversationId: "conv-1",
      options: {
        messageMetadataPersistence: {
          usage: true,
          finishReason: true,
        },
      },
    },
  });
  ```

  With this enabled, fetching messages from memory returns assistant `UIMessage.metadata` with fields like `usage` and `finishReason`, not just stream-time metadata.

  REST API requests can enable the same behavior with `options.memory.options`:

  ```bash
  curl -X POST http://localhost:3141/agents/assistant/text \
    -H "Content-Type: application/json" \
    -d '{
      "input": "Hello",
      "options": {
        "memory": {
          "userId": "user-1",
          "conversationId": "conv-1",
          "options": {
            "messageMetadataPersistence": {
              "usage": true,
              "finishReason": true
            }
          }
        }
      }
    }'
  ```

- #1167 195155b Thanks @octo-patch! - fix: use OpenAI-compatible adapter for MiniMax provider
@voltagent/core@2.6.13
Patch Changes
- #1172 8cb2aa5 Thanks @omeraplak! - fix: tighten prompt-context usage telemetry

  - redact nested large binary fields when estimating prompt context usage
  - preserve circular-reference detection when serializing nested prompt message content
  - exclude runtime-only tool metadata from tool schema token estimates
  - avoid emitting cached and reasoning token span attributes when their values are zero
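The first fix above — redacting large binary fields before estimating prompt-context size — can be sketched generically. Everything here (function name, threshold, placeholder format) is an illustrative assumption, not VoltAgent's actual implementation:

```typescript
// Sketch: replace oversized string fields (e.g. base64 image data) with
// a short placeholder before estimating token usage, so binary blobs
// don't inflate the estimate. Threshold and names are made up.
const MAX_BINARY_CHARS = 256;

function redactForEstimate(value: unknown): unknown {
  if (typeof value === "string" && value.length > MAX_BINARY_CHARS) {
    return `[redacted:${value.length} chars]`;
  }
  if (Array.isArray(value)) return value.map(redactForEstimate);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [k, redactForEstimate(v)]),
    );
  }
  return value;
}

const msg = { role: "user", content: { image: "A".repeat(10_000) } };
console.log((redactForEstimate(msg) as any).content.image); // "[redacted:10000 chars]"
```

A production version would also need the circular-reference detection mentioned above; this sketch assumes acyclic input.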