Releases: VoltAgent/voltagent

@voltagent/libsql@2.1.2

19 Feb 22:49
24eeb6f

Patch Changes

  • #1085 f275daf Thanks @omeraplak! - Fix workflow execution filtering by persisted metadata across adapters.

    • Persist options.metadata on workflow execution state so /workflows/executions filters can match tenant/user metadata.
    • Preserve existing execution metadata when updating cancelled/error workflow states.
    • Accept options.metadata in server workflow execution request schema.
    • Fix LibSQL and Cloudflare D1 JSON metadata query comparisons for metadata and metadata.<key> filters.
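    As a hedged sketch, a client could filter the executions list by tenant metadata like this; the base URL and the `metadata.tenantId` query-parameter name are assumptions for illustration, not confirmed API:

    ```typescript
    // Hypothetical sketch: list workflow executions filtered by persisted
    // options.metadata. Base URL and query-parameter names are assumptions.
    const baseUrl = "http://localhost:3141";

    function executionsUrl(filters: Record<string, string>): string {
      const params = new URLSearchParams(filters);
      return `${baseUrl}/workflows/executions?${params.toString()}`;
    }

    // Match executions started with options.metadata = { tenantId: "acme" }
    const url = executionsUrl({ "metadata.tenantId": "acme" });
    // const res = await fetch(url); // then inspect the returned executions
    ```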
  • #1082 73cf1d3 Thanks @omeraplak! - Fix workflow state persistence parity across SQL adapters.

    This update persists and returns input, context, and top-level workflowState in workflow state operations. It also ensures suspended workflow state queries include events, output, and cancellation, and adds adapter migrations/column additions where needed.

@voltagent/core@2.4.4

19 Feb 22:49
24eeb6f

Patch Changes

  • #1085 f275daf Thanks @omeraplak! - Fix workflow execution filtering by persisted metadata across adapters.

    • Persist options.metadata on workflow execution state so /workflows/executions filters can match tenant/user metadata.
    • Preserve existing execution metadata when updating cancelled/error workflow states.
    • Accept options.metadata in server workflow execution request schema.
    • Fix LibSQL and Cloudflare D1 JSON metadata query comparisons for metadata and metadata.<key> filters.
  • #1084 95ad610 Thanks @omeraplak! - Add stream attach support for in-progress workflow executions.

    • Add GET /workflows/:id/executions/:executionId/stream to attach to an active workflow SSE stream.
    • Add replay support for missed SSE events via fromSequence and Last-Event-ID.
    • Keep POST /workflows/:id/stream behavior unchanged for starting new executions.
    • Ensure streamed workflow resume uses a fresh suspend controller so attach clients continue receiving events after resume.
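    A hedged sketch of an attach client, assuming the server honors the standard Last-Event-ID header for replay; the endpoint path comes from this entry, while the host, port, and sample ids are made up:

    ```typescript
    // Hypothetical sketch: build an attach request for an in-progress
    // execution's SSE stream, replaying events missed since the last one seen.
    function attachRequest(workflowId: string, executionId: string, lastEventId?: string): Request {
      const url = `http://localhost:3141/workflows/${workflowId}/executions/${executionId}/stream`;
      const headers: Record<string, string> = { Accept: "text/event-stream" };
      if (lastEventId !== undefined) {
        headers["Last-Event-ID"] = lastEventId; // server replays events after this id
      }
      return new Request(url, { headers });
    }

    const req = attachRequest("onboarding", "exec_123", "41");
    // const res = await fetch(req); // then read res.body as an SSE stream
    ```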

@voltagent/cloudflare-d1@2.1.2

19 Feb 22:48
24eeb6f

Patch Changes

  • #1085 f275daf Thanks @omeraplak! - Fix workflow execution filtering by persisted metadata across adapters.

    • Persist options.metadata on workflow execution state so /workflows/executions filters can match tenant/user metadata.
    • Preserve existing execution metadata when updating cancelled/error workflow states.
    • Accept options.metadata in server workflow execution request schema.
    • Fix LibSQL and Cloudflare D1 JSON metadata query comparisons for metadata and metadata.<key> filters.
  • #1082 73cf1d3 Thanks @omeraplak! - Fix workflow state persistence parity across SQL adapters.

    This update persists and returns input, context, and top-level workflowState in workflow state operations. It also ensures suspended workflow state queries include events, output, and cancellation, and adds adapter migrations/column additions where needed.

@voltagent/core@2.4.3

18 Feb 18:48
de35e3c

Patch Changes

  • #1078 fbce8aa Thanks @omeraplak! - fix: persist workflow context mutations across steps and downstream agents/tools

    Workflows now consistently use the execution context map when building step state.
    This ensures context written in one step is visible in later steps and in andAgent calls.

    Also aligns workflow event/stream context payloads with the normalized runtime context.
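    The guarantee can be modeled with a stripped-down shared context map; this illustrates the semantics only, not VoltAgent's actual workflow API:

    ```typescript
    // Illustration of the fixed semantics: one execution context map is
    // threaded through all steps, so a write in step 1 is visible in step 2.
    type Ctx = Map<string, unknown>;
    type StepFn = (input: unknown, ctx: Ctx) => unknown;

    function runSteps(input: unknown, ctx: Ctx, steps: StepFn[]): unknown {
      return steps.reduce((acc, step) => step(acc, ctx), input);
    }

    const ctx: Ctx = new Map();
    const result = runSteps("order-42", ctx, [
      (input, c) => {
        c.set("customerId", "cust-7"); // step 1 writes to the shared context
        return input;
      },
      (input, c) => `${input} for ${c.get("customerId")}`, // step 2 sees the write
    ]);
    // result === "order-42 for cust-7"
    ```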

@voltagent/core@2.4.2

17 Feb 22:01
6791d5e

Patch Changes

  • #1072 42be052 Thanks @omeraplak! - fix: auto-register standalone Agent VoltOps client for remote observability export

    • When an Agent is created with voltOpsClient and no global client is registered, the agent now seeds AgentRegistry with that client.
    • This enables remote OTLP trace/log exporters that resolve credentials via global registry in standalone new Agent(...) setups (without new VoltAgent(...)).
    • Existing global client precedence is preserved; agent-level client does not override an already configured global client.
    • Added coverage in client-priority.spec.ts for both auto-register and non-override scenarios.
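    The precedence rule can be sketched with a minimal registry model; the shapes below are hypothetical, not the real AgentRegistry API:

    ```typescript
    // Minimal model of the auto-register behavior: an agent-level client seeds
    // the global slot only when no global client is configured yet.
    type VoltOpsClientLike = { name: string };

    class RegistryModel {
      private globalClient: VoltOpsClientLike | undefined;

      seedFromAgent(client: VoltOpsClientLike): void {
        if (!this.globalClient) this.globalClient = client; // auto-register
        // an already configured global client is never overridden
      }

      getGlobal(): VoltOpsClientLike | undefined {
        return this.globalClient;
      }
    }

    const registry = new RegistryModel();
    registry.seedFromAgent({ name: "agent-level" });   // seeds the global slot
    registry.seedFromAgent({ name: "another-agent" }); // no override
    ```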
  • #1064 047ff70 Thanks @omeraplak! - Add multi-step loop bodies for andDoWhile and andDoUntil.

    • Loop steps now accept either a single step or a sequential steps array.
    • When steps is provided, each iteration runs the steps in order and feeds each output into the next step.
    • Workflow step serialization now includes loop subSteps when a loop has multiple steps.
    • Added runtime and type tests for chained loop steps.
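    The iteration semantics, modeled with plain functions rather than the real andDoWhile signature:

    ```typescript
    // Illustration only: each iteration runs the steps in order, feeding each
    // output into the next, and repeats while the condition holds.
    type LoopStep = (value: number) => number;

    function doWhileSteps(initial: number, steps: LoopStep[], condition: (v: number) => boolean): number {
      let value = initial;
      do {
        for (const step of steps) value = step(value); // sequential multi-step body
      } while (condition(value));
      return value;
    }

    const out = doWhileSteps(1, [(n) => n + 1, (n) => n * 2], (n) => n < 20);
    // iterations: 1 → 2 → 4, then 4 → 5 → 10, then 10 → 11 → 22; loop exits at 22
    ```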

@voltagent/ag-ui@1.0.4

17 Feb 22:01
6791d5e

Patch Changes

  • #1074 e2793c1 Thanks @omeraplak! - fix: preserve assistant feedback metadata across AG-UI streams
    • Map VoltAgent message-metadata stream chunks to AG-UI CUSTOM events, which are the protocol-native channel for application-specific metadata.
    • Stop emitting legacy internal tool-result metadata markers from the adapter.
    • Remove the legacy metadata marker filter from model-input message conversion.

@voltagent/server-core@2.1.6

14 Feb 00:58
61463a4

Patch Changes

  • #1059 ec82442 Thanks @omeraplak! - feat: add persisted feedback-provided markers for message feedback metadata

    • AgentFeedbackMetadata now supports provided, providedAt, and feedbackId.
    • Added Agent.isFeedbackProvided(...) and Agent.isMessageFeedbackProvided(...) helpers.
    • Added agent.markFeedbackProvided(...) to persist a feedback-submitted marker on a stored message so feedback UI can stay hidden after memory reloads.
    • Added result.feedback.markFeedbackProvided(...) and result.feedback.isProvided() helper methods for SDK usage.
    • Updated server response schema to include the new feedback metadata fields.
    ```typescript
    const result = await agent.generateText("How was this answer?", {
      userId: "user-1",
      conversationId: "conv-1",
      feedback: true,
    });

    if (result.feedback && !result.feedback.isProvided()) {
      // call after your feedback ingestion succeeds
      await result.feedback.markFeedbackProvided({
        feedbackId: "fb_123", // optional
      });
    }
    ```
  • Updated dependencies [b0482cb, f36545c, ec82442, b0482cb, b0482cb, 9e5ef29]:

    • @voltagent/core@2.4.1

@voltagent/sandbox-e2b@2.0.2

14 Feb 00:58
61463a4

Patch Changes

  • #1051 b0482cb Thanks @omeraplak! - fix: align sandbox package core dependency strategy with plugin best practices

    • Update @voltagent/sandbox-daytona to use @voltagent/core via peerDependencies + devDependencies instead of runtime dependencies.
    • Raise @voltagent/sandbox-daytona peer minimum to ^2.3.8 to match runtime usage of normalizeCommandAndArgs.
    • Align @voltagent/sandbox-e2b development dependency on @voltagent/core to ^2.3.8.
  • #1068 b95293b Thanks @omeraplak! - feat: expose the underlying E2B SDK sandbox instance from E2BSandbox.

    • Added a public getSandbox() method that returns the original e2b Sandbox instance so provider-specific APIs (for example files.read) can be used directly.
    • Added E2BSandboxInstance type export for the underlying SDK sandbox type.
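    A hedged usage sketch; the interfaces below are stubs standing in for the real @voltagent/sandbox-e2b and e2b types:

    ```typescript
    // getSandbox() exposes the raw e2b sandbox so provider-specific APIs such
    // as files.read can be called directly. Types are stubbed for illustration.
    interface E2BFiles {
      read(path: string): Promise<string>;
    }
    interface RawSandbox {
      files: E2BFiles;
    }
    interface E2BSandboxLike {
      getSandbox(): RawSandbox;
    }

    async function readFromSandbox(sandbox: E2BSandboxLike, path: string): Promise<string> {
      return sandbox.getSandbox().files.read(path); // drop down to the raw SDK
    }

    // Stub instance, for illustration only
    const stubSandbox: E2BSandboxLike = {
      getSandbox: () => ({ files: { read: async (p) => `contents of ${p}` } }),
    };
    ```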

@voltagent/sandbox-daytona@2.0.2

14 Feb 00:58
61463a4

Patch Changes

  • #1051 b0482cb Thanks @omeraplak! - fix: align sandbox package core dependency strategy with plugin best practices

    • Update @voltagent/sandbox-daytona to use @voltagent/core via peerDependencies + devDependencies instead of runtime dependencies.
    • Raise @voltagent/sandbox-daytona peer minimum to ^2.3.8 to match runtime usage of normalizeCommandAndArgs.
    • Align @voltagent/sandbox-e2b development dependency on @voltagent/core to ^2.3.8.
  • #1068 b95293b Thanks @omeraplak! - feat: expose the underlying Daytona SDK sandbox instance from DaytonaSandbox.

    • Added a public getSandbox() method that returns the original @daytonaio/sdk Sandbox instance so provider-specific APIs can be used directly.
    • Added DaytonaSandboxInstance type export for the underlying SDK sandbox type.

@voltagent/core@2.4.1

14 Feb 00:58
61463a4

Patch Changes

  • #1051 b0482cb Thanks @omeraplak! - Fix workspace skill prompt injection and guidance for skill access tools.

    • Change activated skill prompt injection to include metadata only (name, id, description) instead of embedding full SKILL.md instruction bodies.
    • Clarify workspace skills system prompt so agents use workspace skill tools for skill access and avoid sandbox commands like execute_command, ls /skills, or cat /skills/....
  • #1067 f36545c Thanks @omeraplak! - fix: persist conversation progress incrementally during multi-step runs

    • Added step-level conversation persistence checkpoints so completed steps are no longer only saved at turn finish.
    • Tool completion steps (tool-result / tool-error) now trigger immediate persistence flushes in step mode.
    • Added configurable agent persistence options:
      • conversationPersistence.mode ("step" or "finish")
      • conversationPersistence.debounceMs
      • conversationPersistence.flushOnToolResult
    • Added global VoltAgent default agentConversationPersistence and wiring to registered agents.
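    A hedged sketch of the new knobs as an options object; the field names come from this entry, while the exact wiring into agent options is assumed:

    ```typescript
    // Hypothetical sketch: step-mode persistence configuration. Values here are
    // arbitrary examples; only the field names come from the changelog.
    const conversationPersistence = {
      mode: "step" as const,   // "step": checkpoint each completed step; "finish": only at turn end
      debounceMs: 250,         // coalesce rapid step-level writes
      flushOnToolResult: true, // flush immediately on tool-result / tool-error steps
    };
    ```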
  • #1059 ec82442 Thanks @omeraplak! - feat: add persisted feedback-provided markers for message feedback metadata

    • AgentFeedbackMetadata now supports provided, providedAt, and feedbackId.
    • Added Agent.isFeedbackProvided(...) and Agent.isMessageFeedbackProvided(...) helpers.
    • Added agent.markFeedbackProvided(...) to persist a feedback-submitted marker on a stored message so feedback UI can stay hidden after memory reloads.
    • Added result.feedback.markFeedbackProvided(...) and result.feedback.isProvided() helper methods for SDK usage.
    • Updated server response schema to include the new feedback metadata fields.
    ```typescript
    const result = await agent.generateText("How was this answer?", {
      userId: "user-1",
      conversationId: "conv-1",
      feedback: true,
    });

    if (result.feedback && !result.feedback.isProvided()) {
      // call after your feedback ingestion succeeds
      await result.feedback.markFeedbackProvided({
        feedbackId: "fb_123", // optional
      });
    }
    ```
  • #1051 b0482cb Thanks @omeraplak! - Enable workspace skills prompt injection by default when an agent has a workspace with skills configured.

    • Agents now auto-compose a workspace skills prompt hook by default.
    • Added workspaceSkillsPrompt to AgentOptions to customize (WorkspaceSkillsPromptOptions), force (true), or disable (false) prompt injection.
    • When a custom hooks.onPrepareMessages is provided, it now composes with the default workspace skills prompt hook unless workspaceSkillsPrompt is explicitly set to false.
    • Updated workspace skills docs and the examples/with-workspace sample to document and use the new behavior.
  • #1051 b0482cb Thanks @omeraplak! - feat: improve workspace skill compatibility for third-party SKILL.md files that do not declare file allowlists in frontmatter.

    • Infer references, scripts, and assets allowlists from relative Markdown links in skill instructions when explicit frontmatter arrays are missing.
    • This enables skills like microsoft/playwright-cli (installed via npx skills add ...) to read linked reference files through workspace skill tools without manual metadata rewrites.
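    The inference can be sketched as collecting relative Markdown link targets from the skill instructions; this is a simplification of the real rules (which also bucket targets into references, scripts, and assets):

    ```typescript
    // Simplified sketch: treat relative Markdown link targets as an implied
    // file allowlist, skipping absolute URLs and in-page anchors.
    function inferAllowlist(markdown: string): string[] {
      const targets = [...markdown.matchAll(/\]\(([^)]+)\)/g)].map((m) => m[1]);
      return targets.filter((t) => !/^[a-z][a-z0-9+.-]*:/i.test(t) && !t.startsWith("#"));
    }

    const skillBody = "See [the guide](references/guide.md) and [docs](https://example.com).";
    // inferAllowlist(skillBody) keeps only the relative link target
    ```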
  • #1066 9e5ef29 Thanks @omeraplak! - feat: improve providerOptions IntelliSense for provider-specific model settings

    • ProviderOptions now includes typed option buckets for openai, anthropic, google, and xai.
    • Existing top-level call option fields (temperature, maxTokens, topP, frequencyPenalty, presencePenalty, etc.) remain supported for backward compatibility.
    • Added type-level coverage for provider-scoped options in the agent type tests.
    • Updated docs to show provider-scoped providerOptions usage in agent, API endpoint, and UI integration examples.
    ```typescript
    await agent.generateText("Draft a summary", {
      temperature: 0.3,
      providerOptions: {
        openai: {
          reasoningEffort: "medium",
          textVerbosity: "low",
        },
        anthropic: {
          sendReasoning: true,
        },
        google: {
          thinkingConfig: {
            thinkingBudget: 1024,
          },
        },
        xai: {
          reasoningEffort: "medium",
        },
      },
    });
    ```