Middleware support #98
Replies: 2 comments 1 reply
What exactly would you want to transform, and when? I'd like to add this, but make it right and flexible for all use cases.
Hey, just wanted to share that I've built a middleware that addresses several of the use cases discussed here — specifically context compression, tool result truncation, and dynamic state injection. It's called Context Chef:

```ts
import { contextChefMiddleware } from '@context-chef/tanstack-ai';
import { chat } from '@tanstack/ai';
import { openaiText } from '@tanstack/ai-openai';

const stream = chat({
  adapter: openaiText('gpt-4o'),
  messages,
  middleware: [
    contextChefMiddleware({
      contextWindow: 128_000,
      compress: { adapter: openaiText('gpt-4o-mini') },
      truncate: { threshold: 5000 },
    }),
  ],
});
```

What it handles:
- context compression
- tool result truncation
- dynamic state injection
I wrote a more detailed post in a separate discussion — just wanted to drop a note here since this thread is directly relevant. Full source: context-chef/packages/tanstack-ai
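To make the tool-result truncation idea above concrete, here is a minimal sketch of what such a pass could look like. This is illustrative only, not Context Chef's actual internals; the `Message` type and `truncateToolResults` function are assumptions for the example, with `threshold` mirroring the `truncate: { threshold: 5000 }` option.

```typescript
// Illustrative sketch (not Context Chef's real implementation):
// clamp oversized tool results to a character budget so a single
// verbose tool call can't crowd out the rest of the context.
type Message = { role: string; content: string };

function truncateToolResults(messages: Message[], threshold: number): Message[] {
  return messages.map((m) =>
    m.role === "tool" && m.content.length > threshold
      ? // Keep the first `threshold` characters and append a marker.
        { ...m, content: m.content.slice(0, threshold) + "\n…[truncated]" }
      : m,
  );
}

// Example: a 12 000-character tool result shrinks to the 5 000-char budget.
const trimmed = truncateToolResults(
  [{ role: "tool", content: "x".repeat(12_000) }],
  5_000,
);
console.log(trimmed[0].content.length); // 5013 (budget plus 13-char marker)
```

A character count is the simplest possible budget; a real implementation would more likely count tokens and summarize rather than hard-cut.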
Would love to see a middleware or plugin system that allows intercepting and transforming the chat flow.
I'm building a memory integration package for TanStack AI (I already have one for the Vercel AI SDK using `wrapLanguageModel`). Without middleware, developers would need to manually orchestrate multiple steps; with middleware support, it could be as simple as registering a single middleware entry.

This would enable building integrations for cross-cutting concerns like memory injection, logging, caching, rate limiting, etc.
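A hypothetical sketch of what such a pipeline could look like, under the usual "onion" middleware model: each middleware receives the messages plus a `next` function and can transform the input before delegating. All names here (`ChatMiddleware`, `runWithMiddleware`, `injectMemory`) are assumptions for illustration, not TanStack AI's API.

```typescript
// Hypothetical middleware pipeline sketch; none of these names are
// TanStack AI's real API.
type Message = { role: string; content: string };

// A middleware gets the messages and a `next` continuation, and may
// transform the messages (or the result) around the call to `next`.
type ChatMiddleware = (
  messages: Message[],
  next: (messages: Message[]) => string,
) => string;

// Compose middleware right-to-left around a terminal handler, so the
// first array entry runs outermost.
function runWithMiddleware(
  middleware: ChatMiddleware[],
  handler: (messages: Message[]) => string,
  messages: Message[],
): string {
  const chain = middleware.reduceRight<(m: Message[]) => string>(
    (next, mw) => (m) => mw(m, next),
    handler,
  );
  return chain(messages);
}

// Cross-cutting concern #1: inject a memory message before the call.
const injectMemory: ChatMiddleware = (messages, next) =>
  next([{ role: "system", content: "User prefers concise answers." }, ...messages]);

// Cross-cutting concern #2: log how many messages are being sent.
const logCount: ChatMiddleware = (messages, next) => {
  console.log(`sending ${messages.length} messages`);
  return next(messages);
};

const reply = runWithMiddleware(
  [logCount, injectMemory],
  (messages) => `model saw ${messages.length} messages`, // stand-in for the model call
  [{ role: "user", content: "Hi" }],
);
console.log(reply); // "model saw 2 messages"
```

The continuation-passing shape is what makes the use cases in this thread composable: memory injection rewrites the input, logging observes it, and caching or rate limiting could short-circuit by returning without calling `next` at all.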