## Documentation Index

Fetch the complete documentation index at: https://docs.tensormesh.ai/llms.txt

Use this file to discover all available pages before exploring further.
## Generate A Single Response

`tensormesh(modelId)`, `tensormesh.chatModel(modelId)`, and `tensormesh.languageModel(modelId)` all create AI SDK chat models backed by the `/v1/chat/completions` route.
```typescript
import { generateText } from "ai";
import { tensormesh } from "@tensormesh/ai-sdk-provider";

const result = await generateText({
  model: tensormesh("Qwen/Qwen3-Coder-30B-A3B-Instruct"),
  prompt: "Write a concise implementation plan for adding streaming chat.",
});

console.log(result.text);
```
## Stream A Response
```typescript
import { streamText } from "ai";
import { tensormesh } from "@tensormesh/ai-sdk-provider";

const result = streamText({
  model: tensormesh("Qwen/Qwen3-Coder-30B-A3B-Instruct"),
  prompt: "Explain how serverless inference helps developer tooling.",
});

for await (const text of result.textStream) {
  process.stdout.write(text);
}
```
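The `for await` loop above consumes `result.textStream` as an async iterable of text deltas. A minimal sketch of that consumption pattern, using a stubbed async generator in place of the real stream (the `fakeTextStream` stub and `collectText` helper are illustrative, not part of the provider):

```typescript
// Stub standing in for result.textStream: any AsyncIterable<string> works.
async function* fakeTextStream(): AsyncIterable<string> {
  yield "Serverless ";
  yield "inference ";
  yield "scales to zero.";
}

// Render each delta as it arrives while accumulating the full response text.
async function collectText(stream: AsyncIterable<string>): Promise<string> {
  let full = "";
  for await (const delta of stream) {
    process.stdout.write(delta); // incremental rendering, as in the loop above
    full += delta;
  }
  return full;
}

collectText(fakeTextStream()).then((full) => {
  console.log("\ncollected", full.length, "characters");
});
```

This is the same shape your UI code takes: render deltas immediately for responsiveness, and keep the accumulated string if you need the complete answer afterward.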
## Text Completions

Use `tensormesh.completionModel(modelId)` when you specifically need the `/v1/completions` route.
```typescript
import { generateText } from "ai";
import { tensormesh } from "@tensormesh/ai-sdk-provider";

const result = await generateText({
  model: tensormesh.completionModel("openai/gpt-oss-20b"),
  prompt: "Complete this sentence: Serverless inference is useful because",
});

console.log(result.text);
```
For most applications, prefer `tensormesh(modelId)`, because chat models support richer AI SDK features such as tool calling and structured output.