The default provider instance targets Tensormesh serverless inference at https://serverless.tensormesh.ai/v1.
Explicit Serverless Provider
Use createTensormesh for serverless inference when you want to pass settings explicitly, such as a custom fetch implementation, custom headers, or an explicit API key.
- apiKey: inference API key used as the bearer token. Defaults to TENSORMESH_INFERENCE_API_KEY.
- baseURL: inference API base URL. Defaults to https://serverless.tensormesh.ai/v1.
- userId: Tensormesh user ID forwarded as X-User-Id for on-demand inference. Defaults to TENSORMESH_USER_ID.
- headers: additional headers for each request.
- fetch: custom fetch implementation.
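As a rough sketch of how these settings come together, the snippet below maps the documented apiKey, userId, and headers options onto per-request headers. The TensormeshSettings interface and buildHeaders function are illustrative, not the library's exported API.

```typescript
// Illustrative sketch (not the library's actual code): how the documented
// settings map onto the headers attached to each inference request.
interface TensormeshSettings {
  apiKey?: string;                  // bearer token for Authorization
  baseURL?: string;                 // inference API base URL
  userId?: string;                  // forwarded as X-User-Id
  headers?: Record<string, string>; // extra headers merged into each request
}

function buildHeaders(s: TensormeshSettings): Record<string, string> {
  const h: Record<string, string> = { ...s.headers };
  if (s.apiKey) h['Authorization'] = `Bearer ${s.apiKey}`;
  if (s.userId) h['X-User-Id'] = s.userId;
  return h;
}

console.log(buildHeaders({ apiKey: 'sk-test', userId: 'user-1' }));
// { Authorization: 'Bearer sk-test', 'X-User-Id': 'user-1' }
```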
On-Demand Inference
For on-demand inference, pass the routed /v1 base URL and your user ID, and use the model name exposed by that on-demand endpoint.
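A minimal sketch of an on-demand request built by hand, assuming an OpenAI-compatible /chat/completions path under the routed /v1 base URL. The base URL, API key, user ID, and model name below are all placeholders; substitute the routed URL and model name shown for your deployment.

```typescript
// Placeholder routed base URL and credentials (assumptions, not real values).
const baseURL = 'https://ondemand.example.tensormesh.ai/v1';

const req = new Request(`${baseURL}/chat/completions`, {
  method: 'POST',
  headers: {
    Authorization: 'Bearer sk-example', // placeholder inference API key
    'X-User-Id': 'user-123',            // Tensormesh user ID for on-demand routing
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'model-name-from-endpoint',  // model exposed by the on-demand endpoint
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});

console.log(req.url);
```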
