Send a request and fetch the result directly. In contrast to asynchronous APIs, this API returns the result over the same connection, and there is no JSON wrapping in either the input or the output. Thus, you can use this API as if you were accessing your service directly.
When the service is in a cold state (i.e. there are no running replicas due to service idleness) and a new request arrives, a new replica is started immediately. In such cases, the first few requests may be aborted by timeouts until the replica is up and running. Please consult your HTTP client's timeout configuration.
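To tolerate cold starts, a client can simply retry timed-out requests with a short backoff. Below is a minimal sketch using only the Python standard library; the endpoint path and function names are illustrative, not part of the API itself.

```python
import time
import urllib.error
import urllib.request

# Hypothetical endpoint; substitute your own base URL and path.
SERVICE_URL = "https://serve-api.dev2.vssl.ai/api/v1/services/my-service/predictions/my-model"

def backoff_delays(attempts, base=1.0):
    """Exponential backoff schedule in seconds: 1, 2, 4, ..."""
    return [base * (2 ** i) for i in range(attempts)]

def post_with_retry(url, body, attempts=4, timeout=10):
    """POST `body` (bytes), retrying on timeouts while a cold replica starts."""
    last_err = None
    for delay in backoff_delays(attempts):
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as err:
            last_err = err
            time.sleep(delay)  # give the replica time to come up, then retry
    raise last_err
```

The exact number of attempts and the backoff base are tuning choices; the only requirement is that the client's per-request timeout and total retry window comfortably cover your service's cold-start time.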
Base URL for your service. This value can be found in the web UI (in the service overview's Request dialog). Typical value: https://serve-api.dev2.vssl.ai/api/v1/services/<slug>
The response from your service is relayed as-is, so there is no fixed response format. The response is streamed with low latency, so this API can be used in live-streaming applications, e.g. chat or text completion with large language models (LLMs).
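Because the response is streamed, a client can consume it incrementally rather than waiting for the full body. A minimal stdlib sketch (the function name is illustrative):

```python
import urllib.request

def stream_response(url, body=None, chunk_size=1024):
    """Yield response bytes as they arrive instead of buffering the whole body."""
    req = urllib.request.Request(url, data=body)
    if body is not None:
        req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Typical usage against your service (URL is a placeholder):
#   for chunk in stream_response(url, body=b'{"question": "1+1 = ?"}'):
#       print(chunk.decode(), end="", flush=True)
```

For token-by-token display of LLM output, printing each chunk as it arrives (as in the commented usage above) is usually sufficient; a dedicated HTTP client library with streaming support works equally well.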
HTTP response headers from your service are generally stripped.
Only the following headers are passed along:
Content-Type
Content-Length
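In other words, the relay drops every upstream header except the two listed above. The filtering behaves roughly like this sketch (the function name and dict shapes are hypothetical, for illustration only):

```python
# Headers the relay passes through; everything else is stripped.
PASSTHROUGH = {"content-type", "content-length"}

def passthrough_headers(upstream_headers):
    """Keep only the headers the relay forwards to the caller."""
    return {
        name: value
        for name, value in upstream_headers.items()
        if name.lower() in PASSTHROUGH  # header names are case-insensitive
    }
```

A practical consequence: custom headers your service sets (e.g. request IDs or rate-limit hints) will not reach the caller, so any such metadata must travel in the response body instead.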
POST /predictions/my-model (...)
Content-Type: application/json
Content-Length: 23

{"question": "1+1 = ?"}
200 OK (...)
Content-Type: application/json
Content-Length: 43

{"answer": "The answer is 3. No, it's 11."}