POST {base_url}/request/{path}

Overview

Send a request and fetch the result directly.

In contrast to asynchronous APIs, this API returns the result over the same connection, and there is no JSON wrapping of either the input or the output. You can therefore use this API as if you were accessing your service directly.

When the service is in a cold state (i.e., no replicas are running because the service has been idle) and a new request arrives, a new replica is started immediately.

In that case, the first few requests may be aborted by timeouts until the replica is up and running. Check your HTTP client's timeout configuration.
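
To tolerate cold starts, a client can use a generous timeout and retry on timeout errors. Below is a minimal sketch using Python's requests library; the base URL, token, timeout, and backoff values are illustrative, not prescribed by the API.

    import time

    import requests

    BASE_URL = "https://serve-api.dev2.vssl.ai/api/v1/services/<slug>"  # from the Request dialog
    TOKEN = "<token>"  # from the Request dialog

    def post_with_cold_start_retry(path, payload, attempts=5, timeout=60):
        # POST to the service, retrying while a cold replica spins up.
        url = f"{BASE_URL}/request{path}"
        headers = {"Authorization": f"Bearer {TOKEN}"}
        for attempt in range(attempts):
            try:
                return requests.post(url, headers=headers, data=payload, timeout=timeout)
            except requests.exceptions.Timeout:
                # The replica may still be starting; back off and retry.
                time.sleep(2 ** attempt)
        raise RuntimeError("service did not respond within the retry budget")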

Example
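
A minimal end-to-end request, assuming Python's requests library; the path and request body are placeholders that depend entirely on your service.

    import requests

    BASE_URL = "https://serve-api.dev2.vssl.ai/api/v1/services/<slug>"  # from the Request dialog
    TOKEN = "<token>"  # from the Request dialog

    # The path and body are defined by your service; these values are illustrative.
    resp = requests.post(
        f"{BASE_URL}/request/predictions/my-model",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"inputs": "example input"},  # hypothetical payload
        timeout=60,
    )
    print(resp.status_code, resp.text)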

Request

Authorization

You must provide a token in the Authorization header using the Bearer scheme:

Authorization: Bearer <token>

The token can be found in the web UI (in the service overview's Request dialog).

Path parameters

base_url
string
required

Base URL for your service. This value can be found in the web UI (in the service overview's Request dialog).

Typical value: https://serve-api.dev2.vssl.ai/api/v1/services/<slug>

path
string
required

Path used when making a request to your service.

Your service must expose a corresponding endpoint. Common path values used for inference include the following (an example request for one of them is sketched after the list):

  • /v2/models/my-model/infer
  • /predictions/my-model
  • /v1/completions
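
For instance, if your service exposes an OpenAI-compatible /v1/completions endpoint, the call might look like this sketch (the model name, prompt, and body format come from your service, not from this API):

    import requests

    BASE_URL = "https://serve-api.dev2.vssl.ai/api/v1/services/<slug>"  # from the Request dialog
    TOKEN = "<token>"  # from the Request dialog

    resp = requests.post(
        f"{BASE_URL}/request/v1/completions",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "model": "my-model",       # illustrative model name
            "prompt": "Hello, world",  # illustrative prompt
            "max_tokens": 16,
        },
        timeout=60,
    )
    print(resp.json())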

Response

The response from your service is relayed as-is, so there is no fixed response format.

The response is streamed with low latency, so it can be used in live-streaming applications, e.g. chat or text completion with large language models (LLMs).
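
One way to consume the stream incrementally, sketched with Python's requests library and stream=True (the path and payload are illustrative, as above):

    import requests

    BASE_URL = "https://serve-api.dev2.vssl.ai/api/v1/services/<slug>"  # from the Request dialog
    TOKEN = "<token>"  # from the Request dialog

    with requests.post(
        f"{BASE_URL}/request/v1/completions",      # illustrative path
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"prompt": "Hello", "stream": True},  # hypothetical payload
        stream=True,  # hand chunks to the caller as they arrive
        timeout=60,
    ) as resp:
        for chunk in resp.iter_content(chunk_size=None):
            # Each chunk is printed as soon as the service emits it.
            print(chunk.decode("utf-8", errors="replace"), end="", flush=True)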

HTTP response headers from your service are generally stripped out. Only the following headers are passed along:

  • Content-Type
  • Content-Length