diff --git a/serverless/endpoints/send-requests.mdx b/serverless/endpoints/send-requests.mdx
index e95760c6..60878917 100644
--- a/serverless/endpoints/send-requests.mdx
+++ b/serverless/endpoints/send-requests.mdx
@@ -12,26 +12,34 @@ Serverless endpoints provide synchronous and asynchronous job processing with au

 ## How requests work

-After creating a Serverless [endpoint](/serverless/endpoints/overview), you can start sending it **requests** to submit jobs and retrieve results. A request can include parameters, payloads, and headers that define what the endpoint should process. For example, you can send a `POST` request to submit a job, or a `GET` request to check status of a job, retrieve results, or check endpoint health.
+After creating a Serverless [endpoint](/serverless/endpoints/overview), you can start sending it **requests** to submit jobs and retrieve results.

-A **job** is a unit of work containing the input data from the request, packaged for processing by your [workers](/serverless/workers/overview). If no worker is immediately available, the job is queued. Once a worker is available, the job is processed by the worker using your [handler function](/serverless/workers/handler-functions).
+A request can include parameters, payloads, and headers that define what the endpoint should process. For example, you can send a `POST` request to submit a job, or a `GET` request to check the status of a job, retrieve results, or check endpoint health.

-When you submit a job request, it can be either synchronous or asynchronous depending on the operation you use:
+A **job** is a unit of work containing the input data from the request, packaged for processing by your [workers](/serverless/workers/overview).

-- `/runsync` submits a synchronous job. A response is returned as soon as the job is complete.
-- `/run` submits an asynchronous job. The job is processed in the background, and you can retrieve the result by sending a `GET` request to the `/status` endpoint.
+If no worker is immediately available, the job is queued. Once a worker is available, the job is processed using your worker's [handler function](/serverless/workers/handler-functions).

-Queue-based endpoints provide a fixed set of operations for submitting and managing jobs. You can find a full list of operations and examples in the [sections below](/serverless/endpoints/send-requests#operation-overview).
+Queue-based endpoints provide a fixed set of operations for submitting and managing jobs. You can find a full list of operations and sample code in the [sections below](/serverless/endpoints/send-requests#operation-overview).

-
-If you need to create an endpoint that supports custom API paths, use [load balancing endpoints](/serverless/load-balancing/overview).
-
+## Sync vs. async

-## Request input structure
+When you submit a job request, it can be either synchronous or asynchronous depending on the operation you use:

-When submitting a job with `/runsync` or `/run`, your request must include a JSON object the the key `input`, containing the parameters required by your worker's [handler function](/serverless/workers/handler-functions).
+- `/runsync` submits a synchronous job.
+  - The client waits for the job to complete before receiving the result.
+  - A response is returned as soon as the job is complete.
+  - Results are available for 1 minute by default (5 minutes max).
+  - Ideal for quick responses and interactive applications.
+- `/run` submits an asynchronous job.
+  - The job is processed in the background.
+  - Retrieve the result by sending a `GET` request to the `/status` operation.
+  - Results are available for 30 minutes after completion.
+  - Ideal for long-running tasks and batch processing.

-For example:
+## Request input structure
+
+When submitting a job with `/runsync` or `/run`, your request must include a JSON object with the key `input` containing the parameters required by your worker's [handler function](/serverless/workers/handler-functions). For example:

 ```json
 {
@@ -41,7 +49,7 @@ For example:
 }
 ```

-The exact parameters inside the `input` object depend on your specific worker implementation. Check your worker's documentation for required and optional parameters.
+The exact parameters required in the `input` object depend on your specific worker implementation (e.g. `prompt` is commonly used for endpoints serving LLMs, but not all workers accept it). Check your worker's documentation for a list of required and optional parameters.

 ## Send requests from the console

@@ -81,6 +89,10 @@ Here's a quick overview of the operations available for queue-based endpoints:
 | `/purge-queue` | POST | Clear all pending jobs from the queue without affecting jobs already in progress. |
 | `/health` | GET | Monitor the operational status of your endpoint, including worker and job statistics. |

+
+If you need to create an endpoint that supports custom API paths, use [load balancing endpoints](/serverless/load-balancing/overview).
+
+
 ## Operation reference

 Below you'll find detailed explanations and examples for each operation using `cURL` and the Runpod SDK.

@@ -114,11 +126,23 @@ export ENDPOINT_ID="YOUR_ENDPOINT_ID"

 Synchronous jobs wait for completion and return the complete result in a single response. This approach works best for shorter tasks where you need immediate results, interactive applications, and simpler client code without status polling.

-* **Payload limit**: 20 MB
-* **Job availability**: Results are available for 60 seconds after completion
+`/runsync` requests have a maximum payload size of 20 MB.
+
+Results are available for 1 minute by default, but you can append `?wait=x` to the request URL to extend this up to 5 minutes, where `x` is the number of milliseconds to store the results, from 1000 (1 second) to 300000 (5 minutes).
+
+For example, `?wait=120000` will keep your results available for 2 minutes:
+
+```sh
+https://api.runpod.ai/v2/$ENDPOINT_ID/runsync?wait=120000
+```
+
+
+`?wait` is only available for `cURL` and standard HTTP request libraries.
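+
+For example, here's how you might send a `/runsync` request with `?wait` from Python (a minimal sketch; it assumes the `requests` package and the same API key and endpoint ID environment variables used in the examples below):
+
+```python
+import os
+import requests
+
+url = f"https://api.runpod.ai/v2/{os.getenv('ENDPOINT_ID')}/runsync"
+
+response = requests.post(
+    url,
+    params={"wait": 120000},  # Keep the result available for 2 minutes
+    headers={"Authorization": f"Bearer {os.getenv('RUNPOD_API_KEY')}"},
+    json={"input": {"prompt": "Hello, world!"}},
+)
+print(response.json())
+```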
+
+

 ```sh
 curl --request POST \
   --url https://api.runpod.ai/v2/$ENDPOINT_ID/runsync \
@@ -130,6 +154,7 @@ curl --request POST \
+

 ```python
 import runpod
 import os
@@ -140,7 +165,7 @@ endpoint = runpod.Endpoint(os.getenv("ENDPOINT_ID"))
 try:
     run_request = endpoint.run_sync(
         {"prompt": "Hello, world!"},
-        timeout=60,  # Timeout in seconds
+        timeout=60,  # Client timeout in seconds
     )
     print(run_request)
 except TimeoutError:
@@ -149,6 +174,7 @@
+

 ```javascript
 const { RUNPOD_API_KEY, ENDPOINT_ID } = process.env;
 import runpodSdk from "runpod-sdk";
@@ -160,6 +186,7 @@
 const result = await endpoint.runSync({
   "input": {
     "prompt": "Hello, World!",
   },
+  timeout: 60000, // Client timeout in milliseconds
 });

 console.log(result);
@@ -167,6 +195,7 @@
+

 ```go
 package main
@@ -199,7 +228,7 @@ func main() {
 			"prompt": "Hello World",
 		},
 	},
-	Timeout: sdk.Int(120),
+	Timeout: sdk.Int(60), // Client timeout in seconds
 }

 output, err := endpoint.RunSync(&jobInput)
@@ -212,10 +241,9 @@ func main() {
 }
 ```
+
-
-
-`/runsync` requests return a response as soon as the job is complete:
+`/runsync` returns a response as soon as the job is complete:

 ```json
 {
@@ -231,15 +259,14 @@ func main() {
   "status": "COMPLETED"
 }
 ```
-
-
 ### `/run`

 Asynchronous jobs process in the background and return immediately with a job ID. This approach works best for longer-running tasks that don't require immediate results, operations requiring significant processing time, and managing multiple concurrent jobs.

-* **Payload limit**: 10 MB
-* **Job availability**: Results are available for 30 minutes after completion
+`/run` requests have a maximum payload size of 10 MB.
+
+Job results are available for 30 minutes after completion.

@@ -341,23 +368,32 @@ func main() {
 ```

-
+
+
+`/run` returns a response with the job ID and status:
+
 ```json
 {
   "id": "eaebd6e7-6a92-4bb8-a911-f996ac5ea99d",
   "status": "IN_QUEUE"
 }
 ```
-
-
+
+To retrieve the job's results, use the `/status` operation.
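+
+For example, you might poll `/status` from Python until the job finishes (a minimal sketch using the `requests` library; the job ID is a placeholder for one returned by `/run`):
+
+```python
+import os
+import time
+import requests
+
+endpoint_id = os.getenv("ENDPOINT_ID")
+headers = {"Authorization": f"Bearer {os.getenv('RUNPOD_API_KEY')}"}
+job_id = "YOUR_JOB_ID"  # Returned in the /run response
+
+while True:
+    status = requests.get(
+        f"https://api.runpod.ai/v2/{endpoint_id}/status/{job_id}",
+        headers=headers,
+    ).json()
+    if status["status"] in ("COMPLETED", "FAILED", "CANCELLED"):
+        break
+    time.sleep(2)  # Wait between polls
+
+print(status.get("output"))
+```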

 ### `/status`

-Check the current state, execution statistics, and results of previously submitted jobs. The status endpoint provides the current job state, execution statistics like queue delay and processing time, and job output if completed.
+Check the current state, execution statistics, and results of previously submitted jobs. The status operation provides the current job state, execution statistics like queue delay and processing time, and job output if completed.
+
+
+You can configure time-to-live (TTL) for individual jobs by appending a TTL parameter to the request URL.
+
+For example, `https://api.runpod.ai/v2/$ENDPOINT_ID/status/YOUR_JOB_ID?ttl=6000` sets the TTL to 6 seconds.
+

-Replace `YOUR_JOB_ID` with the actual job ID you received in the response to the `/run` request.
+Replace `YOUR_JOB_ID` with the actual job ID you received in the response to the `/run` operation.

 ```sh
 curl --request GET \
@@ -476,9 +512,9 @@ func main() {
 ```

-
+
-`/status` requests return a JSON response with the job status (e.g. `IN_QUEUE`, `IN_PROGRESS`, `COMPLETED`, `FAILED`), and an optional `output` field if the job is completed:
+`/status` returns a JSON response with the job status (e.g. `IN_QUEUE`, `IN_PROGRESS`, `COMPLETED`, `FAILED`), and an optional `output` field if the job is completed:

 ```json
 {
@@ -493,12 +529,6 @@ func main() {
   "status": "COMPLETED"
 }
 ```
-
-
-
-You can configure time-to-live (TTL) for individual jobs by appending a TTL parameter: `https://api.runpod.ai/v2/$ENDPOINT_ID/status/YOUR_JOB_ID?ttl=6000` sets the TTL to 6 seconds.
-

 ### `/stream`

@@ -629,7 +659,14 @@ func main() {
 ```

-
+
+
+The maximum size for a single streamed payload chunk is 1 MB. Larger outputs will be split across multiple chunks.
+
+
+Streaming response format:
+
 ```json
 [
   {
@@ -654,12 +691,6 @@ func main() {
   }
 ]
 ```
-
-
-
-The maximum size for a single streamed payload chunk is 1 MB. Larger outputs will be split across multiple chunks.
-

 ### `/cancel`

@@ -794,15 +825,18 @@ func main() {
 ```

-
+
+
+`/cancel` requests return a JSON response with the status of the cancel operation:
+
 ```json
 {
   "id": "724907fe-7bcc-4e42-998d-52cb93e1421f-u1",
   "status": "CANCELLED"
 }
 ```
-
-
+

 ### `/retry`

@@ -826,7 +860,7 @@ You'll see the job status updated to `IN_QUEUE` when the job is retried:
 ```

-Job results expire after a set period. Asynchronous jobs (`/run`) results are available for 30 minutes, while synchronous jobs (`/runsync`) results are available for 1 minute. Once expired, jobs cannot be retried.
+Job results expire after a set period. Asynchronous job (`/run`) results are available for 30 minutes, while synchronous job (`/runsync`) results are available for 1 minute (up to 5 minutes with `?wait=x`). Once expired, jobs cannot be retried.

 ### `/purge-queue`

@@ -881,7 +915,11 @@ main();
 ```

-
+
+
+The `/purge-queue` operation only affects jobs waiting in the queue. Jobs already in progress will continue to run.
+

 `/purge-queue` requests return a JSON response with the number of jobs removed from the queue and the status of the purge operation:

 ```json
 {
   "removed": 2,
   "status": "completed"
 }
 ```
-
-
-
-`/purge-queue` operation only affects jobs waiting in the queue. Jobs already in progress will continue to run.
-

 ### `/health`

@@ -940,7 +972,7 @@ console.log(health);
 ```

-
+

 `/health` requests return a JSON response with the current status of the endpoint, including the number of jobs completed, failed, in progress, in queue, and retried, as well as the status of workers.

 ```json
 {
   "jobs": {
@@ -959,8 +991,6 @@ console.log(health);
   }
 }
 ```
-
-

 ## vLLM and OpenAI requests

@@ -1097,14 +1127,4 @@ Here are some common issues and suggested solutions:
 | Rate limiting | Too many requests in short time | Implement backoff strategy, batch requests when possible |
 | Missing results | Results expired | Retrieve results within expiration window (30 min for async, 1 min for sync) |

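+
+For example, you can implement the backoff strategy suggested above with a small wrapper (a minimal sketch; it assumes the `requests` library and retries only on rate-limit and transient server errors):
+
+```python
+import time
+import requests
+
+def post_with_backoff(url, headers, payload, max_retries=5):
+    """POST with exponential backoff on 429/5xx responses."""
+    response = None
+    for attempt in range(max_retries):
+        response = requests.post(url, headers=headers, json=payload)
+        if response.status_code not in (429, 500, 502, 503):
+            break
+        time.sleep(2 ** attempt)  # 1, 2, 4, 8, 16 seconds
+    return response
+```
+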
-Implementing proper error handling and retry logic will make your integrations more robust and reliable.
-
-## Related resources
-
-* [Endpoint configurations](/serverless/endpoints/endpoint-configurations)
-* [Python SDK for endpoints](/sdks/python/endpoints)
-* [JavaScript SDK for endpoints](/sdks/javascript/endpoints)
-* [Go SDK for endpoints](/sdks/go/endpoints)
-* [Handler functions](/serverless/workers/handler-functions)
-* [Local testing](/serverless/development/local-testing)
-* [GitHub integration](/serverless/workers/github-integration)
+Implementing proper error handling and retry logic will make your integrations more robust and reliable.
\ No newline at end of file
diff --git a/serverless/workers/handler-functions.mdx b/serverless/workers/handler-functions.mdx
index 150d8197..278b77d7 100644
--- a/serverless/workers/handler-functions.mdx
+++ b/serverless/workers/handler-functions.mdx
@@ -15,14 +15,16 @@ Before building a handler function, you should understand the structure of job r

 ```json
 {
-  "id": "A_RANDOM_JOB_IDENTIFIER",
-  "input": { "key": "value" }
+  "id": "eaebd6e7-6a92-4bb8-a911-f996ac5ea99d",
+  "input": {
+    "key": "value"
+  }
 }
 ```

-Your handler will access the `input` field to process the request data.
+`id` is a randomly generated unique identifier for the job, while `input` contains the data for your worker to process.

-To learn more about endpoint requests, see [Send requests](/serverless/endpoints/send-requests).
+To learn more about endpoint requests, see [Send API requests](/serverless/endpoints/send-requests).

 ## Basic handler implementation

@@ -33,13 +35,15 @@ import runpod

 def handler(job):
     job_input = job["input"]  # Access the input from the request
+    # Add your custom code here
+
     return "Your job results"

 runpod.serverless.start({"handler": handler})  # Required
 ```

-The handler takes a request, extracts the input, processes it, and returns a result. The `runpod.serverless.start()` function launches your serverless application with the specified handler.
+The handler extracts the input from the job request, processes it, and returns a result. The `runpod.serverless.start()` function launches your serverless application with the specified handler.

 ## Local testing

@@ -73,7 +77,7 @@ You can create several types of handler functions depending on the needs of your

 ### Standard handlers

-The simplest handler type, standard handlers process inputs synchronously and return results directly.
+The simplest handler type, standard handlers process inputs synchronously and return results when the job is complete.

 ```python
 import runpod
@@ -110,7 +114,7 @@ runpod.serverless.start({
 })
 ```

-By default, outputs from streaming handlers are only available at the `/stream` endpoint. Set `return_aggregate_stream` to `True` to make outputs available from the `/run` and `/runsync` endpoints as well.
+By default, outputs from streaming handlers are only available using the `/stream` operation. Set `return_aggregate_stream` to `True` to make outputs available from the `/run` and `/runsync` operations as well.

 ### Asynchronous handlers

@@ -228,7 +232,7 @@ def handler(job):
 runpod.serverless.start(
     {
         "handler": handler,  # Required: Specify the sync handler
-        "return_aggregate_stream": True,  # Optional: Aggregate results are accessible via /run endpoint
+        "return_aggregate_stream": True,  # Optional: Aggregate results are accessible via /run operation
     }
 )
 ```

@@ -269,8 +273,8 @@ A short list of best practices to keep in mind as you build your handler functio

 Be aware of payload size limits when designing your handler:

-* `/run` endpoint: 10 MB
-* `/runsync` endpoint: 20 MB
+* `/run` operation: 10 MB
+* `/runsync` operation: 20 MB

 If your results exceed these limits, consider stashing them in cloud storage and returning links instead.
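+
+For example, a handler might upload a large result to object storage and return a presigned URL instead (a minimal sketch; it assumes the `boto3` package, S3-compatible credentials in the environment, and an existing bucket):
+
+```python
+import boto3
+
+def stash_result(file_path: str, bucket: str, key: str) -> str:
+    """Upload a result file and return a temporary download link."""
+    s3 = boto3.client("s3")
+    s3.upload_file(file_path, bucket, key)
+    # A presigned URL lets the client download the result directly,
+    # keeping the job's return payload small.
+    return s3.generate_presigned_url(
+        "get_object",
+        Params={"Bucket": bucket, "Key": key},
+        ExpiresIn=3600,  # Link expires in 1 hour
+    )
+```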