Background actions 

What is a background action? 

Background actions in Gadget are actions that are enqueued to run soon, rather than immediately, and that operate independently of the main application flow. Both model and global actions can be run as background actions.

A simple example of enqueueing a background action:
await api.enqueue(api.someModelOrGlobalAction, {}, { queue: "my-queue" });

When to use a background action 

Background actions (similar to background jobs) are useful for long-running tasks that don't need to happen immediately.

1. Processing time-consuming operations

For tasks that take a considerable amount of time to complete, such as generating reports, processing images, or performing complex calculations, background actions allow these tasks to be offloaded from the main application flow.

2. Handling external API calls

When interacting with external services or APIs, network latency and rate limiting can introduce delays. Background actions are ideal for managing these calls, especially when the outcome does not need to be immediately presented to the user. They also allow for efficient retry mechanisms in case of failures.

3. Scheduled tasks

For tasks that need to run at specific intervals, such as a data sync, email notifications, or cleanup operations, background actions can be scheduled to run independently of user interactions, ensuring that these operations do not impact the overall performance of the application.

4. Batch processing

Performing operations on large datasets, like batch updates, exports, or imports, can be resource-intensive. By using background actions, these operations can be processed in smaller, manageable chunks without blocking any main functionality within your application.

How to add a background action 

To add a background action to your Gadget application, you define a model or global action, and then enqueue it to run in the background. Actions are serverless JavaScript functions that run in your Gadget backend, with access to your database and the rest of Gadget's framework. Actions can only be run in the background when they have an API trigger.

For example, let's say we have a global action defined in api/actions/sendAnEmail.ts:

export async function run({ params, emails }) {
  await emails.send({
    to:,
    subject: "Hello from Gadget!",
    body: "an example email",
  });
}

export const params = {
  to: {
    type: "string",
    required: true,
  },
};

You can run this action in the background by making an API call and enqueuing it:

// run in the background
await api.enqueue(api.sendAnEmail, { to: "[email protected]" });

Once you've enqueued your action, Gadget will run it as soon as it can in your serverless hosting environment. You can view the status and outcome of this background action in the Queues dashboard within Gadget.

Enqueuing an action to run in the background 

Actions can be enqueued from either the backend or the frontend, using the api.enqueue function or the useEnqueue React hook.

api.enqueue takes three arguments:

  1. The action to enqueue.
  2. The input for the action, based on your action's needs.
  3. Options for the enqueue method.

Let's look at a couple of examples of enqueueing a background action from both the backend and the frontend.

Enqueue an action from the backend 

Any model or global action can be enqueued to run in the background.

In our example above we used a global action, but we can also use a model action like the one below:

export async function run({ params, record, logger, api, connections }) {
  const res = await fetch("", {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    body: JSON.stringify(record),
  });

  if (res.ok) {
    const responseMessage = await res.text();
{ responseMessage }, "response from endpoint");
  } else {
    throw new Error("Error occurred");
  }
}

We can then enqueue the action in the background in either a model or global action.

export async function onSuccess({ params, record, logger, api, connections }) {
  await api.enqueue(api.student.sendData, { id: params.studentId });
}

Or within an HTTP route.

export default async function route({ request, reply, api, logger, connections }) {
  const studentId = JSON.parse(request.body).studentId;
  await api.enqueue(api.student.sendData, { id: studentId });
  await reply.code(200).send();
}

Enqueue an action from the frontend 

Actions can be enqueued directly from your frontend code with the useEnqueue React hook.

export function UpdateUserProfileButton(props: {
  userId: string,
  newProfileData: { age: number, location: string },
}) {
  // useEnqueue hook is used to enqueue an update action
  const [{ error, fetching, data, handle }, enqueue] = useEnqueue(api.user.update);

  const onClick = () =>
    enqueue(
      // pass the params for the action as the first argument
      {
        id: props.userId,
        changes: props.newProfileData,
      },
      // optionally pass options configuring the background action as the second argument
      {
        retries: 0,
      }
    );

  // Render the button with feedback based on the enqueue state
  return (
    <>
      {error && <>Failed to enqueue user update: {error.toString()}</>}
      {fetching && <>Enqueuing update action...</>}
      {handle && <>Enqueued update action with background action id={}</>}
      <button onClick={onClick}>Update Profile</button>
    </>
  );
}

Scheduling a background action 

To schedule a background action, you can pass the startAt option when enqueueing an action. This allows you to specify when the action should execute in the background. The startAt option must be the scheduled time, formatted as an ISO string.

Here's a basic example where we pass the ISO string directly:

Using a valid ISO 8601 string to schedule the `change` global action
export async function run({ record }) {
  await api.enqueue(
    api.change,
    {},
    // Scheduled to start at noon on April 3, 2024, UTC
    { startAt: "2024-04-03T12:00:00.000Z" }
  );
}

Or we can create the date using new Date():

Using a JS Date to schedule the `change` global action
export async function run({ record }) {
  await api.enqueue(
    api.change,
    {},
    // remember to format the created date as an ISO string like below
    { startAt: new Date( + 1000 * 60 * 10).toISOString() } // Starts in 10 minutes
  );
}

Retrying on failure 

Retries are crucial for handling failures in background actions, especially for unreliable operations like network requests.

By default, background actions are retried up to 6 times using an exponential back-off strategy. This strategy increases the delay between retries, allowing for intermittent issues to be resolved. The retry behavior can be customized by specifying the number of retries and the delay strategy upon enqueueing an action.
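
To illustrate how an exponential back-off strategy spaces out attempts, here is a small sketch that computes the delay before each retry. The doubling formula and the 1-second starting interval are illustrative assumptions, not Gadget's internal defaults:

```javascript
// illustrative exponential back-off: the delay doubles after each failed attempt.
// initialInterval (in ms) is an assumed starting value, not Gadget's internal default.
function backoffDelays(retryCount, initialInterval) {
  const delays = [];
  for (let attempt = 0; attempt < retryCount; attempt++) {
    delays.push(initialInterval * 2 ** attempt);
  }
  return delays;
}

// 6 retries starting from a 1-second interval
console.log(backoffDelays(6, 1000)); // → [ 1000, 2000, 4000, 8000, 16000, 32000 ]
```

Spacing retries out like this gives transient failures (a flaky network, a briefly unavailable API) time to resolve before the next attempt.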

For details on configuring how many times and how quickly retries are performed, see the retries option in the API reference.


We can specify that the action should not be retried at all:

await api.enqueue(api.publish, {}, { retries: 0 });

Retry this action once:

// retry this action once
await api.enqueue(api.publish, {}, { retries: 1 });

Retry this action once, but wait 30 minutes before doing the retry:

// retry this action once, but wait 30 minutes before doing the retry
await api.enqueue(
  api.publish,
  {},
  { retries: { retryCount: 1, initialInterval: 30 * 60 * 1000 } }
);

Queuing and concurrency control 

To manage the execution of a background action with specific concurrency limits, it can be placed in a dedicated queue. By default, actions are added to a global queue with no concurrency restrictions.

For targeted control:

Single concurrency: To place the action in a named queue that allows only one action to run at a time, simply provide the name of the queue as a string. This implicitly sets the maximum concurrency to 1.

await api.enqueue(
  api.publish,
  {},
  // passing a queue name as a string implicitly sets max concurrency to 1
  { queue: "dedicated-queue" }
);

Custom concurrency: To define a custom concurrency level, pass an object with two properties: the queue's name (string) and the desired maxConcurrency (number). This enables precise control over how many instances of the action can be executed simultaneously within the specified queue.

await api.enqueue(
  api.publish,
  {},
  // setting a queue with custom max concurrency
  { queue: { name: "dedicated-queue", maxConcurrency: 4 } }
);

When to define a targeted concurrency control 

  1. Sequential processing: Consider a scenario where a user accidentally triggers the same action twice, such as upgrading a plan. Without proper concurrency management, both actions could proceed simultaneously, potentially resulting in double charges or duplicate operations. Sequential processing ensures that such operations are executed one after the other, eliminating the risk of duplication.

  2. Resource-specific concurrency: In many cases, operations need to be serialized per resource rather than application-wide. For example, while it's necessary to prevent simultaneous upgrades for a single customer's account, different customers should be able to upgrade their plans concurrently without waiting for each other. This approach ensures efficiency and isolation, preventing one customer's actions from impacting another's.

  3. Interacting with rate-limited systems: When your application interacts with external systems that have rate limits (e.g., APIs), managing concurrency becomes crucial to avoid exceeding these limits. Properly configured concurrency settings ensure that your application respects these limits, maintaining a good standing with the service providers and ensuring reliable integration.

Configuring concurrency control to handle API rate limits 

You can effectively manage rate limits by configuring the maxConcurrency when enqueueing an action. Adjusting this will control the number of background actions that can run simultaneously, indirectly influencing the request rate.

To align with a third-party API's rate limit, you must calculate the appropriate maxConcurrency based on the execution time and the rate limit itself.

Formula to calculate needed concurrency

maxConcurrency = Rate limit (actions/requests per second) × Average action execution time (in seconds)

Best practice

  1. Determine the average execution time of your action (e.g., 200ms).

  2. Calculate the maximum allowable actions per second based on the third-party's rate limit (e.g., 5 requests per second).

  3. Adjust maxConcurrency to ensure that executing jobs concurrently will not exceed the calculated rate.

For example, let's say an action takes about 1 second to execute, with an API rate limit of 5 requests per second, you would calculate the maxConcurrency to be 5. This setting is optimal because it aligns precisely with the rate limit, ensuring you fully utilize the available capacity without the risk of exceeding the limit.

But in another case, imagine a scenario where each action your system performs completes in approximately 500 milliseconds or 0.5 seconds. If the API you're integrating with allows 5 requests per second, you'd calculate your maxConcurrency as 2.5. To avoid surpassing the rate limit, you round down to 2. This adjustment ensures you're using the API efficiently while staying within allowed usage boundaries.
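
Both worked examples above follow directly from the formula. As a sketch, the calculation can be written as a small helper (the function name is illustrative, not a Gadget API):

```javascript
// maxConcurrency = rate limit (requests/sec) × average action execution time (sec),
// rounded down so the sustained request rate never exceeds the limit
function maxConcurrencyFor(rateLimitPerSecond, avgExecutionSeconds) {
  return Math.max(1, Math.floor(rateLimitPerSecond * avgExecutionSeconds));
}

console.log(maxConcurrencyFor(5, 1));   // → 5 (1-second actions against a 5 req/s limit)
console.log(maxConcurrencyFor(5, 0.5)); // → 2 (0.5-second actions: 2.5, rounded down)
```

Rounding down is the safe choice: rounding 2.5 up to 3 would allow a sustained rate of 6 requests per second, exceeding the limit.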

Background action status 

Within the Queues dashboard in Gadget, you can observe the status and timeline of your background actions.

[Screenshot: Gadget's Queues dashboard, where you can observe background action status]

[Screenshot: a background action's timeline]

The status of your running action can be grouped into one of the below categories:

Scheduled: Was enqueued with a schedule of when to run.

Waiting: Either added without a schedule or the scheduled time has arrived.

Running: Is currently executing.

Retrying: Has run at least once and failed but still has retries remaining.

Failed job: Has exhausted all retry attempts and failed every time.

Complete: Has run and completed successfully.

Bulk enqueuing 

Gadget supports running many instances of the same action on a model with bulk actions. Bulk actions can be run in the foreground, or enqueued to run each action in the background.

You can invoke an action in the foreground in bulk by calling the .bulk<Action> function:

// create 3 widgets in bulk in the foreground
const widgets = await api.widget.bulkCreate([
  { name: "foo" },
  { name: "bar" },
  { name: "baz" },
]);

And you can enqueue an action in bulk by calling api.enqueue with the foreground bulk action function:

// create 3 widgets in bulk in the background
const handles = await api.enqueue(api.widget.bulkCreate, [
  { name: "foo" },
  { name: "bar" },
  { name: "baz" },
]);

// wait for the result of the first create action
const widget = await handles[0].result();

Each element of your bulk action will be enqueued as one individual background action with its own id, status, and retry schedule. In the above example, this means 3 different widget.create actions will be enqueued and executed. This means that some of the enqueued actions can succeed right away and others can fail and be retried.

Enqueuing actions in bulk returns an array of handle objects for working with the created background actions. If you want to wait for all the actions in your bulk enqueue to complete, you must await the result of each individual returned handle.

const handles = await api.enqueue(api.widget.bulkCreate, widgets);

// wait for all enqueued actions to complete
const results = await Promise.all( => handle.result()));

Similarly to foreground actions, each model's actions are all available in bulk in the foreground and background:

// update 2 widgets in bulk in the background
const updateHandles = await api.enqueue(api.widget.bulkUpdate, [
  { id: 1, name: "bar" },
  { id: 2, name: "baz" },
]);

// delete 3 widgets in bulk in the background
const deleteHandles = await api.enqueue(api.widget.bulkDelete, ["1", "2", "3"]);

Bulk background action options 

Like individual background actions, enqueuing bulk actions supports the full list of background action options: queue, id, startAt, and retries. Each of these options will apply to each individually enqueued background action, and each action's options won't affect the others.

For retries, each enqueued background action from the bulk set will get its own set of retries according to the options you specify.

// bulk create some widgets, and retry each widget create action up to 3 times
const handles = await api.enqueue(api.widget.bulkCreate, widgets, { retries: 3 });

For queue, each enqueued background action from the bulk set will be added to the same concurrency queue, and obey the maximum concurrency you specify. The concurrency limit applies to each individually enqueued background action, not the bulk set as a whole.

// bulk create some widgets in the `user-10` concurrency queue, and create at most 1 widget at a time
const handles = await api.enqueue(api.widget.bulkCreate, widgets, {
  queue: { name: "user-10", maxConcurrency: 1 },
});

For id, the passed id option is suffixed with the index of each enqueued background action to create unique identifiers for each. For example, if you enqueue 3 create widget actions, you'll get back 3 handles, each with a unique ID:

// bulk create some widgets with a shared id prefix for each background action
const handles = await api.enqueue(
  api.widget.bulkCreate,
  [{ name: "foo" }, { name: "bar" }, { name: "baz" }],
  {
    id: "test-action",
  }
);

handles[0].id; // => test-action-0
handles[1].id; // => test-action-1
handles[2].id; // => test-action-2

Gadget always appends the index of the item submitted with a bulk action to ensure action id uniqueness.
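
The suffixing scheme can be mirrored with a one-line sketch; the helper name here is hypothetical, shown only to make the id shape concrete:

```javascript
// mirror Gadget's scheme of suffixing a supplied id with each item's index
function bulkActionIds(baseId, itemCount) {
  return Array.from({ length: itemCount }, (_, index) => `${baseId}-${index}`);
}

console.log(bulkActionIds("test-action", 3));
// → [ 'test-action-0', 'test-action-1', 'test-action-2' ]
```

Because the index is always appended, every background action in the bulk set gets a unique, predictable identifier derived from the single id you passed.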

When to use bulk enqueueing 

Bulk enqueuing is most useful as a performance optimization to submit a large chunk of actions to your app all at once, in one HTTP call. It can be slow to enqueue jobs in a loop, so if you're experiencing slow speeds, switch to enqueuing in bulk.

const widgets = [{ name: "foo" }, { name: "bar" }, { name: "baz" }];

// slow version: this works, but runs in serial and can get slow due to the overhead of each API call to enqueue the action
for (const widget of widgets) {
  await api.enqueue(api.widget.create, widget);
}

// fast version: this submits all your actions at once in a single API call and enqueues them all in parallel
await api.enqueue(api.widget.bulkCreate, widgets);

Background action limits 

There are some current limits on background actions that developers should be aware of:

  • Each queue has an upper maxConcurrency limit of 100
  • The combined maxConcurrency across all queues within a single application is 500
  • A maximum enqueue rate of 80 background actions per second per Gadget environment

Maximum background action enqueue rate 

You can enqueue a maximum of 80 background actions per second per Gadget environment. This includes actions that have been bulk enqueued.

Your apps can temporarily "burst" up to 3x past this limit, to a maximum action enqueue rate of 240 actions per second, but sustained rates above this limit will result in GGT_TOO_MANY_REQUESTS errors.

If you expect to enqueue more than 80 actions per second, this limit can be raised. Please contact Gadget support to discuss your use case!

Max enqueue rate and bursting

Background actions use a leaky bucket algorithm to enforce the maximum enqueue rate. This means that the upper limit is a sustained rate, and makes bursting over this limit for short periods possible.
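
To show how a leaky bucket permits bursts while capping the sustained rate, here is a minimal sketch. The class, its drain rate, and its capacity are illustrative numbers chosen to mirror the documented 80/s sustained and 240-action burst limits, not Gadget's actual internals:

```javascript
// minimal leaky-bucket sketch: the bucket drains at a fixed sustained rate and
// holds a bounded burst capacity; requests that would overflow are rejected.
class LeakyBucket {
  constructor(drainPerSecond, capacity) {
    this.drainPerSecond = drainPerSecond; // sustained enqueue rate
    this.capacity = capacity; // burst headroom
    this.level = 0;
    this.lastTime = 0;
  }

  // try to enqueue one action at time `now` (in seconds); returns false when over the limit
  tryEnqueue(now) {
    // drain the bucket for the time elapsed since the last request
    this.level = Math.max(0, this.level - (now - this.lastTime) * this.drainPerSecond);
    this.lastTime = now;
    if (this.level + 1 > this.capacity) return false; // would exceed the burst limit
    this.level += 1;
    return true;
  }
}

// 80 actions/sec sustained, bursting up to 240 at once (illustrative numbers)
const bucket = new LeakyBucket(80, 240);
let accepted = 0;
for (let i = 0; i < 300; i++) {
  if (bucket.tryEnqueue(0)) accepted++; // 300 actions submitted at the same instant
}
console.log(accepted); // → 240
```

In this simulation, 240 of the 300 simultaneous submissions fit the burst capacity and the remaining 60 are rejected; once the bucket drains at the sustained rate, capacity becomes available again.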

Below is an example of bulk-creating records for a custom model in groups of 80. A timeout await new Promise(r => setTimeout(r, 1000)); is used to make sure that the limit is not exceeded.

import { BulkEnqueueSampleGlobalActionContext } from "gadget-server";

/**
 * @param { BulkEnqueueSampleGlobalActionContext } context
 */
export async function run({ params, api }) {
  // the background action enqueue limit
  const LIMIT = 80;

  // records to be created
  const myRecords = [];
  for (let i = 0; i < params.numRecords; i++) {
    myRecords.push({ value: i });
  }

  // loop over the array of records, enqueueing only 80 at a time so limits aren't exceeded
  while (myRecords.length > 0) {
    // take a batch of up to 80 records to enqueue
    const section = myRecords.splice(0, LIMIT);
    // bulk enqueue the create action
    await api.enqueue(api.custom.bulkCreate, section);
    // delay for a second, don't exceed rate limits!
    await new Promise((r) => setTimeout(r, 1000));
  }
}

export const params = {
  numRecords: { type: "number" },
};