Background actions
What is a background action?
Background actions in Gadget are actions that are enqueued to run soon, rather than immediately, and that execute independently of your main application logic. Both model and global actions can be run as background actions.
await api.enqueue(api.someModelOrGlobalAction, {}, { queue: "my-queue" });
When to use a background action
Background actions (similar to background jobs) are useful for long-running tasks that don't need to happen immediately.
1. Processing time-consuming operations
For tasks that take a considerable amount of time to complete, such as generating reports, processing images, or performing complex calculations, background actions allow these tasks to be offloaded from the main application flow (see the sketch after this list).
2. Handling external API calls
When interacting with external services or APIs, network latency and rate limiting can introduce delays. Background actions are ideal for managing these calls, especially when the outcome does not need to be immediately presented to the user. They also allow for efficient retry mechanisms in case of failures.
3. Scheduled tasks
For tasks that need to run at specific intervals, such as a data sync, email notifications, or cleanup operations, background actions can be scheduled to run independently of user interactions, ensuring that these operations do not impact the overall performance of the application.
4. Batch processing
Performing operations on large datasets, like batch updates, exports, or imports, can be resource-intensive. By using background actions, these operations can be processed in smaller, manageable chunks without blocking any main functionality within your application.
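As a rough sketch of offloading work like this, the route below enqueues a hypothetical generateReport global action (with a made-up reportId param) instead of running it inline, so the HTTP request can respond immediately:
import { RouteHandler } from "gadget-server";

// sketch: assumes a global action api.generateReport that accepts a reportId param
const route: RouteHandler<{ Body: { reportId: string } }> = async ({ request, reply, api }) => {
  // enqueue the time-consuming work instead of running it inline
  const handle = await api.enqueue(api.generateReport, { reportId: request.body.reportId });

  // respond right away; the report is generated in the background
  await reply.code(202).send({ backgroundActionId: handle.id });
};

export default route;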
How to add a background action
To add a background action to your Gadget application, you define a model or global action, and then enqueue it to run in the background. Actions are serverless JavaScript/TypeScript functions that run in your Gadget backend, with access to your database and the rest of Gadget's framework. Actions can only be run in the background when they have an API trigger.
For example, let's say we have a global action defined in api/actions/sendAnEmail.ts:
export const run: ActionRun = async ({ params, emails }) => {
  await emails.sendMail({
    to: params.to,
    subject: "Hello from Gadget!",

    // Email body
    html: "an example email",
  });
};

export const params = {
  to: {
    type: "string",
    required: true,
  },
};
You can run this action in the background by making an API call and enqueuing it:
await api.enqueue(api.sendAnEmail, { to: "[email protected]" });
Once you've enqueued your action, Gadget will run it as soon as it can in your serverless hosting environment. You can view the status and outcome of this background action in the Queues dashboard within Gadget.
Enqueuing an action to run in the background
Enqueuing a background action can be done from either the backend or the frontend, using the api.enqueue function or the useEnqueue React hook.
Enqueues take three inputs (annotated in the example after this list):
- The action to enqueue.
- The input for the action, based on your action's needs.
- Options for the enqueue method.
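For instance, the sendAnEmail action from earlier could be enqueued with all three arguments spelled out (the queue name and retry count here are purely illustrative):
await api.enqueue(
  // 1. the action to enqueue
  api.sendAnEmail,
  // 2. the input for the action
  { to: "test@example.com" },
  // 3. options for the enqueue
  { queue: "emails", retries: 1 }
);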
Let's look at a couple of examples of enqueuing background actions from both the backend and the frontend.
Enqueue an action from the backend
Any model or global action can be enqueued to run in the background.
In the example above we used a global action, but we can also use a model action, like the student model's sendData action below:
export const run: ActionRun = async ({
  params,
  record,
  logger,
  api,
  connections,
}) => {
  const res = await fetch("https://sample-api.com/api/v1/test", {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    body: JSON.stringify(record),
  });

  if (res.ok) {
    const responseMessage = await res.text();
    logger.info({ responseMessage }, "response from endpoint");
  } else {
    throw new Error("Error occurred");
  }
};
We can then enqueue the action to run in the background from either a model or global action.
export const onSuccess: ActionOnSuccess = async ({
  params,
  record,
  logger,
  api,
  connections,
}) => {
  await api.enqueue(api.student.sendData, { id: params.studentId });
};
Or from within an HTTP route:
import { RouteHandler } from "gadget-server";

const route: RouteHandler<{ Body: { studentId: string } }> = async ({
  request,
  reply,
  api,
  logger,
  connections,
}) => {
  const { studentId } = request.body;
  await api.enqueue(api.student.sendData, { id: studentId });
  await reply.code(200).send();
};

export default route;
Enqueue an action from the frontend
Actions can be enqueued directly from your frontend code with the useEnqueue React hook.
import { useEnqueue } from "@gadgetinc/react";
import { api } from "../api"; // your app's generated API client (path may vary)

export function UpdateUserProfileButton(props: {
  userId: string;
  newProfileData: { age: number; location: string };
}) {
  // useEnqueue hook is used to enqueue an update action
  const [{ error, fetching, data, handle }, enqueue] = useEnqueue(api.user.update);

  const onClick = () => {
    return enqueue(
      // pass the params for the action as the first argument
      {
        id: props.userId,
        changes: props.newProfileData,
      },
      // optionally pass options configuring the background action as the second argument
      {
        retries: 0,
      }
    );
  };

  // Render the button with feedback based on the enqueue state
  return (
    <>
      {error && <>Failed to enqueue user update: {error.toString()}</>}
      {fetching && <>Enqueuing update action...</>}
      {data && <>Enqueued update action with background action id={handle.id}</>}
      <button onClick={onClick}>Update Profile</button>
    </>
  );
}
Scheduling a background action
To schedule a background action, you can pass the startAt option when enqueueing an action. This lets you specify when the action should execute in the background. The startAt option must be the scheduled time, formatted as an ISO string.
Here's a basic example where we pass the ISO string directly:
export const run: ActionRun = async ({ record, api }) => {
  await api.enqueue(
    api.change,
    {},
    // Scheduled to start at noon on April 3, 2024, UTC
    { startAt: "2024-04-03T12:00:00.000Z" }
  );
};
Or we could create the date using new Date():
export const run: ActionRun = async ({ record, api }) => {
  await api.enqueue(
    api.change,
    {},
    // remember to format the created date as an ISO string like below
    {
      // Starts in 10 minutes
      startAt: new Date(Date.now() + 1000 * 60 * 10).toISOString(),
    }
  );
};
Retrying on failure
Retries are crucial for handling failures in background actions, especially for unreliable operations like network requests.
By default, background actions are retried up to 6 times using an exponential back-off strategy. This strategy increases the delay between retries, allowing for intermittent issues to be resolved. The retry behavior can be customized by specifying the number of retries and the delay strategy upon enqueueing an action.
For more information on configuring how many times and how quickly retries are attempted, see the reference documentation.
Examples
We could specify that this action should not be retried at all:
await api.enqueue(api.publish, {}, { retries: 0 });
Retry this action once:
// retry this action once
await api.enqueue(api.publish, {}, { retries: 1 });
Retry this action once, but wait 30 minutes before doing the retry:
// retry this action once, but wait 30 minutes before doing the retry
await api.enqueue(
  api.publish,
  {},
  { retries: { retryCount: 1, initialInterval: 30 * 60 * 1000 } }
);
Awaiting the result
You can await the final result or error of a background action from the client side or server side with the returned BackgroundActionHandle object. Background action handles are returned by api.enqueue calls, and can also be created in other places if you know the id of the background action you'd like to get a handle for.
For example, you can enqueue a publish action and then await the result:
// enqueue an action and get a handle back
const handle = await api.enqueue(api.publish, {});
// await the result of the action
const result = await handle.result();
Background actions that succeed will return the action's result, and background actions that fail all their retries will throw the final failure error when .result() is called. If an action fails some attempts but then succeeds, .result() will return the result of the successful attempt.
Since await handle.result() waits for an attempt to complete, it can potentially take a long time. If the action is failing and has a slow retry schedule, it may be days before the result is available. Conversely, if you know you have a fast schedule or a limited number of retries, handle.result() will return quickly.
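For example, here's a minimal sketch (assuming a publish global action, as in the other examples) of handling both outcomes when awaiting a result:
const handle = await api.enqueue(api.publish, {});

try {
  // resolves with the action's result once an attempt succeeds
  const result = await handle.result();
  console.log("publish succeeded", result);
} catch (error) {
  // throws the final failure error once all retries are exhausted
  console.error("publish failed after all retries", error);
}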
Subscribing to background action results is supported client-side using the api object.
Selecting properties of the result
If you've enqueued a model action in the background, you can select the fields you desire, including related fields, when accessing the action's results.
You can pass the select option to handle.result() to retrieve specific fields of the record returned from a background action:
const handle = await api.enqueue(api.widget.update, { id: 1, name: "foo" });

const record = await handle.result({
  select: {
    id: true,
    name: true,
    gizmos: { edges: { node: { id: true } } },
  },
});

// record.id will have the resulting record id
// record.gizmos will have the record's gizmos relationship loaded
Getting handles from an id
You can create a BackgroundActionHandle object anywhere you have an api object, if you know which action was invoked and the id of the enqueued background action.
For example, you can get a handle for a background action that was enqueued in the past like this:
// get a handle for a background action that was enqueued in the past
const handle = api.handle(api.publish, "app-job-12345");
// await the action succeeding and then retrieve the result of the action
const result = await handle.result();
If you know you're likely to need an action's result in the future, a common pattern is to store the enqueued action id on a record of a model. For example, we can store the id of the background action on a widget record like this:
import { save } from "gadget-server";

export const onSuccess: ActionOnSuccess = async ({
  params,
  record,
  logger,
  api,
  connections,
}) => {
  const handle = await api.enqueue(api.publish, { id: params.id });
  // record the enqueued action id on the widget
  record.backgroundActionId = handle.id;
  await save(record);
};
Then, later, you can use this stored id to boot up a handle, say on the frontend:
const widget = await api.widget.findFirst();

// get a handle for the background action
const handle = api.handle(api.publish, widget.backgroundActionId);

// await the result of the action
const result = await handle.result();
Handle result performance and billing
Accessing the result of a background handle uses an efficient websocket-based subscription protocol to await results from the backend. Subscribing to a handle's result doesn't incur charges on its own, but if you subscribe within a server-side request or action, that server-side request will be charged at the normal usage rates. For example, if you await handle.result() in the frontend of your application, you won't be charged any extra request time for opening the websocket and listening until the action has completed. But if you await handle.result() in a global action, you'll be billed for the time that global action ran.
Handle result permissions
To access the result of a background action, the requesting api client must have the same identity as the client that ran the action itself. This means, for example, that unauthenticated users cannot access the results of actions enqueued by other unauthenticated users or by authenticated users.
If you want to make the result of a background action accessible to more users, you can store the result on a data model and use the normal access control system to grant access to that model, as sketched below.
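As a rough sketch, assuming a hypothetical report model and a hypothetical generateReport global action, a global action could persist the result onto a record so that any role with read access to report can see it:
export const run: ActionRun = async ({ params, api }) => {
  // enqueue the work and wait for it to finish
  const handle = await api.enqueue(api.generateReport, { reportId: params.reportId });
  const result = await handle.result();

  // persist the outcome on a model record; normal model permissions now control who can read it
  await api.report.update(params.reportId, { output: JSON.stringify(result) });
};

export const params = {
  reportId: { type: "string", required: true },
};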
Queuing and concurrency control
To manage the execution of a background action with specific concurrency limits, it can be placed in a dedicated queue. By default, actions are added to a global queue with no concurrency restrictions.
For targeted control:
Single concurrency: To place the action in a named queue that allows only one action to run at a time, simply provide the queue's name, either as a string or as an object containing only a name. This implicitly sets the maximum concurrency to 1.
await api.enqueue(
  api.publish,
  {},
  // setting a queue with default max concurrency set to 1
  { queue: { name: "dedicated-queue" } }
);
Custom concurrency: To define a custom concurrency level, pass an object with two properties: the queue's name (string) and the desired maxConcurrency (number). This enables precise control over how many instances of the action can be executed simultaneously within the specified queue.
await api.enqueue(
  api.publish,
  {},
  // setting a queue with custom max concurrency
  { queue: { name: "dedicated-queue", maxConcurrency: 4 } }
);
When to define a targeted concurrency control
Sequential processing: Consider a scenario where a user accidentally triggers the same action twice, such as upgrading a plan. Without proper concurrency management, both actions could proceed simultaneously, potentially resulting in double charges or duplicate operations. Sequential processing ensures that such operations are executed one after the other, eliminating the risk of duplication.
Resource-specific concurrency: In many cases, operations need to be serialized per resource rather than application-wide. For example, while it's necessary to prevent simultaneous upgrades for a single customer's account, different customers should be able to upgrade their plans concurrently without waiting for each other. This approach ensures efficiency and isolation, preventing one customer's actions from impacting another's (see the sketch below).
Interacting with rate-limited systems: When your application interacts with external systems that have rate limits (e.g., APIs), managing concurrency becomes crucial to avoid exceeding these limits. Properly configured concurrency settings ensure that your application respects these limits, maintaining a good standing with the service providers and ensuring reliable integration.
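For example, here is a minimal sketch of resource-specific concurrency, assuming a hypothetical upgradePlan action and a customer record in scope: the queue name includes the customer's id, so each customer gets their own single-concurrency queue while different customers proceed in parallel.
// each customer gets their own queue, so their upgrades run one at a time,
// but different customers' upgrades don't wait on each other
await api.enqueue(
  api.upgradePlan,
  { customerId: customer.id },
  { queue: { name: `upgrade-plan-${customer.id}`, maxConcurrency: 1 } }
);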
Configuring concurrency control to handle API rate limits
You can effectively manage rate limits by configuring maxConcurrency when enqueueing an action. Adjusting this controls the number of background actions that can run simultaneously, indirectly influencing the request rate.
To align with a third-party API's rate limit, you must calculate the appropriate maxConcurrency based on the execution time and the rate limit itself.
Formula to calculate needed concurrency
maxConcurrency = rate limit (actions/requests per second) × average action execution time (in seconds)
Best practice
1. Determine the average execution time of your action (e.g., 200ms).
2. Calculate the maximum allowable actions per second based on the third-party's rate limit (e.g., 5 requests per second).
3. Adjust maxConcurrency to ensure that executing jobs concurrently will not exceed the calculated rate.
For example, say an action takes about 1 second to execute and the API rate limit is 5 requests per second. You would calculate the maxConcurrency as 5 × 1 = 5. This setting is optimal because it aligns precisely with the rate limit, fully using the available capacity without the risk of exceeding it.
In another case, imagine each action completes in approximately 500 milliseconds (0.5 seconds). If the API allows 5 requests per second, you'd calculate your maxConcurrency as 5 × 0.5 = 2.5. To avoid surpassing the rate limit, round down to 2. This adjustment ensures you're using the API efficiently while staying within the allowed usage boundaries.
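A small sketch of that calculation (the rate limit and timing numbers here are just placeholders):
// placeholder numbers: 5 requests per second allowed, ~0.5s per action
const rateLimitPerSecond = 5;
const averageActionSeconds = 0.5;

// round down so concurrent executions can't exceed the allowed rate
const maxConcurrency = Math.max(1, Math.floor(rateLimitPerSecond * averageActionSeconds));

await api.enqueue(
  api.publish,
  {},
  { queue: { name: "rate-limited-api", maxConcurrency } }
);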
Background action status
Within the Queues dashboard in Gadget, you can observe the status and timeline of your background actions.
The status of a background action falls into one of the categories below:
Scheduled: Was enqueued with a schedule of when to run.
Waiting: Either added without a schedule or the scheduled time has arrived.
Running: Is currently executing.
Retrying: Has run at least once and failed but still has retries remaining.
Failed job: Has exhausted all retry attempts and failed every time.
Complete: Has finished running successfully.
Bulk enqueuing
Gadget supports running many instances of the same action on a model with bulk actions. Bulk actions can be run in the foreground, or enqueued to run each action in the background.
You can invoke an action in the foreground in bulk by calling the .bulk<Action> function:
// create 3 widgets in bulk in the foreground
const widgets = await api.widget.bulkCreate([
  { name: "foo" },
  { name: "bar" },
  { name: "baz" },
]);
And you can enqueue an action in bulk by calling api.enqueue with the foreground bulk action function:
// create 3 widgets in bulk in the background
const handles = await api.enqueue(api.widget.bulkCreate, [
  { name: "foo" },
  { name: "bar" },
  { name: "baz" },
]);

// wait for the result of the first create action
const widget = await handles[0].result();
Each element of your bulk action will be enqueued as one individual background action with its own id, status, and retry schedule. In the above example, this means 3 different widget.create actions will be enqueued and executed, so some of the enqueued actions can succeed right away while others fail and are retried.
Enqueuing actions in bulk returns an array of handle objects for working with the created background actions. If you want to wait for all the actions in your bulk enqueue to complete, you must await the result of each individual returned handle.
const handles = await api.enqueue(api.widget.bulkCreate, widgets);

// wait for all enqueued actions to complete
const createdWidgets = await Promise.all(
  handles.map(async (handle) => await handle.result())
);
As with foreground actions, each of a model's actions is available in bulk form in both the foreground and the background:
// update 2 widgets in bulk in the background
const updateHandles = await api.enqueue(api.widget.bulkUpdate, [
  { id: 1, name: "bar" },
  { id: 2, name: "baz" },
]);

// delete 3 widgets in bulk in the background
const deleteHandles = await api.enqueue(api.widget.bulkDelete, ["1", "2", "3"]);
Bulk background action options
Like individual background actions, enqueuing bulk actions supports the full list of background action options: queue, id, startAt, and retries. Each of these options will apply to each individually enqueued background action, and each action's options won't affect the others.
For retries, each enqueued background action from the bulk set will get its own set of retries according to the options you specify.
// bulk create some widgets, and retry each widget create action up to 3 times
const handles = await api.enqueue(api.widget.bulkCreate, widgets, { retries: 3 });
For queue, each enqueued background action from the bulk set will be added to the same concurrency queue and will obey the maximum concurrency you specify. The concurrency limit applies to each individually enqueued background action, not the bulk set as a whole.
// bulk create some widgets in the `user-10` concurrency queue, and create at most 1 widget at a time
const handles = await api.enqueue(api.widget.bulkCreate, widgets, {
  queue: { name: "user-10", maxConcurrency: 1 },
});
For id, the passed id option is suffixed with the index of each enqueued background action to create unique identifiers for each. For example, if you enqueue 3 create widget actions, you'll get back 3 handles, each with a unique id:
// bulk create some widgets with a custom action id prefix
const handles = await api.enqueue(
  api.widget.bulkCreate,
  [{ name: "foo" }, { name: "bar" }, { name: "baz" }],
  {
    id: "test-action",
  }
);

// => test-action-0
handles[0].id;
// => test-action-1
handles[1].id;
// => test-action-2
handles[2].id;
Gadget always appends the index of the item submitted with a bulk action to ensure action id uniqueness.
When to use bulk enqueueing
Bulk enqueuing is most useful as a performance optimization to submit a large chunk of actions to your app all at once, in one HTTP call. It can be slow to enqueue jobs in a loop, so if you're experiencing slow speeds, switch to enqueuing in bulk.
const widgets = [{ name: "foo" }, { name: "bar" }, { name: "baz" }];

// slow version: this works, but runs in serial and can get slow due to the overhead of each API call to enqueue the action
for (const widget of widgets) {
  await api.enqueue(api.widget.create, widget);
}

// fast version: this submits all your actions at once and will enqueue them all in parallel
await api.enqueue(api.widget.bulkCreate, widgets);
Background action limits
There are some limits on background actions that developers should be aware of:
- Each queue has an upper maxConcurrency limit of 100
- A maximum enqueue rate of 200 background actions per second per Gadget environment
Maximum background action enqueue rate
By default, you can enqueue a maximum of 200 background actions per second per production Gadget environment. This includes actions that have been bulk enqueued. In development environments, the limit is 80 actions enqueued per second.
Your apps can temporarily "burst" up to 3x past this limit, to a maximum action enqueue rate of 600 action enqueues per second (240 in development), but sustained rates above this limit will result in GGT_TOO_MANY_REQUESTS errors.
If you expect to enqueue more than 200 actions per second in your production environment, this limit can be raised. Please contact Gadget support to discuss your use case.
Background actions use a leaky bucket algorithm to enforce the maximum enqueue rate. This means the upper limit is a sustained rate, so bursting over it for short periods is possible.
Below is an example of bulk-creating records for a custom model in groups of 200. A delay of await new Promise((r) => setTimeout(r, 1000)); between batches ensures that the limit is not exceeded.
export const run: ActionRun = async ({ params, api }) => {
  // the bg action enqueue limit
  const LIMIT = 200;

  // records to be created
  const myRecords = [];
  for (let i = 0; i < params.numRecords; i++) {
    myRecords.push({ value: i });
  }

  // loop over the array of records, only enqueueing 200 at a time so limits aren't exceeded
  while (myRecords.length > 0) {
    // get a section of 200 records to enqueue
    const section = myRecords.splice(0, LIMIT);
    // bulk enqueue create action
    await api.enqueue(api.custom.bulkCreate, section);
    // delay for a second, don't exceed rate limits!
    await new Promise((r) => setTimeout(r, 1000));
  }
};

export const params = {
  numRecords: { type: "number" },
};