Use Gadget and OpenAI to build a chatbot that generates custom movie scenes 

Topics covered: AI + vector embeddings, Actions, HTTP routes, React frontends
Time to build: ~30 minutes

Learn about Gadget's built-in AI features such as the OpenAI connection, vector databases, and cosine similarity search, and use them to build a chatbot that generates custom movie scenes.

Create a new Gadget app 

Before we get started, we need to create a new Gadget app. When selecting an app template, make sure you select the Web app type.

Now that we have a new Gadget app, let's start building!

Step 1: Create a movie model 

The first thing we need to do is store some movie quotes in our Gadget app. We're going to make use of Gadget's data models, which are similar to tables in a Postgres database, to store this information.

Start by creating a new model in Gadget:

  1. Click the + button in the DATA MODELS section of the sidebar
  2. Enter movie as the model's API identifier

Now add some fields to your model. Fields are similar to columns in a database table, and allow you to define what kind of data is stored in your model. For our movie model, we'll add the following fields:

  1. Click + in the movie model's FIELDS section
  2. Enter title as the field's API identifier
  3. Click on the + Add Validations drop-down and select Required to make the title field Required

Adding a Required validation to title means that an error will be thrown if a movie is added without a title. Now let's add a field to store the movie's quotes:

  1. Click + in the movie model's FIELDS section
  2. Enter quote as the field's API identifier
  3. Click on the + Add Validations drop-down and select Required to make the quote field Required

Now we have a place to store movie quotes! We also need a field used to store vector embeddings. Vector embeddings are a way of representing text as a vector of numbers. To learn more about vector embeddings, check out our docs on building AI apps.
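To build intuition for "text as a vector of numbers," here is a toy sketch. It is NOT a real embedding model (the `toyEmbed` helper and its character-count features are invented for illustration); real models such as OpenAI's text-embedding-ada-002 return long learned vectors (1536 dimensions) that capture meaning, not surface features.

```javascript
// Toy illustration only: turn a quote into a small vector of numbers.
// A real embedding model learns its dimensions from text; here we just
// count a few hand-picked characters so the idea is concrete.
function toyEmbed(text) {
  const lower = text.toLowerCase();
  const count = (ch) => lower.split(ch).length - 1;
  // [count of "a", count of "e", count of "t", total length]
  return [count("a"), count("e"), count("t"), lower.length];
}

const v = toyEmbed("Here's looking at you, kid.");
console.log(v); // a small numeric vector standing in for the text
```

Once every quote is a vector, "similar quotes" becomes "nearby vectors," which is exactly what the cosine similarity search in a later step computes.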

  1. Click + in the movie model's FIELDS section
  2. Enter embedding as the field's API identifier
  3. Select vector as the field's type
A screenshot of the completed movie model, with title, quote and embedding fields added. The embedding vector field is selected.

That is all that we need to store data for our app! Now we need a way to generate embeddings. Luckily, OpenAI has an API that we can use to pass in text and get back a vector embedding.

Step 2: Add the OpenAI connection 

Gadget has built-in connections to popular APIs, including OpenAI. You can use these connections to interact with external services in your app.

  1. Click on Settings in the sidebar
  2. Click on Plugins
  3. Select OpenAI from the list of plugins
  4. Use the Gadget development keys so you can start using the OpenAI API without an API key
  5. Click Add connection

You can now use the OpenAI connection in your app.

Step 3: Data ingestion 

We need some test data for our app. We're going to use a global action to fetch an open data source hosted on Hugging Face. We will then use the OpenAI connection to generate embeddings for our movie quotes.

  1. Click on Global actions in the sidebar
  2. Click the + Add action button or + next to the ACTIONS section title to create a new global action
  3. Set the action's API identifier to ingestData

Our OpenAI connection is already set up for us using Gadget-managed credentials. To learn more about how to set up your own OpenAI connection, check out our OpenAI connection docs.

Free OpenAI credits to get you started

Teams in Gadget get free OpenAI credits to use for experimenting during development! Using Gadget-managed OpenAI credentials automatically draws from this credit pool.

  1. Enter the following code in the generated code file (replace the entire file):
```js
import { IngestDataGlobalActionContext } from "gadget-server";

/**
 * @param { IngestDataGlobalActionContext } context
 */
export async function run({ params, logger, api, connections }) {
  // use Node's fetch to make a request to the movie quote dataset hosted on Hugging Face
  const response = await fetch(
    "",
    {
      method: "GET",
      headers: { "Content-Type": "application/json" },
    }
  );

  const responseJson = await response.json();

  // log the response
  logger.info({ responseJson }, "here is a sample of movies returned from hugging face");

  if (responseJson?.rows) {
    // get the data in our record's format
    const movies = responseJson.rows.map((movie) => ({ title: movie.row.title, quote: movie.row.quote, embedding: [] }));
    // also get input data for the OpenAI embeddings API
    const input = responseJson.rows.map((movie) => `${movie.row.quote} from the movie ${movie.row.title}`);
    const embeddings = await connections.openai.embeddings.create({
      input,
      model: "text-embedding-ada-002",
    });
    // append embeddings to movies
    embeddings.data.forEach((movieEmbedding, i) => {
      movies[i].embedding = movieEmbedding.embedding;
    });

    // use the internal API to bulk create movie records
    await api.internal.movie.bulkCreate(movies);
  }
}
```

This code:

  • uses fetch to pull in a small sample dataset that stores movie quotes hosted on Hugging Face
  • loops through the returned data and creates a new movie record for each movie quote
  • uses the OpenAI connection (connections.openai) to generate embeddings for each movie quote
  • uses your Gadget app's internal API to bulk create movie records with the generated embeddings
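To make the mapping step concrete, here is a sketch using a hand-written sample that mimics the nested shape the Hugging Face datasets server returns (each record wrapped in a `row` key). The field names `title` and `quote` are assumptions matching the action code above, and the sample data is invented:

```javascript
// Hand-written sample mimicking the response shape the action expects.
// Each entry wraps the actual record in a `row` key.
const responseJson = {
  rows: [
    { row: { title: "Casablanca", quote: "Here's looking at you, kid." } },
    { row: { title: "Jaws", quote: "You're gonna need a bigger boat." } },
  ],
};

// same mapping the ingestData action performs before requesting embeddings
const movies = responseJson.rows.map((movie) => ({
  title: movie.row.title,
  quote: movie.row.quote,
  embedding: [], // filled in later with the OpenAI embedding for each quote
}));

// one input string per quote, sent to the embeddings API in a single batch
const input = responseJson.rows.map((movie) => `${movie.row.quote} from the movie ${movie.row.title}`);

console.log(movies[0].title); // "Casablanca"
console.log(input[1]); // "You're gonna need a bigger boat. from the movie Jaws"
```

Batching all quotes into one `input` array means a single embeddings API call covers the whole dataset, rather than one request per quote.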

Now we can run our global action to ingest the data:

  1. Click on the Run Action button to open your action in the API Playground
  2. Run the action

The action runs and returns a success message once the data has been added to the database.

We can also see the data in our Gadget database by:

  1. Clicking on the movie model in the sidebar
  2. Clicking on Data

You should see movie records, complete with title, quote, and embedding data!

A screenshot of the movie model's data page, with ingested data and embeddings visible

Now that we have data in our database, we are ready to build the user-facing portion of our app.

Step 4: Use a global action to find similar movie quotes 

Our app will allow users to enter a fake movie quote and find movie quotes that are similar to the entered text using a similarity search on the embeddings. We will use a global action to find the top 4 most similar movie quotes, and then present these movies to the user.

We can create a new global action:

  1. Click on Global Actions in the sidebar
  2. Click the + next to the ACTIONS section title to create a new global action
  3. Set the action's API identifier to findSimilarMovies
  4. Enter the following code in the generated code file (replace the entire file):
```js
import { FindSimilarMoviesGlobalActionContext } from "gadget-server";

/**
 * @param { FindSimilarMoviesGlobalActionContext } context
 */
export async function run({ params, logger, api, connections, scope }) {
  const { quote } = params;

  // throw an error if a quote wasn't passed in
  if (!quote) {
    throw new Error("Missing quote!");
  }

  // create an embedding from the entered quote
  const response = await connections.openai.embeddings.create({ input: quote, model: "text-embedding-ada-002" });

  // get the 4 most similar movies that match your quote, and return them to the frontend
  const movies = await api.movie.findMany({
    sort: {
      embedding: {
        cosineSimilarityTo: response.data[0].embedding,
      },
    },
    first: 4,
    select: {
      id: true,
      title: true,
    },
  });

  // remove duplicates
  const filteredMovies = movies.filter((movie, index) => movies.findIndex((m) => m.title === movie.title) === index);
  return filteredMovies;
}

// define custom params to pass values to your global action
export const params = {
  quote: { type: "string" },
};
```
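The deduplication step at the end of the action is worth a closer look: `findIndex` always returns the index of the first record with a given title, so any later duplicate fails the `=== index` check and is dropped. A minimal standalone sketch with invented sample records:

```javascript
// Keep only the first record for each title: findIndex returns the index
// of the FIRST match, so later duplicates fail the `=== index` check.
const movies = [
  { id: "1", title: "The Godfather" },
  { id: "2", title: "Alien" },
  { id: "3", title: "The Godfather" }, // duplicate title, different record
];

const filteredMovies = movies.filter(
  (movie, index) => movies.findIndex((m) => m.title === movie.title) === index
);

console.log(filteredMovies.map((m) => m.id)); // ["1", "2"]
```

Duplicates can appear here because the same movie may contribute several quotes to the dataset, so several of the 4 nearest embeddings can belong to one film.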

Finding similar vectors with cosine similarity 

This call from the above function is the key to finding similar movies: api.movie.findMany({...}) in globalActions/findSimilarMovies.js
```js
// get the 4 most similar movies that match your quote, and return them to the frontend
const movies = await api.movie.findMany({
  sort: {
    embedding: {
      cosineSimilarityTo: response.data[0].embedding,
    },
  },
  first: 4,
  select: {
    id: true,
    title: true,
  },
});
```

Gadget has built-in vector distance sorting which we use to get the most similar vectors to the user's entered text. We use the cosineSimilarityTo operator to find the cosine similarity between the user's entered text and the movie quotes in our database.
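Gadget computes this sort inside the database, so you never implement the math yourself, but the formula is simple enough to sketch in plain JavaScript for intuition: cosine similarity is the dot product of two vectors divided by the product of their magnitudes.

```javascript
// Cosine similarity = dot(a, b) / (|a| * |b|).
// 1 means same direction (very similar), 0 means orthogonal (unrelated),
// -1 means opposite. Assumes a and b have the same length.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 — identical direction
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 — unrelated
```

Because the division normalizes by magnitude, two quotes score as similar when their embeddings point in the same direction, regardless of vector length.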

Step 5: Add a route to generate a scene 

Now for the final backend development step: adding an HTTP route to our Gadget app that will be called by the frontend to generate a scene. We make use of Gadget's OpenAI connection to generate a scene using the user's entered text and a movie quote.

Why not use a global action?

We used a global action to ingest data and find similar movies, but we're using a route to generate a scene. You might be asking yourself why?

There are two main reasons:

  • Global actions do not support streaming responses, and we want to stream the text returned from OpenAI to the frontend
  • The openAIResponseStream helper we are using integrates seamlessly with HTTP routes

In general, we suggest you use global actions over HTTP routes whenever possible. But when streaming or integrating with external systems or packages, HTTP routes can be a better choice. To read more about when to use each, see the Actions guide.

  1. Modify the routes/POST-chat.js HTTP route file in your Gadget app (replace the entire file):
```js
import { RouteContext } from "gadget-server";
import { openAIResponseStream } from "gadget-server/ai";

/**
 * Route handler for POST chat
 *
 * @param { RouteContext } route context
 */
export default async function route({ request, reply, api, logger, connections }) {
  const prompt = `Here is a fake movie quote: "${request.body.quote}" and a movie selected by a user: "${request.body.movie}". Write a fake scene for that movie that makes use of the quote. Use a maximum of 150 words.`;

  // get a streamed response from OpenAI
  const stream = await connections.openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: `You are an expert, hilarious AI screenwriter tasked with generating funny, quirky movie scripts.` },
      { role: "user", content: prompt },
    ],
    stream: true,
  });

  await reply.send(openAIResponseStream(stream));
}
```

The OpenAI connection is used to call the chat completions endpoint, which generates a scene from the user's selected movie and entered quote.
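The route's only real data transformation is interpolating the user's quote and selected movie into the prompt template. A sketch of that templating step in isolation (`buildPrompt` is a hypothetical helper name, not part of the route file):

```javascript
// Hypothetical helper mirroring the prompt template used in routes/POST-chat.js
function buildPrompt(quote, movie) {
  return `Here is a fake movie quote: "${quote}" and a movie selected by a user: "${movie}". Write a fake scene for that movie that makes use of the quote. Use a maximum of 150 words.`;
}

const prompt = buildPrompt("Here's a toast to you, my dear.", "Casablanca");
console.log(prompt.includes("Casablanca")); // true
```

Keeping the instructions ("write a fake scene", the word limit) in the template and only interpolating user input makes the model's behavior predictable across requests.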

Now we can call this route from our frontend to generate a scene!

Step 6: Build the frontend 

Now that we have defined our global actions and HTTP route, we can add support to call them from the frontend.

Gadget's React frontends are built on top of Vite, and include support for email/password auth as well as Google Auth. Our frontend code lives in the frontend folder. We will only be making changes to a single frontend route, frontend/routes/signed-in.jsx, which is the route accessed when a user is signed in to our app.

  1. Paste the following code into frontend/routes/signed-in.jsx:
```jsx
import { useGlobalAction, useFetch, useMaybeFindFirst, useActionForm } from "@gadgetinc/react";
import { api } from "../api";
import { useState } from "react";

export default function () {
  // see if movie data exists in the database!
  const [{ data: movieDataIngested, fetching: checkingIfMovieDataExists, error: errorCheckingForData }, retry] = useMaybeFindFirst(
    api.movie
  );
  // fire action used to ingest data from HuggingFace model
  const [{ fetching: ingestingData, error: errorIngestingData }, ingestData] = useGlobalAction(api.ingestData);

  return (
    <main style={{ width: "60vw", position: "absolute", top: 40 }}>
      <div>
        <h1>Make-a-movie scene (with AI)</h1>
        <p>Write a fake movie quote, pick a suggested movie, and let OpenAI generate a new scene!</p>
      </div>

      <br />

      {checkingIfMovieDataExists && <div className="loader"></div>}

      {ingestingData && (
        <div className="row">
          <h2>Fetching movie data...</h2>
        </div>
      )}

      {errorCheckingForData && <p className="error">{errorCheckingForData.message}</p>}

      {errorIngestingData && <p className="error">{errorIngestingData.message}</p>}

      {!movieDataIngested && !checkingIfMovieDataExists && (
        <button
          onClick={async () => {
            await ingestData();
            await retry();
          }}
          disabled={ingestingData}
        >
          Ingest data
        </button>
      )}

      {movieDataIngested && <MovieQuoteForm />}
    </main>
  );
}

const MovieQuoteForm = () => {
  // used to track the reset state of the form
  const [isReset, setIsReset] = useState(true);
  // state for the currently selected movie
  const [movie, setMovie] = useState("");
  // action for finding movies with similar quotes
  const { submit, register, actionData, error, formState, watch, reset } = useActionForm(api.findSimilarMovies);

  // watch changes to the quote state in our form, and store in a variable
  const quote = watch("quote");

  return (
    <>
      <form
        onSubmit={async (e) => {
          e.preventDefault();
          await submit();
          setIsReset(true);
        }}
        style={{ maxWidth: "100%" }}
        autoComplete="off"
      >
        <div className="row" style={{ display: "flex" }}>
          <input
            style={{ flex: "1", marginRight: "4px" }}
            placeholder="Enter a fake movie quote! Ex. Here's a toast to you, my dear."
            {...register("quote")}
            disabled={formState.isSubmitting}
          />
          <button type="submit" disabled={formState.isSubmitting} style={{ marginRight: "4px" }}>
            Find quotes
          </button>
          <button
            disabled={!formState.isDirty}
            onClick={() => {
              // reset the form
              reset();
              setIsReset(false);
            }}
          >
            Reset
          </button>
        </div>
      </form>

      {error && <p className="error">There was an error fetching similar movies. Check your Gadget logs for more details!</p>}
      {formState.isSubmitting && <div className="loader"></div>}

      {quote && actionData && isReset && (
        <>
          <div className="row" style={{ textAlign: "left", paddingBottom: 8 }}>
            <b>Movies with similar quotes:</b>
          </div>
          <form>
            <div className="row" style={{ display: "flex", flexWrap: "wrap", textAlign: "left", gap: 16 }}>
              {actionData.map((movieRecord, i) => (
                <span key={`movie_option_${i}`}>
                  <input
                    type="radio"
                    checked={movieRecord.title === movie}
                    value={movieRecord.title}
                    onChange={(e) => setMovie(e.target.value)}
                    id={movieRecord.id}
                  />
                  <label htmlFor={movieRecord.id}>{movieRecord.title}</label>
                </span>
              ))}
            </div>
          </form>
        </>
      )}
      {movie && quote && isReset && <SceneGenerator movie={movie} quote={quote} />}
    </>
  );
};

const SceneGenerator = ({ movie, quote }) => {
  // call the HTTP route and stream the response from OpenAI
  const [{ data, fetching, error }, sendPrompt] = useFetch("/chat", {
    method: "post",
    body: JSON.stringify({ movie, quote }),
    headers: {
      "content-type": "application/json",
    },
    stream: "string",
  });

  return (
    <section>
      <button onClick={() => void sendPrompt()}>Generate scene</button>
      {error && <p className="error">{error.message}</p>}
      {fetching && <div className="loader" />}
      {data && (
        <pre
          style={{
            border: "dashed black 1px",
            background: "#f5f5f5",
            padding: "10px",
            maxHeight: "45vh",
            overflowY: "scroll",
            whiteSpace: "pre-wrap",
            textAlign: "left",
          }}
        >
          {data}
        </pre>
      )}
    </section>
  );
};
```

The frontend has 3 components: the default export for the route, the MovieQuoteForm component, and the SceneGenerator component. These 3 components all make use of different @gadgetinc/react hooks that help us make requests and manage our form state. The hooks simplify the management of response and form state, and let us interact with responses and forms in a React-ful way through the returned data, fetching, and error objects.

  • the route's default export is responsible for calling the ingestData global action (if you haven't already done so!) using the useGlobalAction hook (more info on useGlobalAction)
  • the MovieQuoteForm component manages and submits the input form for the entered quote, and calls the findSimilarMovies global action using the useActionForm hook (more info on useActionForm) which then allows users to select a movie from the returned actionData
  • the SceneGenerator component makes a request to the /chat HTTP route using the useFetch hook (more info on useFetch) and displays a streamed response

Remove background-image 

You can clean up the appearance of your project by removing the background-image from the .app CSS class set in frontend/App.css:

Search for and remove this line from the .app class in frontend/App.css
background-image: url("./assets/default-background.svg");

Test your screenwriter 

We are done building! Let's test out the AI screenwriter. Sign up and sign in to your app, enter a fake movie quote, select a recommended movie, and watch as the AI screenwriter generates a new scene!

The final step is deploying to production.

Step 7 (Optional): Deploy to Production 

If you want to deploy a Production version of your app, you can do so in just a couple of clicks!

First, you need to use your own OpenAI API key in the OpenAI connection:

  1. Click on the Plugins tab in the left sidebar
  2. Click on the OpenAI connection
  3. Edit the connection and use your API key for the Production environment

Now, deploy your app to Production:

  1. Click on the Deploy button in the bottom right corner of the Gadget UI
  2. Click Deploy Changes

That's it! Your app will be built, optimized, and deployed!

You can preview your Production app:

  1. Click on the app name at the top of the left sidebar
  2. Hover over Go to app and click Production

Alternatively, you can remove --development from the domain of the window you were using to preview your frontend changes while developing.

Next steps 

Congrats! You've built a fullstack web app that makes use of generative AI and vector embeddings! 🎉

In this tutorial, we learned:

  • How to create and store vector fields in Gadget
  • How to stream chat responses from OpenAI to a Gadget frontend using Vercel's AI SDK
  • When to use global actions vs routes in Gadget


If you have any questions, feel free to reach out to us on Discord to ask Gadget employees or the Gadget developer community!