Building with OpenAI 

Gadget has a variety of purpose-built features for building apps that use the exciting innovations in artificial intelligence.

  • The vector field type allows storing and retrieving vectors to implement semantic search for LLM memory
  • The file field type allows storing generated artifacts like images, audio, or video synthesized by AI models
  • Streaming support in HTTP Routes for sending slow-to-generate responses from upstream APIs
  • Support for the openai and langchain node modules (without any annoying edge-environment drawbacks)

Working with LLMs 

Large Language Model (LLM) APIs allow developers to build incredible apps that can understand and generate text. To work with LLMs like OpenAI's GPT-4 or Anthropic's Claude, you can make API calls from your Gadget app to these services.
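For a quick sense of what this looks like, here's a minimal sketch of a Gadget HTTP route that forwards a prompt to OpenAI's chat completions API. It assumes the openai client helper and package setup described in the Installing OpenAI section below; the route filename and prompt query param are just illustrative.

api/routes/GET-chat.js
JavaScript
import { openai } from "../openai";
import { RouteHandler } from "gadget-server";

// illustrative route: forwards a ?prompt= query param to OpenAI and returns the reply
const route: RouteHandler<{ Querystring: { prompt: string } }> = async ({
  request,
  reply,
}) => {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: request.query.prompt }],
  });

  await reply.send(completion.choices[0].message);
};

export default route;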

Using vectors for LLM state 

Developers often need to pass specific context to a call to an LLM. Content like:

  • the current user's details
  • documentation relevant to the user's questions
  • up-to-date information about current events

are all commonly inserted into an LLM prompt to refine the results. This is by no means a complete list -- just about any text can be sent in. However, since LLMs have a limited input context size, you can't just pass everything under the sun. Instead, you must choose a relevant subset of your available context to pass to the LLM.

Identifying and retrieving the most relevant context to pass is a common problem with a lot of depth. Most folks turn to a vector similarity search to solve this problem. Vector similarity searches allow you to store a big library of potential context entries and retrieve only the most relevant entries for each prompt. Then, only the relevant entries are inserted into the prompt, keeping it short enough to be processed by the model. Relevance can be assessed without having to build a big rules system or a deep understanding of the plain language that users might use to describe their problems.

Gadget supports vector similarity searching, filtering and sorting using the vector field on your app's data models.

Vector embeddings of strings 

Vector search requires computing a vector for the content you want to search on. Vector similarity search allows you to ask the database to return records with vectors that are similar to an input vector, so you need a tool to convert any incoming strings (or images, etc) into a vector that you can then store and search with. This conversion-to-a-vector process is called embedding. The quality of the embedding system is what allows strings that are semantically similar but different character-wise to have similar vectors.

For example, let's say a user is asking a chatbot the question "What will keep me dry in the rain?". If we have a bunch of stored content that we might want to feed to the LLM, we don't want to just search for the term "dry" or "rain" in the text, we'd ideally search for related terms, like "umbrella", "shelter", or "waterproof". Vector embeddings are the fancy algorithms that group all these different English words together by producing similar vectors for similar text.

Gadget recommends using OpenAI's embeddings API for computing vectors from strings. You can use the openai package to make calls to the embeddings API:

JavaScript
const response = await openai.embeddings.create({
  model: "text-embedding-ada-002",
  input: "what will keep me dry in the rain",
});
const vector: number[] = response.data[0].embedding;

Read more about installing the openai npm package below.

Storing vectors on models 

To create a database of context for your LLM apps, you can store context as a Model in Gadget. Models allow you to store structured, unstructured, and vector data together in a single record within your app's serverless database.

For example, if we were building a chatbot for a product to answer questions powered by existing documentation, we could store each paragraph of the documentation as a record in a Documentation model, and then use this stored documentation to power the prompt generation for an LLM. We can create a model with the following fields:

  • a string body field to store the plain text
  • a url url field for storing where the documentation actually came from
  • a vector embedding field for storing the vector embedding of the body text

Then, when creating a new record, we can compute the embedding for the body string automatically, and store it in the embedding field. In the create action of the model, we can add code to the onSuccess function to compute the embedding and save it.

api/models/documentation/create.js
JavaScript
// import the openai client from a file where we construct it
import { openai } from "../../openai";

export const onSuccess: ActionOnSuccess = async ({ api, record }) => {
  // get an embedding for the record.body field
  const response = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: record.body,
  });
  const vector = response.data[0].embedding;

  // store the vector in the db on this record
  await api.internal.documentation.update(record.id, {
    documentation: { embedding: vector },
  });
};

Retrieving similar records 

Once you have records stored in your models with a vector field, you can query these records to retrieve records that are similar to a given vector.

Using the example Documentation model mentioned earlier, we can query the stored records to retrieve the body of the documentation snippets most relevant to a given prompt:

JavaScript
const relevantRecords = await api.documentation.findMany({
  sort: {
    embedding: {
      cosineSimilarityTo: [
        0.1, 0.15, 0.1, 0.5,
        // ... the remainder of a vector embedding
      ],
    },
  },
  first: 10,
  select: {
    id: true,
    body: true,
  },
});

This query retrieves the 10 records most similar to the given vector and returns only the id and body fields of each record. The embedding field doesn't need to be selected here, as it's only used to sort the results and we don't actually need the stored vectors client-side.

Read more about the sorts and filters offered by your models with vector fields in the API Reference.

When sorting or filtering by vectors, you need to use the same embedding API to compute the vectors stored on models and the vectors from any user input. For OpenAI, this means you must pass the same model parameter to all your text embedding calls so that the vectors returned are all in the same vector space.
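One simple way to guarantee this is to compute every embedding through a single shared helper, so the model parameter is defined in exactly one place. Here's a minimal sketch assuming the openai client constructed in api/openai.js as described below; the embed helper and its filename are just an illustration:

api/embed.js
JavaScript
import { openai } from "./openai";

// compute an embedding for any string with one shared model parameter,
// so stored vectors and query-time vectors live in the same vector space
export async function embed(text) {
  const response = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: text,
  });
  return response.data[0].embedding;
}

Both the create action and any query-time code can then call embed, so the embedding model can't drift between what's stored and what's searched.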

Passing context to an LLM 

With your data embedded into vectors within your app's database, you can query it for context to add to a prompt for an LLM. The flow is generally this:

  1. Take the user's input and embed it into a vector
  2. Query the database to find the most similar context using vector similarity against the input vector
  3. Build a bigger, internal string prompt for the LLM by adding the retrieved context to the user's prompt
  4. Send the prompt to the LLM and forward along its response to the user.

For example, if we have documentation for a product stored in a Documentation model as mentioned above, we can use the most relevant documentation to build a prompt for an LLM. We can accept the user's input as a param to an HTTP Route and then embed it into a vector, retrieve the most relevant records, and then send a combined prompt to OpenAI's chat completion API:

api/routes/GET-ask.js
JavaScript
import { openai } from "../openai";
import { RouteHandler } from "gadget-server";

const route: RouteHandler<{ Querystring: { question: string } }> = async ({
  api,
  request,
  reply,
}) => {
  // get the user's question from a ?question= query param
  const question = request.query.question;

  // get a vector embedding for the question
  const response = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: question,
  });
  const questionVector = response.data[0].embedding;

  // find the most relevant records from the documentation model
  const relevantRecords = await api.documentation.findMany({
    sort: {
      embedding: {
        cosineSimilarityTo: questionVector,
      },
    },
    first: 5,
    select: {
      id: true,
      body: true,
    },
  });

  // build a big prompt out of a base prompt, the user's question, and the retrieved context
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content:
          "You are a helpful assistant who answers users' questions with some related documentation.",
      },
      {
        role: "user",
        content: `
          Related documentation: ${relevantRecords
            .map((record) => record.body)
            .join("\n")}

          Question: ${question}
        `,
      },
    ],
  });

  await reply.send(completion.choices[0].message);
};

export default route;

Then, we can call this route to get a chat completion:

terminal
> curl https://example-app--development.gadget.app/ask?question=what%20will%20keep%20me%20dry%20in%20the%20rain
A raincoat or shelter will keep you dry in the rain.

At this point, you've successfully implemented semantic search for LLM prompt augmentation in your app! Well done!

Note that for vector similarity to work, the user's input must also be embedded into a vector before it can be compared against the vectors stored in the database. Similarity is always computed between vectors, so content embedded on its way into the database can only be matched against other embeddings produced the same way.

Working with Image Generators 

Text-to-image APIs like Stable Diffusion or DALL-E can be used with Gadget to build incredible image generation apps.

Generating images 

Images can be generated within Gadget apps by making API calls to one of the outstanding image generation models out there, like Stable Diffusion, or OpenAI's DALL-E.

Using Stable Diffusion 

For generating images with Stable Diffusion, we recommend using got for making HTTP requests. Add got version 11 to your package.json:

package.json
json
{
  "dependencies": {
    "got": "^11"
  }
}

and hit the Run yarn install button to install the package. Then, add a STABLE_DIFFUSION_API_KEY environment variable to your Environment Variables in your Settings.

Now, you can use got to make requests to Stable Diffusion in your action code or HTTP routes:

JavaScript
import got from "got";

const response = await got("https://stablediffusionapi.com/api/v3/text2img", {
  method: "POST",
  json: {
    key: process.env["STABLE_DIFFUSION_API_KEY"],
    prompt: "a cat",
    samples: 1,
    width: "512",
    height: "512",
  },
}).json();

// will be a URL to an image of a cat! this url will expire after 1 hour
const url = response.output[0];

Using DALL-E 

To use DALL-E, we recommend using the openai client from npm. Install the openai package using the instructions below.

Then, use the openai package in your action code or routes:

JavaScript
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env["OPENAI_API_KEY"],
});

const response = await openai.images.generate({
  prompt: "a cat",
  n: 1,
  size: "1024x1024",
});

// will be a URL to an image of a cat! this url will expire after 1 hour
const url = response.data[0].url;

Storing images 

Text-to-image APIs typically have a short expiration time for the images they generate. As a result, the URLs returned from these APIs will stop working after a short time. To preserve any images for later use, you need to copy them into your own storage.

Gadget's file field type works well for storing any blob content, including generated images. You can copy files from external URLs into your Gadget app's storage with the copyURL: "https://some-url" input format for file fields.

To generate and store an image, you can call your preferred image generation API, then pass a URL to a generated image to the copyURL input of a file field on a model of your choosing.

For example, we can create a new generate Action on an example image Model, and add code to the run function for generating the image with the Stable Diffusion API:

api/models/image/onGenerate.js
JavaScript
import got from "got";

/**
 * Action code for the generate action on the Image model
 */
export const run: ActionRun = async ({ api, record, logger }) => {
  const generateResponse: { status: string; output: string[] } = await got(
    "https://stablediffusionapi.com/api/v3/text2img",
    {
      method: "POST",
      json: {
        key: process.env["STABLE_DIFFUSION_API_KEY"],
        // pass a prompt from a field named `prompt` on the image model
        prompt: record.prompt,
        samples: 1,
        width: "512",
        height: "512",
      },
    }
  ).json();

  if (generateResponse.status != "success") {
    logger.error({ response: generateResponse }, "error generating image");
    throw new Error("generating image failed");
  }

  const url = generateResponse.output[0];
  logger.info({ url }, "generated image");

  // update the image field to store this generated image by passing the url to the `copyURL` input param
  await api.image.update(record.id, { image: { image: { copyURL: url } } });
  logger.info("stored generated image in Gadget");
};

Now, you can generate new images with your api client by calling the generate action, and access the generated images from Gadget's high-performance cloud storage:

JavaScript
const record = await api.image.generate({ image: { prompt: "a cool looking cat" } });
// record.image.url will be a URL to an image of a cat! this url will not expire!
console.log(record.image.url);

Installing dependencies 

Gadget apps are built with Node.js, and you can install any dependencies that communicate with third-party AI APIs like OpenAI's by adding packages to your package.json.

Installing OpenAI 

To install the openai package from npm, add it to your package.json:

package.json
json
{
  "dependencies": {
    "openai": "^4.55.7"
  }
}

and hit the Run yarn install button to install the package.

Then, add an OPENAI_API_KEY environment variable in your Environment Variables in your app's Settings section. You can get an OpenAI API Key from the OpenAI developer portal.

Next, construct an instance of the client object in a helper file:

api/openai.js
JavaScript
import OpenAI from "openai";
export const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

Now, you can use the openai package in your action code or routes:

api/routes/GET-example.js
JavaScript
import { openai } from "../openai";

// ... somewhere in your route
const response = await openai.images.generate({
  prompt: "a cat",
  n: 1,
  size: "1024x1024",
});

// will be a URL to an image of a cat!
const url = response.data[0].url;
