Build a storefront chatbot with OpenAI 

Topics covered: Shopify connections, AI + vector embeddings, HTTP routes
Time to build: ~30 minutes

Large Language Model (LLM) APIs allow developers to build apps that can understand and generate text. We can use OpenAI's APIs to build a chatbot that can understand a shopper's question and respond with product recommendations.

In this tutorial, you will learn how to:

  • connect a Gadget app to a Shopify store and sync product data
  • generate vector embeddings for product descriptions using the OpenAI connection
  • build an HTTP route that finds relevant products with cosine similarity and streams a chat response
  • embed the chatbot in a storefront with a Shopify theme app extension

Requirements

To get the most out of this tutorial, you will need:

  • a Shopify Partners account
  • a Shopify store or development store
  • Node.js and yarn installed, for running the theme app extension locally

You can fork this Gadget project and try it out yourself.

You will still need to set up the Shopify Connection and theme app extension after forking. Read on to learn how to connect Gadget to a Shopify store!

Fork on Gadget

Step 1: Create a Gadget app and connect to Shopify 

Your first step will be to set up a Gadget project and connect to a Shopify store via the Shopify connection. Create a new Gadget application at gadget.new and select the Shopify app template.

A screenshot of the Shopify app template tile selected on the new app modal, with a domain entered

Connect to Shopify through the Partners dashboard 

Requirements

To complete this connection, you will need a Shopify Partners account as well as a store or development store.

Our first step is going to be setting up a custom Shopify application in the Partners dashboard.

Both the Shopify store Admin and the Shopify Partner Dashboard have an Apps section. Ensure that you are on the Shopify Partner Dashboard before continuing.

  • Click on the Apps link in the Shopify Partners dashboard
  • Click the Create App button
Click on Create app button
  • Click the Create app manually button and enter a name for your Shopify app
Shopify's app creation landing page in the Partners Dashboard
  • Click on Settings in the side nav bar
  • Click on Plugins in the modal that opens
  • Select Shopify from the list of plugins and connections
The Gadget homescreen, with the Connections link highlighted
  • Copy the Client ID and Client secret from your newly created Shopify app and paste the values into the Gadget Connections page
Screenshot of the Partners card selected on the Connections page
  • Click Connect to move to scope and model selection

Now we get to select what Shopify scopes we give our application access to, while also picking what Shopify data models we want to import into our Gadget app.

  • Enable the Products Read API scope, and select the underlying Product and Product Image models that we want to import into Gadget
Select Product API scope + model
  • Click Confirm

Now we want to connect our Gadget app to our custom app in the Partners dashboard.

  • In your Shopify app in the Partners dashboard, click on Configuration in the side nav bar so you can edit the App URL and Allowed redirection URL(s) fields
  • Copy the App URL and Allowed redirection URL from the Gadget Connections page and paste them into your custom Shopify App
Screenshot of the connected app, with the App URL and Allowed redirection URL(s) fields

Now we need to install our Shopify app on a store.

  • Click on the Select store button
  • Click on the store we want to use to develop our app
  • You may be prompted about Store transfer being disabled. This is okay; click Install anyway
  • Click Install app to install your Gadget app on your Shopify store
Having an issue installing?

If you are getting a permissions denied error when installing your app, try logging in to the Shopify store Admin!

Click Install app to authorize our Gadget app with our store

You will be redirected to an embedded admin app that has been generated for you. The code for this app template can be found in web/routes/index.jsx.

App installation was a success

Your app is now installed on your store. Do not sync data yet! You're going to add some code to generate vector embeddings for your products before running a sync.

Step 2: Set up OpenAI connection 

Now that you are connected to Shopify, you can also set up the OpenAI connection that will be used to fetch embeddings for product descriptions. Gadget provides OpenAI credits for testing while building your app, so you don't need a personal OpenAI API key to get started.

  1. Click on Settings in the nav bar
  2. Click on Plugins
  3. Click on the OpenAI connection
  4. Select the Use Gadget's API keys option in the modal that appears OR enter your own OpenAI API key

Your OpenAI connection is now ready to be used!

Step 3: Add data models for chat responses 

Now we need a new data model to capture the chat responses from OpenAI, and the products recommended to shoppers. We need to use a has many through relationship to relate the chat response to the recommended products.

ERD of the chatbot data models, showing the has many through relationship between chatLog and shopifyProduct, through recommendedProduct. The diagram also shows that shopifyProduct has many shopifyProductImage
  1. Click + next to DATA MODELS to add a new model
  2. Name the model chatLog
  3. Click + next to FIELDS to add a new field
  4. Name the field response and leave it as a string field

This string field will record the response coming back from OpenAI. We also want to keep track of the products that were recommended, so we need to add a has many through relationship to the shopifyProduct model.

  1. Add another field to chatLog called recommendedProducts
  2. Set the field type to a has many through relationship
  3. Set the sibling to shopifyProduct and the join model to recommendedProduct; this will create a new recommendedProduct model
  4. Name the field in recommendedProduct that relates to the chatLog model, chatLog
  5. Name the field in recommendedProduct that relates to the shopifyProduct model, product
  6. Name the field in the shopifyProduct model that relates to the recommendedProduct model, chatRecommendations
A screenshot of the has many through relationship defined between the chatlog and shopifyProduct models

Step 4: Add vector field to shopifyProduct model 

Before you add code to create the embeddings from product descriptions, you need a place to store the generated embeddings. You can add a vector field to the shopifyProduct model to store the embeddings.

The vector field type stores a vector, or array, of floats. It is useful for storing embeddings and allows you to perform vector operations such as cosine similarity, which helps you find the products most similar to a given chat message.
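To make the similarity operation concrete, here is a small plain-JavaScript sketch of cosine similarity. This is only an illustration of the math; Gadget runs this comparison for you in the database when you sort by `cosineSimilarityTo`.

```js
// Cosine similarity between two equal-length vectors of floats:
// dot(a, b) / (|a| * |b|). A result close to 1 means the vectors
// (and so the texts they embed) point in a similar direction.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// identical vectors score 1, orthogonal vectors score 0
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Real embedding vectors from OpenAI have 1536 dimensions, but the comparison works the same way.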

Learn more about OpenAI, LLMs, and vector embeddings

You are going to use Gadget's built-in OpenAI connection to generate vector embeddings for product descriptions. These embeddings will be used to perform a semantic search to find the products that best match a shopper's chat message.

LLMs and vector embeddings are relatively new technologies, and there are many resources available online if you want to learn more about them.

To add a vector field to the shopifyProduct model:

  1. Go to the shopifyProduct model in the navigation bar
  2. Click on + in the FIELDS section to add a new field
  3. Name the field descriptionEmbedding
  4. Set the field type to vector
A screenshot of the Description Embedding vector field defined on the Shopify Product model

Now you are set up to store embeddings for products! The next step is adding code to generate these embeddings.

Step 5: Write code effect to create vector embedding 

Now you can add some code to create vector embeddings for all products in your store. You will want to run this code when Shopify fires a products/create or products/update webhook. To do this, you will create a code effect that runs when a Shopify Product is created or updated.

  1. In the FILES explorer, hover over the shopifyProduct folder and click on + to create a new file
  2. Name the file createEmbedding.js
  3. Paste the following code into shopifyProduct/createEmbedding.js:
shopifyProduct/createEmbedding.js
```js
export const createProductEmbedding = async ({ record, api, logger, connections }) => {
  // only run if the product does not have an embedding, or if the title or body have changed
  if (!record.descriptionEmbedding || record.changed("title") || record.changed("body")) {
    try {
      // get an embedding for the product title + description using the OpenAI connection
      const response = await connections.openai.embeddings.create({
        input: `${record.title}: ${record.body}`,
        model: "text-embedding-ada-002",
      });
      const embedding = response.data[0].embedding;

      // write to the Gadget Logs
      logger.info({ id: record.id }, "got product embedding");

      // use the internal API to store the vector embedding in the Gadget database, on the shopifyProduct model
      await api.internal.shopifyProduct.update(record.id, {
        shopifyProduct: { descriptionEmbedding: embedding },
      });
    } catch (error) {
      logger.error({ error }, "error creating embedding");
    }
  }
};
```

In this snippet:

  • the OpenAI connection is accessed through connections.openai and the embeddings.create() API is called
  • the internal API is used to update the shopifyProduct record and set the descriptionEmbedding field

The internal API needs to be used because the shopifyProduct model does not have a Gadget API trigger on this action by default. You can read more about the internal API in the Gadget docs.

Now call this function from your shopifyProduct/actions/create.js and shopifyProduct/actions/update.js actions:

  1. Open shopifyProduct/actions/create.js and paste the following code:
shopifyProduct/actions/create.js
```js
import { applyParams, preventCrossShopDataAccess, save, ActionOptions, CreateShopifyProductActionContext } from "gadget-server";
import { createProductEmbedding } from "../createEmbedding";

/**
 * @param { CreateShopifyProductActionContext } context
 */
export async function run({ params, record, logger, api, connections }) {
  applyParams(params, record);
  await preventCrossShopDataAccess(params, record);
  await save(record);
}

/**
 * @param { CreateShopifyProductActionContext } context
 */
export async function onSuccess({ params, record, logger, api, connections }) {
  await createProductEmbedding({ record, api, logger, connections });
}

/** @type { ActionOptions } */
export const options = {
  actionType: "create",
};
```
  2. Open shopifyProduct/actions/update.js and paste the following code:
shopifyProduct/actions/update.js
```js
import { applyParams, preventCrossShopDataAccess, save, ActionOptions, UpdateShopifyProductActionContext } from "gadget-server";
import { createProductEmbedding } from "../createEmbedding";

/**
 * @param { UpdateShopifyProductActionContext } context
 */
export async function run({ params, record, logger, api, connections }) {
  applyParams(params, record);
  await preventCrossShopDataAccess(params, record);
  await save(record);
}

/**
 * @param { UpdateShopifyProductActionContext } context
 */
export async function onSuccess({ params, record, logger, api, connections }) {
  await createProductEmbedding({ record, api, logger, connections });
}

/** @type { ActionOptions } */
export const options = {
  actionType: "update",
};
```

Now vector embeddings will be generated when a new product is created, or an existing product is updated.

Generate embeddings for existing products 

Now that the code is in place to generate vector embeddings for products, you can sync existing Shopify products into your Gadget app's database. To do this:

  1. Click on Settings in the nav bar
  2. Click on the Plugins page
  3. Select the Shopify connection
  4. Click on Shop Installs for the connected Development app
Screenshot of the connected app on the Shopify connections page, with the Shop Installs button highlighted
  5. Click on the Sync button for the store you want to sync products from
Screenshot of the shop installs page, with an arrow added to the screenshot to highlight the Sync button for the connected store

Product and product image data will be synced from Shopify to your Gadget app's database. The code effect you added will run for each product and generate a vector embedding for the product. You can see these vector embeddings by going to the Data page for the shopifyProduct model. The vector embeddings will be stored in the descriptionEmbedding field.

A screenshot of the Data page for the shopifyProduct model. The descriptionEmbedding column is highlighted, with vector data generated for products.
Running into rate limit errors?

If you are running into rate limit errors when generating embeddings while syncing, try using your own OpenAI API key. You can add your own API key to the OpenAI connection by clicking on the Settings tab in the OpenAI connection and selecting the Use my own API key option.

Step 6: Add /chat HTTP route 

We will use an HTTP route to handle incoming chat messages from the storefront. The route will take a message from the shopper, use cosine similarity to determine what products to recommend, and stream a response from OpenAI back to the client.

  1. Hover over the routes folder in the FILES explorer and click on + to create a new file
  2. Name the file POST-chat.js

Your app now has a new HTTP route that will be triggered when a POST request is made to /chat. You can add code to this file to handle incoming chat messages.

  3. Paste the following code in routes/POST-chat.js:
routes/POST-chat.js
```js
import { RouteContext } from "gadget-server";
import { openAIResponseStream } from "gadget-server/ai";

/**
 * Route handler for POST /chat
 *
 * @param { RouteContext } route context - see: https://docs.gadget.dev/guides/http-routes/route-configuration#route-context
 *
 */
export default async function route({ request, reply, api, logger, connections }) {
  // get input from shopper
  const { message } = request.body;

  // embed the incoming message from the user
  const embeddingResponse = await connections.openai.embeddings.create({ input: message, model: "text-embedding-ada-002" });

  // find similar product descriptions
  const products = await api.shopifyProduct.findMany({
    sort: {
      descriptionEmbedding: {
        cosineSimilarityTo: embeddingResponse.data[0].embedding,
      },
    },
    first: 2,
    filter: {
      status: {
        equals: "active",
      },
    },
    select: {
      id: true,
      title: true,
      body: true,
      handle: true,
      shop: {
        domain: true,
      },
      images: {
        edges: {
          node: {
            source: true,
          },
        },
      },
    },
  });

  // capture products in Gadget's Logs
  logger.info({ products, message: request.body.message }, "found products most similar to user input");

  const prompt = `You are a helpful shopping assistant trying to match customers with the right product. You will be given a question from a customer and some JSON objects with the id, title, handle, and description (body) of products available for sale that roughly match the customer's question, as well as the store domain. Respond in HTML markup, with an anchor tag at the end with images that link to the product pages and <br /> tags between your text response and product recommendations. The anchor should be of the format: <a href={"https://" + {domain} + "/products/" + {handle}} target="_blank">{title}<img style={border: "1px black solid"} width="200px" src={product.images.edges[0].node.source} /></a> but with the domain, handle, and title replaced with passed-in variables. If you have recommended products, end your response with "Click on a product to learn more!" If you are unsure or if the question seems unrelated to shopping, say "Sorry, I don't know how to help with that", and include some suggestions for better questions to ask. Here are the json products you can use to generate a response: ${JSON.stringify(
    products
  )}`;

  // send prompt and similar products to OpenAI to generate a response
  // using the GPT-4 Turbo model
  const chatResponse = await connections.openai.chat.completions.create({
    model: "gpt-4-1106-preview",
    messages: [
      {
        role: "system",
        content: prompt,
      },
      { role: "user", content: message },
    ],
    stream: true,
  });

  // function fired after the stream is finished
  const onComplete = (content) => {
    // store the response from OpenAI, and the products that were recommended
    const recommendedProducts = products.map((product) => ({
      create: {
        product: {
          _link: product.id,
        },
      },
    }));
    void api.chatLog.create({
      response: content,
      recommendedProducts,
    });
  };

  await reply.send(openAIResponseStream(chatResponse, { onComplete }));
}
```

This code will:

  • create a vector embedding from the shopper's message using the OpenAI connection using connections.openai.embeddings.create()
  • use cosine similarity to find the 2 most similar products to the shopper's message using the Gadget API:
cosine similarity in routes/POST-chat.js
```js
const products = await api.shopifyProduct.findMany({
  sort: {
    descriptionEmbedding: {
      // cosine similarity to the embedding of the shopper's message
      cosineSimilarityTo: embeddingResponse.data[0].embedding,
    },
  },
  first: 2,
  select: {...}
});
```
  • use the OpenAI chat API to generate a response using the gpt-4-1106-preview model
stream chat completion in routes/POST-chat.js
```js
const chatResponse = await connections.openai.chat.completions.create({
  model: "gpt-4-1106-preview",
  messages: [
    {
      role: "system",
      content: prompt,
    },
    { role: "user", content: message },
  ],
  stream: true,
});
```
  • stream the response back to the client and save the records to the database using reply.send(openAIResponseStream(chatResponse, { onComplete }))
    • onComplete is a callback function that is called after the stream is finished
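To build intuition for what openAIResponseStream does, here is a simplified sketch (not Gadget's actual implementation; the chunk shape follows OpenAI's streaming chat API) of accumulating streamed deltas and firing onComplete once the stream ends:

```js
// Simplified sketch: each chunk from the OpenAI streaming API carries a
// small "delta" of content. A streaming helper forwards each delta to the
// client as it arrives, and calls onComplete with the full text at the end.
// Here an array stands in for the async stream to keep the sketch synchronous.
function streamWithOnComplete(chunks, onComplete) {
  let content = "";
  for (const chunk of chunks) {
    const delta = chunk.choices?.[0]?.delta?.content ?? "";
    content += delta; // the real helper also writes each delta to the HTTP response
  }
  onComplete(content); // fired once, after the stream is finished
  return content;
}

// example chunks shaped like OpenAI streaming responses
const exampleChunks = [
  { choices: [{ delta: { content: "Hello" } }] },
  { choices: [{ delta: { content: " shopper!" } }] },
];
streamWithOnComplete(exampleChunks, (full) => console.log(full)); // "Hello shopper!"
```

This is why onComplete is the right place to save the chatLog record: it only runs once the complete response text is known.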

Set up CORS handling 

Before we can call our /chat route from a theme extension, we need to enable CORS handling. To do this we will use the @fastify/cors plugin.

  1. Open the Gadget command palette using Cmd P (or Ctrl P)
  2. Enter > in the palette to change to command-mode
  3. Run the following command to install the @fastify/cors plugin:
Run in the Gadget command palette
```
yarn add @fastify/cors
```

Once the plugin is installed, you can add it to your app's configuration:

  1. Add a new file in the routes folder called +scope.js
  2. Paste the following code into routes/+scope.js:
routes/+scope.js
```js
import { Server } from "gadget-server";
import cors from "@fastify/cors";

/**
 * Route plugin for *
 *
 * @param { Server } server - server instance to customize, with customizations scoped to descendant paths
 *
 * @see {@link https://www.fastify.dev/docs/latest/Reference/Server}
 */
export default async function (server) {
  await server.register(cors, {
    origin: true, // allow requests from any domain
  });
}
```

This will allow requests from ANY domain! For a production app, you probably want to set the origin option to a specific domain, such as the domain of your store. Read more about CORS in Gadget in our documentation.
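For example, to lock the route down to a single storefront, the origin option in routes/+scope.js could be set to a specific domain. This is a config fragment only; the domain below is a placeholder, so substitute your own store's domain:

```js
await server.register(cors, {
  // only allow requests from this storefront (placeholder domain)
  origin: "https://example-store.myshopify.com",
});
```

The @fastify/cors origin option also accepts an array of domains or a regular expression if your app serves multiple stores.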

Your route is now complete! Now all that is needed is a frontend app that allows shoppers to ask a question and displays the response along with product recommendations.
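Before wiring up the theme extension, you can sanity-check the route from any client-side script. The fetch call below is illustrative (the domain is a placeholder for your app's development domain); the decodeChunks helper shows how the streamed binary chunks become text:

```js
// Sketch of consuming the streamed /chat response on the client.
// Each chunk read from the response body is a Uint8Array; a TextDecoder
// in streaming mode turns the sequence of chunks back into text.
function decodeChunks(chunks) {
  const decoder = new TextDecoder();
  let text = "";
  for (const chunk of chunks) {
    text += decoder.decode(chunk, { stream: true });
  }
  return text + decoder.decode(); // flush any buffered bytes
}

// Hypothetical usage against your app's /chat route:
// const response = await fetch("https://example-app--development.gadget.app/chat", {
//   method: "POST",
//   headers: { "content-type": "application/json" },
//   body: JSON.stringify({ message: "Do you sell snowboards?" }),
// });
// const reader = response.body.getReader();
// const chunks = [];
// while (true) {
//   const { done, value } = await reader.read();
//   if (done) break;
//   chunks.push(value); // render partial text here for a typing effect
// }
// console.log(decodeChunks(chunks));
```

The theme extension's assets/chatbot.js does this same kind of stream reading to display the response as it is generated.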

Step 7: Use Shopify theme app extension to embed chatbot 

We make use of an app embed block for our storefront chatbot theme extension. This means that our extension works with any Shopify theme, both vintage and Online Store 2.0 themes.

  1. Use git to clone the Shopify CLI app
  2. cd to the cloned directory
  3. Run yarn install to install dependencies
  4. Update the direct script tag in extensions/theme-extension/blocks/chatbot.liquid to include your app's script tag URL
Direct script tag
The script tag URL can be found in the Installing section of your app's API Reference.
Your script tag needs --development added to your app-specific subdomain when working on your Development environment:
```html
<script src="https://example-app--development.gadget.app/api/client/web.min.js" defer="defer"></script>
```
  5. Run yarn dev to start your app and connect to your existing Partners app and development store

You should now be set up and ready to add the chatbot theme app extension to the storefront theme.

  1. Navigate to your storefront theme editor and click on App embeds in the left sidebar
  2. Enable your app - the chatbot should appear in your storefront preview
  3. Click Save in the top right corner of the page
Theme extension overview

Theme extensions allow you to create custom blocks that merchants can place in the storefront theme editor. You can read more about theme extensions in the Shopify docs. This chatbot extension is an app embed block, so it works with both vintage and Online Store 2.0 themes.

In this theme extension there are 3 main files:

  • blocks/chatbot.liquid - the main file that renders the chatbot
  • assets/chatbot.js - handles the chatbot logic and calls your Gadget app's API
  • assets/chatbot.css - the CSS file that styles the chatbot

Your chatbot should now be available in your storefront - try it out!

Screenshot of the finished chatbot, with a question entered (asking about snowboards for sale) and a response. The response includes a text response and product recommendations.

Next steps 

You now have a chatbot that can respond to shopper questions with product recommendations! You can continue to build on this app by:

  • customizing the embedded React frontend, found in the frontend folder, to give merchants installation instructions
  • editing the prompt in routes/POST-chat.js to customize the chatbot's response
  • changing the look, feel, and merchant customization options for the chatbot by editing the theme extension files
  • maintaining full chat context by passing messages back and forth between the client and server
    • also maintaining chat context between browser sessions and/or windows by storing the full chat context in the database
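As a sketch of the full-chat-context idea, prior turns can simply be replayed in the messages array sent to the chat completions API. The message shape follows OpenAI's chat API; the helper name is made up for illustration:

```js
// Build the messages array for a follow-up request by replaying the
// conversation so far. "history" would come from the client, or from
// chatLog records in the database for cross-session context.
function buildMessages(systemPrompt, history, newMessage) {
  return [
    { role: "system", content: systemPrompt },
    ...history, // prior { role, content } turns
    { role: "user", content: newMessage },
  ];
}

const messages = buildMessages(
  "You are a helpful shopping assistant.",
  [
    { role: "user", content: "Do you sell snowboards?" },
    { role: "assistant", content: "Yes! Here are a few options..." },
  ],
  "Which one is best for beginners?"
);
console.log(messages.length); // 4
```

The resulting array can be passed as the messages option in routes/POST-chat.js in place of the current two-message array.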

Have questions about the tutorial? Join Gadget's developer Discord to ask Gadget employees and join the Gadget developer community!

Want to learn more about building AI apps in Gadget? Check out our building AI apps documentation.