Build a product recommendation chatbot for a Shopify store using OpenAI and Gadget
Topics covered: Shopify connections, AI + vector embeddings, HTTP routes, React frontends
Time to build: ~30 minutes
Large Language Model (LLM) APIs allow developers to build apps that can understand and generate text.
In this tutorial, you will build a product recommendation chatbot for a Shopify store using OpenAI's API and Gadget. The chatbot will utilize OpenAI's text embedding API to generate vector embeddings for product descriptions, which will then be stored in your Gadget database. These embeddings will help to identify the products that best match a shopper's chat message. With Gadget, you can easily sync data from Shopify, build the backend for generating recommendations, and create the chatbot UI.

To get the most out of this tutorial, you will need:
- A Shopify Partners account
- A development store
- An OpenAI account and API Key that can be used to make requests to the OpenAI API
Step 1: Create a Gadget app and connect to Shopify
Your first step will be to set up a Gadget project and connect to a Shopify store via the Shopify connection. Create a new Gadget application at gadget.new and select the Shopify app template.

Connect to Shopify through the Partners dashboard
To complete this connection, you will need a Shopify Partners account as well as a store or development store.
The first step is to set up a custom Shopify application in the Partners dashboard.
- Go to the Shopify Partners dashboard
- Click on the link to the Apps page
Both the Shopify store Admin and the Shopify Partner Dashboard have an Apps section. Ensure that you are on the Shopify Partner Dashboard before continuing.

- Click the Create App button

- Click the Create app manually button and enter a name for your Shopify app

- Go to Connections under the Plugins page in your Gadget app

- Copy the Client ID and Client secret from your newly created Shopify app and paste the values into the Gadget Connections page
- Click Connect on the Gadget Connections page to move to scope and model selection

Now you can select which Shopify API scopes your application has access to, and which Shopify data models to import into your Gadget app.
- Enable the read scope for the Shopify Products API, and select the underlying Product and Product Image models that we want to import into Gadget

- Click Confirm
Now we want to connect our Gadget app to our custom app in the Partners dashboard.
- In your Shopify app in the Partners dashboard, click on App setup in the side nav bar so you can edit the App URL and Allowed redirection URL(s) fields
- Copy the App URL and Allowed redirection URL from the Gadget Connections page and paste them into your custom Shopify App

Now you can install your app on a store from the Partners dashboard. Do not sync data yet! You're going to add some code to generate vector embeddings for your products before the sync is run.
Step 2: Set up OpenAI connection
Now that you are connected to Shopify, you can set up the OpenAI connection that will be used to fetch embeddings for product descriptions. Gadget provides OpenAI credits for testing while developing; however, you will need to use your own OpenAI API key for this tutorial, as the Gadget-provided credentials are rate-limited.
- Click on Plugins in the nav bar
- Click on the OpenAI connection tile
- Select the Use your own API keys option in the modal that appears
- Paste your OpenAI API key into the Development key field
Your connection is now ready to be used!
Step 3: Add vector field to Shopify Product model
Before you add code to create the embeddings from product descriptions, you need a place to store the generated embeddings. You can add a vector field to the shopifyProduct model to store the embeddings.
The vector field type stores a vector, or array, of floats. It is useful for storing embeddings and allows you to perform vector operations such as cosine similarity, which helps you find the products most similar to a given chat message.
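To make the similarity idea concrete, here is a minimal sketch of how cosine similarity between two embedding vectors can be computed. This is plain JavaScript for intuition only; it is not part of the Gadget API, which performs this comparison for you in the database:

```javascript
// Cosine similarity: the dot product of two vectors divided by the
// product of their magnitudes. Vectors pointing the same way score 1,
// orthogonal vectors score 0.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// identical directions score 1, orthogonal directions score 0
console.log(cosineSimilarity([1, 2, 3], [2, 4, 6])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Real embedding vectors have many more dimensions (1536 for OpenAI's `text-embedding-ada-002`), but the comparison works the same way.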
You are going to use Gadget's built-in OpenAI connection to generate vector embeddings for product descriptions. These embeddings will be used to perform a semantic search to find the products that best match a shopper's chat message.
LLMs, vector embeddings, and LangChain are relatively new technologies, and there are many resources available to learn more about them. Here are some resources to get you started:
- Building AI apps in Gadget
- Sorting by vector fields Gadget API docs
- OpenAI docs
- LangChain JS docs
To add a vector field to the shopifyProduct model:
- Go to the `shopifyProduct` model in the navigation bar
- Click on + in the FIELDS section to add a new field
- Name the field `descriptionEmbedding`
- Set the field type to vector

Now you are set up to store embeddings for products! The next step is adding code to generate these embeddings.
Step 4: Write code effect to create vector embedding
Now you can add some code to create vector embeddings for all products in your store. You will want to run this code when Shopify fires a `products/create` or `products/update` webhook. To do this, you will create a code effect that runs when a Shopify Product is created or updated.
- Go to the `shopifyProduct` model in the navigation bar
- Click on the create action in the ACTIONS section
- Paste the following code into `shopifyProduct/actions/create.js` to update the `onSuccess` function:
shopifyProduct/actions/create.js

```js
import { applyParams, preventCrossShopDataAccess, save, ActionOptions, CreateShopifyProductActionContext } from "gadget-server";

/**
 * @param { CreateShopifyProductActionContext } context
 */
export async function run({ params, record, logger, api }) {
  applyParams(params, record);
  await preventCrossShopDataAccess(params, record);
  await save(record);
}

/**
 * @param { CreateShopifyProductActionContext } context
 */
export async function onSuccess({ params, record, logger, api, connections }) {
  // only run if the product does not have an embedding, or if the title or body have changed
  if (!record.descriptionEmbedding || record.changed("title") || record.changed("body")) {
    try {
      // get an embedding for the product title + description using the OpenAI connection
      const response = await connections.openai.embeddings.create({
        input: `${record.title}: ${record.body}`,
        model: "text-embedding-ada-002",
      });
      const embedding = response.data[0].embedding;

      // write to the Gadget Logs
      logger.info({ id: record.id }, "got product embedding");

      // use the internal API to store vector embedding in Gadget database, on shopifyProduct model
      await api.internal.shopifyProduct.update(record.id, { shopifyProduct: { descriptionEmbedding: embedding } });
    } catch (error) {
      logger.error({ error }, "error creating embedding");
    }
  }
}

/** @type { ActionOptions } */
export const options = {
  actionType: "create",
};
```
In this snippet, the OpenAI connection is accessed through `connections.openai` and the `embeddings.create()` API is called.

The internal API is used in the `onSuccess` function to update the shopifyProduct model and set the `descriptionEmbedding` field. The internal API must be used because the shopifyProduct model does not have the Gadget API set as a trigger on this action by default. You can read more about the internal API in the Gadget docs.
To also run this code when a product is updated:
- Create a new file: `shopifyProduct/utils.js`
- Copy the contents of `onSuccess` from `shopifyProduct/actions/create.js` into a `createProductEmbedding` function:
shopifyProduct/utils.js

```js
export const createProductEmbedding = async ({ record, api, logger, connections }) => {
  // contents of onSuccess function from shopifyProduct/actions/create.js
};
```
- Import `createProductEmbedding` into `shopifyProduct/actions/create.js` and `shopifyProduct/actions/update.js`
- Call `createProductEmbedding` from within the `onSuccess` functions inside `shopifyProduct/actions/create.js` and `shopifyProduct/actions/update.js`:
shopifyProduct/actions/create.js and shopifyProduct/actions/update.js

```js
import { createProductEmbedding } from "../utils";

/**
 * @param { CreateShopifyProductActionContext } context
 */
export async function onSuccess({ params, record, logger, api, connections }) {
  await createProductEmbedding({ record, api, logger, connections });
}
```
Generate embeddings for existing products
Now that the code is in place to generate vector embeddings for products, you can sync existing Shopify products into your Gadget app's database. To do this:
- Go to the Connections page in the navigation bar
- Click on the Shopify connection
- Click on Shop Installs for the connection

- Click on the Sync button for the store you want to sync products from

Product and product image data will be synced from Shopify to your Gadget app's database. The code effect you added will run for each product and generate a vector embedding. You can see these embeddings by going to the Data page for the `shopifyProduct` model; they are stored in the `descriptionEmbedding` field.

Step 5: Add HTTP route to handle incoming chat messages
To complete your app backend, you will use cosine similarity on the stored vector embeddings to find products that are closely related to a shopper's query. These products, along with a prompt, will be passed to LangChain, which will use an OpenAI model to respond to the shopper's question. The response will also include product information so the frontend can display the recommended products and link to their store pages.
You will also stream the response from LangChain to the shopper's chat window. This will allow you to show the shopper that the chatbot is typing while it is generating a response.
Install LangChain and zod npm packages
To start, install the `langchain` and `zod` npm packages. The `zod` package will be used to provide a parser to LangChain for reliably extracting structured data from the LLM response.
- Open the Gadget command palette using Cmd/Ctrl + P
- Enter `>` in the palette to allow you to run yarn commands
- Run the following command to install the LangChain client and `zod`:

```
yarn add langchain zod
```
Add code to handle incoming chat messages
Now you are ready to add some more code. You will start by adding a new HTTP route to handle incoming chat messages. To add a new HTTP route to your Gadget backend:
- Hover over the `routes` folder in the FILES explorer and click on + to create a new file
- Name the file `POST-chat.js`

Your app now has a new HTTP route that will be triggered when a POST request is made to `/chat`. You can add code to this file to handle incoming chat messages.
This is the complete code file for `POST-chat.js`. You can copy and paste this code into the file you just created. A step-by-step explanation of the code is below.
routes/POST-chat.js

```js
import { RouteContext } from "gadget-server";
import { Readable } from "stream";
import { z } from "zod";
import { ConsoleCallbackHandler } from "langchain/callbacks";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { StructuredOutputParser } from "langchain/output_parsers";

// a parser for the specific kind of response we want from the LLM
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z
      .string()
      .describe("answer to the user's question, not including any product IDs, and only using product titles and descriptions"),
    productIds: z
      .array(z.string())
      .describe(
        "IDs from input product JSON objects for the user to purchase, formatted as an array, or omitted if no products are applicable"
      ),
  })
);

const prompt = new PromptTemplate({
  template: `You are a helpful shopping assistant trying to match customers with the right product. You will be given a question from a customer and then maybe some JSON objects with the id, title, and description of products available for sale that roughly match the customer's question. Reply to the question suggesting which products to buy. Only use the product titles and descriptions in your response, do not use the product IDs in your response. If you are unsure or if the question seems unrelated to shopping, say "Sorry, I don't know how to help with that", and include some suggestions for better questions to ask. {format_instructions}
  Products: {products}

  Question: {question}`,
  inputVariables: ["question", "products"],
  partialVariables: { format_instructions: parser.getFormatInstructions() },
});

/**
 * Route handler for POST chat
 *
 * @param { RouteContext } route context - see: https://docs.gadget.dev/guides/http-routes/route-configuration#route-context
 */
export default async function route({ request, reply, api, logger, connections }) {
  const model = new OpenAI({
    temperature: 0,
    openAIApiKey: connections.openai.configuration.apiKey,
    configuration: {
      basePath: connections.openai.configuration.baseURL,
    },
    streaming: true,
  });

  const chain = new LLMChain({ llm: model, prompt, outputParser: parser });

  // embed the incoming message from the user
  const response = await connections.openai.embeddings.create({ input: request.body.message, model: "text-embedding-ada-002" });

  // find similar product descriptions
  const products = await api.shopifyProduct.findMany({
    sort: {
      descriptionEmbedding: {
        cosineSimilarityTo: response.data[0].embedding,
      },
    },
    first: 4,
  });

  // capture products in Gadget's Logs
  logger.info({ products, message: request.body.message }, "found products most similar to user input");

  // JSON-stringify the structured product data to pass to the LLM
  const productString = products
    .map((product) =>
      JSON.stringify({
        id: product.id,
        title: product.title,
        description: product.body,
      })
    )
    .join("\n");

  // set up a new stream for returning the response from OpenAI
  // any data added to the stream will be streamed from Gadget to the route caller
  // in this case, the route caller is the frontend
  const stream = new Readable({ read() {} });

  try {
    // start to return the stream immediately
    await reply.send(stream);

    let tokenText = "";
    // invoke the chain and add the streamed response tokens to the Readable stream
    const resp = await chain.call({ question: request.body.message, products: productString }, [
      new ConsoleCallbackHandler(),
      {
        // as the response is streamed in from OpenAI, stream it to the Gadget frontend
        handleLLMNewToken: (token) => {
          tokenText += token;
          // parse out some of the response formatting tokens
          if (tokenText.includes('"answer": "') && !tokenText.includes('",\n')) {
            stream.push(token);
          }
        },
      },
    ]);

    // grab the complete response to store records in Chat Log model
    const { answer, productIds } = resp.text;

    // select all the details of the recommended product if one was selected
    let selectedProducts = undefined;
    if (productIds) {
      try {
        selectedProducts = await api.shopifyProduct.findMany({
          select: {
            title: true,
            handle: true,
            images: {
              edges: {
                node: {
                  id: true,
                  source: true,
                },
              },
            },
            shop: {
              domain: true,
            },
          },
          filter: {
            id: {
              in: productIds,
            },
          },
        });

        // only return a single image!
        selectedProducts.forEach((product) => {
          if (product.images.edges.length > 1) {
            product.images.edges.splice(1);
          }
        });
      } catch (error) {
        logger.error({ error }, "error fetching data for selected product");

        // destroy the stream and push error message
        stream.destroy(error);
      }
    }

    logger.info({ answer, selectedProducts }, "answer and products being sent to the frontend for display");

    // send the selected product to the stream
    stream.push(JSON.stringify({ products: selectedProducts }));
    // close the stream
    stream.push(null);
  } catch (error) {
    // log error to Gadget Logs
    logger.error({ error: String(error) }, "error getting chat completion");

    // destroy the stream and push error message
    stream.destroy(error);
  }
}

route.options = {
  schema: {
    body: {
      type: "object",
      properties: {
        message: {
          type: "string",
        },
      },
      required: ["message"],
    },
  },
};
```
Step-by-step instructions for building this route are below.
Set up LangChain
The first thing you need to do when building this route is to set up LangChain. LangChain needs a couple of things defined before it can be used to respond to a user's chat message, including a parser to format the response from OpenAI, a prompt template that contains some text defining the purpose of the prompt and variables that will be passed in, and finally, the OpenAI model that will be used.
Begin by setting up a `StructuredOutputParser`. This parser will format the response from OpenAI into a structured JSON object that the frontend can use to display the response to the user. The parser uses `zod` to structure the response, which will consist of a string `answer` and an array of product IDs, `productIds`, to recommend to shoppers.
routes/POST-chat.js

```js
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";

// a parser for the specific kind of response we want from the LLM
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z
      .string()
      .describe("answer to the user's question, not including any product IDs, and only using product titles and descriptions"),
    productIds: z
      .array(z.string())
      .describe(
        "IDs from input product JSON objects for the user to purchase, formatted as an array, or omitted if no products are applicable"
      ),
  })
);
```
Now that you have a parser, you can set up the prompt template. The prompt template is a string containing the text that will be used to prompt the OpenAI model to respond to the user's chat message. It can also include variables that are passed in when the prompt is invoked. In this case, the template includes a variable for the user's `question` and a variable for the `products` that will be passed in as initial recommendations. Formatting instructions created with the parser are also passed into the prompt.
routes/POST-chat.js

```js
/** additional imports */
import { PromptTemplate } from "langchain/prompts";

/** parser definition */

const prompt = new PromptTemplate({
  template: `You are a helpful shopping assistant trying to match customers with the right product. You will be given a question from a customer and then maybe some JSON objects with the id, title, and description of products available for sale that roughly match the customer's question. Reply to the question suggesting which products to buy. Only use the product titles and descriptions in your response, do not use the product IDs in your response. If you are unsure or if the question seems unrelated to shopping, say "Sorry, I don't know how to help with that", and include some suggestions for better questions to ask. {format_instructions}
  Products: {products}

  Question: {question}`,
  inputVariables: ["question", "products"],
  partialVariables: { format_instructions: parser.getFormatInstructions() },
});
```
Once the parser and prompt are both defined, you can set up LangChain's OpenAI model and the chain that will be called in the route.
routes/POST-chat.js

```js
/** additional imports */
import { OpenAI } from "langchain/llms/openai";
import { LLMChain } from "langchain/chains";

/**
 * Route handler for POST chat
 *
 * @param { RouteContext } route context - see: https://docs.gadget.dev/guides/http-routes/route-configuration#route-context
 */
export default async function route({ request, reply, api, logger, connections }) {
  /** parser and prompt definition */
  const model = new OpenAI({
    temperature: 0,
    openAIApiKey: connections.openai.configuration.apiKey,
    configuration: {
      basePath: connections.openai.configuration.baseURL,
    },
    streaming: true,
  });

  const chain = new LLMChain({ llm: model, prompt, outputParser: parser });
}
```
The `temperature` parameter is set to 0 to make the response from OpenAI as deterministic as possible. The `streaming` parameter is set to `true` so the response from OpenAI is streamed in as it is generated, rather than waiting for the entire response before returning it. The chain is defined using the model, prompt, and parser.
The `OpenAI` model and `LLMChain` are used in this tutorial as an example of how to use chains and prompt templates with LangChain. LangChain has a variety of different models, including chat-specific models, that might be worth investigating for your app.
Define route parameters
Now that LangChain is set up, you can define the route parameters. The route will accept a `message` from the shopper as part of the request body. The `message` will be passed into the prompt template.

To define this `message` parameter, use the `schema` option in the route module's `options` object. The `schema` option defines the JSON schema for the request body.
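The effect of this schema can be illustrated with a hand-rolled check. Fastify, which powers Gadget routes, performs this validation for you before your handler runs, so this sketch is for intuition only:

```javascript
// Hand-rolled illustration of what the route's JSON schema enforces:
// the body must be an object with a required string "message" property.
function validateChatBody(body) {
  if (typeof body !== "object" || body === null) return false;
  if (typeof body.message !== "string") return false;
  return true;
}

console.log(validateChatBody({ message: "red shoes?" })); // true
console.log(validateChatBody({})); // false — message is required
console.log(validateChatBody(null)); // false — body must be an object
```

Requests that fail validation are rejected with a 400 response before your route code runs.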
Define the parameter at the bottom of `routes/POST-chat.js`:
routes/POST-chat.js

```js
/** imports */

/** parser, prompt, model, and chain definition */

export default async function route({ request, reply, api, logger, connections }) {
  // route code
}

route.options = {
  schema: {
    body: {
      type: "object",
      properties: {
        message: {
          type: "string",
        },
      },
      required: ["message"],
    },
  },
};
```
Now you can start to write the actual route code that runs when a shopper asks a question.
Find similar products
The first thing your route does is create an embedding vector for the shopper's question and use that vector to find similar products. The question is embedded with the same `connections.openai.embeddings.create()` call used earlier in the product actions.
Gadget includes a `cosineSimilarityTo` operator that can be used to sort the results of a read query by cosine similarity to a given vector. It is used in the `sort` parameter of the `findMany` query to sort results by cosine similarity to the embedded message, and the `first` parameter limits the number of results to 4. In other words, the query returns the 4 products most similar to the shopper's question.
routes/POST-chat.js

```js
/** imports and chain setup */

/**
 * Route handler for POST /chat
 *
 * @param { import("gadget-server").RouteContext } request context - Everything for handling this route, like the api client, Fastify request, Fastify reply, etc. More on effect context: https://docs.gadget.dev/guides/extending-with-code#effect-context
 *
 * @see {@link https://www.fastify.dev/docs/latest/Reference/Request}
 * @see {@link https://www.fastify.dev/docs/latest/Reference/Reply}
 */
export default async function route({ request, reply, api, logger, connections }) {
  // embed the incoming message from the user
  const response = await connections.openai.embeddings.create({ input: request.body.message, model: "text-embedding-ada-002" });

  // find similar product descriptions
  const products = await api.shopifyProduct.findMany({
    sort: {
      descriptionEmbedding: {
        cosineSimilarityTo: response.data[0].embedding,
      },
    },
    first: 4,
  });

  // capture products in Gadget's Logs
  logger.info({ products, message: request.body.message }, "found products most similar to user input");
}
```
A product string is then created from the list of returned products:
routes/POST-chat.js

```js
// JSON-stringify the structured product data to pass to the LLM
const productString = products
  .map((product) =>
    JSON.stringify({
      id: product.id,
      title: product.title,
      description: product.body,
    })
  )
  .join("\n");
```
You are now ready to invoke the chain and stream the response back to the shopper.
Stream response from LangChain
Now that the chain is defined, you can invoke it and stream the response back to the shopper. The chain's `call` method is used to invoke it, and takes two parameters: an object containing the input variables for the prompt, and an array of callback handlers.
You define a new `Readable` stream and immediately send it as a response using `await reply.send(stream)`. Once you have done this, any additional data pushed to the stream will be sent back to the route caller.
Finally, `chain.call()` is invoked to generate a response to the shopper's question. It takes `request.body.message` and `productString` as input for the shopper's question and the products whose descriptions most closely match the question, respectively.
The `ConsoleCallbackHandler` outputs the response from LangChain to the Gadget Logs, so you can see the exact input and output from LangChain. An additional callback is defined using `handleLLMNewToken` to stream the response from LangChain to the shopper. The `handleLLMNewToken` callback is invoked every time LangChain generates a new token. Each `token` contains a fragment of the complete response, which is pushed to the stream and returned to the shopper.
Some additional token handling is also done using `tokenText`. The `parser` defines a JSON object for the response, and you do not want to push the JSON object keys to the shopper, so they are filtered out using `tokenText.includes('"answer": "') && !tokenText.includes('",\n')`. Only tokens that pass this check are pushed to the stream.
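To see what this filter does, here is a standalone sketch that replays a plausible token sequence through the same logic. The token strings are made up for illustration; real tokenization will split the text differently:

```javascript
// Replays a hypothetical streamed LLM response through the same filtering
// logic used in handleLLMNewToken. Tokens are pushed only while the stream
// is inside the "answer" string of the JSON response.
function filterAnswerTokens(tokens) {
  let tokenText = "";
  const pushed = [];
  for (const token of tokens) {
    tokenText += token;
    // same condition as the route: start pushing once the answer key has
    // opened, stop once its closing quote-and-comma has streamed in
    if (tokenText.includes('"answer": "') && !tokenText.includes('",\n')) {
      pushed.push(token);
    }
  }
  return pushed.join("");
}

// hypothetical token stream for: {"answer": "Try the blue mug",\n"productIds": ...}
const tokens = ['{', '"answer', '": "', 'Try the blue mug', '"', ',\n', '"productIds": []}'];
console.log(filterAnswerTokens(tokens)); // '": "Try the blue mug"'
```

Note that the filter is approximate: a few structural characters (the residual quotes around the answer) still slip through, which is why the frontend later strips stray `"` and `,` characters from the streamed reply.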
The stream is closed using `stream.push(null)`. In the case of an error, `stream.destroy()` is called to close the stream.
routes/POST-chat.js

```js
/** additional imports */
import { Readable } from "stream";
import { ConsoleCallbackHandler } from "langchain/callbacks";

/** chain setup */

/**
 * Route handler for POST /chat
 *
 * @param { import("gadget-server").RouteContext } request context - Everything for handling this route, like the api client, Fastify request, Fastify reply, etc. More on effect context: https://docs.gadget.dev/guides/extending-with-code#effect-context
 *
 * @see {@link https://www.fastify.dev/docs/latest/Reference/Request}
 * @see {@link https://www.fastify.dev/docs/latest/Reference/Reply}
 */
export default async function route({ request, reply, api, logger, connections }) {
  /** find recommended products using embeddings */

  // set up a new stream for returning the response from OpenAI
  // any data added to the stream will be streamed from Gadget to the route caller
  // in this case, the route caller is the frontend
  const stream = new Readable({ read() {} });

  try {
    // start to return the stream immediately
    await reply.send(stream);

    let tokenText = "";
    // invoke the chain and add the streamed response tokens to the Readable stream
    const resp = await chain.call({ question: request.body.message, products: productString }, [
      new ConsoleCallbackHandler(),
      {
        // as the response is streamed in from OpenAI, stream it to the Gadget frontend
        handleLLMNewToken: (token) => {
          tokenText += token;
          // parse out some of the response formatting tokens
          if (tokenText.includes('"answer": "') && !tokenText.includes('",\n')) {
            stream.push(token);
          }
        },
      },
    ]);

    // close the stream
    stream.push(null);
  } catch (error) {
    // log error to Gadget Logs
    logger.error({ error: String(error) }, "error getting chat completion");

    // destroy the stream and push error message
    stream.destroy(error);
  }
}
```
This will stream the entire chat response back to the shopper. But this isn't all you want to return: you also want additional product info so you can display product listings and link to the products from your frontend. To do this, you can use the `productIds` returned from the chain. Not all product IDs that were passed in will be used in the response, so it is important to use the IDs returned from the chain rather than the IDs fetched by the cosine similarity operation.
The returned `productIds` can be used as a filter on the `shopifyProduct` model to grab the product details for the recommended products. The `await api.shopifyProduct.findMany()` call is a read operation that returns the fields defined in the `select` GraphQL query: the product title and handle, the product image source, and the product's shop domain. The `filter` option restricts the results to products whose ID is in the `productIds` array.
Once the product details are returned, they are also sent to the route's caller via the stream.
routes/POST-chat.js

```js
/** imports and chain definition */
export default async function route({ request, reply, api, logger, connections }) {
  /** find recommended products using embeddings */

  /** invoke chain and stream response */

  // grab the complete response to store records in Chat Log model
  const { answer, productIds } = resp.text;

  // select all the details of the recommended product if one was selected
  let selectedProducts = undefined;
  if (productIds) {
    try {
      selectedProducts = await api.shopifyProduct.findMany({
        select: {
          title: true,
          handle: true,
          images: {
            edges: {
              node: {
                id: true,
                source: true,
              },
            },
          },
          shop: {
            domain: true,
          },
        },
        filter: {
          id: {
            in: productIds,
          },
        },
      });

      // only return a single image!
      selectedProducts.forEach((product) => {
        if (product.images.edges.length > 1) {
          product.images.edges.splice(1);
        }
      });
    } catch (error) {
      logger.error({ error }, "error fetching data for selected product");

      // destroy the stream and push error message
      stream.destroy(error);
    }
  }

  logger.info({ selectedProducts }, "products being sent to the frontend for display");

  // send the selected product to the stream
  stream.push(JSON.stringify({ products: selectedProducts }));

  /** close the stream + catch statement */
}
```
Your route is now complete! Now all that is needed is a frontend app that allows shoppers to ask a question and displays the response along with product recommendations. You're going to use Gadget's hosted React frontends to build this UI.
Step 6: Build a frontend
All Gadget apps come with hosted Vite frontends that can be used to build your UI. You can use these frontends as a starting point for your UI, or you can build your UI from scratch. For this tutorial, you're going to use the hosted React frontend to build a chat widget that can be embedded on any page of your Shopify store.
Build the chat widget
In your Gadget app's `frontend` folder, create a new file called `Chat.jsx`. This file will contain the React component for the chat widget. The complete code for this file is below; a step-by-step walkthrough of each piece follows in the next sections.
frontend/Chat.jsx

```jsx
import { useState } from "react";
import "./App.css";
import { useFetch } from "@gadgetinc/react";

export const Chat = () => {
  const [userMessage, setUserMessage] = useState("");
  const [reply, setReply] = useState("");
  const [productRecommendations, setProductRecommendations] = useState(null);
  const [errorMessage, setErrorMessage] = useState("");

  const [{ data, fetching, error }, sendChat] = useFetch("/chat", {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    stream: true,
  });

  return (
    <section>
      <form
        onSubmit={async (e) => {
          e.preventDefault();
          setReply("");
          setProductRecommendations(null);
          setErrorMessage("");

          const stream = await sendChat({
            body: JSON.stringify({ message: userMessage }),
          });

          const decodedStreamReader = stream.pipeThrough(new TextDecoderStream()).getReader();

          // handle any stream errors
          decodedStreamReader.closed.catch((error) => {
            setErrorMessage(error.toString());
          });

          let replyText = "";
          let done = false;
          while (!done) {
            const { value, done: doneReading } = await decodedStreamReader.read();

            done = doneReading;

            // handle the recommended products
            if (value?.includes(`{"products":`)) {
              setProductRecommendations(JSON.parse(value));
            } else if (value) {
              replyText = replyText + value;
              replyText = replyText.replace('"', "").replace(",", "");
              setReply(replyText);
            }
          }
        }}
      >
        <textarea
          placeholder="Ask a question about this shop's products ...."
          value={userMessage}
          onChange={(event) => setUserMessage(event.currentTarget.value)}
        />
        <input type="submit" value="Ask" disabled={fetching} />
      </form>
      <br />

      {errorMessage && (
        <section>
          <pre>
            <code>{errorMessage}</code>
          </pre>
        </section>
      )}

      {reply && (
        <section>
          <p>{reply}</p>
          <br />
          <div>
            {productRecommendations?.products ? (
              productRecommendations.products.map((product, i) => (
                <a key={`${i}_${product.title}`} href={"https://" + product.shop.domain + "/products/" + product.handle} target="_blank">
                  {product.title}
                  {product.images.edges[0] && (
                    <img style={{ border: "1px black solid" }} width="200px" src={product.images.edges[0].node.source} />
                  )}
                </a>
              ))
            ) : (
              <span>Loading recommendations...</span>
            )}
          </div>
        </section>
      )}
      {fetching && <span>Thinking...</span>}
      {error && <p className="error">There was an error: {String(error)}</p>}
    </section>
  );
};
```
Step-by-step chat widget build
The first thing to set up when building the chat widget is the `useFetch` hook. This hook is provided by Gadget and is used to make requests to the backend HTTP route. The `useFetch` hook takes two arguments: the backend route to call and an options object that configures the request. In this case, you're setting the request method to `POST` and the `content-type` header to `application/json`. You're also setting the `stream` option to `true`, which tells the `useFetch` hook to return a stream that can be used to read the response from the backend route. The `useFetch` hook returns an object containing response and fetching info, along with a function that can be called to make the actual request to your `/chat` route. This function is named `sendChat`.
`frontend/Chat.jsx`

```jsx
import { useState } from "react";
import "./App.css";
import { useFetch } from "@gadgetinc/react";

export const Chat = () => {
  const [userMessage, setUserMessage] = useState("");
  const [reply, setReply] = useState("");
  const [productRecommendations, setProductRecommendations] = useState(null);
  const [errorMessage, setErrorMessage] = useState("");

  // Gadget's useFetch hook is used to make requests to the backend HTTP route
  const [{ data, fetching, error }, sendChat] = useFetch("/chat", {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    stream: true,
  });

  return <div>hello, world</div>;
};
```
React state is also defined above to manage the text that the shopper enters into the chat widget, as well as the streamed chat response, recommended product info, and any error message. The next thing to add is a `<form>` that makes use of this state. A shopper will use the form to ask a question, and the form's `onSubmit` callback will handle the streamed response from the backend route using the browser's built-in Streams API.
`frontend/Chat.jsx`

```jsx
<form
  onSubmit={async (e) => {
    e.preventDefault();

    // remove any previous messages and product recommendations
    setReply("");
    setProductRecommendations(null);

    // send the user's message to the backend route
    // the response will be streamed back to the frontend
    const stream = await sendChat({
      body: JSON.stringify({ message: userMessage }),
    });

    // decode the streamed response
    const decodedStreamReader = stream.pipeThrough(new TextDecoderStream()).getReader();

    // handle any stream errors
    decodedStreamReader.closed.catch((error) => {
      setErrorMessage(error.toString());
    });

    let replyText = "";
    let done = false;

    // read the response from the stream
    while (!done) {
      const { value, done: doneReading } = await decodedStreamReader.read();

      done = doneReading;

      // handle the recommended products that are returned from the stream
      if (value?.includes(`{"products":`)) {
        setProductRecommendations(JSON.parse(value));
      } else if (value) {
        // handle the chat response
        replyText = replyText + value;
        replyText = replyText.replace('"', "").replace(",", "");
        setReply(replyText);
      }
    }
  }}
>
  <textarea
    placeholder="Ask a question about this shop's products ...."
    value={userMessage}
    onChange={(event) => setUserMessage(event.currentTarget.value)}
  />
  <input type="submit" value="Ask" disabled={fetching} />
</form>
```
This code sends the user's message to the backend route and then reads the response from the stream. The stream delivers the chat response as it is generated, followed by a JSON string containing any recommended products. The `while` loop reads chunks from the stream until it is done, updating the chat response and recommended products in React state as they arrive.
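The same chunk-handling logic can be sketched outside of React. The sketch below, runnable in Node 18+ (which provides `ReadableStream` and `TextDecoderStream` as globals), uses a fake backend stream that mimics the `/chat` route's response shape: text chunks first, then a JSON trailer with product recommendations. The product data here is made up for illustration.

```javascript
// A fake backend stream mimicking the /chat route: streamed text
// chunks followed by a JSON trailer of product recommendations.
const encoder = new TextEncoder();

const makeFakeChatStream = () =>
  new ReadableStream({
    start(controller) {
      controller.enqueue(encoder.encode("Try our "));
      controller.enqueue(encoder.encode("blue t-shirt!"));
      controller.enqueue(encoder.encode(`{"products": [{"title": "Blue T-Shirt"}]}`));
      controller.close();
    },
  });

const readChatStream = async (stream) => {
  // decode the byte stream into text chunks
  const reader = stream.pipeThrough(new TextDecoderStream()).getReader();
  let replyText = "";
  let products = null;
  let done = false;

  while (!done) {
    const { value, done: doneReading } = await reader.read();
    done = doneReading;

    if (value?.includes(`{"products":`)) {
      // the final chunk holds the JSON product recommendations
      products = JSON.parse(value);
    } else if (value) {
      // everything else is streamed chat text
      replyText += value;
    }
  }

  return { replyText, products };
};

readChatStream(makeFakeChatStream()).then(({ replyText, products }) => {
  console.log(replyText); // → Try our blue t-shirt!
  console.log(products.products[0].title); // → Blue T-Shirt
});
```

In the real widget, the accumulated text and parsed products are pushed into React state instead of being returned, but the read loop is the same.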
The next thing to do is render the chat response and recommended products in the chat widget. The chat response is rendered in a `<p>` tag, and each recommended product is rendered in an `<a>` tag that links to the product's page on the shop's storefront.
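For illustration, the link construction can be pulled out into a small helper. `productUrl` is a hypothetical function, not part of the tutorial's code; it just mirrors the `href` expression used in the component, reading the same `shop.domain` and `handle` fields.

```javascript
// Hypothetical helper mirroring the component's href expression:
// builds a storefront product URL from the shop domain and product handle.
const productUrl = (product) => "https://" + product.shop.domain + "/products/" + product.handle;

const url = productUrl({
  handle: "blue-t-shirt",
  shop: { domain: "example-store.myshopify.com" },
});

console.log(url); // → https://example-store.myshopify.com/products/blue-t-shirt
```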
`frontend/Chat.jsx`

```jsx
{reply && (
  <section>
    <p>{reply}</p>
    <br />
    <div>
      {productRecommendations?.products ? (
        productRecommendations.products.map((product, i) => (
          <a
            key={`${i}_${product.title}`}
            href={"https://" + product.shop.domain + "/products/" + product.handle}
            target="_blank"
          >
            {product.title}
            {product.images.edges[0] && (
              <img style={{ border: "1px black solid" }} width="200px" src={product.images.edges[0].node.source} />
            )}
          </a>
        ))
      ) : (
        <span>Loading recommendations...</span>
      )}
    </div>
  </section>
)}
```
Adding loading and error messaging is also nice. This can be done by rendering a `<span>` with a loading message while `fetching` is `true`, and a `<p>` with an error message when `error` is set.
`frontend/Chat.jsx`

```jsx
return (
  <section>
    {/** form and recommendations */}
    {fetching && <span>Thinking...</span>}
    {error && <p className="error">There was an error: {String(error)}</p>}
  </section>
);
```
Hook up chat widget to frontend project
Now you can add the chat widget to your app's frontend. Because this is not an embedded Shopify app, you can simplify `frontend/App.jsx` with the following code that imports the chat widget and renders it at your app's default route:
`frontend/App.jsx`

```jsx
import "./App.css";
import { Route, Routes } from "react-router-dom";
import { Chat } from "./Chat";

const App = () => (
  <main>
    <header>
      <a href="https://gadget.dev" className="logo">
        <img src="https://assets.gadget.dev/assets/icon.svg" height="52" alt="Gadget" />
      </a>
      <h1>AI Product Recommender Chatbot</h1>
    </header>
    <Routes>
      <Route path="/" element={<Chat />} />
    </Routes>
    <br />
    <footer>
      Running in <span className={window.gadgetConfig.environment}>{window.gadgetConfig.environment}</span>
    </footer>
  </main>
);

export default App;
```
This isn't an admin-embedded Shopify app, so you can use the `Provider` from `@gadgetinc/react` instead of the App Bridge Provider. The `Provider` enables you to make authenticated requests with your Gadget app's `api` client defined in `frontend/api.js`.
You can overwrite the default code in `frontend/main.jsx` with the following:
`frontend/main.jsx`

```jsx
import { Provider } from "@gadgetinc/react";
import React from "react";
import ReactDOM from "react-dom/client";
import { BrowserRouter } from "react-router-dom";

import { api } from "./api";
import App from "./App";

const root = document.getElementById("root");
if (!root) throw new Error("#root element not found for booting react app");

ReactDOM.createRoot(root).render(
  <React.StrictMode>
    <Provider api={api}>
      <BrowserRouter>
        <App />
      </BrowserRouter>
    </Provider>
  </React.StrictMode>
);
```
Finally, add some styles to make the chat widget look nice. You can copy-paste the following CSS into `frontend/App.css`.
`frontend/App.css`

```css
body {
  background: #f3f3f3;
  color: #252525;
  padding: 55px;
  line-height: 1.5;
  font-family: "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
  max-width: 60%;
  margin: auto;
}

main {
  flex: 1 0 auto;
  align-items: center;
  justify-content: center;
}

h1 {
  font-size: 32px;
  margin: 24px 0;
  max-width: none;
}
h2 {
  font-size: 24px;
  max-width: none;
}
h3 {
  font-size: 16px;
  max-width: none;
}

table {
  border-collapse: collapse;
  margin: 7px 0px;
}

td {
  padding: 7px 6px;
}

header {
  display: flex;
  flex-direction: row;
  align-items: center;
  gap: 1em;
  margin-bottom: 2em;
}

.logo {
  display: inline-block;
}

.Development {
  color: #87550b;
}

.Production {
  color: #5d39bb;
}

form {
  display: flex;
  flex-direction: column;
  align-items: center;
}

form > input {
  width: 200px;
}

code {
  font-family: "SFMono-Regular", Consolas, "Liberation Mono", Menlo, Courier, monospace;
  font-size: 0.95em;
  font-weight: bold;
}

a {
  color: currentColor;
}

a:hover {
  text-decoration: none;
}

:focus {
  outline: 1px dashed currentColor;
  outline-offset: 0.15rem;
}

textarea {
  min-width: 500px;
  min-height: 100px;
  margin-bottom: 16px;
}

section > div {
  display: flex;
  flex-direction: row;
  justify-content: space-evenly;
}

section > img {
  border: black solid 1px;
}

section > div > a {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: space-between;
  width: 50%;
}
```
Step 7: Test out your chatbot
To view your app:
- Click on your app name in the top left corner of the Gadget dashboard
- Hover over Go to app and select your Development app
Congrats! You have built a chatbot that uses the OpenAI API to find the best products to recommend to shoppers. Test it out by asking a question. The chatbot should respond with a list of products that are relevant to the question.

Next steps
Have questions about the tutorial? Join Gadget's developer Discord to ask Gadget employees and join the Gadget developer community!
Want to learn more about building AI apps in Gadget? Check out our building AI apps documentation.