Build a product recommendation chatbot for a Shopify store using OpenAI and Gadget 

Topics covered: Shopify connections, AI + vector embeddings, HTTP routes, React frontends
Time to build: ~30 minutes

Large Language Model (LLM) APIs allow developers to build apps that can understand and generate text.

In this tutorial, you will build a product recommendation chatbot for a Shopify store using OpenAI's API and Gadget. The chatbot will utilize OpenAI's text embedding API to generate vector embeddings for product descriptions, which will then be stored in your Gadget database. These embeddings will help to identify the products that best match a shopper's chat message. With Gadget, you can easily sync data from Shopify, build the backend for generating recommendations, and create the chatbot UI.

A screenshot of the chatbot in action. The shopper asks about backpacks, and the bot responds with 3 suggestions. The suggested product titles and images are then displayed below the chatbot's response
Requirements

To get the most out of this tutorial, you will need:

You can fork this Gadget project and try it out yourself.

You will still need to set up the Shopify Connection after forking. Read on to learn how to connect Gadget to a Shopify store!

Fork on Gadget

Step 1: Create a Gadget app and connect to Shopify 

Your first step will be to set up a Gadget project and connect to a Shopify store via the Shopify connection. Create a new Gadget application at gadget.new and select the Shopify app template.

A screenshot of the Shopify app template tile selected on the new app modal, with a domain entered

Connect to Shopify through the Partners dashboard 

Requirements

To complete this connection, you will need a Shopify Partners account as well as a store or development store

Our first step is to set up a custom Shopify application in the Partners dashboard.

Both the Shopify store Admin and the Shopify Partner Dashboard have an Apps section. Ensure that you are on the Shopify Partner Dashboard before continuing.

  • Click on the Apps link in the Shopify Partners Dashboard
  • Click the Create App button
Click on Create app button
  • Click the Create app manually button and enter a name for your Shopify app
Shopify's app creation landing page in the Partners Dashboard
  • Go to Connections under the Plugins page in your Gadget app
The Gadget homescreen, with the Connections link highlighted
  • Copy the Client ID and Client secret from your newly created Shopify app and paste the values into the Gadget Connections page
Screenshot of the Partners card selected on the Connections page
  • Click Connect on the Gadget Connections page to move to scope and model selection

Now we select which Shopify scopes to grant our application, and pick which Shopify data models to import into our Gadget app.

  • Enable the read scope for the Shopify Products API, and select the underlying Product and Product Image models that we want to import into Gadget
Select Product API scope + model
  • Click Confirm

Now we want to connect our Gadget app to our custom app in the Partners dashboard.

  • In your Shopify app in the Partners dashboard, click on App setup in the side nav bar so you can edit the App URL and Allowed redirection URL(s) fields
  • Copy the App URL and Allowed redirection URL from the Gadget Connections page and paste them into your custom Shopify App
Screenshot of the connected app, with the App URL and Allowed redirection URL(s) fields

Now you can install your app on a store from the Partners dashboard. Do not sync data yet! You're going to add some code to generate vector embeddings for your products before the sync is run.

Step 2: Set up OpenAI connection 

Now that you are connected to Shopify, you can also set up the OpenAI connection that will be used to fetch embeddings for product descriptions. Gadget provides OpenAI credits for testing while developing; however, you will need your own OpenAI API key for this tutorial, as the Gadget-provided credentials are rate-limited.

  1. Click on Plugins in the nav bar
  2. Click on the OpenAI connection tile
  3. Select the Use your own API keys option in the modal that appears
  4. Paste your OpenAI API key into the Development key field

Your connection is now ready to be used!

Step 3: Add vector field to Shopify Product model 

Before you add code to create the embeddings from product descriptions, you need a place to store the generated embeddings. You can add a vector field to the shopifyProduct model to store the embeddings.

The vector field type stores a vector, or array, of floats. It is useful for storing embeddings and allows you to perform vector operations such as cosine similarity, which helps you find the products most similar to a given chat message.
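To make the similarity operation concrete, here is a standalone sketch of cosine similarity in plain JavaScript. This is for illustration only, not Gadget code; Gadget computes this in the database, and the cosineSimilarity helper name is hypothetical:

```javascript
// Hypothetical helper: cosine similarity between two equal-length float arrays.
// A value near 1 means the vectors (and the texts they embed) are similar.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// identical vectors score 1; orthogonal vectors score 0
console.log(cosineSimilarity([1, 0], [1, 0])); // → 1
console.log(cosineSimilarity([1, 0], [0, 1])); // → 0
```

Real embedding vectors have many more dimensions (1536 for text-embedding-ada-002), but the math is the same.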

Learn more about OpenAI, LLMs, and vector embeddings

You are going to use Gadget's built-in OpenAI connection to generate vector embeddings for product descriptions. These embeddings will be used to perform a semantic search to find the products that best match a shopper's chat message.

LLMs, vector embeddings, and LangChain are relatively new technologies, and there are many resources available to learn more about them. Here are some resources to get you started:

To add a vector field to the shopifyProduct model:

  1. Go to the shopifyProduct model in the navigation bar
  2. Click on + in the FIELDS section to add a new field
  3. Name the field descriptionEmbedding
  4. Set the field type to vector
A screenshot of the Description Embedding vector field defined on the Shopify Product model

Now you are set up to store embeddings for products! The next step is adding code to generate these embeddings.

Step 4: Write code effect to create vector embedding 

Now you can add some code to create vector embeddings for all products in your store. You will want to run this code when Shopify fires a products/create or products/update webhook. To do this, you will create a code effect that runs when a Shopify Product is created or updated.

  1. Go to the shopifyProduct model in the navigation bar
  2. Click on the create action in the ACTIONS section
  3. Paste the following code into shopifyProduct/actions/create.js to update the onSuccess function:
shopifyProduct/actions/create.js
js
import { applyParams, preventCrossShopDataAccess, save, ActionOptions, CreateShopifyProductActionContext } from "gadget-server";

/**
 * @param { CreateShopifyProductActionContext } context
 */
export async function run({ params, record, logger, api }) {
  applyParams(params, record);
  await preventCrossShopDataAccess(params, record);
  await save(record);
}

/**
 * @param { CreateShopifyProductActionContext } context
 */
export async function onSuccess({ params, record, logger, api, connections }) {
  // only run if the product does not have an embedding, or if the title or body have changed
  if (!record.descriptionEmbedding || record.changed("title") || record.changed("body")) {
    try {
      // get an embedding for the product title + description using the OpenAI connection
      const response = await connections.openai.embeddings.create({
        input: `${record.title}: ${record.body}`,
        model: "text-embedding-ada-002",
      });
      const embedding = response.data[0].embedding;

      // write to the Gadget Logs
      logger.info({ id: record.id }, "got product embedding");

      // use the internal API to store vector embedding in Gadget database, on shopifyProduct model
      await api.internal.shopifyProduct.update(record.id, { shopifyProduct: { descriptionEmbedding: embedding } });
    } catch (error) {
      logger.error({ error }, "error creating embedding");
    }
  }
}

/** @type { ActionOptions } */
export const options = {
  actionType: "create",
};

In this snippet, the OpenAI connection is accessed through connections.openai and the embeddings.create() API is called.

The internal API is used in the onSuccess function to update the shopifyProduct model and set the descriptionEmbedding field. The internal API is required here because the Gadget API is not set as a trigger on this action by default. You can read more about the internal API in the Gadget docs.

Want to update embeddings when a product description is updated?

To also run this code when a product is updated:

  1. Create a new file: shopifyProduct/utils.js
  2. Copy the contents of onSuccess from shopifyProduct/actions/create.js into a createProductEmbedding function:
shopifyProduct/utils.js
js
export const createProductEmbedding = async ({ record, api, logger, connections }) => {
  // contents of onSuccess function from shopifyProduct/actions/create.js
};
  3. Import createProductEmbedding into shopifyProduct/actions/create.js and shopifyProduct/actions/update.js
  4. Call createProductEmbedding from within the onSuccess functions inside shopifyProduct/actions/create.js and shopifyProduct/actions/update.js:
shopifyProduct/actions/create.js and shopifyProduct/actions/update.js
js
import { createProductEmbedding } from "../utils";

/**
 * @param { CreateShopifyProductActionContext } context
 */
export async function onSuccess({ params, record, logger, api, connections }) {
  await createProductEmbedding({ record, api, logger, connections });
}

Generate embeddings for existing products 

Now that the code is in place to generate vector embeddings for products, you can sync existing Shopify products into your Gadget app's database. To do this:

  1. Go to the Connections page in the navigation bar
  2. Click on the Shopify connection
  3. Click on Shop Installs for the connection
Screenshot of the connected app on the Shopify connections page, with the Shop Installs button highlighted
  4. Click on the Sync button for the store you want to sync products from
Screenshot of the shop installs page, with an arrow added to the screenshot to highlight the Sync button for the connected store

Product and product image data will be synced from Shopify to your Gadget app's database. The code effect you added will run for each product and generate a vector embedding for the product. You can see these vector embeddings by going to the Data page for the shopifyProduct model. The vector embeddings will be stored in the descriptionEmbedding field.

A screenshot of the Data page for the shopifyProduct model. The descriptionEmbedding column is highlighted, with vector data generated for products.

Step 5: Add HTTP route to handle incoming chat messages 

To complete your app backend, you will use cosine similarity on the stored vector embeddings to find the products most closely related to a shopper's query. These products, along with a prompt, will be passed into LangChain, which uses an OpenAI model to respond to the shopper's question. You will also return product information so the frontend can display the recommended products and link to their store pages.

You will also stream the response from LangChain to the shopper's chat window. This will allow you to show the shopper that the chatbot is typing while it is generating a response.

Install LangChain and zod npm packages 

To start, install the LangChain and zod npm packages. The zod package will be used to provide a parser to LangChain for reliably extracting structured data from the LLM response.

  1. Open the Gadget command palette using Cmd+P (or Ctrl+P)
  2. Enter > in the palette to allow you to run yarn commands
  3. Run the following command to install the LangChain client and zod:
yarn
yarn add langchain zod

Add code to handle incoming chat messages 

Now you are ready to add some more code. You will start by adding a new HTTP route to handle incoming chat messages. To add a new HTTP route to your Gadget backend:

  • Hover over the routes folder in the FILES explorer and click on + to create a new file
  • Name the file POST-chat.js

Your app now has a new HTTP route that will be triggered when a POST request is made to /chat. You can add code to this file to handle incoming chat messages.

This is the complete code file for POST-chat.js. You can copy and paste this code into the file you just created. A step-by-step explanation of the code is below.

routes/POST-chat.js
js
import { RouteContext } from "gadget-server";
import { Readable } from "stream";
import { z } from "zod";
import { ConsoleCallbackHandler } from "langchain/callbacks";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { StructuredOutputParser } from "langchain/output_parsers";

// a parser for the specific kind of response we want from the LLM
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z
      .string()
      .describe("answer to the user's question, not including any product IDs, and only using product titles and descriptions"),
    productIds: z
      .array(z.string())
      .describe(
        "IDs from input product JSON objects for the user to purchase, formatted as an array, or omitted if no products are applicable"
      ),
  })
);

const prompt = new PromptTemplate({
  template: `You are a helpful shopping assistant trying to match customers with the right product. You will be given a question from a customer and then maybe some JSON objects with the id, title, and description of products available for sale that roughly match the customer's question. Reply to the question suggesting which products to buy. Only use the product titles and descriptions in your response, do not use the product IDs in your response. If you are unsure or if the question seems unrelated to shopping, say "Sorry, I don't know how to help with that", and include some suggestions for better questions to ask. {format_instructions}
  Products: {products}

  Question: {question}`,
  inputVariables: ["question", "products"],
  partialVariables: { format_instructions: parser.getFormatInstructions() },
});

/**
 * Route handler for POST chat
 *
 * @param { RouteContext } route context - see: https://docs.gadget.dev/guides/http-routes/route-configuration#route-context
 *
 */
export default async function route({ request, reply, api, logger, connections }) {
  const model = new OpenAI({
    temperature: 0,
    openAIApiKey: connections.openai.configuration.apiKey,
    configuration: {
      basePath: connections.openai.configuration.baseURL,
    },
    streaming: true,
  });

  const chain = new LLMChain({ llm: model, prompt, outputParser: parser });

  // embed the incoming message from the user
  const response = await connections.openai.embeddings.create({ input: request.body.message, model: "text-embedding-ada-002" });

  // find similar product descriptions
  const products = await api.shopifyProduct.findMany({
    sort: {
      descriptionEmbedding: {
        cosineSimilarityTo: response.data[0].embedding,
      },
    },
    first: 4,
  });

  // capture products in Gadget's Logs
  logger.info({ products, message: request.body.message }, "found products most similar to user input");

  // JSON-stringify the structured product data to pass to the LLM
  const productString = products
    .map((product) =>
      JSON.stringify({
        id: product.id,
        title: product.title,
        description: product.body,
      })
    )
    .join("\n");

  // set up a new stream for returning the response from OpenAI
  // any data added to the stream will be streamed from Gadget to the route caller
  // in this case, the route caller is the frontend
  const stream = new Readable({ read() {} });

  try {
    // start to return the stream immediately
    await reply.send(stream);

    let tokenText = "";
    // invoke the chain and add the streamed response tokens to the Readable stream
    const resp = await chain.call({ question: request.body.message, products: productString }, [
      new ConsoleCallbackHandler(),
      {
        // as the response is streamed in from OpenAI, stream it to the Gadget frontend
        handleLLMNewToken: (token) => {
          tokenText += token;
          // parse out some of the response formatting tokens
          if (tokenText.includes('"answer": "') && !tokenText.includes('",\n')) {
            stream.push(token);
          }
        },
      },
    ]);

    // grab the complete response to store records in Chat Log model
    const { answer, productIds } = resp.text;

    // select all the details of the recommended product if one was selected
    let selectedProducts = undefined;
    if (productIds) {
      try {
        selectedProducts = await api.shopifyProduct.findMany({
          select: {
            title: true,
            handle: true,
            images: {
              edges: {
                node: {
                  id: true,
                  source: true,
                },
              },
            },
            shop: {
              domain: true,
            },
          },
          filter: {
            id: {
              in: productIds,
            },
          },
        });

        // only return a single image!
        selectedProducts.forEach((product) => {
          if (product.images.edges.length > 1) {
            product.images.edges.splice(1);
          }
        });
      } catch (error) {
        logger.error({ error }, "error fetching data for selected product");

        // destroy the stream and push error message
        stream.destroy(error);
      }
    }

    logger.info({ answer, selectedProducts }, "answer and products being sent to the frontend for display");

    // send the selected products to the stream
    stream.push(JSON.stringify({ products: selectedProducts }));
    // close the stream
    stream.push(null);
  } catch (error) {
    // log error to Gadget Logs
    logger.error({ error: String(error) }, "error getting chat completion");

    // destroy the stream and push error message
    stream.destroy(error);
  }
}

route.options = {
  schema: {
    body: {
      type: "object",
      properties: {
        message: {
          type: "string",
        },
      },
      required: ["message"],
    },
  },
};

Step-by-step instructions for building this route are below.

Set up LangChain 

The first thing you need to do when building this route is to set up LangChain. LangChain needs a couple of things defined before it can be used to respond to a user's chat message, including a parser to format the response from OpenAI, a prompt template that contains some text defining the purpose of the prompt and variables that will be passed in, and finally, the OpenAI model that will be used.

Begin by setting up a StructuredOutputParser. This parser will format the response from OpenAI into a structured JSON object that the frontend can use to display the response to the user. The parser uses zod to define the structure of the response, which consists of a string answer and an array of recommended product IDs, productIds.
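To make the expected shape concrete, here is a hand-rolled check sketching what the parser enforces. The route itself uses zod via StructuredOutputParser; validateChatResponse is a hypothetical name used only for illustration:

```javascript
// Hypothetical validator mirroring the zod schema in the route:
// "answer" must be a string; "productIds" is optional, but an array when present.
function validateChatResponse(obj) {
  if (typeof obj.answer !== "string") {
    throw new Error("answer must be a string");
  }
  if (obj.productIds !== undefined && !Array.isArray(obj.productIds)) {
    throw new Error("productIds must be an array when present");
  }
  return obj;
}

const parsed = validateChatResponse({
  answer: "Try the blue backpack",
  productIds: ["123"],
});
console.log(parsed.answer); // → Try the blue backpack
```

zod handles this declaratively, and StructuredOutputParser additionally generates the format instructions that tell the LLM to emit this shape.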

routes/POST-chat.js
js
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";

// a parser for the specific kind of response we want from the LLM
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z
      .string()
      .describe("answer to the user's question, not including any product IDs, and only using product titles and descriptions"),
    productIds: z
      .array(z.string())
      .describe(
        "IDs from input product JSON objects for the user to purchase, formatted as an array, or omitted if no products are applicable"
      ),
  })
);

Now that you have a parser, you can set up the prompt template. The prompt template is a string that contains the text that will be used to prompt the OpenAI model to respond to the user's chat message. The prompt template can also include variables that will be passed in when the prompt is invoked. In this case, the prompt template includes a variable for the user's question and a variable for the products that will be passed in as initial recommendations. Formatting instructions created with the parser are also passed into the prompt.

routes/POST-chat.js
js
/** additional imports */
import { PromptTemplate } from "langchain/prompts";

/** parser definition */

const prompt = new PromptTemplate({
  template: `You are a helpful shopping assistant trying to match customers with the right product. You will be given a question from a customer and then maybe some JSON objects with the id, title, and description of products available for sale that roughly match the customer's question. Reply to the question suggesting which products to buy. Only use the product titles and descriptions in your response, do not use the product IDs in your response. If you are unsure or if the question seems unrelated to shopping, say "Sorry, I don't know how to help with that", and include some suggestions for better questions to ask. {format_instructions}
  Products: {products}

  Question: {question}`,
  inputVariables: ["question", "products"],
  partialVariables: { format_instructions: parser.getFormatInstructions() },
});

Once the parser and prompt are both defined, you can set up LangChain's OpenAI model and the chain that will be called in the route.

routes/POST-chat.js
js
/** additional imports */
import { OpenAI } from "langchain/llms/openai";
import { LLMChain } from "langchain/chains";

/**
 * Route handler for POST chat
 *
 * @param { RouteContext } route context - see: https://docs.gadget.dev/guides/http-routes/route-configuration#route-context
 *
 */
export default async function route({ request, reply, api, logger, connections }) {
  /** parser and prompt definition */
  const model = new OpenAI({
    temperature: 0,
    openAIApiKey: connections.openai.configuration.apiKey,
    configuration: {
      basePath: connections.openai.configuration.baseURL,
    },
    streaming: true,
  });

  const chain = new LLMChain({ llm: model, prompt, outputParser: parser });
}

The temperature parameter is set to 0 to make the response from OpenAI as deterministic as possible. The streaming parameter is set to true so that the response from OpenAI is streamed in as it is generated, rather than waiting for the entire response to be generated before returning it. The chain is defined using the model, prompt, and parser.

LangChain model selection

The OpenAI model and LLMChain are used in this tutorial as an example of how to use chains and prompt templates with LangChain. LangChain has a variety of different models, including chat-specific models, that might be worth investigating for your app.

Define route parameters 

Now that LangChain is set up, you can define the route parameters. The route will accept a message from the shopper as part of a request body. The message will be passed into the prompt template.

To define this message parameter, you can use the schema option in the route module's options object. The schema option is used to define the JSON schema for the request body.

Define the parameter at the bottom of routes/POST-chat.js:

routes/POST-chat.js
js
/** imports */

/** parser, prompt, model, and chain definition */

export default async function route({ request, reply, api, logger, connections }) {
  // route code
}

route.options = {
  schema: {
    body: {
      type: "object",
      properties: {
        message: {
          type: "string",
        },
      },
      required: ["message"],
    },
  },
};

Now you can start to write the actual route code that runs when a shopper asks a question.

Find similar products 

The first thing your route will do is create an embedding vector for the shopper's question and use that vector to find similar products. The same OpenAI embeddings API used for product descriptions is called, via connections.openai, to embed the question.

Gadget includes a cosineSimilarityTo operator that can be used to sort the results of a read query by cosine similarity to a given vector. The cosineSimilarityTo operator is used in the sort parameter of the findMany query to sort the results by cosine similarity to the embedded message. The first parameter is used to limit the number of results returned to 4.

In other words, the query will return the 4 products that are most similar to the shopper's question.
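Conceptually, the sort plus first: 4 behaves like the following standalone sketch. The product list, toy embeddings, and helper names are hypothetical; Gadget performs this ranking in the database rather than in route code:

```javascript
// Hypothetical in-memory version of sorting by cosineSimilarityTo and taking first: N
function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function topProducts(products, queryEmbedding, count = 4) {
  // sort descending by similarity to the query, then keep the first `count`
  return [...products]
    .sort(
      (p1, p2) =>
        cosineSimilarity(p2.descriptionEmbedding, queryEmbedding) -
        cosineSimilarity(p1.descriptionEmbedding, queryEmbedding)
    )
    .slice(0, count);
}

// toy 2-dimensional "embeddings" for illustration
const products = [
  { title: "Backpack", descriptionEmbedding: [1, 0] },
  { title: "Water bottle", descriptionEmbedding: [0, 1] },
  { title: "Daypack", descriptionEmbedding: [0.9, 0.1] },
];
console.log(topProducts(products, [1, 0], 2).map((p) => p.title)); // → [ 'Backpack', 'Daypack' ]
```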

routes/POST-chat.js
js
/** imports and chain setup */

/**
 * Route handler for POST /chat
 *
 * @param { import("gadget-server").RouteContext } request context - Everything for handling this route, like the api client, Fastify request, Fastify reply, etc. More on effect context: https://docs.gadget.dev/guides/extending-with-code#effect-context
 *
 * @see {@link https://www.fastify.dev/docs/latest/Reference/Request}
 * @see {@link https://www.fastify.dev/docs/latest/Reference/Reply}
 */
export default async function route({ request, reply, api, logger, connections }) {
  // embed the incoming message from the user
  const response = await connections.openai.embeddings.create({ input: request.body.message, model: "text-embedding-ada-002" });

  // find similar product descriptions
  const products = await api.shopifyProduct.findMany({
    sort: {
      descriptionEmbedding: {
        cosineSimilarityTo: response.data[0].embedding,
      },
    },
    first: 4,
  });

  // capture products in Gadget's Logs
  logger.info({ products, message: request.body.message }, "found products most similar to user input");
}

A product string is then created from the list of returned products:

routes/POST-chat.js
js
// JSON-stringify the structured product data to pass to the LLM
const productString = products
  .map((product) =>
    JSON.stringify({
      id: product.id,
      title: product.title,
      description: product.body,
    })
  )
  .join("\n");

You are now ready to invoke the chain and stream the response back to the shopper.

Stream response from LangChain 

Now that the chain is defined, you can invoke it and stream the response back to the shopper. The call method of the chain is used to invoke the chain. The call method takes two parameters: an object containing the input variables for the prompt and an array of callback handlers.

You define a new Readable stream and immediately send it as the response using await reply.send(stream);. Once you have done this, any additional data pushed to the stream will be sent back to the route caller.
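A minimal standalone sketch of this push-based streaming pattern, using only Node's stream module (no Gadget or Fastify involved):

```javascript
import { Readable } from "stream";

// Create a Readable with a no-op read(): data only enters the stream
// when we push() it, and push(null) marks the end of the stream.
const stream = new Readable({ read() {} });

stream.push("Hello, ");
stream.push("shopper!");
stream.push(null);

// In the route, reply.send(stream) forwards pushed chunks to the caller;
// here we just drain the internal buffer directly for illustration.
const out = stream.read();
console.log(out.toString()); // → Hello, shopper!
```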

Finally, chain.call() is invoked to generate a response to the shopper's question, and takes the request.body.message and productString as input for the shopper's question and the products with the closest descriptions matching the question, respectively.

The ConsoleCallbackHandler will output the response from LangChain to the Gadget Logs, so you can see the exact input and output from LangChain. An additional callback is defined using handleLLMNewToken to stream the response from LangChain to the shopper. The handleLLMNewToken callback is invoked every time a new token is generated by LangChain. The token will contain a fragment of the complete response, which will be pushed to the stream and returned to the shopper.

Some additional token handling is also done using tokenText. In the parser, a JSON object was defined for a response. You do not want to push the JSON object keys to the shopper, so they are filtered out using tokenText.includes('"answer": "') && !tokenText.includes('",\n'). The token is then pushed to the stream.
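The filter can be exercised on its own with a hypothetical token sequence; handleToken below reproduces the handleLLMNewToken logic outside of LangChain:

```javascript
// Reproduce the route's token filter: only push tokens once the
// '"answer": "' marker has appeared, and stop once '",\n' closes the field.
const streamed = [];
let tokenText = "";
const handleToken = (token) => {
  tokenText += token;
  if (tokenText.includes('"answer": "') && !tokenText.includes('",\n')) {
    streamed.push(token);
  }
};

// hypothetical token sequence, roughly as the LLM might emit it
const tokens = ['{\n', '"answer": "', 'Try', ' the', ' blue', ' backpack', '",\n', '"productIds": ["123"]', '\n}'];
tokens.forEach(handleToken);

// the scaffolding after the answer is filtered out; note the token that
// completes the marker is itself pushed, so the streamed text can still
// carry a small fragment of formatting
console.log(streamed.join(""));
```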

The stream is closed using stream.push(null);. In the case of an error, stream.destroy() is called to close the stream.

routes/POST-chat.js
js
/** additional imports */
import { Readable } from "stream";
import { ConsoleCallbackHandler } from "langchain/callbacks";

/** chain setup */

/**
 * Route handler for POST /chat
 *
 * @param { import("gadget-server").RouteContext } request context - Everything for handling this route, like the api client, Fastify request, Fastify reply, etc. More on effect context: https://docs.gadget.dev/guides/extending-with-code#effect-context
 *
 * @see {@link https://www.fastify.dev/docs/latest/Reference/Request}
 * @see {@link https://www.fastify.dev/docs/latest/Reference/Reply}
 */
export default async function route({ request, reply, api, logger, connections }) {
  /** find recommended products using embeddings */

  // set up a new stream for returning the response from OpenAI
  // any data added to the stream will be streamed from Gadget to the route caller
  // in this case, the route caller is the frontend
  const stream = new Readable({ read() {} });

  try {
    // start to return the stream immediately
    await reply.send(stream);

    let tokenText = "";
    // invoke the chain and add the streamed response tokens to the Readable stream
    const resp = await chain.call({ question: request.body.message, products: productString }, [
      new ConsoleCallbackHandler(),
      {
        // as the response is streamed in from OpenAI, stream it to the Gadget frontend
        handleLLMNewToken: (token) => {
          tokenText += token;
          // parse out some of the response formatting tokens
          if (tokenText.includes('"answer": "') && !tokenText.includes('",\n')) {
            stream.push(token);
          }
        },
      },
    ]);

    // close the stream
    stream.push(null);
  } catch (error) {
    // log error to Gadget Logs
    logger.error({ error: String(error) }, "error getting chat completion");

    // destroy the stream and push error message
    stream.destroy(error);
  }
}

This will stream the entire chat response back to the shopper. But this isn't all you want to return: you also want additional product info so you can display product listings and link to the products from your frontend. To do this, use the productIds returned from the chain. Not all product IDs that were passed in will be used in the response, so it is important to use the IDs returned from the chain rather than the IDs fetched by the cosine similarity operation.

The returned productIds can be used as a filter on the shopifyProduct model to fetch details for the recommended products. The api.shopifyProduct.findMany() call is a read operation that returns the fields defined in the select option: the product title and handle, the product image source, and the product's shop domain. The filter option restricts the results to products whose ID is in the productIds array.

Once the product details are returned, they are also sent to the route's caller via the stream.

routes/POST-chat.js
js
/** imports and chain definition */
export default async function route({ request, reply, api, logger, connections }) {
  /** find recommended products using embeddings */

  /** invoke chain and stream response */

  // grab the complete response to store records in Chat Log model
  const { answer, productIds } = resp.text;

  // select all the details of the recommended product if one was selected
  let selectedProducts = undefined;
  if (productIds) {
    try {
      selectedProducts = await api.shopifyProduct.findMany({
        select: {
          title: true,
          handle: true,
          images: {
            edges: {
              node: {
                id: true,
                source: true,
              },
            },
          },
          shop: {
            domain: true,
          },
        },
        filter: {
          id: {
            in: productIds,
          },
        },
      });

      // only return a single image!
      selectedProducts.forEach((product) => {
        if (product.images.edges.length > 1) {
          product.images.edges.splice(1);
        }
      });
    } catch (error) {
      logger.error({ error }, "error fetching data for selected product");

      // destroy the stream and push error message
      stream.destroy(error);
    }
  }

  logger.info({ selectedProducts }, "products being sent to the frontend for display");

  // send the selected products to the stream
  stream.push(JSON.stringify({ products: selectedProducts }));

  /** close the stream + catch statement */
}

Your route is now complete! Now all that is needed is a frontend app that allows shoppers to ask a question and displays the response along with product recommendations. You're going to use Gadget's hosted React frontends to build this UI.

Step 6: Build a frontend 

All Gadget apps come with hosted Vite frontends that can be used to build your UI. You can use these frontends as a starting point for your UI, or you can build your UI from scratch. For this tutorial, you're going to use the hosted React frontend to build a chat widget that can be embedded on any page of your Shopify store.

Build the chat widget 

In your Gadget app's frontend folder, create a new file called Chat.jsx. This file will contain the React component for the chat widget. The complete code for this file is below; each piece of the code is explained in the following sections.

frontend/Chat.jsx
jsx
import { useState } from "react";
import "./App.css";
import { useFetch } from "@gadgetinc/react";

export const Chat = () => {
  const [userMessage, setUserMessage] = useState("");
  const [reply, setReply] = useState("");
  const [productRecommendations, setProductRecommendations] = useState(null);
  const [errorMessage, setErrorMessage] = useState("");

  const [{ data, fetching, error }, sendChat] = useFetch("/chat", {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    stream: true,
  });

  return (
    <section>
      <form
        onSubmit={async (e) => {
          e.preventDefault();
          setReply("");
          setProductRecommendations(null);
          setErrorMessage("");

          const stream = await sendChat({
            body: JSON.stringify({ message: userMessage }),
          });

          const decodedStreamReader = stream.pipeThrough(new TextDecoderStream()).getReader();

          // handle any stream errors
          decodedStreamReader.closed.catch((error) => {
            setErrorMessage(error.toString());
          });

          let replyText = "";
          let done = false;
          while (!done) {
            const { value, done: doneReading } = await decodedStreamReader.read();

            done = doneReading;

            // handle the recommended products
            if (value?.includes(`{"products":`)) {
              setProductRecommendations(JSON.parse(value));
            } else if (value) {
              replyText = replyText + value;
              replyText = replyText.replace('"', "").replace(",", "");
              setReply(replyText);
            }
          }
        }}
      >
        <textarea
          placeholder="Ask a question about this shop's products ...."
          value={userMessage}
          onChange={(event) => setUserMessage(event.currentTarget.value)}
        />
        <input type="submit" value="Ask" disabled={fetching} />
      </form>
      <br />

      {errorMessage && (
        <section>
          <pre>
            <code>{errorMessage}</code>
          </pre>
        </section>
      )}

      {reply && (
        <section>
          <p>{reply}</p>
          <br />
          <div>
            {productRecommendations?.products ? (
              productRecommendations.products.map((product, i) => (
                <a
                  key={`${i}_${product.title}`}
                  href={"https://" + product.shop.domain + "/products/" + product.handle}
                  target="_blank"
                >
                  {product.title}
                  {product.images.edges[0] && (
                    <img style={{ border: "1px black solid" }} width="200px" src={product.images.edges[0].node.source} />
                  )}
                </a>
              ))
            ) : (
              <span>Loading recommendations...</span>
            )}
          </div>
        </section>
      )}
      {fetching && <span>Thinking...</span>}
      {error && <p className="error">There was an error: {String(error)}</p>}
    </section>
  );
};

Step-by-step chat widget build 

The first thing to set up when building the chat widget is the useFetch hook. This hook is provided by Gadget and is used to make requests to your backend HTTP route. It takes two arguments: the backend route to call and an options object that configures the request. In this case, the request method is set to POST, the content-type header to application/json, and the stream option to true. The stream option tells useFetch to return a stream that can be used to read the response from the backend route. The hook returns an object containing the response and fetching info, along with a function that makes the actual request to your chat route. This function is named sendChat.

frontend/Chat.jsx
jsx
import { useState } from "react";
import "./App.css";
import { useFetch } from "@gadgetinc/react";

export const Chat = () => {
  const [userMessage, setUserMessage] = useState("");
  const [reply, setReply] = useState("");
  const [productRecommendations, setProductRecommendations] = useState(null);
  const [errorMessage, setErrorMessage] = useState("");

  // Gadget's useFetch hook is used to make requests to the backend HTTP route
  const [{ data, fetching, error }, sendChat] = useFetch("/chat", {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    stream: true,
  });

  return <div>hello, world</div>;
};

React state is also defined above to manage the text that the shopper enters into the chat widget, as well as the streamed chat response and recommended product info. The next thing to add is a <form> that makes use of this state. A shopper will use the form to ask a question, and the form's onSubmit callback will handle the streamed response from the backend route using the browser's built-in Web Streams API (TextDecoderStream and a stream reader).

frontend/Chat.jsx
jsx
<form
  onSubmit={async (e) => {
    e.preventDefault();

    // remove any previous messages and product recommendations
    setReply("");
    setProductRecommendations(null);

    // send the user's message to the backend route
    // the response will be streamed back to the frontend
    const stream = await sendChat({
      body: JSON.stringify({ message: userMessage }),
    });

    // decode the streamed response
    const decodedStreamReader = stream.pipeThrough(new TextDecoderStream()).getReader();

    // handle any stream errors
    decodedStreamReader.closed.catch((error) => {
      setErrorMessage(error.toString());
    });

    let replyText = "";
    let done = false;

    // read the response from the stream
    while (!done) {
      const { value, done: doneReading } = await decodedStreamReader.read();

      done = doneReading;

      // handle the recommended products that are returned from the stream
      if (value?.includes(`{"products":`)) {
        setProductRecommendations(JSON.parse(value));
      } else if (value) {
        // handle the chat response
        replyText = replyText + value;
        replyText = replyText.replace('"', "").replace(",", "");
        setReply(replyText);
      }
    }
  }}
>
  <textarea
    placeholder="Ask a question about this shop's products ...."
    value={userMessage}
    onChange={(event) => setUserMessage(event.currentTarget.value)}
  />
  <input type="submit" value="Ask" disabled={fetching} />
</form>

This code sends the user's message to the backend route and then reads the response from the stream. The streamed response contains the chat answer as plain text chunks, followed by a JSON string containing any recommended products. The while loop reads from the stream until it is done, setting the chat response and recommended products in React state as values arrive.
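To make the stream handling concrete, here is a small standalone sketch of the same loop run over hypothetical chunk values (the chunk contents below are assumptions for illustration, not actual route output):

```javascript
// Hypothetical chunks, as they might arrive from the /chat route:
// plain-text fragments of the answer, then a final JSON payload of products.
const chunks = ["Try the ", "Ocean Blue backpack", '{"products":[{"title":"Ocean Blue"}]}'];

let replyText = "";
let productRecommendations = null;

for (const value of chunks) {
  if (value.includes('{"products":')) {
    // the final chunk carries the recommended products as a JSON string
    productRecommendations = JSON.parse(value);
  } else {
    // every other chunk is part of the streamed chat answer
    replyText += value;
  }
}

console.log(replyText); // Try the Ocean Blue backpack
console.log(productRecommendations.products[0].title); // Ocean Blue
```

The key design point is that the recommendations arrive on the same stream as the answer text, so the frontend distinguishes them by sniffing for the `{"products":` prefix rather than opening a second request.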

The next thing to do is render the chat response and recommended products in the chat widget. The chat response is rendered in a <p> tag, and each recommended product is rendered as an <a> tag that links to the product's page on the shop's storefront.

frontend/Chat.jsx
jsx
{reply && (
  <section>
    <p>{reply}</p>
    <br />
    <div>
      {productRecommendations?.products ? (
        productRecommendations.products.map((product, i) => (
          <a
            key={`${i}_${product.title}`}
            href={"https://" + product.shop.domain + "/products/" + product.handle}
            target="_blank"
          >
            {product.title}
            {product.images.edges[0] && (
              <img style={{ border: "1px black solid" }} width="200px" src={product.images.edges[0].node.source} />
            )}
          </a>
        ))
      ) : (
        <span>Loading recommendations...</span>
      )}
    </div>
  </section>
)}

It is also helpful to add loading and error messaging. Render a <span> with a loading message while the fetching variable is true, and a <p> with an error message when the error variable is set.

frontend/Chat.jsx
jsx
return (
  <section>
    {/** form and recommendations */}
    {fetching && <span>Thinking...</span>}
    {error && <p className="error">There was an error: {String(error)}</p>}
  </section>
);

Hook up chat widget to frontend project 

Now you can add the chat widget to your app's frontend. Because this is not an embedded Shopify app, you can simplify frontend/App.jsx with the following code that imports the chat widget and renders it at your app's default route:

frontend/App.jsx
jsx
import "./App.css";
import { Route, Routes } from "react-router-dom";
import { Chat } from "./Chat";

const App = () => (
  <main>
    <header>
      <a href="https://gadget.dev" className="logo">
        <img src="https://assets.gadget.dev/assets/icon.svg" height="52" alt="Gadget" />
      </a>
      <h1>AI Product Recommender Chatbot</h1>
    </header>
    <Routes>
      <Route path="/" element={<Chat />} />
    </Routes>
    <br />
    <footer>
      Running in <span className={window.gadgetConfig.environment}>{window.gadgetConfig.environment}</span>
    </footer>
  </main>
);

export default App;

This isn't an admin-embedded Shopify app, so you can use the Provider from @gadgetinc/react instead of the App Bridge Provider. The Provider enables you to make authenticated requests with your Gadget app's api client defined in frontend/api.js.

You can overwrite the default code in frontend/main.jsx with the following:

frontend/main.jsx
jsx
import { Provider } from "@gadgetinc/react";
import React from "react";
import ReactDOM from "react-dom/client";
import { BrowserRouter } from "react-router-dom";

import { api } from "./api";
import App from "./App";

const root = document.getElementById("root");
if (!root) throw new Error("#root element not found for booting react app");

ReactDOM.createRoot(root).render(
  <React.StrictMode>
    <Provider api={api}>
      <BrowserRouter>
        <App />
      </BrowserRouter>
    </Provider>
  </React.StrictMode>
);

Finally, add some styles to make the chat widget look nice. You can copy and paste the following CSS into frontend/App.css.

frontend/App.css
css
body {
  background: #f3f3f3;
  color: #252525;
  padding: 55px;
  line-height: 1.5;
  font-family: "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
  max-width: 60%;
  margin: auto;
}

main {
  flex: 1 0 auto;
  align-items: center;
  justify-content: center;
}

h1 {
  font-size: 32px;
  margin: 24px 0;
  max-width: none;
}
h2 {
  font-size: 24px;
  max-width: none;
}
h3 {
  font-size: 16px;
  max-width: none;
}

table {
  border-collapse: collapse;
  margin: 7px 0px;
}

td {
  padding: 7px 6px;
}

header {
  display: flex;
  flex-direction: row;
  align-items: center;
  gap: 1em;
  margin-bottom: 2em;
}

.logo {
  display: inline-block;
}

.Development {
  color: #87550b;
}

.Production {
  color: #5d39bb;
}

form {
  display: flex;
  flex-direction: column;
  align-items: center;
}

form > input {
  width: 200px;
}

code {
  font-family: "SFMono-Regular", Consolas, "Liberation Mono", Menlo, Courier, monospace;
  font-size: 0.95em;
  font-weight: bold;
}

a {
  color: currentColor;
}

a:hover {
  text-decoration: none;
}

:focus {
  outline: 1px dashed currentColor;
  outline-offset: 0.15rem;
}

textarea {
  min-width: 500px;
  min-height: 100px;
  margin-bottom: 16px;
}

section > div {
  display: flex;
  flex-direction: row;
  justify-content: space-evenly;
}

section > img {
  border: black solid 1px;
}

section > div > a {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: space-between;
  width: 50%;
}

Step 7: Test out your chatbot 

To view your app:

  1. Click on your app name in the top left corner of the Gadget dashboard
  2. Hover over Go to app and select your Development app

Congrats! You have built a chatbot that uses the OpenAI API to find the best products to recommend to shoppers. Test it out by asking a question. The chatbot should respond with a list of products that are relevant to the question.
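Because the chat route is a plain HTTP endpoint, you can also exercise it outside the hosted frontend. The sketch below shows one possible helper for doing so; the app URL is an assumption for illustration, so replace it with your own Gadget app's domain, and note that this helper is a sketch, not part of the tutorial's code:

```javascript
// Hypothetical helper for calling the chat route from any page or script.
// ASSUMPTION: replace this URL with your Gadget app's actual development domain.
const CHAT_URL = "https://your-app--development.gadget.app/chat";

async function askChatbot(message, onChunk) {
  // POST the shopper's message, matching the body shape the route expects
  const response = await fetch(CHAT_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ message }),
  });

  // decode and read the streamed reply chunk by chunk
  const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    onChunk(value); // chunks are answer text, then a final JSON products payload
  }
}

// Example usage (requires a running app):
// askChatbot("Do you sell backpacks?", (chunk) => console.log(chunk));
```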

Screenshot of the finished chatbot, with a question entered (asking about backpacks for sale) and a response. The response includes a text response and product recommendations, both generated by langchain

Next steps 

Have questions about the tutorial? Join Gadget's developer Discord to ask Gadget employees and join the Gadget developer community!

Want to learn more about building AI apps in Gadget? Check out our building AI apps documentation.