Build a product recommendation chatbot for a Shopify store using OpenAI and Gadget 

Topics covered: Shopify connections, AI + vector embeddings, HTTP routes, React frontends
Time to build: ~30 minutes

Large Language Model (LLM) APIs allow developers to build apps that can understand and generate text.

In this tutorial, you will build a product recommendation chatbot for a Shopify store using OpenAI's API and Gadget. The chatbot will utilize OpenAI's text embedding API to generate vector embeddings for product descriptions, which will then be stored in your Gadget database. These embeddings will help to identify the products that best match a shopper's chat message. With Gadget, you can easily sync data from Shopify, build the backend for generating recommendations, and create the chatbot UI.

A screenshot of the chatbot in action. The shopper asks about backpacks, and the bot responds with 3 suggestions. The suggested product titles and images are then displayed below the chatbot's response
Requirements

To get the most out of this tutorial, you will need:

You can fork this Gadget project and try it out yourself.

You will still need to set up the Shopify Connection after forking. Read on to learn how to connect Gadget to a Shopify store!

Fork on Gadget

Create a Gadget app and connect to Shopify 

Your first step will be to set up a Gadget project and connect to a Shopify store via the Shopify connection. Create a new Gadget application at gadget.new.

Connect to Shopify through the Partners dashboard 

Requirements

To complete this connection, you will need a Shopify Partners account as well as a store or development store.

Our first step is going to be setting up a custom Shopify application in the Partners dashboard.

Both the Shopify store Admin and the Shopify Partner Dashboard have an Apps section. Ensure that you are on the Shopify Partner Dashboard before continuing.

Click on Apps link in Shopify Partners Dashboard
  • Click the Create App button
Click on Create app button
  • Click the Create app manually button and enter a name for your Shopify app
Shopify's app creation landing page in the Partners Dashboard
  • Go to the Connections page in your Gadget app
The Gadget homescreen, with the Connections link highlighted
  • Copy the Client ID and Client secret from your newly created Shopify app and paste the values into the Gadget Connections page
Screenshot of the Partners card selected on the Connections page
  • Click Connect on the Gadget Connections page to move to scope and model selection

Now we can select which Shopify scopes to grant our application, and pick which Shopify data models to import into our Gadget app.

  • Enable the read scope for the Shopify Products API, and select the underlying Product and Product Image models that we want to import into Gadget
Select Product API scope + model
  • Click Confirm

Now we want to connect our Gadget app to our custom app in the Partners dashboard.

  • In your Shopify app in the Partners dashboard, click on App setup in the side nav bar so you can edit the App URL and Allowed redirection URL(s) fields
  • Copy the App URL and Allowed redirection URL from the Gadget Connections page and paste them into your custom Shopify App
Screenshot of the connected app, with the App URL and Allowed redirection URL(s) fields

Now you can install your app on a store from the Partners dashboard. Do not sync data yet! You're going to add some code to generate vector embeddings for your products before the sync is run.

OpenAI setup 

You are going to use the OpenAI API to generate vector embeddings for product descriptions. These embeddings will be used to perform a semantic search to find the products that best match a shopper's chat message. The first step is to set up the OpenAI client, which you will use to generate those embeddings.

Learn more about OpenAI, LLMs, and vector embeddings

LLMs, vector embeddings, and LangChain are relatively new technologies, and there are many resources available to learn more about them. Here are some resources to get you started:

Set OpenAI API key as an environment variable 

We will use an environment variable to store the OpenAI API key. This will allow us to use the same code in development and production without having to hardcode the key. You need to get an OpenAI API key from the OpenAI platform before you can continue.

To set an environment variable in Gadget:

  • Go to the Settings tab in the navigation bar
  • Select Environment Variables under Settings in the navbar to go to the environment variables page
  • Click on + Add Variable to add a new environment variable
  • Set the name to OPENAI_API_KEY and the value to your OpenAI API key
Screenshot of Gadget's environment variables page, with the NODE_ENV and OPENAI_API_KEY environment variables set

Install OpenAI client 

Now that your environment variable is set, you can set up the OpenAI client in your Gadget app. The client is a wrapper around the OpenAI API that simplifies making requests. You can install it using yarn in Gadget.

  • Open the Gadget command palette using ⌘ P (Mac) or Ctrl P (Windows/Linux)
  • Enter > in the palette to allow you to run yarn commands
  • Enter yarn add openai to install the OpenAI client
A screenshot of Gadget command palette with 'yarn add openai' ready to be run!

The openai package will be added to your Gadget app's package.json file.

Set up OpenAI client 

Now you can initialize the OpenAI client. You are going to reuse this client in a couple of code files, so you can initialize the client in a separate file and import the client into code effects. You're going to use OpenAI's text-embedding-ada-002 model to generate vector embeddings for product descriptions. You can read more about this model in the OpenAI API docs.

  • Create a new file in your project's root directory by clicking on the + button in the FILES explorer
  • Name the new file openai.js
  • Add the following code to the file:
openai.js
js
import { Configuration, OpenAIApi } from "openai";

// set up OpenAI client config
const configuration = new Configuration({
  apiKey: process.env["OPENAI_API_KEY"],
});

// init the OpenAI client
export const openai = new OpenAIApi(configuration);

// util to get vector embedding for passed-in text
export const embedMessage = async (message) => {
  const response = await openai.createEmbedding({
    model: "text-embedding-ada-002",
    // @ts-ignore
    input: message,
  });

  return response.data.data[0].embedding;
};

This file sets up the OpenAI client and exports it so that you can import it into other files. It also exports a utility function that you will use to get the vector embedding for a message. These embeddings will be used to compare incoming chat messages to product names and descriptions and find the products that best match the shopper's query.
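The double .data access at the end of embedMessage can look like a typo, but it isn't: the openai v3 client returns an axios-style response whose data property holds the API payload, which itself contains a data array of embedding objects. A quick sketch with a mocked response (the values here are made up; real text-embedding-ada-002 vectors have 1536 dimensions):

```javascript
// Mocked response shaped like the openai v3 client's createEmbedding reply
// (embedding values are hypothetical, shortened to 3 dimensions)
const response = {
  data: {
    object: "list",
    model: "text-embedding-ada-002",
    data: [{ object: "embedding", index: 0, embedding: [0.1, -0.2, 0.3] }],
  },
};

// same unwrapping as embedMessage: axios wrapper -> API payload -> first embedding
const embedding = response.data.data[0].embedding;
console.log(embedding); // [0.1, -0.2, 0.3]
```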

Add vector field to Shopify Product model 

Before you add code to create the embeddings from product descriptions, you need a place to store the generated embeddings. You can add a vector field to the Shopify Product model to store the embeddings.

The vector field type stores a vector, or array, of floats. It is useful for storing embeddings and allows you to perform vector operations such as cosine similarity, which helps you find the products most similar to a given chat message.
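Cosine similarity measures how closely two vectors point in the same direction: the dot product divided by the product of the magnitudes, giving 1 for identical directions and roughly 0 for unrelated ones. Gadget computes this for you, but a plain JavaScript sketch illustrates the idea:

```javascript
// cosine similarity: dot(a, b) / (|a| * |b|), ranging from -1 to 1
const cosineSimilarity = (a, b) => {
  let dot = 0;
  let magA = 0;
  let magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
};

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 (same direction)
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal, unrelated)
```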

To add a vector field to the Shopify Product model:

  • Go to the Shopify Product model in the navigation bar
  • Click on + in the FIELDS section to add a new field
  • Name the field Description Embedding
  • Set the field type to vector
A screenshot of the Description Embedding vector field defined on the Shopify Product model

Now you are set up to store embeddings for products! The next step is adding code to generate these embeddings.

Write code effect to create vector embedding 

Now you can add some code to create vector embeddings for all products in your store. You will want to run this code when Shopify fires a products/create or products/update webhook. To do this, you will create a code effect that runs when a Shopify Product is created or updated.

  • Go to the Shopify Product model in the navigation bar
  • Click on the create action in the ACTION selection
  • Click + Add Code Snippet under the SUCCESS EFFECTS to add a new code effect
  • Paste the following code into the generated code file:
shopifyProduct/create/onCreateSuccess.js
js
import { embedMessage } from "../../openai";

/**
 * Effect code for create on Shopify Product
 * @param { import("gadget-server").CreateShopifyProductActionContext } context - Everything for running this effect, like the api client, current record, params, etc. More on effect context: https://docs.gadget.dev/guides/extending-with-code#effect-context
 */
module.exports = async ({ api, record, params, logger, connections }) => {
  // only run if the product does not have an embedding, or if the title or body have changed
  if (!record.descriptionEmbedding || record.changed("title") || record.changed("body")) {
    try {
      // get an embedding for the product title + description
      const vector = await embedMessage(`${record.title}: ${record.body}`);

      // write to the Gadget Logs
      logger.info({ id: record.id }, "got product embedding");

      // use the internal API to store vector embedding in Gadget database, on Shopify Product model
      await api.internal.shopifyProduct.update(record.id, { shopifyProduct: { descriptionEmbedding: vector } });
    } catch (error) {
      logger.error({ error }, "error creating embedding");
    }
  }
};

In this snippet, the internal API is used to update the Shopify Product model and set the Description Embedding field. The internal API needs to be used because the Shopify Product model does not have Gadget API set as a trigger on this action by default. You can read more about the internal API in the Gadget docs.

A screenshot of the Shopify Product create action with the code snippet set as a success effect

To also run this code when a product is updated:

  • Go to the Shopify Product model in the navigation bar
  • Click on the update action in the ACTION selection
  • Click + Add Code Snippet under the SUCCESS EFFECTS to add a new code effect
  • Change the default file to the file created on the create action: shopifyProduct/create/onCreateSuccess.js

Generate embeddings for existing products 

Now that the code is in place to generate vector embeddings for products, you can sync existing Shopify products into your Gadget app's database. To do this:

  • Go to the Connections page in the navigation bar
  • Click on the Shopify connection
  • Click on Shop Installs for the connection
Screenshot of the connected app on the Shopify connections page, with the Shop Installs button highlighted
  • Click on the Sync button for the store you want to sync products from
Screenshot of the shop installs page, with an arrow added to the screenshot to highlight the Sync button for the connected store

Product and product image data will be synced from Shopify to your Gadget app's database. The code effect you added will run for each product and generate a vector embedding for the product. You can see these vector embeddings by going to the Data page for the Shopify Product model. The vector embeddings will be stored in the descriptionEmbedding field.

A screenshot of the Data page for the Shopify Product model. The descriptionEmbedding column is highlighted, with vector data generated for products.

Add HTTP route to handle incoming chat messages 

To complete your app backend, you will use cosine similarity on the stored vector embeddings to find products that are closely related to a shopper's query. These products, along with a prompt, will be passed into LangChain, which will use an OpenAI model to respond to the shopper's question. You will also return product information alongside LangChain's response so the frontend can display recommended products and link to their store pages.

You will also stream the response from LangChain to the shopper's chat window. This will allow you to show the shopper that the chatbot is typing while it is generating a response.

Install LangChain and zod npm packages 

To start, install the LangChain and zod npm packages. The zod package will be used to provide a parser to LangChain for reliably extracting structured data from the LLM response.

  • Open the Gadget command palette using ⌘ P (Mac) or Ctrl P (Windows/Linux)
  • Enter > in the palette to allow you to run yarn commands
  • Enter yarn add langchain@0.0.66 zod to install the LangChain client and zod
A screenshot of Gadget command palette with 'yarn add langchain@0.0.66 zod' ready to be run!

Add code to handle incoming chat messages 

Now you are ready to add some more code. You will start by adding a new HTTP route to handle incoming chat messages. To add a new HTTP route to your Gadget backend:

  • Hover over the routes folder in the FILES explorer and click on + to create a new file
  • Name the file POST-chat.js

Your app now has a new HTTP route that will be triggered when a POST request is made to /chat. You can add code to this file to handle incoming chat messages.

This is the complete code file for POST-chat.js. You can copy and paste this code into the file you just created. A step-by-step explanation of the code is below.

routes/POST-chat.js
js
import { Readable } from "stream";
import { z } from "zod";
import { embedMessage } from "../openai";
import { ConsoleCallbackHandler } from "langchain/callbacks";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { StructuredOutputParser } from "langchain/output_parsers";

// a parser for the specific kind of response we want from the LLM
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z
      .string()
      .describe("answer to the user's question, not including any product IDs, and only using product titles and descriptions"),
    productIds: z
      .array(z.string())
      .describe(
        "IDs from input product JSON objects for the user to purchase, formatted as an array, or omitted if no products are applicable"
      ),
  })
);

const prompt = new PromptTemplate({
  template: `You are a helpful shopping assistant trying to match customers with the right product. You will be given a question from a customer and then maybe some JSON objects with the id, title, and description of products available for sale that roughly match the customer's question. Reply to the question suggesting which products to buy. Only use the product titles and descriptions in your response, do not use the product IDs in your response. If you are unsure or if the question seems unrelated to shopping, say "Sorry, I don't know how to help with that", and include some suggestions for better questions to ask. {format_instructions}
  Products: {products}

  Question: {question}`,
  inputVariables: ["question", "products"],
  partialVariables: { format_instructions: parser.getFormatInstructions() },
});

const model = new OpenAI({
  temperature: 0,
  openAIApiKey: process.env["OPENAI_API_KEY"],
  streaming: true,
});

const chain = new LLMChain({ llm: model, prompt, outputParser: parser });

/**
 * Route handler for POST /chat
 *
 * @param { import("gadget-server").RouteContext } request context - Everything for handling this route, like the api client, Fastify request, Fastify reply, etc. More on effect context: https://docs.gadget.dev/guides/extending-with-code#effect-context
 *
 * @see {@link https://www.fastify.io/docs/latest/Reference/Request}
 * @see {@link https://www.fastify.io/docs/latest/Reference/Reply}
 */
module.exports = async ({ api, request, reply, logger }) => {
  // embed the incoming message from the user
  const embedded = await embedMessage(request.body.message);

  // find similar product descriptions
  const products = await api.shopifyProduct.findMany({
    sort: {
      descriptionEmbedding: {
        cosineSimilarityTo: embedded,
      },
    },
    first: 4,
  });

  // capture products in Gadget's Logs
  logger.info({ products, message: request.body.message }, "found products most similar to user input");

  // JSON-stringify the structured product data to pass to the LLM
  const productString = products
    .map((product) =>
      JSON.stringify({
        id: product.id,
        title: product.title,
        description: product.body,
      })
    )
    .join("\n");

  // set up a new stream for returning the response from OpenAI
  // any data added to the stream will be streamed from Gadget to the route caller
  // in this case, the route caller is the frontend
  const stream = new Readable({ read() {} });

  try {
    // start to return the stream immediately
    await reply.send(stream);

    let tokenText = "";
    // invoke the chain and add the streamed response tokens to the Readable stream
    const resp = await chain.call({ question: request.body.message, products: productString }, [
      new ConsoleCallbackHandler(),
      {
        // as the response is streamed in from OpenAI, stream it to the Gadget frontend
        handleLLMNewToken: (token) => {
          tokenText += token;
          // parse out some of the response formatting tokens
          if (tokenText.includes('"answer": "') && !tokenText.includes('",\n')) {
            stream.push(token);
          }
        },
      },
    ]);

    // grab the complete response to store records in Chat Log model
    const { answer, productIds } = resp.text;

    // select all the details of the recommended product if one was selected
    let selectedProducts = undefined;
    if (productIds) {
      try {
        selectedProducts = await api.shopifyProduct.findMany({
          select: {
            title: true,
            handle: true,
            images: {
              edges: {
                node: {
                  id: true,
                  source: true,
                },
              },
            },
            shop: {
              domain: true,
            },
          },
          filter: {
            id: {
              in: productIds,
            },
          },
        });

        // only return a single image!
        selectedProducts.forEach((product) => {
          if (product.images.edges.length > 1) {
            product.images.edges.splice(1);
          }
        });
      } catch (error) {
        logger.error({ error }, "error fetching data for selected product");

        // destroy the stream and push error message
        stream.destroy(error);
      }
    }

    logger.info({ answer, selectedProducts }, "answer and products being sent to the frontend for display");

    // send the selected product to the stream
    stream.push(JSON.stringify({ products: selectedProducts }));
    // close the stream
    stream.push(null);
  } catch (error) {
    // log error to Gadget Logs
    logger.error({ error: String(error) }, "error getting chat completion");

    // destroy the stream and push error message
    stream.destroy(error);
  }
};

module.exports.options = {
  schema: {
    body: {
      type: "object",
      properties: {
        message: {
          type: "string",
        },
      },
      required: ["message"],
    },
  },
};

Step-by-step instructions for building this route are below.

Set up LangChain 

The first thing you need to do when building this route is to set up LangChain. LangChain needs a couple of things defined before it can be used to respond to a user's chat message, including a parser to format the response from OpenAI, a prompt template that contains some text defining the purpose of the prompt and variables that will be passed in, and finally, the OpenAI model that will be used.

Begin by setting up a StructuredOutputParser. This parser will format the response from OpenAI into a structured JSON object that the frontend can use to display the response to the user. The parser uses zod to structure the response, which will consist of a string answer and an array of product IDs productIds to recommend to shoppers.

routes/POST-chat.js
js
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";

// a parser for the specific kind of response we want from the LLM
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z
      .string()
      .describe("answer to the user's question, not including any product IDs, and only using product titles and descriptions"),
    productIds: z
      .array(z.string())
      .describe(
        "IDs from input product JSON objects for the user to purchase, formatted as an array, or omitted if no products are applicable"
      ),
  })
);
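With this schema, a successfully parsed LLM response is a plain object with an answer string and an optional productIds array. As a sketch, here is the shape the rest of the route relies on, built from a hypothetical response string:

```javascript
// Hypothetical raw LLM output matching the zod schema above
const parsed = JSON.parse(
  `{"answer": "The Trailblazer backpack fits your needs.", "productIds": ["1234567890"]}`
);

// the route destructures exactly these two fields
const { answer, productIds } = parsed;
console.log(answer); // "The Trailblazer backpack fits your needs."
console.log(productIds); // ["1234567890"]
```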

Now that you have a parser, you can set up the prompt template. The prompt template is a string that contains the text that will be used to prompt the OpenAI model to respond to the user's chat message. The prompt template can also include variables that will be passed in when the prompt is invoked. In this case, the prompt template includes a variable for the user's question and a variable for the products that will be passed in as initial recommendations. Formatting instructions created with the parser are also passed into the prompt.

routes/POST-chat.js
js
/** additional imports */
import { PromptTemplate } from "langchain/prompts";

/** parser definition */

const prompt = new PromptTemplate({
  template: `You are a helpful shopping assistant trying to match customers with the right product. You will be given a question from a customer and then maybe some JSON objects with the id, title, and description of products available for sale that roughly match the customer's question. Reply to the question suggesting which products to buy. Only use the product titles and descriptions in your response, do not use the product IDs in your response. If you are unsure or if the question seems unrelated to shopping, say "Sorry, I don't know how to help with that", and include some suggestions for better questions to ask. {format_instructions}
  Products: {products}

  Question: {question}`,
  inputVariables: ["question", "products"],
  partialVariables: { format_instructions: parser.getFormatInstructions() },
});
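When the chain is invoked, LangChain substitutes the {question}, {products}, and {format_instructions} placeholders into this template. Conceptually, the substitution is just string replacement; this simplified stand-in (not LangChain's actual implementation) shows the effect:

```javascript
// Simplified stand-in for PromptTemplate's placeholder substitution
const template = "Products: {products}\n\nQuestion: {question}";

// replace each {name} placeholder with the matching variable, if provided
const formatPrompt = (template, variables) =>
  template.replace(/\{(\w+)\}/g, (match, name) => variables[name] ?? match);

const promptText = formatPrompt(template, {
  products: '{"id":"1","title":"Backpack"}',
  question: "Do you sell backpacks?",
});
console.log(promptText);
// Products: {"id":"1","title":"Backpack"}
//
// Question: Do you sell backpacks?
```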

Once the parser and prompt are both defined, you can set up LangChain's OpenAI model and the chain that will be called in the route.

routes/POST-chat.js
js
/** additional imports */
import { OpenAI } from "langchain/llms/openai";
import { LLMChain } from "langchain/chains";

/** parser and prompt definition */

const model = new OpenAI({
  temperature: 0,
  openAIApiKey: process.env["OPENAI_API_KEY"],
  streaming: true,
});

const chain = new LLMChain({ llm: model, prompt, outputParser: parser });

The temperature parameter is set to 0 to ensure that the response from OpenAI is deterministic. The streaming parameter is set to true to ensure that the response from OpenAI is streamed in as it is generated, rather than waiting for the entire response to be generated before returning it. The chain is defined using the model, prompt, and parser.

LangChain model selection

The OpenAI model and LLMChain are used in this tutorial as an example of how to use chains and prompt templates with LangChain. LangChain has a variety of different models, including chat-specific models, that might be worth investigating for your app.

Define route parameters 

Now that LangChain is set up, you can define the route parameters. The route will accept a message from the shopper as part of a request body. The message will be passed into the prompt template.

To define this message parameter, you can use the schema option in the route module's options object. The schema option is used to define the JSON schema for the request body.

Define the parameter at the bottom of routes/POST-chat.js:

routes/POST-chat.js
js
/** imports */

/** parser, prompt, model, and chain definition */

module.exports = async ({ api, request, reply, logger }) => {
  // route code
};

module.exports.options = {
  schema: {
    body: {
      type: "object",
      properties: {
        message: {
          type: "string",
        },
      },
      required: ["message"],
    },
  },
};

Now you can start to write the actual route code that runs when a shopper asks a question.

Find similar products 

The first thing that your route will do is create an embedding vector for the shopper's question and use that vector to find similar products. The embedMessage function created earlier will be used to embed the shopper's question.

Gadget includes a cosineSimilarityTo operator that can be used to sort the results of a read query by cosine similarity to a given vector. The cosineSimilarityTo operator is used in the sort parameter of the findMany query to sort the results by cosine similarity to the embedded message. The first parameter is used to limit the number of results returned to 4.

In other words, the query will return the 4 products that are most similar to the shopper's question.
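Under the hood, this kind of sort amounts to scoring every stored embedding against the query embedding and ordering by that score. OpenAI's ada-002 embeddings are normalized to unit length, so the dot product equals cosine similarity; a sketch with tiny made-up vectors shows the ranking:

```javascript
// Toy product embeddings (made up, 2-dimensional, unit length)
const products = [
  { id: "1", title: "Backpack", embedding: [1, 0] },
  { id: "2", title: "Teapot", embedding: [0, 1] },
  { id: "3", title: "Duffel bag", embedding: [0.8, 0.6] },
];

const queryEmbedding = [1, 0]; // embedding of the shopper's question

// dot product; for unit-length vectors this equals cosine similarity
const dot = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);

// score, sort descending, and keep the top results (like `first: 4`)
const ranked = products
  .map((p) => ({ ...p, score: dot(p.embedding, queryEmbedding) }))
  .sort((a, b) => b.score - a.score)
  .slice(0, 2);

console.log(ranked.map((p) => p.title)); // ["Backpack", "Duffel bag"]
```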

routes/POST-chat.js
js
import { embedMessage } from "../openai";

/** imports and chain setup */

/**
 * Route handler for POST /chat
 *
 * @param { import("gadget-server").RouteContext } request context - Everything for handling this route, like the api client, Fastify request, Fastify reply, etc. More on effect context: https://docs.gadget.dev/guides/extending-with-code#effect-context
 *
 * @see {@link https://www.fastify.io/docs/latest/Reference/Request}
 * @see {@link https://www.fastify.io/docs/latest/Reference/Reply}
 */
module.exports = async ({ api, request, reply, logger }) => {
  // embed the incoming message from the user
  const embedded = await embedMessage(request.body.message);

  // find similar product descriptions
  const products = await api.shopifyProduct.findMany({
    sort: {
      descriptionEmbedding: {
        cosineSimilarityTo: embedded,
      },
    },
    first: 4,
  });

  // capture products in Gadget's Logs
  logger.info({ products, message: request.body.message }, "found products most similar to user input");
};

A product string is then created from the list of returned products:

routes/POST-chat.js
js
// JSON-stringify the structured product data to pass to the LLM
const productString = products
  .map((product) =>
    JSON.stringify({
      id: product.id,
      title: product.title,
      description: product.body,
    })
  )
  .join("\n");

You are now ready to invoke the chain and stream the response back to the shopper.

Stream response from LangChain 

Now that the chain is defined, you can invoke it and stream the response back to the shopper. The call method of the chain is used to invoke the chain. The call method takes two parameters: an object containing the input variables for the prompt and an array of callback handlers.

You define a new Readable stream and immediately send it as a response using await reply.send(stream);. Once you have done this, any additional data pushed to the stream will be sent back to the route caller.

Finally, chain.call() is invoked to generate a response to the shopper's question, and takes the request.body.message and productString as input for the shopper's question and the products with the closest descriptions matching the question, respectively.

The ConsoleCallbackHandler will output the response from LangChain to the Gadget Logs, so you can see the exact input and output from LangChain. An additional callback is defined using handleLLMNewToken to stream the response from LangChain to the shopper. The handleLLMNewToken callback is invoked every time a new token is generated by LangChain. The token will contain a fragment of the complete response, which will be pushed to the stream and returned to the shopper.

Some additional token handling is also done using tokenText. In the parser, a JSON object was defined for a response. You do not want to push the JSON object keys to the shopper, so they are filtered out using tokenText.includes('"answer": "') && !tokenText.includes('",\n'). The token is then pushed to the stream.

The stream is closed using stream.push(null);. In the case of an error, stream.destroy() is called to close the stream.

routes/POST-chat.js
js
/** additional imports */
import { Readable } from "stream";
import { ConsoleCallbackHandler } from "langchain/callbacks";

/** chain setup */

/**
 * Route handler for POST /chat
 *
 * @param { import("gadget-server").RouteContext } request context - Everything for handling this route, like the api client, Fastify request, Fastify reply, etc. More on effect context: https://docs.gadget.dev/guides/extending-with-code#effect-context
 *
 * @see {@link https://www.fastify.io/docs/latest/Reference/Request}
 * @see {@link https://www.fastify.io/docs/latest/Reference/Reply}
 */
module.exports = async ({ api, request, reply, logger }) => {
  /** find recommended products using embeddings */

  // set up a new stream for returning the response from OpenAI
  // any data added to the stream will be streamed from Gadget to the route caller
  // in this case, the route caller is the frontend
  const stream = new Readable({ read() {} });

  try {
    // start to return the stream immediately
    await reply.send(stream);

    let tokenText = "";
    // invoke the chain and add the streamed response tokens to the Readable stream
    const resp = await chain.call({ question: request.body.message, products: productString }, [
      new ConsoleCallbackHandler(),
      {
        // as the response is streamed in from OpenAI, stream it to the Gadget frontend
        handleLLMNewToken: (token) => {
          tokenText += token;
          // parse out some of the response formatting tokens
          if (tokenText.includes('"answer": "') && !tokenText.includes('",\n')) {
            stream.push(token);
          }
        },
      },
    ]);

    // close the stream
    stream.push(null);
  } catch (error) {
    // log error to Gadget Logs
    logger.error({ error: String(error) }, "error getting chat completion");

    // destroy the stream and push error message
    stream.destroy(error);
  }
};
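The tokenText filter above can be exercised in isolation. This sketch replays a hypothetical token sequence through the same condition, collecting what would be pushed to the stream. Note the filter is approximate: the token that completes the "answer" key slips through, so the streamed text starts with a stray quote:

```javascript
// Hypothetical token sequence, roughly how OpenAI might stream the JSON reply
const tokens = ['{"', "answer", '":', ' "', "Try", " the", " Trailblazer", " backpack", '.",\n', ' "productIds": ["1"]}'];

const pushed = []; // stands in for stream.push(token)
let tokenText = "";
for (const token of tokens) {
  tokenText += token;
  // same condition as the route: only forward tokens inside the answer value
  if (tokenText.includes('"answer": "') && !tokenText.includes('",\n')) {
    pushed.push(token);
  }
}

console.log(pushed.join("")); // ' "Try the Trailblazer backpack'
```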

This will stream the entire chat response back to the shopper. But this isn't all you want to return: you also need additional product info so you can display product listings and link to the products from your frontend. To do this, you can use the productIds returned from the chain. Not all product IDs that were passed in will be used in the response, so it is important to use the IDs returned from the chain rather than the IDs grabbed using the cosine similarity operation.

The returned productIds can be used as a filter on the shopifyProduct model to grab the product details for the recommended products. The api.shopifyProduct.findMany() call is a read operation that returns the fields defined in the select GraphQL query. The product title and handle are returned, as well as the product image source and the product's shop domain. The filter GraphQL query limits the results to products whose ID is in the productIds array.

Once the product details are returned, they are also sent to the route's caller via the stream.

routes/POST-chat.js
js
/** imports and chain definition */
module.exports = async ({ api, request, reply, logger }) => {
  /** find recommended products using embeddings */

  /** invoke chain and stream response */

  // grab the complete response to store records in Chat Log model
  const { answer, productIds } = resp.text;

  // select all the details of the recommended product if one was selected
  let selectedProducts = undefined;
  if (productIds) {
    try {
      selectedProducts = await api.shopifyProduct.findMany({
        select: {
          title: true,
          handle: true,
          images: {
            edges: {
              node: {
                id: true,
                source: true,
              },
            },
          },
          shop: {
            domain: true,
          },
        },
        filter: {
          id: {
            in: productIds,
          },
        },
      });

      // only return a single image!
      selectedProducts.forEach((product) => {
        if (product.images.edges.length > 1) {
          product.images.edges.splice(1);
        }
      });
    } catch (error) {
      logger.error({ error }, "error fetching data for selected product");

      // destroy the stream and push error message
      stream.destroy(error);
    }
  }

  logger.info({ selectedProducts }, "products being sent to the frontend for display");

  // send the selected product to the stream
  stream.push(JSON.stringify({ products: selectedProducts }));

  /** close the stream + catch statement */
};

Your route is now complete! Now all that is needed is a frontend app that allows shoppers to ask a question and displays the response along with product recommendations. You're going to use Gadget's hosted React frontends to build this UI.

Build a frontend 

All Gadget apps come with hosted Vite frontends that can be used to build your UI. You can use these frontends as a starting point for your UI, or you can build your UI from scratch. For this tutorial, you're going to use the hosted React frontend to build a chat widget that can be embedded on any page of your Shopify store.

Build the chat widget 

In your Gadget app's frontend folder, create a new file called Chat.jsx. This file will contain the React component for the chat widget. The complete code for this file is below, and the following sections walk through each piece of it.

frontend/Chat.jsx
jsx
import { useState } from "react";
import "./App.css";
import { useFetch } from "@gadgetinc/react";

export const Chat = () => {
  const [userMessage, setUserMessage] = useState("");
  const [reply, setReply] = useState("");
  const [productRecommendations, setProductRecommendations] = useState(null);
  const [errorMessage, setErrorMessage] = useState("");

  const [{ data, fetching, error }, sendChat] = useFetch("/chat", {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    stream: true,
  });

  return (
    <section>
      <form
        onSubmit={async (e) => {
          e.preventDefault();
          setReply("");
          setProductRecommendations(null);
          setErrorMessage("");

          const stream = await sendChat({
            body: JSON.stringify({ message: userMessage }),
          });

          const decodedStreamReader = stream.pipeThrough(new TextDecoderStream()).getReader();

          // handle any stream errors
          decodedStreamReader.closed.catch((error) => {
            setErrorMessage(error.toString());
          });

          let replyText = "";
          let done = false;
          while (!done) {
            const { value, done: doneReading } = await decodedStreamReader.read();

            done = doneReading;

            // handle the recommended products
            if (value?.includes(`{"products":`)) {
              setProductRecommendations(JSON.parse(value));
            } else if (value) {
              replyText = replyText + value;
              replyText = replyText.replace('"', "").replace(",", "");
              setReply(replyText);
            }
          }
        }}
      >
        <textarea
          placeholder="Ask a question about this shop's products ...."
          value={userMessage}
          onChange={(event) => setUserMessage(event.currentTarget.value)}
        />
        <input type="submit" value="Ask" disabled={fetching} />
      </form>
      <br />

      {errorMessage && (
        <section>
          <pre>
            <code>{errorMessage}</code>
          </pre>
        </section>
      )}

      {reply && (
        <section>
          <p>{reply}</p>
          <br />
          <div>
            {productRecommendations?.products ? (
              productRecommendations.products.map((product, i) => (
                <a
                  key={`${i}_${product.title}`}
                  href={"https://" + product.shop.domain + "/products/" + product.handle}
                  target="_blank"
                  rel="noreferrer"
                >
                  {product.title}
                  {product.images.edges[0] && (
                    <img
                      style={{ border: "1px black solid" }}
                      width="200px"
                      src={product.images.edges[0].node.source}
                      alt={product.title}
                    />
                  )}
                </a>
              ))
            ) : (
              <span>Loading recommendations...</span>
            )}
          </div>
        </section>
      )}
      {fetching && <span>Thinking...</span>}
      {error && <p className="error">There was an error: {String(error)}</p>}
    </section>
  );
};

Step-by-step chat widget build 

The first thing to set up when building the chat widget is the useFetch hook. This hook, provided by Gadget, is used to make requests to your backend HTTP route. It takes two arguments: the backend route to call and an options object that configures the request. In this case, you set the request method to POST, the content-type header to application/json, and the stream option to true, which tells useFetch to return a stream that can be used to read the response from the backend route. The hook returns an object containing response and fetching info, along with a function, named sendChat here, that makes the actual request to your chat route.

frontend/Chat.jsx
jsx
import { useState } from "react";
import "./App.css";
import { useFetch } from "@gadgetinc/react";

export const Chat = () => {
  const [userMessage, setUserMessage] = useState("");
  const [reply, setReply] = useState("");
  const [productRecommendations, setProductRecommendations] = useState(null);
  const [errorMessage, setErrorMessage] = useState("");

  // Gadget's useFetch hook is used to make requests to the backend HTTP route
  const [{ data, fetching, error }, sendChat] = useFetch("/chat", {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    stream: true,
  });

  return <div>hello, world</div>;
};

React state is also defined above to manage the text that the shopper enters into the chat widget, as well as the streamed chat response and recommended product info. The next thing to add is a <form> that makes use of this state. A shopper will use the form to ask a question. The form's onSubmit callback will handle the streamed response from the backend route using the browser's built-in Streams API (TextDecoderStream and a stream reader).

frontend/Chat.jsx
jsx
<form
  onSubmit={async (e) => {
    e.preventDefault();

    // remove any previous messages and product recommendations
    setReply("");
    setProductRecommendations(null);

    // send the user's message to the backend route
    // the response will be streamed back to the frontend
    const stream = await sendChat({
      body: JSON.stringify({ message: userMessage }),
    });

    // decode the streamed response
    const decodedStreamReader = stream.pipeThrough(new TextDecoderStream()).getReader();

    // handle any stream errors
    decodedStreamReader.closed.catch((error) => {
      setErrorMessage(error.toString());
    });

    let replyText = "";
    let done = false;

    // read the response from the stream
    while (!done) {
      const { value, done: doneReading } = await decodedStreamReader.read();

      done = doneReading;

      // handle the recommended products that are returned from the stream
      if (value?.includes(`{"products":`)) {
        setProductRecommendations(JSON.parse(value));
      } else if (value) {
        // handle the chat response
        replyText = replyText + value;
        replyText = replyText.replace('"', "").replace(",", "");
        setReply(replyText);
      }
    }
  }}
>
  <textarea
    placeholder="Ask a question about this shop's products ...."
    value={userMessage}
    onChange={(event) => setUserMessage(event.currentTarget.value)}
  />
  <input type="submit" value="Ask" disabled={fetching} />
</form>

This code sends the user's message to the backend route and then reads the streamed response. Each chunk of the response is either a piece of the chat reply or a JSON string containing the recommended products. The while loop reads from the stream until it is done, storing the chat response and recommended products in React state as chunks arrive.
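The chunk-handling branch inside the loop can be pictured as a small classifier (this helper is illustrative, not part of the tutorial code): a chunk that contains the {"products": marker is the final product payload, and anything else is chat reply text.

```javascript
// Illustrative helper: classify a streamed chunk from the /chat route.
// A chunk is either part of the chat reply or the final products payload.
function classifyChunk(value) {
  if (value && value.includes(`{"products":`)) {
    return { type: "products", products: JSON.parse(value).products };
  }
  return { type: "text", text: value ?? "" };
}

classifyChunk(`{"products":[{"title":"Daily Backpack"}]}`);
// → { type: "products", products: [{ title: "Daily Backpack" }] }
classifyChunk("Here are some backpacks");
// → { type: "text", text: "Here are some backpacks" }
```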

The next thing to do is render the chat response and recommended products in the chat widget. The chat response is rendered in a <p> tag, and each recommended product is rendered as an <a> tag that links to the product's page on the shop's storefront.

frontend/Chat.jsx
jsx
{reply && (
  <section>
    <p>{reply}</p>
    <br />
    <div>
      {productRecommendations?.products ? (
        productRecommendations.products.map((product, i) => (
          <a
            key={`${i}_${product.title}`}
            href={"https://" + product.shop.domain + "/products/" + product.handle}
            target="_blank"
            rel="noreferrer"
          >
            {product.title}
            {product.images.edges[0] && (
              <img
                style={{ border: "1px black solid" }}
                width="200px"
                src={product.images.edges[0].node.source}
                alt={product.title}
              />
            )}
          </a>
        ))
      ) : (
        <span>Loading recommendations...</span>
      )}
    </div>
  </section>
)}

Loading and error messaging round out the widget. Render a <span> with a loading message while the fetching variable is true, and a <p> with an error message when the error variable is set.

frontend/Chat.jsx
jsx
return (
  <section>
    {/** form and recommendations */}
    {fetching && <span>Thinking...</span>}
    {error && <p className="error">There was an error: {String(error)}</p>}
  </section>
);

Hook up chat widget to frontend project 

Now you can add the chat widget to your app's frontend. Because this is not an embedded Shopify app, you can simplify frontend/App.jsx with the following code that imports the chat widget and renders it at your app's default route:

frontend/App.jsx
jsx
import "./App.css";
import { Route, Routes } from "react-router-dom";
import { Chat } from "./Chat";

const App = () => (
  <main>
    <header>
      <a href="https://gadget.dev" className="logo">
        <img src="https://assets.gadget.dev/assets/icon.svg" height="52" alt="Gadget" />
      </a>
      <h1>AI Product Recommender Chatbot</h1>
    </header>
    <Routes>
      <Route path="/" element={<Chat />} />
    </Routes>
    <br />
    <footer>
      Running in <span className={window.gadgetConfig.environment}>{window.gadgetConfig.environment}</span>
    </footer>
  </main>
);

export default App;

This isn't an admin-embedded Shopify app, so you can use the Provider from @gadgetinc/react instead of the App Bridge Provider. The Provider enables you to make authenticated requests with your Gadget app's api client defined in frontend/api.js.

You can overwrite the default code in frontend/main.jsx with the following:

frontend/main.jsx
jsx
import { Provider } from "@gadgetinc/react";
import React from "react";
import ReactDOM from "react-dom/client";
import { BrowserRouter } from "react-router-dom";

import { api } from "./api";
import App from "./App";

const root = document.getElementById("root");
if (!root) throw new Error("#root element not found for booting react app");

ReactDOM.createRoot(root).render(
  <React.StrictMode>
    <Provider value={api.connection.currentClient}>
      <BrowserRouter>
        <App />
      </BrowserRouter>
    </Provider>
  </React.StrictMode>
);

Finally, add some styles to polish the chat widget. Copy and paste the following CSS into frontend/App.css.

frontend/App.css
css
body {
  background: #f3f3f3;
  color: #252525;
  padding: 55px;
  line-height: 1.5;
  font-family: "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
  max-width: 60%;
  margin: auto;
}

main {
  flex: 1 0 auto;
  align-items: center;
  justify-content: center;
}

h1 {
  font-size: 32px;
  margin: 24px 0;
  max-width: none;
}
h2 {
  font-size: 24px;
  max-width: none;
}
h3 {
  font-size: 16px;
  max-width: none;
}

table {
  border-collapse: collapse;
  margin: 7px 0px;
}

td {
  padding: 7px 6px;
}

header {
  display: flex;
  flex-direction: row;
  align-items: center;
  gap: 1em;
  margin-bottom: 2em;
}

.logo {
  display: inline-block;
}

.Development {
  color: #87550b;
}

.Production {
  color: #5d39bb;
}

form {
  display: flex;
  flex-direction: column;
  align-items: center;
}

form > input {
  width: 200px;
}

code {
  font-family: "SFMono-Regular", Consolas, "Liberation Mono", Menlo, Courier, monospace;
  font-size: 0.95em;
  font-weight: bold;
}

a {
  color: currentColor;
}

a:hover {
  text-decoration: none;
}

:focus {
  outline: 1px dashed currentColor;
  outline-offset: 0.15rem;
}

textarea {
  min-width: 500px;
  min-height: 100px;
  margin-bottom: 16px;
}

section > div {
  display: flex;
  flex-direction: row;
  justify-content: space-evenly;
}

section > img {
  border: black solid 1px;
}

section > div > a {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: space-between;
  width: 50%;
}

Test out your chatbot 

To view your app:

  • Click on your app name in the top left corner of the Gadget dashboard
  • Hover over Go to app and select your Development app

Congrats! You have built a chatbot that uses the OpenAI API to find the best products to recommend to shoppers. Test it out by asking a question. The chatbot should respond with a list of products that are relevant to the question.

Screenshot of the finished chatbot, with a question entered (asking about backpacks for sale) and a response. The response includes a text response and product recommendations, both generated by langchain

Next steps 

Have questions about the tutorial? Join Gadget's developer Discord to ask Gadget employees and join the Gadget developer community!

Want to learn more about building AI apps in Gadget? Check out our building AI apps documentation.