What does the OpenAI connection provide? 

The OpenAI connection allows you to interact with the OpenAI API using a pre-authenticated client. It's designed to simplify the process of integrating OpenAI's powerful language models into your Gadget application.

Setting up the OpenAI connection 

In your Gadget application, click Connections on the sidebar to view the list of connections and click on the OpenAI connection to configure it.

A view of Gadget's connections page with an arrow pointing to the OpenAI connection

Choose between using Gadget's managed OpenAI API key or using your own API keys.

A view of Gadget's OpenAI connection modal with the Use Gadget's keys option selected

Option 1: Use Gadget's keys 

For a quick start without signing up for an OpenAI account, choose Gadget-managed keys. They come pre-configured and include $50 in OpenAI credits to get you started. Keep in mind that Gadget-managed keys are rate limited and meant for development purposes only. When you're ready for production, replace them with your own OpenAI keys.

Option 2: Use your own API keys 

If you already have an OpenAI account or want more control over the API, use your own OpenAI keys. This lets you manage your own rate limits and access the full range of OpenAI platform features.

How does the OpenAI connection work? 

With the OpenAI connection set up, you can now start using the OpenAI API in your Gadget actions and routes to leverage the power of OpenAI's models.

Example: Using a chat completion in a global action 

Create a new global action with the API Identifier useOpenAI and add the following code to the action's run function:

export async function run({ params, logger, api, connections, scope }) {
  const chatCompletion = await connections.openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello from Gadget!" }],
  });
  scope.result = { message: chatCompletion.choices[0].message };
}
A view of a global action using the OpenAI connection.

Now click the Run action button to go to your API playground and test the action. You should see a result like this:

{
  "data": {
    "useOpenAI": {
      "success": true,
      "errors": null,
      "result": {
        "message": {
          "role": "assistant",
          "content": "Hello Gadget! How can I assist you today?"
        }
      }
    }
  },
  "extensions": {
    "logs": "",
    "traceId": "bda7ac999cd8ec291ecdb47f12319720"
  }
}

Using LangChain with Gadget-managed credentials 

If you are working with LangChain, note that with Gadget-managed keys, passing the API key alone is not enough; you must also point the client at the connection's base URL. The snippets below show how to set up two LangChain models this way: ChatOpenAI and OpenAIEmbeddings.

Chat model integration 

import { ChatOpenAI } from "langchain/chat_models/openai";

const chatModel = new ChatOpenAI({
  streaming: true,
  modelName: "gpt-3.5-turbo-0613",
  temperature: 0,
  openAIApiKey: connections.openai.configuration.apiKey,
  configuration: {
    // Set the base URL for OpenAI API calls
    basePath: connections.openai.configuration.baseURL,
  },
});

Text embedding integration 

import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const openAIEmbedding = new OpenAIEmbeddings(
  {
    openAIApiKey: connections.openai.configuration.apiKey,
  },
  {
    // Set the base URL for OpenAI API calls
    basePath: connections.openai.configuration.baseURL,
  }
);