This package is a simple and convenient way to log all requests made through the OpenAI API with Helicone. You can easily track and manage your OpenAI API usage and monitor your GPT models' cost, latency, and performance on the Helicone platform.
## Proxy Integration

To get started, install the `@helicone/helicone` package:

```bash
npm install @helicone/helicone
```
Set `HELICONE_API_KEY` as an environment variable:
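For example, in a POSIX shell:

```bash
export HELICONE_API_KEY=<your-helicone-api-key>
```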
ℹ️ You can also set the Helicone API key in your code (see below).
-
Replace:
const { ClientOptions, OpenAI } = require("openai");
with:
const { HeliconeProxyOpenAI as OpenAI, IHeliconeProxyClientOptions as ClientOptions } = require("helicone");
Make a request. Chat, completion, embedding, and other usage is equivalent to the OpenAI package:

```js
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY, // Can also be set as an env variable
    // ... additional helicone meta fields
  },
});

const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello world" }],
});

console.log(chatCompletion.choices[0].message);
```
To log user feedback, ensure you store the `helicone-id` header returned in the original response:

```js
const { HeliconeFeedbackRating } = require("helicone"); // feedback rating enum

// data is the parsed completion; response is the raw HTTP response
const { data, response } = await openai.chat.completions
  .create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello world" }],
  })
  .withResponse();

const heliconeId = response.headers.get("helicone-id");
await openai.helicone.logFeedback(heliconeId, HeliconeFeedbackRating.Positive); // or Negative
```
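In practice, you would typically persist the `heliconeId` alongside the conversation (for example, in your database) so feedback can be logged later, once the user actually rates the response.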
The `heliconeMeta` fields available on the proxy client:

```ts
interface IHeliconeMeta {
  apiKey?: string;
  properties?: { [key: string]: any };
  cache?: boolean;
  retry?: boolean | { [key: string]: any };
  rateLimitPolicy?: string | { [key: string]: any };
  user?: string;
  baseUrl?: string;
  onFeedback?: OnHeliconeFeedback; // Callback after feedback was processed
}

type OnHeliconeLog = (response: Response) => Promise<void>;
type OnHeliconeFeedback = (result: Response) => Promise<void>;
```
For example, with caching, retries, custom properties, and a rate-limit policy:

```ts
const options: IHeliconeProxyClientOptions = {
  apiKey, // your OpenAI API key
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    cache: true,
    retry: true,
    properties: {
      Session: "24",
      Conversation: "support_issue_2",
    },
    rateLimitPolicy: {
      quota: 10,
      time_window: 60,
      segment: "Session",
    },
  },
};
```
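The policy above limits usage to 10 requests per 60-second window, segmented by the `Session` property. If you want to observe the outcome of feedback calls, here is a minimal sketch of the `onFeedback` callback (the handler body and the `feedbackOptions` name are illustrative, not part of the package):

```ts
const feedbackOptions: IHeliconeProxyClientOptions = {
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    // Invoked after Helicone has processed a logFeedback call
    onFeedback: async (result: Response) => {
      if (!result.ok) {
        console.error(`Feedback logging failed: ${result.status}`);
      }
    },
  },
};
```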
## Async Integration

With async integration, requests are not proxied through Helicone; each response is logged after it completes. Setup mirrors the proxy integration: install the `@helicone/helicone` package:

```bash
npm install @helicone/helicone
```
Set `HELICONE_API_KEY` as an environment variable, as shown above.
ℹ️ You can also set the Helicone API key in your code (see below).
Replace:

```js
const { ClientOptions, OpenAI } = require("openai");
```

with:

```js
const {
  HeliconeAsyncOpenAI: OpenAI,
  IHeliconeAsyncClientOptions: ClientOptions,
} = require("helicone");
```
Make a request. As with the proxy client, chat, completion, embedding, and other usage is equivalent to the OpenAI package:

```js
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY, // Can also be set as an env variable
    // ... additional helicone meta fields
  },
});

const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello world" }],
});

console.log(chatCompletion.choices[0].message);
```
With async logging, you must retrieve the `helicone-id` header from the log response (not the LLM response), via the `onLog` callback:

```ts
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    onLog: async (response: Response) => {
      const heliconeId = response.headers.get("helicone-id");
      await openai.helicone.logFeedback(
        heliconeId,
        HeliconeFeedbackRating.Positive
      );
    },
  },
});
```
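Since `onLog` is the only place the `helicone-id` surfaces with async logging, you will typically persist it there; logging feedback immediately, as in this condensed example, is just for illustration.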
Note that async logging loses some features, such as caching, rate limits, and retries.
The `heliconeMeta` fields available on the async client:

```ts
interface IHeliconeMeta {
  apiKey?: string;
  properties?: { [key: string]: any };
  user?: string;
  baseUrl?: string;
  onLog?: OnHeliconeLog;
  onFeedback?: OnHeliconeFeedback;
}

type OnHeliconeLog = (response: Response) => Promise<void>;
type OnHeliconeFeedback = (result: Response) => Promise<void>;
```
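Putting the async pieces together, a minimal sketch, assuming the options object is passed to the constructor the same way as in the proxy examples above (the handler body is illustrative):

```ts
const options: IHeliconeAsyncClientOptions = {
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    properties: { Session: "24" },
    // Fires once Helicone has logged the request
    onLog: async (response: Response) => {
      const heliconeId = response.headers.get("helicone-id");
      console.log(`Logged request ${heliconeId}`);
    },
  },
};

const openai = new OpenAI(options);
```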
For more information, see our documentation.