Text Completions

Edgen AI Text Completions SDK

The Edgen AI Text Completions SDK empowers developers to generate dynamic, real-time text completions with streaming capabilities. It integrates seamlessly with any Edgen AI text model, offering flexibility with custom parameters and tailored instructions, making it ideal for conversational AI, content generation, and more.


Key Features

  • Flexible Models: Compatible with all Edgen AI text models, including GPT-based variants.
  • Real-Time Streaming: Provides instant response streaming for low-latency applications.
  • Customizable Parameters: Adjust settings like temperature, max tokens, and more.
  • Versatile Applications: Perfect for chatbots, creative writing, document analysis, function calling, and more.
  • Simple Integration: Designed for quick setup and minimal configuration.

Getting Started

Setting Up the Client

To get started, install the Edgen AI library and configure the client with your API key:

import EdgenAI from "edgenai";
 
const client = new EdgenAI({
  apiKey: "YOUR_API_KEY",
});
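Hardcoding keys is risky in shared or committed code; a common alternative is to read the key from an environment variable. This is a minimal sketch, and the variable name EDGEN_API_KEY is an assumption for illustration, not an official convention:

```javascript
// Hypothetical: EDGEN_API_KEY is an assumed variable name, not an official one.
// Falls back to a placeholder so the snippet runs without the variable set.
const apiKey = process.env.EDGEN_API_KEY ?? "YOUR_API_KEY";

// const client = new EdgenAI({ apiKey }); // same constructor as above
```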

Generating Text Completions

Here is an example of how to generate text completions with the SDK:

async function generateTextCompletions() {
  const response = await client.chat.completions.create({
    model: "clara-8B",
    messages: [
      { role: "system", content: "You are a sales assistant." },
      { role: "user", content: "What is the price of the product?" },
    ],
  });
  console.log(response.choices[0].message.content);
}
 
generateTextCompletions();

By default, the API returns the completion as a JSON object with the following structure:

{
  "id": "chatcmpl-7zv9rDxkEhjLBfU0mfxbG67ArTV60",
  "object": "chat.completion",
  "created": 1699342000,
  "model": "clara-8B",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The price of the product is $100."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 12,
    "total_tokens": 26
  }
}
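The fields above can be read directly off the response object. The sketch below uses a hand-built object shaped like the JSON above in place of a live API call, just to show which paths hold the reply and the token counts:

```javascript
// Hand-built stand-in for a response, shaped like the JSON structure above.
const response = {
  id: "chatcmpl-7zv9rDxkEhjLBfU0mfxbG67ArTV60",
  object: "chat.completion",
  created: 1699342000,
  model: "clara-8B",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: "The price of the product is $100." },
      finish_reason: "stop",
    },
  ],
  usage: { prompt_tokens: 14, completion_tokens: 12, total_tokens: 26 },
};

// The assistant's reply lives under choices[0].message.content;
// token accounting lives under usage.
const reply = response.choices[0].message.content;
const totalTokens = response.usage.total_tokens;
console.log(reply);       // "The price of the product is $100."
console.log(totalTokens); // 26
```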

Streaming Text Completions

Here is a simple example of how to stream text completions:

async function streamTextCompletions() {
  const stream = await client.chat.completions.create({
    model: "clara-8B",
    messages: [
      { role: "system", content: "You are a sales assistant." },
      { role: "user", content: "What is the price of the product?" },
    ],
    temperature: 0.5,
    max_tokens: 100,
    top_p: 0.9,
    top_k: 50,
    stream: true,
  });
 
  for await (const chunk of stream) {
    console.log(chunk.choices[0]?.delta?.content || "");
  }
}
 
streamTextCompletions();

When stream is set to true, the API returns a sequence of chunks, each a JSON object with the following structure:

{
  "id": "chatcmpl-7zv9rDxkEhjLBfU0mfxbG67ArTV60",
  "object": "chat.completion.chunk",
  "created": 1699342000,
  "model": "clara-8B",
  "choices": [
    {
      "delta": {
        "role": "assistant",
        "content": "The"
      },
      "index": 0,
      "finish_reason": null
    }
  ]
}
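To reconstruct the full message on the client side, the delta fragments are concatenated as they arrive. The sketch below simulates a stream with an async generator yielding chunks shaped like the JSON above; in real use the stream would come from the SDK's create call with stream: true:

```javascript
// Simulated stream of chunk objects shaped like the structure above
// (a stand-in for the stream returned by the SDK).
async function* fakeStream() {
  const fragments = ["The", " price", " is", " $100."];
  for (const content of fragments) {
    yield { choices: [{ delta: { content }, index: 0, finish_reason: null }] };
  }
}

// Accumulate delta fragments into the complete message.
async function collect(stream) {
  let full = "";
  for await (const chunk of stream) {
    // A delta may omit content (e.g. the final chunk), hence the fallback.
    full += chunk.choices[0]?.delta?.content || "";
  }
  return full;
}

collect(fakeStream()).then((text) => console.log(text)); // "The price is $100."
```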