HTTP SSE (Server-Sent Events) Access
To provide broader compatibility and a more convenient access method, the Bitseek Chat gateway also supports streaming calls via the HTTP SSE (Server-Sent Events) protocol.
This interface is designed to be fully compatible with OpenAI's Chat Completions API. This means you can use any OpenAI-compatible client library (such as the official openai libraries for Python and Node.js) to connect easily by pointing the base_url (or baseURL) to our service address.
Endpoint
- URL: https://chat-proxy.bitseek.ai/v1/chat/completions
- Method: POST
- Headers:
  - Content-Type: application/json
  - Accept: text/event-stream
  - Authorization: Bearer YOUR_API_KEY
Request Body
The request body follows OpenAI's chat/completions format. You must set the stream parameter to true.
Example:
{
  "messages": [
    {
      "role": "user",
      "content": "Hello, please introduce yourself."
    }
  ],
  "stream": true
}
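For reference, the same request can be issued without an SDK. The sketch below builds the exact request described above (endpoint, headers, and body) using only Python's standard library; `build_sse_request` and `stream_chat` are illustrative helper names, not part of any Bitseek SDK.

```python
import json
import urllib.request

# Endpoint and key placeholders from the sections above.
URL = "https://chat-proxy.bitseek.ai/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

def build_sse_request(prompt):
    """Build a POST request matching the endpoint spec above."""
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }).encode("utf-8")
    return urllib.request.Request(
        URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Accept": "text/event-stream",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

def stream_chat(prompt):
    """Send the request and print each raw SSE data: line as it arrives."""
    with urllib.request.urlopen(build_sse_request(prompt)) as resp:
        for raw in resp:
            line = raw.decode("utf-8").strip()
            if line.startswith("data: "):
                print(line)  # each line carries one JSON chunk, or [DONE]
```

In practice the SDK clients shown below are more convenient; this sketch is mainly useful for debugging the wire format.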
Client Examples
Python
This example shows how to call the service using the official openai Python library.
1. Installation:
pip install openai
2. Code:
from openai import OpenAI

# 1. Initialize the client
# Point the base_url to the Bitseek Chat service address
# Use your authentication key for api_key
client = OpenAI(
    base_url="https://chat-proxy.bitseek.ai/v1",
    api_key="YOUR_API_KEY"
)

# 2. Make a streaming request
# Set the stream parameter to True
stream = client.chat.completions.create(
    model="your-model-name",
    messages=[{"role": "user", "content": "Write a short poem about the starry sky"}],
    stream=True
)

# 3. Process the streaming response
# Iterate through the event stream and print the content
print("AI: ", end="")
for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)
print()  # Newline at the end
Node.js
This example shows how to call the service using the official openai Node.js library.
1. Installation:
npm install openai
2. Code:
import OpenAI from "openai";

// 1. Initialize the client
// Point the baseURL to the Bitseek Chat service address
// Use your authentication key for apiKey
const client = new OpenAI({
  baseURL: "https://chat-proxy.bitseek.ai/v1",
  apiKey: "YOUR_API_KEY",
});

async function main() {
  // 2. Make a streaming request
  // Set the stream parameter to true
  const stream = await client.chat.completions.create({
    model: "your-model-name",
    messages: [
      {
        role: "user",
        content: "Write a short poem about the starry sky",
      },
    ],
    stream: true,
  });

  // 3. Process the streaming response
  // Iterate through the event stream and print the content
  process.stdout.write("AI: ");
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
  process.stdout.write("\n"); // Newline at the end
}

main();
Java
This example uses the popular openai-java library by Theo Kanning.
1. Dependency (Maven):
Add this to your pom.xml:
<dependency>
  <groupId>com.theokanning.openai-gpt3-java</groupId>
  <artifactId>service</artifactId>
  <version>0.18.2</version>
</dependency>
2. Code:
import com.fasterxml.jackson.databind.ObjectMapper;
import com.theokanning.openai.client.OpenAiApi;
import com.theokanning.openai.completion.chat.ChatCompletionRequest;
import com.theokanning.openai.completion.chat.ChatMessage;
import com.theokanning.openai.service.OpenAiService;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;

import java.time.Duration;
import java.util.List;

class SseExample {
    public static void main(String[] args) {
        // 1. Initialize the service
        // Use your authentication key for the token.
        // This library has no constructor that takes a base URL, so we build
        // the Retrofit client ourselves via its default* helpers and point it
        // at the Bitseek Chat service.
        // Note: the base URL should NOT include the /v1 path for this library,
        // and Retrofit requires it to end with a trailing slash.
        String token = "YOUR_API_KEY";
        String baseUrl = "https://chat-proxy.bitseek.ai/";
        ObjectMapper mapper = OpenAiService.defaultObjectMapper();
        OkHttpClient client = OpenAiService.defaultClient(token, Duration.ofSeconds(60));
        Retrofit retrofit = OpenAiService.defaultRetrofit(client, mapper)
                .newBuilder()
                .baseUrl(baseUrl)
                .build();
        OpenAiService service = new OpenAiService(retrofit.create(OpenAiApi.class));

        // 2. Create a streaming request
        // Set stream to true to enable SSE.
        ChatCompletionRequest request = ChatCompletionRequest.builder()
                .model("your-model-name")
                .messages(List.of(new ChatMessage("user", "Write a short poem about the starry sky")))
                .stream(true)
                .build();

        // 3. Process the streaming response
        // The streamChatCompletion method returns a Flowable that emits events.
        // We block and print each chunk's content to the console.
        System.out.print("AI: ");
        service.streamChatCompletion(request)
                .blockingForEach(chunk -> {
                    chunk.getChoices().forEach(choice -> {
                        String content = choice.getMessage().getContent();
                        if (content != null) {
                            System.out.print(content);
                        }
                    });
                });
        System.out.println(); // Newline at the end
        service.shutdownExecutor();
    }
}
General Code Explanation
- base_url / baseURL: This is the most critical parameter. You must set it to the API address provided by Bitseek Chat. Note the path differences required by different libraries (/v1 for Python/Node.js, no path for the Java library).
- api_key / token: Provide your valid API key for authentication.
- stream: true: This flag enables the SSE streaming response.
- chunk: Each chunk object received from the stream is consistent with the OpenAI SDK's ChatCompletionChunk structure, allowing you to easily extract incremental content.
Data Stream Format
Each event returned by the server carries a data: field containing an OpenAI-compatible JSON string. The stream is terminated by a special data: [DONE] event. The client libraries above handle the parsing of these events automatically.
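If you consume the stream without an SDK, this framing is straightforward to parse by hand. The sketch below is a minimal parser, assuming chunks follow the OpenAI ChatCompletionChunk layout; the sample lines are illustrative, not captured server output.

```python
import json

def iter_sse_content(lines):
    """Yield incremental text from raw SSE lines until data: [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return  # end-of-stream sentinel
        chunk = json.loads(payload)
        content = chunk["choices"][0].get("delta", {}).get("content")
        if content:
            yield content

# Illustrative sample of the wire format described above:
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": "!"}}]}',
    "data: [DONE]",
]
print("".join(iter_sse_content(sample)))  # prints: Hello!
```

Note that the first chunk typically carries only the role in its delta, so the parser skips chunks whose delta has no content.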