Implementing Text Streaming with OpenAI Integration

Posted by Mohd Ubaish | 22nd February 2024

Server-Side Implementation

Let's start with the server-side implementation. We'll be using Node.js and Express for our backend. Here's a breakdown of the key components:
1. Initializing the OpenAI Client: First, you need to set up your Node.js server to interact with OpenAI's API. You can achieve this by installing the openai npm package and initializing the client with your API key.

const OpenAI = require('openai');
const openai = new OpenAI({ apiKey: 'YOUR_API_KEY' });
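Hard-coding the key is fine for a quick demo, but in practice it is usually read from an environment variable. A minimal sketch (resolveApiKey is a hypothetical helper, not part of the original code; OPENAI_API_KEY is the variable name conventionally used by OpenAI's SDKs):

```javascript
// Hypothetical helper: resolve the API key from the environment
// instead of hard-coding it in source.
function resolveApiKey(env) {
    const key = env.OPENAI_API_KEY;
    if (!key) {
        throw new Error('Missing OPENAI_API_KEY environment variable');
    }
    return key;
}

// Usage with the client:
// const openai = new OpenAI({ apiKey: resolveApiKey(process.env) });
```

This keeps the secret out of version control and lets each deployment supply its own key.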

 

2. Handling Requests: When a request is received from the client, you'll collect the conversation history and pass it to the OpenAI API for completion.

 

app.post('/api/v1/chat', async (req, res) => {
    const { history } = req.body;

    try {
        const completion = await openai.chat.completions.create({
            model: "gpt-3.5-turbo",
            messages: history,
            stream: true,
        });

        // We stream plain text back, so advertise it as such.
        res.status(200).set({
            'Content-Type': 'text/plain; charset=utf-8',
            'Transfer-Encoding': 'chunked',
            'X-Content-Type-Options': 'nosniff',
        });

        for await (const chunk of completion) {
            const responseChunk = chunk.choices?.[0]?.delta?.content;
            if (responseChunk) {
                res.write(responseChunk);
            }
        }

        res.end(); // End the response stream once all chunks are sent
    } catch (error) {
        console.error('Error:', error);
        // Only send a JSON error if streaming hasn't started yet.
        if (!res.headersSent) {
            res.status(500).json({ error: 'Internal Server Error' });
        } else {
            res.end();
        }
    }
});

3. Streaming Response: As the completion is generated by the OpenAI model, you'll stream the chunks of text back to the client using chunked encoding.
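Each streamed event carries only a small delta of newly generated text; concatenating the delta.content fields reconstructs the full reply. A minimal sketch with hypothetical chunk objects shaped like OpenAI streaming events (the real ones arrive from the async iterator above):

```javascript
// Hypothetical chunk objects shaped like OpenAI streaming events;
// each delta carries only the newly generated slice of text.
const chunks = [
    { choices: [{ delta: { role: 'assistant' } }] },       // first event carries the role
    { choices: [{ delta: { content: 'Hello' } }] },
    { choices: [{ delta: { content: ', world!' } }] },
    { choices: [{ delta: {}, finish_reason: 'stop' }] },   // final event has no content
];

// Accumulate the deltas, exactly as the server loop does with res.write().
let full = '';
for (const chunk of chunks) {
    const piece = chunk.choices?.[0]?.delta?.content;
    if (piece) {
        full += piece;
    }
}

console.log(full); // "Hello, world!"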

 

 

Client-Side Implementation

 

1. Sending Requests: When a user inputs a message, you'll send a request to the server with the conversation history.

 

const generateResponse = async () => {
    try {
        // Model replies use the "assistant" role in OpenAI chat history;
        // the empty trailing message is a placeholder while streaming.
        const newGptHistory = [...gptHistory, { role: "user", content: message }, { role: "assistant", content: "" }];
        setGptHistory(newGptHistory);
        setMessage("");
        setLoading(true);
        const response = await fetch('http://localhost:5000/api/v1/chat', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': localStorage.getItem('authorization')
            },
            body: JSON.stringify({ history: newGptHistory }),
        });

        if (!response.ok) {
            throw new Error(`Request failed with status ${response.status}`);
        }

        const reader = response.body.getReader();
        // Reuse one decoder so multi-byte characters split across
        // chunk boundaries are decoded correctly.
        const decoder = new TextDecoder('utf-8');
        let responseMessage = "";
        while (true) {
            const { done, value } = await reader.read();

            if (done) {
                break;
            }

            const stringValue = decoder.decode(value, { stream: true });
            setStreamData(prev => prev + stringValue);
            responseMessage += stringValue;
        }

        const updatedHistory = [...gptHistory, { role: "user", content: message }, { role: "assistant", content: responseMessage }];
        setGptHistory(updatedHistory);
        setStreamData("");
    } catch (error) {
        console.error('Error fetching data:', error);
    } finally {
        setLoading(false);
    }
};

 

 

2. Handling Streamed Response: As the response is received in chunks, you'll update the conversation history and display the streamed text to the user in real-time.
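One way to render this is to overlay the in-flight streamData onto the trailing placeholder message before displaying the list. A minimal sketch (toDisplayMessages is a hypothetical helper, not part of the original code):

```javascript
// Hypothetical helper: merge the in-flight stream text into the
// trailing placeholder so the UI shows the reply as it arrives.
function toDisplayMessages(history, streamData) {
    if (!streamData) {
        return history;
    }
    const last = history[history.length - 1];
    // Overlay the streamed text onto the empty trailing reply.
    return [...history.slice(0, -1), { ...last, content: streamData }];
}

const history = [
    { role: 'user', content: 'Hi there' },
    { role: 'assistant', content: '' }, // placeholder while streaming
];

console.log(toDisplayMessages(history, 'Hel')[1].content); // "Hel"
```

In a React component you would call this in render, e.g. `toDisplayMessages(gptHistory, streamData).map(...)`, so the UI repaints on every chunk without mutating the stored history.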


By following these steps, you can implement text streaming with OpenAI integration in your web applications, enabling seamless communication between users and AI models.

In conclusion, text streaming offers a powerful way to enhance user experiences by providing real-time interactions with AI models. With OpenAI's language models and the right implementation, you can create dynamic and engaging applications that adapt to user inputs instantaneously.
About Author

Mohd Ubaish

Mohd Ubaish is a highly skilled Backend Developer with expertise in a wide range of technologies, including React, Node.js, Express.js, MongoDB, and JavaScript. With a deep understanding of both front-end and back-end development, he is currently dedicated to the development of TripCongo Web Discovery. He is committed to creating a user-friendly experience on the frontend while ensuring the seamless functionality of the backend. By staying up to date with the latest industry trends and advancements, he strives to provide innovative solutions and optimize the performance of the application.
