Harnessing OpenAI's GPT-4 API with Python, Poetry and Unittest


OpenAI's GPT-4, an advanced language model, is a powerful tool for natural language processing tasks, including text generation, translation, and more. In this tutorial, I'll explain how to interact with the GPT-4 API using Python, manage dependencies with Poetry, and write tests with the unittest library.

Setup and Dependencies

To start, I use Poetry, a Python dependency management tool that ensures the project works seamlessly across different environments. It uses a pyproject.toml file to manage dependencies.

Here's a sample pyproject.toml:

[tool.poetry]
name = "chatgpt-api"
version = "0.1.0"
description = "Python script to interact with the ChatGPT API"
authors = ["Your Name <your.email@example.com>"]

[tool.poetry.dependencies]
python = "^3.9"
openai = "^0.27.0"
toml = "^0.10.2"

[tool.poetry.dev-dependencies]
pytest = "^6.2.5"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

This file includes the project metadata and dependencies: Python 3.9, OpenAI's Python client library, and the toml library for parsing TOML files. pytest appears as a development dependency and can run unittest-style tests as well.

Interacting with the OpenAI API

The Python script main.py uses the OpenAI Python library to interact with the GPT-4 API. I use the config.toml file to store the OpenAI API key securely.
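For reference, the config.toml could look like the fragment below (the key shown is just a placeholder, not a real key):

```toml
# config.toml — keep this file out of version control (add it to .gitignore)
[openai]
api_key = "sk-your-key-here"
```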

The main.py script is structured into three main functions:


The first, load_config, loads the config.toml file and returns the OpenAI API key.

import toml

def load_config():
    # Load the configuration file and return the stored API key
    config = toml.load("config.toml")
    return config["openai"]["api_key"]


The second, call_chat_api, accepts a prompt, sends it to the GPT-4 API, and returns the generated text. It first loads the API key with load_config, then calls the API. Because GPT-4 is a chat model, the call uses the openai.ChatCompletion.create method rather than the older openai.Completion.create:

import openai

def call_chat_api(prompt):
    api_key = load_config()
    openai.api_key = api_key

    # Send the prompt as a single user message to the GPT-4 chat endpoint
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()


The third, main, gets user input, calls call_chat_api with it, and prints the generated response:

def main():
    prompt = input("Enter your question: ")
    response = call_chat_api(prompt)
    print(f"ChatGPT response: {response}")

if __name__ == "__main__":
    main()

Testing with Unittest

To ensure the code works correctly, I'll use Python's unittest framework to write tests. Here's a basic test for the load_config function:

import unittest
from unittest.mock import patch
from main import load_config

class TestLoadConfig(unittest.TestCase):
    @patch("main.toml.load")
    def test_load_config(self, mock_load):
        mock_load.return_value = {"openai": {"api_key": "test_key"}}
        result = load_config()
        self.assertEqual(result, "test_key")

if __name__ == '__main__':
    unittest.main()

The TestLoadConfig class inherits from unittest.TestCase. It has a single test method, test_load_config, which tests the load_config function from the main.py script. The @patch decorator is used to mock the toml.load function, so it returns a predefined result instead of reading an actual file. This allows us to test load_config in isolation.
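The same mocking pattern extends to call_chat_api. The sketch below is self-contained for illustration: it uses a hypothetical, simplified call_chat_api that takes the client object as a parameter (the article's version uses the imported openai module directly), with a MagicMock standing in for the real openai module, so no API key or network access is needed:

```python
import unittest
from unittest.mock import MagicMock

# Hypothetical, simplified variant of call_chat_api that takes the client
# as a parameter so the example runs without the openai package installed.
def call_chat_api(prompt, client):
    response = client.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

class TestCallChatApi(unittest.TestCase):
    def test_returns_stripped_text(self):
        # Build a fake client whose create() call returns a canned response
        fake_choice = MagicMock()
        fake_choice.message.content = "  Hello!  "
        fake_client = MagicMock()
        fake_client.ChatCompletion.create.return_value.choices = [fake_choice]

        result = call_chat_api("Say hello", fake_client)

        self.assertEqual(result, "Hello!")
        fake_client.ChatCompletion.create.assert_called_once()
```

Run it with python -m unittest from the project root; because the fake client records how it was called, the test also verifies that the API is invoked exactly once.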


Conclusion

In this tutorial, I've walked through creating a Python script that uses OpenAI's GPT-4 API for text generation, with Poetry managing dependencies and unittest verifying the code's behavior. With these concepts in hand, you can integrate OpenAI's powerful language model into your own Python applications.

Remember to always handle your API keys securely, especially when your code is hosted publicly. Never hard-code API keys directly into your scripts; use a secure method such as environment variables or a configuration file that is excluded from your version control system.
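As a sketch of the environment-variable approach (load_api_key and its error behavior are my own illustration, not part of the article's script):

```python
import os

def load_api_key():
    # Prefer an environment variable so the key never lives in the repo;
    # fail loudly rather than continuing without a key.
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before running the script"
        )
    return key
```

The article's load_config could fall back to a helper like this when config.toml is absent, which is convenient on CI systems where secrets are injected as environment variables.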

I hope this guide serves as a helpful starting point for your journey with OpenAI's GPT-4 and Python. Happy coding!


Support Theo van der Sluijs by becoming a sponsor. Any amount is appreciated!