Harnessing OpenAI's GPT-4 API with Python, Poetry and Unittest
OpenAI's GPT-4, an advanced language model, is a powerful tool for natural language processing tasks, including text generation, translation, and more. In this tutorial, I'll explain how to interact with the GPT-4 API using Python, manage dependencies with Poetry, and write tests with the unittest library.
Setup and Dependencies
To start, I use Poetry, a Python dependency management tool that ensures the project works seamlessly across different environments. It uses a pyproject.toml file to manage dependencies. Here's a sample pyproject.toml:
[tool.poetry]
name = "chatgpt-api"
version = "0.1.0"
description = "Python script to interact with the ChatGPT API"
authors = ["Your Name <your.email@example.com>"]

[tool.poetry.dependencies]
python = "^3.9"
openai = "^0.27.0"
toml = "^0.10.2"

[tool.poetry.dev-dependencies]
pytest = "^6.2.5"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
This file includes the project metadata and dependencies: Python 3.9 or newer, OpenAI's Python client library, and the toml library for parsing TOML files such as the configuration file used below. pytest is listed as a development dependency for running tests.
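With this file in place, running poetry install (or poetry install --no-root, since this project is a simple script rather than an installable package) installs the dependencies into an isolated virtual environment, and poetry run python main.py executes the script inside that environment.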
Interacting with the OpenAI API
The Python script main.py uses the OpenAI Python library to interact with the GPT-4 API. I use a config.toml file to keep the OpenAI API key out of the source code.
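Given how load_config reads it below, config.toml would look something like this (with your real key in place of the placeholder):

[openai]
api_key = "YOUR_OPENAI_API_KEY"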
The main.py script is structured into three main functions:
load_config()
This function loads the config.toml file and returns the OpenAI API key.
import toml

def load_config():
    # Load the configuration file and return the API key
    config = toml.load("config.toml")
    return config["openai"]["api_key"]
call_chat_api(prompt)
This function accepts a prompt, sends it to the GPT-4 API, and returns the generated text. It first loads the API key with the load_config function, then calls the API through the openai.ChatCompletion.create method, the chat completions endpoint that GPT-4 models use in the 0.27.x client.
import openai

def call_chat_api(prompt):
    api_key = load_config()
    openai.api_key = api_key
    # GPT-4 is served through the chat completions endpoint,
    # so the prompt is sent as a user message.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=60,
    )
    return response.choices[0].message.content.strip()
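Because the call goes over the network, it can fail for reasons outside your control, such as rate limits or connection problems. As a rough sketch (the wrapper name call_chat_api_safely is mine, and it assumes the 0.27.x client's openai.error.OpenAIError base exception), you might guard the call like this:

from openai.error import OpenAIError

def call_chat_api_safely(prompt):
    # Hypothetical wrapper: return an error message instead of raising
    try:
        return call_chat_api(prompt)
    except OpenAIError as exc:
        return f"API call failed: {exc}"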
main()
This is the main function that gets user input, calls the call_chat_api function with the input, and prints the generated response.
def main():
    prompt = input("Enter your question: ")
    response = call_chat_api(prompt)
    print(f"ChatGPT response: {response}")

if __name__ == "__main__":
    main()
Testing with Unittest
To ensure the code works correctly, I'll use Python's unittest framework to write tests. Here's a basic test for the load_config function:
import unittest
from unittest.mock import patch

from main import load_config


class TestLoadConfig(unittest.TestCase):
    @patch('toml.load')
    def test_load_config(self, mock_load):
        # Make toml.load return a predefined config instead of reading a file
        mock_load.return_value = {"openai": {"api_key": "test_key"}}
        result = load_config()
        self.assertEqual(result, "test_key")


if __name__ == '__main__':
    unittest.main()
The TestLoadConfig class inherits from unittest.TestCase. It has a single test method, test_load_config, which tests the load_config function from the main.py script. The @patch decorator is used to mock the toml.load function, so it returns a predefined result instead of reading an actual file. This allows us to test load_config in isolation.
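The call_chat_api function can be tested the same way by mocking the OpenAI client call itself. The following is a sketch that assumes the ChatCompletion-based implementation shown above; the fake response is built with MagicMock so no real API request is made:

import unittest
from unittest.mock import MagicMock, patch

from main import call_chat_api


class TestCallChatApi(unittest.TestCase):
    @patch("main.load_config", return_value="test_key")
    @patch("openai.ChatCompletion.create")
    def test_call_chat_api(self, mock_create, mock_load_config):
        # Build a fake response shaped like the client's return value
        fake_message = MagicMock()
        fake_message.content = "  Hello there!  "
        fake_choice = MagicMock()
        fake_choice.message = fake_message
        mock_create.return_value = MagicMock(choices=[fake_choice])

        result = call_chat_api("Say hello")

        self.assertEqual(result, "Hello there!")
        mock_create.assert_called_once()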
Conclusion
In this tutorial, I've walked through creating a Python script that uses OpenAI's GPT-4 API for text generation, with Poetry for managing dependencies and unittest for verifying the code. With these concepts in hand, you can integrate OpenAI's language model into your own Python applications.
Remember to handle your API keys securely, especially when your code is hosted publicly. Never hard-code API keys directly into your scripts; use a secure method such as environment variables or configuration files that are excluded from your version control system.
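For example, a variant of load_config could prefer an environment variable and fall back to the configuration file. This is just a sketch; the helper name load_api_key is mine, and OPENAI_API_KEY is simply the variable name conventionally used with the OpenAI client:

import os
import toml

def load_api_key():
    # Hypothetical helper: prefer an environment variable over config.toml
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return key
    config = toml.load("config.toml")
    return config["openai"]["api_key"]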
I hope this guide serves as a helpful starting point for your journey with OpenAI's GPT-4 and Python. Happy coding!