Playing With GPT-3 Using The OpenAI API And Python

Play around with the GPT-3 language model using the OpenAI API and corresponding Python SDK.

Introduction

In this article, we're going to be playing around with GPT-3. I'll show you how to access it, provide some examples of what you can do with it, and suggest what kind of applications you could potentially build with it!

Getting started

Before you can use GPT-3, you must first create an account with OpenAI. Once you have set up your account, you need to add billing details in order to access the API. OpenAI will charge you on a per-request basis; you can view the API pricing here.

Once you have added your billing details, you will be able to retrieve your API key. You will need this to access the API. It is important that you keep this a secret, as anyone who has access to this key will be able to make requests on your behalf, charging you money.

IMPORTANT

Since the API costs money to use, make sure you take this into consideration before releasing your app. If your app makes an OpenAI API request every time someone loads it, and you are charged for each of those requests, you could incur a lot of costs very quickly.

I would recommend that you only allow authenticated users to use your application, and I would definitely recommend adding some sort of API throttling. If you are building a SaaS application, you could also make it so that the client is charged for every request they make, ensuring that they take on the associated costs rather than you. This can be automated through Stripe usage records, which you can find out more about here.
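
As a rough illustration of what I mean by throttling, here is a minimal sketch of a per-user rate limit. The user_id, the limits, and the in-memory store are all hypothetical placeholders; in a real application you would tie this to your actual authentication layer and a shared store such as Redis.

import time
from collections import defaultdict, deque

REQUESTS_PER_WINDOW = 5   # hypothetical limit: 5 requests...
WINDOW_SECONDS = 60       # ...per rolling 60-second window

_recent_requests = defaultdict(deque)  # user_id -> timestamps of recent requests

def allow_request(user_id):
    """Return True if this user is still under their rate limit."""
    now = time.time()
    window = _recent_requests[user_id]
    # Discard timestamps that have fallen outside the rolling window
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= REQUESTS_PER_WINDOW:
        return False
    window.append(now)
    return True

Only call the OpenAI API when allow_request returns True, and return an error (or queue the work) otherwise.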

Using the API

Setting up our environment

Now that you've got your API key, let's have some fun! To make our lives easier, let's use the OpenAI SDK for Python. OpenAI also has an SDK available for Node.js; however, for this demo, we are going to be using Python. You can install the Python OpenAI SDK using the command pip3 install openai.

Next, create a new .env file. This is where we're going to store our API key locally, which you can do by adding the following line to the file

OPENAI_API_KEY=YOUR_API_KEY

(where YOUR_API_KEY is replaced with your OpenAI API key).

It is important that you keep this file out of any public GitHub repositories, which you can do by creating a .gitignore file and adding .env to it.
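
For example, the .gitignore file just needs to contain a line like this

.env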

Now, in order to load the .env file, we're going to need the python-dotenv dependency, which you can install using pip3 install python-dotenv.

Now create a new Python file and add the following lines of code

import os
import openai
from dotenv import load_dotenv

# Load environment variables from the .env file
load_dotenv()
# Make the API key available to the OpenAI SDK
openai.api_key = os.getenv("OPENAI_API_KEY")

def main():
    pass

if __name__ == "__main__":
    main()

This is a nice starter which will automatically load your API key from the .env file into the OpenAI SDK so that it is ready to be used. In general, it is good practice to use

if __name__ == "__main__":
    # Your code here

so that your code only runs when the file is executed directly, rather than every time it is imported as a module.

Generating text

To generate text using GPT-3, add the following code to the main function

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Today I went to the movies and...",
    temperature=1,
    max_tokens=60,
)

print(response)

  • The model parameter specifies which model will generate the text. OpenAI provides several models to choose from, which you can view here. You can even create your own fine-tuned models; however, that is outside the scope of this tutorial.
  • The prompt parameter specifies the input prompt that you feed to the model, which the model will then autocomplete a response to. This can be whatever you wish.
  • The temperature parameter controls how much randomness the model applies to its response. A higher value makes the model more likely to take a risk and deviate from the most likely completion, which can produce more creative output. Setting this parameter to 1 means the model may return a result it is not as sure of, whereas a value of 0 means the model will return the result it is almost certain of.
  • The max_tokens parameter specifies the maximum number of tokens the model is allowed to generate as part of its output. You are charged per token generated, so make sure to be careful with this parameter.

If you run the code, you should get an API response containing the text that the model generated from your prompt, for example

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\nI saw a great film!"
    }
  ],
  "created": 1658030956,
  "id": "cmpl-5UpsiIqm3IyQmFy1op27TOZ6Brvc6",
  "model": "text-davinci-002",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 16,
    "prompt_tokens": 8,
    "total_tokens": 24
  }
}

Pretty cool! In addition, you can tell the model what you want it to do, and it will conform to it. For example, let's see if we can get the model to format a date for us with the following prompt

"Format the following time in the form of DD/MM/YYYY

May 4th 1989"
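
In code, this just means swapping out the prompt in the earlier call; everything else stays the same. The snippet below is a sketch of one way to do it, and dropping the temperature to 0 is my own choice for a deterministic task like this rather than something required.

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Format the following time in the form of DD/MM/YYYY\n\nMay 4th 1989",
    temperature=0,  # deterministic task, so no creativity needed (my assumption)
    max_tokens=60,
)
print(response)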

Response

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\n04/05/1989"
    }
  ],
  "created": 1658031618,
  "id": "cmpl-5Uq3OlXZA57KTkn2MabHh8l8FdbnS",
  "model": "text-davinci-002",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 8,
    "prompt_tokens": 20,
    "total_tokens": 28
  }
}

How awesome is that? Now you can take that string response from the model and process it however you wish for the rest of your application.
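
For example, based on the response shape shown above, you could pull out just the generated text (and the token usage, which is handy for keeping an eye on costs) along these lines

generated_text = response["choices"][0]["text"].strip()
total_tokens = response["usage"]["total_tokens"]

print(generated_text)  # e.g. 04/05/1989
print(total_tokens)    # e.g. 28 - useful for monitoring costs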

Of course, GPT-3 is far more capable than just date formatting; this is just one example. I encourage you to play around with the model and see what you can do with it! Some examples of other tasks GPT-3 is capable of include:

  • Translation
  • Summarization
  • Code completion
  • Recipe creation

If you can think of it, GPT-3 can probably do it.
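
As one illustration, a translation prompt goes through exactly the same Completion.create call; only the prompt text changes. This is just a sketch I've put together, not an official recipe from OpenAI.

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Translate the following English text to French:\n\nHello, how are you today?",
    temperature=0,
    max_tokens=60,
)
print(response["choices"][0]["text"].strip())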

Conclusion

So now you know how you can take advantage of one of the most advanced language models to date for all of your personal or business needs.

There are a few things you need to be wary of regarding costs; however, it is definitely worth it considering the power you gain access to, not to mention the time and money you save by not having to build, train, test, and deploy your own machine learning model, which would be unlikely to achieve results even close to those of GPT-3.

If you need inspiration for projects to build with GPT-3, check out the list of examples they have provided here. In addition, if you want more information about using GPT-3 in your application, check out the docs! Finally, make sure that you are aware of and follow the OpenAI usage guidelines.
