Table of contents
- Introduction
- Prerequisites
- FastAPI
- OpenAI
- 1. Setting Up Your FastAPI Project
- 2. Getting Your OpenAI API Key
- 3. Integrating OpenAI into Your FastAPI App
- 4. Create a .env file for your API key and configure it
- 5. Define a function to train the AI
- 6. Creating an API Endpoint for OpenAI
- 7. Adding Error Handling
- 8. Testing Your FastAPI OpenAI Endpoint
- 9. Deploying Your FastAPI App
- Additional Resources
Introduction
In this tutorial, we will walk you through the process of integrating OpenAI into a FastAPI application. By the end, you'll have a FastAPI app that can generate AI-powered text based on user input.
Prerequisites
Before we begin, ensure you have the following:
- FastAPI (pip install fastapi)
- Uvicorn (pip install "uvicorn[standard]")
- An OpenAI account (sign up at openai.com)
- An OpenAI API key (create one on your OpenAI profile)
FastAPI
FastAPI is a modern, fast, and highly efficient web framework for building APIs with Python. It is designed to be easy to use and productive for developers while also providing high performance. FastAPI is known for its automatic generation of interactive API documentation, based on standardized Python type hints, which makes it easy to understand and use. It also supports asynchronous programming, making it suitable for handling real-time and high-concurrency applications. FastAPI is often chosen by developers for building web services, microservices, and RESTful APIs.
OpenAI
OpenAI is an artificial intelligence research organization and company that focuses on developing and promoting advanced AI technologies. OpenAI has created a range of cutting-edge AI models, including GPT-3 (Generative Pre-trained Transformer 3), which is a powerful language model capable of understanding and generating human-like text. OpenAI's mission is to ensure that artificial general intelligence (AGI), highly autonomous systems that outperform humans in the most economically valuable work, benefits all of humanity. OpenAI conducts research, develops AI models and applications, and partners with organizations to advance AI technology while emphasizing ethical considerations and responsible AI development.
1. Setting Up Your FastAPI Project
Start by creating a directory for your project. From your terminal, create a folder and navigate to it:
# Replace <folder_name> with the name of the folder
mkdir <folder_name>
cd <folder_name>
For best practices, create a virtual environment and activate it:
# Replace <environment_name> with the name of your virtual environment
python -m venv <environment_name>
source <environment_name>/bin/activate # On Windows: <environment_name>\Scripts\activate
Install FastAPI and Uvicorn:
pip install fastapi "uvicorn[standard]"
# To install the full recommended package, run: pip install "fastapi[all]"
In your working directory, create a file main.py. Now, let's create a FastAPI app:
from fastapi import FastAPI
app = FastAPI()
You have successfully set up the basic structure of your FastAPI project.
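If you'd like to confirm the setup before moving on, you can add a simple root route; the route below is just an illustrative addition and is not required by the later steps. With the server running, the automatically generated interactive docs are available at http://127.0.0.1:8000/docs.
# Optional: a simple root route to verify the app responds.
@app.get("/")
async def read_root():
    return {"status": "FastAPI is running"}

# Start the server with: uvicorn main:app --reload
# Then visit http://127.0.0.1:8000/docs for the interactive documentation.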
2. Getting Your OpenAI API Key
Sign up for an OpenAI account at openai.com if you haven't already.
Once you're logged in, navigate to your profile tab on your dashboard and create an API key.
Once created, copy the API key; you'll need it in the next steps.
3. Integrating OpenAI into Your FastAPI App
To integrate OpenAI into your FastAPI app, you'll need to install the OpenAI Python library. In your working directory, run:
pip install openai
You're now ready to use OpenAI in your FastAPI app.
4. Create a .env file for your API key and configure it
For best practices and security purposes, your key should be saved in a .env file. To do that, create a file named .env in your root directory and insert your API key:
# Replace 'YOUR_API_KEY' with your actual OpenAI API key
OPENAI_API_KEY = "YOUR_API_KEY"
To load your API key into your code without exposing it, you have to configure some settings. In your main.py file, add the following code:
# Requires the pydantic-settings package: pip install pydantic-settings
import openai
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    openai_api_key: str

    class Config:
        env_file = ".env"

settings = Settings()
openai.api_key = settings.openai_api_key
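If the key is missing, pydantic raises a validation error as soon as Settings() is called. As a quick optional check (a throwaway sketch, not part of the final app), you can confirm the key loaded without printing the whole secret:
# Optional sanity check (remove afterwards): print only the first few
# characters of the key to confirm it was read from .env.
print("OpenAI key loaded, starts with:", settings.openai_api_key[:4] + "...")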
5. Define a function to train the AI
After configuring your settings, it is time to prompt the AI. A prompt is a string of text that tells the AI what to do. For the best results, a detailed prompt is key. You can also include examples of what you want the response to look like.
def generate_prompt(genre):
    return """List 2 songs associated with the genre.
Genre: Pop
Songs: Billie Jean by Michael Jackson, Bad Guy by Billie Eilish
Genre: Rock
Songs: Bohemian Rhapsody by Queen, Stairway to Heaven by Led Zeppelin
Genre: {}
Songs:
""".format(
        genre
    )
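You can quickly see what the function produces by calling it with any genre; the genre value below is just an example:
# Build the prompt for an example genre and inspect it.
print(generate_prompt("Jazz"))
# The printed prompt ends with "Genre: Jazz" followed by an empty "Songs:"
# line, which is what the model is asked to complete.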
6. Creating an API Endpoint for OpenAI
Let's define an API endpoint that takes user input and generates AI-powered text using OpenAI. Add the following code to main.py:
import openai
from fastapi import FastAPI, Request

# Create a FastAPI app instance
app = FastAPI()

# Define a POST route for generating text
@app.post("/generate-text/")
async def generate_text(request: Request, genre: str):
    # Use the OpenAI API to generate text
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=generate_prompt(genre),
        temperature=0.6,
        max_tokens=50,
        n=1,
    )
    # Extract the generated text from the OpenAI response
    songs: str = response.choices[0].text
    # Return the generated text as a JSON response
    return {"Songs": songs}
This endpoint accepts a POST request with a genre parameter and uses OpenAI to generate 2 songs based on the genre provided. The key components of this endpoint include:
- response: Stores the response returned by the OpenAI API; it contains the text completion(s) produced by the model.
- model="text-davinci-003": Specifies which language model to use, in this case "text-davinci-003". OpenAI provides various models with different capabilities, and this one is chosen for text generation. You can use any model of your choice.
- prompt=generate_prompt(genre): The prompt parameter calls the generate_prompt() function, passing the genre as an argument to build an appropriate prompt for the text generation task.
- temperature=0.6: Temperature controls the randomness and creativity of the generated text. A higher value (e.g., 0.6) makes the output more diverse and creative, while a lower value (e.g., 0.2) makes it more focused and deterministic. A moderate value of 0.6 strikes a balance between creativity and coherence.
- max_tokens=50: Limits the length of the generated text by capping the number of tokens (chunks of text roughly the size of a short word) in the response. Here it is set to 50, so the completion should not exceed that length.
- n=1: The n parameter determines how many completions to generate. Here it is set to 1, so a single text completion is returned.
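The completion comes back as a single string, so if you prefer the endpoint to return the songs as a JSON list you could split the text before returning it. Here is a minimal sketch, assuming the model mirrors the comma-separated "Song by Artist" format used in the few-shot examples; the parse_songs helper is an illustration, not part of the tutorial's code above.
# Sketch: turn the comma-separated completion into a list of songs.
# Assumes the model follows the "Song by Artist, Song by Artist" format
# from the few-shot examples; real output may need more robust parsing.
def parse_songs(completion_text: str) -> list[str]:
    return [song.strip() for song in completion_text.split(",") if song.strip()]

# Inside the endpoint you could then return:
# return {"Songs": parse_songs(response.choices[0].text)}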
7. Adding Error Handling
It's essential to add error handling for cases where OpenAI API calls fail or inputs are invalid. Update your endpoint to handle errors gracefully:
import openai
from pydantic_settings import BaseSettings
from fastapi import FastAPI, Request, HTTPException

# Create a FastAPI app instance
app = FastAPI()

# Define a POST route for generating text
@app.post("/generate-text/")
async def generate_text(request: Request, genre: str):
    try:
        # Use the OpenAI API to generate text
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=generate_prompt(genre),
            temperature=0.6,
            max_tokens=50,
            n=1,
        )
        # Extract the generated text from the OpenAI response
        songs: str = response.choices[0].text
        # Return the generated text as a JSON response
        return {"Songs": songs}
    except Exception as e:
        # Handle any exceptions that may occur during text generation
        raise HTTPException(status_code=500, detail=f"OpenAI error: {str(e)}")
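A single except Exception handler works, but it reports every failure as a 500. If you want to distinguish upstream OpenAI failures from bugs in your own code, the pre-1.0 openai SDK used in this tutorial also exposes its own exception classes. A possible refinement, sketched under that assumption; the /generate-text-v2/ route name is only used here to keep the sketch separate from the tutorial's endpoint.
# Sketch: distinguish OpenAI failures from other errors.
# Assumes the pre-1.0 openai SDK, whose errors derive from openai.error.OpenAIError.
@app.post("/generate-text-v2/")
async def generate_text_v2(request: Request, genre: str):
    try:
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=generate_prompt(genre),
            temperature=0.6,
            max_tokens=50,
            n=1,
        )
        return {"Songs": response.choices[0].text}
    except openai.error.OpenAIError as e:
        # Upstream API problem (rate limits, invalid requests, outages, ...)
        raise HTTPException(status_code=502, detail=f"OpenAI error: {str(e)}")
    except Exception as e:
        # Anything else is an error in our own code
        raise HTTPException(status_code=500, detail=str(e))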
Congratulations! You've successfully integrated OpenAI into a FastAPI app. You can now generate text using AI on your application.
8. Testing Your FastAPI OpenAI Endpoint
To test your FastAPI OpenAI endpoint, run your FastAPI app with Uvicorn:
uvicorn main:app --reload
Now, make a POST request to http://127.0.0.1:8000/generate-text/ with a genre query parameter set to the genre you want AI-generated songs for.
You should receive a response with the AI-generated text.
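For example, you can exercise the endpoint from a short Python script using the requests library (pip install requests); the genre value below is only an example:
import requests

# Send the genre as a query parameter, matching the endpoint's signature.
resp = requests.post(
    "http://127.0.0.1:8000/generate-text/",
    params={"genre": "Jazz"},
)
print(resp.status_code)   # 200 on success
print(resp.json())        # e.g. {"Songs": "..."}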
9. Deploying Your FastAPI App
To deploy your FastAPI app with OpenAI integration, you can use platforms like Heroku, AWS, or DigitalOcean. Ensure you configure environment variables for your API key in the production environment.
Additional Resources
Listed below are links to the GitHub repository containing a working FastAPI and OpenAI app I wrote to check symptoms based on user input, FastAPI's documentation, and the documentation for OpenAI's API for further research.
If you encounter issues or have questions, don't hesitate to reach out for help or consult the documentation provided in the references section.