
OpenAI (Text Completion)

LiteLLM supports OpenAI text completion models.

Required API Keys

import os 
os.environ["OPENAI_API_KEY"] = "your-api-key"

Usage

import os 
from litellm import completion

os.environ["OPENAI_API_KEY"] = "your-api-key"

# openai call
response = completion(
    model="gpt-3.5-turbo-instruct",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
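`gpt-3.5-turbo-instruct` is a text completion model, so LiteLLM translates the chat-style `messages` list into a plain prompt before calling the completions endpoint. A rough sketch of that kind of flattening (a hypothetical helper for illustration, not LiteLLM's actual implementation):

```python
def messages_to_prompt(messages):
    """Flatten a chat-style messages list into a single prompt string
    for a text completion model (hypothetical helper, for illustration)."""
    parts = [f"{m['role']}: {m['content']}" for m in messages]
    # Trailing cue so the model responds in the assistant role
    return "\n".join(parts) + "\nassistant:"

prompt = messages_to_prompt([{"content": "Hello, how are you?", "role": "user"}])
```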

Usage - LiteLLM Proxy Server

Here's how to call OpenAI models with the LiteLLM Proxy Server.

1. Save key in your environment

export OPENAI_API_KEY=""

2. Start the proxy

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo                          # The `openai/` prefix will call openai.chat.completions.create
      api_key: os.environ/OPENAI_API_KEY
  - model_name: gpt-3.5-turbo-instruct
    litellm_params:
      model: text-completion-openai/gpt-3.5-turbo-instruct # The `text-completion-openai/` prefix will call openai.completions.create
      api_key: os.environ/OPENAI_API_KEY
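Values written as `os.environ/VAR_NAME` are resolved from the environment at runtime rather than stored in the config file. A minimal sketch of how such a lookup could work (hypothetical code, not the proxy's actual implementation):

```python
import os

def resolve_env_refs(params):
    """Resolve config values using the `os.environ/VAR` convention
    (hypothetical sketch of the lookup, for illustration)."""
    resolved = {}
    for key, value in params.items():
        if isinstance(value, str) and value.startswith("os.environ/"):
            # Everything after the first "/" is the environment variable name
            resolved[key] = os.environ.get(value.split("/", 1)[1])
        else:
            resolved[key] = value
    return resolved

litellm_params = {
    "model": "text-completion-openai/gpt-3.5-turbo-instruct",
    "api_key": "os.environ/OPENAI_API_KEY",
}
resolved = resolve_env_refs(litellm_params)
```

Save the YAML above as a config file and start the proxy with `litellm --config /path/to/config.yaml`.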

3. Test it

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
      "model": "gpt-3.5-turbo-instruct",
      "messages": [
        {
          "role": "user",
          "content": "what llm are you"
        }
      ]
    }
'
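The same request can also be issued from Python. A sketch using only the standard library, assuming the proxy is listening on `0.0.0.0:4000` as above (the send itself is left commented out so it only runs once the proxy is up):

```python
import json
import urllib.request

# Same payload as the curl example above
payload = {
    "model": "gpt-3.5-turbo-instruct",
    "messages": [{"role": "user", "content": "what llm are you"}],
}
req = urllib.request.Request(
    "http://0.0.0.0:4000/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With the proxy running, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```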

OpenAI Text Completion Models / Instruct Models

| Model Name | Function Call |
|------------|---------------|
| gpt-3.5-turbo-instruct | `response = completion(model="gpt-3.5-turbo-instruct", messages=messages)` |
| gpt-3.5-turbo-instruct-0914 | `response = completion(model="gpt-3.5-turbo-instruct-0914", messages=messages)` |
| text-davinci-003 | `response = completion(model="text-davinci-003", messages=messages)` |
| ada-001 | `response = completion(model="ada-001", messages=messages)` |
| curie-001 | `response = completion(model="curie-001", messages=messages)` |
| babbage-001 | `response = completion(model="babbage-001", messages=messages)` |
| babbage-002 | `response = completion(model="babbage-002", messages=messages)` |
| davinci-002 | `response = completion(model="davinci-002", messages=messages)` |