ChatCompletion

yaml
type: "io.kestra.plugin.openai.ChatCompletion"

Given a prompt, get a response from an LLM using OpenAI's Chat Completions API.

For more information, refer to the Chat Completions API docs.

Examples

Based on a prompt input, generate a completion response and pass it to a downstream task.

yaml
id: openai
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: What is data orchestration?

tasks:
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    prompt: "{{ inputs.prompt }}"

  - id: response
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.completion.choices[0].message.content }}"

Based on a prompt input, ask OpenAI to call a function that determines whether a customer review needs an immediate reply or can wait until later, and then suggests response text.

yaml
id: openai
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: I love your product and would purchase it again!

tasks:
  - id: prioritize_response
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    messages:
      - role: user
        content: "{{ inputs.prompt }}"
    functions:
      - name: respond_to_review
        description: Given the customer product review provided as input, determines how urgently a reply is required and then provides suggested response text.
        parameters:
          - name: response_urgency
            type: string
            description: >-
              How urgently this customer review needs a reply. Bad reviews must be
              addressed immediately before anyone sees them. Good reviews can wait
              until later.
            required: true
            enumValues:
              - reply_immediately
              - reply_later
          - name: response_text
            type: string
            description: The text to post online in response to this review.
            required: true

  - id: response_urgency
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_urgency }}"

  - id: response_text
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_text }}"

Properties

apiKey

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

The OpenAI API key.

model

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

ID of the model to use, e.g. 'gpt-4'.

See the OpenAI models documentation page for more details.

clientTimeout

  • Type: integer
  • Dynamic:
  • Required:
  • Default: 10

The maximum number of seconds to wait for a response.

frequencyPenalty

  • Type:
    • number
    • string
  • Dynamic: ✔️
  • Required:

Number between -2.0 and 2.0. Positive values penalize new tokens based on their frequency in the text so far, decreasing the model's likelihood of repeating the same line verbatim.

functionCall

  • Type: string
  • Dynamic: ✔️
  • Required:

The name of the function OpenAI should generate a call for.

Enter a specific function name, or 'auto' to let the model decide. The default is 'auto'.
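
For example, to force the model to call a specific function rather than letting it decide, you could set functionCall to that function's name. A minimal sketch, reusing a shortened version of the respond_to_review function from the example above (the apiKey value is a placeholder):

yaml
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    functionCall: respond_to_review
    messages:
      - role: user
        content: "{{ inputs.prompt }}"
    functions:
      - name: respond_to_review
        description: Suggests response text for a customer product review.
        parameters:
          - name: response_text
            type: string
            description: The text to post online in response to this review.
            required: true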

functions

  • Type: array
  • SubType: ChatCompletion-PluginChatFunction
  • Dynamic: ✔️
  • Required:

The function call(s) the API can use when generating completions.

logitBias

  • Type: object
  • SubType: integer
  • Dynamic: ✔️
  • Required:

Modify the likelihood of specified tokens appearing in the completion. Defaults to null.
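
For example, to effectively ban a single token, map its token ID to a large negative bias (the token ID below is purely illustrative; real IDs depend on the model's tokenizer, and bias values range from -100 to 100):

yaml
    logitBias:
      "50256": -100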

maxTokens

  • Type:
    • integer
    • string
  • Dynamic: ✔️
  • Required:

The maximum number of tokens to generate in the chat completion.

messages

  • Type: array
  • SubType: ChatMessage
  • Dynamic: ✔️
  • Required:

A list of messages comprising the conversation so far.

Required if prompt is not set.
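
Each message is a ChatMessage with a role and content, for example (a minimal sketch of the property on its own):

yaml
    messages:
      - role: system
        content: You are a concise assistant that answers in one sentence.
      - role: user
        content: "{{ inputs.prompt }}"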

n

  • Type:
    • integer
    • string
  • Dynamic: ✔️
  • Required:

How many chat completion choices to generate for each input message.

presencePenalty

  • Type:
    • number
    • string
  • Dynamic: ✔️
  • Required:

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they already appear in the text so far, increasing the model's likelihood of talking about new topics.

prompt

  • Type: string
  • Dynamic: ✔️
  • Required:

The prompt(s) to generate completions for. By default, this prompt will be sent as a user role.

If not provided, make sure to set the messages property.

stop

  • Type: array
  • SubType: string
  • Dynamic: ✔️
  • Required:

Up to 4 sequences where the API will stop generating further tokens. Defaults to null.
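
For example, to stop generation at a custom marker or at a blank line (both sequences are illustrative):

yaml
    stop:
      - "END"
      - "\n\n"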

temperature

  • Type:
    • number
    • string
  • Dynamic: ✔️
  • Required:

The sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic.

topP

  • Type:
    • number
    • string
  • Dynamic: ✔️
  • Required:

An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the topP probability mass.

user

  • Type: string
  • Dynamic: ✔️
  • Required:

A unique identifier representing your end-user.

Outputs

choices

  • Type: array
  • SubType: ChatCompletionChoice
  • Required:

A list of all generated completions.

id

  • Type: string
  • Required:

Unique ID assigned to this Chat Completion.

model

  • Type: string
  • Required:

The GPT model used.

object

  • Type: string
  • Required:

The type of object returned, which should be "chat.completion".

usage

  • Type: Usage
  • Required:

The API usage for this request.
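
Like choices, the usage output can be referenced from downstream tasks, for example to log the total token count (assuming the ChatCompletion task is named completion, as in the first example):

yaml
  - id: token_usage
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.completion.usage.total_tokens }}"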

Definitions

com.theokanning.openai.completion.chat.ChatFunctionCall

Properties

arguments
name
  • Type: string
  • Dynamic:
  • Required:

io.kestra.plugin.openai.ChatCompletion-PluginChatFunctionParameter

Properties

description
  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

A description of the function parameter.

Provide as many details as possible to ensure the model returns an accurate parameter.

name
  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

The name of the function parameter.

enumValues
  • Type: array
  • SubType: string
  • Dynamic: ✔️
  • Required:

A list of values that the model must choose from when setting this parameter.

Optional, but useful for classification problems.

required
  • Type:
    • boolean
    • string
  • Dynamic: ✔️
  • Required:

Whether this parameter is required.

com.fasterxml.jackson.databind.JsonNode

io.kestra.plugin.openai.ChatCompletion-PluginChatFunction

Properties

description
  • Type: string
  • Dynamic: ✔️
  • Required:

A description of what the function does.

name
  • Type: string
  • Dynamic: ✔️
  • Required:

The name of the function.

parameters

The function's parameters.

com.theokanning.openai.completion.chat.ChatCompletionChoice

Properties

finish_reason
  • Type: string
  • Dynamic:
  • Required:
index
  • Type: integer
  • Dynamic:
  • Required:
message

com.theokanning.openai.Usage

Properties

completion_tokens
  • Type: integer
  • Dynamic:
  • Required:
prompt_tokens
  • Type: integer
  • Dynamic:
  • Required:
total_tokens
  • Type: integer
  • Dynamic:
  • Required:

com.theokanning.openai.completion.chat.ChatMessage

Properties

content
  • Type: string
  • Dynamic:
  • Required:
function_call
name
  • Type: string
  • Dynamic:
  • Required:
role
  • Type: string
  • Dynamic:
  • Required:
