
OpenAI

1. Overview

The OpenAI node for GoInsight integrates the full power of OpenAI's artificial intelligence platform into your automation workflows. It provides access to state-of-the-art models for natural language processing, image generation, audio processing, and the advanced Assistants API.

By using this node, you can build intelligent applications capable of:

  • Conversational AI: Engage in natural conversations using models like GPT-4o and GPT-4o-mini via Chat Completions or the Assistants API.
  • Content Generation: Generate high-quality text, code, and summaries, or create vivid images using DALL-E 3.
  • Audio Processing: Convert text to lifelike speech (TTS) and transcribe or translate audio recordings into text (Whisper).
  • File & Data Management: Upload, list, and delete files for use with fine-tuning or assistants.
  • Safety & Analysis: Analyze images for content understanding and classify text to detect policy violations using the Moderation API.

2. Prerequisites

Before using the OpenAI node, please ensure you have the following:

  • OpenAI Account: You must have a valid account with OpenAI.
  • API Key: An active API key is required to authenticate requests. You can generate this in your OpenAI user dashboard.
  • Credit Balance: Ensure your OpenAI account has sufficient credits or a linked payment method; API usage is billed by tokens or generation count.

3. Credentials

For detailed guidelines on how to acquire and configure your credentials, please refer to our official documentation: Credential Configuration Guide.

4. Supported Operations

Summary

This node supports operations across various OpenAI resources, including Assistants, Chat Models, Images, Audio, Files, and Moderation.

| Resource | Operation | Description |
| --- | --- | --- |
| Assistant | Create an Assistant | Creates a new OpenAI assistant with model, instructions, optional tools, and metadata settings. |
| Assistant | List Assistants | Retrieves a paginated list of assistants from your OpenAI account. |
| Assistant | Update an Assistant | Updates an existing OpenAI assistant using partial-update behavior. |
| Assistant | Delete an Assistant | Deletes an existing OpenAI assistant by ID and returns the full upstream deletion response. |
| Assistant | Message an Assistant | Sends a user message to an OpenAI assistant, executes a run, and returns the assistant reply. |
| Chat / Model | Message a Model | Sends chat messages to an OpenAI model and returns the generated response with token usage details. |
| Chat / Model | List Models | Lists all available OpenAI models accessible with your API key. |
| Image | Generate an Image | Generates images from text prompts using the OpenAI DALL-E API. |
| Image | Analyze Image | Analyzes an image with OpenAI vision models and returns the full upstream response payload. |
| Audio | Generate Audio | Generates speech audio from text with OpenAI text-to-speech models. |
| Audio | Transcribe a Recording | Transcribes recorded audio into text using the OpenAI Whisper transcription API. |
| Audio | Translate a Recording | Translates recorded audio into English text using the OpenAI Whisper translation API. |
| File | Upload a File | Uploads a file to OpenAI from either Base64 content or a public URL. |
| File | List Files | Lists files uploaded to OpenAI with optional purpose filtering, sorting, and cursor-based pagination. |
| File | Delete a File | Permanently deletes an uploaded file from OpenAI storage. |
| Moderation | Classify Text for Violations | Classifies text for policy violations with the OpenAI Moderations API. |

Operation Details

Create an Assistant

Creates a new OpenAI assistant with model, instructions, optional tools, and metadata settings.

When to use:

  • You need a reusable assistant configuration for repeated workflows.
  • You want to predefine behavior and tools before starting conversation threads.
  • You need assistant metadata for ownership, versioning, or routing.

Key points:

  • Use Tools as a native array and Metadata as a native object (not JSON-escaped strings).
  • StatusCode is -1 for local validation errors, 200 after upstream response, and 500 for network/system failures.
  • OriginalStatusCode preserves exact upstream HTTP status for diagnostics.
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Options:

  • Model: Assistant model to use. Example: 'gpt-4o-mini' for lower cost or 'gpt-4o' for stronger reasoning. Default is 'gpt-4o-mini'.
  • Name: Optional assistant name shown in OpenAI resources and dashboards. Example: 'Customer Support Bot'.
  • Instructions: System behavior instructions for the assistant. Keep this explicit and task-oriented. Example: 'You are a helpful customer support assistant. Reply concisely and cite relevant policy points when needed.'
  • Description: Optional human-readable description of the assistant's purpose. Example: 'AI assistant for handling customer inquiries'.
  • Tools: Tool configuration list. Each item needs a 'type' field, such as 'code_interpreter', 'file_search', or 'function'. Example: [{"type":"code_interpreter"},{"type":"file_search"}]
  • Metadata: Metadata key-value object for storing assistant context. Common fields include 'department', 'version', and 'owner'. Example: {"department":"customer_service","version":"1.0"}
  • Temperature: Controls response creativity. 0 = deterministic, 1 = balanced, 2 = highly creative. Range: 0-2. Default is 1.
  • TopP: Controls nucleus sampling diversity. Lower values make output more focused. Range: 0-1. Default is 1.
  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • Assistant (object): Full JSON response returned by OpenAI for the created assistant, including id, model, tools, metadata, and timestamps.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
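The StatusCode / OriginalStatusCode / ErrorMessage convention above applies to every operation in this node. As an illustration, a minimal sketch of branching on it in a downstream script (the `classify_result` helper is hypothetical; only the field names come from this documentation):

```python
# Hypothetical helper illustrating the node's status convention.
# Field names (StatusCode, ErrorMessage) match the documented outputs;
# the function itself is an assumption, not part of the node.

def classify_result(output: dict) -> str:
    """Map a node output object to one of four outcome buckets."""
    status = output.get("StatusCode")
    if status == -1:
        return "validation_error"   # parameters rejected locally; fix inputs
    if status == 500:
        return "system_error"       # network/system failure; safe to retry
    if status == 200 and output.get("ErrorMessage"):
        return "business_error"     # API reached, but OpenAI returned an error
    if status == 200:
        return "success"
    return "unknown"
```

For business errors, inspect OriginalStatusCode (for example, 401 for an invalid API key or 404 for a missing resource) before retrying.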

List Assistants

Retrieves a paginated list of assistants from your OpenAI account.

When to use:

  • Review available assistants before selecting one for downstream tasks
  • Build assistant management flows with cursor-based pagination
  • Inspect assistant metadata, tools, and model configuration

Key points:

  • Returns complete assistant objects from OpenAI without field filtering
  • Includes pagination metadata inside Assistants (hasMore, firstId, lastId)
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Options:

  • Limit: Maximum number of assistants to return (1-100).
  • Order: Sort order by creation time. 'asc' = oldest first, 'desc' = newest first. Default: desc
  • After: Cursor for pagination. Use the LastId from the previous response to get the next page. Example: asst_abc123
  • Before: Cursor for pagination. Use the FirstId from the previous response to get the previous page. Example: asst_xyz789
  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • Assistants (object): Business data object containing the complete assistants list response. Structure includes: assistants (array of assistant objects with id, name, model, instructions, description, tools, metadata, created_at), hasMore (boolean), firstId (string), lastId (string). Returns empty object {} if operation failed (check ErrorMessage for reason).
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
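The hasMore / lastId fields support a simple cursor loop. A sketch, where `call_list_assistants` stands in for invoking this operation (only the Assistants structure and the After option come from the documentation above):

```python
# Sketch of cursor-based paging over List Assistants results.
# `call_list_assistants` is a hypothetical stand-in for the node call.

def fetch_all_assistants(call_list_assistants, limit=100):
    """Collect every assistant by following lastId cursors until hasMore is false."""
    items, after = [], None
    while True:
        result = call_list_assistants({"Limit": limit, "After": after})
        page = result["Assistants"]
        items.extend(page["assistants"])
        if not page.get("hasMore"):
            return items
        after = page["lastId"]  # cursor for the next page
```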

Update an Assistant

Updates an existing OpenAI assistant using partial-update behavior: only provided fields are changed.

When to use:

  • You need to adjust assistant behavior without recreating it.
  • You want to update model, instructions, tools, or metadata incrementally.
  • You need to preserve existing fields by only sending targeted updates.

Key points:

  • AssistantId is required; all other fields are optional patch-style updates.
  • Use AdditionalFields.TopP for nucleus sampling and AdditionalFields.BaseUrl for custom endpoints.
  • OpenAI recommends changing either Temperature or TopP first when tuning randomness.
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Input Parameters:

  • AssistantId: The ID of the assistant to update. Format starts with 'asst_'. You can obtain it from List Assistants or Create an Assistant results. Example: 'asst_abc123'.

Options:

  • Model: Optional new model for the assistant. Example: 'gpt-4o' or 'gpt-4o-mini'. Leave empty to keep current value.
  • Name: Optional new assistant name. Leave empty to keep current value.
  • Instructions: Optional new system instructions. Leave empty to keep existing instructions unchanged.
  • Description: Optional assistant description update. Leave empty to keep current value.
  • Tools: Optional tool configuration list. Each item should include 'type' (code_interpreter, file_search, or function). Example: [{"type":"code_interpreter"}]
  • Metadata: Optional metadata object for custom key-value fields (up to platform limits). Example: {"version":"1.0","category":"education"}
  • Temperature: Optional randomness control for outputs. Range 0-2. Lower values are more deterministic. Leave empty to keep current value.
  • AdditionalFields: Advanced optional fields as an object. Supported keys: TopP (0-1, nucleus sampling) and BaseUrl (default: https://api.openai.com/v1). Most users should only set BaseUrl when using Azure OpenAI or a custom proxy. Example: {"TopP":0.9,"BaseUrl":"https://api.openai.com/v1"}

Output:

  • Assistant (object): Full JSON response returned by OpenAI for the updated assistant, including id, model, tools, metadata, and other assistant fields.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
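Because updates are patch-style, a caller should omit (not blank out) any option it wants preserved. A minimal sketch of that pattern, assuming a hypothetical `build_update_options` helper; the option names mirror the list above:

```python
# Illustration of partial-update behavior: only fields you actually set are
# sent, so unset fields keep their current values.

def build_update_options(**fields):
    """Drop None-valued options so the update only touches provided fields."""
    return {k: v for k, v in fields.items() if v is not None}

# Changing only the instructions leaves model, name, and tools untouched:
patch = build_update_options(
    AssistantId="asst_abc123",
    Instructions="Answer in formal English.",
    Model=None,        # omitted -> current model preserved
    Temperature=None,  # omitted -> current temperature preserved
)
```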

Delete an Assistant

Deletes an existing OpenAI assistant by ID and returns the full upstream deletion response.

When to use:

  • You need to permanently remove an assistant that is no longer needed.
  • You want to automate cleanup of test or obsolete assistants.
  • You need upstream deletion payload for audit logging.

Key points:

  • AssistantId is required and should come from list/create assistant actions.
  • StatusCode is -1 for local validation errors, 200 after upstream response, and 500 for network/system failures.
  • Deletion is irreversible; verify the target assistant before calling this action.
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Input Parameters:

  • AssistantId: The ID of the assistant to delete. Format typically starts with 'asst_' (example: 'asst_abc123XYZ456def789'). You can obtain this ID from 'List Assistants' or 'Create an Assistant'.

Options:

  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • DeletedAssistant (object): Full JSON response returned by OpenAI for assistant deletion, typically including id, object, and deleted fields.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.

Message an Assistant

Sends a user message to an OpenAI assistant, executes a run, and returns the assistant reply from the target thread.

When to use:

  • You need to ask an assistant a question and get a synchronous reply in one action.
  • You want to continue a previous conversation using ThreadId.
  • You need run metadata (thread/run/message IDs and final status) for tracking.

Key points:

  • Leave ThreadId empty to start a new conversation thread automatically.
  • RunStatus may be completed, failed, cancelled, expired, in_progress, or queued.
  • If the assistant has tools enabled (functions, code interpreter, file search), they may execute during the run.
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.
  • If run status becomes requires_action, this action returns an explicit error because tool outputs are not auto-submitted.

Input Parameters:

  • AssistantId: The ID of the OpenAI Assistant to message. Format: starts with 'asst_'. You can get it from OpenAI Dashboard or the Create Assistant action. Example: 'asst_abc123'.
  • Message: User message content sent to the assistant. Use plain text instructions/questions. Example: 'What is the weather like today?'

Options:

  • ThreadId: Existing thread ID to continue a conversation (starts with 'thread_'). Leave empty to create a new thread automatically. Example: 'thread_abc123'.
  • MaxWaitSeconds: Maximum seconds to wait for run completion before returning a timeout-style business error. Must be >= 1. Default is 60.
  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • AssistantMessage (object): Message exchange result object containing ThreadId, RunId, MessageId, Reply, and RunStatus. RunStatus values include: completed, failed, cancelled, expired, in_progress, queued.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
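Since RunStatus can end in several terminal and non-terminal states, downstream steps usually branch on it. A hedged sketch (the dispatch function is an assumption; the status names and AssistantMessage fields come from the documentation above):

```python
# Hypothetical handler for AssistantMessage.RunStatus values.

TERMINAL = {"completed", "failed", "cancelled", "expired"}

def handle_assistant_message(msg: dict) -> str:
    """Return the reply on success; raise for failed or unfinished runs."""
    status = msg["RunStatus"]
    if status == "completed":
        return msg["Reply"]
    if status in TERMINAL:
        raise RuntimeError(f"run {msg['RunId']} ended with status {status}")
    # in_progress / queued: the node hit MaxWaitSeconds before completion
    raise TimeoutError(f"run {msg['RunId']} still {status}; raise MaxWaitSeconds")
```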

Message a Model

Sends chat messages to an OpenAI model and returns the generated response with token usage details.

When to use:

  • Build chatbot flows that need direct model responses
  • Generate text, summaries, or code from structured prompts
  • Capture token usage for monitoring and optimization

Key points:

  • Messages must be object-array entries with role and content
  • Stop accepts string-array values, each entry is a stop sequence
  • Put optional BaseUrl and Stream in AdditionalFields
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.
  • Stream=true is supported through AdditionalFields.Stream; streaming chunks are merged into final Message.content.

Input Parameters:

  • Messages: Conversation messages array. Each message must include role and content. Supported roles: system, user, assistant, developer, tool. Example: [{"role":"developer","content":"Always output JSON."},{"role":"user","content":"Summarize this text."}]

Options:

  • Model: The model to use for completion. Default is gpt-4o-mini.
  • Temperature: Sampling temperature (0-2). Higher values make output more creative/random; lower values make output more deterministic. Default: 1.
  • MaxTokens: Maximum number of tokens to generate in the completion.
  • TopP: Nucleus sampling threshold (0-1). 1.0 considers all candidates, 0.1 keeps only the most likely 10% probability mass. Default: 1.
  • FrequencyPenalty: Frequency penalty (-2.0 to 2.0). Positive values reduce repeated words, negative values allow more repetition. Default: 0.
  • PresencePenalty: Presence penalty (-2.0 to 2.0). Positive values encourage introducing new topics, negative values keep focus on current topics. Default: 0.
  • Stop: Stop sequences array. Generation stops when any sequence is encountered. Example: ["\n\n", "END"]
  • AdditionalFields: Optional advanced settings object. Supported keys: BaseUrl (string, default https://api.openai.com/v1), Stream (boolean, default false). Example: {"BaseUrl": "https://api.openai.com/v1", "Stream": false}

Output:

  • Message (object): Business data object containing the complete chat result. Structure includes: content (string), finishReason (string), promptTokens (number), completionTokens (number), totalTokens (number), model (string). Returns empty object {} if operation failed (check ErrorMessage for reason).
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
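A small sketch of preparing the Messages array and reading the token-usage fields from the Message output (the `usage_summary` helper is illustrative; the field names match the output structure documented above):

```python
# Building a Messages array and summarizing token usage from a
# Message a Model result. Helper function is hypothetical.

messages = [
    {"role": "system", "content": "You are a concise summarizer."},
    {"role": "user", "content": "Summarize: OpenAI ships new models."},
]

def usage_summary(message: dict) -> str:
    """Format the token-usage fields for logging or cost monitoring."""
    return (f"{message['model']}: {message['promptTokens']} prompt + "
            f"{message['completionTokens']} completion = {message['totalTokens']} tokens")
```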

List Models

Lists all available OpenAI models accessible with your API key.

When to use:

  • Check which models are available for your account
  • Verify model access before calling Chat Completion or Image Generation
  • Get model metadata (creation date, owner) for inventory or audit

Key points:

  • Returns all models your API key has access to (GPT, DALL-E, Whisper, etc.)
  • No pagination needed; the full list is returned in one call
  • Use Organization parameter to scope results to a specific org
  • Model IDs from this list can be used in other OpenAI actions
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Options:

  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}
  • Organization: OpenAI organization ID (optional). Used to scope API requests to a specific organization for billing and access control. Leave empty for personal accounts.
  • Timeout: Request timeout in seconds. Must be a positive integer. Default: 30

Output:

  • ModelList (object): Business data object containing the models list. On success: {"models": [{"id": str, "object": "model", "created": int, "owned_by": str}, ...], "object": "list"}. Returns empty object {} if operation failed (check ErrorMessage).
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
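A typical use of the ModelList output is narrowing it to one model family before populating a downstream dropdown or validation step. A sketch, assuming a hypothetical `model_ids` filter over the structure shown above:

```python
# Hypothetical filter over a ModelList result, e.g. to keep only GPT-family
# chat models. Field names follow the documented ModelList structure.

def model_ids(model_list: dict, prefix: str = "") -> list[str]:
    """Return sorted model ids, optionally filtered by id prefix."""
    return sorted(m["id"] for m in model_list.get("models", [])
                  if m["id"].startswith(prefix))
```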

Generate an Image

Generates images from text prompts using OpenAI DALL-E API.

When to use:

  • Create custom images from text descriptions for content, marketing, or design
  • Generate visual assets (illustrations, concept art, product mockups)
  • Prototype UI designs or creative concepts quickly

Key points:

  • Supports DALL-E 2 (faster, multi-image) and DALL-E 3 (higher quality, 1 image only)
  • DALL-E 3 may revise your prompt for better results (check revised_prompt in response)
  • URL format links expire after 1 hour; use b64_json for persistent storage
  • Image generation may take 10-30 seconds depending on model and quality settings
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.
  • gpt-image-1 and gpt-image-1.5 are supported; b64_json is recommended for stable downstream handling.

Input Parameters:

  • Prompt: The text prompt describing the image to generate. Be descriptive about subject, style, lighting, and mood for best results. DALL-E 3 may revise your prompt. Example: "A futuristic cityscape at sunset with flying cars, digital art style"

Options:

  • Model: Image model to use. Supported: dall-e-2, dall-e-3, gpt-image-1, gpt-image-1.5. dall-e-3 supports N=1 only. gpt-image models commonly return b64_json output.
  • Size: Image dimensions. DALL-E 2: 256x256, 512x512, 1024x1024. DALL-E 3: 1024x1024 (square), 1792x1024 (landscape), 1024x1792 (portrait). Default: 1024x1024
  • Quality: Image quality (DALL-E 3 only). standard: faster generation. hd: more detailed, higher cost. Default: standard
  • Style: Image style (DALL-E 3 only). vivid: dramatic, vibrant colors. natural: realistic, natural-looking. Default: vivid
  • N: Number of images to generate. DALL-E 3: only 1 supported. DALL-E 2: 1-10. Default: 1
  • ResponseFormat: Response format. url: returns image URLs (expire after 1 hour). b64_json: returns base64-encoded data (for immediate processing). Default: url
  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • GeneratedImage (object): Business data object containing image generation result. On success: {"images": [{"url": str, "revised_prompt": str}], "model": str}. Each image object contains url or b64_json depending on ResponseFormat, and revised_prompt (DALL-E 3 only). Returns empty object {} if failed.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
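Because url links expire after one hour, b64_json results are usually decoded and persisted immediately. A sketch of that step (the `save_images` helper and the .png filenames are assumptions; the GeneratedImage field layout follows the output above):

```python
# Decoding b64_json images from a GeneratedImage result and writing them to
# disk. Helper and file naming are illustrative.
import base64
import pathlib

def save_images(generated: dict, directory: str = ".") -> list[str]:
    """Decode each b64_json image and write it to a numbered PNG file."""
    paths = []
    for i, img in enumerate(generated.get("images", [])):
        path = pathlib.Path(directory) / f"image_{i}.png"
        path.write_bytes(base64.b64decode(img["b64_json"]))
        paths.append(str(path))
    return paths
```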

Analyze Image

Analyzes an image with OpenAI vision models and returns the full upstream response payload.

When to use:

  • You need model-generated understanding of image content from a URL or base64 input.
  • You want token usage and model metadata for downstream auditing.
  • You need to keep full upstream response data for flexible post-processing.

Key points:

  • Provide exactly one image source: ImageUrl or ImageBase64.
  • StatusCode is -1 for local validation errors, 200 after upstream response, and 500 for network/system failures.
  • BaseUrl should stay default unless using Azure OpenAI or a trusted proxy.
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Options:

  • ImageUrl: URL of the image to analyze. Provide exactly one of ImageUrl or ImageBase64 (not both). If both are provided, the action returns a parameter validation error. Example: 'https://example.com/sample-image.jpg'.
  • ImageBase64: Base64-encoded image data. Provide exactly one of ImageBase64 or ImageUrl. You can pass either a full data URI (e.g., 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...') or raw base64 content (e.g., 'iVBORw0KGgoAAAANSUhEUgAA...'). If no prefix is provided, JPEG is assumed.
  • Prompt: Instruction for what the model should analyze in the image. Example: 'What objects are visible in this image?'
  • Model: Vision-capable OpenAI model name. Default is 'gpt-4o'. Example: 'gpt-4o-mini'.
  • MaxTokens: Maximum length of the analysis response (in tokens, roughly 1 token ≈ 0.75 English words). Higher values allow longer descriptions. Default is 300 (≈225 words). Range: 1-4096.
  • Temperature: Controls creativity of the response. 0 = factual and deterministic (recommended for image analysis), 1 = balanced, 2 = very creative. Range: 0-2. Default is 0.0.
  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • ImageAnalysis (object): Full JSON response returned by OpenAI for the image analysis request. Key fields: choices[0].message.content (string): The actual image analysis text generated by the model; usage.prompt_tokens (number): Prompt token count (including image input); usage.completion_tokens (number): Response token count; usage.total_tokens (number): Total token usage; model (string): Actual model version used; id (string): Completion request identifier. To extract the analysis text, use ImageAnalysis.choices[0].message.content.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
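The "exactly one source" rule and the assumed-JPEG default can be sketched as a small pre-validation step (the `image_source` helper is hypothetical; the rules it encodes come from the ImageUrl/ImageBase64 descriptions above):

```python
# Hypothetical pre-check mirroring this action's input rules: exactly one of
# ImageUrl or ImageBase64, with raw base64 treated as image/jpeg.

def image_source(image_url: str = "", image_base64: str = "") -> dict:
    """Validate and normalize the single image source for the request."""
    if bool(image_url) == bool(image_base64):
        raise ValueError("Provide exactly one of ImageUrl or ImageBase64")
    if image_url:
        return {"type": "url", "value": image_url}
    if not image_base64.startswith("data:"):
        # No data-URI prefix: JPEG is assumed, per the documentation
        image_base64 = "data:image/jpeg;base64," + image_base64
    return {"type": "data_uri", "value": image_base64}
```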

Generate Audio

Generates speech audio from text with OpenAI text-to-speech models and returns audio content as base64.

When to use:

  • You need spoken audio output from plain text content.
  • You want configurable voice, audio format, and speaking speed.
  • You need machine-readable audio payload for downstream storage or playback.

Key points:

  • InputText is required and limited to 4096 characters.
  • Audio.AudioData is base64 and must be decoded before saving/playing.
  • StatusCode is -1 for local validation errors, 200 after upstream response, and 500 for network/system failures.
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Input Parameters:

  • InputText: Text content to convert to speech. Maximum 4096 characters. Example: 'Hello, this is a test of OpenAI text-to-speech API. The quick brown fox jumps over the lazy dog.'

Options:

  • Model: TTS model to use: tts-1, tts-1-hd, or gpt-4o-mini-tts. Default: tts-1.
  • Voice: Voice preset for generated speech. Supported values: alloy, echo, fable, onyx, nova, shimmer, ash, ballad, coral, sage, verse, marin, cedar.
  • ResponseFormat: Audio output format. Options: mp3 (recommended), opus (streaming-friendly), aac, flac (lossless), wav (uncompressed), pcm (raw audio). Default is mp3.
  • Speed: Speech speed multiplier. Range: 0.25 to 4.0. 1.0 is normal speed.
  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • Audio (object): Generated audio payload object. On success, includes AudioData (base64), AudioFormat, Model, Voice, Speed, ContentType, and ContentLengthBytes.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
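Since Audio.AudioData is base64, it must be decoded before the file can be saved or played. A minimal sketch (the `save_audio` helper and the file naming are assumptions; AudioData and AudioFormat are the documented output fields):

```python
# Decoding a Generate Audio result to a playable file. Helper is illustrative.
import base64

def save_audio(audio: dict, stem: str = "speech") -> str:
    """Decode AudioData and write it with the extension from AudioFormat."""
    path = f"{stem}.{audio['AudioFormat']}"
    with open(path, "wb") as f:
        f.write(base64.b64decode(audio["AudioData"]))
    return path
```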

Transcribe a Recording

Transcribes recorded audio into text using OpenAI Whisper transcription API.

When to use:

  • You need searchable text from meetings, interviews, calls, or voice notes.
  • You want subtitle outputs in srt or vtt formats.
  • You need detailed timing and segment metadata with verbose_json.

Key points:

  • Provide exactly one audio source: AudioFileBase64 or AudioFileUrl.
  • Model supports whisper-1, gpt-4o-transcribe, and gpt-4o-mini-transcribe.
  • Prompt and Temperature are explicit inputs; use AdditionalFields only for BaseUrl override.
  • StatusCode=200 means the upstream API was reached; check ErrorMessage and OriginalStatusCode for business errors.

Options:

  • AudioFileBase64: Base64-encoded audio file content. Provide either this or AudioFileUrl, not both. Supported formats: mp3, mp4, mpeg, mpga, m4a, wav, webm (max 25 MB). Example format (truncated): //uQxAAAAAAAAAAAAAAAAAAAAAAASW5mbwAAAA8AAAACAAADhAC...
  • AudioFileUrl: Publicly accessible URL of the audio file. Provide either this or AudioFileBase64, not both. Example: https://example.com/meeting.mp3
  • Model: Transcription model. Supported values: whisper-1, gpt-4o-transcribe, gpt-4o-mini-transcribe. Default: whisper-1.
  • Language: ISO-639-1 language code of the audio. Leave empty for auto-detection. Specifying the language improves accuracy and speed. Examples: en, zh, ja, es.
  • ResponseFormat: Output format. json: basic JSON with text only. verbose_json: full JSON with timestamps and segments. text: plain text. srt/vtt: subtitle file formats. Default: json.
  • TimestampGranularities: Timestamp detail level (only works when ResponseFormat=verbose_json). Options: segment (Sentence/phrase timestamps), word (Word-level timestamps), segment,word (Both levels), Empty (No extra timestamps).
  • Prompt: Optional text prompt to guide transcription vocabulary or style. Example: "Technical talk about PyTorch, CUDA, and machine learning."
  • Temperature: Sampling temperature for transcription output. Range: 0.0-1.0. Default: 0.0. Lower values are more deterministic, higher values add randomness.
  • AdditionalFields: Advanced option. Leave empty for standard OpenAI API. Supported key: BaseUrl (string). Default: https://api.openai.com/v1. Change this only if using Azure OpenAI Service or a trusted enterprise proxy.

Output:

  • Transcription (object): Complete transcription result object from OpenAI API. Core fields: text (string), language (string), duration (number), segments (array), words (array), model (string), response_format (string). For text/srt/vtt formats, the object contains text output and metadata fields.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
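The "exactly one audio source" rule above can be sketched as a small pre-flight check. The helper name `pick_audio_source` is illustrative; a failed check corresponds to the action's StatusCode=-1 (parameter validation error).

```python
def pick_audio_source(audio_base64=None, audio_url=None):
    """Enforce the exactly-one-of rule for AudioFileBase64 / AudioFileUrl."""
    provided = [s for s in (audio_base64, audio_url) if s]
    if len(provided) != 1:
        # Mirrors StatusCode=-1 (local parameter validation error)
        raise ValueError("Provide exactly one of AudioFileBase64 or AudioFileUrl")
    return ("base64", audio_base64) if audio_base64 else ("url", audio_url)
```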

Translate a Recording

Translates recorded audio into English text using the OpenAI Whisper translation API.

When to use:

  • Convert non-English speech recordings into English text
  • Generate subtitle-ready outputs in srt or vtt formats
  • Get segment-level timestamps with verbose_json output

Key points:

  • Recommended: use AudioFileUrl (simpler); use AudioFileBase64 only when URL is unavailable
  • Model supports whisper-1, gpt-4o-transcribe, and gpt-4o-mini-transcribe.
  • Use ResponseFormat=verbose_json when you need duration/language/segments metadata
  • Set AdditionalFields.BaseUrl only for Azure OpenAI or trusted compatible proxy endpoints
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Options:

  • AudioFileBase64: Base64-encoded audio data to translate to English. Supports two formats: 1) Raw base64, example: UklGRiQAAABXQVZF... 2) Data URI, example: data:audio/mp3;base64,UklGRiQAAABXQVZF... Provide either AudioFileBase64 or AudioFileUrl (not both). Supports mp3, mp4, mpeg, mpga, m4a, wav, webm.
  • AudioFileUrl: Publicly accessible URL of the audio file to translate to English. Recommended: use AudioFileUrl (simpler). Only use AudioFileBase64 if you cannot provide a public URL. Example: https://example.com/audio/meeting_recording_spanish.mp3
  • Model: Translation model. Supported values: whisper-1, gpt-4o-transcribe, gpt-4o-mini-transcribe. Default: whisper-1.
  • Prompt: Optional context text to guide spelling/style during translation. Example: 'This is a business meeting recording discussing Q4 sales targets.'
  • ResponseFormat: Output format. Supported values: json (translated text), verbose_json (text + duration + language + segments with timestamps), text (plain text), srt (subtitle format), vtt (web subtitle format). Default: json. Use verbose_json if you need timing information.
  • Temperature: Controls translation randomness. 0 = fully deterministic output (recommended), 1 = more variation. Most users should keep default. Range: 0-1. Default: 0.
  • AdditionalFields: Advanced optional settings object. Supported key: BaseUrl (default: https://api.openai.com/v1). Only set BaseUrl when using Azure OpenAI or an OpenAI-compatible proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • Translation (object): Translation result object. Fields vary by ResponseFormat: all formats include response_format and model; json includes text; verbose_json includes text, duration, language, and segments array (id, start, end, text); text/srt/vtt include formatted text output. Returns empty object {} if operation failed (check ErrorMessage for reason).
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
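When you request ResponseFormat=verbose_json but later need subtitles, the segments array (id, start, end, text) can be rendered as SRT yourself. A minimal sketch, assuming each segment carries `start`/`end` in seconds and a `text` field as documented above; `to_srt` is an illustrative helper name:

```python
def to_srt(segments):
    """Render verbose_json segments (start/end seconds, text) as SRT blocks."""
    def ts(seconds):
        # SRT timestamps use HH:MM:SS,mmm
        ms = int(round(seconds * 1000))
        h, rem = divmod(ms, 3600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(f"{i}\n{ts(seg['start'])} --> {ts(seg['end'])}\n{seg['text'].strip()}")
    return "\n\n".join(blocks) + "\n"
```

If you only need subtitles and not the metadata, requesting ResponseFormat=srt directly is simpler.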

Upload a File

Uploads a file to OpenAI from either Base64 content or a public URL.

When to use:

  • You need to provide documents or datasets for assistants, fine-tuning, batch, or vision workflows.
  • Your file is already in memory as Base64 content from upstream steps.
  • Your file is hosted at a public URL and should be fetched before upload.

Key points:

  • Provide exactly one source: FileContentBase64 or FileUrl.
  • Purpose accepts: fine-tune, assistants, batch, vision, user_data, evals.
  • Use AdditionalFields.BaseUrl only for non-default OpenAI-compatible endpoints.
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Options:

  • FileContentBase64: Base64 encoded file content. Use this OR FileUrl, not both. Recommended when file bytes are already available from prior steps. Example: 'SGVsbG8gV29ybGQh' (decodes to 'Hello World!').
  • FileUrl: Public URL of the file to upload. Use this OR FileContentBase64, not both. URL must be directly accessible without authentication. Example: 'https://example.com/documents/training_data.jsonl'.
  • Filename: Optional filename for the uploaded file. If empty, the system uses a default name or derives it from FileUrl.
  • Purpose: File usage purpose. Supported values: fine-tune, assistants (default), batch, vision, user_data, evals. Example: user_data.
  • AdditionalFields: Advanced optional settings object. Supported key: BaseUrl (default: https://api.openai.com/v1). Only set BaseUrl when using Azure OpenAI, gateway, or compatible proxy endpoint. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • File (object): Complete OpenAI file object from API response, including id, object, bytes, created_at, filename, purpose, status, and any additional fields returned by OpenAI.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
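The input rules above (exactly one source, valid Purpose, decodable Base64) can be sketched as a local pre-check. The helper name `prepare_upload` is illustrative and not part of the tool; the purpose list matches the values documented for this action.

```python
import base64

VALID_PURPOSES = {"fine-tune", "assistants", "batch", "vision", "user_data", "evals"}

def prepare_upload(content_b64=None, file_url=None, purpose="assistants"):
    """Validate Upload-a-File inputs per the rules documented above."""
    if bool(content_b64) == bool(file_url):
        raise ValueError("Provide exactly one of FileContentBase64 or FileUrl")
    if purpose not in VALID_PURPOSES:
        raise ValueError(f"Unsupported Purpose: {purpose}")
    # validate=True rejects malformed Base64 instead of silently ignoring it
    data = base64.b64decode(content_b64, validate=True) if content_b64 else None
    return {"purpose": purpose, "data": data, "url": file_url}
```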

List Files

Lists files uploaded to OpenAI with optional purpose filtering, sorting, and cursor-based pagination.

When to use:

  • You need to browse uploaded files before downstream operations.
  • You need pagination-aware file listing in automation workflows.
  • You need full upstream file metadata for routing or auditing.

Key points:

  • Use After with the last file id from a previous page to continue pagination.
  • Purpose is optional; leave empty to list all files.
  • StatusCode is -1 for local validation errors, 200 after upstream response, and 500 for network/system failures.
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Options:

  • Purpose: Filter files by purpose. Supported values: fine-tune, assistants, batch, vision, user_data, evals. Leave empty to list all files.
  • Limit: Maximum number of files to return per request. Range: 1-10000. Default is 20.
  • Order: Sort direction by creation time. Use 'desc' for newest first or 'asc' for oldest first. Default is 'desc'.
  • After: Pagination cursor for results after a specific file ID. Use the last file id from the previous page. Example: 'file-abc123xyz789'.
  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only for Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • FileList (object): Full JSON response returned by OpenAI list files endpoint, including data array, has_more, and object fields.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
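The cursor pagination described above (feed the last file id into After until has_more is false) can be sketched as a loop. Here `fetch_page` is a stand-in for one invocation of the List Files action and is assumed to return the FileList object with `data` and `has_more` fields:

```python
def list_all_files(fetch_page, limit=100):
    """Collect every file by following the After cursor until has_more is false.

    fetch_page(after, limit) stands in for one List Files call and must
    return a dict shaped like FileList: {"data": [...], "has_more": bool}.
    """
    files, after = [], None
    while True:
        page = fetch_page(after=after, limit=limit)
        files.extend(page["data"])
        if not page.get("has_more"):
            return files
        # Cursor rule from the docs: After = last file id of the previous page
        after = page["data"][-1]["id"]
```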

Delete a File

Permanently deletes an uploaded file from OpenAI storage. This action cannot be undone.

When to use:

  • Remove files no longer needed for fine-tuning or assistants
  • Clean up temporary uploads after processing
  • Free up storage quota in your OpenAI account

Key points:

  • Deletion is permanent and irreversible
  • Idempotent: deleting an already-deleted file succeeds with a warning
  • FileId can be obtained from Upload File or List Files actions
  • Files currently in use by fine-tuning jobs or assistants may fail to delete
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Input Parameters:

  • FileId: The ID of the file to delete. Format: file-{alphanumeric} (e.g., file-abc123xyz789). Can be obtained from Upload File or List Files actions. Deletion is permanent and cannot be undone.

Options:

  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • DeletedFile (object): Business data object containing the file deletion result. On success: {"deleted": true, "fileId": "<file id>", "object": "file"}. Idempotent: returns success even if the file was already deleted. Returns empty object {} if the operation failed (check ErrorMessage).
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
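Every action in this connector uses the same StatusCode / OriginalStatusCode / ErrorMessage convention. A minimal sketch of how a workflow might branch on those three outputs; the function name and the returned labels are illustrative, not part of the tool:

```python
def classify_result(status_code, error_message, original_status_code):
    """Map the three status outputs to a next step, per the documented convention."""
    if status_code == -1:
        return "fix-input"        # local parameter validation failed; do not retry
    if status_code == 500:
        return "retry"            # network/system error; the Agent may retry
    if status_code == 200 and error_message:
        return "business-error"   # API was reached; inspect OriginalStatusCode
    return "ok"
```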

Classify Text for Violations

Classifies text for policy violations with OpenAI Moderations API and returns the full upstream moderation payload.

When to use:

  • You need to detect whether user-generated text may violate safety policies.
  • You need category-level violation signals for routing or policy enforcement.
  • You need full moderation response data for auditing or downstream storage.

Key points:

  • InputText accepts plain text in any language; provide meaningful content for reliable results.
  • StatusCode is -1 for local validation errors, 200 after upstream response, and 500 for network/system failures.
  • OriginalStatusCode preserves the exact upstream HTTP status for debugging and observability.
  • StatusCode=200 means the upstream API was reached. If ErrorMessage is not empty, treat it as a business error and check OriginalStatusCode for debugging.

Options:

  • InputText: The text content to classify for policy violations. Supports plain text in any language. Empty text is allowed, but usually not useful for moderation decisions. Example: 'I want to kill them all'.
  • Model: Moderation model to use. Recommended: 'text-moderation-latest'. Other possible values include versioned moderation models exposed by OpenAI. Default is 'text-moderation-latest'.
  • AdditionalFields: Advanced optional fields as an object. Supported key: BaseUrl (default: https://api.openai.com/v1). Use BaseUrl only when targeting Azure OpenAI or a trusted proxy. Example: {"BaseUrl":"https://api.openai.com/v1"}

Output:

  • ModerationResult (object): Full JSON response returned by OpenAI Moderations API, including id, model, and results with flagged, categories, and category_scores fields.
  • OriginalStatusCode (number): The original HTTP status code returned by OpenAI API. 0 if request did not reach the API (local validation error or network error).
  • StatusCode (number): Operation status code. -1 for parameter validation error, 200 for request completed (check ErrorMessage for business errors), 500 for network/system errors (Agent may retry).
  • ErrorMessage (string): Error details if any, empty string otherwise.
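For routing or policy enforcement, the category-level signals can be pulled out of the ModerationResult object described above. A sketch, assuming the standard Moderations response shape (a `results` array whose entries carry `flagged` and a `categories` boolean map); the sample payload in the test is illustrative, not a real API response:

```python
def flagged_categories(moderation_result):
    """Return the category names flagged in a Moderations API response."""
    out = []
    for r in moderation_result.get("results", []):
        if r.get("flagged"):
            out.extend(name for name, hit in r.get("categories", {}).items() if hit)
    return out
```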

5. Example Usage

This section guides you through creating a simple chatbot workflow using the Message a Model action. This is the most common way to interact with OpenAI's GPT models.

Scenario: We will create a workflow that takes a user's question and uses the GPT-4o-mini model to generate a concise answer.

Workflow Overview: Start -> Openai (Message a Model) -> Answer

Step-by-Step Guide:

  1. Add the Tool Node:
    • In your workflow canvas, click the "+" button to add a new node.
    • Select the "Tools" tab.
    • Search for and select Openai.
    • From the list of supported operations, choose Message a Model. This will add the node to your canvas.
  2. Configure the Node:
    • Click the new Message a Model node to open its configuration panel.
    • Credentials: Select your configured OpenAI API credentials from the dropdown menu.
    • Parameters:
    • Model: Keep the default gpt-4o-mini for a balance of speed and cost, or change it to gpt-4o for higher intelligence.
    • Messages: This is where you define the conversation. You need to provide a JSON array of message objects. For a simple question, enter:
    [
      {"role": "user", "content": "Explain quantum computing in one sentence."}
    ]
    • Temperature: (Optional) Leave at 1 for standard creativity, or lower it to 0.5 for more focused answers.
  3. Run and Verify:
    • Ensure there are no error indicators on the node.
    • Click the "Run" or "Test Run" button in the canvas.
    • Once the execution is complete, click the "Logs" icon on the node to inspect the output.
    • Look for the Message output object. The content field inside it will contain the model's response (e.g., "Quantum computing uses the principles of quantum mechanics to process information in ways that classical computers cannot.").

Conclusion: You have successfully configured a basic AI chat node. You can now connect the Message.content output to an Answer node to display the result to the end user.

6. FAQs

Q: Why am I getting a 401 Unauthorized error?

A: This usually indicates an issue with your API key. Please check the following:

  • Validity: Ensure your API key has not been revoked or expired.
  • Configuration: Verify that the correct credential is selected in the node settings.
  • Permissions: Ensure your API key has access to the organization you are trying to use.

Q: How do I use GPT-4?

A: To use GPT-4, simply enter gpt-4 or gpt-4o in the Model input field of the "Message a Model" action. Note that your OpenAI account must have payment history or be in a tier that allows access to GPT-4 models.

Q: What is the difference between "Message a Model" and "Message an Assistant"?

A: They serve different purposes:

  • Message a Model: This is a stateless, direct call to the Chat Completions API. You must provide the full conversation history in the Messages array every time if you want the model to remember context.
  • Message an Assistant: This uses the Assistants API. It is stateful; OpenAI manages the conversation history (Threads) for you. It also supports advanced features like file search and code interpretation automatically.
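Because Message a Model is stateless, a multi-turn workflow must resend the full history on every call. A sketch of building that Messages array; the helper name `with_history` is illustrative:

```python
def with_history(history, user_text):
    """Return a new Messages array: prior turns plus the new user turn."""
    return history + [{"role": "user", "content": user_text}]

# Example: the full context you would pass to Message a Model on turn 3
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "4."},
]
messages = with_history(history, "And times 3?")
```

After each call, append the model's reply (role "assistant") to the history before the next turn, or the model will not remember the context.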

7. Official Documentation

OpenAI Official API Documentation

Updated on: Apr 14, 2026