1. Overview
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
With the GoInsight AWS S3 node, you can seamlessly integrate S3 storage operations into your automated workflows. This allows you to perform a full range of object, folder, and bucket management tasks, including:
- Bucket Management: Create, delete, and list all your S3 buckets.
- Folder Management: Create, delete, and list folders within your buckets.
- File Management: Upload, download, delete, copy, and search for files (objects) within your S3 buckets.
2. Prerequisites
Before using this node, you need a valid AWS account with programmatic access enabled, and the IAM (Identity and Access Management) user or role behind your credentials must have the permissions required to create and manage the S3 buckets and objects you intend to work with.
3. Credentials
For a detailed guide on how to obtain and configure your credentials, please refer to our official documentation: Credentials Configuration Guide.
4. Supported Operations
Summary
This node provides operations centered around the management of S3 Buckets, Folders, and Files. The table below summarizes the available actions (aligned with the latest published DSL).
| Resource | Operation | Description |
|---|---|---|
| Bucket | Create Bucket | Creates an S3 bucket with optional ACL and S3-compatible endpoint; idempotent if you already own the bucket. |
| Bucket | Delete Bucket | Deletes an empty bucket permanently. |
| Bucket | Get Buckets | Lists buckets and owner information for the credentials. |
| Folder | Create Folder | Creates a folder placeholder (zero-byte key ending with /). |
| Folder | Delete Folder | Deletes a prefix and all objects under it. |
| Folder | Get Folders | Scans the bucket to list folder paths (can be slow on huge buckets). |
| File | Upload File | Uploads Base64 content; may overwrite an existing object. |
| File | Download File | Downloads an object and returns Base64 content (size limits apply). |
| File | Copy File | Server-side copy within or across buckets; existing destination keys are overwritten. |
| File | Delete Object | Deletes a single object (S3 succeeds even if the key did not exist). |
| File | Get Files | Lists files (not directories) with prefix filter and pagination. |
| File | Search Objects | ListObjectsV2 listing with prefix and pagination; filter by suffix/name in your workflow if needed. |
Operation Details
Create Bucket
Creates an AWS S3 bucket in the specified region with optional ACL and custom endpoint support using native HTTP signing (no boto3). Bucket names must be globally unique. If the bucket already exists and is owned by your account, the call can still succeed with BucketCreated=false, so retries after timeouts are safe.
Input Parameters:
- BucketName: S3 bucket name to create.
Options:
- SessionToken: Optional session token for temporary credentials (e.g. STS).
- EndpointUrl: Optional custom S3-compatible endpoint (MinIO, Wasabi, OSS, etc.).
- Acl: Optional canned ACL (private, public-read, authenticated-read, …); leave empty for default.
Output:
- BucketCreated (bool): Whether the bucket was created in this call.
- BucketData (object): BucketName, BucketRegion, BucketLocation, RequestId, etc.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Normalized operation status code.
- ErrorMessage (string): Error message if any.
- Hint (string): Suggested next step on error; empty on success.
- Retryable (bool): Whether retrying may help for transient failures.
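The retry-safe behavior described above can be sketched as a small decision function. This is an illustrative helper, not the node's actual code; `BucketAlreadyOwnedByYou` and `BucketAlreadyExists` are real S3 error codes, but the function name and return shape are assumptions modeled on the node's outputs.

```python
def interpret_create_result(error_code):
    """Sketch of retry-safe CreateBucket handling (hypothetical helper).

    `error_code` is the S3 error code from a failed call, or None on success.
    BucketAlreadyOwnedByYou means an earlier attempt (e.g. one that timed out
    on the client side) already created the bucket, so the operation as a
    whole still succeeded -- just without creating anything this time.
    """
    if error_code is None:
        return {"BucketCreated": True, "Succeeded": True}
    if error_code == "BucketAlreadyOwnedByYou":
        # Safe to treat as success: the bucket exists and you own it.
        return {"BucketCreated": False, "Succeeded": True}
    # BucketAlreadyExists (owned by another account) and anything else
    # are genuine failures.
    return {"BucketCreated": False, "Succeeded": False}
```

This is why retrying after a timeout is harmless: the second attempt maps to a success with BucketCreated=false instead of a duplicate-creation error.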
Delete Bucket
Permanently deletes an empty bucket. Irreversible; the name may later be taken by another account.
Input Parameters:
- BucketName: S3 bucket name to delete.
Options:
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- BucketDeleted (bool): Whether the bucket was deleted.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
Get Buckets
Retrieves all buckets visible to the credentials plus owner metadata (read-only).
Options:
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- BucketsRetrieved (bool): Whether the list call succeeded.
- BucketsList (object-array): Bucket entries from the API.
- Owner (object): Owner information.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Summary (string): One-line human-readable result summary.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
Create Folder
Creates a folder by writing a zero-byte object whose key ends with / (S3 has no real directories).
Input Parameters:
- BucketName: S3 bucket name where the folder will be created.
- FolderName: Folder name; a trailing slash is applied if missing.
Options:
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- FolderCreated (bool): Whether the folder object was created.
- FolderData (object): Details of the created placeholder object.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
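Because S3 has no real directories, "creating a folder" amounts to a zero-byte PUT to a key with a trailing slash. A minimal sketch of the key normalization (the helper name is made up for illustration):

```python
def folder_key(folder_name: str) -> str:
    """Normalize a folder name into its placeholder object key (sketch).

    Create Folder appends a trailing slash if it is missing; the actual
    "creation" is then a zero-byte PUT to this key.
    """
    return folder_name if folder_name.endswith("/") else folder_name + "/"
```

For example, `folder_key("reports")` yields `"reports/"`, the key of the zero-byte placeholder object.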
Delete Folder
Deletes every object whose key starts with the folder prefix. Irreversible.
Input Parameters:
- BucketName: S3 bucket name where the folder is located.
- FolderName: Folder prefix to delete (contents included).
Options:
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- FolderDeleted (bool): Whether the delete sequence completed successfully.
- DeletedObjects (object-array): Per-object delete results.
- TotalObjects (number): Objects that required deletion.
- SuccessfulCount (number): Objects successfully deleted.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Summary (string): One-line human-readable result summary.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
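Deleting a folder means deleting every key under its prefix. Since the S3 DeleteObjects API accepts at most 1,000 keys per request, a prefix delete is planned in batches. The sketch below is a pure function over an already-listed set of keys (real listings are paginated); the function name is illustrative only.

```python
def plan_folder_delete(all_keys, folder_name, batch_size=1000):
    """Select every key under the folder prefix and split the result into
    DeleteObjects-sized batches (S3 caps each batch at 1000 keys).

    Pure sketch: `all_keys` stands in for the result of listing the bucket.
    """
    prefix = folder_name if folder_name.endswith("/") else folder_name + "/"
    doomed = [k for k in all_keys if k.startswith(prefix)]
    return [doomed[i:i + batch_size] for i in range(0, len(doomed), batch_size)]
```

Note that the folder placeholder key itself (`"a/"`) starts with its own prefix, so it is deleted along with the contents.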
Get Folders
Lists folder paths by scanning object keys (can be slow or hit limits on very large buckets). Use Prefix and MaxResults to narrow scope.
Input Parameters:
- BucketName: S3 bucket name to list folders from.
Options:
- MaxResults: Max folders to return (1–10000; default 1000). Check HasMore if more exist.
- Prefix: Optional folder prefix; use a trailing / for precise folder matching.
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom endpoint URL (include https://).
Output:
- FoldersFound (bool): Whether listing succeeded.
- FoldersData (object-array): Entries with FolderName, FolderPath, Depth, etc.
- TotalFolders (number): Count returned in this response (may be capped by MaxResults).
- HasMore (bool): Whether more folders exist beyond MaxResults.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Summary (string): One-line human-readable result summary.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
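Since folders are only implied by object keys, a folder scan derives every intermediate path from the keys it sees. A minimal sketch of that derivation over an in-memory key list (real listings are paginated, and this helper is not the node's actual implementation):

```python
def derive_folders(keys):
    """Derive every folder path implied by a set of object keys (sketch of
    the scan Get Folders performs)."""
    folders = set()
    for key in keys:
        parts = key.split("/")[:-1]          # drop the leaf (file) component
        for depth in range(1, len(parts) + 1):
            folders.add("/".join(parts[:depth]) + "/")
    return sorted(folders)
```

For example, a single key `reports/2024/jan.csv` implies both `reports/` and `reports/2024/`, which is why a full scan of a huge bucket can be slow.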
Upload File
Uploads Base64-decoded bytes to an object key. Overwrites an existing object with the same key without prompting. Not for objects larger than 5 GB (multipart upload required above that).
Input Parameters:
- BucketName: S3 bucket name where the file will be uploaded.
- ObjectKey: Object key (path) for the uploaded file.
- FileContent: Base64-encoded file content.
Options:
- ContentType: MIME type (default application/octet-stream).
- MaxSizeMb: Maximum decoded size in MB (default 100).
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- UploadSuccessful (bool): Whether the upload completed successfully.
- FileLocation (string): S3 URI of the object.
- ContentLength (number): Uploaded size in bytes.
- ETag (string): ETag for verification.
- VersionId (string): Present when bucket versioning is enabled.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
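The FileContent input is Base64, and the MaxSizeMb limit applies to the decoded bytes. A sketch of preparing and size-checking the payload before any network call (the helper name is hypothetical; only the Base64 convention and the 100 MB default come from the node):

```python
import base64

def prepare_upload(file_content_b64: str, max_size_mb: int = 100) -> bytes:
    """Decode FileContent and enforce MaxSizeMb before the PUT (sketch)."""
    data = base64.b64decode(file_content_b64, validate=True)
    if len(data) > max_size_mb * 1024 * 1024:
        raise ValueError(f"File size exceeds maximum limit ({max_size_mb} MB)")
    return data
```

For instance, `prepare_upload("SGVsbG8sIFMzIQ==")` returns the 10-byte payload `b"Hello, S3!"`, the same example used in the walkthrough below.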
Download File
Downloads an object and returns Base64-encoded content. Default MaxSizeMb is 50; increase if needed, subject to workflow memory.
Input Parameters:
- BucketName: S3 bucket name where the file is located.
- ObjectKey: Object key to download.
Options:
- MaxSizeMb: Maximum file size in MB (default 50). Downloads that exceed this limit fail outright; no partial content is returned.
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- DownloadSuccessful (bool): Whether the download completed successfully.
- FileLocation (string): S3 URI of the object.
- ContentType (string): MIME type.
- ContentLength (number): Size in bytes.
- LastModified (string): Last-Modified header value.
- ETag (string): ETag for verification.
- FileContent (string): Base64-encoded content.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
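The FileContent output is Base64-encoded, so downstream nodes must decode it to recover the original bytes. A minimal sketch using the standard library:

```python
import base64

# FileContent from a successful Download File run (here, the same
# "Hello, S3!" payload used elsewhere in this page as an example)
file_content = "SGVsbG8sIFMzIQ=="
data = base64.b64decode(file_content)
# len(data) should equal the node's ContentLength output
```

The decoded bytes can then be written to disk or passed to another node as needed.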
Copy File
Server-side copy (including cross-bucket). An existing DestKey is overwritten. Does not delete the source.
Input Parameters:
- SourceBucket: Source bucket name.
- SourceKey: Source object key.
- DestBucket: Destination bucket name.
- DestKey: Destination object key.
Options:
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- CopySuccessful (bool): Whether the copy succeeded.
- SourceLocation (string): S3 URI of the source object.
- DestLocation (string): S3 URI of the destination object.
- ETag (string): ETag of the new object.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
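Under the hood, a server-side copy identifies the source via the `x-amz-copy-source` request header, whose value is `/bucket/key` with the key URL-encoded. A sketch of building that value (the helper name is made up; the header and its encoding convention are standard S3):

```python
from urllib.parse import quote

def copy_source_header(source_bucket: str, source_key: str) -> str:
    """Build the x-amz-copy-source value for a server-side CopyObject
    (sketch of the wire-level convention; the key portion is URL-encoded)."""
    return f"/{source_bucket}/{quote(source_key)}"
```

Keys containing spaces or other special characters are encoded, e.g. `reports/jan report.csv` becomes `reports/jan%20report.csv`.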
Delete Object
Deletes one object. S3 may return success even when the object did not exist.
Input Parameters:
- BucketName: Bucket containing the object.
- ObjectKey: Object key to delete.
Options:
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- ObjectDeleted (bool): Whether the delete request completed successfully.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
Get Files
Lists file objects (excludes directory placeholders), with Prefix filter and ListObjectsV2 pagination.
Input Parameters:
- BucketName: Bucket to list.
Options:
- MaxKeys: Page size per request (1–1000; default 1000). Use pagination for more than 1000 files.
- Prefix: Key prefix filter (no wildcards).
- ContinuationToken: Pass NextContinuationToken from the previous response to fetch the next page.
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- ListComplete (bool): Whether this page was retrieved and parsed successfully.
- FilesCount (number): Number of files in this page.
- FilesData (object-array): Per-object metadata (Key, Size, LastModified, ETag, StorageClass, …).
- IsTruncated (bool): Whether more keys exist after this page.
- NextContinuationToken (string): Token for the next ContinuationToken input; empty when not truncated.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Operation status code.
- ErrorMessage (string): Error message if any.
- Summary (string): One-line human-readable result summary.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
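The pagination loop above (feed NextContinuationToken back in as ContinuationToken until IsTruncated is false) can be sketched as follows. `fetch_page` is any callable following that contract, not the node's real API; the dict keys mirror the outputs listed above.

```python
def list_all_files(fetch_page, bucket, prefix=""):
    """Drain every page of a ListObjectsV2-style listing, skipping folder
    placeholder keys the way Get Files does (sketch)."""
    files, token = [], None
    while True:
        page = fetch_page(bucket=bucket, prefix=prefix, continuation_token=token)
        # Directory placeholders are zero-byte keys ending with "/"
        files += [o for o in page.get("Contents", []) if not o["Key"].endswith("/")]
        if not page.get("IsTruncated"):
            return files
        token = page["NextContinuationToken"]
```

In a GoInsight workflow the same loop would be built from repeated Get Files node calls rather than a Python function, but the token-passing logic is identical.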
Search Objects
Uses ListObjectsV2: the only server-side filter is Prefix; suffix or substring matching must be done client-side across pages. Read-only.
Input Parameters:
- BucketName: Bucket to list.
Options:
- Prefix: Keys starting with this prefix (only server-side filter).
- MaxKeys: Page size (capped at 1000; default 1000).
- ContinuationToken: Pagination; use NextContinuationToken from the previous response.
- SessionToken: Optional session token for temporary credentials.
- EndpointUrl: Optional custom S3-compatible endpoint.
Output:
- SearchComplete (bool): Whether the list call for this page succeeded.
- ObjectsFound (number): Object count in this page.
- ObjectsData (object-array): Object metadata for this page.
- IsTruncated (bool): Whether more pages exist.
- NextContinuationToken (string): Token for the next request.
- OriginalStatusCode (number): HTTP status from S3; 0 if the request did not reach the API.
- StatusCode (number): Normalized status (see node output notes for semantics).
- ErrorMessage (string): Error message if any.
- Summary (string): One-line human-readable result summary.
- Hint (string): Suggested next step on error.
- Retryable (bool): Whether retrying may help for transient failures.
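Because the server only filters on Prefix, suffix or substring matching has to happen in your workflow after collecting pages. A sketch of that client-side step (the function is illustrative; the `ObjectsData`/`Key` shape mirrors the outputs listed above):

```python
def filter_by_suffix(object_pages, suffix):
    """Client-side suffix filter across Search Objects result pages; S3
    itself only filters on Prefix, so this runs in the workflow (sketch)."""
    return [obj["Key"]
            for page in object_pages
            for obj in page.get("ObjectsData", [])
            if obj["Key"].endswith(suffix)]
```

For example, collecting all `.csv` keys requires paging through every matching prefix first, then applying the suffix test to the accumulated results.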
5. Example Usage
This section will guide you through creating a simple workflow to upload a file to your AWS S3 bucket.
Workflow Overview
The workflow will consist of three nodes: Start -> AWS S3: Upload File -> Answer.
Step-by-Step Guide
- Add the AWS S3 Node:
- In the workflow canvas, click the + icon to add a new node.
- Select the "Tools" tab in the popup panel.
- Find and select Aws S3 from the list of tools.
- From the list of supported operations for Aws S3, click on Upload File to add the node to the canvas.
- Configure the Node:
- Click on the newly added Upload File node to open its configuration panel on the right.
- Configure Credentials: In the credentials field at the top, select your pre-configured Aws S3 credential from the dropdown menu.
- Fill in Parameters:
- BucketName: Enter the name of the S3 bucket where you want to upload the file (e.g., my-test-bucket).
- ObjectKey: Specify the full path and name for the file in the bucket (e.g., reports/monthly-report.txt).
- FileContent: Provide the content of the file, encoded in Base64. For example, to upload a file with the text "Hello, S3!", you would use the Base64 string SGVsbG8sIFMzIQ==.
- Run and Verify:
- Once all required parameters are filled correctly, the error indicator on the top right of the workflow canvas will disappear.
- Click the "Run" button in the top right corner to execute the workflow.
- After a successful run, you can click the log icon to view the detailed inputs and outputs of the node and verify that the operation was successful. You should see an UploadSuccessful output with a value of true.
Final Workflow
After completing these steps, your workflow is fully configured. When you run it, a new file will be created in your specified S3 bucket with the content you provided.
6. FAQs
Q: I'm getting a 403 Forbidden error. What should I do?
A: A 403 error typically indicates a permissions issue. Please check the following:
- IAM Permissions: Ensure the IAM user or role associated with your credentials has the necessary permissions (e.g., s3:PutObject, s3:GetObject, s3:ListBucket) for the bucket and objects you are trying to access.
- Bucket Policy: Review the bucket policy to ensure it doesn't explicitly deny access to your user or role.
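As a starting point, a minimal IAM policy granting the permissions this node commonly needs might look like the following. This is a sketch only: `my-test-bucket` is a placeholder, and you should scope actions and resources to what your workflows actually do.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-test-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-test-bucket/*"
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN while the object-level actions apply to `bucket/*`; mixing these up is a common cause of 403 errors.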
Q: Why is my file upload failing with a "File size exceeds maximum limit" error?
A: The Upload File and Download File nodes enforce a maximum decoded size via MaxSizeMb (defaults are 100 MB for upload and 50 MB for download). Increase MaxSizeMb if your workflow allows the extra memory, or use a different integration pattern for very large objects.
7. Official Documentation
For more in-depth information about the AWS S3 API, please refer to the AWS S3 Official API Documentation.