Orbitvu Station enables seamless uploading of your generated content to AWS S3. To ensure successful and secure integration, you will need to grant Orbitvu Cloud the necessary permissions within your AWS storage configuration. Detailed instructions for AWS S3 setup are provided below. Please note that a basic understanding of AWS S3 is recommended.
After setting up AWS S3 access for Orbitvu, please provide the necessary storage information (for example, your bucket name) in the account settings section of your Orbitvu Cloud.
Restrictions
The AWS assumed-role feature required by the S3 integration only works within a single AWS partition. Since the Orbitvu account is in the "aws" partition (standard commercial AWS regions), customers with accounts in the "aws-cn" partition (China regions) or the "aws-us-gov" partition will not be able to use this feature.
Requirements
To utilize the AWS S3 integration functionality, clients must have both an active Support plan and their S3 settings properly configured in Orbitvu Cloud.
Uploaded file types
Orbitvu Station uploads 2D images and video files to S3 using the defined Name patterns. 360-degree presentations are uploaded as ZIP archives named after the current session.
Each file uploaded to S3 will have a metadata key x-amz-meta-sku with the SKU value configured in the Orbitvu Station session. This can be used for further processing, e.g. with AWS Lambda.
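For example, a downstream process can read the SKU back from the object's metadata. The sketch below is illustrative, not part of the Orbitvu product: the bucket and key names are placeholders, and it relies on the fact that boto3 exposes user-defined metadata with the x-amz-meta- prefix stripped.

```python
def sku_from_head(head_object_response):
    # boto3 returns user-defined metadata with the "x-amz-meta-"
    # prefix stripped, so the key is simply "sku".
    return head_object_response.get("Metadata", {}).get("sku")

# Hypothetical usage against a real bucket (requires boto3 and credentials):
# import boto3
# s3 = boto3.client("s3")
# response = s3.head_object(Bucket="my-s3-bucket-for-ov", Key="photo_001.jpg")
# print(sku_from_head(response))
```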
AWS S3 configuration
Our solution leverages assumed roles – a highly secure and recommended approach – to facilitate the upload of files from Orbitvu to your S3 bucket. This method ensures that Orbitvu can access your storage without requiring permanent credentials. The setup process includes the following key steps:
- Defining the specific permissions required for Orbitvu's access (permission policy).
- Creating an IAM role that will be assumed by Orbitvu.
- Establishing a trust relationship for this role, explicitly allowing the Orbitvu account to assume it.
- Linking the defined permission policy to this new role.
A comprehensive breakdown of each of these steps can be found in the sections below.
Create policy
1. Log in to your AWS account
2. Visit IAM -> Policies and click Create policy

3. Define the policy
The policy must allow the s3:PutObject action on your S3 bucket.
The simplest way to create the policy definition is to switch to the JSON view and paste the following definition, replacing the bucket name with the name of your own bucket.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "s3:PutObject" ],
            "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
        }
    ]
}
IMPORTANT! Replace <BUCKET_NAME> with the real name of your bucket, e.g. arn:aws:s3:::my-s3-bucket-for-ov/*
4. Click Next, enter a policy name, e.g. my-bucket-put-policy, and confirm policy creation.
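If you prefer to script this step instead of using the console, the policy document can also be built and created programmatically. The sketch below is an assumption-laden example (the bucket and policy names are placeholders), matching the JSON definition above.

```python
import json

def build_put_policy(bucket_name):
    # Policy allowing s3:PutObject on every key in the given bucket,
    # equivalent to the JSON definition shown above.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }]
    })

# Hypothetical usage (requires boto3 and IAM permissions):
# import boto3
# iam = boto3.client("iam")
# iam.create_policy(PolicyName="my-bucket-put-policy",
#                   PolicyDocument=build_put_policy("my-s3-bucket-for-ov"))
```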
Create role
1. Visit IAM -> Roles and click Create role
2. Select trusted entity type
- Select AWS account as Trusted entity type
- Select Another AWS account in the account selection field
- Enter Orbitvu's account id: 642402037748
- Select Require external ID (this is a recommended step)
- Enter your own unique secret that will additionally identify Orbitvu (write it down, as it will be required later)
- Click Next
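For reference, the steps above should produce a trust policy similar to the one below, which you can verify later in the role's Trust relationships tab (the external ID value is a placeholder for your own secret):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::642402037748:root" },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": { "sts:ExternalId": "<YOUR_EXTERNAL_ID>" }
            }
        }
    ]
}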
3. Attach the policy created earlier to the role
In this example we assume that the policy from the previous section was named ovstation-upload-test-policy, but you should use your own policy name.
4. Click Next and finalize role creation by providing a name for the new role.
Store the Role ARN as this will be required to configure the solution. You can find the Role ARN in role details.
As soon as these steps are done, you can visit the External storage configuration in Orbitvu Cloud. You will have to provide the following there:
- Bucket name
- AWS Region name
- Role ARN
- External ID
Upload customization with AWS Lambda
AWS Lambda functions offer a powerful way to automate file processing in your S3 bucket. When new files are uploaded, Lambda can trigger actions like converting image formats or transferring data to external systems such as an ERP (using the x-amz-meta-sku metadata key, for example).
For instance, if you upload 360-degree presentations as ZIP files, a Lambda function can automatically extract the contents and remove the original ZIP archive. Below is an example of a Lambda function designed for this purpose.
Please note that this Lambda function is for illustration purposes only. In real environments it is recommended to use IaC tools like Terraform or frameworks like Serverless Framework to deploy Lambda functions.
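The extraction logic can be sketched as follows. This is a minimal illustrative version, not Orbitvu's official function: it downloads the archive, uploads each member next to the original key, and deletes the archive afterwards. Error handling and streaming of very large archives are omitted.

```python
import io
import os
import urllib.parse
import zipfile


def extract_zip(zip_bytes, key, s3, bucket):
    # Upload every file from the archive next to the original object key.
    prefix = os.path.dirname(key)
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for member in archive.namelist():
            if member.endswith("/"):
                continue  # skip directory entries
            target = f"{prefix}/{member}" if prefix else member
            s3.put_object(Bucket=bucket, Key=target,
                          Body=archive.read(member))


def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded (e.g. spaces arrive as "+").
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        extract_zip(body, key, s3, bucket)
        s3.delete_object(Bucket=bucket, Key=key)  # remove the original archive
```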
Create Lambda in AWS
Go to AWS -> Lambda and click Create Function
Enter a function name and select the runtime. For the sample function provided, use Python 3.13.
Copy and paste the function's code into the editor, then click Deploy.
Go to Configuration -> General configuration and click Edit.
Set the timeout to 1 minute, as unzipping might take some time.
Go to Configuration -> Triggers -> Add Trigger
Choose S3 trigger, enter your bucket's name and set ".zip" as suffix (as this Lambda will operate only on zip archives).
Go to Configuration -> Permissions and click the role name to open role details
Click Add permissions and Create inline policy
Add permissions to your bucket as below, replacing <YOUR_BUCKET_NAME> with the name of your bucket
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::<YOUR_BUCKET_NAME>/*"
        }
    ]
}
Click Next, provide the policy name and confirm policy creation.
That's all! You can now upload a ZIP file and verify that your function works properly.