What is AWS S3?

Amazon S3 (Simple Storage Service) is the industry-standard object store — the backbone of millions of production workloads, from static sites and data lakes to backup archives and AI training corpora. It exposes an HTTP/S API that dozens of other object stores now replicate (Backblaze B2, Wasabi, Cloudflare R2, MinIO, Google Cloud Storage, Storj, IDrive e2), making S3 the de facto interchange format for cloud objects. S3 offers 11 nines (99.999999999%) of durability, storage classes from Standard to Glacier Deep Archive, lifecycle policies, versioning, replication, and IAM-scoped access.

Moving data between S3 and another object store — or out to consumer clouds like Google Drive / Dropbox / OneDrive — is where CloudsLinker shines. Native AWS tools (S3 CLI, DataSync, Storage Gateway) are powerful but AWS-to-AWS-centric and priced accordingly. CloudsLinker runs cloud-to-cloud copies over the S3 API with multipart upload, parallel part transfer, and automatic throttling — letting you move to cheaper tiers like Wasabi (no egress fees) or B2, or replicate to a completely different provider for disaster recovery.

Why connect AWS S3 to CloudsLinker

CloudsLinker connects to any S3-compatible bucket with Access Key ID + Secret Access Key + Region (or Endpoint, for non-AWS providers). Multipart uploads are used for objects above 100 MB. The same connector works for AWS S3, Wasabi, Backblaze B2, Cloudflare R2, DigitalOcean Spaces, IDrive e2, Storj, Google Cloud Storage (via HMAC), MinIO, and any custom S3-compatible endpoint.
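
CloudsLinker's connector internals aren't published, but the credential set it asks for maps one-to-one onto a standard S3 SDK client. A minimal boto3 sketch (all key values, endpoints, and bucket names below are placeholders):

```python
# Minimal sketch of the credentials CloudsLinker asks for, expressed as
# boto3 clients. All key values and endpoints are placeholders.
import boto3

# AWS S3: Access Key + Secret Key + Region is enough.
aws = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",       # Access Key ID from IAM
    aws_secret_access_key="...",       # Secret Access Key (shown only once)
    region_name="us-east-1",
)

# Non-AWS providers: same keys, plus an explicit Endpoint URL.
wasabi = boto3.client(
    "s3",
    aws_access_key_id="...",
    aws_secret_access_key="...",
    endpoint_url="https://s3.eu-central-1.wasabisys.com",
)

# The usual connection check: list the buckets this key can see.
print([b["Name"] for b in aws.list_buckets()["Buckets"]])
```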

What you can do with AWS S3 on CloudsLinker

Bucket-to-bucket copies across providers

Copy from S3 to Wasabi, B2, R2, GCS, Azure-compatible targets, or out to OneDrive / Google Drive / Dropbox. Server-to-server, zero egress through your machine.

Multipart upload with parallel parts

Large objects are split into 5 MB–5 GB parts and uploaded in parallel, enabling multi-TB copies without timeout.
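
CloudsLinker's exact part size and concurrency are internal; the sketch below shows the same pattern against the raw S3 API with illustrative numbers:

```python
# Hedged sketch of parallel multipart upload against the raw S3 API; the
# 64 MB part size and 8 workers are illustrative, not CloudsLinker's values.
import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client("s3")
BUCKET, KEY = "my-bucket", "big.bin"   # placeholders
PART_SIZE = 64 * 1024 * 1024           # within S3's 5 MB-5 GB part window

def upload_part(upload_id: str, part_no: int, data: bytes) -> dict:
    resp = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                          PartNumber=part_no, Body=data)
    return {"PartNumber": part_no, "ETag": resp["ETag"]}

upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]
futures, part_no = [], 1
with open("big.bin", "rb") as f, ThreadPoolExecutor(max_workers=8) as pool:
    # For brevity this submits every part up front; a production worker
    # would bound the in-flight queue to cap memory use.
    while chunk := f.read(PART_SIZE):
        futures.append(pool.submit(upload_part, upload_id, part_no, chunk))
        part_no += 1
parts = sorted((fut.result() for fut in futures), key=lambda p: p["PartNumber"])
s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                             MultipartUpload={"Parts": parts})
```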

Scheduled bucket sync

Hourly / daily / weekly jobs with delta mode — only new or changed objects are copied on each run.
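
A minimal sketch of the delta decision, assuming size plus LastModified as the change signal (ETags are unreliable across multipart uploads); bucket names and the endpoint are placeholders:

```python
# Delta-mode sketch: copy an object only when it is absent at the destination
# or appears changed. The destination's LastModified is the time of the last
# copy, so a source object modified after that (or differing in size) recopies.
import boto3

src = boto3.client("s3")
dst = boto3.client("s3", endpoint_url="https://s3.eu-central-1.wasabisys.com")

def listing(client, bucket):
    """Map key -> (size, last_modified) for cheap comparison."""
    out = {}
    for page in client.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            out[obj["Key"]] = (obj["Size"], obj["LastModified"])
    return out

have = listing(dst, "backup-bucket")
for key, (size, mtime) in listing(src, "prod-bucket").items():
    seen = have.get(key)
    if seen is None or seen[0] != size or mtime > seen[1]:
        print("would copy:", key)   # new or changed since last run
```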

Filter by prefix, size, modified date

Sync just s3://bucket/logs/2026/, skip files over 10 GB, or copy only objects modified in the last 7 days.
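
For reference, the same three filters expressed directly against a bucket listing with boto3 (thresholds taken from the examples above):

```python
# Sketch of the three filter types over a bucket listing.
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")
cutoff = datetime.now(timezone.utc) - timedelta(days=7)
TEN_GB = 10 * 1024**3

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="bucket", Prefix="logs/2026/"):
    for obj in page.get("Contents", []):
        if obj["Size"] > TEN_GB:           # skip files over 10 GB
            continue
        if obj["LastModified"] < cutoff:   # only the last 7 days
            continue
        print("matches filters:", obj["Key"])
```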

Common AWS S3 transfer scenarios

Cross-provider backup: S3 → Wasabi / Backblaze B2 / Cloudflare R2

S3 Replication is an AWS-to-AWS feature; a full-provider outage or account compromise takes both copies out at once. Schedule a CloudsLinker incremental sync from S3 to Wasabi ($6.99/TB, no egress) or B2 ($6/TB) for true provider-independent disaster recovery. Most multi-TB workloads run for $20–40/month in destination storage.

Escape AWS egress fees for static assets

Serving media from S3 over the open internet costs $0.09/GB after the first 100 GB each month. Mirror the bucket to Cloudflare R2 with CloudsLinker (R2 charges zero egress) and front it with Cloudflare's CDN to cut the bandwidth bill by 90%+ on high-traffic origins.
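
The arithmetic is straightforward: 10 TB of monthly egress at $0.09/GB is roughly $900/month from S3; served out of R2 behind Cloudflare's CDN, the per-GB egress line item drops to zero and you pay only R2 storage.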

Data residency: move EU data to a regional provider

Customers in regulated industries sometimes require data to leave AWS entirely. CloudsLinker copies EU-region buckets to Scaleway (France), IDrive e2 (EU region), or any self-hosted MinIO cluster in a single scheduled job, preserving folder structure and metadata.

Migrate S3 data to a different storage class

If you've been on S3 Standard for years and realized most data is cold, use CloudsLinker to copy the cold prefix to a Glacier Instant Retrieval bucket (or GCS Archive, or a cheaper provider), then delete it from the origin. Far more controllable than a naive Lifecycle rule.
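
CloudsLinker drives this from its job settings; for context, the equivalent raw operation is a server-side copy with an explicit storage class. A sketch (bucket and key names are placeholders):

```python
# Sketch: server-side copy of one object into a colder storage class.
# GLACIER_IR is the API name for S3 Glacier Instant Retrieval.
import boto3

s3 = boto3.client("s3")
s3.copy_object(
    Bucket="archive-bucket",
    Key="logs/2019/app.log",
    CopySource={"Bucket": "hot-bucket", "Key": "logs/2019/app.log"},
    StorageClass="GLACIER_IR",
)
# Note: copy_object handles objects up to 5 GB; larger objects need a
# multipart copy (upload_part_copy) instead.
```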

Ingest S3 buckets into Google Drive / OneDrive for human-friendly access

Developers love S3; business users want a folder view. CloudsLinker can copy a filtered subset of an S3 bucket (e.g. only reports/*.pdf) to a shared Google Drive folder on a nightly schedule, giving non-technical teams easy access without exposing bucket URLs.

How to connect AWS S3 (or any S3-compatible bucket) to CloudsLinker

S3 authenticates with Access Key + Secret Key. The same flow works for Wasabi, B2, R2, GCS, DigitalOcean, Storj, IDrive e2, Scaleway and MinIO — only the Endpoint / Region changes.

For AWS S3 specifically:

  1. Sign in to the AWS Console → IAM → Users.
  2. Create a dedicated user for CloudsLinker (e.g. cloudslinker-backup). Do not use your root account.
  3. Attach a policy scoped to the bucket(s) you want to transfer (a sketch of such a policy follows this list):
    • Minimum for a read-only source: s3:GetObject, s3:ListBucket on the specific bucket ARN.
    • For a destination / sync: also s3:PutObject, s3:DeleteObject, s3:AbortMultipartUpload.
  4. Under Security credentials → Create access key → choose Third-party service. Save the Access Key ID and Secret Access Key — AWS shows the secret only once.
  5. In CloudsLinker, click Add Cloud → AWS S3 → enter a display name, Access Key, Secret Key, and the bucket’s Region (e.g. us-east-1, eu-west-1).
  6. Click Confirm — CloudsLinker validates the credentials by calling ListBuckets and shows the connection as ready.
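
A sketch of the scoped policy from step 3, attached via boto3 (user, policy, and bucket names are placeholders):

```python
# Sketch of step 3: attach the minimum read-only source policy to the
# dedicated user. Bucket name and user name are placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # ListBucket applies to the bucket itself...
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::my-bucket",
        },
        {   # ...GetObject to the objects inside it.
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-bucket/*",
        },
        # For a destination/sync connection, also allow s3:PutObject,
        # s3:DeleteObject and s3:AbortMultipartUpload on my-bucket/*.
    ],
}

boto3.client("iam").put_user_policy(
    UserName="cloudslinker-backup",
    PolicyName="cloudslinker-s3-readonly",
    PolicyDocument=json.dumps(policy),
)
```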

For other S3-compatible providers:

The flow is the same, but you’ll also paste an Endpoint URL:

Provider | Endpoint
Wasabi | s3.<region>.wasabisys.com
Backblaze B2 | s3.<region>.backblazeb2.com
Cloudflare R2 | https://<account-id>.r2.cloudflarestorage.com
DigitalOcean Spaces | <region>.digitaloceanspaces.com
Storj | gateway.storjshare.io
Google Cloud Storage | storage.googleapis.com (with HMAC key)
IDrive e2 | <region>.idrivee2-XX.com
Scaleway | s3.<region>.scw.cloud

To revoke access: AWS Console → IAM → Users → select user → Security credentials → Deactivate or Delete the Access Key.

AWS S3 upload & download limits you should know

S3 is one of the most permissive object stores in the industry — the real limits are on request rates and request sizes, not storage volume. CloudsLinker’s multipart strategy is built around these limits (a tuning sketch follows the list):

  • Maximum single object size: 5 TB. Reached only via multipart upload.
  • Single PUT (non-multipart): 5 GB max. Beyond that, multipart is mandatory.
  • Multipart part size: 5 MB minimum, 5 GB maximum. Final part can be smaller than 5 MB.
  • Maximum parts per object: 10,000. Ten thousand parts of 500 MB each already reach the separate 5 TB object ceiling; at the 5 GB part maximum you hit it with just 1,000 parts.
  • AWS Console upload cap: 160 GB — but this is console-only. API / SDK / CloudsLinker are not affected.
  • Request rate per prefix: 3,500 PUT/POST/DELETE/sec and 5,500 GET/HEAD/sec. CloudsLinker spreads writes across prefixes in very-large jobs to avoid hitting the per-prefix ceiling.
  • Storage volume: effectively unlimited per bucket. No cap on number of objects or bytes per account.
  • Download: no account-level bandwidth cap. You pay egress per GB; CloudsLinker uses delta sync to minimize it.
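
These limits map directly onto SDK tuning knobs. A hedged boto3 sketch of the kind of configuration a transfer tool might use (the numbers are illustrative, not CloudsLinker's actual settings):

```python
# Sketch: tuning multipart behaviour against the published S3 limits.
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100 MB
    multipart_chunksize=64 * 1024 * 1024,   # part size within the 5 MB-5 GB window
    max_concurrency=8,                      # parallel parts, well under per-prefix rates
)

boto3.client("s3").upload_file("huge.bin", "my-bucket", "huge.bin", Config=config)
```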

Sources: Amazon S3 multipart upload limits, Uploading objects, Multipart upload overview.

AWS S3 + CloudsLinker — Frequently Asked Questions

Does CloudsLinker support all S3-compatible providers, not just AWS?

Yes. The same connector works for AWS S3, Wasabi, Backblaze B2, Cloudflare R2, DigitalOcean Spaces, IDrive e2, Storj, Google Cloud Storage (via HMAC keys), Scaleway, and any custom S3-compatible endpoint (MinIO, Ceph, self-hosted). You provide Access Key + Secret + Region / Endpoint.

What's the largest single object I can transfer?

S3's hard ceiling is 5 TB per object. CloudsLinker uses multipart upload with 5 MB–5 GB parts (up to 10,000 parts per object) to reach that ceiling without timeouts. A single PUT is capped at 5 GB — anything larger automatically switches to multipart.

Are my S3 credentials stored securely?

Access Key and Secret Key are encrypted at rest with AES-256 and only decrypted inside the transfer worker when running a job. Credentials are never exposed in logs or UI. For extra safety, create an IAM user scoped to the specific bucket(s) with minimum required permissions — not a root account key.

How do I avoid AWS egress fees when backing up to another cloud?

Egress is billed per GB leaving AWS. Strategies: (1) choose a destination with $0 ingress (most do, including Wasabi, B2, R2, GCS); (2) use CloudsLinker's delta sync so subsequent runs only copy changed objects; (3) filter out objects you don't actually need to replicate. Initial full copy still costs egress, but incremental runs keep it minimal.

Does CloudsLinker preserve object metadata, storage class, and tags?

User metadata (x-amz-meta-*) and content headers are preserved cross-provider. Storage class is provider-specific — CloudsLinker maps to the closest equivalent on the destination (e.g. S3 Standard → Wasabi Standard, S3 Glacier Instant Retrieval → B2 Standard if destination has no archive tier). Object tags are preserved on AWS-to-AWS copies.

Can I copy a bucket to a different AWS region?

Yes — add the source and destination buckets as two separate connections, each with its own region, and run a copy job between them. For same-account cross-region replication, native S3 Replication is often cheaper; CloudsLinker wins when the destination is a different account or provider.

What about versioned objects?

By default, CloudsLinker copies the current version of each object. To migrate full version history, enable the 'include versions' option in the job settings; this enumerates all versions and copies them in order.
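
Under the hood, migrating history means walking the versions listing rather than the objects listing. A minimal boto3 sketch:

```python
# Sketch: enumerate every version of every object. S3 returns versions
# newest-first per key; a migrator would reverse them before replaying,
# and delete markers are reported in a separate DeleteMarkers list.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket="my-bucket"):
    for v in page.get("Versions", []):
        print(v["Key"], v["VersionId"], v["IsLatest"])
```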

Will CloudsLinker work with buckets that have KMS encryption enabled?

Yes, as long as the IAM user used by CloudsLinker has kms:Decrypt on the source key for GET operations, and kms:GenerateDataKey (plus kms:Decrypt, for multipart uploads) on the destination key for PUT. Data is re-encrypted at the destination with its own key.
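
For context, this is what the destination-side write looks like at the API level; the key ARN is a placeholder:

```python
# Sketch: re-encrypting at the destination with its own KMS key. The
# writing identity needs kms:GenerateDataKey on this key.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="dest-bucket",
    Key="data.bin",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
)
```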

Does the AWS Console file size limit apply?

No. The 160 GB console upload limit only applies to uploads done through the AWS web console. CloudsLinker uses the REST API / SDK, which reaches the full 5 TB object ceiling via multipart upload.

Can I run a cross-account S3 migration?

Yes. Connect each AWS account as a separate bucket source / destination with its own Access Key. CloudsLinker handles the assume-role / cross-account auth within each connection.

AWS S3 transfer guides

Step-by-step walkthroughs for moving data to and from AWS S3.

Conclusion

S3 is the universal object-storage API, but staying inside the AWS ecosystem for backup or migration means paying AWS prices and concentrating risk. CloudsLinker makes cross-provider S3 transfers as easy as cross-region — with multipart upload, delta sync, and storage-class-aware copies. Connect your bucket once and run scheduled migrations, backups, or full-scale provider switches from the browser.

Online storage services supported by CloudsLinker

Transfer data between over 49 cloud services with CloudsLinker

OneDrive, Google Drive, Google Photos, Shared Drive, OneDrive for Business, Dropbox, Box, Mega, pCloud, Yandex, ProtonDrive, AWS, GCS, iDrive, Storj, DigitalOcean, Wasabi, 1fichier, PikPak, TeleBox, OpenDrive, Backblaze B2, Fastmail file, SharePoint, Nextcloud, ownCloud, Premiumize.me, HiDrive, Put.io, SugarSync, Jottacloud, Seafile, FTP, SFTP, NAS, WebDAV, 4shared, Icedrive, Cloudflare R2, Scaleway, Doi, iCloud Drive, iCloud Photos, FileLU, Zoho WorkDrive, Telia Cloud / Sky, Drime, Filen, TeraBox

Didn't find your cloud service? Contact: [email protected]