What is AWS S3?
Amazon S3 (Simple Storage Service) is the industry-standard object store — the backbone of millions of production workloads, from static sites and data lakes to backup archives and AI training corpora. It exposes an HTTP/S API that dozens of other object stores now replicate (Backblaze B2, Wasabi, Cloudflare R2, MinIO, Google Cloud Storage, Storj, IDrive e2), making S3 the de facto interchange format for cloud objects.
Moving data between S3 and another object store — or out to consumer clouds like Google Drive / Dropbox / OneDrive — is where CloudsLinker shines. Native AWS tools (the S3 CLI, DataSync, Storage Gateway) are powerful but AWS-to-AWS-centric and priced accordingly. CloudsLinker runs cloud-to-cloud copies over the S3 API with multipart upload, parallel part transfer, and automatic throttling — letting you move data to cheaper providers like Wasabi (no egress fees) or B2, or replicate to a completely different provider for disaster recovery.
Key features of AWS S3
- 11 nines (99.999999999%) of durability
- Storage classes from Standard to Glacier Deep Archive
- Lifecycle policies, versioning, and replication
- IAM-scoped access control
Why connect AWS S3 to CloudsLinker
CloudsLinker connects to any S3-compatible bucket with Access Key ID + Secret Access Key + Region (or Endpoint, for non-AWS providers). Multipart uploads are used for objects above 100 MB. The same connector works for AWS S3, Wasabi, Backblaze B2, Cloudflare R2, DigitalOcean Spaces, IDrive e2, Storj, Google Cloud Storage (via HMAC), MinIO, and any custom S3-compatible endpoint.
What you can do with AWS S3 on CloudsLinker
Bucket-to-bucket copies across providers
Copy from S3 to Wasabi, B2, R2, GCS, Azure-compatible targets, or out to OneDrive / Google Drive / Dropbox. Transfers run server-to-server; no data passes through your machine.
Multipart upload with parallel parts
Large objects are split into 5 MB–5 GB parts and uploaded in parallel, enabling multi-TB copies without timeouts (a boto3 sketch of the same pattern follows this list).
Scheduled bucket sync
Hourly / daily / weekly jobs with delta mode — only new or changed objects are copied on each run.
Filter by prefix, size, modified date
Sync just `s3://bucket/logs/2026/`, skip files > 10 GB, or copy only objects modified in the last 7 days.
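CloudsLinker applies this multipart strategy server-side, but the same pattern is easy to reproduce with boto3 if you want to verify behavior against your own bucket. A minimal sketch; the bucket and file names below are placeholders:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Mirror the strategy above: switch to multipart beyond a threshold,
# split into fixed-size parts, and upload the parts in parallel.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # multipart above 100 MB
    multipart_chunksize=64 * 1024 * 1024,   # 64 MB parts (S3 allows 5 MB - 5 GB)
    max_concurrency=8,                      # parts in flight at once
)

s3 = boto3.client("s3")  # credentials come from the environment / AWS config
s3.upload_file("backup.tar", "my-bucket", "backups/backup.tar", Config=config)
```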
Common AWS S3 transfer scenarios
Cross-provider backup: S3 → Wasabi / Backblaze B2 / Cloudflare R2
S3 Replication is an AWS-to-AWS feature; a full-provider outage or account compromise takes both copies out at once. Schedule a CloudsLinker incremental sync from S3 to Wasabi ($6.99/TB/month, no egress) or B2 ($6/TB/month) for true provider-independent disaster recovery. Most multi-TB workloads run for $20–40/month in destination storage.
Escape AWS egress fees for static assets
Serving media from S3 over the open internet costs $0.09/GB after the first 100 GB/month. Mirror the bucket to Cloudflare R2 with CloudsLinker (R2 has zero egress fees) and front it with Cloudflare's CDN — you'll cut your bandwidth bill by 90%+ on high-traffic origins; at 10 TB/month of egress, that's roughly $900/month back.
Data residency: move EU data to a regional provider
Customers in regulated industries sometimes require data to leave AWS entirely. CloudsLinker copies EU-region buckets to Scaleway (France), IDrive e2 (EU region), or any self-hosted MinIO cluster in a single scheduled job, preserving folder structure and metadata.
Migrate from legacy S3 to S3 with different storage class
If you've been on S3 Standard for years and realized most of your data is cold, use CloudsLinker to copy the cold prefix to a Glacier Instant Retrieval bucket (or GCS Archive, or a cheaper provider), then delete it from the origin. Far more controllable than a naive Lifecycle rule.
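For a sense of what this maps to at the API level, here is a hedged boto3 sketch of a single storage-class-changing copy; the bucket and key names are placeholders, and CloudsLinker does the per-object iteration for you:

```python
import boto3

s3 = boto3.client("s3")

# Copy one cold object into Glacier Instant Retrieval on the destination.
# Note: CopyObject handles objects up to 5 GB in one call; larger objects
# need a multipart copy, which managed tools perform automatically.
s3.copy_object(
    CopySource={"Bucket": "prod-archive", "Key": "logs/2019/app.log"},
    Bucket="cold-archive",
    Key="logs/2019/app.log",
    StorageClass="GLACIER_IR",  # S3 Glacier Instant Retrieval
)
```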
Ingest S3 buckets into Google Drive / OneDrive for human-friendly access
Developers love S3; business users want a folder view. CloudsLinker can copy a filtered subset of an S3 bucket (e.g. only `reports/*.pdf`) to a shared Google Drive folder on a nightly schedule, giving non-technical teams easy access without exposing bucket URLs (the sketch below shows what such a filter means in raw S3 terms).
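To make the filter semantics concrete, this is roughly what a prefix + suffix + modified-date filter corresponds to against the plain S3 API. A minimal sketch, assuming a placeholder bucket name:

```python
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")
cutoff = datetime.now(timezone.utc) - timedelta(days=7)

# List everything under reports/ and keep only recent PDFs.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket", Prefix="reports/"):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith(".pdf") and obj["LastModified"] >= cutoff:
            print(obj["Key"], obj["Size"])  # candidates for the nightly copy
```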
How to connect AWS S3 (or any S3-compatible bucket) to CloudsLinker
S3 authenticates with Access Key + Secret Key. The same flow works for Wasabi, B2, R2, GCS, DigitalOcean, Storj, IDrive e2, Scaleway and MinIO — only the Endpoint / Region changes.
For AWS S3 specifically:
- Sign in to the AWS Console → IAM → Users.
- Create a dedicated user for CloudsLinker (e.g. `cloudslinker-backup`). Do not use your root account.
- Attach a policy scoped to the bucket(s) you want to transfer (a sample policy is sketched after these steps):
  - Minimum for a read-only source: `s3:GetObject`, `s3:ListBucket` on the specific bucket ARN.
  - For a destination / sync target: also `s3:PutObject`, `s3:DeleteObject`, `s3:AbortMultipartUpload`.
- Under Security credentials → Create access key → choose Third-party service. Save the Access Key ID and Secret Access Key — AWS shows the secret only once.
- In CloudsLinker, click Add Cloud → AWS S3 → enter a display name, Access Key, Secret Key, and the bucket's Region (e.g. `us-east-1`, `eu-west-1`).
- Click Confirm — CloudsLinker validates the credentials by calling `ListBuckets` and shows the connection as ready.
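The scoped policy from step 3 can be built in the console's visual editor or programmatically. A minimal read-only sketch using boto3, with a placeholder bucket and policy name; note that `s3:ListBucket` attaches to the bucket ARN itself, while `s3:GetObject` attaches to the objects under it:

```python
import json
import boto3

BUCKET = "my-transfer-bucket"  # placeholder: the bucket CloudsLinker will read

# Read-only source policy; for a destination bucket, also allow
# s3:PutObject, s3:DeleteObject, and s3:AbortMultipartUpload on BUCKET/*.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:ListBucket"],
         "Resource": f"arn:aws:s3:::{BUCKET}"},
        {"Effect": "Allow", "Action": ["s3:GetObject"],
         "Resource": f"arn:aws:s3:::{BUCKET}/*"},
    ],
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="cloudslinker-readonly",
                  PolicyDocument=json.dumps(policy))
```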
For other S3-compatible providers:
The flow is the same, but you'll also paste an Endpoint URL (a connection sketch follows the table):
| Provider | Endpoint |
|---|---|
| Wasabi | s3.<region>.wasabisys.com |
| Backblaze B2 | s3.<region>.backblazeb2.com |
| Cloudflare R2 | https://<account-id>.r2.cloudflarestorage.com |
| DigitalOcean Spaces | <region>.digitaloceanspaces.com |
| Storj | gateway.storjshare.io |
| Google Cloud Storage | storage.googleapis.com (with HMAC key) |
| IDrive e2 | <region>.idrivee2-XX.com |
| Scaleway | s3.<region>.scw.cloud |
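All of these speak the standard AWS SDK dialect, so the endpoint swap is the only real difference. A minimal boto3 sketch against Wasabi, with placeholder keys and region:

```python
import boto3

# Same S3 API, different endpoint: only endpoint_url and the region change.
wasabi = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-1.wasabisys.com",  # from the table above
    region_name="eu-central-1",
    aws_access_key_id="WASABI_ACCESS_KEY",      # placeholder
    aws_secret_access_key="WASABI_SECRET_KEY",  # placeholder
)
print([b["Name"] for b in wasabi.list_buckets()["Buckets"]])
```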
To revoke access: AWS Console → IAM → Users → select user → Security credentials → Deactivate or Delete the Access Key.
AWS S3 upload & download limits you should know
S3 is one of the most permissive object stores in the industry — the real limits are on request rates and request sizes, not storage volume. CloudsLinker’s multipart strategy is built around these:
- Maximum single object size: 5 TB. Reached only via multipart upload.
- Single PUT (non-multipart): 5 GB max. Beyond that, multipart is mandatory.
- Multipart part size: 5 MB minimum, 5 GB maximum. Final part can be smaller than 5 MB.
- Maximum parts per object: 10,000. Combined with the 5 GB max part size, that's the path to the 5 TB total (sanity-checked in the sketch after this list).
- AWS Console upload cap: 160 GB — but this is console-only. API / SDK / CloudsLinker are not affected.
- Request rate per prefix: 3,500 PUT/COPY/POST/DELETE/sec and 5,500 GET/HEAD/sec. CloudsLinker spreads writes across prefixes in very large jobs to avoid hitting the per-prefix ceiling.
- Storage volume: effectively unlimited per bucket. No cap on number of objects or bytes per account.
- Download: no account-level bandwidth cap. You pay egress per GB; CloudsLinker uses delta sync to minimize it.
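The part-size and part-count limits interact; here is a quick sanity check of the 5 TB path in plain Python (pure arithmetic, no AWS calls):

```python
MAX_PARTS = 10_000
MIN_PART = 5 * 1024**2   # 5 MiB minimum part size
MAX_PART = 5 * 1024**3   # 5 GiB maximum part size

def min_part_size(object_bytes: int) -> int:
    """Smallest part size that fits the object within 10,000 parts."""
    needed = -(-object_bytes // MAX_PARTS)  # ceiling division
    return max(needed, MIN_PART)

five_tb = 5 * 1024**4
part = min_part_size(five_tb)
assert part <= MAX_PART   # 5 TiB is reachable within the limits
print(part / 1024**2)     # ~524.3 MiB per part
```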
Sources: Amazon S3 multipart upload limits, Uploading objects, Multipart upload overview.
AWS S3 + CloudsLinker — Frequently Asked Questions
Does CloudsLinker support all S3-compatible providers, not just AWS?
Yes. The same connector covers Wasabi, Backblaze B2, Cloudflare R2, DigitalOcean Spaces, IDrive e2, Storj, Google Cloud Storage (via HMAC), MinIO, and any custom S3-compatible endpoint; only the Endpoint / Region changes.
What's the largest single object I can transfer?
5 TB, the S3 maximum. CloudsLinker uses multipart upload automatically, so large objects never hit the 5 GB single-PUT ceiling.
Are my S3 credentials stored securely?
How do I avoid AWS egress fees when backing up to another cloud?
You can't eliminate the egress AWS charges on the way out, but delta sync means you pay it once per object: after the initial copy, scheduled runs transfer only new or changed data. Choosing a destination with no egress fees (Wasabi, Cloudflare R2) avoids charges on the other side going forward.
Does CloudsLinker preserve object metadata, storage class, and tags?
Yes. Custom metadata (`x-amz-meta-*`) and content headers are preserved cross-provider. Storage class is provider-specific — CloudsLinker maps to the closest equivalent on the destination (e.g. S3 Standard → Wasabi Standard, S3 Glacier Instant Retrieval → B2 Standard if the destination has no archive tier). Object tags are preserved on AWS-to-AWS copies.
Can I copy a bucket to a different AWS region?
Yes. The Region is set per connection, so add the source and destination buckets as two connections and run a standard bucket-to-bucket copy.
What about versioned objects?
Will CloudsLinker work with buckets that have KMS encryption enabled?
Yes, provided the IAM user has `kms:Decrypt` on the relevant CMK for GET operations, and `kms:Encrypt` on the destination CMK for PUT. Data is re-encrypted at the destination with its own key.
Does the AWS Console file size limit apply?
No. The 160 GB cap applies only to uploads through the AWS Console; CloudsLinker works over the S3 API, where multipart upload takes a single object to 5 TB.
Can I run a cross-account S3 migration?
Yes. Add each AWS account as its own connection with its own access key, then copy bucket-to-bucket between the two connections.
AWS S3 transfer guides
Step-by-step walkthroughs for moving data to and from AWS S3.
Conclusion
S3 is the universal object-storage API, but staying inside the AWS ecosystem for backup or migration means paying AWS prices and concentrating risk. CloudsLinker makes cross-provider S3 transfers as easy as cross-region — with multipart upload, delta sync, and storage-class-aware copies. Connect your bucket once and run scheduled migrations, backups, or full-scale provider switches from the browser.
Online storage services supported by CloudsLinker
Transfer data between over 49 cloud services with CloudsLinker
Didn't find your cloud service? Contact: [email protected]