What is Google Cloud Storage?
Google Cloud Storage (GCS) is the object-storage tier of Google Cloud Platform — the same infrastructure that backs Google Drive, Google Photos, YouTube, and Gmail at the byte level. GCS offers four primary storage classes: Standard (hot, no minimum retention), Nearline ($0.01/GB retrieval, 30-day minimum), Coldline ($0.02/GB retrieval, 90-day minimum), and Archive ($0.05/GB retrieval, 365-day minimum). Each bucket can mix all four classes via Object Lifecycle Management, automatically tiering objects from Standard down to Archive based on access patterns. GCS exposes both a native JSON API and a fully **S3-compatible XML API via HMAC keys**, making it a drop-in destination for S3-aware tools — including CloudsLinker.
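The lifecycle tiering described above is configured as a JSON policy attached to the bucket. A minimal sketch, with age thresholds chosen here to mirror each class's minimum-retention period (the thresholds are illustrative; the JSON shape is what the GCS JSON API accepts and tools like gcloud can apply):

```python
import json

# Lifecycle policy that steps objects down by age:
# Standard -> Nearline at 30 days -> Coldline at 90 -> Archive at 365.
lifecycle = {
    "rule": [
        {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
         "condition": {"age": 30}},
        {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
         "condition": {"age": 90}},
        {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
         "condition": {"age": 365}},
    ]
}

# Serialized form, ready to save to a file and apply to a bucket.
policy_json = json.dumps(lifecycle, indent=2)
```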
What sets GCS apart from AWS S3, Wasabi, B2, and R2 is the deeper integration with Google's analytics and ML stack — BigQuery reads directly from GCS without ingest, Vertex AI training jobs pull from GCS at line rate, and Dataflow / Pub/Sub pipelines write to GCS as their canonical sink. The trade-off is that GCS pricing is closer to AWS S3 (~$0.020/GB Standard, plus egress fees of $0.01–$0.12/GB) than to discount S3-clones. CloudsLinker connects via HMAC keys (the S3-compatible auth path), which works with both user-account and service-account credentials. Use GCS as a destination for BigQuery-bound pipelines, or migrate from GCS to cheaper providers (Wasabi, B2, R2) when egress patterns shift.
Key features of Google Cloud Storage
Why connect Google Cloud Storage to CloudsLinker
CloudsLinker connects to Google Cloud Storage using GCS HMAC keys (Access ID + Secret) and the universal endpoint storage.googleapis.com. The connection works with HMAC keys tied to either a user account (24-character access ID) or a service account (61-character access ID); service-account HMAC keys are recommended for production migrations because they survive user offboarding. Transfers run server-to-server via multipart upload with up to 10,000 parts per object, with automatic storage-class targeting on the destination side.
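The 10,000-part ceiling constrains how a client has to size parts for large objects. A minimal sketch of that arithmetic, assuming the 5 MiB minimum and 5 GiB maximum part sizes listed in the limits section of this page (the 64 MiB default here is an illustrative choice, not a documented CloudsLinker setting):

```python
import math

MAX_PARTS = 10_000
MIN_PART = 5 * 1024**2   # 5 MiB minimum part size
MAX_PART = 5 * 1024**3   # 5 GiB maximum part size

def plan_parts(object_size: int, preferred: int = 64 * 1024**2):
    """Pick a multipart part size that keeps the part count within 10,000.

    Returns (part_size, part_count). The part size grows past the
    preferred value only when the object is too large for 10,000
    preferred-size parts.
    """
    part = max(preferred, MIN_PART, math.ceil(object_size / MAX_PARTS))
    if part > MAX_PART:
        raise ValueError("object exceeds what 10,000 max-size parts can hold")
    count = max(1, math.ceil(object_size / part))
    return part, count
```

For a 5 TiB object (the GCS maximum), the part size is pushed up to roughly 525 MiB so the transfer still fits in 10,000 parts.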
What you can do with Google Cloud Storage on CloudsLinker
S3-compatible HMAC migration
Migrate from AWS S3, Wasabi, B2, or R2 into GCS at full multipart speed. The same S3 client CloudsLinker uses for AWS works for GCS via HMAC keys.
Server-to-server multipart
Up to 10,000 parts per object, parallel-uploaded for maximum throughput. No data flows through your machine.
Scheduled & incremental sync
Hourly / daily / weekly with delta mode. Useful for keeping a GCS bucket as the canonical source for BigQuery while replicating to S3 for cross-cloud DR.
Filter by prefix, size, modified date
Migrate only <code>gs://bucket/year=2026/</code>, skip files > 1 TB, or sync only this week's writes.
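Prefix, size, and date rules like the ones above compose naturally into a single keep/skip predicate. A sketch of that composition (the function and field names are hypothetical, not CloudsLinker's actual API):

```python
from datetime import datetime, timedelta, timezone

def make_filter(prefix=None, max_size=None, modified_after=None):
    """Build an object filter from optional prefix / size / date rules.

    The returned predicate takes (key, size_bytes, last_modified) and
    combines all supplied criteria with AND.
    """
    def keep(key, size, last_modified):
        if prefix is not None and not key.startswith(prefix):
            return False
        if max_size is not None and size > max_size:
            return False
        if modified_after is not None and last_modified < modified_after:
            return False
        return True
    return keep

# Example: only objects under year=2026/, under 1 TB, written this week.
week_ago = datetime.now(timezone.utc) - timedelta(days=7)
keep = make_filter(prefix="year=2026/", max_size=10**12,
                   modified_after=week_ago)
```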
Common Google Cloud Storage transfer scenarios
Migrate AWS S3 → GCS to feed BigQuery / Vertex AI
Companies running data analytics on Google Cloud often need their existing S3 data lake mirrored into GCS for BigQuery and Vertex AI. CloudsLinker copies bucket-by-bucket from S3 to GCS via HMAC keys — typical 10 TB seed completes in 1–2 days, then nightly delta keeps GCS current. AWS egress fees apply on the source side, so filter aggressively to avoid replicating data you don't actually query.
Migrate GCS → Cloudflare R2 to escape egress fees
If your GCS bucket serves > $200/month in public egress, R2's true zero-egress model often cuts the bill by 90%. CloudsLinker copies the bucket into R2 in a single multipart-parallel job. Storage actually costs slightly less on R2 ($15/TB vs ~$20/TB on GCS Standard), and the egress savings flip the math instantly.
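The arithmetic behind that claim, using the rates quoted on this page (the $0.08/GB egress figure is an illustrative mid-band pick from the $0.01–$0.12 range, not a fixed GCS rate):

```python
def monthly_cost(storage_tb, egress_tb, storage_per_tb, egress_per_gb):
    """Monthly storage + egress bill in USD. Rates are illustrative."""
    return storage_tb * storage_per_tb + egress_tb * 1024 * egress_per_gb

# 10 TB stored, 5 TB/month served to the public internet.
gcs = monthly_cost(10, 5, 20.0, 0.08)   # GCS Standard, mid-band egress
r2  = monthly_cost(10, 5, 15.0, 0.0)    # R2: cheaper storage, zero egress
```

At this access pattern egress dominates: the GCS bill is roughly four times the R2 bill even before the small storage-price difference.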
Cross-cloud DR: GCS → AWS S3 or Wasabi
Provider-independent DR matters even on GCP. Schedule a CloudsLinker incremental from your primary GCS bucket to Wasabi ($6.99/TB) or AWS S3 in a different region. Object Lock is available on Wasabi for ransomware-resistant immutability.
Tier-down: GCS Standard → Coldline / Archive (or off-platform)
GCS Lifecycle Management can auto-tier within GCS, but for truly cold data the math deserves a second look: B2 ($6/TB) costs more per stored TB than GCS Archive ($1.20/TB), yet Archive's $0.05/GB retrieval fee and 365-day minimum retention can make B2 the cheaper home for cold data you occasionally need to restore. CloudsLinker filters by modification date and offloads cold prefixes.
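The break-even works out roughly like this, using the per-TB rates quoted on this page. B2's own download fees are ignored for simplicity (a stated simplification, not a claim from this page), so treat it as a sketch rather than a quote:

```python
def yearly_cost_archive(stored_tb, read_tb):
    """GCS Archive: $1.20/TB-month storage + $0.05/GB retrieval."""
    return 12 * 1.20 * stored_tb + read_tb * 1024 * 0.05

def yearly_cost_b2(stored_tb, read_tb):
    """B2: $6/TB-month storage; B2 download fees ignored in this sketch."""
    return 12 * 6.00 * stored_tb

# Archive wins decisively for 1 TB you never touch...
never_read = (yearly_cost_archive(1, 0), yearly_cost_b2(1, 0))
# ...but reading back ~1.2 TB/year already flips the comparison.
read_back = (yearly_cost_archive(1, 1.2), yearly_cost_b2(1, 1.2))
```

Under these assumptions the crossover sits at just over one full read-back per year, before even counting Archive's early-delete exposure.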
Replicate between two GCP projects or regions
Cross-project bucket replication usually requires custom IAM and Storage Transfer Service. CloudsLinker connects each project's bucket as a separate cloud via project-scoped HMAC keys, then runs cross-project copies with full audit logging.
How to connect Google Cloud Storage to CloudsLinker
GCS authenticates with S3-compatible HMAC keys plus the universal endpoint storage.googleapis.com.
Before you start
Generate an HMAC key on a service account scoped to the specific bucket(s) — never use a user-account HMAC key for production migrations:
- Sign in to the Google Cloud Console at https://console.cloud.google.com.
- Choose the GCP project that owns the target bucket.
- Create or pick a service account under IAM & Admin → Service Accounts. Recommended name: cloudslinker-migrate@<project>.iam.gserviceaccount.com.
- Grant the service account the minimum IAM permissions: roles/storage.objectViewer (read-only source) or roles/storage.objectAdmin (read + write destination), scoped to specific buckets via IAM Conditions.
- Go to Cloud Storage → Settings → Interoperability in the left navigation.
- Under Service account HMAC, click Create a key for a service account → select the service account → Create key.
- Copy the Access ID (61 characters) and Secret (40-character base64). Google shows the secret only once.
Connection steps
- In CloudsLinker, click Add Cloud → choose Google Cloud Storage.
- Enter a display name (e.g. “GCS — production-data-lake”).
- Paste the Access ID and Secret you copied from the Interoperability page above.
- Enter the Endpoint: storage.googleapis.com (universal — no per-region variant).
- Click Confirm — CloudsLinker validates the credentials with a ListBuckets call and shows the connection as ready.
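Under the hood, the Access ID/Secret pair behaves like an S3 key pair. A sketch of the legacy V2-style request signature that S3-compatible endpoints accept — real clients (including whatever CloudsLinker uses) typically sign with the newer V4 scheme, and the credentials below are placeholders, not real keys:

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

def sign_v2(secret: str, method: str, resource: str, date: str) -> str:
    """Legacy S3/GCS V2 signature: base64(HMAC-SHA1(secret, string-to-sign)).

    For a simple request the string-to-sign is:
        METHOD \n Content-MD5 \n Content-Type \n Date \n resource
    (Content-MD5 and Content-Type left empty here.)
    """
    string_to_sign = f"{method}\n\n\n{date}\n{resource}"
    digest = hmac.new(secret.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Placeholder credentials — real ones come from the Interoperability page.
access_id = "GOOG1EXAMPLEACCESSID"
secret = "examplesecretexamplesecretexampleseGOOG1"
date = formatdate(usegmt=True)
auth_header = f"AWS {access_id}:{sign_v2(secret, 'GET', '/', date)}"
```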
Revoke access
To revoke CloudsLinker’s HMAC key later: Cloud Console → Cloud Storage → Settings → Interoperability → find the key → Set inactive or Delete. Or delete the entire service account if no longer needed.
Google Cloud Storage upload & download limits you should know
GCS limits largely mirror S3's, with Google-specific pricing tiers and minimum-retention rules per storage class:
- Maximum object size: 5 TiB (~5.5 TB).
- Maximum multipart parts per object: 10,000. Part size 5 MB minimum, 5 GB maximum.
- Minimum retention durations (per storage class):
- Standard: none. Delete after 1 second, billed for 1 second.
- Nearline: 30 days. Early-delete charged.
- Coldline: 90 days. Early-delete charged.
- Archive: 365 days. Early-delete charged.
- Retrieval fees (per storage class):
- Standard: free.
- Nearline: $0.01/GB.
- Coldline: $0.02/GB.
- Archive: $0.05/GB.
- Storage pricing (US multi-region, Standard): ~$0.020/GB/month (~$20/TB). Other regions and classes vary.
- Egress fees: $0.01–$0.12/GB depending on destination region and bandwidth tier (Premium vs Standard). Free egress to other GCP services in the same region.
- API rate limits: GCS scales request rate with bucket usage; the default starts around 1,000 ops/sec per bucket and ramps with sustained load. CloudsLinker handles 503 and 429 responses with back-off.
- HMAC keys per service account: up to 10 active keys at a time.
- Bucket / object count: practically unlimited.
- Versioning: supported via S3-compatible API.
- Object Lock equivalent: Bucket Lock + retention policies provide WORM-style immutability.
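The minimum-retention rules above translate into an early-delete charge: deleting an object before its class minimum still bills the remaining days. A small sketch of that rule:

```python
# Minimum retention per storage class, in days (from the list above).
MIN_RETENTION_DAYS = {"standard": 0, "nearline": 30,
                      "coldline": 90, "archive": 365}

def billable_days(storage_class: str, age_days: float) -> float:
    """Days of storage billed when an object of the given age is deleted.

    Early deletion bills the full class minimum, so the billable figure
    is never less than the minimum retention period.
    """
    return max(age_days, MIN_RETENTION_DAYS[storage_class])
```

For example, deleting a Nearline object after 10 days still bills 30 days of Nearline storage.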
Sources: Google Cloud: Storage pricing, Cloud Storage: Quotas & limits, Cloud Storage: Object uploads, Cloud Storage: Interoperability with S3.
Google Cloud Storage + CloudsLinker — Frequently Asked Questions
Why does CloudsLinker use HMAC keys instead of GCS native authentication?
What's the largest object I can transfer?
Should I create the HMAC key for a user account or a service account?
How are GCS storage classes handled during migration?
Will I pay egress fees during a GCS-out migration?
Are my HMAC keys safe with CloudsLinker?
Does CloudsLinker support bucket versioning and object lifecycle policies?
Can I migrate data into GCS from non-cloud sources (FTP, NAS)?
How does this compare to GCP's Storage Transfer Service?
Is this an official Google Cloud partnership?
Google Cloud Storage transfer guides
Step-by-step walkthroughs for moving data to and from Google Cloud Storage.
Conclusion
Google Cloud Storage is the canonical bucket for any team running BigQuery, Vertex AI, or Dataflow — and the right destination when you want bytes near Google's analytics and ML stack. CloudsLinker treats GCS as a first-class S3-compatible target via HMAC keys, with multipart-parallel transfers and storage-class-aware copies. Connect with an HMAC key and the storage.googleapis.com endpoint, and run your first migration in minutes.
Online storage services supported by CloudsLinker
Transfer data between over 48 cloud services with CloudsLinker
Didn't find your cloud service? Contact: [email protected]