What is Google Cloud Storage?

Google Cloud Storage (GCS) is the object-storage tier of Google Cloud Platform, built on the same underlying infrastructure that powers Google Drive, Google Photos, YouTube, and Gmail. GCS offers four primary storage classes: Standard (hot, no minimum retention), Nearline ($0.01/GB retrieval, 30-day minimum), Coldline ($0.02/GB retrieval, 90-day minimum), and Archive ($0.05/GB retrieval, 365-day minimum). Each bucket can mix all four classes via Object Lifecycle Management, automatically tiering objects from Standard down to Archive based on rules such as object age. GCS exposes both a native JSON API and a fully **S3-compatible XML API via HMAC keys**, making it a drop-in destination for S3-aware tools, including CloudsLinker.

What sets GCS apart from AWS S3, Wasabi, B2, and R2 is its deeper integration with Google's analytics and ML stack: BigQuery reads directly from GCS without ingest, Vertex AI training jobs pull from GCS at line rate, and Dataflow / Pub/Sub pipelines write to GCS as their canonical sink. The trade-off is that GCS pricing is closer to AWS S3 (~$0.020/GB Standard, plus egress fees of $0.01–$0.12/GB) than to discount S3 clones. CloudsLinker connects via HMAC keys (the S3-compatible auth path), which work with both user-account and service-account credentials. Use GCS as a destination for BigQuery-bound pipelines, or migrate from GCS to cheaper providers (Wasabi, B2, R2) when egress patterns shift.

Key features of Google Cloud Storage

Why connect Google Cloud Storage to CloudsLinker

CloudsLinker connects to Google Cloud Storage using GCS HMAC keys (Access ID + Secret) and the universal endpoint storage.googleapis.com. The connection works with HMAC keys tied to either a user account (24-character access ID) or a service account (61-character access ID); service-account HMAC keys are recommended for production migrations because they survive user offboarding. Transfers run server-to-server via multipart upload with up to 10,000 parts per object, with automatic storage-class targeting on the destination side.
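
Because the endpoint speaks the S3 XML protocol, any S3 client can sanity-check the same HMAC credentials before you hand them to CloudsLinker. A minimal sketch with the AWS CLI; the bucket name and credentials below are placeholders, not real values:

```shell
# Placeholder credentials: substitute your HMAC Access ID and Secret.
export AWS_ACCESS_KEY_ID="GOOG1EXAMPLEACCESSID"
export AWS_SECRET_ACCESS_KEY="exampleSecretShownOnceAtCreation"

# List buckets through GCS's S3-compatible XML API.
aws s3 ls --endpoint-url https://storage.googleapis.com

# Copy a test object into a bucket (placeholder bucket name).
aws s3 cp ./test.txt s3://my-gcs-bucket/test.txt \
  --endpoint-url https://storage.googleapis.com
```

If the `ls` call returns your bucket list, the same Access ID, Secret, and endpoint will work in CloudsLinker's connection dialog.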

What you can do with Google Cloud Storage on CloudsLinker

S3-compatible HMAC migration

Migrate from AWS S3, Wasabi, B2, or R2 into GCS at full multipart speed. The same S3 client CloudsLinker uses for AWS works for GCS via HMAC keys.

Server-to-server multipart

Up to 10,000 parts per object, parallel-uploaded for maximum throughput. No data flows through your machine.

Scheduled & incremental sync

Hourly / daily / weekly with delta mode. Useful for keeping a GCS bucket as the canonical source for BigQuery while replicating to S3 for cross-cloud DR.

Filter by prefix, size, modified date

Migrate only `gs://bucket/year=2026/`, skip files larger than 1 TB, or sync only this week's writes.

Common Google Cloud Storage transfer scenarios

Migrate AWS S3 → GCS to feed BigQuery / Vertex AI

Companies running data analytics on Google Cloud often need their existing S3 data lake mirrored into GCS for BigQuery and Vertex AI. CloudsLinker copies bucket-by-bucket from S3 to GCS via HMAC keys — typical 10 TB seed completes in 1–2 days, then nightly delta keeps GCS current. AWS egress fees apply on the source side, so filter aggressively to avoid replicating data you don't actually query.

Migrate GCS → Cloudflare R2 to escape egress fees

If your GCS bucket serves more than $200/month in public egress, R2's true zero-egress model often cuts the bill by 90%. CloudsLinker copies the bucket into R2 in a single multipart-parallel job. Storage itself is also slightly cheaper on R2 ($15/TB vs ~$20/TB on GCS Standard), and the egress savings flip the math instantly.

Cross-cloud DR: GCS → AWS S3 or Wasabi

Provider-independent DR matters even on GCP. Schedule a CloudsLinker incremental from your primary GCS bucket to Wasabi ($6.99/TB) or AWS S3 in a different region. Object Lock is available on Wasabi for ransomware-resistant immutability.

Tier-down: GCS Standard → Coldline / Archive (or off-platform)

GCS Lifecycle Management can auto-tier within GCS, but for truly cold data that you still occasionally read, off-platform storage on B2 ($6/TB) can work out cheaper than GCS Archive ($1.20/TB base) once retrieval fees and the 365-day minimum retention are factored in. CloudsLinker filters by modification date and offloads cold prefixes.

Replicate between two GCP projects or regions

Cross-project bucket replication usually requires custom IAM and Storage Transfer Service. CloudsLinker connects each project's bucket as a separate cloud via project-scoped HMAC keys, then runs cross-project copies with full audit logging.

How to connect Google Cloud Storage to CloudsLinker

GCS authenticates with S3-compatible HMAC keys plus the universal endpoint storage.googleapis.com.

Before you start

Generate an HMAC key on a service account scoped to the specific bucket(s) — never use a user-account HMAC key for production migrations:

  1. Sign in to the Google Cloud Console at https://console.cloud.google.com.
  2. Choose the GCP project that owns the target bucket.
  3. Create or pick a service account under IAM & Admin → Service Accounts. Recommended name: cloudslinker-migrate@<project>.iam.gserviceaccount.com.
  4. Grant the service account the minimum IAM permissions:
    • roles/storage.objectViewer (read source) or
    • roles/storage.objectAdmin (read + write destination), scoped to specific buckets via IAM Conditions.
  5. Go to Cloud Storage → Settings → Interoperability in the left navigation.
  6. Under Service account HMAC, click Create a key for a service account → select the service account → Create key.
  7. Copy the Access ID (61 characters) and Secret (40-character base64). Google shows the secret only once.
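
The console steps above can also be scripted. A sketch using the gcloud CLI; the project, bucket, and service-account names are placeholders:

```shell
PROJECT=my-project                 # placeholder project ID
SA="cloudslinker-migrate@${PROJECT}.iam.gserviceaccount.com"

# Create the service account (skip if it already exists).
gcloud iam service-accounts create cloudslinker-migrate --project="$PROJECT"

# Grant bucket-scoped object access (objectAdmin for a write destination).
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --member="serviceAccount:${SA}" \
  --role="roles/storage.objectAdmin"

# Mint the HMAC key. The secret is printed once only: store it safely.
gcloud storage hmac create "$SA" --project="$PROJECT"
```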

Connection steps

  1. In CloudsLinker, click Add Cloud → choose Google Cloud Storage.
  2. Enter a display name (e.g. “GCS — production-data-lake”).
  3. Paste the Access ID and Secret from step 7 above.
  4. Enter the Endpoint: storage.googleapis.com (universal — no per-region variant).
  5. Click Confirm — CloudsLinker validates with ListBuckets and shows the connection ready.

Revoke access

To revoke CloudsLinker’s HMAC key later: Cloud Console → Cloud Storage → Settings → Interoperability → find the key → Set inactive or Delete. Or delete the entire service account if no longer needed.
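
The same revocation can be done from the CLI with the gcloud storage hmac commands; ACCESS_ID below is a placeholder for the 61-character key ID:

```shell
# Deactivate first (reversible), then delete for good.
gcloud storage hmac update ACCESS_ID --deactivate
gcloud storage hmac delete ACCESS_ID
```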

Google Cloud Storage upload & download limits you should know

GCS limits largely mirror S3's, with Google-specific pricing tiers and minimum-retention rules per storage class:

  • Maximum object size: 5 TiB (~5.5 TB).
  • Maximum multipart parts per object: 10,000. Part size 5 MB minimum, 5 GB maximum.
  • Minimum retention durations (per storage class):
    • Standard: none. Delete after 1 second, billed for 1 second.
    • Nearline: 30 days. Early-delete charged.
    • Coldline: 90 days. Early-delete charged.
    • Archive: 365 days. Early-delete charged.
  • Retrieval fees (per storage class):
    • Standard: free.
    • Nearline: $0.01/GB.
    • Coldline: $0.02/GB.
    • Archive: $0.05/GB.
  • Storage pricing (US regional, Standard): ~$0.020/GB/month (~$20/TB). Multi-region buckets, other regions, and other classes vary.
  • Egress fees: $0.01–$0.12/GB depending on destination region and bandwidth tier (Premium vs Standard). Free egress to other GCP services in the same region.
  • API rate limits: GCS scales request rate with bucket usage; default starts around 1,000 ops/sec per bucket, ramping with sustained load. CloudsLinker handles 503 and 429 with back-off.
  • HMAC keys per service account: up to 10 active keys at a time.
  • Bucket / object count: practically unlimited.
  • Versioning: supported via S3-compatible API.
  • Object Lock equivalent: Bucket Lock + retention policies provide WORM-style immutability.
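
A small sketch of the retention and retrieval arithmetic implied by the limits above. The rates are the ones quoted on this page and should be re-checked against Google's pricing page; the helper names are illustrative, not part of any API:

```python
# Per-class rates as quoted above (assumptions copied from this page).
RETRIEVAL_PER_GB = {"standard": 0.00, "nearline": 0.01,
                    "coldline": 0.02, "archive": 0.05}
MIN_RETENTION_DAYS = {"standard": 0, "nearline": 30,
                      "coldline": 90, "archive": 365}

def retrieval_cost(gb: float, storage_class: str) -> float:
    """Fee in USD to read `gb` gigabytes back out of the given class."""
    return gb * RETRIEVAL_PER_GB[storage_class]

def early_delete_days(storage_class: str, age_days: int) -> int:
    """Days of storage still billed if an object is deleted at `age_days`."""
    return max(0, MIN_RETENTION_DAYS[storage_class] - age_days)

# Reading 500 GB out of Coldline costs 500 * $0.02.
print(round(retrieval_cost(500, "coldline"), 2))  # 10.0
# Deleting an Archive object after 100 days still bills 265 more days.
print(early_delete_days("archive", 100))          # 265
```

This is why the tier-down scenarios above stress filtering by modification date: objects pushed to Nearline or colder only pay off if they stay put past the minimum-retention window.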

Sources: Google Cloud: Storage pricing, Cloud Storage: Quotas & limits, Cloud Storage: Object uploads, Cloud Storage: Interoperability with S3.

Google Cloud Storage + CloudsLinker — Frequently Asked Questions

Why does CloudsLinker use HMAC keys instead of GCS native authentication?

HMAC keys make GCS S3-compatible — meaning the same connector code that works for AWS S3, Wasabi, and Backblaze B2 works for GCS without modification. Native Google authentication (OAuth, service account keys) would require a separate integration path and offers no functional benefit for migration workloads.

What's the largest object I can transfer?

5 TiB per object — the GCS hard ceiling, identical to AWS S3. Multipart upload supports up to 10,000 parts per object, with parts ranging from 5 MB to 5 GB. CloudsLinker switches to multipart automatically above 100 MB.
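
As an illustration of how those limits interact (helper names are mine, not CloudsLinker's), the part-sizing arithmetic can be sketched in Python:

```python
# Sketch of the GCS multipart sizing rules quoted above: 5 TiB object
# ceiling, at most 10,000 parts, each part between 5 MB and 5 GB.
MIB, GIB, TIB = 1024**2, 1024**3, 1024**4
MAX_OBJECT = 5 * TIB
MAX_PARTS = 10_000
MIN_PART = 5 * MIB

def choose_part_size(object_bytes: int) -> int:
    """Smallest legal part size that keeps the part count within 10,000."""
    if object_bytes > MAX_OBJECT:
        raise ValueError("object exceeds the 5 TiB GCS ceiling")
    return max(MIN_PART, -(-object_bytes // MAX_PARTS))  # ceiling division

def part_count(object_bytes: int) -> int:
    return -(-object_bytes // choose_part_size(object_bytes))

print(part_count(100 * GIB))  # 10000: ~10.2 MiB parts, at the part cap
print(part_count(1 * GIB))    # 205: small objects sit on the 5 MiB floor
```

Note that a maximally sized 5 TiB object needs roughly 550 MB parts to fit under the 10,000-part cap, which is well below the 5 GB per-part ceiling.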

Should I create the HMAC key for a user account or a service account?

Service account, for production migrations. User-account HMAC keys are tied to an individual employee — if they leave the company, the key is revoked and the migration breaks. Service-account keys are tied to a programmatic identity that persists. Service-account access IDs are 61 characters; user-account access IDs are 24 characters.

How are GCS storage classes handled during migration?

When the source is GCS, CloudsLinker reads the storage class metadata. When the destination is GCS, you specify the target class (Standard / Nearline / Coldline / Archive). When migrating between providers, storage class doesn't map 1:1 — CloudsLinker writes everything as Standard on the destination unless you configure a destination-specific tier.

Will I pay egress fees during a GCS-out migration?

Yes. Google Cloud charges egress when data leaves GCS for the public internet — currently $0.01–$0.12 per GB depending on destination region and bandwidth tier. CloudsLinker minimizes egress with delta sync after the initial seed and with prefix / size / date filters. Egress is free if the destination is also a GCP project in the same region.

Are my HMAC keys safe with CloudsLinker?

Yes. Access ID and Secret are encrypted at rest with AES-256 and decrypted only inside the active transfer worker. Best practice: create the HMAC key on a service account scoped to the specific bucket(s) via IAM, not on a user account or with project-wide permissions.

Does CloudsLinker support bucket versioning and object lifecycle policies?

Yes. Versioned objects: CloudsLinker copies the current version by default; enable 'include versions' to migrate full version history. Lifecycle policies on the destination GCS bucket apply automatically once objects land — they're not migrated as policies but recreated by you on the destination side.

Can I migrate data into GCS from non-cloud sources (FTP, NAS)?

Yes. CloudsLinker supports FTP / SFTP / NAS / WebDAV as sources — useful for moving on-prem data lakes into GCS for BigQuery analysis.

How does this compare to GCP's Storage Transfer Service?

Storage Transfer Service is a Google-native tool optimized for GCS as the destination — well-suited if you're moving from S3 or another GCS bucket into GCS at scale on GCP-billed infrastructure. CloudsLinker is more flexible: it works with 140+ source/destination clouds, runs jobs from a single UI, and is the better choice for hybrid workflows.

Is this an official Google Cloud partnership?

No. CloudsLinker is a third-party tool that uses GCS's S3-compatible XML API under your HMAC key. Revoke access by deleting the HMAC key in Cloud Console → Cloud Storage → Settings → Interoperability.

Google Cloud Storage transfer guides

Step-by-step walkthroughs for moving data to and from Google Cloud Storage.

Conclusion

Google Cloud Storage is the canonical bucket for any team running BigQuery, Vertex AI, or Dataflow — and the right destination when you want bytes near Google's analytics and ML stack. CloudsLinker treats GCS as a first-class S3-compatible target via HMAC keys, with multipart-parallel transfers and storage-class-aware copies. Connect with HMAC + storage.googleapis.com endpoint and run your first migration in minutes.

Online storage services supported by CloudsLinker

Transfer data between over 48 cloud services with CloudsLinker

OneDrive

Google Drive

Google Photos

Shared Drive

OneDrive for Business

Dropbox

Box

Mega

pCloud

Yandex

ProtonDrive

AWS

GCS

iDrive

Storj

DigitalOcean

Wasabi

1fichier

PikPak

TeleBox

OpenDrive

Backblaze B2

Fastmail file

SharePoint

Nextcloud

ownCloud

Premiumize me

HiDrive

Put.io

Sugar Sync

Jottacloud

Seafile

Ftp

SFtp

NAS

WebDav

4shared

Icedrive

Cloudflare R2

Scaleway

Doi

iCloud Drive

iCloud Photos

FileLU

Zoho WorkDrive

Telia Cloud / Sky

Drime

Filen

Didn't find your cloud service? Contact: [email protected]