Cloud storage pricing is designed to look simple. You see a rate per gigabyte per month, compare a few providers, and assume the numbers tell the full story. They do not.
The storage fee is often the smallest part of the bill. What drives costs up are the charges attached to moving data, accessing it, retrieving it from archive tiers, and meeting compliance requirements. These are not hidden in the sense of being deliberately concealed, but they are easy to overlook until you see the first unexpectedly large invoice.
This post covers five cost categories that consistently catch businesses off guard, with concrete examples of what each looks like at scale.
What this covers:
Storage pricing as the baseline
Data egress fees and why they dominate at scale
API request charges for high-traffic applications
Vendor lock-in and migration costs
Storage tier tradeoffs and retrieval fees
Compliance and security overhead
The Baseline: Storage Pricing
The cost per gigabyte per month is where most comparisons start and unfortunately stop. Current standard rates across the major providers:
| Provider | Standard storage (per GB/month) |
|---|---|
| Amazon S3 Standard | ~$0.023 |
| Google Cloud Storage Standard | ~$0.020 |
| Microsoft Azure Blob Storage | ~$0.018 |
These rates are for data at rest. The moment you start moving or accessing that data, additional charges apply — and they often exceed the storage cost itself.
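To make the baseline concrete, here is a quick storage-only comparison for a 5TB dataset, using the approximate per-GB rates from the table above (actual pricing varies by region and changes over time, so treat the numbers as placeholders):

```python
# Storage-only monthly cost at the approximate rates listed above.
# These rates are illustrative and drift with provider pricing.
RATES = {
    "Amazon S3 Standard": 0.023,
    "Google Cloud Storage Standard": 0.020,
    "Azure Blob Storage": 0.018,
}

def storage_cost(gb: float, rate_per_gb: float) -> float:
    """Monthly cost of storing `gb` gigabytes at a per-GB rate."""
    return gb * rate_per_gb

for name, rate in RATES.items():
    print(f"{name}: ${storage_cost(5_000, rate):,.2f}/month for 5 TB")
```

At these rates, 5TB runs $90 to $115 per month at rest, which is the small number the rest of this post builds on.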
1. Data Egress Fees
Egress fees apply when data leaves a cloud provider's network, whether to the internet, to a user's browser, to a CDN, or to another cloud provider. This is frequently the largest unexpected line item on a cloud bill.
A media company hosting video on AWS S3 pays the storage rate while the files sit idle. The moment a user plays a video, data moves from S3 outward, and egress charges apply to every gigabyte transferred.
Current outbound transfer rates to the internet:
| Provider | Egress fee (first 10TB/month) |
|---|---|
| Amazon S3 | $0.09 per GB |
| Google Cloud Storage | $0.12 per GB |
| Microsoft Azure | $0.085 per GB |
One terabyte of outbound data costs roughly $90 on AWS, $120 on Google Cloud, and $85 on Azure. For applications that serve large files to many users, those numbers compound quickly. A video platform delivering 50TB of content per month is looking at $4,500 or more in egress fees alone on a single provider.
Planning note: If your application serves files directly to end users, model egress costs against expected traffic before selecting a provider. The cheapest storage rate means little if egress fees dominate the bill.
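The modeling the planning note describes can be a few lines of arithmetic. This sketch uses the first-10TB rates from the table above and deliberately ignores volume discounts and CDN offload, so it is an upper-bound estimate rather than a quote:

```python
# Estimate monthly egress cost per provider for an expected outbound volume.
# Rates mirror the first-10TB tier above; real pricing tiers down at volume.
EGRESS_PER_GB = {"AWS": 0.09, "Google Cloud": 0.12, "Azure": 0.085}

def egress_cost(tb_out: float, rate_per_gb: float) -> float:
    """Monthly egress cost, using 1 TB = 1000 GB for simplicity."""
    return tb_out * 1_000 * rate_per_gb

for provider, rate in EGRESS_PER_GB.items():
    print(f"{provider}: ${egress_cost(50, rate):,.0f}/month for 50 TB out")
```

Running this for the 50TB video platform from the example reproduces the $4,500 AWS figure, and shows the same traffic costing $6,000 on Google Cloud.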
2. API Request Charges
Every interaction with cloud storage, whether reading a file, writing one, listing directory contents, or deleting an object, generates an API call. These are billed per request, typically in increments of one million.
AWS S3 charges $0.40 per million GET requests. That rate sounds negligible until you calculate it at scale.
An example: a web application with 10,000 daily page views where each page loads five objects from S3 generates 50,000 GET requests per day, or roughly 1.5 million per month. At $0.40 per million, that is about $0.60 per month at this traffic level.
Scale that to 500,000 daily users with the same page structure and the bill reaches $30 per month from requests alone, before egress or storage. Multiply by applications with dozens of assets per page and the numbers climb faster.
High-frequency operations like LIST calls, which some applications run continuously for directory traversal or sync operations, can generate far more requests than GET calls and deserve specific attention during architecture review.
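The request math above is simple enough to script. This sketch reproduces the two scenarios from the text at S3's ~$0.40 per million GET requests; the rate is taken from the example, not a live price sheet:

```python
# Monthly GET-request cost: daily views x objects per page x 30 days,
# priced per million requests. Rate from the example above.
GET_PRICE_PER_MILLION = 0.40

def monthly_get_cost(daily_views: int, objects_per_page: int) -> float:
    """Approximate monthly request cost for a page-serving workload."""
    requests = daily_views * objects_per_page * 30
    return requests / 1_000_000 * GET_PRICE_PER_MILLION

print(f"${monthly_get_cost(10_000, 5):.2f}")    # small app from the example
print(f"${monthly_get_cost(500_000, 5):.2f}")   # scaled-up version
```

The same function works for LIST or PUT traffic by swapping in the relevant per-million rate, which is usually higher than the GET rate.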
3. Vendor Lock-In and Migration Costs
Cloud storage services are not interchangeable. APIs, SDKs, access control models, and storage class naming conventions differ between providers. Code written against the AWS S3 SDK does not run against Google Cloud Storage without modification.
The practical consequence is that migrating data or workloads between providers is costly in two ways: the egress fees applied to the data transfer, and the engineering time required to adapt the application layer.
A concrete example: a company running image hosting on S3 decides to move part of its workload to Google Cloud to diversify costs. Moving 50TB of data incurs roughly $4,500 in S3 egress fees. The engineering effort to adapt the application, test the migration, and handle the cutover takes two weeks of developer time. That is the real cost of a provider switch that looked straightforward on paper.
Designing for portability from the start reduces this risk. Using an abstraction layer in application code that wraps provider-specific SDK calls, or choosing tools that support multiple backends (such as rclone for transfers or MinIO-compatible APIs), preserves optionality without requiring a full architecture rethink later.
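The abstraction layer can be as small as one interface that application code depends on, with provider-specific adapters behind it. This is an illustrative sketch; the class and method names are invented for the example and do not come from any real SDK:

```python
# Minimal storage abstraction: application code depends only on BlobStore.
# Only concrete adapters (one per provider) touch a vendor SDK, so a
# migration swaps the adapter instead of rewriting call sites.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in backend for the sketch; an S3 or GCS adapter would
    implement the same two methods using its provider's SDK."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

store: BlobStore = InMemoryStore()
store.put("images/logo.png", b"\x89PNG...")
print(store.get("images/logo.png"))
```

The payoff is that the two-week migration effort from the example shrinks toward writing and testing one new adapter, rather than auditing every storage call in the codebase.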
4. Storage Tier Tradeoffs and Retrieval Fees
All major providers offer multiple storage tiers priced according to access frequency. Choosing the cheapest tier for data that gets accessed regularly is a false economy — retrieval fees and slower access times offset the storage savings.
| Tier | Typical use case | Storage cost | Retrieval cost |
|---|---|---|---|
| Standard | Frequently accessed data | Higher | Negligible |
| Infrequent Access | Data accessed a few times per month | Medium | Per-GB fee applies |
| Archive (Glacier, etc.) | Long-term backups, compliance records | Very low | Per-GB fee, hours of latency |
The archive tier illustrates the tradeoff most clearly. AWS Glacier charges as little as $0.004 per GB per month for storage, which looks attractive for large backup datasets. Retrieval costs $0.0025 per GB on standard retrieval, plus a per-request fee, and the data is not immediately available — retrieval can take several hours depending on the tier selected.
For a 10TB backup that rarely needs to be accessed, this is a reasonable choice. For data that a compliance team might need to pull on short notice, the retrieval delay and cost become a business problem. A 10TB retrieval from Glacier at standard rates costs roughly $25 plus wait time that could run into a compliance deadline.
The rule worth applying: match the storage tier to actual access patterns, not anticipated ones. If access frequency is genuinely uncertain, default to a higher tier until the pattern becomes clear.
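One way to apply that rule is to compare tiers as storage plus expected retrieval, using the approximate rates from this section. Note what the sketch leaves out, which is exactly what makes archive tiers risky: per-request fees and hours of retrieval latency are not in the formula:

```python
# Tier comparison: monthly cost = storage + expected retrieval volume.
# Rates are the approximate figures from this section; per-request fees
# and retrieval latency are NOT modeled and can dominate in practice.
def tier_cost(gb_stored: float, storage_rate: float,
              retrieval_rate: float, gb_retrieved: float) -> float:
    return gb_stored * storage_rate + gb_retrieved * retrieval_rate

STANDARD = (0.023, 0.0)     # (storage $/GB-month, retrieval $/GB)
ARCHIVE = (0.004, 0.0025)   # Glacier-style pricing from the text

backup_gb = 10_000  # the 10 TB backup from the example
for retrieved in (0, 1_000, 10_000):
    std = tier_cost(backup_gb, *STANDARD, retrieved)
    arc = tier_cost(backup_gb, *ARCHIVE, retrieved)
    print(f"retrieve {retrieved:>6} GB/mo: standard ${std:.2f}, archive ${arc:.2f}")
```

On these per-GB numbers alone, archive wins even when the full backup is pulled monthly, which is why the deciding factors for compliance data are usually latency and operational risk rather than the retrieval fee itself.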
5. Compliance and Security Overhead
Moving data to cloud storage does not transfer compliance responsibility. Businesses handling personal data under GDPR, healthcare data under HIPAA, or financial data under PCI DSS remain accountable for encryption, access control, logging, and audit trails regardless of where the data sits.
Most cloud providers offer the underlying tools — encryption at rest and in transit, fine-grained IAM policies, audit logging — but these services add to the bill and require configuration expertise to implement correctly.
A health-tech startup storing patient data on AWS S3 needs server-side encryption (manageable within S3), detailed access logging via AWS CloudTrail, key management through AWS KMS, and potentially third-party compliance monitoring tools. Together these can add $300 or more per month before any data is actually accessed, and they require ongoing maintenance as the infrastructure grows.
For regulated industries, the compliance cost should be estimated as a fixed overhead in the initial cost model rather than discovered after the architecture is deployed.
Planning Checklist Before Committing to a Provider
Before finalizing a cloud storage architecture, it is worth running through these questions:
What is the expected monthly egress volume, and what does that cost on each shortlisted provider?
How many API requests will the application generate per month at current and projected scale?
Which storage tier matches the actual access frequency of this data?
What compliance requirements apply, and what tools are needed to meet them on this platform?
What would it cost in egress fees and engineering time to migrate away from this provider in two years?
Running the numbers on each of these before signing up avoids the situation where a provider that looked cheap at signup becomes expensive at scale.
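The first three checklist items can be folded into one estimate per shortlisted provider. The default rates below are the AWS figures used throughout this post and stand in for whatever the provider's current price sheet says:

```python
# Combined monthly estimate for one provider: storage + egress + requests.
# Default rates are the illustrative AWS figures from this post; replace
# them with current price-sheet numbers before relying on the output.
def monthly_estimate(stored_gb: float, egress_gb: float,
                     requests_millions: float,
                     storage_rate: float = 0.023,
                     egress_rate: float = 0.09,
                     request_rate_per_million: float = 0.40) -> float:
    return (stored_gb * storage_rate
            + egress_gb * egress_rate
            + requests_millions * request_rate_per_million)

# Example: 5 TB stored, 10 TB outbound, 75M GET requests per month.
total = monthly_estimate(5_000, 10_000, 75)
print(f"${total:,.2f}/month")
```

In this example the storage line is $115 of a roughly $1,045 bill, which is the pattern this post has been describing: the headline rate is a minority of the total.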
Key Takeaways
Storage fees are the baseline. Egress, API calls, and retrieval fees frequently exceed them for active applications.
Egress fees apply whenever data leaves a provider's network. For content-heavy applications, this is typically the largest cost driver.
API request charges are small individually but scale directly with application traffic.
Vendor lock-in is a real cost. Migration between providers incurs egress fees and engineering time that are worth estimating before committing to an architecture.
Storage tier selection should match actual access frequency. Retrieval fees and latency on archive tiers can create operational problems for data accessed more often than expected.
Compliance requirements add fixed overhead that should be included in the initial cost model.
Conclusion
Cloud storage costs are predictable if you model them correctly from the start. The providers are not hiding the pricing — egress rates, API charges, and retrieval fees are all documented. The issue is that most cost comparisons focus on the storage rate and treat the rest as secondary.
For any application beyond simple static file hosting, taking the time to model the full cost profile against expected usage patterns before selecting a provider is the difference between a bill that matches expectations and one that does not.
Working through a specific cloud storage architecture or cost problem? Describe it in the comments.