AWS Cost Traps Explained

Learn how to avoid the common mistakes that make your AWS bill spiral out of control.

Together with DoiT
Stop Cloud Waste Before It Eats Your Margins

Learn how leading FinOps teams cut through the chaos, eliminate waste, and bring spend back in line with business value. It’s time to fix the fundamentals and build for sustainable, profitable growth.

In this webinar, DoiT FinOps experts share proven frameworks, real-world examples, and practical steps to align cloud spend with measurable business value.

 

COST OPTIMIZATION
AWS Cloud Cost Traps Explained

We've watched startups and big companies get hit by these same cost traps. Here are the five worst ones we see all the time, plus how to stop them.

The Forgotten Resource Problem

AWS makes it super easy to create new things. But it's terrible at telling you when you don't need them anymore. People leave behind unattached storage volumes after shutting down servers. They forget about test databases running for months. Load balancers sit there doing nothing.
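One class of forgotten resource is easy to surface: EBS volumes in the `available` state, meaning they exist (and bill) but aren't attached to anything. A minimal sketch, assuming the standard shape of the EC2 `describe_volumes` response; the sample data here stands in for a real API call:

```python
def find_unattached_volumes(volumes):
    """Return volumes in the 'available' state (created but not attached)."""
    return [v for v in volumes if v.get("State") == "available"]

# In practice the list would come from boto3:
#   volumes = boto3.client("ec2").describe_volumes()["Volumes"]
# Here we use sample data shaped like that API response.
sample = [
    {"VolumeId": "vol-0aaa", "State": "in-use", "Size": 100},
    {"VolumeId": "vol-0bbb", "State": "available", "Size": 500},  # forgotten
    {"VolumeId": "vol-0ccc", "State": "available", "Size": 8},    # forgotten
]

for v in find_unattached_volumes(sample):
    print(f"{v['VolumeId']}: {v['Size']} GiB sitting unattached")
```

Running something like this on a schedule (and tagging what it finds) is usually enough to catch the worst offenders.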

CloudWatch Logs Gone Wild

CloudWatch charges you for ingesting and storing logs. One client turned on debug logging for a function that ran hundreds of times per second. Their log bill jumped by $500 in two weeks.
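The math behind a jump like that is easy to reproduce. CloudWatch Logs ingestion runs about $0.50 per GB in us-east-1; the traffic numbers below are made up for illustration:

```python
def monthly_log_cost(invocations_per_sec, bytes_per_invocation,
                     price_per_gb=0.50, days=30):
    """Rough CloudWatch Logs ingestion cost; price is the us-east-1 list rate."""
    gb_per_month = invocations_per_sec * bytes_per_invocation * 86_400 * days / 1e9
    return gb_per_month * price_per_gb

# 300 invocations/s, each writing ~2 KB of debug logs (illustrative numbers)
cost = monthly_log_cost(300, 2048)
print(f"~${cost:,.2f} per month just for ingestion")  # → ~$796.26 per month just for ingestion
```

Note this is ingestion alone; storage and any Logs Insights queries come on top.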

Data Transfer Surprises

Moving data between Availability Zones, across regions, or out to the internet is billed per gigabyte, and chatty architectures rack up these charges fast. Keep chatty services in the same zone when possible. Use VPC Endpoints for AWS services instead of routing through expensive NAT Gateways. Put CloudFront in front of your content to reduce repeated data transfers.

Storage Overkill

People often pick expensive storage options "just in case" without checking if they need them. Switching to high-performance storage can cost 4 times more than standard options with no real benefit.

Start with basic storage and watch your usage before upgrading. Use on-demand pricing for unpredictable workloads instead of guessing too high.
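To see how fast "just in case" adds up, compare gp3 with provisioned-IOPS io1 for a volume that never needed the extra IOPS. The prices below are approximate us-east-1 list rates, and the sizes are illustrative:

```python
def gp3_monthly(size_gb, price_per_gb=0.08):
    """gp3 already includes 3,000 IOPS at no extra charge."""
    return size_gb * price_per_gb

def io1_monthly(size_gb, provisioned_iops,
                price_per_gb=0.125, price_per_iops=0.065):
    """io1 bills for capacity plus every provisioned IOPS."""
    return size_gb * price_per_gb + provisioned_iops * price_per_iops

size, iops = 500, 3000
print(f"gp3: ${gp3_monthly(size):.2f}/mo")        # → gp3: $40.00/mo
print(f"io1: ${io1_monthly(size, iops):.2f}/mo")  # → io1: $257.50/mo
```

For the same 3,000 IOPS, the provisioned-IOPS volume costs over six times more in this example.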

NAT Gateway Overuse

NAT Gateways cost about $32 per month plus fees for data processing. Many workloads only need to reach AWS services like S3, but they route through expensive NAT Gateways instead of using free endpoints.

Use Gateway Endpoints for S3 and DynamoDB in private networks. Only route through NAT when you truly need internet access.
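To gauge the stakes, here is a rough monthly comparison of pushing S3-bound traffic through a NAT Gateway versus a free Gateway Endpoint. The us-east-1 list prices are roughly $0.045/hour and $0.045 per GB processed; the traffic volume is illustrative:

```python
def nat_monthly_cost(gb_processed, hourly=0.045, per_gb=0.045, hours=730):
    """NAT Gateway bills an hourly charge plus a per-GB processing fee."""
    return hourly * hours + gb_processed * per_gb

traffic_gb = 1_000  # S3-bound traffic that never needed internet access
print(f"Via NAT Gateway:      ${nat_monthly_cost(traffic_gb):.2f}/mo")  # → $77.85/mo
print("Via Gateway Endpoint: $0.00/mo")  # S3/DynamoDB gateway endpoints are free
```

The hourly charge alone matches the ~$32/month baseline above; the per-GB fee is what scales with your mistake.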

 

OPEN SOURCE
Streamline GCP Project Cleanup with GCP-Nuke

GCP-Nuke is a powerful tool that helps you clean up your Google Cloud Platform projects by removing all resources automatically.

Getting Started

You can install it with Homebrew, and you need to enable several Google Cloud APIs in the target project before running it.

Smart Filtering

The real power comes from its config file system. You can tell GCP-Nuke exactly what to keep and what to remove. For example, you might want to protect resources labeled as "managed_by: terraform" or "gcp-nuke: ignore" while cleaning everything else.
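A filter setup like that might look roughly like this. This is a hypothetical sketch: the exact keys and filter syntax are assumptions, so check the gcp-nuke README for the real schema:

```yaml
regions:
  - global
  - us-central1
blocklist:
  - my-production-project        # projects that must never be nuked
projects:
  my-sandbox-project:
    filters:
      __global__:                # apply to every resource type
        - property: "labels.managed_by"
          value: "terraform"     # keep Terraform-managed resources
        - property: "labels.gcp-nuke"
          value: "ignore"        # keep explicitly protected resources
```

The blocklist acts as a safety net: the tool refuses to run against anything listed there.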

The tool can handle major Google Cloud services like Cloud Functions, Cloud Run, storage buckets, compute instances, and BigQuery datasets across multiple regions.

Automation Options

For regular cleanup, you can set up an automated system using Cloud Run jobs and Cloud Scheduler. This means your sandbox can clean itself every day at 2 AM without any manual work.

The author provides complete Docker and Terraform configurations to make this automation simple to deploy.

 

FINOPS EVENTS
Mastering AI Economics

Learn how top engineering and FinOps teams are aligning performance with budget by optimizing architecture, tracking true cost per model, and using practical insights to stay ahead of runaway spend.

What we’ll cover:

  • Understanding unit costs: What they are, why they matter, and how to track them

  • Cost-efficient architecture: Design patterns and trade-offs that lower compute and storage bills

  • Data & model strategy: How to optimize what you train, when, and where

  • FinOps for AI: Building transparency and accountability into fast-moving AI teams

Speakers

Vaibhav Sharma
David Gross
Alon Savo
Host: Victor Garcia

September 9th, 6:00 PM CEST / 10:00 AM EST

AWS
Create AWS Budget Alerts with SNS and CDK

This post shows you how to create AWS Budgets using AWS CDK, send alerts through email and SNS, and handle special cases like encrypted topics.

Budgets are great for alerting when costs hit certain levels. But remember, budget alerts are not instant: AWS updates billing data at least once per day.

A budget by itself isn't very useful. You need to add notifications so you get alerted when you reach your limits. You can notify up to 10 email recipients or use an SNS topic.
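Under the hood, a CDK `CfnBudget` synthesizes a CloudFormation resource along these lines. The amounts, names, and topic ARN below are placeholders:

```yaml
Type: AWS::Budgets::Budget
Properties:
  Budget:
    BudgetName: monthly-cost-budget
    BudgetType: COST
    TimeUnit: MONTHLY
    BudgetLimit:
      Amount: 100
      Unit: USD
  NotificationsWithSubscribers:
    - Notification:
        NotificationType: ACTUAL
        ComparisonOperator: GREATER_THAN
        Threshold: 80              # percent of the budget limit
        ThresholdType: PERCENTAGE
      Subscribers:
        - SubscriptionType: SNS
          Address: arn:aws:sns:us-east-1:123456789012:budget-alerts
```

You can attach several notification blocks to the same budget, e.g. one at 80% actual and another at 100% forecasted.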

Using an SNS topic has advantages over email subscribers. You can add different types of subscribers like email, chat, or custom functions to your SNS topic. You have one place to manage all subscribers instead of updating every budget separately.

The AWS budgets service needs permission to publish messages to your SNS topic. You must add a resource policy to the topic that allows the budgets service to call the SNS Publish action.
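The topic policy in question looks like this; the account ID and topic name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBudgetsToPublish",
      "Effect": "Allow",
      "Principal": { "Service": "budgets.amazonaws.com" },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:us-east-1:123456789012:budget-alerts"
    }
  ]
}
```

Without this statement, the budget deploys fine but notifications silently never arrive, which makes it a classic debugging trap.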

Once everything is set up correctly and deployed, you should get email notifications when your spending crosses the thresholds you set.

 

CLOUD PROVIDERS
AWS Releases MCP Server for Cloud Billing, New Instances and More

AWS

⭐️ A Dedicated MCP Server for Your Billing Data
This is an interesting update: AWS Billing and Cost Management has introduced a dedicated MCP server. There's been plenty of debate about its accuracy.

Amazon Bedrock Reaches Government Cloud
Amazon Bedrock is now available in AWS GovCloud (US-West), enabling U.S. government customers to build AI applications within strict federal compliance frameworks.

Enhanced Monitoring with Custom CloudWatch Metrics
Amazon CloudWatch Application Signals now supports custom metrics creation, allowing teams to monitor specific business indicators alongside standard performance data.

New EC2 Instances Offer Right-Sizing Opportunities
Amazon EC2 M8i and M8i-Flex instances are generally available, powered by 4th Gen Intel Xeon processors.

Improved Search Performance at Lower Costs
Amazon OpenSearch Service now supports I8g instances with AWS Graviton3 processors, optimized for high-volume indexing and search workloads.

AI Training Gets More Powerful
Amazon EC2 P5 instances featuring NVIDIA H100 GPUs are now available for SageMaker jobs, providing high-performance compute for large-scale AI training.

Microsoft Azure

Firewall Modernization with IPv6 Support
Azure Firewall now supports IPv6 filtering for both Standard and Premium SKUs, enabling traffic filtering across virtual networks.

IoT Efficiency Improvements
Azure IoT Hub's MQTT v5 protocol support (in preview) enhances IoT device communication reliability and efficiency.

Google Cloud

🫙 

GCP
Resize Images on Budget with GCP Cloud Functions & CDN

Website owners face a common problem with images. Large photos eat up bandwidth, cost money to transfer, and make websites load slowly. Most big websites solve this by resizing images automatically. But storing multiple sizes of every photo gets expensive fast.

Google Cloud offers a smart solution using three main tools. Cloud Storage holds your original full-size images. Cloud Functions resize photos when someone requests them. The Load Balancer with CDN delivers these resized images quickly and saves copies at edge servers around the world.

Setting Up Storage

First, create a Google Cloud Storage bucket and upload your original images. The example shows a 6.3 MB photo of the Nasdaq building that needs to be made smaller for web use.

Building the Function

Next, create a Cloud Function that does the actual resizing work. The author used ChatGPT to write Python code that pulls images from storage, resizes them based on URL parameters, and sends back the smaller version.

Creating the Load Balancer

The final step involves setting up a Global Load Balancer with CDN enabled. This acts as the front door that users access to get images. When someone requests a photo with specific dimensions, the load balancer calls the function, gets the resized image, and caches it for future requests.

Real Results

The system works impressively well. That 6.3 MB Nasdaq photo gets resized to just 63 KB when requested at 500x600 pixels. Users access images through a simple URL format that specifies the image name and desired size.
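The exact URL format isn't given in the summary, but a scheme like `/nasdaq_500x600.jpg` could be parsed in a few lines. The pattern below is an assumption for illustration, not the author's actual code:

```python
import re

# Hypothetical URL scheme: /<name>_<width>x<height>.<ext>
PATTERN = re.compile(r"^/(?P<name>.+)_(?P<w>\d+)x(?P<h>\d+)\.(?P<ext>\w+)$")

def parse_resize_path(path):
    """Extract the original object name and the requested dimensions."""
    m = PATTERN.match(path)
    if not m:
        raise ValueError(f"unrecognized path: {path}")
    return f"{m['name']}.{m['ext']}", int(m["w"]), int(m["h"])

print(parse_resize_path("/nasdaq_500x600.jpg"))  # → ('nasdaq.jpg', 500, 600)
```

The Cloud Function would then fetch `nasdaq.jpg` from the bucket, resize it to 500x600, and return the result for the CDN to cache.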

The CDN caching means popular images load instantly after the first request. Less popular images might get removed from cache over time, but frequently accessed photos stay ready at edge servers worldwide.

📺️ VIDEO
Understanding FinOps from Visibility to Action

Unlock the journey from visibility to action in FinOps with expert guest Diana Lezcano. In this episode, we dive into the world of cloud cost optimization, exploring the tools, strategies, and best practices that help engineering teams reduce cloud costs, build trust, and make data-driven decisions.

Whether you’re an engineer, finance leader, or product manager, this conversation will give you practical tips to manage and optimize cloud spend effectively.

 

🎖️ MENTION OF HONOUR
FinOps Success: Cutting Startup Cloud Costs by 80%

ApeCloud, a small startup, managed to cut its cloud costs by 80% over two years without hiring a dedicated team or spending big money on expensive tools. Instead, they created a small group from their development team to work on this cost-cutting project.

Their main rule for using cloud resources is simple: no approval needed, everyone can access what they need. To make this work automatically, they built their own tool called Apepipe. This tool does several important things:

  1. It sends cost alerts and tips through chat apps like Slack.

  2. Engineers can check cloud resources and prices right from their chat without switching to different websites.

  3. The tool has a dashboard that shows all cloud resources in one place.

  4. It runs automatic programs that clean up unused resources and optimize settings without human help.

The company shared many specific ways they save money:

  • They use spot instances that cost much less but might get shut down sometimes.

  • For long-term needs, they buy reserved instances that offer big discounts.

  • They automatically turn off servers at night and on weekends.

  • They create backup images of unused servers instead of keeping them running.

They also watch traffic costs carefully since moving data between different zones can be expensive. They use internal networks when possible and clean up old storage regularly. For GPU computing, they found cheaper alternatives that work just as well as expensive options.

The company learned that small details matter a lot. Choosing the right availability zone, setting up storage correctly, and keeping software updated can all save significant money.

 

Professional Spotlight
Hunter Harris

It’s great to have people like Hunter in the community. Always open to helping others, and he knows his stuff. Great guy, well deserved!

That’s all for this week. See you next Sunday!

Join FinOps professionals at the FinOps Weekly Summit 2025 and discover how to:

  • Transform from reactive fire-fighting to strategic leadership

  • Master AI-powered cost optimization

  • Build bulletproof unit economics

This is the only major FinOps event left in 2025, featuring battle-tested strategies from companies managing billions in cloud spend.

FinOps for Everyone, baby.

October 23rd, 2025 | 4:00 PM - 8:00 PM CEST

Limited seats available