
TOGETHER WITH CLOUDZERO
Make AI Profitable, Not Just Powerful

Use FinOps practices to tie AI spend directly to business value.

Learn how with the AI Cost Optimization Playbook.

AWS
Cut Your AWS Bill by 40% [CODE INCLUDED]

FinOps Core Goal: The main objective of FinOps is to bring financial accountability to your cloud spending, ensuring that every dollar spent delivers value rather than just slashing costs blindly.

Top Cost-Saving Strategies:

  • Savings Plans: You can save 30–60% by committing to a set hourly spend on compute resources. It is recommended to commit to 70–80% of your daily baseline spend to allow for architectural flexibility.

  • Spot Instances & Right-Sizing: Use Spot Instances for interruptible workloads (like CI/CD or stateless workers) to cut costs by 60–90%. Additionally, safely downsize over-provisioned instances if your average CPU usage sits below 20%.

  • Graviton Processors: Switching to AWS Graviton (ARM) chips can lower your compute costs by 20% while running 20% faster.

  • Storage Optimization: Set up S3 lifecycle policies to automatically move infrequently accessed data to cheaper tiers like Glacier. For active storage, upgrade EBS volumes from gp2 to gp3 for an instant 20% savings, and make sure to delete unattached volumes.

  • Networking: Replace expensive NAT Gateways with VPC Endpoints for internal AWS API calls to services like S3 and DynamoDB.
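Two of the strategies above reduce to simple arithmetic. As an illustrative sketch (the 75% commitment factor, the 20% CPU threshold, and all spend numbers are assumptions, not real billing data), the Savings Plans commitment and the right-sizing check could look like:

```python
# Illustrative sketch of two strategies above. The spend samples, the 75%
# commitment factor (middle of the 70-80% guidance), and the 20% CPU
# threshold are assumptions for demonstration only.

def recommend_commitment(hourly_spend, fraction=0.75):
    """Commit to a fraction of the baseline (minimum) hourly compute spend,
    leaving headroom for architectural changes."""
    baseline = min(hourly_spend)  # conservative floor of steady-state spend
    return round(baseline * fraction, 2)

def should_downsize(avg_cpu_percent, threshold=20.0):
    """Flag an instance as over-provisioned if average CPU sits below the
    threshold from the right-sizing guidance."""
    return avg_cpu_percent < threshold

# Hypothetical on-demand compute spend in $/hour over a sample window
spend = [10.0, 12.5, 11.0, 9.8, 10.4]
print(recommend_commitment(spend))   # commit below the observed floor
print(should_downsize(14.0))         # an instance averaging 14% CPU
```

The baseline here is taken as the observed minimum rather than the average, so the commitment never exceeds what the fleet actually consumes at its quietest hour.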

Building a Cost-Aware Culture: Cloud optimization is not a one-time project, but an ongoing practice. To succeed, teams need visibility through cost dashboards, budget alerts to prevent surprises, and weekly 15-minute meetings to review top expenses and maintain accountability.

MEETUPS
Madrid & Abuja Coming Soon

ONLINE EVENTS
Webinar: Build vs Buy in the AI Era

Join us for a candid conversation about one of the most common, and quietly costly, calls that engineering and finance leaders make: build or buy your FinOps tooling.

11:00 AM | Erik Peterson, CTO @ CloudZero: Build vs Buy in the AI Era

11:20 AM | Panel: Erik Peterson, Ben de Mora (FinOps OG) and Victor Garcia

11:40 AM | Q&A with attendees

📅​ Date: May 7th
🕗​ Time: 11:00 AM EDT / 17:00 CEST
📍 Online

CLOUD PROVIDERS
GKE Kubernetes Rightsizing embedded into FinOps Hub

AWS

AWS Data Exports now support cross-account delivery for CUR 2.0 and FOCUS reports, eliminating storage duplication and replication costs for centralized teams.

Aurora Serverless v4 delivers 30% better performance with scale-to-zero capabilities, reducing idle spend for bursty workloads.

EC2 now allows hiding managed resources (EKS/Lambda) from inventory views to reduce noise in cost reconciliation.

Google Cloud

GKE right-sizing recommendations are now integrated into the FinOps hub, centralizing container optimization insights.

Gemini Cloud Assist now provides contextual explanations for cost fluctuations to accelerate troubleshooting of bill spikes.

Application Monitoring now tracks AI token usage, enabling teams to correlate AI consumption with application behavior.

Azure

Premium SSD v2 for Azure Database for PostgreSQL is GA, offering 4x higher IOPS and improved price-performance to lower cost-per-IO for database workloads.

FINOPS
AI Agents in FinOps | The New Frontier for Optimizing Cloud Costs

Discover the future of cloud cost optimization in this interview with Julia Berger. We explore how AI is transforming the financial management of cloud infrastructure and driving a revolution in automation across the industry.

KUBERNETES
The Persistence of Waste

Despite 15 years of cloud and container advancements, average Kubernetes CPU utilization remains stuck at an inefficient 10%.

This persistent waste is primarily driven by human behavior rather than technology limitations; engineers routinely over-provision resources to avoid outages, and a lack of internal cost visibility means there is little incentive to optimize.

Additionally, Kubernetes itself adds significant overhead, with more than half of a cluster's resources being consumed by system infrastructure rather than actual application workloads.

This inefficiency extends even to expensive AI infrastructure, where GPU utilization averages just 15% to 25%.

Ultimately, this chronic over-provisioning causes massive financial waste and environmental impact, as idle servers continue to draw 40% to 50% of their peak power.

KUBERNETES
How to Cut Idle Resource Spending on Kubernetes

Kubernetes clusters frequently waste 40% to 70% of their cloud budget because developers over-provision CPU and memory for safety, and cloud providers bill for reserved capacity rather than actual use. To eliminate this phantom cost, organizations should follow a three-step approach:

  • Measure real usage: Track actual CPU and memory consumption with monitoring tools over 7 to 14 days to capture normal traffic patterns.

  • Right-size resources: Use the Vertical Pod Autoscaler (VPA) in recommendation mode to receive optimized resource suggestions based on actual usage.

  • Set hard limits: Implement LimitRanges to set default values for containers and ResourceQuotas to cap total consumption per namespace.

Additionally, it is crucial to avoid the common mistake of setting resource requests equal to resource limits, as this forces inefficient server packing.

Finally, treat cost optimization as an ongoing process rather than a one-time project by deploying monitoring dashboards, assigning cost centers to teams, and scheduling quarterly resource audits. Implementing these practices can ultimately reduce idle resource waste from 60% down to 15%.

🎖️ MENTION OF HONOUR
The Hidden Architecture of AI Vendor Lock-in

AI vendor lock-in becomes a critical issue much faster and hides better than traditional cloud dependencies.

Unlike standard software, AI lock-in is built into the models themselves through investments that cannot easily migrate between providers, such as model fine-tuning (which can cost up to $200,000), specific prompt libraries, and evaluation frameworks.

Because these elements do not appear as standard line items in cost management tools, the dependency often sneaks up on organizations. While most enterprise contracts focus on basic metrics like token pricing or seat licenses, they often overlook the massive switching costs.

To prevent this, FinOps teams should use cost allocation tags to make AI-specific workloads (like fine-tuning and inference) visible early on.
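As a minimal sketch of that tagging practice, billing-export rows could be rolled up by an AI workload tag so fine-tuning and inference spend stop hiding inside generic line items. The tag key `ai-workload`, the row schema, and the amounts are assumptions, not a real export format:

```python
# Hedged sketch: roll up billing-export rows by a cost-allocation tag to
# make AI workloads visible. The "ai-workload" tag key, the row schema, and
# the dollar amounts are assumptions for illustration only.
from collections import defaultdict

def spend_by_ai_workload(rows, tag_key="ai-workload"):
    """Total cost per AI workload tag; untagged spend is surfaced too."""
    totals = defaultdict(float)
    for row in rows:
        workload = row.get("tags", {}).get(tag_key, "untagged")
        totals[workload] += row["cost"]
    return dict(totals)

rows = [
    {"cost": 1200.0, "tags": {"ai-workload": "fine-tuning"}},
    {"cost": 340.5, "tags": {"ai-workload": "inference"}},
    {"cost": 99.9, "tags": {}},  # spend that would otherwise stay invisible
]
print(spend_by_ai_workload(rows))
```

Keeping an explicit "untagged" bucket is the point: it quantifies exactly how much AI spend is still escaping allocation.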

Additionally, procurement must negotiate transition terms, such as rights to prompts, evaluation datasets, and pre-agreed migration rates, at the initial contract signing, which is when the buyer has the most leverage.

Organizations need to fully understand the long-term technical and financial relationship they are committing to with an AI provider.

Save 20% on FinOps Certifications

The job market is hungry for certified professionals who can prove results. Don't let your company's budget leak due to a lack of specialization.

Use code: FINOPSWEEKLY to get an instant 20% discount on the most prestigious certification bundles:

  • FinOps Certified Practitioner (The foundation for success).

  • FinOps Certified Engineer (For high-level technical profiles).

  • FinOps Certified FOCUS Analyst (Specializing in data standards).

  • FinOps for AI (The frontier of modern efficiency).

Save $300 and get access to FinOps X

If you want to be in the room where the big decisions are made, you need to be at FinOps X. We’ve secured preferred access for our community:

  • Code: FINOPSWEEKLYX26

  • Your Savings: $300 USD.

  • Final Price: $899 (Official Price: $1,199).

PROFESSIONAL SPOTLIGHT
Jenna Gegg

Finance-Driven FinOps Expert | Cloud Cost Optimization | Turning Data Into Strategic Decisions

Rate Today's Newsletter

Feedback = Better Newsletter for You
