How to Automate Executive Cost Reports

Leverage workflows to get reports from your cost data.

Presented by

Want to appear here? Talk with us

TOGETHER WITH WIV
Manual FinOps Won’t Survive 2026. Here’s Why.

Dashboards aren’t action.

Alerts aren’t impact.

And tickets? Still piling up.

Cloud waste is still slipping through the cracks because most FinOps processes stop at visibility. In 2026, that’s not going to cut it.

We broke down:

  • Why “simple” FinOps is still so hard to scale

  • What’s really blocking automation (spoiler: it’s not AI)

  • How no-code workflows can finally do the heavy lifting for you

📅 Want to see how it works? Book a demo and get your time back.

 

AUTOMATION
Automating Executive-Ready Cost Optimization FinOps Reports

The AWS Cost Reporter is a Python tool that turns raw cost audit data into a polished Word document. It adds charts, tables, and plain-language explanations that make sense to executives and finance teams. Think of it as translating machine output into human language.

The tool needs Python 3.9 or higher to run. On macOS there is an extra step, because the system Python is externally managed: you either install the dependencies into a virtual environment or explicitly allow pip to add them to the system installation. The reporter depends on four libraries: matplotlib for charts, numpy for numerical work, pandas for organizing data, and python-docx for writing Word files.
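
To give a feel for how those pieces fit together, here is a minimal sketch (not the reporter's actual code) that uses pandas and python-docx to build a tiny Word report; the service names and figures are made up:

# Illustrative only: assemble a small Word report from a pandas DataFrame.
import pandas as pd
from docx import Document

# Hypothetical audit output: monthly cost per service.
costs = pd.DataFrame({"service": ["EC2", "S3", "RDS"], "usd": [12000, 4500, 3100]})

doc = Document()
doc.add_heading("AWS Cost Summary", level=1)
doc.add_paragraph("Top services by monthly spend, from the latest audit run.")

table = doc.add_table(rows=1, cols=2)
table.rows[0].cells[0].text = "Service"
table.rows[0].cells[1].text = "Monthly cost (USD)"
for _, row in costs.iterrows():
    cells = table.add_row().cells
    cells[0].text = row["service"]
    cells[1].text = f"{row['usd']:,}"

doc.save("cost_report.docx")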

Once everything is set up, you run the reporter from the command line: point it at the directory where your audit data lives, tell it where to save the report, and add a flag if you want charts included. You can also pass your company name and the name of whoever prepared the report.

Starting with version 4.6.0, you can run both the audit and the report together with one command. This means you can check your AWS costs and get a finished report without doing anything in between.

The final report includes several useful sections. It starts with key findings like which services cost the most money. Then it breaks down compute resources, storage, networking, and other areas. Each section shows what you're spending and where you could save money. If you include charts, you'll see a pie chart of your top five costs and a bar chart showing possible savings.
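
If you are curious what those charts involve under the hood, a rough matplotlib sketch of the two chart types (with invented numbers, not the tool's own code) looks like this:

# Illustrative only: the two chart types the report describes.
import matplotlib
matplotlib.use("Agg")  # render charts without a display
import matplotlib.pyplot as plt

services = ["EC2", "S3", "RDS", "CloudWatch", "Data Transfer"]
monthly_cost = [12000, 4500, 3100, 1800, 1200]   # current spend per service
possible_savings = [3400, 900, 700, 400, 300]    # estimated savings per service

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.pie(monthly_cost, labels=services, autopct="%1.0f%%")
ax1.set_title("Top 5 costs")
ax2.bar(services, possible_savings)
ax2.set_title("Possible monthly savings (USD)")
ax2.tick_params(axis="x", rotation=45)
fig.tight_layout()
fig.savefig("report_charts.png")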

The report also gives you a plan for what to fix first. It shows which changes will save the most money and how hard they'll be to do. This helps teams decide where to start. Future versions will add more features like custom branding, the ability to turn sections on or off, tracking costs over time, and creating PDF versions.

 

CLOUD PROVIDERS
AWS 60% Cost Reduction Update

AWS

Amazon Kinesis Data Streams launched On-demand Advantage mode with up to ~60% lower throughput rates and no fixed per-stream charges. Ad-hoc warming enables instant capacity increases without overprovisioning, directly improving cost efficiency for high-throughput or spiky streams.

AWS Marketplace added flexible pricing (contract and usage-based) for AI agent tooling. This aligns costs to actual usage patterns and speeds deployments, helping control spend on AI tools.

Amazon EKS Split Cost Allocation now imports up to 50 Kubernetes pod labels as cost allocation tags in AWS CUR. This enables pod-level chargeback and showback without custom tagging pipelines.

AWS Config added 42 managed rules for tagging and cost governance, plus support for 49 new resource types. This expands automated compliance checks and reduces blind spots when scanning for cost-driving resources.

Microsoft Azure

Azure Cosmos DB Query Advisor (GA) provides actionable recommendations to reduce RU consumption and improve query efficiency, delivering direct savings on throughput provisioning.

Azure Storage Mover (GA) enables fully managed S3-to-Blob transfers with parallel server-to-server copies and incremental syncs. This eliminates migration infrastructure overhead and reduces transfer costs for large datasets.

Google Cloud

Cost Anomaly Detection is now GA and enabled by default for all projects. AI-generated thresholds provide relevant alerts without tuning, with root-cause analysis and free access as part of Google's cost management tools.

BigQuery reservation groups (Preview) let you prioritize idle slot sharing within grouped reservations.

Compute Engine now shows which VMs consume specific reservations (GA). This visibility helps verify reservation usage and identify opportunities to optimize committed use purchases.

 

FINOPS EVENTS
Event: The Hybrid FinOps Advantage

This doesn’t stop! We look forward to having you with us on November 13th to explore Hybrid FinOps. Discover how FinOps 2.0 strategies deliver comprehensive optimization across your entire technology portfolio.

November 13th - 6:00 PM CEST / 10AM EST

  • Achieve total cost visibility across data centers, multi-cloud, SaaS, and AI infrastructure

  • Optimize the complete technology stack with unified intelligence and automation

  • Break down silos between FinOps, ITAM, procurement, and engineering teams

Speakers: Jeremy Chaplin, Gerhard Behr & Victor Garcia

 

BIGQUERY
Cost Aware Modelling in BigQuery

You built a super fast dashboard that loads in 15 seconds. Everyone loves it. Then the monthly bill arrives and your BigQuery costs have doubled. That beautiful dashboard scans 2 terabytes of data every time someone opens it.

Most of us learned to build databases the old way, where we paid once for servers and optimized for speed. But cloud databases like BigQuery flip that thinking upside down. A query that runs in 10 seconds can cost 1,000 times more than one that takes 30 seconds. The only number that matters for your bill is total_bytes_billed.
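
Because the bill follows bytes scanned, it pays to check that number before running anything. Here is a minimal sketch with the google-cloud-bigquery client using a dry run; the table name is made up, and a dry run is free while still reporting the bytes an actual run would scan (which is what drives the billed amount under on-demand pricing):

# Estimate bytes before running a query, via a BigQuery dry run (illustrative).
from google.cloud import bigquery

client = bigquery.Client()
sql = "SELECT order_id, status FROM `my_project.shop.orders` WHERE order_date >= '2025-01-01'"

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(sql, job_config=job_config)

# Nothing is executed or billed; we only get the scan estimate back.
print(f"This query would process {job.total_bytes_processed / 1e12:.3f} TB")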

The One Big Table approach seems better at first. When you only need two columns, BigQuery reads just those two and ignores the other 498. Fast and cheap. But the moment someone runs SELECT * to preview data, they scan all 500 columns across terabytes.

The smart way is using STRUCT and ARRAY, BigQuery's native nested types. You store user information directly inside each order record. No JOIN needed. The query only reads the exact nested columns you need.
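
As a sketch, assume a hypothetical orders table where the customer is stored as a STRUCT and line items as an ARRAY of STRUCTs; the query below touches only the nested columns it names, with no JOIN (all names are invented for illustration):

# Query nested STRUCT/ARRAY columns directly -- no JOIN, no extra columns scanned.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT
  order_id,
  customer.country,
  (SELECT SUM(i.qty * i.price) FROM UNNEST(items) AS i) AS order_total
FROM `my_project.shop.orders`
WHERE order_date >= '2025-01-01'
"""
for row in client.query(sql).result():
    print(row.order_id, row.country, row.order_total)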

Building a Cost Monitoring System

BigQuery tracks every query in INFORMATION_SCHEMA.JOBS_BY_PROJECT. You can build dashboards on top of this data.

Find your 10 most expensive queries from the last week. Track down who ran them and fix the problems.
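
A rough version of that query against the jobs metadata might look like the following; the region qualifier and the seven-day window are assumptions to adjust for your own project:

# Pull the ten most expensive queries of the last 7 days (illustrative sketch).
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT
  user_email,
  job_id,
  total_bytes_billed,
  LEFT(query, 120) AS query_preview
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
  AND state = 'DONE'
ORDER BY total_bytes_billed DESC
LIMIT 10
"""
for row in client.query(sql).result():
    tb = (row.total_bytes_billed or 0) / 1e12
    print(f"{row.user_email}  {tb:.2f} TB  {row.query_preview}")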

Look for queries that scan huge amounts but compute very little. These are usually SELECT * with LIMIT 10. Pure waste.

Identify which tables show up most in expensive queries. Focus your optimization work on those five tables first.
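
The same jobs metadata records which tables each query touched, so a rough hotspot query (again with an assumed region qualifier, and counting bytes once per referenced table) could look like this:

# Which tables show up most in expensive queries over the last 7 days? (sketch)
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT
  ref.dataset_id,
  ref.table_id,
  COUNT(*) AS expensive_queries,
  SUM(j.total_bytes_billed) / 1e12 AS tb_billed
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT AS j,
  UNNEST(j.referenced_tables) AS ref
WHERE j.creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY ref.dataset_id, ref.table_id
ORDER BY tb_billed DESC
LIMIT 5
"""
for row in client.query(sql).result():
    print(f"{row.dataset_id}.{row.table_id}: {row.expensive_queries} queries, {row.tb_billed:.2f} TB")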

Four Rules to Control Costs

Stop thinking in JOINs. Build nested models with STRUCT and ARRAY as your default choice. Treat JOINs as a warning sign in your final models.

Make partitioning required, not optional. Any table over 1TB must be partitioned by date. Set require_partition_filter to TRUE so queries without date filters fail immediately instead of scanning everything.
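
As an illustration (dataset, table, and column names below are invented), the DDL creates a date-partitioned table that rejects unfiltered queries, and shows how an existing table can be locked down the same way:

# Sketch: enforce partition filters on a new and an existing table.
from google.cloud import bigquery

client = bigquery.Client()

# New table: partitioned by day, unfiltered scans fail immediately.
client.query("""
CREATE TABLE `my_project.shop.orders_partitioned`
(
  order_id STRING,
  order_ts TIMESTAMP,
  customer STRUCT<id STRING, country STRING>
)
PARTITION BY DATE(order_ts)
OPTIONS (require_partition_filter = TRUE)
""").result()

# Existing partitioned table: flip the same option on.
client.query("""
ALTER TABLE `my_project.shop.orders_partitioned`
SET OPTIONS (require_partition_filter = TRUE)
""").result()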

Write cost rules into your code review process. If someone changes a high-cost table without proper filters, the code review should block them automatically.

Train your team. Show analysts the "This query will process X TB" message before they click Run. Give them a dashboard showing exactly how much their queries cost.

 

📺️ PODCAST
FinOps Weekly Summit Talks

More and more FinOps Weekly Summit sessions are getting released. We’ll complete the release next week, but you can already watch all of day 1.

 

AWS
Improve Cost Visibility with AWS Cost Categories

AWS Cost Categories is a free tool that helps businesses sort and understand their cloud spending in ways that make sense for them. Think of Cost Categories like folders on your computer.

One practical example is tracking costs by location. If your company runs services in North America, Europe, and Asia, you can create groups that show spending for each area. Instead of seeing dozens of individual regions, you get a clear picture of how much each major area costs.
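
As a rough sketch of how such a grouping could be defined programmatically, here is a boto3 call to the Cost Explorer API; the category name and the region lists are illustrative, not exhaustive:

# Sketch: a "Geography" cost category that rolls AWS regions up into three areas.
import boto3

ce = boto3.client("ce")
ce.create_cost_category_definition(
    Name="Geography",
    RuleVersion="CostCategoryExpression.v1",
    Rules=[
        {"Value": "North America",
         "Rule": {"Dimensions": {"Key": "REGION", "Values": ["us-east-1", "us-west-2", "ca-central-1"]}}},
        {"Value": "Europe",
         "Rule": {"Dimensions": {"Key": "REGION", "Values": ["eu-west-1", "eu-central-1"]}}},
        {"Value": "Asia",
         "Rule": {"Dimensions": {"Key": "REGION", "Values": ["ap-southeast-1", "ap-northeast-1"]}}},
    ],
    DefaultValue="Other",
)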

You can also combine different ways of looking at costs. For instance, you might want to see how much each team spends in each region. By using multiple rules together, you can create views that answer specific questions about your spending.

Sometimes costs cannot be easily assigned to one group. Support fees or data transfer charges might benefit multiple teams. The split charges feature solves this by dividing these shared costs fairly. You can split them evenly, by fixed percentages, or based on how much each group actually uses.
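
Here is a hedged sketch of how split charges could be wired into a category definition with boto3; the team names, tag keys, and the proportional method are just examples:

# Sketch: split a shared "Platform" bucket across two teams in proportion to
# each team's own spend.
import boto3

ce = boto3.client("ce")
ce.create_cost_category_definition(
    Name="Team",
    RuleVersion="CostCategoryExpression.v1",
    Rules=[
        {"Value": "Checkout",
         "Rule": {"Tags": {"Key": "team", "Values": ["checkout"]}}},
        {"Value": "Search",
         "Rule": {"Tags": {"Key": "team", "Values": ["search"]}}},
        {"Value": "Platform",
         "Rule": {"Tags": {"Key": "team", "Values": ["platform"]}}},
    ],
    SplitChargeRules=[
        {"Source": "Platform",
         "Targets": ["Checkout", "Search"],
         "Method": "PROPORTIONAL"},
    ],
)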

 

FINOPS CULTURE
Why Context Is Relevant in FinOps (and in everything)

A vice president of engineering gets a message saying everything's okay. Some days that brings relief. Other days it means almost nothing. The same words carry different weight depending on what happened before. That difference is context.

Context is all the unsaid information that makes words meaningful. It turns raw data into something useful. It's the gap between getting an answer and getting the right answer for your specific situation.

When someone asks if a report is ready, the question only works if you know what's happening around it. Are they about to walk into a big meeting? Have they been working late all week? Is this routine or urgent? Without knowing these things, you're just guessing what they really need.

Context shows up in four main ways: Human context includes language, culture, and relationships. Organizational context covers power structures, money, and rules. Technical context deals with systems and how they connect. Physical context includes location, time zones, and available resources.

Most tools fail because they assume everyone needs the same thing. They give correct answers that don't actually help because they ignore the person's specific world. That's how something technically right becomes practically useless.

Context is really about encoding understanding into systems. It's what turns basic operations into real customer care. It's the difference between saying the data exists somewhere and saying here's what matters to you and why.

 

🎖️ MENTION OF HONOUR
Executives from Top Finance Firms Share FinOps Tips

Big financial companies like Morgan Stanley, Vanguard, and Capital One have moved their computer systems to the public cloud. These companies found out that using the cloud can actually cost more money if you are not careful about how much you use it. Here is what these companies are doing to keep costs down:

Vanguard created a special team to watch cloud spending. They make a weekly list of the top 10 apps that use the most cloud power. They even give badges to workers who save money.

Wells Fargo holds daily meetings to check a special screen that shows where money is being spent. They also turn off systems at night when no one needs them, like shutting off lights in an empty building.

Morgan Stanley buys cloud services in bulk for one to three years at a time to get better prices. They also changed how some of their apps work so they use less cloud power. One change cut their bill in half.

Capital One uses a tool that moves old files to cheaper storage spaces. This cut their storage costs by 35 percent.

All these companies say the same thing matters most. The money people and the tech people need to work together and talk to each other. Before, they worked separately. Now they share the job of watching costs.

 

PROFESSIONAL SPOTLIGHT
Wallid Battou

AWS Community Optimizer

Walid gave a masterclass on optimization for EKS, EC2, and Fargate at our summit. He’s been helping the community from the beginning and is a great person to reach out to!

 

Learn Hands-On FinOps

We’ve created Learn FinOps Weekly because we believe that hands-on learning is the way to make a real impact on IT budgets.

With courses made by FinOps professionals and well-known authors in the industry.

Carefully curated with the FinOps Weekly team to align with our actionable way of doing things.

We have a limited time offer for the launch that ends today.

Please take advantage as we are increasing pricing in the coming days.

Let’s learn actionable FinOps together

See you all inside!