Videogame FinOps Practices

How you can actually do FinOps, even for video games

Presented by

Want to appear here? Talk with us

Together with DoIt Cloud Intelligence
Get Ahead Of These 4 Common FinOps Mistakes

FinOps teams are stretched thin, trying to control cloud spend, model costs, and scale impact.

 

GOOGLE CLOUD
Running a Video Game Server on GKE: Efficient Budget Server Setup

A software engineer who loves the factory-building game Factorio wanted to host a multiplayer server for friends without relying on someone's personal computer always being online. The solution was moving the game server to Google Cloud using GKE Autopilot, with some smart cost-saving tricks.

The Setup Process:

First, create a GKE Autopilot cluster that manages server resources automatically. This means no manual work with virtual machines or disks.

Next, use Kubernetes ConfigMaps to store game settings and mod configurations. The author shared their mod list including helpful tools like calculators and quality-of-life improvements.
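A ConfigMap for this kind of setup might look like the sketch below. The names, keys, and values are illustrative assumptions, not the author's exact manifest (Factorio's real `server-settings.json` has many more fields):

```yaml
# Hypothetical sketch of a ConfigMap holding Factorio server settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: factorio-config
data:
  server-settings.json: |
    {
      "name": "Friends Factorio Server",
      "autosave_interval": 10
    }
  mod-list.json: |
    {
      "mods": [
        {"name": "base", "enabled": true}
      ]
    }
```

An init container can then copy these files into the game's data directory on the persistent disk before the server starts.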

The main deployment uses a container image of the Factorio headless server. It includes init containers that copy configurations and can delete old save files when starting fresh.

Money-Saving Features:

The setup uses Spot VMs, which cost much less but can be shut down at any time. Since Factorio auto-saves regularly, this works fine in practice.

An automated scheduler turns the server off at midnight and back on at 5 PM every day. This cuts the monthly bill by about one-third.

Players can manually turn the server on or off using simple kubectl commands when needed.
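The manual on/off switch amounts to scaling the Deployment between zero and one replicas. The deployment name `factorio` below is an assumption for illustration, not necessarily the author's:

```shell
# Stop the server (the persistent disk keeps the saves)
kubectl scale deployment factorio --replicas=0

# Start it again when players want to join
kubectl scale deployment factorio --replicas=1
```

The automated scheduler can run these same commands on a timer, for example from a Kubernetes CronJob.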

The server uses a persistent disk to store game saves and configurations. Resource requests start at 2 CPU cores and 4GB memory but can be adjusted as factories grow larger. A LoadBalancer provides an external IP address that players use to connect to the game.

The author notes this setup has been reliable for multiplayer sessions and saves money compared to running a dedicated server 24/7.

 

AWS
Automate AWS Cost Allocation with Lambda & EventBridge

Managing cloud costs across different departments can be tricky when you have multiple AWS accounts. While AWS Organizations helps group accounts together, it's hard to see how much each department spends without a good system in place.

The Setup

You organize your AWS accounts into organizational units, with one for each department. Then you create a small program using AWS Lambda that runs automatically at the end of each month. This program looks at all your department groups and creates cost reports for each one.

The Process

Amazon EventBridge acts like an alarm clock, telling the Lambda program to run on the last day of each month. The program then checks which accounts belong to each department and creates cost categories with names like "OU-Marketing" or "OU-Engineering." If these categories already exist, it updates them with any new accounts.
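The core of that Lambda can be sketched in Python. The boto3 calls below are real Organizations and Cost Explorer APIs, but the structure, names, and single-page listing are illustrative assumptions, not the article's exact code:

```python
# Sketch of the monthly Lambda: build one cost-category rule per
# organizational unit, then create the cost category.
# OU names and account IDs below are hypothetical examples.

def build_rules(ou_accounts):
    """Map {"Marketing": ["111111111111", ...]} to Cost Categories rules."""
    return [
        {
            "Value": f"OU-{ou_name}",
            "Type": "REGULAR",
            "Rule": {
                "Dimensions": {
                    "Key": "LINKED_ACCOUNT",
                    "Values": account_ids,
                }
            },
        }
        for ou_name, account_ids in sorted(ou_accounts.items())
    ]

def lambda_handler(event, context):
    # Import inside the handler so the pure logic above stays testable
    # without an AWS environment.
    import boto3

    org = boto3.client("organizations")
    ce = boto3.client("ce")

    # List top-level OUs and their member accounts.
    # (Single page for brevity; a real function would paginate.)
    root_id = org.list_roots()["Roots"][0]["Id"]
    ou_accounts = {}
    ous = org.list_organizational_units_for_parent(ParentId=root_id)
    for ou in ous["OrganizationalUnits"]:
        accounts = org.list_accounts_for_parent(ParentId=ou["Id"])["Accounts"]
        ou_accounts[ou["Name"]] = [a["Id"] for a in accounts]

    # If the category already exists, the article notes it gets updated
    # instead: use ce.update_cost_category_definition with its ARN.
    ce.create_cost_category_definition(
        Name="DepartmentCosts",
        RuleVersion="CostCategoryExpression.v1",
        Rules=build_rules(ou_accounts),
    )
```

With the category in place, Cost Explorer can group spend by the `DepartmentCosts` category values.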

The Benefits

Once set up, you can create monthly reports in AWS Cost Explorer that show exactly how much each department spent. You can save these reports and use them every month to bill departments for their cloud usage. The system also shows which specific accounts within each department spent the most money.

Getting Started

The solution requires six main steps: creating the Lambda function, setting it up with the right code, giving it permission to read your organization data, scheduling it to run monthly, checking that the cost categories work correctly, and finally setting up your monthly reports.

 

FINOPS EVENTS
FinOps Weekly Summit 2025

23rd of October, 2025

Stop Chasing Cloud Costs. Start Driving Business Value.

The FinOps Weekly Summit 2025 is the one online event this year designed to give you the actionable playbooks you need to transform your FinOps practice. Move from reactive fire-fighting to strategic leadership by learning directly from the world’s top experts on unit economics, automation, and governance.

Become:

  • A known brand by Sponsoring the event

  • A FinOps Expert by Submitting your talk to the event.

  • A Real Practitioner by Learning from FinOps Experts attending the event.

This is where the future of FinOps is decided.

Don’t get left behind.

23rd October - 4:00 PM CEST / 10:00 AM EDT

AWS
Cost Management in Amazon SageMaker Unified Studio

When you create projects in SageMaker Unified Studio, AWS automatically adds special tags to all the resources you use. These tags work like labels that help you track which costs belong to which project or team.

Setting Up Cost Tracking

You need to turn on cost allocation tags in your AWS billing settings. The main tags to activate are AmazonDataZoneDomain, AmazonDataZoneProject, AmazonDataZoneEnvironment, and AmazonDataZoneBlueprint. These tags get added automatically when you create projects.
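Activating those tags can also be scripted through the Cost Explorer API. This is a minimal sketch, assuming the tags already appear in your billing data (tags only become activatable after AWS has seen them on a resource):

```python
# Sketch: activate the SageMaker Unified Studio cost allocation tags
# via the Cost Explorer UpdateCostAllocationTagsStatus API.

DATAZONE_TAG_KEYS = [
    "AmazonDataZoneDomain",
    "AmazonDataZoneProject",
    "AmazonDataZoneEnvironment",
    "AmazonDataZoneBlueprint",
]

def activation_payload(tag_keys):
    """Build the status list expected by UpdateCostAllocationTagsStatus."""
    return [{"TagKey": key, "Status": "Active"} for key in tag_keys]

if __name__ == "__main__":
    import boto3

    ce = boto3.client("ce")
    ce.update_cost_allocation_tags_status(
        CostAllocationTagsStatus=activation_payload(DATAZONE_TAG_KEYS)
    )
```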

Viewing Costs with Cost Explorer

Cost Explorer is AWS's built-in tool for looking at your spending. You can filter by project tags to see exactly how much each SageMaker project costs. You can also break down costs by different AWS services like Amazon Redshift or other tools your project uses.
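The same per-project breakdown is available programmatically via the `GetCostAndUsage` API. The dates and metric below are illustrative:

```python
# Sketch: last month's cost per SageMaker project, grouped by the
# AmazonDataZoneProject cost allocation tag.

def cost_by_project_request(start, end, tag_key="AmazonDataZoneProject"):
    """Build the GetCostAndUsage request that groups spend by project tag."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "TAG", "Key": tag_key}],
    }

if __name__ == "__main__":
    import boto3

    ce = boto3.client("ce")
    response = ce.get_cost_and_usage(
        **cost_by_project_request("2025-09-01", "2025-10-01")
    )
    for group in response["ResultsByTime"][0]["Groups"]:
        print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```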

Advanced Cost Analysis with Data Exports

For more detailed cost analysis, you can use AWS Data Exports with Amazon Athena. This lets you write SQL queries to examine your costs in detail. You can see spending patterns, compare projects, and get specific cost breakdowns by service type.
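A query in that style might look like the sketch below. The table name, partition column, and tag-map key are assumptions about a CUR 2.0-style Data Exports schema; adjust them to your own export:

```sql
-- Illustrative Athena query against a Data Exports (CUR 2.0-style) table.
-- Table and column names are assumptions, not a fixed schema.
SELECT
  resource_tags['user_amazon_data_zone_project'] AS project,
  product_servicecode                            AS service,
  SUM(line_item_unblended_cost)                  AS cost
FROM cost_and_usage_report
WHERE billing_period = '2025-09'
GROUP BY 1, 2
ORDER BY cost DESC;
```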

Creating Dashboards with QuickSight

If you want visual reports, you can connect Amazon QuickSight to your cost data. This creates interactive dashboards showing cost trends, project comparisons, and spending alerts. Finance teams can use these dashboards to track budgets and send automated reports to stakeholders.

Key Benefits

This approach helps organizations understand exactly where their AI and analytics spending goes. Teams can see which projects cost the most, track spending over time, and make better budget decisions. It also makes it easier to charge costs back to the right departments or business units.

 

CLOUD PROVIDERS
AWS & Me Are the Only Ones Working!?

AWS

EKS Clusters Scale Up Massively
Amazon EKS now supports up to 100,000 worker nodes in a single cluster - a huge scalability leap! From a FinOps perspective, this enables workload consolidation into fewer, larger clusters, simplifying management and improving resource utilization across your infrastructure.

S3 Gets Multiple Cost-Saving Updates
AWS reduced pricing for S3 metadata changes - now much cheaper than full upload requests. Perfect for data tagging and lifecycle management. Additionally, S3 Tables introduced features that reduce data compaction costs, lowering operational overhead for data lakes.

AI Integration Comes to S3
S3 now previews native vector search capabilities for AI applications. This could be significantly more cost-effective than maintaining separate specialized vector databases for similarity searches and embeddings.

Smarter Cost Monitoring
AWS Cost Anomaly Detection upgraded its ML models for better accuracy, reducing false alarms while understanding seasonal patterns and growth trends.

Free Tier Becomes More Flexible
AWS replaced specific service limits with a $75 monthly credit system, offering greater flexibility for experimenting with various services.

COST MANAGEMENT
iFood's Epic Cloud Tagging Success Story

iFood, the Brazilian food delivery giant, cracked the code on cloud cost management by achieving something most companies only dream of: 98% cloud resource tagging coverage.

But here's what makes their story special. They didn't just solve the tagging problem. They built what they call a "universal identifier system" that connects everything from cloud costs to incident response to software billing.

The secret sauce was treating tags as metadata rather than just labels. Their system uses a simple format called "owner-layer-slug" that creates a unique fingerprint for every resource. This means when something breaks at 3 AM, the system knows exactly who to call. When the monthly cloud bill arrives, it knows exactly which team spent what.

The transformation wasn't instant. Like many companies, iFood started in what they call the "Dark Ages of Manual Tagging" where engineers had to remember to add tags by hand. Spoiler alert: humans forget things, especially when they're rushing to fix production issues.

Their breakthrough came from automation. Instead of asking people to tag resources manually, they built systems that tag everything automatically based on who created it and where it lives in their infrastructure. The tags become part of the deployment process, not an afterthought.

📺️ VIDEO
Expert Masterclass: FOCUS Success Case in Azure

In this Masterclass, we teach you how to design an event-driven architecture based on Azure Functions that processes billing information and divides costs using a combination of general rules and specific allocations that can be managed and preset in advance, using FOCUS and Power BI.

 

🎖️ MENTION OF HONOUR
FinOps for AI: A Practical Guide

AI is changing how companies spend money on cloud services, and traditional cost management isn't keeping up.

Companies are rushing to add AI features to their products, but AI costs work differently than regular cloud expenses. Instead of just paying for servers and storage, you now pay for tokens, API calls, and GPU time. The problem is that AI costs can be hard to predict and control. The solution is a three-step approach:

Step 1 - Make it visible: Start by tracking what AI tools your teams are using and how much they cost. Tag everything related to AI projects. Set up alerts when spending jumps unexpectedly.

Step 2 - Add accountability: Show teams how much their AI experiments cost. Separate research projects from live products. Start predicting future costs based on past token usage.

Step 3 - Connect costs to results: Track how much each AI feature costs per customer or per use. Optimize prompts to use fewer tokens. Build cost checks into your development process.

The key practices include setting limits on API usage, caching common responses to avoid repeat calls, and regularly comparing prices between different AI providers.

 

Professional Spotlight
Tania Fedirko

FinOps Full-Stack Expert

She’s one of the most active members of our community, and genuinely helpful, sharing knowledge like the AI guide above. Turn on notifications for her profile and you’ll be learning every week!

That’s all for this week. See you next Sunday!

Join The Largest FinOps Online Event of 2025

Join FinOps professionals at the FinOps Weekly Summit 2025 and discover how to:

Transform from reactive fire-fighting to strategic leadership — Learn the proven frameworks that top practitioners use to turn cloud cost management into a competitive advantage

Master AI-powered cost optimization — Get exclusive access to the latest automation tools and techniques that can reduce your cloud spend by up to 40% while accelerating innovation

Build bulletproof unit economics — Walk away with actionable playbooks for calculating true cloud ROI and proving business value to executives who control your budget

"But I don't have time for another webinar..."

This isn't another generic webinar. This is the only major FinOps event left in 2025, featuring battle-tested strategies from companies managing billions in cloud spend.

October 23rd, 2025 | 4:00 PM - 8:00 PM CEST

Limited seats available