Avoiding unpredictable cloud CPU costs.
When SiteMana onboarded a large new publisher, our infrastructure load surged overnight. Every visitor page view flowed directly into our real-time ingestion pipeline, and the surge quickly exhausted the CPU credits on our x86-based AWS t3.medium instances. As a result, performance was throttled at the exact moment we needed stability most. We quickly realized our system was not just scaling; it was breaking.
SiteMana needed a solution that could handle 20–100 million daily visitor events, scale efficiently, and support real-time TensorFlow inference to predict user purchase intent.
SiteMana provides identity resolution and real-time purchase-intent prediction for e-commerce brands and publishers, so any delay in processing translates directly into lost opportunities and revenue. Unpredictable CPU costs from AWS credit overages made budgeting difficult and hurt operational efficiency.
We migrated our real-time ingestion and ML inference workloads to Arm-based AWS Graviton3 (m7g.medium) instances. Arm provides a CPU architecture optimized for consistent performance, predictable costs, and improved network throughput.
Steps we followed:
Challenge: Ensure runtime compatibility across architectures.
Solution: Use an AWS Application Load Balancer (ALB) to shift live traffic to the Arm instances incrementally, validating compatibility and performance parity without downtime (a routing sketch follows below).
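For illustration, the sketch below shows one way to do this kind of incremental shift with ALB weighted target groups via boto3. The ARNs, the shift_traffic helper, and the 10% starting weight are hypothetical placeholders, not our production setup.

```python
# Minimal sketch of weighted ALB routing between an existing x86 target
# group and a new Graviton target group. All ARNs below are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/..."            # placeholder
X86_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/x86/..."       # placeholder
ARM_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/graviton/..."  # placeholder

def shift_traffic(arm_weight: int) -> None:
    """Send `arm_weight` percent of live traffic to the Arm target group."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": X86_TG_ARN, "Weight": 100 - arm_weight},
                    {"TargetGroupArn": ARM_TG_ARN, "Weight": arm_weight},
                ]
            },
        }],
    )

# Start with a small canary slice, then ramp up as metrics stay healthy.
shift_traffic(10)
```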
Challenge: Reduce latency concerns with TensorFlow inference.
Solution: Tune TensorFlow batch sizes and threading configuration for the single-vCPU instances, and rely on the Arm Neon SIMD instructions used by TensorFlow's aarch64 builds to reduce latency and ultimately outperform x86 (see the sketch below).
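As a rough illustration, the following minimal sketch shows the TensorFlow knobs involved; the thread counts and the intent_model path are assumptions for the example, not our exact production configuration.

```python
# Sketch of threading and batching tuning for single-vCPU Graviton instances.
# Values below are illustrative, not production settings.
import tensorflow as tf

# m7g.medium exposes a single vCPU, so keep intra-op parallelism minimal and
# avoid oversubscribing the core with inter-op threads. These must be set
# before TensorFlow executes any ops.
tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)

model = tf.keras.models.load_model("intent_model")  # hypothetical model path

def predict_batch(events):
    # Batching amortizes per-call overhead; the Neon SIMD kernels in the
    # aarch64 TensorFlow build handle the heavy lifting inside the matmuls.
    batch = tf.convert_to_tensor(events, dtype=tf.float32)
    return model(batch, training=False)
```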
After migration, SiteMana experienced significant operational improvements:
x86 (t3.medium) vs Arm (m7g.medium)

Metric | x86 (t3.medium) | Arm (m7g.medium) | Improvement
CPU Architecture | Variable performance via credits | Consistent sustained CPU | Predictable, no throttling
vCPU Count | 2 | 1 | More efficient CPU usage
Network Bandwidth | Up to 5 Gbps | Up to 12.5 Gbps | 2.5× increase
CPU Credit Throttling | Frequent under load | None | Eliminated completely
Inference Latency (p95) | 29 ms | 25 ms | ~14% faster inference
On-demand Cost (hourly, USD) | $0.0416 | $0.0408 | ~2% lower per hour
Monthly Infrastructure Cost (20 instances, 100M events/day) | $800 (with credit overages) | $596 (no credit model) | ~25% monthly savings
Infrastructure Complexity | Separate instances for ingestion & ML | Single instance for both workloads | Significant simplification
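The monthly figures are consistent with the hourly rates: 20 instances × $0.0408/hour × ~730 hours/month ≈ $596 for Arm, versus a base of roughly $607 for x86 before the credit overages that pushed the actual bill to about $800.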
Our success with Arm-based AWS Graviton3 has led us to explore other Arm-based instance types for additional services and workloads. We encourage teams considering a similar transition to start testing and evaluate the benefits directly.