Deck the Halls with Walls of Metrics
For some, the holiday season is a time to show gratitude. Others spend time with those they hold dear. Maybe it’s a time to reflect on the previous year and look at the year ahead, or just to clean out the house and donate all that stuff you didn’t need. These are all great reasons for the season. Here at Skytap, we’d like to take this opportunity to look at some of the colored lights that “deck our halls” in the form of Skytap customer metrics.
Customers who visit us often stop in the halls to gaze at the Skytap monitors that track global usage of our services across all cloud platform providers. Initially, these dashboards monitored our Seattle data center to ensure that our capacity aligned as closely as possible with customer usage patterns. The end of the year is a great time to review and reflect on what our needs will be for the coming year.
With the growth of Skytap came more global data centers, new innovations behind the scenes to expand elastically to meet capacity, and environment management running atop third-party cloud infrastructures like AWS and SoftLayer. Now there are many more critical variables we need to watch daily, and the need to automate how we gather this data and send notifications is greater than ever.
Aside from the need for production monitoring, it is also interesting to study the usage data for patterns in how Skytap customers leverage our cloud environments for software development and testing on a daily, monthly, or annual basis.
Take a peek, for instance, at some 30-day usage metrics for a typical large enterprise development customer (name redacted, of course). You can see a lot of interesting trends in this first set of graphs. Notice how each work week’s concurrency and usage of SVMs (Skytap Virtual Machines; our unit of measure for VMs plus CPUs used) looks like a left hand jutting up from the bottom as people spin up environments and run their daily tests: four fingers for the Monday-through-Thursday peaks, a smaller thumb for Friday, then a weekend lull.
Pretty cool stuff. Many customers who get into a rhythm of daily build/test exercises show this same weekly pattern. Then we look toward the end of the month: it’s interrupted by holidays, and SVM usage drops off by more than 50% over the last two weeks of the year. This, too, is a consistent trend from year to year for many customers.
Note how this change doesn’t affect the customer’s overall storage quota (lower right) as drastically as you’d expect. Yes, it looks like a cliff, but the drop is only from 270TB to around 240TB. This indicates that even when usage is not active, storage requirements remain relatively stable, as long as the company is still holding onto its reserve of specific environments.
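To put those two drops side by side, here is a quick back-of-the-envelope check using the figures read off the graphs (270TB falling to roughly 240TB), sketched in Python:

```python
def pct_drop(before: float, after: float) -> float:
    """Percentage decrease from `before` to `after`."""
    return (before - after) / before * 100

# Storage only falls from ~270 TB to ~240 TB over the holidays...
storage_drop = pct_drop(270, 240)
print(f"Storage drop: {storage_drop:.1f}%")  # about an 11% decline

# ...while SVM usage drops by more than half in the same window.
print(f"SVM usage drop: more than {50}%")
```

An 11% dip in storage against a 50%-plus dip in running SVMs is exactly the "cliff that isn't" described above.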
One takeaway from this view might be that the company could better manage its quota of environments for its lab users, but that’s not entirely practical in many shops. Yes, they should clean out a few dusty instances, and ensure they are setting policies to automatically suspend or delete some systems. But a purge is seldom in order if things are going right: when work resumes in 2015, those teams will want to be ready to roll and spin up their last good lab configurations in an instant. In general, you see storage quotas growing incrementally over time when a company is managing active development, integration, training, and support projects.
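The idle-environment policy mentioned above can be sketched as a simple set of rules. This is a hypothetical illustration only (Skytap's actual auto-suspend settings are configured in the product; the function and threshold names here are made up for the example): suspend anything idle past one threshold, and flag it for deletion past a longer one.

```python
from datetime import datetime, timedelta

# Illustrative thresholds, not Skytap defaults.
SUSPEND_AFTER = timedelta(days=7)
DELETE_AFTER = timedelta(days=30)

def policy_action(last_used: datetime, now: datetime) -> str:
    """Decide what a hypothetical idle-environment policy would do."""
    idle = now - last_used
    if idle >= DELETE_AFTER:
        return "flag-for-deletion"
    if idle >= SUSPEND_AFTER:
        return "suspend"
    return "leave-running"

now = datetime(2014, 12, 22)
print(policy_action(datetime(2014, 12, 20), now))  # idle 2 days  -> leave-running
print(policy_action(datetime(2014, 12, 1), now))   # idle 21 days -> suspend
print(policy_action(datetime(2014, 11, 1), now))   # idle 51 days -> flag-for-deletion
```

Suspending rather than deleting is what preserves those "last good lab configurations" so teams can spin them back up in an instant when work resumes.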
Outside of the technical support and product/feature information requests we receive, customers often ask us for advice about best practices for quota allocation and management. “How can I budget for the right amount of capacity to cover my changing needs throughout the next year, and how can I maximize the efficiency of that capacity?”
So as 2015 progresses, expect more from Skytap on this topic, because we believe it is a great indicator of how companies not only perceive costs, but receive value from our products.