**GOTCHA:** This is, in fact, on Tuesday due to Monday's holiday. Please make a note!
We have an amazing doubleheader tonight with XebiaLabs and Expero.
Case Study: Comprehensive Scaling and Capacity Planning with k8s, Terraform, Ansible and Gatling
Now that shared-nothing NoSQL solutions are widely available in the marketplace, deploying clusters of almost any size is both practical and necessary. But how quickly do your clusters need to expand (or contract), and how can you be sure that you’ve properly planned and budgeted for the deluge of traffic after your new product announcement?
In this case study, we’ll walk you through how we set up a dynamically scaling cluster management solution that can add or remove capacity as traffic demands. All of this is informed by a sophisticated test harness used to game out possible and probable scenarios.
Topics that will be covered are:
1. Deploying complex configuration dynamically within Ansible and Terraform scripts
2. Automating a fluctuating test harness on Kubernetes to model spikes and plateaus in read and write traffic patterns
3. Keeping a healthy ecosystem through monitoring and proactive maintenance
4. Push button provisioning for handling “Black Friday” type spikes in demand
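Capacity planning of the kind outlined above ultimately reduces to a simple calculation once load tests have established per-node throughput. The sketch below is a hypothetical back-of-envelope estimator, not material from the talk; the `nodes_needed` helper and all the numbers in it are illustrative assumptions.

```python
import math

def nodes_needed(peak_rps: float, rps_per_node: float, headroom: float = 0.3) -> int:
    """Estimate cluster size for a traffic peak, keeping spare capacity.

    peak_rps: forecast peak requests/sec (e.g. from a Gatling load test)
    rps_per_node: measured sustainable throughput of one node
    headroom: fraction of capacity held in reserve (0.3 = 30%)
    """
    if peak_rps <= 0:
        return 1
    # Each node is only allowed to run at (1 - headroom) of its benchmark.
    effective_rps = rps_per_node * (1 - headroom)
    return max(1, math.ceil(peak_rps / effective_rps))

# Example: a "Black Friday" spike of 12,000 req/s against nodes
# benchmarked at 1,500 req/s each, with 30% headroom:
print(nodes_needed(12_000, 1_500))  # -> 12
```

An estimate like this is what a push-button provisioning script (Terraform plus Ansible, in the talk's stack) would consume as its desired node count.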
Brian Hall leads the Graph and Analytics Practice at Expero, with consulting expertise across a wide array of graph engines including JanusGraph, DataStax, Neo4j, TigerGraph, Neptune, and CosmosDB.
Smarter DevOps: How Data is Driving the Next Wave in DevOps Intelligence
The data within your DevOps pipeline has limitless potential. It may highlight security issues, identify causes of release failures, or pinpoint ways to improve efficiency. So why does so much of the power of that data remain untapped? Harvesting DevOps data holds the key to continuous improvement in software delivery. This data will give organizations more meaningful metrics and KPIs, insights into areas where processes can be improved, and value through increased efficiency. Advances in machine learning technologies offer new opportunities to extract significant value from this data.
So, what benefits can users expect?
Tracking application delivery: Visibility at each stage of the release pipeline allows users to identify problems wherever they occur and resolve them quickly.
Automated governance: Automating security, compliance, and audit data gathering decreases time spent on repetitive labor-intensive activities, allowing DevOps teams to focus on app development.
Ensuring application quality: Advances in data analysis can now find subtle issues and trends, ensuring that releases consistently achieve desired quality.
Predicting and preventing failures: Past behavior is the truest predictor of future outcomes. AI/ML techniques analyze the conditions surrounding past release failures to predict similar problems in the future, so at-risk releases can be identified and remediated before they fail, saving precious time and money.
Analyzing business impact: Improvements in efficiency translate into improved business results. Identifying key bottlenecks and areas for improvement provides tangible business value that can be quantified and tracked.
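As a rough illustration of the failure-prediction idea, the sketch below trains a tiny logistic-regression model on release metadata. Everything here is a hypothetical assumption for illustration: the features, the fabricated history, and the `risk` helper are not anything XebiaLabs has described.

```python
import math

# Fabricated release records for illustration only:
# (lines_changed, failed_ci_tests, off_hours_deploy) -> did the release fail?
HISTORY = [
    ((120, 0, 0), 0), ((900, 3, 1), 1), ((40, 0, 0), 0),
    ((1500, 5, 1), 1), ((300, 1, 0), 0), ((1100, 4, 0), 1),
    ((60, 0, 1), 0), ((2000, 6, 1), 1),
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(history, lr=0.01, epochs=2000):
    # Scale features roughly into [0, 1] so one learning rate fits all.
    scaled = [((lc / 2000, ft / 6, oh), y) for (lc, ft, oh), y in history]
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in scaled:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss for this sample
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def risk(w, b, lines_changed, failed_tests, off_hours):
    x = (lines_changed / 2000, failed_tests / 6, off_hours)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(HISTORY)
print("small, clean release:", round(risk(w, b, 100, 0, 0), 2))
print("big release with CI failures:", round(risk(w, b, 1800, 5, 1), 2))
```

A real pipeline would of course use richer features and far more history; the point is only that release metadata already sitting in the pipeline is enough to rank releases by risk before they ship.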
Join us to learn what the future holds for DevOps intelligence.
Dan Beauregard is VP, Cloud & DevOps Evangelist at XebiaLabs. Dan has held senior director roles in both solution engineering and product management at various cloud-disrupting companies (Scalr, OneCloud, and DynamicOps, which was acquired by VMware), where he helped transform the Cloud Management Platform market by guiding organizations in their adoption of cloud technologies. He brings to XebiaLabs years of experience in both software development and cloud computing, and he is an AWS Certified Solutions Architect.