IT Briefcase Exclusive Interview: Application Delivery in a Modern Network Infrastructure
July 13, 2017
With Simon Jones, Evangelist and Director of Marketing Communications at Cedexis
The modern network isn’t all based in a static data center. Rather, it combines self-managed data centers, clouds for storage and computation, and content delivery networks (CDNs) for delivery. Where once a monolithic application delivery controller (ADC) acted as the master decision-maker for all load balancing, today organizations increasingly use software-based ADC solutions, caching servers, and open-source local traffic managers like NGINX.
Which is all to say: love it or hate it, today we live in a world of hybrid infrastructures. However hard we may try, we cannot get away without combining the best of private and public infrastructure. Few, if any, modern technology services are served through all self-managed hardware (and even the behemoths building de facto private clouds and CDNs must often manage hardware that spans numerous co-locations and POPs, where the actual hardware is outside their physical control).
Simon Jones, Evangelist and Director of Marketing Communications at Cedexis, outlines the future of application delivery and, specifically, how to address the challenges of managing multiple sites and infrastructure types.
- Q. Where do you envision application delivery heading in the coming years?
A. Application delivery is headed out of the data center and into the cloud. The applications we deliver today are inevitably stored, managed, computed, and served across a range of locations, virtual and physical, and a hardware solution that can see only within the four walls surrounding it simply can’t get the job done. Application delivery is not only going to need to be out in the cloud, monitoring all the pieces of the infrastructure; it’s also going to need to deliver real consumer feedback at the point of consumption. Today’s application user doesn’t care what the load on the back-end is; they just want their service to work.
- Q. What are the benefits of software-defined, cloud-native application delivery?
A. They are two-fold. First, software-defined application delivery is boundaryless: it can monitor, analyze, interact with, and ultimately control architectural elements regardless of where they are in the topology. This means making traffic-routing decisions that consider actual capacity, contractual limits that might make spinning up a new instance cost-inefficient, and much more. Second, it enables real-time user experience feedback, ensuring that the health of the infrastructure is balanced against customer experience – neither has primacy over the other, and so a proper equilibrium can be found.
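The balancing act described above can be sketched in a few lines. This is an illustration only, not Cedexis code: the endpoint names, the 200 ms latency budget, and the 50/50 weighting are all assumptions, chosen to show how backend load and real-user latency might be blended into one routing score with neither given primacy.

```python
# Illustrative sketch only -- not Cedexis's engine. Assumes each
# candidate endpoint reports a backend load factor (0..1) and a
# recent real-user latency in milliseconds.

def routing_score(load: float, rum_latency_ms: float,
                  load_weight: float = 0.5) -> float:
    """Blend infrastructure health and user experience (lower is better)."""
    # Normalize latency against a nominal 200 ms budget (assumed).
    latency_term = rum_latency_ms / 200.0
    return load_weight * load + (1.0 - load_weight) * latency_term

def pick_endpoint(candidates: dict) -> str:
    """Choose the candidate with the lowest blended score."""
    return min(candidates,
               key=lambda name: routing_score(*candidates[name]))

endpoints = {
    "cdn-a": (0.9, 80.0),   # heavily loaded, but fast for this user
    "cdn-b": (0.3, 120.0),  # lightly loaded, slightly slower
}
print(pick_endpoint(endpoints))  # -> cdn-b
```

Shifting `load_weight` toward 1.0 protects the infrastructure at the expense of user experience, and toward 0.0 the reverse; the equilibrium the answer describes lives in between.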
- Q. How can a provider ensure that each user receives a high quality of experience (QoE)?
A. While that greatly depends on the type of application, and the content within it, the high-level reality is that providers must have direct feedback from the user’s point of consumption, coupled with an automated and intelligent system that makes adjustments on the fly. It is insufficient to monitor without action, or to make changes without real-time data to guide decisions – both are necessary to deliver on the promise of QoE.
- Q. With so many locations (many of them virtual and essentially placeless), how can a provider ensure that each piece of the puzzle is working efficiently? Or that each request for resources is efficiently and cost-effectively selected from the myriad possibilities?
A. A cloud-native application delivery platform receives data from the myriad locations – synthetic monitoring data, information from local load balancers, real user measurements, etc. – and uses it to make decisions that reflect the immediate reality. This is then provided back to the locations in a closed feedback loop, which ensures each delivery decision is not only informed by all those that went before, but that its outcome informs all future decisions.
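A minimal sketch of the closed feedback loop described above, under stated assumptions: it is one plausible shape for such a loop, not Cedexis’s implementation. Each routing decision uses latency estimates built from past measurements, and each delivery’s observed outcome is folded back in (here via an exponentially weighted moving average) so that it informs all future decisions.

```python
# Hypothetical closed feedback loop for traffic routing.
# Endpoint names, the 100 ms neutral prior, and alpha are assumptions.

class FeedbackRouter:
    def __init__(self, endpoints, alpha=0.3):
        # Start every endpoint with a neutral latency estimate (ms).
        self.scores = {name: 100.0 for name in endpoints}
        self.alpha = alpha  # weight given to the newest measurement

    def choose(self) -> str:
        """Decide: route to the endpoint with the lowest estimate."""
        return min(self.scores, key=self.scores.get)

    def report(self, endpoint: str, measured_ms: float) -> None:
        """Close the loop: fold the observed latency back into the score."""
        old = self.scores[endpoint]
        self.scores[endpoint] = (1 - self.alpha) * old + self.alpha * measured_ms

router = FeedbackRouter(["cloud-a", "cdn-b"])
router.report("cloud-a", 300.0)  # cloud-a degrades...
print(router.choose())           # ...so traffic shifts to cdn-b
```

A real platform would feed in synthetic monitoring and local load-balancer data alongside real user measurements, but the loop structure – decide, observe, update, decide again – is the same.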
- Q. What is specifically needed to operate a hybrid infrastructure in a reasonable and efficient way?
A. What is needed is a cloud-native, software-defined application delivery platform that collects data from all locations, combines and rationalizes that data to understand in real time the best routes for future traffic, and builds and distributes the decisions that all locations need to route traffic efficiently and effectively.
- Q. How does Cedexis’ technology aid in solving issues companies face surrounding application delivery?
A. Our approach to solving this problem is built on three unique elements:
a) Real User Measurements: over the last 7 years, we have built a community of over 50,000 web properties and applications, through which we collect some 14 billion measurements a day. This is the most comprehensive source of real user experience measurements in the world.
b) Real-time routing decisions: our decision engine, Openmix, reacts to changes in the routes traffic can travel (whether impacted by internal factors like overloaded servers, or external factors like overwhelmed peering connections between CDNs and ISPs) in seconds, effectively avoiding trouble before it happens.
c) Deep and broad data integration: the Cedexis Application Delivery Platform integrates with dozens of external systems (e.g. NGINX+, New Relic, Akamai, and many more) to draw relevant external data into all its decisioning. This enables our customers to balance all relevant factors when building their load balancing algorithms.
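Element (b) – reacting within seconds so trouble is avoided before it happens – can be illustrated with a toy candidate pool. This is a hypothetical sketch, not Openmix: the route names, 250 ms threshold, and three-strike rule are assumptions, standing in for whatever policy a real decision engine would apply.

```python
# Hypothetical sketch: eject a degrading route from the candidate
# pool after a few consecutive bad measurements, so new traffic is
# steered away before users pile onto a congested path.

DEGRADED_MS = 250.0   # latency ceiling before a route is sidelined (assumed)
STRIKES = 3           # consecutive bad samples required (assumed)

class RoutePool:
    def __init__(self, routes):
        self.strikes = {r: 0 for r in routes}

    def observe(self, route, latency_ms):
        """Feed in a fresh measurement for one route."""
        if latency_ms > DEGRADED_MS:
            self.strikes[route] += 1
        else:
            self.strikes[route] = 0  # a healthy sample resets the count

    def healthy(self):
        """Routes still eligible to receive traffic."""
        return [r for r, s in self.strikes.items() if s < STRIKES]

pool = RoutePool(["peer-link-1", "peer-link-2"])
for _ in range(3):                     # a peering point becomes congested
    pool.observe("peer-link-1", 400.0)
print(pool.healthy())                  # only peer-link-2 remains
```

The strike-and-reset rule keeps a single noisy sample from ejecting a route, while a genuinely overwhelmed peering link is removed within a handful of measurement cycles – seconds, at real measurement rates.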
About Simon Jones:
Simon Jones is the Evangelist and Head of Marketing at Cedexis. An expert in Web, Cloud and SaaS technologies, he has experience spanning start-ups and mature organizations, and has led teams across all disciplines, from software engineering to sales and marketing. A prolific writer and presenter throughout his over 20-year career in Silicon Valley, he has been featured at events including NAB, Casual Connect, and Streaming Media; and in publications across the technology landscape.