IT Briefcase Exclusive Interview with Napatech: What the Third Platform Requires of the Network
June 3, 2015

What many recent IT predictions share is a reference to the astronomical quantities of data being produced by cloud, mobile, Big Data and social technologies – what analyst firm IDC refers to as the “third platform.” It is clear that regardless of the platform, or the means of delivery, the volume, variety and velocity of data in networks continue to grow at explosive rates.
Clearly, there is a need in today’s infrastructure for platforms and tools that accelerate access to data. As network engineers work to deliver these massive data streams in real time, performance and application monitoring has turned into a pressure cooker, with multiple usage crises dragging down network performance at any given time.
In this interview, Dan Joe Barry, VP Positioning and Chief Evangelist for Napatech, speaks with IT Briefcase about the emerging technology of software acceleration platforms and tools.
- Q: What’s the current thinking around accelerating network performance?
A. There is a need for software acceleration and support across a variety of platforms. To address this need, hardware acceleration must be used both to abstract and de-couple hardware complexity from the software and to provide performance acceleration. Separating the application and network layers helps achieve this goal, and it also opens appliances up to new functions that are not normally associated with their original design.
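As a rough illustration of that decoupling, here is a minimal Python sketch; the names (CaptureDevice, MockCapture, analyze) are hypothetical and not a vendor API. The point is that application intelligence consumes packets through a small interface and never touches hardware specifics, so a hardware-accelerated adapter can be slotted in underneath without changing the application code.

```python
# Minimal sketch of decoupling application logic from the capture layer.
# "CaptureDevice" and "MockCapture" are illustrative names, not a real
# vendor API: the application only depends on the abstract interface.
from abc import ABC, abstractmethod
from typing import Iterator

class CaptureDevice(ABC):
    """What the application layer sees, regardless of the adapter underneath."""

    @abstractmethod
    def packets(self) -> Iterator[bytes]:
        """Yield raw frames from whatever device backs this interface."""

class MockCapture(CaptureDevice):
    """Software stand-in; a hardware-accelerated adapter would plug in here."""

    def __init__(self, frames: list[bytes]) -> None:
        self._frames = frames

    def packets(self) -> Iterator[bytes]:
        yield from self._frames

def analyze(dev: CaptureDevice) -> int:
    """Application intelligence: total bytes seen, with no hardware knowledge."""
    return sum(len(frame) for frame in dev.packets())

if __name__ == "__main__":
    print(analyze(MockCapture([b"\x00" * 64, b"\x00" * 1500])))  # prints 1564
```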
- Q: So both hardware and software acceleration play a role here?
A. That’s right; the one helps the other. By deploying high-performance network adapters, administrators can identify well-known applications in hardware by examining layer-one to layer-four header information at line speed. By clearly dividing what is performed in hardware from what is performed in application software, more network functions can be offloaded to hardware, allowing the application software to focus on application intelligence and freeing up CPU cycles so that more analysis can be performed at greater speeds.
Massive parallel processing of data also becomes possible, because the hardware that provides this information can be used to identify flows and distribute them across up to 32 server CPU cores. All of this should be provided with low CPU utilization. Appliance designers should consider features that reserve as much processing power and memory as possible for the applications that require memory-intensive packet payload processing.
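To make the flow-distribution idea concrete, the following is a minimal software sketch, assuming a simple 5-tuple hash and illustrative names (Packet, NUM_CORES); it is not a Napatech API, and in a real appliance this step is performed by the adapter in hardware at line rate. Hashing the 5-tuple guarantees that every packet of a given flow lands on the same worker core, which is what makes massive parallel processing safe.

```python
# Sketch of hash-based flow distribution across worker cores.
# Every packet of a flow hashes to the same core, so per-flow state
# never has to be shared between cores.
import zlib
from dataclasses import dataclass

NUM_CORES = 32  # upper bound mentioned above

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

def flow_key(pkt: Packet) -> bytes:
    """Canonical 5-tuple key built from layer-3/4 header fields."""
    return f"{pkt.src_ip}|{pkt.dst_ip}|{pkt.src_port}|{pkt.dst_port}|{pkt.protocol}".encode()

def assign_core(pkt: Packet, num_cores: int = NUM_CORES) -> int:
    """Hash the 5-tuple to pick a worker core; stable for the whole flow."""
    return zlib.crc32(flow_key(pkt)) % num_cores

if __name__ == "__main__":
    queues = {core: [] for core in range(NUM_CORES)}
    packets = [
        Packet("10.0.0.1", "10.0.0.2", 51000, 443, "TCP"),
        Packet("10.0.0.1", "10.0.0.2", 51000, 443, "TCP"),  # same flow, same core
        Packet("10.0.0.3", "10.0.0.4", 40000, 53, "UDP"),
    ]
    for pkt in packets:
        queues[assign_core(pkt)].append(pkt)
    for core, queue in queues.items():
        if queue:
            print(f"core {core}: {len(queue)} packet(s)")
```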
- Q: It sounds like administrators just need to find a good network adapter and the problem’s solved.
A. Well, there are many tools today that address the problem of downstream analytics in high-volume environments; however, the ability of these tools to perform real-time analysis and alerting is limited by their performance. Solutions that are used to extract, transform and load data into downstream systems tend to increase the latency between data collection and data analysis. In addition, the volume and variety of data being ingested makes it impossible for analysts and decision makers to locate the data they need across the various analysis platforms.
- Q: What’s to be done, then? How can administrators get the real-time analysis they need?
A. They need to push intelligence to the point of data ingress to improve real-time analysis capabilities and accelerate “third platform” activities. Best practices include real-time alerting, in-line analytics and intelligent data flow. Real-time alerting is the ability to know what data is entering the system in real time, before it reaches decision-making tools, so that stakeholders receive intelligent alerts about new data that is of interest for their area of responsibility.
In-line analytics means that organizations are making use of perishable insights—that is, data whose value declines rapidly over time. This requires that organizations begin to analyze data at the very moment it is received. Doing so ensures that an organization can begin acting on what is happening immediately.
Finally, intelligent data flow means that administrators are inspecting data immediately upon ingress. This way, data flow decisions can be made to direct data to downstream consumers at line-rate. It minimizes the unnecessary flow of data through downstream brokers and processing engines. Using these three best practices, organizations will be able to manage the ever-increasing data loads without compromise. By scaling with increasing connectivity speeds, as well as accelerating network management and security applications, enterprises will have greater success navigating the third platform and beyond.
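As a rough sketch of how those three practices fit together at the point of ingress, the Python example below (with assumed names such as IngressPipeline and ALERT_KEYWORDS, not any specific product’s API) inspects each record the moment it arrives: it raises an alert before any downstream tool sees the data, updates ingress-time statistics, and routes the record only to the consumer that actually needs it.

```python
# Sketch of the three ingress-time practices: real-time alerting,
# in-line analytics and intelligent data flow. All names are illustrative.
from collections import Counter
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Record:
    source: str
    kind: str      # e.g. "netflow", "syslog", "app-log"
    payload: str

ALERT_KEYWORDS = ("error", "intrusion", "exfiltration")  # assumed triggers

class IngressPipeline:
    def __init__(self) -> None:
        self.stats = Counter()                      # in-line analytics
        self.routes: Dict[str, Callable[[Record], None]] = {}

    def register_consumer(self, kind: str, consumer: Callable[[Record], None]) -> None:
        self.routes[kind] = consumer

    def ingest(self, rec: Record) -> None:
        # 1. Real-time alerting: inspect before downstream tools see the data.
        if any(word in rec.payload.lower() for word in ALERT_KEYWORDS):
            print(f"ALERT [{rec.source}]: {rec.payload!r}")
        # 2. In-line analytics: update perishable, ingress-time metrics.
        self.stats[rec.kind] += 1
        # 3. Intelligent data flow: forward only to the consumer that needs it.
        consumer = self.routes.get(rec.kind)
        if consumer is not None:
            consumer(rec)

if __name__ == "__main__":
    pipeline = IngressPipeline()
    pipeline.register_consumer("netflow", lambda r: print(f"-> flow store: {r.payload}"))
    pipeline.ingest(Record("sensor-1", "netflow", "10.0.0.1 -> 10.0.0.2 bytes=1200"))
    pipeline.ingest(Record("ids-2", "syslog", "possible intrusion detected on port 22"))
    print("ingress counts:", dict(pipeline.stats))
```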
Daniel Joseph Barry is VP Positioning and Chief Evangelist at Napatech and has more than 20 years’ experience in the IT and Telecom industries. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK, a leading supplier of transport chip solutions to the Telecom sector. From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Ignis Photonyx), following various positions in product development, business development and product management at Ericsson. Dan Joe joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.