The idea of "create once, use many" is not new in the world of IT. There is no reason to reinvent the wheel, right? Then why is it not practiced more diligently? Is it because the consumers of various technology services do not like saving time, resources, and cost, or because they have not been offered such an option?
In hardware and software development, reusability has long been a natural and primary method for increasing efficiency, and it has been applied for decades. Regardless of application and platform, we see many examples of it in both domains. While reusability has been more formalized in hardware, in software development it has tended to be more opportunistic.
However, the state of software reusability may be uplifted by recent trends in software application development. Microservices, coupled with containers, are shifting the software development paradigm toward delivering applications through modularized services. These services can not only be reused, but also repurposed to meet new objectives. The driving force behind this hardware and software trend is simply the agility, efficiency, predictability, and repeatability in service deployment that consumers demand.
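To make the reuse-versus-repurpose distinction concrete, here is a minimal, hypothetical sketch (the function names are illustrative, not drawn from any real product): a single small service module is reused as-is by one consumer and repurposed by another for a different objective.

```python
# Hypothetical sketch: one small, self-contained "service" module that two
# different pipelines consume, illustrating reuse vs. repurposing.

def checksum_service(payload: bytes) -> int:
    """A minimal service: compute a simple modular checksum of a payload."""
    return sum(payload) % 256

# Reuse: an integrity-check pipeline calls the service for its original purpose.
def verify(payload: bytes, expected: int) -> bool:
    return checksum_service(payload) == expected

# Repurpose: a deduplication pipeline reuses the same service as a cheap
# content fingerprint, a different objective than integrity checking.
def fingerprint(payload: bytes) -> str:
    return f"fp-{checksum_service(payload):02x}"
```

Packaging such a module in a container would let both pipelines deploy the identical artifact, which is the "create once, use many" property the trend delivers.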
All of this holds true when validating application performance and assessing security postures within a network infrastructure. Today's technology options offer a multitude of solutions for deploying customer security infrastructure and assessing policy enforcement. Customers may opt for an on-premises solution due to security mandates, consisting of high-performance physical appliances or flexible virtual appliances. Alternatively, they may opt for elasticity and a pay-as-you-go solution in the public cloud. The range of deployment possibilities varies not only with the type of customer, but also with the level of demand placed on those customers. It may even vary across time cycles due to conformance, right-sizing, or migration from physical to cloud services.
For an enterprise customer, it becomes crucial to gain agility, flexibility, and repeatability across these deployment platforms with a unified test framework, eliminating the time, resources, and cost associated with rebuilding and reconfiguring tests for each platform. The engine behind this framework should not only deliver assessments via emulated traffic with realism (not simulated traffic or basic pcap replay), but should also be continuously updated with current malware, attack, and application content. With such a unified validation approach, enterprise customers facing a range of deployment models can minimize the overhead of ongoing security and performance assessments, ultimately ensuring a consistent user quality of experience.
Spirent CyberFlood provides a single-controller architecture that can perform L4-L7 performance and scalability testing as well as cybersecurity assessments, while also managing those test configurations across resources (queues) of physical appliances, virtual appliances, and public cloud (AWS or Azure). CyberFlood generates realistic, stateful test traffic and is continuously updated with the latest application, malware, and attack content through TestCloud.
Please visit us at www.spirent.com to learn more about how Spirent CyberFlood can help validate enterprise network infrastructure security and performance.