Articles: NTAF Interview Series, Issue 2
NTAF Through the Eyes of a Former Member and Founder: An Interview with Kingston Duffie
Part 2 of 2
Kingston Duffie was the founder and CEO of Fanfare, as well as a founding member of the Network Test Automation Forum (NTAF). Mr. Duffie was interviewed in June 2011 for this two-part article (Part 1 can be read here). In this second and final installment, he delves further into his retrospective view of the forum’s scope, the ideology of a test automation standard, and his opinion on its evolution toward real-world test lab automation environments.
*Since Spirent’s acquisition of Fanfare, Mr. Duffie has moved on to pursue other opportunities and is no longer involved in NTAF.
Automation often involves a lot of home-grown tools, as well as off-the-shelf products – how does NTAF fit technically into each environment?
Every comprehensive testing system I’ve ever seen is made up of many different components. Some are commercial. Some are open-source. And there is almost always a significant fraction that is home-grown, because of the specialized requirements in each environment. NTAF is built on top of a very fundamental reference model that avoids any hierarchy. Any type of component is free to be an “NTAF entity,” which places on it a small burden to declare its existence and its capabilities. At that point, any capabilities it exposes can be used by any other NTAF entity, and it is free to consume the services exposed by other entities. We stop thinking about whether a component is commercial or not. A complete testing solution is assembled by finding the best NTAF-supported entities and constructing additional entities to fill any gaps that remain.
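The peer model described above can be sketched in a few lines of Python. This is an illustrative model only, not the NTAF wire protocol (real NTAF entities discover each other over XMPP); all class and capability names here are invented for the example.

```python
# Illustrative model of the NTAF reference model (names invented):
# every component, commercial or home-grown, registers as a peer entity,
# declares its capabilities, and may invoke the capabilities of any peer.

class Registry:
    """Stands in for the discovery role that XMPP presence plays in NTAF."""
    def __init__(self):
        self.entities = {}

    def announce(self, entity):
        self.entities[entity.name] = entity

    def find(self, capability):
        return [e for e in self.entities.values() if capability in e.capabilities]


class Entity:
    def __init__(self, name, registry, capabilities=None):
        self.name = name
        self.registry = registry
        self.capabilities = capabilities or {}
        registry.announce(self)          # declare existence and capabilities

    def invoke(self, capability, *args):
        # consume a service exposed by any peer that declares it
        provider = self.registry.find(capability)[0]
        return provider.capabilities[capability](*args)


reg = Registry()
# a home-grown traffic generator exposing one capability
Entity("traffic-gen", reg, {"send_traffic": lambda rate: f"sending at {rate} Mbps"})
# an automation script acting as an equal peer, with no hierarchy between them
script = Entity("test-script", reg)
print(script.invoke("send_traffic", 100))   # -> sending at 100 Mbps
```

Note that nothing in the model distinguishes commercial from home-grown components: both sides are just entities with declared capabilities.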
How will users handle the installed base of legacy (i.e., non-NTAF) equipment?
If a tool or piece of equipment already has some automation interface (such as a Tcl library) then it is relatively easy to create a proxy that exposes that interface to other NTAF entities. I expect to see vendors stepping forward relatively quickly with proxies like this for their own products – as a stepping stone to full NTAF integration into those products.
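The proxy pattern described here can be sketched as follows. This is a hypothetical adapter, assuming a legacy tool with some existing scriptable interface (the kind a vendor Tcl library would expose); every class and method name is invented for illustration.

```python
# Hypothetical sketch of a proxy that wraps a legacy tool's existing
# automation interface and advertises the same operations as
# NTAF-style capabilities. All names here are invented for illustration.

class LegacyAnalyzer:
    """Stand-in for a tool that predates NTAF but is already scriptable."""
    def connect(self, host):
        return f"connected to {host}"

    def capture(self, port):
        return f"capturing on port {port}"


class NtafProxy:
    """Declares the legacy tool's operations so peers can discover them."""
    def __init__(self, legacy):
        self.legacy = legacy
        # capability name -> callable, built from the interface that already exists
        self.capabilities = {
            "connect": legacy.connect,
            "capture": legacy.capture,
        }

    def describe(self):
        # self-description: the declared capability names
        return sorted(self.capabilities)

    def invoke(self, name, *args):
        return self.capabilities[name](*args)


proxy = NtafProxy(LegacyAnalyzer())
print(proxy.describe())              # ['capture', 'connect']
print(proxy.invoke("capture", 3))    # capturing on port 3
```

The proxy requires no change to the legacy product itself, which is what makes it a practical stepping stone toward full integration.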
However, as I said earlier, NTAF is not an “all or nothing” proposition. Anyone who has built or managed a large testing lab is familiar with the realities of evolution. Parts of the lab will begin to use NTAF while others remain as they are. The goal should not be to make a lab NTAF-compliant. The goal should be to incrementally enhance and adapt in the most efficient way possible. NTAF will be adopted or not based on its ability to be the easiest and fastest way to get new capabilities into testing labs – and that requires that NTAF not depend on having all parts of the solution be NTAF-enabled.
Automation brings together diverse products from multiple vendors – how does the NTAF specification offer a unified experience?
This has been a challenging puzzle to solve. Inevitably, there have been calls for NTAF to provide a standard that lets customers mix and match components in a solution. This desire to avoid single-source components is completely understandable: customers do not want to put themselves at the mercy of one vendor for a given component. The problem with this thinking is that vendors have shown they will fight this type of commoditization, seeing that it would force them into a race to the bottom.
In some cases, it is absolutely appropriate to have a common set of base functionality that a customer can demand from multiple vendors – allowing them to compete, instead, on value-added innovation. NTAF fully embraces this idea. The specifications allow multiple vendors to declare support for a common set of supported actions. However, NTAF itself doesn’t standardize what those common actions are. (Perhaps at some point in the future NTAF may step in to help standardize some of these basic building blocks but it hasn’t done so yet.)
NTAF primarily addresses this problem of “vendor lock-in” in a different way. Each NTAF entity is required to fully describe its capabilities in a form that any other NTAF entity can consume. This encourages solutions that are less monolithic and more componentized. Components from two or three different vendors might not be identical but might still be intended to fill similar roles in an overall solution. Because the capabilities of these components are fully self-describing, the components around them can normally adapt easily (even at runtime) to different capability variants. In this way, a customer can choose to integrate components from many different vendors.
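The runtime adaptation mentioned above can be illustrated with a small sketch. The two vendor components and their capability names are invented; the point is that the consumer inspects each component's self-description instead of hard-coding one vendor's interface.

```python
# Sketch of runtime adaptation to capability variants (names invented):
# two vendors' traffic generators fill a similar role but declare slightly
# different capabilities; the consuming script adapts to what is declared.

vendor_a = {"name": "gen-A",
            "capabilities": {"start_traffic": lambda: "A: started"}}
vendor_b = {"name": "gen-B",
            "capabilities": {"start_traffic": lambda: "B: started",
                             "set_rate": lambda mbps: f"B: rate={mbps}"}}

def run_load_test(generator, rate=None):
    caps = generator["capabilities"]
    log = []
    # optional capability: only used if the component declares it
    if rate is not None and "set_rate" in caps:
        log.append(caps["set_rate"](rate))
    log.append(caps["start_traffic"]())
    return log

print(run_load_test(vendor_a, rate=500))   # ['A: started']
print(run_load_test(vendor_b, rate=500))   # ['B: rate=500', 'B: started']
```

Because the script keys off the declared capabilities rather than a vendor identity, either generator can be swapped in without rewriting the test.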
Regarding the question of a “unified experience,” NTAF intentionally avoided giving any special place in the standards to formally defined “dashboards,” “databases,” or any other centralized function. This was done with the view that a fully peer-to-peer relationship among components would place no vendor in a position of special influence. That is not to say that an assembled solution would not include components that provide this unifying experience. Vendors can choose, for example, to design a dashboard for products that support the NTAF specification by taking advantage of all of the self-describing NTAF components in the environment and exposing capabilities that encourage those components to provide enhanced functionality when working together.
Automation in networking means writing Tcl scripts to drive test equipment – what happens to that paradigm in an NTAF context?
NTAF was designed to be completely language-neutral; it specifies only how NTAF entities interact with their environment. We have already demonstrated tools from different vendors working together via NTAF when one of those tools is implemented in Tcl while another is implemented in Java or C#. With appropriate libraries, it becomes very easy to write a Tcl script that takes advantage of a network of NTAF entities, using their self-describing capabilities to perform a variety of tasks. Before NTAF, each tool vendor might provide proprietary Tcl libraries to interact with their hardware or software products. Now automation engineers are free to use the language of their choice (including Tcl) to write automation scripts that drive a wide variety of tools without having to use anything other than a standard XMPP library.
NTAF also enables a new class of tools that makes it possible to create automation without doing any programming at all. Tools like Spirent’s iTest and Ixia’s TestConductor are examples. They exploit a special feature designed into NTAF that allows tools to declare the activities they are performing in a language that matches what is needed to automate those functions, enabling so-called “capture-replay” mechanisms.
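The capture-replay idea can be sketched roughly as follows. This is not how any specific product implements it; the class and method names are invented to show the shape of the mechanism: a tool announces each activity it performs in an automatable form, and a recorder turns those announcements into a replayable script.

```python
# Illustrative capture-replay sketch (names invented): a tool announces
# each activity it performs; a recorder captures the announcements and
# can later replay them against another instance of the tool.

class Recorder:
    def __init__(self):
        self.steps = []

    def on_activity(self, action, **params):
        # capture: record what the tool says it is doing
        self.steps.append((action, params))

    def replay(self, tool):
        # replay: re-drive the tool using the captured activity names
        return [getattr(tool, action)(**params) for action, params in self.steps]


class Tool:
    def __init__(self, recorder=None):
        self.recorder = recorder

    def _announce(self, action, **params):
        if self.recorder:
            self.recorder.on_activity(action, **params)

    def configure_port(self, port):
        self._announce("configure_port", port=port)
        return f"port {port} configured"


rec = Recorder()
tool = Tool(rec)
tool.configure_port(1)            # a user drives the tool interactively
replayed = rec.replay(Tool())     # later: the captured steps run unattended
print(replayed)                   # ['port 1 configured']
```

The key is that the tool, not the user, decides how its activities are named, so the captured steps come out in a form that is already automatable.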
Why was XMPP chosen for the NTAF standard?
At the very beginning, the NTAF technical committee identified the core requirements. Customers indicated that tools should be loosely coupled, communicate in ways that are easy to troubleshoot, and require a minimum of management and configuration. We also concluded that we should avoid reinventing the wheel, so we spent time looking at other standards to see if they met our needs. XMPP fit the bill nicely. As its full name suggests, the Extensible Messaging and Presence Protocol was specifically designed for messaging. Its presence functions are perfect for handling inventory and discovery, which we concluded were critical to avoiding a lot of management and configuration. And since it is built on top of XML at several layers, it is extremely extensible, allowing us to build on it rather than having to change it.
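One concrete piece of what XMPP brings for free is its standard service-discovery extension, XEP-0030. The sketch below builds the standard disco#info query that one XMPP entity sends to ask another what it supports; the JIDs (addresses) are invented for the example.

```python
# Building a standard XEP-0030 service-discovery ("disco#info") query,
# the kind of stanza one XMPP entity sends to ask a peer what it
# supports. The JIDs below are hypothetical lab addresses.

import xml.etree.ElementTree as ET

DISCO_NS = "http://jabber.org/protocol/disco#info"   # real XEP-0030 namespace

iq = ET.Element("iq", {
    "type": "get",
    "from": "script@lab.example.com/automation",     # invented JIDs
    "to": "trafficgen@lab.example.com/ntaf",
    "id": "disco1",
})
ET.SubElement(iq, "query", {"xmlns": DISCO_NS})

print(ET.tostring(iq, encoding="unicode"))
```

Because the stanza is plain XML, exchanges like this are easy to capture and read during troubleshooting, which was one of the committee's stated requirements.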
Where do you see the first implementations of NTAF coming into existence?
Some vendors have already released support for NTAF in a few of their tools, with more to follow. What is more important, however, is when there will be a large enough collection of NTAF-adopting tools to assemble complete solutions. That will certainly take time; however, it is notable that NTAF can be used for specific integration points in an overall solution without being used comprehensively. For example, consider a new automation script that has to drive several different components, one of which is a new specialized traffic generator that supports NTAF. Rather than building a new Tcl library for this traffic generator, the script could deal with the other components via their specialized Tcl libraries but deal with the new traffic generator directly via NTAF. Over time, more and more of these specific interactions will be based on NTAF, and one day we’ll see that the whole lab is using NTAF.
What are the upcoming challenges that NTAF must focus on to be successful?
NTAF has done an excellent job so far of focusing on the real-world problems faced by those assembling large test environments. At the same time, this work has been done in a way that seeks to establish a very extensible, stable foundation, rather than trying to deliver “point-level” solutions. It is now important for NTAF to reinforce the work it has already done rather than begin innovating on different frontiers. It must work closely with those who are trying to implement NTAF in their own products and address the shortcomings they encounter. And it must work with those assembling solutions to head off problems in making it all work together, issues such as troubleshooting and conformance.
About Kingston Duffie
Kingston Duffie’s career in the networking industry started at Bell Northern Research and Northern Telecom in the 1980s, where he worked on the first generation of digital telephone switching and later packet switching. In 1990, he came to Silicon Valley, where he founded three successful venture-backed start-up companies. The most recent of these was Fanfare, which was acquired by Spirent Communications earlier this year. Kingston is now working on a new start-up focused on a new generation of internet-based messaging.