NTAF Annual Members Meeting

By Todd Law, NTAF President, Spirent Communications

Last month, NTAF members got together for their annual face-to-face meeting.  This year, the meeting was held in Richardson, Texas, hosted at the offices of Verizon, an NTAF member and founder.  Representatives were present from Spirent, Verizon, Juniper, Ericsson, and VTM.  As in previous years, the purpose of the meeting was to discuss the state of the forum and to set the strategy for NTAF going forward.

One of the dominant themes I heard at the meeting was the ongoing need to manage large numbers of tools in test labs.  Service provider labs can have hundreds, if not thousands, of tools to manage.  Simply keeping track of these tools is a huge challenge, and it gets more complicated as labs are consolidated into “super-labs” used by multiple groups spread across multiple geographies.

Another interesting and related problem that got aired was the need for security in such large labs.  The security problem is not so much the classic scenario of outsiders trying to break into a closed network, but rather security among those inside the network.  When a company’s network has many thousands of users – and some of those users may be vendors who are competing with each other – how can the network ensure that vendors can use only what they are supposed to use?  And when a company’s own employees can be a risk, as recently happened at Sony, how can disaster scenarios be prevented?

These were just a couple of the topics discussed.  There appears to be no shortage of problems for NTAF to solve – and we are already discussing how NTAF can best address them in its technical committee and working groups. To stay updated with what NTAF is doing, please visit the NTAF website, join the NTAF group on LinkedIn, or subscribe to email updates.

Legal Issues with APIs

Todd Law – NTAF President

The world of APIs has suddenly been thrown back into legal limbo.

Three years ago, it looked like the legal uncertainty around APIs had been settled.  Two major legal disputes had run their course, one in the European Union and the other in the United States.  Developers, including test automation engineers, could proceed with their work, and not have to worry about legal aspects of APIs.

The European dispute over APIs concluded in May 2012, when the EU Court of Justice (the highest court in the EU) ruled, in a case between the SAS Institute and World Programming Limited, that “the functionality of a computer program, and the programming language it is written in, cannot be protected by copyright”, effectively meaning that APIs are not copyrightable.  Neither of these organizations is a household name, so it’s worth explaining a little about them.  The SAS Institute is a private software company with about 13,000 employees, headquartered in North Carolina.  World Programming Limited is a private company headquartered in the UK.  WPL’s main product, World Programming System, can run programs written in the SAS language (a language used for statistical analysis) without the need to translate them.  In other words, WPL created APIs which allowed its customers to use their existing scripts or programs on a different platform.  Sound familiar?  Because WPL was only mimicking the functionality, and did not have access to the source code, the court ruled in WPL’s favor.

The American dispute over APIs also concluded in May 2012 – but this case involved much bigger names.  A US jury found that Google, in the development of its Android operating system, did not infringe on Oracle’s Java-related patents.  The trial judge also ruled that the structure of the Java APIs was not copyrightable.  So the world looked safe for APIs – for a while.  The decision in the US dispute, however, was made at the district (lower) court level, so of course Oracle appealed to the Federal Circuit, which partially reversed the district court’s decision, ruling in favor of Oracle on the copyright issue.  This happened just over a year ago, in May of 2014.  Back into a state of limbo.

So fast forward to June 29, 2015.  Google, backed by dozens of law professors, had asked the US Supreme Court to weigh in on the issue.  The Supreme Court even invited the Obama administration to submit a brief on whether it should hear the case.  The administration suggested it should not, and the Supreme Court followed that suggestion.  But that’s not the end of the story – the lawsuit will now likely head back to a lower district court for a ruling.  It could take years for the US courts to decide this conclusively.

Testing IoT

IoT, or Internet of Things, is a broad topic, with many application areas that will each have different priorities.  To put things in perspective, I talked to Ken Van Orman, Senior Product Manager at Spirent Communications, to get his take on how testing fits into the IoT puzzle.

“When most people think of Internet of Things they naturally migrate to consumer ‘things’ – smart watches, light bulbs, audio devices and home appliances. But there are other industries that touch us directly and indirectly that will become part of the IoT – automobiles, power plants and factory automation. Some power plants and factories utilize industrial Ethernet today, but there is a strong desire to increase the level of sophistication, control and standardization. Similarly, automobiles today are part of the IoT but in a limited way – think GM OnStar, Ford SYNC and BMW ConnectedDrive.

“The industrial Ethernet world is also latching on to the IEEE’s work in time-sensitive networking, or TSN, for the purpose of sending control signals and collecting sensor information.  In this case, the application area might be the factory floor, where the determinism provided by reliable timing is valued – things have to happen ‘immediately’ or with very precise timing to control factory robots. It’s a similar story for power plants, aviation, broadcast networks, and ADAS (Advanced Driver Assistance Systems).”

“ADAS will be a big enabler of IoT in the auto industry. These systems will rely on traffic and accident warnings from roadside sensors and vehicle-to-vehicle communications – think of the vehicles and sensors ahead of you providing advance warning of accidents and unsafe road conditions.”

“For industries new to Ethernet and IP, conformance will undoubtedly be important for IoT. A good example is the automotive industry, which has relied on industry-specific protocols for a while.  Another area is performance testing. Many of the applications require networks to be time-sensitive in nature – for example, we expect the control network for a car’s braking system to be fast and reliable enough to ensure safety.  Consequently, new protocols for time-sensitive networks have been defined by the IEEE’s Time-Sensitive Networking Task Group.  However, testing these protocols is a new thing for the automotive industry.  To fill that gap, the AVnu Alliance has defined test procedures and processes to ensure that switches and other products conform to IEEE AVB standards.”

“What’s also interesting is that some of these players, especially in the automotive space, do not really even care about IP, but rather are focusing on the underlying Ethernet, which is largely not used today in factories. This raises the question of what defines an Internet of Things device if it doesn’t even speak IP!”

In other words, like Apple’s iWatch, devices in these applications won’t actually speak the Internet Protocol themselves, and maybe should not even be considered part of the IoT.  Somehow, however, terms such as “the Ethernet of Things” or “things (sometimes) connected to the Internet of Things” definitely do not sound as catchy :).

– Todd Law, Vice-President of NTAF

Re-definition of broadband

By Ameya Barve, NTAF Marketing Chair, Spirent Communications

The FCC recently changed the definition of “broadband” as we know it. It raised the minimum download speed from 4 Mbps to 25 Mbps, and the minimum upload speed from 1 Mbps to 3 Mbps. This new definition has a profound impact not only on consumers (about 20% of all US consumers no longer have broadband under the new definition) but also on Internet Service Providers. The biggest impact is on DSL providers, as the new definition essentially removes most DSL services from the broadband discussion: typical DSL deployments, delivered over telephone lines, cannot reach the new download threshold due to technical limitations.
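The effect of the threshold change can be sketched as a simple classification. This is illustrative only – the threshold pairs are taken from the article, and the sample service tier is hypothetical:

```python
# Sketch: classify a service under the old and new FCC broadband definitions.
# Thresholds (download Mbps, upload Mbps) are those cited in the article.

OLD_DEF = (4, 1)
NEW_DEF = (25, 3)

def is_broadband(down_mbps, up_mbps, definition):
    """A service qualifies only if it meets BOTH minimum speeds."""
    min_down, min_up = definition
    return down_mbps >= min_down and up_mbps >= min_up

# A hypothetical 15/1 Mbps DSL tier: broadband before, not after.
print(is_broadband(15, 1, OLD_DEF))  # True
print(is_broadband(15, 1, NEW_DEF))  # False
```

Because both minimums must be met, a service can fall out of the definition on the upload side alone, which is part of why the consumer percentage shifted so sharply.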

It has also left the cable providers scrambling to meet the new requirements. The question, though, is how they will test and certify that their services meet the new criteria. To ensure that their services perform to the new requirements, providers will have to do significantly more interoperability and integration testing, and this is where NTAF can help. NTAF recently released two new specifications: TS-005, which defines high-level APIs, and TS-006, which describes topologies in reports. Both the new specifications and the original NTAF specifications (TS-001 to TS-004) will significantly reduce the time it takes for service providers to perform their testing.

The re-definition is arguably a good move by the FCC – it brings the definition up to date and in line with today’s needs. Testing and certifying to the new definition will bring new challenges, and NTAF can help by providing specifications that allow ISPs to perform interoperability and integration testing more quickly than before. How the ISPs react to this new definition, though, remains to be seen.

Eclipse Titan received release approval from the Eclipse Foundation

By Elemer Lelik, Ericsson

Eclipse Titan is a fully featured test development and execution environment based on the ETSI-standard TTCN-3 language. Over the last 15 or so years, Titan and TTCN-3 have been used within Ericsson to test hundreds of types of telecom nodes and networking or telecom software – as well as to assemble a set of conformance tests for NTAF.  Conservatively, over 100,000 developer hours have been invested in Titan.

As the TTCN-3 language itself suffered from a lack of open source implementations, Ericsson decided to contribute Titan, in cooperation with the Eclipse Foundation, to the open source community.  To achieve this, the project was aligned with the Eclipse development process, and the source code was subjected to a series of intellectual property reviews according to the Eclipse legal process, establishing its clear provenance.  On the 25th of March, the project received final approval from the Eclipse Management Organization, and the first open source release was generated.

Numerous resources for Titan are available.

Netscout Joins NTAF!

By Todd Law, NTAF President, Spirent Communications

NTAF is pleased to announce that Netscout Systems has joined NTAF as a full member, becoming NTAF’s 12th member overall.  Here are some basic facts about Netscout in 2014:

  • Headquartered in Westford, Massachusetts
  • Annual revenue of $396M
  • Approximately 1000 employees
  • Broad portfolio of products and solutions focused on enterprise, service provider, and test optimization markets

But that was Netscout last year – and things are changing rapidly.  In late 2014 (calendar year), Netscout agreed to pay $2.6B to acquire Danaher’s communication businesses, including well-known test equipment brands Fluke Networks and Tektronix Communications.  Also included in the deal were VSS Monitoring, Newfield Wireless (both part of Tektronix), and security specialist company Arbor Networks.  The Danaher Communications business had annual revenue of $813M (in 2013), and about 2,000 employees.  The new combined company is expected to have revenues of more than $1.2B, effectively tripling the size of the company, and making Netscout one of the largest test and measurement vendors in the world.

It’s great to see that Netscout sees the value of being a member of NTAF, and we look forward to collaboration in the forum, ultimately delivering new efficiencies to industry and increased value to customers.

Net Neutrality

Net neutrality has been debated for many years, but came into sharp focus last week when the United States Federal Communications Commission (FCC) ruled to classify broadband as a Title II service, meaning it is treated as a “telecommunications service” rather than an “information service”.  The new rules, among other things, forbid activities such as blocking, throttling, or discriminating against lawful content, as well as any kind of paid prioritization.  While the political, business, and ethical aspects of this change continue to be hotly debated, few have taken the time to think about what it means from a technical perspective.

First of all, how do we decide what exactly is and is not lawful content?  File-sharing protocols such as BitTorrent, which now has over 150 million users, can account for between 43% and 70% of all Internet traffic.  Some BitTorrent traffic is completely legitimate – Facebook and Twitter, for example, both use it to distribute updates to their servers.  Other BitTorrent activity is definitely illegal – Pirate Bay, for example, an online indexing service that facilitates sharing of digital content, was found guilty of copyright infringement in 2009.  In the US alone, over 200,000 people have been sued for file sharing on BitTorrent.  Implementing the rules will mean a slow, arduous process in which organizations and individuals known to generate unlawful content can start to be blocked or throttled; it’s not going to happen in real time.  Furthermore, individual users can easily create and hide behind new identities.  A simple rule such as “block all traffic from this IP address”, or “that application”, or some combination of the two might mean that other users sharing the same IP address, but for legitimate purposes, are unfairly discriminated against – meaning enforcing the rules might quickly lead to actually breaking the rules.  I don’t see any easy way out of this one.
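The collateral-damage problem with address-based blocking can be shown in a few lines. This is an illustrative toy, not any real ISP enforcement mechanism; the addresses and user labels are invented:

```python
# Toy sketch of a naive block-by-source-IP filter. Users behind a shared
# (e.g. NAT or carrier-grade NAT) address are indistinguishable at this
# layer, so lawful traffic from the same address gets blocked too.

blocked_ips = {"203.0.113.7"}  # an address once seen serving unlawful content

def allow(packet):
    """Return True if the packet's source address is not on the block list."""
    return packet["src_ip"] not in blocked_ips

infringer = {"src_ip": "203.0.113.7", "user": "filesharer"}
bystander = {"src_ip": "203.0.113.7", "user": "lawful neighbor"}  # same shared IP

print(allow(infringer))  # False
print(allow(bystander))  # False -- lawful traffic discriminated against
```

The filter cannot tell the two users apart, which is exactly how a rule meant to enforce the regulations ends up violating them.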

Second, the problem of “no paid prioritization” is even tougher. The Internet has had prioritization built into it at least since 1998, when RFC 2474 defined the Differentiated Services field, allowing “DiffServ Code Point” (DSCP) bits to be set in packet headers.  The purpose of this work was to enable new kinds of services (like VoIP) to work on the Internet, and to enable service level agreements (SLAs) – which, to my mind, sounds exactly like paid prioritization.
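To make the DSCP mechanism concrete: RFC 2474 redefined the old IPv4 ToS byte so its upper six bits carry the code point (the remaining two are now used for ECN), and the Expedited Forwarding behavior commonly used for VoIP is code point 46. A quick sketch of how that bit layout works:

```python
# The Differentiated Services field (RFC 2474) occupies the former IPv4 ToS
# byte: bits 7..2 hold the DSCP, bits 1..0 are the ECN field.
# Expedited Forwarding (EF, RFC 3246), typically used for VoIP, is DSCP 46.

EF_DSCP = 46

def ds_byte(dscp, ecn=0):
    """Pack a DSCP and ECN value into the 8-bit DS field."""
    assert 0 <= dscp < 64 and 0 <= ecn < 4
    return (dscp << 2) | ecn

print(hex(ds_byte(EF_DSCP)))  # 0xb8 -- the classic marking on VoIP packets
```

A router honoring an SLA simply queues packets marked 0xb8 ahead of best-effort traffic (DSCP 0) – prioritization baked into the header format itself.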

But let’s say the classification of lawful vs. not-lawful content problem was solved cleanly in a way that would not unfairly discriminate.  And let’s also say that all the existing uses of IP SLAs (which appear to violate the rules) somehow magically go away.  How would we know that I, as a broadband subscriber, am not being discriminated against?  Conversely, how would we know that a service provider is faithfully compliant with the rules?

This forces us to imagine what a test solution would look like.  Do we use traffic generators to simulate lawful and unlawful traffic?  Do we generate different kinds of lawful traffic and see how well it propagates through the network?  On the receive side, what would the metrics of fairness be?  And how can we automate testing to be a consistent, repeatable process?  Should testing of fairness be a standard itself?  Can NTAF play a role in defining standardized, automated tests?  These are just a few of the questions that the new rules bring to mind.

Todd Law, Spirent Communications, NTAF President

2014 was a great year for NTAF, and we are expecting ever bigger things from 2015!

Happy New Year! Here’s a quick recap of some of the exciting events from last year:

UNIVERSITY OF NEW HAMPSHIRE’S INTEROPERABILITY LAB JOINS NTAF
The University of New Hampshire’s Interoperability Lab (UNH-IOL) has joined the Network Test Automation Forum. UNH-IOL is a neutral, third-party laboratory dedicated to testing data networking technologies through industry collaboration. For some time now, NTAF has been looking to roll out its compliance mechanism, which for multiple reasons is best delivered by a third party, and UNH-IOL is perfectly positioned to help guide NTAF in that effort. UNH-IOL is very experienced in this area, as it already provides collaborative testing programs for 30 other standards organizations. Moreover, its status as a non-profit third-party testing laboratory positions it perfectly on neutral ground. Furthermore, UNH-IOL is NTAF’s first academic member – an exciting development for NTAF, whose members until now have all come from industry. We look forward to the fresh perspective that UNH-IOL brings to the table.

NEW ADOPTER MEMBERS
NSN (Nokia Solutions and Networks), TekEmergence Solutions LLC, and NetScout, a test and measurement company, have joined NTAF as adopter members, and furthermore are strongly considering joining as full members.

TWO NEW SPECIFICATIONS: TS-005 AND TS-006
NTAF is proud to announce the official release of two new specifications. Both specs were successfully voted out of their respective working groups, NTAF’s technical committee, and NTAF as a whole, in November 2014.

The first new spec comes from NTAF’s Reporting Working Group, which is working on an industry standard for test reports. Standard NTAF reports are expected to contain various data, including what test equipment was used, DUT information, test case steps, pointers to logs, etc. One of the key elements reports also need to have is an expression of the topology used in the test bed. The Reporting Working Group decided to carve out a separate specification, specifically for describing topologies in reports. That topology specification has now been released as TS-006.
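The kinds of data a standard report is expected to carry can be sketched as a simple structure. The field names below are purely illustrative assumptions, not the actual TS-005/TS-006 schema:

```python
import json

# Hypothetical sketch of the data an NTAF-style test report might carry:
# equipment used, DUT information, test case steps, pointers to logs, and
# a reference to the topology description. Field names are invented.

report = {
    "test_equipment": [{"vendor": "ExampleVendor", "model": "TrafficGen-1"}],
    "dut": {"name": "edge-router-1", "sw_version": "12.4"},
    "steps": [
        {"step": 1, "action": "configure DUT", "result": "pass"},
        {"step": 2, "action": "send traffic", "result": "pass"},
    ],
    "logs": ["file:///var/logs/run-42/dut.log"],
    "topology": {"ref": "topology-42"},  # carved out into its own spec
}

print(json.dumps(report, indent=2))
```

Keeping the topology as a reference to a separately specified document mirrors the working group's decision to split topology description into its own specification.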

The second new spec comes from NTAF’s API Working Group, which is working on next-generation high-level APIs. High-level APIs are already commonly used in test labs as a layer between scripts/test cases and native APIs, but the lack of standards has led to a mess of pseudo-standards which suffer from lack of flexibility, lack of scale, and poor alignment with the underlying test equipment. Just one example of the lack of flexibility is that these pseudo-standards are tied to specific languages such as Tcl or Perl, which are very minor languages in the broader software world, while other languages such as Python, and even architectures based on REST, are quickly becoming widespread. These problems in turn have made it extremely expensive for equipment vendors to maintain test libraries – and the most astronomical of these costs occurs when traffic generators reach their end of life. The new spec from the API Working Group provides a framework to alleviate these problems with next-generation high-level APIs that are flexible, aligned, and future-proof. The API specification has now been released as TS-005.

NETWORK TOPOLOGY STANDARD
Topologies are a fundamental concept in networking. Yet, amazingly, there exists no standard in the networking industry for expressing topologies. This is even more astounding if you consider that the networking industry abounds in hundreds of standards documents from the likes of the IETF and IEEE, among others.

In the last couple of quarters, NTAF’s Reporting Working Group has turned its attention to the issue of topology expression. Topologies are a key part of what a test report needs to include, since any reader or consumer of a test report will want to know the basics of how test equipment and device(s) under test were connected together during the test. As part of the effort to standardize on test reports, the Reporting Working Group has therefore drafted a standard for expressing topologies in test beds.

The standard allows a network topology to be described in XML or JSON. Described networks can be hierarchical, that is, a node can contain other nodes recursively; for example, a physical node can contain a number of virtual machines as nodes.
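A hierarchical topology of this kind might look like the sketch below, serialized as JSON. The field names ("nodes", "links", "type") are illustrative assumptions, not the element names defined by the specification:

```python
import json

# Illustrative sketch of a hierarchical topology: a physical server node
# recursively contains virtual-machine nodes, and is linked to a switch.
# Field names are hypothetical, not the actual spec's schema.

topology = {
    "nodes": [
        {
            "id": "server-1",
            "type": "physical",
            # A node may contain other nodes recursively,
            # e.g. VMs hosted on a physical machine.
            "nodes": [
                {"id": "vm-1", "type": "virtual"},
                {"id": "vm-2", "type": "virtual"},
            ],
        },
        {"id": "switch-1", "type": "physical", "nodes": []},
    ],
    "links": [{"from": "server-1", "to": "switch-1"}],
}

print(json.dumps(topology, indent=2))
```

The recursive "nodes" field is what lets one description cover both the physical test bed and the virtual machines layered on top of it.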

NEW REVISIONS OF TS-001 AND TS-002
The first revisions of TS-001 and TS-002 were plagued by errors in their XML samples, which hurt intelligibility. Errata were written to correct these errors, approved during 2014, and merged into the originating standards, so a new, corrected release is expected to be made official soon.

NTAF Releases 2 New Specifications

NTAF is proud to announce the official release of two new specifications. Both specs were successfully voted out of their respective working groups, NTAF’s technical committee, and NTAF as a whole, in November 2014. TS-006, from the Reporting Working Group, defines how to describe test-bed topologies in test reports. TS-005, from the API Working Group, provides a framework for next-generation high-level APIs that are flexible, aligned with the underlying test equipment, and future-proof.

Both specs are available on request, at no charge, by contacting the Network Test Automation Forum at admin@ntaf.causewaynow.com.

NTAF Annual Members Meeting – John Sanchez, Verizon

This was my first face-to-face meeting with the NTAF team.

In attendance this year were Cisco, Google, Spirent, Juniper, Ericsson, Brocade, and Verizon. The team is comprised of technical experts in networking and development. The NTAF team is committed to:

• Building consensus between service providers and network equipment manufacturers
• Streamlining the development of automation in testing network devices
• Increasing market awareness for NTAF

On day one, our focus during the meeting was to establish a baseline and get feedback from the team members so that we can better align our corporate strategies and understand how we can move forward. Each team discussed:

• Current Automation Environment
• Challenges
• Vision of Automation Environment
• Alignment between NTAF and Automation Vision

Although our corporations are different, it was apparent that we all struggle with the same challenges: test setup, lack of standardization in tools, tool overload, integration of tools and data exchange, and the high cost of test tools.

The second day, we took a look at our challenges and visions. We started to brainstorm ideas and perspectives on how we can best spend our time and energy on membership recruitment, re-alignment of our technical initiatives based on industry demand, and corporate strategies. We can honestly say that no NTAF members were harmed as a result of these discussions :).

By: John Sanchez, Verizon