He's dead, Jim

2018-07-30 5 min read technology Ronny Trommer

If you operate networks, there is a good chance you have had to deal with SNMP, the Simple Network Management Protocol. If you ever wondered where it came from: it started with a big bang.

On October 27, 1980, there was an unusual occurrence on the ARPANET. For a period of several hours, the network appeared to be unusable, due to what was later diagnosed as a high priority software process running out of control. Network-wide disturbances are extremely unusual in the ARPANET (none has occurred in several years), and as a result, many people have expressed interest in learning more about the etiology of this particular incident.

The post-mortem, written up in RFC 789, expressed the need to manage and monitor complex networks.

If you dig around in dusty RFC drafts, you can find one for HEMS, a High-Level Entity Management System, described in RFC 1021. Later, the Simple Gateway Monitoring Protocol (SGMP) was published as a research project in RFC 1028. In August 1988, when I was 10 years old, SNMP got its first draft in RFC 1067 and became the successor of SGMP.

14 years later …

By the early 2000s it had become quite clear that SNMP was mainly used for monitoring, not for configuring network equipment as originally planned. From June 4th to 6th, 2002, the Internet Architecture Board (IAB) held a workshop and discussed the status quo. The full report can be read in RFC 3535. The interesting parts are Sections 3, 4 and 6.

In a nutshell, they concluded that SNMP was not usable for configuring devices; you were still better off using the CLI. They recommended no longer forcing working groups to provide writeable MIB modules. They decided to form a group to investigate why MIBs were still inadequate. They figured out that nobody wants ASN.1, and decided to form a group to develop and standardize an XML-based device and configuration management technology.

They basically documented, already in 2002, why SNMP is no fun to use. The whole tool chain had not evolved and was never adapted to the new programming languages people want to use.

Juniper Networks was using an XML-based network management approach around the same time the IAB identified the problems with the status quo, and brought it to the IETF. The result was the first version of the NETCONF protocol, published in 2006 as RFC 4741. With NETCONF as the transport, they needed a way to model things, and this is where YANG as a data modelling language comes into play.
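To give an idea of the shape of this approach, here is a sketch of a NETCONF `<edit-config>` RPC. The interface payload uses the later IETF ietf-interfaces YANG model, and the interface name and description are invented for illustration; real devices expect their own or standard models under the `<config>` element.

```xml
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <!-- write to the candidate datastore, commit later -->
    <target><candidate/></target>
    <config>
      <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
        <interface>
          <name>eth0</name>
          <description>uplink to core</description>
          <enabled>true</enabled>
        </interface>
      </interfaces>
    </config>
  </edit-config>
</rpc>
```

Instead of writing opaque values into numeric OIDs, the configuration is structured data validated against a YANG model, and the whole change can be committed or rolled back as one transaction.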

Configuration, Discovery and Polling

Using SNMP is at first sight quite simple and follows a client/server approach: the network management system acts as the client, and the SNMP agent on the network device acts as the server, listening for requests on UDP port 161.

One goal of SNMP was to make devices and their configuration discoverable. That means a network monitoring system can find out how many disks or network interfaces are installed and how they are configured. You can also go further and ask on a regular basis how many bytes were transferred or how full a disk is.
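To show how simple such a poll really is on the wire, here is a minimal sketch in pure Python that hand-encodes an SNMPv1 GetRequest for sysDescr.0 (OID 1.3.6.1.2.1.1.1.0) using BER. It deliberately supports only short lengths and single-byte integers, which is enough for this one packet; a real implementation would use a full ASN.1 library.

```python
def ber(tag, payload):
    # Definite short-form length only (payload < 128 bytes), fine for this sketch
    assert len(payload) < 128
    return bytes([tag, len(payload)]) + payload

def ber_int(value):
    # Single-byte non-negative integers only
    return ber(0x02, bytes([value]))

def encode_oid(arcs):
    # The first two arcs are packed into one byte; the remaining arcs
    # of this OID are all < 128, so each fits in a single byte
    return ber(0x06, bytes([arcs[0] * 40 + arcs[1]]) + bytes(arcs[2:]))

def get_request(community, oid, request_id=1):
    varbind = ber(0x30, encode_oid(oid) + ber(0x05, b""))      # name + NULL value
    pdu = ber(0xA0,                                            # GetRequest-PDU
              ber_int(request_id) + ber_int(0) + ber_int(0) +  # id, error-status, error-index
              ber(0x30, varbind))                              # varbind list
    return ber(0x30, ber_int(0) +                              # version = 0 (SNMPv1)
               ber(0x04, community.encode()) + pdu)

packet = get_request("public", (1, 3, 6, 1, 2, 1, 1, 1, 0))   # sysDescr.0
# Sending it is one line: socket.sendto(packet, (agent_ip, 161))
```

The whole request fits in 40 bytes, which explains both SNMP's ubiquity on cheap hardware and why a monitoring system ends up sending enormous numbers of these little packets to keep its view current.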

This design implies a monitoring system that builds a centralized inventory of those entities, each with a life cycle. This works for very static networks. You can try to keep the inventory in sync by polling the SNMP agents on a regular basis, but as you can imagine, the bigger and more dynamic a network gets, the more often your world falls apart. For a monitoring system this is critical, because the information provided becomes less and less trustworthy, and this is the point where you start thinking about replacing your current monitoring system with a different one.

Because of the shortcomings of SNMP, a lot of monitoring tools deploy their own agents and reimplement this functionality with proprietary protocols, which made their world better, but not ours. Additionally, the world has changed dramatically in recent years with virtualization and with deploying and running applications in containers. The infrastructure now changes so often and is so dynamic that there is no chance to poll all these things from a central, scheduled place and provide an up-to-date inventory.

Generic tools which configure devices over SNMP are hard to find and/or expensive. It feels like peeking and poking registers and has nothing to do with a declarative approach to defining a maintainable configuration state, not to speak of all the problems described in RFC 3535.
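The contrast can be sketched in a few lines: a declarative tool holds a desired state and computes the delta against the observed state, instead of poking individual values one by one. The interface names and settings below are made up purely for illustration.

```python
# Hypothetical desired vs. observed interface state (illustration only)
desired = {"eth0": {"enabled": True,  "mtu": 9000},
           "eth1": {"enabled": False, "mtu": 1500}}
observed = {"eth0": {"enabled": True,  "mtu": 1500},
            "eth1": {"enabled": False, "mtu": 1500}}

def diff(desired, observed):
    """Return, per interface, only the settings that must change."""
    changes = {}
    for iface, settings in desired.items():
        delta = {k: v for k, v in settings.items()
                 if observed.get(iface, {}).get(k) != v}
        if delta:
            changes[iface] = delta
    return changes

print(diff(desired, observed))  # → {'eth0': {'mtu': 9000}}
```

The operator describes the end state; the tool works out what to touch. With SNMP SETs you would have to script the individual register writes yourself, in the right order, with no transaction around them.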

The Tool-Situation

In the early days, the ISO (International Organization for Standardization) categorized network management into functional areas, known for short as FCAPS:

  • Fault
  • Configuration
  • Accounting
  • Performance
  • Security

As you can imagine, a single area on its own is already huge. To get a checkmark on all of them, you have to look at the tools of the Big Four: BMC, CA, HP and IBM. So in reality you don't buy one tool from one of these vendors; they try to sell you the many tools they acquired over the years and munged together as a solution.

The open-source world followed more or less a one-tool-per-job approach. Monitoring tools mainly do their job in the fault- and performance-management areas. A lot of them can read SNMP, but they don't fully rely on it and mostly bring their own agents as well.

Configuration management tools like Ansible, SaltStack, Chef and Puppet are pretty much the industry standard. And guess what: instead of using SNMP, they all come with their own agents or transports (Ansible, for instance, pushes changes over SSH).

to be continued …

Nevertheless, SNMP is still the only agent that ships with your cheap hardware, and this is the reason why we still don't have fancy things. But its days are numbered: if you look at everything happening in next-generation technologies, e.g. server virtualization, containerization, SDN and NFV, you won't find SNMP. You have to face it: SNMP is dead. The new world in monitoring is model-driven streaming telemetry, which pushes data instead of being polled; that is covered in another article.