
snmpconf The *other* non-local feature





In my previous email I described the non-local feature in which the
PM agent abstracts out instance/addressing information from
scripts, even when the PM agent is acting as a proxy and SNMP
operations are performed remotely.

Key to this is that in policy scripts it is very easy to address
"this element", and easy to address related elements.

"This element"
   getvar("frCircuitCommittedBurst.$0.$1")
   or:
   getvar("acmeRouterCircuitTable.$0.$1")
   // this works when the other MIB is indexed the same way

Related Element
   getvar("ifSpeed.$0");  // the parent port of the interface

As elements become less related, they become more difficult to address
because less information is defaulted and more of it must be
explicitly specified; the policy engine does less work for you for
elements that are less related. Fortunately, the most frequent
operations are on "this element", the next most frequent are on
related elements, and operations on unrelated elements are rarer
still.

Unrelated elements
   getvar("ifSpeed.7"); // needed to specify #7
   // stuff like searchcolumn can help:
   searchcolumn("ifType", oid, "6", 0); // find an ethernet


Unrelated elements are on a continuum, because elements become even
more unrelated when we address other contexts on the same system, or
other systems entirely. In the former case we must begin specifying
the context explicitly; in the latter we must specify the context,
the address, and the security information.
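
A sketch of that continuum; the argument names, their order, and the
secInfo placeholder are invented for illustration, not a settled API:

   getvar("ifSpeed.$0")                   // this system, default context
   getvar("ifSpeed.$0", "backbone")       // this system, another context
   getvar("ifSpeed.7", "backbone", "10.1.2.3", secInfo)
                                          // another system: context, address
                                          // and security info all specified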

Looked at this way, the proxy-agent non-local operation couldn't be
more different from this type of non-local operation. While the
former is about less and less explicit addressing, this one is about
providing more and more explicit addressing so that a script can
address anything.


Why is it valuable to send an SNMP request to another system in a
policy environment? Let me pose an example.

  You have a medium-sized field office that connects over a big pipe
  (10MBit) to headquarters, with a slow-speed backup link
  (256K). There are a number of routers and switches at this site. The
  PM MIB is used to deploy a consistent QOS scheme of marking in the
  switches and limiting/shaping on the routers in order to ensure
  proper usage of the link. However, when the link goes down, all of
  this needs to change and a new QOS scheme needs to be deployed
  everywhere (HTTP can't enjoy its 512K allocation anymore). How will
  the switches/routers know the uplink has failed? It would seem that
  they need to test the status of the link with a remote SNMP query. I
  see the policies structured as:
        Filter1: uplinkUp() && (...)
        Action1: deployRegularQOS()

        Filter2: !uplinkUp() && (...)
        Action2: deployDisasterQOS()
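
For illustration, uplinkUp() might be implemented with a remote query
like the one below. This is a hedged sketch: it assumes the policy
language permits a helper function like this, that getvar() takes the
optional address/security arguments described below, and that the
uplink is ifIndex 2 on the field office's WAN router (all of these
details are invented for the example):

        uplinkUp() {
            // ask the WAN router whether the 10MBit uplink is
            // operationally up (ifOperStatus: up(1), down(2))
            status = getvar("ifOperStatus.2", "", "wan-router", secInfo);
            return (status == 1);
        }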


I believe such situations will be infrequent, but when they arise
they will require a solution. The solution isn't very expensive:
since we already have to be able to fully specify local operations,
including the context, the added cost of non-local operation is just
the address and the security information, added as optional arguments
to the APIs of some of the accessor functions.
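
Concretely, the shape of that change might look like this (argument
names and order are my assumption, and the setvar form shown is
assumed for symmetry rather than quoted from the draft):

   getvar(varName)                            // as today
   getvar(varName, context, address, secInfo) // fully non-local
   setvar(varName, value, type)               // as today (assumed form)
   setvar(varName, value, type, context, address, secInfo)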


Steve