
Re: snmpconf RE: script extensibility I-D


  Lots of good questions. I'll try to explain the design decisions as
best I can. Reading the WG's mailing list archive can also provide further
insight (you might need to go back a year or so for some of these
discussions).
> As to your question regarding draft-ietf-snmpconf-pm-06.txt, a quick perusal 
> of the document prompts me to ask the following questions: 
> 1) What is the justification for creating a new PolicyScript language rather 
> than using an existing language? The PolicyScript language is extremely 
> similar to Java and its variants and the three differences noted in the 
> "Quickstart for JavaScript/ECMAScript/JScript Programmers" are not what I 
> would consider to be critical to expressing policy, nor are they language 
> improvements. So why not define an API or PDU instead of a language? 

First, we needed the code size to be smaller than that of any of the 
existing interpreters. We didn't want it to "price itself out" of small 
devices (e.g. cable/DSL routers that retail for $200-$400). Thus we knew 
we couldn't incorporate any of the existing interpreted languages. 

Second, we had a requirement that we not invent a new language but 
rather base it on an existing language. One good reason (there are 
many) is that we aren't language experts and we shouldn't believe we 
can improve on what the experts have created. Practically speaking, we 
are not proficient at documenting programming languages and we have 
inadequate peer-review for the task. I think this rule has served us 
well because we've been able to concentrate our efforts on the policy 
and network management aspects of this technology and as a result have 
been pretty productive. 

As far as differences that were critical to expressing policy, you
might be surprised to hear that there weren't any. Some language
decisions were critical for working with SNMP, and many were for
making the language and interpreter simple, but none were for policy's
sake. I don't believe that policy needs any special support from the
language. The only area where specific policy support was added was in
the accessor functions (e.g. roleMatch(), defer(), etc). 

> 2) Assuming a new language is needed, why not base it on MIB syntax? It is 
> already necessary to know the MIB language in order to describe the data 
> that the policies will be based upon. If a single compiler could be used for 
> both code and data, it would simplify implementations as well. A common 
> representation will allow a single scanner/parser and a single storage 
> mechanism. (Note that Lisp derives a great deal of its power from its 
> ability to express code and data using identical syntax.) Since the code 
> must be pushed to heterogenous managed nodes, some compatible data format 
> must be chosen for transmission. Why not choose a data format compatible 
> with other SNMP data (i.e. a MIB)? 

It's not a new language but a subset of C/C++. 

These scripts need to be executed on managed systems, which don't tend
to have MIBs or MIB compilers on them (that's why all OIDs in scripts
must be translated to dotted-decimal notation before the script is
downloaded). So even if such a syntax were possible, the strategy makes
no sense because there's no MIB compiler to leverage on the managed
node.
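As a rough illustration of that pre-download step (the name-to-OID table
here is a hypothetical example; a real management station would consult
its own MIB compiler):

```python
# Sketch: a management station resolving symbolic OID names to
# dotted-decimal notation before downloading a script to an agent.
# OID_MAP is an illustrative table, not anything defined in the draft.
import re

OID_MAP = {
    "sysUpTime": "1.3.6.1.2.1.1.3",
    "ifInOctets": "1.3.6.1.2.1.2.2.1.10",
}

def resolve_oids(script: str) -> str:
    """Replace known symbolic OID names with their dotted-decimal form."""
    pattern = re.compile("|".join(re.escape(name) for name in OID_MAP))
    return pattern.sub(lambda m: OID_MAP[m.group(0)], script)

print(resolve_oids('getVar("ifInOctets.1")'))
# -> getVar("1.3.6.1.2.1.2.2.1.10.1")
```

After this pass the script contains only numeric OIDs, so the agent
never needs MIB knowledge of its own.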

Further, we wanted the language to be as easy to adopt as possible. 
It is important that, to the greatest extent possible, network 
operators and engineers feel that they already know this 
language. That's why the parallels to C, C++, Perl, and others were
so important.

> 3) Why are function definitions and enums not supported? Policy needs to be 
> expressed as coherently as possible. Functions and enums are both highly 
> effective abstraction mechanisms, and I would not want to have to define a 
> large body of policy code without them (or their moral equivalents). To add 
> insult to injury, the pre-processor is also excluded. 

All of this was removed for simplicity. We have seen no evidence
that policies will be large enough to need such abstraction
mechanisms.

> 4) Why is there only one scope for variables, with loosely-typed data, and 
> yet variable declarations are required? A variable declaration carries no 
> information content if it does not specify scope and type information. A 
> statement with no information content is, by definition, noise. Noise should 
> not be permitted in a language concerned with the clear, concise expression 
> of policy. 

We eliminated the treatment of every '{ .. }' as a new scope because 
it is non-trivial to implement and wasn't necessary.

Variable declarations were required because C and C++ require them, 
and those were our reference languages. 

> 5) Won't the use of the term "role" to mean any "characteristic of a managed 
> element" be confusing? We already have the established definition of role to 
> mean "the profile of security enablements necessary to accomplish a 
> particular job function by a user." I particularly object to a simple 
> boolean like "paid" to be given the exalted term of "role". This is not a 
> role, it is merely an attribute or property or characteristic or element 
> parameter, so why not use one of these terms? 

We used the definition of "role" from the policy terminology document
from the policy framework WG:

         - An administratively specified characteristic of a 
           managed element (for example, an interface). It is a 
           selector for policy rules and PRovisioning Classes 
           (PRCs), to determine the applicability of the rule/PRC to 
           a particular managed element.  

> 6) The integer type is defined to range over -2^63 to 2^64-1. Surely it 
> would be an extremely rare policy that would require such extremes of 
> integers for its expression, and yet pervasive policy enforcement will 
> require for some time to come that such expressions be interpreted on a lot 
> of equipment with less than 64-bit word lengths. Why are 64-bit integers 
> required? 

I tend to agree *except* for the very important case of holding 64-bit 
counter values and performing precise arithmetic on them. 
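For example, computing the delta of a Counter64 across a poll interval
needs full 64-bit precision; a sketch in Python (which has
arbitrary-precision integers, so the 64-bit modulus is applied
explicitly):

```python
# Sketch: delta of a 64-bit SNMP counter that may have wrapped once
# between two polls. Samples range over 0 .. 2**64 - 1.
MOD = 2 ** 64

def counter64_delta(previous: int, current: int) -> int:
    """Return the count accumulated between two samples, accounting
    for at most one wrap of the 64-bit counter."""
    return (current - previous) % MOD

# Near-wrap example: the counter rolled over between the samples.
print(counter64_delta(2**64 - 5, 10))  # -> 15
```

With only 32-bit arithmetic the subtraction above would silently lose
the high-order bits of a fast-moving counter.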

> 7) The integer type is defined to require inconsistent treatment of negative 
> and positive overflow. Positive overflow is to be ignored, a dangerous 
> proposition in policy enforcement. Negative overflow, surprisingly, does not 
> necessarily result in an RTE, instead the result is undefined. Why? 

Most languages don't treat overflow as an RTE.

> 8) It is unclear how expressions in the language are to be typed if "var" 
> variables can be either strings or integers, and string operators also have 
> integer interpretations. How do I know "+" means concatenation and not 
> addition in a given expression such as a + b? Having worked with JavaScript, 
> I can testify that it is important to make string and integer operations 
> syntactically distinct, especially if variables are loosely typed. (If a 
> simple language is desired, integers are unnecessary and should be 
> eliminated in favor of automatic and transparent conversions, as is done in 
> many other interpreted languages.) 

The var data type is borrowed from JavaScript, except that it uses 
64-bit integers instead of floats and doesn't have a NaN value 
for type conversion failures. All of the other rules are the same and 
are described starting on page 20 of the last draft. 
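A rough model of that difference (64-bit wrapping integers, and a
run-time error in place of NaN on a failed conversion) might look like
the following; this is my approximation for illustration, not the
normative rules in the draft:

```python
# Sketch: string-to-integer conversion for a loosely typed "var",
# raising a run-time exception instead of producing NaN as
# JavaScript would. The 64-bit wrap is an illustrative assumption.
MASK = 2 ** 64 - 1

def to_integer(value):
    """Convert a var to a 64-bit integer, or raise on failure."""
    if isinstance(value, int):
        return value & MASK
    try:
        return int(value, 0) & MASK  # base 0 accepts 0x.. hex forms
    except ValueError:
        raise RuntimeError("run-time exception: cannot convert %r" % value)

print(to_integer("42"))     # -> 42
print(to_integer(2 ** 64))  # wraps to 0
```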

> 9) Making a distinction between character constants (e.g. 'c') and strings 
> (e.g. "character string") is not useful. Why this carryover from C/C++? 
> Allowing a string of any length to be delimited by either single or double 
> quotes *is* useful as it permits unescaped single or double quotes in the 
> other type of string. 

Yes, I agree that would have been nice, but it wouldn't have been a 
subset of C/C++. 

> 10) Perl programmers will be unpleasantly surprised to learn that the die() 
> function does not have the arguments they expect. 

Are you saying that Perl has a die() function? If so, I agree that we 
should definitely rename it. 

> Both defer() and die() are 
> noisy; and the entire defer feature is of dubious necessity, and predictably 
> error-prone to implement. It should be eliminated in the interest of clear 
> policy expression. As a policy writer, the last thing I want to happen when 
> a policy encounters an RTE due to less than well-defined conditions is to 
> have another similar policy substituted that is equally likely to fail as 
> well. Why not use Python's try statement or Perl's eval {} construct for 
> error handling? That way the interpreter doesn't have to abandon the entire 
> policy if one expression fails, and code can take different actions if 
> different expressions fail, or if different errors are detected in one 
> expression. 

You have complete control of whether or not a defer will happen. 

Why the defer function? Imagine: 

#    Group   Precedence      Filter              Action 
1    3       3               roleMatch(gold)     set gold QOS 
2    3       2               roleMatch(silver)   set silver QOS 
3    3       1               always match        set default QOS 

This way policy #3 will be executed as a last resort to configure
interfaces with default QOS (this way no interface could "slip through
the cracks" due to a logic bug and remain unconfigured). But what if
we experienced a bug while executing "set gold QOS"? Then no QOS would
be configured. With the defer capability, I can be assured that at
least the default will be configured no matter what happens to the
other policies.
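
The behavior described above can be sketched as a small simulation;
the data structures and names here are illustrative only, not the
MIB's actual representation:

```python
# Sketch: precedence-ordered evaluation within one policy group. A
# policy whose action fails (or defers) yields to the next-lower
# precedence policy, so the always-match default still runs.

def run_group(policies, element):
    """policies: list of (precedence, filter_fn, action_fn) tuples."""
    for prec, matches, action in sorted(policies, key=lambda p: -p[0]):
        if not matches(element):
            continue
        try:
            return action(element)
        except RuntimeError:
            continue  # deferred/failed: fall through to lower precedence
    return None

def set_gold(element):
    raise RuntimeError("bug while executing 'set gold QOS'")

policies = [
    (3, lambda e: e["role"] == "gold",   set_gold),
    (2, lambda e: e["role"] == "silver", lambda e: "silver QOS"),
    (1, lambda e: True,                  lambda e: "default QOS"),
]

# The gold action fails, but the interface still gets a configuration.
print(run_group(policies, {"role": "gold"}))  # -> default QOS
```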

> 11) To avoid the problem of a security policy not being available when 
> needed after redefinition of roles or capabilities, policies should be 
> tagged with a "preload" attribute when they are security related or 
> otherwise would cause problems if they are not immediately available when 
> needed.

Such an attribute might be important in the internal database of the
management station, but I see little value in having it in the
agent. The preload attribute would only be visible for policies that
have already been loaded! And I can't force a preload for policies
that haven't been loaded since there is no instance to
set. Fundamentally, a preload attribute is an instruction for
management station behavior while this MIB defines attributes for
controlling agent behavior.

> 12) Why is the policy execution security model not well-defined? One might 
> hope for something along the same lines as RFC 2575 (view-based access 
> control) and RFC 2574 (user-based security model). This draft does 
> "recommend" use of those models, but doesn't go into any detail on how they 
> are expected to be used. There is no detail on what safeguards are to be 
> placed on the pushing of policy code from the management station to the 
> managed nodes. I would want to know when the view-based access controls 
> should be applied, and when the user-base security model should be applied, 
> and the details of how to extend those models to cover execution access in 
> addition to read/write access, and how to identify the responsible user at 
> each point. There are many possible answers, so why leave us guessing?

That text is just boilerplate that every MIB these days
contains. Similarly, the questions you raise are areas of flexibility
that today's MIBs leave to the implementor and/or customer.

> 13) Why is it required that policy code execute locally with the credentials 
> of the person who last updated it? (And if this is a good idea, why not make 
> it the default for remote execution as well?) Why not use the credentials of 
> the person who installed the management station software, or the person that 
> installed SNMP on the managed node, or a person defined to own all global 
> policies?

This is a security mechanism. I don't want someone to hijack my
authority by modifying the script that I installed to do something I
didn't intend. Instead, the act of modifying the script revokes my
authority and substitutes the authority of the modifier: they gain
control of the script only under their own credentials.

> There is no thought given to the possible need to revoke a 
> credential, or replace it with that of another principal if someone leaves 
> the enterprise.

These functions are handled by the security infrastructure of RFC 2574
and 2575.

> 14) If a credential is not valid for execution on a given managed node, what 
> is to be done? Is the global policy to be ignored? How does a managed node 
> define who may or may not issue global policies that apply to that node? Are 
> we to assume that the credential is the same as a logon credential for the 
> operating system of the managed node?

The SNMP Administrative Framework doesn't define execution rights.

Execution access is given to all who can write to the
policyTable/policyCodeTable. It follows that to deny execution access
you deny writes to these tables.

I expect that most policies will be distributed by central NMS
software. This software must have write access to the
policyTable/policyCodeTable to actually install policies.

> David Wallis