
Re: Comments on draft-ietf-snmpconf-pm-02



>> Several *Index object types have integer range restrictions
>> (0..65535). Why at all? Why 64k?

Steve> 2 reasons:

Steve>   1) SMICng requires range restrictions on index objects. If I didn't
Steve>      put them there people would complain that it didn't compile
Steve>      cleanly. Sometimes it's better to just submit :-)
Steve>   2) It's useful to declare that 0 is a valid value. Otherwise this
Steve>      would be an FAQ.

I'd suggest using Unsigned32 without restrictions. The reason why you
need range restrictions for Integer32 is that an index sub-identifier
must not be negative. SMICng is doing the right thing.
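
For illustration, the suggestion would look like this hypothetical object
definition (the object names are made up, not from the draft; Unsigned32
comes from SNMPv2-SMI and already excludes negative values):

```
        -- hypothetical sketch, names invented
        pmExampleIndex OBJECT-TYPE
            SYNTAX      Unsigned32
            MAX-ACCESS  not-accessible
            STATUS      current
            DESCRIPTION
                "An arbitrary integer that uniquely identifies this row.
                 No range restriction is needed: Unsigned32 cannot be
                 negative, so the index sub-identifier is always valid."
            ::= { pmExampleEntry 1 }
```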

Steve> Unfortunately, when you put in a range, sometimes you have to
Steve> make an arbitrary decision on one end of the range. That's what
Steve> the 65535 represents (a number larger than reasonably
Steve> possible).

Then you should specify the actual maximum value, not an arbitrary one.

Steve> One of the chief benefits of policy-based management is that it
Steve> reduces the number of management decisions that need to be made
Steve> (or in plain English, there won't be that many policies), so a
Steve> limit of 64K seems reasonable.

Well, it seems just arbitrary to me. ;-)


>> Why is pmPolicyFilter restricted to a maximum size of 65535 octets?
>> If there's a good reason for this size, why is pmPolicyAction not
>> restricted?

Steve> Because the SMI restricts octet strings to 64K. It was my
Steve> intent to be consistent with both objects but I missed
Steve> policyAction.

Yes. The SMI already restricts OCTET STRINGs to this size. See SNMPv2-SMI:

        -- OCTET STRINGs with a more restrictive size
        -- may also be used
        string-value
            OCTET STRING (SIZE (0..65535)),


>> What is the purpose of pmPolicyActionMaxLatency? At first sight, it
>> seems like an action is called exactly when a filter evaluates to
>> true. Thus the action latency simply depends on pmPolicyFilterMaxLatency
>> and the result of the filter evaluation.

Steve> This architecture allows the agent to implement them coupled (more
Steve> obvious and simpler to implement) or decoupled (provides more
Steve> scalability for larger systems). [...]

OK, I think I understand these two modes of operation. There are
significant differences. E.g., in the decoupled case, it might happen
that a filter evaluates to true, then to false again, and the action
never gets executed. How does a manager recognize how the agent works?


>> The pmPolicyMatches DESCRIPTION says `The number of policies ...' while
>> it means the number of elements. However, I do not see much value in
>> this object type. When a manager reads this variable, it does not know
>> at which stage in policy evaluation this value is current.

Steve> Yes, it should say "The number of elements ...".

Steve> Once filterMaxLatency time has passed, you know this value will be
Steve> current.

Maybe a note should be added that the agent must not update
pmPolicyMatches for each row while it runs through the policy table.
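
The intent of such a note could be sketched like this (a toy model with
invented names, not the draft's execution model):

```python
def evaluate_policy(elements, filter_fn, mib_view):
    """Evaluate a policy filter over all elements and publish the match
    count only once, after the full pass - never row by row."""
    matches = 0
    for element in elements:
        if filter_fn(element):
            matches += 1
    mib_view["pmPolicyMatches"] = matches  # single update after the pass
    return matches
```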


>> It's explicitly stated that `no state is remembered from the previous
>> invocation' in filters and actions. So: how can counters be evaluated
>> by a filter in a meaningful way?

Steve> I'll answer "how" below, but first I'd like to discuss why we
Steve> don't want to remember state - the reasons are complicated and
Steve> vitally important. [...]

Steve> About the only thing we COULD rationally do is have context 2C keep
Steve> state from the last execution of context 2C.

Yes. This is what you need for counter evaluation.

Steve> Each combination of policy and element is a separate thread of
Steve> execution, so we could use threads, but we don't want to have to
Steve> support P*E threads (we must allow P*E to be a very large number,
Steve> say P=100 and E=10,000). So threads are out.

Well, let's say it's an implementation issue.

Steve> We could automatically remember the last state of all automatic
Steve> (local) variables, but this could be a huge burden, especially
Steve> considering that many scripts will have multiple local variables but
Steve> few will need to remember any from invocation to invocation.

Steve> However, there may be times that we need to remember state from one
Steve> context to another. Here are 3 ways: (sorry for the shorthand)

Steve> A) Scratchpad MIB: [...]

You are kidding!? ;-)

Steve> B) Scratchpad accessor function [...]
   
Sounds like the most reasonable of these three proposals.

Steve> C) Counter specific accessor function [...]

Counters were just one example. Another one would be the detection of
state changes, where states are represented by values of an enumeration
type object. So this counter-specific solution would not be generic
enough, I guess.
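
To illustrate that a generic scratchpad (option B) would cover this case
too, here is a sketch with invented accessor names, keyed by
(policy, element) just like the draft's execution contexts:

```python
_scratchpad = {}

def scratch_set(policy, element, key, value):
    _scratchpad[(policy, element, key)] = value

def scratch_get(policy, element, key, default=None):
    return _scratchpad.get((policy, element, key), default)

def state_changed(policy, element, current_state):
    """True if an enumerated state differs from the previous invocation;
    no state is kept in the script itself, only in the scratchpad."""
    previous = scratch_get(policy, element, "oper-state")
    scratch_set(policy, element, "oper-state", current_state)
    return previous is not None and previous != current_state
```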

Steve> Finally, let's keep some rigor in our requirements analysis (i.e. a
Steve> little skepticism). I can't imagine a filter or action script making a
Steve> meaningful policy-based decision based on the 'instantaneous' delta
Steve> value derived from 2 counter values (whether they were collected 2
Steve> minutes apart or 2 hours apart). It seems that at least some smoothing
Steve> is required.

Yes. But I can easily imagine policies that take action when too many
errors occur. This means you need guaranteed time deltas or some
implicit evaluation (in the case of counters).

Steve>   What type of policies would depend on traffic or error rates?

Policies that limit an ISP customer's traffic volume per day.

Policies that adapt routing for `gold' customers in case of many
errors on the cheaper routes.

Steve>   What types of rates would they need? smoothed? peak?

Well, every computed rate is `smoothed' somehow. What people - well, at
least I ;-) - would expect (apart from discontinuities and wraps) is:

            value-of-this-eval - value-of-last-eval
  value  =  ---------------------------------------
                     this-time - last-time
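
In code, the expected computation might look like this (a minimal sketch
assuming Counter32 semantics with at most one wrap between samples; all
names invented):

```python
COUNTER32_MAX = 2**32

def rate(prev_value, prev_time, value, time):
    """Per-second rate between two counter samples, tolerating one wrap."""
    if time <= prev_time:
        raise ValueError("samples must be taken at strictly increasing times")
    delta = value - prev_value
    if delta < 0:            # the counter wrapped (assumed at most once)
        delta += COUNTER32_MAX
    return delta / (time - prev_time)

# rate(100, 0, 400, 30) == 10.0
# rate(2**32 - 50, 0, 50, 10) == 10.0   (wrap handled)
```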

Steve>   Where is the best place to get those rates? (another MIB perhaps?)

My assumption is: by reading from the relevant MIB objects and computing
in the filter script using some kind of inter-evaluation persistent data
(script variables or accessor functions?).


>> The pmCapabilitiesTable represents a new way to express agent
>> capabilities, but with a limited scope. I think it is not a good idea.

Steve> Agent capabilities describes the capabilities of the instrumentation
Steve> (i.e. SNMP Agent), is static, and is not accessible by SNMP
Steve> retrievals.

Steve> The capabilities describe the capabilities of the managed system, they
Steve> are dynamic, and they are accessible by SNMP retrievals (as well as by
Steve> the capMatch accessor function).

Steve> In particular, you need this functionality if you want your policy
Steve> to know if diffserv is currently supported by the card plugged into
Steve> slot #3.

Steve> Jon wrote the current iteration of the capabilities table so I'll let
Steve> him field your questions about it.

I agree that a way to retrieve agent capabilities by SNMP is a good
thing. But I don't think it's a good idea to do this just in the PM
context.