RE: A Data collection MIB

Hi Bryan,

Thanks for replying to my comments. Please see my
responses inline. I have two new comments:

1. You need to add management context support in
this MIB so that the NMS can specify which management
context to collect data from, e.g. a switch can have multiple
instances of the Bridge MIB, and an NMS may be interested in
collecting data from a MIB table in a particular context.
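
For instance, the slice table could carry a context-name object
along these lines (a sketch only; the object name, index, and
OID placement are hypothetical, not from the draft):

```
-- Hypothetical object; name and OID assignment are illustrative only.
sliceContextName OBJECT-TYPE
    SYNTAX      SnmpAdminString (SIZE (0..32))
    MAX-ACCESS  read-create
    STATUS      current
    DESCRIPTION
        "The management context (e.g. a particular instance of
         the Bridge MIB) from which the data identified by this
         slice is to be collected.  The zero-length string
         denotes the default context."
    DEFVAL      { "" }
    ::= { sliceEntry 9 }
```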

2. Change the title of the document. Currently it says
"SNMP Row Operations Extensions", which doesn't reflect its
contents.

> >I agree that we really need to have the discussion going
> >on this list. As an interested developer, I had sent a few
> >comments for draft-ietf-eos-snmp-bulkdata-00.txt on
> >08/02/2001 but haven't seen any response or any updated
> >draft. If there is a need for more volunteers, maybe we
> >should ask for help on the list.
>I apologize for not putting more time on this list for
>discussion of that MIB.  let me take a belated try at your
>comments from long ago:
> >- When is the bulk data collected? Is it collected
> >when the entry in slice table is made active or when
> >the entry in xferTable made active?
>I envision the bulk data being collected, conceptually
>all at once (simulating an atomic data-gathering)
>when the 'xferEntryStatus' variable is set.  looking back
>at the mib, I agree it wasn't clear that setting this var
>would cause the data to be collected - just transmitted.
>perhaps it makes more sense to have some action variables
>that are separate; one for 'do the data collection NOW'
>and one for 'do the file xfer NOW'.

Yes, action variables would make this clear. But if you add
separate action variables for data collection and transfer, the
two need not happen at the same time, and the
implementation must save the file for later use by the
transfer subsystem. If instead you have only one action
variable for transfer+data collection, the bulk data can be
transmitted as it is being generated. So there is no hassle of
saving the file, etc.; a transient file would be good enough.
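
A single action object might then be sketched along these lines
(hypothetical name, values, and OID; not part of the current draft):

```
-- Hypothetical action object; name and values are illustrative only.
xferAction OBJECT-TYPE
    SYNTAX      INTEGER { idle(1), collectAndTransfer(2), abort(3) }
    MAX-ACCESS  read-create
    STATUS      current
    DESCRIPTION
        "Setting this object to collectAndTransfer(2) causes the
         agent to collect the data named by the associated slice
         and transmit it as it is generated.  Setting abort(3)
         stops an operation in progress.  When no operation is in
         progress, reading this object returns idle(1)."
    ::= { xferEntry 10 }
```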

> >- If the NMS wants to get the same data again
> >and again at regular intervals, it seems he has
> >to create new entries in the xferTable. Instead
> >there should be an object to start generation
> >and transfer of the data specified by a particular
> >sliceEntry again and again without having to create
> >new xferTable entries.
>it was my intention that the user would only have to define
>the data [slice] once and then be able to use it again and
>again.  perhaps the same notion should be applied to the xfer
>table as well.  otoh, I am not sure I agree that periodic
>uploads should be automatic.  it's too easy for an NMS to
>set up a recurring job and then forget about it, not ever deleting
>that 'control record' in the agent.  I'd far prefer that even just
>a single snmp var be set each time an xfer is to be done than have
>the agent just assume 'keep uploading ad infinitum until I'm told
>otherwise'.

Yes, that's what I said in my comment. Maybe I wasn't clear.
I didn't want an object for automatic periodic uploads.
An action variable, like the one mentioned above, to initiate the
transfer+data collection would be better than having to create
new entries.

> >- Currently the NMS can only specify and transfer
> >either a MIB subtree or a single MIB table in one
> >file. There should be a provision to transfer a bunch
> >of data in one go e.g. NMS may need to transfer
> >a lot of tables in several MIBs to fully populate
> >its database. Instead of creating and parsing one
> >file per MIB table, there should be provision to
> >parse the whole list of tables and possibly scalars
> >as well.
>I am very seriously considering allowing multiple 'augments' style
>tables to be scooped up at once and sent up to the remote fileserver
>in one single file.  dperkins convinced me of this and I agree that
>it's useful.  otoh, I'm not sure of the utility of scooping up several
>tables (possibly including scalars) and putting them all in ONE file
>for remote upload.  this just makes things more complicated than they
>have to be.  it's much easier to have a single schema record at the beginning
>of the data file than to have to delimit several tables and several schemas.
>uploading a few files vs. one big one seems to have no real effect on the
>efficiency of the data collection and transfer.

Usually an NMS is interested in many subtrees
and tables (probably hundreds of tables) when it is
populating or updating its database. Having to create a
separate entry for each table/subtree, and then parsing and
managing a separate bulk data file per table, is far more
cumbersome than having to fit multiple schemas and their
data into a single file.

Moreover, I think we should support collecting scalars
as well. That way the NMS doesn't need to use this
MIB for one type of data (tables, etc.) and another interface
(SNMP GET) for scalars.
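
To illustrate, a multi-table file could delimit each section with
its own schema record (a purely illustrative layout, not from the
draft; table and column names are just examples):

```
# schema record for the first table
SCHEMA ifTable: ifIndex,ifDescr,ifType
1,eth0,6
2,eth1,6
# schema record for the second section (scalars)
SCHEMA system: sysUpTime,sysName
1234500,router1
```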

> >- You need to include details in the draft about how
> >to integrate this MIB with the Schedule MIB. In
> >that case, the entries in the xferTable need to be
> >active forever unless explicitly deleted.
>I agree that more details need to be there.  hopefully, a lot of the missing
>details will be explained by sample code (if I can ever get enough time to
>get a 0.1 working demo that I can submit for general review).

I have done some research on using the Schedule
MIB with a similar MIB. I'll mail you info about how to
integrate the Schedule MIB with this MIB.
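
For reference, a periodic trigger via the DISMAN-SCHEDULE-MIB
(RFC 3231) would look roughly like this; the index value and the
target action variable (here called 'xferAction') are hypothetical:

```
-- Illustrative schedTable entry (DISMAN-SCHEDULE-MIB, RFC 3231).
-- Index 5 and the target variable name are hypothetical.
schedDescr.5        = "weekly bulk upload of the ifTable slice"
schedType.5         = periodic(1)
schedInterval.5     = 604800            -- seconds: one week
schedVariable.5     = xferAction.1      -- this MIB's action object
schedValue.5        = 2                 -- value written on each firing
schedAdminStatus.5  = enabled(1)
schedRowStatus.5    = active(1)
```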

> >- There should be an object to start/stop the bulk
> >file generation/transfer. Currently there is no knob
> >to stop the bulk file generation/transfer other than
> >deleting the row which is not good.
>by separating out the definition of the remote file login info (currently
>as part of the xfer table) from the 'action request', we can avoid having
>to delete the remote file login info just to abort the file transfer.
>you have a good point

Or we can use the action variable mentioned above to
have a "Stop" value in addition to "Start Now". A set
on that object with the "Stop" value will stop/abort the
ongoing bulk file generation/transfer.

> >- Notifications should be defined in this MIB to
> >indicate both the error conditions and the successful
> >transfer of the bulk file.
>yes, I was a bad boy in not listing any traps as part of this MIB.  I will
>fix that mistake in the next rev ;-)
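
A pair of notifications along these lines would cover both cases
(names and referenced objects are hypothetical):

```
-- Hypothetical notifications; names and OBJECTS are illustrative only.
xferSuccess NOTIFICATION-TYPE
    OBJECTS     { xferFileName }
    STATUS      current
    DESCRIPTION
        "Sent when a bulk file has been transferred to and
         accepted by the remote fileserver."
    ::= { bulkDataNotifications 1 }

xferFailure NOTIFICATION-TYPE
    OBJECTS     { xferFileName, xferLastError }
    STATUS      current
    DESCRIPTION
        "Sent when bulk file generation or transfer fails;
         xferLastError gives the reason."
    ::= { bulkDataNotifications 2 }
```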


> >- Is the bulk file stored in RAM/Flash or is it an
> >ephemeral file? The status variable does have a
> >reference to an ephemeral file. If it's an ephemeral file,
> >it should be mentioned clearly in the draft.
>I didn't specify how the intermediate file would be saved.  I did assume
>that it would be persistent across agent reboots, if for no other reason
>than to keep a local copy until the agent is SURE that the file has been
>xferred AND accepted by the remote fileserver.  but your point is a good
>one in that it should be explicit and not just assumed.

If you are thinking about storing the file, then this MIB
should have objects like file name and storage type, i.e.
volatile (stored in RAM) or permanent (survives reboots).

Maybe it should also have objects, with some DEFVALs, to:
1. specify the maximum allowed size of the bulk file,
  so that the file doesn't take up all the space in RAM/flash;
2. indicate whether the file should be deleted after it has been
  successfully transferred, so as to reclaim memory/flash.
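
These two could be sketched as follows (hypothetical names,
OIDs, and default values, offered only as a starting point):

```
-- Hypothetical objects; names, OIDs, and defaults are illustrative.
xferMaxFileSize OBJECT-TYPE
    SYNTAX      Unsigned32
    UNITS       "bytes"
    MAX-ACCESS  read-create
    STATUS      current
    DESCRIPTION
        "The maximum size the generated bulk file may grow to.
         Generation is aborted if this limit would be exceeded."
    DEFVAL      { 1048576 }
    ::= { xferEntry 11 }

xferDeleteOnCompletion OBJECT-TYPE
    SYNTAX      TruthValue
    MAX-ACCESS  read-create
    STATUS      current
    DESCRIPTION
        "If true, the agent deletes the local bulk file once the
         remote fileserver has accepted the transfer."
    DEFVAL      { true }
    ::= { xferEntry 12 }
```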

I think saving the files adds a lot of overhead for
file management etc. and doesn't carry much benefit.
A transient file, i.e. transferring the data as soon as it is
collected (probably one record at a time), is easy
to implement and use.

> >- The MIB in the draft uses IpAddress type. It should
> >instead use InetAddressType and InetAddress to
> >accommodate IPv6 addresses as well.
> >- In description of xferFileEncoding object, the
> >XML schema should be included so that the NMS
> >knows how to parse it.
>not being much of an XML person, I'll defer this to someone who knows
>XML.  I added 'xml' only to suggest that CSV files wouldn't be the only
>format that would be useful to remote NMSs.  any xml fans out there care
>to beef up this section?

I'll compile the XML schema corresponding to the specified
ASCII format and you can add it to this section.
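
As a starting point, the XML encoding might look something like
this (purely illustrative; the element names are not from the
draft, and the real schema is still to be written):

```
<bulkdata>
  <table name="ifTable">
    <schema>
      <col>ifIndex</col><col>ifDescr</col><col>ifType</col>
    </schema>
    <row><v>1</v><v>eth0</v><v>6</v></row>
    <row><v>2</v><v>eth1</v><v>6</v></row>
  </table>
</bulkdata>
```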