=head1 NAME
rrdcreate - Set up a new Round Robin Database
=head1 SYNOPSIS
B<rrdtool> B<create> I<filename>
S<[B<--start>|B<-b> I<start time>]>
S<[B<--step>|B<-s> I<step>]>
S<B<DS:>I<ds-name>B<:>I<DST>B<:>I<heartbeat>B<:>I<min>B<:>I<max>> [B<DS:>...] ...
S<B<RRA:>I<CF>B<:>I<xff>B<:>I<steps>B<:>I<rows>> [B<RRA:>...] ...
=head1 DESCRIPTION
The create function of the RRDTool lets you set up new
Round Robin Database (B<RRD>) files.
The file is created at its final, full size and filled
with I<*UNKNOWN*> data.
=over 8
=item I<filename>
The name of the B<RRD> you want to create. B<RRD> files should end
with the extension F<.rrd>. However, B<RRDTool> will accept any
filename.
=item B<--start>|B<-b> I<start time> (default: now - 10s)
Specifies the time in seconds since 1970-01-01 UTC when the first
value should be added to the B<RRD>. B<RRDTool> will not accept
any data timed before or at the time specified.
See also the AT-STYLE TIME SPECIFICATION section in the
I<rrdfetch> documentation for more ways to specify time.
=item B<--step>|B<-s> I<step> (default: 300 seconds)
Specifies the base interval in seconds with which data will be fed
into the B<RRD>.
=item B<DS:>I<ds-name>B<:>I<DST>B<:>I<heartbeat>B<:>I<min>B<:>I<max>
A single B<RRD> can accept input from several data sources (B<DS>).
(e.g. Incoming and Outgoing traffic on a specific communication
line). With the B<DS> configuration option you must define some basic
properties of each data source you want to use to feed the B<RRD>.
I<ds-name> is the name you will use to reference this particular data
source from an B<RRD>. A I<ds-name> must be 1 to 19 characters long in
the characters [a-zA-Z0-9_].
I<DST> defines the Data Source Type. See the section on "How to Measure" below for further insight.
The Data Source Type must be one of the following:
=over 4
=item B<GAUGE>
is for things like temperatures or number of people in a
room or value of a RedHat share.
=item B<COUNTER>
is for continuous incrementing counters like the
InOctets counter in a router. The B<COUNTER> data source assumes that
the counter never decreases, except when a counter overflows. The update
function takes the overflow into account. The counter is stored as a
per-second rate. When the counter overflows, RRDTool checks if the overflow happened at
the 32bit or 64bit border and acts accordingly by adding an appropriate value to the result.
=item B<DERIVE>
will store the derivative of the line going from the last to the
current value of the data source. This can be useful for gauges, for
example, to measure the rate of people entering or leaving a
room. Internally, derive works exactly like COUNTER but without
overflow checks. So if your counter does not reset at 32 or 64 bit you
might want to use DERIVE and combine it with a MIN value of 0.
=item B<ABSOLUTE>
is for counters which get reset upon reading. This is used for fast counters
which tend to overflow. So instead of reading them normally you reset them
after every read to make sure you have a maximal time available before the
next overflow. Another usage is for things you count like number of messages
since the last update.
=back
I<heartbeat> defines the maximum number of seconds that may pass
between two updates of this data source before the value of the
data source is assumed to be I<*UNKNOWN*>.
I<min> and I<max> are optional entries defining the expected range of
the data supplied by this data source. If I<min> and/or I<max> are
defined, any value outside the defined range will be regarded as
I<*UNKNOWN*>. If you do not know or care about min and max, set them
to U for unknown. Note that min and max always refer to the processed values
of the DS. For a traffic-B<COUNTER> type DS this would be the max and min
data-rate expected from the device.
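If you want to see how such a definition looks in practice, here is a
minimal sketch of a create call with two B<COUNTER> data sources for
router traffic; the file name, the DS names and the single archive are
purely illustrative, not a recommended setup:

 # illustrative only; names and sizes are assumptions
 rrdtool create traffic.rrd --step 300 \
         DS:ifInOctets:COUNTER:600:0:U \
         DS:ifOutOctets:COUNTER:600:0:U \
         RRA:AVERAGE:0.5:1:600

Here the heartbeat of 600 seconds tolerates one missed 300 second update
before the data becomes I<*UNKNOWN*>, the minimum of 0 rejects negative
rates, and the maximum is left unknown (U).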
=item B<RRA:>I<CF>B<:>I<xff>B<:>I<steps>B<:>I<rows>
The purpose of an B<RRD> is to store data in the round robin archives
(B<RRA>). An archive consists of a number of data values from all the
defined data-sources (B<DS>) and is defined with an B<RRA> line.
When data is entered into an B<RRD>, it is first fit into time slots of
the length defined with the B<-s> option, becoming a I<primary data point>.
The data is also consolidated with the consolidation function (I<CF>)
of the archive. The following consolidation functions are defined:
B<AVERAGE>, B<MIN>, B<MAX>, B<LAST>.
I<xff>, the xfiles factor, defines what part of a consolidation interval may
be made up from I<*UNKNOWN*> data while the consolidated value is still
regarded as known.
I<steps> defines how many of these I<primary data points> are used to
build a I<consolidated data point> which then goes into the archive.
I<rows> defines how many generations of data values are kept in an B<RRA>.
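To see how I<steps> and I<rows> relate to the base step, consider this
sketch of a few archive definitions for a database created with a 300
second B<--step>; the exact numbers are only an illustration:

 # illustrative only; assumes --step 300
 RRA:AVERAGE:0.5:1:600
 RRA:AVERAGE:0.5:12:700
 RRA:MAX:0.5:12:700

The first archive keeps 600 primary data points, roughly two days at
full 300 second resolution. The second and third consolidate 12 primary
data points into one hour each and keep 700 of these hourly averages
and maxima, roughly 29 days.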
=back
=head1 The HEARTBEAT and the STEP
Here is an explanation by Don Baarda on the inner workings of RRDTool.
It may help you to sort out why all this *UNKNOWN* data is popping
up in your databases:
RRD gets fed samples at arbitrary times. From these it builds Primary
Data Points (PDPs) at exact times every "step" interval. The PDPs are
then accumulated into RRAs.
The "heartbeat" defines the maximum acceptable interval between
samples. If the interval between samples is less than "heartbeat",
then an average rate is calculated and applied for that interval. If
the interval between samples is longer than "heartbeat", then that
entire interval is considered "unknown". Note that there are other
things that can make a sample interval "unknown", such as the rate
exceeding limits, or even an "unknown" input sample.
The known rates during a PDP's "step" interval are used to calculate
an average rate for that PDP. Also, if the total "unknown" time during
the "step" interval exceeds the "heartbeat", the entire PDP is marked
as "unknown". This means that a mixture of known and "unknown" sample
time in a single PDP "step" may or may not add up to enough "unknown"
time to exceed "heartbeat" and hence mark the whole PDP "unknown". So
"heartbeat" is not only the maximum acceptable interval between
samples, but also the maximum acceptable amount of "unknown" time per
PDP (obviously this is only significant if you have "heartbeat" less
than "step").
The "heartbeat" can be short (unusual) or long (typical) relative to
the "step" interval between PDPs. A short "heartbeat" means you
require multiple samples per PDP, and if you don't get them mark the
PDP unknown. A long heartbeat can span multiple "steps", which means
it is acceptable to have multiple PDPs calculated from a single
sample. An extreme example of this might be a "step" of 5 minutes and a
"heartbeat" of one day, in which case a single sample every day will
result in all the PDPs for that entire day period being set to the
same average rate. I<-- Don Baarda E<lt>don.baarda@baesystems.comE<gt>>
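The following sketch shows both situations side by side; the file and
DS names are purely illustrative. With a 300 second "step", the first
data source tolerates one missed update, while the second lets a single
sample per day fill a whole day of PDPs:

 # illustrative only; names and numbers are assumptions
 rrdtool create heartbeat-demo.rrd --step 300 \
         DS:strict:GAUGE:600:U:U \
         DS:relaxed:GAUGE:86400:U:U \
         RRA:AVERAGE:0.5:1:1000

If both data sources are updated only once a day, I<strict> ends up
almost entirely I<*UNKNOWN*>, while I<relaxed> shows the same average
rate for every PDP of that day, just as described above.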
=head1 HOW TO MEASURE
Here are a few hints on how to measure:
=over
=item Temperature
Normally you have some type of meter you can read to get the temperature.
The temperature is not really connected with a time. The only connection is
that the temperature reading happened at a certain time. You can use the
B<GAUGE> data source type for this. RRDTool will then record your reading
together with the time.
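For instance, assuming the F<temperature.rrd> database created in the
EXAMPLE section below, a single reading of 21.5 degrees taken right now
could be recorded like this (the value is, of course, just an example):

 # illustrative reading; assumes temperature.rrd from the EXAMPLE section
 rrdtool update temperature.rrd N:21.5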
=item Mail Messages
Assume you have a method to count the number of messages transported by
your mailserver in a certain amount of time; this gives you data like '5
messages in the last 65 seconds'. If you look at the count of 5 as an
B<ABSOLUTE> datatype, you can simply update the RRD with the number 5 and the
end time of your monitoring period. RRDTool will then record the number of
messages per second. If at some later stage you want to know the number of
messages transported in a day, you can get the average messages per second
from RRDTool for the day in question and multiply this number with the
number of seconds in a day. Because all math is run with Doubles, the
precision should be acceptable.
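As a sketch of this approach (the file F<mail.rrd> and the DS name
I<messages> are assumptions, not part of any standard setup), you could
feed each count with B<rrdtool update> and later let B<rrdtool graph>
multiply the average rate by the number of seconds in a day:

 # illustrative only; mail.rrd and its DS "messages" are assumptions
 rrdtool update mail.rrd N:5

 rrdtool graph /dev/null --start -86400 \
         DEF:rate=mail.rrd:messages:AVERAGE \
         CDEF:perday=rate,86400,* \
         PRINT:perday:AVERAGE:"messages per day %.0lf"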
=item It's always a Rate
RRDTool stores rates in amount/second for COUNTER, DERIVE and ABSOLUTE data.
When you plot the data, the y axis will show amount/second, which you
might be tempted to convert to an absolute amount by multiplying by the
delta-time between the points. RRDTool plots continuous data, and as such is
not appropriate for plotting absolute volumes, such as the "total bytes"
sent and received by a router. What you probably want is to plot rates that
you can scale to, for example, bytes/hour, or to plot volumes with another
tool that draws bar-plots, where the delta-time is clear on the plot for each
point (such that when you read the graph you see, for example, GB on the
y axis, days on the x axis and one bar for each day).
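If you do want bytes/hour on the y axis, scaling the rate with a B<CDEF>
is enough; this sketch assumes a hypothetical F<traffic.rrd> with a DS
called I<ifInOctets>:

 # illustrative only; traffic.rrd and ifInOctets are assumptions
 rrdtool graph traffic.png --start -86400 \
         DEF:inoct=traffic.rrd:ifInOctets:AVERAGE \
         CDEF:inbytes=inoct,3600,* \
         LINE1:inbytes#0000ff:"bytes/hour in"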
=back
=head1 EXAMPLE
C<rrdtool create temperature.rrd --step 300 DS:temp:GAUGE:600:-273:5000 RRA:AVERAGE:0.5:1:1200 RRA:MIN:0.5:12:2400 RRA:MAX:0.5:12:2400 RRA:AVERAGE:0.5:12:2400>
This sets up an B<RRD> called F<temperature.rrd> which accepts one
temperature value every 300 seconds. If no new data is supplied for
more than 600 seconds, the temperature becomes I<*UNKNOWN*>. The
minimum acceptable value is -273 and the maximum is 5000.
A few archive areas are also defined. The first stores the
temperatures supplied for 100 hours (1200 * 300 seconds = 100
hours). The second RRA stores the minimum temperature recorded over
every hour (12 * 300 seconds = 1 hour), for 100 days (2400 hours). The
third and the fourth RRA's do the same for the maximum and
average temperature, respectively.
=head1 AUTHOR
Tobias Oetiker E<lt>oetiker@ee.ethz.chE<gt>