GET library
This page presents the procedures defined for the various analyses available in the GET system analyser class. The base class for analysis is GETSystemAnalyser.
These analyses should be defined after the GET system has been created with its number of modules and channels.
Noise analysis

The noise analysis evaluates, on an event-by-event basis, the signal RMS of each channel.
The analysis is created when defining the sample interval on which the noise is estimated (function GETSystemAnalyser::SetNoiseRange). The interval must be chosen so that a real signal is very unlikely to appear in this range. This is in principle achieved at the beginning of the samples, provided the trigger delay of the GET electronics is set properly.
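For illustration, a minimal setup sketch is given below; the header name and the SetNoiseRange prototype assumed here (first and last time bucket of the noise interval) should be checked against the class documentation.

```cpp
#include "GETSystemAnalyser.hh"   // header name is an assumption

// Minimal sketch: enabling the noise analysis. The SetNoiseRange
// argument list (first and last time bucket of the noise interval)
// is an assumption about the actual prototype.
void SetupNoiseAnalysis(GETSystemAnalyser &analyser)
{
  // Time buckets 10 to 60: the very beginning of the samples, where
  // no physical signal is expected if the trigger delay is set properly.
  analyser.SetNoiseRange(10, 60);
}
```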
The analysis is performed by the analyser on corrected outputs (FPN, baseline...), namely in the GETSystemAnalyser::AnalyseCorEvent function (automatically called by the GETSystem::ReadEvent function).
For a given channel j, the average of the channel signal is computed in the interval $[n_1, n_2]$:

$$\bar{s}_j = \frac{1}{n_2 - n_1 + 1} \sum_{i=n_1}^{n_2} s_j(i)$$

Then the corresponding RMS:

$$\sigma_j = \sqrt{\frac{1}{n_2 - n_1 + 1} \sum_{i=n_1}^{n_2} \left( s_j(i) - \bar{s}_j \right)^2}$$
This RMS $\sigma_j$ is the estimated noise for the channel.
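Written out as a standalone illustration (independent of the library code), the estimate for one channel reads:

```cpp
#include <cmath>
#include <vector>

// Noise estimate for one channel: mean, then RMS, of the samples
// s[i] over the noise interval [n1, n2] (time buckets, inclusive).
double ChannelNoise(const std::vector<double> &s, int n1, int n2)
{
  const int n = n2 - n1 + 1;

  double mean = 0.0;
  for (int i = n1; i <= n2; ++i) mean += s[i];
  mean /= n;

  double var = 0.0;
  for (int i = n1; i <= n2; ++i) var += (s[i] - mean) * (s[i] - mean);

  return std::sqrt(var / n);   // the channel's estimated noise RMS
}
```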
If the noise analysis is defined, when an event is analysed, the RMS computed for each channel is stored in a histogram (GETSystemAnalyser::GetChannelsNoiseHisto).
This analysis does not work when the zero suppress mode is active.
This picture shows the noise histogram for one event, indicating an abnormal noise level for channels around number 2000, compared to the other system channels.
The noise analysis functions are overloaded in the GETActarDemAnalyser class to also define a noise histogram for the ACTAR TPC (demonstrator) pad layout (following the GETSystem lookup table). This picture shows the same event as the previous one, for pads instead of electronics channels.
It is possible to cumulate the results of all events in a 2D histogram using the GETSystemAnalyser::FillCumulNoiseHisto function.
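A typical processing loop could then look like the sketch below; the header names and the argument lists of ReadEvent and FillCumulNoiseHisto are assumptions.

```cpp
#include "GETSystem.hh"           // header names are assumptions
#include "GETSystemAnalyser.hh"

// Sketch of a processing loop: reading an event triggers the
// corrected-data analysis (AnalyseCorEvent), after which the noise
// results can be accumulated. Argument lists are assumptions.
void NoiseLoop(GETSystem &get, GETSystemAnalyser &analyser, int nEvents)
{
  for (int ev = 0; ev < nEvents; ++ev)
  {
    get.ReadEvent();                 // noise analysis runs here
    analyser.FillCumulNoiseHisto();  // accumulate into the 2D histogram
  }
  analyser.GetChannelsNoiseHisto()->Draw();  // noise of the last event
}
```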
Baseline fluctuations analysis

While the noise analysis is defined for a single event, the baseline fluctuation analysis compares the current event to the previous ones, to check that the system is running in stable conditions. It appears to be a good monitor for the AsAd-AGET calibration and clock alignment.
The analysis is created by the GETSystemAnalyser::InitBaseFluctuations function. As for the noise analysis, the time bucket interval must be chosen so that no real signal should appear in the analysis range. Note that this interval may differ from the one used for the noise analysis. In addition, a warning threshold is defined: if the baseline fluctuation exceeds this threshold for one or more channels, a warning is issued on the terminal output.
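A possible call could look as follows; the argument list (interval bounds in time buckets and warning threshold in ADC units) is an assumption about the actual prototype.

```cpp
// Sketch (the argument list is an assumption): analysis interval of
// time buckets [10, 60] and a warning threshold of 5 ADC units.
analyser.InitBaseFluctuations(10, 60, 5.0);
```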
Since the analysis compares the current event to the previous ones, it requires several events before becoming effective. The analysis works as follows:
$$\delta_j(k) = \bar{s}_j(k) - \frac{1}{k-1} \sum_{k'=1}^{k-1} \bar{s}_j(k')$$

where $\bar{s}_j(k)$ is computed, for event k, in the same way as for the noise analysis: the fluctuation of channel j is the difference between its baseline average in the current event and the mean of the baseline averages over the previous events.
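Assuming this reconstruction, the per-channel bookkeeping can be sketched independently of the library code (the names below are illustrative):

```cpp
// Standalone sketch of the fluctuation bookkeeping for one channel,
// following the reconstruction above: compare the current event's
// baseline mean to the average of the previous events' means.
struct BaselineMonitor
{
  double sum   = 0.0;  // sum of baseline means over past events
  int    count = 0;    // number of past events

  // Returns the fluctuation for the current event (0 for the first
  // event, when no reference average is available yet).
  double Fluctuation(double currentMean)
  {
    const double delta = (count > 0) ? currentMean - sum / count : 0.0;
    sum += currentMean;
    ++count;
    return delta;
  }
};
```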
This baseline fluctuation analysis is performed on the raw data, in the GETSystemAnalyser::AnalyseRawEvent function.
If the analysis is defined, when an event is analysed, the baseline fluctuation of each channel is stored in a histogram (GETSystemAnalyser::GetBaseFluctuationsHisto).
This analysis does not work when the zero suppress mode is active.
This picture shows the baseline fluctuation histogram for one event, indicating a problem for channels around 1000-1100.
It is possible to cumulate the results of all events in a 2D histogram using the GETSystemAnalyser::FillCumulFluctuationsHisto function.
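A monitoring loop analogous to the noise one could then be written as follows; as before, the header names and argument lists are assumptions, and we assume that reading an event also triggers the raw-data analysis.

```cpp
#include "GETSystem.hh"           // header names are assumptions
#include "GETSystemAnalyser.hh"

// Sketch of a stability-monitoring loop (argument lists are
// assumptions; we also assume that reading an event triggers the
// raw-data analysis, which prints the threshold warnings).
void FluctuationLoop(GETSystem &get, GETSystemAnalyser &analyser, int nEvents)
{
  for (int ev = 0; ev < nEvents; ++ev)
  {
    get.ReadEvent();                        // raw-data analysis runs here
    analyser.FillCumulFluctuationsHisto();  // event-by-event history
  }
  analyser.GetBaseFluctuationsHisto()->Draw();  // last event's fluctuations
}
```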
Resolution analysis

The resolution analysis provides a tool to estimate the channel resolution for run files with a constant signal on each channel, such as pulser calibration measurements.
The analysis stores, on an event-by-event basis, the maximum amplitude of each channel in a 2D histogram (channel number versus amplitude). Then the RMS is computed for each channel and stored in the resolution histogram, which is recomputed for each event.
Note: it has been designed to optimize the GET clock offsets.
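The procedure can be reproduced with plain ROOT histograms, independently of the analyser's own containers (the histogram names and binning below are illustrative):

```cpp
#include "TH1D.h"
#include "TH2D.h"

// Sketch of the described procedure: amp2d accumulates, event by
// event, the maximum amplitude of each channel (x = channel number,
// y = amplitude). The RMS of each channel's amplitude distribution
// is taken as its resolution estimate.
TH1D *ComputeResolution(TH2D *amp2d)
{
  const int nch = amp2d->GetNbinsX();
  TH1D *res = new TH1D("resolution",
                       "Channel resolution;channel;amplitude RMS",
                       nch, 0.0, nch);
  for (int ch = 1; ch <= nch; ++ch)
  {
    TH1D *proj = amp2d->ProjectionY("_py", ch, ch);  // single channel
    res->SetBinContent(ch, proj->GetRMS());
    delete proj;
  }
  return res;
}
```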
Data number check

This analysis checks, for each channel, the number of data samples (from the circular buffer memory) that have been read from the data frame. In principle, for a sample depth of 512 time buckets, this number should be exactly 512, or 0 if the partial readout mode is on. This is true only if the zero suppress mode is off.
The analysis is defined by the GETSystemAnalyser::InitDataNumberCheck function, which sets the interval in which the number of read data samples is considered bad. For a sample depth of 512, this interval should be, for example, [0, 511] in full readout mode (every channel must contain exactly 512 samples) and [1, 511] in partial readout mode (where empty channels are also valid).
When analysing an event, a warning is issued if the data count of one or more channels falls inside this interval.
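For illustration (the prototype below, taking the bounds of the bad interval, is an assumption):

```cpp
// Sketch (assumed prototype: lower and upper bounds of the interval
// of sample counts flagged as bad, for a 512-bucket sample depth).
analyser.InitDataNumberCheck(0, 511);  // full readout: only 512 is valid
// In partial readout mode, empty channels are valid too:
// analyser.InitDataNumberCheck(1, 511);
```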
Data continuity check

A possible consequence of problems in the GET system hardware configuration process is corrupted data. This may happen in several ways, as in the example below, or with absolutely inconsistent channel data.
The data continuity check proposes a test based on a maximum accepted variation dmax between the values of contiguous time buckets. This limit is set by the GETSystemAnalyser::InitDataContinuityCheck function.
In order to accept the fast signal variations that may occur for short peaking times, the test flags only an abnormal increase of the signal immediately followed by an abnormal decrease (or the inverse).
This picture illustrates the type of problem that can be detected by the continuity analysis.
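Under this reading of the test, a standalone sketch of the check for one channel could be:

```cpp
#include <cstddef>
#include <vector>

// Standalone sketch of the continuity test for one channel: flag a
// time bucket whose value jumps up by more than dmax with respect to
// both neighbours (or down, for the inverse case). Genuine fast
// signals rise and fall over several buckets and are not flagged.
bool HasContinuityError(const std::vector<double> &s, double dmax)
{
  for (std::size_t i = 1; i + 1 < s.size(); ++i)
  {
    const double up   = s[i] - s[i - 1];
    const double down = s[i] - s[i + 1];
    if ((up > dmax && down > dmax) || (up < -dmax && down < -dmax))
      return true;   // abnormal increase then decrease (or inverse)
  }
  return false;
}
```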