GET system analyser predefined analysis

This page presents the procedures defined for the various analyses available in the GET system analyser class. The base class for these analyses is GETSystemAnalyser.

These analyses should be defined after the GET system has been created with its number of modules and channels.

Noise monitoring analysis

The noise analysis evaluates, on an event-by-event basis, the signal RMS of each channel.

The analysis is created when defining the sample interval $[i_0;i_1]$ over which the noise is estimated (function GETSystemAnalyser::SetNoiseRange). The interval must be selected so that a real signal is very unlikely to appear in this range. In principle this is achieved at the beginning of the sample, provided the trigger delay of the GET electronics is set properly.
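
As an illustration, the setup may look like the sketch below (the exact signature of SetNoiseRange should be checked in the class documentation; the bucket bounds 10 and 60 are arbitrary example values):

    // Sketch: enable the per-event noise analysis (assumed arguments:
    // the two bounds i0 and i1 of the noise estimation interval).
    GETSystemAnalyser analyser;
    // ... attach the analyser to the GETSystem object as usual ...
    analyser.SetNoiseRange(10, 60);   // example interval [10;60]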

The analysis is performed by the analyser on the corrected outputs (FPN, baseline...), in the GETSystemAnalyser::AnalyseCorEvent function (automatically called by the GETSystem::ReadEvent function).

For a given channel j, the average of the channel signal $S_{j}$ is computed in the $[i_0;i_1]$ interval:

\[\left< S_{j} \right> = \frac{1}{i_1 - i_0 + 1} \cdot \sum_{i=i_0}^{i_1} S_j[i]\]

Then the corresponding RMS:

\[\Delta S_{j} = \sqrt{ \frac{1}{i_1 - i_0 + 1} \cdot \sum_{i=i_0}^{i_1} \left(S_j[i] - \left< S_{j} \right>\right)^2}\]

This RMS is the estimated noise for the channel.
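
In plain C++, the estimate for a single channel reduces to the following self-contained illustration (not the library code itself):

    #include <cmath>
    #include <vector>

    // Mean and RMS of a channel signal over the interval [i0;i1]:
    // the returned RMS is the per-event noise estimate for the channel.
    double ChannelNoise(const std::vector<double>& s, int i0, int i1)
    {
      const int n = i1 - i0 + 1;
      double mean = 0.0;
      for (int i = i0; i <= i1; ++i) mean += s[i];
      mean /= n;

      double var = 0.0;
      for (int i = i0; i <= i1; ++i) var += (s[i] - mean) * (s[i] - mean);
      return std::sqrt(var / n);
    }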

If the noise analysis is defined, when an event is analysed, the RMS computed for each channel is stored in a histogram (GETSystemAnalyser::GetChannelsNoiseHisto).
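
A monitoring loop might then look like this (hypothetical variable names; GetChannelsNoiseHisto is assumed to return a ROOT histogram pointer):

    // Sketch of a monitoring loop: read each event and inspect the
    // noise histogram refilled by the analyser.
    for (int iev = 0; iev < nEvents; ++iev) {
      get.ReadEvent();                                // calls AnalyseCorEvent internally
      TH1* hNoise = analyser.GetChannelsNoiseHisto(); // per-event noise vs channel
      // ... draw or scan hNoise here ...
    }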

This analysis does not work when the zero suppression mode is active.

AnalyserNoise_Event.png

This picture shows the noise histogram for one event, indicating an abnormal noise level for channels around number 2000, compared to the other system channels.

AnalyserNoise_Pads.png

The noise analysis functions are overloaded in the GETActarDemAnalyser class to also define a noise histogram for the ACTAR TPC (demonstrator) pad layout (following the GETSystem lookup table). This picture shows the same event as the previous one, for pads instead of electronics channels.

AnalyserNoise_Cumul.png

It is possible to cumulate the results for all events in a 2D histogram using the GETSystemAnalyser::FillCumulNoiseHisto function.
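
For instance, calling it once per event in a loop such as the one above should build the cumulated distribution (assumed usage):

    // After each event has been read and analysed:
    analyser.FillCumulNoiseHisto();   // adds the current event to the 2D histogram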

Baseline fluctuations monitoring analysis

While the noise analysis is defined for a single event, the baseline fluctuation analysis compares the current event to the previous ones, to check that the system is in stable conditions. It appears to be a good monitor for the AsAd-AGET calibration and clock alignment.

The analysis is created by the GETSystemAnalyser::InitBaseFluctuations function. As for the noise analysis, the time bucket interval $[i_0;i_1]$ must be chosen so that no real signal should appear in the analysis range. Note that this interval may differ from the one used for the noise analysis. In addition, a warning threshold is defined: if the baseline fluctuation exceeds this threshold for one or more channels, a warning is issued on the terminal output.
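
A possible initialisation is sketched below (assumed arguments: the interval bounds and the warning threshold; the values are arbitrary examples):

    // Sketch: enable the baseline fluctuation analysis on buckets [10;60]
    // with a warning threshold of 2.0 ADC units (illustrative values).
    analyser.InitBaseFluctuations(10, 60, 2.0);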

Since the analysis compares the current event to the previous ones, it requires several events before becoming effective. The analysis works as follows (a sketch of the bookkeeping follows the list):

  • for the first few events, only the average (over events) baseline is built;
  • for event number n, the average baseline (from previous events) of channel j is:

    \[\left< B_{j}^{(n)} \right> = \frac{1}{n - 1} \cdot \sum_{k=1}^{n-1} \left< S_j^{(k)} \right>_{[i_0;i_1]}\]

    where $\left< S_{j}^{(k)} \right>$ is computed for event k in the same way as for the noise analysis;
  • for a given channel, the signal is taken into account only if the channel contains data;
  • the baseline variation for channel j of event n is:

    \[\Delta B_{j}^{(n)} = \left< B_{j}^{(n)} \right> - \left< S_{j}^{(n)} \right>\]
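
The bookkeeping above can be sketched for one channel as follows (a self-contained illustration, not the library code):

    // Running average of the per-event baselines of one channel, and the
    // fluctuation of the current event with respect to it.
    struct BaselineMonitor {
      double avg = 0.0;  // <B_j>: mean baseline over the previous events
      long count = 0;    // number of events already included in 'avg'

      // 'mean' is <S_j^(n)>, the current event baseline over [i0;i1].
      // Returns Delta B_j^(n), then updates the running average.
      double Fluctuation(double mean)
      {
        const double dB = (count > 0) ? (avg - mean) : 0.0;
        avg = (avg * count + mean) / (count + 1);
        ++count;
        return dB;
      }
    };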

This baseline fluctuation analysis is performed on the raw data, in the GETSystemAnalyser::AnalyseRawEvent function.

If the analysis is defined, when an event is analysed, the baseline fluctuation of each channel is stored in a histogram (GETSystemAnalyser::GetBaseFluctuationsHisto).

This analysis does not work when the zero suppression mode is active.

AnalyserBaseline_Event.png

This picture shows the baseline fluctuation histogram for one event, indicating a problem for channels around 1000-1100.

AnalyserBaseline_Cumul.png

It is possible to cumulate the results for all events in a 2D histogram using the GETSystemAnalyser::FillCumulFluctuationsHisto function.

Maximum amplitude resolution analysis

The resolution analysis provides a tool to estimate the channel resolution for run files with a constant signal on each channel, such as pulser calibration measurements.

The analysis stores, on an event-by-event basis, the maximum amplitude of each channel in a 2D histogram (channel number versus amplitude). The RMS is then computed for each channel and stored in the resolution histogram, which is recomputed for each event.

Note: this analysis was designed to optimize the GET clock offsets.
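
The underlying idea can be sketched with ROOT histograms as follows (illustrative code, not the library implementation; nCh and the amplitude range are example values):

    #include "TH1D.h"
    #include "TH2D.h"

    const int nCh = 4096;   // example channel count
    // 2D histogram filled event by event: h2.Fill(j, amp) for each
    // channel j with maximum amplitude 'amp' in the current event.
    TH2D h2("h2", "max amplitude;channel;amplitude", nCh, 0., nCh, 4096, 0., 4096.);

    // Resolution histogram: RMS of each channel's amplitude distribution.
    TH1D hRes("hRes", "resolution;channel;amplitude RMS", nCh, 0., nCh);
    for (int j = 1; j <= nCh; ++j) {
      TH1D* proj = h2.ProjectionY("_py", j, j);  // amplitude spectrum of channel j
      hRes.SetBinContent(j, proj->GetRMS());
      delete proj;
    }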

Channels data count monitoring analysis

This analysis checks, for each channel, the number of data samples (from the circular buffer memory) that have been read from the data frame. In principle, for a sample depth of 512 time buckets, this number should be exactly 512, or 0 if the partial readout mode is on. This only holds when the zero suppression mode is off.

The analysis is defined by the GETSystemAnalyser::InitDataNumberCheck function, which sets the interval $[n_0;n_1]$ in which the number of read data samples is considered bad. This interval should be, for example, $[0;511]$ in full readout mode and $[1;511]$ in partial readout mode.

When analysing an event, a warning is issued if the data count of one or more channels falls in this interval.
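
A sketch of the corresponding setup and of the test itself (assumed arguments for InitDataNumberCheck; full readout example):

    // Full readout: any count in [0;511] is bad, only exactly 512 is normal.
    analyser.InitDataNumberCheck(0, 511);   // assumed arguments (n0, n1)

    // Illustration of the test applied to each channel (not the library code):
    bool BadDataCount(int nData, int n0, int n1)
    {
      return nData >= n0 && nData <= n1;   // inside [n0;n1] -> warning
    }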

Data continuity (or accidents) monitoring analysis

A possible result of problems in the GET system hardware configuration process is corrupted data. This may happen in several ways, such as the example shown below, or with absolutely inconsistent channel data.

The data continuity analysis proposes a test based on a maximum accepted variation $d_{max}$ between the values of contiguous time buckets. This limit is set by the GETSystemAnalyser::InitDataContinuityCheck function.

In order to accept fast signal variations that may occur (for short peaking times), the test only flags an abnormal increase of the signal immediately followed by an abnormal decrease (or the inverse).
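
A possible form of this test is sketched below for one channel (illustrative code, not the library implementation):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Flag an "accident": the signal jumps by more than dmax and jumps
    // back in the opposite direction at the next bucket, which a genuine
    // fast signal (short peaking time) would not do.
    bool HasAccident(const std::vector<double>& s, double dmax)
    {
      for (std::size_t i = 1; i + 1 < s.size(); ++i) {
        const double d1 = s[i] - s[i - 1];
        const double d2 = s[i + 1] - s[i];
        if (std::fabs(d1) > dmax && std::fabs(d2) > dmax && d1 * d2 < 0.0)
          return true;   // up-then-down or down-then-up spike
      }
      return false;
    }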

AnalyserContinuity_Event.png

This picture illustrates the type of problem that can be detected by the continuity analysis.