(Napoli Group)
Recording interesting events in ICARUS requires a suitable treatment of the signals from the front-end electronics, so that the data to be stored on disk can be efficiently selected, compressed and filtered. The large amount of data corresponding to an event (even when it is well localized) would otherwise produce severe bandwidth and storage problems. This consideration motivated the need for data compression and zero-skipping. For special topologies of specific interest, the rate is such as to justify an additional trigger selection. A global trigger system is therefore needed in order to suitably identify and handle the signals from interesting events.
Let us take as an example a Supernova explosion, which would produce of the order of 100 neutrino interactions within one second in a 300 ton detector module. Such an event rate could not be handled if the whole detector image (nearly 13 GB in total) were stored for each event. However, given the low energy of the events, one could select the volume around each event and read out only the few corresponding channels, leaving the unaffected channels 'free' to record further events, thus reducing the global dead time and increasing the effective buffer size.
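As a back-of-the-envelope illustration of this point, the following sketch compares the two readout strategies. Only the ~100 interactions/s rate and the ~13 GB full-detector image come from the text above; the fraction of channels read out for a low-energy local event is an assumed placeholder.

```python
# Rough comparison of full vs. local readout during a supernova burst.
# Only FULL_IMAGE_GB and BURST_RATE_HZ come from the text; LOCAL_FRACTION
# (channels read out around a low-energy event) is an assumed placeholder.

FULL_IMAGE_GB = 13.0        # whole detector image per event (from the text)
BURST_RATE_HZ = 100         # supernova neutrino interactions per second (from the text)
LOCAL_FRACTION = 1e-3       # assumed fraction of channels in a local readout

full_gb_per_s = BURST_RATE_HZ * FULL_IMAGE_GB
local_gb_per_s = BURST_RATE_HZ * FULL_IMAGE_GB * LOCAL_FRACTION

print(f"full readout : {full_gb_per_s:8.1f} GB/s")    # ~1300 GB/s, unmanageable
print(f"local readout: {local_gb_per_s:8.2f} GB/s")   # ~1.3 GB/s with the assumed fraction
```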
We propose to implement a 'segmented' trigger architecture, designed to process, in a 'trigger box', the signals from different local units carrying information about 'local' events, and to issue a variety of trigger proposals according to the nature of the recorded event: solar neutrino interaction, supernova explosion, atmospheric neutrino interaction, muon, calibration, beam, etc.
The global trigger system collects signals from different sub-systems: TPC wires, PMTs, external tracking detectors, and the timing and control system of the CNGS beam (spill timing). Local Trigger Control Units (LTCUs) are designed according to the needs of the different sub-systems; the granularity of the LTCUs belonging to each sub-system is still to be defined. The signals processed by the local units are sent to a central unit, called TCU, one per chamber, where the actual trigger logic is implemented and the trigger proposals are defined. Finally, the proposals are validated by a Trigger Supervisor, which exchanges information with the DAQ and also deals with trigger signal distribution, control, analysis and statistical monitoring of the system. The entire system will be built with programmable logic units based on state-of-the-art FPGAs.
Fig. 1 shows a schematic block diagram of the trigger system.
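As a complement to the block diagram, the minimal software sketch below traces the data flow of Fig. 1; the real units are FPGA firmware, and the function names and trivial decision logic are illustrative placeholders only.

```python
# Minimal sketch of the trigger data flow of Fig. 1 (LTCUs -> TCU -> Trigger
# Supervisor). All names, thresholds and decision rules are placeholders.

def local_trigger_units(subsystem_signals, threshold=0):
    """Each LTCU turns its sub-system signals into local trigger proposals."""
    return {name: [s > threshold for s in sigs]
            for name, sigs in subsystem_signals.items()}

def trigger_control_unit(proposals):
    """One TCU per chamber combines the local proposals into a trigger request."""
    return any(any(bits) for bits in proposals.values())

def trigger_supervisor(chamber_requests, daq_ready=True):
    """Validate the chamber requests against the DAQ state and distribute the trigger."""
    return daq_ready and any(chamber_requests)

# Example: one chamber with wire, PMT and external inputs (arbitrary numbers).
signals = {"wires": [0, 3, 0], "pmt": [1], "external": [0]}
request = trigger_control_unit(local_trigger_units(signals))
print("trigger issued:", trigger_supervisor([request]))
```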
In order to further detail the architecture, we have made the following assumptions:
· we define a pixel as the elementary LAr volume contributing to the picture of the detector as seen by the trigger;
· according to the type, number and pattern of fired pixels, an event is classified as global or local (see the sketch after the next paragraph);
· in case of global events, the full detector (all channels) is read out;
· in case of local events, the detector is only partially read out.
Taking into account the average size of neutrino events of astrophysical origin in the detector, and considering the T600 front-end electronics, it seems reasonable to us to define pixel dimensions matching the readout granularity.
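The global/local classification assumed above can be sketched as follows; the pixel-count cut separating the two classes is a hypothetical value, not part of the documented design.

```python
# Minimal sketch of the pixel-based event classification listed above.
# The pixel-count cut separating 'global' from 'local' is an assumption.

def classify_event(fired_pixels, global_pixel_cut=50):
    """Return ('global', None) or ('local', pixels) for a set of fired pixels.

    fired_pixels     -- iterable of pixel identifiers above threshold
    global_pixel_cut -- assumed cut on the number of fired pixels
    """
    pixels = set(fired_pixels)
    if len(pixels) >= global_pixel_cut:
        return "global", None        # full detector (all channels) read out
    return "local", pixels           # only the region around the event read out

print(classify_event([12, 13, 14]))  # ('local', {12, 13, 14})
```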
Each
analogue crate can host one LTCU board.
For simplicity, and as a basis for further development, we have started with
the treatment of the signals coming from the wires.
Each LTCU receives as input analogue sums of the signals from 32 wires (AWS): 9 Induction and 9 Collection wire sums coming from the V791 boards are discriminated against remotely controlled thresholds. The LTCU gives as output two trigger proposals (T0 and T1), each obtained as the fast OR of 9 inputs.
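A minimal software model of the AWS-LTCU logic just described is sketched below; the threshold values and the assignment of T0/T1 to the Induction and Collection planes are assumptions.

```python
# Software model of the AWS-LTCU: 9 Induction and 9 Collection analogue wire
# sums are discriminated against remotely settable thresholds, and each
# trigger proposal is the fast OR of its 9 discriminator outputs.
# The assignment of T0/T1 to the two wire planes is an assumption.

def discriminate(wire_sums, threshold):
    """Return one boolean per analogue wire sum exceeding the threshold."""
    return [s > threshold for s in wire_sums]

def aws_ltcu(induction_sums, collection_sums, thr_ind, thr_col):
    """Compute the two trigger proposals from the 9+9 analogue wire sums."""
    t0 = any(discriminate(induction_sums, thr_ind))    # fast OR of 9 Induction sums
    t1 = any(discriminate(collection_sums, thr_col))   # fast OR of 9 Collection sums
    return t0, t1
```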
The T0 and T1 signals are sent to the Trigger Control Unit, which processes the data online in order to select interesting events. Each TCU module receives as input N_AWS signals from the AWS-LTCUs, N_PMT signals from the PMT-LTCUs and N_EXT signals from external sources (spectrometer, beam, ...). It checks majority and 2D/3D pattern logic conditions and labels events according to their topology.
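A possible sketch of the TCU decision step is given below; the majority cut, the labelling rules and the crude stand-in for the 2D/3D pattern logic are purely illustrative.

```python
# Sketch of the per-chamber TCU decision: collect the trigger proposals from
# the AWS-, PMT- and external LTCUs, apply a majority condition and attach a
# topology label. The cuts and labels are illustrative; the real 2D/3D
# pattern logic would look at how the fired units cluster in the chamber.

def tcu_decision(aws_bits, pmt_bits, ext_bits, majority=2):
    """Return (triggered, label) for one chamber from boolean proposal lists."""
    n_fired = sum(aws_bits) + sum(pmt_bits) + sum(ext_bits)
    if n_fired < majority:
        return False, None
    if any(ext_bits):
        label = "beam"          # external/spill input present
    elif sum(aws_bits) < 3:
        label = "local"         # few wire units fired
    else:
        label = "global"        # large pattern in the chamber
    return True, label
```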
The Trigger Supervisor is a VME module receiving inputs from each chamber and from the Absolute Clock. It validates trigger requests, performs trigger distribution and monitoring, and also carries out statistics and dead-time calculations.
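As an illustration of the statistics and dead-time bookkeeping mentioned above, a toy model could look as follows; the busy time per accepted trigger is an assumed placeholder.

```python
# Toy bookkeeping of the kind the Trigger Supervisor is meant to perform.
# The busy (readout) time per accepted trigger is an assumed placeholder.

class TriggerSupervisorStats:
    def __init__(self, busy_time_s=0.01):
        self.busy_time_s = busy_time_s   # assumed readout time per accepted trigger
        self.requests = 0
        self.accepted = 0
        self.busy_until = 0.0

    def handle_request(self, t):
        """Validate a trigger request arriving at absolute time t (seconds)."""
        self.requests += 1
        if t < self.busy_until:          # system busy: the request is lost
            return False
        self.accepted += 1
        self.busy_until = t + self.busy_time_s
        return True

    def dead_time_fraction(self):
        """Fraction of trigger requests lost because the system was busy."""
        return 1.0 - self.accepted / self.requests if self.requests else 0.0
```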
We have prototyped the AWS-LTCU and we intend to test it on the detector in order to evaluate the noise conditions and the fake trigger rate due to electronic noise. This board has been documented in order to provide useful guidance for the realization of the other sub-system units.
As far as the physical background is concerned, a Monte Carlo simulation of the trigger conditions should be performed in order to evaluate the expected fake trigger rate together with the trigger efficiency. According to preliminary calculations, we should be dominated by the neutron background, amounting to 0.1 Hz in the worst case.
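To complement such a simulation, the contribution of purely electronic noise to the fake trigger rate could be estimated with a toy Monte Carlo of the kind sketched below; the noise model, the threshold, the sampling rate and the number of discriminated sums are all assumed placeholders, not measured values.

```python
# Toy Monte Carlo estimate of the fake-trigger rate from Gaussian electronic
# noise on the discriminated analogue wire sums. All numbers (threshold in
# noise sigmas, sampling rate, number of sums per LTCU, sample size) are
# placeholders; the real estimate must come from the detector simulation
# and from measurements on the prototype board.

import random

def fake_rate_hz(threshold_sigma=4.0, sample_rate_hz=2.5e6,
                 n_sums=18, n_samples=200_000, seed=1):
    random.seed(seed)
    fakes = 0
    for _ in range(n_samples):
        # count a fake trigger if any sum fluctuates above threshold in this sample
        if any(random.gauss(0.0, 1.0) > threshold_sigma for _ in range(n_sums)):
            fakes += 1
    observed_time_s = n_samples / sample_rate_hz
    return fakes / observed_time_s

print(f"estimated fake rate: {fake_rate_hz():.1f} Hz (strongly threshold-dependent)")
```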
We believe that the
realization of the proposed architecture for the T600 detector will provide us
with the capability of performing event pre-classification, data streaming and
easier extraction of the solar neutrino data from the low energy event sample.
We also consider this system a useful test bench for the definition of a T1200
low energy trigger.
Fig. 1 Trigger System architecture