
Chapter 4
The OPAL Data Environment

The collection of data from the OPAL detector relies on many tools to select, process, and store an enormous quantity of physics data. Data selection and reconstruction are the primary concerns of the physics data environment, in contrast to the off-line environment, which deals with the generation and simulation of Monte Carlo events. Although the data processed in the two cases are quite different, the further along the processing ``tree'' one continues, the less obvious this distinction becomes, since the tools used become nearly identical. This chapter gives an overview of the most important aspects of the physics data environment.

4.1  The Data Acquisition Flow

The digitized pulses coming from the different subdetectors of the OPAL detector are combined and converted into recognizable patterns by the data acquisition system [42]. For each synchronized event, data are collected from up to 15 subdetectors, the trigger, the track trigger, and the central slow-controls console. The processing of the data is based on a hierarchical structure, with buffering and processing at each level. The processing is controlled by the on-line VAX cluster, composed of 20 μVAXs and VAXstations operating from a VAX 8700. A software filter is implemented on a Hewlett-Packard Apollo DN10000. The complete data collection and processing are performed in a series of stages. The subdetector signals, trigger, event builder, filter, event reconstruction, event storage, and distribution for analysis are shown schematically in Figure 4.1 [42]. A quick overview of this sequence will give the reader the necessary understanding of the data environment before the details of the processing are discussed.


Graphic: images/online.gif

Figure 4.1: The Data Acquisition System.
The complete data collection and processing are performed in a series of stages: the subdetector signals, the trigger, the event builder, the filter, the event reconstruction, the event storage, and the distribution for analysis.


Subdetector Signals

Each of the 15 subdetectors handles its own signal collection and data reduction at the level of pedestal subtraction, zero-suppression, pulse shape analysis, and track finding. The front-end readout and processing are controlled by a two-processor VME system: a FIC 8230 dedicated to the management of the local operating system (OS9) and an MVME167 dedicated to the actual readout of the signals from the subdetector. The ensemble is called the local system crate (LSC) and contains at least 5 Mbyte of memory, an Ethernet interface, the interface to the front-end digitizers, the local trigger unit (LTU), and a VME crate interconnect. Ten of the subdetectors have two interconnected LSCs on opposite sides of the OPAL detector, with one LSC functioning as the slave and the other serving as the master as well as the common acquisition system for event building. The subdetectors have widely differing detection hardware and digitizer electronics, as shown in Table 4.1 [42].
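Pedestal subtraction and zero-suppression amount to a simple per-channel transformation of the digitized amplitudes. The following C fragment is a minimal sketch of such a reduction step; the names, types, and threshold are hypothetical, and the real front-end code running under OS9 on the LSC processors is not reproduced here.

    #include <stddef.h>

    /* Hypothetical reduced-hit record: a channel number and its
       pedestal-subtracted amplitude. */
    struct hit { int channel; int amplitude; };

    /* Subtract the per-channel pedestal and keep only channels whose
       amplitude exceeds a zero-suppression threshold.  Returns the
       number of hits kept. */
    size_t reduce(const int *adc, const int *pedestal, size_t nchan,
                  int threshold, struct hit *out)
    {
        size_t nkept = 0;
        for (size_t ch = 0; ch < nchan; ch++) {
            int amp = adc[ch] - pedestal[ch];   /* pedestal subtraction */
            if (amp > threshold)                /* zero-suppression */
                out[nkept++] = (struct hit){ (int)ch, amp };
        }
        return nkept;
    }

Only the channels that survive the threshold are shipped onwards, which is what keeps the large channel counts of Table 4.1 manageable.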


DET. CHAN. SIZE (kB) DIGITIZER CRATE D.T.
Low High
SI 16000 2 6 Ded. TDC CAMAC 8
CV 648 2 4 Ded. TDC EURO 17.5
CJ 7680 20 120 100 MHz FADC EURO-ECL/VME 18
CZ 2304 2 6 100 MHz FADC EURO-ECL/VME 8
TB 320 2 2 TDC/ADC/PU CAMAC 10
PB 21504 5 5 Mplxd ADC EURO/VME 13
EB 9440 3 5 CIA FASTBUS 20
HT 1472 2 2 CIA CAMAC 15
HS 56146 1 2 Bit pattern VME 5
MB 1320 6 6 TPD FASTBUS 20
PE 6080 1 2 Mplxd ADC EURO/VME 8
EE 2264 7 7 CIA FASTBUS 10
HP 10576 1 2 CIA CAMAC 15
ME 38400 2 2 Mplxd ADC EURO/VME 7
FD 4400 5 5 100 MHz FADC CAMAC 6
TT 7 7 FASTBUS Var.
Table 4.1: The On-line Digitization System.
For each subdetector, the table gives the number of channels, the typical event size (low and high, in kB), the digitizer type, the crate standard, and the dead time (D.T., in ms).


Event Builder

From the LSC of each subdetector, the data are passed sequentially along to the event builder and the filter. The event builder crate has four interconnected VME crate modules corresponding to four branches into which the 15 subdetectors are grouped. The processor in the event builder crate synchronizes the LSC readout and initiates DMA transfers from the subdetectors. The data are buffered, concatenated, and formatted into a complete event in the dual-port memory of the event builder. The data are then accessible to the filter processors.
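The concatenation step can be pictured with a short sketch. The following C fragment is purely illustrative: it assumes an invented fragment header of a detector identifier and a byte count, not the actual OPAL event format.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical data fragment read out from one subdetector LSC. */
    struct fragment { uint32_t det_id; uint32_t nbytes; const uint8_t *data; };

    /* Concatenate the fragments of one triggered event into a single
       buffer, prefixing each with its (det_id, nbytes) header, much as
       an event builder formats a complete event in its dual-port
       memory.  Returns the event size in bytes, or 0 if the buffer is
       too small. */
    size_t build_event(const struct fragment *frags, size_t nfrag,
                       uint8_t *buf, size_t bufsize)
    {
        size_t pos = 0;
        for (size_t i = 0; i < nfrag; i++) {
            size_t need = 2 * sizeof(uint32_t) + frags[i].nbytes;
            if (pos + need > bufsize)
                return 0;
            memcpy(buf + pos, &frags[i].det_id, sizeof(uint32_t));
            memcpy(buf + pos + sizeof(uint32_t), &frags[i].nbytes, sizeof(uint32_t));
            memcpy(buf + pos + 2 * sizeof(uint32_t), frags[i].data, frags[i].nbytes);
            pos += need;
        }
        return pos;
    }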

Filter

The filter processor [43] is located in the filter crate and supervises the multi-event buffer. A fast analysis is performed by the filter processor to aid in the selection and classification of interesting physics events as well as the rejection of obvious background events. The filter program uses the jet chamber tracks, the time-of-flight signals, and muon chamber information in making decisions about the selection of events. The filter program manages many data-monitoring histograms and also runs in the off-line environment, so that the same acceptance is applied to Monte Carlo simulation in physics analysis. The filter algorithm is discussed in detail in Section 4.4.

Event Reconstruction

After the filter, events are transferred from the underground experimental area via an optical link to the main processor at ground level. The events are buffered on an optical disk, and a Hewlett-Packard Apollo DN10000 performs event reconstruction in real time. The program used to reconstruct events from digits, whether from the OPAL detector or from the OPAL simulation program GOPAL, is called ROPE [44]. The ROPE reconstruction is implemented in the on-line environment through a collection of Apollo workstations at ground level in the barrack at interaction point 6 (I6). Reconstruction is performed as the calibration constants for the subdetectors become available, usually within an hour of the events being taken. All ROPEd events are classified into various event types, such as Bhabha, μ-pair, τ-pair, multihadronic Z0 decay, and flagged luminosity events, and transmitted with the data summary tape (DST) blocks to magnetic tape cartridges for later physics analysis.

Data Storage and Distribution

The data are transmitted to the disks in the off-line VAX cluster for semi-permanent residence during analysis. The data are copied from this disk over the network to the main CERN site: to the SHIFT analysis farm, to the off-line VAX cluster, and to CERNVM. On SHIFT, the partitions from each data-taking run (usually corresponding to one fill of LEP) are collated, and events passing the FYZ1 selection are stripped off. The headers of the full data set are also stripped to make an ``off-line nanodst'', which is compared to the ``filter nanodst'' to check whether events were lost in the data reconstruction processing. The fully ROPEd data set is archived to cartridge in the computer center, and the VAX disk copy is deleted.

Software Control and Monitoring

One of the keys to the smooth running of the on-line data processing is the dependability and homogeneity of the software. All subdetector processors in VME run the OS9 operating system with the common programming languages Assembler [45], C [46], and RTF FORTRAN [45]. The networking protocols over the Ethernet local area network (LAN) between processors in VME crates, the VAX 8700, the workstations, and the CERN main site include OS9Net [47], DECnet, TCP/IP, and CATS [48]. The underlying format for the data from the LSCs onwards is the ZEBRA data structure [49].

The data acquisition system is controlled by the on-line VAX cluster for the event builder and by each LSC for the subdetectors. The software is based on the MODEL suite of programs available from the CERN Data Division. The main features of the control system and its interface to the subdetectors are described below.

A control system [51] ensures the safe operation of the detector and serves as a homogeneous interface between the subdetectors and the general infrastructure of the experiment. The system supervises the common environment, including safety aspects and the aspects of the experiment specific to each subdetector. The parameters of the common infrastructure, such as voltages, gas flows, and temperatures, are continuously monitored by seven VME microcomputer stations distributed over the experimental site. All values are continually compared to their nominal settings and stored to a file. When problems arise, the system notifies the operator and takes automatic corrective action if required. The control stations are always operational and interface to the LEP-wide security systems; the whole system is globally monitored by a single station in the control room of the OPAL experiment.

4.2  The Trigger

The OPAL trigger system [52] is a fast, programmable system designed to signal the occurrence of genuine e+e- interactions within the physical constraints of the OPAL detector, while minimizing the backgrounds to e+e- interactions. The main physics processes which must be triggered efficiently are the multihadronic decays of the Z0, the charged leptonic decays of the Z0, and small-angle Bhabha scattering for the luminosity measurement. The beam-related backgrounds due to beam-gas or beam-pipe interactions, synchrotron radiation, and cosmic rays must be reduced as much as possible. The physical constraints of the OPAL detector include chamber drift times of up to 8 μs, an electronic reset time of 7 μs, and readout dead times of 20 ms per event. Incorporating these criteria and constraints, the programmable OPAL trigger system reduces the 45 kHz bunch crossing rate to an event rate of 1-5 Hz, which keeps losses below 10%.
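The quoted loss figure follows directly from these numbers: with an accepted event rate R of at most 5 Hz and a readout dead time τ of 20 ms per event, the expected dead-time fraction is

    f(dead) = R × τ ≤ 5 Hz × 0.020 s = 0.10,

i.e. losses of at most 10%.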

All OPAL trigger signals originate from the separate subdetector trigger signals, which are used either independently or in coincidence with each other. The implementation of the system is in two parts: firstly, the individual subdetector trigger signals, and secondly, the θ and φ (θ-φ matrix) signals, which are synthesized by the central trigger logic processor, resulting in collective triggering. Used separately, subdetector trigger signals such as the total energy can trigger events with high thresholds, while used in spatial coincidence with other subdetector θ-φ elements, the signals can trigger with lower thresholds, allowing the system to attain a very high level of redundancy and efficiency in triggering.
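The spatial coincidences between layers lend themselves to a bit-parallel formulation: if each trigger layer reports its fired θ-φ bins as a 144-bit mask, a coincidence between two layers is just the bitwise AND of their masks. The following C sketch assumes such a mask representation; the actual θ-φ matrix is realized in dedicated hardware [52].

    #include <stdint.h>
    #include <stdbool.h>

    /* The 144 theta-phi bins (6 in theta x 24 in phi) packed into
       three 64-bit words, with the top bits of each word unused. */
    typedef struct { uint64_t w[3]; } tp_mask;

    /* True if any theta-phi bin fired in both layers, e.g. a
       track-trigger/TOF coincidence of the kind listed in Table 4.4. */
    bool tp_coincidence(tp_mask a, tp_mask b)
    {
        for (int i = 0; i < 3; i++)
            if (a.w[i] & b.w[i])
                return true;
        return false;
    }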

Subdetector Triggers

The subdetectors which provide independent triggers include the vertex chamber, the jet chamber, the time-of-flight detector, the electromagnetic calorimeter (barrel and endcap), the hadron calorimeter (barrel and endcap) and pole tip, the muon chambers (barrel and endcap), and the forward detectors. Direct trigger signals from the subdetectors are summarized in Table 4.2 [52].


NAME DET. SIGNAL DESCRIPTION RATE (Hz)
TM1 TT ≥ 1 Trk 8
TM2 TT ≥ 2 Trk 1
TM3 TT ≥ 3 Trk 0.2
TBM1 TT ≥ 1 Barrel Trk 0.4
TBM2 TT ≥ 2 Barrel Trk 0.2
TBM3 TT ≥ 3 Barrel Trk 0.1
VXH TT ≥ 8 Hits in Vertex Chamber 400
J1H TT ≥ 8 Hits in Jet Chamber Ring 1 150
J2H TT ≥ 8 Hits in Jet Chamber Ring 2 70
J3H TT ≥ 8 Hits in Jet Chamber Ring 3 50
TOFOR TOF ≥ 1 Time-of-flight Hit 20
TOFMANY TOF ≥ 7 Overlapping TOF θ-φ Bins 0.2
TOFMUL TOF TOF Bars Hit above Threshold 2
EBTOTHI EM E(EB) tot. ≥ 7 GeV 0.1
EELHI EM E(EE) left ≥ 6 GeV 0.06
EERHI EM E(EE) right ≥ 6 GeV 0.06
EBTOTLO EM E(EB) tot. ≥ 4 GeV 10
EELLO EM E(EE) left ≥ 4 GeV 0.5
EERLO EM E(EE) right ≥ 4 GeV 0.5
EBTPHI EM ≥ 1 θ-φ Bin w/ E(EB) ≥ 2.6 GeV 0.2
EELTPH EM ≥ 1 θ-φ Bin w/ E(EE) left ≥ 3 GeV 0.1
EERTPH EM ≥ 1 θ-φ Bin w/ E(EE) right ≥ 3 GeV 0.1
MBH MU ≥ 1 MB 500
MEL MU ≥ 1 ME in Left 20
MER MU ≥ 1 ME in Right 20
MELR MU ≥ 1 ME in Left AND Right 0.2
FDSUM FD ΣE(FD) left,right ≥ 15 GeV 0.3
FDSEG FD ΣE(FD) left,right seg. ≥ 13 GeV 0.35
FDHIOR FD E(FD) ≥ 35 GeV 0.3
FDSUMA FD FDSUM Accidental Trig. 0.05
FDSEGA FD FDSEG Accidental Trig. 0.005
LCALLO FD E(FD) left ≥ 15 GeV 4
RCALLO FD E(FD) right ≥ 15 GeV 4
BXRSA CTL Random Beam Crossing 0.04
Table 4.2: Subdetector Trigger Signals.
The abbreviations in column 2 denote the track trigger (TT), the time-of-flight detector (TOF), the electromagnetic calorimeter (EM), the muon detector (MU), the forward detector (FD), and the central trigger logic (CTL). The rates given in column 4 are in Hz and correspond to a typical fill (luminosity of 0.4×10^31 cm^-2 s^-1 and about 1 mA current per beam).


The q-f Matrix

The θ-φ matrix consists of 144 overlapping θ-φ bins, with 6 bins in θ and 24 bins in φ. The θ-φ matrix covers nearly the full 4π solid angle of the detector, as shown in Table 4.3 [52]. The θ-φ matrix has five layers corresponding to the track, time-of-flight, electromagnetic, hadron, and muon triggers. This is shown schematically in Figure 4.2 [52], with the output triggers listed in Table 4.4 [52]. The direct event triggering along with the trigger input signals provided to the central trigger logic are summarized in Table 4.2.


BIN cos θ RANGE
1 -0.980 to -0.596
2 -0.823 to -0.213
3 -0.596 to 0.213
4 -0.213 to 0.596
5 0.213 to 0.823
6 0.596 to 0.980
BIN φ RANGE
1 0° to 30°
2 15° to 45°
3 30° to 60°
··· ···
23 330° to 360°
24 345° to 15°
Table 4.3: Segmentation of OPAL Detector by the θ-φ Matrix.
The φ segmentation is matched only approximately by some subdetectors.
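Because the 24 φ bins above are 30° wide but start every 15°, any azimuth lies in exactly two overlapping bins. Assuming exactly the segmentation of Table 4.3, the two bins containing a given φ could be computed as in this C sketch (the function name is hypothetical):

    #include <math.h>

    /* Given an azimuth phi in degrees, 0 <= phi < 360, return the two
       overlapping 30-degree-wide phi bins (numbered 1..24 as in
       Table 4.3) that contain it. */
    void phi_bins(double phi_deg, int *bin_a, int *bin_b)
    {
        int m = (int)floor(phi_deg / 15.0) % 24;  /* 15-degree slice, 0..23 */
        *bin_a = (m == 0) ? 24 : m;               /* e.g. phi = 5 deg -> bin 24 */
        *bin_b = m + 1;                           /* ... and bin 1 */
    }

For example, φ = 20° falls in slice m = 1 and is covered by bin 1 (0° to 30°) and bin 2 (15° to 45°), in agreement with the table.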


Graphic: images/tpmatrix.gif

Figure 4.2: Trigger Generation by the θ-φ Matrix.
The track trigger (TT), the time-of-flight system (TOF), the electromagnetic calorimeter (EM), the hadron calorimeter (HA), and the muon detector (MU) send signals to the θ-φ matrix, the outputs of which, together with up to 64 additional NIM signals, are logically combined to form the final trigger decision. Crosses on the vertical lines representing different particle types passing through the detector indicate the sensitivity at the trigger level.


NAME SIGNAL DESCRIPTION RATE (Hz)
TPTTB Trk. Trig., ≥ 1 θ-φ Bin in θ2-5 (barrel) 1
TPTT1 Trk. Trig., ≥ 1 θ-φ Bin 8
TPTT2 Trk. Trig., ≥ 2 Independent θ-φ Bins 1
TPTTL Trk. Trig., ≥ 1 φ Bin in θ1 5
TPTTR Trk. Trig., ≥ 1 φ Bin in θ6 5
TPTTCL Trk. Trig., ≥ 1 Pair of Collinear Tracks 0.15
TPTTTO Trk. Trig. AND TOF, ≥ 1 Correlated θ-φ Bin 0.3
TPTTEM Trk. Trig. AND EM, ≥ 1 Correlated θ-φ Bin 0.15
TPTTHA Trk. Trig. AND Hadron, ≥ 1 Correlated θ-φ Bin -
TPTTMU Trk. Trig. AND Muon, ≥ 1 Correlated θ-φ Bin 0.03
TPTO1 TOF, ≥ 1 θ-φ Bin 20
TPTO2 TOF, ≥ 2 Independent θ-φ Bins 2
TPTOCL TOF, ≥ 1 Pair of Coplanar Hits 0.4
TPTOEM TOF AND EM, ≥ 1 Correlated θ-φ Bin 0.25
TPTOHA TOF AND Hadron, ≥ 1 Correlated θ-φ Bin -
TPTOMU TOF AND Muon, ≥ 1 Correlated θ-φ Bin 0.2
TPEMB EM, ≥ 1 θ-φ Bin in Barrel 100
TPEM1 EM, ≥ 1 θ-φ Bin 120
TPEM2 EM, ≥ 2 Independent θ-φ Bins 0.2
TPEML EM, ≥ 1 φ Bin in θ1 10
TPEMR EM, ≥ 1 φ Bin in θ6 10
TPEMCL EM, ≥ 1 Pair of Collinear Clusters 0.15
TPEMMU EM AND Muon, ≥ 1 Correlated θ-φ Bin -
TPHAB Hadron, ≥ 1 θ-φ Bin in Barrel 200
TPHA1 Hadron, ≥ 1 θ-φ Bin -
TPHA2 Hadron, ≥ 2 Independent θ-φ Bins -
TPHAL Hadron, ≥ 1 φ Bin in θ1 -
TPHAR Hadron, ≥ 1 φ Bin in θ6 -
TPHACL Hadron, ≥ 1 Pair of Collinear Clusters -
TPHAMU Hadron AND Muon, ≥ 1 Correlated θ-φ Bin -
TPMUB Muon, ≥ 1 θ-φ Bin in Barrel 500
TPMU1 Muon, ≥ 1 θ-φ Bin 500
TPMU2 Muon, ≥ 2 Independent θ-φ Bins 10
TPMUL Muon, ≥ 1 φ Bin in θ1 20
TPMUR Muon, ≥ 1 φ Bin in θ6 20
Table 4.4: The θ-φ Matrix Trigger Signals.
The rates in column 3 are given in Hz and correspond to a typical fill. The signals for which no rates are given were not used in data-taking.


The Central Trigger

The central trigger consists of a special crate with up to nine monoboard VME cards and ten specific-function VME cards. One VME card receives up to 64 stand-alone signals from the subdetectors and allows a variety of logical conditions to be imposed on the signals, both direct and from the θ-φ matrix, which results in the trigger and digitization of the event. The central trigger logic receives the direct subdetector signals (Table 4.2) and the θ-φ signals (Table 4.4) 1 to 14 μs after the bunch crossing and then makes a decision to send a trigger signal to the subdetectors to start data acquisition 7 μs before the next bunch crossing. A detailed view of the trigger process and its relation to the beam crossings is shown in Figure 4.3 [52].
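Conceptually, imposing logical conditions on the 64 direct signals and the θ-φ outputs amounts to evaluating a programmable menu of AND/OR combinations at each beam crossing. The following C sketch illustrates one such scheme with an invented two-mask menu format; the real decision logic is formed in the programmable hardware described below, not in software.

    #include <stdint.h>
    #include <stdbool.h>

    /* One entry of a hypothetical trigger menu: the entry fires if ALL
       bits in 'require' and AT LEAST ONE bit in 'any_of' are set in
       the 64-bit word of input trigger signals. */
    struct menu_entry { uint64_t require; uint64_t any_of; };

    /* Accept the event if any menu entry fires. */
    bool trigger_decision(uint64_t signals,
                          const struct menu_entry *menu, int nentries)
    {
        for (int i = 0; i < nentries; i++) {
            bool all = (signals & menu[i].require) == menu[i].require;
            bool any = (menu[i].any_of == 0) || ((signals & menu[i].any_of) != 0);
            if (all && any)
                return true;
        }
        return false;
    }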

In the actual implementation, the central trigger logic is composed of dedicated modules, among them the SIM and PAM units with their PAL logic referred to below.

The central trigger crate is well integrated into the OPAL data acquisition system via the Ethernet local area network (ELAN). This communication allows valuable operator control of relevant run and trigger parameters as well as instantaneous feedback, in the form of histograms, on detector performance and background conditions. Testing is also performed before each run, and important information concerning the assignment of trigger signals to SIM inputs, the PAL logic, and the logical conditions to be imposed on the PAM inputs is stored on mass media. In addition to beam-synchronized operation, the central trigger logic allows for triggering on cosmic rays by gating from scintillation counters above the detector. An unbiased random trigger at a pre-scaled rate of 0.04 Hz also provides samples of null events, which are particularly useful in monitoring noise levels in the subdetectors.


Graphic: images/trigger.gif

Figure 4.3: The General Event Trigger.
Discriminated analog sums of multiplicity counts and θ-φ information are provided by the subdetectors after each crossing of the particle bunches. Logical combinations of these, formed in the central trigger logic, lead to a decision to accept or reject the event. The decision is broadcast by the general trigger unit (GTU) to the local trigger units (LTUs) at the subdetectors.


4.3  The Single Photon Trigger

Single photon events are characterized by a single low-energy photon, Eγ ≈ 1.5 GeV, seen in the barrel electromagnetic calorimeter (EB), accompanied by a presampler shower cluster and perhaps a time-of-flight hit, with no other unassociated activity observed in the detector. Since other types of events have similar topology and triggering, it is instructive to consider these in the context of understanding the detector simulation, the determination of the selection efficiencies, and the study of the backgrounds. Among the events with similar topology are the tagged photon events and the single electron events discussed below.

As expected from their similar topologies, these event types have a high degree of correlation in triggering, as shown in Table 4.5 [52]. The fact that the triggers for single photon events are a subset of those for the other event types is important in studying the efficiencies for the single photon events. In general, the selection criteria for the single photon events were chosen to retain the maximum signal from the e+e- → νν̄γ process, while reducing the background rates as much as possible and allowing the flexibility of selecting other event types, such as the tagged photon and single electron events. The tagged photon events are used as a check of the forward detector simulation and the systematics. The single electron events are used to determine the trigger and event selection efficiencies and to check the background veto capability and the calorimeter response to electromagnetic showers. The results of studies with the tagged photon events and the single electron events are discussed along with those of the single photon events where applicable.


TRIGGER DESCRIPTION SINGLE TAGGED SINGLE
  Photon Photon Electron
TPTOEM TOF AND EM, ✓ ✓ ✓
  ≥ 1 Correlated θ-φ Bin
EBTPHI ≥ 1 θ-φ Bin ✓ ✓ ✓
  with E(EB) > 2.6 GeV
EELTPH ≥ 1 θ-φ Bin ✓ ✓ ✓
  with E(EE) left > 3 GeV
EERTPH ≥ 1 θ-φ Bin ✓ ✓ ✓
  with E(EE) right > 3 GeV
TPTTEM Trk. Trig. AND EM, - - ✓
  ≥ 1 Correlated θ-φ Bin
TPTTTO Trk. Trig. AND TOF, - - ✓
  ≥ 1 Correlated θ-φ Bin
Table 4.5: The Trigger for Single Photon and Related Types.
The triggers for single photon events are shown along with those for the tagged photon and single electron events. It is possible to determine the efficiencies for single photon events from the single electron events because the triggers used to accept single photon events are a subset of those used to accept single electrons.


Trigger Implementation

The trigger implementation used for the 1991 run is summarized in Table 4.6 [52]. The stable running of LEP during the run allowed the trigger conditions to be optimized easily, keeping the trigger rate below the maximum tolerable by the data acquisition system while providing good efficiency for all of the physics processes considered. Redundancy for all physics processes was sought with triggers based on one detector component and sensitive to single particles. The trigger rate is a function of the machine luminosity and the beam-related backgrounds, with a constant contribution from cosmic rays and detector noise. Because of the resulting high rates, low-threshold single particle triggers from only one subdetector were not possible [54]. The θ-φ coincidences between detector layers, however, had low rates even for low thresholds, because the coincidences suppressed random noise and low-energy backgrounds seen in only one layer. The rates of the low-threshold ECAL triggers and the muon barrel trigger were dominated by noise, while the track trigger and the TOF system were dominated by beam-related backgrounds. The trigger rates averaged over the 1991 run are shown in Table 4.6.


NAME TRIGGER CONDITION RATE (Hz)
TM3 More than 3 Tracks 0.2
TBM2 More than 2 Tracks in Barrel 0.2
TPTTCL Track Trig. Collinear θ-φ Bins 0.15
TPTTTO Track Trig. - TOF θ-φ Coinc. 0.3
TPTTEM Track Trig. - EM θ-φ Coinc. 0.15
TPTTMU Track Trig. - Muon θ-φ Coinc. 0.03
TOFMANY ≥ 7 Overlapping TOF θ-φ Bins 0.2
TPTOMU TOF - Muon θ-φ Coinc. 0.2
TPTOEM TOF - EM θ-φ Coinc. 0.25
TOFMUL.AND. TOF-Based Single Phot. Trig. 0.1
   .NOT.(TPTO2.OR.J1H)
EBTOTHI E(EB) tot. ≥ 7 GeV 0.1
EELHI.OR.EERHI E(EE) tot. ≥ 6 GeV 0.1
EBTPHI ≥ 1 θ-φ Bin w/ E(EB) ≥ 2.6 GeV 0.2
EELTPH.OR.EERTPH ≥ 1 θ-φ Bin w/ E(EE) ≥ 3.0 GeV 0.15
TPEMCL ECAL Collinear θ-φ Bins 0.15
EEL(R)LO E(EE) ≥ 4 GeV in 1 EE 0.03
   .AND.EBTOTLO AND E(EB) tot. ≥ 4 GeV
EELLO.AND.EERLO E(EE) ≥ 4 GeV in Both EE 0.04
EEL(R)LO E(EE) ≥ 4 GeV 0.08
   .AND.TPTTR(L) AND Opp. Trk.
EEL(R)LO.AND.TBM1 E(EE) ≥ 4 GeV AND ≥ 1 Barrel Trk. 0.05
TPEML(R).AND.TBM1 EE θ-φ Bin AND ≥ 1 Barrel Trk. 0.10
TPEML(R) EE θ-φ Bin 0.15
   .AND.TPTTR(L) AND Opp. Trk.
EBTOTLO.AND. E(EB) ≥ 4 GeV AND 0.2
   (TBM1.OR.TOFOR) ≥ 1 Barrel Trk. or TOF Hit
MELR Muon in Left AND Right ME 0.2
MEL(R).AND. ≥ 1 ME AND 0.2
   (TBM1.OR.TOFOR) ≥ 1 Barrel Trk. or TOF Hit
MEL(R).AND.TPTTR(L) Muon in ME AND Opp. Trk. 0.02
FDSUM E(FD) ≥ 15 GeV on Both Sides 0.35
FDSEG E(FD) ≥ 13 GeV Back-to-Back 0.3
FDSEGA FD Accidental 0.005
L(R)CALLO E(FD) ≥ 15 GeV AND 0.04
   .AND.EBTOTLO E(EB) ≥ 4 GeV
L(R)CALLO.AND.TBM1 E(FD) ≥ 15 GeV AND ≥ 1 Barrel Trk. 0.03
BXRSA Random Beam Crossing 0.04
Table 4.6: The General Trigger Conditions of 1990 and 1991.


4.4  Filter and Reconstruction

The filter processor is driven by a Hewlett-Packard Apollo DN10000 RISC-based workstation configured with four CPU boards, 64 Mbyte of memory, and four 700 Mbyte disks, which receives the ``raw'' event data from the event builder. The filter acts as a second-level software trigger that checks, analyzes, and compresses events before writing them to disk. To achieve speed and simplicity, the filter algorithm uses special, simplified definitions of tracks and clusters.

Using these simplified definitions, the filter algorithm, operating in parallel under the coordination of the MODEL buffer manager, selects and classifies events. The result of the filter processing is registered in a 32-bit filter word, which is recorded along with the LEP run parameters in the 64-word event header; this header is then used to make the decision to accept or reject the event. For events which are accepted, data compression reduces the data volume by a factor of five before the event is transmitted for complete reconstruction.
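As an illustration of this bookkeeping (the actual OPAL header layout is not reproduced here), a 64-word event header carrying a 32-bit filter word and some run parameters might be declared as follows in C; all field names are hypothetical.

    #include <stdint.h>

    /* Illustrative 64-word (32-bit words) event header carrying the
       filter result and LEP run parameters; fields are invented. */
    struct event_header {
        uint32_t run_number;
        uint32_t event_number;
        uint32_t filter_word;      /* 32 classification/selection bits */
        uint32_t beam_energy;      /* example LEP run parameter */
        uint32_t reserved[60];     /* padding to 64 words in total */
    };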

From the filter, events are passed through a reconstruction process to convert all the ``raw'' subdetector data into physical quantities such as particle energies. The process operates asynchronously from the data acquisition because of its dependence on the calibration constants from the OPAL Calibration (OPCAL) data base. OPCAL [55] is a set of programs designed to allow storage and retrieval of calibration data for the OPAL subdetectors. The reconstruction process provides quick feedback to the shift crew responsible for the quality of the collected data during the operation of LEP. The full event reconstruction requires 26 CPU seconds for the average hadronic Z0 decay and 2 CPU seconds for other events, with an average throughput of 4 events per second. The reconstruction uses the ORACLE data base to maintain a history of each data file, to store information on the number and types of events, and to store the calibration files. Before sending the input and output compressed data files for storage, the reconstruction algorithm determines the event-type classifications. For the purposes of the analysis presented in this thesis, the event types important to the selection of single photon events are determined by the filter algorithm and summarized in Table 4.7 [56]. The most general acceptance is the FYZ1 selection, which is the union of all other types of preselection and selection, including luminosity, lepton pair, single photon, converted photon, heavy lepton, two photon, Higgs, multihadron (Tokyo criteria), luminosity filter, gold-plated filter, and other special triggers.


EVENT TYPE DESCRIPTION BIT
IEGAM Σ Ecluster > 0.7 GeV, 8
  Δθ, Δφ < 0.2 rad
IEGSTR Special Single Photon Event 14
IEGEMT Σ Ecluster > 0.7 GeV, 24
  Δθ, Δφ < 0.2 rad,
  and ≥ 1 TOF Hit
IEGEM Σ Ecluster > 5.0 GeV, 27
  Σ E2 cluster < 0.2 GeV,
  Nblocks > 2, 0.15 < fmax < 0.99
IEFYZ1 Pass Any Physics 32
  Preselection or Selection
IELLQQ Lepton Pair High Multiplicity 4
IEGVMU Muon Veto 22
IEGVBW Beam Wall Veto 23
Table 4.7: Event Type Requirements.
The filter algorithm determines the event type for each event based on simple tracking criteria (Σ CD hits > 60% of wires in a sector) and simple electromagnetic clustering criteria (E main block > 100 MeV and E neighbor block > 50 MeV). The analysis requires events to be of type IEGAM, IEGSTR, IEGEMT, or IEGEM. In addition, the analysis requires events to be of type IEFYZ1, but not IELLQQ, IEGVMU, or IEGVBW.
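Expressed as mask logic over the 32-bit filter word, the selection described in this caption reads as in the following C sketch, which assumes that the bit numbers of Table 4.7 count from 1 at the least significant bit of the word.

    #include <stdint.h>
    #include <stdbool.h>

    /* Masks for the filter-word bits of Table 4.7, assuming bit 1 is
       the least significant bit of the 32-bit word. */
    #define BIT(n)  (1u << ((n) - 1))
    #define IELLQQ  BIT(4)   /* lepton pair high multiplicity  */
    #define IEGAM   BIT(8)   /* single photon candidate        */
    #define IEGSTR  BIT(14)  /* special single photon event    */
    #define IEGVMU  BIT(22)  /* muon veto                      */
    #define IEGVBW  BIT(23)  /* beam wall veto                 */
    #define IEGEMT  BIT(24)  /* photon candidate with TOF hit  */
    #define IEGEM   BIT(27)  /* higher-energy photon candidate */
    #define IEFYZ1  BIT(32)  /* any physics (pre)selection     */

    /* Single photon candidate: one of the photon types, passing the
       FYZ1 selection, and none of the veto/high-multiplicity flags. */
    bool single_photon_candidate(uint32_t filter_word)
    {
        bool photon = (filter_word & (IEGAM | IEGSTR | IEGEMT | IEGEM)) != 0;
        bool vetoed = (filter_word & (IELLQQ | IEGVMU | IEGVBW)) != 0;
        return photon && (filter_word & IEFYZ1) && !vetoed;
    }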


4.5  Data Storage and Distribution

After the reconstruction process, events are registered in various data bases for physics analysis. For the sake of bookkeeping, OPAL labels the data by experiment number, period, run range, dates, energy setting, and processing pass.

A summary of the 1990 and 1991 data taking is shown below in Table 4.8 [56].


EXP. PER. RUNS DATES ENERGY PASS L (nb^-1)
1 8 1419-1498 23/03-01/04 1990 Peak Pass 4 215.7±3.1
1 9 1535-1597 10/04-23/04 1990 Scan Pass 4 679.6±5.5
1 10 1632-1703 30/04-14/05 1990 Scan Pass 4 674.7±5.5
1 11 1737-1756 22/05-27/05 1990 Scan Pass 4 305.1±3.7
1 12 1775-1792 31/05-05/06 1990 Scan Pass 4 419.7±4.3
1 13 1801-1810 07/06-12/06 1990 Scan Pass 4 343.1±3.9
1 14 1840-1846 30/06-01/07 1990 Scan Pass 4 266.2±3.5
1 15 1852-1882 08/07-17/07 1990 Scan Pass 4 518.1±4.8
1 16 1885-1897 20/07-24/07 1990 Scan Pass 4 592.3±5.2
1 17 1917-1924 02/08-07/08 1990 Scan Pass 4 370.2±4.1
1 18 1929-1942 09/08-14/08 1990 Scan Pass 4 771.1±5.9
1 19 1951-1974 18/08-29/08 1990 Scan Pass 4 1457.6±8.1
2 20 2205-2230 17/04-22/04 1991 Peak Pass 3 70.6±1.8
2 21 2258-2271 30/04-06/05 1991 Peak Pass 3 693.4±5.6
2 22 2282-2299 08/05-13/05 1991 Peak Pass 3 524.4±4.8
2 23 2316-2330 18/05-26/05 1991 Peak Pass 3 1164.3±7.2
2 24 2338-2345 30/05-02/06 1991 Peak Pass 3 730.3±5.7
2 25 2354-2360 07/06-12/06 1991 Peak Pass 3 1027.0±6.8
2 26 2363-2377 13/06-20/06 1991 Peak Pass 3 1545.6±8.3
2 27 2393-2406 10/07-16/07 1991 Peak Pass 3 611.1±5.2
2 28 2408-2441 19/07-07/08 1991 Peak Pass 3 1677.2±8.7
2 29 2461-2480 17/08-26/08 1991 Scan Pass 3 1635.4±8.6
2 30 2500-2517 05/09-11/09 1991 Scan Pass 3 1841.5±9.1
2 32 2536-2559 11/10-23/10 1991 Scan Pass 3 2231.2±10.0
2 33 2566-2582 27/10-06/11 1991 Scan Pass 3 1722.1±8.8
Table 4.8: The 1990 and 1991 OPAL Data Set.
A summary of the OPAL data set, including the experiment, the period, the runs, the dates, the energy, the pass, and the integrated luminosity. The pass number refers to the latest complete pass of the data through the event reconstruction program ROPE. Period 31 was a 90 kHz test run and is not used for physics analysis.


4.6  Event Display

The event display for the OPAL detector is provided by the GROPE [57] program. The program allows viewing of the final reconstruction of OPAL events using the DST information or the raw data. The program displays three types of information: geometrical information, raw data, and reconstructions for each of the existing subdetectors. The graphics interface is implemented through direct calls to the three-dimensional graphics package GKS3D, which allows two-dimensional views as well as specialized views such as θ-φ and x-y views of events. Figure 4.4 shows a single photon event selected from the data, and Figure 4.5 shows a single electron event in the OPAL detector.


Graphic: images/photon.gif

Figure 4.4: Typical Single Photon Event in OPAL.
A typical single photon event as seen in the OPAL detector. The distinctive features of single photon events are the presence of a single low-energy cluster in the electromagnetic calorimeter, accompanied by a shower in the presampler or time-of-flight activity, with no other activity in the OPAL detector beyond that consistent with noise.


Graphic: images/electron.gif

Figure 4.5: Typical Single Electron Event in OPAL.
A typical single electron event as seen in the OPAL detector. The distinctive characteristics of single electron events are the same as those of single photon events, with the additional associated activity of a charged particle in the central detector and the hadron calorimeter.

