Tony Smith's Home Page
Run 2 of the Fermilab Tevatron collider experiments to detect and measure the Truth Quark began in 1999.
Burton Richter, of SLAC, says (in hep-ex/0001012):
"... The LHC will have an energy of 14 TeV in the proton-proton center of mass, a luminosity of 10^34 cm^(-2) s^(-1) and a mass reach of about 1 TeV. Operations are expected to begin in the year 2005.
The two main experiments ATLAS and CMS will each have 1500 to 2000 collaborators. The size of these collaborations is unprecedented and presents difficult organizational problems in getting ready and new sociological problems in operation. In the 500-strong collaborations of today, we already have a bureaucratic overlay to the science with committees that decide on the trigger, data analysis procedures, error analysis, speakers, paper publications, etc. The participating scientists are imprisoned by golden bars of consensus. ...
... The experiments are difficult and the detectors are complex, expensive devices. Data rates, particularly at proton colliders, are enormous and there is no way to digest it all. Complex, multi-tiered trigger systems are needed to reduce the flood of data coming from the machine by a factor of 10 million or more so that our computer systems can handle the load. Those events that do not pass the trigger screen are discarded. There is a danger here.
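The scale of that reduction factor can be checked with a back-of-envelope calculation. This is only a sketch: the 40 MHz bunch-crossing rate is the LHC design figure (25 ns bunch spacing), not a number given in the quote above.

```python
# Back-of-envelope trigger arithmetic for the reduction factor Richter
# quotes. The 40 MHz bunch-crossing rate is the LHC design figure
# (25 ns bunch spacing); the reduction factor is from the quote above.
bunch_crossing_rate_hz = 40e6   # LHC design: one crossing every 25 ns
reduction_factor = 10e6         # "a factor of 10 million or more"

recorded_rate_hz = bunch_crossing_rate_hz / reduction_factor
print(f"events written to storage: ~{recorded_rate_hz:.0f} per second")
```

So of the tens of millions of crossings per second, only a handful of events per second survive the trigger to be recorded, which is what makes the trigger's implicit assumptions so consequential.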
According to Fermilab, Run 2 should produce at least 20 times as many proton-antiproton collisions as Run 1, and therefore at least 20 times as many events.
In Run 1, there were roughly (to an order of magnitude) 10 dilepton or tagged lepton + jets events, and
roughly (to an order of magnitude) 100 semileptonic (not necessarily tagged) events,
so that I could look pretty closely at each individual event in my analysis, which produced a result (the green bar in the chart below) that was consistent with the D4-D5-E6-E7 physics model value of about 130 GeV for the Truth Quark mass (the dark blue line in the chart below).
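The 20-times scaling can be worked out explicitly. A minimal sketch, using the order-of-magnitude Run 1 counts quoted above:

```python
# Scaling the order-of-magnitude Run 1 event counts by the factor of
# at least 20 that Fermilab quotes for Run 2.
RUN2_SCALE = 20  # minimum increase in collisions, per Fermilab

run1_counts = {
    "dilepton or tagged lepton + jets": 10,    # order of magnitude
    "semileptonic (not necessarily tagged)": 100,  # order of magnitude
}

run2_counts = {ch: RUN2_SCALE * n for ch, n in run1_counts.items()}
for ch, n in run2_counts.items():
    print(f"{ch}: Run 1 ~{run1_counts[ch]}, Run 2 at least ~{n}")
```

With hundreds of dilepton events and thousands of semileptonic events, looking closely at each individual event by hand, as I did for Run 1, becomes much harder.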
Fermilab's analysis of the same events showed a higher value, about 170 GeV, for the Truth Quark mass (the cyan bar in the chart below).
For detailed comparison of my analysis with the Fermilab analysis, see:
However, there is another possibility that worries me more than the time it will take to look at a lot more events.
My greater worry is based on the fact that the disagreement between my analysis and the Fermilab analysis rests substantially on disagreement as to what is background and what is signal.
For example, with respect to the following histogram of CDF events, described in 1994 in FERMILAB-PUB-94/097-E,
the Fermilab CDF analysis disregards the green region by saying (on p. 140): "...There are 13 events with a mass above 160 GeV/c^2, whereas the bin with masses between 140 and 150 GeV/c^2 has eight events. We assume the mass combinations in the 140 to 150 GeV/c^2 bin represent a statistical fluctuation since their width is narrower than expected for a top signal. ...",
and the Fermilab CDF analysis considers the cyan region to represent the signal, thus justifying their value of about 170 GeV for the Truth Quark mass,
while my analysis considers the cyan region to represent poorly understood background,
and the green region to represent the signal, thus justifying my value of about 130 GeV for the Truth Quark mass (the green region is at about 145 GeV, which is close enough to my theoretical tree-level value of 130 GeV for me to be happy).
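Whether the 8 events in the 140 to 150 GeV/c^2 bin are a statistical fluctuation can in principle be quantified with Poisson statistics. The sketch below is illustrative only: the expected background of 3 events per bin is a number I am assuming for the example, not the actual CDF background estimate.

```python
import math

def poisson_tail(lam, k):
    """P(N >= k) for a Poisson-distributed count N with mean lam."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

# Hypothetical expected background in the 140-150 GeV/c^2 bin; this
# number is assumed purely for illustration, not taken from CDF.
expected_background = 3.0
observed = 8

p = poisson_tail(expected_background, observed)
print(f"P(N >= {observed} | mean {expected_background}) = {p:.4f}")
```

Under that assumed background, seeing 8 or more events in the bin would happen only about 1% of the time by chance, which is why "statistical fluctuation" is itself a judgment that depends on what background one assumes.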
Consider the paper hep-ex/9907041, Neural Networks for Analysis of Top Quark Production, by the D0 Collaboration, which says: "... The golden channel for observing tt decays has long been the dilepton mode ... Due to the presence of two leptons with different flavors, this channel has a very low background. However, compared to the channels in which one of the W bosons decays into jets, the e-mu channel has a relatively small branching ratio - about 2.5%, versus about 15% for the e + jets channel. Therefore, any new analysis techniques that can increase efficiency for identifying signal in this channel while maintaining the low background level are welcome. ... neural networks provide a significant improvement over conventional analysis methods. We expect that such techniques will have a prominent place in the analysis of data from the upcoming Run 2 of the Tevatron. ...".
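To illustrate what such a network does, here is a toy sketch only: the two kinematic features, the architecture, and the data below are all invented for the example, and are not the D0 network or its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "signal" and "background" events with two invented kinematic
# features; nothing here corresponds to real D0 data or to D0's network.
n = 200
signal = rng.normal(loc=[2.0, 2.0], scale=0.8, size=(n, 2))
background = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 tanh units, sigmoid output, trained by
# plain gradient descent on the mean cross-entropy loss.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(1000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # network's estimate of P(signal)
    d_out = (p - y)[:, None] / len(y)     # dLoss/dlogit for cross-entropy
    d_h = (d_out @ W2.T) * (1.0 - h**2)   # backprop through tanh
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

acc = float(((p > 0.5) == y).mean())
print(f"training accuracy on toy data: {acc:.2f}")
```

Note that the trained boundary between "signal" and "background" lives entirely in the learned weights, not in any human-readable cut, which is exactly the sense in which assumptions get buried in the Automated System.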
I do NOT believe that Neural Networks are inherently bad,
but I DO believe that Neural Networks are a move from Human Judgment to Automation, and
that more Automation often leads to more assumptions that are implicitly buried in the structure of the Automated System, and
that it is often hard to figure out how such buried implicit assumptions work,
and what their consequences are.