He knew that the Higgs could manifest itself in many different forms, and this particular channel was extremely delicate. A common mantra in science is that extraordinary claims require extraordinary evidence.
If the analysis is bulletproof, the next question is whether the evidence is substantial enough to claim a discovery. And if a discovery can be claimed, the final question is what, exactly, has been discovered? Scientists can have complete confidence in their results but remain uncertain about how to interpret them.
With all of that in mind, Incandela and his team made a decision: From that point on, everyone would refine their scientific analyses using special data samples and a patch of fake data generated by computer simulations covering the interesting areas of their analyses.
Then, when they were sure about their methodology and had enough data to make a significant observation, they would remove the patch and use their algorithms on all the real data in a process called unblinding.
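The blinding procedure can be illustrated with a minimal sketch (all names, windows, and numbers here are illustrative, not taken from the experiments' code): the signal region of the mass spectrum is hidden while the method is tuned on the sidebands, and the mask is removed only at the unblinding step.

```python
# Illustrative blinding of a mass spectrum; the window is hypothetical.
BLIND_WINDOW = (110.0, 140.0)  # GeV; signal region to hide during tuning

def blind(masses, window=BLIND_WINDOW):
    """Remove candidates in the signal window so the analysis can be
    refined on the sidebands only."""
    lo, hi = window
    return [m for m in masses if not (lo <= m <= hi)]

masses = [91.2, 105.3, 118.7, 124.9, 125.6, 133.0, 150.2]
sidebands = blind(masses)  # used while the methodology is being refined
unblinded = masses         # the full dataset, examined only at unblinding
```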
A few weeks before July 4, all the different analysis groups met with Incandela to present a first look at their unblinded results. This time the bump was highly significant, and it showed up at the same mass in two independent channels. The next few weeks were among the most intense I have ever experienced. One time I thought some colleagues had come into the office, but it turned out to be two stray cats fighting in the corridor.
According to predictions from the Standard Model, the Higgs can transform into two of these particles when it decays, so scientists on both experiments knew that this project would be key to the discovery process. All in all, it was the most exciting time in my career. I think the best qualities of the community came out during the discovery. At the end of June, Hard and his colleagues synthesized all of their work into a single analysis to see what it revealed.
And there it was again—that same bump, this time surpassing the statistical threshold the particle physics community generally requires to claim a discovery. Hard had no idea whether CMS scientists were looking at the same thing. A secondary vertex occurs when a particle originating from the interaction decays after travelling some distance from the primary vertex. The impact parameter of a reconstructed charged particle track is its distance of closest approach to the primary vertex.
The magnitude of the impact parameter can be used to discriminate between the tracks of particles that originated in the interaction and those of particles produced in the decay of a relatively long-lived product of the interaction (for example, a b-hadron), even if it is not possible to reconstruct a secondary vertex.
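As a rough sketch of how such a discriminant works, one can treat a track as a straight line in the transverse plane and compute its distance of closest approach to the primary vertex (the cut value below is purely illustrative):

```python
import math

def transverse_impact_parameter(point, direction, primary_vertex):
    """Distance of closest approach, in the transverse plane, of a
    straight-line track to the primary vertex (units: mm here)."""
    px, py = point
    dx, dy = direction
    vx, vy = primary_vertex
    norm = math.hypot(dx, dy)
    # Perpendicular distance from the vertex to the line through `point`.
    return abs((px - vx) * dy - (py - vy) * dx) / norm

# A track through (0, 1) mm travelling along x has d0 = 1 mm with
# respect to a primary vertex at the origin:
d0 = transverse_impact_parameter((0.0, 1.0), (1.0, 0.0), (0.0, 0.0))
prompt = d0 < 0.1  # hypothetical cut: small d0 suggests a prompt particle
```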
Leptons and photons originating from a hard interaction, or from the decay chain of a particle (such as a Z boson or Higgs boson) directly produced in a hard interaction, are not, except by chance, closely accompanied by other particles: they are isolated. Many lepton and photon candidates reconstructed in the detector do not originate directly from the hard interaction, but come from the constituents of the jets of particles formed by the hadronization of gluons or quarks.
These candidates tend not to be isolated — they are surrounded by other particles from the jet. A measure of how many particles, or how much energy, surrounds a candidate lepton or photon can be used to determine whether it is probable that it emerged directly from the hard interaction, or more likely that it originated in a jet fragment.
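A minimal sketch of such an isolation variable, assuming particles are described by (eta, phi, pT) triples; the cone size and threshold are illustrative, not the experiments' actual values:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in eta-phi space, with phi wrapped to [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(candidate, others, cone=0.3):
    """Scalar pT sum of particles within the cone, divided by the
    candidate pT; small values indicate an isolated lepton or photon."""
    eta, phi, pt = candidate
    nearby = sum(p for (e, f, p) in others if delta_r(eta, phi, e, f) < cone)
    return nearby / pt

# A 50 GeV candidate with only 2 GeV of nearby activity is well isolated:
iso = relative_isolation((0.0, 0.0, 50.0), [(0.1, 0.1, 2.0), (1.5, 2.0, 30.0)])
isolated = iso < 0.1  # illustrative threshold
```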
Events selected so as to have negligible probability of containing signal events constitute a control sample, and the selection requirements define a control region in parameter space.
The control sample in the control region contains only background events and can be used to estimate the number of background events expected in the region of the signal, provided the effect of whatever distinguishes the control region from the signal region is sufficiently well understood.
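A data-driven background estimate of this kind can be sketched in a few lines; the transfer factor and yields below are hypothetical, and real analyses propagate uncertainties on all of these inputs:

```python
def background_in_signal_region(n_control_data, transfer_factor):
    """Scale the observed control-region yield by a transfer factor,
    i.e. the ratio of signal-region to control-region background,
    taken for example from simulation."""
    return n_control_data * transfer_factor

# Hypothetical numbers: 400 events observed in the control region, and
# simulation predicts the background to be 5% as large in the signal region.
estimate = background_in_signal_region(400, 0.05)  # about 20 expected events
```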
For both experiments, an excess of events is present in the mass region around 125 GeV. Details about the observed and expected numbers of events, their uncertainties, and the split into the various four-lepton channels are given in Table 1 for the ATLAS experiment and in Table 2 for the CMS experiment.
In order to quantify these excesses, the probabilities for the background-only hypotheses were calculated. The local p-value gives the probability that, for a fixed mass hypothesis, a background fluctuation produces an excess at least as large as the one observed. The probability that an excess of events observed anywhere within an extended search range is due to a background fluctuation is termed the global p-value, and it is larger than the local p-value; this fact is often referred to as the look-elsewhere effect.
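These definitions can be made concrete with a short sketch. The significance-to-p-value conversion is standard; the trial-factor correction shown is only a crude Bonferroni-style upper bound, not the method the experiments actually use, and the number of trials is illustrative:

```python
import math

def local_p_value(z):
    """One-sided p-value for a significance of z standard deviations."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def global_p_value_bound(p_local, n_trials):
    """Bonferroni-style upper bound on the global p-value for a search
    with roughly n_trials independent mass hypotheses."""
    return min(1.0, n_trials * p_local)

p5 = local_p_value(5.0)  # about 2.9e-7, the "five sigma" discovery threshold
p_glob = global_p_value_bound(p5, 100)  # look-elsewhere-corrected bound
```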
The search for the Standard Model Higgs boson decaying to two photons involves detecting a narrow peak above a background diphoton invariant mass spectrum. The Higgs boson decay to two photons proceeds via loop diagrams containing charged particles.
The W boson loop and the top quark loop diagrams dominate the decay amplitude, but contribute with opposite sign. The Standard Model branching fraction is small, reaching a maximum of only about 0.2%. Events containing diphotons are collected by both experiments using diphoton triggers. Further selection requirements are applied to suppress the reducible background contribution from photons originating in jets. Requirements are made on the shapes of the showers in the calorimeter cells or crystals, and on the isolation of the photons from other activity in the detectors.
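The narrow-peak search rests on reconstructing the diphoton invariant mass from the measured photon momenta. A minimal sketch, using the standard massless two-body formula (the kinematic values below are illustrative):

```python
import math

def diphoton_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two massless photons from their transverse
    momenta and directions:
    m^2 = 2 * pT1 * pT2 * (cosh(d_eta) - cos(d_phi))."""
    m2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(max(m2, 0.0))

# Two back-to-back 62.5 GeV photons in the central detector:
m = diphoton_mass(62.5, 0.0, 0.0, 62.5, 0.0, math.pi)  # -> 125.0 GeV
```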
There are many interactions in each bunch crossing, whose longitudinal positions along the beam axis have an RMS spread of about 5 cm. The diphoton interaction vertex is assigned using multiple sources of information combined in a global likelihood (ATLAS) or in a boosted decision tree, BDT (Hoecker et al.) (CMS). The ATLAS analysis additionally considers the directions of the photon showers reconstructed in the calorimeter using its longitudinal granularity. The CMS analysis additionally examines the correlation between the kinematic properties of the charged-particle tracks associated with the reconstructed vertices and those of the diphoton system.
Both ATLAS and CMS enhance the sensitivity of their analyses by subdividing the selected diphoton events into mutually exclusive categories, where the categorization is based upon criteria sensitive to the diphoton mass resolution, and to the probability that an event is signal rather than background.
The ATLAS classification is based on the location of the photons in the calorimeter, on whether they are tagged as having converted in the material in front of the calorimeter (by the presence of a reconstructed electron track), and on pTt, the component of the diphoton transverse momentum that is orthogonal to the axis defined by the difference between the photon momenta.
The CMS classification is based on the output of a BDT which is trained to give a high value for signal-like events and for events with good diphoton invariant mass resolution. Events in which a dijet is present, in addition to the diphoton, and satisfies selection criteria chosen to be consistent with the characteristics of signal events produced by the vector-boson fusion process, are placed in dijet categories.
For the analysis of the 8 TeV dataset, CMS uses two dijet categories with differing levels of selection stringency. Both analyses published tables listing the categories and indicating the numbers of events expected from a Standard Model Higgs boson signal in each category, the mass resolution expected in each category, and information about the numbers of events present in the data (ATLAS Collaboration; CMS Collaboration). Some important numbers from these tables are reproduced in Table 3 and Table 4.
For the statistical analysis of the data, the sum of a signal mass peak and a background distribution is fitted to the diphoton invariant mass distribution. The shape of the invariant mass distribution of the signals is obtained from detailed simulation for each of the categories.
The background in the different categories is modelled by parametric functions. Various tests are made to determine the potential bias resulting from the choice of background fit functions. These tests involve either pseudo-experiments, or large samples of simulated events complemented by data-driven estimates.
CMS uses polynomials in the Bernstein basis with degree ranging from 3 to 5 depending on category. ATLAS uses Bernstein polynomials of degree 4, exponentials of a polynomial of degree 2, and plain exponential functions, depending on the category.
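The Bernstein basis mentioned above is straightforward to evaluate directly; a small sketch, assuming the mass variable has been mapped onto the interval [0, 1] (one practical appeal of this basis is that non-negative coefficients guarantee a non-negative background shape):

```python
from math import comb

def bernstein(coeffs, x):
    """Evaluate a polynomial in the Bernstein basis on [0, 1]:
    sum_i c_i * C(n, i) * x**i * (1 - x)**(n - i), n = len(coeffs) - 1."""
    n = len(coeffs) - 1
    return sum(c * comb(n, i) * x**i * (1 - x)**(n - i)
               for i, c in enumerate(coeffs))

# The basis functions sum to 1 at any x, so equal coefficients give a
# flat function:
flat = bernstein([2.0, 2.0, 2.0, 2.0, 2.0], 0.3)  # -> 2.0 (up to rounding)
```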
Based on searches in this channel, mass regions could be excluded by both the Tevatron and the LHC experiments already in the summer of 2011. The presence of neutrinos makes the reconstruction of a narrow mass peak impossible, and evidence for a signal must be extracted from an excess of events above the expected backgrounds. Usually, the WW transverse mass mT, computed from the leptons and the missing transverse momentum as mT = sqrt[(ET_ll + ET_miss)^2 - |pT_ll + pT_miss|^2], is used as a discriminating variable. Typical selection requirements are the presence of two isolated high-pT leptons with a small azimuthal angular separation and a significant missing transverse energy.
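The WW transverse mass can be sketched as follows, assuming the dilepton system is described by its transverse momentum, azimuthal angle, and invariant mass (the function name and the kinematic values are illustrative):

```python
import math

def ww_transverse_mass(pt_ll, phi_ll, m_ll, met, phi_met):
    """Transverse mass of the dilepton plus missing-momentum system:
    mT^2 = (ET_ll + ET_miss)^2 - |pT_ll + pT_miss|^2,
    with ET_ll = sqrt(pT_ll^2 + m_ll^2) and ET_miss = MET."""
    et_ll = math.hypot(pt_ll, m_ll)
    px = pt_ll * math.cos(phi_ll) + met * math.cos(phi_met)
    py = pt_ll * math.sin(phi_ll) + met * math.sin(phi_met)
    mt2 = (et_ll + met) ** 2 - (px * px + py * py)
    return math.sqrt(max(mt2, 0.0))

# Dilepton system of pT = 50 GeV, mass 30 GeV, with 50 GeV of missing
# momentum exactly opposite in azimuth:
mt = ww_transverse_mass(50.0, 0.0, 30.0, 50.0, math.pi)
```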
The weak decays of the W-bosons imply a correlation between the directions of the charged leptons, which can be exploited to reject the WW background. The different jet categories are sensitive to different Higgs boson production mechanisms, and show very different background compositions.
The distributions of two discriminating variables, the transverse mass and the invariant mass of the two leptons, as measured in July 2012 after the application of all final selection criteria, are shown in the corresponding figures. Excesses of events above the expectations from Standard Model processes excluding Higgs boson production are visible.
Details about the observed and expected numbers of events, their uncertainties and the split in the various categories are given in Table 5 for the ATLAS and in Table 6 for the CMS experiment.
A shallow minimum of the local p-value appears in the same mass region. These decay channels require a larger data sample to produce statistically significant measurements. The background is estimated mainly through the use of control samples in data. The results in the individual categories are combined to give the final result.
This restriction greatly improves the signal-to-background ratio. How can we test for the Higgs field? Winding its way hundreds of yards under Geneva, Switzerland, crossing the French border and back again, the LHC is a nearly 17-mile-long circular tunnel that serves as a racetrack for smashing together particles of matter. The LHC is surrounded by about 9,000 superconducting magnets, and is home to streaming hordes of protons, cycling around the tunnel in both directions, which the magnets accelerate to just shy of the speed of light.
At such speeds, the protons whip around the tunnel about 11,000 times each second, and when directed by the magnets, engage in millions of collisions in the blink of an eye. The collisions, in turn, produce fireworks-like sprays of particles, which mammoth detectors capture and record. The math showed that if the idea is right, if we are really immersed in an ocean of Higgs field, then the violent particle collisions should be able to jiggle the field, much as two colliding submarines would jiggle the water around them.
And every so often, the jiggling should be just right to flick off a speck of the field—a tiny droplet of the Higgs ocean—which would appear as the long-sought Higgs particle. The calculations also showed that the Higgs particle would be unstable, disintegrating into other particles in a minuscule fraction of a second.
In the early morning hours of July 4, 2012, I gathered with about 20 other stalwarts in a conference room at the Aspen Center for Physics to view the live-stream of a press conference at the Large Hadron Collider facilities in Geneva.
About six months earlier, two independent teams of researchers charged with gathering and analyzing the LHC data had announced a strong indication that the Higgs particle had been found. The rumor now flying around the physics community was that the teams finally had sufficient evidence to stake a definitive claim.
Coupled with the fact that Peter Higgs himself had been asked to make the trip to Geneva, there was ample motivation to stay up past 3 a.m. And as the world quickly came to learn, the evidence that the Higgs particle had been detected was strong enough to cross the threshold of discovery.
With the Higgs particle now officially found, the audience in Geneva broke out into wild applause, as did our little group in Aspen, and no doubt dozens of similar gatherings around the globe. Peter Higgs wiped away a tear. Fields are nothing new to physics: radio and television waves, gravitational fields. But none of these is permanent. None is unchanging.
None is uniformly present throughout the universe. In this regard, the Higgs field is fundamentally different. We believe its value is the same on Earth as near Saturn, in the Orion Nebulae, throughout the Andromeda Galaxy and everywhere else.
As far as we can tell, the Higgs field is indelibly imprinted on the spatial fabric. Second, the Higgs particle represents a new form of matter, which had been widely anticipated for decades but had never been seen. Early in the 20th century, physicists realized that particles, in addition to their mass and electric charge, have a third defining feature: their spin.
Electrons and quarks all have the same spin value, while the spin of photons—particles of light—is twice that of electrons and quarks.
The equations describing the Higgs particle showed that—unlike any other fundamental particle species—it should have no spin at all. Data from the Large Hadron Collider have now confirmed this. Establishing the existence of a new form of matter is a rare achievement, but the result has resonance in another field: cosmology, the scientific study of how the entire universe began and developed into the form we now witness.
For many years, cosmologists studying the Big Bang theory were stymied. They had pieced together a robust description of how the universe evolved from a split second after the beginning, but they were unable to give any insight into what drove space to start expanding in the first place.