A lot of the problems with string theory reduce to the fact that we don’t really have a way to verify or falsify its predictions. Key to string theory is the existence of, well, strings: tiny vibrating filaments of energy, along with coiled-up bonus dimensions extending beyond the four that we all know and love. Unfortunately, these strings are so small, about 10⁻³⁵ meters, that even the most powerful particle colliders imaginable wouldn’t be able to offer a glimpse.
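To see just how hopeless colliders are here, a quick back-of-envelope sketch (my own numbers, not from the paper): resolving a length scale ℓ takes a collision energy of roughly ħc/ℓ.

```python
# Rough uncertainty-principle estimate: probing a length scale l
# requires energy E ~ hbar*c / l.
HBAR_C_GEV_M = 1.97e-16   # hbar*c in GeV·meters (about 197 MeV·fm)

def probe_energy_gev(length_m):
    """Rough collider energy (GeV) needed to resolve a given length."""
    return HBAR_C_GEV_M / length_m

string_scale = probe_energy_gev(1e-35)   # comes out near the Planck energy
lhc_reach = 1.3e4                        # LHC collision energy, ~13 TeV, in GeV

print(f"Energy to probe 1e-35 m: {string_scale:.1e} GeV")
print(f"That is roughly {string_scale / lhc_reach:.0e} times the LHC's reach")
```

The answer lands around 10¹⁹ GeV, some fifteen orders of magnitude beyond what the LHC can deliver, which is why direct detection is off the table.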
But, still, there are many who want to believe. Enter physicists Sergio Gutiérrez, Abel Camacho, and Héctor Hernández of Universidad Autonoma Metropolitana-Iztapalapa in Mexico City. The trio posted a paper late last month to the arXiv preprint server suggesting that it may be possible to detect extra dimensions by using Bose-Einstein condensates as windows of sorts.
Bose-Einstein condensates are pretty neat. The basic idea is that if you cool a bunch of particles way, way down, they start acting like one great big particle. The upshot is that the quantum effects normally seen at subatomic scales can be experienced at relatively macro scales.
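How cold is "way, way down"? A rough, illustrative calculation (standard textbook physics, not from the paper): condensation kicks in when each atom's thermal de Broglie wavelength, λ = h/√(2πmk_BT), grows to about the spacing between atoms.

```python
import math

# Thermal de Broglie wavelength: the effective quantum "size" of a
# particle at temperature T. When it rivals the interatomic spacing,
# the atoms' wavefunctions overlap and a condensate can form.
H = 6.626e-34      # Planck constant, J·s
K_B = 1.381e-23    # Boltzmann constant, J/K

def de_broglie_wavelength_m(mass_kg, temp_k):
    """Thermal de Broglie wavelength of a particle at temperature T."""
    return H / math.sqrt(2 * math.pi * mass_kg * K_B * temp_k)

m_rb87 = 87 * 1.66e-27            # a rubidium-87 atom, in kg
for t in (300.0, 1e-6, 100e-9):   # room temp, 1 microkelvin, 100 nanokelvin
    print(f"T = {t:g} K -> wavelength {de_broglie_wavelength_m(m_rb87, t):.2e} m")
```

At room temperature the wavelength is subatomic; at 100 nanokelvin it swells to better than half a micron, comparable to the spacing of atoms in a dilute gas, and the whole cloud starts behaving as one quantum object.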
It’s hard to imagine a photon having a shape at all. For one thing, photons—pointlike, indivisible units of light—are massless, which is the whole essence of being a photon to begin with and what enables such particles to travel at the universe’s maximum speed limit (the c in E = mc²). How does something without mass even occupy geometric space? Alternatively, how can space be not empty yet not contain any mass?
As it turns out, photons can take on different shapes and sizes and this winds up mattering a great deal when it comes to interactions between light and matter (such as atoms). To this end, researchers at the National University of Singapore have devised a method for shaping photons with extreme precision, allowing for an unprecedented look at these light-matter interactions at atomic scales.
Their work, which is described this week in Nature Communications, demonstrates that shape plays a key role in predicting whether or not an atom is likely to absorb an incoming photon, an insight likely to have consequences for the development of quantum information technologies, which hinge upon light-matter interactions.
This is among the most fundamental things in the electromagnetic world: Photons carry energy, and when a photon is absorbed by an atom, the atom takes up that energy. This might result in the atom emitting its own photons at new wavelengths (giving rise to the innate colors of objects) or otherwise displaying new properties. This interaction is what enables photosynthesis, as photons from the Sun convey energy to chloroplasts, which convert light energy into chemical energy.
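The energy a single photon hands over depends only on its wavelength, via E = hc/λ. A quick illustrative calculation (standard constants, not numbers from the study):

```python
# Energy of one photon: E = h * c / wavelength.
H = 6.626e-34        # Planck constant, J·s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy carried by a single photon of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

# A green ~550 nm photon, the sort of visible light plants soak up:
print(f"{photon_energy_ev(550e-9):.2f} eV")
```

A couple of electronvolts per photon doesn’t sound like much, but it’s exactly the scale of the electronic transitions in atoms and molecules, which is why visible light and matter interact so readily.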
The Large Hadron Collider, now operating at near-peak luminosity in its second operational phase, may be the most powerful particle collider ever built, smashing together billions of protons per second like it’s not even a thing, but the latest hint of a new particle comes courtesy of the LHC’s predecessor.
This is according to a paper posted Friday to the arXiv preprint server by a physicist named Arno Heister who had worked on the ALEPH experiment at the Large Electron-Positron collider (LEP), which formerly occupied the LHC’s 27-kilometer tunnel.
The LEP collider began operation in 1989 and was later upgraded (in 1995) to LEP II. It has several claims to fame, including refined measurements of the masses of the W and Z bosons and a would-be hint of the Higgs boson. Surely the project’s resulting data has been analyzed and analyzed again by now, but Heister offers an intriguing reconsideration—one that yields a small anomalous signal in the 30 GeV mass range. Is this data bump real, or is it just a statistical artifact?
Bumps like this are generally how new particles are found in collision experiments. Two particles are smashed together at extreme energies and the result is an energetic shower of particle byproducts; you might imagine pairs of wine glasses impacting dead-on at bullet speeds. Statistics describing these post-collision particle showers are collected over time and then analyzed for unknown quantities.
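A crude sketch of the logic, with made-up numbers (not from Heister’s analysis): compare the event count in a mass bin to the expected background and ask how many standard deviations of excess you’ve got.

```python
import math

# Naive local significance of a bump: (observed - background) / sqrt(background).
# Real analyses use likelihood fits and correct for the look-elsewhere
# effect (a bump somewhere is likely by chance); this is only the intuition.
def naive_significance(observed, expected_background):
    return (observed - expected_background) / math.sqrt(expected_background)

# Suppose a mass bin near 30 GeV expects 1000 background events
# but records 1100:
z = naive_significance(1100, 1000)
print(f"local significance: {z:.1f} sigma")
```

An excess like that works out to about three sigma: interesting enough to write a paper about, but well short of the five-sigma bar physicists demand before claiming a discovery.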
Despite the rapid emergence of mobile devices and cloud computing, it’s still intuitive to think of computers as relatively self-contained systems. And, as self-contained systems, they can be relied on to have many if not most of the same capabilities offline as they do online. They work just fine in isolation.
Quantum computing is poised to upend a great many of our assumptions about computers and what it even means to be such a thing. Rather than centralized machines, it may make more sense to imagine the future’s powerful quantum computers as distributed networks of very tiny computers operating in a way similar to how parallel computation occurs in classical computing. A problem is broken into many pieces and sent across many different processors at once; at the other end, all of the results are put together into a final solution.
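The split/compute/combine pattern described above can be sketched in a few lines of ordinary (classical, no qubits involved) code; the function names here are my own illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def split(data, n_chunks):
    """Break a problem into roughly equal pieces."""
    size = (len(data) + n_chunks - 1) // n_chunks
    return [data[i:i + size] for i in range(0, len(data), size)]

def solve_piece(chunk):
    """Stand-in for the work each small processor would do."""
    return sum(x * x for x in chunk)

def solve(data, n_workers=4):
    """Farm the pieces out to workers, then combine the partial results."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(solve_piece, split(data, n_workers))
    return sum(partials)

print(solve(range(1000)))   # same answer as the serial computation
```

The quantum version is harder precisely because the "pieces" must stay entangled with one another while they’re processed, which is why linking small quantum machines is a research problem rather than an engineering chore.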
Physicists are routinely breaking distance records for quantum correlations, but connecting quantum computers at very short distances is important too when it comes to distributed quantum computing. To this end, researchers at Sandia National Laboratories have successfully bridged quantum computers at atomic scales. Their work, which is described in the current issue of Science, offers the possibility of arranging many small quantum computers into dense, powerful parallel networks.
On Monday, the Council of the Cherenkov Telescope Array Observatory signed a deal with the Instituto de Astrofísica de Canarias to place 19 telescopes on the island of La Palma in the Canary Islands. The site, located on a high-altitude plateau near the rim of an extinct volcanic crater, will provide the pristine viewing conditions needed for spotting the bits of blue light known as Cherenkov radiation that are characteristic of high-energy gamma rays smashing into Earth’s upper atmosphere.
It’s these rays that are the experiment’s ultimate quarry. Once complete, the Cherenkov Telescope Array (CTA) will be able to spot incoming gamma rays with a precision 10 times that of the current best instruments. The Canary Islands array will be only the Northern Hemisphere portion of the CTA, with another 99 telescopes to be installed at a site in Chile. (The second site is still being negotiated with its European Southern Observatory landlord.) The two arrays combined will allow access to observations from across the whole sky and across a wide range of energies.
The telescopes in question are Imaging Atmospheric Cherenkov Telescopes (IACTs). They look very cool, particularly on a foggy night when the laser beams used to align their mirror arrays are visible. These mirrors are used to focus ultrashort bursts of light—Cherenkov radiation—into photomultiplier tubes, which are coupled to electronics that perform quick data analyses on the events.
No one said detecting dark matter would be easy. We’ve only had solid evidence for the stuff for a matter of decades, after all, despite the fact that it represents some 85 percent of all of the mass in the universe and is what’s responsible for giving structure to the cosmos. We see its effects across the universe, but we have yet to see it. We’re not even sure what exactly we’re looking for—there are many theories as to the exact properties of a dark matter particle. Some aren’t even all that dark.
The leading candidate for dark matter is a particle known as a WIMP, or weakly interacting massive particle. These are really heavy, classically “dark” particles. They interact with other matter only via gravity and the weak nuclear force, crucially evading electromagnetic interactions, which are what most of the interactions we see out in the world are based on: from a baseball whapping into a catcher’s mitt to the nanoscale electrical circuits enabling the machine you are now staring at.
WIMP detection is premised on WIMPs having sufficient mass to smack into an atomic nucleus with enough force to create a bit of light or heat, which can then be registered by the detector. A problem then arises when we start trying to imagine dark matter particles that maybe aren’t so heavy and, as such, may result in interactions below the sensitivity of current detectors. This is where the work of Kathryn Zurek and colleagues at the Lawrence Berkeley National Laboratory comes in—bagging superlight dark matter may require supermaterials.
Though no one could really say what it was specifically, the Large Hadron Collider’s 750 GeV diphoton bump registered at least one unambiguous conclusion for physicists: they’d found something new. In the showers of proton collision byproducts that occurred during the 2015 run of CERN’s ATLAS and CMS experiments, it seemed there was a new particle.
2016 data, however, failed to replicate the bump, indicating that the earlier observations were just statistical fluctuations. This has resulted in a generally grim attitude shared by many researchers in high-energy physics: The LHC managed to bag the Higgs boson, yes. But bagging New Physics, the presence of a particle or interaction so-far unknown? Not so much.
Yet, just as the diphoton bump was being kicked to the curb, a potential new strangeness emerged at the LHC, albeit one that’s less plainly seen. This has to do with a process known as ttH (top-top-Higgs), which is an alternative mode of Higgs boson production that results in the creation of Higgs particles alongside pairs of top quarks, the heaviest known fundamental particles.
After a 20-month search, a key dark matter detection experiment has officially come up empty-handed, casting doubt on the existence of weakly interacting massive particles (WIMPs), which have been far and away the leading explanation for one of the biggest mysteries in astrophysics. This is according to new results from South Dakota’s Large Underground Xenon (LUX) detector presented Thursday at the Identification of Dark Matter Conference (IDM 2016) in Sheffield, England.
“With this final result from the 2014-2016 search, the scientists of the LUX Collaboration have pushed the sensitivity of the instrument to a final performance level that is four times better than the original project goals,” offered Rick Gaitskell, professor of physics at Brown University and co-spokesperson for the LUX experiment, in a statement. “It would have been marvelous if the improved sensitivity had also delivered a clear dark matter signal. However, what we have observed is consistent with background alone.”
To be clear, this doesn’t say anything about the existence of dark matter itself, just one of many possible explanations for dark matter. And, given that dark matter accounts for some 85 percent of all of the mass in the universe and is responsible for guiding and nurturing the development of galaxies, this is an explanation that’s ultimately at the very heart of how the universe wound up as we see it today. Far from a cosmic curiosity, dark matter and its surrounding mystery explains why we’re even here.
Dark energy is arguably the best mystery in astrophysics. Here we have an uneasy placeholder for almost all of the energy in the universe—energy that, as you read this, is working hard to shred the universe itself.
Energy that will not be satisfied until all of existence is a featureless black void. Dark energy also has the bonus selling point of being a fairly new idea, tracing back to the late-1990s discovery that the universe is not just expanding, but is also accelerating in its expansion. At every moment, the universe gets both bigger and emptier (more space, same amount of stuff).
What’s doing this or, better, why it’s happening is unsettled, to put it mildly. The general answer is that quantum physics insists that empty space has energy—vacuum energy—but what, exactly, that means is TBD. In a paper posted this week to the arXiv preprint server, a group of cosmologists from the University of Barcelona makes an interesting case for dark energy being linked to so-called frozen neutrinos: neutrinos that may have become coupled to dark energy as the universe began to cool, circa a million years after the Big Bang. These neutrinos, which had been hauling ass around the universe at near-light speed, were suddenly arrested, passing on their kinetic energy (energy of motion) to the dark energy field in the process.
Until 1981, the atom was a sort of imaginary entity. We knew it was there, of course, and we could even measure and observe it via a number of different techniques—including field ion microscopy—but we couldn’t just go and look at an atom in the same way that we could peer into a microscope and look at some biological cells.
Then, in 1981, Gerd Binnig and Heinrich Rohrer came along with the scanning tunneling microscope, which allowed scientists to look at surfaces at atomic scales for the first time. The pair won the Nobel Prize for the accomplishment in 1986. Here, in the latest of Physics World’s 100 Second Science series, the physicist Peter Wahl explains how the thing actually works.
A group of Australian physicists has demonstrated for the first time that even atoms, relatively massive particles thousands of times heavier than electrons (insofar as electrons can be said to take up space at all), exhibit the strangest property of wave-particle duality—that is, a particle acting as both a deterministic point and a probabilistic wave, depending on how and when we observe it. More specifically, the researchers performed a version of John Wheeler’s canonical “delayed choice” thought experiment, which attempts to pin down when exactly a quantum particle “decides” whether it’s going to act like a wave or a point.
So, the classic demonstration of quantum oddness goes like this. Particles are fired from an emitter at some barrier with two slits cut into it, side by side. Behind the slits is a surface that records where the particles arrive. In this basic configuration, the particles behave as waves rather than point-like, deterministic objects.
We know this because the final barrier, where the particles are detected, reveals an interference pattern characteristic of two waves meeting. As waves, individual particles don’t choose one slit or the other; they pass through both, as we’d expect. On the other side, the waves that passed through each slit recombine. The effect is, in essence, that of a particle interfering with itself, which is really strange, and it leads to the popular interpretation that a single particle is somehow in many places at once.
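The interference pattern itself falls out of a one-line model (the idealized textbook version with point slits; the slit spacing and wavelength below are arbitrary illustrative values): intensity on the screen goes as the squared cosine of the phase difference between the two paths.

```python
import math

# Idealized two-slit pattern: I(x) proportional to cos^2(pi * d * x / (lambda * L)),
# where d is the slit separation, lambda the wavelength, and L the
# distance to the screen. Bright and dark fringes alternate every
# lambda * L / (2 * d) along the screen.
def intensity(x_m, slit_sep_m, wavelength_m, screen_dist_m):
    phase = math.pi * slit_sep_m * x_m / (wavelength_m * screen_dist_m)
    return math.cos(phase) ** 2

# 500 nm light, slits 0.1 mm apart, screen 1 m away; fringes repeat
# every 5 mm, so these three points land bright, dark, bright:
for x_mm in (0.0, 2.5, 5.0):
    i = intensity(x_mm * 1e-3, 1e-4, 500e-9, 1.0)
    print(f"x = {x_mm} mm -> relative intensity {i:.2f}")
```

The crucial point is that this pattern builds up even when particles are fired one at a time: each lone particle somehow samples both paths.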
As Richard Feynman famously quipped, wave-particle duality is the “mystery that cannot go away.”
To really demonstrate wave-particle duality, the double-slit experiment is modified such that there’s a measurement apparatus placed on the emitter side of the slits. The particle is then measured before it goes through the slits, in which case the interference pattern vanishes and the particle just acts like a particle, registering in one place at one time, as expected. The conclusion is that measurement—observation, generally—forces the probabilistic particle-wave to “choose” where it actually is, confining it to a discrete point. The nature of this choice is a deep question.
For example, when does a particle choose where it will be at a given time? Does it always know, or have some complete knowledge of the experiment beforehand? Or does it only know at the moment of measurement? How important is the observer, really? In Wheeler’s thought experiment, and the current IRL experiments, the decision of whether or not to measure the particle is made only after the particle is emitted.