
Wednesday, June 15, 2016

A magnetic vortex to control electron spin

Researchers coupled a diamond nanoparticle with a magnetic vortex to control electron spin in nitrogen-vacancy defects. © Case Western Reserve University

Researchers at Case Western Reserve University have developed a way to swiftly and precisely control electron spins at room temperature.
The technology, described in Nature Communications, offers a possible alternative strategy for building quantum computers that are far faster and more powerful than today's supercomputers.
"What makes electronic devices possible is controlling the movement of electrons from place to place using electric fields that are strong, fast and local," said physics Professor Jesse Berezovsky, leader of the research. "That's hard with magnetic fields, but they're what you need to control spin."
Other researchers have searched for materials where electric fields can mimic the effects of a magnetic field, but finding materials where this effect is strong enough and still works at room temperature has proven difficult.
"Our solution," Berezovsky said, "is to use a magnetic vortex."
Berezovsky worked with physics PhD students Michael S. Wolf and Robert Badea.
The researchers fabricated magnetic micro-disks that, rather than having the north and south poles of a bar magnet, magnetize into a vortex. A magnetic field emanates from the vortex core; at the center point the field is particularly strong and rises perpendicular to the disk.
The vortices are coupled with diamond nanoparticles. In the diamond lattice inside each nanoparticle, several individual spins are trapped inside defects called nitrogen vacancies.
The scientists use a pulse from a laser to initialize the spin. By applying microwaves and a weak magnetic field, Berezovsky's team can move the vortex in nanoseconds, shifting the central point, which can cause an electron to change its spin.
In what's called a quantum coherent state, the spin can act as a quantum bit, or qubit: the basic unit of information in a quantum computer.
In current computers, bits of information exist in one of two states: zero or one. But in a superposition state, the spin can be up and down at the same time, that is, zero and one simultaneously. That capability would allow for more complex and faster computing.
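To make that concrete, here is a minimal toy sketch in Python (not the group's code; the 10 MHz Rabi frequency is an assumed illustrative value) of how a resonant microwave drive rotates an initialized spin into an equal superposition of zero and one:

    # Toy model of a resonant microwave (Rabi) drive on a single spin qubit.
    # Illustrative only: the Rabi frequency below is assumed, not measured.
    import numpy as np

    omega = 2 * np.pi * 10e6                # assumed Rabi frequency (rad/s)

    # In the rotating frame, a spin initialized to |0> evolves under the
    # drive as cos(omega*t/2)|0> - i*sin(omega*t/2)|1>.
    def prob_one(t):
        return np.sin(omega * t / 2) ** 2   # probability of measuring |1>

    t_pi2 = np.pi / (2 * omega)             # pi/2 pulse length
    print(f"pi/2 pulse: {t_pi2 * 1e9:.0f} ns")           # ~25 ns
    print(f"P(|1>) after pulse: {prob_one(t_pi2):.2f}")  # 0.50: equal superposition

At this assumed drive strength, a roughly 25-nanosecond pulse leaves the spin in the zero-and-one-at-once state described above.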
"The spins are close to each other; you want spins to interact with their neighbors in quantum computing," Berezovsky said. "The power comes from entanglement."
The magnetic field gradient produced by a vortex proved sufficient to manipulate spins just nanometers apart.
In addition to computing, electrons controlled in coherent quantum states might be useful for extremely high-resolution sensors, the researchers say. For example, in an MRI, they could be used to sense magnetic fields in far more detail than with today's technology, perhaps distinguishing atoms.
Controlling the electron spins without destroying the coherent quantum states has proven difficult with other techniques, but a series of experiments by the group has shown the quantum states remain solid. In fact, "the vortex appears to enhance the microwave field we apply," Berezovsky said.
The scientists are continuing to shorten the time it takes to change the spin, which is a key to high-speed computing. They are also investigating the interactions between the vortex, microwave magnetic field and electron spin, and how they evolve together.
Case Western Reserve University

Thursday, January 28, 2016

Argonne-UChicago researchers work to annihilate nanoscale defects in semiconductors

Researchers from the University of Chicago and Argonne use the supercomputing resources at the Argonne Leadership Computing Facility to predict the path molecules must follow to find defect-free states. They designed a process that delivers industry-standard nanocircuitry that can be scaled down to smaller densities without defects.
Courtesy of Argonne National Laboratory

Target dates are critical when the semiconductor industry adds small, enhanced features to consumer devices by integrating advanced materials onto the surfaces of computer chips. Missing a target means postponing a device’s release, which can cost a company millions of dollars, or even its competitiveness in an entire industry.

But meeting target dates can be challenging because the final integrated devices, which include billions of transistors, must be flawless—less than one defect per 100 square centimeters.

Researchers at the University of Chicago and Argonne National Laboratory, led by Profs. Juan de Pablo and Paul Nealey, may have found a way for the semiconductor industry to hit miniaturization targets on time and without defects.

To make microchips, de Pablo and Nealey’s technique involves creating patterns on semiconductor surfaces that direct block copolymer molecules to self-assemble into specific shapes, thinner and at much higher densities than those of the original pattern. The researchers can then use a lithography technique to create nano-trenches where conducting wire materials can be deposited.

This stands in stark contrast to the industry practice of using homopolymers in complex “photoresist” formulations, where researchers have “hit a wall,” unable to make features any smaller.

Before they could develop their new fabrication method, however, de Pablo and Nealey needed to understand exactly how block copolymers self-assemble when coated onto a patterned surface—their concern being that certain constraints cause copolymer nanostructures to assemble into undesired metastable states. To reach the level of perfection demanded to fabricate high-precision nanocircuitry, the team had to eliminate some of these metastable states.

Using the Argonne Leadership Computing Facility, UChicago and Argonne researchers have found a way to miniaturize microchip components using a technique that produces zero defects. This advance will allow semiconductor manufacturers to meet miniaturization target dates to produce smaller components with added functionality for consumer devices.
Courtesy of Argonne National Laboratory

To imagine how block copolymers assemble, it may be helpful to picture an energy landscape consisting of mountains and valleys, in which some valleys are deeper than others. The system prefers defect-free stability, which can be characterized by the deepest (low-energy) valleys, if they can be found. However, systems can get trapped inside higher (medium-energy) valleys, called metastable states, which have more defects.

To move from a metastable to stable state, block copolymer molecules must find ways to climb over the mountains and find lower energy valleys.

“Molecules in these metastable states are comfortable, and they can remain in that state for extraordinarily long periods of time,” said de Pablo.

“In order to escape such states and attain a perfect arrangement, they need to start rearranging themselves in a manner that allows the system to climb over local energy barriers, before reaching a lower energy minimum. What we have done in this work is predict the path these molecules must follow to find defect-free states and designed a process that delivers industry-standard nanocircuitry that can be scaled down to smaller densities without defects.”
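The landscape picture lends itself to a simple illustration. The Python sketch below uses a toy two-well potential and a bare-bones zero-temperature string method; the potential and every parameter are invented for illustration, standing in for the team's far more sophisticated molecular simulations and sampling algorithms. It relaxes an initial path between two states until the path traces the lowest-barrier route over the mountain pass:

    # Toy illustration: finding the minimum-energy path between a metastable
    # state and a stable state on an invented 2D energy landscape.
    import numpy as np

    def V(x, y):                  # two valleys, at (-1, 0) and (1, 0)
        return (x**2 - 1) ** 2 + y**2

    def grad(x, y):
        return np.array([4 * x * (x**2 - 1), 2 * y])

    n, step = 30, 0.01
    path = np.linspace([-1.0, 0.3], [1.0, 0.3], n)   # initial guessed path

    for _ in range(2000):
        # Relax every image downhill, then re-spread the images evenly
        # along the string so it keeps stretching over the barrier.
        path = path - step * np.array([grad(x, y) for x, y in path])
        s = np.cumsum(np.r_[0, np.linalg.norm(np.diff(path, axis=0), axis=1)])
        even = np.linspace(0, s[-1], n)
        path = np.column_stack([np.interp(even, s, path[:, i]) for i in (0, 1)])

    barrier = max(V(x, y) for x, y in path) - V(*path[0])
    print(f"energy barrier along the converged path: {barrier:.2f}")  # ~1.00

The converged path passes through the saddle point between the two valleys, and its highest energy gives the barrier the system must climb, which is the quantity the real calculations needed for block copolymers.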

Supported by a DOE leadership computing grant, de Pablo and his team used the Mira and Fusion supercomputers at the Argonne Leadership Computing Facility. The team ran molecular simulations of self-assembling block copolymers, paired with sophisticated sampling algorithms, to calculate where barriers to structural rearrangement would arise in the material.

After all the calculations were done, the researchers could precisely predict the pathways of molecular rearrangement that block copolymers must take to move from a metastable to stable state. They could also experiment with temperatures, solvents and applied fields to further manipulate and decrease the barriers between these states.

To test these calculations, de Pablo and Nealey partnered with IMEC, an international consortium located in Belgium. Their commercial-grade fabrication and characterization instruments helped the researchers perform experiments under conditions that are not available in academic laboratories.

An individual defect measures only a handful of nanometers; “finding a defect in a 100-square-centimeter area is like finding a needle in a haystack, and there are only a few places in the world where one has access to the necessary equipment to do so,” said de Pablo.

“Manufacturers have long been exploring the feasibility of using block copolymer assembly to reach the small critical dimensions that are demanded by modern computing and higher data storage densities,” de Pablo said. “Their biggest challenge involved evaluating defects; by following the strategies we have outlined, that challenge is greatly diminished.”

John Neuffer, president and CEO of the Semiconductor Industry Association, said industry is relentlessly focused on designing and building chips that are smaller, more powerful and more energy-efficient.

“The key to unlocking the next generation of semiconductor innovation is research,” he said. “SIA commends the work done by Argonne National Laboratory and the University of Chicago, as well as other critical scientific research being done across the United States.”

De Pablo, Nealey and their team will continue their investigations with a wider class of materials, increasing the complexity of patterns and characterizing materials in greater detail while also developing methods based on self-assembly for fabrication of three-dimensional structures.
Their long-term goal, with support from the DOE’s Office of Science, is to arrive at an understanding of directed self-assembly of polymeric molecules that will enable creation of wide classes of materials with exquisite control over their nanostructure and functionality for applications in energy harvesting, storage and transport.

Wednesday, December 9, 2015

Supercomputing the Strange Difference Between Matter and Antimatter

Supercomputers such as Brookhaven Lab's Blue Gene/Q were essential for completing the complex calculation of direct CP symmetry violation. The same calculation would have required two thousand years using a laptop.
An international team of physicists including theorists from the U.S. Department of Energy's (DOE) Brookhaven National Laboratory has published the first calculation of direct "CP" symmetry violation—how the behavior of subatomic particles (in this case, the decay of kaons) differs when matter is swapped out for antimatter. Should the prediction represented by this calculation not match experimental results, it would be conclusive evidence of new, unknown phenomena that lie outside of the Standard Model—physicists' present understanding of the fundamental particles and the forces between them.

The current result—reported in the November 20 issue of Physical Review Letters—does not yet indicate such a difference between experiment and theory, but scientists expect the precision of the calculation to improve dramatically now that they've proven they can tackle the task. With increasing precision, such a difference—and new physics—might still emerge.
"This so called 'direct' symmetry violation is a tiny effect, showing up in just a few particle decays in a million," said Brookhaven physicist Taku Izubuchi, a member of the team performing the calculation. Results from the first, less difficult part of this calculation were reported by the same group in 2012.  However, it is only now, with completion of the second part of this calculation—which was hundreds of times more difficult than the first—that a comparison with the measured size of direct CP violation can be made.  This final part of the calculation required more than 200 million core processing hours on supercomputers, "and would have required two thousand years using a laptop," Izubuchi said.
The calculation determines the size of the symmetry-violating effect as predicted by the Standard Model; the result was compared with experimental measurements that were firmly established in 2000 at the European Center for Nuclear Research (CERN) and Fermi National Accelerator Laboratory.
"This is an especially important place to compare with the Standard Model because the small size of this effect increases the chance that other, new phenomena may become visible," said Robert Mawhinney of Columbia University.
"Although the result from this direct CP violation calculation is consistent with the experimental measurement, revealing no inconsistency with the Standard Model, the calculation is on-going with an accuracy that is expected to increase two-fold within two years," said Peter Boyle of the University of Edinburgh. "This leaves open the possibility that evidence for new phenomena, not described by the Standard Model, may yet be uncovered."

Matter-antimatter asymmetry

Physicists' present understanding of the universe requires that particles and their antiparticles (which have the same mass but opposite charge) behave differently. Only with matter-antimatter asymmetry can they hope to explain why the universe, which was created with equal parts of matter and antimatter, is filled mostly with matter today. Without this asymmetry, matter and antimatter would have annihilated one another leaving a cold, dim glow of light with no material particles at all.
The first experimental evidence for the matter-antimatter asymmetry known as CP violation was discovered in 1964 at Brookhaven Lab. This Nobel-Prize-winning experiment also involved the decays of kaons, but demonstrated what is now referred to as "indirect" CP violation. This violation arises from a subtle imperfection in the two distinct types of neutral kaons.
The target of the present calculation is a phenomenon that is even more elusive: a one-part-in-a-million difference between the matter and antimatter decay probabilities. The small size of this "direct" CP violation made its experimental discovery very difficult, requiring 36 years of intense experimental effort following the 1964 discovery of "indirect" CP violation.
While these two examples of matter-antimatter asymmetry are of very different size, they are related by a remarkable theory for which physicists Makoto Kobayashi and Toshihide Maskawa were awarded the 2008 Nobel Prize in physics. The theory provides an elegant and simple explanation of CP violation that manages to explain both the 1964 experiment and later CP-violation measurements in experiments at the KEK laboratory in Japan and the SLAC National Accelerator Laboratory in California.
"This new calculation provides another test of this theory—a test that the Standard Model passes, at least at the present level of accuracy," said Christoph Lehner, a Brookhaven Lab member of the team.
Although the Standard Model does successfully relate the matter-antimatter asymmetries seen in the 1964 and later experiments, this Standard-Model asymmetry is insufficient to explain the preponderance of matter over antimatter in the universe today.
"This suggests that a new mechanism must be responsible for the preponderance of matter of which we are made," said Christopher Kelly, a member of the team from the RIKEN BNL Research Center (RBRC). "This one-part-per-million, direct CP violation may be a good place to first see it. The approximate agreement between this new calculation and the 2000 experimental results suggests that we need to look harder, which is exactly what the team performing this calculation plans to do."
This calculation was carried out on the Blue Gene/Q supercomputers at the RIKEN BNL Research Center (RBRC), at Brookhaven National Laboratory, at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory, and at the DiRAC facility at the University of Edinburgh. The research was carried out by Ziyuan Bai, Norman Christ, Robert Mawhinney, and Daiqian Zhang of Columbia University; Thomas Blum of the University of Connecticut; Peter Boyle and Julien Frison of the University of Edinburgh; Nicolas Garron of Plymouth University; Chulwoo Jung, Christoph Lehner, and Amarjit Soni of Brookhaven Lab; Christopher Kelly and Taku Izubuchi of the RBRC and Brookhaven Lab; and Christopher Sachrajda of the University of Southampton. The work was funded by the U.S. Department of Energy's Office of Science, by the RIKEN Laboratory of Japan, and by the U.K. Science and Technology Facilities Council. The ALCF is a DOE Office of Science User Facility.

Monday, June 1, 2015

A magnetic field can steer sound? Regulate heat? Ohio State researchers prove phonons have magnetic properties

A team led by Ohio State’s Wolfgang Windl, Ph.D., used OSC’s Oakley Cluster to calculate acoustic phonon movement within an indium-antimonide semiconductor under a magnetic field. Their findings show that phonon amplitude-dependent magnetic moments are induced on the atoms, which change how they vibrate and transport heat.

Phonons—the quasiparticles that carry both heat and sound—have magnetic properties, according to a landmark study supported by Ohio Supercomputer Center (OSC) services and recently published by a research group from The Ohio State University.
In a recent issue of the journal Nature Materials, the researchers describe how a magnetic field roughly as strong as that of a medical MRI scanner reduced the amount of heat flowing through a semiconductor by 12 percent. Simulations performed at OSC then identified the reason: the magnetic field induces a diamagnetic response in the atomic vibrations known as phonons, which changes how they transport heat.
“This adds a new dimension to our understanding of acoustic waves,” said Joseph Heremans, Ph.D., Ohio Eminent Scholar in Nanotechnology and a professor of mechanical engineering at Ohio State whose group performed the experiments. “We’ve shown that we can steer heat magnetically. With a strong enough magnetic field, we should be able to steer sound waves, too.”
People might be surprised enough to learn that heat and sound have anything to do with each other, much less that either can be controlled by magnets, Heremans acknowledged. But both are expressions of the same form of energy, quantum mechanically speaking. So any force that controls one should control the other.
The nature of the effect of the magnetic field initially was not understood and subsequently was investigated through computer simulations performed on OSC’s Oakley Cluster by Oscar Restrepo, Ph.D., a research associate, Nikolas Antolin, a doctoral student, and Wolfgang Windl, Ph.D., a professor, all of Ohio State’s Department of Materials Science and Engineering. After painstakingly examining all possible magnetic responses that a non-magnetic material can have to an external field, they found that the effect is due to a diamagnetic response, which exists in all materials. This suggests then that the general effect should be present in any solid.
The implication: in materials such as glass, stone, plastic—materials which are not conventionally magnetic—heat can be controlled magnetically, if you have a powerful enough magnet. This development may have future impacts on new energy production processes.
But, there won’t be any practical applications of this discovery any time soon: seven-tesla magnets like the one used in the study don’t exist outside of hospitals and laboratories, and a semiconductor made of indium antimonide had to be chilled to -450 degrees Fahrenheit (-268 degrees Celsius)—very close to absolute zero—to make the atoms in the material slow down enough for the phonons’ movements to be detectible.
To simulate the experiment, Windl and his computation team employed a quantum mechanical modeling strategy known as density functional theory (DFT). They used DFT to determine how the electron distribution changed when atoms vibrated with or without a magnetic field. The motion of the electrons around their atoms changed in the field, creating diamagnetic moments when phonons were present. These moments then reacted to the field and slowed the heat transport, much as an eddy-current brake slows a train.
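As a rough illustration of the kind of quantity such simulations are built on, the Python sketch below computes the phonon branches of a textbook one-dimensional diatomic chain from a single interatomic force constant. The force constant here is an assumed toy value; in the actual study, the interatomic forces and their field dependence came from DFT:

    # Textbook building block: phonon branches of a 1D diatomic chain.
    # K is an assumed toy force constant; the masses and lattice constant
    # are indium's and antimony's, but this is illustrative, not the
    # study's DFT calculation.
    import numpy as np

    K = 10.0                               # assumed force constant (N/m)
    amu = 1.66054e-27                      # kg per atomic mass unit
    m1, m2 = 114.82 * amu, 121.76 * amu    # In and Sb atomic masses
    a = 6.48e-10                           # InSb lattice constant (m)

    k = np.linspace(0, np.pi / a, 200)     # wavevectors out to the zone boundary
    s = 1 / m1 + 1 / m2
    root = np.sqrt(s**2 - 4 * np.sin(k * a / 2) ** 2 / (m1 * m2))
    w_acoustic = np.sqrt(K * (s - root))   # heat-carrying acoustic branch
    w_optical = np.sqrt(K * (s + root))    # optical branch

    print(f"top of the acoustic branch: {w_acoustic.max() / (2 * np.pi) / 1e12:.1f} THz")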
The simulations were conducted on the Oakley Cluster, an HP/Intel Xeon system with more than 8,300 processor cores to provide researchers with a peak performance of 154 Teraflops—tech-speak for 154 trillion calculations per second. Since atoms can vibrate in many different ways, a large number of simulations were necessary, consuming approximately 1.5 million CPU hours even on a machine as powerful as Oakley. OSC engineers also helped the research team use OSC’s high-throughput, parallel file system to handle the immense datasets generated by the DFT model.
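Those figures translate into wall-clock time with simple arithmetic (using only the numbers quoted above, and assuming, hypothetically, perfect full-machine utilization):

    # Converting 1.5 million CPU-hours on ~8,300 cores into wall-clock time,
    # under the simplifying assumption that the job had the whole machine.
    cpu_hours = 1.5e6
    cores = 8300
    wall_hours = cpu_hours / cores
    print(f"~{wall_hours:.0f} hours, about {wall_hours / 24:.1f} days")  # ~7.5 days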

“OSC offered us phenomenal support; they supported our compilation and parallel threading issues, helped us troubleshoot hardware issues when they arose due to code demands, and moved us to the Lustre high-performance file system after we jammed their regular file system,” said Antolin, who is the expert for high-demand computations in Windl’s group.
“Dr. Windl and his team are important OSC clients, and we’re always pleased to support their research projects with our hardware, software and staff support services,” said David Hudak, Ph.D., OSC’s director of supercomputer services. “With the addition of the Ruby Cluster this past fall and another, much more powerful system upcoming this fall, OSC will continue to offer even larger, faster and more powerful services to support this type of discovery and innovation.”
Next, the group plans to test whether they can deflect sound waves sideways with magnetic fields.
Coauthors on the study included graduate student Hyungyu Jin and postdoctoral researcher Stephen Boona from mechanical and aerospace engineering; and Roberto Myers, Ph.D., an associate professor of materials science and engineering, physics and mechanical and aerospace engineering.
Funding for the study came from the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research and the National Science Foundation (NSF), including funds from the NSF Materials Research Science and Engineering Center at Ohio State. Computing services were provided by the Ohio Supercomputer Center.

Note: Significant portions of this story were adapted from a release written earlier by Pam Frost Gorder in the Research & Innovation Communications office at The Ohio State University: https://news.osu.edu/news/2015/03/23/heatmag/
The Ohio Supercomputer Center (OSC), a member of the Ohio Technology Consortium of the Ohio Board of Regents, addresses the rising computational demands of academic and industrial research communities by providing a robust shared infrastructure and proven expertise in advanced modeling, simulation and analysis. OSC empowers scientists with the vital resources essential to make extraordinary discoveries and innovations, partners with businesses and industry to leverage computational science as a competitive force in the global knowledge economy, and leads efforts to equip the workforce with the key technology skills required to secure 21st century jobs. For more, visit www.osc.edu.