System Bits: Feb. 3

A viable silicon substitute; enhanced graphene; parallelized algorithms.


A viable silicon substitute
A new study by UC Berkeley, the University of Pennsylvania and the University of Illinois at Urbana-Champaign (UIUC) moves graphene a step closer to knocking silicon off as the dominant workhorse of the electronics industry.

The researchers noted that while silicon is ubiquitous in semiconductors and integrated circuits, they have been eyeing graphene, a one-atom-thick layer of crystallized carbon, as a replacement because of the ultrafast speed with which electrons can travel through the material.

Researchers have demonstrated a simple, reversible way of creating nanoscale devices from the 2D wonder material graphene. The stripes in the image above show differences in electron density in graphene. (Source: UC Berkeley)

The team found a way to control the movement and placement of electrons in graphene in a way that can make it easy to change the polarity of the charge with an electric field.

Enhanced graphene
Also in recent research with graphene, Rice University scientists have discovered that a winding thread of odd rings at the border of two sheets of graphene has qualities that could be valuable to manufacturers.

They noted that graphene rarely appears as a perfect lattice of chicken wire-like six-atom rings, and when grown via chemical vapor deposition, it usually consists of domains that bloom outward from hot catalysts until they meet up.

Where they meet, the regular rows of atoms aren’t necessarily aligned, so they have to adjust if they are to form a continuous graphene plane. That adjustment appears as a grain boundary, with irregular rows of five- and seven-atom rings that compensate for the angular disparity.

The Rice team had calculated that rings with seven carbon atoms can be weak spots that lessen the legendary strength of graphene, but their new research shows meandering grain boundaries can, in some cases, toughen what are known as polycrystalline sheets, nearly matching the strength of pristine graphene.

Periodic grain boundaries in graphene may lend mechanical strength and semiconducting properties to the atom-thick carbon material, according to calculations by scientists at Rice University. (Source: Rice University)

Interestingly, they can also create a “sizable electronic transport gap,” or band gap. Perfect graphene allows for the ballistic transport of electricity, but electronics require materials that can controllably stop and start the flow.

The researchers also determined that at certain angles these "sinuous" boundaries relieve stress that would otherwise weaken the sheet, and that alleviating stress along the boundary enhances the strength of the graphene. This effect applies only to sinuous grain boundaries, not to straight ones.

Parallelized algorithms
It is well understood that there are different ways of organizing data in a computer's memory, and that every data structure has its own advantages: Some are good for fast retrieval, some for efficient search, some for quick insertions and deletions, and so on.
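A priority queue, for example, always serves the highest-priority item first. A minimal single-threaded sketch using Python's standard-library `heapq` module (the task names here are illustrative):

```python
import heapq

# A binary heap gives O(log n) insertion and O(log n) removal
# of the smallest element -- a good fit for priority-ordered work.
tasks = []
heapq.heappush(tasks, (2, "write report"))
heapq.heappush(tasks, (1, "fix outage"))
heapq.heappush(tasks, (3, "refile tickets"))

# Items come out in priority order, lowest number first.
order = [heapq.heappop(tasks)[1] for _ in range(len(tasks))]
print(order)  # ['fix outage', 'write report', 'refile tickets']
```

The catch, as described below, is that every consumer competes for the same front element, which is exactly what breaks down under heavy multicore access.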

Hardware manufacturers make computer chips faster by giving them more cores, or processing units – but some data structures are not well adapted to multicore computing.

With algorithms that use a common data structure called a priority queue, performance improves with up to about eight cores, but adding any more cores actually causes performance to plummet.

To address this, researchers from MIT's Computer Science and Artificial Intelligence Laboratory have described a new way of implementing priority queues that lets them keep pace with the addition of new cores. In simulations, algorithms using their data structure continued to improve in performance as cores were added, up to a total of 80 cores.

In multicore systems, conflicts arise when multiple cores try to access the front of a priority queue at the same time. The problem is compounded by modern chips’ reliance on caches — high-speed memory banks where cores store local copies of frequently used data.

To avoid problems, the researchers relaxed the requirement that each core has to access the first item in the queue. If the items at the front of the queue can be processed in parallel — which must be the case for multicore computing to work, anyway — they can simply be assigned to cores at random.
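As an illustration of this randomized relaxation (a hypothetical sketch of the general idea, not the MIT team's actual implementation), a relaxed queue can hand out one of its first few items at random instead of strictly the minimum:

```python
import heapq
import random

class RelaxedPriorityQueue:
    """Illustrative relaxed queue: pop() returns one of the
    `relax` smallest items rather than strictly the minimum,
    which in a concurrent setting spreads accesses away from
    a single contended head element."""

    def __init__(self, relax=4):
        self.heap = []
        self.relax = relax

    def push(self, item):
        heapq.heappush(self.heap, item)

    def pop(self):
        # Choose randomly among the (up to) `relax` smallest items.
        k = min(self.relax, len(self.heap))
        candidates = heapq.nsmallest(k, self.heap)
        choice = random.choice(candidates)
        self.heap.remove(choice)
        heapq.heapify(self.heap)  # restore the heap invariant
        return choice

q = RelaxedPriorityQueue(relax=3)
for n in [5, 1, 4, 2, 3]:
    q.push(n)
first = q.pop()  # one of 1, 2 or 3, not necessarily the minimum
```

In a real concurrent implementation the random choice would be made lock-free per core; the point of the sketch is only that relaxing "exactly the front item" to "one of the front items" removes the single hot spot that all cores fight over.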