
System Bits: Oct. 24

Light not wires; selective memory; nanotube fiber antennas.


Optical communication on silicon chips
The huge increase in computing performance in recent decades has been achieved by squeezing ever more transistors into a tighter space on microchips. But this downsizing has also meant packing the wiring within microprocessors ever more tightly together, leading to effects such as signal leakage between components, which can slow communication between different parts of the chip. This delay, known as the interconnect bottleneck, is becoming an increasing problem in high-speed computing systems.

One approach to solving this issue is to use light rather than wires to communicate between different parts of a microchip, but this is no easy task because silicon does not emit light easily.

Now, however, MIT researchers have devised a light emitter and detector that can be integrated into silicon CMOS chips.

The team explained that the device is built from an ultrathin semiconductor material called molybdenum ditelluride that belongs to an emerging group of materials known as 2D transition-metal dichalcogenides. Unlike conventional semiconductors, the material can be stacked on top of silicon wafers.

Researchers have designed a light-emitter and detector that can be integrated into silicon CMOS chips. This illustration shows a molybdenum ditelluride light source for silicon photonics.
Source: MIT

Researchers have been trying to find materials that are compatible with silicon in order to bring optoelectronics and optical communication on-chip, but this has proven very difficult. For example, gallium arsenide is very good for optics, but it cannot be grown on silicon easily because the two semiconductors are incompatible. The 2D molybdenum ditelluride, however, can be mechanically attached to any material.

Another difficulty with integrating other semiconductors with silicon is that the materials typically emit light in the visible range, but light at these wavelengths is simply absorbed by silicon. But molybdenum ditelluride emits light in the infrared range, which is not absorbed by silicon, meaning it can be used for on-chip communication.

The researchers are now investigating other materials that could be used for on-chip optical communication, including black phosphorus, which can be tuned to emit light at different wavelengths by altering the number of layers used.

The hope is that if they are able to communicate on-chip via optical signals instead of electronic signals, they will be able to do so more quickly, and while consuming less power.

Making high-capacity data caches more efficient
As transistor counts in processors have gone up, the relatively slow connection between the processor and main memory has become the chief impediment to improving computing performance. Now, researchers from MIT, Intel, and ETH Zurich have created a cache-management scheme that they say improves the data rate of in-package DRAM caches by 33 to 50 percent.

The bandwidth of in-package DRAM can be five times higher than that of off-package DRAM, but it turns out that previous schemes spend too much of that traffic accessing metadata or moving data between in- and off-package DRAM rather than accessing the data itself, wasting a lot of bandwidth. As a result, the performance falls short of what the new technology can deliver.

Metadata refers to the data that describe where data in the cache comes from. In a modern computer chip, when a processor needs a particular chunk of data, it will check its local caches to see if the data is already there. Data in the caches is tagged with the addresses in main memory from which it is drawn; the tags are the metadata.

The team noted that a typical on-chip cache might have room for 64,000 data items with 64,000 tags. Since a processor doesn't want to search all 64,000 entries for the one it's interested in, cache systems usually organize data using something called a hash table. When a processor seeks data with a particular tag, it first feeds the tag to a hash function, which processes it in a prescribed way to produce a new number. That number designates a slot in a table of data, which is where the processor looks for the item it's interested in.

The point of a hash function is that very similar inputs produce very different outputs. That way, if a processor is relying heavily on data from a narrow range of addresses — if, for instance, it’s performing a complicated operation on one section of a large image — that data is spaced out across the cache so as not to cause a logjam at a single location.

Hash functions can, however, produce the same output for different inputs, which is all the more likely if they have to handle a wide range of possible inputs, as caching schemes do. So a cache’s hash table will often store two or three data items under the same hash index. Searching two or three items for a given tag, however, is much better than searching 64,000.
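The lookup scheme described above can be sketched in a few lines. This is a minimal illustration, not the researchers' implementation: tags are hashed to a slot, and each slot holds a few (tag, data) pairs to absorb collisions, so a lookup scans only a handful of entries instead of the whole cache. The sizes and eviction policy are arbitrary stand-ins.

```python
NUM_SLOTS = 1024        # hypothetical number of hash slots
WAYS = 3                # entries stored per slot, per the 2-3 items noted above

class HashedCache:
    def __init__(self):
        self.slots = [[] for _ in range(NUM_SLOTS)]

    def _slot(self, tag):
        # Any hash that spreads nearby addresses apart works here;
        # Python's built-in hash() is just a stand-in.
        return hash(tag) % NUM_SLOTS

    def insert(self, tag, data):
        bucket = self.slots[self._slot(tag)]
        if len(bucket) == WAYS:
            bucket.pop(0)          # evict the oldest entry (FIFO stand-in)
        bucket.append((tag, data))

    def lookup(self, tag):
        # Scan only the few entries in the matching slot.
        for t, d in self.slots[self._slot(tag)]:
            if t == tag:
                return d           # cache hit
        return None                # miss: fall back to main memory

cache = HashedCache()
cache.insert(0xDEAD0040, "pixel block A")
print(cache.lookup(0xDEAD0040))   # hit: pixel block A
print(cache.lookup(0xBEEF0080))   # miss: None
```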

Here’s where the difference between DRAM and SRAM, the technology used in standard caches, comes in, the researchers said. For every bit of data it stores, SRAM uses six transistors. DRAM uses one, which means that it’s much more space-efficient. But SRAM has some built-in processing capacity, and DRAM doesn’t. If a processor wants to search an SRAM cache for a data item, it sends the tag to the cache. The SRAM circuit itself compares the tag to those of the items stored at the corresponding hash location and, if it gets a match, returns the associated data.

DRAM, on the other hand, can’t do anything but transmit requested data. So the processor would request the first tag stored at a given hash location and, if it’s a match, send a second request for the associated data. If it’s not a match, it will request the second stored tag, and if that’s not a match, the third, and so on, until it either finds the data it wants or gives up and goes to main memory.
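The cost difference between the two lookup styles can be made concrete with a sketch. This is an illustrative model, not the actual hardware protocol: the SRAM path compares tags inside the cache and needs one exchange, while the DRAM path, lacking built-in logic, must fetch each stored tag over the bus and then fetch the data separately, paying one transfer per probe.

```python
def sram_lookup(slot, tag):
    # Tag comparison happens inside the SRAM circuit; one round trip.
    for t, d in slot:
        if t == tag:
            return d, 1            # (data, bus transfers)
    return None, 1

def dram_lookup(slot, tag):
    # DRAM can only transmit requested data, so the processor probes
    # each stored tag one request at a time.
    transfers = 0
    for i, (t, _) in enumerate(slot):
        transfers += 1             # request the i-th stored tag
        if t == tag:
            transfers += 1         # a second request for the data itself
            return slot[i][1], transfers
    return None, transfers         # every tag fetched, no match

slot = [(0xA0, "x"), (0xB0, "y"), (0xC0, "z")]
print(sram_lookup(slot, 0xC0))     # ('z', 1)
print(dram_lookup(slot, 0xC0))     # ('z', 4): three tag fetches plus one data fetch
```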

In-package DRAM may have a lot of bandwidth, but this process squanders it. The new solution — Banshee — avoids all that metadata transfer with a slight modification of a memory management system found in most modern chips.

Any program running on a computer chip has to manage its own memory use, and it’s generally handy to let the program act as if it has its own dedicated memory store. But in fact, multiple programs are usually running on the same chip at once, and they’re all sending data to main memory at the same time. So each core, or processing unit, in a chip usually has a table that maps the virtual addresses used by individual programs to the actual addresses of data stored in main memory.

Banshee adds three bits of data to each entry in the table. One bit indicates whether the data at that virtual address can be found in the DRAM cache, and the other two indicate its location relative to any other data items with the same hash index.

The buffer Banshee requires is small, only 5 kilobytes, so its addition would not use up too much valuable on-chip real estate. And the researchers' simulations show that the time required for one additional address lookup per memory access is trivial compared to the bandwidth savings Banshee affords.

Flexible fibers work well but weigh much less
According to Rice University researchers, fibers made of carbon nanotubes configured as wireless antennas can be as good as copper antennas but 20 times lighter, and may offer practical advantages for aerospace applications and wearable electronics where weight and flexibility are factors.

The researchers believe the discovery offers more potential applications for the strong, lightweight nanotube fibers developed by the Rice lab of chemist and chemical engineer Matteo Pasquali.

The lab introduced the first practical method for making high-conductivity carbon nanotube fibers in 2013 and has since tested them for use as brain implants and in heart surgeries, among other applications.

Rice graduate student Amram Bengio prepares a sample nanotube fiber antenna for evaluation. The fibers had to be isolated in Styrofoam mounts to assure accurate comparisons with each other and with copper.
Source: Rice University

The researchers believe this work could help engineers who seek to streamline materials for airplanes and spacecraft where weight equals cost. Increased interest in wearables like wrist-worn health monitors and clothing with embedded electronics could benefit from strong, flexible and conductive fiber antennas that send and receive signals.

The Rice team and colleagues at the National Institute of Standards and Technology (NIST) developed a metric they call "specific radiation efficiency" to judge how well nanotube fibers radiate signals at the common wireless communication frequencies of 1 and 2.4 gigahertz, and compared their results with standard copper antennas. They made threads, each comprising from eight to 128 fibers and about as thin as a human hair, cut to the same length and tested on a custom rig that made straightforward comparisons with copper practical.
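The article does not spell out how "specific radiation efficiency" is defined, but a plausible reading, by analogy with specific strength, is radiation efficiency normalized by antenna mass. The sketch below uses that assumed definition, and all numbers are illustrative rather than measured values.

```python
def specific_radiation_efficiency(efficiency, mass_g):
    # Assumed definition: radiation efficiency per unit mass.
    # This is an interpretation, not the NIST/Rice formula.
    return efficiency / mass_g

# Illustrative figures: equal radiation efficiency, but the nanotube
# fiber antenna at roughly 1/20 the mass of copper (per the article).
copper   = specific_radiation_efficiency(efficiency=0.9, mass_g=1.0)
nanotube = specific_radiation_efficiency(efficiency=0.9, mass_g=0.05)
print(nanotube / copper)   # 20.0
```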

Antennas typically have a specific shape and must be designed very carefully; once in that shape, they need to stay that way.

Contrary to earlier results by other labs (which used different carbon nanotube fiber sources), the Rice researchers found the fiber antennas matched copper for radiation efficiency at the same frequencies and diameters. Their results support theories that predicted the performance of nanotube antennas would scale with the density and conductivity of the fiber.


