Graphene rings; ordinary language.
Bottom-up nanoribbons
Concentric hexagons of graphene grown in a furnace at Rice University represent the first time anyone has synthesized graphene nanoribbons on metal from the bottom up — atom by atom.
As seen under a microscope, the layers brought onions to mind, according to Rice chemist James Tour, until a colleague suggested flat graphene could never be like an onion. “So I said, ‘OK, these are onion rings,’” Tour quipped. The name stuck.
The challenge was to figure out how such a thing could grow. Usually, graphene grown in a hot furnace by chemical vapor deposition starts on a seed — a speck of dust or a bump on a copper or other metallic surface. One carbon atom latches onto the seed in a process called nucleation and others follow to form the familiar chicken-wire grid.
Experiments in the lab to see how graphene grows under high pressure and in a hydrogen-rich environment produced the first rings. Under those conditions, Rice researchers found that the entire edge of a fast-growing sheet of graphene becomes a nucleation site when hydrogenated. The edge lets carbon atoms get under the graphene skin, where they start a new sheet.
But because the top graphene grows so fast, it eventually halts the flow of carbon atoms to the new sheet underneath. The bottom stops growing, leaving a graphene ring. Then the process repeats.
The mechanism relies on that top layer to restrict the flow of carbon to the bottom, so what results is a stack of single crystals growing one on top of the other.
The big news here, the researchers said, is that the relative pressures of hydrogen and carbon in the growth environment can be changed to achieve entirely new structures that are dramatically different from regular graphene.
The width of the rings, which ranged from 10 to 450nm, also affects their electronic properties, so finding a way to control it will be one focus of continued research. If the researchers can consistently make 10nm ribbons, they can begin to gate them and turn them into low-voltage transistors. The ribbons also may be suitable for lithium storage in advanced lithium-ion batteries.
Writing programs using ordinary language
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have demonstrated that, for a few specific tasks, it’s possible to write computer programs using ordinary language rather than special-purpose programming languages.
The work may be of some help to programmers, and it could let nonprogrammers manipulate common types of files — like word-processing documents and spreadsheets — in ways that previously required familiarity with programming languages. But the researchers’ methods could also prove applicable to other programming tasks, expanding the range of contexts in which programmers can specify functions using ordinary language.
The researchers don’t expect this to be possible for everything in programming, but in some areas there are many examples of how humans have translated natural language into code. Where that information is available, it may be possible to learn how to translate such language to code automatically. In other cases, programmers may already be in the practice of writing specifications that describe computational tasks in precise and formal language.
Researchers have used examples harvested from the Web to train a computer system to convert natural-language descriptions into so-called “regular expressions”: combinations of symbols that enable file searches that are far more flexible than the standard search functions available in desktop software.
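To make the idea concrete, here is a minimal sketch of the kind of pairing such a system learns; the description and the regular expression below are invented for illustration and are not output from the MIT system:

```python
import re

# Hypothetical natural-language description (not from the MIT work):
#   "lines containing a three-digit number followed by 'kb',
#    in any capitalization"
# ...and a regular expression a trained system might produce for it:
pattern = re.compile(r"\b\d{3}\s*kb\b", re.IGNORECASE)

lines = [
    "cache size: 512 kb",
    "buffer: 64 KB",      # only two digits, so this should not match
    "disk: 900KB free",
]

# Unlike a literal-string search, the pattern matches any three-digit
# number and any capitalization of "kb".
for line in lines:
    if pattern.search(line):
        print(line)       # prints the first and third lines
```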
They’ve also described a system that automatically learned how to handle data stored in different file formats, based on specifications prepared for a popular programming competition.
That system can automatically write what are called input-parsing programs, essential components of all software applications. Every application has an associated file type: .doc for word-processing programs, .pdf for document viewers, .mp3 for music players, and so on. And every file type organizes data differently. An image file, for instance, might begin with a few bits indicating the file type, a few more indicating the width and height of the image, and a few more indicating the number of bits assigned to each pixel, before proceeding to the bits that actually represent pixel colors.
Input parsers figure out which parts of a file contain which types of data: Without an input parser, a file is just a random string of zeroes and ones.
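To illustrate what an input parser does, here is a minimal hand-written sketch for the hypothetical image-header layout described above; the field sizes and magic value are invented for this example and are not taken from the MIT work:

```python
import struct

# Invented header layout, following the description above:
#   2 bytes  magic value identifying the file type
#   4 bytes  image width  (unsigned, big-endian)
#   4 bytes  image height (unsigned, big-endian)
#   1 byte   bits per pixel
# Everything after the header is the pixel data itself.
HEADER = struct.Struct(">2sIIB")

def parse_image(data: bytes) -> dict:
    """Split a raw byte string into typed header fields plus pixel data."""
    magic, width, height, bpp = HEADER.unpack_from(data)
    if magic != b"IM":  # invented magic value for this sketch
        raise ValueError("not an image file")
    return {"width": width, "height": height,
            "bits_per_pixel": bpp, "pixels": data[HEADER.size:]}

# Without a parser like this, the same bytes are just an opaque string.
sample = struct.pack(">2sIIB", b"IM", 640, 480, 24) + b"\x00" * 12
print(parse_image(sample))
```

The MIT system's goal is to generate code that plays this role automatically, starting from the natural-language specification of the format.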
The MIT researchers’ system can write an input parser based on specifications written in natural language. They tested it on more than 100 examples culled from the Association for Computing Machinery’s International Collegiate Programming Contest, which includes file specifications for every programming challenge it poses. The system was able to produce working input parsers for about 80 percent of the specifications. And in the remaining cases, changing just a word or two of the specification usually yielded a working parser.
The researchers expect this could be used as an interactive tool for the developer, who could look at the failing cases and see what kind of changes need to be made to the natural language, since a particular word may be hard for the system to figure out.
~Ann Steffora Mutschler