Author's Latest Posts


Design Reuse Vs. Abstraction


Chip designers have been searching for a hardware description language abstraction level higher than RTL for decades. But not everyone is moving in that direction, and there appear to be enough options available through design reuse to forestall that shift for many chipmakers. Pushing to new levels of abstraction is a frequent topic of discussion in the design world, particula... » read more

System Bits: July 3


Machine learning network for personalized autism therapy MIT Media Lab researchers have developed a personalized deep learning network for therapy use with children with autism spectrum conditions. They noted that these children often have trouble recognizing the emotional states of people around them, such as distinguishing a happy face from a fearful face. To help with this, some therapists... » read more

Adding NoCs To FPGA SoCs


FPGA SoCs straddle the line between flexibility and performance by combining elements of both FPGAs and ASICs. But as they find a home in more safety- and mission-critical markets, they also are facing some of the same issues as standard SoCs, including the ability to move larger and larger amounts of data quickly throughout an increasingly complex device, and the difficulty in verifying and de... » read more

System Bits: June 26


I’m enjoying a very busy Design Automation Conference this week in San Francisco, and am on the lookout for interesting research topics here. In the meantime, enjoy a few noteworthy items from around the globe. AI platform diagnoses Zika and other pathogens University of Campinas (UNICAMP) researchers in Brazil have developed an AI platform that can diagnose several diseases with a high deg... » read more

Defining Edge Memory Requirements


Defining edge computing memory requirements is a growing problem for chipmakers vying for a piece of this market, because those requirements vary by platform, by application, and even by use case. Edge computing plays a role in artificial intelligence, automotive, IoT, data centers, and wearables, and each has significantly different memory requirements. So it's important to have memory requirement... » read more

System Bits: June 19


ML algorithm 3D scan comparison up to 1,000 times faster Medical image registration typically takes two hours or more to meticulously align each of potentially a million pixels in the combined scans. To address this, MIT researchers have created a machine-learning algorithm they say can register brain scans and other 3D images more than 1,000 times more quickly using novel learning... » read more

Near-Threshold Issues Deepen


Complex issues stemming from near-threshold computing, where the operating voltage and threshold voltage are very close together, are becoming more common at each new node. In fact, there are reports that the top five mobile chip companies, all with chips at 10/7nm, have had performance failures traced back to process variation and timing issues. Once a rather esoteric design technique, near... » read more

Farming Goes High-Tech


Data from dirt — literally — is enabling farmers to perform detailed analysis to make their farming practices smarter, more efficient, and significantly more productive. Companies in every market are leveraging data to their business advantage, and the agricultural sector is no different. Even the venture capital community has taken note. According to ABI Research, some sizeable venture ... » read more

System Bits: June 12


Writing complex ML/DL analytics algorithms Rice University researchers in the DARPA-funded Pliny Project believe they have the answer for every stressed-out systems programmer who has struggled to implement complex objects and workflows on ‘big data’ platforms like Spark and thought: “Isn’t there a better way?” Their answer: yes, with PlinyCompute, which the team describes as “a sys... » read more

System Bits: June 5


The right squeeze for quantum computing In an effort to bring quantum computers closer to practical realization, Hokkaido University and Kyoto University researchers have developed a theoretical approach to quantum computing that is 10 billion times more tolerant to errors than current theoretical models. The team said their method may lead to quantum computers that use the diverse properties of sub... » read more
