Machine Learning Meets IC Design


Machine Learning (ML) is one of today's hot buzzwords, but even though EDA deals with big-data types of problems, it has made little progress incorporating ML techniques into its tools. Many EDA problems and solutions are statistical in nature, which would suggest a natural fit. So why has EDA been so slow to adopt machine learning while other technology areas such as vision recog... » read more

Deep Learning Robust Grasps with Synthetic Point Clouds & Analytic Grasp Metrics (UC Berkeley)


Source: The research was the work of Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Aparicio Ojea, and Ken Goldberg, with support from the AUTOLAB team at UC Berkeley. Nimble-fingered robots enabled by deep learning Grabbing awkwardly shaped items that humans pick up daily is not so easy for robots, as they don’t know where to apply grip... » read more

System Bits: June 13


Nimble-fingered robots enabled by deep learning Grabbing awkwardly shaped items that humans pick up daily is not so easy for robots, as they don’t know where to apply grip. To overcome this, UC Berkeley researchers have built a robot that can pick up and move unfamiliar, real-world objects with a 99% success rate. Berkeley professor Ken Goldberg, postdoctoral researcher Jeff M... » read more

The Evolution Of Deep Learning For ADAS Applications


Embedded vision solutions will be a key enabler in making automobiles fully autonomous. Giving an automobile a set of eyes – in the form of multiple cameras and image sensors – is a first step, but it also will be critical for the automobile to interpret content from those images and react accordingly. To accomplish this, embedded vision processors must be hardware optimized for performanc... » read more

System Bits: June 6


Silicon nanosheets enable 5nm transistors To enable the manufacturing of 5nm chips, IBM, GLOBALFOUNDRIES, Samsung, and equipment suppliers have developed what they say is an industry-first process to build 5nm silicon nanosheet transistors. The development comes less than two years after they produced a 7nm test node chip with 20 billion transistors. Now, they’ve paved the way for 30 billi... » read more

What’s Next In Neural Networking?


Faster chips, more affordable storage, and open libraries are giving neural networks new momentum, and companies are now figuring out how to optimize the technology across a variety of markets. The roots of neural networking stretch back to the late 1940s and Claude Shannon’s Information Theory, but until several years ago the technology made relatively slow progress. The rush towar... » read more

System Bits: April 18


RISC-V errors Princeton University researchers have discovered a series of errors in the RISC-V instruction specification that are now leading to changes in the new system, which seeks to facilitate open-source design for computer chips. While testing a technique they created for analyzing computer memory use, the team found more than 100 errors involving incorrect orderings in the storage and retr... » read more

Biz Talk: ASICs


eSilicon CEO Jack Harding talks about the future of scaling, advanced packaging, the next big things—automotive, deep learning and virtual reality—and the need for security. Related Stories Executive Insight: Jack Harding (Aug 2016) eSilicon’s CEO looks at industry consolidation, competition, China’s impact, an... » read more

What Does AI Really Mean?


Seth Neiman, chairman of eSilicon, founder of Brocade Communications, and a board member and investor in a number of startups, sat down with Semiconductor Engineering to talk about advances in AI, what's changing, and how it ultimately could change our lives. What follows are excerpts of that conversation. SE: How far has AI progressed? Neiman: We’ve been working with AI since the mid 1... » read more

System Bits: Jan. 31


Optimizing code Code explicitly written to take advantage of parallel computing usually loses the benefit of compilers’ optimization strategies. To address this, MIT Computer Science and Artificial Intelligence Laboratory researchers have devised a new variation on a popular open-source compiler that optimizes before adding the code necessary for parallel execution. Charles E. Lei... » read more
