Improving Library Characterization with Machine Learning


Efficient and accurate library characterization is a critical step in full-chip or block-level design flows because it ensures that all library elements perform to specification under all intended operating conditions. However, traditional library characterization and validation have become increasingly expensive in terms of computation and engineering effort, due to complexity and the amount o... » read more

Will AI Drive Scaling Forward?


The almost ubiquitous rollout of AI and its offshoots—machine learning, deep learning, neural nets of all types—will require significantly more processing power as the amount of data that needs to be processed continues to grow by orders of magnitude. What isn't clear yet is how that will affect semiconductor manufacturing or how quickly that might happen. AI is more than the latest buz... » read more

Week in Review: IoT, Security, Auto


Internet of Things Arm made five 2019 predictions for the Internet of Things. They are: The intelligent home goes mainstream; personalized delivery options; improved health-care service; smart cities seek to improve revenue streams and citizen engagement; and smart buildings use more technology for efficiencies. The company also commissioned a worldwide survey of 2,000 consumers, conducted by ... » read more

Security, Scaling and Power


If anyone still doubts the slowdown and increasing irrelevance of Moore's Law, Intel's official unveiling of its advanced packaging strategy should put those doubts to rest. Inertia has ended and the roadmap is being rewritten. Intel's discussion of advanced packaging is nothing new. The company has been public about its intentions for years, and started dropping hints back when Pat Gelsinger ... » read more

The Cost Of Accuracy


How accurate does a system need to be, and what are you willing to pay for that accuracy? There are many sources of inaccuracy throughout the development flow of electronic systems, most of which involve complex tradeoffs. Inaccuracy affects your design in ways you may not even be aware of, hidden by best practices or guard-banding. EDA tools also inject some inaccuracy. As the i... » read more

Concurrent Test


Derek Wu, senior staff applications engineer at Advantest, looks at the need for doing multiple tests at the same time as chip designs become more complex, increasingly heterogeneous, and much more difficult to test at advanced nodes. https://youtu.be/-8inbjX_af0 » read more

FPGA Graduates To First-Tier Status


Robert Blake, president and CEO of Achronix, sat down with Semiconductor Engineering to talk about fundamental shifts in compute architectures and why AI, machine learning and various vertical applications are driving demand for discrete and embedded FPGAs. SE: What’s changing in the FPGA market? Blake: Our big focus is developing the next-generation architecture. We started this projec... » read more

Making Sure A Heterogeneous Design Will Work


An explosion of various types of processors and localized memories on a chip or in a package is making it much more difficult to verify and test these devices, and to sign off with confidence. In addition to timing and clock domain crossing issues, which are becoming much more difficult to deal with in complex chips, some of the new devices incorporate AI, machine learning or deep learning... » read more

Methodologies And Flows In A Rapidly Changing Market


A growing push toward more heterogeneity and customization in chip design is creating havoc across the global supply chain, which until a couple of years ago was highly organized and extremely predictable. While existing tools still work well enough, no one has yet figured out the most efficient way to use them in a variety of new applications. Technology is still being developed in those marke... » read more

Looking Beyond The CPU


CPUs no longer deliver the same kind of performance improvements as in the past, raising questions across the industry about what comes next. The growth in processing power delivered by a single CPU core began stalling out at the beginning of the decade, when power-related issues such as heat and noise forced processor companies to add more cores rather than pushing up the clock frequency... » read more
