
Improving Performance And Simplifying Coding With XY Memory’s Implicit Parallelism


Instruction-level Parallelism (ILP) refers to design techniques that enable more than one RISC instruction to be executed simultaneously in the same instruction cycle, boosting processor performance by increasing the amount of work done in a given time interval and thereby increasing throughput. This parallelism can be explicit, where each additional instruction is explicitly part of the instruc... » read more
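The difference between implicit and explicit parallelism is easiest to see in a small sketch. The C++ fragment below is only an illustration of the general idea (plain scalar code, not the XY memory programming model, and the four-way unrolling factor is an arbitrary choice): the first loop leaves it to the hardware to find independent work, while the second explicitly breaks the accumulation dependency so several multiply-adds can issue in the same cycle.

```cpp
#include <cstddef>

// Baseline: each iteration's multiply-accumulate depends on the previous
// value of 'sum', which serializes the chain of additions.
float dot(const float* a, const float* b, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}

// Explicitly exposed parallelism: four independent accumulators break the
// dependency chain, so a superscalar core can keep several multiply-add
// units busy in the same cycle. (Assumes n is a multiple of 4 for brevity.)
float dot_ilp(const float* a, const float* b, std::size_t n) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    for (std::size_t i = 0; i < n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    return (s0 + s1) + (s2 + s3);
}
```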

Memory Access In AI Systems


Memory access is a key consideration in AI system design. Ron Lowman, strategic marketing manager for IP at Synopsys, talks about how memory affects overall power consumption, why the partitioning of on-chip and off-chip memory is so critical to performance and power, and how this changes from the cloud to the edge. » read more

Simulation: Go Parallel Or Go Home


Although complemented by other valuable technologies, functional simulation remains at the heart of semiconductor verification. Every chip project still develops a testbench, usually compliant with the Universal Verification Methodology (UVM), and a large test suite. Constrained-random stimulus generation has largely replaced hand-crafted tests, but at the expense of much more simulation time. ... » read more
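As a rough illustration of that trade-off, consider the sketch below. Real UVM testbenches are written in SystemVerilog; this C++ version, with an invented BusTransaction type and arbitrary constraint ranges, only shows why constrained-random generation is cheap to write yet expensive to run: a few lines produce thousands of legal transactions, every one of which still has to be simulated.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Conceptual sketch of constrained-random stimulus: instead of hand-writing
// each transaction, draw many of them at random within legal bounds.
struct BusTransaction {
    std::uint32_t addr;   // constrained to a legal address window
    std::uint8_t  length; // constrained to 1..16 beats
    bool          write;
};

std::vector<BusTransaction> randomStimulus(std::size_t count, std::uint32_t seed) {
    std::mt19937 rng(seed);  // fixed seed keeps a failing run reproducible
    std::uniform_int_distribution<std::uint32_t> addrDist(0x10000000u, 0x1FFFFFFFu);
    std::uniform_int_distribution<int> lenDist(1, 16);
    std::bernoulli_distribution writeDist(0.5);

    std::vector<BusTransaction> txns;
    txns.reserve(count);
    for (std::size_t i = 0; i < count; ++i)
        txns.push_back({addrDist(rng),
                        static_cast<std::uint8_t>(lenDist(rng)),
                        writeDist(rng)});
    return txns;
}
```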

Divided On System Partitioning


Building an optimal implementation of a system using a functional description has been an industry goal for a long time, but it has proven to be much more difficult than it sounds. The general idea is to take software designed to run on a processor and to improve performance using various types of alternative hardware. That performance can be specified in various ways and for specific applic... » read more

Plasticine: A Reconfigurable Architecture For Parallel Patterns (Stanford)


Source: Stanford University. Stanford University has been developing Plasticine, a reconfigurable architecture designed to execute parallel patterns efficiently. "Abstract: Reconfigurable architectures have gained popularity in recent years as they allow the design of energy-efficient accelerators. Fine-grain fabrics (e.g. FPGAs) have traditionally suffered from performance and power inefficiencies due to bit-level ... » read more
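For readers unfamiliar with the term, parallel patterns are structured operations such as map and reduce whose parallelism is explicit in the pattern itself rather than recovered by loop analysis. The sketch below uses standard C++17 parallel algorithms purely as an illustration; it is not Plasticine's programming model, which the paper describes in detail.

```cpp
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

// Two common parallel patterns: an element-wise map followed by a reduction.
// Expressed this way, the parallelism is part of the pattern itself.
// (C++17 parallel algorithms; with GCC this requires linking against TBB.)
double sumOfSquares(const std::vector<double>& x) {
    std::vector<double> squared(x.size());
    // map: independent per-element work
    std::transform(std::execution::par, x.begin(), x.end(),
                   squared.begin(), [](double v) { return v * v; });
    // reduce: associative combination, so it can be evaluated as a tree
    return std::reduce(std::execution::par, squared.begin(), squared.end(), 0.0);
}
```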

Does System Design Still Need Abstraction?


About 15 years ago, the assumption in the EDA industry was that a move to system-level design was inevitable. The transition from gate-level design to a new entry point at the register transfer level (RTL) seemed complete, with logic synthesis widely adopted. The next step seemed obvious at the time: High-level synthesis (HLS) and transaction-based development beyond RTL—also taking into ... » read more

Address Simulation Turn-Around Time Bottlenecks with VCS Fine-Grained Parallelism


Non-stop growth in design size and complexity makes it more difficult than ever for verification teams to keep up with project demands and product goals. According to the Synopsys 2017 Global User Survey, “Verification taking longer than planned” is the top reason for tapeout delays, and “Simulation runtime performance” is the top challenge for verification. Since regression test turn-a... » read more

Taming Concurrency


Concurrency adds complexity for which the industry lacks appropriate tools, and the problem has grown to the point where errors can creep into designs with no easy or consistent way to detect them. In the past, when chips were essentially a single pipeline, this wasn't a problem. In fact, the early pioneers of EDA created a suitable language to describe and contain the necessary concurrency ... » read more
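The character of these errors is easiest to demonstrate with a software analogue. The C++ sketch below is only an illustration of the class of bug (an unsynchronized read-modify-write), not of any particular hardware design: the racy counter fails intermittently and is hard to catch by inspection, while the atomic version is well-defined.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Two contexts updating shared state concurrently. The plain counter has a
// data race that no single-threaded test will reliably expose; the atomic
// counter removes it.
int unsafeCounter = 0;            // racy read-modify-write
std::atomic<int> safeCounter{0};  // well-defined under concurrency

void worker(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        ++unsafeCounter;          // lost updates are possible here
        safeCounter.fetch_add(1, std::memory_order_relaxed);
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back(worker, 100000);
    for (auto& th : threads)
        th.join();
    // unsafeCounter is often less than 400000; safeCounter is always 400000.
    return safeCounter == 400000 ? 0 : 1;
}
```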

Chip Industry In Rapid Transition


Wally Rhines, CEO Emeritus at Mentor, a Siemens Business, sat down with Semiconductor Engineering to talk about global economics, AI, the growing emphasis on customization, and the impact of security and higher abstraction levels. What follows are excerpts of that conversation. SE: Where do you see the biggest changes happening across the chip industry? Rhines: 2018 was a hot year for fab... » read more

Intel’s Next Move


Gadi Singer, vice president and general manager of Intel's Artificial Intelligence Products Group, sat down with Semiconductor Engineering to talk about Intel's vision for deep learning and why the company is looking well beyond the x86 architecture and one-chip solutions. SE: What's changing on the processor side? Singer: The biggest change is the addition of deep learning and neural ne... » read more
