It’s been a quiet week on the blog front. If you’re looking for something to read during a bit of downtime while you enjoy the New Year, we present thoughts on what happened this year and what may be to come, from last week’s System-Level Design newsletter:
Editor in Chief Ed Sperling finds that acquisitions and the leading edge of design defined 2015, but other changes are ahead.
Technology Editor Brian Bailey observes that every year, smart people come up with new and inventive ways to celebrate the holidays.
Mentor Graphics’ Warren Kurisu looks at the challenges in architecting new IoT devices.
Synopsys’ Tom De Schutter uncorks poll results about software and virtual prototyping.
Aldec’s Henry Chan digs into why the Universal Verification Methodology is so important.
Arteris’ Kurt Shuler zeroes in on why physically aware NoC IP is critical in complex designs.
Agnisys’ Anupam Bakshi identifies the challenges in moving from a static spec to a live one.
eSilicon’s Mike Gianfagna examines how the basic high school science project has changed.
Cadence’s Frank Schirrmeister predicts a key breakthrough next year in the ability to seamlessly glide across levels of abstraction.
XtremeEDA’s Neil Johnson questions whether a competition is brewing between shift left and agile, both of which are amorphous strategies.