Accelerating AI And ML Applications With PCIe 5


The rapid adoption of sophisticated artificial intelligence/machine learning (AI/ML) applications and the shift to cloud-based workloads have significantly increased network traffic in recent years. Historically, the intensive use of virtualization ensured that server compute capacity adequately met the needs of heavy workloads. This was achieved by dividing or partitioning a single (physical) se... » read more

The Evolution Of Pervasive Computing


The computing world has come full circle toward pervasive computing. In fact, it has done so more than once, which from the outside may look like a more rapid spin cycle than a real change of direction. Dig deeper, though, and it's apparent that some fundamental changes are at work. The genesis of pervasive computing dates back to the introduction of the PC in 1981, prior to which all corpo... » read more

Open Source Hardware Risks


Open-source hardware is gaining attention on a variety of fronts, from chiplets and the underlying infrastructure to the ecosystems required to support open-source and hybrid open-source and proprietary designs. Open-source development is hardly a new topic. It has proven to be a successful strategy in the Linux world, but far less so on the hardware side. That is beginning to change, fueled... » read more

CEO Outlook: 2020 Vision


The start of 2020 is looking very different from the start of 2019. Markets that looked hazy at the start of 2019, such as 5G, are suddenly very much in focus. The glut of memory chips that dragged down the overall chip industry in 2019 has subsided. And a finely tuned supply chain that took decades to develop is splintering. A survey of CEOs from across the industry points to several common... » read more

Cloud Characterization


Library characterization is a compute-intensive task that takes days to weeks to complete. Runtimes for library characterization are increasing due to larger library sizes, a higher number of operating conditions to characterize, and the need for statistical variation modeling in libraries at 22/20nm and smaller process nodes. Cloud platforms offer a way to accelerate library characterizat... » read more
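
The speedup comes largely from the fact that each PVT (process, voltage, temperature) corner can be characterized independently, so the work fans out naturally across many machines. The short Python sketch below illustrates that fan-out pattern only; characterize_corner and the corner lists are hypothetical stand-ins, not the API of any particular characterization tool or cloud service.

# Minimal sketch: distributing per-corner characterization jobs across
# parallel workers, the same pattern a cloud deployment scales out across
# a fleet of instances. All names here are illustrative placeholders.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

# Hypothetical corner definitions: process, voltage, temperature.
PROCESSES = ["ss", "tt", "ff"]
VOLTAGES = [0.72, 0.80, 0.88]      # volts
TEMPERATURES = [-40, 25, 125]      # degrees C

def characterize_corner(corner):
    """Placeholder for one per-corner characterization run.

    In practice this would invoke the characterization tool for a single
    (process, voltage, temperature) point and return timing/power tables.
    """
    process, voltage, temperature = corner
    return f"{process}_{voltage:.2f}V_{temperature}C"

if __name__ == "__main__":
    corners = list(product(PROCESSES, VOLTAGES, TEMPERATURES))
    # Corners are independent, so they can run concurrently; on a cloud
    # platform the local pool would be replaced by remote worker instances.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(characterize_corner, corners):
            print("finished corner:", result)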

Leveraging Data In Chipmaking


John Kibarian, president and CEO of PDF Solutions, sat down with Semiconductor Engineering to talk about the impact of data analytics on everything from yield and reliability to the inner structure of organizations, how the cloud and edge will work together, and where the big threats are in the future.

SE: When did you recognize that data would be so critical to hardware design and manufact... » read more

Revving Up For Edge Computing


The edge is beginning to take shape as a way of limiting the amount of data that needs to be pushed up to the cloud for processing, setting the stage for a massive shift in compute architectures and a race among chipmakers for a stake in a new and highly lucrative market. So far, it's not clear which architectures will win, or how and where data will be partitioned between what needs to be p... » read more

Which Verification Engine When


Frank Schirrmeister, group director for product marketing at Cadence, talks about which tools get used throughout the design flow, from architecture to simulation, formal verification, emulation, and prototyping, all the way to production, how the cloud has impacted the direction of the flow, and how machine learning will impact verification. » read more

Disaggregation Of The SoC


The rise of edge computing could do to the cloud what the PC did to the minicomputer and the mainframe. In the end, all of those co-existed (even though the minicomputer morphed into commodity servers from companies like Dell and HP). What's different this time around is that the computing done inside those boxes is moving. It is being distributed in ways never considered feasi... » read more

Changes In Data Storage And Usage


Doug Elder, vice president and general manager of OptimalPlus, talks about what’s changing in data storage and collection, including using data lakes and data engineering to break down silos and get data into a consistent format, and why it’s essential to define data up front based upon how quickly it needs to be accessed, as well as who actually owns the data. » read more
