MPU vs. MCU

Definitions are blurring, but the debate goes on.


There was a time when microprocessors and microcontrollers were distinct devices. There was never a question as to which one you were dealing with. But changes in the memory architecture have muddied the distinction in modern devices.

There are a number of ways in which microprocessors and microcontrollers could possibly be differentiated. But there is no universal agreement as to how that should happen, and some folks — although definitely not all — have come to the conclusion that any distinction might not even matter all that much anymore.

“The difference between an MCU and an MPU has become much fuzzier in recent years,” said Colin Walls, embedded software technologist at Mentor, a Siemens Business. “Originally, an MCU integrated CPU, memory and peripherals on one chip. Nowadays, although this is still the case, it’s very common to attach additional external memory, as the MCUs are powerful enough to support more sophisticated applications.”

A tale of two markets
There was a time when computing chips targeted two very different markets. On the more visible front were devices aimed at mainstream computing, where performance was the primary consideration. Referred to as “microprocessors,” these single-chip computers powered personal computers and larger systems.

Today we see them in laptops, desktops, and servers of all types. What’s key is the fact that they’re general-purpose engines, intended to run any number of programs that aren’t known a priori. Primary memory is DRAM, and non-volatile storage is the hard drive (or SSD).

On the less visible side was the world of embedded computing. Here there was a need for modest computing power with a dedicated purpose. The intended program likely would be implemented in firmware so that the entire system — program and all — could be verified prior to shipping. Memory requirements were much more limited, and SRAM and non-volatile memory for code storage could be integrated onto the same chip as the CPU. Critically, real-time response was often important.

These devices also tended to be used in environments with very specific I/O needs. Some might be driving motors. Others might be processing sound or reading sensors. It became useful to integrate the specialized peripheral interface hardware onto the same chip as the CPU and memory. This resulted in a wide range of chips with differing characteristics. But overall, CPUs integrated with SRAM, non-volatile memory, and specialized peripherals were known as “microcontrollers.”

Microprocessors have rocketed up to 64-bit monsters, while there are still plenty of 8-bit microcontrollers. But in the middle, some changes occurred to make the distinction far less clear.

While not the sole determining factor, the integration of flash memory was an important characteristic of the microcontroller. But flash memory has not been available at the most advanced microcontroller nodes, so many devices marketed as microcontrollers use external flash memory instead of embedded flash. They also may use external DRAM.

In fact, a process called “shadowing” takes code from external flash memory and copies it into DRAM, from which the code is then executed. And in order to improve performance, caching may be included. That makes the CPU/memory subsystem pretty much indistinguishable from that of a microprocessor. So is it now a microprocessor? Is there no longer a meaningful difference?
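
To make the mechanism concrete, here is a minimal bare-metal C sketch of what shadowing amounts to. The addresses, image size, and entry-point convention are hypothetical; on a real part they would come from the device's memory map and the linker script, and the caches would need to be maintained before jumping to the copied code.

```c
/* Minimal sketch of flash "shadowing": copy a code image from external
 * flash into DRAM and execute it from there. All addresses and sizes
 * below are illustrative only, not taken from any particular device. */

#include <stdint.h>
#include <string.h>

#define EXT_FLASH_BASE  0x90000000u     /* hypothetical external-flash window   */
#define DRAM_CODE_BASE  0x80010000u     /* hypothetical DRAM region for code    */
#define CODE_IMAGE_SIZE (256u * 1024u)  /* hypothetical size of the code image  */

typedef void (*entry_fn)(void);

void shadow_and_run(void)
{
    /* Copy the code image out of slow external flash into DRAM. */
    memcpy((void *)DRAM_CODE_BASE, (const void *)EXT_FLASH_BASE, CODE_IMAGE_SIZE);

    /* On real hardware the data cache would be cleaned and the instruction
     * cache invalidated here before executing the freshly copied code. */

    /* Jump to the image's entry point, which now runs out of DRAM.
     * (On Arm Thumb targets the low bit of this address would be set.) */
    entry_fn entry = (entry_fn)DRAM_CODE_BASE;
    entry();
}
```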


Fig. 1: The top is a typical simplified image of a microprocessor system. The DRAM and hard drive are external to the chip. The bottom shows an older microcontroller on the left and a newer one on the right that no longer looks so different from a microprocessor. Source: Bryon Moyer/Semiconductor Engineering

Possible differentiators could include the following:

  • CPU capabilities: If the CPU has a sophisticated pipeline, with speculative execution and other superscalar capabilities, that could qualify it as a microprocessor. Exactly where the transition would occur, however, is not well defined.
  • More bits: An 8-bit device is more likely to be considered a microcontroller, while a 64-bit device is most likely a microprocessor. But then again, the very first microprocessor was 4 bits, so this is more a matter of history than a defining characteristic.
  • Operating system: One might classify according to the type of operating system that a machine can run. If it runs Linux, then you might call it a microprocessor. If it ran only smaller real-time operating systems or even bare metal, then you could call it a microcontroller. This leaves a lot of middle ground for devices that possibly could run Linux.
  • Timing requirements: Microcontrollers are often, although not exclusively, used for applications that require hard or soft real-time response. Microprocessors generally can’t guarantee that kind of deterministic response.
  • Multicore: It’s much more likely that a multicore processor would be considered a microprocessor, especially if the cores are identical and managed symmetrically. But specialized devices may have more than one processor, with some being dedicated to a specific task like digital signal processing. They’re likely to be considered microcontrollers, but are they? Besides, a device doesn’t have to be multicore to be a microprocessor, so this really isn’t a good determiner.
  • Purpose: You could say that a general-purpose device is a microprocessor, while a single-purpose device is a microcontroller. But that’s really all about how the device is used. There are devices you could use either way. What would you then call that device in the absence of knowing how it’s used?
  • Peripherals: This leaves the specialized peripherals as a possible differentiator. It’s probably true that full-on microprocessors won’t have those peripheral circuits, largely because they’re intended for general-purpose use rather than being tied to a specific application. So you could probably say that, if it has such peripherals, it’s a microcontroller. But the reverse isn’t true: the lack of peripherals doesn’t mean that it’s a microprocessor. (A brief sketch of what such peripherals look like to firmware follows this list.)
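
For illustration, here is a minimal C sketch of what an integrated peripheral typically looks like to MCU firmware: a block of memory-mapped registers at a fixed physical address. The base address, register offsets, and bit layout are hypothetical, not drawn from any particular datasheet.

```c
/* Hypothetical memory-mapped GPIO peripheral, accessed directly by address.
 * On an MCU this kind of register block sits on the same die as the CPU. */

#include <stdint.h>

#define GPIOA_BASE  0x40020000u                                      /* assumed base address */
#define GPIOA_MODER (*(volatile uint32_t *)(GPIOA_BASE + 0x00u))     /* pin mode register    */
#define GPIOA_ODR   (*(volatile uint32_t *)(GPIOA_BASE + 0x14u))     /* output data register */

static void led_init(void)
{
    GPIOA_MODER |= (1u << 10);   /* configure pin 5 as an output (assumed 2 bits per pin) */
}

static void led_toggle(void)
{
    GPIOA_ODR ^= (1u << 5);      /* flip the output level on pin 5 */
}
```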

Each of the obvious characteristics fails or is, at best, unsatisfactory. So where does that leave us? We asked a number of folks their opinions, and there was no consensus whatsoever. Here are some of their thoughts.

Marc Greenberg, group director of product marketing, IP group at Cadence: “I don’t know if there’s some ‘official’ engineering definition of the difference between microcontroller and microprocessor. A quick search seems to reveal that the presence of NVM on the die makes it an MCU, but there are bits of NVM on all kinds of microprocessors. And microprocessors may have MCUs on the same die as well, so what is that? The tiniest cache-less processors may still have some registers and SRAM. Is a sequencer coded in RTL really any different from a general-purpose processor executing from a ROM? So the distinction between a microcontroller and a microprocessor is somewhat arbitrary, and that means that it can be whatever you want it to be. When I think of microprocessors, I think of larger processors that are controlling general-purpose machines (like desktops, servers, tablets, etc) and microcontrollers as the heart of embedded devices that are headless or have smaller specific-purpose UIs.”

Grant Martin, distinguished engineer at Cadence: “From Wikipedia, a one-liner for each. ‘A microcontroller is a small computer on a single metal-oxide-semiconductor integrated circuit chip. A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single (or more) integrated circuit (IC) of MOSFET construction.’ Both of those are pretty useless, but point to the arbitrariness of trying to distinguish them. If you drill into this a bit, a microprocessor has the functions of a CPU, so it’s the ‘computer processor,’ whereas the microcontroller is a more complete ‘computer,’ so that means microcontrollers include microprocessors, which is opposite to the convention. But is a 16-way server processor with multiple processor ‘cores’ a microprocessor anymore? And is a multi-way heterogeneous SoC in, for example, a cell phone — which might include multiple application processing cores, multiple DSPs for audio, video, image processing, a GPU or two for rendering images on the screen, and a neural-net processing unit, just for fun — a ‘microcontroller’? From my point of view, it is time for the industry to retire these somewhat archaic terms and instead use more precise, albeit longer and more descriptive (what I would call ‘boringly precise’) terms.”

Jeff Hancock, senior product manager at Mentor, a Siemens Business: “From a system software perspective, a microcontroller is expected to be amenable to applications that directly interpret and control hardware sensors and actuators. Such access often involves consistent and reliable instruction timing, which is at odds with the needs of a general-purpose microprocessor. The general-purpose microprocessor aims to optimize throughput, whereas the microcontroller often optimizes latency. So if you want a large database, a microprocessor is likely appropriate. If you want fine motor control, a microcontroller is for you. The external memory and cache certainly can disrupt the determinism of a microcontroller, but this is a long way from declaring it equivalent to a microprocessor. In particular, the existence of external memory does not require all processing units in the MCU to use external memory exclusively, or even at all. Systems can be constructed with isolated subsystems that permit critical workloads to continue in parallel with less critical application-level systems that make use of larger external memories and caches.”

Mentor’s Walls: “From the software engineer’s point of view, this is an interesting challenge. There are likely to be two memory regions at non-contiguous addresses. The on-board memory is small, but faster, so is best reserved for code that benefits from the optimal speed, like the real-time operating system. This has two implications: the development tools must be flexible enough to map the code correctly onto the memory, and the RTOS must be small enough [generally very scalable] to fit into the on-chip memory.”
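
As a rough illustration of the mapping Walls describes, assuming a GCC- or Clang-style embedded toolchain, time-critical code can be tagged for a fast on-chip memory section while everything else defaults to the larger, slower external memory. The section and region names here are hypothetical, and the actual placement is done by the linker script.

```c
/* Sketch of splitting code across two non-contiguous memory regions. */

/* Placed in fast on-chip SRAM by a linker-script rule such as:
 *   .fast_code : { *(.fast_code) } > SRAM
 * (section and region names are illustrative)                         */
__attribute__((section(".fast_code")))
void rtos_tick_handler(void)
{
    /* latency-sensitive scheduler work executes from on-chip memory */
}

/* No attribute: linked into the default .text output section, which the
 * linker script maps to the larger external flash or DRAM. */
void application_task(void)
{
    /* bulk application logic, where extra fetch latency is acceptable */
}
```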

Nicole Fern, senior hardware security engineer at Tortuga Logic: “Microcontrollers historically have been associated with embedded systems, where the requirements of low cost and low power are more important than performance. But with the advent of mobile computing and IoT edge computing, complex processing is now required for many embedded systems. This results in MCU offerings that look more like MPUs, with options for external memory and caches offering increased performance and configurability, but marketed for the embedded space. The difference between the terms MPU and MCU for these situations may only be dependent on the lineage of the system the CPU is being integrated into.”

Thomas Ensergueix, senior director for low-power IoT business at Arm: “Over recent years the lines have blurred between microcontrollers and microprocessors. One key difference between MCUs and MPUs is software and development. An MPU will support rich OSes like Linux and the related software stack, while an MCU traditionally will focus on bare metal and RTOSes. It is up to the software developer to decide which software environment and ecosystem fits best for their application before making the decision of which hardware platform, MCU or MPU, works best. As modern MCUs have transitioned to 32-bit, we also have seen a steep increase in performance, which has helped to close the gap between MCUs and MPUs. For example, many Arm Cortex-M7 based MCUs deliver over 100 Dhrystone MIPS, or over 2,000 points in CoreMark. Many of these devices also have a very large built-in memory or offer a fast interface to connect external memories. This has ensured that performance and memory are no longer bottlenecks for MCUs and has brought them closer to low-end MPUs.”

Conclusion
So in the end, does it really matter if we nail down the distinction? Probably not. Applications come with requirements, and it’s the requirements that will determine which device is used – regardless of what we call it.


1 comment

Michel Gillet says:

For me the difference is simple and clear: an MCU doesn’t have an MMU, so no virtual memory addressing; an MPU does have an MMU, and thus virtual memory addressing.
