New applications, materials, and wavelengths for image sensors.
Pawel Malinowski, program manager at imec, sat down with Semiconductor Engineering to discuss what’s changing in sensor technology and why. What follows are excerpts of that discussion.
SE: What’s next for sensor technology?
Malinowski: We are trying to find a new way of making image sensors because we want to get around the limitations of silicon photodiodes. Silicon is a perfect material, especially if you want to reproduce human vision, because it's sensitive to the visible wavelengths of light, which means you can do what the human eye does. And the field now is at a stage where it's very mature. There are around 6 billion image sensors sold per year. These are the chips that end up in the cameras of smartphones, cars, and other applications. They are typical standard image sensors, where you have the silicon-based circuitry, or the electronics, and a silicon photodiode. They basically make the red/green/blue (RGB) reproduction so that we can have nice pictures. But if you look at other wavelengths — for example, go to UV or to infrared — you have phenomena or information that you cannot get in visible light. We are looking especially at the infrared range. There we address a specific range, between one micron and two microns, which we call shortwave infrared. With this range you can see through things. For example, you can see through fog or smoke or clouds. This is especially interesting for automotive applications.
SE: Any upcoming challenges or new applications for this technology?
Malinowski: You cannot use silicon for this wavelength, because it becomes transparent. This is interesting, for example, for defect inspection when you are looking at cracks in silicon solar cells. You also get different contrast for some materials. Materials that appear exactly the same in the visible range can have different reflectivity in the shortwave infrared, which means you can have better contrast, for example, when you're sorting plastics or when you're sorting food. There are other applications, as shown in figure 1 (below), which plots the power of light that comes from the sun through the atmosphere. The gray curve is the spectrum above the atmosphere, and the black one is what reaches the earth. And you see that there are some maxima and minima. The minima are related to water absorption in the atmosphere. You can use these minima when you are working, for example, with active illumination systems, which means that you emit some light and you check what is bouncing back. This is how Face ID on the iPhone works — you emit light and check what is coming back. These systems operate around 940 nanometers. If you go to longer wavelengths — for example, 1,400 nanometers — you will have a much lower background, which means you can have much better contrast. If you instead go to wavelengths where there is still quite a lot of light, you can use it with passive illumination to get extra information, such as low-light imaging, where you still have some photons.
Fig. 1: The possibilities for short-wavelength infrared. Source: imec
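As a rough illustration of the point about operating inside a water-absorption band, the sketch below compares an active illuminator's signal against the in-band solar background at 940 nm and at roughly 1,400 nm. All of the numbers are hypothetical placeholders chosen only to show the ratio; a real system also depends on optics, target reflectivity, integration time, and detector quantum efficiency.

```python
# Rough illustration of why an atmospheric water-absorption band helps active
# illumination. The irradiance values are hypothetical placeholders, not
# measurements; only the ratio between the two bands matters here.

def signal_to_background(emitter_power_w, in_band_solar_w_m2):
    """Toy figure of merit: emitted signal vs. sunlight background in the same band."""
    return emitter_power_w / in_band_solar_w_m2

emitter_power = 1.0          # same emitter power assumed in both bands (placeholder)
background_940nm = 50.0      # in-band solar background near 940 nm (placeholder)
background_1400nm = 0.5      # water-absorption minimum near 1,400 nm (placeholder)

gain = signal_to_background(emitter_power, background_1400nm) / \
       signal_to_background(emitter_power, background_940nm)
print(f"contrast gain at ~1,400 nm with these placeholder numbers: {gain:.0f}x")
```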
SE: How did you determine that?
Malinowski: What we looked at is how to access these wavelengths. Silicon, due to its physical properties, is not good for that. The traditional way is bonding, where you take another material — for example, indium gallium arsenide or mercury cadmium telluride — and you bond it onto the readout circuit. This is the incumbent technology. It's used a lot for defense and military applications, and for high-end industrial or scientific systems. It's expensive. Sensors made with this technology typically cost a few thousand euros, because of the bonding process and manufacturing costs. You also can grow the material that you need, such as germanium, but this is quite difficult and there are some issues with getting the noise low enough. We are following the third way, which is depositing material. In this case, we are using either organic materials or quantum dots. We take a material that can absorb this shortwave infrared or near-infrared light, and we deposit it with standard methods, such as spin coating, and we get very thin layers. That's why we call this category of sensors 'thin-film photodetector sensors,' where the material is much more absorptive than silicon. It looks like a pancake on top of the readout circuit.
SE: How does this compare to other materials?
Malinowski: If you compare it to silicon diodes, they need much larger volume and much larger depth. And especially for these longer wavelengths, they simply become transparent. By contrast, thin-film photodetector (TFPD) image sensors have a stack of materials, including photoactive materials such as quantum dots or organics, integrated monolithically, which means it's one chip. There is no bonding on top of the silicon. The problem with this approach was that when you have such a photodiode integrated on top of this metal electrode, it's very difficult to get the noise low enough, because there are some inherent noise sources that you cannot get rid of.
Fig. 2: Thin-Film Photodetector. Source: imec
SE: How did you resolve this?
Malinowski: We followed the way that silicon image sensors progressed at the end of the 1980s and in the 1990s, when they introduced pinned photodiodes. You decouple the photodiode area, where the photons are converted, from the readout. Instead of having just one contact of this thin-film absorber to the readout, we introduce an additional transistor. This is the TFT, which takes care of keeping the structure fully depleted so that all the charges created in the thin-film absorber can be transferred through this transistor structure to the readout. In this way, we significantly limit the noise sources.
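In silicon sensors, the usual payoff of this pinned-photodiode-style separation is that the sense node can be reset and sampled before the charge packet is transferred, so correlated double sampling (CDS) can subtract the reset level. The toy simulation below assumes that readout scheme, which the interview does not spell out, and uses placeholder noise numbers purely to show the effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_reads = 100_000

signal_e = 200.0       # charge packet transferred from the thin-film absorber (placeholder)
reset_noise_e = 30.0   # kTC reset noise on the sense node, rms electrons (placeholder)
amp_noise_e = 3.0      # amplifier noise per sample, rms electrons (placeholder)

# The reset level is common to both samples because the charge is transferred
# onto the same sense node after that node has been reset and sampled.
reset_level = rng.normal(0.0, reset_noise_e, n_reads)

sample_reset = reset_level + rng.normal(0.0, amp_noise_e, n_reads)              # before transfer
sample_signal = reset_level + signal_e + rng.normal(0.0, amp_noise_e, n_reads)  # after transfer

print(f"single-sample read noise: {sample_signal.std():.1f} e- rms")   # reset noise dominates
print(f"CDS read noise:           {(sample_signal - sample_reset).std():.1f} e- rms")
```

With these placeholder values the difference signal keeps only the amplifier noise, which is the kind of noise-source elimination the decoupled readout is meant to enable.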
SE: Why is noise a problem for sensor design?
Malinowski: There are different sources of noise. Noise is the total number of unwanted electrons, but these electrons can come from different sources or for different reasons. Some are related to temperature, some to non-uniformity in the chip, some to transistor leakage, and so on. With this approach, we are working on some of the noise sources related to the readout. All image sensors have noise, but there are different ways of dealing with it. For example, the silicon-based sensors in the iPhone deal with noise sources with a specific design of the readout circuit, with architectures whose foundation goes back to the '80s and '90s. This is a little bit of what we tried to replicate with this new category of image sensors utilizing thin-film photodetectors. It's an application of old design tricks to a new category of sensor.
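Because the sources he lists are independent, a common way to budget them is to add them in quadrature. The sketch below uses placeholder electron counts only to show why attacking the dominant readout-related term moves the total the most.

```python
import math

# Independent noise sources add in quadrature. The electron counts below are
# placeholders, chosen only to show how the dominant term sets the total.
noise_e_rms = {
    "dark current (temperature)": 5.0,
    "fixed-pattern (non-uniformity)": 2.0,
    "transistor leakage": 1.5,
    "readout": 8.0,
}

def total_noise(sources):
    return math.sqrt(sum(v ** 2 for v in sources.values()))

print(f"total: {total_noise(noise_e_rms):.1f} e- rms")

noise_e_rms["readout"] /= 2   # improve only the dominant readout term
print(f"after halving readout noise: {total_noise(noise_e_rms):.1f} e- rms")
```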
SE: Where do you anticipate this would be used? You mentioned automotive. Would it also work for medical devices?
Malinowski: The biggest pull for this technology is from consumer electronics, such as smartphones. If you go to longer wavelengths, you can have a lower background, because there is simply less light at that wavelength in the atmosphere. It's augmented vision, which means seeing more than the human eye can see, so there's additional information for your camera. The other reason is that longer wavelengths pass more easily through some displays. The promise is that if you have this kind of solution, you can place a sensor such as Face ID behind the display, which can increase the display area.
Fig. 3: Augmented vision for better safety. Source: imec
The other reason is that if you go to longer wavelengths, your eye is much less sensitive — about five or six orders of magnitude compared to the near-infrared wavelengths, which means that you can use more powerful light sources. So you can shoot out more power, which means you can have longer ranges. For automotive, you can have additional visibility, especially in adverse weather conditions, such as visibility through fog. For medical, it could help advance miniaturization. In some applications, such as endoscopy, the incumbent technology uses other materials and more complex integration, so miniaturization is quite difficult. With a quantum dot approach, you can make very small pixels, which means higher resolution in a compact form factor. This enables further miniaturization while keeping a high resolution. In addition, depending on which wavelength we target, we can have very high contrast for water, which is one of the reasons the food industry might be interested. You can better detect moisture, for example, in grain products such as cereals.
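As a back-of-envelope reading of the eye-sensitivity argument: if the allowed emitter power goes up and the return from a scene-filling target falls off roughly with the square of the range, then the usable range grows roughly with the square root of that power. Treating the five-to-six-orders-of-magnitude figure directly as power headroom is a simplification made only for illustration.

```python
import math

# Back-of-envelope only: for a scene-filling target, the return signal from an
# active illuminator falls off roughly as 1/range^2, so usable range grows
# roughly with sqrt(emitter power). The headroom value is illustrative, not a
# measured eye-safety limit.
power_headroom = 1e5
range_gain = math.sqrt(power_headroom)
print(f"rough range gain: ~{range_gain:.0f}x, before atmospheric and optical losses")
```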
Fig. 4: Potential applications. Source: imec
SE: With the increased low-light vision, could it have military applications?
Malinowski: These kinds of sensors already are used by the military, for example, for detecting laser rangefinders. The difference is that the military is fine with paying 20,000 euros for a camera. In automotive or consumer applications, they are not even considering this technology, exactly for that reason.
SE: So the breakthrough here is that you can have something that already exists, but you can have it at consumer-scale pricing?
Malinowski: Exactly. Because of the miniaturization, and because monolithic integration allows you to scale up the technology, you can get to consumer-scale volumes and prices.
SE: What other trends do you see in sensor technology?
Malinowski: One of the current discussion points is exactly this — beyond-visible imaging. The incumbent technology already is fantastic for taking pictures. The new trend is sensors that are more dedicated to the application. The output doesn't need to be a pretty picture. It can be specific information. With Face ID, the output can actually be a one or a zero. Either the phone is unlocked or it isn't. You don't need to see the picture of the face. There are also some interesting modalities coming up, such as polarized imagers, which are like polarizing glasses. They handle some reflections better. There are event-based imagers, which only look at changes in the scene — for example, if you study vibrations of a machine or count people passing by a shop. If you have an autonomous driving system, you need a warning that there's an upcoming obstacle and you should brake. You don't need a pretty picture. This trend means much more fragmentation, because it's much more application-specific. It changes the way that people design image sensors, because they look at what is good enough for a particular application rather than optimizing the picture quality. Picture quality is always important, but sometimes you need something simple that just does the job.
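A minimal sketch of the event-based idea mentioned above: a pixel reports an event only when its signal changes by more than a threshold, instead of streaming full frames. The threshold and the sample trace are arbitrary illustrative values.

```python
import numpy as np

# Toy event generation: emit (timestamp, polarity) only when the log intensity
# moves more than a threshold away from the last reported level.
def events_from_trace(intensity_trace, threshold=0.15):
    events = []
    reference = intensity_trace[0]
    for t, value in enumerate(intensity_trace[1:], start=1):
        change = np.log(value) - np.log(reference)
        if abs(change) >= threshold:
            events.append((t, +1 if change > 0 else -1))  # brighter: +1, darker: -1
            reference = value                              # reset the reference level
    return events

trace = np.array([1.0, 1.01, 1.02, 1.30, 1.31, 0.9, 0.89])
print(events_from_trace(trace))   # only the big jumps produce output
```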
SE: Is it important to know whether it’s a human being or a tree, or is it just enough to know that you need to brake now?
Malinowski: In the automotive industry, there is still a debate. Some people want to classify all the objects. They want to know if it’s a child, a biker, or a tree. Some say, ‘I just need to know if it’s in the way, because I need to trigger the brakes.’ So there is not one answer.