Battery technology is creeping along, but there are other ways to improve batteries.
There are entire libraries of available information on batteries and battery technology. That's because the technology is more than two centuries old, and it hasn't fundamentally changed since Alessandro Volta produced the first practical battery around 1800.
While there have been significant improvements in batteries since then, they haven’t come close to keeping up with advancements in electronics that have occurred in the last 30 years. Batteries are still electrolyte-based, ionic energy transfer devices – and there is no radical technology on the horizon that will change that.
What is changing, though, is battery charging. If you can’t make the energy density of the battery match the energy demands of the latest generation of devices, then make the energy available faster by improving charging capabilities. As the Internet of Everything emerges, batteries will begin to play a larger and larger role in a wider and wider application base. Virtually all mobile IoE devices will use them as their primary power source.
The one inescapable fact of batteries is that they use the principle of ion transfer to move energy through and out of the cell. The electrochemical processes that batteries use are an entirely different animal than semiconductors, so the myriad of techniques that can be implemented in silicon just don’t work in ionic devices. In fact, about the only thing that works in batteries is improving the components within them — the anode, cathode, electrolyte and separator materials. And those have yielded only modest improvements of late.
There are some promising elements out there that could change this. Carbon, and especially carbon nanotubes, promises to make significant improvements in both energy density and power output. But so far, that work remains in the experimental stage.
In the interim, much of the battery industry is focused on improving charge times and rates, as well as the energy density of existing platforms, and on making charging platforms ubiquitous. That includes conventional, solar, inductive and reserve charging.
Much of the industry is getting on the fast charging bandwagon, and there are several ways to accomplish this. The real driving force behind this is the electric vehicle, but closing that gap quickly are portable mobile electronics. Both will drive the industry to develop faster-charging batteries and fast-charging electronics.
The first step is to develop batteries that are designed to accept fast charging. That is already being done, to some degree, but to really gain traction will require a more focused approach to optimizing the battery for it. First and foremost, the battery must have tight specs. That means the components have to be held to exacting tolerances. Anodes and cathodes must be precision machined and as elementally pure as practical. Separators must be of the highest quality, as free of imperfections as possible, and with dimensional tolerances kept as tight as possible.
The electrolyte also must be of the best chemical quality. And finally, the interconnect components have to use the highest quality materials, as well. Once this has been accomplished, the battery must be fitted with “smart” electronics so it can communicate with the charger.
If all of this is accomplished, then the cell will have the lowest possible internal resistance, which is the major factor affecting charge rate.
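The link between internal resistance and charge rate can be sketched with simple Ohm's-law arithmetic. This is an illustration only; the voltages, resistances and the assumed function name are hypothetical, and a real cell's behavior is more complex than a fixed resistor.

```python
# Illustrative sketch: internal resistance caps the usable charge current.
# All values below are hypothetical, chosen only to show the relationship.

def max_charge_current(charger_v, cell_ocv, internal_r):
    """Ohm's-law estimate of the current a charger can push into a cell.

    charger_v  : charger output voltage (V)
    cell_ocv   : cell open-circuit voltage (V)
    internal_r : cell internal resistance (ohms)
    """
    return (charger_v - cell_ocv) / internal_r

# Halving the internal resistance doubles the current the same
# charger voltage can drive into the cell, hence the faster charge.
i_high_r = max_charge_current(4.2, 3.7, 0.100)
i_low_r = max_charge_current(4.2, 3.7, 0.050)
print(i_high_r, i_low_r)
```

The same headroom voltage drives twice the current into the lower-resistance cell, which is why tight manufacturing tolerances translate directly into faster charging.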
Smart batteries are a prerequisite to smart charging. While smart chargers can do some rudimentary fast charging on dumb batteries, it really doesn’t work well enough for the demands of today’s devices and vehicles.
There are two main conditions that determine what is going on inside a battery: state of charge (see reference 1) and state of health. If one can sense these parameters, the overall condition of the battery can be determined and it can be charged intelligently.
Determining the state of charge is typically done by reading the open-circuit voltage across the terminals. While that provides a simple go/no-go test showing that the battery holds some charge, that is about all it reveals. It tells one very little about the actual state of charge of the battery.
Batteries are non-linear devices. A battery can show near-normal open-circuit voltage until it is nearly exhausted. Figure 1 charts typical open-circuit voltage vs. state of charge for a lithium-ion battery. One can see that there is really very little difference in the open-circuit voltage for any state of charge between 90% and 10%. This is similar for all chemistries.
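In practice, a rested open-circuit voltage is mapped to state of charge through a characterization table. The sketch below shows the idea; the (voltage, SoC) pairs are illustrative placeholders, not the actual data behind Figure 1, and a real design would use the cell maker's characterization curve.

```python
# Sketch of an OCV-to-state-of-charge lookup for a Li-ion cell.
# The table values are illustrative placeholders only.

OCV_TABLE = [  # (open-circuit volts, SoC fraction), ascending voltage
    (3.00, 0.00),
    (3.45, 0.10),
    (3.68, 0.50),  # note the flat middle of the curve
    (3.75, 0.90),  # 40% of capacity sits inside ~70 mV here
    (4.20, 1.00),
]

def soc_from_ocv(v):
    """Linear interpolation of SoC from a rested open-circuit voltage."""
    if v <= OCV_TABLE[0][0]:
        return 0.0
    if v >= OCV_TABLE[-1][0]:
        return 1.0
    for (v0, s0), (v1, s1) in zip(OCV_TABLE, OCV_TABLE[1:]):
        if v0 <= v <= v1:
            return s0 + (s1 - s0) * (v - v0) / (v1 - v0)

print(soc_from_ocv(3.70))
```

The flat middle segment is the problem the article describes: a few millivolts of measurement error there translates into a large error in the estimated state of charge.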
To get a better idea of the state of charge, a load can be applied across the terminals and the voltage read. However, batteries are load-sensitive, and the rate of drain affects both the capacity and the voltage under load. Figure 2 shows how discharge rates affect the voltage. This is for lithium-ion cells, but all cells have similar characteristics.
Therefore, simply placing a load across the terminals and measuring the loaded voltage gives a better idea of the state of charge, but it still really only tells whether the battery is near exhaustion. However, batteries can be characterized against these discharge curves, and the data used to design smart chargers.
What makes the battery smart are sensors and the ability of AI to analyze and interpret what the dumb chemistry is telling it. Batteries show no visible signs of change as they run through their charge. A good parallel is a vehicle’s gas tank. Without sensors (the gas gauge) and some intelligence (the speedometer/odometer), one would have no idea how much gas is in it. One might know that a battery is close to fully charged by looking at it (putting a voltmeter across its terminals), but it would be difficult to see exactly what the level is.
Batteries also exhibit changes related to environment. Temperature will alter battery performance (charge/discharge rate, self-discharge characteristics, open circuit/load voltage). So will humidity and altitude. The actual effects vary, but the basic parameters just mentioned are all affected to some degree by these environmental conditions.
Another variable that affects battery specifications is the inefficiency of the chemical process. It would seem logical that if 10 amp-hours of charge is pumped into a battery, 10 amp-hours should be available for discharge. That’s not the case, however. Inefficiencies in charge acceptance, especially near the end of the charge cycle, as well as losses during storage and discharge, will always reduce the total amount of energy available.
All this makes it very challenging to build a smart battery. Fortunately, this is one technology where precision has not been a major objective. Unlike semiconductors, where gates, substrates and other materials must be held to exact tolerances, batteries need not be, for a couple of reasons. First, even under exactly the same manufacturing and application conditions, capacities will vary. Second, battery demands vary, even under identical conditions. Combine those, and specifying exact battery capacity is impossible.
So after all of this, the first order of business is to get as accurate a picture of the battery as possible, given all the variables. That can be accomplished in a number of ways. One is to use what is called a coulomb counter, a modern application of a theory developed more than two centuries ago by Charles-Augustin de Coulomb. It states that one coulomb (C) equals one ampere flowing for one second (1 C = 1 A x 1 s).
However, due to battery inefficiency, coulomb counting is still not precise. But if one takes the various de-rating factors into account, it can provide a relatively inexpensive and accurate state-of-charge indicator, sufficient for a variety of applications.
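A minimal coulomb counter can be sketched as follows: integrate measured current over time and de-rate charge current by a coulombic-efficiency factor to account for the chemical losses described above. The 95% efficiency and all other numbers are illustrative assumptions, not values from the text.

```python
# Minimal coulomb-counter sketch. Sample values are illustrative only.

def update_soc(soc, current_a, dt_s, capacity_ah, charge_eff=0.95):
    """Return the new state of charge after dt_s seconds at current_a amps.

    Positive current = charging (de-rated by charge_eff);
    negative current = discharging (taken at face value).
    """
    eff = charge_eff if current_a > 0 else 1.0
    delta_ah = current_a * eff * dt_s / 3600.0  # amp-seconds -> amp-hours
    return min(1.0, max(0.0, soc + delta_ah / capacity_ah))

# One hour of 1 A charging into a 2 Ah cell at 95% efficiency raises
# SoC by 0.475, not the naive 0.5 -- the de-rating the article describes.
soc = update_soc(0.20, 1.0, 3600, 2.0)
print(soc)
```

In a real gauge the efficiency factor would itself vary with state of charge, temperature and rate, which is why coulomb counting alone is still not precise.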
A step up is what is called the single-wire bus (see Figure 3). The single-wire system, as its name implies, provides low-speed communication over a single wire. It combines clock and data on one channel and uses Manchester coding to separate the data at the reading device. It stores the battery code and tracks battery readings such as voltage, current, temperature and state of charge, and displays them on whatever device it is attached to (usually the charger).
The shortcoming of this method is that it cannot measure the current directly. Therefore, it must be married to the specific battery.
A better approach is to use what is called the two-wire SMBus system. This is a relatively sophisticated circuit that can measure both the state of charge and state of health (see Figure 3).
This approach has a unique advantage. It removes charging management from the charger and relocates it to the battery itself. That means any universal charger can work with the battery. In effect, the charger becomes the slave and the battery the master. The charger simply responds to the commands of the battery.
This works because both permanent and temporary data are programmed into the battery. The permanent data typically consists of the battery type, its ID, and manufacturing data (date, serial number, etc.). The temporary data is custom and consists of parameters such as cycle count, user pattern and maintenance requirements. Typically, these two-wire circuits are very accurate, with the ability to measure voltage variations as small as 1 mV and current variations as small as 0.5 mA, with a temperature accuracy of ±3°C.
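The permanent/temporary data split described above can be sketched as two record types. The field names and values are hypothetical illustrations, not an actual SMBus or Smart Battery register map.

```python
# Sketch of the permanent vs. temporary battery data split.
# Field names and sample values are hypothetical, not a real register map.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class PermanentData:  # programmed once at manufacture, never changed
    battery_type: str
    battery_id: str
    manufacture_date: str
    serial_number: str

@dataclass
class TemporaryData:  # updated over the battery's service life
    cycle_count: int = 0
    usage_pattern: list = field(default_factory=list)
    maintenance_due: bool = False

perm = PermanentData("Li-ion", "PACK-001", "2024-01-15", "SN12345")
temp = TemporaryData()
temp.cycle_count += 1  # e.g. the electronics increment this per full cycle
print(perm.battery_type, temp.cycle_count)
```

Freezing the permanent record mirrors the idea that manufacturing data is written once, while the mutable record carries the history the charger consults.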
There are other iterations of smart batteries, but they all follow a similar technology. There are any number of special applications, and the technology varies with battery chemistry to a degree.
Battery chargers run the gamut from dumb to smart to fast and ultra-fast. The simplest is the dumb charger. These are generally constant-current chargers that simply deliver a fixed current to the battery, for a time determined by the user, at a rate of about 0.1C. That is considered the rate at which a battery can be charged indefinitely. Constant-current types will continue to deliver current regardless of the states of charge and health. The major drawback is that, if not set correctly and monitored, they can eventually cause the battery to overheat and vent (or, in extreme cases, explode).
Constant-voltage chargers are a bit safer, but they rely on having the proper charging voltage set to proportionately reduce the charging current as the internal resistance and open-circuit voltage of the battery change during charging. Most are preset to specific voltages for the type of battery (car, radio and tool chargers, for example).
Smart chargers encompass the rest of the charging family. Today, virtually all smart chargers are microprocessor-based, and there are a variety of charging chips on the market. Sophistication and functions vary significantly from design to design, but fundamentally, the vast majority use some variant of the delta sensing method (ΔV and/or ΔT). Other approaches include pulsed and burp methodologies, as well as dT/dt.
ΔV and ΔT technologies rely on a phenomenon whereby, as the battery nears full charge, there is a slight but measurable variation in these parameters. For ΔV, it is a slight drop in voltage. For ΔT, it is a slight rise in temperature. These changes are non-linear when they occur, so they are easy to spot. Sensing this change is how these technologies determine the state of charge and control the charging rate.
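A minimal ΔV termination check can be sketched as follows: track the running peak voltage and flag full charge once the voltage falls a small, fixed amount below it. The 5 mV threshold and the sample readings are illustrative assumptions, not values from the text; a production charger would also debounce and cross-check against temperature.

```python
# Illustrative -dV termination detector. Threshold and readings are
# assumed example values, not a universal specification.

def make_delta_v_detector(drop_threshold_v=0.005):
    """Return a function that flags full charge on a voltage dip."""
    peak = [0.0]  # running peak voltage, held in a closure
    def full(v):
        peak[0] = max(peak[0], v)
        return (peak[0] - v) >= drop_threshold_v
    return full

detector = make_delta_v_detector()
readings = [1.40, 1.43, 1.45, 1.452, 1.449, 1.446]  # rises, then dips
states = [detector(v) for v in readings]
print(states)
```

The detector stays False while the voltage climbs and through the first small dip, and trips only once the drop from the peak exceeds the threshold, which is the non-linear knee the article says is easy to spot.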
Pulsed charging is a technique by which the charge is fed to the battery in pulses. The charging rate is averaged out over time, but the width of the pulses varies periodically, from a few hundred ms to about one second. During the charging process, the charger adds short rest periods, typically 20 to 30 ms, between the pulses to allow the chemical actions in the battery to stabilize. This turns out to be a very effective way to charge a battery. It keeps the chemical process stable by equalizing the reaction throughout the bulk of the electrode before the next charge pulse, and because the chemistry gets a rest, the battery can accept very high current pulses (see Figure 5).
A variant on pulse charging is called burp, or reflex/negative pulse, charging (see Figure 6). This is an interesting approach because it uses alternating charge and discharge pulses. It applies a very short discharge pulse, at a multiple of the charge current, for several ms (typically five or so) during the rest periods between charge pulses.
The idea is to dislodge the gas bubbles that build up on the electrodes during the fast charge pulse. The result is that the stabilization process is further improved, and so is the charging time. The diffusion of the gas bubbles is the “burp” part of the pulse charge. There has been some controversy over whether this really is a better method than plain pulse charging, and whether it can reduce dendrite growth in certain chemistries and extend the life of the battery. Hard evidence is scarce, but it does work, at least as well as pulse charging.
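The pulse-and-burp timing described above can be laid out schematically as a repeating sequence of segments. All currents, durations and the multiplier below are illustrative placeholders drawn loosely from the figures in the text, not recommended settings for any real cell.

```python
# Schematic burp-charge cycle: a charge pulse, a brief rest, a short
# high-current discharge "burp," then a second rest. Values illustrative.

def burp_cycle(charge_a=2.0, charge_ms=1000, rest_ms=25,
               burp_mult=3.0, burp_ms=5):
    """Return (current_amps, duration_ms) segments for one cycle."""
    return [
        (+charge_a, charge_ms),            # fast-charge pulse
        (0.0, rest_ms),                    # rest: chemistry stabilizes
        (-charge_a * burp_mult, burp_ms),  # discharge pulse dislodges gas
        (0.0, rest_ms),                    # second rest before next pulse
    ]

cycle = burp_cycle()
net_mc = sum(a * ms for a, ms in cycle)  # net charge in millicoulombs
print(net_mc / 1000.0, "coulombs net per cycle")
```

Because the discharge pulse is so short, the cycle still moves a large net charge into the battery; the burp costs only a small fraction of what the charge pulse delivers.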
Battery charging has come a long way over the years. The downside is that batteries are a chemical technology and, frankly, very sloppy in tolerances. That makes designing charging systems a challenge.
To address that, modern charging systems have integrated all kinds of sensing technologies, as well as charging technologies, to make up for the many variables that can affect charging. Moreover, charging designs have improved significantly. That makes charging batteries much easier, safer, and more precise than just a couple of decades ago.
Still, batteries are a fickle beast. As the IoE evolves, for applications that require a rechargeable source (from electric vehicles to smart infrastructure and cities to remote communications devices), it is imperative that the power designer understand battery chemistry to design charging systems.
With the myriad of solutions and technologies now available, keeping batteries healthy and happy is much less of a chore than in earlier days.
Reference 1: State of Charge. Charge and discharge rates are commonly referenced to the capacity (C) of the battery. A charge rate of 1C means a 100 mAh battery is being charged at 100 mA. Conversely, a 1C discharge rate means the battery is being discharged at 100 mA.