Should AI Mimic Real Life?

What the human body can teach us about the importance of communications.


There is a lot we still do not know about how the human body and brain work. Today's machines are nowhere near as capable or as power frugal, nor can they make decisions in the same way. True, machines have been built for highly specific tasks at which they can now beat a human. But therein lies one of the human mind's greatest features: when it makes mistakes, it asks how it can do better, it asks whether it can build a machine to do the task better, it accepts the error, and in many cases it creates something new and better from it. It is adaptable and creative. It can capture knowledge and share it with others. Show me a machine that can do that!

Researchers work in areas that will either get them published or make money; this happens in every field of science. Consider the early days of the Internet, the IoT, and AI. The networking companies tried to convince everyone that all traffic needed to go to the cloud, because that meant more demand for their products. Today, people realize that this is not a good solution and are looking at how to push more of the processing, and more of the intelligence, out toward the edge.

But what does the human body do? Does it have distributed intelligence, or does everything go through the brain? Admittedly, communications within the body aren't clogged with huge amounts of high-resolution video, so the body may not face the same bandwidth constraints. But here is one key departure between the two architectures: the human transmits and processes the least amount of data possible. Of that 4K image on the screen, how much do the eyes actually take in, and how much gets processed?

It would appear that human vision works incrementally, first using the minimum amount of data necessary to make an initial determination. If there is uncertainty, more data is brought in, and this continues until either a "match" is made or we decide what the object is closest to, and how it differs, in an attempt to make an educated guess. How might that translate into ML? Perhaps by using the minimum precision first, say 4 bits, and working up to longer data lengths only if necessary, as in the sketch below.
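
As a purely illustrative sketch of that idea, the loop below runs the cheapest quantized model first and escalates to higher precision only when confidence falls short. The precision ladder, the confidence threshold, and the run_model stub are all assumptions made for this illustration, not a real API.

```python
import random

# Cheapest precision first; escalate only if the result is uncertain.
PRECISION_LADDER = ["int4", "int8", "fp16"]
CONFIDENCE_THRESHOLD = 0.9

def run_model(precision, image):
    """Stand-in for inference with weights quantized to `precision`.

    A real system would dispatch to separately quantized weights (or
    bit-serial hardware); here we just simulate higher confidence at
    higher precision.
    """
    base = {"int4": 0.6, "int8": 0.8, "fp16": 0.95}[precision]
    return "cat", min(1.0, base + random.uniform(-0.05, 0.05))

def classify_incrementally(image):
    label, confidence = "unknown", 0.0
    for precision in PRECISION_LADDER:
        label, confidence = run_model(precision, image)
        if confidence >= CONFIDENCE_THRESHOLD:
            break  # a confident "match" -- stop spending compute
    return label  # otherwise the closest fit: an educated guess

print(classify_incrementally(image=None))
```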

That does not address distributed intelligence. The eyes and the brain are too close together to separate them effectively, but movement and other body functions are different. It is known that reflex actions are instigated locally and do not involve the brain. It is also possible that some other functions are performed locally, but do those local sites have the ability to learn?

The question is important because researchers are trying to decide whether edge nodes should do only inferencing, or learning as well. The human body does have one very important advantage: there is always a connection between every part of the body and the brain. It is, in effect, a hard-wired connection, and it is reliable (yes, accidents can break it, but we are talking about properly functioning systems). Even so, instinctive movement cannot be governed by the brain; it is just not fast enough. Think of the great pianists. If they had to read the music and consciously play each note, the brain could not do the processing fast enough.

That doesn't mean the fingers learn in the literal sense; I don't think there is any processing there. What there may be is the ability to push a program out so that it executes closer to the point of action, along the lines of the sketch below.
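
A minimal sketch of that split, assuming a central node that learns and an edge node that only executes what it has been given. The EdgeNode class and the policy function are hypothetical names invented for this illustration.

```python
import time

class EdgeNode:
    """Executes a deployed policy locally, with no round trip to the center."""

    def __init__(self):
        self.policy = lambda stimulus: "no-op"  # nothing deployed yet

    def deploy(self, policy):
        # The "brain" pushes a learned program out to the periphery.
        self.policy = policy

    def react(self, stimulus):
        # Reflex-like execution: purely local, no learning here.
        return self.policy(stimulus)

def centrally_learned_policy(stimulus):
    # Stand-in for a program produced by training at the center,
    # e.g. a distilled or compiled model.
    return "withdraw" if stimulus == "heat" else "hold"

finger = EdgeNode()
finger.deploy(centrally_learned_policy)

start = time.perf_counter()
action = finger.react("heat")
elapsed_us = (time.perf_counter() - start) * 1e6
print(f"{action} in {elapsed_us:.1f} us, with no round trip to the brain")
```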

It all comes back down to communications. Architects spend most of their time designing the processing, only checking afterward that the communications have enough bandwidth. But systems have progressed to the point where processing is the easy part; it is communications that are tough. What to communicate, when, at what precision, and with what latency? I think we are making some of the same mistakes with AI. We need to design the networks and communications before deciding what processing is required, how much, and where.
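
A back-of-the-envelope sketch of what designing the communications first might look like: compare the link demand of shipping raw sensor data to the cloud against shipping only edge inference results. All of the figures (frame size, rates, link capacity) are illustrative assumptions, not measurements.

```python
# Link demand in Mb/s for a given payload size and sample rate.
def link_demand_mbps(payload_bits_per_sample, samples_per_s):
    return payload_bits_per_sample * samples_per_s / 1e6

# Option A: stream raw 4K frames (~8.3 Mpixels, 24 bits/pixel, 30 fps).
raw = link_demand_mbps(8.3e6 * 24, 30)

# Option B: infer at the edge and send one 8-bit label plus an
# 8-bit confidence per frame.
edge = link_demand_mbps(16, 30)

LINK_CAPACITY_MBPS = 100  # assumed uplink

for name, demand in [("raw video", raw), ("edge inference", edge)]:
    fits = "fits" if demand <= LINK_CAPACITY_MBPS else "exceeds"
    print(f"{name}: {demand:.4f} Mb/s ({fits} the {LINK_CAPACITY_MBPS} Mb/s link)")
```

Even with generous margins, the asymmetry dwarfs any uncertainty in the assumed numbers, which is the point: the communications budget, not the compute, decides where the processing has to live.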


