SPONSOR BLOG

Happy 25th Birthday, HAL!

AI has come a long way since HAL became operational.


“Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H. A. L. plant in Urbana, Illinois on the 12th of January, 1992.”—Stanley Kubrick and Arthur C. Clarke, 2001: A Space Odyssey (1968).

Nearly a half-century ago, Arthur C. Clarke and Stanley Kubrick introduced us to cinema’s most compelling example of artificial intelligence: the HAL 9000, a heuristically programmed algorithmic computer. The sentient HAL was not only capable of understanding his human colleagues – he could also speak, see, plan, understand emotion and play chess. Perhaps not surprisingly, HAL was shown to be the most human character in 2001: A Space Odyssey. Frank Poole died silently in the cold vacuum of space, and the demise of the hibernating crew members was revealed only by a medical monitor’s trace going flat; HAL, by contrast, sang a touching, slowly dissolving rendition of “Daisy, Daisy, give me your answer do” as David Bowman deliberately shut down his consciousness.

Despite numerous attempts by other science fiction filmmakers, HAL remains the most compelling portrayal of machine intelligence in cinema. When, as a young boy, I first saw 2001 (in large-screen Cinerama), I was entranced. That experience, and my many dozens of subsequent viewings, helped lead me to a career in pattern recognition, machine learning, smart sensing, and other technologies that would make a real HAL. I even published a book, HAL’s Legacy: 2001’s Computer as Dream and Reality (MIT Press), and co-created and hosted a PBS documentary on HAL, 2001: HAL’s Legacy, to share my enthusiasm for the masterpiece film and its leading character. My California vanity license plate reads: HAL 9000. (Yes… perhaps I’ve gone a bit overboard.)

Clarke and Kubrick strove to make HAL and the concept of artificial intelligence (AI) as realistic as possible. For the most part, the duo succeeded, with the film undoubtedly influencing the development of intelligent personal assistants such as Siri and Alexa. Nevertheless, 2001: A Space Odyssey, a fictional portrayal of the early 21st century, didn’t foresee AI on laptops, tablets and smartphones for the masses. Indeed, HAL’s “brain room,” or control center, is enormous, although there are parallels with the modern server farms (clouds) that serve as the backbone of the Internet and are used for increasingly “intelligent” functions.

Since the film’s 1968 debut, humankind has landed on the moon, explored Mars with rovers and sent robotic probes to study the outer solar system. In contrast to space travel, the development of artificial intelligence has progressed at a somewhat slower pace. To be sure, the creation of artificial systems that can see, speak, understand language, lip-read, appreciate art, understand emotions and plan is currently one of the greatest challenges facing scientists. We humans are so adept at these tasks that we take our expertise for granted; it is only when we try to build systems to perform them that we fully appreciate the magnitude of the challenge.

Nevertheless, AI scientists working in pattern and speech recognition, computer vision and machine learning have made significant progress in recent years. The most recent advance is deep learning, in which large “brain-like” networks, trained on hundreds of millions of examples, learn to recognize objects and actions and to produce simple descriptions of scenes. In some applications, such networks outperform human experts. One recent product of deep learning is AlphaGo, a computer program developed to play Go, an abstract strategy board game invented in China more than 2,500 years ago.
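To make “trained on examples” concrete, here is a minimal sketch in Python/NumPy — a toy stand-in of my own, not anything from AlphaGo or a production system — in which a tiny two-layer network learns the XOR pattern from four labeled examples by gradient descent. Real deep networks apply the same principle with millions of weights and examples.

```python
# Toy illustration of "learning from labeled examples": a tiny two-layer
# network trained on the XOR pattern with plain gradient descent.
# (Real deep networks are vastly larger and trained on millions of examples.)
import numpy as np

rng = np.random.default_rng(0)

# Four labeled examples: 2-D inputs and the class each should map to.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2 -> 8 -> 1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the cross-entropy loss for each weight.
    grad_z2 = p - y
    grad_W2 = h.T @ grad_z2
    grad_b2 = grad_z2.sum(axis=0, keepdims=True)
    grad_z1 = (grad_z2 @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_z1
    grad_b1 = grad_z1.sum(axis=0, keepdims=True)

    # Gradient descent: nudge every weight to reduce the error.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(p, 2))  # approaches [[0], [1], [1], [0]] as training proceeds
```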

The advent of AlphaGo, which defeated a professional Go player, illustrates the steady evolution of artificial intelligence. In the early years of AI, the ultimate benchmark of an advanced computer program was its ability to beat a human opponent at chess by analyzing potential moves with massive search. That technique is insufficient for Go, which Google describes as a game of “profound complexity”: there are more possible positions on the Go board than there are atoms in the universe. Consequently, AlphaGo is built on AI methods and pattern recognition: it uses deep neural networks to “see” and “understand” the board position, to mimic expert players, and to improve further by learning from games played against itself. Put simply, AlphaGo’s approach focuses on patterns and structure which, unlike massive search, seem to be the true foundation of much of human intelligence and of advanced artificial intelligence.
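To give a rough sense of that complexity, the sketch below compares how quickly the game trees of chess and Go grow with search depth. The numbers are my own back-of-the-envelope assumptions, not figures from DeepMind: roughly 35 legal moves per chess position, roughly 250 per Go position, and about 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope arithmetic for why the brute-force search that worked
# for chess breaks down for Go. The branching factors (~35 moves per chess
# position, ~250 per Go position) and the 10^80-atom figure are rough,
# commonly cited estimates, used here purely for illustration.
CHESS_BRANCHING = 35
GO_BRANCHING = 250
ATOMS_IN_UNIVERSE = 1e80

for depth in (10, 20, 40):
    chess_nodes = CHESS_BRANCHING ** depth
    go_nodes = GO_BRANCHING ** depth
    print(f"depth {depth:2d}: chess ~{chess_nodes:.1e} positions, "
          f"Go ~{go_nodes:.1e} positions "
          f"(~{go_nodes / ATOMS_IN_UNIVERSE:.1e} x the atoms in the universe)")
```

Even a modest lookahead in Go dwarfs what any brute-force search could examine, which is why a learned, pattern-based evaluation of positions is essential.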

Artificial intelligence researchers have always found themselves challenged by a perpetually receding goalpost: every time an advance has been made (in speech recognition, planning, image understanding, and so forth), scholars and the public alike say, “Oh, that’s not true AI… We’ll have AI when…” Nonetheless, AI researchers have made real progress in recent decades. And now, on HAL’s birthday, we can confidently look back and say that AI has matured considerably since the 1968 debut of 2001: A Space Odyssey.

Happy 25th birthday, HAL!
