Artificial intelligence is a concept in computer science in which machines "think" for themselves. No one knows exactly where or when the concept originated, but formal research into the idea is generally attributed to a 1956 project at Dartmouth College.
Artificial intelligence achieved some notoriety following Arthur C. Clarke's science fiction novel 2001: A Space Odyssey, published in 1968. In the book, a computer named HAL (one letter off from the IBM acronym) attempts to take over a spaceship.
By the mid-1980s, though, artificial intelligence was considered the next big thing in computing. Every major computer maker ran an AI research project; most were abandoned in the early 1990s due to insufficient processor speed, the high cost of memory, and slow network speeds.
Moore's Law and vast improvements in data movement have eliminated those bottlenecks, and AI has returned, both as a general category for development and in various subsets such as deep learning.
While it is still debated which is a subset of the other, deep learning or machine learning, in the general scheme of things machine learning is what makes artificial intelligence possible. An artificially intelligent machine must use machine-learning algorithms to make choices based on previous experience and data. The terms are often confusing, in part because they are blanket terms that cover a lot of ground, and in part because the terminology is evolving along with the technology. But no matter how those arguments progress, machine learning is critical to both AI and deep learning.
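To make that idea concrete, here is a minimal sketch in Python of one of the simplest machine-learning algorithms, a k-nearest-neighbor classifier, which "decides" a label for a new input by consulting previously seen examples. The sensor readings, labels, and values below are invented purely for illustration; they are not from any real system.

```python
from collections import Counter

# Previous experience: (hours of activity, error count) -> observed status.
# These examples are hypothetical, invented for illustration only.
history = [
    ((2.0, 0), "healthy"),
    ((3.5, 1), "healthy"),
    ((8.0, 6), "failing"),
    ((9.5, 9), "failing"),
]

def distance(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(features, k=3):
    """Choose the majority label among the k closest past examples."""
    nearest = sorted(history, key=lambda ex: distance(ex[0], features))[:k]
    labels = Counter(label for _, label in nearest)
    return labels.most_common(1)[0][0]

# A new, unseen input is classified by its similarity to past data.
print(predict((7.0, 5)))  # -> "failing"
```

The point of the sketch is not the particular algorithm but the pattern it embodies: the program's decision is not hard-coded anywhere; it emerges from the stored examples, so feeding it different experience would change its choices.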