
For a long time, artificial intelligence was mainly a fancy movie prop to me, with no real-life use I could see. The few brave attempts, such as the chatbots of service provider companies, were very “artificial” and left much to be desired.

But when ChatGPT debuted in late 2022, the world suddenly realized that AI was finally capable of something worthwhile. As a colleague who had been interested in the subject for a long time told me, AI had always been getting better and better; in late 2022 it simply broke through the threshold where we feel artificial intelligence is capable of doing things the way humans do.

But what is artificial intelligence? How does it work? What are its limitations? I wanted to answer those questions, and because I was fed up with engineering management and related books, I looked for a good read on AI. After some searching, I ended up with The Myth of Artificial Intelligence - Why Computers Can’t Think the Way We Do by Erik J. Larson.

I need to be honest: it was not an easy read. It is a longer book than usual, and the subject and its specific vocabulary were new to me. The author also uses rich English, with many expressions I had never heard before. When the book turned to philosophical or academic reasoning, I lost the thread. But this does not mean the book is terrible. On the contrary, it answered all my questions and gave me knowledge I wasn’t expecting.

For someone completely inexperienced in AI, the most significant benefit was getting to know the three kinds of inference: deduction, induction and abduction. The book explains them, and every chapter finds a connection back to inference and its types. While the book does its best to explain “Why Computers Can’t Think the Way We Do”, its strongest argument is inference-related: abduction is required for true, general intelligence. We already know how to code deduction and induction, but we have no idea how to approach abduction. We are not even at the starting line with it since, as the book states, “surprise, no one is working on it—at all”.

If you are unfamiliar with the different inference types, the following explanation and examples from the book might help you; a small code sketch follows the list as well.

  • DEDUCTION: gives us certain knowledge and concludes based on generally accepted statements of fact.
    “All the beans from this bag are white. These beans are from this bag. Therefore, these beans are white.”
  • INDUCTION: gives us provisional knowledge and generalizes based on what is known or observed.
    “These beans are from this bag. These beans are white. Therefore, all the beans from this bag are white.”
  • ABDUCTION: gives us probable knowledge and infers the most likely explanation from what is known.
    “All the beans from this bag are white. These beans are white. Therefore, these beans are from this bag.”
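To make the directions of the three inferences more tangible, here is a minimal Python sketch of the bean example. It is not from the book; the function names and the `bag_rule` variable are my own invention. The point is only that deduction and induction are easy to mechanise, while abduction returns nothing stronger than a plausible guess.

```python
# Toy illustration of the three inference types using the bean example.
# All names here (bag_rule, deduce, induce, abduce) are made up for this sketch.

bag_rule = "white"                   # rule: all beans from this bag are white
observed_beans = ["white", "white"]  # beans we have actually looked at

# DEDUCTION: rule + case -> result (certain, if the premises hold)
def deduce(bean_is_from_bag: bool) -> str:
    # All beans from this bag are white; this bean is from this bag.
    return bag_rule if bean_is_from_bag else "unknown"

# INDUCTION: cases + results -> rule (provisional generalization)
def induce(sample: list[str]) -> str:
    # Generalize from the observed sample to a rule about the whole bag.
    return sample[0] if len(set(sample)) == 1 else "mixed"

# ABDUCTION: rule + result -> case (only a plausible guess)
def abduce(bean_color: str) -> str:
    # The bean is white and the bag holds white beans, so it *may* come
    # from this bag; other bags could hold white beans too.
    return "probably from this bag" if bean_color == bag_rule else "probably not"

print(deduce(True))            # white
print(induce(observed_beans))  # white
print(abduce("white"))         # probably from this bag
```

Notice that only the first two functions follow mechanically from their premises; the third one merely ranks an explanation as likely, which is exactly the step the book says we do not know how to automate.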

Deep learning and big data are also core components of the author’s explanation of today’s AI achievements and capabilities. While I expected this, I didn’t know those solutions are mostly deduction- and induction-based. This is also the most substantial limitation of ChatGPT. At this point, I had to ask ChatGPT what inference types it uses. Here is what I got:

As an AI language model, I primarily utilize deductive and, to a lesser extent, inductive reasoning. Deductive reasoning involves drawing logical conclusions based on provided premises or information, whereas inductive reasoning involves making generalizations or predictions based on observed patterns or examples.
Abductive reasoning, which involves making educated guesses or creating hypotheses to explain observations or phenomena, is less prevalent in my responses. While I can sometimes generate creative or speculative responses, they are typically based on the patterns and knowledge acquired during training, rather than on abductive reasoning.

As the book progresses, it spends enormous effort proving and decorating its message with academic, philosophical and historical reasoning. Alan Turing dominates throughout, including his Turing test, but Janos Neumann, the Nazis’ Enigma, Jack Good, Einstein, Aristotle, Deep Blue, Watson, AlphaGo, DeepMind, and roughly a hundred others follow him. Surprisingly, even Prometheus and Frankenstein are mentioned at some points. It was too much information for a newbie like me, and I have already forgotten most of it. But this granularity might be a true gem for a genuine AI enthusiast.

What I liked the most:

  • The inference types were new to me; they help me understand AI solutions better now.
  • I found it very informative and detailed.
  • A great book for AI enthusiasts.

What I didn’t like much:

  • I often lost the thread when the book went into academic or philosophical reasoning.
  • It is a challenging read.
  • There might be better books for AI beginners.
