Intractability is a growing concern across the cognitive sciences: while many models of cognition can describe and predict human behavior in the lab, it remains unclear how these models can scale to situations of real-world complexity. Cognition and Intractability is the first book to provide an accessible introduction to computational complexity analysis and its application to questions of intractability in cognitive science. Covering both classical and parameterized complexity analysis, it introduces the mathematical concepts and proof techniques that can be used to test one's intuition of (in)tractability. It also describes how these tools can be applied to cognitive modeling to deal with intractability, and its ramifications, in a systematic way. Aimed at students and researchers in philosophy, cognitive neuroscience, psychology, artificial intelligence, and linguistics who want to build a firm understanding of intractability and its implications in their modeling work, it is an ideal resource for teaching or self-study.
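To make the blurb's distinction between classical and parameterized complexity concrete, here is a minimal illustrative sketch (not taken from the book): the textbook bounded-search-tree algorithm for Vertex Cover. The problem is NP-complete in general, yet decidable in O(2^k · m) time, which is fast whenever the parameter k (the cover size) is small even if the graph is large.

```python
def has_vertex_cover(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover of size <= k.

    Classic bounded-search-tree algorithm: every edge must be covered by
    one of its two endpoints, so branch on that choice. Depth <= k, so the
    search tree has at most 2^k leaves -- fixed-parameter tractable in k.
    """
    if not edges:          # no uncovered edge left: success
        return True
    if k == 0:             # budget exhausted but an edge remains uncovered
        return False
    u, v = edges[0]
    # Branch 1: put u in the cover; drop all edges u touches.
    rest_u = [(a, b) for (a, b) in edges if a != u and b != u]
    # Branch 2: put v in the cover; drop all edges v touches.
    rest_v = [(a, b) for (a, b) in edges if a != v and b != v]
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

# A triangle needs 2 vertices to cover all three edges:
triangle = [(1, 2), (2, 3), (1, 3)]
print(has_vertex_cover(triangle, 1))  # False
print(has_vertex_cover(triangle, 2))  # True
```

The point of the parameterized view is visible in the running time: the exponential blow-up is confined to the parameter k, not the input size.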
Why this book should remain unread is evident from the new publication (2023) by Iris van Rooij & Co, in which they refute their previous conclusions (see text below). The fact is that all these 'professors' (just plagiarists) conduct research on the 'universal soundhelix' (klankhelix, Lauthelix, rediscovered in 2012) without citing the source. Because they never developed this research themselves, they do not understand its intensity. A machine (language as a time machine) cannot replace the human brain when it comes to morality. As plagiarists, Iris van Rooij & Co themselves lack any form of moral action! Professors of Radboud University such as Marc van Oostendorp, Nicoline van der Sijs and Frans Hinskens are also corrupt, since they are cooperating with the Deutscher Sprachatlas and the Philipps University of Marburg, with professors such as Jürgen E. Schmidt, Herman J. Künzel, Richard Wiese, Joachim Herrgen and Ina Bornkessel-Schlesewsky (neuroscientist).
Van Rooij & Co also present a workshop: SAIL Workshop – Fundamental Limits of Large Language Models, Prof. Iris van Rooij (Radboud University, Netherlands & Aarhus University, Denmark). Van Rooij & Co "re-introduced the notion of artificial intelligence as part of cognitive science and argued that, rather than trying to build models that mimic human intelligence (which they dubbed 'Makeism'), one should treat computational models of human cognition as theoretical tools or formal hypotheses. Prof. van Rooij and colleagues underlined this view by presenting a theorem showing that, without additional assumptions on the class of models, learning a model of human cognition (or approximations thereof) from example data is NP-hard".
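To give a feel for the flavor of intractability behind the NP-hardness claim above, here is a back-of-the-envelope sketch (my illustration, not the paper's actual proof or construction): even for the very restricted hypothesis class of Boolean functions on n binary inputs, the space of candidate models has 2^(2^n) members, so learning by unconstrained search over models is hopeless already at modest n.

```python
def num_boolean_functions(n):
    """Number of distinct Boolean functions on n binary inputs.

    Each function is a truth table with 2^n rows, and each row can
    independently be 0 or 1, giving 2^(2^n) distinct functions.
    """
    return 2 ** (2 ** n)

for n in range(1, 6):
    print(n, num_boolean_functions(n))
# Already at n = 5 there are 2^32 = 4294967296 candidate models.
```

This counting argument is only an intuition pump: NP-hardness is a statement about worst-case reductions, not merely about large search spaces, but it conveys why "additional assumptions on the class of models" are needed to make learning tractable.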
"CONCLUSION: The thesis of computationalism implies that it is possible in principle to understand human cognition as a form of computation. However, this does not imply that it is possible in practice to computationally (re)make cognition. In this paper, we have shown that (re)making human-like or human-level minds is computationally intractable (even under highly idealised conditions). Despite the current hype surrounding “impending” AGI, this practical infeasibility actually fits very well with what we observe (for example, running out of quality training data and the non-human-like performance of AI systems when tested rigorously). Many societal problems surrounding AI have received thorough treatment elsewhere. Our focus here has been on a different—but not unrelated—problem, namely that AI-as-engineering has been trespassing into cognitive science, with some people drawing overly hasty inferences from engineered AI systems to human cognition. This is a problem because any such system created now or in the near future is a mere decoy when our goal is to understand human cognition, and treating it as a substitute for human cognition for scientific purposes will only confuse and mislead us.Early cognitive scientists rightly recognised the tremendous potential of AI as a theoretical tool, but due to widespread, implicit makeist elements, AI and cognitive science became increasingly dissociated over time. Now, interest in AI among cognitive scientists is enjoying a renaissance—but the interest seems to be in the wrong type of AI, namely AI-as-engineering, which distorts our understanding of cognition and cognitive science. Accordingly,the time is apt to reclaim AI-as-theoretical-psychology as a rightful part of cognitive science. 
As we have argued, this involves embracing all the valuable tools that computationalism provides, but without (explicitly or implicitly) falling into the trap of thinking that we can or should try to engineer human(-like or -level) cognition in practice".
The first part of the book is an awesome teaching resource on classical and parameterized complexity theory, going into detail where appropriate but always making the main points clear to the casual reader.
The second part is where the book really shines. As a cognitive scientist interested in the "rationality vs. heuristics" debate, I found that the answers to common objections really helped me develop some of my intuitions into more theoretically sound arguments.