In my post of 5/5/2012, I referred to a discussion of a 2010 AMA Critical Skills Survey in which executives were asked several questions about employees’ competencies at “21st Century skills.” The survey was sponsored in part by P21, an educational reform organization leading an effort to integrate “the Four Cs” into the K-12 system via curriculum reform. The P21 movement has vocal critics over at Common Core, among them E. D. Hirsch, Jr., author of The Schools We Need and Why We Don’t Have Them. Their objections rest on skepticism about the transferability of critical thinking skills across subject domains (a transferability they say P21 takes for granted). I think they overstate the case.

The crux of the argument is articulated by Hirsch as follows:

There are many reasons for the difficulty of transferring critical thinking and other 21st-century skills from one domain to another, but here’s a decisive reason. A central feature of such skills is the drawing of inferences. … But inference-making is not a purely formal process. When the skill fails it’s usually because information is lacking. Inference-making can be described as supplying missing premises from one’s own prior knowledge in order to complete a kind of syllogism. The purely transferable elements of thinking skills turn out to be minor elements that are easily acquired. What really counts is relevant knowledge about the problem at hand. In the scientific literature the key term is “domain-specific knowledge.” Being a problem solver in one domain does not automatically make you skilled in another.

If the authors are right, the transferability of these skills across subject domains is more difficult than we might wish, because a lot more substantive content has to come along for the ride. The authors acknowledge that you have to be thinking about something, so some knowledge is presupposed in the critical thinking process. But their point is deeper: a rich background of substantive knowledge about a subject domain enables you to see the logical structure of the problems and arguments in that domain, and therefore to solve problems and assess arguments that you otherwise could not. By apprehending the structure of the problems in question, you can bring the appropriate cognitive tools to bear on them. By contrast, if you have only a formal understanding of the methods of critical thinking, it will not help you much when you face an entirely unfamiliar area.

This perspective raises a few questions.

First, contra Hirsch, there are plenty of people who have not mastered the skill of drawing inferences, despite their deep knowledge of relevant subject matter and disciplinary lingo. Anyone who works in the corporate world can see this in practice every day. There is a certain ad hoc quality to Hirsch’s reasoning here: wherever an inference fails, it is chalked up to yet another crucial piece of missing information that prevented the completion of “a kind of syllogism.” Why not admit that some people find reasoning harder than others and need targeted skill development to bolster their logical competencies, even when dealing with problems within their area of expertise?

Another article on Common Core by Daniel T. Willingham explores this idea and connects it to the Wason Selection Task experiments, often used by cognitive scientists to argue for the highly contextual nature of human reasoning. One version of the Wason Selection Task involves cards with a single digit (odd or even) on one side and a single letter (vowel or consonant) on the other. Given four cards, each showing either its letter or its number to the viewer, and a posited “rule” by which the cards were marked, we are supposed to figure out which cards (and how few of them) must be turned over to verify that the cards are marked according to the “rule.” It turns out that if the rule is in the form of a material conditional (“if X, then Y”), people perform this task badly: at most 15% of college students get the right answer. Yet we see much better results with a logically analogous problem in which the rule involves a real-life conditional dependency of Y upon X (e.g., a permission) rather than just a logical partitioning of the values of the variables X and Y. Why is this the case?
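
To make the abstract version concrete, here is a minimal sketch in Python, assuming the classic setup rather than any particular version Willingham discusses: four cards showing E, K, 4, and 7, each with a letter on one side and a digit on the other, and the rule “if a card has a vowel on one side, then it has an even number on the other side.” The card labels and helper functions are mine, for illustration only.

```python
# A brute-force check of the classic card task: a card must be turned over
# exactly when some possible hidden face could falsify the rule
# "if a card has a vowel on one side, it has an even number on the other."

VOWELS = set("AEIOU")

def is_vowel(face):
    return face.isalpha() and face.upper() in VOWELS

def is_even_digit(face):
    return face.isdigit() and int(face) % 2 == 0

def violates(side_a, side_b):
    """True if this pairing of faces is a counterexample to the rule."""
    return (is_vowel(side_a) and not is_even_digit(side_b)) or \
           (is_vowel(side_b) and not is_even_digit(side_a))

def must_turn(visible):
    """Letters hide digits and digits hide letters in this setup."""
    hidden = [str(d) for d in range(10)] if visible.isalpha() \
             else [chr(c) for c in range(ord("A"), ord("Z") + 1)]
    return any(violates(visible, h) for h in hidden)

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # -> ['E', '7']
```

The brute-force check agrees with the standard answer: only the vowel card and the odd-number card could possibly reveal a violation, so those are the only two that need to be turned.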

Willingham’s explanation is that “If the person recognizes the problem for what it is, the person will succeed. Otherwise, he or she will not. And recognizing the structure of the problem often relies on prior knowledge.”

Is this explanation plausible? In both the card-selection version of the puzzle and the one incorporating a real-life conditional dependency, people tend to get half of the answer right: the part requiring a modus ponens inference. The difficulty is the modus tollens part, i.e., if X, then Y; ~Y; therefore ~X. Does prior knowledge really account for the difference in performance? I don’t think so. A better explanation is that (a) modus tollens reasoning is harder than modus ponens, because negation takes more cognitive effort than affirmation; (b) modus tollens requires people to conceptualize what is not perceptually given; and (c) the presence of a common-sense, real-life conditional dependency makes the hypothetical reasoning more intuitive. In short, there is no need to invoke “domain-specific” knowledge to account for the difference.
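
For reference, the two inference forms at issue can be set out schematically (my notation, not Willingham’s). In the card task, turning over the vowel card exercises the modus ponens direction, while turning over the odd-number card is the modus tollens step that most people miss.

```latex
% Modus ponens (MP):  from X -> Y and X, infer Y.
% Modus tollens (MT): from X -> Y and not-Y, infer not-X.
\[
\text{MP: } \frac{X \rightarrow Y \qquad X}{Y}
\qquad\qquad
\text{MT: } \frac{X \rightarrow Y \qquad \neg Y}{\neg X}
\]
```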

Why is there no version of the Wason Selection Task that anywhere near 100% of people get right? Because not everyone has an explicit or even implicit understanding of modus tollens (or of the truth table for the material conditional). Without that understanding, one cannot recognize the structure of the problem. Recognizing the structure of the problem most often relies on having previously seen a logically parallel problem in a different context, and on recognizing that the abstract inferential requirement for getting the right answer is the same in both.
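
A quick illustration, again a sketch of my own rather than anything from the article: the truth table for the material conditional shows that “if X, then Y” is false only when X is true and Y is false, and that single falsifying row is precisely what the modus tollens step of the card task is probing for.

```python
# Truth table for the material conditional "if X, then Y".
# It is false only in the row where X is true and Y is false -- the one
# combination that a card in the task could reveal as a rule violation.
from itertools import product

for x, y in product([True, False], repeat=2):
    implies = (not x) or y  # truth-functional reading of "if X, then Y"
    print(f"X={x!s:<5}  Y={y!s:<5}  X->Y={implies}")
```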

As I said, some problems requiring hypothetical reasoning are more intuitive than others, so people are more likely to get the right answer on problems that parallel the Wason Selection Task in those cases. Yet in other instances, reliance on intuitive reasoning schemata rather than on an abstract grasp of the relevant inference rule is exactly the problem. People’s intuitive reasoning schemata are often unreliable heuristics that lead to the wrong answer. Besides blind guessing, that could explain the high failure rate on the “easy” version of the puzzle.

None of this should be taken to imply that critical thinking should be taught in a context-free manner. When teaching critical thinking, it is nearly always a good idea to have learners supply their own areas of interest or expertise as the background knowledge base. Abstractions like modus tollens need to be concretized in ways that are easier for learners to grasp, whether the concretization takes advantage of a learner’s expertise, or uses more intuitive examples drawn from common knowledge, such as permission schemata. In any case, thinking skills do not turn out to be “minor elements that are easily acquired.”

The detractors at Common Core may have other legitimate reasons for their objections to P21, but their objection to the “transferability” of critical thinking skills, grounded as it is in a strong thesis of domain-specific knowledge dependency, seems quite misplaced.