Cognitive Assessment

Cognitive Reflection Test

Seven short puzzles. Each one looks simple, and each invites an obvious answer that happens to be wrong. Find out whether you answer with your gut or catch yourself first.

7 questions · ~5 minutes · Free

Based on research by Frederick (2005) and Toplak et al. (2014)

Question 1 of 7

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

Question 2 of 7

If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Question 3 of 7

In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

Question 4 of 7

If you're running a race and you pass the person in second place, what place are you in?

Question 5 of 7

A farmer had 15 sheep and all but 8 died. How many are left?

Question 6 of 7

Emily's father has three daughters. The first two are named April and May. What is the third daughter's name?

Question 7 of 7

How many cubic feet of dirt are there in a hole that is 3 feet deep, 3 feet wide, and 3 feet long?

The Science Behind the Test

The Cognitive Reflection Test was introduced by Shane Frederick in 2005 and has become one of the most cited measures in behavioral science.

From the Research

"This paper introduces a three-item 'Cognitive Reflection Test' (CRT) as a simple measure of one type of cognitive ability — the ability or disposition to reflect on a question and resist reporting the first response that comes to mind."

— Frederick, S. (2005). Cognitive Reflection and Decision Making. Journal of Economic Perspectives, 19(4), 25-42.

Frederick found that CRT scores predict real-world decision patterns. People who scored higher tended to be more patient with financial choices and less likely to fall for common reasoning traps.

The test was given to over 3,400 people across universities including MIT, Princeton, and Harvard. Even at MIT, only about 48% of respondents answered all three original questions correctly.

System 1
Fast thinking
  • Automatic and effortless
  • Runs on pattern matching
  • Produces quick "gut" answers
  • Often right for familiar tasks
  • Gets tricked by CRT questions
System 2
Slow thinking
  • Deliberate and effortful
  • Follows logical steps
  • Catches errors from System 1
  • Requires concentration
  • Produces correct CRT answers

The CRT is built on dual-process theory, popularized by Daniel Kahneman in Thinking, Fast and Slow. Each puzzle is designed so that System 1 (fast, automatic thinking) generates a wrong answer that feels right. Getting the correct answer requires System 2 (slow, deliberate thinking) to step in and override that first impulse.

The CRT measures whether you tend to accept System 1's quick answer or engage System 2 to check the work.

From the Research

"A major strength of the CRT is that it is a direct measure of miserly processing as opposed to a self-report measure... and that the CRT goes beyond measures of cognitive ability by examining the depth of processing that is actually used."

— Toplak, M. E., West, R. F., & Stanovich, K. E. (2011). The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition, 39(7), 1275-1289.

The original CRT had only three questions. Toplak, West, and Stanovich (2014) expanded it to 7 items, adding four new questions that test the same reflective ability using word puzzles and logic riddles instead of math alone. The expanded version has better reliability (Cronbach's alpha of roughly 0.72) and broader coverage of cognitive reflection.
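For readers curious what that reliability figure means in practice, here is a minimal sketch of how Cronbach's alpha is computed from a matrix of per-item scores. The `sample` response matrix below is invented illustration data, not published results; a real analysis would use a statistics library rather than hand-rolled variance.

```python
# Cronbach's alpha for binary CRT item scores (1 = correct, 0 = incorrect).
# Formula: alpha = k/(k-1) * (1 - sum(item variances) / variance of totals).

def cronbach_alpha(responses):
    """responses: one list per person, one 0/1 score per item."""
    k = len(responses[0])          # number of items
    def variance(xs):              # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Made-up example: five respondents, seven items each.
sample = [
    [1, 1, 1, 1, 1, 1, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0, 0, 0],
]
print(round(cronbach_alpha(sample), 2))  # prints 0.89 for this made-up sample
```

An alpha near 0.72, as reported for the expanded CRT, indicates the seven items hang together reasonably well as a single scale.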

Thomson and Oppenheimer (2016) also developed alternative CRT questions, specifically to address the problem that many people had already seen the original three puzzles. Their items use verbal and logical tricks rather than numerical ones, which reduces the advantage of strong math skills.

How We Built This Version

This assessment combines questions from the published CRT literature. Questions 1 through 3 are the original items from Frederick (2005). Questions 4 through 7 draw from the expanded items used by Toplak et al. (2014) and Thomson & Oppenheimer (2016). All items have been published in peer-reviewed journals and are widely reproduced in academic and educational contexts.

Multiple-choice format. The original CRT uses open-ended responses. We present four options per question, including the common intuitive wrong answer. Research shows that multiple-choice versions still measure the same construct (see SJDM DMIDI documentation on CRT multiple-choice versions).

Scoring: Each correct answer earns 1 point. Your total score ranges from 0 to 7. There are no subscales, no reverse scoring, and no weighting. Higher scores mean more reflective responses; lower scores mean more intuitive responses.
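The scoring rule above is simple enough to sketch directly: one point per correct answer, no weighting, total from 0 to 7. The answer key values here are hypothetical placeholders, not the real key.

```python
# One point per matching answer; total ranges from 0 to 7.
ANSWER_KEY = ["A", "C", "B", "D", "A", "B", "C"]  # placeholder options, not the real key

def score(responses):
    """responses: list of 7 chosen options; returns total correct, 0-7."""
    return sum(1 for given, correct in zip(responses, ANSWER_KEY) if given == correct)

print(score(["A", "C", "B", "A", "A", "B", "D"]))  # 5 of 7 match the placeholder key
```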

What this test is not: The CRT is not an IQ test. While CRT scores do correlate moderately with measures of cognitive ability, the test specifically measures a cognitive style (the tendency to reflect rather than go with your gut), not raw intelligence.

Sources: Frederick (2005), J. Economic Perspectives; Toplak, West & Stanovich (2014), Thinking & Reasoning; Thomson & Oppenheimer (2016), Judgment and Decision Making.

How People Actually Score

Frederick (2005) tested over 3,400 people on the original 3-item CRT across several universities. Here is how students at MIT performed:

  3/3 correct: 48%
  2/3 correct: 30%
  1/3 correct: 16%
  0/3 correct: 7%

Data: Frederick (2005), MIT sample (N=61). Average score across all samples ranged from 1.24 to 2.18 out of 3.

At less selective universities, average scores were lower. Princeton students averaged 1.63 out of 3, Carnegie Mellon averaged 1.51, and Harvard averaged 1.43. The bat-and-ball question alone trips up more than half of students at top schools.

Related Assessments

The Cognitive Reflection Test measures a specific cognitive style: whether you override intuitive responses. Other tools on Coached explore related territory from different angles:

Need for Cognition Scale: measures how much you enjoy thinking and engaging with complex problems. Where the CRT tests whether you do reflect, the Need for Cognition Scale asks whether you like to.

Emotional intelligence: covers a different domain entirely. The CRT measures reflective thinking on logical puzzles, while emotional intelligence looks at how well you read and manage emotions.

About This Assessment

This is a free, educational self-assessment tool based on published academic research. It is not affiliated with Shane Frederick, Yale University, or any commercial test publisher.

Questions 1-3 are from Frederick (2005), published in the Journal of Economic Perspectives. Questions 4-7 draw from Thomson & Oppenheimer (2016) and Toplak et al. (2014). These items appear in peer-reviewed journals and are widely reproduced in educational and research settings. We use them under fair use with full citation.

This is a self-reflection and educational tool. It is not a clinical assessment, not an IQ test, and should not be used for hiring, admissions, or any high-stakes decision. Your score reflects how you approached these specific puzzles on this occasion.

Research shows that up to half of participants have already seen classic CRT questions before. If you knew some of these puzzles, your score may be higher than it would otherwise be. The test works best the first time you encounter the questions.