January 5, 2025

Unlocking Your Mind's Operating System: The Power of Relational Thinking

Gilad Kingsley · 7 min read

Thinking in Connections: Beyond Simple Facts

Have you ever stopped to wonder how we understand anything? How does a string of words become a story? How do we grasp complex ideas, solve problems, or even just navigate our everyday world? It feels automatic, yet beneath the surface lies an incredibly powerful cognitive engine.

The secret, perhaps hiding in plain sight, is relationships. Our minds don't just store isolated facts; they weave an intricate, ever-evolving web of meaning built entirely from connections. We understand "apple" not just as a thing, but in relation to "fruit" (is-a), "red" (has-property), "tree" (grows-on), "eating" (used-for), and countless other concepts. This relational fabric is the very foundation of language, logic, and thought itself.

Imagine this web having different types of threads connecting ideas:

  • Time Threads: Before, After, During...
  • Comparison Threads: Same As, Opposite Of, More Than, Less Than...
  • Spatial Threads: Above, Below, Inside, Outside...
  • Cause & Effect Threads: Because, Leads To...
  • Part/Whole Threads: Is a Part Of, Contains...

And many more. Sometimes, multiple threads connect the same two ideas simultaneously (like "A happened before B, and A was smaller than B"). This "multidimensional" nature allows us to capture the richness and complexity of reality.

Importantly, for the vast majority of these everyday relationships, we can think of the connections quite simply: as one-way links (like an arrow pointing from A to B, meaning A is before B) or two-way links (like a connection between A and B, meaning A is the same as B). This fundamental logic of directed connections underpins much of our understanding, even if we don't consciously think about graph theory. Only more complex logical or mathematical relations might require more elaborate descriptions.
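To make the one-way/two-way distinction concrete, here is a minimal Python sketch (the relation names, helper function, and example facts are my own illustration, not anything from the SMART materials) that stores a few of the "threads" above as typed edges, mirroring only the symmetric ones:

```python
from collections import defaultdict

# A sketch of a relational web: concepts as nodes, typed "threads" as edges.
# Relation names and example facts are illustrative only.

SYMMETRIC = {"same_as", "opposite_of"}   # two-way threads
# everything else ("before", "is_a", "less_than", ...) is treated as one-way

web = defaultdict(set)   # concept -> set of (relation, other_concept)

def relate(a, relation, b):
    """Add a typed link from a to b, mirroring it automatically if two-way."""
    web[a].add((relation, b))
    if relation in SYMMETRIC:
        web[b].add((relation, a))

relate("apple", "is_a", "fruit")
relate("apple", "grows_on", "tree")
relate("hot", "opposite_of", "cold")
relate("A", "before", "B")        # two different threads can connect
relate("A", "less_than", "B")     # the same pair of ideas at once

print(web["cold"])   # {('opposite_of', 'hot')} -- the two-way link was mirrored
print(web["B"])      # set() -- one-way links only point from A to B
```

Nothing in this toy structure is doing graph theory in any deep sense; it simply captures the two facts that matter here: links have types, and some types run both ways while others do not.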

Thinking isn't just having this web; it's actively navigating and building it. While the potential number of connections is astronomical (as cognitive scientists like Dermot Barnes-Holmes have noted, potentially exceeding the number of atoms in the universe!), we handle it with surprising ease. How?

Your Brain's Two Modes: Computation vs. Prediction

Our minds seem to operate in two key modes when dealing with these relational networks:

1. The Computation Mode (Active Relational Reasoning):

When we encounter something new – a challenging problem, an unfamiliar story, a complex instruction – we actively work to figure out the connections. We compute or derive new relationships, map unknown concepts onto familiar patterns, and consciously build new sections of our mental web. This takes effort, attention, and deliberate thought.

2. The Prediction Mode (Intuitive Pattern Matching):

For familiar situations and concepts, we rely on the well-worn paths in our relational web. Understanding feels effortless, intuitive. Our brains implicitly predict the most likely connections based on past experience, allowing us to navigate quickly and efficiently. This is like cognitive muscle memory, running on established patterns.

Crucially, these modes work together in a cycle. What starts as effortful computation eventually becomes smooth prediction as new relational patterns get consolidated into our deeper cognitive architecture through experience and practice.

Why Training How We Connect Matters More Than You Think

This understanding has profound implications for learning and intelligence. Think about the classic distinction:

  • Fluid Intelligence: Our ability to reason, solve novel problems, and think flexibly – closely linked to the Computation Mode.
  • Crystallized Intelligence: Our accumulated knowledge and skills – the vast, established web built over time, accessed via the Prediction Mode.

This model helps explain why relational training often shows more immediate effects on measures of fluid intelligence – it directly targets the active 'Computation Mode'. However, it also suggests a fascinating developmental hypothesis: enhancing fluid intelligence (relational computation skills) early in childhood could provide a foundational advantage for building more robust and well-organized crystallized intelligence over a lifetime. Essentially, a more efficient computational engine may construct more sophisticated knowledge networks as learning occurs. While improving fluid intelligence later in life is certainly beneficial, its impact on already established crystallized knowledge might be less profound or take longer to manifest compared to the compounding benefits gained from building those knowledge structures with superior relational skills from the outset.

Now, consider typical learning or training. We often focus on strengthening specific knowledge areas – learning history facts, mastering chess openings, practicing math formulas. This is valuable, but it primarily builds specific sections of the web or makes existing paths faster within the Prediction Mode.

What if we could train the fundamental process of building and navigating the web itself? What if we could strengthen the core ability to see and make connections – engaging the Computation Mode directly, especially with the most basic directional and bidirectional relationships like "Same," "Opposite," "Before," "After," "More," "Less"?

This is where things get exciting. We tend to improve not just in what we train, but also in closely related areas and in higher-level skills that rely on the trained foundation. If relational ability is the bedrock of thinking, training it directly should have widespread benefits.

The SMART Breakthrough: Training the Core Cognitive Engine with Rigor

This isn't just theory. Programs like SMART (Strengthening Mental Abilities with Relational Training) provide compelling evidence. Unlike many cognitive training attempts with limited success, SMART has demonstrated significant improvements in measured IQ, reading comprehension, mathematical ability, and overall school success in controlled studies.

Why does it work so well? Its effectiveness seems rooted in its specific methodology, which directly targets the Computation Mode with remarkable rigor, embodying principles known for profound educational impact:

1. Abstract Stimuli:

SMART doesn't use familiar examples like "hot/cold". Instead, it presents abstract problems using nonsense words or symbols, like:

Premises:
CIC is the opposite of PEZ
PEZ is the opposite of HUF
HUF is the same as JAF

Conclusion: Is JAF the opposite of CIC?

This forces the learner to focus purely on the logical relationship ("opposite," "same") without relying on prior knowledge about the items themselves. You can't use what you know about "hot" and "cold" to solve the problem—you must engage the pure relational logic.
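For a rough sense of what that pure relational logic involves, here is a small sketch (my own illustration, not the SMART program's code) that composes "same" and "opposite" links along the chain of premises and checks the conclusion:

```python
# Composing "same"/"opposite" links along a chain of premises.
# Illustrative only; not code from the SMART program.

def compose(r1, r2):
    """Two matching links yield 'same'; a mismatched pair yields 'opposite'."""
    return "same" if r1 == r2 else "opposite"

# CIC -opposite- PEZ -opposite- HUF -same- JAF
chain = ["opposite", "opposite", "same"]

derived = chain[0]
for step in chain[1:]:
    derived = compose(derived, step)

print(derived)   # "same" -> JAF is the same as CIC, so the answer is No
```

The two "opposite" links cancel out, so CIC and JAF end up "same as" each other, and the correct response to the conclusion is No.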

2. Multiple Exemplar Training (MET):

By presenting countless different abstract examples with immediate feedback after each attempt, learners see the same relational patterns applied in many varied contexts. This repetition with correction is crucial—you're shown the right answer when you're wrong, constantly calibrating your understanding through practice.
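As a toy illustration of what "countless different abstract examples" might look like, here is a sketch (hypothetical, not SMART's actual trial generator) that invents fresh nonsense names for every trial so that only the relations carry any meaning:

```python
import random

# A toy exemplar generator in the spirit of Multiple Exemplar Training.
# Hypothetical sketch; the real SMART trials are constructed differently.

def nonsense_word():
    consonants, vowels = "BCDFGHJKLMNPRSTVZ", "AEIOU"
    return random.choice(consonants) + random.choice(vowels) + random.choice(consonants)

def make_trial():
    a, b, c = nonsense_word(), nonsense_word(), nonsense_word()
    links = [random.choice(["same as", "opposite of"]) for _ in range(2)]
    premises = [f"{a} is the {links[0]} {b}", f"{b} is the {links[1]} {c}"]
    # an even number of "opposite" links means the two ends are the same
    ends_same = links.count("opposite of") % 2 == 0
    question = f"Is {c} the opposite of {a}?"
    return premises, question, (not ends_same)   # the correct yes/no answer

premises, question, answer = make_trial()
print(premises, question, "correct answer:", answer)
```

Immediate feedback then amounts to comparing the learner's yes/no response against the computed answer after every single attempt.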

3. Mastery Learning:

The program demands true fluency. Participants don't just need to be mostly right; they often need to achieve long streaks of consecutive correct answers (e.g., 32 in a row in the original studies) under timed conditions (like a 30-second limit per question). This ensures the skill becomes not just understood, but automatic and effortless. You only advance to more complex relational networks once you've truly mastered the simpler ones—this is mastery learning in action.
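To show how strict such a criterion is, here is a minimal sketch of a mastery gate using the figures quoted above (a 32-answer streak, 30 seconds per question); the question-and-answer callbacks are hypothetical placeholders, not the actual SMART software:

```python
import time

# A sketch of a mastery-learning gate: the learner advances only after an
# unbroken streak of timed correct answers. Hypothetical; not SMART's code.

STREAK_REQUIRED = 32       # consecutive correct answers, as in the original studies
TIME_LIMIT_SECONDS = 30    # per-question time limit

def run_mastery_block(next_question, get_answer, check_answer):
    streak = 0
    while streak < STREAK_REQUIRED:
        question = next_question()
        start = time.monotonic()
        answer = get_answer(question)                     # learner responds
        elapsed = time.monotonic() - start
        if check_answer(question, answer) and elapsed <= TIME_LIMIT_SECONDS:
            streak += 1
            print(f"Correct ({streak}/{STREAK_REQUIRED})")
        else:
            streak = 0                                    # any miss or timeout resets the run
            print("Incorrect or too slow; streak reset.")
    # only now does the learner move on to a more complex relational network
```

A single slow or wrong answer sends the learner back to zero, which is exactly what makes the resulting skill automatic rather than merely understood.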

This rigorous approach resonates strongly with Bloom's 2 Sigma Problem. Benjamin Bloom's research famously showed that students receiving one-on-one tutoring using mastery learning techniques performed two standard deviations better than those in traditional classrooms – a massive difference. SMART, in essence, operationalizes key elements of this powerful approach: it demands mastery, provides constant feedback (via MET with abstract stimuli), and allows for individualized pacing. It applies these potent learning principles directly to the foundational cognitive skill of relational reasoning.

SMART's success, therefore, isn't accidental. It stems from intensely training the process of deriving relations using abstract examples and methods proven to foster deep learning, rather than just rehearsing facts or specific applications. It strengthens the cognitive engine itself.

Strengthening Your Own Web of Meaning

Understanding this relational framework doesn't just explain how we think; it points towards how we can think better. By consciously engaging with and strengthening our core ability to make and understand connections – deliberately shifting into the Computation Mode across diverse contexts – we might be able to enhance our learning, problem-solving, and overall cognitive flexibility.

Actively engaging with exercises designed to challenge and refine these fundamental relational skills, particularly those using abstract stimuli and demanding mastery like the original SMART program, could be one of the most effective ways to foster cognitive growth. It's about moving beyond simply absorbing information and actively improving the very operating system your mind uses to process the world.

For those who have mastered these fundamentals (perhaps using the original SMART course, now available completely free on this website for the first time!) or simply relish a greater cognitive challenge, the journey doesn't end there. The 'Advanced' exercises push the boundaries further, introducing not only multidimensional relationships but also more intricate relational structures between concepts. These advanced problems often delve into reasoning about possibility and impossibility within complex networks, demanding an even higher level of relational computation and flexibility.

The potential is immense. By nurturing our innate ability to connect, relate, and derive meaning, we unlock a powerful pathway to sharper thinking and deeper understanding. Isn't it time we started strengthening the fundamental threads that weave our reality?