Navigating Complexity with AI Co-pilot

Living in today's world often feels like navigating through an elaborate maze of complexity and uncertainty.

• We are overloaded with information

• We're flooded with fake information and conspiracy theories

• We face wicked problems & meta-crisis

• Even everyday problems are growing increasingly complex.

Instead of dealing with facts, we are dealing with probabilistic cues and statistical reasoning.
Sure, we get more and more scientific knowledge on every issue.

Yet, it often does not lead us toward greater clarity. Instead, we find ourselves in the realm of cognitive dissonance.
And it is precisely when the world stops making sense that we need sensemaking more than ever!


Sensemaking is a process through which we interpret and create a coherent, actionable narrative out of complex, ambiguous, or unclear situations.

Through this process, we transform scattered data, fragmented information, and raw experience into mental models and shared understanding.
Sensemaking is a crucial part of decision-making and problem-solving.
This process is ongoing and iterative.

We're constantly updating and refining our understanding of the world and our place in it based on new information and outcomes.

Sensemaking Frameworks

Sensemaking frameworks are cognitive tools that help us understand and interpret the world around us. They provide a structure for organizing information, making it easier to navigate complex situations.

There are quite a few useful sensemaking methods and frameworks, including:


• Dervin's method

• Weick's model

• DIKW Pyramid

• CLA (Causal Layered Analysis)

The challenge is that the application of any framework requires a significant cognitive investment. It means you need to deal with a lot of information and perform cognitive activities like inductive and deductive reasoning.

So as we become more reliant on complex technology, we can try to get some assistance from AI, specifically from GPT-4.

Reasoning Machines

One crucial characteristic of GPT: it can reason! Or, to be precise, it can statistically imitate human reasoning.

But that's good enough to outperform humans in many intellectual domains. And its productivity is remarkable.
For the first time in human history, we have unrestricted, real-time access to a human-level AI that can assist us in sensemaking and be our sensemaking co-pilot.

So let’s put its abilities to the test! Let's try to apply CLA (Causal Layered Analysis) to a complex problem.

Teaching GPT-4 to Apply CLA

CLA (Causal Layered Analysis) is a sensemaking framework from futures studies (the interdisciplinary, systematic study of possible, probable, and preferable futures). It was developed by Sohail Inayatullah, a political scientist and futurist.

One of the key principles of CLA: in order to understand complex phenomena, you must go deeper into the layers of causality.

CLA incorporates four layers.

CLA by Sohail Inayatullah

When you apply CLA to a selected phenomenon, you start with the Litany: the obvious things, such as how it's described in popular media.

You then go deeper through Systemic Causes and Worldview, down to the level of Metaphors and Myths. Later in the article I will show you a case study, so you can get a better grasp of the method.
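As a compact reference, the four layers can be captured in a small data structure. This is purely an illustrative Python sketch of my own, not part of Inayatullah's method; the short descriptions paraphrase how the layers are commonly explained.

```python
# Illustrative sketch: the four layers of Causal Layered Analysis,
# ordered from the most visible level to the deepest level of causality.
CLA_LAYERS = [
    ("Litany", "The surface level: headlines, official positions, popular media framing."),
    ("Systemic Causes", "The social, economic, and political structures behind the litany."),
    ("Worldview", "The deeper discourses, ideologies, and stakeholder perspectives."),
    ("Myth/Metaphor", "The unconscious stories and images that anchor the worldview."),
]

def describe(layer_name: str) -> str:
    """Return the short description of a named CLA layer."""
    for name, description in CLA_LAYERS:
        if name == layer_name:
            return description
    raise KeyError(f"Unknown CLA layer: {layer_name}")
```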

And now, I want to see how good GPT-4 is in applying it.
First, I want to know whether ChatGPT is familiar with CLA.

I will ask it directly.

It seems to be; however, this is not the most recent version of the framework.
Let's update GPT's knowledge.

I will use the AskYourPDF plugin to give GPT the most recent version: I will upload the latest CLA guide.

Prompt: Update your understanding of CLA from this document: [link to PDF or DocID]. Summarize it. Give me the updated version of 4 layers that you got from the PDF.
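If you prefer scripting this step over using the ChatGPT plugin, the same idea works through the API: put the guide's text into the context yourself. Below is a minimal sketch under my own assumptions, namely OpenAI's Python client and a `cla_guide.txt` file with text you've extracted from the PDF; neither is part of the original plugin workflow.

```python
def build_update_prompt(guide_text: str) -> str:
    """Build the knowledge-update prompt around the extracted guide text."""
    return (
        "Update your understanding of CLA from this document:\n\n"
        f"{guide_text}\n\n"
        "Summarize it. Give me the updated version of the 4 layers "
        "that you got from the document."
    )

# Hypothetical usage (requires `pip install openai` and an API key):
# from pathlib import Path
# from openai import OpenAI
# client = OpenAI()
# guide = Path("cla_guide.txt").read_text()  # text extracted from the PDF
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": build_update_prompt(guide)}],
# )
# print(reply.choices[0].message.content)
```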


Now it looks better: all the levels came from the most recent guide.

Updating GPT's knowledge of the framework is a very important step! The model was trained on a lot of data and knows many frameworks, yet that knowledge might be outdated or incomplete. So before applying a framework, make sure you've taught GPT its most recent version.

Case Study: The Employee Burnout Problem

We will explore the problem of people's burnout (particularly in large companies).

Employee burnout is a serious problem in today's workplace, particularly with the growing demands of the AI era and the blurred lines between work and home life. At its core, burnout is a state of chronic physical and emotional exhaustion. It often also includes feelings of cynicism, detachment, a lack of accomplishment at work, and a decline in professional efficacy.

I want to see how GPT will deal with analyzing the primary causes of this problem.
But first, let's use another AI tool, Perplexity, which basically allows you to do super-fast desk research to get an initial grasp of the problem.

Here is the prompt for Perplexity:

What are the primary causes of employee burnout?
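The same question can be asked programmatically: Perplexity offers an HTTP API with an OpenAI-compatible chat-completions format. The endpoint URL and the model name below are my assumptions, so check Perplexity's current documentation before relying on them; the sketch only builds the request payload.

```python
def build_perplexity_request(question: str, model: str = "sonar") -> dict:
    """Build a chat-completion payload for Perplexity's (OpenAI-compatible) API.

    The model name "sonar" is an assumption -- check Perplexity's docs
    for the currently available models.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }

# Hypothetical usage (requires `pip install requests` and a Perplexity API key):
# import requests
# resp = requests.post(
#     "https://api.perplexity.ai/chat/completions",
#     headers={"Authorization": "Bearer <YOUR_API_KEY>"},
#     json=build_perplexity_request("What are the primary causes of employee burnout?"),
# )
# print(resp.json()["choices"][0]["message"]["content"])
```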


Nice! Now we have some context for the case at hand.

Let's try to apply CLA using ChatGPT.

Prompt: Act as an expert analyst. Break down the problem of employee burnout into 4 layers of CLA. Think carefully and logically. Give me thorough, detailed and structured output for each layer. Do not give me additional information on layers. Put the output into the table.
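Since this prompt works for any problem statement, it is convenient to wrap it in a small helper if you script your analyses. This is a hypothetical sketch of my own; send the resulting string through whichever chat API you use.

```python
def build_cla_prompt(problem: str) -> str:
    """Compose the CLA-analysis prompt for an arbitrary problem statement."""
    return (
        "Act as an expert analyst. "
        f"Break down the problem of {problem} into 4 layers of CLA. "
        "Think carefully and logically. "
        "Give me thorough, detailed and structured output for each layer. "
        "Do not give me additional information on layers. "
        "Put the output into the table."
    )
```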


I’ve put the short version here, but as you can see, GPT did a pretty decent job! First, it covered all 4 layers correctly without any additional feedback. It also gave a pretty solid variety of factors on the Systemic layer, yet it missed a few important things, like information overload.

Obviously, on this layer it’s better to give the model some more relevant context, or ask it to verify and enrich the results via web search.

Yet even without that, GPT did a pretty comprehensive analysis on the worldview level. (It’s long, so I won’t include it here, but if you are interested, I encourage you to try this experiment for yourself.)

Myths/Metaphors is probably the hardest layer of CLA. In workshops, many people struggle here, so GPT assistance can be extremely valuable.

The metaphor of the Inexhaustible Machine is quite relevant, yet a bit obvious. That’s not GPT’s fault; in this case the metaphors simply are rather straightforward.

I've explored several more complex cases, and I can say that GPT's work with non-trivial metaphors was incredible!

We can follow the CLA method and try to visualize the Inexhaustible Machine metaphor. We will use another AI for that: Midjourney.

Inexhaustible Machine by Midjourney

Visual stimuli are incredibly helpful in understanding problems.
And now we're through the first part of the CLA method!

And it took us only a few minutes.


I've been using AI as a sensemaking co-pilot for over a year now. It has been quite useful in a number of highly complex cases. And I'm excited about a future where AI democratizes high-end analysis, allowing people to understand complex problems better and, as a result, make more reasonable decisions.

Yet, it is not magic. This sort of work requires certain skills. And I think AI-enhanced sensemaking will be the ultimate meta-skill of the twenty-first century.


Andrew Altshuler is a researcher turned consultant & educator.
He helps people and businesses transform the chaos of information into an efficient, AI-optimized knowledge system. You can find more about Andrew's work at:
