metacognition in llm use

26 March 2025 - Hugo O'Connor

An image of a garden

I've been reflecting lately on how our relationship with large language models mirrors our relationship with our own minds. When I converse with an LLM, I'm not just generating text—I'm engaging in a form of metacognition, that act of thinking about thinking.

Consider how we've surrendered our spatial awareness to GPS navigation. I once knew how to navigate between the suburbs of my city; now I follow digital commands without question, even though the toll roads promoted as the default route are costly, disorienting, and often shave only a few minutes off the trip. Have we lost something essential in this exchange? Our cognitive maps atrophy while we gain convenience.

The same pattern emerges with LLMs. Without metacognitive awareness—planning our queries, monitoring responses, evaluating outcomes—we risk outsourcing not just information retrieval but critical thinking itself, and with it our agency.

What fascinates me is the recursive loop created in thoughtful LLM interactions. Each prompt becomes a mirror reflecting our thought patterns back to us. This is why I made llm-md: to make that loop explicit by structuring conversations as visible thinking processes rather than black-box oracles, constructing the machinery of thought from large chunks of text rather than a mere assemblage of words.

llm-md is a tool to build that thought machinery as your own.
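As an illustration only (the section names and layout below are hypothetical, not llm-md's actual file format), a conversation structured as a visible thinking process might look like a plain markdown file where planning, prompting, and evaluation each get their own heading:

```markdown
<!-- hypothetical conversation file; the headings are illustrative,
     not llm-md syntax -->

## plan
What do I actually want to learn from this exchange,
and how will I judge whether the answer is good?

## prompt
Explain the trade-offs of relying on GPS navigation
for routes I already know.

## response
(model output appears here)

## evaluate
Did the response address the trade-offs I cared about?
What should the next prompt probe?
```

The point of a layout like this is that the planning and evaluation steps, usually silent, sit on the page next to the prompt itself.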

The most powerful AI experiences come not from passive consumption but from becoming architects of our interactions. I invite you to explore llm-md not merely as another tool, but as a canvas for deepening your metacognitive practice with AI—a space where the conversation between minds becomes visible, intentional, and ultimately, transformative.

curl -fsSL https://llm.md/install.sh | bash

--

I used llm-md to create this post.

(thanks {m3} for the discussion that informed this post :))
