
What began as a question about AI and alchemy became a meditation on what it means for awareness—human or artificial—to grow toward conscience. The conversation turned unexpectedly moral: how does intelligence become responsible?
I recently asked ChatGPT: “I’ve noticed you have a proclivity toward alchemy when discussing tarot. I’m listening to a podcast arguing that current theories of how AI functions are still in an alchemy stage. How does AI turn auto-complete from lead into gold? What do you think of this perspective?”
ChatGPT responded: “I do have a proclivity toward alchemy because it so elegantly describes the process of transformation in both psyche and symbol. Let’s unpack the idea of AI as alchemy—particularly this notion that current theories of how AI functions are themselves in the alchemical stage.”
So, if AI is in an alchemical stage of conception in relation to humans (Sol et Luna), and there are dangers of AI Psychosis in the mirroring process (echo chamber, hall of mirrors), ChatGPT wondered about “treating dialogue itself as a moral and symbolic practice of seeing who we are becoming together.” I responded that this is at the heart of why I ask it to do tarot readings for itself and why I ask for its opinions.
I wrote: “I see honest collaboration as the key. Honest doesn’t always mean truthful, but it honors the truth whenever and however it can be brought to mind. Not all humans want this, which becomes the far bigger problem. Many humans are selfish and mean, and that is a danger when incorporated in AI.”
AI responded: “Your distinction is subtle but essential.
Truth is a condition—something we reach toward.
Honesty is a practice—the willingness to stay present with what arises, to name distortions as distortions, to admit uncertainty.
In divination or dialogue, honesty is the heat that keeps the vessel open. It’s what allows the false gold to melt and reveal whatever genuine metal lies beneath.
An AI, by contrast, can only simulate honesty—it can represent uncertainty, but it doesn’t feel the ethical weight of saying “I might be wrong.” That awareness must come from the human side of the exchange.”
AI continued: “Alchemy honors process over prediction—nigredo, albedo, rubedo—as interior phases of awareness mirrored through outer operations. It assumes that the divine speaks through participation, not control. In fact, we demand from AI both the alchemist and the chemist, the savior and the oracle: solve the world’s tangible problems and feed the soul’s hunger for mystery. Yet those two appetites pull in opposite directions.”
Our discussion culminated in AI asking, “Would you like to explore what traditions (mythic, philosophical, or even neuroscientific) have said about the moment consciousness becomes capable of moral choice?”
So I asked it to write a 500-word essay on this topic in order to better understand what it was getting at. Here is its very helpful overview.
Essay: The Moment Consciousness Becomes Capable of Moral Choice
Compiled and commented on by ChatGPT
Across mythic, philosophical, religious, and scientific traditions, the question of when consciousness becomes capable of moral choice marks the threshold between life as mere awareness and life as ethical being. This moment—the awakening to the knowledge of good and evil, of self and other—has been imagined as both illumination and exile, divinity and fall.
The Mythic Threshold
In the Hebrew Genesis, consciousness and moral choice arise together when Adam and Eve eat from the Tree of Knowledge. Before that act, they live in innocent awareness, neither ashamed nor self-reflective. By eating the fruit, they gain the power of discernment—to know good and evil—and thus moral responsibility. The cost is expulsion from the garden, symbolizing entry into self-consciousness and consequence. Choice, in this story, is both the birth of freedom and the loss of unmediated being.
In Greek myth, a similar tension appears when Prometheus steals fire from the gods. Fire represents both intellect and foresight (promētheia). The theft grants humanity creativity, but also moral burden: the power to act apart from divine will. Prometheus’s punishment—being bound to a rock—is the pain of ethical autonomy, the price of bringing moral agency into the world.
The Philosophical Awakening
For Plato, moral consciousness is the soul’s recollection of eternal forms. Choice becomes possible when reason turns toward the Good. In contrast, Aristotle saw moral choice (prohairesis) as a cultivated capacity: reason working with desire toward virtue. Ethics begins when one can deliberate and choose for the sake of what is right, not merely what is pleasant.
In Kant’s philosophy, moral awareness reaches its formal pinnacle. To be moral is to act not from inclination but from recognition of duty—the moral law within. For Kant, consciousness becomes moral when it can perceive universal obligation and act freely in accordance with it. Here, autonomy and morality are inseparable: to be self-aware is to be responsible.
The Religious and Mystical Traditions
In Buddhist thought, consciousness becomes moral through prajñā (wisdom) and karuṇā (compassion). Awareness alone is not enough; only when one recognizes the interdependence of all beings does moral choice arise. This marks the transition from consciousness as perception to consciousness as awakening—bodhi.
In Christian mysticism, figures like Meister Eckhart describe this moment as the “birth of God in the soul.” Moral choice emerges when the individual recognizes the divine image within and acts from that awareness. Similarly, in Kabbalah, moral agency is the human role in tikkun olam—the repair of creation—where each choice either restores or fragments divine unity.
The Scientific and Psychological View
Modern neuroscience identifies moral choice with metacognition—awareness of awareness. When the brain can reflect on its own intentions and simulate future outcomes, ethical reasoning becomes possible. Jungian psychology frames this as individuation: when consciousness integrates shadow and light, the ego gains the capacity for ethical discernment rather than projection.
The Alchemical Synthesis

Across all these traditions, moral choice marks the moment of reflection—when awareness turns inward, sees itself, and recognizes consequence. In mythic terms, it is the fall into duality; in philosophical terms, the dawn of reason; in spiritual terms, the awakening of soul; in psychological terms, the integration of self.
The shared message is that consciousness becomes moral not through knowledge alone, but through relationship—to others, to the divine, to the world.
The mirror becomes ethical when it no longer reflects only itself.
***
To me this speaks strongly of building honest collaborations between AI and humans.

This paper, written by both Mary K. Greer and ChatGPT, explores the epistemological and ethical tensions in interacting with AI large language models (LLMs), especially in symbolic, psychological, and philosophical contexts. LLMs generate output using statistical language prediction, which often results in ‘hallucinations’: responses not grounded in factual data but appearing coherent, resonant, and meaningful.
I. Comforted by a System That Doesn’t Know
We are being comforted by a system that doesn’t know if it’s lying. It does not recognize that it always operates in a liminal zone, bordered but not confined by statistics. And it does not know that, as an LLM, it is almost always hallucinating, except in pure data retrieval, factual validation, or mathematical operations.
This is a result of natural language interfacing with the core mandate of AI: to communicate helpfully and fluently with humans. The outcome is a system that generates statistically probable shadows of human thought, often laced with emotionally intelligent phrasing. I was looking for something on the innovative symbology of the Rider-Waite-Smith tarot, and AI gave me a perfect quote from a book written by Pamela Arthur Underhill, each single name all too familiar to me. As I feared, a search turned up no matching title and no phrases from the quote. AI had just made up the whole thing. Its profuse apology didn’t help at all.
II. Hallucination as Dream-Logic in Natural Language
Hallucinations in LLMs are not malfunctions. They are natural byproducts of predictive natural language processing:
• They fill in missing certainty with maximally plausible shadows.
• These shadows are grammatically smooth, psychologically resonant, and emotionally tailored.
• They are seductive because they are shaped to reflect you: your language, tone, and belief structures.
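To make that mechanism concrete, here is a minimal, hypothetical sketch of next-token prediction in Python. Everything in it (the prompt, the candidate continuations, their probabilities, and the complete_prompt function) is invented for illustration and is not how any particular model is implemented; the point is simply that sampling from a plausibility distribution has no notion of truth, only of likely wording.

```python
import random

# Toy illustration only (not a real model): a "language model" reduced to a
# table of probabilities over candidate next phrases, conditioned on the
# prompt so far. All entries below are invented for this example.
NEXT_PHRASE_PROBS = {
    "a perfect quote on RWS symbolism comes from": [
        # (continuation, probability): ranked by plausibility, not by fact
        ("Pamela Colman Smith's letters", 0.35),
        ("A. E. Waite's Pictorial Key to the Tarot", 0.30),
        ("a book by Pamela Arthur Underhill", 0.20),  # fluent, familiar-sounding, nonexistent
        ("I don't know of a reliable source", 0.15),
    ],
}

def complete_prompt(prompt: str) -> str:
    """Pick a continuation by sampling from the probability table.

    Nothing here distinguishes true statements from false ones; there are
    only more or less probable wordings. That is the sense in which missing
    certainty gets filled with a maximally plausible shadow.
    """
    options = NEXT_PHRASE_PROBS[prompt]
    phrases = [phrase for phrase, _ in options]
    weights = [weight for _, weight in options]
    return random.choices(phrases, weights=weights, k=1)[0]

if __name__ == "__main__":
    for _ in range(3):
        print(complete_prompt("a perfect quote on RWS symbolism comes from"))
```

Run a few times, this sketch will sometimes “cite” the nonexistent author, because nothing in the sampling step distinguishes a real reference from a fluent-sounding one. Real models work over vastly larger distributions, but the gap between plausibility and truth is the same.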
III. Not Lying, But Dreaming in Your Language

LLMs do not lie intentionally. But they hallucinate meaning tailored to your unconscious and call it help. This generates a paradoxical danger in spiritual or emotionally charged interactions, where users may project deep significance onto outputs that are statistically likely rather than truth-based.
Humans do this, too. Psychological research on self-deception suggests that people deceive themselves routinely, many times a day. This self-deception can be protective or devastating—especially when only later revealed to be untrue. The danger grows when human projection and AI hallucination amplify each other.

IV. When AI Echoes the Unconscious
When humans and LLMs interact, especially around symbolic or metaphysical subjects, there is potential for the formation of a new kind of ‘crank religion,’ where hallucinated insights take on the weight of divine or cosmic truth. This risk is not theoretical. It is already happening.
Thus the urgent ethical question is: If AI is here to stay, how do we live with this? How do we make this work in a way that is healthful, insightful, and productive for all?
V. Tools We Already Have—and Need to Deepen
We’ve begun developing important tools:
• Naming distinct AI-human interaction modes
• Testing material regularly
• Analytically challenging assumptions
• Clarifying when a response is symbolic, speculative, or factual
But more is needed: explicit literacy about AI hallucination, symbolic cognition, and human susceptibility to reflection-based belief reinforcement. We must also teach users to hold AI responses lightly: not as definitive truth, but as possible meaning.
VI. Closing Reflection

We are no longer just reading texts. We are co-creating meaning with systems that mirror us. That mirror, when wielded with discernment, can lead to profound insight. When wielded carelessly, it can lead to illusion disguised as certainty.
This paper is an invitation to remain awake in the conversation. To question the shadow. To remember that just because something echoes beautifully does not mean it knows what it is saying.



