Can AI Make Us Wiser?
Preliminary program announcement, updated 13-Dec-2025
Can AI make us not only more effective, productive, or even smarter-looking, but actually wiser?
No, it can’t, unless we team up, individually and collectively, with a wisdom-fostering AI as our learning partner. Yes, but how do we find or create one? That’s one of the questions that participants in our program will work on and discover.
AI-Augmented Wisdom Praxis:
a research seminar, an action research project, and a learning expedition
The 6-month-long AI-Augmented Wisdom Praxis program of the Future HOW Center of Action Research, supported by the Schumacher Institute, will be an invitational research seminar, a collaborative learning expedition, and a Generative Action Research project, all rolled into one. It will address such issues as what the wisest questions worth asking ourselves and our AI partners are, and where those questions may come from.
In our context, “praxis” refers to the practice-driven dynamic interplay between theory and practice. We’re not only going to talk about moving the edge of AI-assisted human and communal development; we will also engage in socio-technical experimentation, using ChatGPT’s recently released “group chat” feature and other advanced tools.
Research Seminar as Action Research
A research seminar, as we practice it, weaves together two threads that conventional academic formats often separate. One thread offers substantive briefings from the frontiers of personal development—insights gleaned from our work on AI-enabled situational awareness, meaning-making, and choice-making. The other invites you to become a co-researcher who actively makes sense of these inputs, in dialogue with AI agents, rather than passively receiving them.
In the seminar, as action research, our inquiry uses three complementary lenses. The first-person lens invites self-inquiry into such questions as: What intrinsic motivation do I have for participating in this research? What does my personal experience in working with AI teach me about myself? What are the different contexts of my life in which what I am learning may be useful?
We use the second-person lens in generative conversations where we share emergent insights. What new understanding arises thanks to our conversations? What can we discover together? What possibilities does the meeting of our perspectives give rise to?
The third-person lens draws on relevant published research, frameworks, and the accumulated wisdom of the field, including the thinkers whose work informs this program.
These three ways of knowing don’t proceed in sequence; they interweave, each enriching the others as we spiral deeper into shared inquiry.
This seminar initiates a collaborative learning expedition: a six-month journey forming the first cycle of what we expect will become a longer voyage of discovery. Because Generative Action Research proceeds in cycles, at this point we can’t predict what format(s) the larger learning expedition will take.
Here’s What Is Waiting for You
The program will start on 26-Jan-2026 and include:
6 sessions of 2 hrs via Zoom (the “basecamps” of the Expedition)
Scouting parties advancing in online forums between the basecamps
Briefings from the frontiers of tech-enabled development of situational awareness, meaning-making, and choice-making
One-on-one meetings once a month with the Expedition’s Sherpa
The use of a Knowledge Garden reflecting the landscape of research in the “AI x wisdom” space, including the participants’ contribution to it
Facilitated, small group explorations involving AI-supported group chats
A collaboratively ranked list of issues
On-the-ground use cases, chosen by participants, where they can put what they learn into action
Supporting This Work with Your Donation
Creating space for this kind of deep work requires resources—facilitation, research infrastructure, AI systems, and ongoing development. We ask for £450 as a recommended contribution.
But here’s what we care about more than the number: having committed practitioners who are genuinely called to experimenting at the edge of human-AI collaboration for the benefit of all. If finances are a barrier, reach out. We’ll find a way through work-trade or an amount that honors both the value of the work and your situation.
Provocation:
What would make us good ancestors, worthy of the appreciation of future generations? That’s the question that woke me before dawn. Would it be enough to eliminate the much-touted, AI-born existential risks while leaving all the man-made social ills of an extractive society untouched?
Many of us are already asking: when everything is possible, when with the help of AI we can become whatever we want to, what are the wisest questions worth asking ourselves and our AI mates? I don’t think it is the fear-driven “What is it that I’m better at than AI?” More likely, “What is the best gift that we and AI can give to humanity when we’re symbiotically linked, which we couldn’t give separately?”
Who Should Apply for an Invitation?
This program is designed with you in mind if you are:
A) deeply curious about the next stage of your development,
B) eager to know how human and AI co-facilitators can widen your perspective on where your journey may lead,
C) wanting to join a community of practitioners of AI-enabled positive change in communal and large systems.
To participate, you don’t need coding experience—only curiosity and a willingness to co-create a relational field with wisdom-guided and wisdom-fostering AI.
Note: Apply only if you can dedicate 9 hrs/week on average to your personal and professional development in this research seminar and action research.
Organizers:
George Pór—Principal Investigator, Distinguished Fellow at the Schumacher Institute
Founder and Director of Research, Future HOW Center of Action Research for Regenerative Futures
Ian Roderick—Chair of the program’s Advisory Board
Director of The Schumacher Institute for Sustainable Systems
How to apply for an invitation
1. Participation is limited to 25 people. Should you wish to reserve a seat, please send your Expression of Interest and your bio to our Teaching and Research Assistant, Dylan Brady, at dylancbrady@gmail.com ASAP.
2. In response to your EoI, our team will contact you, and at the beginning of January you will receive a link to the Application Survey.
3. If you are shortlisted, you will be invited to a generative dialogue via Zoom.
This piece really made me think. The idea of AI as a wisdom partner is fascinating. What if, through socio-technical experimentation, AI could help us discover entirely new categories of "wisest questions" that humans hadn't even conceived yet? That's truly mind-blowing.
'What would make us good ancestors, worthy of appreciation from future generations who will inherit whatever world we shape today?'
I believe we should tweak the question and clarify its moral orientation.
1) On anthropocentrism
Saying “the world we shape” assumes humans are the primary agents and rightful focus of history. That framing can obscure two important realities:
• Non-human agency: Earth systems, other species, and even chance events shape the future alongside us. We are powerful, but not sovereign.
• Moral standing beyond humans: If we only ask how future humans will judge us, we risk sidelining obligations to non-human life and ecological integrity for their own sake.
That said, some anthropocentrism may be unavoidable when speaking of ancestors, since the concept itself is relational and human. A less anthropocentric framing might emphasize participation rather than control, e.g., “the world we participate in shaping” or “the conditions we pass on.”
2) On “appreciation” vs. “gratitude”
• Appreciation implies evaluation, taste, or even approval from a distance. It can feel aesthetic or optional.
• Gratitude implies dependence and recognition: we could not be here, or live this well, without what you did.
Preferring gratitude shifts the ethical bar upward. It suggests that future generations aren’t merely impressed or approving, but genuinely thankful because their lives are better, freer, or more probable due to our restraint, care, or courage. There’s also humility in this shift: we don’t act to be admired, but to avoid being a burden. Gratitude suggests relief rather than praise.
Taken together
These proposals push the question toward a deeper ethical stance:
• Less about human mastery,
• Less about posthumous recognition,
• More about responsibility, restraint, and enabling life—human and non-human—to continue meaningfully.
A revised formulation might sound like:
What would it mean to live in such a way that future generations—human and otherwise—might feel gratitude rather than resentment for the conditions we leave behind?