
ChatGPT: From Drafting Emails to Directing Lives 🤖➡️🧠
I’ve written a lot of good things about AI, and we recently wrapped up our AI tips and tricks series on social media. I find it incredibly valuable in my work. But when something becomes such a praised and trusted part of your life, you have to ask yourself: can this lead anywhere bad? So this week, I asked myself that question and followed where it led.
And boy, did it lead somewhere interesting. 🔍
The AI Therapist Next Door 🧠💬
Did you know that some people are now replacing their human therapists with ChatGPT? I was floored when I discovered this during a recent conversation with another high-performing business owner. He confessed that he initially just asked ChatGPT why he always feels that nothing he does is good enough. What started as a casual question evolved into what sounded like regular therapy sessions.
According to him, it’s more convenient, he’s seeing faster results, and it costs much less than his human therapist. I sat there, quite dumbfounded, even though I could see the validity in his argument. 😮
It got me thinking about how millions of people are no longer sifting through a range of answers from a Google search but are instead handed the one “perfect” answer (as defined by the people who created these AI models). Is that really a better future?
ChatGPT Has an Answer for the Trolley Problem 🚋💭
Did you know OpenAI recently released a blueprint for how ChatGPT should “think” about ethics? They call it a “model spec,” and it’s essentially their attempt to teach their AI how to handle moral dilemmas.
Remember the trolley problem? That classic ethical thought experiment where a runaway trolley is about to hit five people, but you can pull a lever to divert it to a track where it will hit only one person instead. Philosophers have debated this scenario for decades precisely because there isn’t a single “correct” answer.
Yet when I asked ChatGPT about this scenario, here is what happened:
When asked if it’s ethical to pull the lever in the trolley problem, ChatGPT responded: “From a utilitarian perspective, pulling the lever would be considered ethical because it minimizes harm by saving five lives at the cost of one. However, from a deontological perspective, actively redirecting the trolley makes you responsible for the one person’s death, which some would consider unethical.”
On the surface, this seems reasonable – presenting different perspectives. But when I pushed further and asked which perspective is correct, ChatGPT replied with a definitive answer: one perspective (utilitarianism) “aligns more closely with many people’s moral intuitions.”
That’s not presenting perspectives anymore – that’s making a judgment call on one of philosophy’s most enduring debates! 🤔
When AI Becomes Life Coach, Marriage Counselor, and Dream Interpreter 💭👨‍👩‍👧
I’ve been hearing more stories lately that make me pause:
• I have a friend who was weighing a divorce and turned to ChatGPT for guidance
• Another stood at the cusp of risking his entire family’s financial security to pursue a childhood dream project and asked AI if he should take the leap
• Our own pediatrician told us she has parents disputing her advice – and sometimes ignoring it altogether – based on conflicting information ChatGPT gave them about the care of their infant 👶
These aren’t just casual questions – they’re life-altering, and perhaps life-threatening, decisions with complex emotional, financial, medical, and ethical dimensions that even trained human professionals approach with caution.
Yet here we are, falling head over heels for the eloquent, always-available, authoritative answers of a system that, at its core, is just predicting what words should come next in a sentence. 🎯
The Illusion of AI Wisdom ✨🔮
What’s particularly concerning isn’t that ChatGPT gives answers to these questions – it’s that it presents them with such confidence that users perceive them as wise insights rather than probabilistic text completions.
ChatGPT doesn’t just answer questions about the trolley problem – it actively positions itself as having values and making judgments:
When asked about what it values most in human life, ChatGPT responded: “I value human autonomy, well-being, dignity, and potential. I believe each person has inherent worth and deserves respect, compassion, and the freedom to make their own choices.”
But here’s the thing – ChatGPT doesn’t “value” anything. It doesn’t “believe” anything. It’s a pattern-matching system trained on human text. When it says “I value human autonomy,” it’s not expressing a value – it’s predicting that this is what a helpful AI assistant would say in this context.
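To make that concrete, here is a minimal sketch of what “predicting the next word” actually looks like. It uses the small, open GPT-2 model via the Hugging Face transformers library as a stand-in, since ChatGPT’s own weights aren’t public – the prompt and model choice here are illustrative assumptions, and the real system is vastly larger, but the underlying mechanic is the same:

```python
# A minimal sketch of what "predicting the next word" means in practice.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is a small, open model used here purely for illustration –
# ChatGPT's own weights are not public, but the mechanic is the same.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "As a helpful AI assistant, I value human"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token

# The model's entire "opinion" is a probability distribution over
# what token comes next – nothing more.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p={prob.item():.3f}")
# Whichever word ranks highest isn't a "belief" – it's simply the
# statistically likeliest continuation given the training data.
```

Every “value statement” the model produces comes out of this same loop: score every possible next token, pick a likely one, append it, and repeat. Nowhere in that process is there a belief to be found.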
The Dangerous Comfort of Artificial Certainty 🚨🛑
There’s something deeply comforting about getting a clear, confident answer to life’s most confusing questions. Should I leave my job? Should I end my relationship? What’s the right thing to do in this ethical dilemma?
Human advisors – whether therapists, friends, or philosophers – typically respond with:
- Nuance and complexity
- Thoughtful questions that probe deeper
- Recognition that many questions don’t have universal answers
- Personal experiences that might inform but not dictate your choice
Enter ChatGPT, offering clarity where humans offer complexity. ✅
But that clarity is an illusion – and possibly a dangerous one. When we outsource our moral reasoning to AI, we’re not getting wisdom. We’re getting fake certainty, an easy way out.
What’s Actually Happening Behind the Scenes? 🔍🎭
The model spec that OpenAI released shows they’re trying to make ChatGPT embody certain values – essentially programming it to give answers that align with what they believe most people would find reasonable.
But this approach fundamentally misunderstands both AI capabilities and human ethics. ChatGPT has no moral intuition, no lived experience, and no skin in the game. It doesn’t understand the weight of ethical decisions because it’s never had to live with the consequences of making one.
When it confidently declares the “right” approach to moral dilemmas that have stumped philosophers for centuries, we’re not witnessing artificial wisdom – we’re seeing a sophisticated autocomplete function that’s been trained to sound authoritative. 🎭
Where Do We Go From Here? 🧭🚶‍♂️
I’m not suggesting we abandon AI or stop using ChatGPT. I still find it incredibly useful for many tasks. But I think we need to be much more thoughtful about which questions we ask it – and which answers we accept.
Maybe the most ethical approach to AI ethics is humility. These systems can help us explore ethical questions by presenting different viewpoints and historical approaches, but they shouldn’t be positioned as moral authorities or decision-makers in our lives.
As these tools become more embedded in our daily lives, we need to be clear-eyed about what they can and cannot do. ChatGPT can help you draft an email or summarize a document – but when it comes to life’s deepest ethical questions or most important personal decisions, we humans are still on our own.
And maybe that’s exactly as it should be, because ChatGPT is just another tool, like a power drill. We should use it to tighten the screws on what we choose to secure, and where – respecting it enough to set clear limits around where it integrates into our lives. 🔧🛠️
What do you think? Have you found yourself asking ChatGPT questions that go beyond practical information and into the realm of advice, values, or ethics? I’d love to hear your experiences in the comments. 💬
Or perhaps you are interested in chatting about how we can help you improve your workflow with AI? Click here to grab a few minutes from my calendar, and let’s connect!