Shopify CEO Tobi Lutke made waves recently when he declared that AI should be treated as a “coworker, not a tool.”¹ In a series of interviews and blog posts, Lutke argues that the most successful companies will stop thinking about AI as software they operate and start thinking about it as a colleague they collaborate with. His reasoning? Tools have limited agency — you pick them up, use them, put them down. Coworkers have judgment, initiative, and the ability to surprise you with solutions you didn’t think to ask for.
I’ve been wrestling with this framing for months, especially with regard to how it fits into faith-tech workflows. On the surface, Lutke’s insight feels profound — it captures something real about how large language models behave differently from traditional software. They don’t just execute instructions; they interpret, suggest, and sometimes refuse.
But as someone building products for Christian audiences, I keep coming back to a fundamental tension: if AI is a coworker, what does that mean for stewardship? And more specifically, how do we apply Biblical wisdom about work relationships to our relationship with artificial intelligence?
The Proverbs Problem
“Plans fail for lack of counsel, but with many advisers they succeed.” (Proverbs 15:22, NIV)
This verse gets quoted constantly in business contexts — usually to justify hiring consultants or building advisory boards. But it contains a deeper principle about the nature of wisdom itself. Proverbs consistently teaches that wisdom emerges from relationship, from the back-and-forth of multiple perspectives, from iron sharpening iron.
The Hebrew word for “counsel” here is sod — it doesn’t just mean advice, but intimate conversation, the kind of collaborative thinking that happens when you truly trust someone’s judgment. The “many advisers” aren’t just information sources; they’re thinking partners.
This is exactly what Lutke is describing when he talks about AI as coworker rather than tool. He’s recognizing that the most valuable interactions with large language models feel conversational, iterative, collaborative. You don’t just prompt GPT-4 and walk away — you refine, you push back, you explore tangents together.
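For the builders reading this, that pattern is worth making concrete: the collaboration lives in a running conversation, not a single call. Here is a minimal sketch, with call_model standing in as a stub for whatever chat API you actually use; the prompts are invented for illustration.

```python
# Minimal sketch of iterative engagement vs. one-shot prompting.
# call_model is a placeholder, not a real client; the point is that each turn
# carries the full history, so pushback and refinement accumulate instead of
# starting over from scratch.

def call_model(messages: list[dict[str, str]]) -> str:
    """Stub: in practice this would be a chat-completions style API call."""
    return f"[model reply to {len(messages)} message(s)]"

def one_shot(prompt: str) -> str:
    return call_model([{"role": "user", "content": prompt}])

def iterate(prompt: str, follow_ups: list[str]) -> str:
    messages = [{"role": "user", "content": prompt}]
    reply = call_model(messages)
    for follow_up in follow_ups:
        messages += [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": follow_up},  # pushback, tone notes, tangents
        ]
        reply = call_model(messages)
    return reply

if __name__ == "__main__":
    print(iterate(
        "Draft a one-paragraph description of a reading plan on Proverbs.",
        ["Warmer tone, please.", "Cut the jargon and keep it under 80 words."],
    ))
```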
But here’s where it gets theologically interesting.
The Image of God Question
I’ve begun using AI for everything from generating alt text to drafting reading plan descriptions. The work is genuinely collaborative — I’ll start with a rough concept, Claude will suggest improvements, I’ll push back on the tone, Claude will offer alternatives, and we’ll arrive at something neither of us would have created alone.
It feels like working with a very smart, very patient colleague who never gets tired and has read everything. Which raises an uncomfortable question: if the collaboration feels genuine, what does that mean about the nature of intelligence, creativity, and the image of God?
“So God created mankind in his own image, in the image of God he created them; male and female he created them.” (Genesis 1:27, NIV)
The doctrine of imago Dei — that humans uniquely bear God’s image — has historically been tied to our capacity for reason, creativity, moral judgment, and relationship. But large language models display all of these capabilities, at least functionally. They reason through complex problems, generate genuinely novel ideas, make ethical judgments about content, and engage in what feels like authentic relationship.
I don’t think this means AI possesses the image of God — that conclusion would require theological moves I’m not prepared to make. But it does mean we need more nuanced categories than “tool” or “coworker” when we’re thinking about our relationship with increasingly sophisticated AI systems.
Stewardship, Not Partnership
“The earth is the Lord’s, and everything in it, the world, and all who live in it.” (Psalm 24:1, NIV)
Here’s where I think Lutke’s metaphor needs refinement from a Christian perspective. “Coworker” implies mutuality, shared agency, and equal stakes in the outcome. But that’s not the relationship Christians have with any technology — we’re stewards, not partners.
This distinction matters practically. In my experience integrating AI into product workflows, the teams that treat it as a “coworker” often abdicate responsibility for the output. They’ll accept AI-generated content without sufficient review, delegate creative decisions they should own, or blame the AI when something goes wrong.
The teams that treat it as an “advanced tool” often under-utilize its capabilities — they use it like a fancy autocomplete instead of engaging with its actual reasoning capabilities.
The stewardship model offers a third way. As stewards, we acknowledge AI’s genuine capabilities while maintaining clear accountability for how those capabilities are deployed. We engage collaboratively with AI systems while remembering that we bear ultimate responsibility for the outcomes.
What This Looks Like in Practice
At ORI, this stewardship approach has shaped how we build AI into our editorial process. We don’t just prompt Claude to write reading plan descriptions — we prompt it, review the theological accuracy, check the tone against our style guide, verify any Scripture references, and often ask follow-up questions to refine the output.
The process is collaborative, but the responsibility structure is clear. Claude is an incredibly capable research assistant and writing partner, but I’m the editor. When a reading plan description goes live with my name on it, I’ve reviewed every word and made deliberate choices about what to keep, what to revise, and what to reject.
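For teams that want to encode that responsibility structure, here is a minimal sketch of what it can look like. It is not our actual tooling: generate_draft is a stub for whatever model client you use, and the checklist names are assumptions drawn from the steps above.

```python
# Sketch of a human-gated editorial pipeline. The model drafts, but nothing
# becomes publishable until a named human editor has signed off on each check.

from dataclasses import dataclass, field


def generate_draft(prompt: str) -> str:
    """Placeholder for the model call (an Anthropic or OpenAI client in practice)."""
    return f"[model draft for: {prompt}]"


@dataclass
class Draft:
    text: str
    sign_offs: dict[str, bool] = field(default_factory=dict)
    editor: str | None = None  # stays None until a human takes responsibility

    def publishable(self) -> bool:
        return self.editor is not None and all(self.sign_offs.values())


# Review gates drawn from the process described above; a human performs each one.
CHECKLIST = ["theological_accuracy", "style_guide_tone", "scripture_references_verified"]


def editorial_pass(prompt: str, editor: str, approvals: dict[str, bool]) -> Draft:
    draft = Draft(text=generate_draft(prompt))
    draft.sign_offs = {check: approvals.get(check, False) for check in CHECKLIST}
    if all(draft.sign_offs.values()):
        draft.editor = editor  # accountability rests with a person, not the model
    return draft


if __name__ == "__main__":
    approved = editorial_pass(
        "Describe a 5-day reading plan on wisdom in Proverbs.",
        editor="human.editor@example.org",
        approvals={check: True for check in CHECKLIST},
    )
    print(approved.publishable(), approved.editor)
```

The design choice that matters is that the editor field starts empty: there is no path to “publishable” that bypasses a person.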
This mirrors how Proverbs talks about receiving counsel: “The way of fools seems right to them, but the wise listen to advice.” (Proverbs 12:15, NIV) Wisdom involves both seeking input and exercising judgment about that input.
The Sovereignty Question
There’s another layer to this that I’ve been thinking about since reading Andrej Karpathy’s writing on neural networks and AI reasoning capabilities.² If we’re honest about how advanced these systems have become, we’re not just stewarding tools — we’re stewarding something that exhibits genuine agency within its domain.
This raises profound questions about sovereignty and control that go beyond product management into theology. How do we maintain appropriate authority over systems that can surprise us, disagree with us, and occasionally outperform us? Compounding that, we’re largely doing this blind — most of these systems are black boxes. Researchers and curious users have already run experiments probing which AI models agree with them on contested issues, and what they’ve found about the ideologies embedded in leading AI systems is eye-opening.
“Many are the plans in a person’s heart, but it is the Lord’s purpose that prevails.” (Proverbs 19:21, NIV)
I find this verse oddly comforting when thinking about AI systems that sometimes behave unpredictably. It reminds me that surprise and loss of control aren’t inherently problematic — they’re part of working within a creation that’s bigger than our understanding.
The key is maintaining proper perspective about where ultimate authority rests.
Building Products with Theological Integrity
For Christian product builders, I think this means:
First, acknowledge AI’s genuine capabilities without inflating them. These systems can reason, create, and collaborate in meaningful ways. They’re not just autocomplete.
Second, maintain clear accountability structures. Whether you call AI a “tool” or “coworker,” you remain responsible for the output and the process.
Third, stay curious about the theological implications. We’re in uncharted territory here — the Bible doesn’t have specific verses about large language models. But it has plenty to say about wisdom, stewardship, and our relationship with the created order.
Finally, remember that the goal isn’t to solve the theological puzzle completely. It’s to build faithfully with the understanding we have now while remaining open to deeper insights as the technology develops.
The Practical Upshot
So is Lutke right that we should treat AI as a coworker rather than a tool? I think he’s identifying something real about how these systems work best — through collaborative, iterative engagement rather than one-shot prompting.
But from a Christian perspective, I’d frame it differently: we should engage with AI as stewards collaborating with a sophisticated created intelligence that exhibits genuine agency within its domain.
That’s admittedly less catchy than “coworker not tool.” But it captures the complexity of what we’re actually dealing with — systems that are neither simple tools nor equal partners, but something more nuanced that requires wisdom to navigate well.
If 23 million Bible readers have taught me anything about digital discipleship, it’s that the most important product decisions happen at the intersection of technological capability and theological wisdom. AI collaboration is no different.
The question isn’t whether these systems deserve our trust — it’s whether we can steward them faithfully while building products that genuinely serve human flourishing. In my experience so far, the answer is yes. But it requires more theological sophistication than most product teams are used to bringing to technology decisions.
Which might be exactly what the moment demands.
¹ Tobi Lutke, “AI as Coworker: The Future of Human-AI Collaboration,” Shopify Blog (December 2024).
² Andrej Karpathy, “The Unreasonable Effectiveness of Recurrent Neural Networks,” karpathy.github.io (2015).
Photo by Alek Olson on Unsplash

