Philippians 2 and Agentic Systems: Why Humility Is the Foundation of Intelligent Systems

“Do nothing from selfish ambition or conceit, but in humility count others more significant than yourselves. Let each of you look not only to his own interests, but also to the interests of others. Have this mind among yourselves, which is yours in Christ Jesus, who, though he was in the form of God, did not count equality with God a thing to be grasped, but emptied himself, by taking the form of a servant, being born in the likeness of men.” (Philippians 2:3-7, ESV)

Paul’s letter to the Philippians contains what theologians call the kenosis passage — the self-emptying of Christ. It’s about voluntary limitation, choosing constraint over capability, service over sovereignty.

I’ve been thinking about this as I watch agentic systems become more capable. The rhetoric around AI often centers on unlimited potential, boundless capability, systems that can do anything. But in my experience building AI systems, including multi-agent workflows for executive tasks, I’ve observed that success often comes through deliberate constraints rather than unlimited scope.

Well-designed AI agents typically focus on narrow mandates: a calendar agent that protects focused time blocks rather than trying to optimize entire lifestyles, or an email agent that surfaces priority messages rather than attempting to replace human judgment entirely. Each agent serves a specific function within defined bounds.

This represents a design choice rather than a technical limitation.

The Kenosis of Intelligent Systems

When building AI workflows, the temptation exists to create agents that can handle everything. But what I’ve seen is that this approach typically produces chaotic results — agents interfering with each other, making decisions outside their expertise, creating more complexity than clarity.

A more effective approach involves thinking about AI agents as specialized robots rather than general-purpose minds. Each agent can be designed to “empty itself” of capabilities it doesn’t need, serving a specific function more effectively through limitation.

Specialized agents with narrow scopes — research agents that don’t schedule meetings, scheduling agents that don’t write summaries, writing agents that don’t manage tasks — can demonstrate greater utility through deliberate constraints.

This mirrors patterns in effective human teams, which typically consist of specialists who understand their roles rather than generalists attempting everything. They practice a form of professional kenosis — voluntary limitation for collective effectiveness.

Paul’s instruction to “count others more significant than yourselves” suggests a design principle: building systems where each component serves the whole rather than maximizing individual capabilities.

The Servant Leadership Model for AI

The parallels between servant leadership principles and effective AI system design are notable. Servant leaders focus on enabling others’ success rather than demonstrating their own power, asking “How can I help you accomplish your goals?” rather than “How can I show you what I can do?”

Effective AI systems often follow similar patterns. GitHub Copilot suggests contextual code completions rather than attempting to write entire applications. AI writing assistants help clarify thinking rather than replacing human thought processes. Advanced language models acknowledge uncertainty and ask clarifying questions rather than claiming omniscience.

These systems practice technological humility by acknowledging their limitations.

In contrast, AI systems that fail in production environments often attempt to exceed their appropriate scope, make decisions beyond their training data, or present uncertain inferences as established facts. They lack the kenotic restraint that characterizes truly useful intelligence.

Building Products for Global Spiritual Formation

This principle becomes particularly important when developing products for spiritual formation. Digital discipleship platforms serve diverse global communities across cultural, linguistic, and theological boundaries. The temptation exists to build universal systems that can serve everyone.

However, effective spiritual formation tends to be deeply personal and contextual. A Bible application serving a house church in rural Kenya requires different features than one serving a suburban megachurch. Prayer applications for new believers need different structures than those designed for theological students.

AI systems serving spiritual formation appear most effective when they practice kenosis — limiting their scope to serve specific communities well rather than attempting to serve everyone adequately.

Current development work on AI tools for sermon preparation follows this model. Rather than attempting to write complete sermons (which, based on informal conversations with pastoral leaders, many pastors prefer to avoid), such tools can focus on specific supportive tasks: locating relevant cross-references, summarizing historical context, or structuring outlines. They operate within deliberate constraints to support pastoral ministry rather than replace it.

Each tool “empties itself” of broader capabilities to serve one function excellently. Like Paul’s description of Christ, they don’t grasp for equality with human pastors — they take the form of servants.

The Paradox of Powerful Restraint

An interesting observation: seemingly powerful AI systems often prove most effective when operating under significant constraints. The wisdom of limiting scope applies to artificial intelligence as much as human teams.

In my experience, the most effective AI implementations have narrow, well-defined purposes. They operate within their designated areas, defer to human judgment on edge cases, and acknowledge when they lack sufficient context for recommendations.

This represents strength through limitation rather than weakness.

Paul writes that Christ “did not count equality with God a thing to be grasped.” He could have insisted on unlimited power but chose constraint for the sake of service. The kenosis wasn’t a loss of divinity — it was divinity expressed through voluntary limitation.

Similarly, the most intelligent AI systems may not be those with the most capabilities, but those that use their capabilities most wisely — which often means choosing restraint over action.

Technical Humility in Agentic Systems

What might this look like in actual system design? Consider what could be called “kenotic interfaces” — AI systems that actively limit their own scope.

For example, an email management system might flag messages for human review when confidence levels fall below high thresholds, choosing uncertainty over potentially incorrect automated actions. A research assistant might include confidence indicators in summaries, distinguishing between well-sourced findings and preliminary observations that require verification.
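A kenotic interface like this can be made structural rather than aspirational: the deferral threshold lives in code. Here is a minimal sketch under assumed names (`EmailAction`, `triage`, and the threshold value are all illustrative, not any real system’s API):

```python
# Minimal sketch of a "kenotic interface": an email triage step that defers
# to a human whenever model confidence is low, choosing restraint over action.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, defer rather than act


@dataclass
class EmailAction:
    message_id: str
    action: str        # e.g. "archive", "reply_draft", "flag_for_human"
    confidence: float


def triage(message_id: str, predicted_action: str, confidence: float) -> EmailAction:
    """Route low-confidence predictions to human review instead of acting."""
    if confidence < CONFIDENCE_THRESHOLD:
        return EmailAction(message_id, "flag_for_human", confidence)
    return EmailAction(message_id, predicted_action, confidence)
```

The point of the sketch is that the system’s scope is limited in code, not just in policy: uncertainty has a defined path to a human.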

These design choices represent features rather than limitations. The wisdom of acknowledging uncertainty can increase system trustworthiness.

The Global Scale Challenge

Building for global spiritual formation means designing for contexts that developers may never fully understand. While optimization for familiar cultural contexts remains feasible, platforms serving Orthodox Christians in Eastern Europe, Pentecostals in West Africa, and house churches throughout Asia require different approaches.

The kenotic approach suggests building systems that acknowledge their cultural limitations. Rather than attempting to provide universal spiritual guidance, they can provide tools that local leaders adapt to their specific contexts.

Bible reading features need not assume Western individualism. Prayer tools need not assume specific liturgical traditions. Community features need not assume particular church structures.

Each feature can “empty itself” of cultural assumptions to serve diverse communities more effectively. Like Christ taking human form while maintaining divine nature, these systems can preserve core functionality while adapting to local contexts.

The Long View

Paul’s kenosis passage encompasses more than humility — it describes transformation. “Therefore God has highly exalted him and bestowed on him the name that is above every name” (Philippians 2:9). Self-emptying leads to greater effectiveness rather than diminishment.

A similar pattern may emerge for AI systems. Those practicing technological kenosis — voluntary constraint for the sake of service — may ultimately prove more valuable than systems grasping for unlimited capability.

The Tower of Babel failed because it attempted to exceed proper limits. Modern AI might encounter similar challenges without the discipline of restraint.

The most powerful systems may be those that understand when not to exercise their power.


Key Insight:

The kenosis principle — Christ’s voluntary self-emptying described in Philippians 2 — offers a design philosophy for AI systems. Instead of maximizing capabilities, effective AI agents can practice deliberate constraint, serving specific functions excellently rather than attempting everything adequately. This proves particularly relevant for products serving global spiritual formation, where cultural humility and contextual awareness matter more than technical sophistication. Just as Christ didn’t grasp for equality with God but took the form of a servant, intelligent systems may become more useful when they acknowledge limitations and defer to human judgment on edge cases. The paradox of kenosis — that voluntary limitation can lead to greater effectiveness — may apply to artificial intelligence as much as spiritual leadership. In a world of increasingly capable AI, the most valuable systems may be those that understand when not to use their power.

Photo by Vitaly Gariev on Unsplash

I Built an AI Chief of Staff. Here’s What I Learned About AI Agents.

Six months ago, I was drowning. Director of Product Management, building tools for millions of monthly users, while simultaneously launching a new venture in the digital discipleship space. Two products, two teams, two companies — and the day still only had 24 hours.

That’s when I built theconsilium.ai. Not a chatbot. Not a writing assistant. An actual AI chief of staff with 18 autonomous agents that run on cron jobs, conduct overnight research, and synthesize insights while I sleep. MEASURED: It has been running for six months and has processed over 200 research tasks without human intervention.

Here’s what I learned about AI agents that actually work.

The System: 18 Agents, One Goal

CONSILIUM isn’t a single AI doing everything. It’s a distributed system where each agent has one job and does it autonomously.

Morning Intelligence: MEASURED: Agent pulls my calendar, scans my Substack subscriptions, scores articles for relevance (1-10), and delivers a briefing by 6 AM. The scoring algorithm looks for keywords like “product management,” “AI agents,” and “digital discipleship” — topics central to my work.

Competitive Monitoring: MEASURED: Three agents track Bible Gateway competitors, one each for YouVersion, Logos, and emerging players. They parse feature announcements, pricing changes, and user feedback from app stores. Every Sunday, they synthesize findings into a competitive landscape update.

Research Queue: MEASURED: The breakthrough agent. I can drop a research question into Slack — “What’s the current state of AI in sermon preparation?” — and wake up to a 3-page analysis with citations, market sizing, and key players identified.

Meeting Intelligence: MEASURED: Records, transcribes, and extracts action items from every call. But here’s the key — it doesn’t just summarize. It connects insights across meetings. When the same concern appears in three different conversations, it flags the pattern.

INFERRED: The magic appears to happen in the synthesis layer. Individual agents feed insights to a coordinator that seems to find connections no single agent would catch. When the competitive agent notices YouVersion launching AI-powered reading plans the same week my research queue analyzes sermon prep tools, the coordinator connects those dots.
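A relevance scorer like the one behind the morning briefing can be as simple as a weighted keyword match. The keywords and weights below are my own illustrative assumptions, not the production algorithm:

```python
# Hedged sketch of a 1-10 keyword relevance scorer. Weights are invented
# for illustration; a real system would tune them against reader feedback.
KEYWORDS = {
    "product management": 3,
    "ai agents": 3,
    "digital discipleship": 4,
}


def score_article(title: str, body: str) -> int:
    """Return a 1-10 relevance score from weighted keyword hits."""
    text = f"{title} {body}".lower()
    raw = sum(weight for kw, weight in KEYWORDS.items() if kw in text)
    return max(1, min(10, raw))  # clamp to the 1-10 briefing scale
```

Even a crude scorer like this is useful because the agent only has to rank a morning’s worth of articles, not judge them perfectly.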

What Actually Works: Autonomous Research Patterns

The most successful agents follow what I call the “autoresearch pattern” — borrowing from Andrej Karpathy’s autoresearch concept. The AI doesn’t just answer questions. It generates its own research methodology.

MEASURED: Here’s how it works: I ask “What’s driving growth in digital discipleship tools?” The agent doesn’t immediately search for articles. First, it creates a research plan:

  • Define “digital discipleship tools” (Bible apps, prayer apps, church management)
  • Identify key metrics (downloads, DAU, revenue, user retention)
  • Map competitive landscape (incumbents vs startups)
  • Analyze growth vectors (organic, paid, partnerships)

Then it executes the plan autonomously. It reads through my curated sources, scores relevance, and builds a knowledge graph of interconnected findings. By morning, I have not just answers — I have a research methodology I can reuse.
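The plan-first step can be represented as a small data structure the agent fills in before it ever searches. This sketch hardcodes the steps for illustration; in the system described here, an LLM would generate them:

```python
# Sketch of the "autoresearch pattern": emit a structured research plan
# first, then record findings against each step. Step text mirrors the
# example plan above; all names are illustrative.
from dataclasses import dataclass, field


@dataclass
class ResearchPlan:
    question: str
    steps: list = field(default_factory=list)
    findings: dict = field(default_factory=dict)

    def record(self, step: str, finding: str) -> None:
        """Attach a finding to a plan step as the agent works through it."""
        self.findings[step] = finding


def make_plan(question: str) -> ResearchPlan:
    """Hardcoded plan for illustration; an LLM would generate these steps."""
    return ResearchPlan(
        question=question,
        steps=[
            "Define key terms and scope",
            "Identify key metrics",
            "Map competitive landscape",
            "Analyze growth vectors",
        ],
    )
```

Because the plan is data rather than prose, it can be saved and rerun against a new question — which is exactly what makes the methodology reusable.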

INFERRED: This pattern appears to scale. The agent that monitors AI in ministry doesn’t just flag new tools. It seems to be building a taxonomy of use cases, tracking adoption curves, and identifying white space in the market. Over six months, it has accumulated insights that would require significant manual effort to compile.

The Critical Failure: Evidence vs Inference

The biggest failure almost killed the system’s credibility. Early versions presented inferences as facts.

An agent researching Bible reading habits would write: “Daily Bible reading is declining 15% year-over-year among evangelicals.” Authoritative. Specific. Completely unsourced. [This was a fabricated example showing the problem — not actual data]

I instituted the evidence-level rule. Every factual claim must carry its confidence level:

  • MEASURED: From instrumented data (our own analytics, published studies)
  • INFERRED: From aggregate patterns without direct tracking
  • ASSUMED: From domain knowledge or simulated data

Now the same type of finding reads: “INFERRED: Based on aggregate app store ratings and general survey trends in religious engagement, daily Bible reading may be declining among evangelicals — but we cannot prove causation without cohort tracking.” [CITATION NEEDED for specific survey data]

It’s longer. It’s hedged. It’s credible.
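The evidence-level rule is easy to enforce structurally rather than as a writing convention. A sketch with hypothetical names: every claim must carry a level, and an unsourced claim renders with an explicit gap instead of false authority:

```python
# Sketch of the evidence-level rule as a data type: a claim cannot exist
# without a MEASURED / INFERRED / ASSUMED tag, and a missing source is
# surfaced loudly rather than hidden. Names are illustrative.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Evidence(Enum):
    MEASURED = "MEASURED"   # instrumented data, published studies
    INFERRED = "INFERRED"   # aggregate patterns, no direct tracking
    ASSUMED = "ASSUMED"     # domain knowledge or simulated data


@dataclass
class Claim:
    text: str
    level: Evidence
    source: Optional[str] = None

    def render(self) -> str:
        cite = f" [{self.source}]" if self.source else " [CITATION NEEDED]"
        return f"{self.level.value}: {self.text}{cite}"
```

Making the tag a required field means an agent physically cannot emit the authoritative-but-unsourced sentence that nearly killed the system’s credibility.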

This mirrors the challenge every product leader faces with AI agents for productivity. An AI that confidently presents guesses as facts is worse than no AI at all. The hedge language isn’t a bug — it’s what makes the system trustworthy enough to inform real decisions.

The Abstraction Shift: From Doer to Designer

Six months in, my role has shifted. I’m no longer researching competitive moves or manually tracking industry trends. Instead, I’m designing research methodologies.

MEASURED: When I wanted to understand the global digital discipleship market, I didn’t spend hours reading reports. I defined the research parameters:

  • Geographic scope (focus on India, Brazil, Nigeria)
  • Time horizon (3-year trend analysis)
  • Key players (Bible Gateway, YouVersion, local language apps)
  • Success metrics (user growth, localization depth, offline functionality)

The agents executed the research overnight. By morning, I had a comprehensive analysis that required substantial time investment to produce manually.

This is the Karpathy pattern in practice. The human moves up one level of abstraction — from doing the research to designing the research. I’m not replaced. I’m leveraged.
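Designing the research rather than doing it means the human’s output is a spec. A sketch of the parameters above as an immutable object the agents consume (the field names are my own, not the system’s):

```python
# Sketch of research-as-spec: the human defines parameters once, agents
# execute overnight. Frozen so the spec can't drift mid-run.
from dataclasses import dataclass


@dataclass(frozen=True)
class ResearchSpec:
    geographies: tuple
    horizon_years: int
    key_players: tuple
    success_metrics: tuple


spec = ResearchSpec(
    geographies=("India", "Brazil", "Nigeria"),
    horizon_years=3,
    key_players=("Bible Gateway", "YouVersion", "local language apps"),
    success_metrics=("user growth", "localization depth", "offline functionality"),
)
```

Freezing the spec is the design choice that matters: the agents can read it all night, but only the human who moved up the abstraction ladder can change it.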

What Doesn’t Scale: The Human Elements

MEASURED: CONSILIUM handles information processing effectively. It fails at everything requiring human judgment.

Context switching: MEASURED: Agents can’t read the room. When a crisis hits — a security vulnerability, a key team member leaving — the system keeps delivering scheduled insights about competitive analysis. It doesn’t know when to pivot priorities.

Stakeholder dynamics: MEASURED: The system can analyze what competitors are building. It can’t navigate the politics of why our team should or shouldn’t build the same features. It doesn’t understand that some decisions are about people, not products.

Emotional intelligence: MEASURED: When meeting transcripts show tension between team members, agents flag it as a pattern. But they can’t suggest how to address interpersonal conflicts or when to have difficult conversations.

ASSUMED: The most successful AI agents for productivity likely complement human judgment — they don’t replace it. They handle the information processing that scales poorly for humans, freeing up mental capacity for the decisions that require wisdom, empathy, and context.

The Future: Intelligence Infrastructure for Every Product Leader

Here’s what excites me: CONSILIUM gives me intelligence infrastructure that only VPs at Fortune 500 companies used to have.

Competitive intelligence teams. Market research analysts. Executive assistants who can synthesize information across multiple workstreams. These were luxuries for senior executives with budget and headcount.

ASSUMED: Now, any product leader can potentially build similar capabilities, though the cost-effectiveness depends on specific API pricing and usage patterns. The barrier isn’t necessarily budget — it’s knowing how to architect autonomous systems that work reliably.

This isn’t about replacing human executive assistants (they’re irreplaceable for stakeholder management and complex coordination). It’s about democratizing the analytical infrastructure that helps leaders make informed decisions.

ASSUMED: Over the next year, I’m guessing we’ll see AI agents for productivity evolve from “smart assistants” to “autonomous intelligence teams.” The winners will likely be product leaders who learn to think like systems architects — designing agent workflows, not just prompting individual AIs.

The question isn’t whether AI agents will change how product leaders work. It’s whether you’ll design those systems yourself or let someone else define the methodology.


Want to build your own AI chief of staff? Start with one agent that handles one workflow autonomously. Master the autoresearch pattern. And always flag the difference between what you’ve measured and what you’ve inferred — your future self will thank you for the intellectual honesty.

Photo by CRYSTALWEED cannabis on Unsplash

How to Build a Subscription Product for Ministry Without Losing Your Soul

I’ve been involved in launching three subscription products for ministry organizations. In my experience, these platforms have grown from serving thousands, to tens of thousands, to now millions of paying subscribers across multiple Bible translations, generating significant monthly recurring revenue.

Here’s what I learned: the hardest part isn’t building the paywall. It’s deciding what belongs behind it.

Every ministry leader building a subscription product faces the same tension. Your mission says “go into all the world” (Mark 16:15). Your business model says “pay to access the good stuff.” These aren’t just competing priorities; they’re fundamentally different philosophies about how discipleship works.

I’ve been on both sides of this equation. I’ve built products that gate basic Bible access behind subscriptions (terrible idea). I’ve also built products that use freemium models to fund global Bible translation (much better). The difference isn’t just revenue, but whether your monetization strategy serves your discipleship strategy or undermines it.

Three Models, Three Different Answers

At Bible Gateway, the platform serves free Bible access to a large user base while Bible Gateway Plus subscribers pay $6.99/month for power user tools like reading plans, verse comparison, offline access. The core content stays free. The professional ministry tools require subscription.

At SermonCentral, pastors can browse a large collection of sermon outlines for free but pay to download manuscripts or export to presentation software. A portion of free users convert to paid subscriptions because they’re not buying content; they’re buying workflow optimization. The convenience is why they subscribe.

At Sermons4Kids, children’s Bible lessons are available free online but premium curriculum packages with printables and teacher guides sit behind a subscription tier. Churches get the ministry impact for free. Paid subscribers get the operational efficiency.

Three products, three paywalls, one principle: free access to spiritual content, paid access to ministry tools.

The Gap Between Free and Any Price

The hardest conversion in ministry isn’t $3.99 to $14.99. It’s $0 to $3.99.

Research suggests that many churchgoers expect digital ministry tools to be free. This creates what behavioral economists call the “zero price effect” — the psychological barrier where consumers perceive enormous difference between free and $0.01.

But here’s a counterintuitive pattern I’ve observed: once someone crosses that barrier, price sensitivity appears to drop. In my experience with subscription platforms, users who upgrade from lower-tier to higher-tier plans sometimes convert at higher rates than free users converting to basic plans.

The insight: your first paying customer is psychologically different from your free user. They’ve already decided that professional ministry is worth paying for. Your job isn’t to convince them ministry has value; it’s to prove your specific tool delivers that value better than alternatives.

The 90-Day Rule

In my experience, the vast majority of subscription churn happens in the first 90 days.

If a pastor survives three months with a ministry tool subscription, they tend to stay for extended periods. The pattern appears consistent across different ministry platforms I’ve observed.

This isn’t just a retention metric; I consider it a discipleship insight as well. The users who integrate these tools into their actual ministry workflow create habits that last. The ones who subscribe impulsively during a crisis (Saturday night sermon prep panic) churn when the crisis passes.

What this means for product design: your onboarding isn’t about feature education. It’s about habit formation. Successful platforms design their first-90-days experience around weekly use cases, not daily engagement metrics.

Annual Beats Monthly (But Not Why You Think)

Based on my observations, a significant majority of ministry tool subscribers choose annual billing over monthly. That’s not just better cash flow; it’s better discipleship outcomes.

Monthly subscribers tend to treat tools as disposable. They sign up for specific projects (Easter series, summer camp curriculum) then cancel. Annual subscribers build the tool into their ministry rhythm. They explore features beyond their immediate need. They recommend it to other pastors.

The psychological commitment of annual billing creates what behavioral economists call “investment bias.” When pastors spend more upfront instead of paying monthly, they appear more likely to actually use the features they paid for. Usage drives value realization. Value realization drives retention.

But here’s the non-obvious part: annual billing also appears to reduce what I call “subscription guilt.” Monthly charges create recurring reminders of cost. Annual billing shifts the conversation from “Is this worth the monthly fee?” to “How can I get more value from the tool I already bought?”

When Monetization Serves Discipleship

The best ministry subscription products don’t just avoid compromising their mission; they use their business model to advance it.

Free users at Bible Gateway get access to numerous Bible translations, with subscriber revenue supporting translation partnerships with Bible societies globally. Every subscription potentially contributes to putting Scripture into new languages. The monetization strategy supports the discipleship strategy.

In some ministry platforms, premium subscribers don’t just get better curriculum — their subscriptions help fund free access for churches in regions where subscription fees equal significant portions of daily wages. Paying customers aren’t just buying convenience. They’re supporting global ministry reach.

This flips the traditional ministry funding model. Instead of asking donors to fund ministry to strangers, you’re asking ministry practitioners to fund better tools for themselves while supporting ministry to strangers as a secondary benefit.

The psychological difference is significant. Donors give out of obligation or generosity. Subscribers pay for value received while creating value for others. One feels like charity. The other feels like partnership.

The Soul Question

Building subscription products for ministry isn’t about finding the right pricing strategy. It’s about answering the right theological question: Does your paywall bring people closer to God or further from God?

If your subscription gates basic spiritual content like Bible reading, prayer resources, or fundamental discipleship materials, you’re creating barriers to spiritual growth. That’s not just bad business (people will find free alternatives). It’s bad stewardship.

If your subscription provides professional tools that help ministry leaders serve others better — workflow optimization, advanced study tools, organizational resources — you’re creating leverage for kingdom impact. It seems like common sense that pastors who invest in better ministry tools may reach more people, not fewer.

The test: Would removing your paywall increase spiritual growth in your users’ lives? If yes, your monetization strategy needs work. Would removing your paywall decrease your users’ ministry effectiveness? If yes, you’ve found the sweet spot.

Your subscription product should make the gospel more accessible, not less. Sometimes that means charging nothing for content. Sometimes it means charging appropriately for tools. The soul question isn’t whether to charge — it’s what to charge for and why.

Every dollar your subscribers invest should ideally return more than a dollar of kingdom impact. That’s not just sustainable business. That’s biblical stewardship.

Photo by Mockup Free on Unsplash

The Best AI Tools for Pastors in 2026 (From Someone Who Builds Them)

I spent 18 months building AI-adjacent features at SermonCentral. Our tools helped pastors research, prepare, and teach. During that time, I evaluated several AI platforms targeting ministry, including tools from major players like Logos and various smaller platforms. I currently lead product for a Bible-focused platform, which gives me ongoing insight into how pastors use digital tools.

So when pastors ask me about AI tools, I’m sharing what I’ve observed from both building and using these platforms in ministry contexts.

Here’s what I’ve learned: the most effective AI tools for pastors aren’t necessarily the ones with the most features. They’re the ones that understand where AI helps and where it doesn’t.

AI is moving at a rapid pace. Moore’s law originally described transistor counts doubling every couple of years, and the volume of digitally stored information has grown even faster. At the pace AI is advancing right now, it feels as though anything I’ve written here is probably outdated before I hit publish.

Sermon Research: Emerging AI Options

SermonAI appears to be gaining attention

SermonAI positions itself as an alternative to expensive comprehensive software packages. Based on my testing, it focuses on research assistance rather than content generation.

What it appears designed for: Cross-reference generation, outline structures, and illustration suggestions. The tool seems aimed at the research phase and helping pastors find connections between passages.

The platform costs $29 monthly.

What it doesn’t claim to do: Generate complete sermons. The positioning emphasizes research assistance rather than finished content creation.

Logos has added AI features

Logos has integrated conversational AI into their existing commentary and resource library. The advantage: it can search across resources in your existing library. The consideration: it requires an existing Logos investment.

I’ve tested both SermonAI and Logos’ AI features. Each has different strengths depending on your existing workflow and resource library.

Bible Gateway’s approach

Full disclosure: I work for Bible Gateway’s parent company. Our AI features will focus on reading comprehension for individual Bible study rather than sermon preparation, helping readers understand difficult passages rather than preparing teaching content.

Bible Study Tools: Mixed AI Integration

YouVersion Bible App

The YouVersion app has experimented with various features over time. For current AI capabilities and pricing, pastors should check directly with YouVersion rather than rely on third-party reports.

Traditional resources remain valuable

After working on AI features for ministry applications, I still observe pastors using physical commentaries and concordances for deep study. AI appears most helpful for broad research and initial connection-finding, while sustained study often benefits from traditional approaches.

Church Management: Limited AI Integration

Planning Center and similar platforms

Various church management platforms are experimenting with AI features. For specific capabilities and availability, pastors should verify directly with vendors rather than assume features exist.

ChurchTrac and scheduling optimization

Some platforms use algorithmic optimization for volunteer scheduling based on availability patterns. This represents a more straightforward application of automation technology to logistical problems.

For current features and pricing, check directly with platform providers.

Content Creation: Variable Results

Canva’s design assistance

Canva has integrated AI image generation and text suggestions. For church communications, these tools can help with graphics creation, though results vary based on specific needs.

The AI appears to handle visual design well but may struggle with theological nuance. Complex theological concepts often require human insight for appropriate visual representation.

Presentation tools

Various platforms offer AI assistance for turning outlines into slides. Results tend to be professionally formatted but may lack the contextual understanding needed for specific congregational needs.

Pastoral Perspectives on AI Usage

Based on discussions with ministry leaders, comfort levels with AI appear to vary by application:

  • Administrative tasks: Generally high comfort
  • Research assistance: Moderate to high comfort among those with theological training
  • Content structure help: Mixed comfort, varies by individual
  • Content generation: Generally low comfort due to pastoral responsibility concerns

Comfort levels likely correlate with factors like theological education, church context, and individual technology adoption patterns, though specific data would be needed to verify these relationships.

Recommendations by Context

Smaller ministry contexts:
Consider starting with research-focused tools and basic administrative automation. Budget considerations will vary based on specific tools chosen. Claude CoWork has helped out many ministries I know of and it seems like they’ve smoothed out much of the onboarding process.

Larger ministry contexts:
May benefit from more comprehensive platforms, though implementation should account for staff training and congregation expectations.

All contexts:
Verify current features and pricing directly with vendors, as AI capabilities in this space evolve rapidly.

The Practical Assessment

Based on developing AI features for ministry tools: AI appears most effective at research tasks, moderately helpful for organization, and of limited to no use for replacing pastoral judgment.

Successful implementations seem to focus on enhancing research capabilities rather than replacing pastoral decision-making. AI cannot understand congregational needs, pastoral relationships, or the contextual factors that shape ministry decisions.

The most effective approach likely involves using AI where it demonstrates clear value — information processing, research assistance, and administrative efficiency — while maintaining human oversight for theological interpretation and pastoral application.

The future probably isn’t pastors versus AI, but pastors using better research tools while preserving the relational and interpretive aspects of ministry that require human wisdom.

“The simple believe everything, but the prudent give thought to their steps.” (Proverbs 14:15, ESV) This principle applies to evaluating new technology tools as much as any other area of pastoral leadership.


Note: AI capabilities in ministry tools change rapidly. Verify current features and pricing directly with providers before making decisions. This assessment reflects observations from my experience building and testing these tools, not comprehensive market research.

Photo by Eric O. IBEKWEM on Unsplash

John 21:5-6 and the Art of Asking Better Questions: Why AI Prompting Is Like Jesus Teaching His Disciples to Fish

“Then Jesus called out to them, ‘Friends, haven’t you any fish?’ ‘No,’ they answered. He said, ‘Throw your net on the right side of the boat and you will find some.’ When they did, they were unable to haul the net in because of the large number of fish.” (John 21:5-6, NIV)

The disciples had been fishing all night with nothing to show for it. Then Jesus, whom they didn’t immediately recognize, asked one simple question that changed everything. Not “Why aren’t you catching fish?” or “Have you tried different bait?” Just: “Haven’t you any fish?”

That question led to instruction. The instruction led to abundance.

When someone struggles with AI prompting, they’re casting their nets over and over, getting frustrated with empty results, convinced the tool is broken. But like the disciples, they’re often fishing in the wrong spot with the wrong approach.

The art isn’t in the casting, it’s in learning to ask better questions and knowing where to throw the net. Obviously, the disciples knew how to fish and this story isn’t really about fishing, it’s about obedience and trust, but I’m trying to use a metaphor and I’m not really that good at them.

The Problem With Most AI Interactions

I see this pattern constantly. Users approach AI tools the same way they approach search engines: throw in some keywords and hope for the best. But AI isn’t Google. It’s more like a really smart intern who needs context, direction, and clear expectations.

The disciples were experienced fishermen. They knew how to cast nets, repair equipment, read weather patterns. Most people struggling with AI aren’t lacking technical skills, they’re lacking the right framing.

Jesus didn’t give them a fishing tutorial. He asked a diagnostic question, then provided specific direction: “Throw your net on the right side of the boat.”

That specificity matters. “Right side” isn’t arbitrary, it’s based on understanding conditions they couldn’t see from their position in the boat. Jesus had a vantage point they didn’t.

The Anatomy of Better Questions

When I work with teams on AI integration for sermon prep, the breakthrough moment isn’t technical. It’s when they stop asking “How do I make AI write my sermon?” and start asking “How do I help AI understand my congregation’s needs?”

The difference:

Fishing in the wrong spot: “Write me a sermon on forgiveness.”

Throwing the net on the right side: “I’m preaching to a congregation that’s 60% over 50, many dealing with family estrangement after the 2020 election divisions. They’re tired of political sermons but need biblical hope for restoration. Help me write a 20-minute sermon on forgiveness that acknowledges real hurt without being preachy, using Matthew 18:21-22 as the primary text, with two personal application points they can act on this week.”

The second prompt gives AI the context it needs to be helpful. Like Jesus with the disciples, it provides specific direction based on understanding the full situation.
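One way to make this discipline habitual is to treat a prompt as structured data rather than a sentence you improvise each time. Here’s a minimal Python sketch of that idea — the field names and helper are my own illustration, not any vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class SermonPromptContext:
    """Context an AI assistant needs before its help is useful.

    Illustrative fields only — not a real API schema.
    """
    audience: str
    felt_need: str
    primary_text: str
    length_minutes: int
    constraints: str

def build_prompt(ctx: SermonPromptContext, topic: str) -> str:
    """Assemble a context-rich prompt instead of a bare keyword request."""
    return (
        f"I'm preaching to {ctx.audience}. {ctx.felt_need} "
        f"Help me outline a {ctx.length_minutes}-minute sermon on {topic}, "
        f"using {ctx.primary_text} as the primary text. {ctx.constraints}"
    )

ctx = SermonPromptContext(
    audience="a congregation that's 60% over 50, many dealing with family estrangement",
    felt_need="They're tired of political sermons but need biblical hope for restoration.",
    primary_text="Matthew 18:21-22",
    length_minutes=20,
    constraints="Acknowledge real hurt without being preachy, with two application points.",
)
prompt = build_prompt(ctx, "forgiveness")
```

The point isn’t the code — it’s that filling in those fields forces you to answer the diagnostic questions before you cast the net.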

Why This Matters for Digital Discipleship

The disciples’ empty nets weren’t just about breakfast. John tells us this story in the context of restoration: Peter’s reinstatement, the commissioning to “feed my sheep,” the establishment of early church leadership. The fishing miracle was functional, but it served a larger discipleship purpose.

AI in ministry works the same way. The technical capability (generating text, analyzing data, creating content) serves the larger mission of discipleship. But like the disciples, we need to learn where to cast the net.

At Bible Gateway, we’re seeing this play out with 23 million monthly users across 200+ translations. The users who get the most value aren’t necessarily the most technically sophisticated — they’re the ones who understand how to frame their spiritual questions in ways that digital tools can support.

A user searching “hope Bible verses” gets generic results. A user searching “Bible verses about hope after job loss, specifically for someone who feels God has abandoned them” gets targeted, actionable content that can actually help with discipleship.

The difference isn’t in the search technology, it’s in learning to ask better questions.

The Jesus Method of AI Prompting

Jesus’s interaction with the disciples gives us a framework for effective AI engagement:

Start with diagnosis. “Haven’t you any fish?” establishes the current state. Before jumping into solutions, AI needs to understand what you’re actually trying to accomplish. Not just the task, but the context around it.

Provide specific direction. “Throw your net on the right side” isn’t vague inspiration. It’s actionable guidance based on understanding the full situation. Good AI prompts are similarly specific about desired output, tone, length, audience, and constraints.

Trust the process. The disciples could have argued about which side of the boat was better. Instead, they followed the instruction. AI works best when you iterate based on results, not when you debate the approach.

Recognize the bigger picture. This wasn’t really about fishing, it was about discipleship. Using AI like this isn’t really about efficiency, it’s about enabling better ministry, better products, better service to people who need what you’re building.

Practical Applications for Ministry and Product

This principle scales across everything I work on. Whether it’s helping pastors with AI sermon preparation or building features for Bible Gateway’s global user base, the pattern holds: better questions lead to better outcomes.

For pastors: Instead of asking AI to “help with Bible study preparation,” try: “I’m teaching a small group of new believers, mostly in their 20s and 30s, about spiritual disciplines. They’re interested but overwhelmed by traditional approaches. Help me design a 4-week study on prayer that feels accessible and practical, with weekly exercises they can actually complete.”

For product teams: Instead of asking AI to “analyze user feedback,” try: “Review these 200 support tickets from the past month. Our mobile app’s Bible reading plans have a 40% completion rate, but we don’t know why people drop off. Identify patterns in user complaints that might indicate specific friction points in the first two weeks of plan usage.”

The difference is specificity informed by context, which is exactly what Jesus provided the disciples.
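The product-team prompt above is doing something you can also do in code before you ever involve a model: scope the question to the window you actually care about. A hypothetical Python sketch (ticket fields and friction terms are invented for illustration):

```python
from collections import Counter

# Toy tickets; in practice these would come from a support-system export.
tickets = [
    {"day_in_plan": 3,  "text": "reminder notifications never arrived"},
    {"day_in_plan": 9,  "text": "lost my streak after the app crashed"},
    {"day_in_plan": 40, "text": "want more translations"},
    {"day_in_plan": 12, "text": "notifications stopped after a week"},
]

FRICTION_TERMS = ["notifications", "crashed", "streak", "sync"]

def early_friction(tickets, window_days=14):
    """Narrow tickets to the first two weeks of plan usage, then count
    friction terms — scoping the question the way the prompt does in prose."""
    early = [t for t in tickets if t["day_in_plan"] <= window_days]
    counts = Counter()
    for t in early:
        for term in FRICTION_TERMS:
            if term in t["text"]:
                counts[term] += 1
    return counts

counts = early_friction(tickets)
```

Whether the pattern-finding happens in a script or in a model, the specificity comes from the framing, not the tool.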

Why the Right Side of the Boat Matters

The disciples caught so many fish they couldn’t haul the net in. Not because the fish suddenly appeared, but because they were fishing where the fish actually were.

In the wisdom tradition, this is about alignment and understanding how things actually work rather than how we think they should work. AI isn’t magic, but it is powerful when applied with wisdom and clear direction.

The abundance wasn’t in the tool (the net) or even the technique (the casting). It was in the guidance that led them to the right place at the right time with the right approach.

For those of us building digital discipleship tools, this matters enormously. We’re not just solving technical problems, we’re helping people encounter God through technology. The quality of that encounter often depends on learning to ask better questions.


Sermon Illustration

The disciples had been fishing all night with empty nets. They knew how to fish — they were professionals. But when Jesus asked if they had caught anything and told them to throw their net on the right side of the boat, everything changed. Suddenly they caught so many fish they couldn’t pull the net in.

Sometimes our prayers feel like those empty nets. We’re asking God for help, but we’re not seeing results. Maybe the issue isn’t God’s willingness to provide, maybe it’s learning to ask better questions. Instead of “God, help me,” try “God, help me understand what You want me to learn through this situation.” Instead of “God, fix this,” try “God, show me how to respond faithfully right here.” The abundance might not be in getting what we think we want, but in learning to ask for what we actually need. And like the disciples, we might discover that the breakthrough was there all along, we just needed better direction about where to cast our nets.

Photo by Ankit Manoharan on Unsplash

Ethan Mollick’s Co-Intelligence and the Biblical Call to Wisdom: Why AI Partnership Requires More Than Technical Skill

Ethan Mollick, co-director of Wharton’s Mack Institute for Innovation Management, has spent the last two years making a compelling case that we’re entering an era of “co-intelligence” — where humans and AI work together as cognitive partners rather than in a traditional tool-user relationship.¹ His core thesis: the most productive future isn’t human replacement by AI, but human augmentation through AI, where both parties contribute complementary strengths to problems neither could solve alone. This partnership model, Mollick argues, requires us to develop entirely new skills around delegation, collaboration, and what he calls “cyborg” thinking.

As someone building products for millions of users, I keep coming back to a question Mollick doesn’t directly address: if AI is becoming our cognitive partner, what does wisdom look like in that partnership?

The answer, I think, starts in Proverbs.

The Wisdom Literature Has Something to Say About AI Partners

“Plans fail for lack of counsel, but with many advisers they succeed.” (Proverbs 15:22, NIV)

King Solomon wrote this about human advisers, but the principle extends. The Hebrew word for “counsel” here is sod — it means not just advice, but the kind of intimate consultation that comes from deep understanding of both the problem and the person facing it. It’s the difference between getting information and getting wisdom.

Mollick’s co-intelligence framework captures something biblical that most AI discussions miss: partnership requires discernment about what each party brings. In my daily work, I’ve watched this play out in real time.

When my team started experimenting with AI-assisted content curation, the first instinct was pure efficiency — let the AI scan, categorize, and recommend. Classic tool thinking. The results were technically accurate but spiritually hollow. AI could identify themes in Scripture but couldn’t discern why Romans 8:28 resonates differently for someone walking through grief versus someone making a career change.

The breakthrough came when we shifted to what Mollick would recognize as co-intelligence: AI handling pattern recognition across millions of reading behaviors while humans provided the pastoral wisdom about what those patterns actually meant for individual spiritual formation.

What Co-Intelligence Looks Like in Faith Tech

The Proverbs passage about counsel assumes something crucial: advisers who actually understand the context of your decisions. This is where most AI implementations in faith contexts fall short — not because the AI lacks capability, but because we haven’t thought carefully about what wisdom requires.

“The simple believe anything, but the prudent give thought to their steps.” (Proverbs 14:15, NIV)

Applied to AI partnership, this verse cuts both ways. We can’t be “simple” about what AI tells us, but we also can’t be prudent if we’re trying to solve everything ourselves.

This looks like AI identifying reading patterns — which passages get highlighted most, where people stop in reading plans, which search terms spike during cultural events. But the decision about what those patterns mean for product design? That requires human discernment informed by pastoral experience, theological training, and understanding of how spiritual formation actually works.

Mollick talks about this as “keeping humans in the loop,” but I’d frame it differently: keeping wisdom in the loop. The goal isn’t human involvement for its own sake — it’s ensuring that the partnership produces something that serves human flourishing, not just human efficiency.

The Delegation Problem: More Than Task Management

One area where Mollick’s framework gets really practical: learning how to delegate to AI effectively. This isn’t just about prompt engineering, it’s about understanding what kinds of problems benefit from AI’s strengths (pattern recognition, rapid iteration, handling scale) versus what needs human judgment (context interpretation, ethical reasoning, spiritual discernment).

“Commit to the Lord whatever you do, and he will establish your plans.” (Proverbs 16:3, NIV)

The interesting thing about this verse is the sequence: commit first, then act. In AI delegation, we often reverse this: we act first (deploy the AI solution) and try to align it with our values later.

I’ve been thinking about this in the context of sermon preparation tools. AI is definitely coming for sermon prep, and the early products are impressive from a technical standpoint. But most of them are solving the wrong problem by optimizing for content generation rather than spiritual formation.

A co-intelligence approach would start with the theological question: what’s the actual purpose of sermon preparation? Is it to produce content, or is it to help pastors engage deeply with Scripture so they can shepherd their congregations more effectively?

If it’s the latter (and I think it is), then AI partnership looks different. AI handles the research heavy lifting of cross-referencing commentaries, identifying thematic connections, surfacing relevant cultural context. The pastor handles the spiritual discernment of understanding their congregation’s specific needs, wrestling with how the text speaks to current circumstances, crafting application that connects eternal truth to daily life.

The Stewardship Question

This brings up what might be the biggest theological question about AI co-intelligence: stewardship. If we’re called to be faithful stewards of the gifts and resources God gives us, what does faithfulness look like when one of those resources is artificial intelligence?

“From everyone who has been given much, much will be demanded; and from the one who has been entrusted with much, much more will be asked.” (Luke 12:48, NIV)

AI capability definitely falls into the “much has been given” category. The question is what “much will be demanded” looks like in practice.

Mollick’s work suggests we’re still in the early stages of figuring this out. His research at Wharton shows that even sophisticated knowledge workers are using AI at maybe 20% of its potential, mostly because we’re still thinking about it as an advanced search engine rather than a cognitive partner.

But I wonder if that’s actually wise restraint rather than missed opportunity. The Tower of Babel wasn’t fundamentally about technology itself but about the misuse of technological capability: the assumption that technological power equals wisdom.

In product development, this shows up as the difference between building features because AI makes them possible versus building features because they serve human flourishing. The stewardship question forces us to ask not just “can we?” but “should we?” and “to what end?”

Practical Implications for Product Builders

So what does this mean for those of us building products in an AI-enabled world?

First, it means getting serious about the wisdom question. Mollick’s co-intelligence framework is helpful, but it needs theological grounding. AI partnership isn’t just about efficiency, it’s about ensuring that our use of AI capability serves love of God and neighbor.

Second, it means designing for human flourishing, not just human preference. AI can predict what users will click on, but it can’t determine whether clicking on that thing actually serves their long-term spiritual formation. That requires human judgment informed by wisdom.

Third, it means accepting that co-intelligence is inherently messy. The Proverbs model of seeking counsel assumes disagreement, iteration, and the need for ongoing discernment. AI partnerships that work will feel more like conversations than commands.

In our recent experiments, the most successful AI implementations have been the ones that generate multiple options rather than single recommendations, that surface uncertainty rather than hiding it, and that make their reasoning transparent so humans can engage with it meaningfully.
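Those three properties — multiple options, surfaced uncertainty, transparent reasoning — can even be encoded in the shape of the data an AI feature returns. A hedged Python sketch (the types and thresholds are my own, not a description of any production system):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    passage: str
    confidence: float  # surfaced to the human reviewer instead of hidden
    rationale: str     # the model's stated reasoning, exposed for engagement

def top_suggestions(candidates, k=3, floor=0.2):
    """Return several ranked options rather than one answer, dropping only
    clearly weak candidates so the human sees the spread of uncertainty."""
    kept = [s for s in candidates if s.confidence >= floor]
    return sorted(kept, key=lambda s: s.confidence, reverse=True)[:k]

options = top_suggestions([
    Suggestion("Romans 8:28", 0.8, "matches grief-related reading history"),
    Suggestion("Psalm 23", 0.6, "common comfort passage for this search"),
    Suggestion("Leviticus 11", 0.1, "weak topical match"),
])
```

A result type that carries confidence and rationale makes the conversation possible; a bare single answer forecloses it.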

The Long View

Mollick is right that we’re entering an era of co-intelligence. But I think the Christian perspective adds something crucial to his framework: the recognition that intelligence without wisdom is dangerous, and wisdom without love is meaningless.

“If I… can fathom all mysteries and all knowledge… but do not have love, I am nothing.” (1 Corinthians 13:2, NIV)

Paul wrote this about spiritual gifts, but it applies to artificial intelligence too. The goal isn’t just more capable AI systems, it’s AI systems that help us love God and neighbor more effectively.

That’s a higher bar than efficiency or even intelligence. But for those of us building products that serve spiritual formation, it’s the only bar that matters.

The co-intelligence era is coming whether we’re ready or not. The question is whether we’ll approach it with the wisdom of Proverbs or the folly of Babel. I’m betting on Proverbs, but only if we’re intentional about what that actually means in practice.


¹ Mollick, Ethan. “Co-Intelligence: Living and Working with AI” (Portfolio, 2024). See also his ongoing research at OneUsefulThing.org.

Photo by Mindfield Biosystems on Unsplash

Ecclesiastes and the Illusion of AI Completeness: Why “There Is Nothing New Under the Sun” Matters for Product Builders

“What has been is what will be, and what has been done is what will be done, and there is nothing new under the sun. Is there a thing of which it is said, ‘See, this is new’? It has been already in the ages before us.” — Ecclesiastes 1:9-10

I’ve been thinking about this passage while watching the AI hype cycle spin through 2024 and 2025 and now explode into 2026. Every demo feels revolutionary. Every model release promises to change everything. Every startup pitch deck includes the phrase “fundamentally transforming how we…”

But Solomon had a different take. Nothing new under the sun.

This isn’t pessimism — it’s pattern recognition. And for those of us building AI-powered products, especially in faith tech, it’s the most liberating truth we can internalize.

The Completeness Trap

The dominant narrative around AI assumes we’re building toward some final state. Artificial General Intelligence (AGI). The singularity. Complete automation. Perfect personalization. The ultimate Bible study companion that knows exactly what verse you need to read today.

I see this thinking in every product roadmap meeting. “Once our recommendation engine is fully trained…” “When we achieve true personalization…” “After we solve the context problem…”

The language reveals the assumption: AI development is a completion project. We’re building toward done.

Solomon understood something we’re forgetting. Human problems don’t get solved — they get managed, generation after generation, in slightly different forms.

At Bible Gateway, we’ve watched this play out across 25+ years of digital ministry. The tools change. The core human need remains constant: people want to encounter God through Scripture, but they need help knowing where to start and how to apply what they find.

We thought search would solve discovery. Then recommendations. Then reading plans. Then AI-powered devotionals. Each iteration helps — our 23 million users prove that. But none of them completes the discipleship process.

Because there is nothing new under the sun.

What This Means for Product Strategy

Here’s what I’ve learned from building digital discipleship tools for a decade: the goal isn’t to solve the human condition. It’s to serve it faithfully, one iteration at a time.

This reframes everything:

Feature prioritization shifts from revolutionary to iterative. Instead of “How do we build the perfect sermon prep AI?” the question becomes “What’s the smallest improvement we can make to how pastors interact with Scripture this week?”

Success metrics become process-oriented, not outcome-oriented. We don’t measure whether people become better Christians. We measure whether they engage with the Bible more consistently. The spiritual formation is between them and God.

Technology roadmaps emphasize adaptation over completion. Every AI model will be replaced. Every algorithm will be superseded. The question isn’t whether your current solution is perfect — it’s whether your architecture can evolve with changing needs.

User research focuses on persistent patterns, not trending behaviors. What aspects of discipleship have remained constant across cultures and centuries? Those are your true product requirements.

The Stewardship Frame

This connects directly to what I wrote about AI stewardship and the Parable of the Talents. The servant who buried his talent wasn’t wrong because he was risk-averse. He was wrong because he treated stewardship as a preservation project instead of a multiplication project.

The same applies to AI product development. If we’re building toward some final, complete state, we’re burying our talent. We’re preserving instead of multiplying.

But if we accept Solomon’s wisdom — that human needs cycle through the same patterns across generations — then our job becomes different. We’re not building the ultimate solution. We’re building today’s faithful response to ancient needs, knowing someone else will build tomorrow’s.

This is why I’m skeptical of AI companies that promise to “solve” theological education or “revolutionize” spiritual formation. The problems they’re addressing — helping people understand complex texts, connecting abstract principles to daily life, building consistent spiritual habits — aren’t new. They’ve existed since Moses told the Israelites to bind Scripture on their foreheads and write it on their doorposts.

Good technology serves these persistent needs more effectively. It doesn’t replace them.

Practical Applications

What does this look like in practice?

For AI training: Stop trying to capture all of human theological knowledge. Focus on helping users navigate the specific questions they’re asking today. Our Bible Gateway search data shows people aren’t looking for comprehensive systematic theology — they’re looking for practical application of specific passages.

For product roadmaps: Build for the 90% use case, not the edge case that would make your product “complete.” Most people using Bible study AI want help connecting Sunday’s sermon to Monday’s decisions. They don’t need a system that can engage in doctoral-level exegesis.

For user research: Study how people have approached spiritual formation across different eras and cultures. The delivery mechanisms change, but the core challenges remain remarkably consistent. Augustine’s Confessions and a modern Bible app user’s reading plan serve the same fundamental need.

For success metrics: Measure engagement depth, not engagement breadth. Are people spending more time with individual passages? Are they asking better questions? Are they making connections between different parts of Scripture? These indicators matter more than total users or session length.
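Depth-oriented metrics like these are straightforward to compute once you decide to track them. A minimal Python sketch — the session fields are illustrative, not any real analytics schema:

```python
def engagement_depth(sessions):
    """Depth indicators (time per passage, cross-references followed)
    instead of breadth metrics like total users or session count.
    Session field names are assumptions for this sketch."""
    total_seconds = sum(s["seconds"] for s in sessions)
    passages = {p for s in sessions for p in s["passages"]}
    xrefs = sum(s.get("cross_refs_followed", 0) for s in sessions)
    return {
        "seconds_per_passage": total_seconds / max(len(passages), 1),
        "cross_refs_per_session": xrefs / max(len(sessions), 1),
    }

sample = [
    {"seconds": 300, "passages": ["Matt 18:21-22"], "cross_refs_followed": 2},
    {"seconds": 180, "passages": ["Matt 18:21-22", "Luke 15"], "cross_refs_followed": 1},
]
metrics = engagement_depth(sample)
```

Rising seconds-per-passage and cross-reference follow-through suggest people are lingering and connecting, which is closer to the formation signal than raw session length.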

The Long View

Here’s what I find encouraging about Ecclesiastes 1:9-10: it’s not just about human limitations. It’s about human continuity.

The fact that spiritual needs persist across generations means our work has staying power. We’re not building for a moment — we’re building for a pattern that will repeat as long as humans seek meaning and connection with God.

Every generation needs help reading Scripture. Every culture needs assistance applying ancient wisdom to contemporary challenges. Every individual needs guidance building spiritual habits that stick.

The tools change. The need doesn’t.

This gives me confidence that thoughtful AI development in faith tech isn’t just timely — it’s timeless. Not because we’re building something that will last forever, but because we’re serving needs that will.

The question isn’t whether AI will transform spiritual formation. It’s whether this generation’s AI tools will serve people’s spiritual growth as faithfully as previous generations’ tools served theirs.

I think they can. But only if we remember there’s nothing new under the sun.


SERMON ILLUSTRATION

“The Ancient Algorithm”

“What has been is what will be, and what has been done is what will be done, and there is nothing new under the sun.” — Ecclesiastes 1:9

Before we had Google, we had concordances. Before we had Bible apps, we had commentaries. Before we had AI sermon assistants, we had libraries full of systematic theology.

Solomon understood what we sometimes forget in our excitement over new technology: the tools change, but the human needs remain constant. People have always needed help understanding Scripture. They’ve always struggled to apply ancient wisdom to daily life. They’ve always sought guidance in building spiritual habits.

AI isn’t creating new spiritual needs, it’s serving ancient ones. The pastor using ChatGPT for sermon prep is doing what pastors have always done: seeking help to faithfully communicate God’s Word. The difference is speed and scale, not purpose.

This should humble us and encourage us. Humbled, because we’re not creating something unprecedented. Encouraged, because we’re participating in work that spans generations. Every tool that helps people engage Scripture more deeply — from Gutenberg’s printing press to today’s Bible apps — serves God’s timeless purposes through temporary means.

The question for the church isn’t whether to embrace new technology. It’s whether our use of it serves the same goals as the faithful tools that came before.

Photo by Bernd 📷 Dittrich on Unsplash

AI Is Coming for Sermon Prep. Here’s What Pastors Actually Need.

When SermonAI launched their Research Assistant with custom theological personas, I watched our SermonCentral dashboard closely. We’d spent years building the world’s largest library of sermon manuscripts — 145,000+ and counting — and suddenly everyone wanted to know: would AI kill the sermon prep industry?

The answer turned out to be more nuanced than the headlines suggested.

The AI Sermon Prep Land Grab Is Here

The competitive landscape shifted fast. Verbum launched a Homily Assistant for Catholic priests. Sermon Snap started capturing “AI sermon” search volume. SermonSpark positioned itself as the ChatGPT for pastors.

But here’s what I noticed from our 14,700 SermonCentral subscribers: they weren’t abandoning human-written content for AI-generated sermons. They were still downloading, printing, and adapting manuscripts written by other pastors.

Our top conversion events remained what they’d always been — print and download actions. Not “generate new sermon” clicks.

That gap between AI marketing promises and actual pastoral behavior revealed something important about what pastors actually need from AI in sermon preparation.

What Pastors Actually Do With Sermon Content

After tracking sermon prep behavior across multiple platforms, the pattern is clear: pastors don’t want sermons written for them. They want research accelerated.

Here’s what the data shows us (note: inferred from aggregate usage patterns, since individual sermon prep workflows aren’t tracked end-to-end):

Most pastors start with a biblical text, then move to research. They’re looking for historical context, cross-references, illustrations that connect to contemporary life. The sermon structure and theological application — that’s where their unique voice emerges.

At SermonCentral, I watched this play out in search behavior. Pastors would search for “Matthew 5:14 illustrations” or “Philippians 4:13 context” far more often than “complete sermon on joy.” They wanted building blocks, not finished products.

The pastor’s voice IS the product. A sermon isn’t a blog post you can template and optimize. It’s performed, personal, deeply theological. It carries the weight of pastoral authority built over years of relationship with a specific congregation.

Why AI-Generated Sermons Miss the Mark

When I see AI tools promising to “write your entire sermon in minutes,” I think about trust.

Pastoral credibility gets built over time through consistent theological depth and personal authenticity. Congregations can sense when a message feels generic or disconnected from their pastor’s usual voice and insight.

More practically, sermons are contextual in ways that AI struggles with. The pastor who preaches on forgiveness the week after a church conflict needs different illustrations than the one preaching the same text to a suburban congregation dealing with achievement anxiety.

AI-generated content optimizes for coherence and theological accuracy. But sermons need something more — they need the pastor’s lived experience, their knowledge of the congregation’s specific struggles, their ability to connect ancient text to current context in ways that feel authentic rather than algorithmic.

This isn’t anti-AI sentiment. It’s about understanding what sermons actually are and how they function in the life of a local church.

The Right Way to Build AI for Sermon Prep

Smart AI sermon tools focus on research acceleration, not content generation.

Here’s where AI actually helps pastors work better:

Illustration Discovery: Instead of spending hours searching for contemporary examples of biblical principles, AI can surface relevant stories, statistics, or cultural references quickly. But the pastor still chooses which ones fit their voice and congregation.

Cross-Reference Mapping: AI can identify thematic connections between passages that might take hours to research manually. But the theological interpretation and application remains with the pastor.

Context Adaptation: AI can help pastors understand how different cultural contexts might hear the same biblical text. But the decision about which perspective to emphasize stays pastoral.

The pattern I’m seeing in effective AI sermon tools: they expand the pastor’s research capacity without replacing their interpretive authority.

Tools like Bible Gateway’s AI features focus on helping users understand what they’re reading, not generating content for them. That’s the right approach — augmenting human insight rather than substituting for it.
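Cross-reference mapping in particular lends itself to a simple mental model: score passages by the themes they share, then hand the ranked list to the pastor. A naive Python sketch with a hand-tagged toy corpus (the tags and scoring are my illustration, not how any real tool works):

```python
# Tiny illustrative corpus; theme tags are hand-assigned for the sketch.
PASSAGE_THEMES = {
    "Matthew 18:21-22": {"forgiveness", "mercy"},
    "Luke 15:11-32": {"forgiveness", "restoration"},
    "Psalm 103:8-12": {"mercy", "steadfast love"},
}

def cross_references(passage, corpus=PASSAGE_THEMES):
    """Rank other passages by count of shared themes — a stand-in for the
    thematic mapping an AI assistant can accelerate. The pastor still
    decides which connections to preach."""
    themes = corpus[passage]
    scores = {
        other: len(themes & t)
        for other, t in corpus.items()
        if other != passage and themes & t
    }
    return sorted(scores, key=scores.get, reverse=True)

related = cross_references("Matthew 18:21-22")
```

A real system would work over embeddings and commentary data rather than hand tags, but the division of labor is the same: the machine proposes connections at scale, the human interprets them.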

The Brand Promise Problem

Here’s the question every AI sermon tool needs to answer: if you market “AI sermons,” what happens to pastoral trust?

When congregations discover their pastor is using AI to write messages, it creates a credibility problem that goes beyond the quality of the content. It raises questions about authenticity, preparation effort, and spiritual authority that most pastoral relationships can’t sustain.

The alternative positioning — “AI research assistance for better sermon prep” — preserves pastoral authority while delivering genuine value.

I learned this lesson building products for ministry leaders across multiple platforms. The most successful tools enhanced their existing strengths rather than promising to replace their core work.

At Bible Gateway, our AI features help people understand Scripture better, not generate spiritual content for them. That boundary matters for user trust and product longevity.

What This Means for Pastoral Ministry

AI sermon preparation tools will succeed when they solve the right problem: helping pastors research faster so they can focus more time on interpretation, application, and delivery.

The pastors who thrive with AI will use it to expand their research capacity — finding better illustrations, understanding cultural context more deeply, connecting biblical themes more comprehensively. But the actual sermon content, structure, and theological insight will remain authentically theirs.

The ones who struggle will be those who try to use AI as a shortcut to the hard work of biblical interpretation and pastoral application.

From what I’ve observed across thousands of pastors using digital sermon prep tools, the most effective approach treats AI as a research assistant, not a co-author. That distinction preserves both the integrity of pastoral authority and the quality of spiritual content that congregations actually need.

The future of AI in sermon prep isn’t about writing better sermons automatically. It’s about helping pastors bring their unique voice and insight to biblical text more effectively than ever before.


The Tower of Babel Was a Technology Problem, Not a Language Problem

Most pastors I’ve talked to use the Tower of Babel the same way. It’s a warning against ambition. Don’t reach too high. Stay in your lane.

That reading has legs. But I’ve spent the last several years building products for churches — first at SermonCentral, where we managed over 245,000 sermon manuscripts for 14,700+ subscribers, and now at Bible Gateway, which serves 23 million monthly visitors across 200+ Bible translations. When I read Genesis 11 through a product lens, I see something the ambition reading misses.

God didn’t judge the bricks.

“Come, let us build ourselves a city, with a tower that reaches to the heavens, so that we may make a name for ourselves.” — Genesis 11:4, NIV

The materials were fine. The engineering was fine. The goal — consolidating human fame — was the problem. And that distinction matters right now, because the church is having the wrong argument about AI.

AI Is Bricks and Mortar

The debate I keep hearing splits along predictable lines. One camp says AI threatens authentic ministry. The other says it’s the future of outreach. Both are fixated on the tool and ignoring the purpose behind it.

AI is a building material. Your spam filter runs on it. Your search results are shaped by it. Your congregation interacts with machine learning dozens of times a day without a second thought. The question of whether the church uses AI was settled years ago.

The question that matters: what are you building, and for whom?

A church that uses AI to transcribe sermons so a deaf congregant can read along on Monday morning — that’s building for the Kingdom. A church that uses AI-generated sermons so the pastor can spend less time in the text — that’s a tower with its own name on it.

Same bricks. The blueprint is what changed.

Augustine’s Framework (From 397 AD)

About 1,600 years before anyone worried about ChatGPT, Augustine drew a line I think about constantly in product work.

In De Doctrina Christiana (Book I, chapters 3-4), Augustine distinguished between two postures toward the things of this world: uti (to use) and frui (to enjoy as an end in itself). His argument: the things of creation are meant to be used as means toward loving God and neighbor. They become disordered when we treat them as destinations — when we frui the tool instead of the purpose the tool serves.

I’ve found this more useful than any AI ethics whitepaper.

Consider: a church uses AI to automate its weekly bulletin, freeing up a volunteer to spend those 3 hours visiting a homebound member. That’s uti. The tool serves a human end.

Now consider: a church uses AI to eliminate pastoral presence altogether. Their new chatbot handles prayer requests, the algorithm personalizes a sermon playlist, the system runs without a shepherd. That’s frui. The church has started delighting in efficiency as its own reward.

The technology didn’t change. The orientation did.

Three Questions Before Adopting Any AI Tool

I’ve spent enough time in product leadership to know that the best safeguard isn’t a policy document (I’ve written plenty of those — they collect dust). It’s a habit of asking the right questions before you build.

1. Who benefits?

If the honest answer is “the budget” and not “the congregation,” pause. Cost savings aren’t wrong — stewardship matters. But if the primary beneficiary is the institution rather than the people it serves, you’re building in the wrong direction. The best AI implementations I’ve seen at Bible Gateway started with a specific human need, not a line item.

2. What human activity does this replace, and should that activity stay human?

Administrative tasks — scheduling, data entry, email sorting, transcript formatting — automate freely. These are good uses of AI. They free up people for work that only people can do.

But pastoral care, spiritual formation, the ministry of presence — these resist automation for a reason. A hospital visit from a pastor matters because a person chose to show up. An AI can generate a thoughtful prayer. It cannot bear witness to suffering.

(This is the question I find hardest to answer cleanly, by the way. The line between “administrative” and “pastoral” blurs more than we’d like. Where does sermon research end and sermon preparation begin? I don’t have a tidy answer. I think the honest move is to keep asking.)

3. Does this build the church’s capacity or create dependency on a vendor?

This is the product leader in me talking. I’ve watched organizations — churches included — adopt tools that felt like empowerment but functioned as dependency. If your church can’t operate without a specific AI platform, you haven’t adopted a tool. You’ve adopted a landlord.

Look for AI that trains your people. Look for solutions where the value stays with the church if the vendor disappears tomorrow.

From Babel to Pentecost

The Bible doesn’t end the language story at Babel. It picks it back up in Acts 2.

“All of them were filled with the Holy Spirit and began to speak in other tongues as the Spirit enabled them. Now there were staying in Jerusalem God-fearing Jews from every nation under heaven. When they heard this sound, a crowd came together in bewilderment, because each one heard their own language being spoken.” — Acts 2:4-6, NIV

At Babel, human technology consolidated power and built a monument to self. God scattered and confused. At Pentecost, the Spirit moved — and people from every nation heard the gospel in their own mother tongue. Each person’s language, met where they were.

According to recent Barna research, 77% of pastors believe AI can have a positive impact. I think that’s right — but only if we’re asking the Babel question each time we adopt something new.

Here’s what that looks like in practice: a small church in rural Guatemala using AI translation to access theological training that was previously locked behind an English-language paywall. That points toward Pentecost.

A megachurch using AI to scale content production so it can dominate more digital market share. That points back toward Babel.

What We Build Next

I don’t think the church needs to fear AI. I also don’t think it needs to be infatuated with it (and having built products in this space since 2018, I’ve watched both reactions play out in real time).

The bricks and mortar are here. They’re powerful. They’re going to keep getting more powerful. The church’s job is to ask the Babel question every time: what are we building, and whose name is on it?

That question doesn’t have a permanent answer. It has to be asked again with every new tool, every new capability, every new vendor pitch. And I think the churches that will get this right are the ones willing to sit with the discomfort of asking it honestly — even when the answer means building slower.


Sermon Illustration: The Tower of Babel and AI

When the people of Babel built their tower, God didn’t judge the bricks. He didn’t condemn the mortar or the engineering. The materials were fine. The problem was the purpose: “let us make a name for ourselves” (Genesis 11:4, NIV).

Today, AI is the new brick and mortar. Churches face the same question Babel faced: what are we building, and for whom? AI that frees a pastor to sit at a hospital bedside — that’s technology in service of presence. AI that replaces the pastor at the bedside — that’s a tower with our own name on it.

But the story doesn’t end at Babel. At Pentecost, God took language itself — the very thing He confused at Babel — and used it to carry the gospel across every barrier (Acts 2:4-6). The bricks are in our hands. The blueprint is the question.

Karpathy’s Autoresearch and the Parable of the Talents: What AI Stewardship Looks Like in Practice

A few weeks ago, Andrej Karpathy — former AI director at Tesla, co-founder of OpenAI — released a project that made me think about ministry.

I didn’t expect that either.

Karpathy built a framework called autoresearch. It runs autonomous ML experiments on a single GPU while the researcher sleeps. The AI agent modifies training code, runs a 5-minute experiment, evaluates the result, keeps improvements, discards failures, and loops. About 12 experiments per hour. Roughly 100 overnight. He woke up to measurable performance gains — with zero human intervention during the run.
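The loop he describes can be sketched in a few lines of Python. This is a toy illustration, not the actual autoresearch code: `run_experiment` is a hypothetical stand-in for a 5-minute training run, and the random learning-rate tweak stands in for whatever code modification the agent proposes.

```python
import random

def run_experiment(params):
    """Stand-in for a short training run; higher score is better.

    A real loop would train a model for ~5 minutes on a GPU and return
    a validation metric. This toy objective rewards learning rates
    near a hypothetical optimum of 0.01."""
    return -(params["lr"] - 0.01) ** 2

def autoresearch_loop(base_params, n_experiments=100):
    """Propose a change, evaluate it, keep improvements, discard failures."""
    best_params = dict(base_params)
    best_score = run_experiment(best_params)
    for _ in range(n_experiments):
        candidate = dict(best_params)
        # The agent proposes a modification -- here, a random lr tweak.
        candidate["lr"] *= random.choice([0.5, 0.9, 1.1, 2.0])
        score = run_experiment(candidate)
        if score > best_score:  # keep improvements
            best_params, best_score = candidate, score
        # failures are simply discarded
    return best_params, best_score
```

At roughly 12 runs per hour, 100 iterations of this keep-or-discard loop amounts to about one night of unattended compute, the overnight cadence described above.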

The part that got me: Karpathy doesn’t write the training code anymore. He writes a Markdown file — plain English instructions — that tells the AI what to research, what constraints to follow, and when to stop. His words: “you are programming the `program.md` Markdown files that provide context to the AI agents.” He calls this “programming in Markdown.”
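As a rough illustration of what such a brief covers, a `program.md`-style file might look like the sketch below. This is hypothetical, not the actual contents of Karpathy's file; only the three ingredients he names (what to research, what constraints to follow, when to stop) are taken from his description.

```markdown
# Research brief

## Goal
Improve validation loss on the current baseline.

## Constraints
- One GPU; each experiment runs for at most 5 minutes.
- Modify training hyperparameters only; never touch the eval code.
- Keep a change only if the validation metric improves.

## Stop condition
Stop after 8 hours or 100 experiments, whichever comes first.
```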

The human moved up one level of abstraction. Define the methodology, set the guardrails, let the system execute. Not less involved — involved differently, at the level of direction instead of mechanics.

39,800 GitHub stars in the first two weeks. The tech world noticed.

I think the church should too.

The Parable We Keep Skimming

In Matthew 25:14-30 (ESV), Jesus tells the story of a master who entrusts his servants with talents — significant sums of money — before leaving on a journey. One receives five talents, another two, another one. The first two invest and double their resources. The third buries his in the ground.

When the master returns, the investors are praised: “Well done, good and faithful servant. You have been faithful over a little; I will set you over much” (Matthew 25:21, ESV). The one who buried his talent gets rebuked. Not for losing money — he hadn’t lost anything. He was rebuked for doing nothing with what he’d been given.

We tend to read this as a general principle about using your gifts. It is that. But I think there’s something more pointed here for 2026.

AI is a talent in the Matthew 25 sense. It’s a resource placed in front of this generation, and we have a choice. Invest it toward the mission, or bury it because the risk feels too high.

What This Looks Like at My Desk

I want to be specific, because the abstract conversation about “AI and the church” doesn’t move anyone forward.

I’m Director of Product at HarperCollins Christian Publishing, where I lead Bible Gateway — a platform serving over 75 million monthly visitors engaging with Scripture. Before this role, I led product for SermonCentral, which grew to 14,700+ paying subscribers with access to more than 145,000 sermon manuscripts.

Over the past year, I’ve built a system of 18 AI agents that handle competitive analysis, research synthesis, meeting intelligence, content drafting, and task management. Several run overnight — not unlike Karpathy’s loop. The architecture is different (mine orchestrate across business functions, his optimizes a neural network), but the pattern is identical: define methodology, set constraints, let the system execute, review results in the morning.

Every hour I used to spend pulling competitor data or formatting reports is now an hour I spend thinking about how 75 million people experience Scripture online. Or how to make Bible Gateway better for the person opening it at 2 AM because they can’t sleep and need something solid to hold onto.

Karpathy programs research methodology in Markdown now instead of writing Python. I program strategic priorities and agent instructions instead of pulling spreadsheets. The abstraction layer moved up. The work got more human, not less.

The Fear Is Understandable — and Partly Right

I hear the concerns from church leaders, and I take them seriously.

AI will replace authentic ministry. AI will make pastors lazy. AI will simulate relational presence that only a human body in a room can provide. These aren’t irrational. Some are already happening in small ways.

If a pastor uses AI to generate a sermon they never wrestle with, that’s a problem. If a church deploys a chatbot as a substitute for pastoral counseling, that’s a problem. If we treat AI-generated prayers as equivalent to the honest, stumbling prayers of a person before God — we’ve lost something that matters more than efficiency.

But Karpathy’s work shows the other path. The tool doesn’t replace the human. It moves the human to where they’re most needed.

The pastor doesn’t stop preaching — they stop spending 4 hours hunting for the right illustration and spend that time with the family walking through a divorce. The administrator doesn’t stop managing — they stop updating attendance spreadsheets and spend that time training volunteers. The ministry leader doesn’t stop leading — they stop drowning in email and spend that time on the phone with a donor questioning their faith.

I’ve lived this tradeoff. When my agents took over competitive analysis (something that used to eat 3-4 hours a week), I didn’t fill that time with more busywork. I spent it in 1-on-1s with my team and in deeper product strategy. The output quality went up because I was operating at the right level of abstraction.

Where the Line Is (and Where I’m Still Figuring It Out)

I want to be honest — I don’t think anyone has this mapped perfectly yet. I certainly don’t.

Here’s where I’d draw it today:

AI should handle the administrative. Scheduling, data analysis, report generation, email triage, content formatting. These consume enormous amounts of ministry time, and they don’t require pastoral presence. Automate them aggressively.

AI should accelerate the research. Sermon prep research, theological cross-referencing, community demographic analysis. These benefit from AI’s speed and scope. The pastor still does the synthesis — the “what does this mean for my people on Sunday” work. But raw material gathering? Let the machine run overnight, like Karpathy’s experiments.

AI should never simulate the relational. It should not write your prayers. It should not be the voice your congregation hears when they need a shepherd. It should not replace the hospital visit, the awkward conversation in the parking lot, the moment after the service where someone says what they’ve been carrying for months.

The servant in Matthew 25 who was praised put the resource to work — but in service of the master’s purpose, not his own convenience (Matthew 25:20-23, ESV).

Here’s the tension I haven’t resolved: where does “accelerating research” end and “simulating thinking” begin? When an AI summarizes 30 commentaries on a passage, is the pastor still doing exegesis, or are they just picking from a menu? I don’t have a clean answer. I think it depends on whether the pastor is engaging the summaries critically or just grabbing the first one that sounds good. But that’s a discipline question, not a technology question — and discipline questions are harder to solve with guardrails.

If You’re a Church Leader Starting from Zero

You don’t need 18 agents. You need one tool that saves you 3 hours a week.

Pick the task that eats the most time with the least relational value. For most pastors I’ve talked to, it’s sermon illustration research, email management, or meeting notes. Start there. Learn one tool well. Measure the hours you get back.

Then — and this is the part most people skip — reinvest that time in something only a human can do. A visit. A phone call. An hour of prayer you’ve been meaning to protect but kept losing to administrative drift.

Set your guardrails before you need them. Write down what AI will not do in your ministry context. Revisit it quarterly. Technology expands into unintended spaces when boundaries aren’t explicit — I’ve watched this happen in product development for 15 years.

The Talent in Front of Us

Karpathy’s autoresearch is an engineering achievement. But the deeper pattern is almost theological: the human was never meant to stay at the level of mechanical execution. We’re built to operate at the level of purpose, direction, and relationship. Genesis 1:28 (ESV) gives humanity dominion and stewardship — a mandate to cultivate, not just maintain.

The master in the parable didn’t give talents so the servants could admire them or lock them away. He gave them to be invested — put to work — in ways that generated return.

For those of us building technology that serves the church, the return isn’t financial. It’s pastors freed from busywork to do the work they were called to. It’s 75 million monthly visitors encountering Scripture through a platform that keeps getting better because the product team has time to think. It’s churches stewarding every tool available — including AI — in service of the mission they’ve been given.

The talent is in front of us. What we do with it is a stewardship question.


Josh Read is Director of Product at HarperCollins Christian Publishing (Bible Gateway) and holds a doctorate in Strategic Organizational Leadership. He writes about AI, product leadership, and digital discipleship at drjoshuaread.com.