Philippians 2 and Agentic Systems: Why Humility Is the Foundation of Intelligent Systems

“Do nothing from selfish ambition or conceit, but in humility count others more significant than yourselves. Let each of you look not only to his own interests, but also to the interests of others. Have this mind among yourselves, which is yours in Christ Jesus, who, though he was in the form of God, did not count equality with God a thing to be grasped, but emptied himself, by taking the form of a servant, being born in the likeness of men.” (Philippians 2:3-7, ESV)

Paul’s letter to the Philippians contains what theologians call the kenosis passage — the self-emptying of Christ. It’s about voluntary limitation, choosing constraint over capability, service over sovereignty.

I’ve been thinking about this as I watch agentic systems become more capable. The rhetoric around AI often centers on unlimited potential, boundless capability, systems that can do anything. But in my experience building AI systems, including multi-agent workflows for executive tasks, I’ve observed that success often comes through deliberate constraints rather than unlimited scope.

Well-designed AI agents typically focus on narrow mandates: a calendar agent that protects focused time blocks rather than trying to optimize entire lifestyles, or an email agent that surfaces priority messages rather than attempting to replace human judgment entirely. Each agent serves a specific function within defined bounds.

This represents a design choice rather than a technical limitation.

The Kenosis of Intelligent Systems

When building AI workflows, the temptation exists to create agents that can handle everything. But what I’ve seen is that this approach typically produces chaotic results — agents interfering with each other, making decisions outside their expertise, creating more complexity than clarity.

A more effective approach involves thinking about AI agents as specialized robots rather than general-purpose minds. Each agent can be designed to “empty itself” of capabilities it doesn’t need, serving a specific function more effectively through limitation.

Specialized agents with narrow scopes — research agents that don’t schedule meetings, scheduling agents that don’t write summaries, writing agents that don’t manage tasks — can demonstrate greater utility through deliberate constraints.

This mirrors patterns in effective human teams, which typically consist of specialists who understand their roles rather than generalists attempting everything. They practice a form of professional kenosis — voluntary limitation for collective effectiveness.

Paul’s instruction to “count others more significant than yourselves” suggests a design principle: building systems where each component serves the whole rather than maximizing individual capabilities.

The Servant Leadership Model for AI

The parallels between servant leadership principles and effective AI system design are notable. Servant leaders focus on enabling others’ success rather than demonstrating their own power, asking “How can I help you accomplish your goals?” rather than “How can I show you what I can do?”

Effective AI systems often follow similar patterns. GitHub Copilot suggests contextual code completions rather than attempting to write entire applications. AI writing assistants help clarify thinking rather than replacing human thought processes. Advanced language models acknowledge uncertainty and ask clarifying questions rather than claiming omniscience.

These systems practice technological humility by acknowledging their limitations.

In contrast, AI systems that fail in production environments often attempt to exceed their appropriate scope, make decisions beyond their training data, or present uncertain inferences as established facts. They lack the kenotic restraint that characterizes truly useful intelligence.

Building Products for Global Spiritual Formation

This principle becomes particularly important when developing products for spiritual formation. Digital discipleship platforms serve diverse global communities across cultural, linguistic, and theological boundaries. The temptation exists to build universal systems that can serve everyone.

However, effective spiritual formation tends to be deeply personal and contextual. A Bible application serving a house church in rural Kenya requires different features than one serving a suburban megachurch. Prayer applications for new believers need different structures than those designed for theological students.

AI systems serving spiritual formation appear most effective when they practice kenosis — limiting their scope to serve specific communities well rather than attempting to serve everyone adequately.

Current development work on AI tools for sermon preparation follows this model. Rather than attempting to write complete sermons (which, based on informal conversations with pastoral leaders, many pastors prefer to avoid), such tools can focus on specific supportive tasks: locating relevant cross-references, summarizing historical context, or structuring outlines. They operate within deliberate constraints to support pastoral ministry rather than replace it.

Each tool “empties itself” of broader capabilities to serve one function excellently. Like Paul’s description of Christ, they don’t grasp for equality with human pastors — they take the form of servants.

The Paradox of Powerful Restraint

An interesting observation: seemingly powerful AI systems often prove most effective when operating under significant constraints. The wisdom of limiting scope applies to artificial intelligence as much as to human teams.

In my experience, the most effective AI implementations have narrow, well-defined purposes. They operate within their designated areas, defer to human judgment on edge cases, and acknowledge when they lack sufficient context for recommendations.

This represents strength through limitation rather than weakness.

Paul writes that Christ “did not count equality with God a thing to be grasped.” He could have insisted on unlimited power but chose constraint for the sake of service. The kenosis wasn’t a loss of divinity — it was divinity expressed through voluntary limitation.

Similarly, the most intelligent AI systems may not be those with the most capabilities, but those that use their capabilities most wisely — which often means choosing restraint over action.

Technical Humility in Agentic Systems

What might this look like in actual system design? Consider what could be called “kenotic interfaces” — AI systems that actively limit their own scope.

For example, an email management system might flag messages for human review when confidence levels fall below high thresholds, choosing uncertainty over potentially incorrect automated actions. A research assistant might include confidence indicators in summaries, distinguishing between well-sourced findings and preliminary observations that require verification.
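As a rough sketch of what that deferral logic could look like in practice (the threshold value, action names, and function shape here are illustrative assumptions, not any real system’s API):

```python
from dataclasses import dataclass

# Assumed threshold below which the agent defers to a human.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class TriageDecision:
    action: str          # "auto_archive", "auto_reply", or "human_review"
    confidence: float
    reason: str

def triage_email(predicted_action: str, confidence: float) -> TriageDecision:
    """Kenotic triage: act only when confident; otherwise hand the
    uncertain case back to a person instead of guessing."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return TriageDecision(predicted_action, confidence,
                              "high-confidence automation")
    return TriageDecision("human_review", confidence,
                          f"confidence {confidence:.2f} below threshold")

decision = triage_email("auto_archive", 0.62)
print(decision.action)  # low confidence, so the agent defers
```

The point of the sketch is the default: when in doubt, the system chooses restraint over action.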

These design choices represent features rather than limitations. The wisdom of acknowledging uncertainty can increase system trustworthiness.

The Global Scale Challenge

Building for global spiritual formation means designing for contexts that developers may never fully understand. While optimization for familiar cultural contexts remains feasible, platforms serving Orthodox Christians in Eastern Europe, Pentecostals in West Africa, and house churches throughout Asia require different approaches.

The kenotic approach suggests building systems that acknowledge their cultural limitations. Rather than attempting to provide universal spiritual guidance, they can provide tools that local leaders adapt to their specific contexts.

Bible reading features need not assume Western individualism. Prayer tools need not assume specific liturgical traditions. Community features need not assume particular church structures.

Each feature can “empty itself” of cultural assumptions to serve diverse communities more effectively. Like Christ taking human form while maintaining divine nature, these systems can preserve core functionality while adapting to local contexts.

The Long View

Paul’s kenosis passage encompasses more than humility — it describes transformation. “Therefore God has highly exalted him and bestowed on him the name that is above every name” (Philippians 2:9). Self-emptying leads to greater effectiveness rather than diminishment.

A similar pattern may emerge for AI systems. Those practicing technological kenosis — voluntary constraint for the sake of service — may ultimately prove more valuable than systems grasping for unlimited capability.

The Tower of Babel failed because it attempted to exceed proper limits. Modern AI might encounter similar challenges without the discipline of restraint.

The most powerful systems may be those that understand when not to exercise their power.


Key Insight:

The kenosis principle — Christ’s voluntary self-emptying described in Philippians 2 — offers a design philosophy for AI systems. Instead of maximizing capabilities, effective AI agents can practice deliberate constraint, serving specific functions excellently rather than attempting everything adequately. This proves particularly relevant for products serving global spiritual formation, where cultural humility and contextual awareness matter more than technical sophistication. Just as Christ didn’t grasp for equality with God but took the form of a servant, intelligent systems may become more useful when they acknowledge limitations and defer to human judgment on edge cases. The paradox of kenosis — that voluntary limitation can lead to greater effectiveness — may apply to artificial intelligence as much as spiritual leadership. In a world of increasingly capable AI, the most valuable systems may be those that understand when not to use their power.


I Built an AI Chief of Staff. Here’s What I Learned About AI Agents.

Six months ago, I was drowning. Director of Product Management, building tools for millions of monthly users, while simultaneously launching a new venture in the digital discipleship space. Two products, two teams, two companies — and the day still only had 24 hours.

That’s when I built theconsilium.ai. Not a chatbot. Not a writing assistant. An actual AI chief of staff with 18 autonomous agents that run on cron jobs, conduct overnight research, and synthesize insights while I sleep. MEASURED: It has been running for six months and has processed over 200 research tasks without human intervention.

Here’s what I learned about AI agents that actually work.

The System: 18 Agents, One Goal

CONSILIUM isn’t a single AI doing everything. It’s a distributed system where each agent has one job and does it autonomously.

Morning Intelligence: MEASURED: Agent pulls my calendar, scans my Substack subscriptions, scores articles for relevance (1-10), and delivers a briefing by 6 AM. The scoring algorithm looks for keywords like “product management,” “AI agents,” and “digital discipleship” — topics central to my work.
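A minimal sketch of that kind of keyword-weighted relevance scoring, under assumed topic weights (the production scorer isn’t shown here, so treat every name and number below as illustrative):

```python
# Assumed topic weights; the real briefing agent's weights are not public.
TOPIC_KEYWORDS = {
    "product management": 3,
    "ai agents": 4,
    "digital discipleship": 3,
}

def score_article(title: str, body: str) -> int:
    """Score an article 1-10 by weighted keyword hits; title hits count double."""
    title_lc, body_lc = title.lower(), body.lower()
    score = 1  # floor: every article starts at 1
    for keyword, weight in TOPIC_KEYWORDS.items():
        if keyword in title_lc:
            score += 2 * weight
        elif keyword in body_lc:
            score += weight
    return min(score, 10)  # cap at 10

print(score_article("Why AI Agents Fail", "Notes on product management trends"))
```

Even a scorer this crude is enough to rank a morning feed; the value is in running it every day without being asked.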

Competitive Monitoring: MEASURED: Three agents track Bible Gateway competitors, one each for YouVersion, Logos, and emerging players. They parse feature announcements, pricing changes, and user feedback from app stores. Every Sunday, they synthesize findings into a competitive landscape update.

Research Queue: MEASURED: The breakthrough agent. I can drop a research question into Slack — “What’s the current state of AI in sermon preparation?” — and wake up to a 3-page analysis with citations, market sizing, and key players identified.

Meeting Intelligence: MEASURED: Records, transcribes, and extracts action items from every call. But here’s the key — it doesn’t just summarize. It connects insights across meetings. When the same concern appears in three different conversations, it flags the pattern.
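The cross-meeting pattern flag can be surprisingly simple. A hedged sketch, assuming a flag threshold of three conversations (the real threshold and concern-extraction step aren’t documented here):

```python
from collections import Counter

PATTERN_THRESHOLD = 3  # assumption for illustration

def flag_patterns(meeting_concerns: list[list[str]]) -> list[str]:
    """Flag any concern that surfaces in at least PATTERN_THRESHOLD meetings."""
    counts = Counter(
        concern
        for concerns in meeting_concerns
        for concern in set(concerns)  # count each concern once per meeting
    )
    return [c for c, n in counts.items() if n >= PATTERN_THRESHOLD]

meetings = [
    ["hiring freeze", "roadmap slip"],
    ["hiring freeze"],
    ["hiring freeze", "vendor risk"],
]
print(flag_patterns(meetings))  # ['hiring freeze']
```

The hard part in practice is extracting comparable concern labels from transcripts; the counting itself is trivial.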

INFERRED: The magic appears to happen in the synthesis layer. Individual agents feed insights to a coordinator that seems to find connections no single agent would catch. When the competitive agent notices YouVersion launching AI-powered reading plans the same week my research queue analyzes sermon prep tools, the coordinator connects those dots.

What Actually Works: Autonomous Research Patterns

The most successful agents follow what I call the “autoresearch pattern” — borrowing from Andrej Karpathy’s autoresearch concept. The AI doesn’t just answer questions. It generates its own research methodology.

MEASURED: Here’s how it works: I ask “What’s driving growth in digital discipleship tools?” The agent doesn’t immediately search for articles. First, it creates a research plan:

  • Define “digital discipleship tools” (Bible apps, prayer apps, church management)
  • Identify key metrics (downloads, DAU, revenue, user retention)
  • Map competitive landscape (incumbents vs startups)
  • Analyze growth vectors (organic, paid, partnerships)

Then it executes the plan autonomously. It reads through my curated sources, scores relevance, and builds a knowledge graph of interconnected findings. By morning, I have not just answers — I have a research methodology I can reuse.
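The plan-then-execute shape described above can be sketched in a few lines. Here `make_plan` and the `execute_step` callback are stand-ins; a real agent would generate both with a language model:

```python
from typing import Callable

def make_plan(question: str) -> list[str]:
    """Stage 1: turn a research question into explicit steps.
    (A real agent would generate these; this is a fixed stand-in.)"""
    return [
        f"define key terms in: {question}",
        "identify key metrics",
        "map competitive landscape",
        "analyze growth vectors",
    ]

def run_research(question: str, execute_step: Callable[[str], str]) -> dict:
    """Stage 2: execute each step, keeping the plan itself as a
    reusable methodology alongside the findings."""
    plan = make_plan(question)
    findings = {step: execute_step(step) for step in plan}
    return {"methodology": plan, "findings": findings}

report = run_research(
    "What's driving growth in digital discipleship tools?",
    execute_step=lambda step: f"[stub result for: {step}]",
)
print(len(report["methodology"]))  # the plan ships with the answers
```

The design choice worth noting: the methodology is part of the output, which is what makes it reusable the next morning.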

INFERRED: This pattern appears to scale. The agent that monitors AI in ministry doesn’t just flag new tools. It seems to be building a taxonomy of use cases, tracking adoption curves, and identifying white space in the market. Over six months, it has accumulated insights that would require significant manual effort to compile.

The Critical Failure: Evidence vs Inference

The biggest failure almost killed the system’s credibility. Early versions presented inferences as facts.

An agent researching Bible reading habits would write: “Daily Bible reading is declining 15% year-over-year among evangelicals.” Authoritative. Specific. Completely unsourced. [This was a fabricated example showing the problem — not actual data]

I instituted the evidence-level rule. Every factual claim must carry its confidence level:

  • MEASURED: From instrumented data (our own analytics, published studies)
  • INFERRED: From aggregate patterns without direct tracking
  • ASSUMED: From domain knowledge or simulated data

Now the same type of finding reads: “INFERRED: Based on aggregate app store ratings and general survey trends in religious engagement, daily Bible reading may be declining among evangelicals — but we cannot prove causation without cohort tracking.” [CITATION NEEDED for specific survey data]

It’s longer. It’s hedged. It’s credible.
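Enforcing the rule mechanically helps. A minimal sketch of the evidence-level tagging (the enum and helper are my illustration of the rule, not the system’s actual code):

```python
from enum import Enum

class Evidence(Enum):
    MEASURED = "MEASURED"   # instrumented data, published studies
    INFERRED = "INFERRED"   # aggregate patterns without direct tracking
    ASSUMED = "ASSUMED"     # domain knowledge or simulated data

def claim(text: str, level: Evidence) -> str:
    """Prefix every factual claim with its evidence level so an
    inference can never masquerade as a measurement."""
    return f"{level.value}: {text}"

print(claim("Daily reading may be declining among evangelicals",
            Evidence.INFERRED))
# INFERRED: Daily reading may be declining among evangelicals
```

Once every claim passes through a function like this, unlabeled assertions simply cannot reach the report.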

This mirrors the challenge every product leader faces with AI agents for productivity. An AI that confidently presents guesses as facts is worse than no AI at all. The hedge language isn’t a bug — it’s what makes the system trustworthy enough to inform real decisions.

The Abstraction Shift: From Doer to Designer

Six months in, my role has shifted. I’m no longer researching competitive moves or manually tracking industry trends. Instead, I’m designing research methodologies.

MEASURED: When I wanted to understand the global digital discipleship market, I didn’t spend hours reading reports. I defined the research parameters:

  • Geographic scope (focus on India, Brazil, Nigeria)
  • Time horizon (3-year trend analysis)
  • Key players (Bible Gateway, YouVersion, local language apps)
  • Success metrics (user growth, localization depth, offline functionality)

The agents executed the research overnight. By morning, I had a comprehensive analysis that required substantial time investment to produce manually.

This is the Karpathy pattern in practice. The human moves up one level of abstraction — from doing the research to designing the research. I’m not replaced. I’m leveraged.
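Designing the research rather than doing it means the human artifact becomes a spec the agents consume. A sketch of what that spec might look like as a typed object (field names are assumptions, not a real schema):

```python
from dataclasses import dataclass, field

@dataclass
class ResearchSpec:
    """Human-authored parameters; agents fill in everything downstream."""
    question: str
    geographies: list[str] = field(default_factory=list)
    horizon_years: int = 3
    key_players: list[str] = field(default_factory=list)
    success_metrics: list[str] = field(default_factory=list)

spec = ResearchSpec(
    question="Global digital discipleship market",
    geographies=["India", "Brazil", "Nigeria"],
    horizon_years=3,
    key_players=["Bible Gateway", "YouVersion"],
    success_metrics=["user growth", "localization depth", "offline functionality"],
)
print(spec.horizon_years)
```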

What Doesn’t Scale: The Human Elements

MEASURED: CONSILIUM handles information processing effectively. It fails at everything requiring human judgment.

Context switching: MEASURED: Agents can’t read the room. When a crisis hits — a security vulnerability, a key team member leaving — the system keeps delivering scheduled insights about competitive analysis. It doesn’t know when to pivot priorities.

Stakeholder dynamics: MEASURED: The system can analyze what competitors are building. It can’t navigate the politics of why our team should or shouldn’t build the same features. It doesn’t understand that some decisions are about people, not products.

Emotional intelligence: MEASURED: When meeting transcripts show tension between team members, agents flag it as a pattern. But they can’t suggest how to address interpersonal conflicts or when to have difficult conversations.

ASSUMED: The most successful AI agents for productivity likely complement human judgment — they don’t replace it. They handle the information processing that scales poorly for humans, freeing up mental capacity for the decisions that require wisdom, empathy, and context.

The Future: Intelligence Infrastructure for Every Product Leader

Here’s what excites me: CONSILIUM gives me intelligence infrastructure that only VPs at Fortune 500 companies used to have.

Competitive intelligence teams. Market research analysts. Executive assistants who can synthesize information across multiple workstreams. These were luxuries for senior executives with budget and headcount.

ASSUMED: Now, any product leader can potentially build similar capabilities, though the cost-effectiveness depends on specific API pricing and usage patterns. The barrier isn’t necessarily budget — it’s knowing how to architect autonomous systems that work reliably.

This isn’t about replacing human executive assistants (they’re irreplaceable for stakeholder management and complex coordination). It’s about democratizing the analytical infrastructure that helps leaders make informed decisions.

ASSUMED: Over the next year, I’m guessing we’ll see AI agents for productivity evolve from “smart assistants” to “autonomous intelligence teams.” The winners will likely be product leaders who learn to think like systems architects — designing agent workflows, not just prompting individual AIs.

The question isn’t whether AI agents will change how product leaders work. It’s whether you’ll design those systems yourself or let someone else define the methodology.


Want to build your own AI chief of staff? Start with one agent that handles one workflow autonomously. Master the autoresearch pattern. And always flag the difference between what you’ve measured and what you’ve inferred — your future self will thank you for the intellectual honesty.


What 23 Million Bible Readers Taught Me About Digital Discipleship


Every month, roughly 23 million people open Bible Gateway to read Scripture. That’s more than attend every Southern Baptist Convention church on a given Sunday — the SBC’s own 2023 report counted 12.4 million in average weekly worship attendance.1

I lead product at HarperCollins Christian Publishing, where Bible Gateway is my primary focus. Before that, I spent years building SermonCentral — a platform serving 14,700+ subscribing pastors with access to 145,000+ sermon manuscripts — and co-built ORI, a youth discipleship app for mentoring teenagers. I’ve spent the last few years of my career watching how people actually behave when they engage with Scripture through technology. And what I’ve observed has changed the way I think about what “digital discipleship” means.

Content Distribution Is Not Discipleship

Most church tech conversations define digital discipleship as “putting Christian content online.” Upload a sermon. Publish a devotional. Build a Bible app.

That’s content distribution. Discipleship is something else.

From a product perspective, digital discipleship is designing technology that facilitates spiritual formation — helping people move from curiosity to commitment to transformation. The difference matters because it changes what you build. If you’re optimizing for content distribution, you chase volume: more translations, more devotionals, more features. If you’re optimizing for formation, you chase behavior change: consistency, depth, relationship.

Bible Gateway has given me a front-row seat to how millions of people actually engage with Scripture. Not how we hope they do, not how pastors assume they do — how they actually do. The patterns are humbling.

Commitment Structures Beat Content Volume

Bible Gateway offers hundreds of reading plans across dozens of categories. We have the content. What we’ve observed is that completion rates vary dramatically — and it’s not the “best” content that wins. It’s the best structure.

Short reading plans with clear daily commitments consistently outperform longer ones in completion rates. (I want to be precise: this is based on aggregate engagement data across our reading plan ecosystem, not a controlled A/B test. The pattern is strong, but I’m stating it as an observed trend.)

This makes sense if you think about it through a discipleship lens. The goal of a reading plan isn’t to get someone through the entire Bible in 365 days. The goal is to build a habit of daily engagement with Scripture. A 7-day plan someone finishes builds more spiritual momentum than a year-long plan abandoned in February. The research supports this — BJ Fogg’s work on Tiny Habits at Stanford demonstrates that small, completable commitments are the foundation of lasting behavior change.2

The product implication: when designing for digital discipleship, optimize for completion and consistency, not comprehensiveness. Finishable is better than thorough.

I saw the same thing at SermonCentral. Pastors didn’t need more sermon content — they needed the right content at the right time in their prep cycle. The value was relevance and timing, not volume.

The Gap Between Bible Search and Bible Study

Something surprised me when I first dug into Bible Gateway’s usage data: the overwhelming majority of sessions are what I’d call “Bible search” behavior, not “Bible study” behavior.

Most people come to look up a specific verse. They type “John 3:16” or “Philippians 4:13” into the search bar, read it, and leave. They’re using the platform as a reference tool. With over 2,000 Bible searches happening every minute on Bible Gateway, that’s a lot of single-verse visits.

This isn’t a criticism — it’s a behavioral insight with real implications for how we think about digital discipleship strategy.

If most users are in “lookup mode,” the discipleship opportunity isn’t in the content they came for. They already know that verse. The opportunity is in what comes next. Cross-references. Historical context. A reading plan that starts at that passage. A study note that opens the text up. The moment after someone finds what they came for is the moment a reference visit can become a formation experience.

(I should be transparent: I’m inferring the “lookup vs. study” distinction from session duration, page depth, and search query patterns in aggregate. We can see that a large portion of sessions are short and single-verse. But I can’t tell you what’s happening in someone’s heart during a 30-second visit — maybe that one verse is exactly what they needed. The data shows behavior, not transformation.)

The product principle applies broadly: meet people where they are, not where you wish they were. Design the next step from actual behavior, not from an ideal user journey.

The Day 7 Engagement Cliff

This is the most actionable pattern I’ve observed, and it’s consistent across every content platform I’ve worked on.

When someone starts a reading plan, engagement drops sharply after about Day 7. The first few days see strong completion. By the end of the first week, there’s a significant cliff. People who make it past Day 10 tend to finish — but a substantial number never get there.

(Evidence level: this is a pattern in aggregate reading plan data. Exact drop-off percentages vary by plan type and length, but the general shape — strong start, sharp drop around Day 7, stabilization for those who persist — is consistent enough that I’m confident calling it a pattern. This aligns with published habit formation research — Phillippa Lally’s 2010 study in the European Journal of Social Psychology found that early repetitions are the most fragile period for new habits.3)

For digital discipleship design, the implication is clear: Day 5 through Day 8 is where you need your best intervention design. Reminders. Encouragement. Community connection. A check-in from a real person. Whatever bridges the gap between initial motivation and formed habit.

This is where most digital discipleship tools fail. They’re good at onboarding. They’re good at content. They go quiet in the messy middle — the stretch where motivation fades and habit hasn’t locked in yet. That gap is where discipleship actually happens, and it’s where most apps have nothing to say.

At Bible Gateway’s scale, even small improvements in that Day 5-8 window could mean hundreds of thousands of people moving from casual lookup to sustained practice.

Why Features Rarely Solve Discipleship Problems

I’ve shipped a lot of features across my career. One thing I’ve learned — sometimes painfully — is that adding features to a discipleship tool almost never solves a discipleship problem.

The instinct is always to build more. More study tools. More social features. More gamification. But the digital discipleship tools that actually seem to work are the ones that reduce friction to spiritual practice, not the ones that add complexity to it.

Bible Gateway’s core value proposition is remarkably simple: read any Bible translation, for free, instantly. Over 200 versions in 70+ languages. That simplicity is the product. Every feature we consider needs to serve that core experience, not compete with it.

There’s a real tension here. Bible Gateway Plus offers 50+ study resources, ad-free reading, and deep study tools at $4.99/month. But even the premium tier works because it removes friction (ads, limited study tools) rather than adding cognitive load. The upgrade makes the simple thing simpler.

What ORI Taught Me About the Limits of Scale

All of this data-driven thinking needs a counterweight. For me, that counterweight is ORI.

ORI is a youth discipleship app I co-built, and its premise is different from a content platform like Bible Gateway. ORI facilitates the relationship between a mentor and a young person. The technology doesn’t do the discipleship — it supports the human who does.

That experience taught me something analytics can’t: the most effective digital discipleship tool is often the one that gets out of the way. The one that connects a young person with an adult who cares about them, gives them a shared framework for conversation, and then steps back. It echoes what Paul wrote to the Thessalonians — “We were gentle among you, like a nursing mother taking care of her own children” (1 Thessalonians 2:7, ESV). Discipleship has always been relational. Technology either serves that or distracts from it.

There’s a spectrum here. On one end, platforms like Bible Gateway serve millions with content at scale. On the other, tools like ORI serve hundreds by facilitating real human relationships. Both are valid. Both are needed. But they succeed for different reasons, and conflating them is a mistake I see church tech teams make often.

Friction Is the Enemy

If I had to compress everything I’ve learned into one principle: your job is to reduce friction between a person and their next spiritual step.

Not to create content. Not to build features. Not to gamify Scripture. To reduce friction.

At Bible Gateway’s scale, that means instant access to any translation, fast search, and reading plans designed around how people actually behave. At ORI’s scale, that means making it easy for a mentor to show up prepared for a fifteen-minute conversation with a teenager.

The 23 million people who use Bible Gateway each month aren’t a metric. They’re people in a spiritual practice — or trying to start one. The best thing a product team can do is figure out where the friction lives and get it out of the way.

I don’t have this figured out. The Day 7 cliff still exists. The gap between Bible search and Bible study is still wide. The question of whether a 30-second verse lookup counts as “discipleship” — I genuinely don’t know. But I think the question itself is worth sitting with, because how you answer it shapes everything you build.


Dr. Josh Read is Director of Product at HarperCollins Christian Publishing, where he leads Bible Gateway. He writes about the product side of digital discipleship at drjoshuaread.com. His other writing explores AI stewardship in ministry and what the Tower of Babel teaches us about technology.


1 Southern Baptist Convention, 2023 Annual Church Profile, reporting 12.4 million average weekly worship attendance across 47,000+ churches.

2 BJ Fogg, Tiny Habits: The Small Changes That Change Everything (Houghton Mifflin Harcourt, 2019). Fogg’s research at Stanford’s Behavior Design Lab demonstrates that starting small and building on success is more effective than ambitious commitment structures.

3 Phillippa Lally et al., “How Are Habits Formed: Modelling Habit Formation in the Real World,” European Journal of Social Psychology 40, no. 6 (2010): 998-1009.

The Tower of Babel Was a Technology Problem, Not a Language Problem

Most pastors I’ve talked to use the Tower of Babel the same way. It’s a warning against ambition. Don’t reach too high. Stay in your lane.

That reading has legs. But I’ve spent the last several years building products for churches — first at SermonCentral, where we managed over 245,000 sermon manuscripts for 14,700+ subscribers, and now at Bible Gateway, which serves 23 million monthly visitors across 200+ Bible translations. When I read Genesis 11 through a product lens, I see something the ambition reading misses.

God didn’t judge the bricks.

“Come, let us build ourselves a city, with a tower that reaches to the heavens, so that we may make a name for ourselves.” — Genesis 11:4, NIV

The materials were fine. The engineering was fine. The goal — consolidating human fame — was the problem. And that distinction matters right now, because the church is having the wrong argument about AI.

AI Is Bricks and Mortar

The debate I keep hearing splits along predictable lines. One camp says AI threatens authentic ministry. The other says it’s the future of outreach. Both are fixated on the tool and ignoring the purpose behind it.

AI is a building material. Your spam filter runs on it. Your search results are shaped by it. Your congregation interacts with machine learning dozens of times a day without a second thought. The question of whether the church uses AI was settled years ago.

The question that matters: what are you building, and for whom?

A church that uses AI to transcribe sermons so a deaf congregant can read along on Monday morning — that’s building for the Kingdom. A church that uses AI-generated sermons so the pastor can spend less time in the text — that’s a tower with its own name on it.

Same bricks. The blueprint is what changed.

Augustine’s Framework (From 397 AD)

About 1,600 years before anyone worried about ChatGPT, Augustine drew a line I think about constantly in product work.

In De Doctrina Christiana (Book I, chapters 3-4), Augustine distinguished between two postures toward the things of this world: uti (to use) and frui (to enjoy as an end in itself). His argument: the things of creation are meant to be used as means toward loving God and neighbor. They become disordered when we treat them as destinations — when we frui the tool instead of the purpose the tool serves.

I’ve found this more useful than any AI ethics whitepaper.

Consider: a church uses AI to automate its weekly bulletin, freeing up a volunteer to spend those 3 hours visiting a homebound member. That’s uti. The tool serves a human end.

Now consider: a church uses AI to eliminate pastoral presence altogether. Their new chatbot handles prayer requests, the algorithm personalizes a sermon playlist, the system runs without a shepherd. That’s frui. The church has started delighting in efficiency as its own reward.

The technology didn’t change. The orientation did.

Three Questions Before Adopting Any AI Tool

I’ve spent enough time in product leadership to know that the best safeguard isn’t a policy document (I’ve written plenty of those — they collect dust). It’s a habit of asking the right questions before you build.

1. Who benefits?

If the honest answer is “the budget” and not “the congregation,” pause. Cost savings aren’t wrong — stewardship matters. But if the primary beneficiary is the institution rather than the people it serves, you’re building in the wrong direction. The best AI implementations I’ve seen at Bible Gateway started with a specific human need, not a line item.

2. What human activity does this replace, and should that activity stay human?

Administrative tasks — scheduling, data entry, email sorting, transcript formatting — automate freely. These are good uses of AI. They free up people for work that only people can do.

But pastoral care, spiritual formation, the ministry of presence — these resist automation for a reason. A hospital visit from a pastor matters because a person chose to show up. An AI can generate a thoughtful prayer. It cannot bear witness to suffering.

(This is the question I find hardest to answer cleanly, by the way. The line between “administrative” and “pastoral” blurs more than we’d like. Where does sermon research end and sermon preparation begin? I don’t have a tidy answer. I think the honest move is to keep asking.)

3. Does this build the church’s capacity or create dependency on a vendor?

This is the product leader in me talking. I’ve watched organizations — churches included — adopt tools that felt like empowerment but functioned as dependency. If your church can’t operate without a specific AI platform, you haven’t adopted a tool. You’ve adopted a landlord.

Look for AI that trains your people. Look for solutions where the value stays with the church if the vendor disappears tomorrow.

From Babel to Pentecost

The Bible doesn’t end the language story at Babel. It picks it back up in Acts 2.

“All of them were filled with the Holy Spirit and began to speak in other tongues as the Spirit enabled them. Now there were staying in Jerusalem God-fearing Jews from every nation under heaven. When they heard this sound, a crowd came together in bewilderment, because each one heard their own language being spoken.” — Acts 2:4-6, NIV

At Babel, human technology consolidated power and built a monument to self. God scattered and confused. At Pentecost, the Spirit moved — and people from every nation heard the gospel in their own mother tongue. Each person’s language, met where they were.

According to recent Barna research, 77% of pastors believe AI can have a positive impact. I think that’s right — but only if we’re asking the Babel question each time we adopt something new.

Here’s what that looks like in practice: a small church in rural Guatemala using AI translation to access theological training that was previously locked behind an English-language paywall. That points toward Pentecost.

A megachurch using AI to scale content production so it can dominate more digital market share. That points back toward Babel.

What We Build Next

I don’t think the church needs to fear AI. I also don’t think it needs to be infatuated with it (and having built products in this space since 2018, I’ve watched both reactions play out in real time).

The bricks and mortar are here. They’re powerful. They’re going to keep getting more powerful. The church’s job is to ask the Babel question every time: what are we building, and whose name is on it?

That question doesn’t have a permanent answer. It has to be asked again with every new tool, every new capability, every new vendor pitch. And I think the churches that will get this right are the ones willing to sit with the discomfort of asking it honestly — even when the answer means building slower.


Sermon Illustration: The Tower of Babel and AI

When the people of Babel built their tower, God didn’t judge the bricks. He didn’t condemn the mortar or the engineering. The materials were fine. The problem was the purpose: “let us make a name for ourselves” (Genesis 11:4, NIV).

Today, AI is the new brick and mortar. Churches face the same question Babel faced: what are we building, and for whom? AI that frees a pastor to sit at a hospital bedside — that’s technology in service of presence. AI that replaces the pastor at the bedside — that’s a tower with our own name on it.

But the story doesn’t end at Babel. At Pentecost, God took language itself — the very thing He confused at Babel — and used it to carry the gospel across every barrier (Acts 2:4-6). The bricks are in our hands. The blueprint is the question.

Karpathy’s Autoresearch and the Parable of the Talents: What AI Stewardship Looks Like in Practice

A few weeks ago, Andrej Karpathy — former AI director at Tesla, co-founder of OpenAI — released a project that made me think about ministry.

I didn’t expect that either.

Karpathy built a framework called autoresearch. It runs autonomous ML experiments on a single GPU while the researcher sleeps. The AI agent modifies training code, runs a 5-minute experiment, evaluates the result, keeps improvements, discards failures, and loops. About 12 experiments per hour. Roughly 100 overnight. He woke up to measurable performance gains — with zero human intervention during the run.
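The loop he describes is simple enough to sketch. Here is a toy version of the keep-if-better pattern in Python. To be clear, this is not Karpathy’s actual autoresearch code; `run_experiment` is a hypothetical stand-in for a real five-minute training run.

```python
import random

def run_experiment(params):
    """Hypothetical stand-in for a 5-minute training run.
    Returns a score; real autoresearch trains and evaluates a model."""
    return sum(params.values()) + random.random() * 0.1

def overnight_loop(base_params, n_experiments=100):
    """Mutate, evaluate, keep improvements, discard failures, repeat."""
    best_params = dict(base_params)
    best_score = run_experiment(best_params)
    for _ in range(n_experiments):
        candidate = dict(best_params)
        key = random.choice(list(candidate))
        candidate[key] *= random.uniform(0.9, 1.1)  # small tweak
        score = run_experiment(candidate)
        if score > best_score:  # keep the improvement
            best_params, best_score = candidate, score
        # a failed candidate is simply discarded; the loop continues
    return best_params, best_score
```

At roughly 12 experiments an hour, `n_experiments=100` is an overnight run; the human reviews `best_params` in the morning.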

The part that got me: Karpathy doesn’t write the training code anymore. He writes a Markdown file — plain English instructions — that tells the AI what to research, what constraints to follow, and when to stop. His words: “you are programming the `program.md` Markdown files that provide context to the AI agents.” He calls this “programming in Markdown.”

The human moved up one level of abstraction. Define the methodology, set the guardrails, let the system execute. Not less involved — involved differently, at the level of direction instead of mechanics.

39,800 GitHub stars in the first two weeks. The tech world noticed.

I think the church should too.

The Parable We Keep Skimming

In Matthew 25:14-30 (ESV), Jesus tells the story of a master who entrusts his servants with talents — significant sums of money — before leaving on a journey. One receives five talents, another two, another one. The first two invest and double their resources. The third buries his in the ground.

When the master returns, the investors are praised: “Well done, good and faithful servant. You have been faithful over a little; I will set you over much” (Matthew 25:21, ESV). The one who buried his talent gets rebuked. Not for losing money — he hadn’t lost anything. He was rebuked for doing nothing with what he’d been given.

We tend to read this as a general principle about using your gifts. It is that. But I think there’s something more pointed here for 2026.

AI is a talent in the Matthew 25 sense. It’s a resource placed in front of this generation, and we have a choice. Invest it toward the mission, or bury it because the risk feels too high.

What This Looks Like at My Desk

I want to be specific, because the abstract conversation about “AI and the church” doesn’t move anyone forward.

I’m Director of Product at HarperCollins Christian Publishing, where I lead Bible Gateway — a platform serving over 75 million monthly visitors engaging with Scripture. Before this role, I led product for SermonCentral, which grew to 14,700+ paying subscribers with access to more than 145,000 sermon manuscripts.

Over the past year, I’ve built a system of 18 AI agents that handle competitive analysis, research synthesis, meeting intelligence, content drafting, and task management. Several run overnight — not unlike Karpathy’s loop. The architecture is different (mine orchestrate across business functions, his optimizes a neural network), but the pattern is identical: define methodology, set constraints, let the system execute, review results in the morning.
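Stripped to its bones, that pattern can be sketched as a tiny job planner. Everything below is illustrative; my real agents run on different tooling, and the names are invented.

```python
from dataclasses import dataclass

@dataclass
class AgentJob:
    name: str
    instructions: str          # the methodology, in plain English
    allowed_tools: tuple = ()  # guardrails: what the agent may touch
    max_minutes: int = 30      # constraint: hard time budget per job

def plan_overnight(jobs, budget_minutes=480):
    """Fit jobs into the overnight window; anything over budget waits."""
    scheduled, used = [], 0
    for job in jobs:
        if used + job.max_minutes <= budget_minutes:
            scheduled.append(job)
            used += job.max_minutes
    return scheduled
```

The point of the sketch is the division of labor: the human writes `instructions` and sets `allowed_tools` and `max_minutes`; the system does the executing.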

Every hour I used to spend pulling competitor data or formatting reports is now an hour I spend thinking about how 75 million people experience Scripture online. Or how to make Bible Gateway better for the person opening it at 2 AM because they can’t sleep and need something solid to hold onto.

Karpathy programs research methodology in Markdown now instead of writing Python. I program strategic priorities and agent instructions instead of pulling spreadsheets. The abstraction layer moved up. The work got more human, not less.

The Fear Is Understandable — and Partly Right

I hear the concerns from church leaders, and I take them seriously.

AI will replace authentic ministry. AI will make pastors lazy. AI will simulate relational presence that only a human body in a room can provide. These aren’t irrational. Some are already happening in small ways.

If a pastor uses AI to generate a sermon they never wrestle with, that’s a problem. If a church deploys a chatbot as a substitute for pastoral counseling, that’s a problem. If we treat AI-generated prayers as equivalent to the honest, stumbling prayers of a person before God — we’ve lost something that matters more than efficiency.

But Karpathy’s work shows the other path. The tool doesn’t replace the human. It moves the human to where they’re most needed.

The pastor doesn’t stop preaching — they stop spending 4 hours hunting for the right illustration and spend that time with the family walking through a divorce. The administrator doesn’t stop managing — they stop updating attendance spreadsheets and spend that time training volunteers. The ministry leader doesn’t stop leading — they stop drowning in email and spend that time on the phone with a donor questioning their faith.

I’ve lived this tradeoff. When my agents took over competitive analysis (something that used to eat 3-4 hours a week), I didn’t fill that time with more busywork. I spent it in 1-on-1s with my team and in deeper product strategy. The output quality went up because I was operating at the right level of abstraction.

Where the Line Is (and Where I’m Still Figuring It Out)

I want to be honest — I don’t think anyone has this mapped perfectly yet. I certainly don’t.

Here’s where I’d draw it today:

AI should handle the administrative. Scheduling, data analysis, report generation, email triage, content formatting. These consume enormous amounts of ministry time, and they don’t require pastoral presence. Automate them aggressively.

AI should accelerate the research. Sermon prep research, theological cross-referencing, community demographic analysis. These benefit from AI’s speed and scope. The pastor still does the synthesis — the “what does this mean for my people on Sunday” work. But raw material gathering? Let the machine run overnight, like Karpathy’s experiments.

AI should never simulate the relational. It should not write your prayers. It should not be the voice your congregation hears when they need a shepherd. It should not replace the hospital visit, the awkward conversation in the parking lot, the moment after the service where someone says what they’ve been carrying for months.

The servant in Matthew 25 who was praised put the resource to work — but in service of the master’s purpose, not his own convenience (Matthew 25:20-23, ESV).

Here’s the tension I haven’t resolved: where does “accelerating research” end and “simulating thinking” begin? When an AI summarizes 30 commentaries on a passage, is the pastor still doing exegesis, or are they just picking from a menu? I don’t have a clean answer. I think it depends on whether the pastor is engaging the summaries critically or just grabbing the first one that sounds good. But that’s a discipline question, not a technology question — and discipline questions are harder to solve with guardrails.

If You’re a Church Leader Starting from Zero

You don’t need 18 agents. You need one tool that saves you 3 hours a week.

Pick the task that eats the most time with the least relational value. For most pastors I’ve talked to, it’s sermon illustration research, email management, or meeting notes. Start there. Learn one tool well. Measure the hours you get back.

Then — and this is the part most people skip — reinvest that time in something only a human can do. A visit. A phone call. An hour of prayer you’ve been meaning to protect but kept losing to administrative drift.

Set your guardrails before you need them. Write down what AI will not do in your ministry context. Revisit it quarterly. Technology expands into unintended spaces when boundaries aren’t explicit — I’ve watched this happen in product development for 15 years.

The Talent in Front of Us

Karpathy’s autoresearch is an engineering achievement. But the deeper pattern is almost theological: the human was never meant to stay at the level of mechanical execution. We’re built to operate at the level of purpose, direction, and relationship. Scripture gives humanity dominion and stewardship, a mandate to cultivate, not just maintain (Genesis 1:28, ESV).

The master in the parable didn’t give talents so the servants could admire them or lock them away. He gave them to be invested — put to work — in ways that generated return.

For those of us building technology that serves the church, the return isn’t financial. It’s pastors freed from busywork to do the work they were called to. It’s 75 million monthly visitors encountering Scripture through a platform that keeps getting better because the product team has time to think. It’s churches stewarding every tool available — including AI — in service of the mission they’ve been given.

The talent is in front of us. What we do with it is a stewardship question.


Josh Read is Director of Product at HarperCollins Christian Publishing (Bible Gateway) and holds a doctorate in Strategic Organizational Leadership. He writes about AI, product leadership, and digital discipleship at drjoshuaread.com.

7 Things I Read This Week (and Why They Matter)

This was one of those weeks where everything I read seemed to converge on the same theme: the ground is shifting faster than most of us realize. AI isn’t coming for our workflows someday – it’s already reshaping how products get discovered, how code gets written, and whether your product-market fit survives the next 12 months.

Here’s what caught my attention.

1. Product Market Fit Collapse: Why Your Company Could Be Next

Reforge Blog

If you’re in SaaS, this is the chart that should scare you. Reforge makes the case that PMF isn’t a destination – it’s a treadmill. And AI just cranked the speed to max. Chegg lost 87.5% of its valuation. Stack Overflow’s traffic cratered. The pattern is the same: AI proves value for a use case, and the incumbent’s window to adapt slams shut before they even recognize the threat.

This one hit me personally. SermonCentral has been the go-to sermon library for over two decades. But the question I keep coming back to is: what happens when pastors can generate sermon outlines with AI in seconds? The PMF threshold doesn’t care about your legacy. It only cares about whether you’re still the best answer to the customer’s problem right now.

2. What AI Sees When It Visits Your Website (And How To Fix It)

Google Share

This reframed how I think about our SEO strategy entirely. AI answer engines – ChatGPT, Google AI Overviews, Perplexity – are visiting your site, interpreting your content, and shaping customer perception before a human ever clicks. Traditional SEO isn’t enough anymore. You need AEO – AI Engine Optimization.

For SermonCentral, this is urgent. We live and die by organic discovery. If AI systems can’t parse our content well, we lose visibility in the exact channels that are replacing traditional search. I’m bringing this to the team this week.

3. Claude Code Remote Control

Claude Code Docs

This is the kind of workflow upgrade that sounds small but changes everything. Claude Code now lets you continue local dev sessions from your phone, tablet, or any browser. Your full local environment stays intact – filesystem, MCP servers, all of it. Sessions reconnect automatically after network drops or laptop sleep.

I’ve been using Claude Code as my daily driver for months now. Being able to kick off a task at my desk and check progress from my phone during a walk? That’s the kind of automation leverage I’m optimizing for in 2026.

4. Claude Code for Web – Async Coding Agent

Simon Willison

Anthropic launched an async coding agent at claude.ai/code. Point it at a GitHub repo, give it a task, and it creates branches and PRs with the work output. It runs in a container, skips permission gates, and the PRs are indistinguishable from CLI-generated ones.

The coding agent space is getting crowded fast – OpenAI Codex Cloud, Google Jules, now this. What I appreciate about this one is the “teleport” feature that lets you copy the transcript and files to your local CLI. It’s not replacing the local workflow, it’s extending it. That’s the right design philosophy.

5. How to Build a PM GitHub That Gets You Hired

Aakash’s Newsletter

Only 24% of PM candidates have GitHub profiles. That stat alone should tell you something. Hiring managers at Google, OpenAI, Anthropic, and Meta actively check GitHub when it’s linked. A strong profile signals you actually build things and understand engineer workflows – not just strategize from a slide deck.

I’ve been saying this for a while: the best PMs ship. They don’t just write specs. If you’re a PM reading this and you don’t have a GitHub presence, this is your sign. Start small. Ship something. The differentiation is massive because almost nobody does it.

6. Visual Explainer – Agent Skill for Rich HTML Output

GitHub

This is a neat agent skill that converts complex terminal output into styled, interactive HTML pages. Think: architecture diagrams, code diff reviews, project plan audits, data tables – all rendered as shareable HTML without manual formatting.

I’m always looking for ways to make technical work more visible to non-technical stakeholders. Being able to generate a polished visual recap of a sprint or a system change and just send the HTML? That’s a communication multiplier.

7. Anthropic Courses on Skilljar

Anthropic Courses

Anthropic now has 14+ structured courses covering Claude API, Model Context Protocol, and AI fluency for developers, educators, students, and nonprofits. This tells me they’re investing heavily in ecosystem education – and that MCP is becoming a first-class skill.

I’ve been building MCP integrations into my daily workflow for months. Seeing Anthropic formalize the training around it validates the bet. If you’re building on Claude and haven’t gone through these, it’s worth the time.

The Thread That Ties It All Together

Every link this week points to the same reality: the cost of standing still just went up. PMF is collapsing faster. AI is reshaping discovery. Coding agents are shipping real code. The PMs who build things are getting hired. The tools are getting better every week.

The question isn’t whether to adapt. It’s whether you’re adapting fast enough.

I aim to be on the right side of that question. Hopefully some of these links help you get there too.

AI Just Walked Into Your Website Without Knocking

Last month I asked ChatGPT a question I’ve asked Google a thousand times: “What’s a good sermon illustration about forgiveness?”

It gave me a solid answer. Three illustrations, structured with context, application points, even a suggested closing line. It was genuinely useful.

And it never sent me to a single website.

That moment hit me differently than it would have two years ago. I run a platform with over 245,000 sermons and 50,000 illustrations. I didn’t just lose a click. I watched an AI system do what our product does, using content that likely came from sites like ours, and deliver it in a way that made visiting the source unnecessary.

That’s a revenue problem. (I wrote about the traffic implications of this shift recently.)

The Zero-Click Layer

Most product leaders I know are still thinking about AI as a feature to bolt onto their product: chatbots, smart search, AI-generated recommendations. And that matters. But there’s a bigger shift happening underneath that conversation.

AI answer engines (ChatGPT, Google AI Overviews, Perplexity) are becoming the front door to the internet. They don’t just search. They visit your site, interpret your content, synthesize it, and serve it directly to the user. The user gets the answer. You get nothing.

Google’s featured snippets started this zero-click trend years ago. But what’s different now is the depth. A featured snippet pulls a paragraph. An AI answer engine can synthesize an entire page, or multiple pages, into a comprehensive response that genuinely satisfies the user’s intent.

If your business depends on organic traffic as a top-of-funnel engine, this should keep you up at night.

Your Content Library Is Both Your Greatest Asset and Your Biggest Vulnerability

Here’s the paradox I’ve been sitting with.

We spent years building one of the largest structured content libraries in our space. That library is what drives our organic traffic. It’s what Google indexes. It’s what pastors find when they search “sermon on grace” at 11pm on a Saturday night.

That same library is now what AI systems are ingesting to train their models and generate their answers. The very content that built our moat is being used to fill in the moat.

And here’s what makes it worse. The emerging AI-native competitors in our space don’t even need to win Google rankings. They are the AI tool. They’re built to live inside AI workflows, not compete for traditional search clicks.

I think this pattern applies to any SaaS company sitting on a large content asset. If you’ve built your growth engine on content that AI can summarize, you’re exposed.

AEO: A Genuinely Different Discipline

There’s a term gaining traction: AEO, or AI Engine Optimization. And I’ll be honest, my first reaction was skepticism. We don’t need another three-letter acronym.

But the more I’ve dug into it, the more I realize it represents a genuinely different discipline.

SEO optimizes for ranking. AEO optimizes for citation. The goal is to be the source that AI systems reference AND link back to. That requires a fundamentally different content strategy.

Here’s what that looks like in practice:

  1. Structured data becomes non-negotiable. Schema markup, clear metadata, explicit problem-solution framing in your content. AI systems parse structure, not vibes. (Schema.org is the starting point.)
  2. Content architecture matters more than keyword density. How your content is organized (headers, relationships between pages, internal linking) determines how AI systems understand your authority on a topic.
  3. Gated content is a double-edged sword. If your best content is behind a login wall, AI crawlers can’t index it. You’re invisible to the answer engine. But if everything is open, you get summarized without a click. The play is in the middle: structured preview content that AI can cite, with depth that requires the visit.
  4. Domain-specific language is your moat. Generic content gets synthesized away. Content that uses the precise language of your audience (the way a pastor describes their Saturday night prep struggle, the specific vocabulary of sermon structure) is harder for AI to replace and more likely to be cited with attribution.
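To make item 1 concrete, here is a minimal sketch of emitting Schema.org JSON-LD for an article page from Python. The field values are placeholders, and a production page would carry more properties (datePublished, publisher, and so on).

```python
import json

def article_jsonld(headline, author, url, description):
    """Build a minimal Schema.org Article object as JSON-LD."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "url": url,
        "description": description,
    }

# Render it the way it would appear in a page <head>
snippet = (
    '<script type="application/ld+json">'
    + json.dumps(article_jsonld(
        "Sermon on Grace", "Jane Doe",
        "https://example.com/sermons/grace",
        "A three-point sermon on grace with application questions."))
    + "</script>"
)
```

This is the kind of explicit structure AI crawlers can parse; prose alone leaves them guessing.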

What I’m Doing About It

I’m not going to pretend I have this figured out. But here’s where my head is:

Audit how AI sees us. Before optimizing anything, we need to understand how our top pages render to AI crawlers. What structured data exists? What’s behind login walls that blocks indexing?
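A first pass at that audit can be automated. The sketch below pulls `@type` values out of JSON-LD blocks in raw HTML; it is illustrative only, and a real audit would also fetch pages with AI-crawler user agents (such as GPTBot) and check robots.txt rules.

```python
import json
import re

def extract_schema_types(html):
    """List the Schema.org @type values declared in a page's JSON-LD."""
    types = []
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    for match in re.findall(pattern, html, flags=re.DOTALL):
        try:
            data = json.loads(match)
        except json.JSONDecodeError:
            continue  # malformed markup is itself an audit finding
        if isinstance(data, dict) and data.get("@type"):
            types.append(data["@type"])
    return types

sample = ('<html><head><script type="application/ld+json">'
          '{"@context": "https://schema.org", "@type": "Article"}'
          '</script></head><body></body></html>')
```

A page that returns an empty list here is telling the answer engines nothing about itself.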

Treat AI referral as a distinct channel. We track direct traffic, organic search, paid. AI referral needs its own lane in our analytics. We can’t optimize what we can’t measure.
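Carving out that lane can start with a simple referrer classifier. The host list below is a hypothetical starting point, not a definitive registry; it should be checked against your own observed referrer data.

```python
from urllib.parse import urlparse

# Hypothetical referrer hosts for AI answer engines; verify against
# the referrers you actually see in your analytics.
AI_REFERRER_HOSTS = {
    "chat.openai.com", "chatgpt.com", "perplexity.ai",
    "www.perplexity.ai", "gemini.google.com", "copilot.microsoft.com",
}

def classify_channel(referrer):
    """Bucket a referrer URL into a traffic channel."""
    if not referrer:
        return "direct"
    host = urlparse(referrer).netloc.lower()
    if host in AI_REFERRER_HOSTS:
        return "ai_referral"
    if "google." in host or "bing.com" in host:
        return "organic_search"
    return "other"
```

Once AI referral is its own bucket, you can watch whether the channel is growing while organic search flattens, which is the trend this whole piece is about.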

Build content AI can’t summarize away. The full sermon text? AI can handle that. But a pastor’s framework for adapting a sermon to their specific congregation? A diagnostic tool for matching an illustration to a particular emotional moment in a service? That’s interactive, personalized, and requires being on the platform.

Move faster than the AI-native competitors. They have the structural advantage of being built for AI workflows. We have the structural advantage of 20+ years of trusted content and relationships. The question is whether we can adapt our distribution before they build our depth.

The Strategies That Got You Here Won’t Sustain You

I keep coming back to this. The strategies that built organic growth over the last decade won’t sustain it over the next five years.

That’s a reason to move, not a reason to panic.

The companies that treat AI answer engines as a new channel will capture disproportionate share of the next era of discovery. The ones that keep optimizing for Google page one while AI summarizes their content into zero-click answers will watch their traffic erode and wonder what happened.

I’d rather be early and wrong about the tactics than late and right about the trend.

The AI just walked into your website. The question is whether it’s going to send people your way, or make visiting you unnecessary.

Revolutionizing Product Management: Insights from Industry Leaders and Emerging Trends

In the fast-paced world of software as a service (SaaS), product management stands at the intersection of technology, strategy, and user experience. Recent insights from industry leaders and emerging trends highlight how product teams are navigating new challenges and opportunities, especially with the integration of AI and advanced database infrastructures. This article explores key learnings from top product management blogs, offering a comprehensive guide for professionals looking to enhance their strategies and operations.

The Evolution of Database Infrastructure

The shift in database technology is pivotal for SaaS companies aiming for scalability and performance. Intercom’s journey with Vitess and PlanetScale, as discussed in “Evolving Intercom’s database infrastructure: Lessons and progress,” showcases:

  • Scalability: How adopting PlanetScale Metal has allowed for zero-downtime maintenance and performance improvements.
  • Performance: Insights into how new database technologies can handle increased load without compromising speed.
  • Lessons Learned: The challenges and triumphs of integrating new systems into existing architectures, offering a roadmap for similar transitions.

Designing for User Clarity

Product design isn’t just about aesthetics; it’s about clarity and usability. Pranava Tandra from Intercom shares in “Intercom on Product: Designing for Clarity”:

  • Balancing Simplicity with Depth: Strategies for redesigning information architecture to make complex features discoverable yet not overwhelming.
  • AI Integration: How AI can be seamlessly integrated to enhance user interaction without disrupting the existing user flow.

Learning from Product Conferences

Conferences like #mtpcon London provide a wealth of knowledge:

  • Key Takeaways: Insights from product leaders on current trends, including the integration of AI in product management, as seen in “What we learned at #mtpcon London 2025.”
  • Networking and Collaboration: The value of community and peer learning in advancing product strategies.

Leveraging AI in Product Management

AI’s role in product management has grown exponentially:

  • Enterprise AI Agents: The article “How to build an Enterprise AI Agent” discusses how AI can manage and utilize organizational knowledge, reducing productivity drain.
  • Analytics Superpowers: AI’s ability to simplify SQL queries and data analysis, as highlighted in “Are you struggling with SQL? AI can give you analytics superpowers.”

Strategic User Engagement and Retention

Driving user adoption and retention is crucial:

  • Onboarding Gamification: Utilizing gamification techniques to engage users during onboarding, as explored in “11 Onboarding Gamification Examples to Engage & Retain Users.”
  • Feature Adoption: Tactics to ensure new features are adopted, enhancing user experience and product value, detailed in “How to Drive Feature Adoption: 10 Proven Strategies (+ Examples).”

Customer-Centric Approaches

Understanding and leveraging customer feedback:

  • Feedback Tools: A review of the best tools for collecting and analyzing customer feedback in “16 Best Customer Feedback Tools For SaaS Companies.”
  • UX Metrics: Key metrics to track for measuring user experience, as discussed in “How to Measure User Experience: 7 Key UX Metrics.”

Conclusion

The landscape of product management is continuously evolving, driven by technological advances and the strategic insights industry leaders share. From redesigning for clarity and leveraging AI for deeper analytics to enhancing customer engagement through innovative onboarding and feedback mechanisms, product managers have a wealth of tools and knowledge at their disposal. By staying informed and adaptable, product teams can not only meet but exceed market demands, keeping their products competitive and user-centric.