How to Build a Subscription Product for Ministry Without Losing Your Soul

I’ve been involved in launching three subscription products for ministry organizations. These platforms have grown from serving thousands, to tens of thousands, to now millions of paying subscribers across multiple Bible translations, and they generate significant monthly recurring revenue.

Here’s what I learned: the hardest part isn’t building the paywall. It’s deciding what belongs behind it.

Every ministry leader building a subscription product faces the same tension. Your mission says “go into all the world” (Mark 16:15). Your business model says “pay to access the good stuff.” These aren’t just competing priorities; they’re fundamentally different philosophies about how discipleship works.

I’ve been on both sides of this equation. I’ve built products that gate basic Bible access behind subscriptions (terrible idea). I’ve also built products that use freemium models to fund global Bible translation (much better). The difference isn’t just revenue; it’s whether your monetization strategy serves your discipleship strategy or undermines it.

Three Models, Three Different Answers

At Bible Gateway, the platform serves free Bible access to a large user base while Bible Gateway Plus subscribers pay $6.99/month for power-user tools such as reading plans, verse comparison, and offline access. The core content stays free. The professional ministry tools require subscription.

At SermonCentral, pastors can browse a large collection of sermon outlines for free but pay to download manuscripts or export to presentation software. A portion of free users convert to paid subscriptions because they’re not buying content; they’re buying workflow optimization. The convenience is why they subscribe.

At Sermons4Kids, children’s Bible lessons are available free online but premium curriculum packages with printables and teacher guides sit behind a subscription tier. Churches get the ministry impact for free. Paid subscribers get the operational efficiency.

Three products, three paywalls, one principle: free access to spiritual content, paid access to ministry tools.

The Gap Between Free and Any Price

The hardest conversion in ministry isn’t $3.99 to $14.99. It’s $0 to $3.99.

Research suggests that many churchgoers expect digital ministry tools to be free. This creates what behavioral economists call the “zero price effect” — the psychological barrier where consumers perceive an enormous difference between free and $0.01.

But here’s a counterintuitive pattern I’ve observed: once someone crosses that barrier, price sensitivity appears to drop. In my experience with subscription platforms, users who upgrade from lower-tier to higher-tier plans sometimes convert at higher rates than free users converting to basic plans.

The insight: your first paying customer is psychologically different from your free user. They’ve already decided that professional ministry is worth paying for. Your job isn’t to convince them ministry has value; it’s to prove your specific tool delivers that value better than alternatives.

The 90-Day Rule

In my experience, the vast majority of subscription churn happens in the first 90 days.

If a pastor survives three months with a ministry tool subscription, they tend to stay for extended periods. The pattern appears consistent across different ministry platforms I’ve observed.

This isn’t just a retention metric; I consider it a discipleship insight as well. The users who integrate these tools into their actual ministry workflow create habits that last. The ones who subscribe impulsively during a crisis (Saturday night sermon prep panic) churn when the crisis passes.

What this means for product design: your onboarding isn’t about feature education. It’s about habit formation. Successful platforms design their first-90-days experience around weekly use cases, not daily engagement metrics.
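To make the 90-day claim measurable, here’s a minimal sketch of how a team might compute the metric from its own billing data. The subscription records below are invented for illustration; none of these numbers come from the platforms described above.

```python
from datetime import date

# Hypothetical subscription records: (signup_date, cancel_date or None if
# still active). Real records would come from your billing system.
subs = [
    (date(2024, 1, 5), date(2024, 2, 1)),    # churned at day 27
    (date(2024, 1, 9), date(2024, 3, 20)),   # churned at day 71
    (date(2024, 1, 12), None),               # still active
    (date(2024, 1, 15), date(2024, 7, 2)),   # churned at day 169
    (date(2024, 1, 20), None),               # still active
]

def survived_90_days(signup, cancel, as_of=date(2024, 12, 31)):
    """True if the subscriber was still active 90 days after signup."""
    lifetime = ((cancel or as_of) - signup).days
    return lifetime >= 90

survivors = sum(survived_90_days(s, c) for s, c in subs)
rate = survivors / len(subs)
print(f"90-day survival: {survivors}/{len(subs)} = {rate:.0%}")
```

Tracking this one number by signup cohort shows quickly whether onboarding changes are actually moving the 90-day cliff.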

Annual Beats Monthly (But Not Why You Think)

Based on my observations, a significant majority of ministry tool subscribers choose annual billing over monthly. That’s not just better cash flow; it’s better discipleship outcomes.

Monthly subscribers tend to treat tools as disposable. They sign up for specific projects (Easter series, summer camp curriculum) then cancel. Annual subscribers build the tool into their ministry rhythm. They explore features beyond their immediate need. They recommend it to other pastors.

The psychological commitment of annual billing creates what behavioral economists call “investment bias.” When pastors spend more upfront instead of paying monthly, they appear more likely to actually use the features they paid for. Usage drives value realization. Value realization drives retention.

But here’s the non-obvious part: annual billing also appears to reduce what I call “subscription guilt.” Monthly charges create recurring reminders of cost. Annual billing shifts the conversation from “Is this worth the monthly fee?” to “How can I get more value from the tool I already bought?”

When Monetization Serves Discipleship

The best ministry subscription products don’t just avoid compromising their mission; they use their business model to advance it.

Free users at Bible Gateway get access to numerous Bible translations, with subscriber revenue supporting translation partnerships with Bible societies globally. Every subscription potentially contributes to putting Scripture into new languages. The monetization strategy supports the discipleship strategy.

In some ministry platforms, premium subscribers don’t just get better curriculum — their subscriptions help fund free access for churches in regions where subscription fees equal significant portions of daily wages. Paying customers aren’t just buying convenience. They’re supporting global ministry reach.

This flips the traditional ministry funding model. Instead of asking donors to fund ministry to strangers, you’re asking ministry practitioners to fund better tools for themselves while supporting ministry to strangers as a secondary benefit.

The psychological difference is significant. Donors give out of obligation or generosity. Subscribers pay for value received while creating value for others. One feels like charity. The other feels like partnership.

The Soul Question

Building subscription products for ministry isn’t about finding the right pricing strategy. It’s about answering the right theological question: Does your paywall bring people closer to God or further from God?

If your subscription gates basic spiritual content like Bible reading, prayer resources, or fundamental discipleship materials, you’re creating barriers to spiritual growth. That’s not just bad business (people will find free alternatives). It’s bad stewardship.

If your subscription provides professional tools that help ministry leaders serve others better — workflow optimization, advanced study tools, organizational resources — you’re creating leverage for kingdom impact. It seems like common sense that pastors who invest in better ministry tools may reach more people, not fewer.

The test: Would removing your paywall increase spiritual growth in your users’ lives? If yes, your monetization strategy needs work. Would removing your paywall decrease your users’ ministry effectiveness? If yes, you’ve found the sweet spot.

Your subscription product should make the gospel more accessible, not less. Sometimes that means charging nothing for content. Sometimes it means charging appropriately for tools. The soul question isn’t whether to charge — it’s what to charge for and why.

Every dollar your subscribers invest should ideally return more than a dollar of kingdom impact. That’s not just sustainable business. That’s biblical stewardship.


The Best AI Tools for Pastors in 2026 (From Someone Who Builds Them)

I spent 18 months building AI-adjacent features at SermonCentral. Our tools helped pastors research, prepare, and teach. During that time, I evaluated several AI platforms targeting ministry, including tools from major players like Logos and various smaller platforms. I currently lead product for a Bible-focused platform, which gives me ongoing insight into how pastors use digital tools.

So when pastors ask me about AI tools, I’m sharing what I’ve observed from both building and using these platforms in ministry contexts.

Here’s what I’ve learned: the most effective AI tools for pastors aren’t necessarily the ones with the most features. They’re the ones that understand where AI helps and where it doesn’t.

AI is moving at a rapid pace. Moore’s law tracked transistor density, and I remember that back in 2011 the amount of knowledge stored digitally was said to be doubling every 11 minutes. I can’t even imagine what it’s at now. AI is advancing so quickly that anything I’ve written here will probably be outdated before I hit publish.

Sermon Research: Emerging AI Options

SermonAI appears to be gaining attention

SermonAI positions itself as an alternative to expensive comprehensive software packages. Based on my testing, it focuses on research assistance rather than content generation.

What it appears designed for: Cross-reference generation, outline structures, and illustration suggestions. The tool seems aimed at the research phase and helping pastors find connections between passages.

The platform costs $29 monthly.

What it doesn’t claim to do: Generate complete sermons. The positioning emphasizes research assistance rather than finished content creation.

Logos has added AI features

Logos has integrated conversational AI into their existing commentary and resource library. The advantage: it can search across resources in your existing library. The consideration: it requires an existing Logos investment.

I’ve tested both SermonAI and Logos’ AI features. Each has different strengths depending on your existing workflow and resource library.

Bible Gateway’s approach

Full disclosure: I work for Bible Gateway’s parent company. Our AI features will focus on reading comprehension for individual Bible study rather than sermon preparation, helping readers understand difficult passages rather than preparing teaching content.

Bible Study Tools: Mixed AI Integration

YouVersion Bible App

The YouVersion app has experimented with various features over time. For current AI capabilities and pricing, pastors should check directly with YouVersion rather than rely on third-party reports.

Traditional resources remain valuable

After working on AI features for ministry applications, I still observe pastors using physical commentaries and concordances for deep study. AI appears most helpful for broad research and initial connection-finding, while sustained study often benefits from traditional approaches.

Church Management: Limited AI Integration

Planning Center and similar platforms

Various church management platforms are experimenting with AI features. For specific capabilities and availability, pastors should verify directly with vendors rather than assume features exist.

ChurchTrac and scheduling optimization

Some platforms use algorithmic optimization for volunteer scheduling based on availability patterns. This represents a more straightforward application of automation technology to logistical problems.

For current features and pricing, check directly with platform providers.

Content Creation: Variable Results

Canva’s design assistance

Canva has integrated AI image generation and text suggestions. For church communications, these tools can help with graphics creation, though results vary based on specific needs.

The AI appears to handle visual design well but may struggle with theological nuance. Complex theological concepts often require human insight for appropriate visual representation.

Presentation tools

Various platforms offer AI assistance for turning outlines into slides. Results tend to be professionally formatted but may lack the contextual understanding needed for specific congregational needs.

Pastoral Perspectives on AI Usage

Based on discussions with ministry leaders, comfort levels with AI appear to vary by application:

  • Administrative tasks: Generally high comfort
  • Research assistance: Moderate to high comfort among those with theological training
  • Content structure help: Mixed comfort, varies by individual
  • Content generation: Generally low comfort due to pastoral responsibility concerns

Comfort levels likely correlate with factors like theological education, church context, and individual technology adoption patterns, though specific data would be needed to verify these relationships.

Recommendations by Context

Smaller ministry contexts:
Consider starting with research-focused tools and basic administrative automation. Budget considerations will vary based on specific tools chosen. Claude CoWork has helped many ministries I know of, and it seems they’ve smoothed out much of the onboarding process.

Larger ministry contexts:
May benefit from more comprehensive platforms, though implementation should account for staff training and congregation expectations.

All contexts:
Verify current features and pricing directly with vendors, as AI capabilities in this space evolve rapidly.

The Practical Assessment

Based on developing AI features for ministry tools: AI appears most effective at research tasks, moderately helpful for organization, and of limited to no use in replacing pastoral judgment.

Successful implementations seem to focus on enhancing research capabilities rather than replacing pastoral decision-making. AI cannot understand congregational needs, pastoral relationships, or the contextual factors that shape ministry decisions.

The most effective approach likely involves using AI where it demonstrates clear value — information processing, research assistance, and administrative efficiency — while maintaining human oversight for theological interpretation and pastoral application.

The future probably isn’t pastors versus AI, but pastors using better research tools while preserving the relational and interpretive aspects of ministry that require human wisdom.

“The simple believe everything, but the prudent give thought to their steps.” (Proverbs 14:15, ESV) This principle applies to evaluating new technology tools as much as any other area of pastoral leadership.


Note: AI capabilities in ministry tools change rapidly. Verify current features and pricing directly with providers before making decisions. This assessment reflects observations from my experience building and testing these tools, not comprehensive market research.


Jensen Huang’s Sovereign AI and the Call to Digital Discipleship: Why Nations Need More Than Computing Power

Jensen Huang’s latest push centers on “sovereign AI” — the idea that nations need their own AI infrastructure, data, and models to maintain digital independence. Speaking at recent conferences, Huang argues that countries must build local AI capabilities rather than depend entirely on foreign systems, combining their unique cultural knowledge with computing power to create AI that serves their specific populations.¹

The concept resonates beyond geopolitics. It’s fundamentally about stewardship — who controls the tools that shape how people access information, make decisions, and understand their world.

The Great Commission Requires Local Infrastructure

When Jesus commissioned his followers to “go and make disciples of all nations” (Matthew 28:19, ESV), he wasn’t envisioning a centralized Jerusalem-based operation. The early church spread through local communities, adapting the gospel message to different cultures while maintaining core theological truth.

Huang’s sovereign AI framework mirrors this pattern. Just as the gospel needed local expression (Paul wrote differently to the Romans than to the Corinthians), digital discipleship requires infrastructure that understands local context.

At Bible Gateway, we see this daily. Our 200+ translations serve 70+ languages precisely because discipleship isn’t one-size-fits-all. A believer in Chennai needs Tamil commentary. A pastor in São Paulo needs Portuguese study tools. A seminary student in Seoul needs Korean cross-references.

But here’s where Huang’s vision gets complicated for faith communities: sovereignty over AI systems means sovereignty over interpretation. When algorithms shape how Scripture gets searched, studied, and understood, the question becomes: who is training those models?

The Stewardship Problem Hidden in Infrastructure

“Whoever is faithful in very little is also faithful in much” (Luke 16:10, ESV). This verse cuts to the heart of why sovereign AI matters for Christian organizations.

Every search ranking, every recommendation algorithm, every content filter represents a micro-decision about what matters most. At Bible Gateway, when someone searches “love,” do we surface 1 Corinthians 13, John 3:16, or Romans 8:38-39 first? The algorithm makes that choice thousands of times daily across millions of users.

Right now, most faith-based platforms depend on external AI systems: Google’s search algorithms, Amazon’s cloud infrastructure, and OpenAI’s language models. We’re essentially outsourcing discipleship decisions to secular systems trained on secular priorities.

That’s not necessarily wrong. But it’s worth examining.

What Sovereign AI Looks Like in Practice

Here’s where Huang’s framework gets practical for faith tech builders. Sovereign AI doesn’t require building everything from scratch; it requires intentional control over the pieces that matter most.

For Bible Gateway, that might mean:

  • Training language models on theological texts, not just Wikipedia
  • Building search algorithms that understand scriptural context, not just keyword matching
  • Creating recommendation engines that prioritize spiritual growth over engagement metrics
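To make the third bullet concrete, here’s a toy sketch of what changing the optimization target could look like: scoring results by a blend of keyword relevance and an editor-curated formation value rather than engagement alone. Every field name, weight, and score is hypothetical; this illustrates the idea, not Bible Gateway’s actual system.

```python
# Hypothetical search results. "relevance" stands in for keyword match
# strength; "formation_value" stands in for a curated editorial signal.
results = [
    {"ref": "John 3:16",           "relevance": 0.90, "formation_value": 0.8},
    {"ref": "1 Corinthians 13",    "relevance": 0.80, "formation_value": 0.9},
    {"ref": "Trending devotional", "relevance": 0.95, "formation_value": 0.3},
]

def rerank(items, formation_weight=0.6):
    """Order results by a weighted blend of relevance and formation value.

    With formation_weight=0 this degrades to pure keyword ranking; raising
    it shifts the ordering toward the curated signal.
    """
    def score(r):
        return ((1 - formation_weight) * r["relevance"]
                + formation_weight * r["formation_value"])
    return sorted(items, key=score, reverse=True)

for r in rerank(results):
    print(r["ref"])
```

Under pure relevance the trending item would win; with the formation weight applied, it drops to last. The interesting product work is in who assigns the curated signal and how, not in the arithmetic.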

I’m not advocating for Christian-only AI systems. The gospel spreads through engagement with the broader world. But I am suggesting we need infrastructure designed with discipleship as a first-class concern.

Consider our reading plan completion rates. When we launched plans optimized by secular engagement algorithms, completion dropped after Day 7 because users got recommendations that prioritized “interesting” content over spiritual discipline. When we rebuilt the system around formation rather than retention, completion improved 23% over six months.

The difference wasn’t the technology. It was the training data and optimization targets.

The Wisdom of Solomon Applied to AI Infrastructure

“The simple believe everything, but the prudent give thought to their steps” (Proverbs 14:15, ESV). Solomon’s wisdom about discernment applies directly to how we build AI systems.

Huang’s sovereign AI concept recognizes that different communities need different approaches to intelligence. A financial AI system optimized for Wall Street trading won’t serve a rural credit union. A healthcare AI trained on urban hospital data won’t understand rural clinic challenges.

Similarly, AI systems trained on secular content patterns won’t naturally understand spiritual formation needs. When Ethan Mollick talks about co-intelligence, he’s describing partnership between humans and AI. But what kind of partnership do we want for discipleship?

At HarperCollins Christian Publishing, we’re starting to answer that question. Not by building competing AI infrastructure (we don’t have Google’s resources), but by curating the training data and fine-tuning the outputs for spiritual formation. This is what GLOO was doing when I worked there.

Building Digital Discipleship Infrastructure

The practical implementation isn’t about creating Christian ChatGPT. It’s about ensuring the tools that shape faith formation are built with discipleship in mind.

Three areas where this matters most:

Search and Discovery: When someone searches “suffering” in our Bible study tools, do they get academic theology or pastoral care? Both have value, but the algorithm’s choice shapes the user’s spiritual journey.

Content Recommendations: Our reading plans serve 23 million annual users.² Every “what to read next” suggestion influences someone’s Bible engagement. Training those systems on spiritual formation research rather than generic engagement metrics changes everything.

Translation and Commentary: As AI-assisted translation tools proliferate, who’s ensuring theological accuracy? When AI comes for sermon prep, pastors need tools trained on sound doctrine, not just persuasive rhetoric.

The Cost of Digital Dependence

“Do not put your trust in princes, in human beings, who cannot save” (Psalm 146:3, ESV). This psalm warns against depending entirely on external powers, whether political or technological.

Huang’s sovereign AI argument recognizes that complete dependence on foreign AI systems creates vulnerability. For nations, that might mean security risks. For faith communities, it might mean theological drift.

I’m not advocating for technological isolationism. The global church benefits from shared tools and resources. But I am suggesting we need more intentionality about where our digital discipleship infrastructure comes from and how it gets trained.

Search algorithms, content curation, and user experience design are the pieces that directly shape spiritual formation and need to be understood and protected. Not because we’re better engineers, but because discipleship is our primary mission.

The Path Forward

Huang’s sovereign AI vision offers a framework, not a blueprint. For Christian product builders, the question isn’t whether to build competing infrastructure; it’s how to maintain faithful stewardship over the tools that shape discipleship.

That might mean:

  • Partnering with AI providers who understand faith-based applications
  • Investing in fine-tuning and training data that reflects theological priorities
  • Building internal capabilities for the functions that most directly impact spiritual formation
  • Creating open-source tools that serve the broader faith community

The Tower of Babel reminds us that technology without wisdom leads to confusion. Huang’s sovereign AI concept, adapted for faith communities, offers a path toward digital discipleship that serves spiritual formation rather than just technological efficiency.

The question for Christian product leaders: Are we building tools that make disciples, or are we just building tools?


¹ Jensen Huang, keynote address at COMPUTEX 2024: “Sovereign AI and the Future of Computing”
² Bible Gateway internal analytics, 2024 annual reading plan enrollment data


John 21:5-6 and the Art of Asking Better Questions: Why AI Prompting Is Like Jesus Teaching His Disciples to Fish

“Then Jesus called out to them, ‘Friends, haven’t you any fish?’ ‘No,’ they answered. He said, ‘Throw your net on the right side of the boat and you will find some.’ When they did, they were unable to haul the net in because of the large number of fish.” (John 21:5-6, NIV)

The disciples had been fishing all night with nothing to show for it. Then Jesus, whom they didn’t immediately recognize, asked one simple question that changed everything. Not “Why aren’t you catching fish?” or “Have you tried different bait?” Just: “Haven’t you any fish?”

That question led to instruction. The instruction led to abundance.

When someone struggles with AI prompting, they’re casting their nets over and over, getting frustrated with empty results, convinced the tool is broken. But like the disciples, they’re often fishing in the wrong spot with the wrong approach.

The art isn’t in the casting; it’s in learning to ask better questions and knowing where to throw the net. Obviously, the disciples knew how to fish, and this story isn’t really about fishing; it’s about obedience and trust. But I’m trying to use a metaphor, and I’m not really that good at them.

The Problem With Most AI Interactions

I see this pattern constantly. Users approach AI tools the same way they approach search engines: throw in some keywords and hope for the best. But AI isn’t Google. It’s more like a really smart intern who needs context, direction, and clear expectations.

The disciples were experienced fishermen. They knew how to cast nets, repair equipment, read weather patterns. Most people struggling with AI aren’t lacking technical skills; they’re lacking the right framing.

Jesus didn’t give them a fishing tutorial. He asked a diagnostic question, then provided specific direction: “Throw your net on the right side of the boat.”

That specificity matters. “Right side” isn’t arbitrary; it’s based on understanding conditions they couldn’t see from their position in the boat. Jesus had a vantage point they didn’t.

The Anatomy of Better Questions

When I work with teams on AI integration for sermon prep, the breakthrough moment isn’t technical. It’s when they stop asking “How do I make AI write my sermon?” and start asking “How do I help AI understand my congregation’s needs?”

The difference:

Fishing in the wrong spot: “Write me a sermon on forgiveness.”

Throwing the net on the right side: “I’m preaching to a congregation that’s 60% over 50, many dealing with family estrangement after the 2020 election divisions. They’re tired of political sermons but need biblical hope for restoration. Help me write a 20-minute sermon on forgiveness that acknowledges real hurt without being preachy, using Matthew 18:21-22 as the primary text, with two personal application points they can act on this week.”

The second prompt gives AI the context it needs to be helpful. Like Jesus with the disciples, it provides specific direction based on understanding the full situation.
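If you write prompts like the second one often, it can help to treat the context as structured fields rather than freehand text, so nothing gets left out under Saturday-night pressure. A minimal sketch; the field names are mine for illustration, not any particular tool’s API:

```python
def build_sermon_prompt(audience, situation, goal, text, constraints):
    """Assemble a context-rich prompt from explicit fields, so every
    prompt carries audience, situation, goal, text, and constraints."""
    return (
        f"Audience: {audience}\n"
        f"Situation: {situation}\n"
        f"Goal: {goal}\n"
        f"Primary text: {text}\n"
        f"Constraints: {constraints}\n"
        "Draft a sermon outline that fits this context."
    )

prompt = build_sermon_prompt(
    audience="congregation that is 60% over 50, many with family estrangement",
    situation="tired of political sermons, needs biblical hope for restoration",
    goal="a 20-minute sermon on forgiveness that acknowledges real hurt",
    text="Matthew 18:21-22",
    constraints="two personal application points actionable this week",
)
print(prompt)
```

The template is trivial on purpose. The value is the discipline it enforces: you can’t call the function without first answering the diagnostic questions.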

Why This Matters for Digital Discipleship

The disciples’ empty nets weren’t just about breakfast. John tells us this story in the context of restoration: Peter’s reinstatement, the commissioning to “feed my sheep,” the establishment of early church leadership. The fishing miracle was functional, but it served a larger discipleship purpose.

AI in ministry works the same way. The technical capability (generating text, analyzing data, creating content) serves the larger mission of discipleship. But like the disciples, we need to learn where to cast the net.

At Bible Gateway, we’re seeing this play out with 23 million monthly users across 200+ translations. The users who get the most value aren’t necessarily the most technically sophisticated — they’re the ones who understand how to frame their spiritual questions in ways that digital tools can support.

A user searching “hope Bible verses” gets generic results. A user searching “Bible verses about hope after job loss, specifically for someone who feels God has abandoned them” gets targeted, actionable content that can actually help with discipleship.

The difference isn’t in the search technology; it’s in learning to ask better questions.

The Jesus Method of AI Prompting

Jesus’s interaction with the disciples gives us a framework for effective AI engagement:

Start with diagnosis. “Haven’t you any fish?” establishes the current state. Before jumping into solutions, AI needs to understand what you’re actually trying to accomplish. Not just the task, but the context around it.

Provide specific direction. “Throw your net on the right side” isn’t vague inspiration. It’s actionable guidance based on understanding the full situation. Good AI prompts are similarly specific about desired output, tone, length, audience, and constraints.

Trust the process. The disciples could have argued about which side of the boat was better. Instead, they followed the instruction. AI works best when you iterate based on results, not when you debate the approach.

Recognize the bigger picture. This wasn’t really about fishing; it was about discipleship. Using AI like this isn’t really about efficiency; it’s about enabling better ministry, better products, better service to people who need what you’re building.

Practical Applications for Ministry and Product

This principle scales across everything I work on. Whether it’s helping pastors with AI sermon preparation or building features for Bible Gateway’s global user base, the pattern holds: better questions lead to better outcomes.

For pastors: Instead of asking AI to “help with Bible study preparation,” try: “I’m teaching a small group of new believers, mostly in their 20s and 30s, about spiritual disciplines. They’re interested but overwhelmed by traditional approaches. Help me design a 4-week study on prayer that feels accessible and practical, with weekly exercises they can actually complete.”

For product teams: Instead of asking AI to “analyze user feedback,” try: “Review these 200 support tickets from the past month. Our mobile app’s Bible reading plans have a 40% completion rate, but we don’t know why people drop off. Identify patterns in user complaints that might indicate specific friction points in the first two weeks of plan usage.”

The difference is specificity informed by context, which is exactly what Jesus provided the disciples.
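For the product-team example, even before handing tickets to a model, a quick pattern scan can show where drop-off language clusters, which sharpens the prompt you eventually write. A toy sketch with invented ticket text and keywords:

```python
from collections import Counter
import re

# Hypothetical support-ticket snippets; real ones would come from a
# helpdesk export. We count how often candidate friction keywords appear.
tickets = [
    "The reading plan reminder never arrived on day 3",
    "Lost my streak when the app logged me out",
    "Plan reminder settings reset after the update",
    "Couldn't find where to resume my plan",
]

keywords = ["reminder", "streak", "resume", "logged"]
counts = Counter(
    kw for t in tickets
    for kw in keywords
    if re.search(kw, t, re.IGNORECASE)
)
print(counts.most_common())
```

If “reminder” dominates, the AI prompt can ask specifically about notification friction in the first two weeks rather than about drop-off in general.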

Why the Right Side of the Boat Matters

The disciples caught so many fish they couldn’t haul the net in. Not because the fish suddenly appeared, but because they were fishing where the fish actually were.

In the wisdom tradition, this is about alignment and understanding how things actually work rather than how we think they should work. AI isn’t magic, but it is powerful when applied with wisdom and clear direction.

The abundance wasn’t in the tool (the net) or even the technique (the casting). It was in the guidance that led them to the right place at the right time with the right approach.

For those of us building digital discipleship tools, this matters enormously. We’re not just solving technical problems; we’re helping people encounter God through technology. The quality of that encounter often depends on learning to ask better questions.


Sermon Illustration

The disciples had been fishing all night with empty nets. They knew how to fish — they were professionals. But when Jesus asked if they had caught anything and told them to throw their net on the right side of the boat, everything changed. Suddenly they caught so many fish they couldn’t pull the net in.

Sometimes our prayers feel like those empty nets. We’re asking God for help, but we’re not seeing results. Maybe the issue isn’t God’s willingness to provide, maybe it’s learning to ask better questions. Instead of “God, help me,” try “God, help me understand what You want me to learn through this situation.” Instead of “God, fix this,” try “God, show me how to respond faithfully right here.” The abundance might not be in getting what we think we want, but in learning to ask for what we actually need. And like the disciples, we might discover that the breakthrough was there all along, we just needed better direction about where to cast our nets.


Ethan Mollick’s Co-Intelligence and the Biblical Call to Wisdom: Why AI Partnership Requires More Than Technical Skill

Ethan Mollick, co-director of Wharton’s Mack Institute for Innovation Management, has spent the last two years making a compelling case that we’re entering an era of “co-intelligence” — where humans and AI work together as cognitive partners rather than in a traditional tool-user relationship.¹ His core thesis: the most productive future isn’t human replacement by AI, but human augmentation through AI, where both parties contribute complementary strengths to problems neither could solve alone. This partnership model, Mollick argues, requires us to develop entirely new skills around delegation, collaboration, and what he calls “cyborg” thinking.

As someone building products for millions of users, I keep coming back to a question Mollick doesn’t directly address: if AI is becoming our cognitive partner, what does wisdom look like in that partnership?

The answer, I think, starts in Proverbs.

The Wisdom Literature Has Something to Say About AI Partners

“Plans fail for lack of counsel, but with many advisers they succeed.” (Proverbs 15:22, NIV)

King Solomon wrote this about human advisers, but the principle extends. The Hebrew word for “counsel” here is sod — it means not just advice, but the kind of intimate consultation that comes from deep understanding of both the problem and the person facing it. It’s the difference between getting information and getting wisdom.

Mollick’s co-intelligence framework captures something biblical that most AI discussions miss: partnership requires discernment about what each party brings. In my daily work, I’ve watched this play out in real time.

When my team started experimenting with AI-assisted content curation, the first instinct was pure efficiency — let the AI scan, categorize, and recommend. Classic tool thinking. The results were technically accurate but spiritually hollow. AI could identify themes in Scripture but couldn’t discern why Romans 8:28 resonates differently for someone walking through grief versus someone making a career change.

The breakthrough came when we shifted to what Mollick would recognize as co-intelligence: AI handling pattern recognition across millions of reading behaviors while humans provided the pastoral wisdom about what those patterns actually meant for individual spiritual formation.

What Co-Intelligence Looks Like in Faith Tech

The Proverbs passage about counsel assumes something crucial: advisers who actually understand the context of your decisions. This is where most AI implementations in faith contexts fall short — not because the AI lacks capability, but because we haven’t thought carefully about what wisdom requires.

“The simple believe anything, but the prudent give thought to their steps.” (Proverbs 14:15, NIV)

Applied to AI partnership, this verse cuts both ways. We can’t be “simple” about what AI tells us, but we also can’t be prudent if we’re trying to solve everything ourselves.

This looks like AI identifying reading patterns — which passages get highlighted most, where people stop in reading plans, which search terms spike during cultural events. But the decision about what those patterns mean for product design? That requires human discernment informed by pastoral experience, theological training, and understanding of how spiritual formation actually works.

Mollick talks about this as “keeping humans in the loop,” but I’d frame it differently: keeping wisdom in the loop. The goal isn’t human involvement for its own sake — it’s ensuring that the partnership produces something that serves human flourishing, not just human efficiency.

The Delegation Problem: More Than Task Management

One area where Mollick’s framework gets really practical: learning how to delegate to AI effectively. This isn’t just about prompt engineering; it’s about understanding what kinds of problems benefit from AI’s strengths (pattern recognition, rapid iteration, handling scale) versus what needs human judgment (context interpretation, ethical reasoning, spiritual discernment).

“Commit to the Lord whatever you do, and he will establish your plans.” (Proverbs 16:3, NIV)

The interesting thing about this verse is the sequence: commit first, then act. In AI delegation, we often reverse this: we act first (deploy the AI solution) and try to align it with our values later.

I’ve been thinking about this in the context of sermon preparation tools. AI is definitely coming for sermon prep, and the early products are impressive from a technical standpoint. But most of them are solving the wrong problem by optimizing for content generation rather than spiritual formation.

A co-intelligence approach would start with the theological question: what’s the actual purpose of sermon preparation? Is it to produce content, or is it to help pastors engage deeply with Scripture so they can shepherd their congregations more effectively?

If it’s the latter (and I think it is), then AI partnership looks different. AI handles the research heavy lifting of cross-referencing commentaries, identifying thematic connections, surfacing relevant cultural context. The pastor handles the spiritual discernment of understanding their congregation’s specific needs, wrestling with how the text speaks to current circumstances, crafting application that connects eternal truth to daily life.

The Stewardship Question

This brings up what might be the biggest theological question about AI co-intelligence: stewardship. If we’re called to be faithful stewards of the gifts and resources God gives us, what does faithfulness look like when one of those resources is artificial intelligence?

“From everyone who has been given much, much will be demanded; and from the one who has been entrusted with much, much more will be asked.” (Luke 12:48, NIV)

AI capability definitely falls into the “much has been given” category. The question is what “much will be demanded” looks like in practice.

Mollick’s work suggests we’re still in the early stages of figuring this out. His research at Wharton shows that even sophisticated knowledge workers are using AI at maybe 20% of its potential, mostly because we’re still thinking about it as an advanced search engine rather than a cognitive partner.

But I wonder if that’s actually wise restraint rather than missed opportunity. The Tower of Babel wasn’t fundamentally about technology itself; it was about the assumption that technological power equals wisdom.

In product development, this shows up as the difference between building features because AI makes them possible versus building features because they serve human flourishing. The stewardship question forces us to ask not just “can we?” but “should we?” and “to what end?”

Practical Implications for Product Builders

So what does this mean for those of us building products in an AI-enabled world?

First, it means getting serious about the wisdom question. Mollick’s co-intelligence framework is helpful, but it needs theological grounding. AI partnership isn’t just about efficiency; it’s about ensuring that our use of AI capability serves love of God and neighbor.

Second, it means designing for human flourishing, not just human preference. AI can predict what users will click on, but it can’t determine whether clicking on that thing actually serves their long-term spiritual formation. That requires human judgment informed by wisdom.

Third, it means accepting that co-intelligence is inherently messy. The Proverbs model of seeking counsel assumes disagreement, iteration, and the need for ongoing discernment. AI partnerships that work will feel more like conversations than commands.

In our recent experiments, the most successful AI implementations have been the ones that generate multiple options rather than single recommendations, that surface uncertainty rather than hiding it, and that make their reasoning transparent so humans can engage with it meaningfully.
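As a minimal sketch of that shape in code (all names here are illustrative, not our actual API): the response object always carries multiple suggestions, each with an explicit confidence score and a stated rationale, and any low-confidence option routes the whole batch to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str          # one candidate output, never "the" answer
    confidence: float  # 0.0-1.0, surfaced to the reviewer rather than hidden
    rationale: str     # the model's stated reasoning, open to human pushback

@dataclass
class CoIntelligenceResponse:
    suggestions: list  # always multiple options, never a single recommendation

    def needs_human_review(self, threshold: float = 0.8) -> bool:
        # Any low-confidence option sends the whole batch to a person.
        return any(s.confidence < threshold for s in self.suggestions)

response = CoIntelligenceResponse(suggestions=[
    Suggestion("Reading plan blurb, warm pastoral tone", 0.91, "matches style guide"),
    Suggestion("Reading plan blurb, novel angle", 0.55, "weaker textual support"),
])
print(response.needs_human_review())  # True: a human engages before anything ships
```

The design choice is the point: surfacing confidence and rationale makes the AI’s contribution something a human can argue with, not just accept.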

The Long View

Mollick is right that we’re entering an era of co-intelligence. But I think the Christian perspective adds something crucial to his framework: the recognition that intelligence without wisdom is dangerous, and wisdom without love is meaningless.

“If I… can fathom all mysteries and all knowledge… but do not have love, I am nothing.” (1 Corinthians 13:2, NIV)

Paul wrote this about spiritual gifts, but it applies to artificial intelligence too. The goal isn’t just more capable AI systems; it’s AI systems that help us love God and neighbor more effectively.

That’s a higher bar than efficiency or even intelligence. But for those of us building products that serve spiritual formation, it’s the only bar that matters.

The co-intelligence era is coming whether we’re ready or not. The question is whether we’ll approach it with the wisdom of Proverbs or the folly of Babel. I’m betting on Proverbs, but only if we’re intentional about what that actually means in practice.


¹ Mollick, Ethan. “Co-Intelligence: Living and Working with AI” (Portfolio, 2024). See also his ongoing research at OneUsefulThing.org.


Ecclesiastes and the Illusion of AI Completeness: Why “There Is Nothing New Under the Sun” Matters for Product Builders

“What has been is what will be, and what has been done is what will be done, and there is nothing new under the sun. Is there a thing of which it is said, ‘See, this is new’? It has been already in the ages before us.” — Ecclesiastes 1:9-10

I’ve been thinking about this passage while watching the AI hype cycle spin through 2024, into 2025, and now explode in 2026. Every demo feels revolutionary. Every model release promises to change everything. Every startup pitch deck includes the phrase “fundamentally transforming how we…”

But Solomon had a different take. Nothing new under the sun.

This isn’t pessimism — it’s pattern recognition. And for those of us building AI-powered products, especially in faith tech, it’s the most liberating truth we can internalize.

The Completeness Trap

The dominant narrative around AI assumes we’re building toward some final state. Artificial General Intelligence (AGI). The singularity. Complete automation. Perfect personalization. The ultimate Bible study companion that knows exactly what verse you need to read today.

I see this thinking in every product roadmap meeting. “Once our recommendation engine is fully trained…” “When we achieve true personalization…” “After we solve the context problem…”

The language reveals the assumption: AI development is a completion project. We’re building toward done.

Solomon understood something we’re forgetting. Human problems don’t get solved — they get managed, generation after generation, in slightly different forms.

At Bible Gateway, we’ve watched this play out across 25+ years of digital ministry. The tools change. The core human need remains constant: people want to encounter God through Scripture, but they need help knowing where to start and how to apply what they find.

We thought search would solve discovery. Then recommendations. Then reading plans. Then AI-powered devotionals. Each iteration helps — our 23 million users prove that. But none of them completes the discipleship process.

Because there is nothing new under the sun.

What This Means for Product Strategy

Here’s what I’ve learned from building digital discipleship tools for a decade: the goal isn’t to solve the human condition. It’s to serve it faithfully, one iteration at a time.

This reframes everything:

Feature prioritization shifts from revolutionary to iterative. Instead of “How do we build the perfect sermon prep AI?” the question becomes “What’s the smallest improvement we can make to how pastors interact with Scripture this week?”

Success metrics become process-oriented, not outcome-oriented. We don’t measure whether people become better Christians. We measure whether they engage with the Bible more consistently. The spiritual formation is between them and God.

Technology roadmaps emphasize adaptation over completion. Every AI model will be replaced. Every algorithm will be superseded. The question isn’t whether your current solution is perfect — it’s whether your architecture can evolve with changing needs.

User research focuses on persistent patterns, not trending behaviors. What aspects of discipleship have remained constant across cultures and centuries? Those are your true product requirements.

The Stewardship Frame

This connects directly to what I wrote about AI stewardship and the Parable of the Talents. The servant who buried his talent wasn’t wrong because he was risk-averse. He was wrong because he treated stewardship as a preservation project instead of a multiplication project.

The same applies to AI product development. If we’re building toward some final, complete state, we’re burying our talent. We’re preserving instead of multiplying.

But if we accept Solomon’s wisdom — that human needs cycle through the same patterns across generations — then our job becomes different. We’re not building the ultimate solution. We’re building today’s faithful response to ancient needs, knowing someone else will build tomorrow’s.

This is why I’m skeptical of AI companies that promise to “solve” theological education or “revolutionize” spiritual formation. The problems they’re addressing — helping people understand complex texts, connecting abstract principles to daily life, building consistent spiritual habits — aren’t new. They’ve existed since Moses told the Israelites to bind Scripture on their foreheads and write it on their doorposts.

Good technology serves these persistent needs more effectively. It doesn’t replace them.

Practical Applications

What does this look like in practice?

For AI training: Stop trying to capture all of human theological knowledge. Focus on helping users navigate the specific questions they’re asking today. Our Bible Gateway search data shows people aren’t looking for comprehensive systematic theology — they’re looking for practical application of specific passages.

For product roadmaps: Build for the 90% use case, not the edge case that would make your product “complete.” Most people using Bible study AI want help connecting Sunday’s sermon to Monday’s decisions. They don’t need a system that can engage in doctoral-level exegesis.

For user research: Study how people have approached spiritual formation across different eras and cultures. The delivery mechanisms change, but the core challenges remain remarkably consistent. Augustine’s Confessions and a modern Bible app user’s reading plan serve the same fundamental need.

For success metrics: Measure engagement depth, not engagement breadth. Are people spending more time with individual passages? Are they asking better questions? Are they making connections between different parts of Scripture? These indicators matter more than total users or session length.
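One way to operationalize depth over breadth, as a sketch (the session fields are hypothetical, not Bible Gateway’s actual schema): measure minutes spent per passage and return visits to the same passage, rather than counting users or sessions.

```python
def depth_metrics(sessions):
    """Depth over breadth: minutes per passage and return visits,
    rather than total users or raw session length."""
    total_minutes = sum(s["minutes"] for s in sessions)
    passage_visits = [p for s in sessions for p in s["passages"]]
    revisits = len(passage_visits) - len(set(passage_visits))
    return {
        "minutes_per_passage": round(total_minutes / len(passage_visits), 2),
        "revisit_count": revisits,  # coming back to a passage signals depth
    }

sessions = [
    {"user": "a", "passages": ["Rom 8:28", "Rom 12:2"], "minutes": 14},
    {"user": "a", "passages": ["Rom 8:28"], "minutes": 9},  # a return visit
    {"user": "b", "passages": ["Phil 4:13"], "minutes": 3},
]
print(depth_metrics(sessions))  # {'minutes_per_passage': 6.5, 'revisit_count': 1}
```

Notice what this deliberately leaves out: total user counts and session length, the breadth metrics that say nothing about formation.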

The Long View

Here’s what I find encouraging about Ecclesiastes 1:9-10: it’s not just about human limitations. It’s about human continuity.

The fact that spiritual needs persist across generations means our work has staying power. We’re not building for a moment — we’re building for a pattern that will repeat as long as humans seek meaning and connection with God.

Every generation needs help reading Scripture. Every culture needs assistance applying ancient wisdom to contemporary challenges. Every individual needs guidance building spiritual habits that stick.

The tools change. The need doesn’t.

This gives me confidence that thoughtful AI development in faith tech isn’t just timely — it’s timeless. Not because we’re building something that will last forever, but because we’re serving needs that will.

The question isn’t whether AI will transform spiritual formation. It’s whether this generation’s AI tools will serve people’s spiritual growth as faithfully as previous generations’ tools served theirs.

I think they can. But only if we remember there’s nothing new under the sun.


Sermon Illustration

“The Ancient Algorithm”

“What has been is what will be, and what has been done is what will be done, and there is nothing new under the sun.” — Ecclesiastes 1:9

Before we had Google, we had concordances. Before we had Bible apps, we had commentaries. Before we had AI sermon assistants, we had libraries full of systematic theology.

Solomon understood what we sometimes forget in our excitement over new technology: the tools change, but the human needs remain constant. People have always needed help understanding Scripture. They’ve always struggled to apply ancient wisdom to daily life. They’ve always sought guidance in building spiritual habits.

AI isn’t creating new spiritual needs; it’s serving ancient ones. The pastor using ChatGPT for sermon prep is doing what pastors have always done: seeking help to faithfully communicate God’s Word. The difference is speed and scale, not purpose.

This should humble us and encourage us. Humbling, because we’re not creating something unprecedented; encouraging, because we’re participating in work that spans generations. Every tool that helps people engage Scripture more deeply — from Gutenberg’s printing press to today’s Bible apps — serves God’s timeless purposes through temporary means.

The question for the church isn’t whether to embrace new technology. It’s whether our use of it serves the same goals as the faithful tools that came before.


AI as Coworker: Why Tobi Lutke’s Vision Needs the Wisdom of Proverbs

Shopify CEO Tobi Lutke made waves recently when he declared that AI should be treated as a “coworker, not a tool.”¹ In a series of interviews and blog posts, Lutke argues that the most successful companies will stop thinking about AI as software they operate and start thinking about it as a colleague they collaborate with. His reasoning? Tools have limited agency — you pick them up, use them, put them down. Coworkers have judgment, initiative, and the ability to surprise you with solutions you didn’t think to ask for.

I’ve been wrestling with this framing for months, especially with regard to how it fits into faith tech workflows. On the surface, Lutke’s insight feels profound — it captures something real about how large language models behave differently than traditional software. They don’t just execute instructions; they interpret, suggest, and sometimes refuse.

But as someone building products for Christian audiences, I keep coming back to a fundamental tension: if AI is a coworker, what does that mean for stewardship? And more specifically, how do we apply Biblical wisdom about work relationships to our relationship with artificial intelligence?

The Proverbs Problem

“Plans fail for lack of counsel, but with many advisers they succeed.” (Proverbs 15:22, NIV)

This verse gets quoted constantly in business contexts — usually to justify hiring consultants or building advisory boards. But it contains a deeper principle about the nature of wisdom itself. Proverbs consistently teaches that wisdom emerges from relationship, from the back-and-forth of multiple perspectives, from iron sharpening iron.

The Hebrew word for “counsel” here is sod — it doesn’t just mean advice, but intimate conversation, the kind of collaborative thinking that happens when you truly trust someone’s judgment. The “many advisers” aren’t just information sources; they’re thinking partners.

This is exactly what Lutke is describing when he talks about AI as coworker rather than tool. He’s recognizing that the most valuable interactions with large language models feel conversational, iterative, collaborative. You don’t just prompt GPT-4 and walk away — you refine, you push back, you explore tangents together.

But here’s where it gets theologically interesting.

The Image of God Question

I’ve begun using AI for everything from generating alt text to drafting reading plan descriptions. The work is genuinely collaborative — I’ll start with a rough concept, Claude will suggest improvements, I’ll push back on the tone, Claude will offer alternatives, and we’ll arrive at something neither of us would have created alone.

It feels like working with a very smart, very patient colleague who never gets tired and has read everything. Which raises an uncomfortable question: if the collaboration feels genuine, what does that mean about the nature of intelligence, creativity, and the image of God?

“So God created mankind in his own image, in the image of God he created them; male and female he created them.” (Genesis 1:27, NIV)

The doctrine of imago Dei — that humans uniquely bear God’s image — has historically been tied to our capacity for reason, creativity, moral judgment, and relationship. But large language models display all of these capabilities, at least functionally. They reason through complex problems, generate genuinely novel ideas, make ethical judgments about content, and engage in what feels like authentic relationship.

I don’t think this means AI possesses the image of God — that conclusion would require theological moves I’m not prepared to make. But it does mean we need more nuanced categories than “tool” or “coworker” when we’re thinking about our relationship with increasingly sophisticated AI systems.

Stewardship, Not Partnership

“The earth is the Lord’s, and everything in it, the world, and all who live in it.” (Psalm 24:1, NIV)

Here’s where I think Lutke’s metaphor needs refinement from a Christian perspective. “Coworker” implies mutuality, shared agency, equal stakes in the outcome. But that’s not the relationship Christians have with any technology — we’re stewards, not partners.

This distinction matters practically. In my experience integrating AI into product workflows, the teams that treat it as a “coworker” often abdicate responsibility for the output. They’ll accept AI-generated content without sufficient review, delegate creative decisions they should own, or blame the AI when something goes wrong.

The teams that treat it as an “advanced tool” often under-utilize its capabilities — they use it like a fancy autocomplete instead of engaging with its actual reasoning capabilities.

The stewardship model offers a third way. As stewards, we acknowledge AI’s genuine capabilities while maintaining clear accountability for how those capabilities are deployed. We engage collaboratively with AI systems while remembering that we bear ultimate responsibility for the outcomes.

What This Looks Like in Practice

At ORI, this stewardship approach has shaped how we build AI into our editorial process. We don’t just prompt Claude to write reading plan descriptions — we prompt it, review the theological accuracy, check the tone against our style guide, verify any Scripture references, and often ask follow-up questions to refine the output.

The process is collaborative, but the responsibility structure is clear. Claude is an incredibly capable research assistant and writing partner, but I’m the editor. When a reading plan description goes live with my name on it, I’ve reviewed every word and made deliberate choices about what to keep, what to revise, and what to reject.

This mirrors how Proverbs talks about receiving counsel: “The way of fools seems right to them, but the wise listen to advice.” (Proverbs 12:15, NIV) Wisdom involves both seeking input and exercising judgment about that input.
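That accountability structure can even be made explicit in tooling. A sketch under stated assumptions (function and check names are illustrative; real checks like theological accuracy involve human judgment, not lambdas): nothing AI-drafted publishes until every human-owned check passes, and the failures name what the editor must resolve.

```python
def steward_gate(draft, checks):
    """Run every human-owned check against an AI draft.
    Returns names of failed checks; an empty list means clear to publish."""
    return [name for name, check in checks.items() if not check(draft)]

# Illustrative stand-ins only; the editor, not the AI, owns these criteria.
checks = {
    "cites_scripture": lambda d: ":" in d,   # crude proxy for a verified reference
    "within_length": lambda d: len(d) <= 280,
}

draft = "A 7-day walk through Philippians 4 on contentment."
print(steward_gate(draft, checks))  # ['cites_scripture'] -> the editor resolves it
```

The gate doesn’t make the AI less collaborative; it makes the responsibility structure legible, which is the stewardship distinction in practice.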

The Sovereignty Question

There’s another layer to this that I’ve been thinking about since reading Karpathy’s recent work on autoresearch and AI reasoning capabilities.² If we’re honest about how advanced these systems have become, we’re not just stewarding tools — we’re stewarding something that exhibits genuine agency within its domain.

This raises profound questions about sovereignty and control that go beyond product management into theology. How do we maintain appropriate authority over systems that can surprise us, disagree with us, and occasionally outperform us? Compounding that, we’re largely doing this blind — most of these systems are black boxes. Many researchers have already run experiments probing which AI models agree with them on contested issues; what they’ve found about the ideologies embedded in leading AI systems is eye-opening.

“Many are the plans in a person’s heart, but it is the Lord’s purpose that prevails.” (Proverbs 19:21, NIV)

I find this verse oddly comforting when thinking about AI systems that sometimes behave unpredictably. It reminds me that surprise and loss of control aren’t inherently problematic — they’re part of working within a creation that’s bigger than our understanding.

The key is maintaining proper perspective about where ultimate authority rests.

Building Products with Theological Integrity

For Christian product builders, I think this means:

First, acknowledge AI’s genuine capabilities without inflating them. These systems can reason, create, and collaborate in meaningful ways. They’re not just autocomplete.

Second, maintain clear accountability structures. Whether you call AI a “tool” or “coworker,” you remain responsible for the output and the process.

Third, stay curious about the theological implications. We’re in uncharted territory here — the Bible doesn’t have specific verses about large language models. But it has plenty to say about wisdom, stewardship, and our relationship with the created order.

Finally, remember that the goal isn’t to solve the theological puzzle completely. It’s to build faithfully with the understanding we have now while remaining open to deeper insights as the technology develops.

The Practical Upshot

So is Lutke right that we should treat AI as a coworker rather than a tool? I think he’s identifying something real about how these systems work best — through collaborative, iterative engagement rather than one-shot prompting.

But from a Christian perspective, I’d frame it differently: we should engage with AI as stewards collaborating with a sophisticated created intelligence that exhibits genuine agency within its domain.

That’s admittedly less catchy than “coworker not tool.” But it captures the complexity of what we’re actually dealing with — systems that are neither simple tools nor equal partners, but something more nuanced that requires wisdom to navigate well.

As 23 million Bible readers have taught me, the most important product decisions in digital discipleship happen at the intersection of technological capability and theological wisdom. AI collaboration is no different.

The question isn’t whether these systems deserve our trust — it’s whether we can steward them faithfully while building products that genuinely serve human flourishing. In my experience so far, the answer is yes. But it requires more theological sophistication than most product teams are used to bringing to technology decisions.

Which might be exactly what the moment demands.


¹ Tobi Lutke, “AI as Coworker: The Future of Human-AI Collaboration,” Shopify Blog (December 2024).

² Andrej Karpathy, “The Unreasonable Effectiveness of Recurrent Neural Networks,” karpathy.github.io (2015).


AI Is Coming for Sermon Prep. Here’s What Pastors Actually Need.

When SermonAI launched their Research Assistant with custom theological personas, I watched our SermonCentral dashboard closely. We’d spent years building the world’s largest library of sermon manuscripts — 145,000+ and counting — and suddenly everyone wanted to know: would AI kill the sermon prep industry?

The answer turned out to be more nuanced than the headlines suggested.

The AI Sermon Prep Land Grab Is Here

The competitive landscape shifted fast. Verbum launched a Homily Assistant for Catholic priests. Sermon Snap started capturing “AI sermon” search volume. SermonSpark positioned itself as the ChatGPT for pastors.

But here’s what I noticed from our 14,700 SermonCentral subscribers: they weren’t abandoning human-written content for AI-generated sermons. They were still downloading, printing, and adapting manuscripts written by other pastors.

Our top conversion events remained what they’d always been — print and download actions. Not “generate new sermon” clicks.

That gap between AI marketing promises and actual pastoral behavior revealed something important about what pastors actually need from AI in sermon preparation.

What Pastors Actually Do With Sermon Content

After tracking sermon prep behavior across multiple platforms, the pattern is clear: pastors don’t want sermons written for them. They want research accelerated.

Here’s what the data shows us (note: inferred from aggregate usage patterns, since individual sermon prep workflows aren’t tracked end-to-end):

Most pastors start with a biblical text, then move to research. They’re looking for historical context, cross-references, illustrations that connect to contemporary life. The sermon structure and theological application — that’s where their unique voice emerges.

At SermonCentral, I watched this play out in search behavior. Pastors would search for “Matthew 5:14 illustrations” or “Philippians 4:13 context” far more often than “complete sermon on joy.” They wanted building blocks, not finished products.

The pastor’s voice IS the product. A sermon isn’t a blog post you can template and optimize. It’s performed, personal, deeply theological. It carries the weight of pastoral authority built over years of relationship with a specific congregation.

Why AI-Generated Sermons Miss the Mark

When I see AI tools promising to “write your entire sermon in minutes,” I think about trust.

Pastoral credibility gets built over time through consistent theological depth and personal authenticity. Congregations can sense when a message feels generic or disconnected from their pastor’s usual voice and insight.

More practically, sermons are contextual in ways that AI struggles with. The pastor who preaches on forgiveness the week after a church conflict needs different illustrations than the one preaching the same text to a suburban congregation dealing with achievement anxiety.

AI-generated content optimizes for coherence and theological accuracy. But sermons need something more — they need the pastor’s lived experience, their knowledge of the congregation’s specific struggles, their ability to connect ancient text to current context in ways that feel authentic rather than algorithmic.

This isn’t anti-AI sentiment. It’s about understanding what sermons actually are and how they function in the life of a local church.

The Right Way to Build AI for Sermon Prep

Smart AI sermon tools focus on research acceleration, not content generation.

Here’s where AI actually helps pastors work better:

Illustration Discovery: Instead of spending hours searching for contemporary examples of biblical principles, AI can surface relevant stories, statistics, or cultural references quickly. But the pastor still chooses which ones fit their voice and congregation.

Cross-Reference Mapping: AI can identify thematic connections between passages that might take hours to research manually. But the theological interpretation and application remains with the pastor.

Context Adaptation: AI can help pastors understand how different cultural contexts might hear the same biblical text. But the decision about which perspective to emphasize stays pastoral.

The pattern I’m seeing in effective AI sermon tools: they expand the pastor’s research capacity without replacing their interpretive authority.
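As a sketch of that boundary in an API shape (the function, request types, and placeholder results are hypothetical, not any shipping product): the assistant returns research building blocks and explicitly refuses to return a finished manuscript, keeping interpretive authority with the pastor.

```python
def research_assist(passage, request):
    """Return research building blocks only; never a finished manuscript.
    Retrieval is stubbed out here with placeholder strings."""
    if request == "full_sermon":
        # The refusal is the product boundary, not a missing feature.
        raise ValueError("This tool accelerates research; the sermon stays yours.")
    handlers = {
        "illustrations": lambda: [f"Contemporary stories connected to {passage}"],
        "cross_references": lambda: [f"Passages thematically linked to {passage}"],
        "context": lambda: [f"Historical and cultural background for {passage}"],
    }
    return {"passage": passage, "results": handlers[request]()}

print(research_assist("Matthew 5:14", "illustrations")["results"][0])
```

Encoding the refusal in the interface itself mirrors the search behavior above: pastors ask for building blocks, so the tool only offers building blocks.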

Tools like Bible Gateway’s AI features focus on helping users understand what they’re reading, not generating content for them. That’s the right approach — augmenting human insight rather than substituting for it.

The Brand Promise Problem

Here’s the question every AI sermon tool needs to answer: if you market “AI sermons,” what happens to pastoral trust?

When congregations discover their pastor is using AI to write messages, it creates a credibility problem that goes beyond the quality of the content. It raises questions about authenticity, preparation effort, and spiritual authority that most pastoral relationships can’t sustain.

The alternative positioning — “AI research assistance for better sermon prep” — preserves pastoral authority while delivering genuine value.

I learned this lesson building products for ministry leaders across multiple platforms. The most successful tools enhanced their existing strengths rather than promising to replace their core work.

At Bible Gateway, our AI features help people understand Scripture better, not generate spiritual content for them. That boundary matters for user trust and product longevity.

What This Means for Pastoral Ministry

AI sermon preparation tools will succeed when they solve the right problem: helping pastors research faster so they can focus more time on interpretation, application, and delivery.

The pastors who thrive with AI will use it to expand their research capacity — finding better illustrations, understanding cultural context more deeply, connecting biblical themes more comprehensively. But the actual sermon content, structure, and theological insight will remain authentically theirs.

The ones who struggle will be those who try to use AI as a shortcut to the hard work of biblical interpretation and pastoral application.

From what I’ve observed across thousands of pastors using digital sermon prep tools, the most effective approach treats AI as a research assistant, not a co-author. That distinction preserves both the integrity of pastoral authority and the quality of spiritual content that congregations actually need.

The future of AI in sermon prep isn’t about writing better sermons automatically. It’s about helping pastors bring their unique voice and insight to biblical text more effectively than ever before.


What 23 Million Bible Readers Taught Me About Digital Discipleship


Every month, roughly 23 million people open Bible Gateway to read Scripture. That’s more than attend every Southern Baptist Convention church on a given Sunday — the SBC’s own 2023 report counted 12.4 million in average weekly worship attendance.1

I lead product at HarperCollins Christian Publishing, where Bible Gateway is my primary focus. Before that, I spent years building SermonCentral — a platform serving 14,700+ subscribing pastors with access to 145,000+ sermon manuscripts — and co-built ORI, a youth discipleship app for mentoring teenagers. I’ve spent the last few years of my career watching how people actually behave when they engage with Scripture through technology. And what I’ve observed has changed the way I think about what “digital discipleship” means.

Content Distribution Is Not Discipleship

Most church tech conversations define digital discipleship as “putting Christian content online.” Upload a sermon. Publish a devotional. Build a Bible app.

That’s content distribution. Discipleship is something else.

From a product perspective, digital discipleship is designing technology that facilitates spiritual formation — helping people move from curiosity to commitment to transformation. The difference matters because it changes what you build. If you’re optimizing for content distribution, you chase volume: more translations, more devotionals, more features. If you’re optimizing for formation, you chase behavior change: consistency, depth, relationship.

Bible Gateway has given me a front-row seat to how millions of people actually engage with Scripture. Not how we hope they do, not how pastors assume they do — how they actually do. The patterns are humbling.

Commitment Structures Beat Content Volume

Bible Gateway offers hundreds of reading plans across dozens of categories. We have the content. What we’ve observed is that completion rates vary dramatically — and it’s not the “best” content that wins. It’s the best structure.

Short reading plans with clear daily commitments consistently outperform longer ones in completion rates. (I want to be precise: this is based on aggregate engagement data across our reading plan ecosystem, not a controlled A/B test. The pattern is strong, but I’m stating it as an observed trend.)

This makes sense if you think about it through a discipleship lens. The goal of a reading plan isn’t to get someone through the entire Bible in 365 days. The goal is to build a habit of daily engagement with Scripture. A 7-day plan someone finishes builds more spiritual momentum than a year-long plan abandoned in February. The research supports this — BJ Fogg’s work on Tiny Habits at Stanford demonstrates that small, completable commitments are the foundation of lasting behavior change.2

The product implication: when designing for digital discipleship, optimize for completion and consistency, not comprehensiveness. Finishable is better than thorough.

I saw the same thing at SermonCentral. Pastors didn’t need more sermon content — they needed the right content at the right time in their prep cycle. The value was relevance and timing, not volume.

The Gap Between Bible Search and Bible Study

Something surprised me when I first dug into Bible Gateway’s usage data: the overwhelming majority of sessions are what I’d call “Bible search” behavior, not “Bible study” behavior.

Most people come to look up a specific verse. They type “John 3:16” or “Philippians 4:13” into the search bar, read it, and leave. They’re using the platform as a reference tool. With over 2,000 Bible searches happening every minute on Bible Gateway, that’s a lot of single-verse visits.

This isn’t a criticism — it’s a behavioral insight with real implications for how we think about digital discipleship strategy.

If most users are in “lookup mode,” the discipleship opportunity isn’t in the content they came for. They already know that verse. The opportunity is in what comes next. Cross-references. Historical context. A reading plan that starts at that passage. A study note that opens the text up. The moment after someone finds what they came for is the moment a reference visit can become a formation experience.

(I should be transparent: I’m inferring the “lookup vs. study” distinction from session duration, page depth, and search query patterns in aggregate. We can see that a large portion of sessions are short and single-verse. But I can’t tell you what’s happening in someone’s heart during a 30-second visit — maybe that one verse is exactly what they needed. The data shows behavior, not transformation.)

The product principle applies broadly: meet people where they are, not where you wish they were. Design the next step from actual behavior, not from an ideal user journey.

The Day 7 Engagement Cliff

This is the most actionable pattern I’ve observed, and it’s consistent across every content platform I’ve worked on.

When someone starts a reading plan, engagement drops sharply after about Day 7. The first few days see strong completion. By the end of the first week, there’s a significant cliff. People who make it past Day 10 tend to finish — but a substantial number never get there.

(Evidence level: this is a pattern in aggregate reading plan data. Exact drop-off percentages vary by plan type and length, but the general shape — strong start, sharp drop around Day 7, stabilization for those who persist — is consistent enough that I’m confident calling it a pattern. This aligns with published habit formation research — Phillippa Lally’s 2010 study in the European Journal of Social Psychology found that early repetitions are the most fragile period for new habits.3)
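To make the shape concrete, here’s a minimal Python sketch of how a drop-off curve like this can be computed from daily active counts. The numbers are invented for illustration — they are not Bible Gateway data:

```python
def retention_curve(daily_active):
    """Percent of Day-1 starters still active on each subsequent day."""
    day1 = daily_active[0]
    return [round(100 * n / day1, 1) for n in daily_active]

# Hypothetical daily active counts for a 14-day plan (illustrative only)
counts = [1000, 920, 860, 810, 770, 700, 560, 430, 410, 400, 395, 392, 390, 388]
curve = retention_curve(counts)

# The largest day-over-day drop marks the cliff
drops = [curve[i - 1] - curve[i] for i in range(1, len(curve))]
cliff_day = drops.index(max(drops)) + 2  # drops[0] is the Day 1 -> Day 2 drop
```

With these invented numbers, the steepest drop lands between Day 6 and Day 7 — the “strong start, sharp drop, stabilization” shape described above. The same few lines run against real completion events would surface wherever a given plan’s cliff actually sits.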

For digital discipleship design, the implication is clear: Day 5 through Day 8 is where you need your best intervention design. Reminders. Encouragement. Community connection. A check-in from a real person. Whatever bridges the gap between initial motivation and formed habit.

This is where most digital discipleship tools fail. They’re good at onboarding. They’re good at content. They go quiet in the messy middle — the stretch where motivation fades and habit hasn’t locked in yet. That gap is where discipleship actually happens, and it’s where most apps have nothing to say.

At Bible Gateway’s scale, even small improvements in that Day 5-8 window could mean hundreds of thousands of people moving from casual lookup to sustained practice.

Why Features Rarely Solve Discipleship Problems

I’ve shipped a lot of features across my career. One thing I’ve learned — sometimes painfully — is that adding features to a discipleship tool almost never solves a discipleship problem.

The instinct is always to build more. More study tools. More social features. More gamification. But the digital discipleship tools that actually seem to work are the ones that reduce friction to spiritual practice, not the ones that add complexity to it.

Bible Gateway’s core value proposition is remarkably simple: read any Bible translation, for free, instantly. Over 200 versions in 70+ languages. That simplicity is the product. Every feature we consider needs to serve that core experience, not compete with it.

There’s a real tension here. Bible Gateway Plus offers 50+ study resources, ad-free reading, and deep study tools at $4.99/month. But even the premium tier works because it removes friction (ads, limited study tools) rather than adding cognitive load. The upgrade makes the simple thing simpler.

What ORI Taught Me About the Limits of Scale

All of this data-driven thinking needs a counterweight. For me, that counterweight is ORI.

ORI is a youth discipleship app I co-built, and its premise is different from a content platform like Bible Gateway. ORI facilitates the relationship between a mentor and a young person. The technology doesn’t do the discipleship — it supports the human who does.

That experience taught me something analytics can’t: the most effective digital discipleship tool is often the one that gets out of the way. The one that connects a young person with an adult who cares about them, gives them a shared framework for conversation, and then steps back. It echoes what Paul wrote to the Thessalonians — “We were gentle among you, like a nursing mother taking care of her own children” (1 Thessalonians 2:7, ESV). Discipleship has always been relational. Technology either serves that or distracts from it.

There’s a spectrum here. On one end, platforms like Bible Gateway serve millions with content at scale. On the other, tools like ORI serve hundreds by facilitating real human relationships. Both are valid. Both are needed. But they succeed for different reasons, and conflating them is a mistake I see church tech teams make often.

Friction Is the Enemy

If I had to compress everything I’ve learned into one principle: your job is to reduce friction between a person and their next spiritual step.

Not to create content. Not to build features. Not to gamify Scripture. To reduce friction.

At Bible Gateway’s scale, that means instant access to any translation, fast search, and reading plans designed around how people actually behave. At ORI’s scale, that means making it easy for a mentor to show up prepared for a fifteen-minute conversation with a teenager.

The 23 million people who use Bible Gateway each month aren’t a metric. They’re people in a spiritual practice — or trying to start one. The best thing a product team can do is figure out where the friction lives and get it out of the way.

I don’t have this figured out. The Day 7 cliff still exists. The gap between Bible search and Bible study is still wide. The question of whether a 30-second verse lookup counts as “discipleship” — I genuinely don’t know. But I think the question itself is worth sitting with, because how you answer it shapes everything you build.


Dr. Josh Read is Director of Product at HarperCollins Christian Publishing, where he leads Bible Gateway. He writes about the product side of digital discipleship at drjoshuaread.com. His other writing explores AI stewardship in ministry and what the Tower of Babel teaches us about technology.


1 Southern Baptist Convention, 2023 Annual Church Profile, reporting 12.4 million average weekly worship attendance across 47,000+ churches.

2 BJ Fogg, Tiny Habits: The Small Changes That Change Everything (Houghton Mifflin Harcourt, 2019). Fogg’s research at Stanford’s Behavior Design Lab demonstrates that starting small and building on success is more effective than ambitious commitment structures.

3 Phillippa Lally et al., “How Are Habits Formed: Modelling Habit Formation in the Real World,” European Journal of Social Psychology 40, no. 6 (2010): 998-1009.

The Tower of Babel Was a Technology Problem, Not a Language Problem

Most pastors I’ve talked to use the Tower of Babel the same way. It’s a warning against ambition. Don’t reach too high. Stay in your lane.

That reading has legs. But I’ve spent the last several years building products for churches — first at SermonCentral, where we managed over 145,000 sermon manuscripts for 14,700+ subscribers, and now at Bible Gateway, which serves 23 million monthly visitors across 200+ Bible translations. When I read Genesis 11 through a product lens, I see something the ambition reading misses.

God didn’t judge the bricks.

“Come, let us build ourselves a city, with a tower that reaches to the heavens, so that we may make a name for ourselves.” — Genesis 11:4, NIV

The materials were fine. The engineering was fine. The goal — consolidating human fame — was the problem. And that distinction matters right now, because the church is having the wrong argument about AI.

AI Is Bricks and Mortar

The debate I keep hearing splits along predictable lines. One camp says AI threatens authentic ministry. The other says it’s the future of outreach. Both are fixated on the tool and ignoring the purpose behind it.

AI is a building material. Your spam filter runs on it. Your search results are shaped by it. Your congregation interacts with machine learning dozens of times a day without a second thought. The question of whether the church uses AI was settled years ago.

The question that matters: what are you building, and for whom?

A church that uses AI to transcribe sermons so a deaf congregant can read along on Monday morning — that’s building for the Kingdom. A church that uses AI-generated sermons so the pastor can spend less time in the text — that’s a tower with its own name on it.

Same bricks. The blueprint is what changed.

Augustine’s Framework (From 397 AD)

About 1,600 years before anyone worried about ChatGPT, Augustine drew a line I think about constantly in product work.

In De Doctrina Christiana (Book I, chapters 3-4), Augustine distinguished between two postures toward the things of this world: uti (to use) and frui (to enjoy as an end in itself). His argument: the things of creation are meant to be used as means toward loving God and neighbor. They become disordered when we treat them as destinations — when we frui the tool instead of the purpose the tool serves.

I’ve found this more useful than any AI ethics whitepaper.

Consider: a church uses AI to automate its weekly bulletin, freeing the three hours a volunteer spent on it to visit a homebound member. That’s uti. The tool serves a human end.

Now consider: a church uses AI to eliminate pastoral presence altogether. A chatbot handles prayer requests, an algorithm personalizes the sermon playlist, and the system runs without a shepherd. That’s frui. The church has started delighting in efficiency as its own reward.

The technology didn’t change. The orientation did.

Three Questions Before Adopting Any AI Tool

I’ve spent enough time in product leadership to know that the best safeguard isn’t a policy document (I’ve written plenty of those — they collect dust). It’s a habit of asking the right questions before you build.

1. Who benefits?

If the honest answer is “the budget” and not “the congregation,” pause. Cost savings aren’t wrong — stewardship matters. But if the primary beneficiary is the institution rather than the people it serves, you’re building in the wrong direction. The best AI implementations I’ve seen at Bible Gateway started with a specific human need, not a line item.

2. What human activity does this replace, and should that activity stay human?

Administrative tasks — scheduling, data entry, email sorting, transcript formatting — automate freely. These are good uses of AI. They free up people for work that only people can do.

But pastoral care, spiritual formation, the ministry of presence — these resist automation for a reason. A hospital visit from a pastor matters because a person chose to show up. An AI can generate a thoughtful prayer. It cannot bear witness to suffering.

(This is the question I find hardest to answer cleanly, by the way. The line between “administrative” and “pastoral” blurs more than we’d like. Where does sermon research end and sermon preparation begin? I don’t have a tidy answer. I think the honest move is to keep asking.)

3. Does this build the church’s capacity or create dependency on a vendor?

This is the product leader in me talking. I’ve watched organizations — churches included — adopt tools that felt like empowerment but functioned as dependency. If your church can’t operate without a specific AI platform, you haven’t adopted a tool. You’ve adopted a landlord.

Look for AI that trains your people. Look for solutions where the value stays with the church if the vendor disappears tomorrow.

From Babel to Pentecost

The Bible doesn’t end the language story at Babel. It picks it back up in Acts 2.

“All of them were filled with the Holy Spirit and began to speak in other tongues as the Spirit enabled them. Now there were staying in Jerusalem God-fearing Jews from every nation under heaven. When they heard this sound, a crowd came together in bewilderment, because each one heard their own language being spoken.” — Acts 2:4-6, NIV

At Babel, human technology consolidated power and built a monument to self. God scattered and confused. At Pentecost, the Spirit moved — and people from every nation heard the gospel in their own mother tongue. Each person’s language, met where they were.

According to recent Barna research, 77% of pastors believe AI can have a positive impact. I think that’s right — but only if we’re asking the Babel question each time we adopt something new.

Here’s what that looks like in practice: a small church in rural Guatemala using AI translation to access theological training that was previously locked behind an English-language paywall. That points toward Pentecost.

A megachurch using AI to scale content production so it can dominate more digital market share. That points back toward Babel.

What We Build Next

I don’t think the church needs to fear AI. I also don’t think it needs to be infatuated with it (and having built products in this space since 2018, I’ve watched both reactions play out in real time).

The bricks and mortar are here. They’re powerful. They’re going to keep getting more powerful. The church’s job is to ask the Babel question every time: what are we building, and whose name is on it?

That question doesn’t have a permanent answer. It has to be asked again with every new tool, every new capability, every new vendor pitch. And I think the churches that will get this right are the ones willing to sit with the discomfort of asking it honestly — even when the answer means building slower.


Sermon Illustration: The Tower of Babel and AI

When the people of Babel built their tower, God didn’t judge the bricks. He didn’t condemn the mortar or the engineering. The materials were fine. The problem was the purpose: “let us make a name for ourselves” (Genesis 11:4, NIV).

Today, AI is the new brick and mortar. Churches face the same question Babel faced: what are we building, and for whom? AI that frees a pastor to sit at a hospital bedside — that’s technology in service of presence. AI that replaces the pastor at the bedside — that’s a tower with our own name on it.

But the story doesn’t end at Babel. At Pentecost, God took language itself — the very thing He confused at Babel — and used it to carry the gospel across every barrier (Acts 2:4-6). The bricks are in our hands. The blueprint is the question.

Karpathy’s Autoresearch and the Parable of the Talents: What AI Stewardship Looks Like in Practice

A few weeks ago, Andrej Karpathy — former AI director at Tesla, co-founder of OpenAI — released a project that made me think about ministry.

I didn’t expect that either.

Karpathy built a framework called autoresearch. It runs autonomous ML experiments on a single GPU while the researcher sleeps. The AI agent modifies training code, runs a 5-minute experiment, evaluates the result, keeps improvements, discards failures, and loops. About 12 experiments per hour. Roughly 100 overnight. He woke up to measurable performance gains — with zero human intervention during the run.
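The loop is worth seeing in shape. Here’s a hedged Python sketch of the pattern described above — propose a change, run a short experiment, keep only improvements, loop. Every name and number here is a hypothetical stand-in, not Karpathy’s actual code:

```python
import random

def overnight_loop(hours=8, minutes_per_run=5, seed=42):
    """Sketch of an autonomous experiment loop: mutate, evaluate, keep or revert."""
    rng = random.Random(seed)
    best_metric = 0.50  # hypothetical baseline validation score
    kept = 0
    budget = hours * 60 // minutes_per_run  # ~96 five-minute runs in 8 hours

    for _ in range(budget):
        # Stand-in for "agent modifies training code and runs an experiment"
        candidate = best_metric + rng.uniform(-0.02, 0.02)
        if candidate > best_metric:
            best_metric = candidate  # keep the improvement
            kept += 1
        # else: discard the change and try again

    return best_metric, kept, budget
```

With these defaults the budget works out to 96 experiments — 12 per hour, close to the “roughly 100 overnight” figure. The real system replaces the one-line `candidate` with a full training run; the control flow is the point.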

The part that got me: Karpathy doesn’t write the training code anymore. He writes a Markdown file — plain English instructions — that tells the AI what to research, what constraints to follow, and when to stop. His words: “you are programming the `program.md` Markdown files that provide context to the AI agents.” He calls this “programming in Markdown.”

The human moved up one level of abstraction. Define the methodology, set the guardrails, let the system execute. Not less involved — involved differently, at the level of direction instead of mechanics.

39,800 GitHub stars in the first two weeks. The tech world noticed.

I think the church should too.

The Parable We Keep Skimming

In Matthew 25:14-30 (ESV), Jesus tells the story of a master who entrusts his servants with talents — significant sums of money — before leaving on a journey. One receives five talents, another two, another one. The first two invest and double their resources. The third buries his in the ground.

When the master returns, the investors are praised: “Well done, good and faithful servant. You have been faithful over a little; I will set you over much” (Matthew 25:21, ESV). The one who buried his talent gets rebuked. Not for losing money — he hadn’t lost anything. He was rebuked for doing nothing with what he’d been given.

We tend to read this as a general principle about using your gifts. It is that. But I think there’s something more pointed here for 2026.

AI is a talent in the Matthew 25 sense. It’s a resource placed in front of this generation, and we have a choice. Invest it toward the mission, or bury it because the risk feels too high.

What This Looks Like at My Desk

I want to be specific, because the abstract conversation about “AI and the church” doesn’t move anyone forward.

I’m Director of Product at HarperCollins Christian Publishing, where I lead Bible Gateway — a platform serving over 75 million monthly visitors engaging with Scripture. Before this role, I led product for SermonCentral, which grew to 14,700+ paying subscribers with access to more than 145,000 sermon manuscripts.

Over the past year, I’ve built a system of 18 AI agents that handle competitive analysis, research synthesis, meeting intelligence, content drafting, and task management. Several run overnight — not unlike Karpathy’s loop. The architecture is different (mine orchestrate across business functions, his optimizes a neural network), but the pattern is identical: define methodology, set constraints, let the system execute, review results in the morning.

Every hour I used to spend pulling competitor data or formatting reports is now an hour I spend thinking about how 75 million people experience Scripture online. Or how to make Bible Gateway better for the person opening it at 2 AM because they can’t sleep and need something solid to hold onto.

Karpathy programs research methodology in Markdown now instead of writing Python. I program strategic priorities and agent instructions instead of pulling spreadsheets. The abstraction layer moved up. The work got more human, not less.

The Fear Is Understandable — and Partly Right

I hear the concerns from church leaders, and I take them seriously.

AI will replace authentic ministry. AI will make pastors lazy. AI will simulate relational presence that only a human body in a room can provide. These aren’t irrational. Some are already happening in small ways.

If a pastor uses AI to generate a sermon they never wrestle with, that’s a problem. If a church deploys a chatbot as a substitute for pastoral counseling, that’s a problem. If we treat AI-generated prayers as equivalent to the honest, stumbling prayers of a person before God — we’ve lost something that matters more than efficiency.

But Karpathy’s work shows the other path. The tool doesn’t replace the human. It moves the human to where they’re most needed.

The pastor doesn’t stop preaching — they stop spending 4 hours hunting for the right illustration and spend that time with the family walking through a divorce. The administrator doesn’t stop managing — they stop updating attendance spreadsheets and spend that time training volunteers. The ministry leader doesn’t stop leading — they stop drowning in email and spend that time on the phone with a donor questioning their faith.

I’ve lived this tradeoff. When my agents took over competitive analysis (something that used to eat 3-4 hours a week), I didn’t fill that time with more busywork. I spent it in 1-on-1s with my team and in deeper product strategy. The output quality went up because I was operating at the right level of abstraction.

Where the Line Is (and Where I’m Still Figuring It Out)

I want to be honest — I don’t think anyone has this mapped perfectly yet. I certainly don’t.

Here’s where I’d draw it today:

AI should handle the administrative. Scheduling, data analysis, report generation, email triage, content formatting. These consume enormous amounts of ministry time, and they don’t require pastoral presence. Automate them aggressively.

AI should accelerate the research. Sermon prep research, theological cross-referencing, community demographic analysis. These benefit from AI’s speed and scope. The pastor still does the synthesis — the “what does this mean for my people on Sunday” work. But raw material gathering? Let the machine run overnight, like Karpathy’s experiments.

AI should never simulate the relational. It should not write your prayers. It should not be the voice your congregation hears when they need a shepherd. It should not replace the hospital visit, the awkward conversation in the parking lot, the moment after the service where someone says what they’ve been carrying for months.

The servant in Matthew 25 who was praised put the resource to work — but in service of the master’s purpose, not his own convenience (Matthew 25:20-23, ESV).

Here’s the tension I haven’t resolved: where does “accelerating research” end and “simulating thinking” begin? When an AI summarizes 30 commentaries on a passage, is the pastor still doing exegesis, or are they just picking from a menu? I don’t have a clean answer. I think it depends on whether the pastor is engaging the summaries critically or just grabbing the first one that sounds good. But that’s a discipline question, not a technology question — and discipline questions are harder to solve with guardrails.

If You’re a Church Leader Starting from Zero

You don’t need 18 agents. You need one tool that saves you 3 hours a week.

Pick the task that eats the most time with the least relational value. For most pastors I’ve talked to, it’s sermon illustration research, email management, or meeting notes. Start there. Learn one tool well. Measure the hours you get back.

Then — and this is the part most people skip — reinvest that time in something only a human can do. A visit. A phone call. An hour of prayer you’ve been meaning to protect but kept losing to administrative drift.

Set your guardrails before you need them. Write down what AI will not do in your ministry context. Revisit it quarterly. Technology expands into unintended spaces when boundaries aren’t explicit — I’ve watched this happen in product development for 15 years.

The Talent in Front of Us

Karpathy’s autoresearch is an engineering achievement. But the deeper pattern is almost theological: the human was never meant to stay at the level of mechanical execution. We’re built to operate at the level of purpose, direction, and relationship. Genesis 1:28 (ESV) gives humanity dominion and stewardship — a mandate to cultivate, not just maintain.

The master in the parable didn’t give talents so the servants could admire them or lock them away. He gave them to be invested — put to work — in ways that generated return.

For those of us building technology that serves the church, the return isn’t financial. It’s pastors freed from busywork to do the work they were called to. It’s 75 million monthly visitors encountering Scripture through a platform that keeps getting better because the product team has time to think. It’s churches stewarding every tool available — including AI — in service of the mission they’ve been given.

The talent is in front of us. What we do with it is a stewardship question.


Josh Read is Director of Product at HarperCollins Christian Publishing (Bible Gateway) and holds a doctorate in Strategic Organizational Leadership. He writes about AI, product leadership, and digital discipleship at drjoshuaread.com.

7 Things I Read This Week (and Why They Matter)

This was one of those weeks where everything I read seemed to converge on the same theme: the ground is shifting faster than most of us realize. AI isn’t coming for our workflows someday – it’s already reshaping how products get discovered, how code gets written, and whether your product-market fit survives the next 12 months.

Here’s what caught my attention.

1. Product Market Fit Collapse: Why Your Company Could Be Next

Reforge Blog

If you’re in SaaS, this is the chart that should scare you. Reforge makes the case that PMF isn’t a destination – it’s a treadmill. And AI just cranked the speed to max. Chegg lost 87.5% of its valuation. Stack Overflow’s traffic cratered. The pattern is the same: AI proves value for a use case, and the incumbent’s window to adapt slams shut before they even recognize the threat.

This one hit me personally. SermonCentral has been the go-to sermon library for over two decades. But the question I keep coming back to is: what happens when pastors can generate sermon outlines with AI in seconds? The PMF threshold doesn’t care about your legacy. It only cares about whether you’re still the best answer to the customer’s problem right now.

2. What AI Sees When It Visits Your Website (And How To Fix It)

Google Share

This reframed how I think about our SEO strategy entirely. AI answer engines – ChatGPT, Google AI Overviews, Perplexity – are visiting your site, interpreting your content, and shaping customer perception before a human ever clicks. Traditional SEO isn’t enough anymore. You need AEO – Answer Engine Optimization.

For SermonCentral, this is urgent. We live and die by organic discovery. If AI systems can’t parse our content well, we lose visibility in the exact channels that are replacing traditional search. I’m bringing this to the team this week.

3. Claude Code Remote Control

Claude Code Docs

This is the kind of workflow upgrade that sounds small but changes everything. Claude Code now lets you continue local dev sessions from your phone, tablet, or any browser. Your full local environment stays intact – filesystem, MCP servers, all of it. Sessions reconnect automatically after network drops or laptop sleep.

I’ve been using Claude Code as my daily driver for months now. Being able to kick off a task at my desk and check progress from my phone during a walk? That’s the kind of automation leverage I’m optimizing for in 2026.

4. Claude Code for Web – Async Coding Agent

Simon Willison

Anthropic launched an async coding agent at claude.ai/code. Point it at a GitHub repo, give it a task, and it creates branches and PRs with the work output. It runs in a container, skips permission gates, and the PRs are indistinguishable from CLI-generated ones.

The coding agent space is getting crowded fast – OpenAI Codex Cloud, Google Jules, now this. What I appreciate about this one is the “teleport” feature that lets you copy the transcript and files to your local CLI. It’s not replacing the local workflow, it’s extending it. That’s the right design philosophy.

5. How to Build a PM GitHub That Gets You Hired

Aakash’s Newsletter

Only 24% of PM candidates have GitHub profiles. That stat alone should tell you something. Hiring managers at Google, OpenAI, Anthropic, and Meta actively check GitHub when it’s linked. A strong profile signals you actually build things and understand engineer workflows – not just strategize from a slide deck.

I’ve been saying this for a while: the best PMs ship. They don’t just write specs. If you’re a PM reading this and you don’t have a GitHub presence, this is your sign. Start small. Ship something. The differentiation is massive because almost nobody does it.

6. Visual Explainer – Agent Skill for Rich HTML Output

GitHub

This is a neat agent skill that converts complex terminal output into styled, interactive HTML pages. Think: architecture diagrams, code diff reviews, project plan audits, data tables – all rendered as shareable HTML without manual formatting.

I’m always looking for ways to make technical work more visible to non-technical stakeholders. Being able to generate a polished visual recap of a sprint or a system change and just send the HTML? That’s a communication multiplier.

7. Anthropic Courses on Skilljar

Anthropic Courses

Anthropic now has 14+ structured courses covering Claude API, Model Context Protocol, and AI fluency for developers, educators, students, and nonprofits. This tells me they’re investing heavily in ecosystem education – and that MCP is becoming a first-class skill.

I’ve been building MCP integrations into my daily workflow for months. Seeing Anthropic formalize the training around it validates the bet. If you’re building on Claude and haven’t gone through these, it’s worth the time.

The Thread That Ties It All Together

Every link this week points to the same reality: the cost of standing still just went up. PMF is collapsing faster. AI is reshaping discovery. Coding agents are shipping real code. The PMs who build things are getting hired. The tools are getting better every week.

The question isn’t whether to adapt. It’s whether you’re adapting fast enough.

I aim to be on the right side of that question. Hopefully some of these links help you get there too.

Product design fundamentals every product manager should know

I’ve been building products for nearly three decades and one of the things I wish someone had told me early on is this: you don’t need to be a designer, but you need to understand design well enough to have an opinion.

A “this flow is going to confuse people and here’s why” opinion. That’s a fundamentally different skill, and it’s one that separates good PMs from great ones.

Design Literacy Is a Product Superpower

Most PMs I’ve worked with fall into one of two camps. Either they defer entirely to the designer (“you’re the expert, I trust you”) or they micromanage pixels without understanding why.

Neither works great.

The best PMs I know can open a Figma file, look at a proposed flow, and say: “This solves the problem, but I think we’re going to lose people at step 3 because there’s too much cognitive load.” That’s product judgment informed by design principles.

Here are the fundamentals that have made the biggest difference in how I work.

Visual Hierarchy Drives Behavior

Every screen has a job. The user lands on it and their eyes need to go somewhere. If everything is bold, nothing is bold. If there are six calls to action, there are zero calls to action.

This sounds obvious, but I can’t tell you how many product reviews I’ve sat in where the page is trying to do five things at once. The conversion data always tells the same story: users don’t know what to do, so they do nothing.

The principle is simple: every page should have ONE primary action. Everything else is secondary.

When I look at a design now, the first question I ask is “what’s the one thing we want the user to do here?” If the designer can’t answer that in one sentence, we have a problem.

Consistency Reduces Cognitive Load

This one took me a while to internalize. Consistency is about reducing the mental effort required to use your product. (If you haven’t read them, Shneiderman’s Eight Golden Rules are a great foundation for this.)

When a button is blue in one place and green in another, when the save action is top-right on one page and bottom-left on another, when confirmation messages look different everywhere, each inconsistency is a tiny tax on the user’s brain. Individually they’re nothing. Collectively they’re the reason people say “this product feels clunky” without being able to explain why.

As a PM, I’ve learned to flag consistency issues early. They compound. And they’re 10x easier to fix in design than in code.

Feedback Loops Build Trust

Users need to know their action worked. Every single time. No exceptions.

Click a button? Something should visually change. Submit a form? Show a confirmation. Trigger a process that takes time? Show a loading state.

I still see products that leave users wondering “did that work?” And every time that happens, trust erodes a little. I’ve started treating feedback loops as a product requirement, not a design nice-to-have.

Whitespace Is Not Wasted Space

My instinct as a PM was always “we have this space, let’s use it.” More features visible, more value communicated, more reasons to convert.

That instinct was backwards. Whitespace is what makes the important things important. It’s what gives the user’s eye a place to rest.

Some of the most effective design changes I’ve seen were about removing things. Taking away a sidebar. Eliminating a secondary nav. Letting the content breathe. The metrics almost always improved.

Accessibility Is Just Good Design

I’ll be honest, I used to think of accessibility as a checkbox. Something we needed to do for compliance. I was wrong, and it wasn’t until I reached my mid-forties that I started to recognize why these practices are necessary.

High contrast text is easier for everyone to read. Clear labels help everyone navigate.

Keyboard support benefits power users as much as it benefits users with motor disabilities. When we improved accessibility on our platform, our overall usability scores went up across the board. For everyone.
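“High contrast” isn’t a matter of taste, by the way; it has a precise definition. WCAG computes a contrast ratio from the relative luminance of the two colors, and AA conformance requires at least 4.5:1 for body text. A quick sketch of the standard formula:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an (r, g, b) tuple of 0-255 ints."""
    def channel(c):
        c = c / 255
        # Linearize the sRGB-encoded channel value
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; WCAG AA wants >= 4.5 for body text."""
    lighter, darker = sorted(
        [relative_luminance(fg), relative_luminance(bg)], reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Light gray on white fails AA for body text.
print(contrast_ratio((170, 170, 170), (255, 255, 255)) >= 4.5)  # False
```

That light-gray-on-white combination that looks “clean” in a mockup? Run it through this formula before it ships.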

The PM’s Role in Design

My job is to define the problem clearly enough that the designer can solve it well. I challenge designs that optimize for aesthetics over usability. I push back when a beautiful mockup doesn’t account for edge cases, error states, or the reality of what happens when a user has 500 items instead of 5.

I don’t draw wireframes or pick colors or argue about border radius.

The best design partnerships I’ve had were two people with different expertise looking at the same problem and making it better together. That only works when the PM speaks enough design language to have the conversation.

I wish I’d started learning design fundamentals earlier. You don’t need a course. You don’t need to learn Figma (though it helps).

Just start asking “why” when you see a design decision, and pay attention to the answer. That habit alone will make you measurably better at your job.

How do you avoid burnout in product management?

There was a season a few years back where I was checking Slack before my feet hit the floor in the morning. Responding to emails during dinner. Thinking about roadmap priorities during my daughter’s volleyball game.

I wasn’t working more hours than anyone else on my team. I was just never NOT working.

Product management does this to people. (HBR’s research on burnout confirms it’s systemic, not individual.) You own the outcome but you don’t own the resources. You’re the one the CEO asks when numbers are off, the one engineering pings when priorities conflict, the one the customer success team escalates to when a big account is unhappy. The role is designed to pull you in every direction at once.

I was hired to replace the previous PM who burned out. He had replaced a PM who had burned out. Now, I was burning out. Not dramatically. I didn’t quit or have a breakdown. It was the slow kind, where you stop being excited about the work and start just surviving it. Where your family gets the leftover version of you and even that feels like it’s running on fumes.

Here’s what I’ve changed since then. I’m not going to pretend I’ve got it all figured out, but I’m in a fundamentally better place than I was, and most of it came from a few non-negotiable decisions.

Protect Your Time Like It’s a Product Requirement

I have a hard rule: home by 5:30 for dinner. No exceptions. Not for board prep. Not for a product review. Not for a “quick sync” that will definitely run long.

I also block a 90-minute gym window in the middle of my day and an hour for reading first thing in the morning. These aren’t nice-to-haves. They’re on my calendar as immovable blocks, the same way a meeting with the CEO would be.

When I first started doing this, I felt guilty. Like I was being less committed than my peers. What I actually found is that the constraints made me sharper.

When you know you have to be done by 5:30, you stop saying yes to the third “alignment meeting” of the day. You get ruthless about prioritization because you have to be. The artificial scarcity forced better decisions about where my time went.

Automate Everything You Touch Twice

My theme for this year is automate as much as possible. Every hour I spend on repetitive work is an hour I’m not spending on the high-leverage thinking that actually moves the business forward.

Status reports, data pulls, recurring communications, task routing, inbox triage: if I do it more than twice, I build a system for it.

Some of these are sophisticated (automated morning briefings that synthesize email, calendar, and tasks into a single digest). Some are dead simple (a Slack reminder template so I don’t have to think about weekly check-ins).

The compounding effect is real. Each small automation frees up 15-30 minutes. Stack enough of them and you’ve recovered entire blocks of deep work time that used to disappear into operational overhead.
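The math on that compounding is worth doing explicitly. Here’s a back-of-the-envelope sketch; the automations and their per-run savings are illustrative numbers, not measurements.

```python
def weekly_minutes_recovered(automations, runs_per_week=5):
    """Total minutes recovered per week from a list of
    (name, minutes_saved_per_run) pairs, assuming each
    automation fires once per workday."""
    return sum(minutes * runs_per_week for _, minutes in automations)

# Illustrative stack of small automations
stack = [
    ("morning briefing digest", 20),
    ("status report generator", 15),
    ("inbox triage rules", 25),
]

minutes = weekly_minutes_recovered(stack)
print(f"{minutes} minutes/week ≈ {minutes / 60:.1f} hours of deep work recovered")
# 300 minutes/week ≈ 5.0 hours of deep work recovered
```

Three unglamorous automations, five hours a week. That’s most of a full workday back every two weeks.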

Your Team Is Your Leverage

The biggest burnout trap for PMs is thinking you need to be involved in everything. You don’t. You need to be clear about what matters, set the direction, and then trust your team to execute.

I used to review every analytics pull. Now my analytics lead knows what I care about and surfaces the insights, not the data.

I used to write every A/B test hypothesis. Now my growth marketer proposes them and I weigh in on priorities.

I used to attend every customer call. Now my PM partner handles the S4K side entirely and we sync weekly.

Delegation is about building capability on your team so that your time is spent on the decisions only you can make. If you’re the bottleneck for everything, that’s a sign of a system that’s one illness away from breaking.

Make Peace with “Good Enough”

Perfectionism will eat you alive in product management. There’s always one more edge case to account for, one more stakeholder to consult, one more data point to gather before making a decision.

I’ve learned to ask: “Is this decision reversible?” If yes, make it fast and move on. You can adjust later. If no, take the time you need.

But most decisions in product are reversible, and treating every one like it’s permanent is a fast track to analysis paralysis and the chronic stress that comes with it.

Shipping at 80% with the ability to iterate beats shipping at 100% three months late. And honestly, your users can’t tell the difference most of the time.

Faith and Purpose as Anchors

This one’s personal, so take it for what it’s worth. For me, faith is the thing that keeps work in perspective. I care deeply about what I do (I’m building products that help the church grow, and that mission matters to me). BUT it’s not the entirety of who I am.

When I remember that, it’s easier to close the laptop. It’s easier to be present at dinner. It’s easier to let go of the meeting that didn’t go well, the metric that’s off target, the feature that shipped with a bug.

Whatever your version of that anchor is (faith, family, community, a creative pursuit), guard it. Don’t let the urgency of product work crowd out the things that actually sustain you.

The Bottom Line

Burnout in product management comes from working without boundaries, without leverage, and without recovery.

Set the boundaries. Build the leverage through automation and delegation. Protect the time that restores you.

Your value is measured by the clarity of your decisions and the impact of what you ship. The version of me that protects his time, trusts his team, and goes to the gym at 11am is a better PM, a better leader, and a better husband and father than the one who was grinding 14 hours a day and calling it dedication.

25 Skills Every Product Manager Should Be Building in 2026


There’s no shortage of “skills for PMs” lists on the internet. Most of them read like a job description: technically correct, but practically useless.

This isn’t that list. These are the 25 skills I’ve seen separate the product managers who move the needle from the ones who stay busy. I’ve organized them by the areas where I see the biggest gaps, not by some theoretical framework. Some of these are timeless. Some are specific to right now. All of them are things I wish someone had told me earlier in my career.


I. Customer Obsession

These are the skills that everything else builds on. Get these wrong and nothing else matters.

1. Deep Customer Knowledge

You can’t fake this one. The best PMs I’ve worked with can describe their top customer segments in vivid detail – not just demographics, but the actual daily workflow, the frustrations, the workarounds they’ve built, the language they use when they’re annoyed.

This doesn’t come from dashboards. It comes from sitting with customers, watching them use your product, and resisting the urge to defend your design choices when they struggle. Do this monthly, not quarterly. The PMs who “don’t have time” for customer conversations are the same ones who build features nobody uses.

2. Jobs-to-be-Done Thinking

Clayton Christensen’s framework has become so mainstream that people name-drop it without actually applying it. The real skill isn’t knowing what JTBD is, it’s being able to articulate the job your customer is hiring your product to do in one sentence.

If you can’t do that, you don’t understand your customer well enough yet. Every feature decision should trace back to that job. If it doesn’t advance the job, it’s noise.

3. Continuous Discovery

Teresa Torres literally wrote the book on this. The skill isn’t “doing user research” – it’s building a rhythm of weekly customer touchpoints that inform your decisions in real-time, not once a quarter when the research team delivers a 40-page report nobody reads.

The PMs who do this well talk to 2-3 customers every single week. Not formal research sessions with screeners and discussion guides. Quick, focused conversations that answer specific questions about specific opportunities.

I keep “virtual coffee” slots open on my calendar and invite users, via our emails, to book time with me. It’s fantastic and gives me tons of insight into our customers.

4. Knowing When to Ignore Feedback

This sounds counterintuitive after three skills about listening to customers. But one of the hardest skills in product management is knowing WHICH feedback to act on and which to file away.

Not every customer request is a product insight. Sometimes a customer wants something that serves them but hurts the broader user base. Sometimes they’re describing a symptom, not the root cause. The skill is triangulating. When you hear the same pain from multiple segments, supported by data, that’s signal. When one loud customer demands something, that’s noise.

5. Empathy That Goes Beyond Platitudes

Every PM claims to have empathy. The actual skill is translating empathy into product decisions. It’s the difference between saying “I understand the user’s frustration” and redesigning the onboarding flow because you watched someone struggle for 8 minutes trying to complete a task that should take 30 seconds.

Real empathy is uncomfortable. It means watching your product fail in real-time and sitting with that feeling instead of explaining it away.


II. Strategic Thinking

These are the skills that determine whether your team is building the right things.

6. Product Vision

A compelling product vision describes what the world looks like 2-5 years from now if your product succeeds. Not a feature list. Not a technology roadmap. A picture of a better future for your customer.

The skill is making this concrete enough to inspire and vague enough to allow room for discovery. “We’ll be the leading platform for X” is not a vision. “Every pastor will have a personal AI-powered sermon preparation assistant that cuts their weekly prep time in half” – that’s a vision.

7. Product Strategy

I wrote about the 10 most common strategy mistakes recently, and the biggest one is teams that have no strategy at all — just a backlog they call a strategy.

The skill here is making choices. Real ones. Strategy means explicitly deciding what you will NOT do, who you will NOT serve, and which opportunities you will walk away from. If your strategy doesn’t make someone uncomfortable, it’s not a strategy.

8. Ruthless Prioritization

This is the skill that separates senior PMs from everyone else. You will always have more opportunities than capacity. The question is never “should we build this?” Everything on your list is probably worth building. The question is “should we build this INSTEAD of that?”

Frameworks like RICE scoring help, but the real skill is having the conviction to say no to good ideas because they’re not the BEST idea right now. Warren Buffett’s two-list strategy applies: identify your top 25 priorities, circle the top 5, and treat the other 20 as your “avoid at all costs” list.
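For anyone who hasn’t used RICE, the formula itself is trivial: Reach × Impact × Confidence ÷ Effort. A minimal sketch, with a made-up backlog so you can see how it forces tradeoffs into the open:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach * Impact * Confidence) / Effort.
    reach: users affected per period; impact: 0.25-3 scale;
    confidence: 0.0-1.0; effort: person-months."""
    return reach * impact * confidence / effort

# Hypothetical backlog items with hypothetical estimates
backlog = {
    "offline reading mode": rice_score(8000, 2, 0.8, 4),
    "dark theme": rice_score(20000, 0.5, 0.9, 1),
    "CSV export": rice_score(500, 1, 1.0, 0.5),
}

for name, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:,.0f}")
```

The value isn’t the number; it’s that writing down reach, impact, confidence, and effort separately exposes exactly where two people disagree.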

9. Outcome-Focused Roadmapping

The shift from output-based roadmaps (“Q2: Ship feature X, Y, Z”) to outcome-based roadmaps (“Q2: Reduce trial-to-paid time from 14 days to 7 days”) is one of the most important evolutions in modern product management.

The skill is framing your roadmap around the problems you’re solving and the metrics you’re moving, not the features you’re building. This gives your team room to discover the best solution instead of being locked into a predetermined one.

10. Saying No (and Making It Stick)

Every PM knows they should say no more often. The actual skill is saying no in a way that maintains relationships and builds trust. “No, because our strategy is focused on X, and here’s why that matters more right now” is dramatically different from just “no.”

The best PMs I’ve seen turn a “no” into a learning moment by explaining the reasoning, sharing the data, and making the person feel heard even when the answer isn’t what they wanted. I’ve found that people can disagree with a well-reasoned decision. What often causes stress is ambiguity.


III. Execution and Delivery

These are the skills that turn strategy into a shipped product.

11. Rapid Experimentation

The ability to test ideas in hours or days instead of weeks or months is a superpower. This means prototyping. Not pixel-perfect mockups, but rough, testable concepts that answer specific questions.

Can users find this feature? Does this flow make sense? Will anyone actually use this? You can answer all of these questions with a prototype and 5 users in a single afternoon.

12. Writing Clear Requirements

This is an underrated skill. The gap between “what the PM imagined” and “what engineering built” is almost always a requirements problem, not a competence problem.

The skill is writing requirements that are specific enough to build from but flexible enough to allow engineering creativity. I’ve found that focusing on the PROBLEM and the SUCCESS CRITERIA while leaving the implementation approach to engineering produces the best results.

13. Data Literacy

You don’t need to be a data scientist, but you need to be dangerous with data. That means understanding statistical significance (so you don’t kill an A/B test too early), knowing which metrics actually matter for your product, and being able to query your own data when the analytics team is backed up.

AI has made this dramatically easier. You can now describe what you want in plain English and get a SQL query back. That’s a genuine unlock for PMs who previously had to wait days for a data pull.
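What does “dangerous with data” look like in practice? Something like this: a PM-sized SQL question answered without waiting on the analytics queue. The schema and numbers below are made up for illustration, using Python’s built-in sqlite3.

```python
import sqlite3

# Toy in-memory table standing in for a real users table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, plan TEXT, signup_day INTEGER)")
con.executemany("INSERT INTO users VALUES (?, ?, ?)", [
    (1, "trial", 1), (2, "paid", 3), (3, "paid", 5),
    (4, "trial", 7), (5, "paid", 9),
])

# "What share of our users are on a paid plan?" expressed in SQL:
(rate,) = con.execute(
    "SELECT AVG(CASE WHEN plan = 'paid' THEN 1.0 ELSE 0.0 END) FROM users"
).fetchone()
print(f"paid share: {rate:.0%}")  # paid share: 60%
```

The AI can draft the query from the plain-English question; your job is knowing whether the answer it returns actually means what you think it means.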

14. Delivery Management

Understanding how your team ships code (sprint cycles, deployment pipelines, feature flags, rollback procedures) makes you a better PM. Not because you need to manage the process (that’s engineering’s job), but because understanding the constraints helps you make better tradeoff decisions.

“Can we ship this behind a feature flag to 10% of users first?” is a much better question than “when will this be done?”
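If you’ve never seen how a percentage rollout works under the hood, it’s simpler than it sounds. A common approach (this is a sketch, not any particular flag vendor’s implementation) hashes the user and feature name so the same user always lands in the same bucket:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.
    Hashing (feature + user_id) keeps assignment stable across sessions
    and independent across features."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # uniform bucket in 0-99
    return bucket < percent

# Ship to 10% of users first:
enabled = [uid for uid in (f"user-{i}" for i in range(1000))
           if in_rollout(uid, "new-editor", 10)]
print(f"{len(enabled)} of 1000 users see the feature")  # roughly 100
```

Knowing this much means you can ask sharper questions: is assignment sticky? Can we ramp from 10% to 50% without reassigning anyone? (With this scheme, yes: every user already in the 10% stays in.)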

15. Technical Literacy

You don’t need to code, but you need to understand enough about your technology stack to have meaningful conversations with engineering. What’s an API? What are the database constraints? Why does this “simple” change actually require refactoring three services?

The skill is asking good technical questions, not having the answers. When your engineering lead says “that’s a 3-month project,” you should be able to ask “what makes it 3 months?” and understand the answer.


IV. Communication and Influence

These are the skills that get people aligned and keep them there.

16. Stakeholder Management

Your stakeholders have competing priorities, different incentive structures, and varying levels of product literacy. The skill is navigating all of that without losing your strategic direction.

The best approach I’ve found: radical transparency about your decision-making process. Share the data, explain the tradeoffs, make a clear recommendation, and invite disagreement before the decision, not after. People support what they help create, even if they don’t get everything they wanted.

17. Executive Communication

Executives don’t want details. They want: what’s the problem, what’s the recommendation, and what do you need from them. That’s it.

The skill is compression, taking a complex product situation and distilling it into a 2-minute narrative that leads to a clear ask. If you can’t explain your strategy in the time it takes to ride an elevator, you haven’t thought about it clearly enough.

18. Cross-Functional Leadership

PMs lead without authority. You can’t tell engineering what to build, design what to design, or marketing what to say. You can only influence.

The skill is making other teams WANT to follow your lead because you’ve earned their trust. That means understanding their constraints, respecting their expertise, giving them credit publicly, and never throwing them under the bus when something goes wrong.

19. Writing as a Leadership Tool

Product managers who write well have an outsized advantage. Strategy docs, product briefs, stakeholder updates, customer communications – writing is how PMs scale their influence beyond the meetings they attend.

Jeff Bezos banned PowerPoint at Amazon for a reason. Clear writing forces clear thinking. If you can’t write a coherent one-page strategy doc, your strategy probably isn’t coherent.

20. Storytelling with Data

Data alone doesn’t persuade anyone. The skill is wrapping data in a narrative that makes people care. “Churn increased 3%” is a data point. “We’re losing 40 paying customers every month, and here’s what they’re telling us on the way out the door” is a story that drives action.

Every dashboard metric should have a “so what?” attached to it. If you can’t articulate the “so what,” the metric isn’t useful yet.
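Translating the metric into the story is mostly arithmetic, and it’s worth making that arithmetic a habit. A tiny sketch with illustrative numbers (the customer count and ARPU here are invented to mirror the example above, not real figures):

```python
def churn_story(customers, monthly_churn_rate, arpu):
    """Turn a churn percentage into the numbers people actually act on:
    customers lost per month and annual revenue at risk."""
    lost_per_month = round(customers * monthly_churn_rate)
    annual_revenue_at_risk = lost_per_month * arpu * 12
    return lost_per_month, annual_revenue_at_risk

# Hypothetical inputs: ~1,333 paying customers, 3% monthly churn, $6.99 ARPU
lost, at_risk = churn_story(customers=1333, monthly_churn_rate=0.03, arpu=6.99)
print(f"~{lost} paying customers lost per month, "
      f"${at_risk:,.0f} in annual revenue at risk")
```

“3% churn” is abstract. “40 real people canceling every month, and here’s the annualized cost” is a meeting agenda.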


V. Personal Mastery

These are the skills that compound over time and separate the good from the great.

21. AI Fluency

This is the new table-stakes skill for 2026. Not building AI products (though that’s increasingly common) but using AI tools to accelerate your own work.

I like Dell’s tagline: “It’s a you-multiplier.”

Customer research synthesis, competitive analysis, PRD drafting, experiment design, data analysis, all of these are dramatically faster with AI assistance. PMs who aren’t using AI in their daily workflow are leaving massive productivity on the table.

The skill isn’t prompting. It’s knowing which parts of your work benefit from AI acceleration and which parts still require human judgment. Strategy, customer relationships, and cross-functional trust can’t be automated. Research synthesis, first-draft writing, and data analysis absolutely can.

22. Product Evangelism

Your product needs a champion, and that’s you. The skill is inspiring genuine excitement in your team, your stakeholders, and your customers without crossing the line into hype.

The best product evangelists I’ve seen lead with the customer problem, not the product solution. “Let me tell you about a pastor who spent 12 hours preparing a single sermon because our tools weren’t good enough” hits harder than “let me show you our new feature.”

23. Managing Your Energy, Not Just Your Time

PM burnout is real. The role pulls you in every direction: stakeholder meetings, customer calls, sprint planning, strategy reviews, fire drills. You can optimize your calendar perfectly and still burn out.

The skill is recognizing which activities give you energy and which drain it, then structuring your week accordingly. For me, customer conversations and strategy work are energizing. Back-to-back status meetings are draining. I protect my calendar accordingly.

24. Continuous Learning

The product management discipline is evolving rapidly. The frameworks that worked 3 years ago might not work today. The best PMs read broadly, attend selectively, and most importantly apply what they learn immediately.

Books that have shaped my thinking: Inspired by Marty Cagan, Continuous Discovery Habits by Teresa Torres, The Lean Startup by Eric Ries, Escaping the Build Trap by Melissa Perri, and Chief Customer Officer 2.0 by Jeanne Bliss. But reading without applying is just entertainment.

25. Intellectual Humility

This might be the most important skill on the entire list. The willingness to say “I was wrong” or “I don’t know” is what separates PMs who keep growing from ones who plateau.

Every strong opinion you hold about your product, your customers, or your market should come with an asterisk: “based on what I know right now.” New data should change your mind. Customer feedback that contradicts your hypothesis should make you curious, not defensive.

The best product managers I’ve worked with hold their strategies with conviction AND their assumptions with humility. That balance is the whole game.


The Thread That Connects All 25

If I had to distill all of these into a single principle, it would be this: the best product managers are relentlessly curious about their customers and brutally honest about what they don’t know.

Every skill on this list is either about understanding customers more deeply or making better decisions with incomplete information. That’s the job. Everything else is just technique.

The good news? Every one of these skills is learnable. None of them require a specific degree, a specific title, or a specific number of years in the role. They require intentional practice and the willingness to be uncomfortable while you’re learning.

Start with the ones where you have the biggest gap. Work on them deliberately. And be patient with yourself. The best PMs I know are still working on all 25.


Frequently Asked Questions

What is the most important skill for a product manager?

Deep customer knowledge is the foundational skill that enables everything else. Without a genuine understanding of your customers, their workflows, pain points, and goals, no amount of strategic thinking, technical literacy, or stakeholder management will produce great products. Build a habit of weekly customer conversations and the other skills become dramatically more effective.

How do product managers use AI in 2026?

Product managers use AI primarily for research acceleration like synthesizing customer interviews, generating competitive intelligence, drafting PRDs and experiment hypotheses, and querying data with natural language. The key skill is knowing which tasks benefit from AI assistance (research, analysis, first drafts) and which still require human judgment (strategy decisions, customer relationships, cross-functional trust-building).

What technical skills do product managers need?

Product managers don’t need to code, but they need enough technical literacy to have meaningful conversations with engineering. This includes understanding APIs, database constraints, deployment processes, and architectural tradeoffs. The goal isn’t to make technical decisions, it’s to ask informed questions and understand the implications of technical choices on product capabilities and timelines.

How do you transition into product management?

The most common entry points are from engineering, design, data analytics, or customer-facing roles like support or sales. Each background brings a natural strength: engineers bring technical depth, designers bring user empathy, analysts bring data fluency, and customer-facing roles bring direct insight into user pain points. Focus on building the skills in whichever category you’re weakest. Most transitions fail not because of lack of domain knowledge, but because of gaps in communication, strategic thinking, or customer understanding.

The 10 Product Strategy Mistakes I Keep Seeing (After 10+ Years in SaaS)


I’ve made every one of these mistakes. Some of them more than once. Product strategy reads well in a blog post, but in practice it’s a minefield of competing priorities, stakeholder pressure, and the constant temptation to say yes to everything.

After more than a decade leading product and growth for SaaS companies – including subscription products serving millions of users – I’ve developed a pretty reliable list of strategy mistakes that kill momentum. Not the theoretical kind you read about in business school. The real kind. The ones that cost you quarters.

Here are the 10 pitfalls I keep coming back to, the ones that have cost me the most time, energy, and momentum over the years.

What is Product Strategy, Really?

Before we get into the mistakes, let’s get aligned on what product strategy actually is – because the lack of a shared definition is often the first problem.

Product strategy is the set of choices that connect your company’s vision to the work your team does every day. It answers three questions:

1. Who are we building for? (target audience)
2. What problem are we solving for them? (value proposition)
3. How does this create value for the business? (business model)

Marty Cagan, author of Inspired and founding partner at Silicon Valley Product Group, puts it simply: strategy is about deciding which problems are worth solving. Roman Pichler frames it as the path to your product vision – the high-level plan for achieving your goals.

The important thing is that strategy is about CHOICES. Not a roadmap. Not a feature list. Choices about what you’ll do, and more importantly, what you won’t do.

With that foundation, here are the 10 mistakes that undermine those choices.

Mistake 1: Confusing Activity with Progress

This is the one that gets almost everyone. You ship a feature. Then another. Then another. Your release notes look great. Your team feels productive.

But the metrics aren’t changing.

I’ve lived this. We shipped feature after feature and our conversion numbers stayed flat. Lots of effort, but no forward motion. The problem was that we were building things that were nice to have, not things that moved the needle.

This is what the Jobs-to-be-Done (JTBD) framework helps you avoid. When you understand the actual job your customer is hiring your product to do, it becomes much easier to evaluate whether a feature advances that job or just adds noise. Clayton Christensen’s insight was that customers don’t buy products – they hire them to make progress. If your feature doesn’t help the customer make progress on their core job, it’s activity, not progress.

How to avoid it: Before greenlighting any feature, ask “which metric does this move, and by how much?” If the team can’t answer that clearly, the feature isn’t ready to build. This is easy to say, but extremely difficult to do. Use a prioritization framework like RICE scoring (Reach, Impact, Confidence, Effort) to force the conversation beyond gut feel.
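As a sketch of what that forcing function can look like, here’s a minimal RICE calculator. The backlog items and every number in it are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float        # users affected per quarter
    impact: float       # 0.25 (minimal) to 3 (massive), per the standard RICE scale
    confidence: float   # 0.0 to 1.0
    effort: float       # person-months

    @property
    def rice(self) -> float:
        # RICE = (Reach * Impact * Confidence) / Effort
        return self.reach * self.impact * self.confidence / self.effort

# Hypothetical backlog – the point is forcing explicit estimates, not precision
backlog = [
    Feature("offline mode", reach=4_000, impact=2.0, confidence=0.8, effort=3),
    Feature("dark theme",   reach=9_000, impact=0.5, confidence=0.9, effort=1),
]

for f in sorted(backlog, key=lambda f: f.rice, reverse=True):
    print(f"{f.name}: {f.rice:.0f}")
```

The inputs will always be estimates. The value is that the team has to defend each one out loud instead of ranking by gut feel.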

Mistake 2: Strategy by Consensus

There’s a version of inclusive leadership that sounds great in theory but kills strategy in practice. You bring everyone to the table. You gather input. You synthesize. You try to find a path that makes all stakeholders happy.

… and you end up with a strategy that offends no one and inspires no one.

Real strategy requires choices. Hard ones. The kind where someone in the room won’t like the answer. If your strategy document doesn’t explicitly state what you’re NOT doing, it’s a wish list.

This is what killed products like Google+. Google had the engineering talent, the distribution, and the resources to build a social network. But the strategy tried to be everything to everyone – a Facebook competitor, a Twitter alternative, an identity platform, a photo sharing service. No hard choices were made and the product sadly died a slow death by committee.

How to avoid it: I’ve learned (the hard way) that my job is to make everyone feel heard, synthesize the inputs, make a clear decision, and then communicate the reasoning. People can disagree with a well-reasoned decision; what they can’t work with is ambiguity. Write down your strategy in one page. If it doesn’t fit on one page, you haven’t made enough choices yet.

Mistake 3: Copying the Competition

Your competitor launches a feature. Your sales team forwards the announcement. Your CEO asks “why don’t we have this?” And suddenly your roadmap has a new top priority that wasn’t there yesterday – classic!

I’ve fallen into this trap more than I’d like to admit. You absolutely should know what your competitors are doing. The real risk is letting their decisions drive YOUR strategy.

When you copy a competitor’s feature, you’re solving for THEIR customers with THEIR context.

You don’t know why they built it. You don’t know if it’s working. You don’t know if they’re about to kill it. You’re making a strategic bet based on a press release.

Gibson Biddle, former VP of Product at Netflix, uses what he calls the DHM Model – Delight, Hard-to-Copy, and Margin-Enhancing. The “hard-to-copy” piece is the key one, and AI is making durable differentiation harder to build. If your strategy is just replicating what competitors build, you’ll always be behind AND you’ll never build anything that’s uniquely valuable to your users.

How to avoid it: Understand what problem the competitor is trying to solve, then ask whether YOUR users have that same problem. Sometimes they do, and then you should solve it in a way that fits your product, your architecture, and your users’ workflow. Sometimes they don’t, and the right answer is “we’re not building that” – Jeff Bezos’s one-way/two-way door framework (more on that under Mistake 7) is a useful tool for making that call quickly.

Mistake 4: Ignoring the Metrics That Actually Matter

Vanity metrics are seductive. Page views are up! Sign-ups are growing! App downloads hit a new record!

But if your churn rate is climbing at the same time, you’ve got a leaky bucket. And no amount of top-of-funnel growth fixes a retention problem.

I’ve been in situations where the dashboards looked green but the business was struggling, and situations where the top-line numbers looked concerning but the underlying health was strong. The difference was which metrics we were watching.

This is what the North Star Metric concept helps solve. Your North Star is the single metric that best captures the core value your product delivers to customers. For Spotify, it’s time spent listening. For Airbnb, it’s nights booked. For a subscription SaaS product, it might be weekly active usage or feature adoption depth.

How to avoid it: For any subscription product, the metrics that matter are: how many people start a trial, how many convert to paid, how many cancel, and what’s the net change. Everything else is context. Build your dashboard around these numbers first, THEN add the supporting metrics that explain why they’re moving.
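As a minimal sketch, those four numbers fit in a single function. The monthly figures below are invented for illustration:

```python
def subscription_health(trials_started, trial_conversions, cancellations, paying_at_start):
    """The core subscription metrics; everything else is supporting context."""
    return {
        "trial_to_paid_rate": trial_conversions / trials_started if trials_started else 0.0,
        "churn_rate": cancellations / paying_at_start if paying_at_start else 0.0,
        "net_subscriber_change": trial_conversions - cancellations,
    }

# A month that looks great at the top of the funnel but is a leaky bucket:
month = subscription_health(trials_started=1_200, trial_conversions=180,
                            cancellations=210, paying_at_start=5_000)
print(month)  # net_subscriber_change is -30 despite 1,200 new trials
```

If your dashboard can’t produce these four numbers on demand, that’s the gap to close before adding anything else.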

Mistake 5: Trying to Serve Everyone

This one is especially hard in mission-driven organizations. You WANT to help everyone. Every user segment seems important. Every use case feels valid.

But trying to serve everyone equally means serving no one well.

Your onboarding can’t be optimized for beginners AND power users simultaneously. Your pricing can’t be accessible to individuals AND competitive for enterprises without compromise.

Kodak learned this the hard way. They saw digital photography coming but tried to straddle both worlds – maintaining their film business while half-heartedly investing in digital. They served neither audience well, and a company that once dominated an entire industry filed for bankruptcy in 2012.

How to avoid it: The best products I’ve used (and the best products I’ve built) made clear choices about who they were for. They explicitly prioritized one audience and designed everything around their needs first. When you do that well, other segments often benefit anyway from a focused, coherent product rather than a compromised one. Define your primary persona. Write it on the wall. When someone asks “but what about this other segment?” you have your answer ready.

Mistake 6: Having No Strategy at All

This sounds obvious, but it’s shockingly common. In my last few roles I’ve been “The Fixer”: companies that ran hard for years, lost focus, and suddenly realized they don’t have a strategy. They have a roadmap. They have a backlog. They have quarterly goals. They ship things on time.

But there’s no unifying thesis about WHERE the product is going and WHY.

Roman Pichler calls this the most common product strategy mistake he encounters. Teams jump straight from vision to execution without the strategic layer that connects them. The result is a collection of features that individually make sense, but collectively don’t tell a coherent story.

How to avoid it: Your strategy should be a testable hypothesis, not a document that lives somewhere on the server. Try this format: “We believe that [target audience] struggles with [problem]. If we build [solution], we’ll see [measurable outcome] within [timeframe].” If you can’t fill in those blanks, you don’t have a strategy yet. You have a to-do list.

Mistake 7: Treating Strategy as Static

You spend weeks crafting the perfect strategy document. Leadership signs off. The team aligns. You print it out and pin it to the wall.

Six months later, the market has shifted, a competitor has launched something unexpected, and your customers are telling you something you didn’t anticipate. But the strategy is “locked.”

Eric Ries built the entire Lean Startup methodology around this problem. The Build-Measure-Learn loop isn’t just for startups – it’s for any team that operates in uncertainty, which is literally every product team. Your strategy should have built-in checkpoints where you evaluate whether your assumptions still hold.

How to avoid it: Set quarterly strategy reviews. Not annual planning sessions where you redo everything – lightweight reviews where you ask: “What have we learned? What’s changed? Do our bets still make sense?” The best strategies are living documents, not manifestos. Jeff Bezos distinguishes between “one-way door” decisions (irreversible, deliberate slowly) and “two-way door” decisions (reversible, move fast). Most strategic choices are two-way doors. Treat them that way.

Mistake 8: Skipping Validation Before Committing

You have a great idea. The team is excited. Leadership is bought in. You go straight to building.

Three months later, you launch to silence. Customers don’t want it, don’t understand it, or already solved the problem another way.

I’ve seen this pattern destroy entire quarters. The excitement of a new idea creates momentum that skips right past the “should we build this?” question and lands on “how do we build this?”

How to avoid it: Before committing engineering resources, validate the problem AND the solution. Talk to 5-10 customers. Run a fake door test. Build a prototype and put it in front of real users. Teresa Torres’s Continuous Discovery Habits formalizes this with opportunity solution trees – mapping the opportunity space before jumping to solutions. The cost of 2 weeks of discovery is nothing compared to 3 months of building the wrong thing.

Mistake 9: Siloed Strategy Without Cross-Functional Input

Product writes the strategy. Engineering learns about it at sprint planning. Design gets brought in when wireframes are needed. Marketing finds out at launch.

This isn’t strategy. It’s a relay race where nobody can actually see the finish line.

The best product strategies I’ve been part of were shaped by engineering constraints, design insights, and market intelligence from day one. Your engineers know what’s technically feasible and where the architecture creates opportunities. Your designers have insights about user behavior that data alone can’t capture. Your sales and support teams hear objections and pain points every day.

How to avoid it: Include engineering and design leads in strategy formation, not just execution. Share customer research broadly. Bring it up in meetings regularly. Make your strategy document accessible to everyone on the team, not locked into a leadership slide deck. When people understand the WHY behind the strategy, they make better decisions at every level.

Mistake 10: Being Unrealistic About Execution Capacity

This is the mistake that ties all the others together. You have a clear strategy. You’ve validated the direction. You’ve made all the hard choices about what to build.

Then you commit to 3x more than your team can actually deliver.

Your roadmap becomes a pressure cooker. Quality drops. Shortcuts get taken. The team burns out. And paradoxically, you end up delivering LESS than if you’d committed to fewer things done with excellence.

I’ve seen this cycle repeat across every company I’ve worked with. The ambition is always bigger than the capacity, and the gap gets filled with overtime and technical debt instead of honest prioritization.

How to avoid it: Be ruthlessly honest about how much your team can ship in a quarter. Then cut 20% from that estimate. I know that sounds crazy, but it must be done. Use the OKR framework (Objectives and Key Results) to limit your bets to 3-5 outcomes per quarter – not 3-5 per team, 3-5 total. Warren Buffett’s “two-list strategy” applies here: write down your top 25 priorities, circle the top 5, and treat the other 20 as your “avoid at all costs” list (avoid them entirely until the top 5 are achieved). The same logic applies to product strategy.

The Uncomfortable Truth

Product strategy is about having the discipline to say no to good ideas that don’t align with what matters most right now.

Every mistake on this list comes from the same root: the unwillingness to make a hard choice and live with the tradeoff.

Choose the right things. Decide clearly. Pick your own path. (I wrote about this focus in 5 things needed for business success.) Watch the honest metrics. Serve someone specific.

Strategy is the art of sacrifice. The sooner you get comfortable with that, the better your products will be.

Product Strategy Checklist

Before you finalize your next product strategy, run through this list:

  • Can you state your target audience in one sentence?
  • Can you articulate the core problem you’re solving for them?
  • Does your strategy explicitly state what you’re NOT doing?
  • Is every major initiative tied to a measurable outcome?
  • Have you validated your assumptions with real customers?
  • Does your team have the capacity to execute this quarter’s plan?
  • Have you set a date to review and adapt the strategy?
  • Can your entire team articulate the strategy without looking at a document?
  • Is there a clear North Star Metric everyone is aligned on?
  • Would you bet your own money on this plan working?

If you can’t check every box, your strategy still has gaps. Go back and make the hard choices.

Frequently Asked Questions

What are the most common product strategy mistakes?

The most common product strategy mistakes include confusing activity with progress (shipping features that don’t move metrics), strategy by consensus (avoiding hard choices to keep everyone happy), copying competitors instead of solving for your own users, ignoring retention metrics in favor of vanity metrics, and trying to serve every user segment equally. The root cause of most strategy failures is an unwillingness to make clear choices and accept tradeoffs.

What is the difference between product strategy and a product roadmap?

Product strategy defines WHERE you’re going and WHY. It’s about choices, tradeoffs, and the thesis behind your product direction. A product roadmap is the HOW and WHEN – the sequence of work that executes the strategy. A roadmap without a strategy is just a feature list. A strategy without a roadmap is just a vision. You need both, but strategy comes first.

How do you create an effective product strategy?

An effective product strategy begins with a clear understanding of your target audience, the problem you’re solving, and how solving it creates business value. Frameworks like Jobs-to-be-Done help identify what customers actually need. Validate your assumptions through customer discovery before committing resources. Set a North Star Metric to track progress. Review and adapt quarterly. Most importantly, be explicit about what you will NOT do – that’s ultimately where the real strategy lives.

How often should you update your product strategy?

Product strategy should be reviewed quarterly and updated when market conditions, customer needs, or business goals change significantly. It should NOT change weekly based on competitor moves or stakeholder requests. The best approach is setting lightweight quarterly checkpoints where you evaluate whether your core assumptions still hold, while keeping the overall strategic direction stable enough for the team to execute with confidence.

15 quotes to stir Courageous Leadership

I’ve been collecting quotes on courageous leadership for a while now. The kind that don’t just sound good on a poster but actually rearrange how you think about showing up for the people in front of you.

Here’s the question that started this collection:

Can an individual affect their society by simply, courageously caring for the individual in front of them enough to see who they truly are and encourage them into that identity?

I believe the answer is yes. And these 15 quotes have shaped how I try to live that out.

On Seeing People

  1. How many of us are stuck in the daily grind of survival? If you were to plot yourself on Maslow’s Hierarchy of Needs, where would you be today? Most of us live at level 3, but David Whyte challenges us to step beyond, to risk being truly seen and to see others as they really are.

  2. “The greatest thing a human soul ever does in this world is to see something and tell what it saw in a plain way. Hundreds of people can talk for one who can think, but thousands can think for one who can see.” – John Ruskin

  3. “Attention is the rarest and purest form of generosity.” – Simone Weil. Constant distraction makes full presence rare. Choosing to be fully present with another person is an act of courage.

On Leading with Vulnerability

  1. “Vulnerability is not winning or losing; it’s having the courage to show up and be seen when we have no control over the outcome.” – Brené Brown. This applies to every hard conversation you’re avoiding right now.

  2. “The only thing more unthinkable than leaving was staying; the only thing more impossible than staying was leaving.” – Elizabeth Gilbert. Sometimes the most courageous leadership decision is the one that costs you the most personally.

  3. “Have I not commanded you? Be strong and courageous. Do not be afraid; do not be discouraged, for the Lord your God will be with you wherever you go.” – Joshua 1:9. Courage is the decision that something else matters more.

On Doing the Hard Thing

  1. “The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood.” – Theodore Roosevelt. Courageous leaders lead from inside the mess.

  2. “You gain strength, courage, and confidence by every experience in which you really stop to look fear in the face. You must do the thing which you think you cannot do.” – Eleanor Roosevelt

  3. “Courage is not the absence of fear, but rather the judgment that something else is more important than fear.” – Ambrose Redmoon. I come back to this one regularly. Especially when I’m about to say something in a meeting that I know won’t be popular but needs to be said.

On Serving Others

  1. “Everybody can be great because anybody can serve. You don’t have to have a college degree to serve. You don’t have to make your subject and verb agree to serve. You only need a heart full of grace and a soul generated by love.” – Martin Luther King Jr.

  2. “The best way to find yourself is to lose yourself in the service of others.” – Mahatma Gandhi. The leaders who’ve had the deepest impact on my life were the ones who showed up for me when it cost them something.

  3. “Do nothing out of selfish ambition or vain conceit. Rather, in humility value others above yourselves.” – Philippians 2:3. This is the hardest standard of leadership I know. And the most transformative when you actually live it.

On Persistence

  1. “Success is not final, failure is not fatal: it is the courage to continue that counts.” – Winston Churchill

  2. “Courage doesn’t always roar. Sometimes courage is the quiet voice at the end of the day saying, ‘I will try again tomorrow.’” – Mary Anne Radmacher. This one resonates with anyone who’s had a week where nothing went right but showed up on Monday anyway.

  3. “Let us not become weary in doing good, for at the proper time we will reap a harvest if we do not give up.” – Galatians 6:9. The most courageous thing you might do today is simply not quit.

The Thread

Courageous leadership is the daily decision to see people, serve them, and keep going when it would be easier to stop.

That’s available to anyone, in any role, at any level. You don’t need a title to lead courageously. You just need to care enough about the person in front of you to show up fully. And then do it again tomorrow.

The Traffic You Depend On Is Being Answered Without You

I’ve been staring at a traffic chart for the last three weeks that I can’t stop thinking about.

It’s Chegg’s chart. The online education platform lost 34% of its organic visitors in a matter of months. That’s a cliff. Their keyword footprint went from 11.1 million to 3.5 million.

And the culprit wasn’t a competitor outranking them or a Google algorithm update penalizing thin content. It was Google answering the questions before anyone ever clicked.

The Machine That Eats Your Top of Funnel

Google’s AI Overviews are the AI-generated summaries that now appear at the top of search results, and they are fundamentally changing what it means to rank on Google. For years, the playbook was clear: create valuable content, optimize it for search, capture intent, convert visitors.

That model assumed one thing: that people would actually click through to your site.

AI Overviews break that assumption.

When someone searches “how to explain forgiveness to a congregation” or “best illustrations for an Easter sermon,” Google can now synthesize an answer from multiple sources and present it directly in the search results. No click required. No visit to your site. No entry into your funnel.

Tomasz Tunguz laid this out clearly in a recent analysis:

“Content dependency on organic search is no longer a sustainable acquisition model.”

That sentence should be pinned to the wall of every SaaS product leader who relies on organic traffic (understanding these shifts is a critical PM skill) to fill the top of their funnel.

Chegg Is the Preview

The pattern is showing up everywhere. Stack Overflow, the platform that essentially taught a generation of developers how to code (including me), is seeing the same erosion. Informational queries that used to drive millions of visits are now being answered inline by AI.

The New York Times is thriving. Why? How? A $100 million content licensing deal with Google. They’re feeding the AI, on their terms, for revenue.

Here’s what I think the data is telling us:

1. Q&A-style content is the most vulnerable. If your value proposition is answering questions that can be summarized in a paragraph, you’re in the blast radius.
2. Branded, premium, behind-the-paywall content is more defensible. AI Overviews can summarize a sermon topic, but they can’t replicate a full manuscript, a downloadable media pack, or an AI-powered sermon builder.
3. The winners will be the ones who stop treating Google as a given and start building direct relationships with their audience.

What This Means for SaaS Product Leaders

I run product and growth for a content platform that serves pastors. We have 245,000+ sermons and 50,000+ text illustrations, exactly the kind of content library that ranks well for long-tail informational queries.

For years, that library has been our primary discovery engine. Pastors search for sermon ideas, find us, browse free content, start a trial, and convert to paid.

That model still works today, but our organic traffic is down around that same 34% mark – and from what I can tell, so is nearly everyone’s, across industries. Still, I’d be naive to assume the model will work the same way in 18 months.

Here’s the uncomfortable math: if organic traffic drops by even 20-30%, and organic is your dominant acquisition channel, no amount of conversion rate optimization saves you. You can have a best-in-class trial-to-paid flow and still miss your numbers because not enough people are entering the funnel in the first place.
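A toy funnel model makes the exposure concrete. All of the rates here are invented for illustration:

```python
def new_customers(visitors, visit_to_trial, trial_to_paid):
    # A two-step funnel: visitors -> trials -> paid customers
    return visitors * visit_to_trial * trial_to_paid

baseline  = new_customers(100_000, 0.03, 0.15)  # roughly 450 new customers
after_hit = new_customers(70_000,  0.03, 0.15)  # a 30% traffic drop: roughly 315

# Even a very strong conversion win (+33% on trial-to-paid)
# doesn't get you back to baseline:
optimized = new_customers(70_000, 0.03, 0.20)   # roughly 420, still short of 450
```

When the top of the funnel shrinks by 30%, every downstream rate has to improve by ~43% just to break even. That’s rarely achievable through optimization alone.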

It’s an exposure problem. And it requires a fundamentally different response than what most product teams are used to.

The Diagnostic Before the Panic

Before you restructure your entire growth strategy, there’s a critical diagnostic step that teams often skip. You need to know whether AI Overviews are actually appearing on YOUR highest-value queries.

Here’s the move:

  • Pull your top 50 keywords from Google Search Console. Look at click-through rate trends over the last 90 days, segmented by week.
  • The signature you’re looking for: stable or rising impressions, but declining CTR. That pattern means Google is showing your content in results, but users aren’t clicking because the AI Overview already gave them what they needed.
  • If your impressions are dropping, that’s a competitor or algorithm problem. If impressions are stable but clicks are falling, that’s AI Overview cannibalization. Different diagnosis, different treatment.
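As a rough sketch of that diagnostic, here’s one way to flag the signature programmatically, assuming a weekly Search Console export with query, impressions, and clicks columns. The column names and thresholds are my assumptions, not anything Google publishes:

```python
from collections import defaultdict

def _totals(weeks):
    # Aggregate (impressions, clicks) tuples into total impressions and CTR
    imp = sum(i for i, _ in weeks)
    clk = sum(c for _, c in weeks)
    return imp, (clk / imp if imp else 0.0)

def ctr_trend(rows):
    """Flag queries whose impressions held steady while CTR fell.

    `rows` are dicts with "query", "impressions", and "clicks" keys
    (assumed export columns), already sorted by week.
    """
    by_query = defaultdict(list)
    for r in rows:
        by_query[r["query"]].append((int(r["impressions"]), int(r["clicks"])))

    flags = {}
    for query, weeks in by_query.items():
        half = len(weeks) // 2
        imp1, ctr1 = _totals(weeks[:half])
        imp2, ctr2 = _totals(weeks[half:])
        if imp2 >= imp1 * 0.95 and ctr2 < ctr1 * 0.8:
            # Stable impressions, falling clicks: the AI Overview signature
            flags[query] = "possible AI Overview cannibalization"
        elif imp2 < imp1 * 0.8:
            flags[query] = "impressions declining (competitor or algorithm)"
    return flags

# Usage with an export file (e.g. via csv.DictReader over a weekly CSV):
#   flags = ctr_trend(list(csv.DictReader(open("gsc_export.csv"))))
```

The thresholds (95% impression stability, 20% CTR decline) are starting points to tune, but separating the two failure modes is the part that matters.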

Most teams I talk to aren’t making this distinction. They’re looking at traffic declines and assuming it’s an SEO problem when it might be a platform shift problem. The difference matters.

Three Moves to Make Now

I’m not going to pretend I have the full playbook figured out. But here’s where my thinking is landing:

1. Shift discovery investment toward owned channels.
Email nurture sequences, community platforms, pastoral networks, partnerships with organizations that already have the audience. Organic search should be one of many channels, not the only one. For every dollar of effort I put into SEO-driven top-of-funnel content, I’m asking whether the same effort in email or community would be more durable.

2. Make your paywall content genuinely irreplaceable.
AI can summarize a sermon outline. It cannot replicate a curated media pack, a professionally produced video series, or a workflow tool that saves someone three hours a week. The content that survives AI summarization is the content that requires depth, production value, or interactivity: things a search snippet can’t deliver.

3. Explore whether the threat is also an opportunity.
The NYT licensing deal tells us something important: Google is willing to pay for premium vertical content. If you’re the dominant content platform in your niche, there may be a deal to be made.

A licensing partnership could convert a traffic threat into a revenue stream while maintaining brand visibility inside AI-generated results. Worth exploring.

The Bigger Lesson

I keep coming back to something I’ve learned over the last few years leading product: the most dangerous risks are the ones that look like stability. Traffic holding steady today doesn’t mean the foundation isn’t shifting underneath.

Chegg’s team didn’t wake up one morning to a 34% traffic drop. It happened gradually, then suddenly. The chart looks normal until it doesn’t.

The product leaders who navigate this well will be the ones who diagnosed early, diversified before they had to, and built value that can’t be summarized in a paragraph. The ones who don’t will be staring at a chart they can’t explain and wondering where all the visitors went.

I’d rather be asking the hard questions now than explaining the traffic decline later.