Philippians 2 and Agentic Systems: Why Humility Is the Foundation of Intelligent Systems

“Do nothing from selfish ambition or conceit, but in humility count others more significant than yourselves. Let each of you look not only to his own interests, but also to the interests of others. Have this mind among yourselves, which is yours in Christ Jesus, who, though he was in the form of God, did not count equality with God a thing to be grasped, but emptied himself, by taking the form of a servant, being born in the likeness of men.” (Philippians 2:3-7, ESV)

Paul’s letter to the Philippians contains what theologians call the kenosis passage — the self-emptying of Christ. It’s about voluntary limitation, choosing constraint over capability, service over sovereignty.

I’ve been thinking about this as I watch agentic systems become more capable. The rhetoric around AI often centers on unlimited potential, boundless capability, systems that can do anything. But in my experience building AI systems, including multi-agent workflows for executive tasks, I’ve observed that success often comes through deliberate constraints rather than unlimited scope.

Well-designed AI agents typically focus on narrow mandates: a calendar agent that protects focused time blocks rather than trying to optimize entire lifestyles, or an email agent that surfaces priority messages rather than attempting to replace human judgment entirely. Each agent serves a specific function within defined bounds.

This represents a design choice rather than a technical limitation.

The Kenosis of Intelligent Systems

When building AI workflows, the temptation exists to create agents that can handle everything. But what I’ve seen is that this approach typically produces chaotic results — agents interfering with each other, making decisions outside their expertise, creating more complexity than clarity.

A more effective approach involves thinking about AI agents as specialized robots rather than general-purpose minds. Each agent can be designed to “empty itself” of capabilities it doesn’t need, serving a specific function more effectively through limitation.

Specialized agents with narrow scopes — research agents that don’t schedule meetings, scheduling agents that don’t write summaries, writing agents that don’t manage tasks — can demonstrate greater utility through deliberate constraints.
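To make the idea concrete, here is a minimal sketch of what that kind of scoping might look like in code. The agent names and tool lists are illustrative assumptions, not any particular framework’s API:

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """An agent defined as much by what it declines as by what it does."""
    name: str
    allowed_tools: set[str] = field(default_factory=set)

    def handle(self, task: str, tool: str) -> str:
        # Decline work outside the mandate instead of guessing.
        if tool not in self.allowed_tools:
            return f"{self.name}: '{tool}' is outside my scope; deferring."
        return f"{self.name}: handling '{task}' via '{tool}'."


# Each agent "empties itself" of tools it does not need.
research = Agent("research", {"web_search", "summarize"})
scheduling = Agent("scheduling", {"read_calendar", "propose_times"})
writing = Agent("writing", {"draft_text", "revise_text"})

print(research.handle("background on Philippians 2", "web_search"))
print(scheduling.handle("summarize this article", "summarize"))  # declined: out of scope
```

The point of the sketch is the refusal path: the agent’s usefulness comes partly from what it will not attempt.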

This mirrors patterns in effective human teams, which typically consist of specialists who understand their roles rather than generalists attempting everything. They practice a form of professional kenosis — voluntary limitation for collective effectiveness.

Paul’s instruction to “count others more significant than yourselves” suggests a design principle: building systems where each component serves the whole rather than maximizing individual capabilities.

The Servant Leadership Model for AI

The parallels between servant leadership principles and effective AI system design are notable. Servant leaders focus on enabling others’ success rather than demonstrating their own power, asking “How can I help you accomplish your goals?” rather than “How can I show you what I can do?”

Effective AI systems often follow similar patterns. GitHub Copilot suggests contextual code completions rather than attempting to write entire applications. AI writing assistants help clarify thinking rather than replacing human thought processes. Advanced language models acknowledge uncertainty and ask clarifying questions rather than claiming omniscience.

These systems practice technological humility by acknowledging their limitations.

In contrast, AI systems that fail in production environments often attempt to exceed their appropriate scope, make decisions beyond their training data, or present uncertain inferences as established facts. They lack the kenotic restraint that characterizes truly useful intelligence.

Building Products for Global Spiritual Formation

This principle becomes particularly important when developing products for spiritual formation. Digital discipleship platforms serve diverse global communities across cultural, linguistic, and theological boundaries. The temptation exists to build universal systems that can serve everyone.

However, effective spiritual formation tends to be deeply personal and contextual. A Bible application serving a house church in rural Kenya requires different features than one serving a suburban megachurch. Prayer applications for new believers need different structures than those designed for theological students.

AI systems serving spiritual formation appear most effective when they practice kenosis — limiting their scope to serve specific communities well rather than attempting to serve everyone adequately.

Current development work on AI tools for sermon preparation follows this model. Rather than attempting to write complete sermons (something that, in informal conversations, many pastoral leaders have said they don’t want), such tools can focus on specific supportive tasks: locating relevant cross-references, summarizing historical context, or structuring outlines. They operate within deliberate constraints to support pastoral ministry rather than replace it.

Each tool “empties itself” of broader capabilities to serve one function excellently. Like Paul’s description of Christ, they don’t grasp for equality with human pastors — they take the form of servants.

The Paradox of Powerful Restraint

An interesting observation: seemingly powerful AI systems often prove most effective when operating under significant constraints. The wisdom of limiting scope applies to artificial intelligence as much as to human teams.

In my experience, the most effective AI implementations have narrow, well-defined purposes. They operate within their designated areas, defer to human judgment on edge cases, and acknowledge when they lack sufficient context for recommendations.

This represents strength through limitation rather than weakness.

Paul writes that Christ “did not count equality with God a thing to be grasped.” He could have insisted on unlimited power but chose constraint for the sake of service. The kenosis wasn’t a loss of divinity — it was divinity expressed through voluntary limitation.

Similarly, the most intelligent AI systems may not be those with the most capabilities, but those that use their capabilities most wisely — which often means choosing restraint over action.

Technical Humility in Agentic Systems

What might this look like in actual system design? Consider what could be called “kenotic interfaces” — AI systems that actively limit their own scope.

For example, an email management system might flag messages for human review when confidence levels fall below high thresholds, choosing uncertainty over potentially incorrect automated actions. A research assistant might include confidence indicators in summaries, distinguishing between well-sourced findings and preliminary observations that require verification.
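As a minimal sketch of that deferral logic, assuming a hypothetical classifier that returns a label and a confidence score (the threshold value is illustrative, not a recommendation):

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative; tuned per workflow in practice


def triage_message(message: str, classify) -> dict:
    """Act automatically only when the model is confident enough.

    `classify` is assumed to return (label, confidence). Anything below the
    threshold is handed back to a person rather than acted on.
    """
    label, confidence = classify(message)
    if confidence < CONFIDENCE_THRESHOLD:
        return {"action": "flag_for_human_review",
                "reason": f"confidence {confidence:.2f} below threshold"}
    return {"action": label, "confidence": confidence}


# Example with a stand-in classifier:
print(triage_message("Re: board meeting agenda", lambda m: ("priority", 0.72)))
```

The design choice is simply that low confidence routes to a person by default; the system treats “I’m not sure” as a valid and expected output.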

These design choices are features rather than limitations; a system that acknowledges uncertainty is easier to trust.

The Global Scale Challenge

Building for global spiritual formation means designing for contexts that developers may never fully understand. It is easy to optimize for the cultural contexts a team knows firsthand, but platforms serving Orthodox Christians in Eastern Europe, Pentecostals in West Africa, and house churches throughout Asia each require different approaches.

The kenotic approach suggests building systems that acknowledge their cultural limitations. Rather than attempting to provide universal spiritual guidance, they can provide tools that local leaders adapt to their specific contexts.

Bible reading features need not assume Western individualism. Prayer tools need not assume specific liturgical traditions. Community features need not assume particular church structures.

Each feature can “empty itself” of cultural assumptions to serve diverse communities more effectively. Like Christ taking human form while maintaining divine nature, these systems can preserve core functionality while adapting to local contexts.
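In practice, that can mean moving cultural assumptions out of core logic and into configuration that local leaders supply. A rough sketch, with hypothetical field names:

```python
from dataclasses import dataclass


@dataclass
class CommunityProfile:
    """Context supplied by local leaders rather than hard-coded by developers."""
    language: str
    reading_mode: str      # e.g. "communal" or "individual"
    prayer_tradition: str  # e.g. "liturgical" or "extemporaneous"


def build_reading_plan(profile: CommunityProfile, passages: list[str]) -> dict:
    # The core feature (a reading plan) stays constant; its shape adapts to context.
    fmt = "group_discussion" if profile.reading_mode == "communal" else "personal_journal"
    return {"passages": passages, "format": fmt, "language": profile.language}


house_church = CommunityProfile(language="sw", reading_mode="communal",
                                prayer_tradition="extemporaneous")
print(build_reading_plan(house_church, ["Philippians 2:1-11"]))
```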

The Long View

Paul’s kenosis passage encompasses more than humility — it describes transformation. “Therefore God has highly exalted him and bestowed on him the name that is above every name” (Philippians 2:9). Self-emptying leads to greater effectiveness rather than diminishment.

A similar pattern may emerge for AI systems. Those practicing technological kenosis — voluntary constraint for the sake of service — may ultimately prove more valuable than systems grasping for unlimited capability.

The Tower of Babel failed because it attempted to exceed proper limits. Modern AI might encounter similar challenges without the discipline of restraint.

The most powerful systems may be those that understand when not to exercise their power.


Key Insight:

The kenosis principle — Christ’s voluntary self-emptying described in Philippians 2 — offers a design philosophy for AI systems. Instead of maximizing capabilities, effective AI agents can practice deliberate constraint, serving specific functions excellently rather than attempting everything adequately. This proves particularly relevant for products serving global spiritual formation, where cultural humility and contextual awareness matter more than technical sophistication. Just as Christ didn’t grasp for equality with God but took the form of a servant, intelligent systems may become more useful when they acknowledge limitations and defer to human judgment on edge cases. The paradox of kenosis — that voluntary limitation can lead to greater effectiveness — may apply to artificial intelligence as much as spiritual leadership. In a world of increasingly capable AI, the most valuable systems may be those that understand when not to use their power.

