A Data-Driven Look at Real Client Outcomes

When we introduced AI-powered development tools into our Xperience by Kentico practice, I wasn't convinced we'd see meaningful ROI beyond basic code completion. We'd heard the promises before: faster development, fewer bugs, happier clients. But would AI actually deliver measurable results, or just add another layer of tooling complexity?

After completing several major projects with Claude Code as a core development partner, the results speak for themselves.

The Toolset: Beyond Code Completion

Our AI integration isn't about replacing developers with ChatGPT. We've built a focused toolkit that augments developers at the points where traditional workflows break down:

Claude Code handles architectural decisions, complex integrations, and cross-cutting concerns across our multi-site Xperience by Kentico solutions. It doesn't just autocomplete—it architects, documents, and catches edge cases that slip through manual review.

GitHub Copilot accelerates the repetitive work—MediatR query handlers, view models, cache dependency implementations—that follows consistent patterns but demands attention to detail.
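
As a concrete example, the sketch below shows the shape of MediatR boilerplate we lean on Copilot for. The names here (GetArticlePageQuery, IArticleRepository) are illustrative placeholders, not code from a client project:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Illustrative sketch of the query/handler/view-model boilerplate described
// above; the Article types and repository are hypothetical.
public record ArticleViewModel(string Title, string Summary, DateTime PublishedOn);

public record GetArticlePageQuery(Guid PageGuid) : IRequest<ArticleViewModel>;

public interface IArticleRepository
{
    Task<ArticleViewModel> GetByGuidAsync(Guid pageGuid, CancellationToken token);
}

public class GetArticlePageQueryHandler : IRequestHandler<GetArticlePageQuery, ArticleViewModel>
{
    private readonly IArticleRepository repository;

    public GetArticlePageQueryHandler(IArticleRepository repository) =>
        this.repository = repository;

    public Task<ArticleViewModel> Handle(GetArticlePageQuery query, CancellationToken token) =>
        repository.GetByGuidAsync(query.PageGuid, token);
}
```

Each one is individually trivial, but a multi-site build accumulates dozens of them, and once a couple of examples exist in the codebase, Copilot fills in the rest of the pattern reliably.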

Custom GPT-4 assistants, configured with Kentico documentation and our coding standards, provide real-time guidance on platform-specific best practices and help new developers get productive faster.

A Real Case Study: Identity Provider Integration

Here's where theory meets reality. We recently completed a complex authentication overhaul for a multi-site Xperience by Kentico implementation:

The scope:

  • Implement WS-Federation Identity Provider (IdP) integration across four distinct websites (see the registration sketch after this list)
  • Build comprehensive user synchronization with external SOAP API
  • Integrate third-party role management package (with custom workarounds for package bugs)
  • Create complete documentation for deployment and testing
  • Support multiple environments with different configurations
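
To make the multi-scheme piece concrete, here's a minimal sketch of registering four WS-Federation schemes behind a shared cookie in ASP.NET Core. The site names, configuration keys, and callback paths are placeholders, not the client's actual configuration:

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.WsFederation;

var builder = WebApplication.CreateBuilder(args);

// One shared cookie scheme, plus one named WS-Federation scheme per site.
var auth = builder.Services
    .AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie();

foreach (var site in new[] { "SiteA", "SiteB", "SiteC", "SiteD" })
{
    auth.AddWsFederation($"WsFed-{site}", options =>
    {
        // Placeholder configuration keys; real values vary per environment.
        options.MetadataAddress = builder.Configuration[$"Idp:{site}:MetadataAddress"];
        options.Wtrealm = builder.Configuration[$"Idp:{site}:Realm"];
        options.CallbackPath = $"/signin-wsfed-{site.ToLowerInvariant()}";
        options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    });
}

var app = builder.Build();
app.UseAuthentication();
app.Run();
```

Each site gets its own callback path so the middleware can route sign-in responses to the correct scheme.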

Traditional estimate: 2+ weeks (80+ hours) of senior developer time

Actual delivery with AI assistance: 3-4 days (approximately 24-32 hours)

Time savings: 60-70% reduction in development time

But the time savings only tell part of the story. The AI-assisted implementation delivered:

  • Cleaner architecture than manual development—Claude Code identified edge cases and error handling paths I would have missed
  • Comprehensive documentation generated alongside the code, not as an afterthought
  • Built-in testing strategy for multiple environments, reducing deployment risk
  • Complete audit trail of architectural decisions and implementation rationale

The code quality actually exceeded what I would have produced manually, and we shipped features (multi-environment support, extensive logging, detailed documentation) that would normally be cut due to time constraints.

What Changed in Our Process

The most significant shift isn't just speed—it's what becomes possible within the same timeline:

Documentation became a feature, not a burden. The IdP integration shipped with five comprehensive markdown documents covering architecture, runtime considerations, integration points, testing procedures, and deployment checklists. That documentation was generated alongside the code, not bolted on afterward.

Code review focus shifted. Instead of catching missing null checks and forgotten async/await patterns (which AI handles consistently), reviews now focus on business logic, security implications, and architectural fit. The quality of discussion improved because we're not distracted by mechanical issues.

Confidence in complexity increased. When a client asks for "single sign-on across four sites with role-based access control," we can confidently commit to tight timelines because we know the AI tools excel at exactly this type of structured, pattern-heavy work.

The Reality Check

AI-assisted development isn't magic. You still need to understand the architecture deeply enough to guide the AI, review its output critically, and catch where it misunderstands domain-specific requirements.

Certain integrations still require extra scrutiny. Dynamics 365 FetchXML queries, for example, often need manual refinement because AI models struggle with the nuances of the query syntax. And authentication flows demand careful security review regardless of who (or what) wrote the code.
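
To give a sense of what that scrutiny looks like, here's the shape of query involved: a sketch against the Dataverse SDK, with a hypothetical marketing list name. The join direction on link-entity elements is a typical spot where generated queries go subtly wrong:

```csharp
using System;
using Microsoft.PowerPlatform.Dataverse.Client;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Pull contacts belonging to a named Dynamics 365 marketing list.
// Connection string and list name are placeholders for illustration.
var client = new ServiceClient(
    Environment.GetEnvironmentVariable("DATAVERSE_CONNECTION_STRING"));

const string fetchXml = @"
<fetch top='50'>
  <entity name='contact'>
    <attribute name='fullname' />
    <attribute name='emailaddress1' />
    <link-entity name='listmember' from='entityid' to='contactid' link-type='inner'>
      <link-entity name='list' from='listid' to='listid' link-type='inner'>
        <filter>
          <condition attribute='listname' operator='eq' value='Newsletter Subscribers' />
        </filter>
      </link-entity>
    </link-entity>
  </entity>
</fetch>";

// A query with the from/to attributes reversed can still run and simply
// return the wrong rows, which is why these get reviewed by hand.
EntityCollection contacts = client.RetrieveMultiple(new FetchExpression(fetchXml));
```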

But here's what changed: instead of spending 80 hours implementing a well-understood pattern, I spent roughly 30 hours architecting the solution and reviewing the AI-generated implementation. That's not just faster—it's a fundamentally different (and better) use of senior developer time.

What This Means for CMS Development

For platforms like Xperience by Kentico, AI assistance creates an interesting dynamic. The platform's inherent structure—CQRS patterns, MediatR handlers, cache dependency management, multi-site architecture—used to represent a learning curve. Now that structure becomes an advantage. AI tools perform exceptionally well within well-defined architectural constraints.
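
As one example, the cache dependency discipline the platform enforces is exactly the kind of constraint AI tools thrive inside. Here's a minimal sketch using Kentico's IProgressiveCache, where the content type and cache key parts are illustrative placeholders:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using CMS.Helpers;

// Sketch of the progressive-cache pattern; the content type (Acme.Article)
// and cache key parts are illustrative, not from the project.
public class ArticleTitleService
{
    private readonly IProgressiveCache cache;

    public ArticleTitleService(IProgressiveCache cache) => this.cache = cache;

    public Task<IReadOnlyList<string>> GetTitlesAsync(string channelName) =>
        cache.LoadAsync(async settings =>
        {
            // Invalidate whenever any Acme.Article page in this channel changes.
            settings.CacheDependency = CacheHelper.GetCacheDependency(
                $"webpageitem|bychannel|{channelName}|bycontenttype|Acme.Article");

            return await LoadTitlesAsync(channelName);
        }, new CacheSettings(cacheMinutes: 10, "articletitles", channelName));

    // Stand-in for the real content query.
    private Task<IReadOnlyList<string>> LoadTitlesAsync(string channelName) =>
        Task.FromResult<IReadOnlyList<string>>(new[] { "Sample article" });
}
```

Because the dependency key format is documented and rigid, an AI tool that has seen two of these handlers tends to produce the third one correctly.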

The question for CMS development shops isn't whether AI tools improve outcomes. Based on our experience, they demonstrably do—with measurable time savings and quality improvements. The question is how quickly teams adapt their workflows to leverage these tools effectively, and what happens to competitive positioning when some teams do and others don't.

Want to Implement This in Your Workflow?

If you're a developer (or leading a development team) and this resonates with your pain points—spending too much time on boilerplate, wishing documentation didn't feel like punishment, wanting to ship more ambitious features without burning out—we've learned a lot about making AI tools actually work in production environments.

The setup isn't trivial. Teaching Claude Code your coding standards, configuring Copilot for platform-specific patterns, building the right review processes—it took us weeks to get this dialed in. But once it clicks, the productivity shift is real.

We're offering training and enablement services to help other agencies and development teams implement similar workflows. Whether you're working with Xperience by Kentico, other CMS platforms, or custom application development, the principles translate.

For developers: Learn how to architect prompts that generate production-quality code, build effective AI-assisted code review practices, and configure tools for your specific tech stack.

For agencies/teams: Understand the ROI model, implementation timeline, training requirements, and how to measure outcomes.

Reach out if you want to talk through what this could look like for your team. No sales pitch—just practical conversation about what works (and what doesn't) based on real project experience.


About this series: We're documenting real experiences implementing AI-assisted development workflows in enterprise CMS projects. All metrics and case studies are based on actual client work, not theoretical projections.

Something genuinely remarkable is happening in web development studios right now, and we're living it firsthand at Refined Element. AI isn't just making our work slightly faster—it's fundamentally changing what's possible in a day, and our clients are noticing.

Here's what that looks like in practice: Last week, a client needed urgent updates to their Xperience by Kentico multi-site deployment. The kind of task that used to mean late nights parsing through documentation and testing edge cases. Instead, our AI-assisted workflow handled the heavy lifting—generating boilerplate code, catching potential issues before they hit QA, and even drafting the technical documentation. What would have been a three-day sprint wrapped in six hours. The client was ecstatic. We were too, honestly.

This isn't about AI replacing developers—that's not what's happening here. It's about reclaiming time for the work that actually matters: solving complex problems, understanding client needs, architecting elegant solutions. The grunt work—the repetitive patterns, the documentation that always falls behind, the test coverage that never quite gets finished—AI handles that now.

The productivity gains are legitimately through the roof. But the real story isn't just speed; it's quality. When your AI assistant catches a subtle bug at 2 PM instead of your client discovering it at deployment, everyone wins. When comprehensive documentation writes itself alongside the code, knowledge transfer becomes effortless. When pattern matching across a massive codebase happens instantly rather than through hours of grep commands, developers stay in flow state longer.

Our clients are seeing this transformation too. Faster turnarounds, yes, but also more polished deliverables, better communication, and budgets that stretch further. One client recently told us their expectations for what's achievable in a sprint have completely reset—in the best way possible.

We're not at the "code that writes itself" sci-fi future yet, but we're somewhere equally interesting: a partnership between human expertise and machine efficiency that makes both sides better. The platform knowledge required for sophisticated Kentico implementations hasn't diminished—if anything, it's more valuable. But now that knowledge gets leveraged more effectively, applied more consistently, and documented more thoroughly.

The transformation is real, it's happening now, and honestly? It's thrilling to be part of it.

I'm using Claude to write about using Claude to write about using Claude to build features in Xperience by Kentico. If your head just tilted slightly, you're tracking correctly.

Here's what happened: I built AI-powered content and coding features into the CMS. Then I asked an AI to help me write about that work. The post resonated—readers loved it, executives forwarded it around. Success tastes sweet until you realize the weirdest part isn't what you built, but who wrote the story.

Now I'm back in the editor, one layer deeper into the recursion, asking the same AI to help me write about asking that AI to write about...you get it. We've created a documentation ouroboros, and honestly? It feels like glimpsing the future through a funhouse mirror.

The technical work is straightforward enough: machine learning models personalizing content delivery, natural language processing improving search, automated testing validating component outputs. Standard 2025 web development, really. But somewhere between implementing the features and explaining them to stakeholders, I crossed a threshold. The tools I use to build became the tools I use to document became the tools I use to reflect on documentation itself.

This is where it gets genuinely strange. Every prompt I write teaches the AI about my project. Every response shapes how I think about the work. The boundary between "doing development" and "explaining development" has dissolved into something like a collaborative improvisation where neither participant is entirely sure who's leading.

I keep wondering: when the AI helps me articulate what I built with AI, is the resulting clarity genuine insight or just really convincing recursion? Does it matter? The executives greenlit more budget. The developers on my team actually read the documentation. The features ship on schedule.

Maybe this is just what technical writing becomes when the tools achieve a certain capability threshold. Your documentation toolchain doesn't just record the work—it participates in how you understand the work. It's less "AI replacing writers" and more "writing becoming a real-time negotiation between human intention and machine articulation."

The recursive loop tightens. Next week I'll probably use this post as context for the AI to help plan the next feature sprint. The snake continues eating its tail, and somewhere in that spiral, we're building the future of content management and digital experience.

I just can't tell anymore who's holding the pen.

There's a peculiar moment that happens when you're knee-deep in Azure Key Vault certificate configurations at 3 PM on a Tuesday, managing authentication schemes for four separate websites running through a single ASP.NET Core application, when you realize: this is exactly the kind of complexity AI was built to help us navigate.
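
For what it's worth, the individual calls are small. Here's a sketch of the certificate fetch using Azure.Security.KeyVault.Certificates, with the vault URI and certificate name as placeholders:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;
using Azure.Identity;
using Azure.Security.KeyVault.Certificates;

// Fetch a signing certificate (including its private key) at startup.
// The vault URI and certificate name are placeholders, not a real deployment.
var client = new CertificateClient(
    new Uri("https://example-vault.vault.azure.net/"),
    new DefaultAzureCredential());

X509Certificate2 signingCertificate =
    await client.DownloadCertificateAsync("sso-signing-certificate");
```

The load isn't any single call like this; it's orchestrating dozens of them coherently across four sites and several environments.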

Modern enterprise CMS development isn't the "install WordPress and pick a theme" experience many imagine. Real-world platforms like Xperience by Kentico power ecosystems—multiple brands, intricate authentication flows, Dynamics 365 integrations, WS-Federation SSO schemes that need perfect orchestration. The cognitive load is immense. You're not just building websites; you're architecting digital experiences that span organizational boundaries while maintaining security, performance, and developer sanity.

This is where AI tooling is fundamentally changing the game, not through flashy automation, but through something more subtle: context management at human scale.

Consider a scenario I encountered recently: troubleshooting an HTTP 500.30 startup error in a multi-site configuration while implementing Azure Identity Provider integration. Fifteen years ago, this meant hours of documentation diving, Stack Overflow archaeology, and tribal-knowledge phone calls. Ten years ago, it meant better documentation and more targeted searches. Today? AI-assisted development tools can hold the entire context—your authentication schemes, certificate deployment strategies, CQRS patterns with MediatR, Lucene search configurations—and help you reason through the problem space in natural language.

The transformation isn't that AI writes your code (though it can). It's that AI reduces the context-switching tax that makes complex architectures so mentally expensive.

When you're working with embedded Razor class libraries, managing four separate WS-Federation callbacks, coordinating Dynamics 365 marketing lists, and implementing sophisticated cache invalidation strategies—the traditional "figure it out" approach means holding an impossible amount of architectural knowledge in your head simultaneously. AI becomes a thought partner that remembers the details while you focus on the decisions.

But here's what genuinely intrigues me: platforms like Xperience by Kentico are themselves evolving to incorporate AI capabilities—content recommendations, personalization engines, intelligent search. We're approaching an inflection point where AI assists both the creation of the platform and the experience it delivers. The developer uses AI to navigate CQRS query handlers and cache dependency management, while the end user experiences AI-powered content discovery they never consciously notice.

The irony? The more sophisticated our CMS architectures become—multi-tenant, headless, composable—the more we need AI assistance just to maintain them effectively. We've built systems whose complexity exceeds comfortable human cognition. AI isn't replacing developers in this equation; it's making it possible for developers to keep building increasingly ambitious systems without drowning in their own technical debt.

Is this progress? Unquestionably. But it raises an interesting question: are we building complex systems because AI can help us manage them, or is AI emerging because our systems demanded it?

I suspect the answer is yes.