A Data-Driven Look at Real Client Outcomes

When we introduced AI-powered development tools into our Xperience by Kentico practice, I wasn't convinced we'd see meaningful ROI beyond basic code completion. We'd heard the promises before: faster development, fewer bugs, happier clients. But would AI actually deliver measurable results, or just add another layer of tooling complexity?

After completing several major projects with Claude Code as a core development partner, the results speak for themselves.

The Toolset: Beyond Code Completion

Our AI integration isn't about replacing developers with ChatGPT. We've built a focused toolkit that augments developers at the points where traditional workflows break down:

Claude Code handles architectural decisions, complex integrations, and cross-cutting concerns across our multi-site Xperience by Kentico solutions. It doesn't just autocomplete—it architects, documents, and catches edge cases that slip through manual review.

GitHub Copilot accelerates the repetitive work—MediatR query handlers, view models, cache dependency implementations—that follows consistent patterns but demands attention to detail.
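To make "consistent patterns" concrete: a query/handler pair reduces to a tiny, predictable shape. The real handlers are C# with MediatR; this is a hedged, language-agnostic sketch in Python, and the names (`GetArticleQuery`, the repository) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GetArticleQuery:
    """One request type per operation -- the CQRS shape MediatR enforces."""
    article_id: int

class InMemoryArticleRepository:
    """Stand-in for the injected data-access layer."""
    def __init__(self, articles):
        self._articles = articles

    def get(self, article_id):
        return self._articles.get(article_id)

class GetArticleQueryHandler:
    """One handler per query: boilerplate with a consistent, completable shape."""
    def __init__(self, repository):
        self._repository = repository

    def handle(self, query: GetArticleQuery):
        return self._repository.get(query.article_id)

handler = GetArticleQueryHandler(InMemoryArticleRepository({1: "Welcome post"}))
print(handler.handle(GetArticleQuery(article_id=1)))  # Welcome post
```

Because every handler repeats this skeleton, a completion tool only has to fill in the query fields and the body of `handle` — exactly the detail-heavy repetition described above.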

Custom GPT-4 assistants trained on Kentico documentation and our coding standards provide real-time guidance on platform-specific best practices and help new developers get productive faster.

A Real Case Study: Identity Provider Integration

Here's where theory meets reality. We recently completed a complex authentication overhaul for a multi-site Xperience by Kentico implementation:

The scope:

  • Implement WS-Federation Identity Provider integration across four distinct websites
  • Build comprehensive user synchronization with external SOAP API
  • Integrate third-party role management package (with custom workarounds for package bugs)
  • Create complete documentation for deployment and testing
  • Support multiple environments with different configurations

Traditional estimate: 2+ weeks (80+ hours) of senior developer time

Actual delivery with AI assistance: 3-4 days (approximately 24-32 hours)

Time savings: 60-70% reduction in development time

But the time savings only tell part of the story. The AI-assisted implementation delivered:

  • Cleaner architecture than manual development—Claude Code identified edge cases and error handling paths I would have missed
  • Comprehensive documentation generated alongside the code, not as an afterthought
  • Built-in testing strategy for multiple environments, reducing deployment risk
  • Complete audit trail of architectural decisions and implementation rationale

The code quality actually exceeded what I would have produced manually, and we shipped features (multi-environment support, extensive logging, detailed documentation) that would normally be cut due to time constraints.

What Changed in Our Process

The most significant shift isn't just speed—it's what becomes possible within the same timeline:

Documentation became a feature, not a burden. The IDP integration shipped with five comprehensive markdown documents covering architecture, runtime considerations, integration points, testing procedures, and deployment checklists. That documentation was generated alongside the code, not bolted on afterward.

Code review focus shifted. Instead of catching missing null checks and forgotten async/await patterns (which AI handles consistently), reviews now focus on business logic, security implications, and architectural fit. The quality of discussion improved because we're not distracted by mechanical issues.

Confidence in complexity increased. When a client asks for "single sign-on across four sites with role-based access control," we can confidently commit to tight timelines because we know the AI tools excel at exactly this type of structured, pattern-heavy work.

The Reality Check

AI-assisted development isn't magic. You still need to understand the architecture deeply enough to guide the AI, review its output critically, and catch where it misunderstands domain-specific requirements.

Certain integrations still require extra scrutiny—Dynamics 365 FetchXML queries, for example, often need manual refinement because the AI models struggle with the query syntax nuances. And authentication flows demand careful security review regardless of who (or what) wrote the code.

But here's what changed: instead of spending 80 hours implementing a well-understood pattern, I spent 30 hours architecting the solution and reviewing AI-generated implementation. That's not just faster—it's a fundamentally different (and better) use of senior developer time.

What This Means for CMS Development

For platforms like Xperience by Kentico, AI assistance creates an interesting dynamic. The platform's inherent structure—CQRS patterns, MediatR handlers, cache dependency management, multi-site architecture—used to represent a learning curve. Now that structure becomes an advantage. AI tools perform exceptionally well within well-defined architectural constraints.
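Cache dependency management is a good example of such a constraint: entries carry dependency keys, and touching a dependency evicts everything tagged with it. A minimal sketch of the idea (illustrative Python, not Kentico's actual caching API) might look like:

```python
class DependencyCache:
    """Toy cache where each entry lists the dependency keys it is tagged with."""

    def __init__(self):
        self._entries = {}  # cache key -> (value, set of dependency keys)

    def set(self, key, value, depends_on):
        self._entries[key] = (value, set(depends_on))

    def get(self, key):
        entry = self._entries.get(key)
        return entry[0] if entry else None

    def invalidate(self, dependency):
        # Evict every entry tagged with this dependency key
        self._entries = {
            k: v for k, v in self._entries.items() if dependency not in v[1]
        }

cache = DependencyCache()
cache.set("article:1", "rendered HTML", depends_on=["articles"])
cache.invalidate("articles")
print(cache.get("article:1"))  # None
```

A rigid contract like this is precisely where generated code tends to be reliable: the rules are explicit, so there is little room for the model to improvise.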

The question for CMS development shops isn't whether AI tools improve outcomes. Based on our experience, they demonstrably do—with measurable time savings and quality improvements. The question is how quickly teams adapt their workflows to leverage these tools effectively, and what happens to competitive positioning when some teams do and others don't.

Want to Implement This in Your Workflow?

If you're a developer (or leading a development team) and this resonates with your pain points—spending too much time on boilerplate, wishing documentation didn't feel like punishment, wanting to ship more ambitious features without burning out—we've learned a lot about making AI tools actually work in production environments.

The setup isn't trivial. Teaching Claude Code your coding standards, configuring Copilot for platform-specific patterns, building the right review processes—it took us weeks to get this dialed in. But once it clicks, the productivity shift is real.

We're offering training and enablement services to help other agencies and development teams implement similar workflows. Whether you're working with Xperience by Kentico, other CMS platforms, or custom application development, the principles translate.

For developers: Learn how to architect prompts that generate production-quality code, build effective AI-assisted code review practices, and configure tools for your specific tech stack.

For agencies/teams: Understand the ROI model, implementation timeline, training requirements, and how to measure outcomes.

Reach out if you want to talk through what this could look like for your team. No sales pitch—just practical conversation about what works (and what doesn't) based on real project experience.


About this series: We're documenting real experiences implementing AI-assisted development workflows in enterprise CMS projects. All metrics and case studies are based on actual client work, not theoretical projections.

You know that moment when you're troubleshooting authentication errors at 2 AM and suddenly realize you've been thinking about certificates all wrong? Mine came when a crypto-savvy colleague said, "Wait, this is literally just Bitcoin for logging in."

He was right. And once you see it, WS-Federation—that enterprise authentication protocol powering single sign-on across millions of corporate apps—becomes remarkably intuitive.

The Core Parallel

Bitcoin transactions and SAML authentication tokens solve the same fundamental problem: How do you prove authority without revealing your secret?

In Bitcoin, your private key signs transactions. Anyone with your public key can verify you authorized that transfer of value, but they can't forge your signature or steal your funds. The math is beautiful: asymmetric cryptography means verification doesn't compromise security.

WS-Federation works identically. Your Identity Provider (IdP) holds a private key that signs SAML tokens asserting "This user is Bob from Accounting." The relying party—your application—has the IdP's public key (usually fetched from federation metadata) and verifies the signature. Bob gets in. The application never needs the IdP's private key, just like you never need someone's Bitcoin private key to confirm they sent you payment.
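The parallel can be made concrete with a toy RSA keypair. The numbers below are deliberately tiny (illustration only, never real security), but the roles map directly: only the holder of the private exponent can sign, and anyone holding the public pair can verify.

```python
import hashlib

# Toy RSA-style signature with tiny primes -- for illustration only.
p, q = 61, 53
n = p * q                            # public modulus (shared in "metadata")
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: the IdP's (or wallet's) secret

def digest(msg: str) -> int:
    # Hash the message down to a number smaller than the modulus
    return int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big") % n

def sign(msg: str) -> int:
    # Only the holder of d can produce this value
    return pow(digest(msg), d, n)

def verify(msg: str, sig: int) -> bool:
    # Anyone with (n, e) -- federation metadata or a public address -- can check it
    return pow(sig, e, n) == digest(msg)

token = "This user is Bob from Accounting"
sig = sign(token)
assert verify(token, sig)            # genuine signature verifies
assert not verify(token, sig + 1)    # a tampered signature does not
```

The relying party runs only `verify`: it never sees `d`, just as you never see the private key behind a Bitcoin payment you receive.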

Why This Matters Beyond Theory

When you're debugging certificate errors in WS-Federation, this mental model is gold. That "metadata URL can't be reached" error? You're missing the public key to verify signatures—like trying to validate a Bitcoin transaction without access to the blockchain. Certificate mismatch? Your relying party is using the wrong public key—equivalent to checking a Bitcoin signature against the wrong address.

The authentication flow mirrors a Bitcoin transaction: user requests access (initiates a transaction) → IdP signs the SAML token with its private key (wallet signs the transaction) → application verifies with the public key (nodes verify the signature) → access granted (transaction accepted).

The Breakthrough

Understanding this parallel transformed how our team approaches federation. We stopped thinking about certificates as mysterious files to pass around and started thinking about them as asymmetric key pairs with clear roles. The IdP guards its private key like a Bitcoin wallet. The relying party verifies signed tokens the way a blockchain node validates transactions.

Two worlds. Same elegant math. Different outcomes—one gets you into Salesforce, the other might get you a Lambo.

Something genuinely remarkable is happening in web development studios right now, and we're living it firsthand at Refined Element. AI isn't just making our work slightly faster—it's fundamentally changing what's possible in a day, and our clients are noticing.

Here's what that looks like in practice: Last week, a client needed urgent updates to their Xperience by Kentico multi-site deployment. The kind of task that used to mean late nights parsing through documentation and testing edge cases. Instead, our AI-assisted workflow handled the heavy lifting—generating boilerplate code, catching potential issues before they hit QA, and even drafting the technical documentation. What would have been a three-day sprint wrapped in six hours. The client was ecstatic. We were too, honestly.

This isn't about AI replacing developers—that's not what's happening here. It's about reclaiming time for the work that actually matters: solving complex problems, understanding client needs, architecting elegant solutions. The grunt work—the repetitive patterns, the documentation that always falls behind, the test coverage that never quite gets finished—AI handles that now.

The productivity gains are legitimately through the roof. But the real story isn't just speed; it's quality. When your AI assistant catches a subtle bug at 2 PM instead of your client discovering it at deployment, everyone wins. When comprehensive documentation writes itself alongside the code, knowledge transfer becomes effortless. When pattern matching across a massive codebase happens instantly rather than through hours of grep commands, developers stay in flow state longer.

Our clients are seeing this transformation too. Faster turnarounds, yes, but also more polished deliverables, better communication, and budgets that stretch further. One client recently told us their expectations for what's achievable in a sprint have completely reset—in the best way possible.

We're not at the "code that writes itself" sci-fi future yet, but we're somewhere equally interesting: a partnership between human expertise and machine efficiency that makes both sides better. The platform knowledge required for sophisticated Kentico implementations hasn't diminished—if anything, it's more valuable. But now that knowledge gets leveraged more effectively, applied more consistently, and documented more thoroughly.

The transformation is real, it's happening now, and honestly? It's thrilling to be part of it.

I'm using Claude to write about using Claude to write about using Claude to build features in Xperience by Kentico. If your head just tilted slightly, you're tracking correctly.

Here's what happened: I built AI-powered content and coding features into the CMS. Then I asked an AI to help me write about that work. The post resonated—readers loved it, executives forwarded it around. Success tastes sweet until you realize the weirdest part isn't what you built, but who wrote the story.

Now I'm back in the editor, one layer deeper into the recursion, asking the same AI to help me write about asking that AI to write about...you get it. We've created a documentation ouroboros, and honestly? It feels like glimpsing the future through a funhouse mirror.

The technical work is straightforward enough: machine learning models personalizing content delivery, natural language processing improving search, automated testing validating component outputs. Standard 2025 web development, really. But somewhere between implementing the features and explaining them to stakeholders, I crossed a threshold. The tools I use to build became the tools I use to document became the tools I use to reflect on documentation itself.

This is where it gets genuinely strange. Every prompt I write teaches the AI about my project. Every response shapes how I think about the work. The boundary between "doing development" and "explaining development" has dissolved into something like a collaborative improvisation where neither participant is entirely sure who's leading.

I keep wondering: when the AI helps me articulate what I built with AI, is the resulting clarity genuine insight or just really convincing recursion? Does it matter? The executives greenlit more budget. The developers on my team actually read the documentation. The features ship on schedule.

Maybe this is just what technical writing becomes when the tools achieve a certain capability threshold. Your documentation toolchain doesn't just record the work—it participates in how you understand the work. It's less "AI replacing writers" and more "writing becoming a real-time negotiation between human intention and machine articulation."

The recursive loop tightens. Next week I'll probably use this post as context for the AI to help plan the next feature sprint. The snake continues eating its tail, and somewhere in that spiral, we're building the future of content management and digital experience.

I just can't tell anymore who's holding the pen.