The Numbers Don't Lie: How AI Tools Are Reshaping CMS Development at Refined Element
A Data-Driven Look at Real Client Outcomes
When we introduced AI-powered development tools into our Xperience by Kentico practice, I wasn't convinced we'd see meaningful ROI beyond basic code completion. We'd heard the promises before: faster development, fewer bugs, happier clients. But would AI actually deliver measurable results, or just add another layer of tooling complexity?
After completing several major projects with Claude Code as a core development partner, the results speak for themselves.
The Toolset: Beyond Code Completion
Our AI integration isn't about replacing developers with ChatGPT. We've built a focused toolkit that fills the gaps where traditional development workflows break down:
Claude Code handles architectural decisions, complex integrations, and cross-cutting concerns across our multi-site Xperience by Kentico solutions. It doesn't just autocomplete—it architects, documents, and catches edge cases that slip through manual review.
GitHub Copilot accelerates the repetitive work—MediatR query handlers, view models, cache dependency implementations—that follows consistent patterns but demands attention to detail.
Custom GPT-4 assistants trained on Kentico documentation and our coding standards provide real-time guidance on platform-specific best practices and help new developers get productive faster.
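To make "pattern-heavy" concrete, here's the shape of the MediatR query-handler boilerplate mentioned above. This is a minimal, hypothetical sketch (the query, view model, and repository names are invented for illustration), not code from a client project:

```csharp
using MediatR;

// The request: parameters for one well-defined read operation.
public record GetArticleQuery(Guid ArticleId, string CultureCode)
    : IRequest<ArticleViewModel?>;

// The view model handed to the presentation layer.
public record ArticleViewModel(string Title, string Summary, DateTime PublishedOn);

// Invented data-access abstraction, just to keep the sketch self-contained.
public interface IArticleRepository
{
    Task<Article?> GetAsync(Guid id, string cultureCode, CancellationToken ct);
}

public record Article(string Title, string Summary, DateTime PublishedOn);

// The handler: repetitive, pattern-following plumbing that Copilot
// autocompletes reliably once it has seen a few project examples.
public class GetArticleQueryHandler : IRequestHandler<GetArticleQuery, ArticleViewModel?>
{
    private readonly IArticleRepository articles;

    public GetArticleQueryHandler(IArticleRepository articles) => this.articles = articles;

    public async Task<ArticleViewModel?> Handle(
        GetArticleQuery request, CancellationToken cancellationToken)
    {
        var article = await articles.GetAsync(
            request.ArticleId, request.CultureCode, cancellationToken);

        return article is null
            ? null
            : new ArticleViewModel(article.Title, article.Summary, article.PublishedOn);
    }
}
```

None of this is hard, but a multi-site solution accumulates dozens of handlers in exactly this shape, which is where tooling that never forgets the pattern pays off.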
A Real Case Study: Identity Provider Integration
Here's where theory meets reality. We recently completed a complex authentication overhaul for a multi-site Xperience by Kentico implementation:
The scope:
- Implement WS-Federation Identity Provider integration across four distinct websites (see the registration sketch after this list)
- Build comprehensive user synchronization with external SOAP API
- Integrate third-party role management package (with custom workarounds for package bugs)
- Create complete documentation for deployment and testing
- Support multiple environments with different configurations
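For context on what that first item involves at the framework level, here is a minimal registration sketch using the standard Microsoft.AspNetCore.Authentication.WsFederation package. The configuration keys are placeholders; the actual implementation layers per-site settings, user synchronization, and role mapping on top of this:

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.WsFederation;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(options =>
    {
        // Sign the user in locally with a cookie; challenge via WS-Federation.
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = WsFederationDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddWsFederation(options =>
    {
        // Placeholder configuration keys; in a multi-site solution these
        // come from per-site, per-environment configuration.
        options.MetadataAddress = builder.Configuration["Idp:MetadataAddress"];
        options.Wtrealm = builder.Configuration["Idp:Realm"];
    });

var app = builder.Build();

app.UseAuthentication();

app.Run();
```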
Traditional estimate: 2+ weeks (80+ hours) of senior developer time
Actual delivery with AI assistance: 3-4 days (approximately 24-32 hours)
Time savings: 60-70% reduction in development time
But the time savings only tell part of the story. The AI-assisted implementation delivered:
- Cleaner architecture than manual development—Claude Code identified edge cases and error handling paths I would have missed
- Comprehensive documentation generated alongside the code, not as an afterthought
- Built-in testing strategy for multiple environments, reducing deployment risk
- Complete audit trail of architectural decisions and implementation rationale
The code quality actually exceeded what I would have produced manually, and we shipped features (multi-environment support, extensive logging, detailed documentation) that would normally be cut due to time constraints.
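As one concrete example of what "multi-environment support" means here: standard ASP.NET Core configuration layering, with strongly typed options so each environment overrides only what differs. The IdpOptions class and section name below are hypothetical stand-ins, a sketch rather than the project's actual settings:

```csharp
using Microsoft.Extensions.Options;

var builder = WebApplication.CreateBuilder(args);

// CreateBuilder already layers appsettings.json, then
// appsettings.{Environment}.json (Development/Staging/Production),
// then environment variables, so each environment file carries only
// the identity provider values that actually differ.
builder.Services.Configure<IdpOptions>(builder.Configuration.GetSection("Idp"));

var app = builder.Build();

// Quick per-environment smoke test used during deployment verification.
app.MapGet("/idp-check", (IOptions<IdpOptions> idp) =>
    Results.Ok(new { app.Environment.EnvironmentName, idp.Value.Realm }));

app.Run();

// Hypothetical strongly typed settings that vary by environment.
public class IdpOptions
{
    public string MetadataAddress { get; set; } = string.Empty;
    public string Realm { get; set; } = string.Empty;
}
```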
What Changed in Our Process
The most significant shift isn't just speed—it's what becomes possible within the same timeline:
Documentation became a feature, not a burden. The IDP integration shipped with five comprehensive markdown documents covering architecture, runtime considerations, integration points, testing procedures, and deployment checklists. That documentation was generated alongside the code, not bolted on afterward.
Code review focus shifted. Instead of catching missing null checks and forgotten async/await patterns (which AI handles consistently), reviews now focus on business logic, security implications, and architectural fit. The quality of discussion improved because we're not distracted by mechanical issues.
Confidence in complexity increased. When a client asks for "single sign-on across four sites with role-based access control," we can confidently commit to tight timelines because we know the AI tools excel at exactly this type of structured, pattern-heavy work.
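The "role-based access control" half of that request is equally structured. Here's a sketch with invented policy and role names; in the real project the roles arrive as claims from the identity provider and the role management package:

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;

var builder = WebApplication.CreateBuilder(args);

// Authentication is configured as in the WS-Federation sketch earlier;
// a bare cookie scheme stands in here to keep the example minimal.
builder.Services
    .AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie();

builder.Services.AddAuthorization(options =>
{
    // Hypothetical policy/role names mapped from identity provider claims.
    options.AddPolicy("SiteAdmins", policy => policy.RequireRole("Admin"));
});

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

// Any endpoint (or MVC controller) can then opt into the policy.
app.MapGet("/admin/dashboard", () => "Admins only")
   .RequireAuthorization("SiteAdmins");

app.Run();
```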
The Reality Check
AI-assisted development isn't magic. You still need to understand the architecture deeply enough to guide the AI, review its output critically, and catch where it misunderstands domain-specific requirements.
Certain integrations still require extra scrutiny—Dynamics 365 FetchXML queries, for example, often need manual refinement because the AI models struggle with the query syntax nuances. And authentication flows demand careful security review regardless of who (or what) wrote the code.
But here's what changed: instead of spending 80 hours implementing a well-understood pattern, I spent 30 hours architecting the solution and reviewing AI-generated implementation. That's not just faster—it's a fundamentally different (and better) use of senior developer time.
What This Means for CMS Development
For platforms like Xperience by Kentico, AI assistance creates an interesting dynamic. The platform's inherent structure—CQRS patterns, MediatR handlers, cache dependency management, multi-site architecture—used to represent a learning curve. Now that structure becomes an advantage. AI tools perform exceptionally well within well-defined architectural constraints.
The question for CMS development shops isn't whether AI tools improve outcomes. Based on our experience, they demonstrably do—with measurable time savings and quality improvements. The question is how quickly teams adapt their workflows to leverage these tools effectively, and what happens to competitive positioning when some teams do and others don't.
Want to Implement This in Your Workflow?
If you're a developer (or leading a development team) and this resonates with your pain points—spending too much time on boilerplate, wishing documentation didn't feel like punishment, wanting to ship more ambitious features without burning out—we've learned a lot about making AI tools actually work in production environments.
The setup isn't trivial. Teaching Claude Code your coding standards, configuring Copilot for platform-specific patterns, building the right review processes—it took us weeks to get this dialed in. But once it clicks, the productivity shift is real.
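In practice, "teaching Claude Code your coding standards" largely means maintaining project instruction files (CLAUDE.md) that the tool reads automatically. A trimmed, hypothetical example; the specific rules are illustrative, not our full standard:

```markdown
# CLAUDE.md

## Architecture
- All data access goes through MediatR queries/commands; no direct
  data-access calls in controllers or view components.

## Conventions
- Query handlers return view models, never raw content items.
- Every cached query declares explicit cache dependencies.

## Review expectations
- Generated code compiles with nullable reference types enabled and
  introduces zero new warnings.
```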
We're offering training and enablement services to help other agencies and development teams implement similar workflows. Whether you're working with Xperience by Kentico, other CMS platforms, or custom application development, the principles translate.
For developers: Learn how to architect prompts that generate production-quality code, build effective AI-assisted code review practices, and configure tools for your specific tech stack.
For agencies/teams: Understand the ROI model, implementation timeline, training requirements, and how to measure outcomes.
Reach out if you want to talk through what this could look like for your team. No sales pitch—just practical conversation about what works (and what doesn't) based on real project experience.
About this series: We're documenting real experiences implementing AI-assisted development workflows in enterprise CMS projects. All metrics and case studies are based on actual client work, not theoretical projections.
