What Does It Mean to Be a Developer in the Age of AI Agents?
October 15, 2025, Philadelphia, PA
At a recent Chariot Solutions panel discussion hosted at Certara (thanks Martin Snyder!), technology leaders gathered to explore one of the most pressing questions facing the software industry today: how is the developer’s role evolving as AI agents become integral to the software development lifecycle?
Moderated by veteran tech executive Grace Francisco, co-founder and partner at EMG Worldwide, the conversation brought together three perspectives from the front lines of AI adoption:
- Brian O’Neill, CTO of ProofPilot
- Sujan Kapadia, Chief AI Officer at Chariot Solutions
- Joel Confino, Chief Omnichannel Architect at Vanguard
The discussion spanned everything from productivity gains and process bottlenecks to developer culture, hiring, and the future of software teams.
AI in the Development Workflow: From Tab-Completion to Team Member
The panelists agreed that AI’s role in development has moved far beyond autocomplete. Tools like GitHub Copilot or JetBrains AI Assistant once provided incremental productivity gains, but today’s “agentic” tools — such as Windsurf, Cursor, and Claude Code — can participate in every stage of the software lifecycle: writing requirements, suggesting architectures, generating tests, and even examining GitHub or Jira issues and issuing fixes autonomously.
“We’re going from context-switching across product, requirements, and code to AI that walks alongside us through the entire SDLC,” said Kapadia.
O’Neill described how his team has integrated AI agents directly into their development and operations workflows. The Cursor Slack bot, for example, now performs first-pass code reviews, catches security issues like SQL injection, and even fixes flaky tests automatically — triggered simply by @-mentioning Cursor in a Slack channel with the relevant task.
New Bottlenecks: People, Process, and Product
While AI is accelerating coding tasks, panelists warned that traditional software processes have not caught up. Faster code generation can quickly expose friction elsewhere, from code review backlogs to QA delays and outdated agile rituals.
“If your processes still assume development takes days or weeks, everything else becomes the bottleneck,” said Confino. “We need to rethink the product operating model.”
Kapadia noted that increased code volume raises the importance of robust testing, clear coding standards, and defensive programming. “You’re producing so much code so quickly that automated tests become even more critical,” he said.
Another emerging challenge is uneven adoption within teams. Developers using AI tools are starting to become more productive than those who are not, creating new disparities in velocity and collaboration.
The Scary Side of AI: Security, Expectations, and Culture
Asked what keeps them up at night, panelists pointed to several risks:
- Unchecked code: Rapidly generated code without proper review or test coverage can introduce bugs or architectural issues.
- Data exposure: Using third-party AI tools may mean sensitive code or customer data leaves the organization, a serious concern in regulated industries like finance and healthcare.
- Sky-high expectations: Stakeholders often overestimate what AI can do based on demos and hype, leading to unrealistic assumptions about timelines and costs.
- Developer morale: Some engineers enjoy coding for its own sake, and replacing “puzzle solving” with code review can diminish job satisfaction.
Human-in-the-Loop: Necessary for Now
Despite bold claims that human oversight is unnecessary, the panelists strongly disagreed. While AI is getting better at automating code generation, review, and testing, humans remain essential for context, creativity, and judgment.
“We’re building the machine that builds the machine,” said Confino. “Humans still need to guide product decisions, assess tradeoffs, and prevent compounding errors.”
Kapadia added that experienced humans are particularly critical, as junior developers may accept AI outputs uncritically without questioning best practices.
Security in the Age of AI: New Threats, New Defenses
AI introduces novel security considerations. Large language models can be manipulated through “crescendo attacks” — multi-turn adversarial prompting that gradually escalates until the model produces behavior it would normally refuse. Input/output validation, guardrails, and red-teaming are now part of standard LLM security practice.
The panelists advised treating AI like any other component in the tech stack: apply least-privilege principles, air-gap sensitive data, and avoid connecting AI directly to deployment or infrastructure automation.
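To make the input/output validation idea concrete, here is a minimal, illustrative sketch (not something the panelists presented): wrap every model call in checks that reject suspicious prompts on the way in and block credential-looking output on the way out. The pattern lists, function names, and `guarded_call` wrapper are hypothetical; production systems use dedicated guardrail tooling and red-teamed classifiers rather than a handful of regexes.

```python
import re

# Hypothetical prompt-injection phrasings to reject on input.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
]

# Crude check for credential-shaped strings in model output.
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)


def validate_input(prompt: str) -> bool:
    """Reject prompts matching known prompt-injection phrasings."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)


def validate_output(response: str) -> bool:
    """Block responses that appear to leak credentials."""
    return SECRET_PATTERN.search(response) is None


def guarded_call(model_fn, prompt: str) -> str:
    """Least-privilege wrapper: validate both sides of the model call."""
    if not validate_input(prompt):
        return "[blocked: suspicious input]"
    response = model_fn(prompt)
    if not validate_output(response):
        return "[blocked: sensitive output]"
    return response
```

The same shape extends naturally to the panel’s other advice: the wrapper is the single choke point where you can also enforce scoped credentials and keep the model away from deployment automation.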
Lessons from Failure: Learning Fast, Failing Small
Each panelist shared lessons learned from failed or premature AI initiatives. Early chatbot pilots fell short due to immature models, while many retrieval-augmented generation (RAG) projects failed to deliver useful results due to poor data quality or overly ambitious expectations.
“We wasted weeks trying to parse PDF tables as text before realizing we needed multimodal models,” recalled O’Neill. “These systems evolve so quickly that what was state-of-the-art last year may already be obsolete.”
The consensus: embrace experimentation and expect rapid change. Treat each failure as a source of valuable context and iteration.
Building the Next Generation of Developers
The rise of AI has complicated the career paths of junior engineers, who face hiring headwinds and “competency bias” from senior colleagues. But the panelists argued this moment is also a unique opportunity.
“This is your time,” said Confino. “You’re competing on a level playing field — no one’s been doing this for more than a couple of years. Learn these tools deeply, build side projects, start companies.”
Kapadia urged companies not to neglect junior hiring: “Reducing team size is not the same as not hiring juniors. We still need to invest in talent who will become tomorrow’s senior engineers and leaders.”
Pair programming, mentorship, and strong onboarding practices remain critical to helping new developers build context and judgment.
Final Takeaway: The Developer’s Role Is Changing and Expanding
Across the discussion, one theme emerged clearly: AI is not replacing developers; it is reshaping their roles. Developers will increasingly act as orchestrators, code reviewers, system designers, and “managers of machines,” guiding fleets of AI agents through complex software systems.
The core of software engineering — creativity, collaboration, and critical thinking — remains more important than ever. But in the age of AI agents, developers must also master a new skill: building the systems that build the systems.
Stay in touch!
Watch for future events from Chariot Solutions as we continue to explore how AI is transforming software development and business, and what it means for the next generation of builders and business leaders. If you want to find out how Chariot can help you with AI strategy and implementation, please check out Chariot – AI and Intelligent Agents and start a conversation with us!