AI SDLC

How does AI affect your existing SDLC? What changes have you made to leverage AI more effectively? What impact have you observed from these changes? What new risks have you encountered? What is required of developers to adapt to your AI SDLC?

Lightning Talk

Are Engineers Already Adopting AI?

Artificial intelligence is already part of engineering workflows. Adoption is happening, but it does not look the way many people expect.

Engineers are adopting AI faster than customers are. Engineering teams are moving quickly, while many customers are still struggling with adoption, and the gap between internal usage and customer readiness is visible.

When we look at usage patterns, coding represents only about twenty to thirty percent of AI usage. The larger share of adoption is happening elsewhere in the software development life cycle.

AI is already being used to build product requirement documents (PRDs) and technical designs. Teams are generating structured drafts and refining them, reducing the time spent starting from scratch.

AI is also handling issue triage from support. Incoming tickets are being summarized, categorized, and prioritized. Engineers are spending less time sorting through issues and more time resolving them.
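The triage flow described above can be sketched in a few lines. This is an illustrative assumption, not any specific team's setup: the categories, priorities, and prompt wording are invented for the example, and the actual model call is left out so the sketch stays self-contained. The key idea is asking the model for strict JSON and validating the reply before it touches the issue tracker.

```python
import json

# Hypothetical taxonomy -- a real team would substitute its own labels.
CATEGORIES = ["bug", "feature-request", "question", "billing"]
PRIORITIES = ["p0", "p1", "p2"]

def build_triage_prompt(ticket_text: str) -> str:
    """Ask the model for JSON only, so the reply is machine-readable."""
    return (
        "Classify this support ticket. Reply with JSON only, e.g. "
        '{"summary": "...", "category": "bug", "priority": "p1"}.\n'
        f"Categories: {CATEGORIES}. Priorities: {PRIORITIES}.\n\n"
        f"Ticket:\n{ticket_text}"
    )

def parse_triage_reply(reply: str) -> dict:
    """Validate the model's JSON so malformed output fails loudly."""
    verdict = json.loads(reply)
    if verdict.get("category") not in CATEGORIES:
        raise ValueError(f"unknown category: {verdict.get('category')}")
    if verdict.get("priority") not in PRIORITIES:
        raise ValueError(f"unknown priority: {verdict.get('priority')}")
    return verdict
```

Validating against a fixed taxonomy is what lets triage feed a tracker automatically; free-text replies would put engineers right back to sorting by hand.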

Planning workflows are changing as well. Meetings are producing epics and requirements within hours instead of weeks. Discussions are converted into structured outputs quickly, reducing follow-up effort.

Performance reviews are also supported by AI. Engineers are using it to recall what they worked on and to draft meaningful feedback. Managers are using it to provide clearer evaluations.

Adoption is accelerating most effectively when AI is applied across the software development life cycle, not just in coding. The shift is already happening beyond writing code and into planning, triage, documentation, and reviews.

Discussions

Novel Ways to Inject AI into the SDLC

AI adoption is no longer limited to writing code. Teams are integrating AI across the software development life cycle in new and practical ways. These changes are already reshaping how product, engineering, and operations work together.

Product leads are actively using Claude Code to build prototypes and draft PRDs. It is becoming part of their daily workflow. Instead of manually writing long documents, they generate structured drafts and refine them. AI is also being used in technical design discussions and in triaging issues coming from support. Product management workflows are evolving as a result.

Design and development are becoming more connected. Teams are linking Figma to their AI tooling through the Model Context Protocol (MCP) and using that connection to generate working prototypes directly from design artifacts. The transition from idea to prototype is becoming faster and more fluid.

Some companies are facing a different problem. There are too many AI tools available, and teams are unsure how to use them effectively. In response, one company hired a third party firm to answer a simple question: where should we start?

The third party firm did not begin with tools. They started with time. They asked teams where they spend the most time. From there, they identified the top three use cases that consume the most effort. Instead of adopting tools broadly, they built focused agents specifically for those use cases. This created clarity and measurable impact.

Another company follows a structured rule for AI integration in the SDLC. They divide work into three phases: research, planning, and production coding. They have built internal hooks that refine how AI supports each phase. Research is assisted with summarization and exploration. Planning is supported with structured outputs. Production focuses on code generation and refinement. The structure keeps usage intentional.
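The three-phase structure above can be sketched as a small dispatcher. The phase names come from the discussion; the prompts and settings below are invented for illustration, standing in for the internal hooks the company built. The point is that every request to the model carries the phase's intent with it.

```python
# Illustrative phase-specific "hooks": one configuration per SDLC phase.
# The system prompts and temperatures here are assumptions, not the
# company's actual values.
PHASE_HOOKS = {
    "research": {
        "system": "Summarize sources and surface open questions; do not write code.",
        "temperature": 0.7,  # exploration benefits from variety
    },
    "planning": {
        "system": "Produce structured output: goals, milestones, risks, owners.",
        "temperature": 0.3,
    },
    "production": {
        "system": "Generate and refine code; follow the repository's style guide.",
        "temperature": 0.0,  # code generation should be deterministic
    },
}

def prepare_request(phase: str, task: str) -> dict:
    """Attach the phase's hook so every AI request stays intentional."""
    if phase not in PHASE_HOOKS:
        raise ValueError(f"unknown phase: {phase!r}")
    hook = PHASE_HOOKS[phase]
    return {"system": hook["system"], "temperature": hook["temperature"], "prompt": task}
```

Routing every request through one function is what makes the usage auditable: an out-of-phase request simply has nowhere to go.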

Meetings are also being transformed. Zoom meeting summaries are being generated automatically, along with clear calls to action. Teams are no longer manually extracting next steps.

An agenda tool connected to Claude reads past meeting notes and prepares agendas for upcoming meetings. It reviews historical context and suggests discussion points. Meeting preparation is becoming automated and informed by prior conversations.

Compliance work such as SOC and PCI certifications often slows teams down. Some organizations are now asking a different question: what specific questions are blocking engineers? Once those blockers are identified, AI is used to generate answers and even take action on them. Instead of treating compliance as a separate burden, AI becomes part of removing friction.

Competitive intelligence is another active area. Teams are using AI to analyze competitors and determine relevance. Rather than manually tracking every update, AI evaluates whether changes in the market matter to their own roadmap.

These new integrations are not without challenges.

One challenge is that separation of responsibilities is becoming less clear. Product managers and developers are doing overlapping work because AI supports both roles. Boundaries that were once distinct are now blurred.

Another challenge comes from how AI is trained. Claude sometimes generates PRDs that assume weeks of human effort, even when AI can complete large portions quickly. This happens because models are trained on historical workflows where tasks traditionally took longer. The assumptions do not always reflect current AI capabilities.

AI is now embedded across research, planning, prototyping, compliance, meetings, and competitive analysis. The injection of AI into the SDLC is not theoretical. It is already happening, and it is reshaping how teams operate.

AI Tools Churn

AI tools are changing quickly, and teams are constantly evaluating new models and platforms. This raises an important question: should a company standardize on one model, given how hard it is to predict the cost of the next one? Or should it continuously switch as better options appear?

Different teams are taking different approaches.

Some organizations are choosing to try everything and compare results directly. Instead of committing early, they experiment with multiple models and tools. The challenge is measurement. Success cannot rely on opinions alone.

To address this, evaluation is being plugged directly into existing SDLC hooks. Model performance is measured within real workflows rather than in isolated tests. Some teams are also implementing gatekeeping at the GitHub organization level. This creates structured control over which models or tools can be used and how they are introduced.
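One simple form the gatekeeping described above could take is an allow-list checked in CI before a model is used. This is a hedged sketch under assumptions: the model names and the idea of a flat allow-list are invented for the example, not a description of any team's actual GitHub configuration.

```python
# Hypothetical org-level gate: only approved models pass. The names in
# ALLOWED are placeholders, not a real approved list.
ALLOWED = {"claude-sonnet", "claude-haiku"}

def check_model_allowed(model: str, allowed: set = ALLOWED) -> bool:
    """A model passes the gate only if the org has approved it."""
    return model in allowed

def gate(models_in_use: list) -> list:
    """Return the models that would fail the org's gate, for a CI report."""
    return [m for m in models_in_use if not check_model_allowed(m)]
```

A check like this gives the structured control the discussion mentions: new models are introduced by updating one list, and everything else is rejected automatically.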

The broader discussion suggests that churn is not necessarily negative. In the AI space, sticking with one tool for too long can mean missing improvements elsewhere. Newer models often deliver better performance, lower cost, or new capabilities. Avoiding change can result in lost value.

At the same time, most companies already use hundreds of SaaS tools. That landscape is continuously evolving. With AI, the pace of change is even faster. Tool churn is becoming part of normal operations.

AI tool churn is not viewed as instability. It is becoming a reflection of rapid innovation and continuous optimization.

How Do You Know If Delivery Is Becoming Faster, Cheaper, and Better?

As AI becomes embedded in the software development life cycle, an important question emerges: how do teams know if delivery is actually improving?

Productivity alone is proving to be the wrong metric. Shipping more features or moving faster does not automatically create value. Speed without clarity often creates clutter. It is like building a remote control with one thousand buttons. Everything is available, but nothing is easy to use. Finding the volume button becomes harder, not easier.

Instead of measuring productivity, teams are measuring outcomes. The focus is shifting toward business value. The real question is whether the work delivered solves meaningful problems and drives results.

Many SaaS companies have become feature rich but outcome poor. They release many capabilities, yet the impact on users and the business remains limited. AI acceleration can amplify this problem if measurement stays focused on output instead of value.

Earlier in the discussion on AI in the SDLC, the importance of context during research and planning phases was highlighted. When teams provide AI with the right context about the company, objectives, and constraints at the beginning, outcomes improve downstream. Better context early leads to better decisions later.

With current tools, teams are reaching product market fit and defining their ideal customer profile faster. Prototypes are built quickly. Feedback cycles are shorter.

Traditional boundaries within the SDLC are collapsing. Product managers are building prototypes directly and taking them to users for feedback. Iteration is happening earlier and more frequently.

Delivery is becoming faster and cheaper in many cases. The real test is whether it is also becoming better. The shift toward measuring outcomes over productivity is shaping how teams answer that question.

Future Tools and Platforms

As AI adoption grows, teams are also rethinking their tool strategy. One company has set a clear goal: reduce the number of tools they use. The belief is that both extremes create problems. Too many tools create fragmentation. Too few limit flexibility. The right balance is necessary.

The approach begins with defining a clear goal and measuring success against that goal. Instead of allowing different parts of the organization to adopt separate tools independently, teams align around shared objectives. Without alignment, the tool landscape becomes messy and hard to manage.

Several tools and ideas are shaping this direction.

Onit AI is being used as a central layer that connects legal, finance, and other functions. It acts as a coordinating brain across tools and enables proactive AI workflows instead of isolated automation.

Some organizations are creating internal wikis that document all tools in use. This provides visibility and prevents duplication.

Circleback is being used to listen to all Zoom calls. On top of that, teams are building CRM workflows using Claude Code. Customer requirements discussed in meetings are captured automatically, and next actions are prepared. Standups become easier because tasks are already structured from prior conversations.

Another approach involves shared context through GitHub. Teams maintain repositories that feed into Claude Code for different groups. For example, when the design team updates something, Claude reflects those changes in the repository. The engineering team then receives updated context automatically. This keeps collaboration synchronized across functions.
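The shared-context pattern above can be sketched as a small merge step. The directory layout, the `context.md` file name, and the CLAUDE.md-style output are assumptions made for the example; the mechanism is simply that each team's notes are collected into one document that Claude Code reads.

```python
from pathlib import Path

def merge_context(sections: dict) -> str:
    """Render {team name: notes} as one markdown context document."""
    parts = [f"## {team}\n\n{notes.strip()}" for team, notes in sections.items()]
    return "\n\n".join(parts) + "\n"

def build_shared_context(team_dirs: list, out: Path) -> str:
    """Read each team's context.md (if present), merge, and write the result.

    File names here are illustrative; a real setup would match whatever
    convention the teams' repositories already use.
    """
    sections = {d.name: (d / "context.md").read_text()
                for d in team_dirs if (d / "context.md").exists()}
    merged = merge_context(sections)
    out.write_text(merged)
    return merged
```

Running a step like this whenever a team's notes change is what keeps the other teams' context current without anyone copying documents by hand.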

The future of tools and platforms is not about adding more software. It is about consolidation, shared context, and clearer alignment around goals.

The Management Business Case for AI in Engineering

AI is changing the business case for how software teams are built and managed.

Competition is increasing because the barrier to building software is lowering. More people can build similar products using AI tools. What once required a specialized team can now be done by a smaller group supported by intelligent systems. As a result, differentiation becomes harder and speed becomes more important.

One CTO shared a simple example. He was teaching his child how to code. The child found coding difficult and frustrating. Instead of correcting everything manually, he asked the child to flip the process and use Claude to explain what was done right and what was wrong. The child enjoyed that experience much more. The feedback loop made learning engaging.

This pattern reflects a larger shift. More humans are becoming coders because AI lowers the entry barrier. Writing code is no longer limited to formally trained engineers.

The role itself is evolving toward what can be described as an agentic engineer. The focus shifts from writing every line of code to building pipelines with feedback loops. The engineer sets up the system, defines context, and refines outputs rather than manually producing everything.

Software engineering skills are becoming ubiquitous, expanding into other domains. Engineers are contributing to areas such as electrical engineering and circuit design with AI assistance, and new types of roles are emerging as technical capabilities spread across industries.

The specific engineer role is no longer the same bottleneck it once was. A decade ago, updating the frontend required hiring a dedicated frontend engineer immediately. Today, the work can be generated with AI support, and one experienced reviewer can handle heavy pull requests. The constraint shifts from production to review and judgment.

For management, the business case is clear. The structure of teams, the definition of roles, and the source of competitive advantage are all changing. AI is not only increasing speed. It is reshaping how organizations think about talent and execution.


Umesh Raja is the founder and CEO of WakePackGo, an AI-powered travel platform.

