How did you gather evidence or build your case for change?
Who did you need to convince and how did you approach them?
What specific steps did you take to overcome resistance?
How did you implement the change once you gained support?
What was the measurable impact of the change you pushed for?
How did the team or organization respond over time?
What did you learn about driving change in your organization?
Sample Answer (Junior / New Grad)
Situation: During my internship at a fintech startup, our team was manually testing all API endpoints before each release, which took 6-8 hours every Friday. This process had been in place for two years because "that's how we've always done it," and management was hesitant to change it since they felt manual testing caught edge cases. I noticed we were repeatedly testing the same scenarios and often found the same types of bugs.
Task: As an intern, I wasn't expected to challenge processes, but I felt responsible for suggesting improvements since I was spending most of my Fridays on this repetitive work. My goal was to propose an automated testing solution that could save time while maintaining or improving quality. I knew I'd need to prove the concept without disrupting the current workflow.
Action: I spent two weeks of my personal time building a proof-of-concept automated test suite using Python and pytest that covered our 20 most frequently tested endpoints. I documented three releases' worth of data showing that 85% of the bugs we found manually were predictable and could be caught by automated tests. I then presented my findings to my mentor and the engineering lead in a 15-minute demo, showing how the suite caught intentional bugs I'd introduced. After getting their support, I volunteered to run both manual and automated tests in parallel for two sprints to build confidence in the approach.
Result: After the parallel testing period proved successful, the team fully adopted automated testing, reducing release testing time from 6-8 hours to 45 minutes. This freed up the equivalent of one full engineering day per week across the team. My manager was so impressed that they converted my internship to a full-time offer, specifically mentioning my initiative in challenging existing processes. I learned that even junior team members can drive change if they back their ideas with data and propose low-risk ways to validate new approaches.
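As a sketch of what a proof-of-concept like the one in this answer might look like: the routes, the in-memory fake API, and the expected status codes below are all hypothetical, and a real suite would issue HTTP requests against a staging server rather than call a local function.

```python
import pytest

# Hypothetical in-memory stand-in for the API under test, so this
# sketch runs without a live server. A real suite would hit the
# staging environment with an HTTP client such as requests or httpx.
FAKE_ROUTES = {
    "/api/v1/accounts": 200,
    "/api/v1/transactions": 200,
    "/api/v1/balances": 200,
}

def get_status(path: str) -> int:
    """Return the status code the fake API answers with (404 if unknown)."""
    return FAKE_ROUTES.get(path, 404)

# One parametrized test covers many endpoints -- the same pattern
# scales to a list of the 20 most frequently tested routes.
@pytest.mark.parametrize("path,expected", [
    ("/api/v1/accounts", 200),
    ("/api/v1/transactions", 200),
    ("/api/v1/does-not-exist", 404),
])
def test_endpoint_status(path, expected):
    assert get_status(path) == expected
```

Running `pytest` discovers and executes each parametrized case; introducing a deliberate bug, as in the demo described in the answer, makes exactly the affected cases fail.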
Sample Answer (Mid-Level)
Situation: I was working as a software engineer at a SaaS company where our deployment process required manual approval from our VP of Engineering for every production release, regardless of size. This practice started when the company was small and had experienced a major outage, but now with 40 engineers, it had become a significant bottleneck. Deployments were delayed by hours or even days, and engineers were rushing code on Fridays to get approvals before the weekend. The VP believed this oversight prevented incidents, and previous engineers who suggested changes were told the risk was too high.
Task: As the tech lead for our team, I owned the reliability of our services and needed to find a way to increase deployment velocity without compromising safety. My challenge was to convince leadership that the current process was actually creating more risk through batched changes and rushed code reviews. I needed to propose an alternative that would give leadership confidence while removing the bottleneck.
Action: I spent three weeks collecting data on our deployment patterns, measuring lead time, change failure rate, and correlating incident frequency with deployment size. My analysis showed that 92% of deployments had zero issues, and that larger batched deployments were 4x more likely to cause incidents than smaller ones. I created a proposal for automated deployment guardrails including comprehensive automated testing, canary deployments, and automatic rollback mechanisms. I presented this to the VP with a pilot plan: our team would use the new system for one month while maintaining his approval rights, and we'd review the data together. I partnered with DevOps to implement the infrastructure and created detailed monitoring dashboards so leadership could observe every deployment in real-time.
Result: During the pilot month, we completed 23 deployments with zero incidents and reduced average deployment time from 8 hours to 45 minutes. The VP was convinced by the data and rolled out the new process company-wide within two months. Over the next quarter, engineering-wide deployment frequency increased by 180%, and our change failure rate actually decreased from 8% to 3% due to smaller, more focused changes. The initiative became a case study in our engineering all-hands about challenging legacy processes with data. I learned that changing entrenched practices requires not just proving your solution works, but also understanding and addressing the underlying fears that created those practices in the first place.
Sample Answer (Staff+)
Situation: As a Staff Engineer at a Fortune 500 financial services company, I encountered a fundamental problem with how we approached technology modernization. The company had a 15-year-old policy requiring all new systems to be built on our existing on-premise infrastructure using approved vendor solutions, stemming from a major cloud security incident at a competitor in 2008. This meant we were locked into aging technology and couldn't adopt modern practices like microservices, containerization, or cloud-native development. Our time-to-market was 3-4x slower than fintech competitors, we were losing senior engineers to companies with modern tech stacks, and our infrastructure costs were $40M per year higher than cloud alternatives. Multiple VPs had tried to change this policy over the years but were blocked by the CTO and CISO, who viewed cloud as inherently risky and were unconvinced the business benefits justified retraining staff and changing security models.
Task: While I had no formal authority over infrastructure policy, I recognized this was an existential threat to our competitiveness and felt a responsibility to drive change at the organizational level. My challenge was to shift the perspective of senior technical leadership who had built their careers on the current architecture and genuinely believed they were protecting the company. I needed to build an airtight case that addressed security, compliance, cost, and talent concerns while proposing a practical migration path that minimized risk.
Action:
Common Mistakes
- Positioning yourself as a hero fighting idiots -- Frame it as "different perspectives" rather than "I was right, they were wrong"
- Not explaining why the norm existed -- Show you understood the original reasoning before challenging it
- Skipping the relationship-building -- Change requires bringing people along, not just being correct
- Lacking concrete data -- Vague feelings that "something should change" won't convince skeptics; use metrics and evidence
- No acknowledgment of risk -- Every change has downsides; show you considered them thoughtfully
- Taking credit for team effort -- If others helped build the case or implement change, acknowledge their contributions
- Missing the "so what" -- Always connect your change to business impact, not just "this is a better way to do things"
Action: I started by running a retrospective where I asked my team to identify what percentage of specifications needed significant revision after engineering review; the answer was 67%. I then identified our next project, a search feature redesign, as a pilot opportunity and negotiated with our VP of Product to try a different approach for just this one initiative. Instead of starting with detailed specs, I facilitated a week-long discovery phase where engineers, designers, and product managers jointly interviewed users, analyzed technical constraints, and prototyped solutions together. I created clear decision-making frameworks so everyone understood who owned what decisions, addressing leadership's concerns about accountability. I also established weekly transparent progress updates to executives, showing velocity metrics and technical decisions being made. When the team identified a completely different technical approach that would deliver the core value in half the time, I championed their recommendation even though it meant deviating significantly from the original concept.
Result: The search redesign launched in 6 weeks instead of the originally planned 11 weeks, achieved 140% of target engagement metrics, and required 60% fewer engineering hours than estimated. More importantly, my team's engagement scores jumped 35 points in the next survey, with autonomy and impact as the biggest improvements. I documented the process and outcomes in a detailed write-up that I presented to senior leadership, which led to the VP of Product sponsoring a company-wide rollout of collaborative discovery practices. Within a year, average project delivery time decreased by 30% company-wide, and engineering engagement scores improved across all teams. This experience taught me that changing organizational norms requires creating undeniable proof points, finding executive sponsors who will take risks with you, and building frameworks that address the legitimate concerns underlying the status quo.