2026 Report · Apr 2026 · 25 min read

State of GTM 2026

16 interviews with named operators at Lyft, GitLab, Cloudflare, 11x, Rubrik, Zscaler, Nextdoor, and Trend Micro. The patterns separating teams accelerating from teams stuck.

Insights contributed from

Lyft
GitLab
Cloudflare
Zscaler
Rubrik
Trend Micro
Nextdoor
11x


Key findings

  • 16 GTM leaders interviewed across enterprise and high-growth companies
  • 12 enterprise operators, including Lyft, GitLab, Cloudflare, Zscaler, and Trend Micro
  • 7 named patterns separating teams accelerating from teams stuck
  • 98% of agentic AI investments fail because teams plug AI into broken workflows

What you'll learn

  • Why 98% of agentic AI investments fail, and what the 2% getting results are doing differently (it's not the technology)
  • How LLMs are becoming a significant customer acquisition channel and why your help center is now a tier-one marketing asset
  • Why AI-powered personalization is backfiring and what's replacing it in outbound that actually works
  • How IT teams are evolving from cost centers to revenue drivers by measuring Experience Level Agreements instead of Service Level Agreements
  • Why companies are building an order of magnitude more applications, not fewer, and what that means for your build vs. buy strategy
  • What separates teams using AI for augmentation from teams using it for headcount reduction, and why the distinction determines who retains talent
  • How brand became the only defensible moat when AI commoditized product development and content creation

Full report

The Augmentation Imperative

**You'll get:** Audit your team's calendar for the last two weeks. Identify every meeting, every approval workflow, every status update that exists to coordinate work rather than do work. That's your AI target list, not your headcount.

*Every company says AI will transform how their teams work. But when you ask what that actually means, the answers split cleanly into two camps: those using AI to eliminate jobs, and those using it to eliminate the work nobody wanted to do in the first place. The split isn't just philosophical, it's determining who retains talent and who loses their best people to burnout or attrition.*

Twelve of sixteen leaders we interviewed emphasized the same pattern: AI's primary value isn't replacing humans, it's freeing them from what Kevin Smith at Lyft calls 'work to get work done.' The mundane overhead that keeps talented people from doing the strategic work they were hired for.

The distinction matters because it changes what you optimize for. If you view AI as a headcount reduction tool, you're focused on the wrong metric. "This is not a head count discussion," Ajay Sabhlok, CIO at Rubrik, told us. "This is about just first augmenting ourselves." The question isn't how many people you can cut, it's how much capacity you can unlock in the people you have.

The practical impact shows up in unexpected places. At Zip, Piru Chheang's team of 1.5 people manages IT for 800+ employees using AI-powered service management that deflects 50% of tickets. That deflection isn't about reducing headcount, it's about freeing the team to work on strategic projects instead of password resets. At Lyft, Smith's team is building procurement agents that eliminate 100% of the manual work in legal and procurement reviews, not to cut those teams, but to let them focus on complex negotiations and strategic vendor relationships.

The pattern holds across functions. Shyam at Nextdoor frames it clearly: "AI is not a cost cutter, it's an enabler. AI is not going to take people's job, but people who don't know how to use AI across the lines of business, they would have difficulty finding the right jobs." The shift is from task execution to orchestration, from doing the work to designing how the work gets done.

What separates the leaders getting this right from those struggling: they're explicit about the 3-5 year timeline. Sabhlok at Rubrik gives himself three to five years to achieve full AI acceleration, not because the technology isn't ready, but because process redesign and organizational change take time. "If you look at our annual planning process, out of 100 things, we can only get to 15% or 20%. It's not even 50%." AI acceleration addresses that massive backlog, not by working faster, but by eliminating work that shouldn't exist.

The teams treating AI as pure cost reduction are missing the larger opportunity. As Praniti Lakhwara at Zscaler put it: "If you only treat AI as a cost lever, you're going to miss the reinvention opportunity of workflows. The real upside is redesigning these workflows at 100%, eliminating handoffs, compressing the cycle times."

But augmentation only works if you're augmenting the right activities. And that's where most teams are getting it catastrophically wrong.

**Counter-signal.** Not everyone agrees, and the split is revealing. One leader we spoke with is actively using AI to reduce headcount, running an AI SDR at one-eighth the cost of human SDRs with similar conversion rates. The argument: why pay eight times more for the same output? Our read: both camps are right, for different reasons. The cost reduction approach works when you're replacing genuinely commoditized work. The augmentation approach works when you're trying to unlock strategic capacity in talented people. The mistake is treating all work as commoditized when much of it isn't.

**Practitioner callout.** If your AI strategy deck leads with headcount reduction targets, you're optimizing for the wrong outcome. Start with a different question: what work are your best people doing that an AI could handle in 30 seconds? That's your starting point. The ROI will follow, but the framing determines whether you retain your talent or lose them to companies that understand augmentation.

This is not about saying your job is suddenly gonna be done by an AI agent, sorry for your loss. It is very much about, we wanna give you back time... eliminating the work to get work done.

IT Leader, Public rideshare company

So this is not a head count discussion. This is about just first augmenting ourselves.

CIO and Chief Data Officer, Public security company

AI is not a cost cutter, it's an enabler. I always say this, AI is not going to take people's job, but people who don't know how to use AI across the lines of business, they would have difficulty finding the right jobs

CIO and CSO, Enterprise social technology company

The Death of Generic Personalization

**You'll get:** Pull your last 50 outbound messages. Count how many lead with surface-level personalization (job change, funding round, LinkedIn post). If it's more than 20%, cut it entirely this week. Test generic messages with perfect targeting against personalized messages with good targeting. Watch what actually books meetings.
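The audit above is easy to run mechanically. A minimal sketch, assuming a hypothetical list of opener phrases and sample messages (both are illustrations, not the report's data):

```python
# Rough outbound audit sketch: flag messages whose FIRST line leads with
# surface-level personalization, then report the share against the 20% cut line.
# The opener phrases below are hypothetical examples of the pattern.
SURFACE_OPENERS = (
    "congrats on the new role",
    "i saw you recently posted",
    "saw your company raised",
    "noticed you just joined",
)

def is_surface_personalized(message: str) -> bool:
    """True if the message's opening line contains a surface-level opener."""
    first_line = message.strip().splitlines()[0].lower()
    return any(opener in first_line for opener in SURFACE_OPENERS)

def surface_share(messages: list[str]) -> float:
    """Fraction of messages that lead with surface personalization."""
    if not messages:
        return 0.0
    flagged = sum(is_surface_personalized(m) for m in messages)
    return flagged / len(messages)

# Hypothetical sample of recent outbound messages.
messages = [
    "Congrats on the new role at Acme! Quick question...",
    "We help RevOps teams cut reporting time in half.",
    "I saw you recently posted about pipeline forecasting...",
    "Short version: we fix attribution gaps in 2 weeks.",
]
print(f"{surface_share(messages):.0%} surface-personalized")  # 50% for this sample
```

Swap in your real last-50 messages and your real opener list; the point is a repeatable count, not the specific phrases.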

*AI was supposed to make personalization scalable. Every outbound tool promised hyper-personalized messages at volume. But something unexpected happened: the more personalized the outreach became, the worse it performed. Buyers started ignoring messages that referenced their recent job change or their company's latest funding round, not despite the personalization, but because of it.*

Eight of sixteen leaders reported the same pattern: AI-powered personalization is backfiring. The problem isn't the technology, it's that everyone has the same technology, and buyers know it.

"If everyone is researching, people also knows that it's an AI doing a research," Ayush Poddar at Revenoid told us. "So till that time, your research is to the point to my pain, which is urgent and critical now. So you are out of my side." The bar for effective personalization has moved. Surface-level research that was impressive in 2023 is table stakes in 2026, and buyers can spot it instantly.

The data backs this up in unexpected ways. Ilija Stojkovski at Heyreach has sent 53 million connection requests and generated 4 million replies. His take: "If you spend all of your efforts in crafting the perfect lead list you can send the most generic message in the world and it will still outperform any hyper personalized message out there." The finding contradicts everything the industry has been preaching about personalization at scale.

Imaan Sultan at 11x sees the same pattern: "People think that they're doing personalization by being like, hey, I saw you shifted to new role. But everyone knows that these are so top level and easy to find." The personalization that AI makes easy is exactly the personalization that no longer works. It's too obvious, too templated, too clearly automated.

The alternative isn't abandoning personalization, it's redefining what personalization means. The teams getting results are doing one of two things: either going extremely deep on a small number of high-value prospects (the kind of research AI can't do in 30 seconds), or going completely generic with perfect targeting. The middle ground, AI-powered surface personalization at scale, is where campaigns go to die.

Zayd Ali at Valley put it bluntly: "Cold email. Put that on the bottom of our fucking list of channels. That sucks. We tried so hard to build cold email infrastructure that actually works. Like so much time. It's a humongous waste of time. It books us like one demo a week and we spend like 50% of our time." The channel isn't dead because email is dead, it's dead because the personalization approach everyone copied from the same playbook stopped working simultaneously.

What's replacing it? Creative personalization that can't be automated. Sultan's team at 11x booked a major podcast by sending an email about the co-founder's alma mater and a specific football game. That's not scalable. That's not AI-generated. That's a human who did actual research and found an actual connection point. "We literally booked like a very famous podcast with like 30 minutes to presence club because of an email we sent that was like a story about the main like one of the co-founders like alma mater's and like the big football game that they have at that school."

The collapse of generic personalization is forcing teams to find new sources of differentiation. And the answer isn't in their outbound motion, it's in something far more fundamental.

**Counter-signal.** Not everyone agrees, and the split is revealing. Two leaders we spoke with are still seeing results from AI-powered personalization, but with a critical caveat: they're using it for account research and message customization, not for the personalization itself. The AI does the background work, but a human writes the actual message. Our read: the technology isn't the problem. The templated execution is. AI-assisted personalization works when it's invisible to the recipient. AI-generated personalization fails when buyers can spot it.

**Practitioner callout.** If your SDRs are leading with "I saw you recently posted about..." or "Congrats on the new role," you're training prospects to ignore you. The personalization that AI makes easy is exactly the personalization that no longer works. Either go deep enough that the research is genuinely valuable, or focus your energy on targeting and send a clear, generic message.

If everyone is researching, people also knows that it's an AI doing a research. So till that time, your research is to the point to my pain, which is urgent and critical now. So you are out of my side.

Marketing Leader, SaaS sales AI platform

If you spend all of your efforts in crafting the perfect lead list, you can send the most generic message in the world and it will still outperform any hyper-personalized message out there.

CRO, $12M ARR SaaS company

People think that they're doing personalization by being like, hey, I saw you shifted to new role. But everyone knows that these are so top level and easy to find.

Growth and Marketing Lead, High-growth AI/sales automation startup

Brand as the New Moat

**You'll get:** Audit your last 10 pieces of published content. Ask: would someone share this with a peer because it's genuinely useful, or only because it's gated behind a form? If the answer is the latter, you're not building brand, you're extracting short-term leads at the expense of long-term trust.

*AI has made product development faster and cheaper than ever. A non-technical founder can build a functional product in weeks. Content creation that used to require agencies now happens in-house with AI assistance. But as the barriers to building products and creating content collapse, something unexpected is happening: brand is becoming the only defensible moat that matters.*

Nine of sixteen leaders emphasized the same shift: when AI commoditizes everything else, brand becomes the primary differentiator. Not brand as a logo or a color palette, brand as trust, as reputation, as the answer to "who do I believe?"

Hayley Bateman, founder of Home, frames it clearly: "I think that brand is everything in consumer especially. And especially now in the age of AI, like you said, anyone can build anything really... there really isn't not much defensible anymore in these consumer spaces beyond brand." The observation holds in B2B as well. When prospects can't differentiate on features or pricing, they default to trust.

The practical impact shows up in unexpected channels. Bateman posted her Y Combinator application video on TikTok. "75, 80,000 views later, have, you know, VCs in my inboxes. I'm making friends with other professionals on TikTok." That's not paid advertising. That's not growth hacking. That's authentic content building brand equity that translates directly to business outcomes.

The same pattern appears in B2B content. Zayd Ali at Valley ran an analysis comparing in-house content to agency-created content. In-house generated 5.2x more impressions per blog. "Our in-house motion was responsible for 96.6% of total blog impressions, despite only writing 84.4% of the total volume." The difference isn't writing quality, it's authenticity. Readers can tell when content comes from someone who actually understands the problem versus someone executing a content brief.

Ayush Poddar at Revenoid sees the shift in buyer behavior: "The moment we start building trust with the buyers, whether passive channels or active channels, whether mass media or one on one meetings, your funnel starts working." Trust isn't a nice-to-have, it's the unlock. Without it, even perfect targeting and flawless execution fail.

What's driving this? The collapse of traditional differentiation. When every company has access to the same AI tools, the same data sources, the same automation capabilities, product features converge. Pricing becomes transparent. The only question left is: who do I trust to solve this problem?

Jacob Bank at Relay App articulates the shift in marketing strategy: "Modern marketing, especially like PLG self-serve bottom up marketing, is less of a funnel and more of like you're a planet building gravity." You're not pushing prospects through stages, you're creating enough trust and credibility that prospects naturally move toward you when they're ready to buy.

The mistake most teams make: treating brand building as a long-term investment separate from demand generation. The teams winning in 2026 understand they're the same thing. Every piece of content, every customer interaction, every social media post either builds trust or erodes it. There's no neutral.

But brand building requires distribution. And the distribution channels are changing in ways most teams haven't noticed yet.

**Counter-signal.** Not everyone agrees, and the split is revealing. One technical leader we spoke with emphasized that product quality and technical execution still matter enormously, especially in security and infrastructure. The argument: brand gets you in the door, but product keeps you there. Our read: both are right. Brand is becoming table stakes for consideration, but product quality determines retention. The shift is that brand moved from nice-to-have to must-have in the buying process.

**Practitioner callout.** If your content strategy is optimized for SEO keywords and lead capture forms, you're building for the wrong outcome. The question isn't "will this rank?" or "will this convert?", it's "will this build trust with someone who isn't ready to buy yet?" That's the content that compounds over time.

I think that brand is everything in consumer especially. And especially now in the age of AI, like you said, anyone can build anything really... there really isn't not much defensible anymore in these consumer spaces beyond brand.

Founder, Pre-seed AI wellness startup

The moment we start building trust with the buyers, whether passive channels or active channels, whether mass media or one on one meetings, your funnel starts working.

Marketing Leader, SaaS sales AI platform

We have to be human to sell a human. And we are at this point. We have seen the entire stages of automations. We have seen the stages of systems processes running. But today, what's working is you be a human, understand the human.

Marketing Leader, SaaS sales AI platform

LLMs as the New Discovery Channel

**You'll get:** Add "How did you first hear about us?" to every demo call this week. Include "ChatGPT/Claude" as an explicit option. Track it for 30 days. If it's showing up even once, your help center and documentation need to be treated as tier-one marketing assets, not support resources.
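The 30-day tally is a one-liner once the answers are logged. A minimal sketch, assuming a hypothetical list of self-reported answers collected on demo calls (the data below is illustrative, not from the report):

```python
# Minimal attribution tally sketch: log one self-reported
# "How did you first hear about us?" answer per demo call,
# then check what share is LLM-attributed.
from collections import Counter

# Hypothetical 30-day log of demo-call answers.
answers = [
    "Google search", "ChatGPT/Claude", "Referral", "LinkedIn",
    "ChatGPT/Claude", "Google search", "Event", "ChatGPT/Claude",
]

tally = Counter(answers)
llm_share = tally["ChatGPT/Claude"] / len(answers)

print(tally.most_common())          # channels ranked by count
print(f"LLM-attributed: {llm_share:.0%}")
```

Even one non-zero tally over 30 days is the report's threshold for treating help-center content as a tier-one marketing asset.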

*SEO has been the foundation of B2B discovery for two decades. Companies invested millions in content strategies optimized for Google's algorithm. But a new channel is emerging that most teams aren't tracking: prospects are discovering vendors by asking ChatGPT and Claude for recommendations. And the companies showing up in those recommendations aren't the ones with the best SEO, they're the ones with the best help centers.*

Seven of sixteen leaders reported the same pattern: LLMs are becoming a significant source of customer acquisition. Not as a future possibility, as a current reality driving deals today.

Imaan Sultan at 11x sees it constantly: "We have so many customers will come to us and be like, I just asked like ChatGPT or Claude, like who's the best and then we got you guys and then we reached out." That's not one anecdote, it's a pattern appearing across multiple deals. Prospects aren't Googling "best AI SDR tool" anymore. They're asking Claude.

Mick Loizou at Cognizant is tracking it in attribution: "When we look at sort of the self-attributed attribution, so we're literally asking people on lead forms, how did you hear about us? We're seeing LLMs creep up more and more and more. So it's obviously told us that GEO in addition to SEO has become more important." GEO, generative engine optimization, is becoming as critical as SEO was in 2010.

What determines whether your company shows up in LLM recommendations? The same content assets most teams have been neglecting: help centers, documentation, knowledge bases. "We're like, now we're like, oh, this is a really important resource in a way that it possibly wasn't in the past," Loizou told us. "Because obviously you think about it from the perspective of the customer, like they're trying to answer a question and it's sitting in your help center. Now they're asking, they might not go to your help center anymore. They might ask the same question to an LLM and suddenly your help center is one of the best pieces of marketing resource you've ever created."

The shift is forcing a rethink of content strategy. The blog posts optimized for "best [category] software" aren't what LLMs cite. They cite authoritative sources: help documentation, case studies, technical specifications, comparison guides written by the company itself. The content that was "just for customers" is now doing double duty as discovery content.

The second-order effect: PR and communications are seeing a resurgence. "I feel like there's almost a resurgence of the more traditional PR approaches where, you know, getting journalists to be interested in you, to start citing you, is becoming more important because if those are reputable sources, then they will be cited by LLMs and help the discovery," Loizou explained. If The Wall Street Journal writes about your company, LLMs cite it. If your competitor gets covered and you don't, you're invisible in LLM-driven discovery.

The teams ahead of this curve are treating their help centers as marketing assets, not support resources. They're investing in comprehensive documentation, detailed comparison guides, and technical content that establishes authority, not because it ranks in Google, but because it gets cited by Claude.

But discovery is only half the battle. The real challenge is what happens after prospects find you, and that's where most implementations are failing catastrophically.

**Counter-signal.** Not everyone agrees, and the split is revealing. Zero leaders we spoke with disagreed with this trend, but several noted they're not yet tracking LLM attribution systematically. The challenge: it's difficult to measure because prospects don't always volunteer how they discovered you. Our read: this is directional, not definitive. But the signal is strong enough that ignoring it is a mistake.

**Practitioner callout.** If your help center is an afterthought maintained by your support team, you're missing a major discovery channel. LLMs don't care about your SEO strategy, they care about authoritative, comprehensive content that answers specific questions. Your help center is now a marketing asset.

We have so many customers will come to us and be like, I just asked like ChatGPT or Claude, like who's the best and then we got you guys and then we reached out.

Growth and Marketing Lead, High-growth AI/sales automation startup

When we look at sort of the self-attributed attribution, so we're literally asking people on lead forms, how did you hear about us? We're seeing LLMs creep up more and more and more. So it's obviously told us that GEO in addition to SEO has become more important.

VP of Marketing, 450-person SaaS company

We're like, now we're like, oh, this is a really important resource in a way that it possibly wasn't in the past. Because obviously you think about it from the perspective of the customer, like they're trying to answer a question and it's sitting in your help center. Now they're asking, they might not go to your help center anymore. They might ask the same question to an LLM.

VP of Marketing, 450-person SaaS company

Process Redesign Over Tool Adoption

**You'll get:** Pick your most painful workflow, the one with the most handoffs, the longest cycle time, the most status meetings. Map it end to end. Then ask: if an AI could do any part of this instantly, which steps would we eliminate entirely rather than automate? That's your redesign starting point.

*Companies are spending millions on AI tools. They're running pilots, measuring ROI, rolling out enterprise licenses. But 98% of agentic AI investments are failing, not because the technology doesn't work, but because teams are plugging AI into processes that were designed for humans. The problem isn't the tool. It's the process the tool is trying to automate.*

Eleven of sixteen leaders emphasized the same pattern: AI implementation requires process redesign, not just tool adoption. The teams getting results aren't asking "how do we use AI to do this faster?" They're asking "if we were designing this process today with AI available, what would we eliminate entirely?"

Kevin Smith at Lyft frames the core problem: "Business processes are built around humans... you're trying to take a technology such as agentic AI and plug it into where a human is. And I don't think that's the right solution." The workflows most companies run today were designed around human constraints: approval chains, handoffs, status meetings, documentation requirements. AI doesn't have those constraints. But if you just plug AI into the existing workflow, you're automating inefficiency.

The data on agentic AI failures is stark. Smith cited an MIT study showing 98% of agentic investments were failures. The reason: teams treated AI as a drop-in replacement for a human step in an existing process. The solution is redesigning the entire workflow from scratch.

Praniti Lakhwara at Zscaler sees the same mistake repeatedly: "Another pattern I would say is over indexing on tools. And there's a lot of focus on tools right now... let's not overindex on tools. Let's overindex on redesigning our processes." The tool is the easy part. The hard part is admitting that your current process is fundamentally broken and needs to be rebuilt.

What does process redesign actually look like? Lakhwara offers a framework: "If you only treat AI as a cost lever, you're going to miss the reinvention opportunity of workflows. The real upside is redesigning these workflows at 100%, eliminating handoffs, compressing the cycle times." It's not about doing the same work faster, it's about eliminating entire categories of work that only exist because humans were doing it.

The timeline expectations matter. Ajay Sabhlok at Rubrik gives himself three to five years to achieve full AI acceleration, not because the technology will take that long to mature, but because organizational change and process redesign take time. "I give myself about three to five years. That's kind of my horizon to say, I think we're gonna get that acceleration, but not immediately." The teams rushing to show ROI in 90 days are optimizing for the wrong metric.

The practical implication: most companies need to slow down their AI tool adoption and speed up their process redesign work. The bottleneck isn't technology availability, it's organizational readiness to fundamentally rethink how work gets done.

Process redesign is enabling a shift that most teams haven't noticed yet: the proliferation of custom applications built for specific use cases instead of generic tools bought for broad ones.

**Counter-signal.** Not everyone agrees, and the split is revealing. One leader we spoke with is successfully using AI as a drop-in replacement for specific human tasks without full process redesign, particularly in customer support and basic data entry. The argument: sometimes the process is fine and you just need to automate the execution. Our read: both approaches work for different types of work. Commoditized, repeatable tasks can be automated in place. Complex workflows with multiple handoffs require redesign. The mistake is treating all work the same way.

**Practitioner callout.** If your AI implementation plan looks like "replace step 3 with an AI agent," you're setting yourself up to join the 98% failure rate. Start instead with: "if we were designing this process today with AI available from day one, what would we build?" The gap between those two answers is your actual opportunity.

Business processes are built around humans... you're trying to take a technology such as agentic AI and plug it into where a human is. And I don't think that's the right solution.

IT Leader, Public rideshare company

If you only treat AI as a cost lever, you're going to miss the reinvention opportunity of workflows. The real upside is redesigning these workflows at 100%, eliminating handoffs, compressing the cycle times.

IT Leader, Enterprise cybersecurity company

Another pattern I would say is over indexing on tools. And there's a lot of focus on tools right now... let's not overindex on tools. Let's overindex on redesigning our processes.

IT Leader, Enterprise cybersecurity company

The Rise of Micro-Applications

**You'll get:** Identify the three most painful tool gaps your team complains about. Before starting a vendor evaluation, spend one week seeing if you can build a minimum viable version using AI-powered development tools. If you can get to 60% of the functionality in a week, building is probably faster than buying.

*The SaaS consolidation narrative says companies are reducing their tool count, cutting vendors, simplifying their stack. But something unexpected is happening: companies are building an order of magnitude more applications than they had before. Not fewer tools, more tools. Just different ones.*

Eight of sixteen leaders reported the same pattern: AI is enabling the creation of highly customized, purpose-built applications for small user populations. The shift isn't from many tools to few tools, it's from generic tools serving everyone poorly to specific tools serving small groups perfectly.

Dane, CTO at Cloudflare, sees it clearly: "Companies will go from having a few applications to actually having an order of magnitude more, but they'll just be very custom, very purpose-built for small populations of users." The economics have changed. Building a custom tool for 10 people used to require months of engineering time. Now it takes days or hours.

The democratization is real. "Pretty much everyone from a BDR to someone in finance I've seen building their own apps recently," Dane told us. This isn't IT building applications for business users. This is business users building applications for themselves. The barrier to entry collapsed.

Jacob Bank at Relay App runs a 9-person company supporting a large customer base. The entire company operates on custom workflows built specifically for their needs. "I think we're able to build a lot more customized software that's very tailored towards our needs," he explained. The alternative, buying generic tools and configuring them, takes longer and delivers worse results than building exactly what you need.

The shift is forcing a rethink of build vs. buy decisions. Piru Chheang at Zip articulates the calculation: "A part of me thinks like, I could probably script something similar. It's not gonna be AI, but I could script a process that could do something that the AI is doing." The question isn't "can we build this?" anymore. It's "can we build this faster than we can evaluate, purchase, and implement a vendor solution?" Increasingly, the answer is yes.

What's driving this? AI-powered development tools that compress build time from months to days. No-code and low-code platforms that let non-engineers build functional applications. And a growing recognition that generic tools force you to adapt your process to the tool, while custom tools adapt to your process.

The implication for vendors: the competition isn't other vendors anymore. It's the customer's ability to build it themselves in a weekend. The defensibility has to come from something other than features: network effects, data moats, or integration complexity that genuinely can't be replicated quickly.

The proliferation of custom applications is being enabled by a group that's undergoing its own transformation: IT teams themselves.

**Counter-signal.** Not everyone agrees, and the split is revealing. Zero leaders disagreed with this trend, but several noted that security and governance become major concerns when everyone is building their own applications. The risk: shadow IT proliferates, data security becomes impossible to manage, and integration complexity explodes. Our read: the trend is real and accelerating, but companies need governance frameworks that enable building while preventing chaos. The answer isn't to stop building, it's to build with guardrails.

**Practitioner callout.** If your procurement process treats every new tool request as a six-month evaluation, you're going to lose to teams that build custom solutions in six days. The question isn't whether to allow building, it's how to enable it safely with the right security and data governance in place.

Companies will go from having a few applications to actually having an order of magnitude more, but they'll just be very custom, very purpose-built for small populations of users

CTO, Public security/infrastructure company

Pretty much everyone from a BDR to someone in finance I've seen building their own apps recently

CTO, Public security/infrastructure company

I think we're able to build a lot more customized software that's very tailored towards our needs

CTO, Public security/infrastructure company

IT's Evolution to Revenue Driver

**You'll get:** Rewrite your IT team's top three goals for this quarter in terms of business outcomes, not operational metrics. Not "reduce ticket backlog by 20%" but "free 40 hours per week of sales team capacity by eliminating approval bottlenecks." If you can't translate your goals to business impact, you're optimizing for the wrong outcomes.

*IT teams have spent decades fighting to be seen as more than ticket processors and cost centers. The narrative was always the same: IT enables the business, IT is a strategic partner, IT deserves a seat at the table. But the teams actually getting that seat aren't the ones making the argument, they're the ones proving it by directly impacting revenue.*

Ten of sixteen leaders described the same shift: IT teams are evolving from service providers to revenue drivers, measuring their success by business outcomes rather than operational metrics. The change isn't aspirational, it's happening right now in how teams are structured, what they measure, and how they're compensated.

Rachel Jin at Trend Micro frames the shift explicitly: "I know in the past the IT team you have nothing related to our revenue growth. You have nothing related to our annual recurring revenue ARR but right now I hope you started to think about what's your value to grow our business." That's not a suggestion, it's an expectation. IT teams that can't articulate their revenue impact are getting left behind.

Shyam at Nextdoor sees it in how success is measured: "You're not just a service team, you're working hand in hand with business, you're impacting revenue directly and indirectly, you are making your employees productive." The metric that matters isn't ticket resolution time, it's how much revenue-generating capacity you're unlocking in the organization.

The practical manifestation: IT teams are adopting Experience Level Agreements (XLAs) instead of Service Level Agreements (SLAs). "I always talk about XLA, which stands for Experience Level Agreement," Shyam told us. "Don't just think about like, I met my 98 percent SLA. Think about what is the XLA number I kept." The shift is from measuring operational efficiency to measuring business impact.

Praniti Lakhwara at Zscaler describes the role evolution: "So I think the shape and form of IT is changing to more offense than defense in many ways. Because instead of waiting for business to bring you ideas, you're proactively identifying high impact use cases because you're the technologist." IT isn't responding to requests anymore, IT is identifying opportunities and driving implementation.

What's enabling this shift? AI tools that deflect routine work, freeing IT teams to focus on strategic projects. At Zip, Piru Chheang's team deflects 50% of tickets using AI-powered service management. That deflection isn't the goal, it's the enabler. The freed capacity goes toward projects that directly impact revenue: improving sales tools, optimizing customer onboarding, building internal platforms that accelerate product development.

The teams making this transition successfully share a pattern: they stopped talking about IT capabilities and started talking about business outcomes. Not "we implemented a new ticketing system" but "we reduced sales cycle time by 15% by eliminating approval bottlenecks." The language shift reflects the role shift.

These shifts, from augmentation to brand building to process redesign, represent a fundamental rethinking of how GTM works. But they're not happening uniformly, and the disagreements reveal as much as the consensus.

**Counter-signal.** None of the leaders we interviewed disagreed with the trend toward IT as revenue driver, but several noted that the transition is difficult when IT teams lack business context or when leadership doesn't give them the mandate to operate strategically. Our read: the shift is inevitable, but not automatic. IT teams need to actively build business acumen, and leadership needs to create space for IT to operate strategically. The teams stuck in reactive mode aren't there because they want to be, they're there because the organization hasn't evolved to let them be anything else.

**Practitioner callout.** If your IT team's quarterly goals are still written in terms of uptime percentages and ticket resolution times, you're measuring the wrong things. The question isn't "how efficiently did we resolve tickets?", it's "how much revenue-generating capacity did we unlock in the organization this quarter?"

I know in the past the IT team you have nothing related to our revenue growth. You have nothing related to our annual recurring revenue ARR but right now I hope you started to think about what's your value to grow our business.

Chief Platform and Business Officer, Enterprise cybersecurity company

you're not just a service team, you're working hand in hand with business, you're impacting revenue directly and indirectly, you are making your employees productive

CIO and CSO, Enterprise social technology company

I always talk about XLA, which stands for Experience Level Agreement... don't just think about like, I met my 98 percent SLA. Think about what is the XLA number I kept

CIO and CSO, Enterprise social technology company
