Growing Engineers in the Agentic Tooling Era

Over the last few posts we’ve explored how AI tools are reshaping the development process. First, we talked about working with AI as a form of modern pair programming—two minds, one keyboard. Then we looked at how frameworks like the BMad Method introduce structure into AI-assisted development. But there’s another important question emerging for engineering leaders: how do we continue to develop engineering talent in a world where tools like Claude and GitHub Copilot can write large amounts of code?

The concern often sounds something like this: If AI can generate code, how will junior engineers learn? Will they skip the fundamentals? Will there still be meaningful work for early-career developers? It’s a reasonable concern, but it may also be based on a misunderstanding. The reality is that the core process for developing engineering talent hasn’t changed nearly as much as people think.

The Problem: Fear That AI Will Replace the Learning Curve

Historically, the way engineers learned on the job was simple and very practical.

A new graduate or junior developer would join the team and receive a small amount of onboarding:

  • How the codebase is structured
  • How deployments work
  • The team’s coding standards
  • The tools and frameworks being used

Then the real learning began.

Managers or senior engineers would assign progressively larger tasks:

  1. Add a field to a form
  2. Build a small feature
  3. Extend a module
  4. Own a subsystem

Through this process, junior engineers learned by doing. They wrote code, made mistakes, received feedback, and gradually built confidence.

When people look at AI today, they worry that this progression disappears. If an AI can generate the code instantly, does the learning opportunity disappear with it?

The answer is no. The learning model is still fundamentally the same.

The Solution: AI as a Learning Amplifier

The key shift is that the junior engineer now works with an AI assistant during the process.

Instead of writing every line of code manually, they collaborate with tools like Claude or Copilot to generate the initial implementation.

But the responsibility for understanding and validating the code remains with the engineer.

The workflow might look like this:

Step 1: Assign the same small task

The manager still assigns the same type of work:

“Add a new field to this form and store it in the database.”

Step 2: Use AI to generate a starting point

The engineer asks the AI:

“Generate the code needed to add a ‘Customer ID’ field to this React form and persist it to the API.”

Step 3: Review and validate

The critical learning happens here.

The engineer must confirm:

  • Does the code follow our coding standards?
  • Does it integrate correctly with the existing module?
  • Are validation rules correct?
  • Does it handle edge cases?

Step 4: Improve and refine

If the generated code doesn’t align with the team’s approach, the engineer refactors it.

This mirrors the traditional learning loop: Attempt → Review → Improve

The difference is that the first attempt may be generated by AI, but the learning still comes from understanding and improving the result.
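To make the review step concrete, here is the kind of first draft an AI might produce for the “Customer ID” task above. This is a hypothetical TypeScript sketch: the CUST-00000 format rule, the function name, and the error messages are all assumptions for illustration, not part of any real codebase.

```typescript
// Hypothetical AI-generated helper for the "Customer ID" form field.
// The CUST-00000 format is an assumed business rule; the reviewing
// engineer's job (Step 3) is to check it against the team's real standards.

interface ValidationResult {
  valid: boolean;
  error?: string;
}

// Assumed format: "CUST-" followed by exactly five digits.
const CUSTOMER_ID_PATTERN = /^CUST-\d{5}$/;

function validateCustomerId(raw: string): ValidationResult {
  const value = raw.trim();
  if (value.length === 0) {
    return { valid: false, error: "Customer ID is required." };
  }
  if (!CUSTOMER_ID_PATTERN.test(value)) {
    return { valid: false, error: "Customer ID must look like CUST-12345." };
  }
  return { valid: true };
}
```

Reviewing a draft like this is exactly where the Step 3 questions bite: is that pattern really the team’s ID format, does trimming belong on the client or in the API, and do the error messages match the team’s conventions?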

Evidence: Why This Model Still Works

In practice, many engineering teams experimenting with AI-assisted development are seeing something interesting.

Junior engineers often become productive faster.

Instead of spending hours stuck on syntax or documentation, they can:

  • Explore solutions quickly
  • Understand patterns faster
  • Iterate on ideas in real time

This allows them to spend more time on higher-value learning activities:

  • Understanding architecture
  • Improving code quality
  • Thinking about system design

In other words, AI removes some of the friction around writing code, but it doesn’t remove the need to understand the code.

The same progression still happens:

  Traditional Growth          AI-Assisted Growth
  Add a field                 Generate and validate a field
  Build a form                Generate and refine a form
  Build a module              Design and orchestrate a module

The tasks remain the same—the tools simply accelerate the first draft.

What Changes for Engineering Leaders

The biggest shift may not be for developers at all—it may be for engineering managers.

If junior engineers can move faster with AI assistance, it becomes possible for a single manager or senior engineer to support more developers than before.

Instead of reviewing every line of code, leaders focus on:

  • Architectural guidance
  • Code quality standards
  • System design decisions
  • Mentoring engineers on judgment and tradeoffs

In this environment, the role of leadership evolves from code oversight to engineering coaching.

The goal becomes teaching engineers how to:

  • Prompt effectively
  • Evaluate generated code
  • Align outputs with engineering standards
  • Think critically about design decisions

The Future: Same Journey, Better Tools

Despite all the excitement around AI, the core journey of becoming a great engineer hasn’t changed.

Developers still learn by:

  • Solving real problems
  • Iterating on solutions
  • Receiving feedback
  • Gradually taking on more responsibility

The difference is that the tools available today can dramatically accelerate the process.

For organizations building teams in the agentic tooling era, the opportunity is clear:

  • Continue hiring and developing junior engineers
  • Integrate AI into the learning process
  • Focus mentorship on judgment rather than syntax

Because even in a world of AI-generated code, great software still depends on great engineers.

And the best way to build great engineers is still the same as it’s always been:

Give them real problems to solve, support them as they learn, and let them grow.

Your AI Writes Fast. BMad Makes Sure It Builds Right

Over the past year, a quiet shift has been happening in how software gets built. Tools like Claude, ChatGPT, and GitHub Copilot are changing the development process in ways that feel surprisingly familiar. For many engineers, the experience resembles something developers have practiced for years: pair programming. The difference now is that one side of the “pair” is an AI assistant: instead of two developers sitting side by side, it’s a human collaborating with an AI, sharing a single keyboard and building software together.

The challenge is that most people still think of AI coding tools as autocomplete on steroids. When used that way, simply asking AI to do the work, the results feel inconsistent or shallow. What many teams haven’t yet realized is that these tools work best when treated like a real development partner. The shift is subtle but powerful: instead of asking AI to generate code in isolation, you collaborate with it the same way you would with another engineer sitting beside you.

AI as a Pair Programming Partner

Traditional pair programming involves two roles:

  • Driver — the person typing at the keyboard
  • Navigator — the person reviewing, thinking ahead, and suggesting improvements

When working with AI tools like Claude, the same pattern emerges naturally. The developer remains the driver, making architectural decisions, steering the problem, and validating outcomes. The AI becomes the navigator, helping explore options, identifying edge cases, generating scaffolding, and reviewing logic.

The interaction might look something like this:

  1. Developer frames the problem — “I’m building a React component that handles document uploads and validation.”
  2. AI suggests approaches — architecture patterns, libraries, validation strategies.
  3. Developer refines the direction — “Let’s use TypeScript and handle file size and MIME type validation.”
  4. AI generates an initial implementation.
  5. Developer critiques and improves — “This logic needs better error handling.”
  6. AI helps refactor or extend.

This loop continues until the feature is complete. The human remains responsible for judgment, while the AI accelerates thinking and execution.

Working this way changes how development feels day to day. AI eliminates much of the mechanical overhead — boilerplate, documentation lookup, and scaffolding can be generated instantly. When you’re stuck on a design decision, AI can quickly explore multiple options, acting as a brainstorming partner. It can evaluate code continuously, identifying edge cases, suggesting refactoring opportunities, and highlighting potential bugs. For developers exploring new stacks, it can explain concepts in real time while generating working examples.

The loop becomes shorter and more collaborative:

Traditional workflow: Think → Search → Read Docs → Write Code → Debug

AI pair programming workflow: Think → Discuss with AI → Generate → Refine Together

When Speed Needs Structure

This is where many teams hit a second challenge. Once you’ve experienced how fast the AI pair programming loop moves, a new problem emerges: speed without structure can lead to messy architectures, unclear requirements, and rework later. When developers rely too heavily on prompting without a clear workflow, the process can become chaotic.

The real opportunity isn’t just coding faster — it’s creating a repeatable process where AI helps move work from idea to production in a disciplined way. The best developers in this new world won’t simply be the ones who write the most code. They’ll be the ones who know how to direct the system, ask the right questions, and collaborate effectively. And to do that consistently across a team, you need more than a good instinct for prompting. You need a framework.

That’s where the BMad Method comes in.

Structured AI Collaboration with the BMad Method

The BMad Method — which stands for Build More Architect Dreams — is an AI-driven development framework that takes you from ideation and planning all the way through to implementation. Rather than treating AI like a code generator you prompt ad hoc, BMad gives you specialized AI agents, guided workflows, and structured context management that adapts to your project’s complexity, whether you’re fixing a bug or building an enterprise platform.

It works with any AI coding assistant that supports custom system prompts, including Claude Code (the recommended option), Cursor, and Codex CLI.

The key insight behind BMad is that AI agents work best when they have clear, structured context. Without it, agents make inconsistent decisions. BMad builds that context progressively across phases, so each step informs the next.

How the BMad Workflow Is Structured

BMad organizes development into four phases, each producing documents that feed into the next.

Phase 1 — Analysis (optional) Before committing to building anything, you can explore the problem space. BMad provides workflows for brainstorming, market research, domain research, and capturing a strategic product brief. This phase is optional but valuable when requirements aren’t yet clear.

Phase 2 — Planning Define what to build and for whom. This is where you create a Product Requirements Document (PRD) that captures functional and non-functional requirements, and optionally a UX spec if user experience decisions need to be made explicit.

Phase 3 — Solutioning Decide how to build it. This phase produces an architecture document with decision records, breaks requirements into epics and stories, and includes an implementation readiness check before any code is written: a deliberate gate that prevents the “just start coding” trap that leads to rework.

Phase 4 — Implementation Build one story at a time. BMad’s developer agent implements stories, a code review workflow validates quality, and a retrospective captures lessons learned after each epic.

Quick Flow: When You Don’t Need the Full Process

BMad is pragmatic. Not every task warrants four phases of planning. For bug fixes, refactoring, small features, and prototyping, there’s a parallel track called Quick Flow that takes you from idea to working code in just two steps:

  1. bmad-quick-spec — A conversational planning process that scans your codebase, asks informed questions, and produces a tech-spec.md file with ordered implementation tasks, acceptance criteria, and a testing strategy.
  2. bmad-quick-dev — Implements the work against the spec, runs a self-check audit against all tasks and acceptance criteria, then triggers an adversarial code review before wrapping up.

If Quick Flow detects that scope is larger than it first appeared, it offers to escalate automatically to the full PRD workflow — without losing any work already done.

The Role of Specialized Agents

One of the things that makes BMad distinct from a generic AI workflow is that it uses named, specialized agents for different roles:

  Agent                      Role
  Mary (Analyst)             Brainstorming, research, product brief
  John (Product Manager)     PRD creation and validation
  Winston (Architect)        Architecture and technical decisions
  Bob (Scrum Master)         Sprint planning and story creation
  Amelia (Developer)         Implementation and code review
  Barry (Quick Flow)         Quick spec and quick dev
  Paige (Technical Writer)   Documentation

Each agent operates within a structured workflow rather than responding to open-ended prompts. This is what makes output predictable and consistent, rather than dependent on how well you phrased your last message.

The Difference Structure Makes

Teams experimenting with AI-assisted development typically go through a predictable evolution:

Stage 1 — Prompting for code. Developers ask AI to generate snippets or functions. Results are fast but inconsistent.

Stage 2 — AI as pair programmer. Developers collaborate interactively with AI to build features. Better, but still ad hoc.

Stage 3 — Structured AI workflows. Teams introduce frameworks like BMad to manage the process end to end.

The leap from stage two to stage three is significant. Without structure, AI development tends to produce inconsistent code quality, unclear design decisions, and duplicated logic. With a framework like BMad, you get predictable development cycles, better architectural outcomes, and artefacts that help onboard new engineers faster.

What This Means for Modern Development Teams

AI is changing how software gets written, but the bigger transformation is how software gets designed and delivered.

The pair programming analogy that opened this post is still the right mental model: the human as driver, the AI as navigator. But BMad takes that model and makes it work at scale, across a whole team, across a whole project lifecycle, not just in a single coding session.

Pair programming with AI gives developers speed. Frameworks like the BMad Method give teams discipline. Together they create something more durable than either alone:

Human creativity + AI acceleration + structured workflow.

Your AI partner is available 24/7, can explore thousands of possibilities instantly, and never gets tired. The question is whether your process is good enough to make the most of that.

Two minds. One keyboard. A system built to last. 

Learn more and get started at docs.bmad-method.org

Pair Programming with AI: Two Minds, One Keyboard

Over the past year, a quiet shift has been happening in how software gets built. Tools like Claude, ChatGPT, and GitHub Copilot are changing the development process in ways that feel surprisingly familiar. For many engineers, the experience isn’t completely new; it resembles something developers have practiced for years: pair programming. The difference now is that one side of the “pair” is an AI assistant. Instead of two developers sitting side by side, it’s a human developer collaborating with an AI, sharing a single keyboard and building software together.

The challenge, however, is that most people still think of AI coding tools as autocomplete on steroids. They expect them to simply generate code or answer questions. When used this way, the results can feel inconsistent or shallow. What many teams haven’t yet realized is that these tools work best when treated like a real development partner. The shift is subtle but powerful: instead of asking AI to do the work, you collaborate with it the same way you would with another engineer sitting beside you.

The Solution: AI as a Pair Programming Partner

Traditional pair programming usually involves two roles:

  • Driver – the person typing at the keyboard
  • Navigator – the person reviewing, thinking ahead, and suggesting improvements

When working with AI tools like Claude, the same pattern emerges naturally.

The developer remains the driver, making architectural decisions, steering the problem, and validating outcomes. The AI becomes the navigator, helping explore options, identifying edge cases, generating scaffolding, or reviewing logic.

The interaction might look something like this:

  1. Developer frames the problem
    “I’m building a React component that handles document uploads and validation.”
  2. AI suggests approaches
    It may propose architecture patterns, libraries, or validation strategies.
  3. Developer refines the direction
    “Let’s use TypeScript and handle file size and MIME type validation.”
  4. AI generates an initial implementation
  5. Developer critiques and improves the code
    “This logic needs better error handling.”
  6. AI helps refactor or extend

This loop continues until the feature is complete.

The important part is that the human remains responsible for judgment, while the AI accelerates thinking and execution.
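For the upload example in the loop above, the AI’s initial implementation (step 4) might center on a pure validation function like the sketch below. The 10 MB limit and the allowed MIME types are illustrative assumptions; the real values would come from the developer’s requirements.

```typescript
// Hypothetical validation core for the document-upload component
// discussed above. The size limit and MIME whitelist are assumed
// values the driver (the developer) would confirm or change in review.

const MAX_FILE_SIZE_BYTES = 10 * 1024 * 1024; // assumed 10 MB cap
const ALLOWED_MIME_TYPES = ["application/pdf", "image/png", "image/jpeg"];

interface UploadCheck {
  ok: boolean;
  errors: string[];
}

function checkUpload(name: string, sizeBytes: number, mimeType: string): UploadCheck {
  const errors: string[] = [];
  if (sizeBytes <= 0) {
    errors.push(`${name}: file is empty.`);
  } else if (sizeBytes > MAX_FILE_SIZE_BYTES) {
    errors.push(`${name}: exceeds the size limit.`);
  }
  if (!ALLOWED_MIME_TYPES.includes(mimeType)) {
    errors.push(`${name}: unsupported file type "${mimeType}".`);
  }
  return { ok: errors.length === 0, errors };
}
```

Keeping the checks pure (no DOM, no network) makes them easy to unit test, which is exactly the kind of “better error handling” critique the developer raises in step 5.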

What Changes When You Work This Way

Treating AI like a pair programmer changes how development feels day to day.

1. Faster Iteration

AI eliminates much of the mechanical overhead in development. Boilerplate, documentation lookup, and scaffolding can be generated instantly.

2. A Second Brain

When you’re stuck on a design decision, AI can quickly explore multiple options. It becomes a brainstorming partner.

3. Constant Code Review

AI can evaluate code continuously:

  • identifying edge cases
  • suggesting refactoring opportunities
  • highlighting potential bugs

4. Learning While Building

For developers exploring new stacks, whether it’s React, Supabase, or a new API, AI can explain concepts in real time while generating working examples.

Evidence: How Teams Are Already Using AI Pair Programming

Across engineering teams, this pattern is emerging naturally.

Many developers using tools like Claude or Copilot report that their workflow now looks like this:

  • Describe the feature in natural language
  • Generate an initial implementation
  • Iterate with AI on improvements
  • Validate and finalize the code

Instead of searching documentation or browsing Stack Overflow for answers, developers interact conversationally with their development partner.

Even experienced engineers are adopting this approach because it amplifies their productivity without removing control. AI becomes a force multiplier, not a replacement.

A helpful way to visualize the workflow:

Traditional workflow

Think → Search → Read Docs → Write Code → Debug

AI pair programming workflow

Think → Discuss with AI → Generate → Refine Together

The loop becomes shorter and more collaborative.

Why This Matters for the Future of Software Development

The biggest misconception about AI in development is that it will replace engineers. In reality, it’s transforming how engineers work together with machines.

The best developers in this new world won’t be the ones who simply write the most code. They will be the ones who know how to direct the system, ask the right questions, and collaborate effectively with AI tools.

In many ways, AI is simply extending a practice developers already understand: pair programming. The difference is that your partner is now available 24/7, can explore thousands of possibilities instantly, and never gets tired.

For teams building modern applications, the opportunity is clear:

  • Treat AI like a collaborator, not a tool
  • Work in conversational loops rather than isolated coding sessions
  • Use AI to accelerate thinking, not just typing

The result isn’t just faster software development; it’s a fundamentally more interactive way of building software.

If you’re experimenting with AI tools today, try changing your mindset on the next feature you build. Instead of asking the AI to generate code, sit down with it like a colleague.

Two minds. One keyboard. Better software.

The Hidden Headwinds

What No One Tells You About Partnering with Microsoft

If you’re a tech company looking to scale, the allure of partnering with Microsoft is undeniable. With one of the world’s most powerful sales engines, a global footprint, and a vast customer ecosystem, it seems like a no-brainer. You sign up as a partner, align your solution to their cloud, and watch the leads pour in… right?

After more than 25 years of working with Microsoft as a partner, I can tell you this is far from the truth.


This post is about pulling back the curtain. As someone who’s spent years in and around the Microsoft ecosystem, as a partner, as a customer, and as a leader who has built strategic alliances, I’ve seen both the gold and the gravel.

While partnering with Microsoft can absolutely unlock game-changing growth, it’s not as simple as flipping a switch, and you have to keep reinventing yourself. Let’s talk about why.

Partnership ≠ Pipeline (At Least Not Right Away)

One of the most common misconceptions about partnering with Microsoft is the belief that doing so instantly translates to revenue and that every other partner is getting leads all day every day.

After all, Microsoft touts a massive partner network, co-sell programs, marketplace opportunities, and joint go-to-market initiatives. But the truth is, becoming a Microsoft partner is just the starting line, not the finish. And the path forward? It’s winding.

First, you need to understand that “Microsoft” is not a monolith. There’s Microsoft Corp (a.k.a. Redmond HQ) and then there’s Microsoft Subsidiaries (the regional field sales orgs). These two groups operate with very different priorities, and forging strong ties at Corp won’t automatically win you love (or pipeline) in the field. In fact, one of the biggest mistakes new partners make is investing heavily in Corp-level relationships, only to realize the local Subsidiary they’re trying to sell with doesn’t even know who they are.

Even more critical: if you don’t know what both Corp and the Subsidiary are measured on, you won’t know whether you can (or should) align. Microsoft’s priorities change every fiscal year (think Azure growth, security SKU attach, Fabric or Copilot adoption), and if your solution or service offering doesn’t map to the current targets, getting traction will feel like swimming upstream.

The Reality Check: You Have to Pay the Microsoft Partner Tax

Here’s a hard truth that’s rarely spoken out loud: if you’re not ready to “pay the Microsoft partner tax,” you won’t get much in the way of traction or engagement.

So what is the Microsoft tax?

It’s not a literal fee. It’s the investment of time, people, and money required to play the game at a level where you’re taken seriously.

Here’s what it looks like in practice:

  • You’ll need certified people on your team, especially those with Azure and security credentials.
  • You’ll be expected to show tangible cloud consumption, which often means becoming a CSP (Cloud Solution Provider) or at least working closely with one.
  • You’ll need to be present—at Microsoft events, in Microsoft Partner Center, at regional GTM calls, and often flying globally to make face-to-face connections.
  • You’ll need to build strong up and down relationships with Subsidiary teams in every market you operate in (and each country will be different).
  • Your senior leadership needs to show up, consistently. Not just once a year, but as a regular drumbeat of visibility and decision-making presence.
  • You may even need to commit on the spot, sponsoring an event, jumping on a joint marketing campaign, or funding a proof of concept, just to stay top-of-mind.
  • You might have to bet on bleeding-edge Microsoft products—investing early in developing skills, pitch muscle, and delivery experience for a solution that isn’t quite ready for primetime. And if you do win those early lighthouse deals, be prepared: they can often be painful, margin-squeezing loss-leaders as you wrestle with immature tooling and shifting product roadmaps.

All of this typically happens before you see much return. That’s why many companies dabble, but few go deep enough to win.

The Strategic Shift: Partnership is a Motion, Not a Moment

To succeed with Microsoft, you have to treat the partnership like building a second sales engine, not just slapping a logo on your deck. That means:

  • Aligning your value to Microsoft’s annual scorecard: Know what moves their needle this year and tailor your messaging, demos, and integrations accordingly.
  • Learning the rhythm of their fiscal year: Budgeting, planning, co-sell cycles, and performance metrics are all tied to this calendar.
  • Building field relationships intentionally: You’ll need local champions, joint account plans, and consistent syncs to stay relevant.
  • Committing leadership bandwidth and resources: Your CEO and senior execs need to be in the loop and visibly invested.

Proof in Practice: How Some Partners Make It Work

The partners who win with Microsoft don’t just show up, they build parts of their business around the ecosystem.

  • Veeam doubled down on Azure integration and CSP alignment, earning credibility with Microsoft field sellers by driving tangible cloud adoption.
  • Nintex aligned with Microsoft’s Power Platform narrative by integrating their automation tools and extended offerings, even though, on the surface, Power Automate could be seen as a competitor. This bold move turned Nintex from a potential rival into a complementary solution.
  • Elastic earned trust by delivering consumption-heavy scenarios that field sellers could easily plug into Azure deals, making them a multiplier, not a distraction.

What do these companies have in common? They aren’t afraid to make bold, ongoing investments, and they never stop tuning into Microsoft’s evolving priorities.

It Can Be Worth It, But Only If You Invest

Partnering with Microsoft can be transformative, but it’s not transactional; it’s a commitment. You’re entering a relationship where alignment, visibility, and relentless execution are required.

Key Takeaways

  • Understand the difference between Microsoft Corp and Subsidiary, and why both matter.
  • Know what Microsoft is measured on before you try to align.
  • Accept that “partnering” means paying a real-world tax, in certifications, cloud consumption, travel, time, and senior commitment.
  • Build local and global field relationships proactively. This is a human game.
  • Stay agile. What worked last year may not move the needle this year.

Why SIs Should Embrace Channel Partnerships for Growth

In today’s ever-evolving technology landscape, being a successful systems integrator (SI) is no longer just about writing code or delivering projects. It’s about shaping end-to-end business outcomes, delivering scalable solutions, and creating long-term customer value. One of the most powerful levers to drive that growth? Becoming a channel partner for leading technology vendors. Whether it’s Microsoft, AWS, Salesforce, or emerging AI startups, aligning with the right partners can accelerate your business in ways that go far beyond reselling licenses.

This post will unpack the strategic why behind partnering, cover what makes for a high-functioning SI–ISV alliance, and offer some hard-won advice to avoid common pitfalls along the way.

The Problem: Growth Is Harder When You Go It Alone

Many SIs today face a common problem: they want to grow, but the path forward is crowded and competitive. Whether you’re a boutique consultancy or a regional powerhouse, the pressure is on to expand your services, win bigger deals, and deepen client relationships. But how do you scale when your current offerings are tapped out or lack the technical depth that clients increasingly expect?

Maybe you’ve built a killer automation practice, but you’re missing cloud-native AI expertise. Or perhaps you’re great at building bespoke solutions, but clients are asking for packaged IP. This is where the right technology partnership can help fill the gap and unlock new revenue streams.

The Solution: Strategic Channel Partnerships

Let’s start with what becoming a channel partner really means. It’s not just about getting a discount on licenses or slapping a logo on your website. Done right, it’s about mutual growth: the vendor grows their reach and adoption; you grow your services pipeline, win more deals, and improve your margins through differentiated offerings.

Here’s how that plays out across a few standout ecosystems:

  • Microsoft: One of the most mature and partner-driven ecosystems in the world. SIs benefit from co-sell motions, marketing development funds (MDF), and technical enablement. Whether it’s Power Platform, Azure, or Copilot, there’s a constant stream of customer demand—if you’re a preferred partner, you’re in the running for it.
  • AWS: Focused on technical excellence and specialization. Partnering here gives you access to APN programs, migration acceleration initiatives, and most importantly, credibility. AWS customers often only look for certified partners.
  • Google Cloud (GCP): While a bit leaner on channel maturity, GCP rewards innovation and AI-centric solutions. Their partner portal and sales alignment processes are improving, especially in AI and data workloads.
  • Salesforce: The AppExchange and consulting partner model creates a rich ecosystem for SIs to resell, build, and support. The Trailblazer community is particularly strong in verticals like finance, health, and public sector.
  • Nintex: A powerful process automation partner for SIs focused on digital transformation and workflow. Their clear partner tiers and technical enablement allow SIs to stand out with certifications and use-case IP.

Now add to that a wave of new AI-native companies like:

  • Anthropic (Claude), OpenAI, and Mistral AI – all with growing ecosystem and partner programs for integrating models or creating fine-tuned solutions for enterprise.
  • Cohere – especially if you’re focusing on retrieval-augmented generation (RAG) and privacy-sensitive deployments.
  • Aible – who offer partner-friendly AI solutions with low-code automation and business integration layers.

Business Outcomes: Growth, Differentiation, and Filling Gaps

So why jump in?

  1. Drive Top-Line Revenue: If you’re growth-oriented, there’s no better way to scale than by tapping into a vendor’s brand equity and customer base. Microsoft, Salesforce, and AWS all generate inbound leads for certified partners. That said, it takes real effort to make these partnerships pay off. I’ve spoken with many partners who feel like everyone else is getting more leads than they are, but that’s rarely the case. Success often comes down to consistent engagement, internal alignment, and patience.
  2. Fill Offering Gaps: If you’re missing security, observability, or AI expertise in your offering, partners like Datadog, Anthropic, or Microsoft help you round out the solution.
  3. Showcase Differentiation: Becoming a certified partner signals to customers that you know your stuff. If you’re a GCP Premier Partner or a Microsoft Solutions Partner, that badge means something, and it’s often required to tap into partner funding.
  4. Combined Selling Power: When co-selling is in play, everyone wins. The vendor wants to land a new customer. You want to grow your services business. Working together through combined account teams can build real momentum and reduce sales cycle friction. That said, be mindful: sometimes you’ll need to walk away from a deal. For example, if Microsoft brings you into an account to shift workloads off AWS, but the customer shows no interest, and you pivot aggressively to selling AWS instead, you could risk burning a bridge. Strategic alignment matters just as much as short-term opportunity.

What Great Partnerships Look Like

To maximize the value, you’ll need structure on both sides. The most successful SI partner models include:

  • An Executive Sponsor: Someone internally who can clear roadblocks, approve investment (like getting your team certified), and show your commitment to the partner.
  • A Sales or Alliance Lead: This person builds the relationship, finds co-sell opportunities, and evangelizes the partner internally and to clients.
  • A Hands-On Delivery Champion: Someone who deeply knows the tech and can prove your team can actually implement the solution well.

Don’t overlook your internal comms either. A structured partner enablement plan (monthly capability sessions, shared go-to-market (GTM) assets, and use-case examples) can help you scale faster across your teams.

Pitfalls to Avoid (And How to Fix Them)

Of course, not all partnerships are smooth sailing. Watch out for these traps:

  • Lack of internal alignment: If your execs see partnership as “just another vendor relationship,” it won’t get the internal muscle it needs. Assign clear roles and make sure someone owns success.
  • Over-commitment without ROI: Some programs require expensive certifications or quota commitments. Be honest about whether the opportunity justifies the investment. Start small, prove value, scale up.
  • Mismatched goals: Vendors are focused on license growth; you’re likely focused on services margin. Clarify early how you both win, and track joint KPIs like pipeline generated, deals closed, and service attach rate.
  • Under-leveraging resources: Many vendors have pre-sales engineers, marketing funds, and partner incentives, but you have to ask for them. Build a quarterly plan and activate those benefits.

Wrapping It Up: Play Bigger, Together

If you’re a systems integrator looking to grow, don’t go it alone. Channel partnerships can open doors to new clients, richer offerings, and higher-margin services. The key is picking the right partners, structuring internally for success, and avoiding the common traps that dilute value.

Next Steps:

  • Identify your offering gaps and see which vendor can help fill them.
  • Choose a strategic partner to start with and nominate your internal sponsor, sales lead, and delivery owner.
  • Build a 90-day GTM plan that aligns your goals with theirs.

Not sure where to start? Look for ISVs that are your “plus one”. What do we mean by that? Simply put, find a partner that enhances what you already do rather than requiring you to build an entirely new offering from scratch. Partnering is most effective when it’s an extension of your strengths, not a reinvention of your business. Unless you have deep pockets or strong internal buy-in, avoid partnerships that pull you too far out of your lane. Launching a successful partnership takes time, focus, and commitment. If your executive team isn’t aligned or patient with the ramp-up, you risk half-hearted execution and limited results.

About me

Understanding Token Math

Turning Ideas into Hard Numbers

There’s no shortage of hype around generative AI. From dinner table debates to executive boardrooms, people are abuzz with talk of AI transforming everything, from coding to customer service, risk analysis to recipe generation. Across industries, leaders are feeling the pressure to “do something” with AI. But what exactly?

As businesses look for ways to improve productivity and reduce costs, inference-based solutions can offer a smart entry point into the generative AI era.

Modernizing Legacy Applications

Organizations should be actively exploring how inference-based functionality can be integrated into their existing line-of-business (LOB) solutions. Done right, this has the potential to be a genuine game changer, not just another flashy demo.

To move beyond a generative AI proof of concept, leadership teams need to shift the conversation toward what it actually costs to run these solutions at scale. Without that clarity, operationalizing AI remains out of reach.

From Cool Demo to Scalable Reality

Let’s talk about the hard part.

Once you move beyond a clever prototype and start considering inference in a production setting, several challenges appear. First, there’s latency and performance: your model needs to return results fast enough for real-world use.

Then there’s infrastructure. Do you run this on CPUs? GPUs? Where? And let’s not forget model size, fine-tuning, and security.

Today, I’d recommend starting with inference before diving into fine-tuning. But one question tends to dominate stakeholder discussions: what does it actually cost to run?

Whether you’re using OpenAI’s GPT models, your own LLaMA instance on Azure, or Hugging Face models via containers, the real question is: what’s the dollar cost per inference? And more importantly, what’s the cost per business transaction?

Three Paths to Inference Deployment

Let’s break down the three most common paths for running inference:

Option 1: API-Based Inference (Token-Based Services)

  • How it works: You consume a model via a managed API (like OpenAI, Azure OpenAI, or Cohere). You pay per token used.
  • Pros: No infrastructure overhead, rapid setup, great for experimentation and burst workloads.
  • Cons: Limited control over performance, latency, and data governance. You’re locked into model choices and pricing.

Option 2: Containerized Inference (Self-Hosted Models)

  • How it works: You run models like LLaMA or DeepSeek in your own cloud (or even on-prem) using GPU VMs.
  • Pros: Full control over the model and tuning, consistent performance, and easier cost predictability at scale.
  • Cons: High setup complexity, need for ML engineering expertise, GPU cost volatility, and you carry the burden of uptime and scaling.

Option 3: Hybrid Model (Burstable Inference)

  • How it works: You run a base level of dedicated GPU capacity and burst to an API when demand spikes.
  • Pros: Balances cost and performance, reduces latency under load, and provides fallback capacity.
  • Cons: Requires orchestration logic and potentially dual billing models, with added complexity to monitor.
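To make the orchestration piece of Option 3 concrete, here’s a minimal sketch of the routing decision in a burstable setup. The queue threshold and backend names are illustrative assumptions, not a real framework or API:

```python
# Minimal sketch of hybrid "burstable" routing: prefer the dedicated GPU
# pool, fall back to a pay-per-token API when the local queue saturates.
# The threshold and backend names below are hypothetical.

MAX_LOCAL_QUEUE = 8  # assumed capacity of the dedicated GPU pool


def route_request(local_queue_depth: int) -> str:
    """Return which backend should serve the next inference request."""
    if local_queue_depth < MAX_LOCAL_QUEUE:
        return "dedicated-gpu"  # cheaper per call once provisioned
    return "api-fallback"       # pay-per-token, absorbs the spike


# Quiet period vs. a demand spike
print(route_request(2))   # dedicated-gpu
print(route_request(12))  # api-fallback
```

In a real deployment this decision would live in your gateway or queue worker, and you’d likely factor in latency targets and per-backend cost as well as queue depth.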

What’s the Cost Per Business Transaction?

This is where it gets real.

An API request or a GPU inference run is not a business outcome. To justify the investment to leadership, you need to tie this back to actual workflows.

Use Case: Construction Site Safety Inspection with AI

Here’s a process you could automate with generative AI:

  1. A construction site photo is uploaded.
  2. A 10MB safety policy document is ingested.
  3. The model identifies any safety violations in the image by comparing it against the policy.
  4. A risk register is generated with identified issues and proposed mitigations.
  5. Tasks are created for the site manager to resolve each issue.
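Wired together, the five steps above might look like the sketch below. Every helper here is a stand-in stub (a real system would call an image-capable model and a task tracker), so treat it as the shape of the pipeline, not an implementation:

```python
# Illustrative pipeline for the safety-inspection workflow.
# All helpers are placeholder stubs, not real APIs.

def ingest_policy(path: str) -> str:
    # Step 2: load the safety policy document
    return f"policy text loaded from {path}"

def find_violations(photo: str, policy: str) -> list[str]:
    # Step 3: stand-in for the model comparing the image to the policy
    return ["missing guardrail on level 2", "unsecured scaffolding"]

def build_risk_register(violations: list[str]) -> list[dict]:
    # Step 4: issues plus proposed mitigations
    return [{"issue": v, "mitigation": f"Resolve: {v}"} for v in violations]

def create_task(entry: dict) -> str:
    # Step 5: one task per risk-register entry
    return f"TASK for site manager: {entry['mitigation']}"

def run_inspection(photo: str, policy_path: str) -> list[str]:
    policy = ingest_policy(policy_path)           # step 1: photo is uploaded; step 2: policy ingested
    violations = find_violations(photo, policy)   # step 3
    register = build_risk_register(violations)    # step 4
    return [create_task(e) for e in register]     # step 5

tasks = run_inspection("site_photo.jpg", "safety_policy.pdf")
for t in tasks:
    print(t)
```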

As a ballpark estimate, let’s say a site inspection costs an average of $250 and takes about 3 hours per visit.

What would it look like if you could automate most of this, run it daily across every construction site, and only send a human when a high-risk site is identified?

Token Math: Estimating Inference Cost with Real Data

Let’s get into the numbers. A quick and dirty way to estimate inference costs is what I call “token math.”

Assumptions:

  • Policy document: relevant text extracted from a ~10MB file → ~40,000–50,000 tokens
  • Photo (analyzed for context): ~500–1,000 tokens
  • Prompt: 100–300 tokens
  • Output (structured data, tasks, risk register): 500–2,000 tokens

That gives us a total token count per job of roughly 41,000–53,000, depending on prompt structure and policy complexity.

Now let’s look at the cost:

  • Containerized GPU run (e.g., LLaMA 8B on Azure low-end GPU VM): ≈ $0.72 per scan at the high end of token usage
  • API-based inference with a similar model: ≈ $0.05 per scan
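The token math above reduces to simple arithmetic. The sketch below sums the assumed token ranges and applies an illustrative blended price of $0.001 per 1,000 tokens, a placeholder rate chosen to land near the API estimate above, not any vendor’s current pricing:

```python
# Back-of-the-envelope "token math" for one inspection job.
# Token counts are the assumptions from the post; the price is illustrative.

TOKENS = {
    "policy_document": (40_000, 50_000),
    "photo_context": (500, 1_000),
    "prompt": (100, 300),
    "output": (500, 2_000),
}

PRICE_PER_1K_TOKENS = 0.001  # hypothetical blended $/1K tokens (in + out)

low = sum(lo for lo, _ in TOKENS.values())
high = sum(hi for _, hi in TOKENS.values())

print(f"Tokens per job: {low:,}-{high:,}")
print(f"API cost per job: ${low * PRICE_PER_1K_TOKENS / 1000:.2f}"
      f"-${high * PRICE_PER_1K_TOKENS / 1000:.2f}")
```

Swap in your provider’s actual per-token rates (input and output tokens are usually priced differently) and the same two lines of arithmetic give you a defensible per-scan cost.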

This doesn’t include supporting cloud infrastructure (storage, networking, orchestration), but those are relatively predictable costs that most teams already model.

So even at the higher end, $0.72 vs. $250 per inspection? That’s an eye-popping reduction. Even if you only automated part of the process and cut site visits in half, the ROI becomes clear.
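To frame that ROI for leadership, a quick monthly comparison helps. The fleet size and visit frequency below are made-up assumptions; the $250 and $0.72 figures come from the estimates above:

```python
# Illustrative monthly ROI comparison: daily automated scans vs. manual visits.
# Site count and visit frequency are assumptions, not data from a real fleet.

SITES = 20           # hypothetical number of construction sites
MANUAL_COST = 250.0  # per manual inspection (ballpark from above)
SCAN_COST = 0.72     # per automated scan, high-end containerized estimate
DAYS = 30

manual_monthly = SITES * MANUAL_COST * 4       # ~weekly manual visits
automated_monthly = SITES * SCAN_COST * DAYS   # daily automated scans

print(f"Manual (weekly visits):  ${manual_monthly:,.2f}")
print(f"Automated (daily scans): ${automated_monthly:,.2f}")
```

Even with a human still visiting the flagged high-risk sites, the gap between the two lines is what makes the business case.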

What’s the Takeaway?

As you consider deploying generative AI in production, especially for inference-heavy use cases, the deployment model you choose has a dramatic impact on cost and flexibility.

  • APIs are great for speed and scale
  • Containers give you control and cost predictability
  • Hybrid models offer balance, if you’re ready for the complexity

But no matter the tech stack, the business case is won or lost on how clearly you map tokens to transactions, and dollars to outcomes.

Links to head over to if you want to read more

Hugging Face Inference Endpoints – Hugging Face

If you’re interested in responsible AI, read Responsible AI: Ethical policies and practices | Microsoft AI

If you want to build workflow solutions and inject GenAI, check out Copilot Studio: https://www.microsoft.com/microsoft-copilot/microsoft-copilot-studio


About me – Brendon Ford, my home of thoughts on the web

Demystifying the ISV Partner Channel

Why Partner Ecosystems Matter for ISVs

For Independent Software Vendors (ISVs), scaling beyond a direct sales model is no longer a luxury; it’s a necessity. In today’s competitive SaaS landscape, partner ecosystems serve as powerful growth engines that enable global reach, faster customer onboarding, deeper market penetration, and scalable service delivery.

Whether you’re just starting to explore indirect channels or refining a mature partner strategy, one truth remains: not all partners are created equal.

Navigating the partner ecosystem can feel like alphabet soup—GSI, NSI, SI, ISV, VAR, LAR, MSP. Each partner type brings different strengths, incentives, and go-to-market (GTM) models. Without a clear understanding of their roles, ISV leaders risk misalignment, lost revenue, or poor partner engagement. This guide breaks it all down.

One Size Doesn’t Fit All

Many ISVs step into the partner world assuming more partners mean more revenue; in practice, it often just means more busywork. Quality trumps quantity. A GSI won’t solve the same problems as a VAR. An MSP isn’t going to co-develop your product like an OEM might. Clarity is key.

This post helps you:

  • Understand the major partner types
  • Recognize what each brings to the table
  • Identify which ones are right for your strategy

Let’s look at the partner landscape.


Putting It All Together: Strategy Meets Partner Type (with a Microsoft Ecosystem Lens)

Microsoft’s partner ecosystem is one of the most mature and structured in the software industry, making it a powerful lens through which to understand how different partner types contribute to an ISV’s go-to-market success.

Each partner type aligns to specific business goals within Microsoft’s Cloud Partner Program and Azure Marketplace. Here’s how it typically breaks down:

  • Global System Integrator (GSI) – Ideal for: enterprise deals and global delivery. Example: Dynamics 365 or Power Platform transformation via Accenture or TCS.
  • National System Integrator (NSI) – Ideal for: regional growth and regulated industries. Example: Slalom delivering Azure data solutions to U.S. healthcare orgs.
  • System Integrator (SI) – Ideal for: midmarket access and products that need services to succeed; ideally industry-focused, though often not. Example: smaller SIs are nimble and flexible, offering everything from Microsoft Teams integrations and custom software to agentic solutions.
  • Independent Software Vendor (ISV) – Ideal for: product innovation and ecosystem plays. Example: DocuSign integrating with Microsoft 365 or Power Automate connectors.
  • Value Added Reseller (VAR) – Ideal for: mid-market sales and deployment. Example: CDW bundling Microsoft 365 with cybersecurity solutions.
  • Large Account Reseller (LAR) – Ideal for: licensing scale and procurement. Example: SoftwareONE reselling Microsoft 365 or Azure consumption SKUs.
  • Distributor – Ideal for: reach and partner enablement. Example: TD Synnex recruiting resellers for Azure and Defender bundles.
  • Managed Service Provider (MSP) – Ideal for: operational management and retention. Example: Rackspace offering managed Azure and Microsoft 365 environments.

In the Microsoft world, co-sell readiness and marketplace presence are essential. ISVs looking to succeed here often:

  • Register as co-sell ready in Partner Center
  • List solutions in Microsoft AppSource or Azure Marketplace
  • Enable their partners through Solution Workspace and PDM relationships

Mapping your partner mix to Microsoft’s ecosystem provides both strategic leverage and operational scale. Understanding how these partner types fit into Microsoft’s tiered programs, incentives, and co-sell motions can significantly accelerate your ISV growth.


Real-World Applications: How ISVs Use a Partner Mix

Below are some simple examples:

1. ServiceNow

  • Works with GSIs, NSIs, and SIs (e.g., Deloitte, Accenture) for enterprise transformation
  • Builds with ISVs for App Store extensions
  • Leverages SIs and VARs for implementation

2. Okta

  • Partners with SIs and MSPs for identity management rollouts
  • Collaborates with ISVs (e.g., Zoom, Slack) for SSO integrations
  • Uses distributors to reach smaller resellers

3. Atlassian

  • Strong ecosystem of marketplace ISVs
  • Engages SIs for Jira Service Management implementations
  • Scales globally with distributors like Arrow

Know Your Partners, Know Your Growth

As your ISV scales, your partner strategy must evolve. Understanding the strengths and roles of each partner type ensures you can:

  • Design effective co-sell and delivery models
  • Fill ecosystem gaps with purpose
  • Accelerate growth without adding internal overhead

Next steps:

  • Map your ideal partner profile by business goal
  • Evaluate your current ecosystem for coverage and alignment
  • Explore partner enablement and co-sell plays

Excited for the Power Platform Community Conference!

I’m thrilled to be attending the Power Platform Community Conference on September 18th! 🎉 This event is a fantastic opportunity to connect with like-minded professionals, learn about the latest innovations in the Power Platform, and dive deep into the tools that help shape so many incredible solutions. I can’t wait to see what new ideas and best practices I’ll be able to bring back to my work after attending.

If you’re planning on joining or just curious, check out the event here: Power Platform Community Conference 2024.

Looking forward to sharing what I learn with all of you!

Blog refresh time!

I’m starting again, again. I’ve just moved hosting providers and I’m starting afresh. Over the coming months I’ll be drafting and publishing some new content!

What am I looking to share? At this stage I want to share content about BBQing, tech, and life in general.