Where the weight in software work is moving

Engineering teams that have been using AI coding tools in production for a year or more tend to show the same pattern. Output volume is clearly up. Bugs per developer are up too. Review time has gone through the roof. Something in the economics of the work has flipped, and the standard productivity numbers aren’t catching it.

What’s happening underneath the numbers is a split. Writing software used to be one thing. You thought about what to build, translated that into code, made sure the code worked, integrated it, fixed what broke. One person, one flow, one craft. The individual steps were visible, but they belonged together.

That’s not how it holds together anymore. The work has split into a part that’s getting faster, and a part that’s getting heavier. The typing leaves the developer’s hands. Almost everything that makes the typing worth anything stays with them, and carries more weight than before.

Where the volume is going

By early 2026, somewhere between 30 and 50 percent of code committed to GitHub is AI-authored or AI-assisted, depending on how you count. GitHub Copilot alone generates about 46 percent of code in users’ sessions; for Java developers it rises to 61 percent. At Anthropic, an extreme case, the CPO reported late last year that “effectively 100 percent” of their code is AI-written, under human direction.

The direction is unambiguous. For greenfield work, boilerplate, translation between frameworks, scaffolding, internal tools that don’t need to survive: AI does the typing now. The typing has been promoted to a utility.

The typing is cheap. It also requires explicit attention to things that used to stay implicit. AI-generated code, left to itself, contains around 2.7 times more security vulnerabilities than human-written code. In pull-request analysis, AI PRs had about 1.7 times as many issues, with critical issues per hundred PRs up 40 percent. Forty-five percent of samples contained OWASP Top 10 vulnerabilities. For Java, the security failure rate reached 72 percent. The AI does what it’s set up to do. What used to happen inside a developer’s head while they typed now has to happen somewhere else.

Where the weight is going

This is where the conventional economics of production inverted. Producing code used to be the expensive step. Review and integration were cheap, because you were checking the work of someone who already knew what they were building and why. Now producing code is cheap, and everything around it is expensive.

Median pull-request review time is up 441 percent. Bugs per developer, up 54 percent. Incidents per PR, up 243 percent. Senior engineers spend 4.3 minutes on an AI suggestion versus 1.2 minutes on one from a human. A study of senior developers on mature codebases produced a counterintuitive finding: engineers with AI tools were 19 percent slower, while believing they had been 20 percent faster. A forty-point gap between feels and is.

“Review” on its own is too narrow a word for what’s happening. The part that stays with the developer isn’t one task. It’s an expanding range:

  • Understanding what the code should actually do, in relation to a customer, a product, a business
  • Designing the architecture the code sits inside
  • Writing the context, briefs, and prompts that set the agents up to succeed
  • Building and maintaining the test infrastructure AI outputs have to pass through
  • Deciding what to trust, what to rework, what to throw out
  • Integrating what AI produces with systems it doesn’t have in context
  • Organising the work itself: how agents work together, what standing context they carry, where handoffs happen

This is what good developers have always done. The ratio has changed. In volume terms, AI takes on about eighty percent of the typing. In qualitative terms, eighty percent of what determines whether the output is any good stays with the human, and weighs more than before.
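One item on that list, the test infrastructure AI outputs have to pass through, lends itself to a sketch. Below is a minimal, hypothetical decision gate in Python; the check names and the ship/rework/reject categories are invented for illustration, not taken from any team’s actual setup:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str       # e.g. "unit tests", "security scan" (illustrative names)
    passed: bool
    blocking: bool  # a failing blocking check means the change cannot ship

def review_gate(results: list[CheckResult]) -> tuple[str, list[str]]:
    """Return a verdict and the names of the checks that failed."""
    hard = [r.name for r in results if not r.passed and r.blocking]
    soft = [r.name for r in results if not r.passed and not r.blocking]
    if hard:
        return "reject", hard   # e.g. failed security scan: discard or escalate
    if soft:
        return "rework", soft   # e.g. lint failures: send back through the agent
    return "ship", []
```

The point of the sketch is the division of labour: the agent produces the change and the checks run automatically, but deciding what counts as blocking, and what happens on a rework, stays with the human.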

That’s the piece that’s easy to miss when looking only at output numbers. The output going up doesn’t mean the work got lighter. The weight moved.

What the shift feels like from inside

A small observation from my own work makes the pace tangible. Three months ago, most of what I did went through my keyboard. Now most of it goes through a set of instructions I give to something else. What stays with me is deciding what to do, reviewing what came back, integrating it with what already exists. Three months ago, not three years. I’m not a developer. I run a company. The same split runs through any work that involves producing something with AI in the loop. In software development it’s further along, and clearer.

For how this shows up in software specifically, a recent conversation with the founder of a Dutch development studio gave me a concrete picture.

They run about five agents per workstream. Different agents handle different layers: security review, infrastructure, DevOps, code generation. One developer sits in the role of what he called an “AI-automation process consultant”: designing the context the agents work in, reviewing outputs against the architecture, deciding where a human has to stay in the loop. A team that used to ship features in weeks now ships in days. The typing volume is far higher. The human judgment layer is deeper, more deliberate, and also more front-loaded: much of it is encoded in how the agents are set up, briefed, and handed off, not performed fresh on every task.
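To make the shape of such a setup concrete, here is a minimal sketch of what a workstream definition might look like. This is a hypothetical reconstruction, not the studio’s actual configuration; layer names, review policies, and file names are all invented:

```python
# One workstream, several specialised agents, explicit human checkpoints.
WORKSTREAM = {
    "agents": [
        {"layer": "code generation", "human_review": "sampled"},
        {"layer": "security review", "human_review": "always"},
        {"layer": "infrastructure",  "human_review": "always"},
        {"layer": "devops",          "human_review": "on_change"},
        {"layer": "test authoring",  "human_review": "sampled"},
    ],
    # Standing context every agent carries, maintained by the person in the
    # process-consultant role rather than rewritten per task -- this is the
    # front-loaded judgment described above.
    "standing_context": ["architecture.md", "coding-standards.md"],
}

def always_human(workstream: dict) -> list[str]:
    """Layers where a human must stay in the loop on every change."""
    return [a["layer"] for a in workstream["agents"]
            if a["human_review"] == "always"]
```

The interesting design decision is not the agent list but the `human_review` column: it is where “deciding where a human has to stay in the loop” stops being a sentiment and becomes configuration.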

That picture isn’t yet mainstream. But it’s how the front-runners are operating right now, in April 2026.

Where the value lives

Producing code has gotten cheap. Everything that makes code hold together has gotten more expensive. Architectural judgment. Specification. Test design. Security discipline. Context design for the agents. How the work itself is organised. Knowing what to trust in review. These are the parts that now decide whether anything produced with AI is worth shipping.

This changes where the value in a team lives. A team strong in architecture, in specification, in review discipline, in taste, will use these tools to compound. A team weak in those places will compound the weakness. AI amplifies what’s already there.

Whether working this way is worth it comes down to those conditions. The same setup produces compounding in one team and technical debt in another.

The split isn’t a loss of work. It’s a reallocation of weight. What used to live in the typing now lives in the thinking around the typing. That’s harder, not easier. It’s also a lot more interesting.

There is a manual for hiring across cultures. It’s only 3 questions long.

Hiring across cultures comes with questions you didn’t quite see coming. What’s expected of you. What’s expected of them. What you can joke about and what you can’t. Whose job it is to flag a problem, and when. Somewhere between signing the contract and the first call, you realise: nobody handed you the manual.

That missing manual is not a small thing. A year of quiet cultural friction can erode the whole business case for hiring across borders in the first place.

So the question surfaces, often quietly: what do I need to know about their culture to make this work?

What the standard answer offers

There’s a body of well-developed models in this space (Hofstede’s dimensions and Meyer’s Culture Map among the best known), country scores you can look up, and trainings you can book. None of it is wrong. Each model was built carefully and answers a real question: usually some version of how do these two cultures differ on this specific dimension.

That’s a useful question for some things. But it’s not very predictive of how the next 12 months of working with a specific person will go.

In 10 years of doing this work I’ve never run into a model or framework on this topic that stuck. That isn’t a verdict on the models. It’s a verdict on the question they answer, which sits a layer or two removed from the question that matters when an engagement is starting on Monday.

If you analyse what founders and CTOs actually talk about on X when they describe their distributed teams, it’s almost always about a specific operational incident, not about a dimension score.

What did stick is smaller than a framework. 3 questions, asked early. Together they take less than an hour, and over 10 years they’ve predicted more about how an engagement goes than any country score I’ve ever looked at.

Start with what success looks like for both sides

If you’re hiring across borders, surface what success looks like early, for the person doing the hiring and for the person being hired. What does success look like over the next 6 or 12 months for each of them? Where do those overlap, and where don’t they?

Most engagements skip this and go straight to scope. The cost of skipping shows up later, usually as misalignment that gets diagnosed as a culture clash but is really the absence of a shared frame.

Research shows that when two groups share a goal neither can reach alone, the salience of group difference drops. Shared goals don’t dissolve cultural difference. They make it operationally subordinate to something else.

A 15-minute conversation. One of the cheapest moves available, and one of the most consistently skipped.

Then ask where value differences actually create risk

Once shared success is named, value differences become assessable instead of abstract. The question shifts from do they share our values, which is unanswerable and of limited operational use, to in this specific engagement, with this specific success definition, where do value differences create operational risk and where don’t they.

In practice, most differences turn out to be operationally inert. A new hire who personally holds different views about religion, family, or political life than the people doing the hiring is, in the context of shipping a feature on Friday or building a payments integration, almost always indistinguishable from someone who holds similar views. This isn’t optimism; it tracks the well-researched gap between what people say they think and how they actually behave in structured professional contexts.

A small subset of differences do matter, and they matter specifically. A team norm of openly disagreeing with the most senior person in the meeting requires a different setup with someone trained in a workplace where deference to seniority is the default. A culture of giving sharp, immediate written feedback requires a different setup with someone whose professional formation taught them to deliver criticism slowly and through indirection.

Most value differences are operationally inert. The few that aren’t tend to be specific, surfaceable, and manageable once you’ve named them. The work in this step is identifying which is which for this engagement, not for engagements in general.

Tell the new hire what they’re signing up for

This is the question that’s most often missing, and it’s the one that holds the other two together.

If part of what counts as success on the hiring side, even implicitly, is that the new hire will shift over time on something specific, the new hire has the right to know that going in. Not as confrontation. As transparency.

The “something specific” is almost never about personal beliefs. It’s about professional norms. A developer joining a team where the expectation is that you push back hard on the CTO’s design choices in the standup may have been trained in an environment where you raise concerns privately afterward, with seniority preserved. That’s a real shift to ask of someone, and it’s reasonable to ask it. It’s not reasonable to ask it without saying so.

Other examples are mundane. A team that operates fully async, in writing, with everything in English, can be a real adjustment for someone whose default working rhythm is verbal and synchronous. A hiring manager who wants problems flagged the moment they appear, rather than only once a proposed solution is ready, is asking for a specific way of working, not a personality.

Naming this stuff up front does two things. It lets the new hire make an informed choice about whether the engagement is one they want; most do, and they’d rather know. And it removes the possibility of the engagement quietly drifting into bridging-by-assimilation, where the new hire absorbs the new norms without ever having been asked.

What this question does, quietly, is treat the new hire as a professional adult capable of deciding whether to take the engagement, which is what they are.

What this comes down to

These 3 questions aren’t much. Not a framework, not a training. They take less than an hour at the start of an engagement and they don’t ask you to walk on eggshells, defuse history, or solve anything bigger than the engagement itself.

What they do is small and specific. They surface what’s going to matter in this collaboration, before the misreads accumulate. What you get back from that, in my experience, is much more than a hire.

AI’s real value is in your operations, not your product

Everyone around you seems to be hiring AI engineers. AI job postings are up 143% in a year. The wage premium is 56%. The message is hard to miss: if you’re serious about AI, you need serious technical talent.

But the idea of competing for machine learning talent against companies with 10 times your budget is not very appealing. Especially when your team is at capacity and there are 4 things on your plate that needed attention yesterday.

Surprisingly, the companies actually getting value from AI aren’t playing that game.

What founders actually report

When founders talk about their first real AI win, the stories don’t match the headlines. It’s not a recommendation engine, a chatbot, or a clever ML model embedded in the product.

It’s operations.

One Head of Finance rebuilt their financial modelling with AI. Two hours instead of two weeks. A healthcare founder found the real value not in automating what already existed, but in redesigning clinical process maps entirely. Another founder tracked the numbers precisely: $74 a month in AI tools. Twenty hours a week saved. $58,000 a year in recovered time.

On X, where founders share what they’ve actually done rather than what they plan to do, this pattern is consistent. Roughly 70-80% of posts about first AI investments describe operational wins: workflow automation, internal tooling, process redesign. Product-feature stories exist, but they are the minority. One founder captured it well: “Underrated founder skill: knowing when NOT to build. The best operators automate what they already have before adding anything new.”

No ML models. No training data pipelines. No AI engineers. Process by process, workflow by workflow.

First, rethink your workflows

This is not just founder anecdote.

88% of organisations use AI in at least one function. Only 5-7% create substantial value at scale. BCG surveyed 13,000 people across 15 countries and found the same: 72% of employees use AI regularly, but 60% of organisations generate zero measurable value from it.

Those numbers are worth sitting with. The tools are there. Most people have access. The technology works. And still, few are seeing real results.

McKinsey looked at nearly 2,000 organisations across 105 countries to understand why. The single strongest predictor of whether AI delivers measurable business impact is not the technology you choose. It is whether you fundamentally redesign your workflows. Companies that do are 3.6 times more likely to see real financial returns. More than half of the high performers completely reworked how work gets done before deploying AI.

What’s missing is not better technology. It’s someone who has thought about which processes to change and how.

The market is hiring for what AI looks like in the headlines: engineers who build models. The value sits somewhere else.

Why this is more accessible than you think

If the value is in operations, not engineering, that changes the cost picture entirely.

The average cost of integrating an AI solution for a small or medium business dropped from $15,000 to $3,000 between 2023 and 2026. An 80% decrease. Most businesses can start for $20-100 per month per user, using off-the-shelf tools.

Operational AI is dramatically more accessible than most people think.

But only 5% of SMBs using AI are what researchers call “fully enabled.” And when the ECB surveyed European firms that had not adopted AI, 30% said the reason was “lack of usefulness.” In a year when the technology is demonstrably capable, “lack of usefulness” really means: nobody has shown me what to do with it in my specific situation.

That is the actual bottleneck. Not the technology. Not the price. The thinking.

What the thinking looks like

I’ve been on this path myself. At Tunga we started the way most companies do: give everyone access to the tools, see what happens. And something did happen. Some people took to it immediately. Others opened it once. The classic pattern that MIT documented: power users in the same company send six times as many AI messages as the median employee. Same tools, same access, very different outcomes.

So we tried a second approach: map your own workflows, brainstorm per step where AI could help. I stopped that exercise halfway through. Not because it was a bad idea in principle, but because I realised you can’t expect everyone to think like a process architect. It was becoming fragmented. Everyone pursuing different tools, different solutions, varying quality.

What actually worked was a different question entirely. Not “where can we use AI?” but “what information do we produce, and how do we make sure it’s available at the right moment?”

That sounds abstract. In practice, it’s concrete. It is thinking about where you store a file so that a tool can actually find it. It is mapping which steps in a process can be handled by something else if the right context is provided. It is the kind of thinking that has nothing to do with machine learning and everything to do with knowing your own business.
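One way to picture that kind of thinking is as a small context index: register what the company produces with enough metadata that a tool or agent can be handed the right files at the right process step. The sketch below is illustrative only; the paths, processes, and field names are invented:

```python
# "Available at the right moment" as a data structure: each entry maps a
# piece of company information to the process step where a tool needs it.
CONTEXT_INDEX = [
    {"path": "finance/model-2026.xlsx", "process": "forecasting", "step": "modelling"},
    {"path": "sales/pricing-rules.md",  "process": "quoting",     "step": "proposal"},
    {"path": "sales/case-studies.md",   "process": "quoting",     "step": "proposal"},
]

def context_for(process: str, step: str) -> list[str]:
    """Files a tool should be handed at this step of this process."""
    return [d["path"] for d in CONTEXT_INDEX
            if d["process"] == process and d["step"] == step]
```

Nothing here is machine learning. The work is deciding what goes in the index and how the steps are named, which is exactly the knowing-your-own-business part.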

I ended up creating two new roles: a Context Architect who designs how information flows through the organisation, and a Context Manager who maps processes per role and builds the solutions. The investment is a fraction of what you’d expect. Less than half a percent of our monthly revenue, partly because both roles are filled by our team in Uganda. But even adjusted for that: this is not the kind of investment that requires board approval.

Every company will find its own version. But the pattern is transferable: the work that makes AI valuable in your organisation is not engineering. It is organisational thinking.

What this changes

The evidence points in one direction. The companies that get real value from AI are not the ones that hired the most engineers. They are the ones that rethought their processes. McKinsey, BCG, and the lived experience of founders all converge on the same finding: the value is in the redesign, not the technology.

That shifts the question from “who do I need to hire?” to “how well do I understand my own operations?” And that is a question every founder is qualified to answer.

The AI revolution feels like something outside your expertise. It isn’t. It is about your processes, your information, your organisation. The technology is a commodity. The thinking is yours.

Groeivoer Podcast: What Happens When AI Reshapes the Roles You Built Your Company Around?

Our founder Ernesto Spruyt recently joined Guido Koershuis on the Groeivoer podcast (episode 390) for a wide-ranging conversation about building teams across cultures, what AI means for the future of software development, and the things that should never be automated.

Some of the topics covered:

  • Why the “first moment of truth” in cross-cultural hiring changes everything: the moment a client realises talent in Lagos or Kampala matches or exceeds what they’ve seen locally
  • How AI is reshaping software development roles within the next year, and what that means for a company built on placing developers
  • The cultural differences between working with talent from Nigeria, East Africa, and other regions, and why matching culture matters as much as matching skills
  • What makes a company unique, and the risk of automating away exactly the thing that sets you apart
  • How to protect your identity as a company while embracing automation

The podcast is in Dutch.

The episode is available on Spotify and on YouTube.

Groeivoer is a Dutch podcast for entrepreneurs, hosted by Guido Koershuis. The show explores growth, leadership, and the human side of building a company.

The SaaS sector is supposedly dying. Here’s what the data actually shows.

If you run a SaaS company, you’ve been reading a lot of alarming things lately. AI will make your product irrelevant. The “SaaSpocalypse” is here. The era of software subscriptions is over. If even half of this is true, it changes what your company is worth, who will buy from you, and whether your team needs to look different a year from now.

Three quarters of our clients at Tunga are SaaS companies. So I went and looked at what is actually happening. Not the sentiment. The data.

Two sets of numbers

The crash numbers are real. On January 29, 2026, the S&P 500 Software Index dropped 8.7% in a single day. SAP lost over €40 billion in market value. In the first week of February, an estimated $285 billion was wiped out. The total correction is estimated at over $1 trillion.

But there is a second set of numbers. The global SaaS market grew to $408 billion in 2025 and is projected to hit $465 billion in 2026. Gartner forecasts software spending growing 15.2% this year. The average organisation’s SaaS spend is up 8% year over year.

Stock prices crashing while revenue keeps growing is not a contradiction. The market prices uncertainty about the future, not a collapse in the present.

Not all SaaS is equal

This is where it gets specific. The total market is growing, but underneath that growth, a selection is taking place. And the difference comes down to what your product actually does.

One type of SaaS company is essentially a database with an interface on top. It stores data, makes it accessible, and the value sits in the convenience of that interface. Simple CRMs, form builders, basic project management, standard dashboards. This type is becoming vulnerable. AI agents are increasingly capable of doing exactly that: storing, retrieving, and presenting data without needing a separate product for it.

Another type helps you execute. It manages complex workflows, carries responsibility for compliance or financial logic, and is deeply embedded in how an organisation actually runs. This type is not being replaced, because the cost of getting it wrong is too high. An AI that gives the right answer 6 out of 10 times is not usable for payroll or financial reporting.
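The arithmetic behind that payroll claim is worth making explicit. Errors compound across steps: if each step of a workflow is handled correctly with probability p, and steps fail independently (a simplification), an n-step workflow only completes correctly with probability p to the power n:

```python
# Why 6-out-of-10 accuracy is unusable for execution work: errors compound.
# Assumes steps fail independently, which is a simplification.
def chance_all_correct(p: float, n: int) -> float:
    """Probability an n-step workflow completes with every step correct."""
    return p ** n
```

At 60% per-step accuracy, a five-step run succeeds end to end less than 8% of the time (0.6^5 ≈ 0.078); even 99% per step leaves roughly a 5% end-to-end failure rate across five steps. That is the gap between a useful assistant and a system you can hand payroll to.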

Gartner puts a number on this: 35% of simple, single-purpose SaaS tools will be replaced by AI agents by 2030. That’s substantial. But it means 65% survives, likely in adapted form. IDC confirms: workflow automation and small-business SaaS are the most exposed. Platforms embedded in core business processes are not.

The model changes, the sector doesn’t

The per-seat pricing model is under pressure. When one person with AI support does the work of five, the link between licence count and value breaks. The share of SaaS companies using some form of usage-based pricing has risen from 27% in 2021 to somewhere between 38% and 61% today, depending on whether you count hybrid models. That is not a sector dying. That is a business model adapting.

And the AI tools that are supposedly replacing SaaS? They have their own challenges to figure out first. An analysis of 3,500 companies shows AI-native products have a median gross retention of just 40%, compared to around 90% for traditional B2B SaaS. Gross margins average 25% versus 75-80%. Much of what is currently counted as AI revenue appears to be companies experimenting with AI tools. There is no guarantee they’ll keep spending once the experiment phase is over.

Both sides of the market are in transition. But the idea that one is simply replacing the other doesn’t hold up in the data.

What this comes down to

The SaaS sector is not collapsing. The market is growing, spending is up, and companies that build something deeply embedded in how their clients operate have a strong position.

But a selection is happening. Not between SaaS and AI. Between SaaS products that do something hard to take over, and SaaS products that don’t.

That distinction is more useful than the headlines. Does your product manage complexity, execute processes, or carry responsibility that can’t afford to be approximate? That’s what determines where you sit. If the answer is yes, the coming years are more likely to bring opportunity than threat.

What remains genuinely uncertain is how fast this selection plays out, and how the shift in pricing models reshapes the economics along the way. That part of the story is still being written.

Finding Digital Talent Where You Weren’t Looking

Most Dutch founders don’t have Nigeria on their radar when they think about hiring. Not as a conscious decision. It simply isn’t on the map.

That’s understandable. And it’s exactly what a new Clingendael report set out to investigate.

What the Dutch government wanted to know

The Netherlands has a bilateral partnership with Nigeria that includes creating employment pathways for both sides. As part of that, the Clingendael Institute investigated whether Nigeria can serve as a credible source of digital talent for Dutch companies through remote work.

The question matters because the conditions, on paper, are surprisingly strong. Nigeria has a large, young, English-speaking talent pool. The timezone overlap with continental Europe is near-perfect. Cost-wise, Nigerian developers sit in the same band as India and Eastern Europe, at $20 to $40 per hour. And 82.8% of Nigerian ICT students surveyed for the report said they want to work remotely for a Dutch company.

So what explains the gap between these conditions and actual adoption?

The barrier is not talent. It is trust.

This is the report’s central finding. European companies don’t hesitate because the skills aren’t there. They hesitate because of uncertainty around data protection, compliance, quality assurance, and what happens when things go wrong. Not a capability gap. A familiarity gap.

The report is specific about where market-ready talent develops. Not at universities, whose curricula lag years behind employer needs. The professionals who are ready for international teams come through private academies, structured bootcamps, and what the report calls “challenge-based finishing schools”: programmes that train technical skills alongside communication norms, documentation practices, and the professional rhythm that European teams expect.

One finding makes the point concrete. A Danish-Polish company reported that onboarding a Nigerian engineer took approximately 3 weeks. Their previous benchmark with engineers from India was 4 months. The difference was not raw capability. It was what sat around the talent: structured preparation, cultural bridging, and ongoing involvement.

The model that closes the trust gap

Clingendael recommends what it calls a “corridor model”: European companies using trusted intermediaries as talent partners. These partners handle the full cycle:

  • Matching talent to team needs
  • Vetting and preparing candidates
  • Managing onboarding
  • Handling administration and payroll
  • Providing performance guidance
  • Offering replacement when placements don’t work out

They carry the operational cycle that companies would otherwise carry themselves.

The report highlights European intermediaries with local African networks as best positioned for this, because they combine knowledge of the talent ecosystem with familiarity with European standards and ways of working.

Tunga fits this picture exactly, so it’s no surprise that Clingendael interviewed us as part of their fieldwork. The report references our operations and our Academy in multiple places. What’s interesting is that the model they recommend confirms the approach we’ve built over the past 11 years: matching and vetting talent across 20 African countries, structured onboarding that bridges cultural and professional norms, and staying involved after placement with performance guidance and replacement guarantees. The salary ranges the report cites match what we see in our own operations.

What this opens up

The report concludes that ICT nearshoring between the Netherlands and Nigeria is “neither a quick fix nor a speculative bet, but an opportunity under clearly defined conditions.” Work through experienced intermediaries. Invest in structured onboarding. Think in terms of a talent pipeline rather than instant senior hires.

For founders who spend weeks filling roles, months getting people productive, and years building the operational infrastructure to keep the cycle running: the relevant insight from this report is not that Nigeria has developers. It is that a model exists in which someone else unburdens you, with talent from an ecosystem that has been quietly delivering to the US and UK for years.

What It Actually Takes to Be AI-Ready

Late last year I decided to get serious about AI at Tunga. Not the “give everyone a ChatGPT licence” kind of serious. Structurally serious. Redesign how we work.

The first question was obvious: where on earth do I start?

Turns out I’m not the only one. Every entrepreneur I’ve spoken to in the past few months has the same question. And every major survey published in the past twelve months confirms it:

  • McKinsey: 88% of organisations use AI. 7% have implemented it structurally.
  • BCG, different methodology, same conclusion: 72% of employees use AI regularly. 60% of companies generate zero measurable value from it.
  • Cisco classifies just 13% of organisations as fully AI-ready.

Everyone has started. Almost nobody has arrived. Here is what I have encountered on the way.

Tools first, questions later

Most companies do the same thing first. Ours included. Make tools available. Switch on Copilot. Get a team licence. See what happens.

Something happens. Some people take to it immediately. Others open it once and never touch it again. A few get measurably better at their work. Others start producing output that looks polished but is somehow emptier than before.

MIT studied this pattern inside organisations. Power users send six times as many messages to AI tools as the median employee. For coding, the difference is sixteen times. Same company, same tools, radically different outcomes. EY adds that 78% of employees use unapproved AI tools, while only 7.5% have received extensive training.

The picture is familiar: uneven adoption, uncoordinated use, nobody overseeing whether it adds up. But here is the thing. The problem is not the tools. The problem is that nobody asked one important question before deploying them.

The question almost everyone skips

That question is: what are we uniquely good at, and how do we make sure AI strengthens that rather than undermines it?

I can make this concrete. At Tunga, one of the things that sets us apart is personal contact. Our clients work with developers in Africa. That requires building trust in ways you cannot automate. If I were to optimize that away because it is more efficient, I would be competing with Toptal and Upwork in a game I cannot win.

Every company has a version of this. Something that makes them distinctive, that is tempting to automate, and that would hollow them out if they did.

The most visible example of getting this wrong is content. My LinkedIn feed is full of it. Professional-looking but soulless. Generic, interchangeable, belonging to nobody. The internet calls it “AI slop.” Merriam-Webster chose it as word of the year. Consumer trust drops 50% when content is perceived as AI-generated.

That is what happens when you automate without direction. You produce more, but you get worse. And it is not limited to content. Any part of your business where the human element is what creates value is vulnerable to the same pattern.

Mapping our workflows

Once I understood this, I tried to do it properly. I started a project to map all of our workflows and redesign them with AI in mind. Everyone on the team contributed.

The quality varied significantly. Not because people were not trying, but because thinking about where AI fits into a workflow is a skill. One person immediately saw where a process could be more efficient. Another described the current situation and stayed there.

Halfway through I realized: this needs structure. It felt like being a football coach who lets everyone onto the pitch and hopes they figure out their positions. Not everyone needs to be good at the same thing. But someone has to set up the team. At Tunga, that meant deliberately creating two roles: a Context Architect to design how AI integrates into our workflows, and a Context Manager to keep it running day to day.

This turns out to be a broader pattern. One in four companies now has a Chief AI Officer. “AI Engineer” is the fastest-growing job title on LinkedIn. A new role, the “Context Engineer,” is appearing: someone who structures the information that makes AI systems perform well. Organizations are discovering that AI readiness requires dedicated attention, not just distributed enthusiasm.

But even having the right people is not enough if you are not clear on what direction to point them.

Strategy as the actual dividing line

Cisco found something striking. Of the companies they classify as fully AI-ready, 99% have a well-defined AI strategy. Across all companies: 58%. Companies without a formal strategy report 37% success in AI adoption. Those with one: 80%.


Yet only 21% have actually redesigned workflows around AI. The rest layer AI onto existing processes and hope for the best.

The dividing line is not technology, budget, or company size. It is whether someone has forced the question: what are we protecting, what are we strengthening, and what are we willing to let change?

Where this leaves the question

I started with “where do I start?” The answer I have found so far is not a tool or a vendor. It is a sequence. Free up one or two people to focus on this, not squeezed in alongside their regular work. Map your workflows before deploying AI anywhere. And answer the identity question: what makes you distinctive, what must be protected, where does AI strengthen your advantage and where does it erode it?

Follow this path, and something shifts. You stop reacting to every new tool and start making choices that compound. That is when AI stops being a distraction and starts being an advantage.

The Software Developer Market Is Not Doing What Everyone Says It Is

There is a version of this story that has become almost ambient. AI is replacing software developers. Demand is collapsing. The market has fundamentally changed, and whoever hasn’t adapted yet is about to find out the hard way.

I hear this regularly. In news articles, on LinkedIn, in conversations at events. It is stated with confidence, and it spreads easily because it fits a coherent narrative: AI improves, developers become redundant, companies need fewer of them.

The thing is, we track software developer job postings daily across multiple Western markets. And I wanted to know whether the data actually supports what everyone is saying.

It mostly does not.


What the data shows

We use the Indeed Hiring Lab sectoral index for software development, which tracks job postings relative to a pre-pandemic baseline of 100. It covers the US, UK, Germany, France, Canada, and Australia, among others.

Yes, postings are significantly below their 2022 peak in most markets. That correction was real and steep. Worth noting: the decline started before AI tools became widely available. Nearly half of the drop from the 2022 peak had already occurred before the end of that year, before large language models entered mainstream use. The correction looks more like a normalization after pandemic-era overhiring, combined with interest rates returning to normal levels, than a structural AI-driven displacement. But what has happened since is where the story gets more interesting.

In the United States, the software developer index currently sits at 70.7. That is the highest reading in roughly two years. Six months ago it was at 65.8. The market troughed in May 2025 and has been recovering since. Nine consecutive months of upward movement.

The United Kingdom follows a similar pattern. The index reached its lowest point in May 2025 at around 53, and had recovered to 63.7 by February 2026. That is roughly a 20% recovery from the bottom.

In the Netherlands, where Indeed does not publish a comparable sectoral index, the picture comes from CBS data. In Q4 2025, ICT vacancies increased from 16,600 to 16,900, making it the only major sector to grow while the overall Dutch labor market contracted by 7,000 vacancies in the same quarter.

The software developer market is recovering in the markets that matter most to us.


The geographic split nobody is talking about

What the data shows, and what I find genuinely surprising, is how differently this plays out across Europe.

The Anglo-Saxon markets and the Netherlands are recovering. Continental Europe is not, at least not yet.

Germany’s software developer index sits at 57.7, down almost 3% over the past six months. France is at 55.9, barely moved from its December 2025 trough. Both markets declined through the autumn while the US and UK were already climbing. Neither has shown a clear recovery signal.

Why the split? I genuinely do not know, and I want to be careful not to reach for an explanation that fits the pattern but cannot be verified.

A few possibilities come to mind. It could be macroeconomic: Germany in particular has been dealing with a structural industrial slowdown that has nothing to do with AI or tech specifically. It could be cultural: more internationally oriented, entrepreneurial business climates may move faster through transitions like this, partly because English-language AI tools and discourse are more accessible to them. It could even, somewhat counterintuitively, be that Germany and France are realizing more productivity gains from AI tools, meaning they genuinely need fewer developers right now. I do not think that is the most likely explanation, but it cannot be ruled out from this data alone.

What I can say is that the pattern is real and consistent across several months of data. The mechanism behind it remains an open question.


What this does not prove

A few things I want to flag explicitly, because the data does not support them even if the headline seems to.

This is not evidence that demand has returned to pre-pandemic levels. It has not. The US at 70.7 is still 29% below February 2020. The UK at 63.7 is 36% below. The recovery is real, but the context is a deep trough.
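The index arithmetic behind those gaps is easy to check. A minimal sketch, using only the figures quoted in this piece (the Indeed index is expressed relative to a pre-pandemic baseline of 100; the function name is mine, not Indeed's):

```python
def pct_below_baseline(index_value: float, baseline: float = 100.0) -> float:
    """Percentage gap between an index reading and its baseline."""
    return (baseline - index_value) / baseline * 100

# US software developer index at 70.7 -> about 29% below February 2020
print(round(pct_below_baseline(70.7), 1))  # 29.3
# UK index at 63.7 -> about 36% below
print(round(pct_below_baseline(63.7), 1))  # 36.3
```

The same arithmetic applied to Germany (57.7) and France (55.9) puts them roughly 42% and 44% below their pre-pandemic baselines, which is what makes the geographic split below so stark.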

It is also not evidence that the nature of the work is not changing. It is. I wrote about that at length recently, specifically about how job postings have become longer, more contradictory, and often describe people who cannot exist given how recently the required technologies were invented. The shape of demand is shifting even as the volume recovers. Those are separate questions.

And it is not evidence that the recovery will continue at the same pace. Germany and France show that different conditions produce different trajectories. The recovery in the US and UK could stall.


Why the gap between narrative and data persists

The collapse narrative is intuitive. It follows a clean logic: better AI tools mean fewer developers needed. That logic is not wrong as a general direction. But it runs ahead of what is actually measurable in the market right now. When the pace of change is high enough that it is difficult to distinguish signal from noise, you tend to absorb the ambient story rather than go looking for the data.

The demand for software developers is not collapsing. In some of the most important markets, it is growing again. That does not mean the field is unchanged. It means the story is more complicated than what most people are repeating.

Why Your Certificates Don’t Matter (And What Actually Does)

A lot of developers believe certifications are a meaningful lever for getting hired internationally.

They are not.

Hear me out.

Certificates can help you pass an automated screen. They signal you’ve invested time in learning. But once a human is involved, once someone is actually deciding whether to hire you, they rarely move the decision in the way people think they do.

What moves the decision is simpler, and harder: how you show up, and how you operate.

And those are not the same as “more credentials”.

The Nairobi moment

Last week I was in Nairobi for the Africa Tech Summit. We organized a side event through Tunga Academy on landing international tech jobs. The response was overwhelming: we had to close registration at 450 people.

The room was packed. Mix of junior, mid-level, and senior developers. Most had never worked for an international client.

During the panel discussion – myself, Timothy Maenda from StartHub, and Irene Mwangi from Power Learn Project – I brought up something that comes up regularly in our conversations with developers.

Certifications.

“In real hiring decisions,” I said, “I’ve almost never seen a certificate make the difference. Neither have my clients.” All three of us agreed. But the room struggled to accept it.

Someone asked: “Then why is it always listed in job descriptions?” (my answer: “Out of habit”)

This moment has stayed with me because it reveals something bigger. Developers are investing enormous energy in the wrong places. Not because they’re foolish (they’re anything but). But because they can’t see what the hiring side actually focuses on.

What we actually look for at Tunga

When we evaluate candidates for international roles, we’re trying to answer two practical questions fast.

1) How do you show up?

Not as a personality assessment. As signal quality.

Can I quickly understand who I’m dealing with? Can you communicate like someone who has thought about the other side of the screen?

At the Nairobi session, we put a few LinkedIn profiles on the screen, volunteers from the audience. One had a photo, but the person was standing too far away. “I can’t see who you are,” I said. The profile gave no clear signal of intent, context, or seriousness about the role they were seeking.

People recognized it immediately. Some uncomfortable laughter.

The point isn’t grooming or polish. The point is awareness. Remote work is mostly communication. If your primary professional surface doesn’t communicate clearly, you are losing points before anyone has even spoken to you.

2) How do you operate?

This is what we generally – though perhaps incorrectly – call “soft skills”.

It’s not soft. It’s delivery.

Can you find your own work when things get quiet? Can you surface blockers early, clearly, and with options? Can you manage expectations without waiting to be asked? Can you ask good questions when context is missing?

Here’s an example from Tunga’s everyday practice: the client says their developer isn’t productive enough. At the same time, the developer is frustrated because they don’t have enough work.

What’s happening is usually not laziness or incompetence. It’s a mismatch of assumptions.

The client expects the developer to proactively identify tasks, ask questions, create clarity. The developer expects the manager to assign work when there’s more to do.

Both make sense inside their own frames. But in a distributed environment, those frames collide. And the collision looks like underperformance on the part of the developer.

In international teams, taking initiative is not a bonus. It is expected.

Why this gap exists

Problem-solving is a skill. Like any skill, it requires repetition. And repetition depends on your training environment.

I learned this partly through a football academy project in Uganda. Ugandan coaches kept asking: why don’t more of our talented players make it to European leagues? European clubs want players who make autonomous decisions under pressure, in real time. That ability is trained through practice.

But in Africa, in many educational contexts (both in schools and at home) the pedagogical culture tends to be hierarchical. Initiative is regularly penalized instead of encouraged. Taking risks and making mistakes often isn’t safe. The result is less practice in autonomous decision-making.

The same pattern shows up regularly in candidates wanting to join our platform: strong technical competence, but weaker autonomy in collaboration.

This isn’t permanent. But it is what needs addressing. That’s why we started Tunga Academy.

The market paradox

Eleven years ago, when we started Tunga, there was a shortage of strong developers across Africa. Today, the situation has flipped.

The talent pool has grown enormously. But local job markets haven’t kept pace. The promise many developers heard – “learn to code, get a well-paid job” – runs into the reality of limited local demand.

So the pull toward international work grows. And international opportunities do exist. European and North American companies are looking for exactly the technical talent Africa produces.

But the barrier is rarely technical skill. It’s whether people can operate in the collaboration model those teams assume.

If anything, that barrier is becoming more important. As AI handles more standardized technical work, what grows in value is judgment, clarity, initiative, expectation management. The human mechanics.

Where to invest your energy

This is where most developers can make a practical shift.

Less time optimizing for credentials as an end in themselves. More time building stronger self-presentation and stronger operating behavior.

Concretely, that means:

  • Communication skills: Learn to manage expectations clearly. Practice giving and receiving feedback. Develop the ability to ask precise questions when context is missing.

  • Assertiveness training: Build comfort with speaking up, challenging assumptions respectfully, and advocating for what you need to do your work well.

  • Problem-solving practice: Work on real projects where you have to make decisions with incomplete information. Contribute to open source. Build something yourself.

  • Professional visibility: Improve how you present your work. Document your thinking. Show your process, not just your output.

Certificates can sit somewhere in that picture. But they’re not the lever.

What we do at Tunga Academy

We learned in hospitality how to see across frames and anticipate friction. Tunga Academy applies that: our job is to help people build the signals and behaviors that international teams actually respond to.

Problem-based development, not theory-first. No promises of jobs. Just ruthless clarity about what moves the needle, and structured practice around that. This is why our assertiveness training, for example, has become unexpectedly popular: it fills a gap people feel but can’t always name.

Closing the gap

Standing in that room in Nairobi, I could see both frustration and hope. These developers have worked hard to get where they are.

But the obstacle isn’t what they think it is.

Once you can see that certificates matter less than communication, that grinding technical skills matters less than initiative, the path becomes clearer.

The talent is real. The opportunities are real. What’s needed is seeing what’s actually in the way. That gap is closeable. Let’s close it.