What Software Developer Job Postings Are Really Telling Us

Published: Feb 10, 2026 by Ernesto Spruyt


We track thousands of software developer job postings daily across Northern Europe and North America at Tunga. And we talk to the CTOs and founders posting them. Many of those job postings describe an impossible person.

A “Senior ML Engineer” position posted in January 2026 requires “5+ years experience with LLM fine-tuning and RAG architecture.”

But LLM fine-tuning entered mainstream practice in 2023. And RAG architecture achieved production adoption only in 2024.

The math doesn’t work: you can’t have five years of experience with technology that has existed for two or three.

And this isn’t someone being unreasonable. It’s someone who knows the world is changing rapidly but doesn’t have time to fully grasp how. So they list everything they think they might need, just to be safe. 

Our data confirms the pattern. Job descriptions in 2025 are 30-40% longer than equivalent roles in 2023, almost entirely from more granular technical requirements. But the technologies specified haven’t existed long enough for developers to accumulate the experience requested.

According to IBM, 99% of developers are still “exploring” AI agent development. Yet postings increasingly demand “experience with agentic AI frameworks.” The agentic coding market is projected to grow from $7.8 billion in 2025 to $52.6 billion by 2030. Tools like Claude Code and Devin achieved production readiness mid-2025. Gartner forecasts 40% of enterprise applications will embed AI agents by the end of 2026, up from less than 5% in 2025.

That’s 700% growth in twelve months: going from roughly 5% to 40% of applications is an eightfold jump. The talent pool cannot have developed proportionally.

What we’re seeing isn’t a description of the current market. It’s anxiety about the future, expressed as present-tense requirements.

The Seniority Paradox

Here’s something we saw recently: a developer who’d been highly successful in a senior role for one of our clients – strong technical work, good communication, reliable delivery – applied for a senior role at a different company. They were rejected for “lack of seniority”.

Same developer. Same skill level. Different company, different outcome.

This happens because “senior” doesn’t mean the same thing everywhere. For some companies it means framework proficiency. For others it means learning velocity and judgment. The distinction matters enormously, but job postings don’t clarify which one they mean.

Over 50% of software developer postings are now senior-level, compared to 30% historically. During the same period, entry-level postings increased 47% from their 2023 lows. Both ends growing simultaneously.

When we analyze postings using “senior,” certain language appears consistently: “fast-growing team” (73% correlation), multiple positions listed (68%), remote work where historically on-site (81%), detailed benefits sections (64%). These correlate with competitive pressure and hiring difficulty, not necessarily work complexity.

And this: 89% of “senior” postings require 7-10+ years’ experience, but 76% require experience with technologies less than five years old. Companies are designating roles as “senior” partly to signal they’re willing to pay competitively, partly to filter for people who won’t need hand-holding. Whether the actual work requires senior judgment is often a separate question.
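
For readers who want to see the mechanics, here is a minimal sketch of the kind of phrase co-occurrence check behind numbers like those above. The sample postings, phrase list, and function name are hypothetical placeholders for illustration; the real analysis runs over the full posting corpus and also compares senior against non-senior postings before calling anything a correlation.

```python
# Hypothetical sketch: for each phrase, the share of "senior"-titled postings
# that also mention it. Sample data below is made up for illustration only.

postings = [
    {"title": "Senior Backend Engineer", "text": "Join our fast-growing team. Fully remote."},
    {"title": "Junior Frontend Developer", "text": "On-site role with mentorship."},
    {"title": "Senior ML Engineer", "text": "Multiple positions open. Detailed benefits package."},
]

PHRASES = ["fast-growing team", "remote", "benefits"]

def cooccurrence_rates(postings, phrases):
    """Fraction of postings with 'senior' in the title that contain each phrase."""
    senior_texts = [p["text"].lower() for p in postings if "senior" in p["title"].lower()]
    if not senior_texts:
        return {phrase: 0.0 for phrase in phrases}
    return {
        phrase: sum(phrase in text for text in senior_texts) / len(senior_texts)
        for phrase in phrases
    }

print(cooccurrence_rates(postings, PHRASES))
# -> {'fast-growing team': 0.5, 'remote': 0.5, 'benefits': 0.5}
```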

What Leadership Doesn’t Know Yet About AI

When we talk to company leadership about agentic AI, there’s a consistent pattern. They know it should be an opportunity to produce code at scale. They know their competitors are exploring it. But they struggle to fully grasp how it actually works in practice.

How do you organize software development when AI agents handle implementation? How do you ensure quality? What does “scalable” even mean in this context? They don’t know what to ask for in hiring, don’t know what to screen for, don’t know how to evaluate it.

So they add “experience with AI agents” to the posting and hope candidates will figure it out.

The disconnect: top candidates leave the market within 10 days of starting an active search, according to multiple recruiting agencies. Average time-to-fill is 44 days globally. If the best candidates commit within 10 days but your process takes 44, you never meet them. You’re interviewing whoever is still looking in week three.

Glassdoor found company-specific factors (interview stages, stakeholder availability, decision authority, approval processes) explain more variance in time-to-fill than role or candidate factors. Variables entirely within company control.

Specialized agencies report 7-14 day time-to-fill working with identical candidate pools. The difference is process: fewer stages, faster decisions, clearer requirements.

The constraint isn’t talent scarcity. It’s that by the time companies figure out what they actually need and move through their process, qualified candidates have already accepted other offers.

What Companies Say Versus What They Screen For

Communication ranks as the most frequently mentioned skill across nearly 2 million tech job postings. More than Python, JavaScript, or any framework!

Yet most processes screen exclusively on technical qualifications in early stages. Soft skills get evaluated in final rounds, if at all.

Here’s what we see from the developer side: many strong developers focus almost entirely on technical skills. They know intellectually that communication matters, but they have trouble imagining what “soft skills” actually encompass in practice. What does “strong communication” mean day-to-day? How do you demonstrate that you “collaborate effectively across teams”? It’s abstract until you’ve done it.

So developers optimize for what they can measure: learn another framework, build another project, get another certification. Meanwhile, companies screen out candidates before their communication can be evaluated, then complain they can’t find people who can communicate.

The shift from execution-focused development (write code to spec) to orchestration-focused development (direct AI agents, communicate across functions, make architectural decisions) makes communication more predictive of success than framework mastery. But neither side has fully adjusted to this yet.

The Subsegment Rebalancing

Machine Learning Engineer postings grew 40% year-over-year (following 78% growth the previous year). Frontend-only positions declined 24-33%. Full-stack grew 9%, but “full-stack” now means backend systems + frontend frameworks + AI integration + cloud infrastructure + DevOps fundamentals.

These skills historically developed through different career paths. Companies are asking for combinations that accumulate through distinct specializations, expecting to find them bundled in individual candidates.

Some exist. But at much lower density than posting volume suggests.

What Actually Works

The companies we see successfully hiring do a few things differently:

They separate core from aspirational. Internally, they know what the role genuinely requires on day one versus what would be nice to have. The developer with 80% of the core requirements plus demonstrated learning velocity will outperform someone with 100% of the stated requirements but no evidence of adaptation, especially when half the “required” technologies didn’t exist two years ago.

They optimize for process speed. When you know top candidates move in 10 days, every scheduling delay and approval bottleneck becomes visible as the actual problem. It’s not about rushing decisions. It’s about eliminating artificial friction.

They screen for what actually predicts success. If communication matters, evaluate it early. The technical filter that eliminates candidates before their communication can be assessed is screening for what’s easy to measure, not what determines outcomes.

They reconsider where they look. The patterns we observe – impossible requirement combinations, process bottlenecks, screening mismatches – affect local markets everywhere. They’re structural, not geographic.

We work with senior developers in Africa who have eight years of experience building distributed systems, strong communication skills developed working with international clients, and who are learning agentic AI right now like everyone else. They sometimes get screened out of European and US processes for lacking specific credentials or local experience, while companies complain about talent shortages.

When everyone is learning agentic AI simultaneously (because the tools are six months old), “10 years of Silicon Valley experience” stops being the differentiator it was. Architecture skills, communication ability, and learning velocity become primary.

What makes alternative talent models interesting isn’t cost arbitrage. It’s that developers systematically overlooked by credential-focused screening often possess what companies say they need. And they’re available while local markets remain gridlocked.

What This Means for Developers

We see many developers focus intensely on technical skills while underweighting soft skills, partly because “strong communication” or “collaboration” remains abstract until you’ve experienced what it means in day-to-day work with distributed teams.

The reality: soft skills now predict success more strongly than framework mastery. When 73% of backend postings mention “AI collaboration experience,” they’re asking whether you can work effectively in an environment where AI handles implementation while humans handle judgment, communication, and strategy.

Tool expertise has a shorter half-life than ever. RAG barely appeared in job postings in 2023; by 2025 it’s “required.” This pattern will continue. What persists: the ability to learn tools quickly, understand their constraints, and know when to use them.

Specialize in durable layers, not transient tools. The market demands specialization while punishing over-specialization in technologies that change yearly. Specialize in problem domains and architectural patterns: distributed systems, resilient data pipelines, interfaces humans use. Learn current tools to solve those problems, but recognize tools as temporary implementations of durable patterns.

Geographic barriers are narrowing. The “you’re not from here” disadvantage has shrunk substantially. Not because standards dropped, but because the barriers that kept people out – local tool experience, local network, proximity – matter less when everyone learns the same new tools simultaneously and teams are distributed by default.

You still need to prove capability through portfolio work, open source, clear communication, and learning velocity. But the window is unusually open right now.

Where This Leaves Us

Job postings describe institutional anxiety, not market reality. They’re written by busy founders who know the world is changing but don’t have time to fully understand how, so they list everything that might matter.

The result: postings that describe impossible people, processes that screen out qualified candidates, and qualified candidates who move through the market faster than companies can respond.

For companies: The ones succeeding aren’t writing better job postings. They’re recognizing the divergence between what postings say and what success requires, then adjusting process, screening, and geographic constraints accordingly.

For developers: Tool mastery has never mattered less and human skills have never mattered more. The market is simultaneously more accessible (geographic barriers down, new tools level the field) and more demanding (communication and adaptability are baseline, not bonus).

The friction we’re seeing creates two futures: one for those stuck optimizing for credentials that can’t exist, processes that don’t work, and local talent pools that are already exhausted. Another for those willing to see what’s actually changing.