Feb 13 2026
12 min read
1. End-to-end SWE and AI fatigue
- One of the biggest stories in AI since Thanksgiving has been the advances in coding – Anthropic’s Claude Opus 4.5 and later 4.6, vibe-coding’s inflection point, and OpenClaw’s open-source AI agents. Opus 4.5 was so impressive that it kicked off a wave of pessimism about the future of the software industry. During the past week, however, the chatter has shifted from amazement at how capable AI has become to how AI is making software engineers’ lives harder, not easier as many expected.
- Some engineers at OpenAI, Anthropic, and Spotify report that nearly 100% of their code is now written by AI. The code is not always perfect – it can incorporate subtle bugs that generate technical debt and counterbalance the productivity gains, or introduce new security vulnerabilities. While AI can perform code review, some human attention is usually still needed. Still, this represents a major step-change in productivity from 6 months ago.
- It’s not just coding – more of the development lifecycle is being taken over by AI. According to Anthropic CEO Dario Amodei, “90% of the end-to-end SWE tasks – including things like compiling, setting up clusters and environments, testing features, writing memos – [can be] done by the models.” AI startup CEO Matt Shumer reports that not only can AI build an app by writing tens of thousands of lines of code, it can open the app the way a human would and click through the buttons to test all the features. It can iterate and apply its own judgment – something akin to taste – to fix and refine the app. “And when I test it, it's usually perfect.”
- Prior generations of AI models often tricked developers into thinking they were more productive than they actually were. Now, as Shumer puts it, “The models available today are unrecognizable from what existed even six months ago.” (If you’re still using a free version, you may not fully recognize what is going on.) While it’s not clear whether engineers using AI are actually 10x as productive, it is becoming evident that those wielding the latest models effectively are meaningfully – and likely some multiples – more productive.
- Perhaps not too surprisingly, AI companies are at the forefront of using AI in their work developing AI. Last week, OpenAI released GPT-5.3 Codex, calling it “our first model that was instrumental in creating itself” – with early versions used to “debug its own training, manage its own deployment, and diagnose test results and evaluations.” At Anthropic, 95% of the code by the Claude Code team is written using Claude Code. Amodei believes we “may be only 1–2 years away from a point where the current generation of AI autonomously builds the next.”
- Unfortunately, all this capability doesn’t seem to be translating into a better working life for engineers using AI. Industry watchers are using terms like “AI vampire” and “AI fatigue” to describe the burnout that engineers are experiencing as they do more in the same amount of time. Engineers describe no longer being needed for the technical work of manually coding. Instead of generative tasks that provide flow, they are taking on evaluative tasks with the accompanying decision fatigue – which is a very different kind of work. Engineers are also taking on a broader scope of work and multi-tasking/context-switching more. AI engineer Siddhant Khare frames this paradox as follows: “AI reduces the cost of production but increases the cost of coordination, review, and decision-making.”
- Engineers are also working faster and at a higher intensity, and sometimes longer as well, even when they’re not being asked to do so. In some cases, this is because they are finding more engagement in their work or perhaps wrestling with a “prompt spiral” (the AI equivalent of yak-shaving). It may also partly be due to engineers having a broader scope of work and the ease of delegating work to a model, resulting in a blurring of work time and non-work time – not dissimilar to how managers often have a less bounded workday than individual contributors.
- If, as a stylized fact, these engineers are 10x as productive, they are often capturing very little of that value. Some are reporting they are losing their ability to do generative work and think from scratch. It’s a recipe for burnout and uncritical acceptance of whatever slop AI produces. It’s not necessarily an easy problem to solve – even companies that are conscious of avoiding employee burnout still have to face the pressures of a market where their rivals are using AI as well.
- These models continue to get better. This year, we may see models that can work fully independently for days on end. Amodei believes we could solve “continual learning” on the job within 1–2 years, and that we could see AI that’s “better than almost all humans at almost all tasks” – what he calls “a country of geniuses in a datacenter” – in 1–3 years and almost definitely within 10 years. AI is already starting to tackle white-collar jobs in arenas like the legal sector, financial analysis, and even healthcare.
- Looking ahead, these dynamics will inevitably shift how software engineers work. Researchers suggest that best practices could include building in intentional pauses within the workday, sequencing work to protect focus windows, and providing time and space to connect with others for human grounding. Khare recommends time-boxing AI sessions, allocating mornings for thinking and afternoons for AI use, and accepting imperfection in what the AI can produce as well as the final output.
- Over time, it could also shift who will be an “engineer” in the future. The subset who became engineers because they enjoy coding will be less happy in their jobs. There will be fewer specialists in traditional programming skills, and more generalists and product management types. Entry-level engineers will be required to be “AI native” at a minimum, and their training will need to evolve to support them in developing judgment and evaluative capability. While AI will certainly play a role in that, we may end up seeing the return of old-fashioned apprenticeship as well.
Related Content:
- Feb 6 2026 (3 Shifts): Software, the legal sector, and the free tier
- Jan 30 2026 (3 Shifts): OpenClaw & open-source AI agents
Disclosure: Contributors have financial interests in Meta, Alphabet, OpenAI, Anthropic, and Aurora Innovation. Google and OpenAI are vendors of 6Pages.
Have a comment about this brief or a topic you'd like to see us cover? Send us a note at tips@6pages.com.