TL;DR
- Python is having a moment—its adoption jumped notably year-over-year, propelled by AI, data, and back-end work. Rust and Go are steadily rising too.
- AI is now table stakes. 8 in 10 devs report using GPT models for development tasks, with Claude and Gemini in the mix; devs also say AI causes new frictions they turn to Stack Overflow to solve.
- DORA’s playbook still defines “top skills.” Delivery performance (deploy frequency, lead time, change fail rate, time to restore) and platform engineering continue to be the levers that separate high performers; 2024/2025 research adds how AI fits in.
- What to learn next: Python (with AI libraries), Rust/Go systems skills, AI-assisted development workflows, observability & reliability basics, and the DORA Four Keys, plus just enough platform engineering to remove toil.
Why this post?
Every year, Stack Overflow’s Developer Survey and DORA’s State of DevOps cut through hype with hard numbers. If you want to know what developers are actually learning in 2025 (not just what’s trending on X), these two sources are your compass. This post distills both into a clear learning plan.
1) Languages & runtimes developers are doubling down on
- Python = the AI multiplier. The 2025 Stack Overflow survey highlights a sharp rise in Python usage, accelerating after a decade of steady growth, driven by its role in AI, data science, and back-end work.
- Rust & Go keep climbing. The same results and Stack Overflow’s write-up note continued growth for Rust and Go, both central to AI infra, tooling, and high-performance services.
- JS/TS remain the web bedrock. (No surprise.) But the new energy isn’t just more React; it’s about connecting front-ends to AI services reliably and safely.
What to learn (pragmatically):
- Python + pydantic, FastAPI, polars, numpy, pandas, and an AI SDK (OpenAI/Claude/Gemini) to ship data-heavy backends with AI features; see the validation sketch after this list.
- Rust basics (ownership/borrowing, async) or Go (concurrency, profiling) for services and tooling that touch AI pipelines or high-throughput back ends.
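If you want to see how pydantic earns its keep in an AI backend, here’s a minimal sketch of validating an LLM’s JSON output before it reaches the rest of your service. The `Summary` schema and `call_llm` stub are illustrative placeholders, not any particular SDK’s API:

```python
import json
from pydantic import BaseModel, ValidationError

class Summary(BaseModel):
    # Hypothetical schema: the shape we require the LLM to return.
    title: str
    bullet_points: list[str]
    confidence: float

def call_llm(prompt: str) -> str:
    # Stub: swap in your real SDK call (OpenAI/Claude/Gemini).
    return '{"title": "Q3 report", "bullet_points": ["revenue up"], "confidence": 0.9}'

def summarize(text: str) -> Summary:
    raw = call_llm(f"Summarize as JSON: {text}")
    try:
        return Summary.model_validate(json.loads(raw))
    except (json.JSONDecodeError, ValidationError) as exc:
        # Treat malformed model output like any other bad upstream data.
        raise ValueError(f"LLM returned an invalid summary: {exc}") from exc
```

The point of the design: model output is untrusted input, so it goes through the same validation gate as anything arriving over the network.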
2) AI skills move from “nice to have” to “expected”
- Usage is mainstream: a large majority of developers say they used GPT models for dev work in the past year (GPT ~81%), with notable usage for Claude Sonnet and Gemini as well.
- Reality check: AI also introduces new friction; ~35% of visits to Stack Overflow are now triggered by AI-related issues that require extra debugging or understanding. In other words, AI both helps and breaks things.
What to learn (hands-on):
- Prompt patterns for code (e.g., spec-first prompts, test-driven prompts, diff-only refactors).
- RAG fundamentals for product features: chunking, embeddings, evaluation harnesses. A retrieval sketch follows this list.
- AI code-review & policy: linting prompts, PII/secret scanning, and requiring tests for AI-generated changes.
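To ground the RAG bullet, here’s a bare-bones retrieval sketch: chunk, embed, then rank by cosine similarity. The `embed` function here is a random-vector stand-in (swap in a real embeddings API); the rest is the loop most RAG stacks share:

```python
import numpy as np

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    # Naive fixed-size chunking with overlap; real systems often split
    # on sentence or heading boundaries instead.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(texts: list[str]) -> np.ndarray:
    # Stand-in: replace with your embeddings API of choice.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))

def top_k(query: str, chunks: list[str], k: int = 3) -> list[str]:
    vecs = embed(chunks)
    q = embed([query])[0]
    # Cosine similarity: normalize both sides, then take dot products.
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    q = q / np.linalg.norm(q)
    scores = vecs @ q
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```

An evaluation harness is then just a set of (query, expected-chunk) pairs you score `top_k` against whenever you change chunking or the embedding model.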
3) DORA’s 2024→2025 guidance: delivery performance + platform engineering + AI
DORA’s research continues to validate the Four Keys as north-star delivery metrics:
- Deployment Frequency
- Lead Time for Changes
- Change Failure Rate
- Time to Restore
The 2024 report emphasized platform engineering and user-centricity; the 2025 DORA research and Google’s release framing focus on AI-assisted software development—how teams adopt AI while protecting reliability and flow.
What to learn (career-durable skills):
- Map your team’s Four Keys (even in a side project).
- Platform engineering essentials: golden paths, paved roads, IDPs, and SRE guardrails that let devs ship safely.
- AI in the SDLC: where to automate (tests, docs, changelogs, runbooks) vs. where to require human gates (security, data usage, incident comms). A tiny policy sketch follows.
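One lightweight way to make the automate-vs-gate split enforceable is a policy table checked into the repo that your tooling consults. This is a sketch with illustrative task names, not a standard:

```python
# Illustrative automation policy: which SDLC tasks AI may handle
# end-to-end vs. which always need a human gate.
AUTOMATE = {"unit_tests", "docs", "changelog", "runbook_draft"}
HUMAN_GATE = {"security_review", "data_usage_change", "incident_comms"}

def requires_human(task: str) -> bool:
    if task in HUMAN_GATE:
        return True
    if task in AUTOMATE:
        return False
    # Unknown task types default to the safe side.
    return True
```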
4) The 2025 Learning Roadmap (6 weeks, flexible)
Week 1–2: Python + AI backbone
- Build a small FastAPI service exposing an /answer endpoint that calls an LLM.
- Add pydantic models, type hints, and pytest; a test sketch follows below.
- Capture latency & error rate as a warm-up for tracking Lead Time and Change Failure Rate.
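A first test for that /answer endpoint might look like this, using FastAPI’s TestClient to drive the app in-process (no server needed); adjust the assertions once a real LLM is wired in:

```python
# test_app.py
from fastapi.testclient import TestClient
from app import api

client = TestClient(api)

def test_answer_returns_an_answer():
    resp = client.post("/answer", json={"question": "What is DORA?"})
    assert resp.status_code == 200
    assert "answer" in resp.json()  # adapt once you wire in a real LLM

def test_answer_rejects_bad_payloads():
    # pydantic validation should turn a missing field into a 422.
    resp = client.post("/answer", json={})
    assert resp.status_code == 422
```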
Week 3: Rust or Go for systems fluency
- Write a chunking/indexing CLI (an offline embeddings job).
- Add profiling and basic benchmarks to see how concurrency and memory models translate to throughput.
Week 4: DORA metrics in practice
Create a simple GitHub Actions pipeline that:
- runs tests → builds → deploys to a preview environment,
- posts metrics (deploy count, duration) to a lightweight store (even a CSV is fine).
Track: deploys/day (DF), PR-merge→deploy (LT), rollbacks (CFR), downtime to first healthy check (MTTR). A scoring sketch follows.
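Here’s a minimal sketch that turns a four_keys.jsonl event log (like the one the CI snippet in section 6 appends to) into rough Four Keys numbers. The merge_to_deploy_s and restore_s fields are assumed names you’d emit yourself:

```python
import json
import statistics
from collections import Counter

def four_keys(path: str = "four_keys.jsonl") -> dict:
    # Assumed log shape: one JSON object per line with at least
    # {"ts": ..., "event": "deploy" | "rollback" | "incident", ...}.
    events = [json.loads(line) for line in open(path)]
    counts = Counter(e["event"] for e in events)
    span_days = max((max(e["ts"] for e in events) - min(e["ts"] for e in events)) / 86400, 1)
    deploys = counts["deploy"] or 1  # avoid divide-by-zero on a fresh log
    lead_times = [e["merge_to_deploy_s"] for e in events if "merge_to_deploy_s" in e]
    restores = [e["restore_s"] for e in events if "restore_s" in e]
    return {
        "deploys_per_day": counts["deploy"] / span_days,                               # DF
        "median_lead_time_s": statistics.median(lead_times) if lead_times else None,   # LT
        "change_failure_rate": counts["rollback"] / deploys,                           # CFR
        "median_restore_s": statistics.median(restores) if restores else None,         # MTTR
    }
```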
Week 5: Platform engineering starter kit
- Package your project as a template with a Makefile or Taskfile.
- Add a docs/quickstart.md and a one-click “bootstrap” script (golden path); a sketch follows below.
- Bake in guardrails: secret scanning, SBOM, basic SAST.
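The bootstrap script can be tiny. Here’s a sketch (the bootstrap.py name is my choice) that creates a venv and installs dependencies so a newcomer is productive in one command:

```python
#!/usr/bin/env python3
"""One-click bootstrap: create a venv and install project deps."""
import subprocess
import sys
from pathlib import Path

VENV = Path(".venv")

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main() -> None:
    if not VENV.exists():
        run(sys.executable, "-m", "venv", str(VENV))
    # Note: venvs use Scripts\ instead of bin/ on Windows.
    pip = str(VENV / "bin" / "pip")
    run(pip, "install", "--upgrade", "pip")
    run(pip, "install", "-r", "requirements.txt")
    print("Done. Activate with: source .venv/bin/activate")

if __name__ == "__main__":
    main()
```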
Week 6: AI-assisted developer workflow
- Define when to use AI (boilerplate, tests, translations) and when not to (security reviews, novel algorithms without tests).
- Implement AI-assisted code review that refuses merges unless new/changed code includes tests & docs; a gate sketch follows.
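The “refuse merges” part needs nothing fancy. Here’s a minimal CI gate sketch that fails when a diff touches Python source but no tests; the tests/ layout is an assumption, so adapt the paths to your repo:

```python
# ci_gate.py: fail CI if code changed without accompanying tests.
import subprocess
import sys

def changed_files(base: str = "origin/main") -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f]

def main() -> None:
    files = changed_files()
    code = [f for f in files if f.endswith(".py") and not f.startswith("tests/")]
    tests = [f for f in files if f.startswith("tests/")]
    if code and not tests:
        print(f"Blocked: {len(code)} code file(s) changed with no test changes.")
        sys.exit(1)
    print("Gate passed.")

if __name__ == "__main__":
    main()
```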
5) Shortlist: Top Skills for Developers in 2025
Core tech
- Python for AI/data/back-end; Rust/Go for systems & infra.
AI fluency
- LLM toolchains (prompting, evals, RAG), agentic flows in safe, testable envelopes; awareness of AI-introduced failure modes (hallucinations, security/data leakage).
Delivery excellence
- The DORA Four Keys and how to move them with CI/CD, testing strategy, and trunk-based development.
Platform engineering mindset
- Golden paths, internal developer portals, sensible defaults, paved roads over snowflake setups.
Observability & reliability basics
- SLOs, tracing, error budgets; tie incidents and postmortems back to Change Failure Rate and Time to Restore.
6) A tiny reference implementation (copy/paste to get rolling)
```bash
# project bootstrap
uv venv && source .venv/bin/activate  # or your preferred venv
pip install fastapi uvicorn pydantic openai pytest httpx

# run
uvicorn app:api --reload
```
```python
# app.py
from fastapi import FastAPI
from pydantic import BaseModel

# Pseudo-code: swap in your preferred AI SDK (OpenAI/Claude/Gemini)

class Ask(BaseModel):
    question: str

api = FastAPI()

@api.post("/answer")
def answer(q: Ask):
    # Call your LLM here; keep prompts spec-first & testable.
    return {"answer": f"TODO: LLM({q.question})"}
```
```yaml
# .github/workflows/ci.yml
name: ci
on: [push]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      - run: pip install -r requirements.txt
      - run: pytest --maxfail=1 --disable-warnings -q
      # deploy to preview env (stub)
      - run: echo "deploying..." && date >> deploy.log
      - name: record-four-keys
        run: |
          python - <<'PY'
          import json, time
          event = {
              "ts": time.time(),
              "event": "deploy",
              "duration_s": 42,  # pretend build+deploy time
          }
          with open("four_keys.jsonl", "a") as f:
              f.write(json.dumps(event) + "\n")
          PY
```
This skeleton gives you a place to practice AI integration, tests, and DORA data capture from day one.
7) Pitfalls to avoid (learned from the data)
- Unreviewed AI code. Treat AI like a junior pair: fast, helpful, occasionally wrong, and always in need of tests. (Developers increasingly report AI-related issues sending them back to Stack Overflow.)
- Vanity metrics. Count deploys only if they’re meaningful. Tie changes to user-visible impact and incident learnings (DORA’s Four Keys exist for a reason).
- DIY platform sprawl. Prefer paved roads and templates over bespoke pipelines; DORA’s recent reports connect stable priorities and platform engineering to better outcomes.
8) Your 2025 study checklist
- Ship a Python + AI microservice with tests
- Learn one systems language skill (Rust or Go)
- Instrument the Four Keys on a toy app
- Create a “golden path” template for your team
- Define AI guardrails (what’s automated vs. human-gated)
- Practice incident response & postmortems tied to DORA metrics
Final thought
If you focus your learning on Python + AI, systems fluency (Rust/Go), and the operational discipline captured by DORA’s Four Keys, you’ll be aligned with what top teams—and the data—say actually moves the needle in 2025.
See ya in the next post!