
Meet Milan, our new Engineering Lead, and his outlook for 2026

Milan’s 2026 focus as Engineering Lead: explore fast with intention, make AI safer through eval-driven design, keep humans where it matters, rethink the rules of good code, and turn chaos into clarity with a standardized, outcome-first AI stack.

TL;DR: Milan grew into the Engineering Lead role by bridging product and engineering on real customer problems. For 2026 his focus is clear: explore fast with intention, make AI safer and more autonomous through eval-driven design, keep humans in the loop where it matters, build outcome-first teams, raise the quality bar for AI-assisted code (docs + reproducibility), and move from chaos to clarity with a more standardized, reliable AI stack.

The path in: product × engineering, not titles

Milan initially joined as a Technical Strategist and started taking on pragmatic responsibilities wherever the Studio needed them. Together, we realized how much that overlapped with what an engineering lead should do here.

That overlap is the Nimble way: ship real value to real users, fast, with engineers sitting closer to customer problems than at typical product companies. Milan’s leadership grew from that intersection, not from a job description.

Why this matters now

2025 was the year we went all-in on exploration. AI took center stage. What began as isolated PoCs evolved into over 80% of our teams building AI-driven solutions.

Now, as we step into 2026, Milan wants to channel that same relentless curiosity with more focus, more clarity, and a rock-solid stack. Because in times of seismic change, curiosity will become leadership.

Exploration, with intention

Exploration is what got us here. It’s the reason we moved fast, learned fast, and built things others only talked about. But there’s a fine line between being curious and being distracted.

The shiny-object syndrome is real.

Next year, we’re refining that superpower. Instead of trying everything, we’ll try and measure. Every idea gets a purpose, a simple hypothesis, and a way to see if it actually moves the needle.

And because curiosity needs boundaries, each team gets a clear budget for the unknown, space to test a new model, tool, or pattern, without losing focus on what works. Curiosity stays, but now it comes with intention and guardrails.

As Milan phrased it: “Don’t be cynical, keep trying; just attach incentives and constraints to that curiosity.”

Eval-driven design as the default

If LLMs are teammates, they need to earn our trust, and they do that through evaluation.

Before we ship, we test. Every new model or workflow gets task-level checks, scenario scripts, and confidence thresholds. We don’t guess; we measure.

Once in production, observability keeps us honest. Logs, traces, and error patterns turn failures from mysteries into lessons. And when confidence scores climb, automation expands. When they drop, humans step back in.
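A minimal sketch of what such a confidence gate could look like. The `Scenario` shape, the threshold value, and `run_evals` are illustrative assumptions, not our actual harness; the point is simply that automation is a measured output, never a default.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    prompt: str
    passes: Callable[[str], bool]  # task-level check on the model's answer

def run_evals(model, scenarios, threshold=0.95):
    """Run scenario checks; automation expands only above the threshold."""
    results = [s.passes(model(s.prompt)) for s in scenarios]
    pass_rate = sum(results) / len(results)
    # When confidence climbs past the threshold, the flow may run
    # automated; when it drops below, humans step back in.
    return pass_rate, pass_rate >= threshold
```

In production, the same threshold comparison runs the other way: a falling pass rate observed in logs and traces flips the flow back to manual review.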

It’s not bureaucracy; it’s how we turn probabilistic systems into reliable products people can actually count on.

Human-in-the-loop, used wisely

It’s never a question of human or AI; it’s a question of balance: how much human, where, and when.

In high-stakes domains like healthcare, finance, or safety, oversight stays human by design. Elsewhere, as evaluations improve and real-world data builds trust, we can safely reduce manual checks.

We treat agents like team members: they get clear expectations, regular reviews, and limited permissions until they’ve earned more autonomy. It’s management, but for machines.
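One hedged way to picture that “management for machines” is a permission ladder: the tier names, tool names, and score cut-offs below are hypothetical examples, not our real policy, but they show how eval-backed confidence could unlock autonomy step by step.

```python
# Hypothetical permission tiers: a new agent starts read-only and
# earns broader tool access as its tracked confidence score improves.
PERMISSION_TIERS = {
    0: {"read_docs"},                               # new agent: read-only
    1: {"read_docs", "draft_reply"},                # output reviewed by a human
    2: {"read_docs", "draft_reply", "send_reply"},  # autonomous in low-stakes flows
}

def allowed_tools(confidence_score: float) -> set:
    """Map an agent's eval-backed confidence score to its tool set."""
    if confidence_score >= 0.95:
        tier = 2
    elif confidence_score >= 0.80:
        tier = 1
    else:
        tier = 0
    return PERMISSION_TIERS[tier]
```

Regular reviews then work in both directions: scores that slip demote the agent back down the ladder, just as they would widen oversight for a struggling teammate.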

Outcome-first teams at the product–tech intersection

The role of the engineer is evolving.

It’s no longer just about how to build something; it’s about why. The best engineers now think like product people: focused on impact, fluent in trade-offs, and unafraid to challenge assumptions.

Those who can move naturally between customer problems and technical constraints are becoming the most valuable builders. They ship faster, make sharper calls, and create real momentum.

For hiring and growth, agency and ownership matter more than a perfect tech stack match. We value bias-to-ship over stack-fit.

The quality bar is evolving

We used to worship at the altar of clean code.

No duplication. No comments. No clutter. Just elegant abstractions and perfectly DRY logic. But AI-assisted development is challenging that religion.

Now, we write for copilots as much as for people. That means more documentation, more repetition, more scaffolding. Not elegant per se, but fast and reproducible. Yes, it might look messier. Yes, it might offend your inner clean coder. But the trade-off? Velocity.

The rules of “good code” are shifting. And maybe the cleanest thing we can do is let go.

Security & privacy: skeptical and sober

Milan’s approach to AI is optimistic but, make no mistake, never naive.

We share only the minimum data needed, isolate high-risk flows, and design as if every prompt might be attacked. Multiple layers of defense, from model evals to app-level approvals, make sure safety isn’t a single point of failure.
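To make “minimum data needed” concrete, here is a deliberately simple minimization pass. The regexes and placeholder tokens are illustrative assumptions (real redaction needs far more patterns and review); the idea is that a prompt is scrubbed before it ever leaves our boundary, as one layer among several.

```python
import re

# Hypothetical PII-minimization layer: strip obvious identifiers from a
# prompt before it is sent to a model, so downstream layers of defense
# only ever see the minimum data they need.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def minimize(prompt: str) -> str:
    """Replace email addresses and phone numbers with neutral tokens."""
    prompt = EMAIL.sub("[email]", prompt)
    prompt = PHONE.sub("[phone]", prompt)
    return prompt
```

Because this runs app-side, it holds even if a prompt is later manipulated downstream: the sensitive values were never in the request to begin with.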

And while all-in-one platforms are tempting, we build with portability in mind. The goal is smart adoption, not blind dependence.

From chaos to clarity: standardizing the AI stack

2025 was a year of constant breakthroughs. 2026 is about stability.

We’re choosing tools and platforms that have staying power, not just novelty. Boilerplates and “golden paths” make evals, logging, and safety checks faster and more consistent.

The fewer seams, the fewer mistakes. By consolidating context, prompts, and credentials, we turn complexity into clarity, and speed into something sustainable.

What success looks like in 2026

  1. Measurably safer autonomy. More flows move from assisted to automated, backed by evals and real-world telemetry.
  2. Shorter validation loops. Idea → prototype → user signal in hours or days, not weeks.
  3. Clear guardrails. Security reviews, privacy scopes, and fallback behaviors are standardized.
  4. Teams with agency. Builders who own outcomes, not just tickets, and ship accordingly.
  5. A better quality of life. Because all of the above reduces friction and increases trust.

A note on culture

“Tools will be widespread. What differentiates us is the people: their agency, judgment, and momentum.”

That’s why our hiring emphasizes ownership and outcomes over perfect stack alignment. And why our processes reward bias-to-action without compromising safety.

Closing thought

AI keeps adding layers of abstraction. The craft doesn’t disappear; it moves. Documentation, evaluation, and observability become part of engineering’s core. Milan’s plan keeps us curious and accountable so we can deliver AI-native products that work in the real world.


Stef Nimmegeers, Co-Founder Nimble