NetSpeek

FILE 06.02 / MID IC (1-3 YOE)

AI Engineer

Build retrieval, evaluation, and grounding workflows that make Lena reliable.

Who this seat is for

You have already touched real AI work. Production. Serious internal tooling. Something other people relied on.

You think about AI as something you build with, not just something you prompt. You have written Python that other people depended on. You know that an LLM call working once in a notebook is not the same as a system that holds up at 3pm on a Tuesday.

This role is open to people whose AI experience came from internal AI initiatives, internal tooling, or early production work. What matters is that you built something real and you understand why production AI is harder than experimentation.

What you will own

You will work alongside the AI Team Lead and senior AI engineers. You will own progressively larger pieces of the reasoning and retrieval layers as you grow.

Initial areas:

  • Prompt iteration, structured output design, retrieval quality
  • RAG pipeline contributions: embedding workflows, grounding strategy
  • Evaluation and test pipelines for AI reliability
  • AI integration into platform services, alongside the backend engineers
  • Data preparation and AI response validation
  • Operational analysis of what Lena did in production, and why
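The structured-output discipline in the list above can be sketched in a few lines. This is a hypothetical validator, not NetSpeek code: it assumes, purely for illustration, that Lena's responses are JSON objects with an `answer` string and a `sources` list, and it rejects anything that drifts from that contract before the response reaches the rest of the platform.

```python
import json

# Assumed response contract (illustrative, not the real schema):
# {"answer": "<text>", "sources": ["<chunk-id>", ...]}
REQUIRED_FIELDS = {"answer": str, "sources": list}

def validate_structured_output(raw: str) -> dict:
    """Parse a model response and enforce the expected JSON shape.

    Raises ValueError on malformed JSON, a non-object payload,
    or a missing/mistyped field, so callers never see a response
    that only "mostly" matches the contract.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"response is not valid JSON: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("response must be a JSON object")
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing required field: {field!r}")
        if not isinstance(data[field], expected_type):
            raise ValueError(
                f"field {field!r} must be {expected_type.__name__}"
            )
    return data
```

The design point is the one the role description makes: validation happens at the seam where AI output enters platform services, so a model that works "once in a notebook" cannot quietly ship malformed output.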

You will pair often. With the AI Team Lead for mentorship and architecture. With senior AI engineers for review and design partnership. With backend engineers for the seams where AI meets the rest of the platform.

Hard requirements

  • 1 to 3 years of software engineering, ML engineering, or applied AI
  • Hands-on exposure to LLMs, RAG, or AI workflows beyond tutorials and courses
  • Strong Python fundamentals
  • Comfort with APIs, JSON, and modern SaaS systems
  • Familiarity with prompt engineering, structured output discipline, and modern AI tooling

High-signal indicators

  • Worked with vector databases or embedding pipelines
  • Exposure to LangChain, LlamaIndex, or comparable frameworks on something real
  • Internal AI tooling or automation that other people actually used
  • A working idea of what an LLM evaluation should measure
  • Startup or fast-moving product environment
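On the evaluation point above: a working idea of what an LLM evaluation should measure can be as small as a groundedness check. The sketch below is an illustrative heuristic, not the team's actual metric: it scores the fraction of answer sentences whose content words overlap the retrieved context, a crude but measurable proxy for "did the model stick to its sources."

```python
import re

def groundedness(answer: str, context_chunks: list[str],
                 threshold: float = 0.5) -> float:
    """Fraction of answer sentences supported by the retrieved context.

    A sentence counts as supported when at least `threshold` of its
    words appear somewhere in the context. Word-overlap is a weak
    proxy (it misses paraphrase and negation) but it is cheap,
    deterministic, and trendable across releases.
    """
    context_words = set(
        re.findall(r"[a-z']+", " ".join(context_chunks).lower())
    )
    sentences = [s for s in re.split(r"[.!?]+", answer) if s.strip()]
    if not sentences:
        return 0.0
    supported = 0
    for sentence in sentences:
        words = re.findall(r"[a-z']+", sentence.lower())
        if words:
            overlap = sum(w in context_words for w in words) / len(words)
            if overlap >= threshold:
                supported += 1
    return supported / len(sentences)
```

A metric like this is only a starting point, but it illustrates the shape of the work: pick something measurable, run it over production traffic, and watch the trend rather than a single notebook run.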

What this role means here

You learn quickly inside production AI systems. You contribute reliably to experimentation and shipping. You improve AI quality through disciplined iteration. You ask good questions and take feedback well.

You are not expected to architect the platform. You are expected to grow into ownership, with mentorship from people who have built production AI before.

What we look for at this level

  • You have shipped Python that other people depended on
  • You can describe an AI workflow you built, and one thing you would change now
  • You ask "how do we know this is working?" before "can we ship this?"
  • You read the post-mortem before writing the code
  • You find production constraints interesting, not annoying

Why this seat matters

A growth seat inside an AI-native company. You will contribute directly to how Lena reasons. You will build the evaluation discipline that holds the platform to production standards. You will grow into a senior AI engineer over the next two to three years, inside a category that is still being defined.

If you want to be a strong production AI engineer in five years, this is a place that gets you there faster than most.

FILE 06.02.A / APPLY

Two ways in.

Most candidates run the Incident Lab scenario first and submit the structured response with the application. If you would rather skip the scenario and answer one essay question instead, the Field Note path is here for that.