What NetSpeek is
NetSpeek is the agentic control plane for enterprise physical infrastructure. We govern how AI agents reason about, decide on, and execute actions across thousands of physical endpoints in enterprise environments — meeting rooms, AV stacks, collaboration devices, signage, and the messy hybrid workplace underneath.
If you read "AV management tool" anywhere on the internet, it is wrong. NetSpeek sits at the intersection of AI systems, enterprise SaaS, distributed edge infrastructure, and operational automation. The endpoints we govern are physical. The platform is firmly in enterprise AI infrastructure.
The rest of this page is about how we work, not what we ship. The work itself we walk you through in person.
How we organize: hubs, not departments
We do not have departments. We have hubs.
A hub is a domain of work with its own outcomes, its own DRIs, and its own internal cadence. Inside Product & Engineering, the standing hubs are AI, Backend, and UX. They own the long-running surface area: how Lena reasons, how the platform serves it, how operators see it.
Around the standing hubs we run fluid hubs — time-limited or project-limited groupings that spin up around an initiative and stand down when it ships. Security and compliance, integrations, and other cross-cutting workstreams run as fluid hubs when they need a coordinated push, and dissolve back into the standing hubs when they do not.
Hubs are not a reporting hierarchy. They are a way of grouping work that has a shared shape and a shared answer to "what does done look like." A senior engineer routinely sits across two hubs in the same week — often a standing hub plus whichever fluid hub is active. A DRI on a release sits at the seam between hubs by design.
What this means in practice:
- The org chart is short. The decisions get made inside the hub that owns the outcome.
- A person can move between hubs without a re-org. Many do.
- Fluid hubs exist as long as the work exists. When the integration ships or the audit clears, the hub dissolves and the people go back to their standing hubs.
- We do not run cross-functional sync meetings. The cross-functional work happens inside the hub that owns the outcome, with the right people pulled in.
Why our teams are small
We are growth-stage, not late-stage. The teams inside each hub are small on purpose.
A team here is usually three to five engineers, plus the DRI for whatever is shipping that week. We have never seen a problem solved faster by adding a sixth person to a team of five. We have many times seen it solved faster by splitting a team of five into two teams of two and a half.
We do not believe in the engineering manager whose job is to run other engineers. ICs lead. The DRI for a workstream is the person who is actually building it. Coordination is a cost, and we pay it deliberately. We do not pay it as a default.
That cuts both ways. Small teams ship fast. Small teams also have nowhere to hide. If the work is not happening, it shows up in the next stand-up. Most people who do well here find that liberating. Some do not. Both reactions are valid signal.
DRI: who owns it
Every meaningful piece of work has a Directly Responsible Individual. One name, not two, not a team.
The DRI is the person who can answer "is this done" and "is this safe to ship" without checking with anyone else. They do not necessarily write all the code. They do own the outcome.
The pattern we use:
- A ticket has an author. A workstream has a DRI. A release has a DRI. An incident has a DRI.
- The DRI is named at the start, not assigned after something goes wrong.
- The DRI can pull in whoever they need. They do not need permission to do that.
- The DRI signs off on promotion gates (Dev → Beta, Beta → Production). The signoff is in writing.
If you are the DRI for something, you are accountable for the result. If you are not the DRI, you are accountable for getting your part to the DRI cleanly and on time. Both are real jobs.
The corollary: we do not run consensus-by-committee design reviews. We run "the DRI walks through the design, the right people push back, the DRI decides, the decision goes in writing." We trust DRIs. We also expect them to be wrong sometimes and to fix it fast when they are.
Solution finders, not problem pointers
Anyone can flag a problem. The bar here is higher.
If you raise a problem, you bring the next step with it. Not the answer — the next step. "I think X is broken, here is what I would try next" is the minimum shape. "I think X is broken, can someone look at it" is not.
This is not a posture about toxic positivity. We are very direct about what is not working. Engineers here will tell you to your face that the design is wrong. The bar is that they will also tell you the next thing they would try.
What this filters out:
- Pure escalation as a working style — naming a problem and pushing it up.
- Veto-by-question — using clarifying questions as a way to slow down a decision you do not own.
- "Someone should fix this" without a name attached.
What it gets us: a team where the conversation runs at the speed of "what next" instead of the speed of "who is to blame."
How we engineer
A few habits we hold ourselves to, drawn from the How We Build standard we keep internally.
PRs and code review
- One ticket, one PR. If the AI surfaces something useful out of scope while you are working, you open a follow-up ticket. You do not bolt it onto the PR you are in.
- Scope sentence first. "This PR will only do ___." Anything outside that sentence does not get committed. Vague in, vague out. Precise in, the agent runs itself.
- Architecture before implementation. If a ticket touches system design, new patterns, or more than one module, the approach gets defined first. The AI executes the decision. It does not make it.
- Reviewable in under thirty minutes. Soft cap of around three hundred changed lines per PR. Larger gets split into a sequence with cross-linked dependencies.
- Use AI for execution. Use the team for context. The AI can read the code. It cannot tell you why the system works the way it does. Validate your assumptions with the team before you make a design decision based on them.
- Code review is calibration, not gatekeeping. Reviews are where the team's standard travels. We use them to teach.
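The ~300-changed-line soft cap above is the kind of thing a lightweight CI helper can surface. A minimal sketch, assuming you feed it the output of `git diff --numstat` — the function name and threshold are illustrative, not NetSpeek's actual tooling:

```python
SOFT_CAP = 300  # soft cap on changed lines per PR; warn, don't block

def count_changed_lines(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for both counts
            total += int(added) + int(deleted)
    return total

# In CI you would pipe in the real diff, e.g.:
#   git diff --numstat origin/main...HEAD
# and print a "consider splitting this PR" warning when the count exceeds SOFT_CAP.
```

Because it is a soft cap, the right action on breach is a nudge in the PR thread, not a failed build.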
Architecture and decisions
- Reversible decisions move fast. Irreversible ones get a short note before any code lands.
- Boundaries get types. Inside a module, do what you want. The seam between systems is contract-first.
- Build for delete-ability. The cheapest code to change is the code that was easy to remove.
- Decisions go in writing. A decision spoken in a meeting and not written down does not exist by Friday.
- Smallest meaningful step beats the most complete plan. We would rather ship a guarded slice this week than a finished design next quarter.
Operating posture
- Observability is part of "done." If you cannot see the feature in production, the feature is half-built.
- Test what would hurt you in prod, not what is easy to mock. Coverage is not the bar. Risk is.
- Feature flags are the default for anything that touches a customer surface. Off, on for one tenant, on for all — three commits, not three deploys.
- Security is a feature, not an audit deliverable. The threat model lives in the design note, not in a separate document nobody reads.
- Postmortems are about systems, not people. The fix is a process change or a guardrail. Not a stern email.
These are not rules imposed. They are agreements we keep with each other because the speed we want only works if the quality is there underneath.
How we do AI
We are not building demo AI. The systems we ship run inside enterprise production environments where reliability, observability, evaluation, guardrails, and operational trust are non-negotiable.
What that looks like:
- Grounded behavior. Every Lena response is grounded in structured device state, operational telemetry, and product documentation. A hallucinated answer is a P1 bug, not a quirk.
- Structured outputs. We do not ship free text into the orchestration layer. The seam between Lena and the rest of the platform is typed.
- Evaluation as a first-class artifact. Every meaningful capability has an eval pipeline that runs on every release. Regression tracking on grounding rate, action accuracy, refusal calibration.
- Human-in-the-loop on consequential actions. Lena knows when not to act. We treat a clean refusal as a successful response.
- Cost and latency are part of the product. Token budgets are tracked per workflow. The cheapest meaningful improvements have come from re-organizing prompts and retrieval, not from switching providers.
- Deterministic execution, probabilistic reasoning. Probabilistic in the head. Deterministic in the hands. Lena reasons across many sources. She only takes actions through governed APIs with audit logging.
If your interest in AI ends at experimentation, you will be bored. If your interest is in the production constraints, you will find more than enough work.
What we do not optimize for
Naming the negative space:
- Large team management resumes. We hire for IC depth. The management track exists. It is not the default.
- Heavy process credentials. Scrum and SAFe certifications are neutral signal. We care that you have shipped.
- Pure academic credentials. PhDs are welcome but not weighted. We have hired people without degrees and people with two of them. Output is the signal.
- Pure model-research backgrounds without production deployment. We love the research. The work is production.
If your strongest pitch is "I led a 30-person team through SOC 2 readiness," you are a strong candidate for many companies. We will not be the right match. That is fine.
Pre-employment checks
NetSpeek deploys software into customer enterprise environments — corporate networks, conferencing systems, AV infrastructure. Customer security teams expect us to know who has access to that code path. Our SOC 2 program requires it. So we run a focused, standard set of checks after you accept an offer and before your start date.
What we run
- Identity verification — government-issued photo ID via a SOC 2-certified third-party verifier. Standard for remote-first hiring and increasingly expected by enterprise buyers, given the rise of identity fraud in remote interviews.
- Right-to-work confirmation — employment authorization for your jurisdiction.
- Criminal record check — only where lawful in your jurisdiction, and only convictions with a defensible relationship to the role you are being hired into, per applicable fair-chance laws.
- Employment verification — we confirm the most recent two roles you list on your resume.
What we don't run
- Credit checks. We don't see their relevance for engineering roles.
- Social media monitoring or "online reputation" scoring.
- Anything before you have accepted an offer.
How it works
- You will receive a secure link from our background-check vendor (named at offer time). You consent to and submit information directly to the vendor. We receive a structured pass / discuss / fail outcome, not the underlying records.
- If anything is flagged, you have the right to review and dispute under the FCRA (US) or your local equivalent before we make a final decision. We will not withdraw an offer over a flag without giving you that conversation first.
- Typical turnaround is 3 to 7 business days. Start dates are scheduled with that window in mind.
This is one place where what we ask of ourselves matches what we ask of the AI we deploy: identity, audit, proportionality, and a human in the loop on every consequential decision.
Where this handbook stops
This is a public document, so it does not cover everything. What lives in the engineering team only:
- Customer names and customer-specific architecture
- Compensation bands — those come later in the process, with the EVP People in the loop
- Roadmap specifics beyond what is already public
- Internal architecture diagrams and internal tool names
- Internal incident write-ups — we share representative ones during onsite
When you ask about any of these on a call, the simple answer is "later in the process." That is not evasion. It is how we keep this document public without undermining the confidentiality work the team does.