It’s a pithy play on the seminal 2017 Google research paper, “Attention Is All You Need,” but the sentiment is no less critical when assessing an organization's cyber risk.
Security tools are excellent at identifying issues, but only within their own silos, not within your specific organizational context. Assessing risks through that organizational lens remains the "holy grail" of cyber risk management. This is precisely where humans add value: we take the raw output from these tools and reason through it based on the unique needs and structure of the business.
The Problem Nobody Talks About: Subjective Risk

The industry has a dirty secret: risk is subjective. It is fundamentally contextual to your organization, yet security tools have always treated it as a static, universal finding.
The Context Gap

Traditionally, security vendors find a bug, assign a score, and ship it. This "one-size-fits-all" severity is blind to your reality. It doesn't know if a vulnerability is on an internet-facing server or an isolated test box. It doesn't know if that machine touches your production database or your lunch menu.
The Scaling Failure

Experienced engineers bridge this gap by asking the right questions: What's the exposure? Is there an active exploit? What's the blast radius? But human reasoning doesn't scale. When faced with 10,000+ findings, teams resort to sorting by vendor scores. The result? Critical, context-heavy risks sit unaddressed in a backlog while teams chase "High" alerts that don't actually matter.
The New Reality

As attackers use AI to automate discovery and exploitation, the traditional "scan-and-patch" model is no longer just inefficient—it's dangerous.
Quantro solves this by automating contextual reasoning. We’ve built a system that applies the instincts of a senior security engineer across your entire environment, continuously and at scale, without the human bottleneck.
Context as a Data Architecture Problem

We realized early on that context isn't a prompt engineering problem. You can't just shove environment metadata into a prompt and call it a risk assessment. Context has structure and a lifecycle; it flows from multiple sources and changes over time. Real contextual reasoning requires sophisticated retrieval semantics, not just a better-worded query.
When we talk about context in our system, we mean several distinct things operating together:
Environmental context: the actual state of your infrastructure. Network topology, cloud configuration, which machines can reach which, how services are exposed, how they're configured. This isn't static; it changes every time someone provisions a new instance or modifies a security group.

Asset context: the value and sensitivity of what you're protecting. Not just technical metadata but organizational meaning. Is this a production database? A dev box that an intern owns? A machine that processes payment data? The same vulnerability has a completely different risk profile depending on what it's sitting on.

Threat context: live intelligence on what's actually being exploited. Exploit availability, threat actor activity, whether a specific CVE is being weaponized in the wild right now versus sitting in a proof-of-concept state. This shifts constantly.

Organizational context: the decisions your team has already made. Which risk classes you've accepted, what your remediation capacity looks like, what you've historically prioritized. A system that ignores your past decisions will keep surfacing the same noise.

Historical context: patterns that emerge over time. How your environment has changed, how past risk decisions correlated with outcomes, what's been remediated and what hasn't.
Each of these has a different shape, a different update frequency, and different retrieval requirements. Some of it you want to search semantically: "what similar situations have we seen before, and how were they handled?" Some of it you need to query precisely: "what is the exact network exposure state of this host right now?" Building a system that handles both well is the hard part.
Sources of Context

Context is fragmented, not only across systems, but also in your analysts' heads. The obvious sources of context in many organizations are:
- Wikis
- Messaging systems
- PDFs
- Scan reports
- Red-teaming activities
- CMDBs

But there is also context that exists only as tribal knowledge, learned by analysts on the job. This is the least obvious source and the most difficult to extract up front. In agentic systems, we believe the expectation should mirror the one we hold for a human junior analyst. We expect analysts who join a team to go through an onboarding period in which they learn the environment and gather this tribal knowledge by collaborating with their peers. In the same way, an agentic system needs the ability to capture context while it is being used.
How the System Is Built

The architecture we landed on reflects this. Rather than bolting context onto an existing pipeline, context is the foundation on which everything else runs.
Context Ingestion Layer: This is the entry point for all context signals. User queries, API triggers, and automated pipeline events—everything passes through here. The ingestion layer doesn't just accept input; it normalizes and classifies it, then routes it to the appropriate processing paths.
Memory Architecture: We use two complementary stores. A vector memory store handles semantic retrieval. When the system needs to find relevant precedents, similar risk patterns, or historically analogous situations, it queries here. A structured memory store handles precise, relational queries: asset state, configuration data, organizational metadata, and anything that needs exact answers rather than similarity-based retrieval. The split is intentional. Forcing all context through a single retrieval mechanism means either losing precision or losing the ability to reason about fuzzy similarity. You need both.
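To make the split concrete, here is a minimal sketch of the two stores side by side. Everything in it is a simplifying assumption for illustration: the class names (`VectorMemory`, `StructuredMemory`), the toy two-dimensional embeddings, and the payload strings are hypothetical, not the production implementation.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class VectorMemory:
    """Semantic retrieval: find the most similar past situations."""
    def __init__(self):
        self.items = []  # list of (embedding, payload) pairs

    def add(self, embedding, payload):
        self.items.append((embedding, payload))

    def search(self, query_embedding, k=1):
        ranked = sorted(self.items,
                        key=lambda it: cosine(it[0], query_embedding),
                        reverse=True)
        return [payload for _, payload in ranked[:k]]

class StructuredMemory:
    """Precise relational queries: the exact current state of an asset."""
    def __init__(self):
        self.assets = {}

    def upsert(self, asset_id, state):
        self.assets[asset_id] = state

    def exposure(self, asset_id):
        return self.assets[asset_id]["internet_facing"]

# Fuzzy question: "have we handled something like this before?"
vm = VectorMemory()
vm.add([0.9, 0.1], "accepted: low-risk RCE on isolated dev box")
vm.add([0.1, 0.9], "escalated: RCE on exposed prod host")
precedents = vm.search([0.85, 0.2])  # nearest precedent wins

# Exact question: "is this host internet-facing right now?"
sm = StructuredMemory()
sm.upsert("host-a", {"internet_facing": False, "env": "dev"})
exposed = sm.exposure("host-a")
```

The point of the sketch is the asymmetry: the vector store answers by similarity and can tolerate fuzziness, while the structured store must answer exactly or not at all.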
LLM Embedding Generator: Raw context signals are embedded continuously and persisted to storage. This isn't a one-time import. As your environment changes, as new threat intelligence comes in, and as remediation events happen, the embeddings update. The context model stays current.
Compaction Engine: This is one of the more interesting components. As context accumulates across an organization's history, you run into a real problem: not all of it is equally relevant, and raw volume starts working against you. The compaction engine distills accumulated context into durable patterns, the way an analyst builds intuition from years of experience rather than trying to remember every individual event. It keeps the knowledge base signal-dense rather than just large.
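One way to picture compaction is as frequency-based distillation: recurring individual decisions collapse into a single durable pattern, and only uncovered events stay in raw form. The event schema and the support threshold below are assumptions for illustration, not the engine's actual logic.

```python
from collections import Counter

def compact(events, threshold=3):
    """Collapse recurring (risk_class, environment, decision) events into patterns."""
    counts = Counter((e["risk_class"], e["environment"], e["decision"])
                     for e in events)
    patterns = [
        {"risk_class": rc, "environment": env, "decision": dec, "support": n}
        for (rc, env, dec), n in counts.items() if n >= threshold
    ]
    # Keep only events not yet covered by a durable pattern.
    covered = {(p["risk_class"], p["environment"], p["decision"]) for p in patterns}
    residual = [e for e in events
                if (e["risk_class"], e["environment"], e["decision"]) not in covered]
    return patterns, residual

events = (
    [{"risk_class": "weak-tls", "environment": "dev", "decision": "accept"}] * 4
    + [{"risk_class": "rce", "environment": "prod", "decision": "remediate"}]
)
patterns, residual = compact(events)
# Four identical acceptances become one pattern with support=4;
# the lone remediation event is retained as-is.
```

Real compaction would weigh recency and outcome as well as frequency, but the shape is the same: the store gets denser, not just bigger.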
Context Retrieval API: The interface that agents use to query relevant context for a given assessment. This is the boundary between the knowledge infrastructure and the reasoning layer. Agents don't query raw storage directly; they ask the retrieval API for context relevant to a specific question, and the API handles the work of knowing where to look and how to combine what it finds.
Agent Layer: The system operates in three distinct modes. The VM.Agent handles real-time analyst queries, where a security engineer is asking the system to assess a specific risk or answer a specific question and needs an answer now. The Prompt Task Engine handles scheduled and triggered assessments, regular re-evaluation of the risk posture as context changes. Background Jobs handle continuous processing: updating context as new signals arrive, re-scoring findings when environmental state shifts, and surfacing emerging patterns that warrant attention.
The Knowledge Base: Encoding How Experts Reason

One of the less obvious components in our system is the knowledge base, and it's worth spending time on because it's central to how we get accurate, reproducible reasoning at scale.
The problem with LLM-based reasoning in isolation is that it's inconsistent. Ask the same risk question twice with slightly different phrasing and you might get different answers. In security, that's not acceptable. Risk assessment needs to be deterministic enough to be trusted, auditable, and defensible.
The knowledge base is how we encode the procedural knowledge of risk assessment in a structured form. Not just "what vulnerabilities exist" but the reasoning patterns that connect environmental signals to risk conclusions. How do you weigh network exposure against CVSS score? How does exploit availability interact with asset criticality to change remediation priority? What combinations of factors should escalate a medium-severity finding to urgent?
These are patterns that senior security engineers carry in their heads. We've worked to externalize them into queryable structures. The result is a system that can execute the same quality of reasoning consistently, across thousands of assets simultaneously, with auditable logic you can trace back to specific context signals and rules.
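A minimal sketch of what "externalized into queryable structures" can mean: rules as data, each pairing a condition over context signals with a priority adjustment and a rationale, so every conclusion carries its own audit trail. The rule set, weights, and severity ladder here are hypothetical, not Quantro's actual knowledge base.

```python
# Each rule: (condition over context signals, priority adjustment, rationale).
RULES = [
    (lambda c: c["exploit_in_wild"] and c["internet_facing"], +2, "weaponized + exposed"),
    (lambda c: c["asset_tier"] == "production", +1, "production asset"),
    (lambda c: not c["internet_facing"] and c["asset_tier"] == "dev", -2, "isolated dev box"),
]

LEVELS = ["low", "medium", "high", "urgent"]

def assess(base_level, context):
    """Apply every matching rule; return the final level plus an audit trail."""
    idx = LEVELS.index(base_level)
    trail = []
    for cond, delta, why in RULES:
        if cond(context):
            idx += delta
            trail.append(why)
    idx = max(0, min(idx, len(LEVELS) - 1))  # clamp to the severity ladder
    return LEVELS[idx], trail

# Same base severity, two contexts, two traceable outcomes.
prod_level, prod_trail = assess("high", {
    "exploit_in_wild": True, "internet_facing": True, "asset_tier": "production"})
dev_level, dev_trail = assess("high", {
    "exploit_in_wild": False, "internet_facing": False, "asset_tier": "dev"})
```

Because the rules are plain data, changing how exposure trades off against severity means editing a rule, not retraining anything, and the `trail` makes each decision defensible after the fact.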
It also means the knowledge base can be updated. When the threat landscape shifts, when new vulnerability classes emerge, when your organization's priorities change, the knowledge updates without retraining a model.
Self-Learning: A System That Adapts to Your Environment

Traditional vulnerability management tools are static. The risk scores ship from the vendor and they don't change based on what your environment looks like or how you operate.
Our system learns continuously. Not in the vague sense of "AI gets smarter" but in specific, concrete ways.
When your team accepts a risk, deciding that a specific finding isn't worth addressing given your environment, the system updates its context model. It learns that this class of risk, in this type of environment, with this set of mitigating factors, is an acceptable residual risk for your organization. It won't keep surfacing the same noise.
When remediation happens, the system observes what was fixed and what wasn't, and uses that to refine its prioritization model. Over time it develops a picture of what your team actually acts on, which is not always what the vendor severity scores predict.
When your infrastructure changes (a new service gets deployed, a network segment gets reconfigured, a machine gets promoted to production), the system re-evaluates existing findings in light of the new environmental context. A finding that was low priority last week might be urgent today if the context changed.
This is continuous re-assessment rather than point-in-time scanning. The risk model isn't frozen when the scan completes. It's live.
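The risk-acceptance part of that loop can be sketched in a few lines. The idea of keying suppression on a (risk class, environment) signature is a simplifying assumption; a real model would also account for mitigating controls and decision age.

```python
class ContextModel:
    """Toy feedback loop: accepted risks stop resurfacing as noise."""
    def __init__(self):
        self.accepted = set()  # (risk_class, environment) signatures

    def record_acceptance(self, finding):
        self.accepted.add((finding["risk_class"], finding["environment"]))

    def should_surface(self, finding):
        return (finding["risk_class"], finding["environment"]) not in self.accepted

model = ContextModel()
f1 = {"id": 1, "risk_class": "weak-tls", "environment": "internal-dev"}
f2 = {"id": 2, "risk_class": "weak-tls", "environment": "internal-dev"}
f3 = {"id": 3, "risk_class": "weak-tls", "environment": "prod"}

model.record_acceptance(f1)          # the team accepts the first finding
same_env = model.should_surface(f2)  # same class, same environment: suppressed
prod_env = model.should_surface(f3)  # same class, production: still surfaces
```

Note the asymmetry: acceptance in the dev environment does not silence the same risk class in production, which is exactly the contextual boundary a static severity score cannot draw.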
Human-Level Reasoning, with Machine-Level Speed and Accuracy

The clearest way to illustrate what this makes possible is a concrete example.
Take the same vulnerability, a remote code execution vulnerability with a CVSS score of 9.8. It appears on two different hosts in your environment.
Host A is an internal development server. It's behind a firewall, not internet-facing, carries no production data, and is owned by your engineering team. The exploit requires authenticated access. There's no known active exploitation in the wild.
Host B is running in your production environment. It processes customer data, it has outbound connectivity to your database tier, and there's a working exploit that's been observed in active threat campaigns targeting your industry this month.
A system that just reads CVSS scores treats these identically. Both are 9.8. Both go on the critical list.
A system with context treats them completely differently. Host A is a medium-priority finding that goes on the backlog with a reasonable remediation window. Host B is urgent, gets escalated immediately, and might trigger a compensating control while the patch is being deployed.
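The Host A / Host B contrast can be reduced to a toy scoring function: identical CVSS in, different priorities out once context is applied. The specific adjustments and thresholds below are invented for illustration, not the system's real model.

```python
def prioritize(cvss, context):
    """Adjust a vendor CVSS score with environmental and threat context."""
    score = cvss
    if not context["internet_facing"]:
        score -= 4.0   # isolated hosts drop sharply
    if context["exploit_active"]:
        score += 2.0   # live exploitation raises urgency
    if context["asset_tier"] == "production":
        score += 1.5   # blast radius of production assets
    if score >= 9.0:
        return "urgent"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

host_a = {"internet_facing": False, "exploit_active": False, "asset_tier": "dev"}
host_b = {"internet_facing": True, "exploit_active": True, "asset_tier": "production"}

priority_a = prioritize(9.8, host_a)  # isolated dev server
priority_b = prioritize(9.8, host_b)  # exposed prod host, live exploit
```

Both hosts start at 9.8; only the contextual adjustments separate a backlog item from an escalation.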
That differentiation, applying the right reasoning to each finding based on its actual context, is what our system is built to do. Not for two hosts. For every host, every finding, continuously, as context changes.
The senior security engineer who could do this reasoning manually for ten assets is a bottleneck at ten thousand. The system doesn't have that constraint. It runs the same quality of contextual assessment regardless of scale.
What This Unlocks for Defensive Teams

When context becomes a first-class component of your risk assessment infrastructure, a few things change that couldn't change before.
Prioritization becomes defensible. When you can trace a risk score back to specific context signals — this host is production-critical, this exploit is actively weaponized, this asset is exposed to the network path an attacker would use — you can explain decisions to stakeholders, to auditors, to your board. Risk assessment stops being a black box.
Backlogs get real. Most security backlogs are actually two things mixed together: genuine risk that needs to be addressed, and noise that got flagged because no one had time to evaluate it in context. A system that does contextual assessment at ingestion time means the backlog reflects real work, not the output of a scanner that doesn't know what matters in your environment.
The system improves with your environment. As your infrastructure evolves and as your team makes decisions, the context model updates. The system gets more accurate about your specific environment over time, not just more accurate in general.
Re-assessment becomes automatic. When threat context shifts, a new exploit drops, or a threat actor starts targeting a specific vulnerability class, findings that were previously deprioritized can be automatically re-evaluated and re-surfaced if the new context changes the risk calculus.
The goal isn't to replace security engineers. It's to let them operate at the level of judgment and strategy that they're most valuable for, rather than spending their time doing manual triage that a well-built system can handle.
We're still building. The problem is hard, and the surface area is large. But the architecture is working, and the early results suggest that contextual risk assessment at scale isn't just possible, it's the direction the industry needs to move in.