Trust, But Verify: The Evidence Gap in GRC Control Assessments
Eliminate blind trust in software. Verify what’s inside the binaries.
Modern GRC and security control programs run on documenting controls, assigning ownership, and validating control design and operating effectiveness through assessments, testing, and management attestations. Policies, control libraries, risk registers, and assessments feed into platforms like ServiceNow, Archer, MetricStream, and similar tools. We need these platforms because they structure accountability, organize evidence workflows, track issues, and support audits across integrated risk management (IRM) programs.
But evaluating your control environment primarily through documentation and attestations still covers only the application layer of software risk: the source code. That’s especially true in third-party risk reviews, where “effective” often means “attested and documented,” not independently verified in the delivered artifact.
The systems that introduce unseen risk into your environment are not policy statements or control descriptions. They are the compiled and linked artifacts running in production: firmware on devices, appliance images, container workloads, desktop agents, and, not least, the kernels and operating systems on which those applications run.
Between the control “on paper” and the software “in production” sits the packaged artifact that is actually delivered and installed. Applications are rarely deployed alone. They ship bundled with operating system components, libraries, dependencies, configuration files, and other software layers.
Those bundled components quietly shape what ends up in the binary image. They can introduce vulnerable libraries, hidden dependencies, misconfigurations, secrets, licenses, and cryptographic material—risk that questionnaires, narratives, and source-derived SBOMs often fail to capture.
NetRise exists to eliminate blind trust in software by giving organizations independent evidence of what is actually inside the compiled artifacts they build and buy, so that control effectiveness and residual risk reflect what’s really installed and running, beyond the application itself.
As expectations for software transparency and “secure by design” increase, GRC teams need evidence that goes beyond attestations and reflects what suppliers and internal teams actually shipped.
As regulatory expectations increase — from the EU Cyber Resilience Act to SEC cybersecurity disclosure rules to sector-specific regulation like the FDA’s premarket guidance — organizations are expected to demonstrate how they identify, assess, and manage material cyber risk. That expectation raises the bar for evidence supporting software supply chain controls and residual risk conclusions.
Where GRC and Security Controls Stop
Most GRC programs emphasize three areas:
- Policies and control frameworks that define what should be true
- Assessments and questionnaires that ask internal teams and vendors whether controls are in place
- Evidence workflows that attach documents, tickets, and screenshots to demonstrate control operation
Those elements answer familiar questions:
- Does a documented control exist for this risk?
- Has the control owner documented and attested to how the control operates?
- Is there process evidence to satisfy auditors and regulators?
What they rarely answer is a deeper question:
Do our control attestations and evidence workflows reflect what’s actually delivered and running in the compiled binaries in our environment?
In GRC, that answer drives how you assess control operating effectiveness and set residual risk.
Because much GRC evidence is qualitative and self-reported, the picture it paints is shaped by how vendors package evidence, how internal teams describe their controls, what appears in policy narratives and control descriptions, and what shows up in high-level inventories and source-based SBOMs.
Much of what happens inside a firmware image, a container layer, or a compiled binary sits outside that view.
A control may assume “secrets are not hard-coded,” but the compiled artifact may still contain tokens and credentials. A vendor may attest that “cryptographic material is properly protected,” while key material and certificates ship together in ways that make them easy to extract and misuse.
Beyond being a visibility gap, this disconnect is a governance gap because it drives conclusions about control operating effectiveness and becomes a key input to residual risk decisions—decisions that auditors, regulators, and boards increasingly expect to be supported by objective evidence.
When Heat Maps Turn Green for the Wrong Reasons
GRC teams communicate risk through ratings, heat maps, and executive summaries. These tools are useful—but only as accurate as their inputs.
In most organizations, inherent risk is adjusted based on control design and operating effectiveness to estimate residual risk. When operating effectiveness is inferred mainly from attestations and “paper evidence,” residual risk can be understated.
If the evidence is incomplete, the heat map may look green while exposure remains red. When an incident, exam, or audit forces a closer look, the question shifts from “Was the control documented?” to “Can you demonstrate it was effective in the software that actually ran?”
A control can be documented, assessed procedurally, and marked “effective.” Yet if the control’s intended outcome isn’t reflected in the production artifact, residual risk is being estimated on assumptions rather than evidence.
That matters because residual risk drives real governance decisions:
- prioritization and funding
- exception and waiver workflows
- accepted risk memos and sign-offs
- board and audit committee reporting
- assurance plans for internal audit and second-line oversight
Binary-level software evidence doesn’t replace risk scoring. It strengthens it, by grounding control effectiveness conclusions and residual risk estimates in a comprehensive and accurate software bill of materials.
Beyond Self-Attestation: What Compiled Artifacts Reveal
Control spreadsheets and vendor responses rarely capture the degree of complexity that exists within your software artifacts. Inside compiled firmware images, application binaries, and containers, NetRise routinely finds:
Build-time drift: When control design doesn’t match the shipped product
Many controls are designed around intent: approved components, secure configurations, and required checks in the SDLC. But software risk is ultimately determined at build time—when source code, dependencies, kernels, operating systems, and configurations are assembled into the artifact that actually ships. If controls don’t explicitly account for build-time decisions, a system can look compliant on paper while the production artifact diverges from what the control assumes. The result is a gap between documented control effectiveness and the reality of what’s running.
Takeaway: If the shipped artifact diverges from what controls assume, you can end up with hidden exposure—even while controls are rated “effective” and residual risk appears within tolerance.
Access credentials in artifacts: When process controls don’t prevent artifact exposure
Controls may state that secrets live in approved vaults and are never embedded in software. In practice, “secrets” are the sensitive values that grant access—API keys, passwords, access tokens, and similar credentials. Yet secrets can still find their way into the shipped artifact through build-time packaging, templates, or release workflows. Because this risk lives inside the delivered software—not in a policy doc or questionnaire response—it often escapes traditional evidence collection even though it directly undermines the control’s intent.
Takeaway: If secrets are embedded in delivered software, attackers can bypass intended access controls entirely, turning a process control failure into a direct compromise risk and a governance exposure.
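To make this concrete, here is a minimal sketch, in Python, of the kind of pattern matching a secrets check might run against files extracted from a firmware image or container layer. The pattern names and rules are illustrative assumptions; production scanners use far richer rule sets plus entropy analysis, and this is not NetRise’s actual detection logic.

```python
import re

# Illustrative credential patterns (hypothetical, minimal rule set).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(rb"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "bearer_token": re.compile(rb"(?i)authorization:\s*bearer\s+[A-Za-z0-9._-]{20,}"),
}

def scan_blob(blob: bytes):
    """Return the names of secret patterns found in a binary blob."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(blob))

# Example: a config file baked into a shipped artifact at build time,
# despite a policy stating that secrets live only in an approved vault.
baked_config = b"db_user=app\naws_key=AKIAABCDEFGHIJKLMNOP\n"
print(scan_blob(baked_config))  # a non-empty result contradicts the control's intent
```

The point of a check like this is evidentiary: a single hit in a delivered artifact is objective proof that the process control did not hold, regardless of what the questionnaire said.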
Trust and signing material in artifacts: When governance assumes separation that the artifact doesn’t enforce
Organizations document strong key management practices and vendors may attest to them. Unlike secrets (which grant access), cryptographic keys and certificates establish trust, encryption, and integrity—so exposure can enable impersonation, decryption, or malicious code signing.
But controls often stop at the process level and don’t confirm how cryptographic material is actually packaged in production software. When artifacts package sensitive cryptographic material in ways the control model doesn’t anticipate, it can contradict key management assumptions and change true control effectiveness.
Takeaway: When cryptographic material is exposed, bundled improperly, or left unmanaged (including expired certificates), trust boundaries can collapse, undermining encryption and integrity assumptions and creating outsized impact relative to what the control narrative suggests.
Expired certificates are less about a single finding and more about assurance maturity: they’re evidence the control isn’t being continuously monitored in the shipped artifact.
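As a sketch of that monitoring idea: once certificates have been extracted from an artifact, flagging expired ones is a simple date comparison. The inventory rows and field names below are hypothetical, standing in for whatever a binary-analysis tool actually reports.

```python
from datetime import datetime, timezone

# Hypothetical inventory: certificates found in a shipped firmware image,
# with their notAfter validity dates.
extracted_certs = [
    {"subject": "CN=update.vendor.example", "not_after": datetime(2021, 3, 1, tzinfo=timezone.utc)},
    {"subject": "CN=device-ca.vendor.example", "not_after": datetime(2030, 1, 1, tzinfo=timezone.utc)},
]

def expired(certs, now=None):
    """Return subjects of certificates whose validity window has closed."""
    now = now or datetime.now(timezone.utc)
    return [c["subject"] for c in certs if c["not_after"] < now]

print(expired(extracted_certs))
```

Run continuously against each release, a check like this turns “key management is effective” from an attestation into a monitored assertion about the shipped artifact.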
Self-attestation shows what controls claim to achieve. Binary evidence shows whether the software in scope actually reflects those controls in practice.
Why GRC Needs NetRise
GRC leaders don’t need another ticket queue or another questionnaire template. They need a defensible evidence base for software risk—one that complements existing GRC platforms by anchoring governance decisions in what is actually inside the binaries.
NetRise is a Software Supply Chain Security company that starts from compiled artifacts rather than source code or vendor declarations. It builds a comprehensive and accurate software asset inventory and full-stack SBOMs across firmware, kernels, operating systems, containers, and applications, then enriches that inventory with CVE and non-CVE risk so teams can answer: “Where am I exposed?” and create controls to remediate risk before incidents occur.
What GRC Looks Like in the NetRise Platform
Consider a third-party device that looks “clean” in your current reporting—no reported issues, no red flags, and a vendor attestation that vulnerability management controls are in place.
Then you inspect the compiled firmware. NetRise surfaces known exploited vulnerabilities in underlying components that weren’t visible from the application layer (the source code) from which the paper trail was derived.

Instead of treating inherited exposure as a vague or unbounded problem, NetRise narrows the evidence to what’s highest priority: known exploited, weaponized, and reachable issues. This is where binary evidence becomes operational—it turns broad exposure (often thousands of inherited vulnerabilities) into a prioritized set of items that can be tracked, owned, and reflected in residual risk.
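The prioritization step above can be sketched as a simple split of SBOM-derived findings against an exploited-vulnerabilities catalog such as CISA KEV. The data shapes and component/CVE pairings below are illustrative assumptions, not a real schema or a real device’s findings.

```python
# Hypothetical findings derived from a full-stack SBOM of a firmware image.
sbom_findings = [
    {"component": "busybox 1.27.2", "cve": "CVE-2021-42374"},
    {"component": "openssl 1.0.2k", "cve": "CVE-2022-0778"},
    {"component": "zlib 1.2.8", "cve": "CVE-2016-9841"},
]

# Illustrative stand-in for a known-exploited-vulnerabilities catalog.
known_exploited = {"CVE-2022-0778"}

def prioritize(findings, kev):
    """Split findings into an urgent, trackable set and the remaining backlog."""
    urgent = [f for f in findings if f["cve"] in kev]
    backlog = [f for f in findings if f["cve"] not in kev]
    return urgent, backlog

urgent, backlog = prioritize(sbom_findings, known_exploited)
print([f["component"] for f in urgent])
```

The governance value is the partition itself: the urgent set is small enough to assign owners and fold into residual risk, while the backlog remains visible without dominating the conversation.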
The takeaway here doesn’t have to be “we found more CVEs.” Instead, the presence of known exploited vulnerabilities suggests that a core control assumption may be wrong, and a finding like thousands of expired certificates suggests the vendor may not be aware of them, indicating a missing or broken control. And if they’re not monitoring this, what else are they missing?
Findings like these change the residual risk story you’re taking to leadership.
With artifact-level evidence, you can treat the gap like any other governance issue: document a control exception or deficiency where appropriate, adjust residual risk based on what’s actually running, and make a risk acceptance decision that reflects real exposure, not a vendor narrative.
Seeing the Whole Iceberg
Your control environment is broader than any dashboard suggests. Policies, assessments, and GRC platforms illuminate only the visible portion of software risk.
NetRise reveals the rest: the binaries, hidden components, and non-CVE weaknesses that determine whether controls are truly effective—and whether residual risk is being reported accurately.
The next generation of GRC will depend on what can be verified, not just what is declared. As pressure for software transparency and secure-by-design practices grows, organizations that pair governance with binary evidence will have the most defensible view of software supply chain risk.
You cannot govern what you cannot see. With NetRise, you finally see what is really inside the software you build and buy. In an environment of rising regulatory expectations and supply chain scrutiny, “trust but verify” is no longer a best practice—it is becoming a baseline expectation.