Zero Trust is not Trustless!

Introduction

From time to time, people in the cybersecurity industry coin a new buzzword, mainly to market their latest "breakthrough product". Recently, Zero Trust has become one of cybersecurity's most used buzzwords.

In this article, I will explain, from the perspective of application security, what Zero Trust is, as well as what Zero Trust is not.

Implicit trust in the defense-in-depth model

To explain Zero Trust, I will first explain implicit trust using a typical infrastructure diagram of an organization that has invested quite a lot in security infrastructure. I must emphasize that I kept the architecture deliberately simple so that I don't lose the story to the details. Cloud deployment (especially a hybrid one) brings additional complexity; however, for this example we will ignore the location of the infrastructure, as well as encryption and key management within it.

Figure 1: Theoretical infrastructure

In the above drawing, I have placed several security devices; in most cases, however, the deployment can be done using one or two independent appliances.

When we look at the drawing, we can immediately point out that "Zone 0" is not under the control of the organization and therefore cannot be trusted. If we follow incoming requests from this untrusted source, we see that there are security controls in every zone. Every control point implicitly increases the trust level of the system behind it. Because the database server can be reached only after passing several security controls, it is the most trusted system, assuming the security controls are properly configured and work under all conditions without exception.

However, this model rests on several unseen assumptions, which can be summarized as follows:

  • Security appliances do not have any security flaws
  • The application stack accepting requests originating from the untrusted zone (the internet) does not have any security flaws, including:
    • The libraries and components used to build the application
    • The operating system hosting the application
    • The application server
    • The application itself

Unfortunately, the reality is quite different: every component in the application chain has security issues from time to time, and patch deployment can take a while after discovery. This creates a window of vulnerability that attackers can use to infiltrate the "trusted" zones of the infrastructure. Once we acknowledge that at any given moment a system might have undiscovered vulnerabilities that attackers can use to penetrate the environment, we realize that systems can never be fully trusted.

A nightmare scenario

Let's consider a scenario for the above drawing: the security appliances, application servers, database servers, and applications all use Log4j for logging. Can we still trust any of these systems after CVE-2021-44228 was discovered, or even before it was publicly acknowledged? Looking from the defender's perspective: if we cannot always trust our systems, we must assume they are breached.

What to do if we assume "trusted" systems are breached?

The first thing one can think of is that, as defenders, we have to monitor the systems to see what is going on. Monitoring brings another security appliance and team into the picture: Security Information and Event Management (SIEM) and the Security Operations Center (SOC). The task of the SOC team and the SIEM tool is to collect all relevant logs from the systems, establish a baseline of normal behavior, and try to detect anomalies, so that defenders can act on and respond to a particular abnormal event, hopefully stopping the attackers before they can do any harm.

How to detect anomalies?

The very first challenge of the SOC and security architecture teams is to collect all relevant event information (logs) into the SIEM. Although not easy, this is an achievable task, provided your SIEM tool can handle the throughput, measured in events per second (EPS), that your systems generate. The real challenge, however, is defining "normal", so that when an event deviates, the SOC team can recognize it as abnormal and, hopefully, act on and respond to it.
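To make "determining the normal" concrete, here is a minimal sketch of statistical baselining: compute the mean and standard deviation of a historical event rate and flag counts that deviate by more than a few standard deviations. Real SIEM detection logic is far richer (seasonality, per-entity baselines, correlation rules); the data below is hypothetical.

```python
import math

def baseline(counts):
    """Mean and standard deviation of historical per-minute event counts."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return mean, math.sqrt(var)

def is_anomalous(count, mean, std, threshold=3.0):
    """Flag a count more than `threshold` standard deviations from the baseline."""
    if std == 0:
        return count != mean
    return abs(count - mean) / std > threshold

# Hypothetical login-failure counts per minute from the log history
history = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]
mean, std = baseline(history)
print(is_anomalous(4, mean, std))   # False: normal fluctuation
print(is_anomalous(80, mean, std))  # True: likely a brute-force burst
```

The hard part in practice is not the arithmetic but choosing which data points to baseline, which is exactly where SOC expertise comes in.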

Zero Trust Model

The Zero Trust model has three pillars:

  • Assume breach
  • Least privileged access
  • Verify explicitly

Let's assume the infrastructure components in our drawing communicate only over pre-arranged protocols with authentication, further restricted by IP address. Together with the SIEM implementation and the SOC team, we have:

  • Assume breach methodology
  • Least privileged access

However, we still verify implicitly, since we rely on a static ruleset and a one-time user authentication. The defender's new challenge is to verify every request explicitly and to bind each step in the application chain to a legitimate user request.

Adding Identity Service into the picture

The missing element in our infrastructure + SIEM + SOC setup is the explicit authorization of each request, including API calls, along the application chain. To overcome this, we need a central identity federation service that authenticates and explicitly verifies all requests and API calls. Let's look at a diagram published by Microsoft, which is also one of the leading identity federation service providers.

Figure 2: Example deployment of an Identity service by Microsoft

Link: https://learn.microsoft.com/en-us/azure/architecture/example-scenario/security/apps-zero-trust-identity

If we look at what happens behind the scenes in the application structure in Figure 2, we can quickly spot that every incoming request triggers a request to the identity service for authentication and authorization. With explicit verification in place, we can call our new structure a Zero Trust implementation.
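The per-request pattern above can be sketched as follows: every API call, even between internal services, is checked against the identity service before any work is done. The `IdentityService` class here is a hypothetical in-memory stand-in for a real federation service; names such as `handle_api_call` and the scope strings are illustrative.

```python
import time

class IdentityService:
    """Hypothetical stand-in for an identity federation service."""

    def __init__(self):
        self._sessions = {}  # token -> (subject, expiry, scopes)

    def issue(self, subject, scopes, ttl=300):
        token = f"tok-{subject}-{int(time.time())}"
        self._sessions[token] = (subject, time.time() + ttl, set(scopes))
        return token

    def verify(self, token, required_scope):
        # Explicit verification on EVERY call: existence, expiry, authorization.
        entry = self._sessions.get(token)
        if entry is None:
            return False
        _subject, expiry, scopes = entry
        return time.time() < expiry and required_scope in scopes

def handle_api_call(idp, token, scope, action):
    # No request in the chain is trusted implicitly; each one is re-verified.
    if not idp.verify(token, scope):
        return "403 Forbidden"
    return action()

idp = IdentityService()
token = idp.issue("alice", ["orders:read"])
print(handle_api_call(idp, token, "orders:read", lambda: "200 OK"))   # authorized
print(handle_api_call(idp, token, "orders:write", lambda: "200 OK"))  # least privilege denies
```

Note how least-privileged access and explicit verification combine: even a valid token is rejected for a scope it was never granted.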

Problems in the new architecture

Before explaining the problems, I would like to point out that as defenders we tend to overlook the new assumptions we introduce while solving a particular problem. We should not forget that every assumption is implicit trust.

An identity federation service should have the following components:

  • An algorithm to authenticate requests, usually an HMAC, which requires:
    • A verified, widely accepted, and well-tested algorithm
    • A proper implementation of that algorithm
    • Proper key management
  • A secure protocol to be used by the identity service
  • A tamper-resistant application server without any security flaw
  • A tamper-resistant database without any security flaw
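The first bullet, "proper implementation", is worth illustrating, because it is where home-grown code most often fails. A minimal HMAC-SHA256 sketch using Python's standard library is shown below; in a real deployment the key would come from a key-management system, not from code.

```python
import hashlib
import hmac
import os

# Illustrative only: a production key comes from a KMS/HSM, never from os.urandom at startup.
KEY = os.urandom(32)

def sign(message: bytes, key: bytes) -> bytes:
    # Use the widely tested HMAC-SHA256 primitive rather than a home-grown MAC.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    # compare_digest avoids the timing side channel a plain == comparison would leak.
    return hmac.compare_digest(sign(message, key), tag)

msg = b"GET /orders user=alice"
tag = sign(msg, KEY)
print(verify(msg, tag, KEY))                          # True: untampered request
print(verify(b"GET /orders user=mallory", tag, KEY))  # False: message was altered
```

Even with a correct algorithm, the two other bullets remain: subtle implementation mistakes (such as non-constant-time comparison) and weak key management each silently reintroduce implicit trust.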

As you can imagine, from the moment we add the identity federation service into the picture, we implicitly trust that its implementation has no security flaws and can be trusted at all times. The reality, however, is the opposite: a quick search of any vulnerability database will turn up many past vulnerabilities that attackers could (and maybe still can) use against your infrastructure.

Quite a lot of “we assume” in the Zero Trust Model

Let's list the elements that we, unfortunately and unwillingly, still need to trust in the Zero Trust model, namely:

  • That we collect all related logs, traces, and metrics for analysis in the SIEM
  • That anomalies do not fall into our "normal" bucket, and that our SOC team has the expertise to understand what our data points mean
  • That the responsible SOC team has enough expertise to set up, monitor, and act on events
  • That the identity federation service has no flaws or security issues

Conclusion

Defenders should see Zero Trust as a natural evolution of their security infrastructure. It is not a miracle; to succeed, it requires many well-designed pieces to be in place, starting with the basics. If the basics are not properly set up, applying Zero Trust is meaningless: you will have a SOC team that has a lot of data but cannot provide valuable information to your organization. Furthermore, the Zero Trust model still contains assumptions and implicit trust, and in a centralized architecture it is unfortunately impossible to avoid assumptions, and thereby trust.

What is next

For the next shift, I can think of including AI in the SOC to help determine anomalies, so that SOC teams can carry out their analysis correctly. The market has already bought into this approach, and some vendors already offer certain automated "AI" features. Another improvement could be the introduction of a decentralized identity federation to supplement Zero Trust. I must admit that anything decentralized is not an easy task, especially if it has to scale without "burning" money.

How to check whether your organization is ready to move toward a Zero Trust model

Answering this question deserves an article of its own; as you can imagine, the assessment is not an easy task. However, I would like to share here how to start checking readiness. I would begin by checking whether the defense-in-depth methodology is implemented properly, including the additional safety nets. For example:

  • Do you rely on a single vendor for your security appliances (firewall, IPS, etc.)? If yes, consider adding a perimeter using another vendor.
  • Does your application use any authentication or restriction mechanism? If yes, does it have a safety net? Consider implementing two-way (mutual) TLS in your backend communication, combined with IP restrictions.
  • Do you have a secure application development lifecycle, and, as a safety net, code review and regular penetration testing? Consider implementing code review as a starting point.
  • Can your IPS scan all traffic, or is there incoming encrypted traffic that your IPS cannot inspect? Consider decrypting the traffic before the IPS scan.
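The mutual-TLS safety net from the checklist can be sketched with Python's standard library: the server context refuses clients without a valid certificate, and a simple IP allowlist rejects unknown peers before the handshake even starts. The subnet, file names, and addresses are illustrative assumptions.

```python
import ipaddress
import ssl

# Hypothetical allowlist of backend subnets permitted to connect.
ALLOWED_NETS = [ipaddress.ip_network("10.0.1.0/24")]

def ip_allowed(peer_ip: str) -> bool:
    """First safety net: drop peers outside the known backend subnets."""
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in net for net in ALLOWED_NETS)

def build_server_context(cert_file: str, key_file: str, ca_file: str) -> ssl.SSLContext:
    """Second safety net: a TLS context that REQUIRES a client certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED       # reject clients without a valid cert
    ctx.load_cert_chain(cert_file, key_file)  # server's own identity
    ctx.load_verify_locations(ca_file)        # CA that signed the client certs
    return ctx

print(ip_allowed("10.0.1.42"))    # True: inside the backend subnet
print(ip_allowed("203.0.113.9"))  # False: rejected before TLS is attempted
```

Layering the two checks is the point: if the IP restriction is misconfigured, the certificate requirement still holds, and vice versa.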
