Zero trust is one of cybersecurity’s least understood, yet trendiest, buzz phrases. Looking back at the past few years, one can easily understand why. Today, trust is in short supply. From exponential increases in ransomware and cryptojacking attacks to rising geopolitical tensions, these are tenuous times, especially when it comes to running a business. So it’s no surprise that the concept of “zero trust” and its presumed implications speak to a broad range of enterprises.
The irony is that to enable a zero trust framework, you first need a highly validated repository of identities, assets, applications, and networks upon which you can rely.
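To make that concrete, here is a minimal sketch of what such a repository might look like. Every name and field below is hypothetical, invented for illustration rather than drawn from any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    """One validated entry in the trust inventory (hypothetical schema)."""
    user_id: str
    verified_devices: set[str] = field(default_factory=set)  # known device fingerprints
    entitlements: set[str] = field(default_factory=set)      # applications this user may reach

# The authoritative inventory that zero trust decisions would rely on.
inventory: dict[str, Identity] = {
    "alice": Identity("alice", {"laptop-7f3a"}, {"payroll-app"}),
    "bob":   Identity("bob",   {"phone-91c2"},  {"crm-app", "wiki"}),
}
```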
So what exactly is zero trust?
While the drivers of conversations about cybersecurity may have changed (from a pandemic-era boom in workforce distribution to a move toward hybrid cloud infrastructure), the term “zero trust” has not. First coined in 1994, the concept was later developed into a holistic security philosophy by former Forrester analyst John Kindervag. The idea previously made the rounds throughout the industry as “deny by default” and “never trust, always verify” policies.
Simply put, zero trust is a security strategy. More broadly, it’s an enterprise-wide security mindset that treats every endpoint and account as untrusted. Whereas other security models, such as the once-preferred perimeter philosophy, may require only location-based or two-factor authentication, with zero trust, users and applications are granted access only when and where they need it.
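Continuing the inventory sketch above (same hypothetical schema), that difference is visible in code: access is decided against an explicit grant, not inferred from where the request comes from:

```python
def is_authorized(identity: Identity, application: str, device_id: str) -> bool:
    """Grant access only for an explicitly entitled application on a verified device."""
    return (application in identity.entitlements
            and device_id in identity.verified_devices)

# Alice can reach the payroll app from her known laptop, and nothing else.
assert is_authorized(inventory["alice"], "payroll-app", "laptop-7f3a")
assert not is_authorized(inventory["alice"], "crm-app", "laptop-7f3a")
```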
By denying access by default, a zero trust approach enforces dynamic, continuous verification of users and their devices. In a climate where data breaches are no longer a question of if but of when, zero trust enables enterprises to better protect data and minimize the potential impact of an attack, while also facilitating a more localized, rapid response.
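One way to picture that default-deny, continuously verified flow, again building on the sketch above, with a hypothetical posture flag standing in for a live device-health check:

```python
def handle_request(user_id: str, application: str, device_id: str,
                   device_posture_ok: bool) -> str:
    """Evaluate every request from scratch; nothing is trusted by default."""
    identity = inventory.get(user_id)
    if identity is None:
        return "deny"                      # unknown identity
    if not device_posture_ok:
        return "deny"                      # device failed its latest health check
    if not is_authorized(identity, application, device_id):
        return "deny"                      # no explicit, current grant
    return "allow"                         # reached only when every check passes

# Verification is continuous: the same user is denied the moment posture degrades.
print(handle_request("alice", "payroll-app", "laptop-7f3a", device_posture_ok=True))   # allow
print(handle_request("alice", "payroll-app", "laptop-7f3a", device_posture_ok=False))  # deny
```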