… then security will go to the data mountains




Later this year, Dutch researchers hope to complete a quantum internet link between Delft and The Hague. The parties claim that this network will be the first “unhackable internet”, making use of a quantum property of light particles known as photon entanglement. It’s pretty cool R&D. However, the technique currently only works over distances of under 1.5km, and it will be at least another decade before we can expect to see the first global network.

Certainly one to watch for the future of secure point-to-point data delivery, but not something that’s particularly helpful for enterprises dealing with the requirements of the next decade – nor will it be useful for securing anything in the cloud, because the moment you exit the quantum link you reintroduce the standard risks of hacking.

So what should security and network managers be looking to for the immediate future when considering their data access and security architectures?

For many organisations, security architecture hasn’t changed all that much in recent years. Networks have got faster and cleverer, and applications are being delivered in wholly new ways, but security continues to take place primarily on premises, in a little physical box. This is fast proving unworkable and unscalable, forcing network and security engineers into endless compromises.

You’ve probably heard the old Turkish saying: “If the mountain won’t come to Mohammed, then Mohammed will go to the mountain.” This is the decade in which we will finally stop expecting our ever-growing data to “check in” to a security hub, and instead expect security to follow the data. The best news is that this approach is not only more logical, it also makes things much easier.

Why must security travel to the cloudy data mountain?

The average organisation uses 2,415 cloud services and apps, and in 2019, 44% of all cyber threats were cloud-enabled. We are quite used to viewing unsanctioned cloud apps as a security threat, but in reality, the apps with the most threats tracked against them are routinely sanctioned for enterprise use: Office 365 OneDrive, Box and Google Drive. These are popular with malicious actors precisely because they are sanctioned by enterprises and often whitelisted, enabled with lax workarounds of traditional security, or sometimes left entirely open. It is simply too easy to evade legacy defences.

Cloud adoption also brings boundary crossings that legacy defences miss, due to either a lack of visibility or coarse-grained allow/block controls with no understanding of context. Data can flow between company and personal instances of cloud apps, between managed and unmanaged cloud apps, and between approved low-risk and unapproved high-risk cloud apps.
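To see why context matters, consider a minimal sketch of the difference between a coarse allow/block rule and an instance-aware control. Everything here – the field names, the policy table, the actions – is an illustrative assumption, not any vendor’s real API:

```python
# Hypothetical sketch: a context-aware policy vs a coarse allow/block rule.
# A legacy control only sees that the app's domain is whitelisted; a
# context-aware control also sees WHICH instance the data is moving to.
from dataclasses import dataclass

@dataclass
class CloudActivity:
    app: str        # e.g. "OneDrive"
    instance: str   # "company" or "personal" instance of the app
    managed: bool   # is the app sanctioned for enterprise use?
    action: str     # e.g. "upload", "download"

def coarse_policy(activity: CloudActivity) -> str:
    # Legacy rule: the sanctioned app's domain is whitelisted, so
    # everything on it passes - including data leaving the company.
    return "allow" if activity.managed else "block"

def context_policy(activity: CloudActivity) -> str:
    # Context-aware rule: block data crossing from the company
    # instance into a personal instance of the same sanctioned app.
    if not activity.managed:
        return "block"
    if activity.instance == "personal" and activity.action == "upload":
        return "block"
    return "allow"

# A user uploading a file to their *personal* OneDrive instance:
leak = CloudActivity(app="OneDrive", instance="personal",
                     managed=True, action="upload")
print(coarse_policy(leak))   # the legacy control waves it through
print(context_policy(leak))  # the instance boundary is caught
```

The point of the sketch is that both controls look at the same traffic; only the second one has enough context to tell a company instance from a personal one.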

More than half of all enterprise web traffic sessions are now cloud-based, requiring inline API JSON decoding to secure the data effectively. There is simply no way a security architecture can police this new world unless it is designed to operate inline. To use a more modern saying: “you have to be in it to win it”.
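What “inline API JSON decoding” means in practice can be sketched in a few lines: an inline proxy decodes the JSON body of a cloud app’s API call mid-session and judges the activity itself, where a URL filter would only ever see the app’s (whitelisted) domain. The request shape below is invented purely for illustration:

```python
# Minimal sketch of inline inspection of a cloud app API call.
# The JSON structure is a made-up example, not any real app's API.
import json

raw_body = ('{"action": "file.upload",'
            ' "target": {"drive": "personal"},'
            ' "name": "payroll.xlsx"}')

def inspect(body: str) -> str:
    # An inline proxy can decode the request body in transit and
    # recover the actual activity: what is being done, and to where.
    call = json.loads(body)
    if call["action"] == "file.upload" and call["target"]["drive"] == "personal":
        return "block"
    return "allow"

print(inspect(raw_body))  # blocked - the upload targets a personal drive
```

Out of band, none of this detail is visible; that is the architectural argument for operating inline.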

Watching like a hawk…

Let’s take a look at Zero Trust Network Access (ZTNA). The principle of Zero Trust operates on the basis that trust should never be implicit, and that access is granted on a need-to-know or “least privilege” basis. The concept was first introduced in 2010 and, alongside the growth of cloud, its popularity has recently surged. A recent survey by Cybersecurity Insiders (CSI) found that 78% of organisations plan to adopt cloud-based ZTNA over the next 18 months.

Zero Trust is the principle of not trusting anything, either within or without the organisation’s perimeter, without first verifying both user and device. Here we again find a principle that cannot be enacted without a change to security architecture. Organisations need security to be present and active inline in order for Zero Trust Network Access to become a reality.
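The Zero Trust decision itself is simple to state, and a hedged sketch makes the shape of it clear: verify the user and the device, then grant access only to the single application requested, never the whole network. The checks below are stubbed as booleans – in a real deployment they would be backed by identity-provider and device-posture signals:

```python
# Illustrative sketch of a Zero Trust access decision (not a real
# product's logic): no implicit trust, user AND device verified,
# and access scoped to one named application (least privilege).
def ztna_decision(user_verified: bool, device_compliant: bool,
                  requested_app: str, entitled_apps: set) -> str:
    # Never trust implicitly: both the user and the device must check out.
    if not (user_verified and device_compliant):
        return "deny"
    # Least privilege: grant access to the one app the user is
    # entitled to and asked for - not to the network behind it.
    if requested_app in entitled_apps:
        return "allow:" + requested_app
    return "deny"

print(ztna_decision(True, True, "payroll", {"payroll", "crm"}))
print(ztna_decision(True, False, "payroll", {"payroll", "crm"}))  # bad device posture
```

Note the contrast with a VPN: a successful check here opens a path to one application, where a VPN login typically opens a path to a network segment.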

Remote users

The ‘20s have kicked off with an event that will likely define the decade in many ways. Covid-19 has already disrupted life as we know it for the vast majority of the world’s population. For organisations, one of the early effects of the various lockdowns imposed by governments around the world was the overnight expansion of the ranks of remote workers – something which has illuminated just how ill-suited remote access security architectures (usually VPNs) are to the age of cloud.

In the months leading up to the pandemic, 39% of the CSI survey respondents were already reporting that they were unable to deploy their preferred remote-access VPN appliance in public cloud environments. Because of this, the most common workaround mentioned by survey participants (47%) was “hairpinning” remote users through data centres to access public clouds. This has a serious impact on user experience, but perhaps more alarmingly, 31% of respondents also said that they go so far as to publicly expose cloud apps in order to enable remote user access.

ZTNA becomes increasingly logical for organisations providing remote access to either public or private cloud. Almost half (45%) of respondents to the CSI survey said that ensuring remote access to private applications hosted in public cloud (such as AWS, Azure or GCP) was a security priority. When delivered in the cloud using a high-capacity global network infrastructure, ZTNA can also enable remote access that scales to meet the needs of any dramatic increase in remote working requirements, without slowing access times or routing data backwards and forwards unnecessarily.

We are just five months into the current decade and our world is already significantly different to the one we knew last year. Some forces have been gradual (the adoption of cloud) and some have taken us by surprise, but they all point strongly to the need for a significant shake-up in the way we design access and security architectures. The age of hauling data through the security appliance is over. Moving forward, security must go to the data.
