Cybersecurity catastrophes cured by AI




Enterprises handling data without clearly defined IT policies run the risk of exposing themselves to security gaps that can be exploited. Abigail Opiah goes undercover to dig deeper into AI as a weapon to eliminate cyber threats.

The latest edition of the Data Economy Magazine hits the shelves and is shipped across the globe.

When it comes to predicting the way in which Artificial Intelligence will have an impact on the world in its developed form, there are many mixed opinions. As we watch the world scramble to make sense of all things AI-related, one prominent question that weighs heavily in the tech sector is how AI will enhance cybersecurity for data centres and cloud services.

Specific, I know – but if you look at the severity of cyber-attacks, and add in the rising number of attacks over the last few years, it’s an area that warrants highlighting as enterprises continue the ongoing fight to keep sensitive data out of the wrong hands.

Sneak attacks

Significant challenges manifest around maintaining visibility and effective controls, which can impact an organisation’s security posture. Matt Walmsley, EMEA Director at Vectra, revealed that many organisations remain blind to active attacks that gain access to their networks and cloud instances.

“Cloud services, with all their many benefits, also come with unique security risks to be managed, such as attacks directly aimed at Cloud PaaS using stolen credentials, which would remain invisible to workload and cloud instance-centric security controls,” he said.

“Pervasive visibility across the enterprise, agnostic of environment type, is fundamental to security success. AI is now being used to combat cybersecurity adversaries by analysing digital communications in real time, and spotting the hidden signals to identify nefarious behaviour whether they’re in the cloud or operating in your local infrastructure.”

Using AI to identify a developing attack in its early stages, and so eliminate security blind spots, is a view most share. However, it is easier said than done.

Ofer Wolf, COO at Guardicore, a company whose cloud workload protection prevents the spread of breaches inside data centre and cloud environments, proposes that typical data centre and cloud security is full of holes.

“With hackers already in there, data centre chiefs need to focus more on modern ‘micro-segmentation’ techniques that compartmentalise data centres and block hackers from making lateral moves,” he revealed.

“AI is all about automation and taking manual labour out of the equation. It will essentially speed up the time it takes to create security measures to protect data centres.” He agrees that having as much visibility as possible is the first step to dealing with cybersecurity.
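The micro-segmentation idea Wolf describes can be sketched in a few lines. The segment names, ports and rules below are purely illustrative assumptions, not any vendor’s actual policy format; the point is the default-deny principle that blocks lateral movement between workload segments.

```python
# Hypothetical micro-segmentation policy: traffic between workload
# segments is denied unless an explicit rule allows it.
# Segments, ports and rules here are illustrative, not a real product's API.

ALLOWED_FLOWS = {
    ("web", "app", 8080),   # web tier may call the app tier
    ("app", "db", 5432),    # app tier may query the database
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny: a flow passes only if it matches an explicit rule."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

# A compromised web server trying to reach the database directly
# (lateral movement) is blocked, while sanctioned tiered traffic passes.
print(is_allowed("web", "app", 8080))  # True  - sanctioned flow
print(is_allowed("web", "db", 5432))   # False - lateral movement denied
```

Even this toy version shows why compartmentalisation matters: an attacker who lands in one segment inherits only that segment’s narrow set of permitted flows.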

AI’s possibilities

Last year, the McKinsey Global Institute estimated that AI techniques have the potential to create between $3.5tn and $5.8tn in value annually across nine business functions in 19 industries. The AI market is also forecast to grow from $21.46bn in 2018 to $190bn by 2025 – an almost ninefold increase over the seven-year period. With AI capabilities becoming more powerful and increasingly widespread, John Titmus, Director of EMEA Sales Engineering at CrowdStrike, proposes that businesses ought to make sure they have the appropriate frameworks in place to identify and prevent malicious attacks, and quell the potential for AI misuse.

“Data centres are like candy for adversaries due to the sheer amount of information they hold, making it all the more important to have the right defences and threat detection methods in place to effectively mitigate and remediate an attack within the ‘desired breach’ time,” he added.

“It is going to be an ongoing race between the adversaries and AI as a defensive approach. On one side, AI and the speed and power of the cloud are crucial technologies that offer detection against ransomware variants. On the other side, AI is going to be beneficial for criminals innovating their attack tools, using collected data as a weapon against an organisation.

“AI-based defence is not a panacea, especially when we look beyond traditional data centre defences. In the end however, AI is going to be more beneficial to the defensive side, as where AI shines is in mass data collection, applying more to defence than offence.”

Hackers on board the AI train

The soaring growth of AI poses both positives and negatives for data centre and cloud security. Tom Ilube, CEO of Crossword Cybersecurity, suggests that we need to consider how the attack landscape is changing. “The new cybersecurity adversaries are not just vandals trying to deface your website or petty criminals stealing credit card numbers,” he added.

“Soon they are going to employ a new generation of AIs to support their attacks, based on advanced science like Deep Reinforcement Learning, Attention and Self-Supervised Learning.

“This new generation will bypass any existing traditional protections, and only defences based on similar technologies will have a fighting chance against them.

“Deep Reinforcement Learning uses large artificial neural networks to solve problems of long-term planning of actions to reach a goal in a changing environment, often when some adversary is present. AI can also be used for a variety of malicious tasks, from writing convincing phishing emails, large-scale propaganda and fraud, to perhaps even planning attacks.”

The AI shift

Companies are already changing things on the cybersecurity front by leveraging AI to automate the protection of their data.

“Infrastructure in the data centre is starting to play less of a role in the overall security stack,” said Vlad Nisic, VP of channel sales EMEA at private security company Wallarm.

“However, if the data centres are external to the organisation’s perimeter, physical security and infrastructure management affect overall security practices and security management thinking. Wallarm’s AI engine relies on multiple unique learning principles, including hierarchical clusterisation, enabling companies to effectively discover and fix critical vulnerabilities, and prioritise security risks they may not have otherwise found.”
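The “hierarchical clusterisation” Nisic mentions can be illustrated with a textbook agglomerative clustering routine: repeatedly merge the groups of similar events until no two groups are close enough to combine. This is a generic sketch under assumed feature vectors (e.g. payload length and parameter count per request), not Wallarm’s actual engine.

```python
# Illustrative sketch: single-linkage agglomerative (hierarchical)
# clustering of request feature vectors. Generic textbook algorithm,
# not any vendor's implementation.
import math

def single_linkage_clusters(points, max_dist):
    """Merge clusters until no two clusters lie within max_dist."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # single linkage: distance between the closest pair of members
        return min(math.dist(p, q) for p in a for q in b)

    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if dist(clusters[i], clusters[j]) <= max_dist:
                    clusters[i].extend(clusters.pop(j))
                    merged = True
                    break
            if merged:
                break
    return clusters

# Two tight groups of "requests" plus one far-off point worth review.
reqs = [(1, 1), (1, 2), (2, 1), (10, 10), (10, 11), (50, 50)]
groups = single_linkage_clusters(reqs, max_dist=2.0)
print(len(groups))  # 3: two normal groups and a lone anomaly
```

Grouping traffic this way lets an analyst triage one representative per cluster rather than every raw event, which is the practical payoff being claimed.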

For an effective security operations strategy, all roads point towards artificial intelligence. James Spiteri, Cyber Security Specialist EMEA at Elastic, explained that AI and ML allow an organisation to develop statistical models to determine whether an action is potentially malicious, without having to rely on signatures and static rulesets.

“With the right amount of data and malware samples, it is possible to model malware activity. This is where AI and ML come into play. If an organisation is collecting many different data points and correlating them with the actions of malware samples, an effective model can be built to determine outlier activity,” he said.

“There are many projects openly available providing modelling techniques, and the community just keeps on giving.”
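The signature-free, statistical approach Spiteri describes can be sketched with a simple outlier test: model “normal” activity from observed data points, then flag anything that deviates strongly from it. The feature (bytes per session) and the threshold below are illustrative assumptions, not a production detection model.

```python
# Minimal sketch of statistical outlier detection: no signatures or
# static rules, just a model of "normal" built from the data itself.
# Feature choice and threshold are illustrative assumptions.
from statistics import mean, stdev

def flag_outliers(values, threshold=2.0):
    """Return values lying more than `threshold` standard deviations
    from the sample mean - candidates for malicious activity."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# e.g. bytes transferred per session: mostly routine, one huge transfer
sessions = [120, 135, 128, 122, 131, 119, 127, 9_500]
print(flag_outliers(sessions))  # [9500]
```

Real systems build far richer models over many correlated features, but the principle is the same: the baseline is learned from the data, so novel malware with no known signature can still surface as an outlier.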