Thursday, October 19, 2017


There’s a global crisis around software code quality: Why enterprises are still making the mistake of not paying full attention to their lines of code in the data economy



Data Economy speaks to Lev Lesokhin, EVP Strategy and Analytics at CAST, following a report that examined over one billion lines of code and exposed the naked truth about code quality worldwide.

The quality of business application software is crucial at a time when an enterprise's reliance on code, and on how that code functions within its IT ecosystem to perform the right tasks at speed and with the right value, is proving to be the differentiator between winners and losers.

In a recent report, CAST found that despite businesses understanding the importance of their code, the overall quality of too many mission-critical functions across the globe is poor.

Financial services were found to be particularly susceptible to security risks; for an industry carrying large amounts of sensitive data, that leaves organisations in the sector at risk of severe regulatory fines, the CRASH Report concluded.

The report was based on over one billion lines of code across almost 2,000 enterprise applications run by 300 organisations, including banks, insurers and government departments.

To discuss why enterprises are still failing to manage their code in a way that protects their own data, Data Economy (DE) spoke to Lev Lesokhin (LL), EVP Strategy and Analytics at CAST.

 

DE: Why are enterprises still making the mistake of not paying full attention to their lines of code when their business is becoming ever more dependent on IT/code? 

LL: Paying attention to the quality of the engineering and construction of software systems falls through the cracks between developers and management in most IT organisations.

Most app dev managers believe the responsibility for software integrity is something developers should take care of themselves. At the same time, project managers are constantly pressed by the business to deliver more, and more quickly.

This leaves neither time nor incentive for development teams to pay attention to software structure and integrity with any rigour.

This problem is most pronounced in the UK, followed by the US. In continental Europe, there is a slightly higher consideration for software engineering, structure and integrity.

 

DE: How can enterprises (especially financial ones) mitigate risks associated with bad coding? 

LL: The first thing enterprise IT departments need to do is establish ownership of the software structural quality problem and assign responsibility for managing software risk. Successful organisations have a senior-level Structural Quality Officer, at the level of an Enterprise Architect.

This senior-level individual should be given the right to veto releases to production and should run a Center of Excellence (CoE) that provides system-level analysis of applications as a service to all development teams.

The biggest risks to enterprise applications come from the way modern IT applications are built across multiple components, languages, frameworks and data stores.

No individual developer can see the issues they will introduce, even in the highest-quality code they write, when those issues are a function of multi-component interfaces that are abstracted away from them.

Levels of abstraction, multi-technology applications, legacy wrappers and service buses all contribute to a level of complexity that drives increased software risk and can only be analysed at the system level.

 

DE: Can you provide an example of a catastrophic scenario that could be sparked by a bad line of code? 

LL: A common example would be an expensive statement inside a loop, in a procedure written somewhere near the front end (the user interface) of a large, complex application.

If a loop contains a call to a method outside that procedure, it will look to the developer like a rather innocuous structure.

The piece of code might have the best code quality imaginable, but the call to the external method might mask a great deal of resource consumption.

The external method may call other methods or routines, which may encapsulate calls to old COBOL structures, which then reach back into a legacy DB2 database.

This will generate a great deal of network traffic, Central Processing Unit (CPU) thrashing and MIPS consumption, slowing the system drastically, not just for the user invoking the offending interface but for everyone else as well.

Usually, that’s handled by throwing more iron at the problem or, lately, by the elasticity of the cloud, but it’s the kind of issue that limits vertical scalability and causes time-outs, outages and poor user experience.

This type of issue can only be detected when examining the interactions of components across the system.
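
To make the scenario concrete, here is a minimal, hypothetical sketch in Java. The class and method names (CustomerService, Invoice and so on) are illustrative and are not drawn from CAST's report; the first method shows the innocuous-looking call inside a loop, and the second shows the usual remedy of hoisting the expensive work into a single batch call.

    import java.util.List;
    import java.util.Map;

    // Hypothetical types standing in for the layers Lesokhin describes.
    interface CustomerService {
        // Each call may traverse a service bus, a COBOL wrapper and a DB2 query.
        Customer findById(long customerId);

        // Batch variant: one round trip for many customers.
        Map<Long, Customer> findByIds(List<Long> customerIds);
    }

    record Customer(long id, boolean settled) {}

    record Invoice(long customerId, double amount) {}

    public class InvoiceReport {

        private final CustomerService customers;

        public InvoiceReport(CustomerService customers) {
            this.customers = customers;
        }

        // Anti-pattern: the lookup looks innocuous in isolation, but it
        // fires one expensive legacy call per loop iteration.
        public double totalOutstandingNaive(List<Invoice> invoices) {
            double total = 0;
            for (Invoice invoice : invoices) {
                Customer customer = customers.findById(invoice.customerId()); // hidden cost
                if (!customer.settled()) {
                    total += invoice.amount();
                }
            }
            return total;
        }

        // Remedy: hoist the expensive work out of the loop into one batch call.
        public double totalOutstanding(List<Invoice> invoices) {
            List<Long> ids = invoices.stream().map(Invoice::customerId).toList();
            Map<Long, Customer> byId = customers.findByIds(ids);
            double total = 0;
            for (Invoice invoice : invoices) {
                if (!byId.get(invoice.customerId()).settled()) {
                    total += invoice.amount();
                }
            }
            return total;
        }
    }

In the naive version, every iteration triggers the full front-end-to-DB2 round trip described above; the batched version makes one. Nothing inside the loop looks wrong in isolation, which is why, as Lesokhin notes, the problem only surfaces when the interactions of components are examined at the system level.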