Ageing technology could put the UK government’s digital strategy at risk
By Guest Contributor Published: 19:20, 12 March, 2017 Updated: 19:22, 12 March, 2017
by Darren Watkins, MD of VIRTUS Data Centres
With the release of the government’s delayed digital strategy and its pledge to make the UK the “best place to start and grow a digital business”, digital transformation remains a key priority, however overused the term may seem.
The challenge of digital is that it is not a finite state and can never be ticked off as complete. It is the ever-evolving transformation of business activities, processes, competencies and models. Today, cloud, mobile, analytics and social media are having the greatest impact, but they will almost certainly be superseded by new technologies on the horizon.
Research conducted by Fujitsu found that ageing technology is the biggest barrier to digital transformation, with 57 per cent of the businesses surveyed admitting that their technology is struggling to keep up with the demands of digitisation.
So how on earth are organisations expected to keep pace, let alone stay ahead of the technology needed to cope with constant change and the ever-increasing speed of doing business?
The conundrum brings about the classic build vs. buy question that raises its head whenever it is time to provision additional capacity, IT infrastructure and operations. Does it make more sense to build a new data centre, or to buy (lease space, or outsource IT needs to a colocation provider)?
The most sensible and cost-effective solution is to outsource what you can. Coming back to cloud, mobile, analytics and social media: as companies grow and scale, it is a foregone conclusion that they will need more infrastructure capacity.
Businesses going through digital transformation across a variety of industries—from healthcare, to financial services, to retail, and everything in between—are all amassing large quantities of data and evolving capabilities that need rich computing power, robust infrastructure and resilient connectivity.
As the IT industry is one of the most rapidly evolving sectors in the world, the data centre colocation market must keep pace, since it provides the foundation for all things digital. Data centre providers commit huge resources to R&D to ensure their facilities are built to the highest levels of efficiency.
This is an advantage for organisations that buy outsourced colocation space: they can be assured not only that the space and power they purchase today will remain technologically efficient and future-proofed for several years to come, but also that their interests are being looked after by experienced, certified professionals. This removes the headache and expense of regular infrastructure upgrades.
Why Rising Outages Mean Days Without the Internet Could Soon be Reality
By Guest Contributor Published: 13:00, 15 March, 2017 Updated: 21:10, 14 March, 2017
by Paul Gampe, Chief Technology Officer, Console Connect
Amazon’s AWS S3 outage is further evidence that days without the internet could soon be a reality for enterprises.
When AWS went down earlier this month, so did a host of other popular apps and sites including Quora, Business Insider, Giphy and team communications service Slack. As a knock-on effect, connected lightbulbs, thermostats and other IoT hardware were also affected. The AWS outage is proof that even best-in-class solutions can suffer downtime.
To say that a single outage from a single cloud provider is symptomatic of the internet being out of action for days at a time may seem alarmist. But consider this: Amazon S3 is used by around 148,213 websites and 121,761 unique domains, according to data tracked by SimilarTech.
That’s a huge knock-on effect when you consider that all of these websites rely on the availability of Amazon’s systems to deliver their back-end capabilities. What’s even more alarming is that the AWS outage wasn’t even down to security, but to human error. But whatever the root cause, it’s the loss of service that matters.
And it’s indicative of where we’re heading, particularly from a security perspective.
Most of us experience internet outages on an individual – and generally irregular – level. Perhaps we’re unable to check in with friends on social media, stream a film or look up a recipe for a meal we’re about to cook. The inconvenience is frustrating, but the impact is limited.
2016 may not have been when many people first became aware of the vulnerabilities in the underlying routing architecture of the public internet, but it was a watershed year in which those vulnerabilities became prominent and repeated targets.
This yielded a wider impact on communities and businesses from large-scale internet outages caused by deliberate, malicious cyber-attacks. As industries, services and governments have grown more reliant on the public internet, malicious actors have grown more daring in their disruption. DDoS attacks have increased not only in frequency but in size, and experts predict that 2017 will see the rise of the feared terabit-scale DDoS attack.
We are moving from an era where the internet’s underlying routing vulnerabilities were often accidentally triggered to one where they are now being actively exploited. And, as these attacks escalate in both number and severity, the damage they wreak will increase proportionately. Personal inconveniences will pale in comparison to the long-term impacts of widespread internet outages on enterprises and organisations. Just consider the possible outcomes of the following scenarios:
- Financial markets in Europe and North America going dark
- Cloud-connected utilities systems going off the grid
- Hospitals losing connections to cloud-based patient data and reference materials
These scenarios, even if they last for just a few hours, will have dramatic and costly impacts at levels we are only now beginning to understand and appreciate – on businesses, on the economy and on people’s lives.
The goal for 2017 must be to find an appropriate alternative to the public internet, so that organisations are not reliant on a single source for their business to run smoothly and securely.
We can no longer rely on ‘hope and prayer’ as our connectivity and security strategy. This isn’t just imperative for businesses. Our personal and professional lives are already intertwined through technology, and as time goes on these bonds will only become tighter. Just as an enterprise that moves mission-critical apps outside its internal network opens itself to the failings of the public internet, so individuals are becoming exposed to the same vulnerabilities. In other words, individuals have just as much skin in this game as companies do.
We need to find a practical alternative to the public internet. Thanks to software-defined interconnection technology, setting up direct connections to critical cloud services, vendors and partners is no longer an expensive, painstaking process. A network of private, secure connections that bypass the public internet and protect against the impacts of an outage is now just a few clicks away.
That means anyone who relies on a network connection to operate now has a viable alternative to the public internet. But, it also means that they no longer have an excuse to delay the critical steps required to protect against the inevitable outages that the internet will see from now on.
How businesses are using APIs in their IT infrastructure to drive innovation
By Guest Contributor Published: 06:00, 9 March, 2017 Updated: 00:00, 14 March, 2017
by David Grimes, VP of Product Engineering at Navisite
Companies are often seeking ways to reduce costs and increase efficiency, while simultaneously maintaining excellent quality in their products and services. IT departments and service providers are increasingly looking to use APIs (Application Programming Interfaces – sets of routines, protocols, and tools for building software applications) to enable automation and therefore increase consistency and efficiency while significantly reducing costs. How are APIs being used by IT departments and what are the opportunities for further development?
Enhancing operational efficiency
One important outcome of the automation enabled by APIs is consistency. Through automation, businesses remove human error (and human expense) from operational processes. Even when a repeatable task is well documented with a clear procedure, outcomes will vary when humans perform it. If that task is automated, on the other hand, it will be performed the same way every time, improving operational reliability and, in turn, operational efficiency. API-enabled platforms are driving a true rethink of how we manage IT; we are moving quickly from a process-driven, reactive world to an automation-driven, proactive world.
Automating DevOps processes
APIs allow for more dynamic systems that can scale up and down to deliver just the right amount of infrastructure to the application at all times. For example, instrumentation in your application that provides visibility to an orchestration layer can tell when more capacity is required in the web or app tier. The orchestration layer can then call the APIs provided by the infrastructure and begin spinning up new web servers and adding them to the load balancer pool to increase capacity. Likewise, systems built on APIs have the instrumentation to tell when they are overbuilt (overnight, for example) and can use the same APIs to wind down unnecessary servers and reduce costs.
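As a rough sketch of what that orchestration step might look like in practice, the Python below scales a web tier up and down through a provider’s REST API. The endpoint, token, pool and payload names are hypothetical placeholders, not any particular provider’s interface.

```python
import requests

# Hypothetical infrastructure API -- the endpoint, token and payloads are
# placeholders for illustration, not a real provider's interface.
INFRA_API = "https://api.example-provider.com/v1"
AUTH = {"Authorization": "Bearer <api-token>"}

def scale_up_web_tier(pool_id: str, count: int) -> None:
    """Spin up new web servers and add them to the load balancer pool."""
    for _ in range(count):
        server = requests.post(
            f"{INFRA_API}/servers",
            json={"image": "web-tier", "size": "medium"},
            headers=AUTH,
        ).json()
        # Register the new server with the load balancer pool
        requests.post(
            f"{INFRA_API}/pools/{pool_id}/members",
            json={"server_id": server["id"]},
            headers=AUTH,
        )

def scale_down_web_tier(pool_id: str, server_ids: list) -> None:
    """Drain and delete servers that monitoring reports as surplus."""
    for sid in server_ids:
        requests.delete(f"{INFRA_API}/pools/{pool_id}/members/{sid}", headers=AUTH)
        requests.delete(f"{INFRA_API}/servers/{sid}", headers=AUTH)
```

An orchestration layer would call `scale_up_web_tier` when application instrumentation reports load, and `scale_down_web_tier` when servers sit idle.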
Indeed, by scripting the powering-on of development and testing environments at the start of the business day and the powering-off at the end of it, businesses can realise huge savings on their hosting costs: up to 50-60 per cent in some cases.
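A minimal sketch of that scheduling approach, assuming a hypothetical hosting API and invented server names; in practice cron (or an equivalent scheduler) would invoke the script at the start and end of the business day.

```python
#!/usr/bin/env python3
"""Power dev/test environments on or off through a hosting provider's API.

Invoked from cron, e.g.:
    0 8  * * 1-5  power_envs.py on    # start of the business day
    0 19 * * 1-5  power_envs.py off   # end of the business day
"""
import sys
import requests

API = "https://api.example-host.com/v1"        # hypothetical endpoint
AUTH = {"Authorization": "Bearer <api-token>"}
DEV_TEST_SERVERS = ["dev-web-01", "dev-db-01", "test-web-01"]  # invented names

def set_power(state: str) -> None:
    for server in DEV_TEST_SERVERS:
        # One API call per environment server; failures could be retried or alerted on
        requests.post(f"{API}/servers/{server}/power",
                      json={"state": state}, headers=AUTH)

if __name__ == "__main__":
    set_power("on" if sys.argv[1:2] == ["on"] else "off")
```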
Overall, leveraging APIs in support of a DevOps strategy is always a blend of optimising for cost, for performance and for deep app-level visibility.
Automating reporting using APIs
APIs are also highly useful in reporting, as many applications now produce vast amounts of data that often sit as an untapped asset. IT teams therefore need to think about how to make those datasets available efficiently, in order to build a dynamic reporting engine that can be configured by the end user: the person who best understands what information needs to be extracted from the data.
This is frequently accomplished through APIs. IT teams and application services providers can use APIs to build systems that process the data and make it accessible to end users immediately, so that they do not have to go through a reporting team and do not lose any of the real-time value of their data.
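For illustration, here is a minimal reporting endpoint of that kind, built with Flask; the dataset and the region filter are invented stand-ins for real processed application data.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative in-memory dataset standing in for processed application data
SALES = [
    {"region": "UK", "month": "2017-01", "total": 125000},
    {"region": "UK", "month": "2017-02", "total": 131000},
    {"region": "DE", "month": "2017-01", "total": 98000},
]

@app.route("/api/reports/sales")
def sales_report():
    """Return sales figures, filtered by whatever region the end user asks for."""
    region = request.args.get("region")
    rows = [r for r in SALES if region is None or r["region"] == region]
    return jsonify(rows)

if __name__ == "__main__":
    app.run(port=5000)
```

An end user (or a dashboard) can then pull a filtered report directly, for example `curl "http://localhost:5000/api/reports/sales?region=UK"`, without going through a reporting team.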
Using APIs to enable business continuity and disaster recovery
The benefits of automation through APIs make them a crucial part of modern disaster recovery (DR). The assumption that you will be able to reach all the tools you need during a disaster through the usual user interfaces does not always hold. In the modern world of highly virtualised infrastructure, APIs are the enabler for the core building blocks of disaster recovery, in particular replication, which is driven from the APIs exposed by the virtualisation platforms. For the same reasons, the final act of DR orchestration, failover, is also often highly API-dependent.
In essence, disaster recovery is one specific use case of the way that APIs enable efficiency and operations automation. Humans make mistakes and processes can become very difficult to maintain and update. Therefore a DR plan based on humans executing processes is not an ideal option to ensure the safety of your business in the event of a disaster. Kicking off DR can be likened to “pressing the big red button”. However, if you can make it one button that kick starts a set of automated processes, this will be much more manageable and reliable than thirteen different buttons, each of which has a thirty-page policy and procedure document that must be executed during a disaster.
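A minimal sketch of that “one button” idea: a single entry point that runs each failover step in order. The step functions are hypothetical placeholders for the virtualisation, storage and DNS provider API calls a real runbook would make.

```python
# Hypothetical placeholder steps; in a real plan each would wrap calls to
# virtualisation, storage and DNS provider APIs.

def verify_replica_is_current():
    print("  checking replication lag before failing over")

def promote_replica_site():
    print("  bringing the secondary site live")

def repoint_dns_to_replica():
    print("  directing traffic to the promoted site")

def notify_stakeholders():
    print("  telling people what just happened")

DR_STEPS = [
    verify_replica_is_current,
    promote_replica_site,
    repoint_dns_to_replica,
    notify_stakeholders,
]

def press_the_big_red_button() -> None:
    """The single button: run every failover step, in order."""
    for step in DR_STEPS:
        print(f"Running DR step: {step.__name__}")
        step()

if __name__ == "__main__":
    press_the_big_red_button()
```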
The future role of APIs
Despite the clear benefits of API-enabled automation and technology, the broader IT industry has not yet fully realised the potential of this technology, particularly in industries that have been leveraging information technology for a long time. In these industries, we are seeing a critical mass of legacy applications, legacy approaches to managing infrastructure, and legacy staff skillsets.
It is likely that the younger generation coming into the IT industry will move towards more comprehensive API use and maximise the value of APIs, because this generation has grown up with them and been trained in their use.
As we see disruptors displace incumbent packaged software players and new entrants to the enterprise IT community, we are likely to see more realisation of the benefits of API use – particularly when these organisations utilise their cloud infrastructures fully.
However, this will take time, and we may be one to two full education cycles away from producing and maturing enough entry level IT professionals that have the education and training required to fully make use of the opportunities offered by APIs, particularly cloud ones.
When used as part of cloud computing solutions, APIs can reduce the cost of developing ideas, as innovative new businesses no longer need to invest in equipment up front to get started. Instead, using cloud infrastructure-as-a-service platforms, start-ups and entrepreneurs can launch on a pay-as-you-go model and keep costs low by using APIs to power systems up and down as needed.
Organisations can then scale quickly on the same cloud infrastructure as their product or service grows. To take full advantage of APIs, they should be integrated into the design and development of cloud solutions from the outset, rather than bolted on later at additional cost.
We are likely to see an increasing number of creative uses of APIs to drive efficiency, automation and consistency, as more innovative start-ups enter the technology industry. To gain competitive edge and stay ahead of the market, businesses must make full use of new API-enabled software in order to fully realise the benefits and cost savings that they can offer.
The “zombie” bill we all thought was safely dead is alive – key things you need to know about the Snooper’s Charter
By Guest Contributor Published: 10:14, 6 March, 2017 Updated: 10:14, 6 March, 2017
by Ben Rafferty, Global Solutions Director, Semafone
You could be forgiven if the introduction of the Investigatory Powers Act, or the “Snooper’s Charter”, managed to slip by you under the cover of Trump’s election as US President and the UK’s Brexit vote.
However, considering the magnitude of its impending impact on the world of data security, particularly for communications providers, I don’t think it will stay under the radar for long.
If you, like most businesses, are storing your customers’ data, here are some key points that you should bear in mind.
What are the motivations behind the law?
The government has passed the IP Act with the intention of using it as a tool to fight terrorism after an increasing number of terror campaigns hit Europe over the last few years.
However, several industry experts have voiced their fears that in the act’s attempt to curb terrorism, it will seriously infringe upon people’s privacy. Hence the nickname: ‘Snooper’s Charter’.
How is the government addressing these privacy concerns?
One feature of the IP Act is the ‘double lock’, which aims to tackle the criticism from security experts by regulating the ease with which governmental employees can apply for a warrant to access people’s data.
The double lock means that ministerial authorisation is required before anyone can request a warrant. A panel of judges with the power of veto then assesses the warrant, and the whole process is overseen by the Investigatory Powers Commissioner, a role filled by a senior judge.
What are the implications for communications service providers (CSPs)?
All communications service providers, including telecommunications providers, will need to store every customer’s complete web records for the previous 12 months.
With a warrant, the police and 50 other public bodies, including the Food Standards Agency and the Competition and Markets Authority, can request this information. With the types of data that can be requested ranging from NHS records to internet history, companies are going to have to store huge amounts of their customers’ data.
Once a warrant has been granted, all encrypted data, including information sent via apps such as WhatsApp and iMessage, must be decrypted by the CSP and sent to the relevant government body.
Additionally, these public bodies have been given the right to apply for a warrant to hack into computers, mobile devices and networks without alerting the owner.
Additional pressure to keep data safe
Considering the numerous breaches that global companies have experienced over the last few years, combined with the new onus on businesses to retain more data under the IP Act, big investment in IT security will be needed to ensure that customers’ data is kept safe.
As a result, companies that fall under the umbrella of the new law will face a substantial economic burden. And that is before they even begin to consider the financial impact of complying with the EU GDPR when it comes into effect in 2018.
The EU GDPR throws up another challenge in the form of conflicting legislation. While the IP Act requires companies to store complete records of web data for each customer, one of the key functions of the GDPR includes allowing consumers to have the ‘right to be forgotten’.
Clearly, there are some competing priorities. Yet this only begins to scratch the surface of how these two laws are likely to butt heads.
In an ideal world, communications service providers would not have to store personal information and data at all. The IP Act, however, forces them to hold enormous amounts of personal data, which sits uneasily with recent warnings in the media.
For example, Labour MP Meg Hillier, Chair of the Commons Public Accounts Committee, recently stated that handling of personal data breaches by the government has been “chaotic”, which undermines confidence in the government’s ability to protect the UK from cyber-attacks.
With this in mind, the new data ‘honeypots’ that the IP Act demands of each ISP look like hugely desirable jackpots for hackers. With the act now enshrined in law, it is paramount that companies take data security seriously.
Ultimately, customers’ sensitive information should be kept under lock and key behind several layers of security.
A rock and a hard place
To truly lock down data securely, companies need to take advantage of the latest technology, such as tokenisation, truncation, or “salting and hashing” the data (combining it with random values before applying a one-way hash). They must also ensure robust processes are in place for non-repudiation and the principle of least privilege, so that the few who are granted access do so in a fully auditable way.
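As an illustration of the “salting and hashing” technique mentioned above, here is a minimal sketch using only Python’s standard library. Note that a salted hash is one-way: it suits verifying data rather than storing data that must later be produced in the clear.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # deliberately slow the hash down to resist brute force

def salt_and_hash(value: str) -> tuple:
    """Combine the value with a random salt and apply a one-way hash (PBKDF2)."""
    salt = os.urandom(16)  # fresh random salt per record defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, ITERATIONS)
    return salt, digest

def verify(value: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

# Usage: store only the salt and digest, never the raw value
salt, digest = salt_and_hash("4929 1234 5678 9012")
assert verify("4929 1234 5678 9012", salt, digest)
assert not verify("4929 0000 0000 0000", salt, digest)
```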
While this is certainly best practice, what makes the security process all the more complicated is that companies must also ensure they can still access and decrypt the data quickly in the event that a governmental body requests it.
As the world of data security continues to grow in complexity, laws and regulations around information protection and storage are not going away. So, whether you are a communications service provider, a public body or just a business handling customer data, you need to take heed.
Keeping up to date with the ever-growing list of regulations and taking action to make sure you are compliant with them is the only way to avoid the damaging economic and legal consequences.
The digital skills gap isn’t going away: Here’s what big business can do to fix it
By Guest Contributor Published: 04:14, 23 February, 2017 Updated: 04:14, 23 February, 2017
by George Smyth, Director R&D, Rocket Software
Given the rate of technological development and the fact that software interacts with so many different parts of our lives, it would seem like it’s a good time to be a young programmer. Computing makes up the backbone of most companies’ business functions and processes, and the skills gap is consistently acknowledged within the technology industry. Based on this, you could forgive most people for thinking that getting a job in this sector after graduation would be relatively easy.
However, despite the huge demand for computer science skills, 11.7% of graduates remain unemployed six months after completing their degree, which is much higher than other STEM subject degrees such as maths, biological sciences and engineering. Thanks to these concerning figures, in 2016 the UK government commissioned the independent Shadbolt Review to investigate the reasons behind the skills gap and put forward recommendations for how to best address the issue. So, looking at the review and its main findings, where do the challenges lie, and how can we as an industry begin to take positive steps towards solving the skills gap?
Where does the problem stem from?
The Shadbolt Review found that employers often disagreed on which technical skills computer science students should be taught. For example, some smaller companies, such as start-ups, think graduates should have computing skills that reflect the most up-to-date technology, including the surge of new programming languages that support technologies such as the cloud.
On the other hand, the review found that big tech companies were more likely to support Higher Education providers that taught the fundamental principles of traditional computer science, including languages like COBOL.
The demand for COBOL stems from the fact that it is the traditional computing language developed alongside the mainframe. The average person ‘touches’ a mainframe every day: whenever we check our bank accounts online, book a train ticket or request a quote from an energy provider, we are transacting with a mainframe. It is therefore understandable that companies want to employ graduates who can demonstrate COBOL skills.
For example, when looking to hire a young programmer, large enterprises such as IBM are hoping to find graduates who have first and foremost been taught the fundamental principles of computer science. They are then encouraged by the company to learn and adapt to new technologies over the course of their careers.
However, these varying attitudes among employers have resulted in computer science graduates leaving university with widely differing computing skills, and in turn a wide and varied skills gap.
Ported tools are part of the solution
A good place to start in rectifying this issue is to adapt current technologies so that COBOL is no longer the only requirement when working on mainframes. This is a great way to bridge the gap between opposing employers’ needs.
“Ported tools” act as translators, allowing languages such as Python, PHP or Java, along with tools such as Git and Bash, to programme a z/OS machine that might once have recognised only COBOL. This way, even graduates who haven’t been taught COBOL to the depth some IT companies require are still able to work on mainframes.
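To give a flavour of what this unlocks, here is a toy batch job of the sort a graduate without deep COBOL knowledge could write in Python once ported tools are in place; the fixed-width record layout and figures are invented purely for illustration.

```python
#!/usr/bin/env python3
"""A toy batch job in Python instead of COBOL: read fixed-width account
records, total the balances, print a report. Layout and data are invented."""

SAMPLE = (
    "ACCT000001    125000\n"   # columns 1-10: account id; 11-20: balance (pence)
    "ACCT000002     98000\n"
)

def parse_record(line: str) -> tuple:
    # Fixed-width parsing -- the bread and butter of mainframe batch work
    return line[0:10].strip(), int(line[10:20])

total = 0
for line in SAMPLE.splitlines():
    account_id, balance = parse_record(line)
    total += balance

print(f"Total balance across accounts: £{total / 100:.2f}")
```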
Collaboration is crucial
Developing a clearer view of the skills that companies are looking for is crucial. This can only be done if employers collaborate with each other and agree on a basic course design with Higher Education providers.
What’s more, companies need to form partnerships with universities and students. The Shadbolt Review found that many computer science graduates were lacking in work experience and commercial awareness, which was making them less employable. Clearly there is a disconnect between what employers need, and what universities are teaching. Building meaningful partnerships with universities will allow companies to demonstrate the benefits of practical, working-world knowledge, and to offer short internships and placements to build these skills.
Soft skills are important too
Another finding of the review was that many graduates lacked awareness of the importance of soft skills in the workplace. Employers are increasingly looking for graduates who combine the required technical skills with soft skills such as business awareness.
For example, some of the larger companies stated that they tended to interview graduates on the assumption that their degree courses had given them the technical skills needed, and differentiated interviewees based on other skills that were linked to employability, such as problem solving, leadership, and communication skills.
This lack of soft skills is a trend that employers are seeing across all sectors. The best way to address the gap in knowledge is by placing more importance on these skills, while also incorporating more practical lessons into the higher education curriculum.
It’s only up from here
As employers, it’s important that we work together with universities and students to bridge the skills gap and share our expertise on the future of the industry. After all, it is the joint responsibility of everyone involved to do their part in solving the issue by giving graduates the tools they need to succeed in the world of business.