These are the tasks your data centre team should be focused on
By Guest Contributor Published: 00:40, 16 January, 2017 Updated: 00:40, 16 January, 2017
by Chris Wellfair, Projects Director, Secure I.T. Environments
When you are lucky enough to be taking some time off, there will of course be thoughts in your mind about whether the data centre, and everything it represents for your business, is safe, especially if it is still going to have active users and workloads. And if a problem does occur, you want to find out fast so you can act accordingly!
Planning is the key, and here are my top tips for the things you need to think about now!
Failover, not fall over
You no doubt already test failover between individual servers and clusters to ensure the DC continues to fulfil all services in the event of a crash or hardware failure, but what about when a power failure occurs? Is everything that should happen when the DC loses power actually happening? Is it switching over to generators, informing staff and shutting down any unnecessary servers consuming power? These kinds of tests should be run regularly, and should also cover connectivity.
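The power-failure checks above can be scripted as a simple drill harness. A minimal sketch follows; the check functions are hypothetical placeholders for whatever hooks your monitoring or building-management system actually exposes, not a real API:

```python
# Sketch of an automated power-failure drill. Each check function is a
# hypothetical stand-in for a real query to the DC's monitoring systems.

def generator_online():
    # Placeholder: query the transfer switch / generator controller here.
    return True

def staff_notified():
    # Placeholder: confirm the on-call alert was actually dispatched.
    return True

def nonessential_servers_down():
    # Placeholder: confirm low-priority workloads were shut down to save power.
    return True

def run_power_failure_drill():
    """Run each check and list any failover step that did not happen."""
    checks = {
        "generator switchover": generator_online,
        "staff notification": staff_notified,
        "non-essential shutdown": nonessential_servers_down,
    }
    return [name for name, check in checks.items() if not check()]

print(run_power_failure_drill())  # an empty list means the drill passed
```

The value is less in the code than in running it on a schedule: a drill that is never executed is just documentation.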
Check it twice, roleplay and test
Keeping an eye on the DC needs a combination of monitoring systems, processes and people. You may think all of this is in place, but when did you last test them, and how many levels of human redundancy do you have? Who has codes, spare keys, what is the rota? How will you know if the alert systems fail, and how will you manage each disaster that could befall the DC?
This is the time of year not just to confirm that the processes are written down and accessible, but to check that they are known to everyone who needs to play a role in protecting the DC. It is time to test!
Just as the military or an airline pilot goes through simulations, the same approach should be taken with the IT staff who will be on call over quieter periods. They should know the steps to take for each incident that could occur and, just as importantly, understand the path of escalation if the problem is worse than initially thought or deteriorates beyond their skillset.
Configuration is key
Think about how your monitoring systems are configured. Now is not the time to put a new system in, but it is important to check what it is monitoring, the conditions and parameters that will trigger an action and alert.
Are they tight enough, or in place at all? Use this time to fully assess them against your processes and your IT ‘red list’ of problems. Check that alerts are going to the right people – it is more common than most would like to admit that someone who left the company two years ago is still listed in the monitoring software.
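The stale-recipient check is easy to automate with a simple set comparison. The sketch below uses made-up sample addresses; in practice the two lists would come from your monitoring system's configuration and your staff directory:

```python
# Sketch: cross-check the monitoring system's alert recipients against the
# current staff directory. Both sets are illustrative sample data.

alert_recipients = {"alice@example.com", "bob@example.com", "carol@example.com"}
active_staff = {"alice@example.com", "carol@example.com", "dave@example.com"}

stale = alert_recipients - active_staff       # ex-employees still receiving alerts
uncovered = active_staff - alert_recipients   # current staff on no alert list

print(sorted(stale))      # ['bob@example.com']
print(sorted(uncovered))  # ['dave@example.com']
```

Run as a scheduled job, a report like this catches the departed-two-years-ago problem before an alert goes unanswered.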
Silence is not golden
Finally, get your monitoring software to bring you good news too. Better to get a daily report and know all is well than to have a system configured only to send alerts with bad news! Silence breeds fear, and you’ll just worry about whether the DC has disappeared down a sinkhole!
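A daily heartbeat report of this kind can be sketched in a few lines. The metric names and thresholds below are illustrative sample data, not real telemetry:

```python
# Sketch of a daily "all is well" summary, as opposed to alert-only monitoring.
from datetime import date

def daily_report(metrics):
    """Build a one-line heartbeat summary from {name: {value, limit}} metrics."""
    breaches = {k: v for k, v in metrics.items() if v["value"] > v["limit"]}
    status = "ALL OK" if not breaches else f"{len(breaches)} THRESHOLD BREACH(ES)"
    return f"{date.today().isoformat()} daily DC report: {status}"

# Illustrative readings, both comfortably inside their limits:
sample = {
    "inlet_temp_c": {"value": 22.0, "limit": 27.0},
    "ups_load_pct": {"value": 61.0, "limit": 80.0},
}
print(daily_report(sample))
```

The point of the design is that the absence of a report is itself a signal: if the daily email does not arrive, you know the reporting path is broken.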
If it is your turn to do the graveyard shift in the office, then a couple of other things are really important. Find the oldest server and rack you can in the DC, as it’s probably the most inefficient, with poor cooling. That makes it the perfect place for your television, armchair, and somewhere to keep the mince pies warm!
Gaining confidence in the cloud
By Guest Contributor Published: 22:45, 13 January, 2017 Updated: 22:45, 13 January, 2017
Do we really have anything to fear from data stored in the public cloud? Which is likely to be more secure, your internal datacentre, or your public cloud environment?
- Understand your security and governance requirements. Security deployments are often planned with little understanding of the issues they are actually meant to solve.
- Define the issues you are trying to resolve up front and then look for the solutions that directly deal with those issues.
- Understand that controlling the access to your data is more important than the location of your data when it comes to security.
- Look at how the data is, or will be, accessed and look specifically at the opportunities for a breach. Most data breaches happen through vulnerabilities, regardless of whether the data sits in a public or private cloud or on premises.
- Make sure you have a vulnerability testing programme in place. This is absolutely necessary, as untested systems can be insecure and vulnerable whether they are in the cloud or on premises.
6 data centre infrastructure trends for 2017
By Guest Contributor Published: 17:46, 3 January, 2017 Updated: 18:39, 3 January, 2017
by Giordano Albertazzi, EMEA president, Vertiv
In 2016, global macro trends significantly impacted the industry, with new cloud innovations and social responsibility taking the spotlight.
As cloud computing has integrated even further into IT operations, the focus will move to improving underlying critical infrastructure as businesses look to manage new data volumes.
We believe that 2017 will be the year in which IT professionals will invest in future-proofing their data centre facilities to ensure that they remain nimble and flexible in the years to come.
Here are the key infrastructure trends we see shaping the data centre ecosystem in 2017:
- Infrastructure races to keep up with connectivity at the edge
While the data centre remains core to delivering applications and services, such as point of sale and inventory management, network closets and micro data centres are growing in number and importance as internet-connected sensors and devices proliferate and remote users demand faster access to information.
In response, organisations will turn to pre-configured micro data centre solutions that support fast deployment, greater standardisation and remote management across distributed IT locations.
Standardisation and modularity are becoming as important in distributed IT locations as they are in large data centres.
- Thermal management expands to sustainability
Fuelled by the desire to drive down energy costs, traditional approaches that focused on delivering “maximum cooling” have been displaced by more sophisticated approaches focused on removing heat as efficiently as possible.
Increased use of advanced economiser technologies and the continued evolution of intelligent thermal controls have enabled highly resilient thermal management strategies that support PUEs below 1.2.
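PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power, so a value below 1.2 means less than 20% overhead on top of the IT load. The figures in this sketch are illustrative, not measurements from any real facility:

```python
# PUE = total facility power / IT equipment power. A PUE of 1.0 would mean
# every watt entering the facility reaches the IT equipment.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# An illustrative facility drawing 1,180 kW in total for a 1,000 kW IT load:
print(round(pue(1180.0, 1000.0), 2))  # 1.18, i.e. below the 1.2 mark cited above
```

Inverting the ratio shows the stakes: at PUE 1.18, cooling and power distribution consume 180 kW; a legacy facility at PUE 2.0 would burn 1,000 kW on the same overhead.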
However, while energy efficiency remains a core concern, water consumption and refrigerant use are important considerations in select geographies. Data centre operators are tailoring thermal management based on location and resource availability, and there has been a global increase in the use of evaporative and adiabatic cooling technologies which deliver highly efficient, reliable and economical thermal management.
Where water availability or costs are an issue, waterless cooling systems such as pumped-refrigerant economisers have gained traction.
- Security responsibilities extend to data centre management
While data breaches continue to garner the majority of security-related headlines, security has become a data centre availability issue as well. As more devices get connected to enable simpler management and eventual automation, threat vectors also increase.
Data centre professionals are adding security to their growing list of priorities and beginning to seek solutions that help them identify vulnerabilities and improve response to attacks. Management gateways that consolidate data from multiple devices to support DCIM are emerging as a potential solution.
With some modifications, they can identify unsecured ports across the critical infrastructure and provide early warning of denial of service attacks.
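Identifying unsecured ports can start with nothing more exotic than a TCP connect check from the standard library. This is a minimal sketch, not a substitute for a proper DCIM gateway or vulnerability scanner; the host and port list are illustrative:

```python
# Sketch: a basic TCP check for unexpectedly open management ports, using only
# the standard library. Only audit hosts you are authorised to scan.
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Ports commonly exposed by infrastructure devices (SSH, Telnet, HTTP, SNMP, HTTPS):
print(open_ports("127.0.0.1", [22, 23, 80, 161, 443]))
```

Anything in the result that is not deliberately exposed – Telnet on a cooling unit, say – is a candidate for lockdown.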
- DCIM proves its value
DCIM is continuing to expand its relevance, both in the issues it can address and its ability to manage the increasingly complex data centre ecosystem. Forward-thinking operators are using DCIM to address data centre challenges, such as regulatory compliance, Information Technology Infrastructure Library (ITIL), and managing hybrid environments.
Finally, colocation providers are finding DCIM a valuable tool in analysing their costs by customer and in providing them with remote visibility into their assets.
- Alternatives to lead-acid batteries become viable
New solutions are emerging to address the weak link in data centre power systems, as operators seek to reduce the footprint, weight and total cost of traditional valve-regulated lead-acid (VRLA) batteries.
The most promising of these is lithium-ion batteries. With prices decreasing, and chemistries and construction continuing to advance, lithium-ion batteries are becoming a viable option for the data centre and are being scaled to handle row- and room-level requirements.
While this battery technology has been available previously, the improving economics have spurred increased commercialisation efforts in the data centre industry.
- Data centre design and deployment become more integrated
Technology integration has been increasing in the data centre space for the last several years as operators seek modular, integrated solutions that can be deployed quickly, scaled easily and operated efficiently. Now, this same philosophy is being applied to data centre development.
Speed-to-market is one of the key drivers of the companies developing the bulk of data centre capacity today, and they’ve found the traditional silos between the engineering and construction phases cumbersome and unproductive.
As a result, they are embracing a turnkey approach to data centre design and deployment that leverages integrated, modular designs, off-site construction and disciplined project management.
For businesses looking to stay competitive and seamlessly transition to new, cloud based technologies, the strength of their IT infrastructure continues to be the cornerstone of success.
With data volumes rapidly rising, IT infrastructures will continue to evolve throughout 2017 to offer faster, more secure and more efficient services needed to meet these new demands.
Investment in the right infrastructure – not just a new infrastructure – is essential. It’s therefore vital that a partner with a strong history of data centre operations is involved throughout the system upgrade – from planning and design, to project management and ongoing maintenance and optimisation.
Data Centres: A Case Study in Job Creation Aided by Automation
By Guest Contributor Published: 10:00, 23 December, 2016 Updated: 18:59, 22 December, 2016
by Clifford Federspiel, President & CTO at Vigilent Corporation
Technology, specifically automation, was called out as a job killer by MIT Sloan School of Management professors Erik Brynjolfsson and Andrew McAfee in the MIT Technology Review.
Their research showed how middle-class jobs can be erased by automation. Any casual observer can see how this affected the vote in key electoral states in America’s industrial heartland during the recent US presidential race.
In contrast, the Wall Street Journal recently profiled a Deloitte study stating that automation can drive increases in jobs, and can alter the quality of jobs – mostly in a good way.
So, where’s the truth? Does automation help or hurt workers? As is often the case, it depends. Slow-growth industries use automation to reduce costs in the face of stiff price competition, displacing workers in the process.
Meanwhile, high-growth industries mostly use automation to deal with the challenges that high growth creates, making workers more valuable and necessary.
In many cases, automation is assistive. Assistive automation increases productivity without eliminating jobs. The increased productivity enables job creation in other areas.
Fortunately for us, the data center industry is in a hyper-growth phase that makes it an oasis of opportunity and job creation, enabled by advances in assistive automation. In fact, job growth aided by assistive automation is the only way that data centers can possibly keep up with the growth and complexity they face.
And yet, despite the exponentially increasing demand for data and resulting impact on data center operations, parts of the industry have been slow to embrace change, for example:
- Capacity investment decisions often rely on tribal knowledge and best practices that are only “best” in the absence of information that is available today from actual operating data and analytics. For example, a decision to add IT load to a facility should be based on the actual, measured capacity of the existing cooling infrastructure. Doing so allows businesses to grow faster when sufficient cooling capacity already exists, while avoiding the risk of overprovisioning a data hall with too much IT load.
- Maintenance is often performed on a fixed schedule. But maintenance incurs both cost and risk. Performing maintenance when analytics indicate that maintenance is actually warranted helps reduce cost and reduce risk.
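The fixed-schedule versus condition-based contrast in the bullet above can be illustrated with a simple trigger rule. Thresholds and readings here are made-up sample data, not guidance for any real equipment:

```python
# Sketch: condition-based maintenance triggering. A unit is serviced only when
# its sensor readings indicate maintenance is actually warranted.

def needs_maintenance(readings, vibration_limit=4.0, temp_limit=45.0):
    """Flag a cooling unit whose vibration or bearing temperature is elevated."""
    return (readings["vibration_mm_s"] > vibration_limit
            or readings["bearing_temp_c"] > temp_limit)

# Illustrative readings for two computer-room air conditioning (CRAC) units:
units = {
    "CRAC-01": {"vibration_mm_s": 1.2, "bearing_temp_c": 38.0},  # healthy
    "CRAC-02": {"vibration_mm_s": 5.1, "bearing_temp_c": 41.0},  # worn bearing
}
due = [name for name, r in units.items() if needs_maintenance(r)]
print(due)  # ['CRAC-02'] – only the degraded unit is serviced
```

Under a fixed schedule both units would be opened up; under the condition-based rule, the healthy unit avoids the cost and the intervention risk entirely.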
Automate or Be Left Behind
Traditional modes of operations management as described above are not sustainable in an industry facing explosive growth. Data centers now operate with greater variability in server density and migration of IT load between facilities, creating complex and interdependent systems that long ago surpassed the ability of humans to manage manually.
To deal with these complexities, hyperscale leaders like Google and Facebook have already fully embraced the use of automation technologies in their data center operations.
Facebook has been amassing robotics engineers with data center experience as it expands its massive build-out of server farms, and is collaborating with others through the Open Compute Project. Google, which joined OCP this year, has used its DeepMind machine learning to cut cooling energy by 40% in its already-efficient data centers.
It’s my view that data centers in other sectors — colocation, telecom, enterprise and government — benefit from automation even more than hyperscale operators.
Why? Because hyperscale companies have an abundance of technical talent to apply to optimization, while other sectors face a shortage of data center engineers. Automation can fill the gap and make jobs more productive and enjoyable, while simultaneously opening up new opportunities for growth and employment.
Better Jobs, and More
Running mission-critical infrastructure is complicated and difficult, and continues to get more challenging as the number and size of these facilities grows rapidly. Large data centers can have thousands of racks supported by hundreds of cooling units. Large network operators have thousands of “edge” facilities that typically run “lights-out.”
These facilities are as critical to the health and welfare of our economy as our electrical grid and our highway system. The problem is that there aren’t enough operators and engineers to keep up with the growth rate and the round-the-clock service requirements, which is stressing the workforce and limiting growth.
Assistive automation can help solve these challenges. Controls assisted by machine learning can make minute-by-minute operating decisions, and at the same time provide operators with data-driven insights so that they can more reliably schedule and perform maintenance, and deal with expansion and change management.
The data that’s collected by sensors and software can be used to create predictive and prescriptive analytics, which can be used to inform decisions about capacity, reliability and efficiency.
New hires – data analysts – are needed to take full advantage of these analytics. Automation can handle minute-to-minute tasks that require round-the-clock vigilance, and assistive automation can help operators, planners, engineers and analysts with decisions that only humans can make.
The point is that automation helps data centers run better, makes current employees’ jobs more productive and enjoyable, and generates high-value information – which in turn requires incremental hires to put that information to use in further optimizing the company’s facilities.
The Future is Bright
As the Deloitte study states: “… the last 200 years demonstrates that when a machine replaces a human, the result, paradoxically, is faster growth and, in time, rising employment.
The work of the future is likely to be varied and have a bigger share of social interaction and empathy, thought, creativity and skills.”
Nowhere will this scenario be more likely to play out than in the data center industry. The sheer scale of growth facing the industry requires automation technologies to deal with the complexity of operations and planning.
Automation technologies need more people, with more skills and in more interesting jobs, to deliver on their promise.