Brazil’s Ascenty secures $190m to become one of LATAM’s giant data centre operators
By João Marques Lima Published: 00:05, 17 March, 2017 Updated: 23:48, 16 March, 2017
Company also unveils 75% year-on-year growth and opens its fifth facility in São Paulo, with several more to come online in the near future.
Brazilian data centre services provider Ascenty has secured $190m to help finance five facilities under construction and further expansion.
The syndicated funding was led by four financial institutions, including ING and Itaú BBA. The company has not disclosed the names of the other two organisations.
The money, set to be deployed over a period of five years, will fund the company's expansion in the Latin America region as well as refinance existing debt.
The five new data centres currently being built across the region will expand the operator’s footprint by 85%. All facilities are expected to open before the end of 2017.
Chris Torto, CEO of Ascenty, said: “We decided to increase our debt financing to allow us to accelerate the company’s expansion into new markets.
“In 2017, we will launch five new data centres, which will be used to help reduce the shortage of high quality technology services across a range of market verticals in Brazil.
“In a time when the country’s economy is being challenged, this new debt financing provides proof of last year’s great results as well as renewed commitment from our banking partners for Ascenty’s healthy expansion.”
Ascenty has also revealed that its business grew 75% in 2016 when compared to 2015.
Torto said: “This growth happened because Ascenty has world class data centre infrastructure and that is the reason why we are the ones to which the main global technology companies in the country turn.”
Following the growth and financing announcements, Ascenty has also initiated operations at its first data centre in São Paulo, its fifth hub in Brazil.
The São Paulo 1 facility cost $32m to build and offers 4,000 sqm of hosting space and 10 MVA of power.
In addition to São Paulo 1, the company operates data centres in Campinas, Jundiaí, Hortolândia and Fortaleza.
However, despite having just cut the ribbon on São Paulo 1, Ascenty is already building a second data centre in the city, São Paulo 2, which is expected to open in April 2017.
In February, Ascenty announced it will be opening its eighth colocation facility in Rio de Janeiro in an investment that is set to reach $48.2m.
Roberto Rio Branco, marketing, institutional and commercial director at Ascenty, said: “Within our expansion plans, the Rio de Janeiro market is extremely important.”
Equinix opens its largest Latin America data centre to date aimed at the digital edge
By João Marques Lima Published: 14:05, 9 March, 2017 Updated: 14:05, 9 March, 2017
215,000 sqf facility not part of Verizon’s data centre portfolio acquired by Equinix in December 2016.
Global provider Equinix has opened a $69m International Business Exchange (IBX) data centre in São Paulo, Brazil, the company’s largest to date.
The facility in Santana de Parnaíba, dubbed SP3, expands the company’s portfolio in the country to five data centres, with one more to be added once the acquisition of Verizon’s data centres in the Americas, which include one building in São Paulo, is complete.
The opening of the data centre followed demand from enterprises, financial services firms and cloud and IT providers to interconnect with business partners at the digital edge.
The first phase of SP3 provides a total capacity of 725 cabinets. Upon completion of the multi-phase build out, the facility will provide a total capacity of 2,775 cabinets, doubling Equinix’s available space in Brazil, and making SP3 the largest multi-tenant data centre in Latin America, according to Equinix.
SP3 has over 215,000 sqf of gross data centre space, of which more than 90,000 sqf will be colocation space when fully built out.
Once fully operational, the building will have a power capacity of nearly 13.3 MW and a PUE of 1.35.
Karl Strohmeyer, president of Americas, Equinix, said: “Our continued growth in Brazil highlights strong demand for hybrid cloud and greater interconnection as enterprises are looking to move to a globally distributed architecture so they can better interconnect people, locations, clouds and data at the edge of corporate networks.”
Equinix’s São Paulo operations serve as a business hub for approximately 1,000 companies, including 270 cloud and IT services companies and more than 70 telecommunications carriers.
Jeff Paschke, research director, 451 Research, said: “Equinix is opening SP3, its largest data centre in Brazil, which will double Equinix’s capacity in Brazil. SP3 is the largest multi-tenant data centre we are aware of in Brazil and Latin America.
“With the underserved nature of the data centre market in Brazil we believe that Equinix can encounter success with the new facility.
“We also expect that Equinix may continue investing in Latin America and we wouldn’t be surprised to see additional data centre expansions by Equinix in the region over the next few years.”
Watch below a tour of Equinix’s newest facility in Brazil.
Ascenty plans eighth Brazilian data centre for Rio de Janeiro
By João Marques Lima Published: 04:23, 15 February, 2017 Updated: 23:07, 16 March, 2017
Strong market demand leads the colocation player to establish a presence in one of the most populated regions in the Southern Hemisphere.
Brazilian data centre services provider Ascenty has announced it will be opening its eighth colocation facility in Rio de Janeiro in an investment that is set to reach R$150m ($48.2m as of February 2017).
The facility is expected to be brought online in Q3 2017 and will join others in Campinas, Jundiaí, Hortolândia, Fortaleza, São Paulo and Sumaré.
The Rio de Janeiro data centre will be the company’s first in the city and is designed to have a total IT load power of 12MW with a tri-bus electrical system – essentially three power lines servicing each data hall.
With a PUE of 1.7, the facility is also expected to gain Uptime Institute Tier III certification, as all other Ascenty data centres have.
Cooling will be done via direct expansion with condensers above the roof for heat exchange, and an N+2 backup system is to be deployed.
The data centre will also have direct connections to the internet exchange points (IXPs) in São Paulo and Campinas.
Further connectivity will be provided by fiber optic networks connected into the data centre by telecom operators.
Roberto Rio Branco, marketing, institutional and commercial director at Ascenty, said: “Within our expansion plans, the Rio de Janeiro market is extremely important.
“There are large enterprises carrying out business in the region and some of them are already our partners in some of our other locations.
“We are at the early stages of construction but a large stake of the data centre has already been sold.”
EdgeConneX lands new Miami edge data center to serve South America
By João Marques Lima Published: 16:31, 30 January, 2017 Updated: 16:31, 30 January, 2017
Site links into several subsea cables including Americas II, BDNSi, COLUMBUS III, GlobeNet, MAYA-1, Mid-Atlantic Crossing (MAC) and SAm-1.
Edge infrastructure provider EdgeConneX has opened a second edge data centre in Miami, Florida, ten miles north of the NAP of Americas, as LATAM customers’ demand soars.
The Miami Edge Data Center has direct access to LATAM, South America and the Caribbean via dark and lit fiber for local customers, including wireless carriers, service providers, Content Delivery Networks (CDNs), cloud providers and enterprises.
The site has an IT load of up to 10MW, an N+1 design and was deployed in conjunction with anchor tenants.
EdgeConneX has arrangements with multiple carriers for dark and lit fiber, as well as for out-of-band (OOB) signalling and dedicated internet access (DIA) at the Miami 2 EDC.
The current list of carriers includes AT&T, CenturyLink, Comcast Business, Fiberlight, FPL FiberNet, NuVox, Windstream and XO Communications.
Located outside of the downtown Miami flood zone, the new Miami EDC acts as a direct gateway to the region’s primary subsea cable landing station.
It provides access to several subsea cables, including Americas II, BDNSi, COLUMBUS III, GlobeNet, MAYA-1, Mid-Atlantic Crossing (MAC) and SAm-1.
Don MacNeil, chief technology officer at EdgeConneX, said: “Other providers are running out of available space and power to service new customers or to meet the expansion requirements of existing customers.
“Our goal is to ensure that we are providing customers with diverse peering options, future scalability and a secure colocation facility that has the ability to deliver bandwidth-intensive content and applications with the lowest possible latency.”
EdgeConneX also runs edge data centres in several other states, including Texas, Arizona, California, Washington, Wisconsin, Minnesota, Utah, Colorado and Massachusetts. A Chicago (Illinois) data centre is set to open soon.
In Europe, the company has one data centre currently operational in the Netherlands and two under construction in the UK and Ireland. In addition, the provider is considering setting up facilities in Austria, Italy and France.
How 16,000 organisations could have avoided losing access to data in Brazil’s ‘Largest Digital Blackout’
By João Marques Lima Published: 16:44, 25 January, 2017 Updated: 16:44, 25 January, 2017
In the wake of one of the largest digital blackouts in South America, Data Economy sits down with David Mytton, founder and CEO at Server Density, to talk avoiding falling victim to scams.
Server downtime is a real issue, with not only expensive consequences but also reputational harm that could ultimately put a company out of business.
A recent case in Brazil saw 16,000 customers lose access to their servers after a provider misled them and other outsourced services providers.
As Data Economy reported exclusively, the issue is far from resolved and could see the world’s largest colocation company being sued.
Following the case in Brazil, Data Economy (DE) spoke with David Mytton (DM), founder and CEO at Server Density, a server monitoring company, about how companies can avoid falling victim to such a situation and the real cost of server downtime.
DE: Recently in Brazil, 16,000 customers were left with no services for days after a company misled them and service providers. What should companies – not just in Brazil – do to prevent being caught up in a situation like this?
DM: This is a challenging situation for the customers in this case as their data was only located on the turned off servers, meaning they had no control over the situation. It’s difficult when an entire server provider fails because building cross-vendor redundancy is a complex task.
Even with the biggest vendors, if you used Amazon’s SQS queuing service you could create a backup with Google’s Pub/Sub product, but because the two are not directly compatible at the API level, it would take a lot of work to abstract them out.
The solution here is to have a strong recovery plan. Companies need to make regular backups and, more importantly, run regular tests of restoring those backups. It’s vital to write down a checklisted plan for what needs to happen in this type of event and make sure key players are aware of that plan.
This way, in the undoubtedly stressful event that your provider ceases service, you will know that you can get your data and bring your systems back online with another vendor quickly.
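The "test your restores" advice can be automated. A small self-contained Python sketch (file names and layout are illustrative) that archives a directory, restores it into a scratch location, and verifies every file by checksum:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path


def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def backup_and_verify(source: Path, archive_dir: Path) -> bool:
    """Archive `source`, restore it elsewhere, and compare checksums.

    A backup is only proven good once a restore of it succeeds."""
    archive = shutil.make_archive(
        str(archive_dir / "backup"), "gztar", root_dir=source
    )
    with tempfile.TemporaryDirectory() as restore_dir:
        shutil.unpack_archive(archive, restore_dir)
        for original in source.rglob("*"):
            if original.is_file():
                restored = Path(restore_dir) / original.relative_to(source)
                if not restored.is_file() or sha256(restored) != sha256(original):
                    return False
    return True


with tempfile.TemporaryDirectory() as work:
    data = Path(work) / "data"
    data.mkdir()
    (data / "customers.csv").write_text("id,name\n1,Acme\n")
    print(backup_and_verify(data, Path(work)))  # True
```

Run on a schedule, a check like this turns the written recovery plan into something continuously exercised rather than assumed.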
DE: How do you quantify the real value of server downtime, in monetary or man-hour terms?
DM: The true cost of server downtime can be tricky to estimate, as it varies depending on the type of business you’re running.
However, some rough calculations can help put the issue into perspective. 100% uptime over a year means your website is never down; at 99% uptime, it is inaccessible for 3.65 days of the year. There’s a lot of lost revenue in that 1%.
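The arithmetic behind that figure is simple enough to sketch:

```python
def downtime_per_year(uptime_percent: float) -> float:
    """Days of outage implied by an annual uptime percentage."""
    return 365 * (1 - uptime_percent / 100)


for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_per_year(pct):.3f} days down per year")
# 99.0%  -> 3.650 days
# 99.9%  -> 0.365 days (about 8.8 hours)
# 99.99% -> 0.037 days (under an hour)
```

Each extra "nine" cuts the outage window tenfold, which is why uptime targets are usually quoted that way.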
For small businesses with unsophisticated setups, server downtime can be a major headache. Often these companies won’t have redundancies built in, with their websites running off a single server. While this may seem cost-effective, when that lone server goes down the website does too. The impact is therefore huge, but difficult to measure effectively.
Larger businesses can afford the insurance cost of server redundancy. The cost of potential downtime is orders of magnitude larger than it would be for a smaller business, so most prefer the constant and predictable cost of server redundancy.
DE: What are the most commonly occurring and expensive server issues?
DM: Sometimes the most obvious problems are the ones which are most easily overlooked. When we looked into this we were very surprised to see that running out of disk space was one of the more frequent problems.
This problem is resolved easily, and the costs for not paying attention to disk space are potentially great: applications running on servers with low disk space behave unpredictably, freezing, throwing up strange errors or crashing completely.
In pure financial terms, with more applications being deployed to the cloud where everything is chargeable on a pay as you go basis, it can be very easy to rack up significant network fees.
This could be caused by inefficient page design e.g. very large images, resources, etc. But it can also be caused by bugs or malicious software running on your servers. It’s important not only to monitor your networking usage, but to have alerts on out of date software too.
DE: What is the importance of agile working practices in a cloud environment?
DM: Simply moving legacy systems into a cloud environment isn’t sufficient to gain all the benefits associated with hosting with the major cloud vendors: flexibility, scalable capacity and fast product iteration.
Adopting a traditional sysadmin mindset in the cloud will likely mean a lot of frustration and increased costs. The old way of working, with specific servers looked after individually and manually, wastes a lot of time. The correct way to do it is through automation and APIs – you should rarely if ever be logging into individual servers, which should all be built from templates.
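The template-driven approach can be illustrated with a toy sketch; the `Fleet` class below stands in for a cloud provider's API (real code would call the vendor's SDK or a tool such as Terraform), and the template fields are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ServerTemplate:
    """One definition, many identical servers -- no hand-configured snowflakes."""
    role: str
    cpu: int
    memory_gb: int
    packages: tuple[str, ...] = ()


@dataclass
class Fleet:
    """Stand-in for a provider API; real automation would call the vendor SDK."""
    servers: dict = field(default_factory=dict)

    def ensure(self, name: str, template: ServerTemplate) -> None:
        # Idempotent: re-running the automation converges on the same state.
        self.servers[name] = template


WEB = ServerTemplate(role="web", cpu=2, memory_gb=4, packages=("nginx",))

fleet = Fleet()
for i in range(3):
    fleet.ensure(f"web-{i}", WEB)

print(sorted(fleet.servers))  # ['web-0', 'web-1', 'web-2']
```

The design point is that the loop, not a person at a shell prompt, is what creates servers: changing the template and re-running the automation updates the whole fleet consistently.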
Being an agile team (in this case moving quickly, as opposed to Agile software development) is a prerequisite, and this typically occurs through adopting devops principles. Simply put, development teams should be able to run their own systems.
Operations teams help to provide the underlying platform infrastructure, but developers should be the ones to request, manage and monitor resources, and developers should have responsibility over the uptime and availability of the systems they build.
This is quite different from more traditional ways of working but is now necessary and possible. The proliferation of SaaS-based cloud products means that developers are now able to provision and manage tools like persistent storage, queuing, email delivery and monitoring rather than having to build those services themselves, with operations providing high level oversight.