Industry Buzz

How to Create a Perfect Buyer Persona to connect with your ideal customers

Pickaweb Blog -

Creating a buyer persona is the first step towards building a brand. It is all about knowing your prospective customers better so you can offer a solution that meets their requirements. You do this by tailoring the content you deliver, connecting with them through various techniques, and identifying who they really are. Creating a buyer persona […] The post How to Create a Perfect Buyer Persona to connect with your ideal customers appeared first on Pickaweb.

9 Ways To Get Popular On Social Media

Pickaweb Blog -

You are a tech-savvy person and a digital marketer, so it is definitely important for you to stay on top of all the social media trends and changes. Like a living being, social media is a constantly developing and ever-evolving field. Every few days there will be one new feature or another that […] The post 9 Ways To Get Popular On Social Media appeared first on Pickaweb.

From ‘Day 1’ to Ongoing Optimization: Building a Multi-Layered Security Strategy on AWS

The Rackspace Blog & Newsroom -

The average cost of a data breach was almost $4 million in 2018. Given today’s advanced threat landscape, smart organizations know they must make the security of their digital environments a top priority, and the financial risks of not doing so continue to increase. But the landscape is not all grim. While companies are facing […] The post From ‘Day 1’ to Ongoing Optimization: Building a Multi-Layered Security Strategy on AWS appeared first on The Official Rackspace Blog.

10 Ideas to Use Colors for Effective Marketing and Branding Strategies

Pickaweb Blog -

Everybody is talking about color psychology and how to use it for branding and marketing. And here I am, stuck with white paper, thinking about how to fill it with black words. Such a black-and-white life of mine. Can you imagine a life with only black and white? Did I make you […] The post 10 Ideas to Use Colors for Effective Marketing and Branding Strategies appeared first on Pickaweb.

Dark Patterns: The Real Reason You’ve Been Tricked Online

Nexcess Blog -

“Be careful online.” It’s a phrase uttered by parents, corporations, and law enforcement in relation to browsing and interacting with the web. We’re told it almost daily by internet watchdogs and security policies. But how careful are people actually being? Over the last several years, expectations with regard to user interface (UI) and design have… Continue reading →

How to Filter Spam Bots in Google Analytics [Step by Step Guide]

HostGator Blog -

The post How to Filter Spam Bots in Google Analytics [Step by Step Guide] appeared first on HostGator Blog.

You know how valuable Google Analytics is and you’re ready to take all the insights it can offer to improve your website’s performance. But as you pull up the Acquisition data to see how people are finding your website, you notice some strange entries. Chances are, this means that you’ve become a victim of spam bots.

What Is Google Analytics Referral Spam?

Spammers will do anything to drive more traffic to their websites. One of the tactics they’ve employed to this effect is finding ways to show up in Google Analytics, hoping that website owners will click on a site to see why it’s sending traffic their way. Google Analytics referral spam used to be much more common, but Google works hard to keep those spammy sites from showing up in your data. Nonetheless, many websites will still see some results in their Google Analytics data produced by spam bots. If you care about getting accurate data about your website’s performance—and you should, because it’s the only way to understand what’s working—then you need to filter spam bots in Google Analytics. Here’s a handy guide on how to do just that.

How to Filter Spam Bots from Your Google Analytics Results

There are two main types of filters you should set up to capture most referral spam from bots. For both, you have the same first few steps.

Getting Started

1. Keep an unfiltered view. When you make any technical change, you always want to have a backup. In Google Analytics, that means keeping an unfiltered view. This provides you with data you can compare against your filtered results to make sure the filters are working, and it gives you a view you can revert to if one of your filters doesn’t work right. To do this, go to the Admin section in Google Analytics by clicking on the Gear icon in the bottom left corner. Click on View Settings in the third column.
Click on Copy View, then name your view Unfiltered, or something similar.

2. Click on Filters under the View column. With that done, go back to the main Admin page by clicking either the back icon or the gear icon again. Click Filters in the View section (note: this is different from All Filters in the Account section).

3. Click +Add Filter. Click the red “+Add Filter” button, then move on to the next section for the specific filters to create.

2 Google Analytics Filters to Set Up

Valid Hostname Filter

A valid hostname filter is the best way to filter out ghost spam. These are the spam bots that manage to ping your Google Analytics without ever actually visiting your website. Ghost spammers use automated scripts to send traffic to random websites, usually using a fake host. By telling Google Analytics how to recognize a valid host, this type of filter cuts the ghost spam from your analytics view.

1. Find your hostnames in Google Analytics. A valid hostname is anywhere that you’ve legitimately set up Google Analytics tracking. That includes your website, most obviously, but also services like the marketing tools you use and payment gateways. You can find a hostname report in Google Analytics in the Audience section by selecting Technology, then Network. Select Hostname as your Primary Dimension, and set your date range to go back at least a year. Scan the list to identify your valid hostnames. You should be able to recognize these as your own domain name, plus any tools you use and have knowingly allowed access to your Google Analytics tracking. Anything you don’t recognize or don’t manage yourself is probably spam. If there’s an entry you’re not sure about, do some Googling. For example, Google Web Light isn’t something I manage directly, but it’s a service Google provides to load speedier pages on mobile devices with slow connections. That makes it legit.

2. Create a filter listing your hostnames.
Back in the Add Filter screen (scroll back up to the Getting Started section if you need a reminder), name the filter something like “Valid Hostnames.” Select Custom under Filter Type, Include in the list of options below that, and Hostname from the dropdown menu. Under Filter Pattern, list all your valid hostnames in this format: hostname|hostname2|hostname3|hostname4. You want to fit all of your valid hostnames into this one filter—you can’t create more than one Include filter on hostnames.

3. Test your filter. Before you click Save, take a few seconds to test the filter and make sure you configured it correctly. You can use the Verify Filter option right there on the page to run a basic test and see how the filter would affect your data for the past 7 days. Note that if your website doesn’t currently get many spam hits, 7 days might not be a large enough sample to show a difference. Once you’re confident your filter is accurate, click Save.

Crawler Spam Filter

The other main category of spam bots that show up in Google Analytics is crawler spam. These bots actually do visit your site. They leave a correct hostname, so they won’t get caught by your valid hostname filter. Instead, you need to exclude them from your analytics.

1. Find the crawler spam in your analytics. To start, identify the crawler spam that shows up in your analytics now. In the Acquisition menu, choose All Traffic, then Referrals. Change your date range to include at least a year, then browse the list of websites and look for any that appear to be spammy. Some will look immediately suspicious and jump out as probable spam. For anything you’re not sure about, do a Google search for “what is <URL>” and you can usually find out whether or not it’s spam.
If the list here is long, it’s probably not worth your time to try to filter out every single spam bot, but if a main few are sending a lot of fake traffic to your site, make note of them to include in your filter.

2. Look up common crawler spam lists. In addition to the spam examples you find in your own analytics, you can find pre-created filters around the web that list many of the most common offenders. These will cover many of the spam bots that may not have hit your website yet, but could.

3. Create a filter (or multiple filters) listing the crawler spam. Back in the Add Filter screen, name your filter something like “Referral Spam.” Choose Custom as your Filter Type, click on the Exclude button, and select Campaign Source in the dropdown menu. For the pre-created filters you find, you can simply copy and paste them into your Google Analytics. For any you create manually, use the same format you did for your hostname filter: spamname|spamname2|spamname3. Since there is a limit on the number of characters you can use in each filter, you’ll likely be creating several different filters in this step. Be sure to give each a unique name.

4. Test your filter. For each filter you create, take a minute to test it. If you’re satisfied it’s accurate, click Save.

Filtering Spam Bots on a WordPress Site

Setting up filters within Google Analytics can feel pretty complicated. But if you have Google Analytics set up for your WordPress website, you have an easier solution you can take advantage of: plugins. There are a number of WordPress plugins devoted to blocking referral spam, including Block Referrer Spam, SpamReferrerBlock, WP Block Referrer Spam, and Stop Referrer Spam. You can block a significant amount of spam from your analytics simply by choosing one of these plugins, installing it on your WordPress site, and activating it.
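Both filter patterns above (the hostname Include filter and the crawler-spam Exclude filters) are plain regular expressions, so you can assemble them with a short script instead of building the strings by hand. This is just a convenience sketch, not part of Google Analytics itself, and the 255-character field limit used here is an assumption worth verifying in your own account:

```python
import re

# Assumed per-field length limit for GA filter patterns; verify in your account.
MAX_PATTERN_LEN = 255

def build_filter_patterns(domains, max_len=MAX_PATTERN_LEN):
    """Join domains with '|' into one or more regex patterns.

    Dots are escaped so 'example.com' does not also match 'exampleXcom',
    and the list is split into chunks that fit within the field limit.
    """
    patterns, current = [], ""
    for domain in domains:
        piece = re.escape(domain)
        candidate = piece if not current else current + "|" + piece
        if len(candidate) <= max_len:
            current = candidate
        else:
            patterns.append(current)
            current = piece  # assumes a single domain never exceeds max_len
    if current:
        patterns.append(current)
    return patterns
```

With a handful of hostnames this returns a single pattern like `mysite\.com|shop\.mysite\.com`; a long crawler-spam list comes back split across several patterns, one per Exclude filter, each needing its own unique name.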
If you’re not on WordPress now but like the idea of a simpler process for filtering spam bots, the first step to setting up a WordPress site is investing in WordPress hosting. Many aspects of designing, managing, and maintaining a website are easier with WordPress, so for website owners without extensive tech skills, it’s worth considering.

Google Analytics Spam Bots FAQs

Those are the main steps you need to know to filter spam bots in your Google Analytics. But if you still have questions about Google Analytics spam bots, here are answers to some of the most common ones.

1. How do I detect spam in Google Analytics? First things first: don’t click on the link! If you visit the website itself, the spammers are getting what they want from their shady tactics. Instead, do a search for the website’s name in quotation marks, or a search like “what is <URL>”. That way Google won’t take you to the spammer’s website—the thing we’re trying to avoid here—and instead you’ll see results from other websites about it. If the website’s a known source of analytics spam, someone’s probably written about it.

2. Why does filtering spam from my Google Analytics results matter? Website analytics are a rich source of information about what your audience responds to. They can show you what your website gets right now, reveal areas for improvement, and track the success of your online marketing activities so you know which tactics are worth the investment. Referral spam clouds the accuracy of your analytics. It puts you at risk of misinterpreting the data you have, because the data itself isn’t accurate. You don’t want to spend time and money on tactics that aren’t working because a spam bot makes you think a particular page is more popular with your audience than it truly is. By cleaning up your data, spam bot filters ensure your analytics deliver insights that are more accurate and useful.

3.
Can I clean past Google Analytics data? These filters will give you cleaner data moving forward, but they won’t be applied retroactively, so your historical data will still include inaccuracies caused by spam bots. However, comparing your analytics before and after applying the filters can help you make an educated guess about how much of your traffic was due to bots, and you can take that into account when analyzing your historical data to get closer to an accurate picture.

Gain Clarity by Skipping the Spam

Google Analytics is one of the most valuable tools available to every website owner. While you can’t completely avoid spammers online (they have an obnoxious skill for being everywhere), you can control the influence they have on your website data. Applying the right filters and plugins to your website analytics will rob spammers of their power and give you back the accuracy you need to build a stronger website for your audience.

Find the post on the HostGator Blog.

7 Ways to Scale Your Facebook Ad Campaigns

Social Media Examiner -

Wondering how to take your Facebook advertising campaigns to the next level? Looking for ideas to improve your Facebook ad conversions? In this article, you’ll discover seven ways to scale your Facebook ad campaigns. #1: Make Small Salary-Like Bumps to Facebook Ad Spend Every 4–7 Days As the name suggests, “salary-like” bumps are small increases […] The post 7 Ways to Scale Your Facebook Ad Campaigns appeared first on Social Media Marketing | Social Media Examiner.

Ultimate Guide to Using an Amazon Affiliate Site Builder

Grow Traffic Blog -

What is an Amazon Affiliate Site? Simply put, it’s a website you build and fill with content as a means to float your affiliate link, to get referrals, sales, and commission payments. What is an Amazon Affiliate Site Builder? Well, a site builder is a tool that helps you build a website, usually from stock templates or assets rather than having to code it from the ground up. Is there such a thing as an Amazon Affiliate Site Builder? Not really. Any site builder can build a website capable of being an Amazon Affiliate site. There’s nothing really special about an affiliate site compared to other websites, except maybe the lack of a storefront and landing pages. That said, let’s look at the sorts of site builders you might come across.

Amazon Affiliate Site Builders

There are a ton of different site builders out there. Some of them are simple and easy to use; others are more complicated. Some are open to anyone, while others require that you have web hosting with a specific host to use their builder. In fact, pretty much every web host has its own site builder built in, since it’s an easy feature to add and it helps them get more customers.

Squarespace. This is one of the more common and widely advertised site builders around. Since you don’t need a storefront, you can use the cheaper versions, which come with a handful of default features you may find useful, like analytics and a mobile website format.

Weebly. This is a free site builder that is simple and easy to use, but lacks many of the top-end features that a high-end site would want. It’s simple, and perhaps that works to its detriment.

Format. This site builder is more suited to photography and art, and is aimed at being a portfolio rather than a blog or a storefront.

Shopify. This site builder is aimed at e-commerce and includes a ton of features for running a storefront that you don’t need as an affiliate site.

Wix. This is a free website builder with a ton of flexibility.
It’s often one of the best entry-level website builders around, and while it lacks some advanced features, it’s very flexible and works as a good base for a new site owner.

WordPress.com. Not to be confused with the self-hosted WordPress system, the .com version is a hosted blog platform with a site builder attached. It’s not as robust as the .org version, but it works if you want to set something up for free.

There are dozens more out there too. There are so many site builders available primarily because setting up a website from basic templates is not difficult to do. It can be a daunting task if you’re not otherwise experienced with web and code shenanigans, but it’s really a low bar to clear in terms of education. You can teach yourself to set up a basic site in a few days, at most. You’ll note that none of these are Amazon Affiliate site builders. That’s because there’s functionally no difference between an Amazon Affiliate site and a normal blog-based website. All you need is something that hosts content, possibly with the support of a few advanced features like URL redirects and charts, but even those aren’t strictly necessary. Setting up an Amazon Affiliate site is simply a matter of knowing what you want to do with it. Things like choosing a domain name and producing content will rely on you knowing your niche ahead of time. Therefore, choosing your niche is the first major decision you have to make when it comes to setting up a site.

Choosing an Affiliate Niche

If you’re not already eyeing a deal and don’t know exactly what you want to promote, you probably have to choose your niche. The right niche is the lifeblood of a site. The days of broad-spectrum “deals” sites are long over. Google likes focus in the content it indexes, and a site that tries to cover too many bases will cover none of them well. One potential mistake you might make in choosing a niche is choosing something you’re passionate about.
I say this is a “potential” mistake because it’s not always a mistake. Passion is good for marketing. Your readers will be able to sense when you know and care about what you’re talking about. It’s not always easy to portray passion for some topics, of course; the author of a site about faucets isn’t going to be a passionate purveyor of faucets, because very few people in this world are passionate about faucets. However, someone passionate about the idea of outdoor life, with expertise in hiking, camping, mountaineering, and other outdoor activities, will be able to convey that passion to their audience. There’s a certain level of authenticity and personality that comes through even in promotional writing that you can’t find elsewhere.

That said, passion can be a mistake in two cases. The first is when you really love what you’re trying to promote, and you find your impression of the industry systematically decaying. They say that if you do what you love, you’ll never work a day in your life, but that’s not quite right. If you do what you love, the corporate oppression of free thinking and inspiration will drive the passion out of you and leave you with no love of what you formerly enjoyed. Turning a hobby into a job is often the end of your enjoyment of that hobby.

The other reason following your passion might be a mistake is if you’re passionate about something that just isn’t a very deep niche. You might be very passionate about a hobby of yours, but if only 1,200 other people in the world share that hobby, your audience for your affiliate links is going to be very small. Affiliate programs only work when you send volume.

To pick a solid niche, you need to find something that has two things. First, it needs to have a level of traffic sufficient to promote your items. Second, it needs to have an array of high-value products to sell. With Amazon Affiliates, you earn something like a 4% commission on sales at base, with scaling fees based on sales volume.
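To put rough numbers on that, here is a quick sketch of the commission arithmetic using the base 4% rate mentioned above (real Amazon rates vary by product category, so treat these figures as illustrative):

```python
import math

def sales_needed(price, commission_rate, target_profit):
    """Number of sales needed to reach a profit target at a given rate."""
    per_sale = price * commission_rate  # commission earned on one sale
    # Round up: you can't make a fraction of a sale.
    return math.ceil(target_profit / per_sale)

# A $4 item vs. a $2,000 item, both at an assumed 4% base commission,
# aiming for $1,000 in monthly commission income:
cheap = sales_needed(4, 0.04, 1000)        # thousands of sales needed
expensive = sales_needed(2000, 0.04, 1000)  # a handful of sales needed
```

At these assumed numbers, the $4 item takes 6,250 sales a month to hit the target, while the $2,000 item takes just 13, which is the whole argument for picking a niche with high-value products.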
If you’re selling a $4 can of spray paint, you’re not going to make much on each sale, so you have to sell hundreds of them to turn any real profit. By contrast, if you’re selling a $2,000 television or another high-value item, you might only need to sell one or two a month to make a reasonable profit. At the extreme end, there are affiliate programs for things like yachts and private jets: you might only get one sale a year, but that sale bankrolls you for the year. Amazon doesn’t really sell those kinds of products, though, so that’s for another post. Neil Patel published a pretty good guide on finding an affiliate niche a while back. If you’re not entirely sold on the niche you’ve been eyeing, or if you have no idea where to start, it’s a great article to help you solidify your plan.

What an Amazon Affiliate Site Needs

So you have a niche, and now you want to set up a site. What does your site need to be a success? At the top level, you need a good domain name. I know a lot of the free website builders don’t let you use a custom domain name without a fee, but it’s a fee well worth paying. People don’t trust sites on free subdomains anymore. Moreover, those sites have a harder time ranking in Google search, and thus a harder time attracting an audience. Come up with a domain name that is both relevant to your niche and easily brandable. You can try using an exact match domain (EMD) if you like, but I would caution you against it. EMDs are often expensive, they’re difficult to get ranking, and they’re subject to more scrutiny.

Next, your site needs a strong architecture. Most website builders are fine with this; you can’t really screw up a site made with a website builder, because they don’t let you. Just make something that has normal user navigation and doesn’t try to do anything screwy like scroll horizontally. Many site builders even have templates you can choose that do most of the work for you.
Make sure any site builder you’re using produces a responsive site. You want your site to be mobile compatible; it’s both a search ranking factor and a usability factor. With over half of modern web traffic coming from mobile devices, if you can’t present your content and links to mobile users, you’re leaving half or more of your potential money on the table. If you want to go a little more advanced, you can purchase web hosting from a reputable seller – something like Bluehost, HostGator, or InMotion – and set up a site yourself. That process is a little more involved, but you have a lot more room for customization than you do with a site builder.

Once you build your site, you need a few structural pages that will hold important information. Primarily, you want an About page that contains information about who you are and why you’re into your niche. Feel free to lie; no one is going to fact-check you here. I mean, unless you start breaking laws, anyway. You also want a disclosure page that mentions that your links can be affiliate links. Keep in mind that your links need to be disclosed in your blog posts as well, as per FTC guidelines.

Creating Content for Your Affiliate Site

Once everything is up and running, you need to start populating your site with content. I recommend creating somewhere around a dozen articles, preferably in-depth ones, which you can publish all at once. Then create more on an ongoing basis, at least once per week, to keep your site fresh and alive. So what kind of content should you be creating – or paying to have written for you?

The in-depth review. These are the bread and butter of affiliate marketers. You pick a specific product and write a lengthy, detailed review of it. Ideally you want personal experience with the product so you can point out specifics unique to that item, like a design flaw you encountered or a personal use you didn’t think of.
You want something that provides information the reader can use when deciding whether to buy. Avoid making it all glowing praise; that comes across as insincere.

The comparison post. These are staples for your site, but they generally work best once you have enough other content up that you can use them as a sort of table of contents as well. Basically, you take two to four similar products and write a post comparing them. How do they stack up in terms of price, features, durability, and so on? If you can link these to the deeper reviews of each individual product, all the better.

The tutorial post. This is a post you write when you know the typical use case and pain point for a product. You know the problem and you know how the product solves it, so write a blog post giving a tutorial on using that product to solve that problem. Alternatively, create a tutorial on how to install one of the products you’re promoting.

The clickbait post. No, we’re not going all-in on clickbait headlines; those have mostly died out, and good riddance. I just mean the low-bar gimmick posts that serve as a shell for links to some of your products. For example, “The Top 50 Patio Designs” could be a nice list showing 50 patio designs, where you identify and link to the patio furniture you sell through them.

As you populate your site with content, you will see more and more traffic coming in and more and more sales going through. That’s pretty much it! Everything else is optimization. Not that optimization is trivial, of course, but the hard part is setting everything up. The post Ultimate Guide to Using an Amazon Affiliate Site Builder appeared first on Growtraffic Blog.

How Do You Know If You’re the Best – Quality Web Hosting Made Simple

InMotion Hosting Blog -

When it comes to the best WordPress hosting, offering a huge amount of storage doesn’t by itself make you a quality host. In fact, most websites don’t need unlimited storage, and most will never use anywhere near that amount. As the old saying goes, sometimes less is more. While you obviously don’t want a bare-bones hosting plan, you don’t really need unlimited storage and bandwidth. What if, instead, you had a quality support system, a managed account, a chance to grow, and a few extra perks? Continue reading How Do You Know If You’re the Best – Quality Web Hosting Made Simple at The Official InMotion Hosting Blog.

LinkedIn Updates Pages and Algorithm

Social Media Examiner -

Welcome to this week’s edition of the Social Media Marketing Talk Show, a news show for marketers who want to stay on the leading edge of social media. On this week’s Social Media Marketing Talk Show, we explore new Instagram chat stickers and ad placements, as well as updates to LinkedIn’s pages and algorithm with […] The post LinkedIn Updates Pages and Algorithm appeared first on Social Media Marketing | Social Media Examiner.

What is Captain America’s Favorite WordPress Plugin?

InMotion Hosting Blog -

Captain America once said, “Without my WordPress backup plugin, my website would have been taken down by Hydra and I would have lost everything.” I was in just as much shock as you because I didn’t even know Captain America knew what WordPress was. But here he is, talking about how a plugin saved his site from being shut down. And he could not be more right. As we know, Captain America was frozen in ice for a long time and had to play some serious catch up when finally teaming up with the Avengers. Continue reading What is Captain America’s Favorite WordPress Plugin? at The Official InMotion Hosting Blog.

People of WordPress: Ugyen Dorji

WordPress News -

You’ve probably heard that WordPress is open source software, and may know that it’s created and run by volunteers. WordPress enthusiasts share many examples of how WordPress changed people’s lives for the better. This monthly series shares some of those lesser-known, amazing stories.

Meet Ugyen Dorji from Bhutan

Ugyen lives in Bhutan, a landlocked country situated between two giant neighbors, India to the south and China to the north. He works for ServMask Inc and is responsible for the Quality Assurance process for the All-in-One WP Migration plugin. He believes in the Buddhist teaching that “the most valuable service is one rendered to our fellow humans,” and his contributions demonstrate this through his WordPress translation work and multilingual support projects for WordPress.

Bhutanese contributors to the Dzongkha locale on WordPress Translation Day

How Ugyen started his career with WordPress

Back in 2016, Ugyen was looking for a new job after his former cloud company ran into financial difficulties. During one interview he was asked many questions about WordPress and, although he had a basic understanding of WordPress, he struggled to give detailed answers. After that interview he resolved to develop his skills and learn as much about WordPress as he could. A few months passed and he received a call from ServMask Inc, who had developed a plugin called All-in-One WP Migration. They offered him a position, fulfilling his wish to work with WordPress full-time. And because of that, Ugyen is now an active contributor to the WordPress community.

WordCamp Bangkok 2018

WordCamp Bangkok 2018 was a turning point for Ugyen. WordCamps are a great opportunity to meet WordPress community members you don’t otherwise get to know, and he was able to attend his first WordCamp through the sponsorship of his company. The first day of WordCamp Bangkok was a Contributor Day, where people volunteer to work together to contribute to the development of WordPress.
Ugyen joined the Community team to have conversations with WordPress users from all over the world. He was able to share his ideas for supporting new speakers, events, and organizers to help build the WordPress community in places where it is not yet booming. During the main day of the event, Ugyen managed a photo booth for speakers, organizers, and attendees to capture their memories of WordCamp. He also took some time out to attend several presentations during the conference. What particularly stuck in Ugyen’s mind was learning that having a website content plan has been shown to lead to 100% growth in business development.

Co-Organizing Thimphu’s WordPress Meetup

After attending WordCamp Bangkok 2018 as well as a local Meetup event, Ugyen decided to introduce WordPress to his home country and its cities. As one of the WordPress Translation Day organizers, he realized that his local language, Dzongkha, was not as fully translated as other languages in the WordPress Core Translation. That is when Ugyen knew that he wanted to help build his local community. He organized Thimphu’s first WordPress Meetup to coincide with WordPress Translation Day 4, and it was a huge success! Like all WordPress Meetups, the Thimphu WordPress Meetup is an easygoing, volunteer-organized, non-profit meetup which covers everything related to WordPress. But it also keeps in mind the four pillars of Bhutan’s Gross National Happiness by aiming to preserve and promote the country’s unique culture and national language.

Big dreams get accomplished one step at a time

Ugyen has taken an active role in preserving his national language by encouraging his community to use WordPress, including Dzongkha bloggers, online Dzongkha news outlets, and government websites.
And while Ugyen has only been actively involved in the community for a short period, he has contributed much to the WordPress community, including: becoming a Translation Contributor for the Dzongkha WordPress Core Translation; participating in the Global WordPress Translation Day 4 livestream and organizing team; inviting WordPress Meetup Thimphu members and WordPress experts from other countries to join the local Slack instance; encouraging ServMask Inc. to become an event sponsor; and providing the Dzongkha Development Commission the opportunity to involve their language experts. When it comes to WordPress, Ugyen particularly focuses on encouraging local and international language WordPress bloggers, helping startups succeed with WordPress, and sharing what he has learned from WordPress with his Bhutanese WordPress community. As a contributor, Ugyen hopes to accomplish even more for the Bhutanese and Asian WordPress communities. His dreams for his local community are big, including teaching more people about open source, hosting a local WordCamp, and helping to organize WordCamp Asia in 2020 — all while raising awareness of his community. This post is based on an article originally published on HeroPress, a community initiative created by Topher DeRosia. HeroPress highlights people in the WordPress community who have overcome barriers and whose stories would otherwise go unheard. Meet more WordPress community members over at HeroPress!

Details of the Cloudflare outage on July 2, 2019

CloudFlare Blog -

Almost nine years ago, Cloudflare was a tiny company and I was a customer, not an employee. Cloudflare had launched a month earlier and one day alerting told me that my little site didn’t seem to have working DNS any more. Cloudflare had pushed out a change to its use of Protocol Buffers and it had broken DNS.

I wrote to Matthew Prince directly with an email titled “Where’s my dns?” and he replied with a long, detailed, technical response (you can read the full email exchange here) to which I replied:

From: John Graham-Cumming
Date: Thu, Oct 7, 2010 at 9:14 AM
Subject: Re: Where's my dns?
To: Matthew Prince

Awesome report, thanks. I'll make sure to call you if there's a problem. At some point it would probably be good to write this up as a blog post when you have all the technical details because I think people really appreciate openness and honesty about these things. Especially if you couple it with charts showing your post launch traffic increase.

I have pretty robust monitoring of my sites so I get an SMS when anything fails. Monitoring shows I was down from 13:03:07 to 14:04:12. Tests are made every five minutes.

It was a blip that I'm sure you'll get past. But are you sure you don't need someone in Europe? :-)

To which he replied:

From: Matthew Prince
Date: Thu, Oct 7, 2010 at 9:57 AM
Subject: Re: Where's my dns?
To: John Graham-Cumming

Thanks. We've written back to everyone who wrote in. I'm headed in to the office now and we'll put something on the blog or pin an official post to the top of our bulletin board system. I agree 100% transparency is best.

And so, today, as an employee of a much, much larger Cloudflare, I get to be the one who writes, transparently, about a mistake we made, its impact and what we are doing about it.

The events of July 2

On July 2, we deployed a new rule in our WAF Managed Rules that caused CPUs to become exhausted on every CPU core that handles HTTP/HTTPS traffic on the Cloudflare network worldwide.
We are constantly improving WAF Managed Rules to respond to new vulnerabilities and threats. In May, for example, we used the speed with which we can update the WAF to push a rule to protect against a serious SharePoint vulnerability. Being able to deploy rules quickly and globally is a critical feature of our WAF.

Unfortunately, last Tuesday’s update contained a regular expression that backtracked enormously and exhausted the CPU used for HTTP/HTTPS serving. This brought down Cloudflare’s core proxying, CDN and WAF functionality. The following graph shows CPUs dedicated to serving HTTP/HTTPS traffic spiking to nearly 100% usage across the servers in our network.

CPU utilization in one of our PoPs during the incident

This resulted in our customers (and their customers) seeing a 502 error page when visiting any Cloudflare domain. The 502 errors were generated by the front line Cloudflare web servers that still had CPU cores available but were unable to reach the processes that serve HTTP/HTTPS traffic.

We know how much this hurt our customers. We’re ashamed it happened. It also had a negative impact on our own operations while we were dealing with the incident. It must have been incredibly stressful, frustrating and frightening if you were one of our customers. It was even more upsetting because we hadn’t had a global outage in six years.

The CPU exhaustion was caused by a single WAF rule that contained a poorly written regular expression that ended up creating excessive backtracking. The regular expression at the heart of the outage is:

(?:(?:\"|'|\]|\}|\\|\d|(?:nan|infinity|true|false|null|undefined|symbol|math)|\`|\-|\+)+[)]*;?((?:\s|-|~|!|{}|\|\||\+)*.*(?:.*=.*)))

Although the regular expression itself is of interest to many people (and is discussed more below), the real story of how the Cloudflare service went down for 27 minutes is much more complex than “a regular expression went bad”.
We’ve taken the time to write out the series of events that led to the outage and kept us from responding quickly. And, if you want to know more about regular expression backtracking and what to do about it, you’ll find it in an appendix at the end of this post.

What happened

Let’s begin with the sequence of events. All times in this blog are UTC.

At 13:42 an engineer working on the firewall team deployed a minor change to the rules for XSS detection via an automatic process. This generated a Change Request ticket. We use Jira to manage these tickets and a screenshot is below.

Three minutes later the first PagerDuty page went out indicating a fault with the WAF. This was a synthetic test that checks the functionality of the WAF (we have hundreds of such tests) from outside Cloudflare to ensure that it is working correctly. This was rapidly followed by pages indicating many other end-to-end tests of Cloudflare services failing, a global traffic drop alert, widespread 502 errors and then many reports from our points-of-presence (PoPs) in cities worldwide indicating there was CPU exhaustion.

Some of these alerts hit my watch and I jumped out of the meeting I was in and was on my way back to my desk when a leader in our Solutions Engineering group told me we had lost 80% of our traffic. I ran over to SRE where the team was debugging the situation. In the initial moments of the outage there was speculation that it was an attack of some type we’d never seen before.

Cloudflare’s SRE team is distributed around the world, with continuous, around-the-clock coverage. Alerts like these, the vast majority of which note very specific issues of limited scope in localized areas, are monitored in internal dashboards and addressed many times every day.
This pattern of pages and alerts, however, indicated that something gravely serious had happened, and SRE immediately declared a P0 incident and escalated to engineering leadership and systems engineering.

The London engineering team was at that moment in our main event space listening to an internal tech talk. The talk was interrupted and everyone assembled in a large conference room while others dialed in. This wasn’t a normal problem that SRE could handle alone; it needed every relevant team online at once.

At 14:00 the WAF was identified as the component causing the problem and an attack was dismissed as a possibility. The Performance Team pulled live CPU data from a machine that clearly showed the WAF was responsible. Another team member used strace to confirm. Another team saw error logs indicating the WAF was in trouble. At 14:02 the entire team looked at me when it was proposed that we use a ‘global kill’, a mechanism built into Cloudflare to disable a single component worldwide.

But getting to the global WAF kill was another story. Things stood in our way. We use our own products and with our Access service down we couldn’t authenticate to our internal control panel (and once we were back we’d discover that some members of the team had lost access because of a security feature that disables their credentials if they don’t use the internal control panel frequently). And we couldn’t get to other internal services like Jira or the build system. To get to them we had to use a bypass mechanism that wasn’t frequently used (another thing to drill on after the event).

Eventually, a team member executed the global WAF kill at 14:07 and by 14:09 traffic levels and CPU were back to expected levels worldwide. The rest of Cloudflare's protection mechanisms continued to operate.

Then we moved on to restoring the WAF functionality.
Because of the sensitivity of the situation we performed both negative tests (asking ourselves “was it really that particular change that caused the problem?”) and positive tests (verifying the rollback worked) in a single city using a subset of traffic after removing our paying customers’ traffic from that location.

At 14:52 we were 100% satisfied that we understood the cause, had a fix in place, and the WAF was re-enabled globally.

How Cloudflare operates

Cloudflare has a team of engineers who work on our WAF Managed Rules product; they are constantly working to improve detection rates, lower false positives, and respond rapidly to new threats as they emerge. In the last 60 days, 476 change requests have been handled for the WAF Managed Rules (averaging one every 3 hours).

This particular change was to be deployed in “simulate” mode, where real customer traffic passes through the rule but nothing is blocked. We use that mode to test the effectiveness of a rule and measure its false positive and false negative rates. But even in simulate mode the rules actually need to execute, and in this case the rule contained a regular expression that consumed excessive CPU.

As can be seen from the Change Request above there’s a deployment plan, a rollback plan and a link to the internal Standard Operating Procedure (SOP) for this type of deployment. The SOP for a rule change specifically allows it to be pushed globally. This is very different from all the software we release at Cloudflare, where the SOP first pushes software to an internal dogfooding network point of presence (PoP) (which our employees pass through), then to a small number of customers in an isolated location, followed by a push to a large number of customers and finally to the world.

The process for a software release looks like this: We use git internally via BitBucket. Engineers working on changes push code which is built by TeamCity; when the build passes, reviewers are assigned.
Once a pull request is approved the code is built and the test suite runs (again). If the build and tests pass then a Change Request Jira is generated and the change has to be approved by the relevant manager or technical lead. Once approved, deployment to what we call the “animal PoPs” occurs: DOG, PIG, and the Canaries.

The DOG PoP is a Cloudflare PoP (just like any of our cities worldwide) but it is used only by Cloudflare employees. This dogfooding PoP enables us to catch problems early before any customer traffic has touched the code. And it frequently does.

If the DOG test passes successfully, code goes to PIG (as in “Guinea Pig”). This is a Cloudflare PoP where a small subset of customer traffic from non-paying customers passes through the new code. If that is successful, the code moves to the Canaries. We have three Canary PoPs spread across the world and run paying and non-paying customer traffic through them on the new code as a final check for errors.

Cloudflare software release process

Once successful in Canary the code is allowed to go live. The entire DOG, PIG, Canary, Global process can take hours or days to complete, depending on the type of code change. The diversity of Cloudflare’s network and customers allows us to test code thoroughly before a release is pushed to all our customers globally. But, by design, the WAF doesn’t use this process because of the need to respond rapidly to threats.

WAF Threats

In the last few years we have seen a dramatic increase in vulnerabilities in common applications. This has happened due to the increased availability of software testing tools, like fuzzing for example (we just posted a new blog on fuzzing here).

What is commonly seen is that a Proof of Concept (PoC) is created and often published on GitHub quickly, so that teams running and maintaining applications can test to make sure they have adequate protections.
Because of this, it’s imperative that Cloudflare is able to react as quickly as possible to new attacks to give our customers a chance to patch their software.

A great example of how Cloudflare proactively provided this protection was through the deployment of our protections against the SharePoint vulnerability in May (blog here). Within a short space of time from the publicised announcements, we saw a huge spike in attempts to exploit our customers’ SharePoint installations. Our team continuously monitors for new threats and writes rules to mitigate them on behalf of our customers.

The specific rule that caused last Tuesday’s outage was targeting Cross-site scripting (XSS) attacks. These too have increased dramatically in recent years.

The standard procedure for a WAF Managed Rules change indicates that Continuous Integration (CI) tests must pass prior to a global deploy. That happened normally last Tuesday and the rules were deployed. At 13:31 an engineer on the team had merged a Pull Request containing the change after it was approved. At 13:37 TeamCity built the rules and ran the tests, giving it the green light. The WAF test suite tests that the core functionality of the WAF works and consists of a large collection of unit tests for individual matching functions. After the unit tests run, the individual WAF rules are tested by executing a huge collection of HTTP requests against the WAF. These HTTP requests are designed to test requests that should be blocked by the WAF (to make sure it catches attacks) and those that should be let through (to make sure it isn’t over-blocking and creating false positives).
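As a sketch of the kind of check that could complement such a functional suite, here is a minimal per-rule time-budget test in Python. Everything here is illustrative and hypothetical — the rule, the sample traffic, the budget value and the function names are assumptions, not Cloudflare's actual test harness:

```python
import re
import time

# Hypothetical stand-in for a WAF rule: a compiled regular expression
# applied to request bodies. The pattern and samples are illustrative.
RULE = re.compile(r"<script[^>]*>")

SAMPLE_REQUESTS = [
    "GET /index.html HTTP/1.1",
    "POST /login HTTP/1.1 user=alice&pass=secret",
    "x" * 10_000,  # a long, benign body
]

# A per-rule wall-clock budget: functional block/allow tests alone do
# not catch a rule that matches correctly but backtracks excessively.
TIME_BUDGET_SECONDS = 0.05

def rule_within_budget(rule, requests, budget=TIME_BUDGET_SECONDS):
    """Return True if the rule evaluates every sample request within budget."""
    for body in requests:
        start = time.perf_counter()
        rule.search(body)
        if time.perf_counter() - start > budget:
            return False
    return True

print(rule_within_budget(RULE, SAMPLE_REQUESTS))
```

A backtracking-heavy pattern run against long sample bodies would pass the block/allow checks yet blow a budget like this, which is the class of failure a purely functional suite misses.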
What it didn’t do was test for runaway CPU utilization by the WAF, and examining the log files from previous WAF builds shows that no increase in test suite run time was observed with the rule that would ultimately cause CPU exhaustion on our edge.

With the tests passing, TeamCity automatically began deploying the change at 13:42.

Quicksilver

Because WAF rules are required to address emergent threats they are deployed using our Quicksilver distributed key-value (KV) store, which can push changes globally in seconds. This technology is used by all our customers when making configuration changes in our dashboard or via the API and is the backbone of our service’s ability to respond to changes very, very rapidly.

We haven’t really talked about Quicksilver much. We previously used Kyoto Tycoon as a globally distributed key-value store, but we ran into operational issues with it and wrote our own KV store that is replicated across our more than 180 cities. Quicksilver is how we push changes to customer configuration, update WAF rules, and distribute JavaScript code written by customers using Cloudflare Workers.

From clicking a button in the dashboard or making an API call to change configuration to that change coming into effect takes seconds, globally. Customers have come to love this high-speed configurability. And with Workers they expect near-instant, global software deployment. On average Quicksilver distributes about 350 changes per second.

And Quicksilver is very fast. On average we hit a p99 of 2.29s for a change to be distributed to every machine worldwide. Usually, this speed is a great thing. It means that when you enable a feature or purge your cache you know that it’ll be live globally nearly instantly. When you push code with Cloudflare Workers it's pushed out at the same speed. This is part of the Cloudflare promise of fast updates when you need them.

However, in this case, that speed meant that a change to the rules went global in seconds.
You may notice that the WAF code uses Lua. Cloudflare makes use of Lua extensively in production and details of the Lua in the WAF have been discussed before. The Lua WAF uses PCRE internally, which uses backtracking for matching and has no mechanism to protect against a runaway expression. More on that and what we're doing about it below.

Everything that occurred up to the point the rules were deployed was done “correctly”: a pull request was raised, it was approved, CI/CD built the code and tested it, a change request was submitted with an SOP detailing rollout and rollback, and the rollout was executed.

Cloudflare WAF deployment process

What went wrong

As noted, we deploy dozens of new rules to the WAF every week, and we have numerous systems in place to prevent any negative impact of that deployment. So when things do go wrong, it’s generally the unlikely convergence of multiple causes. Getting to a single root cause, while satisfying, may obscure the reality. Here are the multiple vulnerabilities that converged to get to the point where Cloudflare’s service for HTTP/HTTPS went offline:

1. An engineer wrote a regular expression that could easily backtrack enormously.
2. A protection that would have helped prevent excessive CPU use by a regular expression was removed by mistake weeks prior, during a refactoring of the WAF that was part of making the WAF use less CPU.
3. The regular expression engine being used didn’t have complexity guarantees.
4. The test suite didn’t have a way of identifying excessive CPU consumption.
5. The SOP allowed a non-emergency rule change to go globally into production without a staged rollout.
6. The rollback plan required running the complete WAF build twice, taking too long.
7. The first alert for the global traffic drop took too long to fire.
8. We didn’t update our status page quickly enough.
9. We had difficulty accessing our own systems because of the outage, and the bypass procedure wasn’t well rehearsed.
10. SREs had lost access to some systems because their credentials had been timed out for security reasons.
11. Our customers were unable to access the Cloudflare Dashboard or API because they pass through the Cloudflare edge.

What’s happened since last Tuesday

Firstly, we stopped all release work on the WAF completely and are doing the following:

- Re-introducing the excessive CPU usage protection that was removed. (Done)
- Manually inspecting all 3,868 rules in the WAF Managed Rules to find and correct any other instances of possible excessive backtracking. (Inspection complete)
- Introducing performance profiling for all rules to the test suite. (ETA: July 19)
- Switching to either the re2 or Rust regex engine, both of which have run-time guarantees. (ETA: July 31)
- Changing the SOP to do staged rollouts of rules in the same manner used for other software at Cloudflare, while retaining the ability to do emergency global deployments for active attacks.
- Putting in place an emergency ability to take the Cloudflare Dashboard and API off Cloudflare's edge.
- Automating updates of the Cloudflare Status page.

In the longer term we are moving away from the Lua WAF that I wrote years ago. We are porting the WAF to use the new firewall engine. This will make the WAF both faster and add yet another layer of protection.

Conclusion

This was an upsetting outage for our customers and for the team. We responded quickly to correct the situation and we are correcting the process deficiencies that allowed the outage to occur, going deeper to protect against any further possible problems with the way we use regular expressions by replacing the underlying technology used.

We are ashamed of the outage and sorry for the impact on our customers. We believe the changes we’ve made mean such an outage will never recur.
Appendix: About Regular Expression Backtracking

To fully understand how (?:(?:\"|'|\]|\}|\\|\d|(?:nan|infinity|true|false|null|undefined|symbol|math)|\`|\-|\+)+[)]*;?((?:\s|-|~|!|{}|\|\||\+)*.*(?:.*=.*))) caused CPU exhaustion you need to understand a little about how a standard regular expression engine works. The critical part is .*(?:.*=.*). The (?: and matching ) are a non-capturing group (i.e. the expression inside the parentheses is grouped together as a single expression). For the purposes of discussing why this pattern causes CPU exhaustion we can safely ignore it and treat the pattern as .*.*=.*. When reduced to this, the pattern obviously looks unnecessarily complex; but what's important is that any "real-world" expression (like the complex ones in our WAF rules) that asks the engine to "match anything followed by anything" can lead to catastrophic backtracking. Here’s why.

In a regular expression, . means match a single character and .* means match zero or more characters greedily (i.e. match as much as possible), so .*.*=.* means match zero or more characters, then match zero or more characters, then find a literal = sign, then match zero or more characters.

Consider the test string x=x. This will match the expression .*.*=.*. The .*.* before the equals sign can match the first x (one of the .* matches the x, the other matches zero characters). The .* after the = matches the final x.

It takes 23 steps for this match to happen. The first .* in .*.*=.* acts greedily and matches the entire x=x string. The engine moves on to consider the next .*. There are no more characters left to match, so the second .* matches zero characters (that’s allowed). Then the engine moves on to the =. As there are no characters left to match (the first .* having consumed all of x=x) the match fails.

At this point the regular expression engine backtracks. It returns to the first .* and matches it against x= (instead of x=x) and then moves on to the second .*.
That .* matches the second x and now there are no more characters left to match. So when the engine tries to match the = in .*.*=.* the match fails. The engine backtracks again.

This time it backtracks so that the first .* is still matching x= but the second .* no longer matches x; it matches zero characters. The engine then moves on to try to find the literal = in the .*.*=.* pattern but it fails (because it was already matched against the first .*). The engine backtracks again.

This time the first .* matches just the first x. But the second .* acts greedily and matches =x. You can see what’s coming. When it tries to match the literal = it fails and backtracks again.

The first .* still matches just the first x. Now the second .* matches just =. But, you guessed it, the engine can’t match the literal = because the second .* matched it. So the engine backtracks again. Remember, this is all to match a three character string.

Finally, with the first .* matching just the first x and the second .* matching zero characters, the engine is able to match the literal = in the expression with the = in the string. It moves on and the final .* matches the final x.

23 steps to match x=x. Here’s a short video of that using the Perl Regexp::Debugger showing the steps and backtracking as they occur.

That’s a lot of work, but what happens if the string is changed from x=x to x=xx? This time it takes 33 steps to match. And if the input is x=xxx it takes 45. That’s not linear. Here’s a chart showing matching from x=x to x=xxxxxxxxxxxxxxxxxxxx (20 x’s after the =). With 20 x’s after the = the engine takes 555 steps to match! (Worse, if the x= was missing, so the string was just 20 x’s, the engine would take 4,067 steps to find the pattern doesn’t match.)

This video shows all the backtracking necessary to match x=xxxxxxxxxxxxxxxxxxxx:

That’s bad because as the input size goes up the match time goes up super-linearly. But things could have been even worse with a slightly different regular expression.
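The superlinear growth can be reproduced with a toy backtracking matcher. The sketch below supports only literals, . and * (enough for .*.*=.*) and counts every match attempt; its exact counts differ from the engine used in the post because the bookkeeping differs, but the shape of the growth is the same:

```python
def backtrack_match(pattern, text):
    """Match `pattern` (literals, '.', and '*' applied to the previous
    atom) against all of `text` using naive greedy backtracking,
    returning (matched, steps) where steps counts match attempts."""
    steps = 0

    def match(p, t):
        nonlocal steps
        steps += 1
        if p == len(pattern):
            return t == len(text)
        if p + 1 < len(pattern) and pattern[p + 1] == '*':
            # Greedy: consume as many characters as possible, then
            # back off one at a time -- this is the backtracking.
            i = t
            while i < len(text) and pattern[p] in ('.', text[i]):
                i += 1
            while i >= t:
                if match(p + 2, i):
                    return True
                i -= 1
            return False
        if t < len(text) and pattern[p] in ('.', text[t]):
            return match(p + 1, t + 1)
        return False

    return match(0, 0), steps

# Step counts grow superlinearly with the input length.
for n in (1, 5, 10, 20):
    matched, steps = backtrack_match('.*.*=.*', 'x=' + 'x' * n)
    print(n, matched, steps)
```

Doubling the number of x's after the = roughly quadruples the attempt count in this toy engine, illustrating the quadratic behaviour of .*.*=.* on a successful match.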
Suppose it had been .*.*=.*; (i.e. there’s a literal semicolon at the end of the pattern). This could easily have been written to try to match an expression like foo=bar;.

This time the backtracking would have been catastrophic. To match x=x takes 90 steps instead of 23. And the number of steps grows very quickly. Matching x= followed by 20 x’s takes 5,353 steps. Here’s the corresponding chart. Look carefully at the Y-axis values compared to the previous chart.

To complete the picture, here are all 5,353 steps of failing to match x=xxxxxxxxxxxxxxxxxxxx against .*.*=.*;.

Using lazy rather than greedy matches helps control the amount of backtracking that occurs in this case. If the original expression is changed to .*?.*?=.*? then matching x=x takes 11 steps (instead of 23) and so does matching x=xxxxxxxxxxxxxxxxxxxx. That’s because the ? after the .* instructs the engine to match the smallest number of characters first before moving on.

But laziness isn’t the total solution to this backtracking behaviour. Changing the catastrophic example .*.*=.*; to .*?.*?=.*?; doesn’t change its run time at all. x=x still takes 90 steps and x= followed by 20 x’s still takes 5,353 steps.

The only real solution, short of fully re-writing the pattern to be more specific, is to move away from a regular expression engine with this backtracking mechanism. Which we are doing within the next few weeks.
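The semantic difference between greedy and lazy quantifiers is easy to see with capture groups in any backtracking engine; a quick Python illustration:

```python
import re

# Greedy: the first .* grabs as much as it can, so backtracking makes
# the match split at the *last* '=' in the string.
print(re.fullmatch(r'(.*)=(.*)', 'a=b=c').groups())   # → ('a=b', 'c')

# Lazy: .*? matches as little as possible, so the split happens at the
# *first* '='.
print(re.fullmatch(r'(.*?)=(.*)', 'a=b=c').groups())  # → ('a', 'b=c')
```

As noted above, laziness only changes which alternative the engine tries first; it does not bound the total amount of backtracking, so it is not a cure for catastrophic patterns.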
The paper describes a mechanism for converting a regular expression into an NFA (non-deterministic finite automaton) and then following the state transitions in the NFA using an algorithm that executes in time linear in the size of the string being matched against.

Thompson’s paper doesn’t actually talk about NFAs, but the linear-time algorithm is clearly explained and an ALGOL-60 program that generates assembly language code for the IBM 7094 is presented. The implementation may be arcane but the idea it presents is not.

Here’s what the .*.*=.* regular expression would look like when diagrammed in a similar manner to the pictures in Thompson’s paper.

Figure 0 has five states starting at 0. There are three loops which begin with the states 1, 2 and 3. These three loops correspond to the three .* in the regular expression. The three lozenges with dots in them match a single character. The lozenge with an = sign in it matches the literal = sign. State 4 is the ending state; if reached, the regular expression has matched.

To see how such a state diagram can be used to match the regular expression .*.*=.* we’ll examine matching the string x=x. The program starts in state 0 as shown in Figure 1. The key to making this algorithm work is that the state machine is in multiple states at the same time. The NFA will take every transition it can, simultaneously.

Even before it reads any input, it immediately transitions to both states 1 and 2 as shown in Figure 2.

Looking at Figure 2 we can see what happens when it considers the first x in x=x. The x can match the top dot by transitioning from state 1 and back to state 1. Or the x can match the dot below it by transitioning from state 2 and back to state 2.

So after matching the first x in x=x the states are still 1 and 2. It’s not possible to reach state 3 or 4 because a literal = sign is needed.

Next the algorithm considers the = in x=x.
Much like the x before it, it can be matched by either of the top two loops transitioning from state 1 to state 1 or state 2 to state 2, but additionally the literal = can be matched and the algorithm can transition from state 2 to state 3 (and immediately state 4). That’s illustrated in Figure 3.

Next the algorithm reaches the final x in x=x. From states 1 and 2 the same transitions are possible back to states 1 and 2. From state 3 the x can match the dot on the right and transition back to state 3. At that point every character of x=x has been considered, and because state 4 has been reached the regular expression matches that string. Each character was processed once, so the algorithm was linear in the length of the input string. And no backtracking was needed.

It might also be obvious that once state 4 was reached (after x= was matched) the regular expression had matched and the algorithm could terminate without considering the final x at all.

This algorithm is linear in the size of its input.
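The simultaneous-state simulation described above can be sketched directly for .*.*=.*, with states numbered as in the figures (1 and 2 are the two leading .* loops, 3 is the trailing .* loop after the literal =, and 4 is the accepting state). This is an illustrative reconstruction of the idea, not Thompson's original code:

```python
def nfa_match(text):
    """Linear-time match of .*.*=.* by tracking the set of NFA states
    the machine could be in after each character."""

    def eps_closure(states):
        # Epsilon transitions: start (0) -> 1 -> 2, and 3 -> accept (4),
        # because each .* may match zero characters.
        out = set(states)
        if 0 in out:
            out |= {1, 2}
        if 1 in out:
            out.add(2)
        if 3 in out:
            out.add(4)
        return out

    states = eps_closure({0})
    for ch in text:
        nxt = set()
        if 1 in states:
            nxt.add(1)        # first .* loop consumes any character
        if 2 in states:
            nxt.add(2)        # second .* loop consumes any character
            if ch == '=':
                nxt.add(3)    # the literal '=' transition
        if 3 in states:
            nxt.add(3)        # trailing .* loop consumes any character
        states = eps_closure(nxt)
    return 4 in states

print(nfa_match('x=x'))    # → True
print(nfa_match('xxxx'))   # → False
```

Each character is examined exactly once and the state set is bounded by the (fixed) number of NFA states, so the run time is linear in the input length, with no backtracking at all.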

DC BLOX Opens Birmingham Data Center

My Host News -

ATLANTA — DC BLOX, a multi-tenant data center provider delivering the infrastructure and connectivity essential to power today’s digital business, announces the opening of its fourth data center facility in Birmingham, Alabama. The first phase of the facility, now customer ready, delivers up to 5MW of power, 18,000 square feet of white space and 13,000 square feet of office space, featuring conference rooms, demo space, hoteling cubes and workstations. This location is DC BLOX’s flagship facility and is capable of expanding to over 200,000 square feet with over 60MW of critical IT load to serve as a technology and innovation hub for the surrounding area. As data centers move toward the edge of the network to accommodate a growing number of applications demanding local processing and storage, low-latency, high-capacity connectivity is a key component of this evolving architecture. DC BLOX’s Birmingham data center offers access to the company’s full breadth of solutions including cloud storage, colocation and rich connectivity to support enterprise, government and education customers, as well as managed service providers, Software-as-a-Service (SaaS) companies and content providers that do business in the Southeast. The facility is part of DC BLOX’s private, high-speed network fabric, which provides 100Gb+ bandwidth, low-latency connections to Internet Exchanges, access to numerous carriers across data centers and secure cloud connectivity. “We live in a digital age, and the world is not standing still. DC BLOX’s new data center is certainly a welcome addition to the Birmingham community,” said Alabama Governor Kay Ivey. “It will connect the city with high-performance networks to ensure business continuity, and ultimately, it will drive the digital economy. In addition to elevating Birmingham’s technological capabilities, the new data center will bring several high-paying jobs for Alabamians.
DC BLOX’s efforts are a much-appreciated investment into Alabama’s future success, and their increased presence in this great state will help propel us forward.” DC BLOX offers the highest data center performance, reliability and connectivity available in the markets it serves. The company is dedicated to meeting the infrastructure needs of businesses and communities in emerging and underserved markets throughout the Southeastern U.S., where robust connectivity and Tier 3 data center availability are limited. This new local data center enables Birmingham-area businesses and government entities to offload the cost and complexity of managing their own data centers and provides the connectivity needed to address an increasingly distributed IT ecosystem. The grand opening of the Birmingham, AL data center is taking place on July 11th. “With construction designed to withstand 150+ mph winds, N+1 power and cooling systems, a fully-protected private network and enhanced security developed to accommodate Controlled Unclassified Information (CUI) standards, the Birmingham facility is designed for security and reliability,” states Mark Masi, DC BLOX Chief Operating Officer. “Our data hall is designed to accommodate cabinets of varying densities and can be adapted for custom solutions as well.” “We are thrilled to be joining the Birmingham community,” adds Jeff Uphues, Chief Executive Officer of DC BLOX. “The State of Alabama and the City of Birmingham care deeply about the prosperity of their citizens and are working to bring in companies like ours to invest in their communities and bring jobs to the region. They understand that a data center is core infrastructure that attracts other technology-dependent companies, and we couldn’t be more excited to be a part of it.”

About DC BLOX

DC BLOX is a multi-tenant data center provider delivering the infrastructure and connectivity essential to power today’s digital business.
DC BLOX’s software-defined network services enable access to a wealth of providers, partners and platforms to businesses across the Southeast. DC BLOX’s connected data centers are in Atlanta, GA; Huntsville, AL; Chattanooga, TN, and Birmingham, AL. For more information, please visit

Iron Mountain Expands Data Services to Support Amazon Web Services

My Host News -

BOSTON – Iron Mountain Incorporated (NYSE: IRM), a global leader in storage and information management services with more than 42,000 data management customers and 120 exabytes in storage, today announced an expansion of its Data Restoration and Migration Services (DRMS) for customers seeking to migrate tape-based data into Amazon Web Services (AWS). With the expanded offering, companies will have the ability to modernize their data management strategy while maintaining high levels of security, access, and governance. In addition, Iron Mountain announced it has joined the AWS Partner Network (APN) as a Select Technology Partner, enabling customers to accelerate their digital transformation journey with AWS. With DRMS, customers will have the ability to seamlessly migrate data stored on tape to Amazon Simple Storage Service (Amazon S3), Amazon S3 Glacier (S3 Glacier), or Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term data retention and storage. Customers can work with Iron Mountain to develop an optimal strategy for migration of their tape-based data to multiple AWS storage classes, either on demand or via bulk transfer. Customers can leverage tape-based data, protected and managed by Iron Mountain for migration to AWS, to further build big-data, artificial intelligence/machine learning-powered applications and solutions. Iron Mountain partners with customers to formulate a data strategy that helps save time and money and maximizes the value of tape-based data migrated to AWS. The expanded DRMS accelerates IT modernization, aligning with an organization’s digital transformation initiative while providing data protection to meet regulatory and compliance requirements. “Bringing together DRMS and AWS delivers on our strategy to align our services with companies like AWS, bringing tremendous value to our mutual customers,” said Tom Fetters, vice president and general manager, Data Protection, Iron Mountain.
“As more organizations embrace cloud as part of their digital transformation journey, they seek partners who can help them solve the data protection, integration, migration, access and governance challenges they encounter along the way. Customers understand that their tapes continue to be an essential data source in this journey. Our expanded offering delivers compelling capabilities to AWS customers, leveraging Iron Mountain’s proven experience and trusted expertise in helping store, secure, access, and derive value from their data.” To learn more about Iron Mountain’s Data Restoration and Migration Services, which feature on-site or off-site support, data shuttle or network transfer options, and hosting within an Iron Mountain Data Center or storage environment of choice, visit the Iron Mountain website.

About Iron Mountain

Iron Mountain Incorporated (NYSE: IRM), founded in 1951, is the global leader for storage and information management services. Trusted by more than 225,000 organizations around the world, and with a real estate network of more than 90 million square feet across more than 1,450 facilities in approximately 50 countries, Iron Mountain stores and protects billions of valued assets, including critical business information, highly sensitive data, and cultural and historical artifacts. Providing solutions that include information management, digital transformation, secure storage, secure destruction, data centers, cloud services, and art storage and logistics, Iron Mountain helps customers lower cost and risk, comply with regulations, recover from disaster, and enable a digital way of working. Visit the Iron Mountain website for more information.
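The announcement names three Amazon S3 storage classes as migration targets. As a minimal, hypothetical sketch (not Iron Mountain's actual tooling), the choice between them comes down to retrieval latency and retention length; the `storage_class_for()` helper and the bucket name below are illustrative assumptions:

```python
# Hypothetical sketch: routing restored tape data to an S3 storage class.
# The mapping rule of thumb and helper name are illustrative, not DRMS code.

def storage_class_for(first_byte_latency_hours: float, retention_years: int) -> str:
    """Map access/retention needs to an Amazon S3 storage class name."""
    if first_byte_latency_hours < 1:
        return "STANDARD"        # data that must stay immediately accessible
    if retention_years >= 7:
        return "DEEP_ARCHIVE"    # long-term retention, rarely retrieved
    return "GLACIER"             # archival, hours-scale retrieval acceptable

# With boto3 installed and AWS credentials configured, an upload might look like:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.upload_file("restored/tape0001.img", "example-bucket", "tape0001.img",
#                  ExtraArgs={"StorageClass": storage_class_for(12, 10)})

print(storage_class_for(12, 10))  # → DEEP_ARCHIVE
```

The string values match the `StorageClass` names the S3 API accepts, which is why a helper like this can feed the upload call directly.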

iomart Approved to Supply Cloud Services on G-Cloud 11

My Host News -

GLASGOW – iomart Group plc (AIM: IOM) has been approved to supply a comprehensive portfolio of managed cloud services on the G-Cloud 11 framework, which will help public sector bodies reduce management costs and simplify compliance. In all, iomart, together with its digital transformation consultancy SystemsUp and storage specialist Cristie Data, has been approved to deliver 29 separate cloud services across the three lots of hosting, support and software, supporting central government, local authorities, healthcare, education and blue light services as they continue to transform the way they deliver their services to the public. Declan Sharpe, UK Sales Director, iomart, said: “We have the people, skills, knowledge, technologies, infrastructure and partnerships to offer public sector organisations a one-stop shop for cloud strategy, implementation, security and management. Instead of having to deal with multiple suppliers, we offer a single point of access to a large portfolio of cloud services, producing a better experience and genuine economies of scale for any public sector body that chooses to work with us.” The approved G-Cloud 11 services from iomart include Backup as a Service, Disaster Recovery as a Service, Desktop as a Service, IaaS Private Cloud, IaaS Public Cloud, Managed AWS, Managed Azure, Microsoft CSP for Azure and Office 365, and Managed Security. The addition this year of Microsoft CSP for Azure and Office 365 means iomart can also help public sector customers leverage funding from Microsoft for proofs of concept. Through its consultancy SystemsUp, iomart can help public sector organisations with cloud strategy and security, application optimisation, workplace collaboration and data analytics, plus secure internet connectivity via partner ZScaler.
Through its storage brand Cristie Data, iomart offers a number of transitional cloud services for public sector organisations that are still on-premise, as well as web security and Office 365 security software from partner Barracuda. iomart has been an approved supplier since the early days of the G-Cloud framework. To find out more, search for iomart on the Digital Marketplace.

About iomart

For over 20 years iomart Group plc (AIM: IOM) has been helping growing organisations to maximise the flexibility, cost effectiveness and scalability of the cloud. From data centres we own and operate in the U.K. and from connected facilities across the globe, we deliver 24/7 storage and protection for data across the most complex of cloud and legacy infrastructures. Our team of over 400 dedicated staff work with our customers from the strategy stage through to delivery and ongoing management, to implement secure cloud solutions that deliver on their business requirements. For more information, visit the iomart website.

Hivelocity’s Global Edge Compute Expansion Starts in Frankfurt, Germany

My Host News -

Hivelocity, a leading provider of IaaS, announced today the availability of its bare-metal edge compute services in Frankfurt, Germany. Frankfurt is the first in a line of several new European and APAC locations Hivelocity will be introducing over the coming months as it continues to expand the reach of its edge compute platform. Frankfurt joins Dallas, New York City, Los Angeles, Tampa, Miami, Atlanta and Seattle in the list of markets in which Hivelocity offers its suite of infrastructure services. Hivelocity’s platform lets users instantly deploy hundreds of Linux and Windows dedicated servers in any of these eight global markets. Once bare metal is deployed, users can view server health and resource usage data, establish data recovery points, perform OS reloads, interact with technical support and much more. Each server can be either self-managed or managed, with the latter including proactive security patches and monitoring. Hivelocity’s expansion plans include adding edge compute to new markets like London, Paris, Amsterdam, Singapore, Sydney and Sao Paulo over the next three months. “With customers hailing from over 130 countries, Hivelocity has long served a global market. As our customers’ businesses have grown and matured, so have their needs to optimize and scale the performance of their applications all over the world. By enabling our customers to deploy their compute and storage resources wherever in the world their end users are best served, we are providing them with a much better opportunity to maximize the end user experience and their own bottom line,” says Hivelocity CTO Ben Linton. With more and more businesses recognizing the benefits of having their compute nodes at the edge, there has been a recent surge in upstart edge providers. Hivelocity believes its 17 years of IaaS experience and its obsessive focus on customer support give it a leg up on competitors.
“Our mantra has always been to be the best service provider our customers have ever worked with. We maintain a Net Promoter Score of 74+, which is a testament to the level of satisfaction our customers feel and, frankly, head and shoulders above our competitors. Whether a business needs to deploy 1,000 servers or just 10 servers around the globe, you can guarantee they are going to need some help and technical support along the way. Most of our competitors are new to the arena, and all of their capital is invested in developers and hardware. We spend a lot of money on developers and hardware too, but we also employ nearly 100 technicians and engineers who work inside our data centers 24/7, providing the most exceptional technical support in the industry. Our support solutions involve experts with years of experience working with you in real time; theirs is to have you fix it yourself or reload the OS,” says Hivelocity COO Steve Eschweiler. Hivelocity was founded in 2002 and serves roughly 6,000 businesses out of its 12 North American data centers. For more information, visit the Hivelocity website.
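The Net Promoter Score cited in the quote is a standard metric: the percentage of promoters (survey scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch, with illustrative sample data:

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
# computed over 0-10 survey responses. Sample data is illustrative.

def nps(scores):
    """Return the Net Promoter Score for a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

sample = [10, 9, 9, 8, 10, 7, 9, 3, 10, 9]  # hypothetical survey responses
print(nps(sample))  # → 60 (70% promoters - 10% detractors)
```

A score of 74+ therefore implies that promoters outnumber detractors by at least 74 percentage points, which is unusually high for the hosting industry.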

