Corporate Blogs

Deeper Connection with the Local Tech Community in India

CloudFlare Blog -

On June 6th, 2019, Cloudflare hosted its first-ever customer event in a beautiful, green district of Bangalore, India. More than 60 people, including executives, developers, engineers, and even university students, attended the half-day forum.

The forum kicked off with a series of presentations on the current DDoS landscape, cyber security trends, serverless computing, and Cloudflare Workers. Trey Quinn, Cloudflare Global Head of Solution Engineering, gave a brief introduction to the evolution of edge computing. We also invited business and thought leaders across various industries to share their insights and best practices on cyber security and performance strategy. Some of the keynote and panel sessions included live demos from our customers.

At this event, guests gained first-hand knowledge of the latest technology, along with insider tactics to help them protect their business, accelerate performance, and identify quick wins in a complex internet environment. To conclude the event, we arranged a dinner for guests to network and enjoy a cool summer evening.

Through this event, Cloudflare has strengthened its connection with the local tech community. The event's success is inseparable from Cloudflare's constant improvement and the continuous support of our customers in India. As the old saying goes, भारत महान है (India is great). India is an important market in the region, and Cloudflare will increase its investment and engagement there to provide better services and a better user experience for customers in India.

New – VPC Traffic Mirroring – Capture & Inspect Network Traffic

Amazon Web Services Blog -

Running a complex network is not an easy job. In addition to simply keeping it up and running, you need to keep an ever-watchful eye out for unusual traffic patterns or content that could signify a network intrusion, a compromised instance, or some other anomaly.

VPC Traffic Mirroring
Today we are launching VPC Traffic Mirroring. This is a new feature that you can use with your existing Virtual Private Clouds (VPCs) to capture and inspect network traffic at scale. This will allow you to:

Detect Network & Security Anomalies – You can extract traffic of interest from any workload in a VPC and route it to the detection tools of your choice. You can detect and respond to attacks more quickly than is possible with traditional log-based tools.

Gain Operational Insights – You can use VPC Traffic Mirroring to get the network visibility and control that will let you make security decisions that are better informed.

Implement Compliance & Security Controls – You can meet regulatory & compliance requirements that mandate monitoring, logging, and so forth.

Troubleshoot Issues – You can mirror application traffic internally for testing and troubleshooting. You can analyze traffic patterns and proactively locate choke points that will impair the performance of your applications.

You can think of VPC Traffic Mirroring as a “virtual fiber tap” that gives you direct access to the network packets flowing through your VPC. As you will soon see, you can choose to capture all traffic or you can use filters to capture the packets that are of particular interest to you, with an option to limit the number of bytes captured per packet. You can use VPC Traffic Mirroring in a multi-account AWS environment, capturing traffic from VPCs spread across many AWS accounts and then routing it to a central VPC for inspection. You can mirror traffic from any EC2 instance that is powered by the AWS Nitro system (A1, C5, C5d, M5, M5a, M5d, R5, R5a, R5d, T3, and z1d as I write this).

Getting Started with VPC Traffic Mirroring
Let’s review the key elements of VPC Traffic Mirroring and then set it up:

Mirror Source – An AWS network resource that exists within a particular VPC, and that can be used as the source of traffic. VPC Traffic Mirroring supports the use of Elastic Network Interfaces (ENIs) as mirror sources.

Mirror Target – An ENI or Network Load Balancer that serves as a destination for the mirrored traffic. The target can be in the same AWS account as the Mirror Source, or in a different account for implementation of the central-VPC model that I mentioned above.

Mirror Filter – A specification of the inbound or outbound (with respect to the source) traffic that is to be captured (accepted) or skipped (rejected). The filter can specify a protocol, ranges for the source and destination ports, and CIDR blocks for the source and destination. Rules are numbered, and processed in order within the scope of a particular Mirror Session.

Traffic Mirror Session – A connection between a mirror source and target that makes use of a filter. Sessions are numbered, evaluated in order, and the first match (accept or reject) is used to determine the fate of the packet. A given packet is sent to at most one target.

You can set this up using the VPC Console, EC2 CLI, or the EC2 API, with CloudFormation support in the works. I’ll use the Console.
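As a sketch of the EC2 API route mentioned above (the console walkthrough continues below), here is a minimal example using the AWS SDK for JavaScript. The ENI IDs, port, and CIDR ranges are placeholders for illustration, not values from this post.

// Minimal sketch: create a Traffic Mirror target, filter, rule, and session
// via the EC2 API. All IDs and CIDRs below are placeholders.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

async function setUpMirroring() {
  // Mirror Target: the ENI (or NLB) that receives mirrored packets
  const target = await ec2.createTrafficMirrorTarget({
    NetworkInterfaceId: 'eni-0123456789abcdef0',   // placeholder destination ENI
    Description: 'Mirror target for inspection tools'
  }).promise();

  // Mirror Filter plus one inbound accept rule (TCP port 443 in this sketch)
  const filter = await ec2.createTrafficMirrorFilter({
    Description: 'Capture inbound HTTPS'
  }).promise();
  await ec2.createTrafficMirrorFilterRule({
    TrafficMirrorFilterId: filter.TrafficMirrorFilter.TrafficMirrorFilterId,
    TrafficDirection: 'ingress',
    RuleNumber: 100,                    // rules are processed in numeric order
    RuleAction: 'accept',
    Protocol: 6,                        // TCP
    DestinationPortRange: { FromPort: 443, ToPort: 443 },
    SourceCidrBlock: '0.0.0.0/0',
    DestinationCidrBlock: '0.0.0.0/0'
  }).promise();

  // Mirror Session: connects the source ENI to the target through the filter
  const session = await ec2.createTrafficMirrorSession({
    NetworkInterfaceId: 'eni-0fedcba9876543210',   // placeholder source ENI
    TrafficMirrorTargetId: target.TrafficMirrorTarget.TrafficMirrorTargetId,
    TrafficMirrorFilterId: filter.TrafficMirrorFilter.TrafficMirrorFilterId,
    SessionNumber: 1                    // sessions are evaluated in order
  }).promise();
  console.log(session.TrafficMirrorSession.TrafficMirrorSessionId);
}

setUpMirroring().catch(console.error);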
I already have ENIs that I will use as my mirror source and destination (in a real-world use case I would probably use an NLB destination). The MirrorTestENI_Source and MirrorTestENI_Destination ENIs are already attached to suitable EC2 instances. I open the VPC Console and scroll down to the Traffic Mirroring items, then click Mirror Targets. I click Create traffic mirror target, enter a name and description, choose the Network Interface target type, and select my ENI from the menu. I add a Blog tag to my target, as is my practice, and click Create. My target is created and ready to use.

Now I click Mirror Filters and Create traffic mirror filter. I create a simple filter that captures inbound traffic on three ports (22, 80, and 443), and click Create. Again, it is created and ready to use in seconds.

Next, I click Mirror Sessions and Create traffic mirror session. I create a session that uses MirrorTestENI_Source, MainTarget, and MyFilter, allow AWS to choose the VXLAN network identifier, and indicate that I want the entire packet mirrored. And I am all set. Traffic from my mirror source that matches my filter is encapsulated as specified in RFC 7348 and delivered to my mirror target. I can then use tools like Suricata to capture, analyze, and visualize it.

Things to Know
Here are a couple of things to keep in mind:

Sessions Per ENI – You can have up to three active sessions on each ENI.

Cross-VPC – The source and target ENIs can be in distinct VPCs as long as they are peered to each other or connected through Transit Gateway.

Scaling & HA – In most cases you should plan to mirror traffic to a Network Load Balancer and then run your capture & analysis tools on an Auto Scaled fleet of EC2 instances behind it.

Bandwidth – The replicated traffic generated by each instance will count against the overall bandwidth available to the instance. If traffic congestion occurs, mirrored traffic will be dropped first.

From our Partners
During the beta test of VPC Traffic Mirroring, a collection of AWS partners were granted early access and provided us with tons of helpful feedback. Here are some of the blog posts that they wrote in order to share their experiences:

Big Switch Networks – AWS Traffic Monitoring with Big Monitoring Fabric.
Blue Hexagon – Unleashing Deep Learning-Powered Threat Protection for AWS.
Corelight – Bring Network Security Monitoring to the Cloud with Corelight and Amazon VPC Traffic Mirroring.
cPacket Networks – It’s Cloudy Today with a High Chance of Packets.
ExtraHop – ExtraHop brings Network Detection & Response to the cloud-first enterprise with Amazon Web Services.
Fidelis – Expanding Traffic Visibility Natively in AWS with Fidelis Network Sensors and Amazon VPC Traffic Mirroring.
Flowmon – Flowmon Taking Advantage of Amazon VPC Traffic Mirroring.
Gigamon – Gigamon GigaVUE Cloud Suite for Amazon Web Services and New Amazon VPC Traffic Mirroring.
IronNet – IronDefense and IronDome Support for Amazon VPC Traffic Mirroring.
JASK – Amazon VPC Traffic Mirroring.
Netscout – AWS Traffic Mirroring Contributes to NETSCOUT’s Smart Data Intelligence.
Nubeva – Decrypted Visibility With Amazon VPC Traffic Mirroring.
Palo Alto Networks – See the Unseen in AWS Mirrored Traffic With the VM-Series.
Riverbed – SteelCentral AppResponse Cloud to Support New Amazon VPC Traffic Mirroring.
Vectra – Securing your AWS workloads with Vectra Cognito.
Now Available
VPC Traffic Mirroring is available now and you can start using it today in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia). Support for those regions will be added soon. You pay an hourly fee (starting at $0.015 per hour) for each mirror source; see the VPC Pricing page for more info.

— Jeff;

Introduction to BigCommerce for WordPress, Important Concepts

Nexcess Blog -

BigCommerce has been a successful ecommerce SaaS platform for a number of years. Merchants ranging from small mom-and-pop stores to enterprise-level businesses doing millions of dollars in sales every month depend on BigCommerce to keep their stores running securely. Many of those merchants have also chosen to run the content part of their… Continue reading →

Does Price Matter with Web Hosting? Is Cheap Hosting the Best?

InMotion Hosting Blog -

There’s this idea that if you want quality, you have to pay for it. Because of that, many people look down their noses at cheap hosting plans for websites. The reality is that there are a lot of providers that offer cheap (or even free) hosting plans that are complete scams. These offers are meant to sucker people into substandard programs. But these providers are just giving a bad name to the legitimate hosting companies that are out there. Continue reading Does Price Matter with Web Hosting? Is Cheap Hosting the Best? at The Official InMotion Hosting Blog.

The Biggest Headache in WordPress, Solved

WP Engine -

If you build websites with WordPress, the thought of updating your plugins probably elicits some stress and irritation. Plugin maintenance can be a time-consuming, tedious process, and unintended consequences, like plugin compatibility issues and potential downtime, make it all the more unnerving. The problem is, keeping your plugins regularly updated is critical to the security… The post The Biggest Headache in WordPress, Solved appeared first on WP Engine.

HubSpot and WP Engine Partner to Provide Powerful Free Marketing Tools to WordPress Users

WP Engine -

CAMBRIDGE, MA and AUSTIN, TX – June 25, 2019 – HubSpot, a leading growth platform, and WP Engine, the WordPress Digital Experience Platform, today announced that the newly updated HubSpot plugin for WordPress will be integrated with all of WP Engine’s StudioPress themes. When used together, these two platforms have the potential to unleash more… The post HubSpot and WP Engine Partner to Provide Powerful Free Marketing Tools to WordPress Users appeared first on WP Engine.

Get Cloudflare insights in your preferred analytics provider

CloudFlare Blog -

Today, we’re excited to announce our partnerships with Chronicle Security, Datadog, Elastic, Looker, Splunk, and Sumo Logic to make it easy for our customers to analyze Cloudflare logs and metrics using their analytics provider of choice. In a joint effort, we have developed pre-built dashboards that are available as a Cloudflare App in each partner’s platform. These dashboards help customers better understand events and trends from their websites and applications on our network.

Cloudflare insights in the tools you're already using
Data analytics is a frequent theme in conversations with Cloudflare customers. Our customers want to understand how Cloudflare speeds up their websites and saves them bandwidth, to see their fastest and slowest pages ranked, and to be alerted if they are under attack. While providing insights is a core tenet of Cloudflare's offering, the data analytics market has matured and many of our customers have started using third-party providers to analyze data—including Cloudflare logs and metrics. By aggregating data from multiple applications, infrastructure, and cloud platforms in one dedicated analytics platform, customers can create a single pane of glass and benefit from better end-to-end visibility over their entire stack.

While these analytics platforms provide great benefits in terms of functionality and flexibility, they can take significant time to configure: from ingesting logs, to specifying data models that make data searchable, all the way to building dashboards to get the right insights out of the raw data. We see this as an opportunity to partner with the companies our customers are already using to offer a better and more integrated solution.

Providing flexibility through easy-to-use integrations
To address these complexities of aggregating, managing, and displaying data, we have developed a number of product features and partnerships to make it easier to get insights out of Cloudflare logs and metrics. In February we announced Logpush, which allows customers to automatically push Cloudflare logs to Google Cloud Storage and Amazon S3. Both of these cloud storage solutions are supported by the major analytics providers as a source for collecting logs, making it possible to get Cloudflare logs into an analytics platform with just a few clicks. With today's announcement of Cloudflare's Analytics Partnerships, we're releasing a Cloudflare App—a set of pre-built and fully customizable dashboards—in each partner’s app store or integrations catalogue to make the experience even more seamless.

By using these dashboards, customers can immediately analyze events and trends of their websites and applications without first needing to wade through individual log files and build custom searches. The dashboards feature all 55+ fields available in Cloudflare logs and include 90+ panels with information about the performance, security, and reliability of customers’ websites and applications.

Ultimately, we want to provide flexibility to our customers and make it easier to use Cloudflare with the analytics tools they already use. Improving our customers’ ability to get better data and insights continues to be a focus for us, so we’d love to hear about what tools you’re using—tell us via this brief survey. To learn more about each of our partnerships and how to get access to the dashboards, please visit our developer documentation or contact your Customer Success Manager.
Similarly, if you’re an analytics provider who is interested in partnering with us, use the contact form on our analytics partnerships page to get in touch.
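As a concrete illustration of the Logpush step mentioned above, here is a minimal sketch that creates a Logpush job delivering HTTP request logs to an S3 bucket through Cloudflare's API. The zone ID, credentials, bucket path, and field list are placeholders, and the exact options supported may differ by account, so check the Logpush documentation before relying on them.

// Minimal sketch (assumptions: zone ID, bucket, and credentials are placeholders).
// Creates a Logpush job that pushes HTTP request logs to S3.
// Requires Node 18+ for the global fetch API.
const ZONE_ID = 'YOUR_ZONE_ID';

async function createLogpushJob() {
  const res = await fetch(`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/logpush/jobs`, {
    method: 'POST',
    headers: {
      'X-Auth-Email': 'user@example.com',   // placeholder credentials
      'X-Auth-Key': 'YOUR_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      name: 'http-requests-to-s3',
      destination_conf: 's3://example-bucket/cloudflare-logs?region=us-east-1',
      logpull_options: 'fields=ClientIP,ClientRequestHost,EdgeResponseStatus,EdgeStartTimestamp&timestamps=rfc3339',
      enabled: true
    })
  });
  console.log(await res.json());
}

createLogpushJob().catch(console.error);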

The Serverlist: Serverless makes a splash at JSConf EU and JSConf Asia

CloudFlare Blog -

Check out our sixth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend. Sign up to have The Serverlist sent directly to your mailbox.

AWS Security Hub Now Generally Available

Amazon Web Services Blog -

I’m a developer, or at least that’s what I tell myself while coming to terms with being a manager. I’m definitely not an infosec expert. I’ve been paged more than once in my career because something I wrote or configured caused a security concern. When systems enable frequent deploys and remove gatekeepers for experimentation, sometimes a non-compliant resource is going to sneak by. That’s why I love tools like AWS Security Hub, a service that enables automated compliance checks and aggregated insights from a variety of services. With guardrails like these in place to make sure things stay on track, I can experiment more confidently. And with a single place to view compliance findings from multiple systems, infosec feels better about letting me self-serve.

With cloud computing, we have a shared responsibility model when it comes to compliance and security. AWS handles the security of the cloud: everything from the security of our data centers up to the virtualization layer and host operating system. Customers handle security in the cloud: the guest operating system, configuration of systems, and secure software development practices.

Today, AWS Security Hub is out of preview and available for general use to help you understand the state of your security in the cloud. It works across AWS accounts and integrates with many AWS services and third-party products. You can also use the Security Hub API to create your own integrations.

Getting Started
When you enable AWS Security Hub, permissions are automatically created via IAM service-linked roles. Automated, continuous compliance checks begin right away. Compliance standards determine these compliance checks and rules. The first compliance standard available is the Center for Internet Security (CIS) AWS Foundations Benchmark. We’ll add more standards this year.

The results of these compliance checks are called findings. Each finding tells you the severity of the issue, which system reported it, which resources it affects, and a lot of other useful metadata. For example, you might see a finding that lets you know that multi-factor authentication should be enabled for a root account, or that there are credentials that haven’t been used for 90 days that should be revoked. Findings can be grouped into insights using aggregation statements and filters.

Integrations
In addition to the compliance standards findings, AWS Security Hub also aggregates and normalizes data from a variety of services. It is a central resource for findings from Amazon GuardDuty, Amazon Inspector, Amazon Macie, and from 30 AWS partner security solutions. AWS Security Hub also supports importing findings from custom or proprietary systems. Findings must be formatted as AWS Security Finding Format JSON objects. Here’s an example of an object I created that meets the minimum requirements for the format. To make it work for your account, switch out the AwsAccountId and the ProductArn. To get your ProductArn for custom findings, replace REGION and ACCOUNT_ID in the following string: arn:aws:securityhub:REGION:ACCOUNT_ID:product/ACCOUNT_ID/default.
{ "Findings": [{ "AwsAccountId": "12345678912", "CreatedAt": "2019-06-13T22:22:58Z", "Description": "This is a custom finding from the API", "GeneratorId": "api-test", "Id": "us-east-1/12345678912/98aebb2207407c87f51e89943f12b1ef", "ProductArn": "arn:aws:securityhub:us-east-1:12345678912:product/12345678912/default", "Resources": [{ "Type": "Other", "Id": "i-decafbad" }], "SchemaVersion": "2018-10-08", "Severity": { "Product": 2.5, "Normalized": 11 }, "Title": "Security Finding from Custom Software", "Types": [ "Software and Configuration Checks/Vulnerabilities/CVE" ], "UpdatedAt": "2019-06-13T22:22:58Z" }] } Then I wrote a quick node.js script that I named importFindings.js to read this JSON file and send it off to AWS Security Hub via the AWS JavaScript SDK. const fs = require('fs'); // For file system interactions const util = require('util'); // To wrap fs API with promises const AWS = require('aws-sdk'); // Load the AWS SDK AWS.config.update({region: 'us-east-1'}); // Create our Security Hub client const sh = new AWS.SecurityHub(); // Wrap readFile so it returns a promise and can be awaited const readFile = util.promisify(fs.readFile); async function getFindings(path) { try { // wait for the file to be read... let fileData = await readFile(path); // ...then parse it as JSON and return it return JSON.parse(fileData); } catch (error) { console.error(error); } } async function importFindings() { // load the findings from our file const findings = await getFindings('./findings.json'); try { // call the AWS Security Hub BatchImportFindings endpoint response = await sh.batchImportFindings(findings).promise(); console.log(response); } catch (error) { console.error(error); } } // Engage! importFindings(); A quick run of node importFindings.js results in { FailedCount: 0, SuccessCount: 1, FailedFindings: [] }. And now I can see my custom finding in the Security Hub console: Custom Actions AWS Security Hub can integrate with response and remediation workflows through the use of custom actions. With custom actions, a batch of selected findings is used to generate CloudWatch events. With CloudWatch Rules, these events can trigger other actions such as sending notifications via a chat system or paging tool, or sending events to a visualization service. First, we open Settings from the AWS Security Console, and select Custom Actions. Add a custom action and note the ARN. Then we create a CloudWatch Rule using the custom action we created as a resource in the event pattern, like this: { "source": [ "aws.securityhub" ], "detail-type": [ "Security Hub Findings - Custom Action" ], "resources": [ "arn:aws:securityhub:us-west-2:123456789012:action/custom/DoThing" ] } Our CloudWatch Rule can have many different kinds of targets, such as Amazon Simple Notification Service (SNS) Topics, Amazon Simple Queue Service (SQS) Queues, and AWS Lambda functions. Once our action and rule are in place, we can select findings, and then choose our action from the Actions dropdown list. This will send the selected findings to Amazon CloudWatch Events. Those events will match our rule, and the event targets will be invoked. Important Notes AWS Config must be enabled for Security Hub compliance checks to run. AWS Security Hub is available in 15 regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. 
California), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai). AWS Security Hub does not transfer data outside of the regions where it was generated. Data is not consolidated across multiple regions. AWS Security Hub is already the type of service that I’ll enable on the majority of the AWS accounts I operate. As more compliance standards become available this year, I expect it will become a standard tool in many toolboxes. A 30-day free trial is available so you can try it out and get an estimate of what your costs would be. As always, we want to hear your feedback and understand how you’re using AWS Security Hub. Stay in touch, and happy building! — Brandon
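As an illustration of the Lambda-function target mentioned under Custom Actions above, here is a minimal sketch of a handler that forwards the findings from a custom-action event to an SNS topic. The topic ARN and message format are assumptions made for the example, not part of the original post.

// Minimal sketch of a CloudWatch Events target for a Security Hub custom action.
// The SNS topic ARN below is a placeholder.
const AWS = require('aws-sdk');
const sns = new AWS.SNS();

exports.handler = async (event) => {
  // For "Security Hub Findings - Custom Action" events, the selected findings
  // are carried in event.detail.findings
  const findings = (event.detail && event.detail.findings) || [];
  const summary = findings
    .map(f => `${f.Title} (severity ${(f.Severity || {}).Normalized})`)
    .join('\n');

  await sns.publish({
    TopicArn: 'arn:aws:sns:us-east-1:123456789012:security-alerts', // placeholder
    Subject: 'Security Hub custom action triggered',
    Message: summary || 'No findings in event'
  }).promise();
};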

AWS Control Tower – Set up & Govern a Multi-Account AWS Environment

Amazon Web Services Blog -

Earlier this month I met with an enterprise-scale AWS customer. They told me that they are planning to go all-in on AWS, and want to benefit from all that we have learned about setting up and running AWS at scale. In addition to setting up a Cloud Center of Excellence, they want to set up a secure environment for teams to provision development and production accounts in alignment with our recommendations and best practices.

AWS Control Tower
Today we are announcing general availability of AWS Control Tower. This service automates the process of setting up a new baseline multi-account AWS environment that is secure, well-architected, and ready to use. Control Tower incorporates the knowledge that AWS Professional Services has gained over the course of thousands of successful customer engagements, and also draws from the recommendations found in our whitepapers, documentation, the Well-Architected Framework, and training. The guidance offered by Control Tower is opinionated and prescriptive, and is designed to accelerate your cloud journey!

AWS Control Tower builds on multiple AWS services including AWS Organizations, AWS Identity and Access Management (IAM) (including Service Control Policies), AWS Config, AWS CloudTrail, and AWS Service Catalog. You get a unified experience built around a collection of workflows, dashboards, and setup steps. AWS Control Tower automates a landing zone to set up a baseline environment that includes:

A multi-account environment using AWS Organizations.
Identity management using AWS Single Sign-On (SSO).
Federated access to accounts using AWS SSO.
Centralized logging from AWS CloudTrail and AWS Config, stored in Amazon S3.
Cross-account security audits using AWS IAM and AWS SSO.

Before diving in, let’s review a couple of key Control Tower terms:

Landing Zone – The overall multi-account environment that Control Tower sets up for you, starting from a fresh AWS account.

Guardrails – Automated implementations of policy controls, with a focus on security, compliance, and cost management. Guardrails can be preventive (blocking actions that are deemed as risky), or detective (raising an alert on non-conformant actions).

Blueprints – Well-architected design patterns that are used to set up the Landing Zone.

Environment – An AWS account and the resources within it, configured to run an application. Users make requests (via Service Catalog) for new environments and Control Tower uses automated workflows to provision them.

Using Control Tower
Starting from a brand new AWS account that is both Master Payer and Organization Master, I open the Control Tower Console and click Set up landing zone to get started. AWS Control Tower will create AWS accounts for log archiving and for auditing, and requires email addresses that are not already associated with an AWS account. I enter two addresses, review the information within Service permissions, give Control Tower permission to administer AWS resources and services, and click Set up landing zone.

The setup process runs for about an hour, and provides status updates along the way. Early in the process, Control Tower sends a handful of email requests to verify ownership of the account, invite the account to participate in AWS SSO, and to subscribe to some SNS topics. The requests contain links that I must click in order for the setup process to proceed. The second email also requests that I create an AWS SSO password for the account.
After the setup is complete, AWS Control Tower displays a status report, and the console offers some recommended actions. At this point, the mandatory guardrails have been applied and the optional guardrails can be enabled. I can see the Organizational Units (OUs) and accounts, and the compliance status of each one (with respect to the guardrails).

Using the Account Factory
The navigation on the left lets me access all of the AWS resources created and managed by Control Tower. Now that my baseline environment is set up, I can click Account factory to provision AWS accounts for my teams, applications, and so forth. The Account factory displays my network configuration (I’ll show you how to edit it later), and gives me the option to Edit the account factory network configuration or to Provision new account. I can control the VPC configuration that is used for new accounts, including the regions where VPCs are created when an account is provisioned.

The account factory is published to AWS Service Catalog automatically. I can provision managed accounts as needed, as can the developers in my organization. I click AWS Control Tower Account Factory to proceed, review the details, and click LAUNCH PRODUCT to provision a new account.

Working with Guardrails
As I mentioned earlier, Control Tower’s guardrails provide guidance that is either Mandatory or Strongly Recommended. Guardrails are implemented via an IAM Service Control Policy (SCP) or an AWS Config rule, and can be enabled on an OU-by-OU basis.

Now Available
AWS Control Tower is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions, with more to follow. There is no charge for the Control Tower service; you pay only for the AWS resources that it creates on your behalf. In addition to adding support for more AWS regions, we are working to allow you to set up a parallel landing zone next to an existing AWS account, and to give you the ability to build and use custom guardrails.

— Jeff;
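To illustrate the Service Control Policy mechanism mentioned under Working with Guardrails, here is a sketch of what a preventive, guardrail-style SCP can look like. This example denies disabling CloudTrail and follows the standard SCP document format; it is an illustration, not one of Control Tower's actual managed policies.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDisablingCloudTrail",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail"
      ],
      "Resource": "*"
    }
  ]
}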

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today

CloudFlare Blog -

Massive route leak impacts major parts of the Internet, including Cloudflare

What happened?
Today at 10:30 UTC, the Internet had a small heart attack. A small company in Northern Pennsylvania became a preferred path for many Internet routes through Verizon (AS701), a major Internet transit provider. This was the equivalent of Waze routing an entire freeway down a neighborhood street — resulting in many websites on Cloudflare, and many other providers, being unavailable from large parts of the Internet. This should never have happened because Verizon should never have forwarded those routes to the rest of the Internet. To understand why, read on.

We have blogged about these unfortunate events in the past, as they are not uncommon. This time, the damage was seen worldwide. What exacerbated the problem today was the involvement of a “BGP Optimizer” product from Noction. This product has a feature that splits up received IP prefixes into smaller, contributing parts (called more-specifics). For example, our own IPv4 route 104.20.0.0/20 was turned into 104.20.0.0/21 and 104.20.8.0/21. It’s as if the road sign directing traffic to “Pennsylvania” was replaced by two road signs, one for “Pittsburgh, PA” and one for “Philadelphia, PA”. By splitting these major IP blocks into smaller parts, a network has a mechanism to steer traffic within its own network, but that split should never have been announced to the world at large. When it was, it caused today’s outage.

To explain what happened next, here’s a quick summary of how the underlying “map” of the Internet works. “Internet” literally means a network of networks; it is made up of networks called Autonomous Systems (AS), and each of these networks has a unique identifier, its AS number. All of these networks are interconnected using a protocol called Border Gateway Protocol (BGP). BGP joins these networks together and builds the Internet “map” that enables traffic to travel from, say, your ISP to a popular website on the other side of the globe.

Using BGP, networks exchange route information: how to get to them from wherever you are. These routes can either be specific, similar to finding a specific city on your GPS, or very general, like pointing your GPS to a state. This is where things went wrong today.

An Internet Service Provider in Pennsylvania (AS33154 - DQE Communications) was using a BGP optimizer in their network, which meant there were a lot of more specific routes in their network. Specific routes override more general routes (in the Waze analogy, a route to, say, Buckingham Palace is more specific than a route to London).

DQE announced these specific routes to their customer (AS396531 - Allegheny Technologies Inc). All of this routing information was then sent to their other transit provider (AS701 - Verizon), who proceeded to tell the entire Internet about these “better” routes. These routes were supposedly “better” because they were more granular, more specific. The leak should have stopped at Verizon. However, against numerous best practices outlined below, Verizon’s lack of filtering turned this into a major incident that affected many Internet services such as Amazon, Fastly, Linode, and Cloudflare.

What this means is that suddenly Verizon, Allegheny, and DQE had to deal with a stampede of Internet users trying to access those services through their networks. None of these networks were suitably equipped to deal with this drastic increase in traffic, causing disruption in service.
Even if they had had sufficient capacity, DQE, Allegheny, and Verizon were not allowed to say they had the best route to Cloudflare, Amazon, Fastly, Linode, etc.

BGP leak process with a BGP optimizer

At the worst of the incident, we observed a loss of about 15% of our global traffic.

Traffic levels at Cloudflare during the incident.

How could this leak have been prevented?
There are multiple ways this leak could have been avoided:

A BGP session can be configured with a hard limit of prefixes to be received. This means a router can decide to shut down a session if the number of prefixes goes above the threshold. Had Verizon had such a prefix limit in place, this would not have occurred. It is a best practice to have such limits in place. It doesn't cost a provider like Verizon anything to have such limits in place. And there's no good reason, other than sloppiness or laziness, that they wouldn't have such limits in place.

A different way network operators can prevent leaks like this one is by implementing IRR-based filtering. IRR is the Internet Routing Registry, and networks can add entries to these distributed databases. Other network operators can then use these IRR records to generate specific prefix lists for the BGP sessions with their peers. If IRR filtering had been used, none of the networks involved would have accepted the faulty more-specifics. What’s quite shocking is that it appears that Verizon didn’t implement any of this filtering in their BGP session with Allegheny Technologies, even though IRR filtering has been around (and well documented) for over 24 years. IRR filtering would not have increased Verizon's costs or limited their service in any way. Again, the only explanation we can conceive of for why it wasn't in place is sloppiness or laziness.

The RPKI framework that we implemented and deployed globally last year is designed to prevent this type of leak. It enables filtering on origin network and prefix size. The prefixes Cloudflare announces are signed for a maximum size of 20. RPKI then indicates any more-specific prefix should not be accepted, no matter what the path is. In order for this mechanism to take action, a network needs to enable BGP Origin Validation. Many providers like AT&T have already enabled it successfully in their network. If Verizon had used RPKI, they would have seen that the advertised routes were not valid, and the routes could have been automatically dropped by the router.

Cloudflare encourages all network operators to deploy RPKI now!

Route leak prevention using IRR, RPKI, and prefix limits

All of the above suggestions are nicely condensed into MANRS (Mutually Agreed Norms for Routing Security).

How it was resolved
The network team at Cloudflare reached out to the networks involved, AS33154 (DQE Communications) and AS701 (Verizon). We had difficulties reaching either network; this may have been due to the time of the incident, as it was still early on the East Coast of the US when the route leak started.

Screenshot of the email sent to Verizon

One of our network engineers made contact with DQE Communications quickly and, after a little delay, they were able to put us in contact with someone who could fix the problem. DQE worked with us on the phone to stop advertising these “optimized” routes to Allegheny Technologies Inc. We're grateful for their help.
Once this was done, the Internet stabilized, and things went back to normal.

Screenshot of attempts to communicate with the support teams for DQE and Verizon

It is unfortunate that while we tried both e-mail and phone calls to reach out to Verizon, at the time of writing this article (over 8 hours after the incident), we have not heard back from them, nor are we aware of them taking action to resolve the issue.

At Cloudflare, we wish that events like this never take place, but unfortunately the current state of the Internet does very little to prevent incidents such as this one from occurring. It's time for the industry to adopt better routing security through systems like RPKI. We hope that major providers will follow the lead of Cloudflare, Amazon, and AT&T and start validating routes. And, in particular, we're looking at you, Verizon — and still waiting on your reply.

Despite this being caused by events outside our control, we’re sorry for the disruption. Our team cares deeply about our service and we had engineers in the US, UK, Australia, and Singapore online minutes after this problem was identified.
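To make the routing behavior described above concrete, here is a purely illustrative JavaScript sketch of longest-prefix matching (not router code, and not from the original post), showing that a leaked /21 more-specific wins over the legitimate /20 aggregate for the same address.

// Illustrative only: shows why a leaked more-specific prefix (/21) wins over
// the legitimate aggregate (/20) under longest-prefix matching.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

function matches(ip, prefix) {
  const [network, lenStr] = prefix.split('/');
  const len = Number(lenStr);
  const mask = len === 0 ? 0 : (~0 << (32 - len)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(network) & mask);
}

// Pick the matching route with the longest prefix length
function bestRoute(ip, routes) {
  return routes
    .filter(r => matches(ip, r.prefix))
    .sort((a, b) => Number(b.prefix.split('/')[1]) - Number(a.prefix.split('/')[1]))[0];
}

const routes = [
  { prefix: '104.20.0.0/20', via: 'legitimate origin' },
  { prefix: '104.20.0.0/21', via: 'leaked more-specific' } // next hops are placeholders
];

console.log(bestRoute('104.20.1.1', routes)); // the leaked /21 is selected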

Realizing Our Vision To Build the Most Relied Upon DXP for WordPress

WP Engine -

Today is a very exciting day for us! WP Engine has entered into an agreement to acquire Flywheel, one of the most respected brands in WordPress, known for specializing in workflow and development tools for designers and creative agencies. I couldn’t be happier to have the 200 Flywheel team members joining the WP Engine family!… The post Realizing Our Vision To Build the Most Relied Upon DXP for WordPress appeared first on WP Engine.

WP Engine to Acquire Flywheel

WP Engine -

AUSTIN, Texas – June 24, 2019 – WP Engine, the WordPress Digital Experience Platform (DXP), today announced it has entered into a definitive agreement to acquire Flywheel, a WordPress hosting and management company based in Omaha, Neb. By combining their strengths, WP Engine and Flywheel are enhancing the WP Engine Digital Experience Platform for WordPress… The post WP Engine to Acquire Flywheel appeared first on WP Engine.

New – UDP Load Balancing for Network Load Balancer

Amazon Web Services Blog -

The Network Load Balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part (read my post, New Network Load Balancer – Effortless Scaling to Millions of Requests per Second, to learn more). In response to customer requests, we have added several new features since the late-2017 launch, including cross-zone load balancing, support for resource-based and tag-based permissions, support for use across an AWS managed VPN tunnel, the ability to create a Network Load Balancer using the AWS Elastic Beanstalk Console, support for Inter-Region VPC Peering, and TLS Termination.

UDP Load Balancing
Today we are adding support for another frequent customer request, the ability to load balance UDP traffic. You can now use Network Load Balancers to deploy connectionless services for online gaming, IoT, streaming, media transfer, and native UDP applications. If you are hosting DNS, SIP, SNMP, Syslog, RADIUS, and other UDP services in your own data center, you can now move the services to AWS. You can also deploy services to handle Authentication, Authorization, and Accounting, often known as AAA.

You no longer need to maintain a fleet of proxy servers to ingest UDP traffic, and you can now use the same load balancer for both TCP and UDP traffic. You can simplify your architecture, reduce your costs, and increase your scalability.

Creating a UDP Network Load Balancer
I can create a Network Load Balancer with UDP support using the Console, CLI (create-load-balancer), API (CreateLoadBalancer), or a CloudFormation template (AWS::ElasticLoadBalancingV2::LoadBalancer), as usual. The console lets me choose the desired load balancer; I click the Create button underneath Network Load Balancer. I name my load balancer, choose UDP from the protocol menu, and select a port (514 is for Syslog). I already have suitable EC2 instances in us-east-1b and us-east-1c so I’ll use those AZs. Then I set up a target group for the UDP protocol on port 514, choose my instances, and click Add to registered. I review my settings on the next page, and my new UDP Load Balancer is ready to accept traffic within a minute or so (the state starts out as provisioning and transitions to active when it is ready).

I’ll test this out by configuring my EC2 instances as centralized Syslogd servers. I simply edit the configuration file (/etc/rsyslog.conf) on the instances to make them listen on port 514, and restart the service. Then I launch another EC2 instance and configure it to use my NLB endpoint, and I can see log entries in my servers (ip-172-31-29-40 is my test instance). I did have to make one small configuration change in order to get this to work! Using UDP to check on the health of a service does not really make sense, so I clicked override and specified a health check on port 80 instead. In a real-world scenario you would want to build a TCP-style health check into your service, of course. And, needless to say, I would run a custom implementation of Syslog that stores the log messages centrally and in a highly durable form.

Things to Know
Here are a couple of things to know about this important new NLB feature:

Supported Targets – UDP on Network Load Balancers is supported for Instance target types (IP target types and PrivateLink are not currently supported).

Health Checks – As I mentioned above, health checks must be done using TCP, HTTP, or HTTPS.

Multiple Protocols – A single Network Load Balancer can handle both TCP and UDP traffic. You can add another listener to an existing load balancer to gain UDP support, as long as you use distinct ports. In situations such as DNS where you need support for both TCP and UDP on the same port, you can set up a multi-protocol target group and a multi-protocol listener (use TCP_UDP for the listener type and the TargetGroup).

New CloudWatch Metrics – The existing CloudWatch metrics (ProcessedBytes, ActiveFlowCount, and NewFlowCount) now represent the aggregate traffic processed by the TCP, UDP, and TLS listeners on a given Network Load Balancer.

Available Now
This feature is available now and you can start using it today in all commercial AWS Regions. For pricing, see the Elastic Load Balancing Pricing page.

— Jeff;
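For readers who prefer the API over the console walkthrough above, here is a minimal sketch using the Elastic Load Balancing v2 API via the AWS SDK for JavaScript. The VPC ID, instance IDs, and load balancer ARN are placeholders, not values from this post.

// Minimal sketch: UDP target group (TCP health check) and a UDP:514 listener
// on an existing Network Load Balancer. All IDs and ARNs are placeholders.
const AWS = require('aws-sdk');
const elbv2 = new AWS.ELBv2({ region: 'us-east-1' });

async function addUdpSyslogListener(loadBalancerArn) {
  // UDP health checks are not supported, so the health check uses TCP on port 80
  const tg = await elbv2.createTargetGroup({
    Name: 'syslog-udp-targets',
    Protocol: 'UDP',
    Port: 514,
    VpcId: 'vpc-0123456789abcdef0',          // placeholder
    TargetType: 'instance',
    HealthCheckProtocol: 'TCP',
    HealthCheckPort: '80'
  }).promise();
  const targetGroupArn = tg.TargetGroups[0].TargetGroupArn;

  // Register the Syslog instances (placeholders)
  await elbv2.registerTargets({
    TargetGroupArn: targetGroupArn,
    Targets: [{ Id: 'i-0123456789abcdef0' }, { Id: 'i-0fedcba9876543210' }]
  }).promise();

  // UDP listener on port 514 forwarding to the target group
  await elbv2.createListener({
    LoadBalancerArn: loadBalancerArn,
    Protocol: 'UDP',
    Port: 514,
    DefaultActions: [{ Type: 'forward', TargetGroupArn: targetGroupArn }]
  }).promise();
}

addUdpSyslogListener('arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123')
  .catch(console.error);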

How To Get the Most Out of Any Hosting Plan

InMotion Hosting Blog -

Web hosting services should help you deliver the best customer experience possible through your website. That means delivering information quickly in a glitch-free, secure manner. If your web host doesn’t provide the appropriate amount of data or bandwidth for your website, files can take too long to load, and visitors will leave. If you find that you don’t have enough disk space or bandwidth as part of your hosting plan, there are certain measures you can take to improve efficiency – before upgrading to a larger plan. Continue reading How To Get the Most Out of Any Hosting Plan at The Official InMotion Hosting Blog.

Adventures in Timezones: How a Server’s Timezone Can Go Wrong

Nexcess Blog -

For the average American living in Chicago, being able to tell the time in New York is easy. Simply take the time in Chicago and add one hour: 10am becomes 11am. Yet timezones become more complicated when geopolitics are involved, and for any tasks that involve time processing, knowledge of the correct timezone is vital.… Continue reading →

Join Cloudflare & Moz at our next meetup, Serverless in Seattle!

CloudFlare Blog -

Cloudflare is organizing a meetup in Seattle on Tuesday, June 25th and we hope you can join. We’ll be bringing together members of the developer community and Cloudflare users for an evening of discussion about serverless compute and the infinite number of use cases for deploying code at the edge.

To kick things off, our guest speaker Devin Ellis will share how Moz uses Cloudflare Workers to reduce time to first byte by 30-70% by caching dynamic content at the edge. Kirk Schwenkler, Solutions Engineering Lead at Cloudflare, will facilitate this discussion and share his perspective on how to grow and secure businesses at scale.

Next up, Developer Advocate Kristian Freeman will take you through a live demo of Workers and highlight new features of the platform. This will be an interactive session where you can try out Workers for free and develop your own applications using our new command-line tool.

Food and drinks will be served till close, so grab your laptop and a friend and come on by!

View Event Details & Register Here

Agenda:
5:00 pm Doors open, food and drinks
5:30 pm Customer use case by Devin and Kirk
6:00 pm Workers deep dive with Kristian
6:30 - 8:30 pm Networking, food and drinks

Liquid Web Becomes a Million Kilowatt Hour Efficiency Partner

Liquid Web Official Blog -

Liquid Web was recently honored with the Million Kilowatt Hour Efficiency Partner Award from the Lansing Board of Water and Light/Hometown Energy Savers. This award recognized efforts made in 2018 which significantly reduced the energy footprint of our data center and the equipment running within it. Other winners this year included Meijer, Lansing Mall, East Lansing Public Schools, and the State of Michigan. Past recipients have included GM, Auto-Owners Insurance, Boji Towers, Lansing Schools, and McLaren Hospital, to name a few.

Liquid Web was recognized at a ceremony in Lansing, Michigan, where the award was presented to Aaron Reif, the Data Center Project Manager, and Kearn Reif, one of our fantastic Maintenance Technicians. Both individuals contributed greatly to this achievement by leading the energy projects that reduced the wattage used by our data center in Lansing. “It was good for the team’s hard work, planning, and management to be recognized,” stated Scott Haraburda, Liquid Web’s Director of Facilities and Infrastructure.

While the award was a pleasant surprise to the team at Liquid Web, it was hard earned. Our team has been busy making energy improvements for the last 18 months, which included a complete company-wide LED lighting conversion, along with a replacement of our HVAC equipment for cooling the data center and the server equipment as part of the Lean and Green Michigan PACE project. When asked for an update on the PACE improvements to the cooling equipment at the Lansing data center, Haraburda said it was going very well. “It is a very unique project solving a unique problem. Total replacement of an HVAC system in a live data center is tricky; creating a new way to cool the site and dealing with non-standard situations make it even more complex,” commented Haraburda.

Liquid Web is committed to an aggressive energy savings campaign through continuous energy improvements and the reduction of the carbon footprint left by our data centers. And, as always, we are committed to providing world-class infrastructure and service to all of our customers. The post Liquid Web Becomes a Million Kilowatt Hour Efficiency Partner appeared first on Liquid Web.
