Service Provider Blogs

Deeper Connection with the Local Tech Community in India

CloudFlare Blog -

On June 6th, 2019, Cloudflare hosted its first ever customer event in a beautiful and green district of Bangalore, India. More than 60 people, including executives, developers, engineers, and even university students, attended the half-day forum.

The forum kicked off with a series of presentations on the current DDoS landscape, cyber security trends, serverless computing, and Cloudflare Workers. Trey Quinn, Cloudflare Global Head of Solution Engineering, gave a brief introduction to the evolution of edge computing.

We also invited business and thought leaders across various industries to share their insights and best practices on cyber security and performance strategy. Some of the keynote and panel sessions included live demos from our customers.

At this event, guests gained first-hand knowledge of the latest technology. They also learned insider tactics to help them protect their business, accelerate performance, and identify quick wins in a complex Internet environment. To conclude the event, we arranged dinner for the guests to network and enjoy a cool summer night.

Through this event, Cloudflare has strengthened its connection with the local tech community. The success of the event owes much to Cloudflare's constant improvement and the continuous support of our customers in India. As the old saying goes, भारत महान है (India is great). India is an important market in the region, and Cloudflare will increase its investment and engagement to provide better services and a better user experience for customers in India.

Get Cloudflare insights in your preferred analytics provider

CloudFlare Blog -

Today, we're excited to announce our partnerships with Chronicle Security, Datadog, Elastic, Looker, Splunk, and Sumo Logic to make it easy for our customers to analyze Cloudflare logs and metrics using their analytics provider of choice. In a joint effort, we have developed pre-built dashboards that are available as a Cloudflare App in each partner's platform. These dashboards help customers better understand events and trends from their websites and applications on our network.

Cloudflare insights in the tools you're already using

Data analytics is a frequent theme in conversations with Cloudflare customers. Our customers want to understand how Cloudflare speeds up their websites and saves them bandwidth, which of their pages are fastest and slowest, and whether they are under attack. While providing insights is a core tenet of Cloudflare's offering, the data analytics market has matured, and many of our customers have started using third-party providers to analyze data, including Cloudflare logs and metrics. By aggregating data from multiple applications, infrastructure, and cloud platforms in one dedicated analytics platform, customers can create a single pane of glass and benefit from better end-to-end visibility over their entire stack.

While these analytics platforms provide great benefits in terms of functionality and flexibility, they can take significant time to configure: from ingesting logs, to specifying data models that make data searchable, all the way to building dashboards to get the right insights out of the raw data. We see this as an opportunity to partner with the companies our customers are already using to offer a better and more integrated solution.

Providing flexibility through easy-to-use integrations

To address these complexities of aggregating, managing, and displaying data, we have developed a number of product features and partnerships to make it easier to get insights out of Cloudflare logs and metrics. In February we announced Logpush, which allows customers to automatically push Cloudflare logs to Google Cloud Storage and Amazon S3. Both of these cloud storage solutions are supported by the major analytics providers as a source for collecting logs, making it possible to get Cloudflare logs into an analytics platform with just a few clicks. With today's announcement of Cloudflare's Analytics Partnerships, we're releasing a Cloudflare App, a set of pre-built and fully customizable dashboards, in each partner's app store or integrations catalogue to make the experience even more seamless.

By using these dashboards, customers can immediately analyze events and trends of their websites and applications without first needing to wade through individual log files and build custom searches. The dashboards feature all 55+ fields available in Cloudflare logs and include 90+ panels with information about the performance, security, and reliability of customers' websites and applications.

Ultimately, we want to provide flexibility to our customers and make it easier to use Cloudflare with the analytics tools they already use. Improving our customers' ability to get better data and insights continues to be a focus for us, so we'd love to hear about what tools you're using; tell us via this brief survey. To learn more about each of our partnerships and how to get access to the dashboards, please visit our developer documentation or contact your Customer Success Manager.
Similarly, if you’re an analytics provider who is interested in partnering with us, use the contact form on our analytics partnerships page to get in touch.
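For readers who prefer to script the Logpush setup mentioned above rather than click through the dashboard, the job-creation call looks roughly like the following. This is a minimal sketch based on our reading of the public Logpush API, not an official client: the zone ID, credentials, bucket name, and field list are placeholders, and the destination bucket normally has to pass an ownership check first (see the developer documentation for the authoritative details).

package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Placeholders: set CF_ZONE_ID, CF_API_EMAIL, and CF_API_KEY in the environment.
	zoneID := os.Getenv("CF_ZONE_ID")

	// Example job: ship a few HTTP request log fields to an S3 bucket.
	body := []byte(`{
	  "name": "http-requests-to-s3",
	  "destination_conf": "s3://my-log-bucket/cloudflare?region=us-east-1",
	  "logpull_options": "fields=ClientIP,ClientRequestHost,EdgeResponseStatus,EdgeStartTimestamp&timestamps=rfc3339",
	  "enabled": true
	}`)

	req, err := http.NewRequest("POST",
		"https://api.cloudflare.com/client/v4/zones/"+zoneID+"/logpush/jobs",
		bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Auth-Email", os.Getenv("CF_API_EMAIL"))
	req.Header.Set("X-Auth-Key", os.Getenv("CF_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("Logpush job creation returned:", resp.Status)
}

Once such a job is enabled, logs land in the bucket on a regular cadence and the partner dashboards described above can ingest them from there.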

The Serverlist: Serverless makes a splash at JSConf EU and JSConf Asia

CloudFlare Blog -

Check out our sixth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend. Sign up to have The Serverlist sent directly to your mailbox.

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today

CloudFlare Blog -

Massive route leak impacts major parts of the Internet, including Cloudflare

What happened?

Today at 10:30 UTC, the Internet had a small heart attack. A small company in Northern Pennsylvania became a preferred path for many Internet routes through Verizon (AS701), a major Internet transit provider. This was the equivalent of Waze routing an entire freeway down a neighborhood street, and it made many websites on Cloudflare, and many other providers, unavailable from large parts of the Internet. This should never have happened, because Verizon should never have forwarded those routes to the rest of the Internet. To understand why, read on.

We have blogged about these unfortunate events in the past, as they are not uncommon. This time, the damage was seen worldwide. What exacerbated the problem today was the involvement of a "BGP Optimizer" product from Noction. This product has a feature that splits up received IP prefixes into smaller, contributing parts (called more-specifics). For example, our own IPv4 route 104.20.0.0/20 was turned into 104.20.0.0/21 and 104.20.8.0/21. It's as if the road sign directing traffic to "Pennsylvania" was replaced by two road signs, one for "Pittsburgh, PA" and one for "Philadelphia, PA". By splitting these major IP blocks into smaller parts, a network has a mechanism to steer traffic within its own network, but that split should never have been announced to the world at large. When it was, it caused today's outage.

To explain what happened next, here's a quick summary of how the underlying "map" of the Internet works. "Internet" literally means a network of networks: it is made up of networks called Autonomous Systems (AS), and each of these networks has a unique identifier, its AS number. All of these networks are interconnected using a protocol called the Border Gateway Protocol (BGP). BGP joins these networks together and builds the Internet "map" that enables traffic to travel from, say, your ISP to a popular website on the other side of the globe.

Using BGP, networks exchange route information: how to get to them from wherever you are. These routes can either be specific, similar to finding a specific city on your GPS, or very general, like pointing your GPS to a state. This is where things went wrong today.

An Internet Service Provider in Pennsylvania (AS33154 - DQE Communications) was using a BGP optimizer in their network, which meant there were a lot of more specific routes in their network. Specific routes override more general routes (in the Waze analogy, a route to, say, Buckingham Palace is more specific than a route to London).

DQE announced these specific routes to their customer (AS396531 - Allegheny Technologies Inc). All of this routing information was then sent to their other transit provider (AS701 - Verizon), who proceeded to tell the entire Internet about these "better" routes. These routes were supposedly "better" because they were more granular, more specific. The leak should have stopped at Verizon. However, against numerous best practices outlined below, Verizon's lack of filtering turned this into a major incident that affected many Internet services such as Amazon, Fastly, Linode, and Cloudflare.

What this means is that suddenly Verizon, Allegheny, and DQE had to deal with a stampede of Internet users trying to access those services through their network. None of these networks were suitably equipped to deal with this drastic increase in traffic, causing disruption in service.
Even if they had had sufficient capacity, DQE, Allegheny, and Verizon had no business claiming the best route to Cloudflare, Amazon, Fastly, Linode, and the rest.

BGP leak process with a BGP optimizer

During the incident, at its worst point, we observed a loss of about 15% of our global traffic.

Traffic levels at Cloudflare during the incident

How could this leak have been prevented?

There are multiple ways this leak could have been avoided.

A BGP session can be configured with a hard limit on the number of prefixes to be received. This means a router can decide to shut down a session if the number of prefixes goes above the threshold. Had Verizon had such a prefix limit in place, this would not have occurred. It is a best practice to have such limits in place. It doesn't cost a provider like Verizon anything to have them, and there's no good reason, other than sloppiness or laziness, not to.

A different way network operators can prevent leaks like this one is by implementing IRR-based filtering. IRR is the Internet Routing Registry, and networks can add entries to these distributed databases. Other network operators can then use these IRR records to generate specific prefix lists for the BGP sessions with their peers. If IRR filtering had been used, none of the networks involved would have accepted the faulty more-specifics. What's quite shocking is that it appears Verizon didn't implement any of this filtering in their BGP session with Allegheny Technologies, even though IRR filtering has been around (and well documented) for over 24 years. IRR filtering would not have increased Verizon's costs or limited their service in any way. Again, the only explanation we can conceive of for why it wasn't in place is sloppiness or laziness.

The RPKI framework that we implemented and deployed globally last year is designed to prevent exactly this type of leak. It enables filtering on origin network and prefix size. The prefixes Cloudflare announces are signed for a maximum size of 20. RPKI then indicates that any more-specific prefix should not be accepted, no matter what the path is. For this mechanism to take action, a network needs to enable BGP Origin Validation. Many providers, like AT&T, have already enabled it successfully in their network.

If Verizon had used RPKI, they would have seen that the advertised routes were not valid, and the routes could have been automatically dropped by the router. Cloudflare encourages all network operators to deploy RPKI now!

Route leak prevention using IRR, RPKI, and prefix limits

All of the above suggestions are nicely condensed into MANRS (Mutually Agreed Norms for Routing Security).
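To make the origin-validation idea concrete, here is a minimal sketch, in Go, of the decision a validating router makes against a signed ROA: accept an announcement only if its origin AS matches the ROA and the prefix is no more specific than the ROA's maximum length. This illustrates the logic only; it is not Cloudflare's implementation, and a real validator works against the full set of published ROAs rather than a single record.

package main

import (
	"fmt"
	"net"
)

// roa is a simplified Route Origin Authorization: a prefix, the AS number
// allowed to originate it, and the most specific length the holder permits.
type roa struct {
	prefix    *net.IPNet
	originAS  uint32
	maxLength int
}

// validate reports whether an announced prefix/origin pair is covered by the
// ROA and does not exceed the ROA's maximum prefix length.
func validate(r roa, announced *net.IPNet, originAS uint32) bool {
	annLen, _ := announced.Mask.Size()
	roaLen, _ := r.prefix.Mask.Size()
	covered := r.prefix.Contains(announced.IP) && annLen >= roaLen
	return covered && originAS == r.originAS && annLen <= r.maxLength
}

func main() {
	// Example ROA mirroring the post: 104.20.0.0/20 signed with maxLength 20.
	// 13335 is Cloudflare's AS number.
	_, cfPrefix, _ := net.ParseCIDR("104.20.0.0/20")
	cloudflareROA := roa{prefix: cfPrefix, originAS: 13335, maxLength: 20}

	_, legit, _ := net.ParseCIDR("104.20.0.0/20")
	_, leaked, _ := net.ParseCIDR("104.20.0.0/21") // more-specific produced by the optimizer

	fmt.Println(validate(cloudflareROA, legit, 13335))  // true: origin and length match the ROA
	fmt.Println(validate(cloudflareROA, leaked, 13335)) // false: a /21 exceeds the signed maxLength of 20
}

With a check like this at the edge, the leaked /21s would have been rejected as RPKI-invalid regardless of the AS path they arrived on.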
How it was resolved

The network team at Cloudflare reached out to the networks involved, AS33154 (DQE Communications) and AS701 (Verizon). We had difficulties reaching either network; this may have been due to the time of the incident, as it was still early on the East Coast of the US when the route leak started.

Screenshot of the email sent to Verizon

One of our network engineers made contact with DQE Communications quickly, and after a little delay they were able to put us in contact with someone who could fix the problem. DQE worked with us on the phone to stop advertising these "optimized" routes to Allegheny Technologies Inc. We're grateful for their help. Once this was done, the Internet stabilized, and things went back to normal.

Screenshot of attempts to communicate with the support teams at DQE and Verizon

It is unfortunate that, while we tried both e-mail and phone calls to reach out to Verizon, at the time of writing this article (over 8 hours after the incident) we have not heard back from them, nor are we aware of them taking action to resolve the issue.

At Cloudflare, we wish that events like this never took place, but unfortunately the current state of the Internet does very little to prevent incidents such as this one from occurring. It's time for the industry to adopt better routing security through systems like RPKI. We hope that major providers will follow the lead of Cloudflare, Amazon, and AT&T and start validating routes. And, in particular, we're looking at you, Verizon: we are still waiting on your reply.

Despite this being caused by events outside our control, we're sorry for the disruption. Our team cares deeply about our service, and we had engineers in the US, UK, Australia, and Singapore online minutes after this problem was identified.

Join Cloudflare & Moz at our next meetup, Serverless in Seattle!

CloudFlare Blog -

Photo by oakie / Unsplash

Cloudflare is organizing a meetup in Seattle on Tuesday, June 25th, and we hope you can join. We'll be bringing together members of the developer community and Cloudflare users for an evening of discussion about serverless compute and the infinite number of use cases for deploying code at the edge.

To kick things off, our guest speaker Devin Ellis will share how Moz uses Cloudflare Workers to reduce time to first byte by 30-70% by caching dynamic content at the edge. Kirk Schwenkler, Solutions Engineering Lead at Cloudflare, will facilitate this discussion and share his perspective on how to grow and secure businesses at scale.

Next up, Developer Advocate Kristian Freeman will take you through a live demo of Workers and highlight new features of the platform. This will be an interactive session where you can try out Workers for free and develop your own applications using our new command-line tool.

Food and drinks will be served till close, so grab your laptop and a friend and come on by!

View Event Details & Register Here

Agenda:
5:00 pm Doors open, food and drinks
5:30 pm Customer use case by Devin and Kirk
6:00 pm Workers deep dive with Kristian
6:30 - 8:30 pm Networking, food and drinks

Introducing time.cloudflare.com

CloudFlare Blog -

This is a guest post by Aanchal Malhotra, a Graduate Research Assistant at Boston University and former Cloudflare intern on the Cryptography team.

Cloudflare has always been a leader in deploying secure versions of insecure Internet protocols and making them available for free for anyone to use. In 2014, we launched one of the world's first free, secure HTTPS services (Universal SSL) to go along with our existing free HTTP plan. When we launched the 1.1.1.1 DNS resolver, we also supported the new secure versions of DNS (DNS over HTTPS and DNS over TLS). Today, we are doing the same thing for the Network Time Protocol (NTP), the dominant protocol for obtaining time over the Internet.

This announcement is personal for me. I've spent the last four years identifying and fixing vulnerabilities in time protocols. Today I'm proud to help introduce a service that would have made my life from 2015 through 2019 a whole lot harder: time.cloudflare.com, a free time service that supports both NTP and the emerging Network Time Security (NTS) protocol for securing NTP. Now, anyone can get time securely from all our data centers in 180 cities around the world.

You can use time.cloudflare.com as the source of time for all your devices today with NTP, while NTS clients are still under development. NTPsec includes experimental support for NTS. If you'd like to get updates about NTS client development, email us asking to join at time-updates@cloudflare.com. To use NTS to secure time synchronization, reach out to your vendors and inquire about NTS support.

A small tale of "time" first

Back in 2015, as a fresh graduate student interested in Internet security, I came across this mostly esoteric Internet protocol called the Network Time Protocol (NTP). NTP was designed to synchronize time between computer systems communicating over unreliable and variable-latency network paths. I was actually studying Internet routing security, in particular attacks against the Resource Public Key Infrastructure (RPKI), and kept hitting a dead end because of a cache-flushing issue. As a last-ditch effort, I decided to roll back the time on my computer manually, and the attack worked.

I had discovered the importance of time to computer security. Most cryptography uses timestamps to limit certificate and signature validity periods. When connecting to a website, knowledge of the correct time ensures that the certificate you see is current and is not compromised by an attacker. When looking at logs, time synchronization makes sure that events on different machines can be correlated accurately. Certificates and logging infrastructure can break with minutes, hours, or months of time difference. Other applications, like caching and Bitcoin, are sensitive to even very small differences in time, on the order of seconds.

Two-factor authentication using rolling numbers also relies on accurate clocks. All of this creates the need for computer clocks to have access to reasonably accurate time that is securely delivered. NTP is the most commonly used protocol for time synchronization on the Internet. If an attacker can leverage vulnerabilities in NTP to manipulate time on computer clocks, they can undermine the security guarantees provided by these systems.

Motivated by the severity of the issue, I decided to look deeper into NTP and its security. Since the need for synchronizing time across networks was visible early on, NTP is a very old protocol.
The first standardized version of NTP dates back to 1985, while the latest version, NTP version 4, was completed in 2010 (see RFC 5905). In its most common mode, NTP works by having a client send a query packet to an NTP server, which then responds with its clock time. The client then computes an estimate of the difference between its clock and the remote clock, attempting to compensate for network delay. An NTP client queries multiple servers and implements algorithms to select the best estimate, rejecting clearly wrong answers.

Request-response flow of NTP
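To make that exchange concrete, the arithmetic is worth writing out. With the four timestamps recorded in one NTP exchange (client transmit \(T_1\), server receive \(T_2\), server transmit \(T_3\), client receive \(T_4\)), the standard formulas from RFC 5905 give the round-trip delay \(\delta\) and the estimated clock offset \(\theta\):

$$ \delta = (T_4 - T_1) - (T_3 - T_2), \qquad \theta = \frac{(T_2 - T_1) + (T_3 - T_4)}{2} $$

The delay term subtracts the time the request spent at the server, and the offset averages the apparent clock difference seen in each direction; the estimate is only exact when the forward and return paths take equal time, which is why path asymmetry is the dominant source of error in practice.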
Surprisingly enough, research on NTP and its security was not very active at the time. Before this, in late 2013 and early 2014, high-profile Distributed Denial of Service (DDoS) attacks were carried out by amplifying traffic from NTP servers; attackers able to spoof a victim's IP address were able to funnel copious amounts of traffic, overwhelming the targeted domains. This caught the attention of some researchers. However, these attacks did not exploit flaws in the fundamental protocol design; the attackers simply used NTP as a boring bandwidth multiplier. Cloudflare wrote extensively about these attacks and you can read about them here, here, and here.

I found several flaws in the core NTP protocol design and its implementation that can be exploited by network attackers to launch much more devastating attacks by shifting time or denying service to NTP clients. What is even more concerning is that these attackers do not need to be a Monster-In-The-Middle (MITM), where an attacker can modify traffic between the client and the server, to mount these attacks. A set of recent papers authored by one of us showed that an off-path attacker present anywhere on the network can shift time or deny service to NTP clients. One of the ways this is done is by abusing IP fragmentation.

Fragmentation is a feature of the IP layer where a large packet is chopped into several smaller fragments so that they can pass through networks that do not support large packets. Basically, any random network element on the path between the client and the server can send a special "ICMP fragmentation needed" packet to the server telling it to fragment the packet to, say, X bytes. Since the server is not expected to know the IP addresses of all the network elements on its path, this packet can be sent from any source IP.

Fragmentation attack against NTP

In our attack, the attacker exploits this feature to make the NTP server fragment its NTP response packet for the victim NTP client. The attacker then spoofs carefully crafted overlapping response fragments from off-path that contain the attacker's timestamp values. By further exploiting the reassembly policies for overlapping fragments, the attacker fools the client into assembling a packet with legitimate fragments and the attacker's insertions. This evades the authenticity checks that rely on values in the original parts of the packet.

NTP's past and future

At the time of NTP's creation back in 1985, there were two main design goals for the service provided by NTP. First, they wanted it to be robust enough to handle networking errors and equipment failures, so it was designed as a service where a client can gather timing samples from multiple peers over multiple communication paths and then average them to get a more accurate measurement.

The second goal was load distribution. While every client would like to talk to time servers that are directly attached to high-precision time-keeping devices, such as atomic clocks or GPS, and thus have more accurate time, the capacity of those devices is limited. So, to reduce protocol load on the network, the service was designed in a hierarchical manner. At the top of the hierarchy are servers connected to non-NTP time sources, which distribute time to other servers, which in turn distribute time to even more servers. Most computers connect to either second- or third-level servers.

The stratum hierarchy of NTP

The original specification (RFC 958) also states the "non-goals" of the protocol, namely peer authentication and data integrity. Security wasn't considered critical in the relatively small and trusting early Internet, and the protocols and applications that rely on time for security didn't exist then. Securing NTP came second to improving the protocol and its implementation.

As the Internet has grown, more and more core Internet protocols have been secured through cryptography to protect against abuse: TLS, DNSSEC, and RPKI are all steps toward ensuring the security of all communications on the Internet. These protocols use "time" to provide security guarantees. Since the security of the Internet hinges on the security of NTP, it becomes even more important to secure NTP.

This research clearly showed the need for securing NTP. As a result, there was more work in the standards body for Internet protocols, the Internet Engineering Task Force (IETF), towards cryptographically authenticating NTP. At the time, even though NTPv4 supported both symmetric and asymmetric cryptographic authentication, these were rarely used in practice due to limitations of both approaches.

NTPv4's symmetric approach to securing synchronization doesn't scale, as the symmetric key must be pre-shared and configured manually: imagine if every client on earth needed a special secret key for each of the servers it wanted to get time from; the organizations that run those servers would have to do a great deal of work managing keys. This makes the solution quite cumbersome for public servers that must accept queries from arbitrary clients. For context, NIST operates important public time servers and distributes symmetric keys only to users that register, once per year, via US mail or facsimile; the US Naval Observatory does something similar.

The first attempt to solve the problem of key distribution was the Autokey protocol, described in RFC 5906. Many public NTP servers do not support Autokey (e.g., the NIST and USNO time servers, and many servers in pool.ntp.org). The protocol is badly broken, as any network attacker can trivially retrieve the secret key shared between the client and server. The authentication mechanisms are non-standard and quite idiosyncratic. The future of the Internet is a secure Internet, which means an authenticated and encrypted Internet. But until now NTP has remained mostly insecure, despite continuing protocol development, while more and more services have come to depend on it.

Timeline of NTP development

Fixing the problem

Following the release of our paper, there was a lot more enthusiasm in the NTP community, both at the IETF and outside it, for improving the state of NTP security. As a short-term fix, the ntpd reference implementation software was patched for several vulnerabilities that we found.
And for a long-term solution, the community realized the dire need for a secure, authenticated time synchronization protocol based on public-key cryptography, which enables encryption and authentication without requiring key material to be shared beforehand. Today we have a Network Time Security (NTS) draft at the IETF, thanks to the work of dozens of dedicated individuals in the NTP working group.

In a nutshell, the NTS protocol is divided into two phases. The first phase is the NTS key exchange, which establishes the necessary key material between the NTP client and the server. This phase uses the Transport Layer Security (TLS) handshake and relies on the same public key infrastructure as the web. Once the keys are exchanged, the TLS channel is closed and the protocol enters the second phase. In this phase, the results of that TLS handshake are used to authenticate NTP time synchronization packets via extension fields. The interested reader can find more information in the Internet draft.

Cloudflare's new service

Today, Cloudflare announces its free time service to anyone on the Internet. We intend to solve the limitations of the existing public time services, in particular by increasing availability, robustness, and security.

We use our global network to provide an advantage in latency and accuracy. Our 180 locations around the world all use anycast to automatically route your packets to our closest server. All of our servers are synchronized with stratum 1 time service providers and then offer NTP to the general public, similar to how other public NTP providers function. The biggest source of inaccuracy for time synchronization protocols is network asymmetry, which leads to a difference in travel times between the client and server and back from the server to the client. However, our servers' proximity to users means there will be less jitter (a measurement of variance in latency on the network) and less asymmetry in packet paths. We also hope that in regions with a dearth of NTP servers our service significantly improves the capacity and quality of the NTP ecosystem.

Cloudflare servers obtain authenticated time by using a shared symmetric key with our stratum 1 upstream servers. These upstream servers are geographically spread and ensure that our servers have accurate time in our data centers. But this approach to securing time doesn't scale. We had to exchange emails individually with the organizations that run stratum 1 servers, as well as negotiate permission to use them. While this is a solution for us, it isn't a solution for everyone on the Internet.

As a secure time service provider, Cloudflare is proud to announce that we are among the first to offer a free and secure public time service based on Network Time Security. We have implemented the latest NTS IETF draft. As this draft progresses through the Internet standards process, we are committed to keeping our service current.

Most NTP implementations are currently working on NTS support, and we expect that the next few months will see broader adoption as well as advancement of the current draft protocol to an RFC. Currently we have interoperability with NTPsec, which has implemented draft 18 of NTS. We hope that our service will spur faster adoption of this important improvement to Internet security.
Because this is a new service with no backwards compatibility requirements, we require the use of TLS v1.3 with it to promote adoption of the most secure version of TLS.

Use it

If you have an NTS client, point it at time.cloudflare.com:1234. Otherwise, point your NTP client at time.cloudflare.com. More details on configuration are available in the developer docs.

Conclusion

From our Roughtime service to Universal SSL, Cloudflare has played a role in expanding the availability and use of secure protocols. Now, with our free public time service, we provide a trustworthy, widely available alternative to another insecure legacy protocol. It's all part of our mission to help build a faster, more reliable, and more secure Internet for everyone.

Thanks to the many other engineers who worked on this project, including Watson Ladd, Gabbi Fisher, and Dina Kozlov.

The Quantum Menace

CloudFlare Blog -

Over the last few decades, the word "quantum" has become increasingly popular. It is common to find articles, reports, and many people interested in quantum mechanics and the new capabilities and improvements it brings to the scientific community. This topic not only concerns physics, since the development of quantum mechanics impacts several other fields, such as chemistry, economics, artificial intelligence, operations research, and, undoubtedly, cryptography.

This post begins a trio of blog posts describing the impact of quantum computing on cryptography and how to use stronger algorithms resistant to the power of quantum computing. This first post introduces quantum computing, describes the main aspects of this new computing model and its devastating impact on security standards, and summarizes some approaches to securing information using quantum-resistant algorithms. Due to the relevance of this matter, our second post presents our experiments on a large-scale deployment of quantum-resistant algorithms. Our third post introduces CIRCL, an open-source Go library featuring optimized implementations of quantum-resistant algorithms and elliptic-curve-based primitives.

All of this is part of Cloudflare's Crypto Week 2019, so fasten your seatbelt and get ready to make a quantum leap.

What is Quantum Computing?

Back in 1981, Richard Feynman raised the question of what kind of computer could be used to simulate physics. Some physical phenomena, such as quantum mechanics, cannot be simulated efficiently using a classical computer, so he conjectured the existence of a computer model that behaves under the rules of quantum mechanics, which opened a field of research now called quantum computing. To understand the basics of quantum computing, it is necessary to recall how classical computers work, and from there shine a spotlight on the differences between these computational models.

Fellows of the Royal Society: John Maynard Smith, Richard Feynman & Alan Turing

In 1936, Alan Turing and Emil Post independently described models that gave rise to the foundation of the computing model known as the Post-Turing machine, which describes how computers work and allowed further determination of the limits of what problems can be solved.

In this model, the units of information are bits, which store one of two possible values, usually denoted by 0 and 1. A computing machine contains a set of bits and performs operations that modify the values of those bits, also known as the machine's state. Thus, a machine with N bits can be in one of 2ᴺ possible states. With this in mind, the Post-Turing computing model can be abstractly described as a state machine, in which running a program is translated into machine transitions along the set of states.

A paper David Deutsch published in 1985 describes a computing model that extends the capabilities of a Turing machine based on the theory of quantum mechanics. This computing model introduces several advantages over the Turing model for processing large volumes of information. It also presents unique properties that deviate from the way we understand classical computing. Most of these properties come from the nature of quantum mechanics. We're going to dive into these details before approaching the concept of quantum computing.

Superposition

One of the most exciting properties of quantum computing that provides an advantage over the classical computing model is superposition.
In physics, superposition is the ability to produce valid states from the addition, or superposition, of several other states that are part of a system.

Applying these concepts to computing information, it means there is a system in which it is possible to generate a machine state that represents a (weighted) sum of the states 0 and 1; here, the term weighted means that the state can keep track of "the quantity of" 0 and 1 present in the state. In the classical computation model, one bit can only store either the state 0 or 1, not both; even using two bits, they cannot represent the weighted sum of these states. Hence, to make a distinction from the basic states, quantum computing uses the concept of a quantum bit (qubit): a unit of information that denotes the superposition of two states. This is a cornerstone concept of quantum computing, as it provides a way of tracking more than a single state per unit of information, making it a powerful tool for processing information.

Classical computing: a bit stores only one of two possible states, ON or OFF.
Quantum computing: a qubit stores a combination of two or more states.

So, a qubit represents the sum of two parts: the 0 or 1 state, plus the amount each 0/1 state contributes to produce the state of the qubit.

In mathematical notation, a qubit \( | \Psi \rangle \) is an explicit sum indicating that it represents the superposition of the states 0 and 1. In the Dirac notation used to describe the value of a qubit, \( | \Psi \rangle = A | 0 \rangle + B | 1 \rangle \), where A and B are complex numbers known as the amplitudes of the states 0 and 1, respectively. The basic states themselves are written as \( | 0 \rangle = 1 | 0 \rangle + 0 | 1 \rangle \) and \( | 1 \rangle = 0 | 0 \rangle + 1 | 1 \rangle \), where \( | 0 \rangle \) and \( | 1 \rangle \) are the abbreviated notation for these special states.

Measurement

In a classical computer, the values 0 and 1 are implemented as digital signals. Measuring the current of the signal automatically reveals the status of a bit. This means that at any moment the value of the bit can be observed or measured.

The state of a qubit is maintained in a physically closed system, meaning that the properties of the system, such as superposition, require no interaction with the environment; conversely, any interaction, like performing a measurement, can cause interference with the state of a qubit.

Measuring a qubit is a probabilistic experiment. The result is a bit of information that depends on the state of the qubit. The bit obtained by measuring \( | \Psi \rangle = A | 0 \rangle + B | 1 \rangle \) will be equal to 0 with probability \( |A|^2 \), and equal to 1 with probability \( |B|^2 \), where \( |x| \) represents the absolute value of \(x\). From statistics, we know that the sum of the probabilities of all possible events is always equal to 1, so it must hold that \( |A|^2 + |B|^2 = 1 \).
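As a small worked example (not from the original post), consider the qubit

$$ |\Psi\rangle = \tfrac{\sqrt{3}}{2} |0\rangle + \tfrac{1}{2} |1\rangle $$

Measuring it yields 0 with probability \( \left|\tfrac{\sqrt{3}}{2}\right|^2 = \tfrac{3}{4} \) and 1 with probability \( \left|\tfrac{1}{2}\right|^2 = \tfrac{1}{4} \), and indeed \( \tfrac{3}{4} + \tfrac{1}{4} = 1 \).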
This normalization condition motivates representing qubits as points on a circle of radius one and, more generally, as points on the surface of a sphere of radius one, which is known as the Bloch sphere.

The qubit state is analogous to a point on a unit circle.
The Bloch sphere, by Smite-Meister (own work, CC BY-SA 3.0).

Let's break it down: if you measure a qubit, you also destroy its superposition, resulting in a collapse of the superposition state, where it assumes one of the basic states, providing your final result.

Another way to think about superposition and measurement is through the coin-tossing experiment. Toss a coin in the air and you give people a random choice between two options: heads or tails. Now, don't focus on the randomness of the experiment; instead, note that while the coin is rotating in the air, participants are uncertain which side will face up when the coin lands. Conversely, once the coin stops with a random side facing up, participants are 100% certain of the outcome.

How does this relate? Qubits are similar to the participants. When a qubit is in a superposition of states, it is tracking the probability of heads or tails, which is the participants' uncertainty while the coin is in the air. However, once you measure the qubit to retrieve its value, the superposition vanishes and a classical bit value sticks: heads or tails. Measurement is the moment when the coin is static with only one side facing up.

A fair coin is a coin that is not biased. Each side (say 0 = heads and 1 = tails) of a fair coin has the same probability of sticking after a measurement is performed. The qubit \( \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle \) describes the probabilities of tossing a fair coin. Note that squaring either of the amplitudes results in ½, indicating that there is a 50% chance of either heads or tails sticking.

It would be interesting to be able to bias a fair coin at will while it is in the air. Although this is the magic of a professional illusionist, this task can, in fact, be achieved by performing operations over qubits. So, get ready to become the next quantum magician!

Quantum Gates

A logic gate represents a Boolean function operating over a set of inputs (on the left) and producing an output (on the right). A logic circuit is a set of connected logic gates, a convenient way to represent bit operations. The NOT gate is a single-bit operation that flips the value of the input bit. Other gates are AND, OR, XOR, NAND, and more. A set of gates is universal if it can generate all other gates. For example, the NOR and NAND gates are universal, since any circuit can be constructed using only these gates.

Quantum computing also admits a description using circuits. Quantum gates operate over qubits, modifying the superposition of the states. For example, there is a quantum gate analogous to the NOT gate, the X gate. The X quantum gate interchanges the amplitudes of the states of the input qubit. The Z quantum gate flips the sign of the amplitude of state 1. Another quantum gate is the Hadamard gate, which generates an equiprobable superposition of the basic states. Using our coin-tossing analogy, the Hadamard gate has the action of tossing a fair coin into the air. In quantum circuits, a triangle represents measuring a qubit, and the resulting bit is indicated by a double wire.
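In Dirac notation, the actions of these three gates on a general qubit \( A|0\rangle + B|1\rangle \) can be written out explicitly (a standard restatement, added here for reference):

$$ X : A|0\rangle + B|1\rangle \mapsto B|0\rangle + A|1\rangle $$
$$ Z : A|0\rangle + B|1\rangle \mapsto A|0\rangle - B|1\rangle $$
$$ H : |0\rangle \mapsto \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle), \qquad |1\rangle \mapsto \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle) $$

Applying H to \( |0\rangle \) produces exactly the fair-coin qubit described above.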
Other gates, such as the CNOT gate, the Pauli gates, the Toffoli gate, and the Deutsch gate, are slightly more advanced. Quirk, an open-source playground, is a fun sandbox where you can construct quantum circuits using all of these gates.

Reversibility

An operation is reversible if there exists another operation that rolls the output state back to the initial state. For instance, a NOT gate is reversible, since applying a second NOT gate recovers the initial input. In contrast, the AND, OR, and NAND gates are not reversible. This means that some classical computations cannot be reversed by a classical circuit that uses only the output bits. However, if you insert additional bits of information, the operation can be reversed.

Quantum computing mainly focuses on reversible computations, because there is always a way to construct a reversible circuit that performs an irreversible computation. The reversible version of a circuit may require the use of ancillary qubits as auxiliary (but not temporary) variables.

Due to the nature of composed systems, it is possible for these ancillas (extra qubits) to become correlated with qubits of the main computation. This correlation makes it infeasible to reuse ancillas, since any modification could have a side effect on the operation of the reversible circuit. This is like memory assigned to a process by the operating system: the process cannot use memory from other processes, or it could cause memory corruption, and processes cannot release their assigned memory to other processes. You could use garbage-collection mechanisms for ancillas, but performing reversible computations increases your qubit budget.

Composed Systems

In quantum mechanics, a single qubit can be described as a single closed system: a system that has no interaction with the environment or with other qubits. Letting qubits interact with each other leads to a composed system in which more states are represented. The state of a 2-qubit composite system is denoted as \( A_0|00\rangle + A_1|01\rangle + A_2|10\rangle + A_3|11\rangle \), where the \( A_i \) values correspond to the amplitudes of the four basic states 00, 01, 10, and 11. The qubit \( \tfrac{1}{2}|00\rangle + \tfrac{1}{2}|01\rangle + \tfrac{1}{2}|10\rangle + \tfrac{1}{2}|11\rangle \) represents the superposition of these basic states, each with the same probability of being obtained after measuring the two qubits.

In the classical case, the state of N bits represents only one of 2ᴺ possible states, whereas a composed state of N qubits represents all 2ᴺ states, but in superposition. This is one big difference between these computing models, as it carries two important properties: entanglement and quantum parallelism.

Entanglement

According to the theory behind quantum mechanics, some composed states can be described through the description of their constituents. However, there are composed states for which no such description is possible, known as entangled states.

Bell states are examples of entangled qubits.
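One such Bell state, written in the notation above (added here for concreteness), is

$$ \tfrac{1}{\sqrt{2}}|00\rangle + \tfrac{1}{\sqrt{2}}|11\rangle $$

Measuring either qubit yields 0 or 1 with equal probability, but the two results always agree, and no product of separate single-qubit states \( (A|0\rangle + B|1\rangle)(C|0\rangle + D|1\rangle) \) can reproduce this state.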
The entanglement phenomenon was pointed out by Einstein, Podolsky, and Rosen in the so-called EPR paradox. Suppose there is a composed system of two entangled qubits, in which performing a measurement on one qubit causes interference in the measurement of the second. This interference occurs even when the qubits are separated by a long distance, which suggests that some information transfer happens faster than the speed of light. This is how quantum entanglement appears to conflict with the theory of relativity, where information cannot travel faster than the speed of light. The EPR paradox motivated further investigation aimed at deriving new interpretations of quantum mechanics and resolving the paradox.

Quantum entanglement can help to transfer information at a distance by following a communication protocol. The following protocol examples rely on the fact that Alice and Bob each possess one of two entangled qubits.

The superdense coding protocol allows Alice to communicate a 2-bit message \( m_0, m_1 \) to Bob using a quantum communication channel, for example, using fiber optics to transmit photons. All Alice has to do is operate on her qubit according to the value of the message and send the resulting qubit to Bob. Once Bob receives the qubit, he measures both qubits, noting that the collapsed 2-bit state corresponds to Alice's message.

Superdense coding protocol.

The quantum teleportation protocol allows Alice to transmit a qubit to Bob without using a quantum communication channel. Alice measures the qubit she wants to send to Bob together with her entangled qubit, resulting in two bits. Alice sends these bits to Bob, who operates on his entangled qubit according to the bits received and notes that the resulting state matches the original state of Alice's qubit.

Quantum teleportation protocol.

Quantum Parallelism

Composed systems of qubits allow the representation of more information per composed state. Note that operating on a composed state of N qubits is equivalent to operating over a set of 2ᴺ states in superposition. This procedure is quantum parallelism. In this setting, operating over a large volume of information gives the intuition of performing operations in parallel, as in the parallel computing paradigm; one big caveat is that superposition is not equivalent to parallelism.

Remember that a composed state is a superposition of several states, so a computation that takes a composed state of inputs will result in a composed state of outputs. The main divergence between classical and quantum parallelism is that quantum parallelism can obtain only one of the processed outputs. Observe that a measurement of the output of a composed state causes the qubits to collapse to only one of the outputs, making it unattainable to read off all the computed values. Although quantum parallelism does not match precisely the traditional notion of parallel computing, you can still leverage this computational power to get related information.

Deutsch-Jozsa problem: assume \(F\) is a function that takes N bits as input, outputs one bit, and is either constant (always outputs the same value for all inputs) or balanced (outputs 0 for half of the inputs and 1 for the other half). The problem is to determine whether \(F\) is constant or balanced.

The quantum algorithm that solves the Deutsch-Jozsa problem uses quantum parallelism. First, N qubits are initialized in a superposition of 2ᴺ states. Then, in a single shot, it evaluates \(F\) for all of these states (note that some factors are omitted for simplicity). The result of applying \(F\) appears in the exponent of the amplitude of the all-zero state. Only when \(F\) is constant is this amplitude +1 or -1. If the result of measuring the N qubits is an all-zeros bitstring, then there is 100% certainty that \(F\) is constant. Any other result indicates that \(F\) is balanced.

A deterministic classical algorithm solves this problem using \( 2^{N-1}+1 \) evaluations of \(F\) in the worst case. Meanwhile, the quantum algorithm requires only one evaluation.
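Written out, the standard form of the omitted expression (added here for reference) gives the amplitude of the all-zero state after the final layer of Hadamard gates as

$$ \frac{1}{2^{N}} \sum_{x \in \{0,1\}^{N}} (-1)^{F(x)} $$

which equals +1 or -1 when \(F\) is constant and exactly 0 when \(F\) is balanced, so measuring all zeros certifies a constant function.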
The Deutsch-Jozsa problem exemplifies the exponential advantage of a quantum algorithm over classical algorithms.

Quantum Computers

The theory of quantum computing is supported by investigations in the field of quantum mechanics. However, constructing a quantum machine requires a physical system that allows representing qubits and manipulating states in a reliable and precise way. The DiVincenzo criteria require that a physical implementation of a quantum computer must:
Be scalable and have well-defined qubits.
Be able to initialize qubits to a state.
Have long decoherence times in order to apply quantum error-correcting codes. Decoherence of a qubit happens when the qubit interacts with the environment, for example, when a measurement is performed.
Use a universal set of quantum gates.
Be able to measure single qubits without modifying others.

Physical implementations of quantum computers face huge engineering obstacles to satisfying these requirements. The most important challenge is to guarantee low error rates during computation and measurement. Lowering these rates requires techniques for error correction, which add a significant number of qubits specialized for this task. For this reason, the number of qubits of a quantum computer should not be regarded in the same way as for classical systems. In a classical computer, all of the bits are effective for performing a calculation, whereas the number of qubits is the sum of the effective qubits (those used to make calculations), plus the ancillas (used for reversible computations), plus the error-correction qubits.

Current implementations of quantum computers partially satisfy the DiVincenzo criteria. Quantum adiabatic computers fit in this category, since they do not operate using quantum gates. For this reason, they are not considered universal quantum computers.

Quantum Adiabatic Computers

A recurrent problem in optimization is to find the global minimum of an objective function. For example, a route-traffic control system can be modeled as a function that reduces the cost of routing to a minimum. Simulated annealing is a heuristic procedure that provides a good solution to these types of problems. Simulated annealing finds the solution state by slowly introducing changes (the adiabatic process) to the variables that govern the system.

Quantum annealing is the analogous quantum version of simulated annealing. A qubit register is initialized into a superposition of states representing all possible solutions to the problem. The Hamiltonian operator, which is the sum of the potential and kinetic energies of the system, is used here: the objective function is encoded using this operator, which describes the evolution of the system over time. Then, if the system is allowed to evolve very slowly, it will eventually land on a final state representing the optimal value of the objective function.

Currently, there exist adiabatic computers on the market, such as the D-Wave and IBM Q systems, featuring hundreds of qubits; however, their capabilities are somewhat limited to problems that can be modeled as optimization problems. The limits of adiabatic computers were studied by van Dam et al., showing that despite solving local search problems and even some instances of the max-SAT problem, there exist harder search problems this computing model cannot efficiently solve.

Nuclear Magnetic Resonance

Nuclear Magnetic Resonance (NMR) is a physical phenomenon that can be used to represent qubits.
The spin of the atomic nuclei of molecules is perturbed by an oscillating magnetic field. A 2001 report describes a successful implementation of Shor's algorithm on a 7-qubit NMR quantum computer, an iconic result since this computer was able to factor the number 15.

Nucleus spin induced by a magnetic field (Darekk2, CC BY-SA 3.0).
NMR spectrometer by UCSB.

Superconducting Quantum Computers

One way to physically construct qubits is based on superconductors, materials that conduct electric current with zero resistance when exposed to temperatures close to absolute zero. The Josephson effect, in which current flows across the junction of two superconductors separated by a non-superconducting material, is used to physically implement a superposition of states.

A Josephson junction (public domain).

When a magnetic flux is applied to this junction, the current flows continuously in one direction. But, depending on the quantity of magnetic flux applied, the current can also flow in the opposite direction. There exists a quantum superposition of currents flowing both clockwise and counterclockwise, leading to a physical implementation of a qubit called the flux qubit. The complete device is known as a Superconducting Quantum Interference Device (SQUID) and can be easily coupled, scaling the number of qubits. Thus, SQUIDs are like the transistors of a quantum computer.

SQUID: Superconducting Quantum Interference Device. Image by Kurzweil Network and original source.

Examples of superconducting computers are:
D-Wave's adiabatic computers, which use quantum annealing for solving diverse optimization problems.
Google's 72-qubit computer, recently announced along with work on engineering issues such as achieving lower temperatures.
IBM's IBM Q Tokyo, a 20-qubit adiabatic computer, and IBM Q Experience, a cloud-based system for exploring quantum circuits.

D-Wave cooling system by D-Wave Systems Inc.
IBM Q System.
IBM Q System One cryostat at CES.

The Imminent Threat of Quantum Algorithms

The quantum zoo website tracks problems that can be solved using quantum algorithms. As of mid-2018, more than 60 problems appear on this list, targeting diverse applications in the areas of number theory, approximation, simulation, and searching. As terrific as it sounds, some of the problems easily solvable by quantum computing concern the security of information.

Grover's Algorithm

Tales of a quantum detective (fragment): a pair of detectives have the mission of finding the one culprit in a group of suspects who always answer this question honestly: "Are you guilty?"

Detective C follows a classic interrogation method and interviews every person, one at a time, until finding the first one who confesses.

Detective Q proceeds in a different way. First, he gathers all the suspects in a completely dark room, and then he asks them: are you guilty? A steady sound comes from the room saying "No!" while, at the same time, a single voice mixed into the air responds "Yes!" Since everybody is submerged in darkness, the detective cannot see the culprit. However, detective Q knows that, as the interrogation advances, the culprit will feel desperate and start to speak louder and louder, and so he continues asking the same question. Suddenly, detective Q turns on the lights, enters the room, and captures the culprit. How did he do it?

The task of the detective can be modeled as a searching problem: given a Boolean function \( f \) that takes N bits and produces one bit, find the unique input \(x\) such that \( f(x)=1 \).
A classical algorithm (detective C) finds \(x\) using \( 2^N - 1 \) function evaluations in the worst case. However, the quantum algorithm devised by Grover, corresponding to detective Q, searches quadratically faster, using around \( 2^{N/2} \) function evaluations.

The key intuition of Grover's algorithm is to increase the amplitude of the state that represents the solution while keeping the other states at a lower amplitude. In this way, a system of N qubits, which is a superposition of 2ᴺ possible inputs, can be continuously updated until the solution state has an amplitude close to 1. Hence, after updating the qubits many times, there will be a high probability of measuring the solution state.

Initially, a superposition of 2ᴺ states (horizontal axis) is set, and each state has an amplitude (vertical axis) close to 0. The qubits are updated so that the amplitude of the solution state increases more than the amplitude of the other states. By repeating the update step, the amplitude of the solution state gets closer to 1, which boosts the probability of collapsing to the solution state after measuring. Image taken from D. Bernstein's slides.

Grover's algorithm (pseudo-code):
1. Prepare an N-qubit register \( |x\rangle \) as a uniform superposition of 2ᴺ states.
2. Update the qubits by performing the core operation $$ |x\rangle \mapsto (-1)^{f(x)} |x\rangle $$ The result of \( f(x) \) only flips the amplitude of the searched state.
3. Negate the amplitudes of the N-qubit register about the average of the amplitudes.
4. Repeat steps 2 and 3 \( (\tfrac{\pi}{4}) 2^{N/2} \) times.
5. Measure the register and return the bits obtained.

Alternatively, the second step can be better understood as a conditional statement:

IF f(x) = 1 THEN
    negate the amplitude of the solution state
ELSE
    /* do nothing */
ENDIF

Grover's algorithm treats the function \(f\) as a black box, so with slight modifications the algorithm can also be used to find collisions in the function. This implies that Grover's algorithm can find a collision using asymptotically fewer operations than a brute-force algorithm.

The power of Grover's algorithm can be turned against cryptographic hash functions. For instance, a quantum computer running Grover's algorithm could find a collision on SHA-256 performing only 2¹²⁸ evaluations of a reversible circuit of SHA-256. The natural protection for hash functions is to double the output size. More generally, most symmetric-key encryption algorithms will survive the power of Grover's algorithm if the size of their keys is doubled. The scenario for public-key algorithms, however, is devastating in the face of Peter Shor's algorithm.

Shor's Algorithm

Multiplying integers is an easy task to accomplish; however, finding the factors that compose an integer is difficult. The integer factorization problem is to decompose a given integer into its prime factors. For example, 42 has three prime factors, 2, 3, and 7, since \( 2 \times 3 \times 7 = 42 \). As the numbers get bigger, integer factorization becomes more difficult to solve, and the hardest instances are those where the factors are only two different large primes. Thus, given an integer \(N\), finding primes \(p\) and \(q\) such that \( N = p \times q \) is known as integer splitting. Factoring integers is like cutting wood, and the specific task of splitting integers is analogous to using an axe to split the log in two parts.
There exist many different tools (algorithms) for accomplishing each task.For integer factorization, trial division, Pollard's Rho method, and the elliptic curve method are common algorithms. Fermat's method and the quadratic and rational sieves lead to the (general) number field sieve (NFS) algorithm for integer splitting. The latter relies on finding a congruence of squares, that is, writing \(N\) as a difference of squares such that $$ N = x^2 - y^2 = (x+y)\times(x-y) $$ The complexity of NFS is mainly determined by the number of pairs \((x, y)\) that must be examined before getting a pair that factors \(N\). The NFS algorithm has subexponential complexity in the size of \(N\), meaning that the time required for splitting an integer increases significantly as the size of \(N\) grows. For large integers, the problem becomes intractable for classical computers. The Axe of Thor Shor, Olaf Tryggvason - Public DomainThe many different guesses of the NFS algorithm are analogous to hitting the log using a dulled axe; after subexponentially many tries, the log is split in half. However, using a sharper axe allows you to split the log faster. This sharpened axe is the quantum algorithm proposed by Shor in 1994.Let \(x\) be an integer less than \(N\) with multiplicative order \(k\), that is, \(k\) is the smallest positive integer such that \( x^k \equiv 1 \pmod N \). Then, if \(k\) is even, there exists an integer \(q\) so \(qN\) can be factored as follows: $$ qN = x^k - 1 = (x^{k/2}-1)\times(x^{k/2}+1) $$ For example, for \(N=15\) and \(x=7\), the order is \(k=4\), and \( \gcd(7^{2}-1, 15)=3 \) and \( \gcd(7^{2}+1, 15)=5 \) reveal the prime factors. This approach has some issues: for example, the factors found could correspond to \(q\) and not to \(N\), and the order of \(x\) is not known in advance. Here is where Shor’s algorithm enters the picture, finding the order of \(x\).The internals of Shor’s algorithm rely on encoding the order \(k\) into a periodic function, so that its period can be obtained using the quantum version of the Fourier transform (QFT). The order of \(x\) can be found using a polynomial number of quantum evaluations of Shor’s algorithm. Therefore, splitting integers using this quantum approach has polynomial complexity in the size of \(N\).Shor’s algorithm carries strong implications for the security of the RSA encryption scheme, because its security relies on integer factorization. A large-enough quantum computer can efficiently break RSA for current instances.Alternatively, one may resort to elliptic curves, used in cryptographic protocols like ECDSA or ECDH. Moreover, TLS ciphersuites use a combination of elliptic curve groups, large prime groups, and RSA and DSA signatures. Unfortunately, these algorithms all succumb to Shor’s algorithm. It only takes a few modifications for Shor’s algorithm to solve the discrete logarithm problem on finite groups. This sounds like a catastrophic story where all of our encrypted data and privacy are no longer secure with the advent of a quantum computer, and in some sense this is true.On one hand, it is a fact that the quantum computers constructed as of 2019 are not large enough to run, for instance, Shor’s algorithm for the RSA key sizes used in standard protocols. For example, a 2018 report shows experiments on the factorization of a 19-bit number using 94 qubits; the authors also estimate that 147456 qubits would be needed for factoring a 768-bit number. Hence, these numbers indicate that we are still far from breaking RSA.What if we increase RSA key sizes to be resistant to quantum algorithms, just as for symmetric algorithms? Bernstein et al. estimated that RSA public keys would need to be as large as 1 terabyte to keep RSA secure even in the presence of quantum factoring algorithms.
So, for public-key algorithms, increasing the size of keys does not help.A recent investigation by Gidney and Ekerå shows improvements that accelerate the evaluation of quantum factorization. In their report, the cost of factoring 2048-bit integers is estimated to take a few hours using a quantum machine of 20 million qubits, which is far from any current development. Something worth noting is that the number of qubits needed is two orders of magnitude smaller than the estimates given in previous works developed in this decade. Under these estimates, current encryption algorithms will remain secure for several more years; however, consider the following not-so-unrealistic situation.Information currently encrypted with, for example, RSA can be easily decrypted with a quantum computer in the future. Now, suppose that someone records encrypted information and stores it until a quantum computer is able to decrypt the ciphertexts. Although this could be as far off as 20 years from now, the forward-secrecy principle is violated. A 20-year gap into the future is sometimes difficult to imagine, so let’s think backwards: what would happen if everything you did on the Internet at the end of the 1990s could be revealed 20 years later -- today? How does this impact the security of your personal information? What if the ciphertexts were company secrets or business deals? In 1999, most of us were concerned about the effects of the Y2K problem; now we’re facing Y2Q (years to quantum): the advent of quantum computers.Post-Quantum CryptographyAlthough the current capacity of the physical implementations of quantum computers is far from a real threat to secure communications, a transition to stronger problems for protecting information has already started. This wave emerged as post-quantum cryptography (PQC). The core idea of PQC is finding problems difficult enough that no quantum (or classical) algorithm can solve them.A recurrent question is: what does a problem that even a quantum computer cannot solve look like?These so-called quantum-resistant algorithms rely on different hard mathematical assumptions; some of them are as old as RSA, others more recently proposed. For example, the McEliece cryptosystem, formulated in the late 70s, relies on the hardness of decoding a linear code (in the sense of coding theory). The practical use of this cryptosystem never became widespread, since, with the passing of time, other cryptosystems superseded it in efficiency. Fortunately, the McEliece cryptosystem remains immune to Shor’s algorithm, which gives it renewed relevance in the post-quantum era. Post-quantum cryptography presents alternatives:
Lattice-based Cryptography
Hash-based Cryptography
Isogeny-based Cryptography
Code-based Cryptography
Multivariate-based Cryptography
In 2017, NIST started an evaluation process that tracks possible alternatives for next-generation secure algorithms. From a practical perspective, all candidates present different trade-offs in implementation and usage. The time and space requirements are diverse; at this moment, it’s too early to say which will succeed RSA and elliptic curves. An initial round collected 70 algorithms for deploying key encapsulation mechanisms and digital signatures. As of early 2019, 28 of these survive and are currently in the analysis, investigation, and experimentation phase.Cloudflare's mission is to help build a better Internet. As a proactive measure, our cryptography team is preparing experiments on the deployment of post-quantum algorithms at Cloudflare scale.
Watch our blog post for more details.

Towards Post-Quantum Cryptography in TLS

CloudFlare Blog -

We live in a completely connected society. A society connected by a variety of devices: laptops, mobile phones, wearables, self-driving or self-flying things. We have standards for a common language that allows these devices to communicate with each other. This is critical for wide-scale deployment – especially in cryptography where the smallest detail has great importance.One of the most important standards-setting organizations is the National Institute of Standards and Technology (NIST), which is hugely influential in determining which standardized cryptographic systems see worldwide adoption. At the end of 2016, NIST announced it would hold a multi-year open project with the goal of standardizing new post-quantum (PQ) cryptographic algorithms secure against both quantum and classical computers.Many of our devices have very different requirements and capabilities, so it may not be possible to select a “one-size-fits-all” algorithm during the process. NIST mathematician Dustin Moody indicated that the institute will likely select more than one algorithm:“There are several systems in use that could be broken by a quantum computer - public-key encryption and digital signatures, to take two examples - and we will need different solutions for each of those systems.”Initially, NIST selected 82 candidates for further consideration from all submitted algorithms. At the beginning of 2019, this process entered its second stage. Today, there are 26 algorithms still in contention.Post-quantum cryptography: what is it really and why do I need it?In 1994, Peter Shor made a significant discovery in quantum computation. He found an algorithm for integer factorization and computing discrete logarithms, both believed to be hard to solve in classical settings. Since then it has become clear that the 'hard problems' on which cryptosystems like RSA and elliptic curve cryptography (ECC) rely – integer factoring and computing discrete logarithms, respectively – are efficiently solvable with quantum computing. A quantum computer can help to solve some of the problems that are intractable on a classical computer. In theory, it could efficiently solve some fundamental problems in mathematics. This amazing computing power would be highly beneficial, which is why companies are actually trying to build quantum computers. At first, Shor’s algorithm was merely a theoretical result – quantum computers powerful enough to execute it did not exist – but this is quickly changing. In March 2018, Google announced a 72-qubit universal quantum computer. While this is not enough to break, say, RSA-2048 (still more is needed), many fundamental problems have already been solved. In anticipation of widespread quantum computing, we must start the transition from classical public-key cryptography primitives to post-quantum (PQ) alternatives. It may be that consumers will never get to hold a quantum computer, but a few powerful attackers who do get one can still pose a serious threat. Moreover, under the assumption that current TLS handshakes and ciphertexts are being captured and stored, a future attacker could crack these stored individual session keys and use those results to decrypt the corresponding individual ciphertexts. Even strong security guarantees, like forward secrecy, do not help out much there.In 2006, the academic research community launched a conference series dedicated to finding alternatives to RSA and ECC.
This so-called post-quantum cryptography should run efficiently on a classical computer, but it should also be secure against attacks performed by a quantum computer. As a research field, it has grown substantially in popularity.Several companies, including Google, Microsoft, Digicert and Thales, are already testing the impact of deploying PQ cryptography. Cloudflare is involved in some of this, but we want to be a company that leads in this direction. The first thing we need to do is understand the real costs of deploying PQ cryptography, and that’s not obvious at all.What options do we have?Many submissions to the NIST project are still under study. Some are very new and little understood; others are more mature and already standardized as RFCs. Some have been broken or withdrawn from the process; others are more conservative or illustrate how far classical cryptography would need to be pushed so that a quantum computer could not crack it at a reasonable cost. Some are very slow and big; others are not. But most cryptographic schemes can be categorized into these families: lattice-based, multivariate, hash-based (signatures only), code-based and isogeny-based.For some algorithms, nevertheless, there is a fear they may be too inconvenient to use with today’s Internet. We must also be able to integrate new cryptographic schemes with existing protocols, such as SSH or TLS. To do that, designers of PQ cryptosystems must consider these characteristics:
Latency caused by encryption and decryption on both ends of the communication channel, assuming a variety of devices from big and fast servers to slow and memory-constrained IoT (Internet of Things) devices
Small public keys and signatures to minimize bandwidth
A clear design that allows cryptanalysis and determining weaknesses that could be exploited
Use of existing hardware for fast implementation
The work on post-quantum public key cryptosystems must be done in full view of organizations, governments, cryptographers, and the public. Emerging ideas must be properly vetted by this community to ensure widespread support.Helping Build a Better InternetTo better understand the post-quantum world, Cloudflare began experimenting with these algorithms and used them to provide confidentiality in TLS connections. With Google, we are proposing a wide-scale experiment that combines client- and server-side data collection to evaluate the performance of key-exchange algorithms on actual users’ devices. We hope that this experiment helps choose an algorithm with the best characteristics for the future of the Internet. With Cloudflare’s highly distributed network of access points and Google’s Chrome browser, both companies are in a very good position to perform this experiment.Our goal is to understand how these algorithms behave when used by real clients over real networks, particularly candidate algorithms with significant differences in public-key or ciphertext sizes. Our focus is on how different key sizes affect handshake time in the context of Transport Layer Security (TLS) as used on the web over HTTPS. Our primary candidates are an NTRU-based construction called HRSS-SXY (by Hülsing - Rijneveld - Schanck - Schwabe, and Tsunekazu Saito - Keita Xagawa - Takashi Yamakawa) and an isogeny-based Supersingular Isogeny Key Encapsulation (SIKE). Both algorithms are described in more detail below, in the section "Dive into post-quantum cryptography". This table shows a few characteristics of both algorithms.
Performance timings were obtained by running the BoringSSL speed test on an Intel Skylake CPU.
KEM | Public key size (bytes) | Ciphertext (bytes) | Secret size (bytes) | KeyGen (op/sec) | Encaps (op/sec) | Decaps (op/sec) | NIST level
HRSS-SXY | 1138 | 1138 | 32 | 3952.3 | 76034.7 | 21905.8 | 1
SIKE/p434 | 330 | 346 | 16 | 367.1 | 228.0 | 209.3 | 1
Currently the most commonly used key exchange algorithm (according to Cloudflare’s data) is the non-quantum X25519. Its public keys are 32 bytes, and BoringSSL can generate 49301.2 key pairs and perform 19628.6 key agreements every second on the same Skylake CPU.Note that HRSS-SXY shows a significant speed advantage, while SIKE has a size advantage. In our experiment, we will deploy these two algorithms on both the server side, using Cloudflare’s infrastructure, and the client side, using Chrome Canary; both sides will collect telemetry information about TLS handshakes using these two PQ algorithms to see how they perform in practice.What do we expect to find?In 2018, Adam Langley conducted an experiment with the goal of evaluating the likely latency impact of a post-quantum key exchange in TLS. Chrome was augmented with the ability to include a dummy, arbitrarily-sized extension in the TLS ClientHello (a fixed number of bytes of random noise). After taking into account the performance and key sizes offered by different types of key-exchange schemes, he concluded that constructs based on structured lattices may be most suitable for future use in TLS. However, Langley also observed a peculiar phenomenon: client connections measured at the 95th percentile had much higher latency than the median, which means that in those cases isogeny-based systems may be a better choice. In the "Dive into post-quantum cryptography" section, we describe the difference between the isogeny-based SIKE and lattice-based NTRU cryptosystems.In our experiment, we want to more thoroughly evaluate and ascribe root causes to these unexpected latency increases. We would particularly like to learn more about the characteristics of those networks: what causes the increased latency, and how does the performance cost of isogeny-based algorithms impact the TLS handshake? We want to answer key questions, like:
What is a good ratio of speed to key size (or how much faster would SIKE need to get to achieve the client-perceived performance of HRSS)?
How do network middleboxes behave when clients use new PQ algorithms, and which networks have problematic middleboxes?
How do the different properties of client networks affect TLS performance with different PQ key exchanges? Can we identify specific autonomous systems, device configurations, or network configurations that favor one algorithm over another? How is performance affected in the long tail?
Experiment DesignOur experiment will involve both server- and client-side performance statistics collection from real users around the world (all the data is anonymized). Cloudflare is operating the server side of the TLS connections. We will enable the CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE + X25519) key-agreement algorithms on all TLS-terminating edge servers. In this experiment, the ClientHello will contain a CECPQ2 or CECPQ2b public key (but never both). Additionally, Chrome will always include X25519 for servers that do not support post-quantum key exchange.
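Both CECPQ2 and CECPQ2b are hybrids: each handshake performs a classical X25519 exchange and a post-quantum exchange (HRSS or SIKE), and the two resulting secrets are combined so that the session stays secure as long as at least one of the components remains unbroken. The sketch below illustrates only the general idea of such a combiner; the function name and the exact construction are assumptions for illustration, not the actual BoringSSL implementation.

package main

import (
	"crypto/sha256"
	"fmt"
)

// combineSecrets sketches the idea behind a hybrid key agreement: the
// classical and post-quantum shared secrets are concatenated and fed into
// a hash acting as a simple KDF. Recovering the session key then requires
// breaking both components.
func combineSecrets(classicalSecret, pqSecret []byte) [32]byte {
	material := append(append([]byte{}, classicalSecret...), pqSecret...)
	return sha256.Sum256(material)
}

func main() {
	// Placeholder values standing in for the X25519 and HRSS/SIKE secrets.
	x25519Secret := make([]byte, 32)
	pqSecret := make([]byte, 32)
	sessionSecret := combineSecrets(x25519Secret, pqSecret)
	fmt.Printf("combined secret prefix: %x\n", sessionSecret[:8])
}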
The post-quantum key exchange will only be negotiated in TLS version 1.3 when both sides support it.Since Cloudflare only measures the server side of the connection, it is impossible to determine the time it takes for a ClientHello sent from Chrome to reach Cloudflare’s edge servers; however, we can measure the time it takes for the TLS ServerHello message containing the post-quantum key exchange to reach the client and for the client to respond. On the client side, Chrome Canary will operate the TLS connection. Google will enable either CECPQ2 or CECPQ2b in Chrome for the following mix of architectures and OSes:
x86-64: Windows, Linux, macOS, ChromeOS
aarch64: Android
Our high-level expectation is to get similar results to Langley’s original experiment in 2018 — slightly increased latency for the 50th percentile and higher latency for the 95th. Unfortunately, data collected purely from real users’ connections may not suffice for diagnosing the root causes of why some clients experience excessive slowdown. To this end, we will perform follow-up experiments based on per-client information we collect server-side.Our primary hypothesis is that excessive slowdowns, like those Langley observed, are largely due to in-network events, such as middleboxes or bloated/lossy links. As a first-pass analysis, we will investigate whether the slowed-down clients share common network features, like common ASes, common transit networks, common link types, and so on. To determine this, we will run a traceroute from vantage points close to our servers back toward the clients (not overloading any particular links or hosts) and study whether some client locations are subject to slowdowns for all destinations or just for some.Dive into post-quantum cryptographyBe warned: the details of PQ cryptography may be quite complicated. In some cases it builds on classical cryptography, and in other cases it is completely different math. It would be rather hard to describe the details in a single blog post. Instead, we are giving you an intuition of post-quantum cryptography rather than providing deep, academic-level descriptions. We’re skipping a lot of details for the sake of brevity. Nevertheless, settle in for a bit of an epic journey because we have a lot to cover.Key encapsulation mechanismNIST requires that all key-agreement algorithms have a form of key-encapsulation mechanism (KEM). The KEM is a simplified form of public key encryption (PKE). Like PKE, it also allows agreement on a secret, but in a slightly different way: the session key is an output of the encryption algorithm, in contrast to public key encryption schemes, where the session key is an input to the algorithm. In a KEM, Alice generates a random key and uses the pre-generated public key from Bob to encrypt (encapsulate) it. This results in a ciphertext sent to Bob. Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the random key. The idea was initially introduced by Cramer and Shoup. Experience shows that such constructs are easier to design, analyze, and implement, as the scheme is limited to communicating a fixed-size session key. Leonardo Da Vinci said, “Simplicity is the ultimate sophistication,” which is very true in cryptography.A key exchange (KEX) protocol, like Diffie-Hellman, is yet a different construct: it allows two parties to agree on a shared secret that can be used as a symmetric encryption key. For example, Alice generates a key pair and sends a public key to Bob.
Bob does the same and uses his own key pair with Alice’s public key to generate the shared secret. He then sends his public key to Alice, who can now generate the same shared secret. What’s worth noticing is that both Alice and Bob perform exactly the same operations.A KEM construction can be converted into a KEX. Alice performs key generation and sends the public key to Bob. Bob uses it to encapsulate a symmetric session key and sends it back to Alice. Alice decapsulates the ciphertext received from Bob and gets the symmetric key. This is actually what we do in our experiment to make integration with the TLS protocol less complicated.NTRU Lattice-based Encryption  We will enable the CECPQ2 implemented by Adam Langley from Google on our servers. He described this implementation in detail here. This key exchange uses the HRSS algorithm, which is based on the NTRU (N-Th Degree TRUncated Polynomial Ring) algorithm. Forgoing too much detail, I am going to explain how NTRU works and give simplified examples, and finally, compare it to HRSS.NTRU is a cryptosystem based on a polynomial ring. This means that we do not operate on numbers modulo a prime (like in RSA), but on polynomials of degree \( N \), where the degree of a polynomial is the highest exponent of its variable. For example, \(x^7 + 6x^3 + 11x^2 \) has degree 7.One can add polynomials in the ring in the usual way, by simply adding their coefficients modulo some integer; in NTRU this integer is called \( q \). Polynomials can also be multiplied, but remember, you are operating in the ring, so the result of a multiplication is always a polynomial of degree less than \(N\): the exponents of the resulting polynomial are reduced modulo \(N\).In other words, polynomial ring arithmetic is very similar to modular arithmetic, but instead of working with a set of numbers less than N, you are working with a set of polynomials with degree less than N. To instantiate the NTRU cryptosystem, three domain parameters must be chosen:
\(N\) - degree of the polynomial ring; in NTRU the principal objects are polynomials of degree \(N-1\).
\(p\) - small modulus used during key generation and decryption for reducing message coefficients.
\(q\) - large modulus used during algorithm execution for reducing coefficients of the polynomials.
First, we generate a pair of public and private keys. To do that, two polynomials \(f\) and \(g\) are chosen from the ring in a way that their randomly generated coefficients are much smaller than \(q\). Then key generation computes two inverses of the polynomial: $$ f_p= f^{-1} \bmod{p}   \\  f_q= f^{-1} \bmod{q} $$The last step is to compute $$ pk = p\cdot f_q\cdot g \bmod q $$ which we will use as the public key pk. The private key consists of \(f\) and \(f_p\). \(f_q\) is not part of any key; however, it must remain secret.It might be the case that after choosing \(f\), the inverses modulo \(p\) and \( q \) do not exist. In this case, the algorithm has to start from the beginning and generate another \(f\). That’s unfortunate because calculating the inverse of a polynomial is a costly operation. HRSS brings an improvement to this issue since it ensures that those inverses always exist, making key generation faster than originally proposed in NTRU.
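As an aside, the ring arithmetic just described can be sketched in a few lines of Go. This toy illustration uses tiny, insecure parameters (N = 5, q = 31) chosen only to show how coefficients wrap modulo q and exponents wrap modulo N; it is not how CIRCL or BoringSSL implement NTRU.

package main

import "fmt"

// Toy parameters for illustration only; real NTRU parameters are far larger.
const (
	N = 5  // ring degree: polynomials have coefficients for x^0 .. x^(N-1)
	Q = 31 // coefficient modulus
)

// A poly stores coefficients: p[i] is the coefficient of x^i.
type poly [N]int

// add returns a+b with coefficients reduced modulo Q.
func add(a, b poly) poly {
	var c poly
	for i := 0; i < N; i++ {
		c[i] = (a[i] + b[i]) % Q
	}
	return c
}

// mul returns a*b in the ring: exponents wrap modulo N (x^N behaves like 1)
// and coefficients are reduced modulo Q.
func mul(a, b poly) poly {
	var c poly
	for i := 0; i < N; i++ {
		for j := 0; j < N; j++ {
			c[(i+j)%N] = (c[(i+j)%N] + a[i]*b[j]) % Q
		}
	}
	return c
}

func main() {
	f := poly{1, 2, 0, 0, 3} // 1 + 2x + 3x^4
	g := poly{5, 0, 1}       // 5 + x^2
	fmt.Println("f + g =", add(f, g))
	fmt.Println("f * g =", mul(f, g))
}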
The encryption of a message \(m\) proceeds as follows. First, the message \(m\) is converted to a ring element \(pt\) (there exists an algorithm for performing this conversion in both directions). During encryption, NTRU randomly chooses one polynomial \(b\) called the blinder. The goal of the blinder is to generate different ciphertexts per encryption. Thus, the ciphertext \(ct\) is obtained as $$ ct = (b\cdot pk + pt ) \bmod q $$ Decryption looks a bit more complicated, but it can also be easily understood. Decryption uses both secret values \(f\) and \(f_p\) to recover the plaintext as $$ v =  f \cdot ct \bmod q \\ pt = v \cdot f_p \bmod p $$ This diagram demonstrates why and how decryption works.Step-by-step correctness of the decryption procedure.After obtaining \(pt\), the message \(m\) is recovered by inverting the conversion function.The underlying hardness assumption is that, given two polynomials \(f\) and \(g\) whose coefficients are short compared to the modulus \(q\), it is difficult to distinguish \(pk = \frac{g}{f} \) from a random element in the ring. This means that it’s hard to find \(f\) and \(g\) given only the public key pk.LatticesThe NTRU cryptosystem is a grandfather of lattice-based encryption schemes. The idea of using hard lattice problems for cryptographic purposes is due to Ajtai. His work evolved into a whole area of research with the goal of creating more practical, lattice-based cryptosystems. What is a lattice, and why can it be used for post-quantum crypto? The picture below visualizes a lattice as points in a two-dimensional space. A lattice is defined by the origin \(O\) and basis vectors \( \{ b_1 , b_2\} \). Every point on the lattice is represented as an integer linear combination of the basis vectors, for example \(V = -2b_1+b_2\).There are two classical NP-hard problems in lattice-based cryptography:
Shortest Vector Problem (SVP): Given a lattice, find the shortest non-zero vector in the lattice. In the graph, the vector \(s\) is the shortest one. The SVP problem is NP-hard only under some assumptions.
Closest Vector Problem (CVP): Given a lattice and a vector \(V\) (not necessarily in the lattice), find the lattice vector closest to \(V\). For example, the closest vector to \(t\) is \(z\).
In the graph above, it is easy for us to solve SVP and CVP by simple inspection. However, the lattices used in cryptography have much higher dimensions, say above 1000, as well as highly non-orthogonal basis vectors. On these instances, the problems get extremely hard to solve. It’s even believed that future quantum computers will have it tough.NTRU vs HRSSHRSS, which we use in our experiment, is based on NTRU, but is a slightly better instantiation. The main improvements are:
A faster key generation algorithm.
NTRU encryption can produce ciphertexts that are impossible to decrypt (true for many lattice-based schemes); HRSS fixes this problem.
HRSS is a key encapsulation mechanism.
CECPQ2b - Isogeny-based Post-Quantum TLSFollowing CECPQ2, we have integrated into BoringSSL another hybrid key exchange mechanism relying on SIKE. It is called CECPQ2b, and we will use it in our experimentation in TLS 1.3. SIKE is a key encapsulation method based on Supersingular Isogeny Diffie-Hellman (SIDH). Read more about SIDH in our previous post. The math behind SIDH is related to elliptic curves. A comparison between SIDH and the classical Elliptic Curve Diffie-Hellman (ECDH) is given below.An elliptic curve is a set of points that satisfy a specific mathematical equation. The equation of an elliptic curve may have multiple forms; the standard form is called the Weierstrass equation $$ y^2 = x^3 +ax +b  $$ and its shape can look like the red curve.An interesting fact about elliptic curves is that they have a group structure.
That is, the set of points on the curve has an associated binary operation called point addition. The set of points on the elliptic curve is closed under addition: adding two points results in another point that is also on the elliptic curve.If we can add two different points on a curve, then we can also add one point to itself. And if we do it multiple times, the resulting operation is known as scalar multiplication and denoted as  \(Q = k\cdot P = P+P+\dots+P\) for an integer \(k\).Multiplication by scalars is commutative. It means that two scalar multiplications can be evaluated in any order, \( \color{darkred}{k_a}\cdot\color{darkgreen}{k_b} =   \color{darkgreen}{k_b}\cdot\color{darkred}{k_a} \); this is an important property that makes ECDH possible.It turns out that if the elliptic curve is chosen carefully, scalar multiplication is easy to compute but extremely hard to reverse. That is, given two points \(Q\) and \(P\) such that \(Q=k\cdot P\), finding the integer \(k\) is a difficult task known as the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem is suitable for cryptographic purposes.Alice and Bob agree on a secret key as follows. Alice generates a private key \( k_a\). Then, she uses some publicly known point \(P\) and calculates her public key as \( Q_a = k_a\cdot P\). Bob proceeds in a similar fashion and gets \(k_b\) and \(Q_b = k_b\cdot P\). To agree on a shared secret, each party multiplies their private key with the public key of the other party. The result of this is the shared secret. Key agreement as described above works thanks to the fact that scalars commute: $$  \color{darkgreen}{k_a} \cdot Q_b = \color{darkgreen}{k_a} \cdot  \color{darkred}{k_b} \cdot P \iff \color{darkred}{k_b} \cdot \color{darkgreen}{k_a} \cdot P = \color{darkred}{k_b} \cdot Q_a $$There is a vast theory behind elliptic curves. An introduction to elliptic curve cryptography was posted before, and more details can be found in this book. Now, let’s describe SIDH and compare it with ECDH.Isogenies on Elliptic CurvesBefore explaining the details of the SIDH key exchange, I’ll explain the 3 most important concepts, namely: the j-invariant, an isogeny, and its kernel.Each curve has a number that can be associated to it. Let’s call this number the j-invariant. This number is not unique per curve, meaning many curves have the same value of j-invariant, but it can be viewed as a way to group multiple elliptic curves into disjoint sets. We say that two curves are isomorphic if they are in the same set, called the isomorphism class. The j-invariant is a simple criterion to determine whether two curves are isomorphic. The j-invariant of a curve \(E\) in Weierstrass form \( y^2 = x^3 + ax + b\) is given as $$ j(E) = 1728\frac{4a^3}{4a^3 +27b^2} $$ When it comes to an isogeny, think about it as a map between two curves. Each point on some curve \( E \) is mapped by the isogeny to a point on the isogenous curve \( E' \). We denote the mapping from curve \( E \) to \( E' \) by an isogeny \( \phi \) as:$$\phi: E \rightarrow E' $$Whether those two curves are isomorphic or not depends on the map. An isogeny can be visualised as a mapping between the sets of points of the two curves.There may exist many of those mappings; each curve used in SIDH has a small number of isogenies to other curves. A natural question is how we compute such an isogeny. Here is where the kernel of an isogeny comes in. The kernel uniquely determines the isogeny (up to isomorphism). Formulas for calculating an isogeny from its kernel were initially given by J.
Vélu, and the idea of calculating them efficiently was later extended.To finish, I will summarize what was said above with a picture.There are two isomorphism classes in the picture above. Both curves \(E_1\) and \(E_2\) are isomorphic and have  j-invariant = 6. As curves \(E_3\) and \(E_4\) have j-invariant = 13, they are in a different isomorphism class. There exists an isogeny \(\phi_2\) between curves \(E_3\) and \(E_2\), so they are isogenous. Curves \( E_1 \) and \( E_2 \) are isomorphic, and there is an isogeny \( \phi_1 \) between them. Curves \( E_1\) and \(E_4\) are neither isomorphic nor isogenous.For brevity I’m skipping many important details, like details of the finite field, the fact that isogenies must be separable, and that the kernel is finite. But curious readers can find a number of academic research papers available on the Internet.Big picture: similarities with ECDHLet’s generalize the ECDH algorithm described above, so that we can swap some elements and try to use Supersingular Isogeny Diffie-Hellman.Note that what actually happens during an ECDH key exchange is:
We have a set of points on an elliptic curve, set S.
We have a group of integers used for point multiplication, G.
We use an element from G to act on an element from S to get another element from S: $$ G \cdot S \rightarrow S $$
Now the question is: what are our G and S in an SIDH setting? For SIDH to work, we need a big set of elements and something secret that will act on the elements from that set. This “group action” must also be resistant to attacks performed by quantum computers.In the SIDH setting, those two sets are defined as:
Set S is a set (graph) of j-invariants, such that all the curves are supersingular: \( S = [j(E_1), j(E_2), j(E_3), \dots , j(E_n)]\)
Set G is a set of isogenies acting on elliptic curves and transforming, for example, the elliptic curve \(E_1\) into \(E_n\).
Random walk on a supersingular graphWhen we talk about Isogeny-Based Cryptography, as a topic distinct from Elliptic Curve Cryptography, we usually mean algorithms and protocols that rely fundamentally on the structure of isogeny graphs. An example of such a (small) graph is pictured below.Animation based on Chloe Martindale's slide deckEach vertex of the graph represents a different j-invariant of a set of supersingular curves. The edges between vertices represent isogenies converting one elliptic curve to another. As you can notice, the graph is strongly connected, meaning every vertex can be reached from every other vertex. In the context of isogeny-based crypto, we call such a graph a supersingular isogeny graph. I’ll skip some technical details about the construction of this graph (look for those here or here), but instead describe ideas about how it can be used.As the graph is strongly connected, it is possible to walk the whole graph by starting from any vertex, randomly choosing an edge, following it to the next vertex, and then starting the process again from the new vertex. Such a way of visiting the edges of this graph is called a random walk.The random walk is a key concept that makes isogeny-based crypto feasible. When you look closely at the graph, you can notice that each vertex has a small number of edges incident to it, which is why we can compute the isogenies efficiently. But for any vertex there is also only a limited number of isogenies to choose from, which doesn’t look like a good base for a cryptographic scheme. The key question is: where exactly does the security of the scheme come from?
To get it, it is necessary to visit a couple hundred vertices. What this means in practice is that the secret isogeny (of large degree) is constructed as a composition of multiple isogenies (of small, prime degree). In other words, the secret isogeny is $$ \phi = \phi_n \circ \dots \circ \phi_2 \circ \phi_1 $$This property, together with the properties of the isogeny graph, is what makes some of us believe that the scheme has a good chance of being secure. More specifically, there is no efficient way of finding a path that connects \( E_0 \) with \( E_n \), even with a quantum computer at hand. The security level of the system depends on the value n - the number of steps taken during the walk.The random walk is a core process used both when generating public keys and when computing shared secrets. It starts with a party generating a random value m (see more below), a starting curve \(E_0\), and points P and Q on this curve. Those values are used to compute \( R_1 \), the generator of the kernel of an isogeny, in the following way:$$ R_1 = P + m \cdot Q $$Thanks to formulas given by Vélu, we can now use the point \( R_1 \) to compute the isogeny that the party will use to move from one vertex to another. After the isogeny \( \phi_{R_1} \) is calculated, it is applied to \( E_0 \), which results in a new curve \( E_1 \):$$ \phi_{R_1}: E_0 \rightarrow E_1 $$The isogeny is also applied to the points P and Q. Once on \( E_1 \), the process is repeated. This process is applied n times, and at the end the party ends up on some curve \( E_n \), which defines an isomorphism class and hence a j-invariant.Supersingular Isogeny Diffie-HellmanThe core idea in SIDH is to compose two random walks on an isogeny graph of elliptic curves in such a way that the end node of both compositions is the same.In order to do this, the scheme sets public parameters - a starting curve \( E_0 \) and 2 pairs of base points on this curve, \( (PA,QA) \) and \( (PB,QB) \). Alice generates her random secret key m and calculates a secret isogeny \( \phi_a \) by performing a random walk as described above. The walk finishes with 3 values: the elliptic curve \( E_a \) she has ended up on, and the pair of points \( \phi_a(PB) \) and \( \phi_a(QB) \) obtained by pushing \(PB\) and \(QB\) through Alice’s secret isogeny. Bob proceeds analogously, which results in the triple \( \{E_b, \phi_b(PA), \phi_b(QA)\} \). Each triple forms a public key which is exchanged between the parties.The picture below visualizes the operation. The black dots represent curves grouped into the same isomorphism classes, represented by light blue circles. Alice takes the orange path, ending up on a curve \( E_a \) in a different isomorphism class than Bob, who takes his dark blue path ending on \( E_b \). SIDH is parametrized in a way that Alice and Bob will always end up in different isomorphism classes.Upon receipt of the triple \( \{ E_a, \phi_a(PB), \phi_a(QB) \} \) from Alice, Bob will use his secret value m to calculate a new kernel - but instead of using the points \(PB\) and \(QB\) to calculate an isogeny kernel, he will now use the images \( \phi_a(PB) \) and \( \phi_a(QB) \) received from Alice:$$ R’_1 = \phi_a(PB) + m \cdot \phi_a(QB) $$Afterwards, he uses \( R’_1 \) to start the walk again, resulting in the isogeny \( \phi’_b: E_a \rightarrow E_{ab} \). Alice proceeds analogously, resulting in the isogeny \(\phi’_a: E_b \rightarrow E_{ba} \). With isogenies calculated this way, both Alice and Bob will converge in the same isomorphism class. The math may seem complicated; hopefully the picture below makes it easier to understand.Bob computes a new isogeny and starts his random walk from \( E_a \) received from Alice.
He ends up on some curve \(E_{ab}\). Similarly, Alice calculates a new isogeny, applies it to \( E_b \) received from Bob, and her random walk ends on some curve \(E_{ba}\). Curves \(E_{ab}\) and \(E_{ba}\) are not likely to be the same, but the construction guarantees that they are isomorphic. As mentioned earlier, isomorphic curves have the same value of j-invariant, hence the shared secret is the value of the j-invariant \(j(E_{ab})\).Coming back to the differences between SIDH and ECDH - we can split them into four categories: the elements of the group we are operating on, the cornerstone computation required to agree on a shared secret, the elements representing secret values, and the difficult problem on which the security relies.Comparison based on Craig Costello's slide deck.In ECDH the secret key is an integer scalar; in the case of SIDH it is a secret isogeny, which is also generated from an integer scalar. In the case of ECDH one multiplies a point on a curve by a scalar; in the case of SIDH it is a random walk in an isogeny graph. In the case of ECDH, the public key is a point on a curve; in the case of SIDH, the public part is a curve itself and the images of some points after applying the isogeny. The shared secret in the case of ECDH is a point on a curve; in the case of SIDH it is a j-invariant.SIKE: Supersingular Isogeny Key EncapsulationSIDH could potentially be used as a drop-in replacement for the ECDH protocol. We have actually implemented a proof-of-concept and added it to our implementation of TLS 1.3 in the tls-tris library, and described (together with Mozilla) the implementation details in this draft. Nevertheless, there is a problem with SIDH - the keys can be used only once. In 2016, a few researchers came up with an active attack on SIDH which works only when public keys are reused. In the context of TLS, this is not a big problem, because for each session a fresh key pair is generated (ephemeral keys), but it may not be true for other applications.SIKE is an isogeny-based key encapsulation mechanism which solves this problem. Bob can generate SIKE keys, upload the public part somewhere on the Internet, and then anybody can use it whenever they want to communicate with Bob securely. SIKE reuses SIDH - internally, both sides of the connection always perform SIDH key generation and SIDH key agreement, and apply some other cryptographic primitives in order to convert SIDH into a KEM. SIKE is implemented in a few variants - each variant corresponds to a security level, using 128-, 192- and 256-bit secret keys. A higher security level means a longer running time. More details about SIKE can be found here.SIKE is also one of the candidates in the NIST post-quantum "competition".I’ve skipped many important details to give a brief description of how isogeny-based crypto works. If you’re curious and hungry for details, look at either of these Cloudflare meetups, where Deirdre Connolly talked about isogeny-based cryptography, or this talk by Chloe Martindale during PQ Crypto School 2017. And if you would like to know more about quantum attacks on this scheme, I highly recommend this work.ConclusionQuantum computers that can break meaningful cryptographic parameter settings do not exist, yet. They won't be built for at least the next few years. Nevertheless, they have already changed the way we look at current cryptographic deployments.
There are at least two reasons it’s worth investing in PQ cryptography:
It takes a lot of time to build secure cryptography, and we don’t actually know when today’s classical cryptography will be broken. There is a need for a good mathematical base: an initial idea of what may be secure against something that doesn't exist yet. If you have an idea, you also need a good implementation: constant time, resistant to things like timing and cache side-channels, DFA, DPA, EM, and a bunch of other abbreviations indicating side-channel resistance. There is also deployment: for example, algorithms based on elliptic curves were introduced in '85, but only started to really be used in production during the last decade, 20 or so years later. Obviously, the implementation must be blazingly fast! Last, but not least, integration: we need time to develop standards to allow integration of PQ cryptography with protocols like TLS.
Even though efficient quantum computers probably won't exist for another few years, the threat is real. Data encrypted with current cryptographic algorithms can be recorded now with hopes of being broken in the future.
Cloudflare is motivated to help build the Internet of tomorrow with the tools at hand today. Our interest is in cryptographic techniques that can be integrated into existing protocols and widely deployed on the Internet as seamlessly as possible. PQ cryptography, like the rest of cryptography, includes many cryptosystems that can be used for communications in today’s Internet; Alice and Bob need to perform some computation, but they do not need to buy new hardware to do that.Cloudflare sees great potential in those algorithms and believes that some of them can be used as a safe replacement for classical public-key cryptosystems. Time will tell if we’re justified in this belief!

Introducing CIRCL: An Advanced Cryptographic Library

CloudFlare Blog -

As part of Crypto Week 2019, today we are proud to release the source code of a cryptographic library we’ve been working on: a collection of cryptographic primitives written in Go, called CIRCL. This library includes a set of packages that target cryptographic algorithms for post-quantum (PQ), elliptic curve cryptography, and hash functions for prime groups. Our hope is that it’s useful for a broad audience. Get ready to discover how we made CIRCL unique.Cryptography in GoWe use Go a lot at Cloudflare. It offers a good balance between ease of use and performance; the learning curve is very light, and after a short time, any programmer can get good at writing fast, lightweight backend services. And thanks to the possibility of implementing performance-critical parts in Go assembly, we can try to ‘squeeze the machine’ and get every bit of performance.Cloudflare’s cryptography team designs and maintains security-critical projects. It's not a secret that security is hard. That's why we are introducing the Cloudflare Interoperable Reusable Cryptographic Library - CIRCL. There are multiple goals behind CIRCL. First, we want to concentrate our efforts to implement cryptographic primitives in a single place. This makes it easier to ensure that proper engineering processes are followed. Second, Cloudflare is an active member of the Internet community - we are trying to improve and propose standards to help make the Internet a better place. Cloudflare's mission is to help build a better Internet. For this reason, we want CIRCL to help the cryptographic community create proofs of concept, like the post-quantum TLS experiments we are doing. Over the years, lots of ideas have been put on the table by cryptographers (for example, homomorphic encryption, multi-party computation, and privacy-preserving constructions). Recently, we’ve seen those concepts picked up and exercised in a variety of contexts. CIRCL’s implementations of cryptographic primitives create a powerful toolbox for developers wishing to use them.The Go language provides native packages for several well-known cryptographic algorithms, such as key agreement algorithms, hash functions, and digital signatures. There are also packages maintained by the community under golang.org/x/crypto that provide a diverse set of algorithms for supporting authenticated encryption, stream ciphers, key derivation functions, and bilinear pairings. CIRCL doesn’t try to compete with golang.org/x/crypto in any sense. Our goal is to provide a complementary set of implementations that are more aggressively optimized, or may be less commonly used but have a good chance at being very useful in the future. Unboxing CIRCLOur cryptography team worked on a fresh proposal to augment the capabilities of Go users with a new set of packages. You can get them by typing:
$ go get github.com/cloudflare/circl
The contents of CIRCL are split across different categories, summarized in this table:
Category | Algorithms | Description | Applications
Post-Quantum Cryptography | SIDH, SIKE | Isogeny-based cryptography. SIDH provides key exchange mechanisms using ephemeral keys. SIKE is a key encapsulation mechanism (KEM). | Key agreement protocols.
Key Exchange | X25519, X448 | RFC-7748 provides new key exchange mechanisms based on Montgomery elliptic curves. | TLS 1.3. Secure Shell.
Key Exchange | FourQ | One of the fastest elliptic curves at the 128-bit security level. | Experimental for key agreement and digital signatures.
Digital Signatures | Ed25519 | RFC-8032 provides new digital signature algorithms based on twisted Edwards curves. | Digital certificates and authentication methods.
Hash to Elliptic Curve Groups | Several algorithms: Elligator2, Ristretto, SWU, Icart. | Protocols based on elliptic curves require hash functions that map bit strings to points on an elliptic curve. | Useful in protocols such as Privacy Pass. OPAQUE. PAKE. Verifiable random functions.
Optimization | Curve P-384 | Our optimizations reduce the burden when moving from P-256 to P-384. | ECDSA and ECDH using Suite B at top secret level.
SIKE, a Post-Quantum Key Encapsulation MethodTo better understand the post-quantum world, we started experimenting with post-quantum key exchange schemes and using them for key agreement in TLS 1.3. CIRCL contains the sidh package, an implementation of Supersingular Isogeny-based Diffie-Hellman (SIDH), as well as CCA2-secure Supersingular Isogeny-based Key Encapsulation (SIKE), which is based on SIDH.CIRCL makes playing with PQ key agreement very easy. Below is an example of the SIKE interface that can be used to establish a shared secret between two parties for use in symmetric encryption. The example uses a key encapsulation mechanism (KEM). In this scheme, Alice generates a random secret key, and then uses Bob’s pre-generated public key to encrypt (encapsulate) it. The resulting ciphertext is sent to Bob. Then, Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the secret key. See more details about SIKE in this Cloudflare blog.Let's see how to do this with CIRCL:
// Bob's key pair
prvB := NewPrivateKey(Fp503, KeyVariantSike)
pubB := NewPublicKey(Fp503, KeyVariantSike)
// Generate private key
prvB.Generate(rand.Reader)
// Generate public key
prvB.GeneratePublicKey(pubB)
var publicKeyBytes = make([]byte, pubB.Size())
var privateKeyBytes = make([]byte, prvB.Size())
pubB.Export(publicKeyBytes)
prvB.Export(privateKeyBytes)
// Encode public key to JSON
// Save privateKeyBytes on disk
Bob uploads the public key to a location accessible by anybody. When Alice wants to establish a shared secret with Bob, she performs encapsulation that results in two parts: a shared secret and the result of the encapsulation, the ciphertext.
// Read JSON to bytes
// Alice imports Bob's public key
pubB := NewPublicKey(Fp503, KeyVariantSike)
pubB.Import(publicKeyBytes)
kem := sike.NewSike503(rand.Reader)
kem.Encapsulate(ciphertext, sharedSecret, pubB)
// send ciphertext to Bob
Bob now receives the ciphertext from Alice and decapsulates the shared secret:
kem := sike.NewSike503(rand.Reader)
kem.Decapsulate(sharedSecret, prvB, pubB, ciphertext)
At this point, both Alice and Bob can derive a symmetric encryption key from the secret generated.The SIKE implementation contains:
Two different field sizes: Fp503 and Fp751. The choice of the field is a trade-off between performance and security.
Code optimized for AMD64 and ARM64 architectures, as well as generic Go code. For AMD64, we detect the micro-architecture and, if it’s recent enough (e.g., it supports the ADOX/ADCX and BMI2 instruction sets), we use different multiplication techniques to make execution even faster.
Code implemented in constant time, that is, the execution time doesn’t depend on secret values.
We also took care of a low heap-memory footprint, so that the implementation uses a minimal amount of dynamically allocated memory. In the future, we plan to provide multiple implementations of post-quantum schemes.
Currently, our focus is on algorithms useful for key exchange in TLS. SIDH/SIKE are interesting because the key sizes produced by those algorithms are relatively small (compared with other PQ schemes). Nevertheless, performance is not all that great yet, so we’ll continue looking. We plan to add lattice-based algorithms, such as NTRU-HRSS and Kyber, to CIRCL. We will also add another, more experimental algorithm called cSIDH, which we would like to try in other applications. CIRCL doesn’t currently contain any post-quantum signature algorithms, which is also on our to-do list. After our experiment with TLS key exchange completes, we’re going to look at post-quantum PKI. But that’s a topic for a future blog post, so stay tuned.Last, we must admit that our code is largely based on the implementation from the NIST submission along with the work of former intern Henry de Valence, and we would like to thank both Henry and the SIKE team for their great work.Elliptic Curve CryptographyElliptic curve cryptography brings short key sizes and faster evaluation of operations when compared to algorithms based on RSA. Elliptic curves were standardized during the early 2000s, and have recently gained popularity as they are a more efficient way of securing communications. Elliptic curves are used in almost every project at Cloudflare, not only for establishing TLS connections, but also for certificate validation, certificate revocation (OCSP), Privacy Pass, certificate transparency, and AMP Real URL.The Go language provides native support for NIST-standardized curves, the most popular of which is P-256. In a previous post, Vlad Krasnov described the relevance of optimizing several cryptographic algorithms, including the P-256 curve. When working at Cloudflare scale, little issues around performance are significantly magnified. This is one reason why Cloudflare pushes the boundaries of efficiency.A similar thing happened with the chained validation of certificates. For some certificates, we observed performance issues when validating a chain of certificates. Our team successfully diagnosed this issue: certificates which had signatures from the P-384 curve, which is the curve that corresponds to the 192-bit security level, were taking up 99% of CPU time! It is common for certificates closer to the root of the chain of trust to rely on stronger security assumptions, for example, using larger elliptic curves. Our first-aid reaction came in the form of an optimized implementation written by Brendan McMillion that reduced the time for performing elliptic curve operations by a factor of 10. The code for P-384 is also available in CIRCL.The latest developments in elliptic curve cryptography have caused a shift towards elliptic curve models with faster arithmetic operations. The best example is undoubtedly Curve25519; other examples are the Goldilocks and FourQ curves. CIRCL supports all of these curves, allowing instantiation of Diffie-Hellman exchanges and Edwards digital signatures. Although it slightly overlaps the Go native libraries, CIRCL has architecture-dependent optimizations.Hashing to GroupsMany cryptographic protocols rely on the hardness of solving the Discrete Logarithm Problem (DLP) in special groups, one of which is the integers reduced modulo a large integer. To guarantee that the DLP is hard to solve, the modulus must be a large prime number. Increasing its size boosts security, but also makes operations more expensive.
A better approach is using elliptic curve groups, since they provide faster operations.In some cryptographic protocols, it is common to use a function with the properties of a cryptographic hash function that maps bit strings into elements of the group. This is easy to accomplish when, for example, the group is the set of integers modulo a large prime. However, it is not so clear how to perform this function using elliptic curves. In the cryptographic literature, several methods have been proposed, using the terms hashing to curves or hashing to points interchangeably.The main issue is that there is no general method for deterministically finding points on any elliptic curve; the closest available are methods that target special curves and parameters. This is a problem for implementers of cryptographic algorithms, who have a hard time figuring out a suitable method for hashing to points of an elliptic curve. Compounding that, the chances of doing this wrong are high. There are many different methods, elliptic curves, and security considerations to analyze. For example, a vulnerability in the WPA3 handshake protocol exploited a non-constant-time hashing method, resulting in the recovery of keys. Currently, an IETF draft is tracking in-progress work that provides hashing methods unifying requirements with curves and their parameters. Corresponding to this problem, CIRCL will include implementations of hashing methods for elliptic curves. Our development is accompanying the evolution of the IETF draft. Therefore, users of CIRCL will have this added value, as the methods implement ready-to-go functionality covering the needs of some cryptographic protocols.Update on Bilinear PairingsBilinear pairings are sometimes regarded as a tool for cryptanalysis; however, pairings can also be used in a constructive way, allowing instantiation of advanced public-key algorithms, for example, identity-based encryption, attribute-based encryption, blind digital signatures, and three-party key agreement, among others.An efficient way to instantiate a bilinear pairing is to use elliptic curves. Note that only a special class of curves can be used; these so-called pairing-friendly curves have specific properties that enable the efficient evaluation of a pairing.Some families of pairing-friendly curves were introduced by Barreto-Naehrig (BN), Kachisa-Schaefer-Scott (KSS), and Barreto-Lynn-Scott (BLS). BN256 is a BN curve using a 256-bit prime and is one of the fastest options for implementing a bilinear pairing. The Go native library supports this curve in the package golang.org/x/crypto/bn256. In fact, the BN256 curve is used by Cloudflare’s Geo Key Manager, which allows distributing encrypted keys around the world. At Cloudflare, high performance is a must, and with this motivation, in 2017, we released an optimized implementation of the BN256 package that is 8x faster than Go’s native package. The success of these optimizations reached several other projects such as the Ethereum protocol and the Randomness Beacon project.Recent improvements in solving the DLP over extension fields, GF(pᵐ) for p prime and m>1, impacted the security of pairings, causing a recalculation of the parameters used for pairing-friendly curves.Before these discoveries, the BN256 curve provided a 128-bit security level, but now larger primes are needed to target the same security level.
That does not mean that the BN256 curve has been broken: BN256 still gives a security level of roughly 100 bits, that is, approximately 2¹⁰⁰ operations are required to cause real danger, which remains infeasible with current computing power.

Alongside our CIRCL announcement, we also want to share our plans for research and development to obtain efficient curve(s) that can become a stronger successor of BN256. According to the estimation by Barbulescu-Duquesne, a BN curve must use primes of at least 456 bits to match a 128-bit security level. However, the impact of the recalculation of parameters brings BLS and KSS curves back to the main scene as efficient alternatives. To this end, a standardization effort at the IETF is in progress with the aim of defining parameters and pairing-friendly curves that match different security levels.

Note that regardless of the curve(s) chosen, there is an unavoidable performance downgrade when moving from BN256 to a stronger curve. Actual timings were presented by Aranha, who described the evolution of the race for high-performance pairing implementations. The purpose of our continuous development of CIRCL is to minimize this impact through fast implementations.

Optimizations
Go itself is very easy to learn and use for system programming, and yet it makes it possible to use assembly so that you can stay close “to the metal”. We have blogged about improving performance in Go a few times in the past (see these posts about encryption, ciphersuites, and image encoding).

When developing CIRCL, we crafted the code to get the best possible performance from the machine. We leverage the capabilities provided by the architecture and architecture-specific instructions. This means that in some cases we need to get our hands dirty and rewrite parts of the software in Go assembly, which is not easy, but definitely worth the effort when it comes to performance. We focused on x86-64, as this is our main target, but we also think it’s worth looking at the ARM architecture, and in some cases (like SIDH or P-384), CIRCL has optimized code for this platform.

We also try to ensure that the code uses memory efficiently, crafting it so that fast allocations on the stack are preferred over expensive heap allocations. In cases where heap allocation is needed, we tried to design the APIs so that they allow pre-allocating memory ahead of time and reusing it for multiple operations.

Security
The CIRCL library is offered as-is and without a guarantee. Therefore, it is expected that changes in the code, repository, and API will occur in the future. We recommend taking caution before using this library in a production application, since part of its content is experimental.

As new attacks and vulnerabilities arise over time, the security of software should be treated as a continuous process. In particular, the assessment of cryptographic software is critical: it requires expertise from several fields, not only computer science. Cryptography engineers must be aware of the latest vulnerabilities and methods of attack in order to defend against them.

The development of CIRCL follows best practices for secure development. For example, if the execution time of the code depends on secret data, an attacker could leverage those irregularities to recover secret keys.
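As a small illustration of that constant-time discipline, here is a hedged Go sketch contrasting an early-exit comparison with the constant-time comparison from the standard library's crypto/subtle package; the MAC-checking scenario and values are hypothetical:

package main

import (
	"crypto/subtle"
	"fmt"
)

// leakyEqual returns early at the first mismatching byte, so its running
// time depends on how much of the secret an attacker has guessed correctly.
func leakyEqual(a, b []byte) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false // timing leaks the position of the first mismatch
		}
	}
	return true
}

func main() {
	expected := []byte("secret-mac-value") // hypothetical expected MAC
	received := []byte("secret-mac-varue") // hypothetical attacker guess

	fmt.Println("leaky:        ", leakyEqual(expected, received))
	// ConstantTimeCompare examines every byte regardless of where the
	// mismatch occurs, so the running time does not depend on the secret.
	fmt.Println("constant-time:", subtle.ConstantTimeCompare(expected, received) == 1)
}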
In our code, we take care to write constant-time code and hence prevent timing-based attacks. Developers of cryptographic software must also be aware of optimizations performed by the compiler and/or the processor, since these optimizations can lead to insecure binaries in some cases. All of these issues could be exploited in real attacks aimed at compromising systems and keys. Therefore, software changes must be tracked through thorough code reviews. Static analyzers and automated testing tools also play an important role in the security of the software.

Summary
CIRCL is envisioned as an effective tool for experimenting with modern cryptographic algorithms while providing high-performance implementations. Today marks the starting point of a continuous effort of innovation and contribution to the community in the form of a cryptographic library. There are still several other applications, such as homomorphic encryption, multi-party computation, and privacy-preserving protocols, that we would like to explore.

We are a team of cryptography, security, and software engineers working to improve and augment Cloudflare products. Our team keeps the communication channels open for receiving comments, including improvements, and merging contributions. We welcome opinions and contributions! If you would like to get in contact, check out our GitHub repository for CIRCL at github.com/cloudflare/circl. We want to share our work and hope it makes someone else’s job easier as well. Finally, special thanks to all the contributors who have either directly or indirectly helped to implement the library - Ko Stoffelen, Brendan McMillion, Henry de Valence, Michael McLoughlin and all the people who invested their time in reviewing our code.

Cloudflare's Ethereum Gateway

CloudFlare Blog -

Today, we are excited to announce Cloudflare's Ethereum Gateway, where you can interact with the Ethereum network without installing any additional software on your computer.This is another tool in Cloudflare’s Distributed Web Gateway tool set. Currently, Cloudflare lets you host content on the InterPlanetary File System (IPFS) and access it through your own custom domain. Similarly, the new Ethereum Gateway allows access to the Ethereum network, which you can provision through your custom hostname.This setup makes it possible to add interactive elements to sites powered by Ethereum smart contracts, a decentralized computing platform. And, in conjunction with the IPFS gateway, this allows hosting websites and resources in a decentralized manner, and has the extra bonus of the added speed, security, and reliability provided by the Cloudflare edge network. You can access our Ethereum gateway directly at https://cloudflare-eth.com. This brief primer on how Ethereum and smart contracts work has examples of the many possibilities of using the Cloudflare Distributed Web Gateway.Primer on EthereumYou may have heard of Ethereum as a cryptocurrency. What you may not know is that Ethereum is so much more. Ethereum is a distributed virtual computing network that stores and enforces smart contracts.So, what is a smart contract?Good question. Ethereum smart contracts are simply a piece of code stored on the Ethereum blockchain. When the contract is triggered, it runs on the Ethereum Virtual Machine (EVM). The EVM is a distributed virtual machine that runs smart contract code and produces cryptographically verified changes to the state of the Ethereum blockchain as its result. To illustrate the power of smart contracts, let's consider a little example.Anna wants to start a VPN provider but she lacks the capital. To raise funds for her venture she decides to hold an Initial Coin Offering (ICO). Rather than design an ICO contract from scratch Anna bases her contract off of ERC-20. ERC-20 is a template for issuing fungible tokens, perfect for ICOs. Anna sends her ERC-20 compliant contract to the Ethereum network, and starts to sell stock in her new company, VPN Co. Once she's sorted out funds, Anna sits down and starts to write a smart contract. Anna’s contract asks customers to send her their public key, along with some Ether (the coin product of Ethereum). She then authorizes the public key to access her VPN service. All without having to hold any secret information. Huzzah!Next, rather than set up the infrastructure to run a VPN herself, Anna decides to use the blockchain again, but this time as a customer. Cloud Co. sells managed cloud infrastructure using their own smart contract. Anna programs her contract to send the appropriate amount of Ether to Cloud Co.'s contract. Cloud Co. then provisions the servers she needs to host her VPN. By automatically purchasing more infrastructure every time she has a new customer, her VPN company can scale totally autonomously. Finally, Anna pays dividends to her investors out of the profits, keeping a little for herself.And there you have it.A decentralised, autonomous, smart VPN provider.A smart contract stored on the blockchain has an associated account for storing funds, and the contract is triggered when someone sends Ether to that account. So for our VPN example, the provisioning contract triggers when someone transfers money into the account associated with Anna’s contract. 
What distinguishes smart contracts from ordinary code?The "smart" part of a smart contract is they run autonomously. The "contract" part is the guarantee that the code runs as written.Because this contract is enforced cryptographically, maintained in the tamper-resistant medium of the blockchain and verified by the consensus of the network, these contracts are more reliable than regular contracts which can provoke dispute.Ethereum Smart Contracts vs. Traditional ContractsA regular contract is enforced by the court system, litigated by lawyers. The outcome is uncertain; different courts rule differently and hiring more or better lawyers can swing the odds in your favor.Smart contract outcomes are predetermined and are nearly incorruptible. However, here be dragons: though the outcome can be predetermined and incorruptible, a poorly written contract might not have the intended behavior, and because contracts are immutable, this is difficult to fix.How are smart contracts written?You can write smart contracts in a number of languages, some of which are Turing complete, e.g. Solidity. A Turing complete language lets you write code that can evaluate any computable function. This puts Solidity in the same class of languages as Python and Java. The compiled bytecode is then run on the EVM.The EVM differs from a standard VM in a number of ways: The EVM is distributedEach piece of code is run by numerous nodes. Nodes verify the computation before accepting a block, and therefore ensure that miners who want their blocks accepted must always run the EVM honestly. A block is only considered accepted when more than half of the network accepts it. This is the consensus part of Ethereum.The EVM is entirely deterministicThis means that the same inputs to a function always produce the same outputs. Because regular VMs have access to file storage and the network, the results of a function call can be non-deterministic. Every EVM has the same start state, thus a given set of inputs always gives the same outputs. This makes the EVM more reliable than a standard VM.There are two big gotchas that come with this determinism:EVM bytecode is Turing complete and therefore discerning the outputs without running the computation is not always possible.Ethereum smart contracts can store state on the blockchain. This means that the output of the function can vary as the blockchain changes. Although, technically this is deterministic in that the blockchain is an input to the function, it may still be impossible to derive the output in advance.This however means that they suffer from the same problems as any piece of software – bugs. However, unlike normal code where the authors can issue a patch, code stored on the blockchain is immutable. More problematically, even if the author provides a new smart contract, the old one is always still available on the blockchain.This means that when writing contracts authors must be especially careful to write secure code, and include a kill switch to ensure that if bugs do reside in the code, they can be squashed. If there is no kill switch and there are vulnerabilities in the smart contract that can be exploited, it can potentially lead to the theft of resources from the smart contract or from other individuals. EVM Bytecode includes a special SELFDESTRUCT opcode that deletes a contract, and sends all funds to the specified address for just this purpose. The need to include a kill switch was brought into sharp focus during the infamous DAO incident. 
The DAO smart contract acted as a complex decentralized venture capital (VC) fund and, at its peak, held Ether worth $250 million collected from a group of investors. Hackers exploited vulnerabilities in the smart contract and stole Ether worth $50 million.

Because there is no way to undo transactions in Ethereum, there was a highly controversial “hard fork,” where the majority of the community agreed to accept a block with an “irregular state change” that essentially drained all DAO funds into a special “WithdrawDAO” recovery contract. By convincing enough miners to accept this irregular block as valid, the DAO could return funds.

Not everyone agreed with the change. Those who disagreed rejected the irregular block and formed the Ethereum Classic network, with both branches of the fork growing independently.

Kill switches, however, can cause their own problems. For example, when a contract used as a library flips its kill switch, all contracts relying on this contract can no longer operate as intended, even though the underlying library code is immutable. This caused over 500,000 ETH to become stuck in multi-signature wallets when an attacker triggered the kill switch of an underlying library. Users of the multi-signature library assumed the immutability of the code meant that the library would always operate as anticipated. But smart contracts that interact with the blockchain are only deterministic when accounting for the state of the blockchain. In the wake of the DAO, various tools were created that check smart contracts for bugs or enable bug bounties, for example Securify and The Hydra.

Another way smart contracts avoid bugs is by using standardized patterns. For example, ERC-20 defines a standardized interface for producing tokens such as those used in ICOs, and ERC-721 defines a standardized interface for implementing non-fungible tokens. Non-fungible tokens can be used for trading-card games like CryptoKitties. CryptoKitties is a trading-card style game built on the Ethereum blockchain. Players can buy, sell, and breed cats, with each cat being unique.

CryptoKitties is built on a collection of smart contracts that provides an open-source Application Binary Interface (ABI) for interacting with the KittyVerse -- the virtual world of the CryptoKitties application. An ABI simply allows you to call functions in a contract and receive any returned data. The KittyBase code may look like this:

contract KittyBase is KittyAccessControl {
    event Birth(address owner, uint256 kittyId, uint256 matronId, uint256 sireId, uint256 genes);
    event Transfer(address from, address to, uint256 tokenId);

    struct Kitty {
        uint256 genes;
        uint64 birthTime;
        uint64 cooldownEndBlock;
        uint32 matronId;
        uint32 sireId;
        uint32 siringWithId;
        uint16 cooldownIndex;
        uint16 generation;
    }

    [...]

    function _transfer(address _from, address _to, uint256 _tokenId) internal { ... }
    function _createKitty(uint256 _matronId, uint256 _sireId, uint256 _generation, uint256 _genes, address _owner) internal returns (uint) { ... }

    [...]
}

Besides defining what a Kitty is, this contract defines two basic functions for transferring and creating kitties. Both are internal and can only be called by contracts that implement KittyBase. The KittyOwnership contract implements both ERC-721 and KittyBase, and provides an external transfer function that calls the internal _transfer function. This code is compiled into bytecode and written to the blockchain.
By implementing a standardised interface like ERC-721, smart contracts that aren’t specifically aware of CryptoKitties can still interact with the KittyVerse. The CryptoKitties ABI functions allow users to create distributed apps (dApps), of their own design on top of the KittyVerse, and allow other users to use their dApps. This extensibility helps demonstrate the potential of smart contracts.How is this so different?Smart contracts are, by definition, public. Everyone can see the terms and understand where the money goes. This is a radically different approach to providing transparency and accountability. Because all contracts and transactions are public and verified by consensus, trust is distributed between the people, rather than centralized in a few big institutions.The trust given to institutions is historic in that we trust them because they have previously demonstrated trustworthiness. The trust placed in consensus-based algorithms is based on the assumption that most people are honest, or more accurately, that no sufficiently large subset of people can collude to produce a malicious outcome. This is the democratisation of trust. In the case of the DAO attack, a majority of nodes agreed to accept an “irregular” state transition. This effectively undid the damage of the attack and demonstrates how, at least in the world of blockchain, perception is reality. Because most people “believed” (accepted) this irregular block, it became a “real,” valid block. Most people think of the blockchain as immutable, and trust the power of consensus to ensure correctness, however if enough people agree to do something irregular, they don't have to keep the rules. So where does Cloudflare fit in?Accessing the Ethereum network and its attendant benefits directly requires running complex software, including downloading and cryptographically verifying hundreds of gigabytes of data, which apart from producing technical barriers to entry for users, can also exclude people with low-power devices. To help those users and devices access the Ethereum network, the Cloudflare Ethereum gateway allows any device capable of accessing the web to interact with the Ethereum network in a safe, reliable way. Through our gateway, not only can you explore the blockchain, but if you give our gateway a signed transaction, we’ll push it to the network to allow miners to add it to their blockchain. This means that you can send Ether and even put new contracts on the blockchain without having to run a node. "But Jonathan," I hear you say, "by providing a gateway aren't you just making Cloudflare a centralizing institution?"That’s a fair question. Thankfully, Cloudflare won’t be alone in offering these gateways. We’re joining alongside organizations, such as Infura, to expand the constellation of gateways that already exist. We hope that, by providing a fast, reliable service, we can enable people who never previously used smart-contracts to do so, and in so doing bring the benefits they offer to billions of regular Internet users. "We're excited that Cloudflare is bringing their infrastructure expertise to the Ethereum ecosystem. Infura has always believed in the importance of standardized, open APIs and compatibility between gateway providers, so we look forward to collaborating with their team to build a better distributed web." - E.G. 
Galano, Infura co-founder.

By providing a gateway to the Ethereum network, we help users make the jump from general web user to cryptocurrency native, and eventually make the distributed web a fundamental part of the Internet.

What can you do with Cloudflare's Gateway?
Visit cloudflare-eth.com to interact with our example app. But to really explore the Ethereum world, access the RPC API, where you can do anything that can be done on the Ethereum network itself, from examining contracts to transferring funds. Our Gateway accepts POST requests containing JSON. For a complete list of calls, visit the Ethereum GitHub page. So, to get the block number of the most recent block, you could run:

curl https://cloudflare-eth.com -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

and you would get a response something like this:

{ "jsonrpc": "2.0", "id": 1, "result": "0x780f17" }

We also invite developers to build dApps based on our Ethereum gateway using our API. Our API allows developers to build websites powered by the Ethereum blockchain. Check out the developer docs to get started. If you want to read more about how Ethereum works, check out this deep dive.
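For developers who prefer a typed client over curl, the same eth_blockNumber call shown above could be issued from Go roughly as follows; this is a minimal sketch against the public https://cloudflare-eth.com endpoint, with error handling kept to the basics:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type rpcRequest struct {
	JSONRPC string        `json:"jsonrpc"`
	Method  string        `json:"method"`
	Params  []interface{} `json:"params"`
	ID      int           `json:"id"`
}

type rpcResponse struct {
	Result string `json:"result"`
}

func main() {
	// Build the same JSON-RPC body used in the curl example above.
	body, _ := json.Marshal(rpcRequest{JSONRPC: "2.0", Method: "eth_blockNumber", Params: []interface{}{}, ID: 1})

	resp, err := http.Post("https://cloudflare-eth.com", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out rpcResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest block (hex):", out.Result) // e.g. "0x780f17"
}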
The architecture
Cloudflare is uniquely positioned to host an Ethereum gateway, and we have the utmost faith in the products we offer to customers. This is why the Cloudflare Ethereum gateway runs as a Cloudflare customer and we dogfood our own products to provide a fast and reliable gateway. The domain we run the gateway on (https://cloudflare-eth.com) uses Cloudflare Workers to cache responses for popular queries made to the gateway. Responses for these queries are answered directly from the Cloudflare edge, which can result in a ~6x speed-up.

We also use Load Balancing and Argo Tunnel for fast, redundant, and secure content delivery. With Argo Smart Routing enabled, requests and responses to our Ethereum gateway are tunnelled directly from our Ethereum node to the Cloudflare edge using the best possible routing.

Similar to our IPFS gateway, cloudflare-eth.com is an SSL for SaaS provider. This means that anyone can set up the Cloudflare Ethereum gateway as a backend for access to the Ethereum network through their own registered domains. For more details on how to set up your own domain with this functionality, see the Ethereum tab on cloudflare.com/distributed-web-gateway.

With these features, you can use Cloudflare’s Distributed Web Gateway to create a fully decentralized website with an interactive backend that allows interaction with the IPFS and Ethereum networks. For example, you can host your content on IPFS (using something like Pinata to pin the files), and then host the website backend as a smart contract on Ethereum. This architecture does not require a centralized server for hosting files or the actual website. Added to the power, speed, and security provided by Cloudflare’s edge network, your website is delivered to users around the world with unparalleled efficiency.

Embracing a distributed future
At Cloudflare, we support technologies that help distribute trust. By providing a gateway to the Ethereum network, we hope to facilitate the growth of a decentralized future. We thank the Ethereum Foundation for their support of a new gateway in expanding the distributed web:

“Cloudflare's Ethereum Gateway increases the options for thin-client applications as well as decentralization of the Ethereum ecosystem, and I can't think of a better person to do this work than Cloudflare. Allowing access through a user's custom hostname is a particularly nice touch. Bravo.” - Dr. Virgil Griffith, Head of Special Projects, Ethereum Foundation.

We hope that by allowing anyone to use the gateway as the backend for their domain, we make the Ethereum network more accessible to everyone, with the added speed and security brought by serving this content directly from Cloudflare’s global edge network. So, go forth and build our vision – the distributed crypto-future!

Continuing to Improve our IPFS Gateway

CloudFlare Blog -

When we launched our InterPlanetary File System (IPFS) gateway last year we were blown away by the positive reception. Countless people gave us valuable suggestions for improvement and made open-source contributions to make serving content through our gateway easy (many captured in our developer docs). Since then, our gateway has grown to regularly handle over a thousand requests per second, and has become the primary access point for several IPFS websites.We’re committed to helping grow IPFS and have taken what we have learned since our initial release to improve our gateway. So far, we’ve done the following:Automatic Cache PurgeOne of the ways we tried to improve the performance of our gateway when we initially set it up was by setting really high cache TTLs. After all, content on IPFS is largely meant to be static. The complaint we heard though, was that site owners were frustrated at wait times upwards of several hours for changes to their website to propagate.The way an IPFS gateway knows what content to serve when it receives a request for a given domain is by looking up the value of a TXT record associated with the domain – the DNSLink record. The value of this TXT record is the hash of the entire site, which changes if any one bit of the website changes. So we wrote a Worker script that makes a DNS-over-HTTPS query to 1.1.1.1 and bypasses cache if it sees that the DNSLink record of a domain is different from when the content was originally cached.Checking DNS gives the illusion of a much lower cache TTL and usually adds less than 5ms to a request, whereas revalidating the cache with a request to the origin could take anywhere from 30ms to 300ms. And as an additional usability bonus, the 1.1.1.1 cache automatically purges when Cloudflare customers change their DNS records. Customers who don’t manage their DNS records with us can purge their cache using this tool.Beta Testing for Orange-to-OrangeOur gateway was originally based on a feature called SSL for SaaS. This tweaks the way our edge works to allow anyone, Cloudflare customers or not, to CNAME their own domain to a target domain on our network, and have us send traffic we see for their domain to the target domain’s origin. SSL for SaaS keeps valid certificates for these domains in the Cloudflare database (hence the name), and applies the target domain’s configuration to these requests (for example, enforcing Page Rules) before they reach the origin.The great thing about SSL for SaaS is that it doesn’t require being on the Cloudflare network. New people can start serving their websites through our gateway with their existing DNS provider, instead of migrating everything over. All Cloudflare settings are inherited from the target domain. This is a huge convenience, but also means that the source domain can’t customize their settings even if they do migrate.This can be improved by an experimental feature called Orange-to-Orange (O2O) from the Cloudflare Edge team. O2O allows one zone on Cloudflare to CNAME to another zone, and apply the settings of both zones in layers. For example, cloudflare-ipfs.com has Always Use HTTPS turned off for various reasons, which means that every site served through our gateway also does. 
O2O allows site owners to override this setting by enabling Always Use HTTPS just for their website, if they know it’s okay, as well as adding custom Page Rules and Worker scripts to embed all sorts of complicated logic.If you’d like to try this out on your domain, open a support ticket with this request and we will enable it for you in the coming weeks.Subdomain-based GatewayTo host an application on IPFS it’s pretty much essential to have a custom domain for your app. We discussed all the reasons for this in our post, End-to-End Integrity with IPFS – essentially saying that because browsers only sandbox websites at the domain-level, serving an app directly from a gateway’s URL is not secure because another (malicious) app could steal its data.Having a custom domain gives apps a secure place to keep user data, but also makes it possible for whoever controls the DNS for the domain to change a website’s content without warning. To provide both a secure context to apps as well as eternal immutability, Cloudflare set up a subdomain-based gateway at cf-ipfs.com.cf-ipfs.com doesn’t respond to requests to the root domain, only at subdomains, where it interprets the subdomain as the hash of the content to serve. This means a request to https://<hash>.cf-ipfs.com is the equivalent of going to https://cloudflare-ipfs.com/ipfs/<hash>. The only technicality is that because domain names are case-insensitive, the hash must be re-encoded from Base58 to Base32. Luckily, the standard IPFS client provides a utility for this!As an example, we’ll take the classic Wikipedia mirror on IPFS:https://cloudflare-ipfs.com/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/First, we convert the hash, QmXoyp...6uco to base32:$ ipfs cid base32 QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq which tells us we can go here instead:https://bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq.cf-ipfs.com/wiki/The main downside of the subdomain approach is that for clients without Encrypted SNI support, the hash is leaked to the network as part of the TLS handshake. This can be bad for privacy and enable network-level censorship.Enabling Session AffinityLoading a website usually requires fetching more than one asset from a backend server, and more often than not, “more than one” is more like “more than a dozen.” When that website is being loaded over IPFS, it dramatically improves performance when the IPFS node can make one connection and re-use it for all assets.Behind the curtain, we run several IPFS nodes to reduce the likelihood of an outage and improve throughput. Unfortunately, with the way it was originally setup, each request for a different asset on a website would likely go to a different IPFS node and all those connections would have to be made again.We fixed this by replacing the original backend load balancer with our own Load Balancing product that supports Session Affinity and automatically directs requests from the same user to the same IPFS node, minimizing redundant network requests.Connecting with PinataAnd finally, we’ve configured our IPFS nodes to maintain a persistent connection to the nodes run by Pinata, a company that helps people pin content to the IPFS network. Having a persistent connection significantly improves the performance and reliability of requests to our gateway, for content on their network. 
Pinata has written their own blog post, which you can find here, that describes how to upload a website to IPFS and keep it online with a combination of Cloudflare and Pinata.As always, we look forward to seeing what the community builds on top of our work, and hearing about how else Cloudflare can improve the Internet.
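As a footnote to the automatic cache purge described earlier, the DNSLink lookup itself is easy to reproduce. The production check runs as a Cloudflare Worker in JavaScript; the Go sketch below performs an equivalent query against 1.1.1.1's DNS-over-HTTPS JSON endpoint, assuming the common convention of publishing the DNSLink TXT record at _dnslink.<domain>. The example domain is a placeholder:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type dohAnswer struct {
	Data string `json:"data"`
}

type dohResponse struct {
	Answer []dohAnswer `json:"Answer"`
}

// dnslink fetches the DNSLink TXT record for a domain via the
// cloudflare-dns.com DNS-over-HTTPS JSON API.
func dnslink(domain string) ([]string, error) {
	req, err := http.NewRequest("GET",
		"https://cloudflare-dns.com/dns-query?name=_dnslink."+domain+"&type=TXT", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "application/dns-json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out dohResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	values := make([]string, 0, len(out.Answer))
	for _, a := range out.Answer {
		values = append(values, a.Data) // e.g. "dnslink=/ipfs/<hash>"
	}
	return values, nil
}

func main() {
	records, err := dnslink("example.com") // placeholder domain
	if err != nil {
		log.Fatal(err)
	}
	// If this value differs from the one stored alongside the cached copy,
	// the gateway bypasses cache and fetches the new site root.
	fmt.Println(records)
}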

Securing Certificate Issuance using Multipath Domain Control Validation

CloudFlare Blog -

This blog post is part of Crypto Week 2019.

Trust on the Internet is underpinned by the Public Key Infrastructure (PKI). PKI grants servers the ability to securely serve websites by issuing digital certificates, providing the foundation for encrypted and authentic communication. Certificates make HTTPS encryption possible by using the public key in the certificate to verify server identity. HTTPS is especially important for websites that transmit sensitive data, such as banking credentials or private messages. Thankfully, modern browsers, such as Google Chrome, flag websites not secured using HTTPS by marking them “Not secure,” allowing users to be more security-conscious about the websites they visit.

This blog post introduces a new, free tool Cloudflare offers to CAs so they can further secure certificate issuance. But before we dive in too deep, let’s talk about where certificates come from.

Certificate Authorities
Certificate Authorities (CAs) are the institutions responsible for issuing certificates. When issuing a certificate for any given domain, they use Domain Control Validation (DCV) to verify that the entity requesting a certificate for the domain is the legitimate owner of the domain. With DCV, the domain owner: creates a DNS resource record for a domain; uploads a document to the web server located at that domain; OR proves ownership of the domain’s administrative email account. The DCV process prevents adversaries from obtaining private-key and certificate pairs for domains not owned by the requestor. Preventing adversaries from acquiring this pair is critical: if an incorrectly issued certificate and private-key pair wind up in an adversary’s hands, they could pose as the victim’s domain and serve sensitive HTTPS traffic. This violates our existing trust in the Internet, and compromises private data on a potentially massive scale. For example, an adversary that tricks a CA into mis-issuing a certificate for gmail.com could then perform TLS handshakes while pretending to be Google, and exfiltrate cookies and login information to gain access to the victim’s Gmail account. The risks of certificate mis-issuance are clearly severe.

Domain Control Validation
To prevent attacks like this, CAs only issue a certificate after performing DCV. One way of validating domain ownership is through HTTP validation, done by uploading a text file to a specific HTTP endpoint on the web server they want to secure. Another DCV method uses email verification, where an email with a validation code link is sent to the administrative contact for the domain.

HTTP Validation
Suppose Alice buys the domain name aliceswonderland.com and wants to get a dedicated certificate for this domain. Alice chooses to use Let’s Encrypt as her certificate authority. First, Alice must generate her own private key and create a certificate signing request (CSR). She sends the CSR to Let’s Encrypt, but the CA won’t issue a certificate for that CSR and private key until they know Alice owns aliceswonderland.com. Alice can then choose to prove that she owns this domain through HTTP validation.

When Let’s Encrypt performs DCV over HTTP, they require Alice to place a randomly named file in the /.well-known/acme-challenge path for her website. The CA must retrieve the text file by sending an HTTP GET request to http://aliceswonderland.com/.well-known/acme-challenge/<random_filename>.
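To picture the CA's side of this check, here is a hedged Go sketch that fetches the challenge file and compares it against the expected value. The domain, filename, and token reuse the example below purely for illustration, and real ACME validation involves more bookkeeping (account keys, retries, and redirects):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

// checkHTTPDCV fetches the challenge file and verifies that it contains the
// value the CA expects. Domain, filename, and token are placeholders.
func checkHTTPDCV(domain, filename, expected string) (bool, error) {
	url := fmt.Sprintf("http://%s/.well-known/acme-challenge/%s", domain, filename)
	resp, err := http.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(body)) == expected, nil
}

func main() {
	ok, err := checkHTTPDCV("aliceswonderland.com", "YnV0dHNz", "YnV0dHNz.TEST_CLIENT_KEY")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("domain control validated:", ok)
}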
An expected value must be present on this endpoint for DCV to succeed. For HTTP validation, Alice would upload a file to http://aliceswonderland.com/.well-known/acme-challenge/YnV0dHNz where the body contains:

curl http://aliceswonderland.com/.well-known/acme-challenge/YnV0dHNz

GET /.well-known/acme-challenge/YnV0dHNz
Host: aliceswonderland.com

HTTP/1.1 200 OK
Content-Type: application/octet-stream

YnV0dHNz.TEST_CLIENT_KEY

The CA instructs Alice to use the Base64 token YnV0dHNz. TEST_CLIENT_KEY is an account-linked key that only the certificate requestor and the CA know. The CA uses this field combination to verify that the certificate requestor actually owns the domain. Afterwards, Alice can get her certificate for her website!

DNS Validation
Another way users can validate domain ownership is to add a DNS TXT record containing a verification string or token from the CA to their domain’s resource records. For example, here’s a domain for an enterprise validating itself to Google:

$ dig TXT aliceswonderland.com
aliceswonderland.com. 28 IN TXT "google-site-verification=COanvvo4CIfihirYW6C0jGMUt2zogbE_lC6YBsfvV-U"

Here, Alice chooses to create a TXT DNS resource record with a specific token value. A Google CA can verify the presence of this token to validate that Alice actually owns her website.

Types of BGP Hijacking Attacks
Certificate issuance is required for servers to securely communicate with clients. This is why it’s so important that the process responsible for issuing certificates is also secure. Unfortunately, this is not always the case. Researchers at Princeton University recently discovered that common DCV methods are vulnerable to attacks executed by network-level adversaries. If Border Gateway Protocol (BGP) is the “postal service” of the Internet responsible for delivering data through the most efficient routes, then Autonomous Systems (AS) are individual post office branches that represent an Internet network run by a single organization. Sometimes network-level adversaries advertise false routes over BGP to steal traffic, especially if that traffic contains something important, like a domain’s certificate. Bamboozling Certificate Authorities with BGP highlights five types of attacks that can be orchestrated during the DCV process to obtain a certificate for a domain the adversary does not own. After implementing these attacks, the authors were able to (ethically) obtain certificates for domains they did not own from the top five CAs: Let’s Encrypt, GoDaddy, Comodo, Symantec, and GlobalSign. But how did they do it?

Attacking the Domain Control Validation Process
There are two main approaches to attacking the DCV process with BGP hijacking: the Sub-Prefix Attack and the Equally-Specific-Prefix Attack. These attacks create a vulnerability when an adversary sends a certificate signing request for a victim’s domain to a CA. When the CA verifies the network resources using an HTTP GET request (as discussed earlier), the adversary uses BGP attacks to hijack traffic to the victim’s domain so that the CA’s request is rerouted to the adversary and not the domain owner. To understand how these attacks are conducted, we first need to do a little bit of math.

Every device on the Internet uses an IP (Internet Protocol) address as a numerical identifier. IPv4 addresses contain 32 bits and follow a slash notation to indicate the size of the prefix. So, in the network address 123.1.2.0/24, “/24” refers to how many bits of the address make up the network prefix.
This means that there are 8 bits left that contain the host addresses, for a total of 256 host addresses. The smaller the prefix number, the more host addresses remain in the network. With this knowledge, let’s jump into the attacks! Attack one: Sub-Prefix AttackWhen BGP announces a route, the router always prefers to follow the more specific route. So if 123.0.0.0/8 and 123.1.2.0/24 are advertised, the router will use the latter as it is the more specific prefix. This becomes a problem when an adversary makes a BGP announcement to a specific IP address while using the victim’s domain IP address. Let’s say the IP address for our victim, leagueofentropy.com, is 123.0.0.0/8. If an adversary announces the prefix 123.1.2.0/24, then they will capture the victim’s traffic, launching a sub-prefix hijack attack. For example, in an attack during April 2018, routes were announced with the more specific /24 vs. the existing /23. In the diagram below, /23 is Texas and /24 is the more specific Austin, Texas. The new (but nefarious) routes overrode the existing routes for portions of the Internet. The attacker then ran a nefarious DNS server on the normal IP addresses with DNS records pointing at some new nefarious web server instead of the existing server. This attracted the traffic destined for the victim’s domain within the area the nefarious routes were being propagated. The reason this attack was successful was because a more specific prefix is always preferred by the receiving routers.Attack two: Equally-Specific-Prefix AttackIn the last attack, the adversary was able to hijack traffic by offering a more specific announcement, but what if the victim’s prefix is /24 and a sub-prefix attack is not viable? In this case, an attacker would launch an equally-specific-prefix hijack, where the attacker announces the same prefix as the victim. This means that the AS chooses the preferred route between the victim and the adversary’s announcements based on properties like path length. This attack only ever intercepts a portion of the traffic. There are more advanced attacks that are covered in more depth in the paper. They are fundamentally similar attacks but are more stealthy.Once an attacker has successfully obtained a bogus certificate for a domain that they do not own, they can perform a convincing attack where they pose as the victim’s domain and are able to decrypt and intercept the victim’s TLS traffic. The ability to decrypt the TLS traffic allows the adversary to completely Monster-in-the-Middle (MITM) encrypted TLS traffic and reroute Internet traffic destined for the victim’s domain to the adversary. To increase the stealthiness of the attack, the adversary will continue to forward traffic through the victim’s domain to perform the attack in an undetected manner. DNS SpoofingAnother way an adversary can gain control of a domain is by spoofing DNS traffic by using a source IP address that belongs to a DNS nameserver. Because anyone can modify their packets’ outbound IP addresses, an adversary can fake the IP address of any DNS nameserver involved in resolving the victim’s domain, and impersonate a nameserver when responding to a CA.This attack is more sophisticated than simply spamming a CA with falsified DNS responses. Because each DNS query has its own randomized query identifiers and source port, a fake DNS response must match the DNS query’s identifiers to be convincing. 
Because these query identifiers are random, making a spoofed response with the correct identifiers is extremely difficult.Adversaries can fragment User Datagram Protocol (UDP) DNS packets so that identifying DNS response information (like the random DNS query identifier) is delivered in one packet, while the actual answer section follows in another packet. This way, the adversary spoofs the DNS response to a legitimate DNS query.Say an adversary wants to get a mis-issued certificate for victim.com by forcing packet fragmentation and spoofing DNS validation. The adversary sends a DNS nameserver for victim.com a DNS packet with a small Maximum Transmission Unit, or maximum byte size. This gets the nameserver to start fragmenting DNS responses. When the CA sends a DNS query to a nameserver for victim.com asking for victim.com’s TXT records, the nameserver will fragment the response into the two packets described above: the first contains the query ID and source port, which the adversary cannot spoof, and the second one contains the answer section, which the adversary can spoof. The adversary can continually send a spoofed answer to the CA throughout the DNS validation process, in the hopes of sliding their spoofed answer in before the CA receives the real answer from the nameserver.In doing so, the answer section of a DNS response (the important part!) can be falsified, and an adversary can trick a CA into mis-issuing a certificate.SolutionAt first glance, one could think a Certificate Transparency log could expose a mis-issued certificate and allow a CA to quickly revoke it. CT logs, however, can take up to 24 hours to include newly issued certificates, and certificate revocation can be inconsistently followed among different browsers. We need a solution that allows CAs to proactively prevent this attacks, not retroactively address them.We’re excited to announce that Cloudflare provides CAs a free API to leverage our global network to perform DCV from multiple vantage points around the world. This API bolsters the DCV process against BGP hijacking and off-path DNS attacks.Given that Cloudflare runs 175+ datacenters around the world, we are in a unique position to perform DCV from multiple vantage points. Each datacenter has a unique path to DNS nameservers or HTTP endpoints, which means that successful hijacking of a BGP route can only affect a subset of DCV requests, further hampering BGP hijacks. And since we use RPKI, we actually sign and verify BGP routes.This DCV checker additionally protects CAs against off-path, DNS spoofing attacks. An additional feature that we built into the service that helps protect against off-path attackers is DNS query source IP randomization. By making the source IP unpredictable to the attacker, it becomes more challenging to spoof the second fragment of the forged DNS response to the DCV validation agent.By comparing multiple DCV results collected over multiple paths, our DCV API makes it virtually impossible for an adversary to mislead a CA into thinking they own a domain when they actually don’t. 
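The comparison step can be sketched in a few lines of Go. The agent URLs below are hypothetical placeholders standing in for checks performed from different vantage points; this is not Cloudflare's actual agent or orchestrator API, just an illustration of requiring agreement across paths:

package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

// observe fetches the DCV challenge as seen from one hypothetical vantage point.
func observe(agentURL string) (string, error) {
	resp, err := http.Get(agentURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Placeholder agent endpoints, each assumed to reach the target over a different path.
	agents := []string{
		"https://agent-1.example/check?domain=victim.com",
		"https://agent-2.example/check?domain=victim.com",
		"https://agent-3.example/check?domain=victim.com",
	}

	var mu sync.Mutex
	var wg sync.WaitGroup
	results := make(map[string]int)

	for _, a := range agents {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			if v, err := observe(url); err == nil {
				mu.Lock()
				results[v]++
				mu.Unlock()
			}
		}(a)
	}
	wg.Wait()

	// Only accept the validation if every successful observation agrees.
	// A localized BGP hijack would typically skew a subset of paths and fail this test.
	if len(results) == 1 {
		fmt.Println("all vantage points agree; validation passes")
	} else {
		fmt.Println("observations disagree; validation rejected:", results)
	}
}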
CAs can use our tool to ensure that they only issue certificates to rightful domain owners.

Our multipath DCV checker consists of two services: DCV agents, responsible for performing DCV out of a specific datacenter, and a DCV orchestrator that handles multipath DCV requests from CAs and dispatches them to a subset of DCV agents.

When a CA wants to ensure that DCV occurred without being intercepted, it can send a request to our API specifying the type of DCV to perform and its parameters. The DCV orchestrator then forwards each request to a random subset of over 20 DCV agents in different datacenters. Each DCV agent performs the DCV request and forwards the result to the DCV orchestrator, which aggregates what each agent observed and returns it to the CA. This approach can also be generalized to performing multipath queries over DNS records, like Certificate Authority Authorization (CAA) records. CAA records authorize CAs to issue certificates for a domain, so spoofing them to trick unauthorized CAs into issuing certificates is another attack vector that multipath observation prevents.

As we were developing our multipath checker, we were in contact with the Princeton research group that introduced the proof-of-concept (PoC) of certificate mis-issuance through BGP hijacking attacks. Prateek Mittal, coauthor of the Bamboozling Certificate Authorities with BGP paper, wrote:

“Our analysis shows that domain validation from multiple vantage points significantly mitigates the impact of localized BGP attacks. We recommend that all certificate authorities adopt this approach to enhance web security. A particularly attractive feature of Cloudflare’s implementation of this defense is that Cloudflare has access to a vast number of vantage points on the Internet, which significantly enhances the robustness of domain control validation.”

Our DCV checker follows our belief that trust on the Internet must be distributed, and vetted through third-party analysis (like that provided by Cloudflare) to ensure consistency and security. This tool joins our pre-existing Certificate Transparency monitor as a set of services CAs are welcome to use in improving the accountability of certificate issuance.

An Opportunity to Dogfood
Building our multipath DCV checker also allowed us to dogfood multiple Cloudflare products. The DCV orchestrator, as a simple fetcher and aggregator, was a fantastic candidate for Cloudflare Workers. We implemented the orchestrator in TypeScript using this post as a guide, and created a typed, reliable orchestrator service that was easy to deploy and iterate on. Hooray that we don’t have to maintain our own dcv-orchestrator server!

We use Argo Tunnel to allow Cloudflare Workers to contact DCV agents. Argo Tunnel allows us to easily and securely expose our DCV agents to the Workers environment. Since Cloudflare has approximately 175 datacenters running DCV agents, we expose many services through Argo Tunnel, and have had the opportunity to load test Argo Tunnel as a power user with a wide variety of origins. Argo Tunnel readily handled this influx of new origins!

Getting Access to the Multipath DCV Checker
If you and/or your organization are interested in trying our DCV checker, email dcv@cloudflare.com and let us know!
We’d love to hear more about how multipath querying and validation bolsters the security of your certificate issuance.As a new class of BGP and IP spoofing attacks threaten to undermine PKI fundamentals, it’s important that website owners advocate for multipath validation when they are issued certificates. We encourage all CAs to use multipath validation, whether it is Cloudflare’s or their own. Jacob Hoffman-Andrews, Tech Lead, Let’s Encrypt, wrote:“BGP hijacking is one of the big challenges the web PKI still needs to solve, and we think multipath validation can be part of the solution. We’re testing out our own implementation and we encourage other CAs to pursue multipath as well”Hopefully in the future, website owners will look at multipath validation support when selecting a CA.

League of Entropy: Not All Heroes Wear Capes

CloudFlare Blog -

To kick-off Crypto Week 2019, we are really excited to announce a new solution to a long-standing problem in cryptography. To get a better understanding of the technical side behind this problem, please refer to the next post for a deeper dive.Everything from cryptography to big money lottery to quantum mechanics requires some form of randomness. But what exactly does it mean for a number to be randomly generated and where does the randomness come from? Generating randomness dates back three thousand years, when the ancients rolled “the bones” to determine their fate. Think of lotteries-- seems simple, right? Everyone buys their tickets, chooses six numbers, and waits for an official to draw them randomly from a basket. Sounds like a foolproof solution. And then in 1980, the host of the Pennsylvania lottery drawing was busted for using weighted balls to choose the winning number. This lesson, along with the need of other complex systems for generating random numbers spurred the creation of random number generators. Just like a lottery game selects random numbers unpredictably, a random number generator is a device or software responsible for generating sequences of numbers in an unpredictable manner. As the need for randomness has increased, so has the need for constant generation of substantially large, unpredictable numbers. This is why organizations developed publicly available randomness beacons -- servers generating completely unpredictable 512-bit strings (about 155-digit numbers) at regular intervals. Now, you might think using a randomness beacon for random generation processes, such as those needed for lottery selection, would make the process resilient against adversarial manipulation, but that’s not the case. Single-source randomness has been exploited to generate biased results. Today, randomness beacons generate numbers for lotteries and election audits -- both affect the lives and fortunes of millions of people. Unfortunately, exploitation of the single point of origin of these beacons have created dishonest results that benefited one corrupt insider. To thwart exploitation efforts, Cloudflare and other randomness-beacon providers have joined forces to bring users a quorum of decentralized randomness beacons. After all, eight independent globally distributed beacons can be much more trustworthy than one! We’re happy to introduce you to .... THE LEAGUE …. OF …. ENTROPY !!!!!! What is a randomness beacon? A randomness beacon is a public service that provides unpredictable random numbers at regular intervals. drand (pronounced dee-rand) is a distributed randomness beacon developed by Nicolas Gailly; with the help of Philipp Jovanovic, and Mathilde Raynal. The drand project originated from the research paper Scalable Bias-Resistant Distributed Randomness published at the 2017 IEEE Symposium on Security and Privacy by Ewa Syta, Philipp Jovanovic, Eleftherios Kokoris Kogias, Nicolas Gailly, Linus Gasser, Ismail Khoffi, Michael J. Fischer, Bryan Ford, from the Decentralized/Distributed Systems (DEDIS) lab at EPFL, Yale University, and Trinity College Hartford, with support from Research Institute. 
For every randomness generation round, drand provides the following properties, as specified in the research paper:

Availability - The distributed randomness generation completes successfully with high probability.
Unpredictability - No party learns anything about the random output of the current round, except with negligible probability, until a sufficient number of drand nodes reveal their contributions in the randomness generation protocol.
Unbiasability - The random output represents an unbiased, uniformly random value, except with negligible probability.
Verifiability - The random output is third-party verifiable against the collective public key computed during drand's setup. This serves as the unforgeable attestation that the documented set of drand nodes ran the protocol to produce the one-and-only random output, except with negligible probability.

Entropy measures the unpredictable nature of a number. For randomness, the more entropy the better, so naturally it’s where we got our name, the League of Entropy. Our founding members are contributing their individual high-entropy sources to provide a more random and unpredictable beacon that generates publicly verifiable random values every sixty seconds. The fact that the drand beacon is decentralized and built using appropriate, provably-secure cryptographic primitives increases our confidence that it possesses all the aforementioned properties. This global network of servers generating randomness ensures that even if a few servers are offline, the beacon continues to produce new numbers by using the remaining online servers. Even if one or two of the servers or their entropy sources were to be compromised, the rest will still ensure that the jointly-produced entropy is fully unpredictable and unbiasable.

Who exactly is running this beacon? Currently, the League of Entropy is a consortium of global organizations and individual contributors, including: Cloudflare, Protocol Labs researcher Nicolas Gailly, the University of Chile, École polytechnique fédérale de Lausanne (EPFL), Kudelski Security, and EPFL researchers Philipp Jovanovic and Ludovic Barman.

Meet the League of Entropy
Cloudflare’s LavaRand: LavaRand sources her high entropy from Cloudflare’s wall of lava lamps at our San Francisco headquarters. The unpredictable flow of “lava” inside the lamps is used as an input to a camera feed into a CSPRNG (Cryptographically Secure PseudoRandom Number Generator) that generates the random value.
EPFL’s URand: URand’s power comes from the local randomness generator present on every computer at /dev/urandom. The randomness input is collected from sources such as keyboard presses, mouse clicks, network traffic, etc. URand bundles these random inputs to produce a continuous stream of randomness.
UChile’s Seismic Girl: Seismic Girl extracts super verifiable randomness from five sources queried every minute. These sources include: seismic measurements of shakes and earthquakes in Chile; a stream from a local radio station; a selection of Twitter posts; data from the Ethereum blockchain; and their own off-the-shelf RNG card.
Kudelski Security’s ChaChaRand: ChaChaRand uses a CRNG (Cryptographic Random Number Generator) based on the ChaCha20 stream cipher.
Protocol Labs’ InterplanetaryRand: InterplanetaryRand uses the power of entropy to ensure protocol safety across space and time by using environmental noise and the Linux PRNG, supplemented by CPU-sourced randomness (RdRand).
Together, our heroes are committed to #savetheinternet by combining their randomness to form a globally distributed and cryptographically verifiable randomness beacon.  Public versus Private RandomnessDifferent types of randomness are needed for different types of applications. The trick to generating secure cryptographic keys is to use large, privately-generated random numbers that no one else can predict. With randomness beacons publicly generating and announcing random numbers, users should NOT be using the output of a randomness beacon for their secret keys, as these numbers are accessible by anyone. If an attacker can guess the random number that a user’s private cryptographic key was derived from, they can crack their system and decrypt confidential information. This simply means that random numbers generated by a public beacon are not safe to use for encryption keys: not because there’s anything wrong with the randomness, but simply because the randomness is public.Clients using the drand beacon can request private randomness from some or all of the drand nodes if they would like to generate a random value that will not be publicly announced.  For more information on how to do this, check the developer docs . On the other hand, public randomness is often employed by users requiring a randomness value that is not supposed to be secret but whose generation must be transparent, fair, and unbiased. This is perfect for many purposes such as games, lotteries, and election auditing, where the auditor and the public require transparency into when and how and how fairly the random value was generated. The League of Entropy provides public randomness that any user can retrieve from leagueofentropy.com. Users will be able to view the 512-bit string value that is generated every 60 seconds. Why 60 seconds? No particular reason. Theoretically, the randomness generation can go as fast as the hardware allows, but it’s not necessary for most use cases. Values generated every 60 seconds give users 1440 random values in one 24-hour period.*FRIENDLY REMINDER: THIS RANDOMNESS IS PUBLIC. DO NOT USE IT FOR PRIVATE CRYPTOGRAPHIC KEYS*Why does public randomness matter? Election auditing In the US, most elections are followed by an audit to verify they were unbiased and conducted fairly. Robust auditing systems increase voter confidence by improving election officials’ ability to respond effectively to allegations of fraud, and to detect bugs in the system. Currently, most election ballots and precincts are randomly chosen by election officials. This approach is potentially vulnerable to bias by a corrupt insider who might select certain precincts to present a preferred outcome. Even in a situation where every voter district was tampered with, by using a robust, distributed, and most importantly, unpredictable and unbiasable beacon, election auditors can trust that a small sample of districts are enough to audit, as long as an attacker cannot predict district selection. In Chile, election poll workers are randomly selected from a pool of eligible voters. The University of Chile’s Random UChile project has been working on a prototype that uses their randomness beacon for this process. 
Alejandro Hevia, leader of Random UChile, believes that public randomness is important for transparency in election auditing, and that distributed randomness lets people trust that it is unlikely that multiple contributors to the beacon colluded, rather than having to trust a single entity.
Lotteries
From 2005 to 2014, the information security director for the Multi-State Lottery Association, Eddie Tipton, rigged a random number generator and won the lottery six times! Tipton could predict the winning numbers by skipping the standard random seeding process. He inserted code into the random number generator that checked the date, day of the week, and time. If these three variables did not align, the random number generator used radioactive material and a Geiger counter to generate a random seed. If the variables aligned as surreptitiously programmed, which usually only happened once a year, then it would generate the seed using a 7-variable formula fed into a Mersenne Twister, a pseudorandom number generator. Tipton knew these 7 variables. He knew the small pool of numbers that might be the seed. This knowledge allowed him to predict the results of the Mersenne Twister (a toy sketch of this kind of seed enumeration appears at the end of this post). This is a scam that a distributed randomness beacon can make substantially more difficult, if not impossible.
Rob Sand, the former Iowa Assistant Attorney General and current Iowa State Auditor who prosecuted the Tipton cases, is also an advocate for improved controls. He said: “There is no excuse for an industry that rakes in $80 billion in annual revenue not to use the most sophisticated, truly random means available to ensure integrity.”
Distributed ledger platforms
In many cryptocurrencies and blockchain-based distributed computing platforms, such as Ethereum, there is often a need for random selection at the application layer. One solution to prevent bias for such a random selection is to use a distributed randomness beacon like drand to generate the random value. Justin Drake, researcher at the Ethereum Foundation, believes "randomness from a drand-type federation could be a particularly good match for real-time decentralized applications on Ethereum such as live gaming and gambling". This is because such a beacon can deliver ultra-low-latency randomness for a broad range of applications where public randomness is required.
Let’s get you on drand!
To learn more about the League of Entropy and how to use the distributed randomness beacon, visit https://leagueofentropy.com. The website periodically displays the randomness generated by the network, and you can even see previously generated values. Go ahead, try it out!
How to join the league: Want to join the league? We’re not exclusive! If you are an organization or an individual who is interested in contributing to the drand beacon, check out the developer docs for more information regarding the requirements for setting up a server and joining the existing group. drand is currently in its beta release phase and an approval request must be sent to leagueofentropy@googlegroups.com in order to be approved as a contributing server.
Looking into the future
It only makes sense that the Internet of the future will demand unpredictable randomness beacons. The League of Entropy is out there now, creating the basis for future systems to leverage trustworthy public randomness. Our goal is to increase user trust and provide a one-stop shop for all your public entropy needs. Come, join us!
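A brief technical footnote on the lottery example above: the danger was not the Mersenne Twister itself, but the tiny space of possible seeds. When a generator is seeded from a handful of predictable values, anyone who knows the seeding formula can simply enumerate every candidate seed and precompute every possible draw. The toy Go sketch below illustrates that enumeration using Go’s built-in math/rand generator as a stand-in; the real lottery code, its 7-variable seeding formula, and its draw procedure are of course not reproduced here.

package main

import (
	"fmt"
	"math/rand"
)

// draw simulates a lottery draw of six numbers in 1..49 from a seeded
// generator. A hypothetical stand-in for the real draw procedure.
func draw(seed int64) []int {
	r := rand.New(rand.NewSource(seed))
	nums := make([]int, 6)
	for i := range nums {
		nums[i] = r.Intn(49) + 1
	}
	return nums
}

func main() {
	// Suppose the seed were derived only from the draw date, leaving just a
	// few dozen plausible values. The "insider" enumerates them all.
	candidates := make(map[string]int64)
	for seed := int64(20190101); seed <= 20190131; seed++ {
		candidates[fmt.Sprint(draw(seed))] = seed
	}
	fmt.Printf("only %d candidate outcomes to check\n", len(candidates))

	// The actual draw is then guaranteed to be one of the precomputed outcomes.
	actual := draw(20190117)
	if seed, ok := candidates[fmt.Sprint(actual)]; ok {
		fmt.Printf("draw %v matches precomputed seed %d\n", actual, seed)
	}
}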

Inside the Entropy

CloudFlare Blog -

Randomness, randomness everywhere; Nor any verifiable entropy. Generating random outcomes is an essential part of everyday life; from lottery drawings and conducting competitions, to performing deep cryptographic computations. To use randomness, we must have some way to 'sample' it. This requires interpreting some natural phenomenon (such as a fair dice roll) as an event that generates some random output. From a computing perspective, we interpret random outputs as bytes that we can then use in algorithms (such as drawing a lottery) to achieve the functionality that we want. Sampling randomness securely and efficiently is a critical component of all modern computing systems. For example, nearly all public-key cryptography relies on the fact that algorithms can be seeded with bytes generated from genuinely random outcomes. In scientific experiments, a random sampling of results is necessary to ensure that data collection measurements are not skewed. Until now, generating random outputs in a way that we can verify they are indeed random has been very difficult, typically involving a variety of statistical measurements. During Crypto Week, Cloudflare is releasing a new public randomness beacon as part of the launch of the League of Entropy. The League of Entropy is a network of beacons that produces distributed, publicly verifiable random outputs for use in applications where the nature of the randomness must be publicly audited. The underlying cryptographic architecture is based on the drand project. Verifiable randomness is essential for ensuring trust in various institutional decision-making processes such as elections and lotteries. There are also cryptographic applications that require verifiable randomness. In the land of decentralized consensus mechanisms, the DFINITY approach uses random seeds to decide the outcome of leadership elections. In this setting, it is essential that the randomness is publicly verifiable so that the outcome of the leadership election is trustworthy. Such a situation arises more generally in sortitions: an election where leaders are selected as a random individual (or subset of individuals) from a larger set. In this blog post, we will give a technical overview of the cryptography used in the distributed randomness beacon, and how it can be used to generate publicly verifiable randomness. We believe that distributed randomness beacons have a huge amount of utility in realizing the Internet of the future, where we will be able to rely on distributed, decentralized solutions to problems of a global scale.
Randomness & entropy
A source of randomness is measured in terms of the amount of entropy it provides. Think about the entropy provided by a random output as a score to indicate how “random” the output actually is. The notion of information entropy was concretised by the famous scientist Claude Shannon in his paper A Mathematical Theory of Communication, and is sometimes known as Shannon Entropy. A common way to think about random outputs is: a sequence of bits derived from some random outcome. For the sake of argument, consider a fair 8-sided dice roll with sides marked 0-7. The outputs of the dice can be written as the bit-strings 000,001,010,...,111. Since the dice is fair, any of these outputs is equally likely. This means that each of the bits is equally likely to be 0 or 1. Consequently, interpreting the outcome of the dice roll as a bit-string gives a random output with 3 bits of entropy.
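To put a number on the dice example, Shannon entropy is computed as H = -Σ p_i log2(p_i) over the outcome probabilities p_i. A short Go sketch that evaluates this for a fair 8-sided die (3 bits) and, looking ahead a little, a fair 6-sided die (about 2.58 bits):

package main

import (
	"fmt"
	"math"
)

// entropy returns the Shannon entropy, in bits, of a distribution given as
// a list of outcome probabilities.
func entropy(probs []float64) float64 {
	h := 0.0
	for _, p := range probs {
		if p > 0 {
			h -= p * math.Log2(p)
		}
	}
	return h
}

// uniform returns the distribution of a fair n-sided die.
func uniform(n int) []float64 {
	probs := make([]float64, n)
	for i := range probs {
		probs[i] = 1.0 / float64(n)
	}
	return probs
}

func main() {
	fmt.Printf("fair 8-sided die: %.2f bits\n", entropy(uniform(8))) // 3.00
	fmt.Printf("fair 6-sided die: %.2f bits\n", entropy(uniform(6))) // ~2.58
}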
More generally, if a perfect source of randomness guarantees strings with n bits of entropy, then it generates bit-strings where each bit is equally likely to be 0 or 1. This allows us to predict the value of any bit with maximum probability 1/2. If the outputs are sampled from such a perfect source, we consider them uniformly distributed. If we sample the outputs from a source where one bit is predictable with higher probability, then the string has fewer than n bits of entropy (only n-1 bits if that bit is completely predictable). To go back to the dice analogy, rolling a 6-sided dice provides less than 3 bits of entropy because the possible outputs are 000,001,010,011,100,101, and so the first two bits are more likely to be set to 0 than to 1. It is possible to mix entropy sources using specifically designed mixing functions to retrieve something with even greater entropy. The maximum resulting entropy is the sum of the entropy of the individual sources used as input.
Sampling randomness
To sample randomness, let’s first identify the appropriate sources. There are many natural phenomena that one can use: atmospheric noise; radioactive decay; and turbulent motion, like that generated in Cloudflare’s wall of lava lamps(!). Unfortunately, these phenomena require very specific measuring tools, which are prohibitively expensive to install in mainstream consumer electronics. As such, most personal computing devices usually use external usage characteristics for seeding specific generator functions that output randomness as and when the system requires it. These characteristics include keyboard typing patterns and speed, and mouse movement – since such usage patterns are based on the human user, it is assumed they provide sufficient entropy as a randomness source. An example of a random number generator that takes entropy from these characteristics is the Linux kernel’s /dev/random. Naturally, it is difficult to tell whether a system is actually returning random outputs by only inspecting the outputs. There are statistical tests that detect whether a series of outputs is not uniformly distributed, but these tests cannot ensure that they are unpredictable. This means that it is hard to detect if a given system has had its randomness generation compromised.
Distributed randomness
It’s clear we need alternative methods for sampling randomness so that we can provide guarantees that trusted mechanisms, such as elections and lotteries, take place in secure, tamper-resistant environments. The drand project was started by researchers at EPFL to address this problem. The drand charter is to provide an easily configurable randomness beacon running at geographically distributed locations around the world. The intention is for each of these beacons to generate portions of randomness that can be combined into a single random string that is publicly verifiable. This functionality is achieved using threshold cryptography. Threshold cryptography seeks to derive solutions for standard cryptographic problems by combining information from multiple distributed entities. The notion of the threshold means that if there are n entities, then any t of the entities can combine to construct some cryptographic object (like a ciphertext or a digital signature). These threshold systems are characterised by a setup phase, where each entity learns a share of data.
They will later use this share of data to create a combined cryptographic object with a subset of the other entities.
Threshold randomness
In the case of a distributed randomness protocol, there are n randomness beacons that broadcast random values sampled from their initial data share, and the current state of the system. This data share is created during a trusted setup phase, and also takes in some internal random value that is generated by the beacon itself. When a user needs randomness, they send requests to some number t of beacons, where t < n, and combine these values using a specific procedure. The result is a random value that can be verified and used for public auditing mechanisms. Consider what happens if some proportion c/n of the randomness beacons are corrupted at any one time. The nature of a threshold cryptographic system is that, as long as c < t, the end result still remains random. If c exceeds t, then the random values produced by the system become predictable and the notion of randomness is lost. In summary, the distributed randomness procedure provides verifiably random outputs with sufficient entropy only when c < t. By distributing the beacons independently of each other and in geographically disparate locations, the probability that t locations can be corrupted at any one time is extremely low. The minimum choice of t is equal to n/2.
How does it actually work?
What we described above sounds a bit like magic™. Even if c = t-1, we can ensure that the output is indeed random and unpredictable! To make it clearer how this works, let’s dive a bit deeper into the underlying cryptography. Two core components of drand are: a distributed key generation (DKG) procedure, and a threshold signature scheme. These core components are used in the setup and randomness generation procedures, respectively. In just a bit, we’ll outline how drand uses these components (without navigating too deeply into the onerous mathematics).
Distributed key generation
At a high level, the DKG procedure creates a distributed secret key that is formed of n different key pairs (vk_i, sk_i), each one being held by the entity i in the system. These key pairs will eventually be used to instantiate a (t,n)-threshold signature scheme (we will discuss this more later). In essence, t of the entities will be able to combine to construct a valid signature on any message. To think about how this might work, consider a distributed key generation scheme that creates n distributed keys that are going to be represented by pizzas. Each pizza is split into n slices and one slice from each is secretly passed to one of the participants. Each entity receives one slice from each of the different pizzas (n in total) and combines these slices to form their own pizza. Each combined pizza is unique and secret for each entity, representing their own key pair.
Mathematical intuition
Mathematically speaking, and rather than thinking about pizzas, we can describe the underlying phenomenon by reconstructing lines or curves on a graph. We can take two coordinates on an (x,y) plane and immediately (and uniquely) define a line with the equation y = ax+b. For example, the points (2,3) and (4,7) immediately define a line with gradient (7-3)/(4-2) = 2, so a=2. You can then derive the b coefficient as -1 by evaluating either of the coordinates in the equation y = 2x + b.
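The same reconstruction can be written down mechanically with the Lagrange interpolation formula (discussed further below). Here is a small Go sketch that recovers the line through (2,3) and (4,7) and evaluates it at a few points, confirming that it is y = 2x - 1:

package main

import "fmt"

type point struct{ x, y float64 }

// interpolate evaluates, at x, the unique polynomial of degree len(pts)-1
// passing through the given points, using the Lagrange interpolation formula.
func interpolate(pts []point, x float64) float64 {
	result := 0.0
	for i, pi := range pts {
		// Build the Lagrange basis polynomial for point i, evaluated at x.
		term := pi.y
		for j, pj := range pts {
			if i != j {
				term *= (x - pj.x) / (pi.x - pj.x)
			}
		}
		result += term
	}
	return result
}

func main() {
	pts := []point{{2, 3}, {4, 7}}
	for _, x := range []float64{0, 1, 2, 4, 10} {
		fmt.Printf("f(%g) = %g\n", x, interpolate(pts, x)) // matches y = 2x - 1
	}
}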
By uniquely, we mean that only the line y = 2x - 1 satisfies the two coordinates that are chosen; no other choice of a or b fits. The curve ax+b has degree 1, where the degree of the equation refers to the highest order multiplication of unknown variables in the equation. That might seem like mathematical jargon, but the equation above contains only one term ax, which depends on the unknown variable x. In this specific term, the exponent (or power) of x is 1, and so the degree of the entire equation is also 1. Likewise, by taking three coordinate pairs in the same plane, we uniquely define a quadratic curve with an equation of the form y = ax^2 + bx + c, with the coefficients a,b,c uniquely defined by the chosen coordinates. The process is a bit more involved than the above case, but it essentially starts in the same way using three coordinate pairs (x_1, y_1), (x_2, y_2) and (x_3, y_3). By a quadratic curve, we mean a curve of degree 2. We can see that this curve has degree 2 because it contains two terms ax^2 and bx that depend on x. The highest order term is the ax^2 term with an exponent of 2, so this curve has degree 2 (ignore the term bx, which has a smaller power). What we are ultimately trying to show is that this approach scales for curves of degree n (of the form y = a_n x^n + … + a_1 x + a_0). So, if we take n+1 coordinates on the (x,y) plane, then we can uniquely reconstruct the curve of this form entirely. Such degree n equations are also known as polynomials of degree n. To generalise the approach to arbitrary degrees we need some kind of formula. This formula should take n+1 pairs of coordinates and return a polynomial of degree n. Fortunately, such a formula exists without us having to derive it ourselves: it is known as the Lagrange interpolation polynomial. Using the formula in the link, we can reconstruct any degree-n polynomial using n+1 unique pairs of coordinates. Going back to pizzas temporarily, it will become clear in the next section how this Lagrange interpolation procedure essentially describes the dissemination of one slice (corresponding to (x,y) coordinates) taken from a single pizza (the entire degree t-1 polynomial) among n participants. Running this procedure n times in parallel allows each entity to construct their entire pizza (or the eventual key pair).
Back to key generation
Intuitively, in the DKG procedure we want to distribute n key pairs among n participants. This effectively means running n parallel instances of a t-out-of-n Shamir Secret Sharing scheme. This secret sharing scheme is built entirely upon the polynomial interpolation technique that we described above. In a single instance, we take the secret key to be the first coefficient of a polynomial of degree t-1, and the public key is a published value that depends on this secret key but does not reveal the actual coefficient. Think of RSA, where we have a number N = pq for secret large prime numbers p,q, where N is public but does not reveal the actual factorisation. Notice that if the polynomial is reconstructed using the interpolation technique above, then we immediately learn the secret key, because the first coefficient will be made explicit. Each secret sharing scheme publishes shares, where each share is a different evaluation of the polynomial (dependent on the entity i receiving the key share).
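To see this concretely, here is a toy (t,n) = (3,5) Shamir split and reconstruction in Go, working over a deliberately tiny prime field for readability; real schemes use a large cryptographic field, and drand’s DKG additionally involves Pedersen commitments and verification steps that are omitted here. It reuses the interpolation idea from the earlier sketch, evaluated at x = 0 to recover the constant term (the secret).

package main

import "fmt"

const p = 7919 // a small prime; real schemes use a large cryptographic field

type share struct{ x, y int64 }

// mod keeps values in the range [0, p).
func mod(a int64) int64 { return ((a % p) + p) % p }

// inv computes the modular inverse of a via Fermat's little theorem (a^(p-2) mod p).
func inv(a int64) int64 {
	result, base, e := int64(1), mod(a), int64(p-2)
	for e > 0 {
		if e&1 == 1 {
			result = mod(result * base)
		}
		base = mod(base * base)
		e >>= 1
	}
	return result
}

// split evaluates the polynomial secret + c1*x + c2*x^2 at x = 1..n,
// i.e. a (3,n) sharing whose constant term is the secret.
func split(secret, c1, c2 int64, n int) []share {
	shares := make([]share, n)
	for i := 1; i <= n; i++ {
		x := int64(i)
		shares[i-1] = share{x, mod(secret + mod(c1*x) + mod(c2*x*x))}
	}
	return shares
}

// reconstruct recovers the constant term (the secret) from any 3 shares by
// Lagrange interpolation at x = 0, with all arithmetic modulo p.
func reconstruct(s []share) int64 {
	secret := int64(0)
	for i, si := range s {
		num, den := int64(1), int64(1)
		for j, sj := range s {
			if i != j {
				num = mod(num * mod(-sj.x))
				den = mod(den * mod(si.x-sj.x))
			}
		}
		secret = mod(secret + mod(si.y*mod(num*inv(den))))
	}
	return secret
}

func main() {
	shares := split(1234, 166, 94, 5) // secret 1234, arbitrary-looking coefficients
	fmt.Println("any 3 of 5 shares recover the secret:", reconstruct(shares[:3]))
	fmt.Println("a different 3 shares also work:", reconstruct([]share{shares[1], shares[3], shares[4]}))
}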
These evaluations are essentially coordinates on the (x,y) plane. By running n parallel instances of the secret sharing scheme, each entity receives n shares and then combines all of these to form their overall key pair (vk_i, sk_i). The DKG procedure uses n parallel secret sharing procedures along with Pedersen commitments to distribute the key pairs. We explain in the next section how this procedure fits into the provisioning of randomness beacons. In summary, it is important to remember that each party in the DKG protocol generates a random secret key from the n shares that they receive, and they compute the corresponding public key from this. We will now explain how each entity uses this key pair to perform the cryptographic procedure that is used by the drand protocol.
Threshold signature scheme
Remember: a standard signature scheme considers a key-pair (vk,sk), where vk is a public verification key and sk is a private signing key. So, messages m signed with sk can be verified with vk. The security of the scheme ensures that it is difficult for anybody who does not hold sk to compute a valid signature for any message m. A threshold signature scheme allows a set of users holding distributed key-pairs (vk_i,sk_i) to compute intermediate signatures u_i on a given message m. Given knowledge of some number t of intermediate signatures u_i, a valid signature u on the message m can be reconstructed under the combined secret key sk. The public key vk can also be inferred using knowledge of the public keys vk_i, and then this public key can be used to verify u. Again, think back to reconstructing the degree t-1 curves on graphs with t known coordinates. In this case, the coordinates correspond to the intermediate signatures u_i, and the signature u corresponds to the entire curve. For the actual signature schemes, the mathematics are much more involved than in the DKG procedure, but the principle is the same.
drand protocol
The n beacons that will take part in the drand project are identified. In the trusted setup phase, the DKG protocol from above is run, and each beacon effectively creates a key pair (vk_i, sk_i) for a threshold signature scheme. In other words, this key pair will be able to generate intermediate signatures that can be combined to create an entire signature for the system. For each round (occurring once a minute, for example), the beacons agree on a signature u evaluated over a message containing the previous round’s signature and the current round’s number. This signature u is the result of combining the intermediate signatures u_i over the same message. Each intermediate signature u_i is created by each of the beacons using their secret sk_i. Once this aggregation completes, each beacon displays the signature for the current round, along with the previous signature and round number. This allows any client to publicly verify the signature over this data and confirm that the beacons aggregated honestly. This provides a chain of verifiable signatures, extending back to the first round of output. In addition, there are threshold signature schemes that output signatures that are indistinguishable from random sequences of bytes. Therefore, these signatures can be used directly as verifiable randomness for the applications we discussed previously.
What does drand use?
To instantiate the required threshold signature scheme, drand uses the (t,n)-BLS signature scheme of Boneh, Lynn and Shacham.
In particular, we can instantiate this scheme in the elliptic curve setting using Barreto-Naehrig curves. Moreover, the BLS signature scheme outputs sufficiently large signatures that are randomly distributed, giving them enough entropy to be sources of randomness. Specifically, the signatures are randomly distributed over 64 bytes. BLS signatures use a specific form of mathematical operation known as a cryptographic pairing. Pairings can be computed over certain elliptic curves, including the Barreto-Naehrig curve configurations. A detailed description of pairing operations is beyond the scope of this blog post, though it is important to remember that these operations are integral to how BLS signatures work. Concretely speaking, all drand cryptographic operations are carried out using a library built on top of Cloudflare's implementation of the bn256 curve. The Pedersen DKG protocol follows the design of Gennaro et al.
How does it work?
The randomness beacons are synchronised in rounds. At each round, a beacon produces a new signature u_i using its private key sk_i on the previous signature generated and the round ID. These signatures are usually broadcast at the URL drand.<host>.com/api/public. These signatures can be verified using the keys vk_i over the same data that is signed. By signing the previous signature and the current round identifier, this establishes a chain of trust for the randomness beacon that can be traced back to the original signature value. The randomness can be retrieved by combining the signatures from each of the beacons using the threshold property of the scheme. This reconstruction of the signature u from each intermediate signature u_i is done internally by the League of Entropy nodes. Each beacon broadcasts the entire signature u, which can be accessed over the HTTP endpoint above.
The drand beacon
As we mentioned at the start of this blog post, Cloudflare has launched our distributed randomness beacon. This beacon is part of a network of beacons from different institutions around the globe that form the League of Entropy. The Cloudflare beacon uses LavaRand as its internal source of randomness for the DKG. Other League of Entropy drand beacons have their own sources of randomness.
Give me randomness!
The drand beacon allows you to retrieve the latest random value from the League of Entropy using a simple HTTP request:
curl https://drand.cloudflare.com/api/public
The response is a JSON blob of the form:
{
  "round": 7,
  "previous": <hex-encoded-previous-signature>,
  "randomness": {
    "gid": 21,
    "point": <hex-encoded-new-signature>
  }
}
where randomness.point is the signature u aggregated among the entire set of beacons. The signature is computed over a message comprising the previous round’s signature (previous) and the current round number (round), under the aggregated secret key of the system. This signature can be verified against the collective public key vk of the Cloudflare beacon, which each beacon also exposes over a separate HTTP endpoint (see the developer docs for details). There are eight collaborators in the League of Entropy.
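If you prefer code to curl, the following Go sketch fetches the latest value from the Cloudflare beacon and decodes the JSON blob shown above. The endpoint and field names are taken from this post’s example response; since drand is still in beta, they may change over time, and error handling is kept to a minimum.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// beaconResponse mirrors the JSON blob returned by /api/public, as shown above.
type beaconResponse struct {
	Round      int    `json:"round"`
	Previous   string `json:"previous"`
	Randomness struct {
		Gid   int    `json:"gid"`
		Point string `json:"point"`
	} `json:"randomness"`
}

func main() {
	resp, err := http.Get("https://drand.cloudflare.com/api/public")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var r beaconResponse
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		panic(err)
	}
	fmt.Printf("round %d\nrandomness: %s\n", r.Round, r.Randomness.Point)
}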
You can learn the current round of randomness (or the system’s public key) by querying any of these beacons on the API endpoints described above:
https://drand.cloudflare.com:443
https://random.uchile.cl:8080
https://drand.cothority.net:7003
https://drand.kudelskisecurity.com:443
https://drand.lbarman.ch:443
https://drand.nikkolasg.xyz:8888
https://drand.protocol.ai:8080
https://drand.zerobyte.io:8888
Randomness & the future
Cloudflare will continue to take an active role in the drand project, both as a contributor and by running a randomness beacon with the League of Entropy. The League of Entropy is a worldwide joint effort of individuals and academic institutions. We at Cloudflare believe it can help us realize the mission of helping to build a better Internet. For more information on Cloudflare's participation in the League of Entropy, visit https://leagueofentropy.com or read Dina's blog post. Cloudflare would like to thank all of its collaborators in the League of Entropy, from EPFL, UChile, Kudelski Security, and Protocol Labs. This work would not have been possible without the work of those who contributed to the open-source drand project. We would also like to thank Gabbi Fisher, Brendan McMillion, and Mahrud Sayrafi for their work in launching the Cloudflare randomness beacon.

Welcome to Crypto Week 2019

CloudFlare Blog -

The Internet is an extraordinarily complex and evolving ecosystem. Its constituent protocols range from the ancient and archaic (hello FTP) to the modern and sleek (meet WireGuard), with a fair bit of everything in between. This evolution is ongoing, and as one of the most connected networks on the Internet, Cloudflare has a duty to be a good steward of this ecosystem. We take this responsibility to heart: Cloudflare’s mission is to help build a better Internet. In this spirit, we are very proud to announce Crypto Week 2019.Every day this week we’ll announce a new project or service that uses modern cryptography to build a more secure, trustworthy Internet. Everything we release this week will be free and immediately useful. This blog is a fun exploration of the themes of the week.Monday: The League of Entropy, Inside the EntropyTuesday: Securing Certificate Issuance using Multipath Domain Control ValidationWednesday: Coming SoonThursday: Coming SoonFriday: Coming SoonThe Internet of the FutureMany pieces of the Internet in use today were designed in a different era with different assumptions. The Internet’s success is based on strong foundations that support constant reassessment and improvement. Sometimes these improvements require deploying new protocols.Performing an upgrade on a system as large and decentralized as the Internet can’t be done by decree;There are too many economic, cultural, political, and technological factors at play.Changes must be compatible with existing systems and protocols to even be considered for adoption.To gain traction, new protocols must provide tangible improvements for users. Nobody wants to install an update that doesn’t improve their experience!The last time the Internet had a complete reboot and upgrade was during TCP/IP flag day in 1983. Back then, the Internet (called ARPANET) had fewer than ten thousand hosts! To have an Internet-wide flag day today to switch over to a core new protocol is inconceivable; the scale and diversity of the components involved is way too massive. Too much would break. It’s challenging enough to deprecate outmoded functionality. In some ways, the open Internet is a victim of its own success. The bigger a system grows and the longer it stays the same, the harder it is to change. The Internet is like a massive barge: it takes forever to steer in a different direction and it’s carrying a lot of garbage.ARPANET, 1983 (Computer History Museum)As you would expect, many of the warts of the early Internet still remain. Both academic security researchers and real-life adversaries are still finding and exploiting vulnerabilities in the system. Many vulnerabilities are due to the fact that most of the protocols in use on the Internet have a weak notion of trust inherited from the early days. With 50 hosts online, it’s relatively easy to trust everyone, but in a world-scale system, that trust breaks down in fascinating ways. The primary tool to scale trust is cryptography, which helps provide some measure of accountability, though it has its own complexities.In an ideal world, the Internet would provide a trustworthy substrate for human communication and commerce. Some people naïvely assume that this is the natural direction the evolution of the Internet will follow. However, constant improvement is not a given. It’s possible that the Internet of the future will actually be worse than the Internet today: less open, less secure, less private, less trustworthy. 
There are strong incentives to weaken the Internet on a fundamental level by Governments, by businesses such as ISPs, and even by the financial institutions entrusted with our personal data.In a system with as many stakeholders as the Internet, real change requires principled commitment from all invested parties. At Cloudflare, we believe everyone is entitled to an Internet built on a solid foundation of trust. Crypto Week is our way of helping nudge the Internet’s evolution in a more trust-oriented direction. Each announcement this week helps bring the Internet of the future to the present in a tangible way.Ongoing Internet UpgradesBefore we explore the Internet of the future, let’s explore some of the previous and ongoing attempts to upgrade the Internet’s fundamental protocols.Routing SecurityAs we highlighted in last year’s Crypto Week one of the weak links on the Internet is routing. Not all networks are directly connected. To send data from one place to another, you might have to rely on intermediary networks to pass your data along. A packet sent from one host to another may have to be passed through up to a dozen of these intermediary networks. No single network knows the full path the data will have to take to get to its destination, it only knows which network to pass it to next.  The protocol that determines how packets are routed is called the Border Gateway Protocol (BGP.) Generally speaking, networks use BGP to announce to each other which addresses they know how to route packets for and (dependent on a set of complex rules) these networks share what they learn with their neighbors.Unfortunately, BGP is completely insecure:Any network can announce any set of addresses to any other network, even addresses they don’t control. This leads to a phenomenon called BGP hijacking, where networks are tricked into sending data to the wrong network.A BGP hijack is most often caused by accidental misconfiguration, but can also be the result of malice on the network operator’s part.During a BGP hijack, a network inappropriately announces a set of addresses to other networks, which results in packets destined for the announced addresses to be routed through the illegitimate network.Understanding the riskIf the packets represent unencrypted data, this can be a big problem as it allows the hijacker to read or even change the data:In 2018, a rogue network hijacked the addresses of a service called MyEtherWallet, financial transactions were routed through the attacker network, which the attacker modified, resulting in the theft of over a hundred thousand dollars of cryptocurrency.Mitigating the riskThe Resource Public Key Infrastructure (RPKI) system helps bring some trust to BGP by enabling networks to utilize cryptography to digitally sign network routes with certificates, making BGP hijacking much more difficult.This enables participants of the network to gain assurances about the authenticity of route advertisements. Certificate Transparency (CT) is a tool that enables additional trust for certificate-based systems. Cloudflare operates the Cirrus CT log to support RPKI.Since we announced our support of RPKI last year, routing security has made big strides. More routes are signed, more networks validate RPKI, and the software ecosystem has matured, but this work is not complete. Most networks are still vulnerable to BGP hijacking. For example, Pakistan knocked YouTube offline with a BGP hijack back in 2008, and could likely do the same today. 
Adoption here is driven less by providing a benefit to users, but rather by reducing systemic risk, which is not the strongest motivating factor for adopting a complex new technology. Full routing security on the Internet could take decades.DNS SecurityThe Domain Name System (DNS) is the phone book of the Internet. Or, for anyone under 25 who doesn’t remember phone books, it’s the system that takes hostnames (like cloudflare.com or facebook.com) and returns the Internet address where that host can be found. For example, as of this publication, www.cloudflare.com is 104.17.209.9 and 104.17.210.9 (IPv4) and 2606:4700::c629:d7a2, 2606:4700::c629:d6a2 (IPv6). Like BGP, DNS is completely insecure. Queries and responses sent unencrypted over the Internet are modifiable by anyone on the path.There are many ongoing attempts to add security to DNS, such as:DNSSEC that adds a chain of digital signatures to DNS responsesDoT/DoH that wraps DNS queries in the TLS encryption protocol (more on that later)Both technologies are slowly gaining adoption, but have a long way to go.DNSSEC-signed responses served by CloudflareCloudflare’s 1.1.1.1 resolver queries are already over 5% DoT/DoHJust like RPKI, securing DNS comes with a performance cost, making it less attractive to users. However,Services like 1.1.1.1 provide extremely fast DNS, which means that for many users, encrypted DNS is faster than the unencrypted DNS from their ISP.This performance improvement makes it appealing for customers of privacy-conscious applications, like Firefox and Cloudflare’s 1.1.1.1 app, to adopt secure DNS.The WebTransport Layer Security (TLS) is a cryptographic protocol that gives two parties the ability to communicate over an encrypted and authenticated channel. TLS protects communications from eavesdroppers even in the event of a BGP hijack. TLS is what puts the “S” in HTTPS. TLS protects web browsing against multiple types of network adversaries.Requests hop from network to network over the InternetFor unauthenticated protocols, an attacker on the path can impersonate the serverAttackers can use BGP hijacking to change the path so that communication can be interceptedAuthenticated protocols are protected from interception attacksThe adoption of TLS on the web is partially driven by the fact that:It’s easy and free for websites to get an authentication certificate (via Let’s Encrypt, Universal SSL, etc.)Browsers make TLS adoption appealing to website operators by only supporting new web features such as HTTP/2 over HTTPS.This has led to the rapid adoption of HTTPS over the last five years.HTTPS adoption curve (from Google Chrome)‌‌To further that adoption, TLS recently got an upgrade in TLS 1.3, making it faster and more secure (a combination we love). It’s taking over the Internet!TLS 1.3 adoption over the last 12 months (from Cloudflare's perspective)Despite this fantastic progress in the adoption of security for routing, DNS, and the web, there are still gaps in the trust model of the Internet. There are other things needed to help build the Internet of the future. To find and identify these gaps, we lean on research experts.Research Farm to TableCryptographic security on the Internet is a hot topic and there have been many flaws and issues recently pointed out in academic journals. 
Researchers often study the vulnerabilities of the past and ask:What other critical components of the Internet have the same flaws?What underlying assumptions can subvert trust in these existing systems?The answers to these questions help us decide what to tackle next. Some recent research  topics we’ve learned about include:Quantum ComputingAttacks on Time SynchronizationDNS attacks affecting Certificate issuanceScaling distributed trust Cloudflare keeps abreast of these developments and we do what we can to bring these new ideas to the Internet at large. In this respect, we’re truly standing on the shoulders of giants.Future-proofing Internet CryptographyThe new protocols we are currently deploying (RPKI, DNSSEC, DoT/DoH, TLS 1.3) use relatively modern cryptographic algorithms published in the 1970s and 1980s.The security of these algorithms is based on hard mathematical problems in the field of number theory, such as factoring and the elliptic curve discrete logarithm problem.If you can solve the hard problem, you can crack the code. Using a bigger key makes the problem harder, making it more difficult to break, but also slows performance. Modern Internet protocols typically pick keys large enough to make it infeasible to break with classical computers, but no larger. The sweet spot is around 128-bits of security; meaning a computer has to do approximately 2¹²⁸ operations to break it. Arjen Lenstra and others created a useful measure of security levels by comparing the amount of energy it takes to break a key to the amount of water you can boil using that much energy. You can think of this as the electric bill you’d get if you run a computer long enough to crack the key.35-bit security is “Teaspoon security” -- It takes about the same amount of energy to break a 35-bit key as it does to boil a teaspoon of water (pretty easy).65 bits gets you up to “Pool security” – The energy needed to boil the average amount of water in a swimming pool.105 bits is “Sea Security” – The energy needed to boil the Mediterranean Sea.114-bits is “Global Security” –  The energy needed to boil all water on Earth.128-bit security is safely beyond that of Global Security – Anything larger is overkill.256-bit security corresponds to “Universal Security” – The estimated mass-energy of the observable universe. So, if you ever hear someone suggest 256-bit AES, you know they mean business.Post-Quantum of SolaceAs far as we know, the algorithms we use for cryptography are functionally uncrackable with all known algorithms that classical computers can run. Quantum computers change this calculus. Instead of transistors and bits, a quantum computer uses the effects of quantum mechanics to perform calculations that just aren’t possible with classical computers. As you can imagine, quantum computers are very difficult to build. However, despite large-scale quantum computers not existing quite yet, computer scientists have already developed algorithms that can only run efficiently on quantum computers. Surprisingly, it turns out that with a sufficiently powerful quantum computer, most of the hard mathematical problems we rely on for Internet security become easy! Although there are still quantum-skeptics out there, some experts estimate that within 15-30 years these large quantum computers will exist, which poses a risk to every security protocol online. 
Progress is moving quickly; every few months a more powerful quantum computer is announced.Luckily, there are cryptography algorithms that rely on different hard math problems that seem to be resistant to attack from quantum computers. These math problems form the basis of so-called quantum-resistant (or post-quantum) cryptography algorithms that can run on classical computers. These algorithms can be used as substitutes for most of our current quantum-vulnerable algorithms.Some quantum-resistant algorithms (such as McEliece and Lamport Signatures) were invented decades ago, but there’s a reason they aren’t in common use: they lack some of the nice properties of the algorithms we’re currently using, such as key size and efficiency.Some quantum-resistant algorithms require much larger keys to provide 128-bit securitySome are very CPU intensive,And some just haven’t been studied enough to know if they’re secure.It is possible to swap our current set of quantum-vulnerable algorithms with new quantum-resistant algorithms, but it’s a daunting engineering task. With widely deployed protocols, it is hard to make the transition from something fast and small to something slower, bigger or more complicated without providing concrete user benefits. When exploring new quantum-resistant algorithms, minimizing user impact is of utmost importance to encourage adoption. This is a big deal, because almost all the protocols we use to protect the Internet are vulnerable to quantum computers.Cryptography-breaking quantum computing is still in the distant future, but we must start the transition to ensure that today’s secure communications are safe from tomorrow’s quantum-powered onlookers; however, that’s not the most timely problem with the Internet. We haven’t addressed that...yet.Attacking timeJust like DNS, BGP, and HTTP, the Network Time Protocol (NTP) is fundamental to how the Internet works. And like these other protocols, it is completely insecure.Last year, Cloudflare introduced Roughtime support as a mechanism for computers to access the current time from a trusted server in an authenticated way.Roughtime is powerful because it provides a way to distribute trust among multiple time servers so that if one server attempts to lie about the time, it will be caught.However, Roughtime is not exactly a secure drop-in replacement for NTP.Roughtime lacks the complex mechanisms of NTP that allow it to compensate for network latency and yet maintain precise time, especially if the time servers are remote. This leads to imprecise time.Roughtime also involves expensive cryptography that can further reduce precision. 
This lack of precision makes Roughtime useful for browsers and other systems that need coarse time to validate certificates (most certificates are valid for 3 months or more), but some systems (such as those used for financial trading) require precision to the millisecond or below.With Roughtime we supported the time protocol of the future, but there are things we can do to help improve the health of security online today.Some academic researchers, including Aanchal Malhotra of Boston University, have demonstrated a variety of attacks against NTP, including BGP hijacking and off-path User Datagram Protocol (UDP) attacks.Some of these attacks can be avoided by connecting to an NTP server that is close to you on the Internet.However, to bring cryptographic trust to time while maintaining precision, we need something in between NTP and Roughtime.To solve this, it’s natural to turn to the same system of trust that enabled us to patch HTTP and DNS: Web PKI.Attacking the Web PKIThe Web PKI is similar to the RPKI, but is more widely visible since it relates to websites rather than routing tables.If you’ve ever clicked the lock icon on your browser’s address bar, you’ve interacted with it.The PKI relies on a set of trusted organizations called Certificate Authorities (CAs) to issue certificates to websites and web services.Websites use these certificates to authenticate themselves to clients as part of the TLS protocol in HTTPS.TLS provides encryption and integrity from the client to the server with the help of a digital certificate TLS connections are safe against MITM, because the client doesn’t trust the attacker’s certificateWhile we were all patting ourselves on the back for moving the web to HTTPS, some researchers managed to find and exploit a weakness in the system: the process for getting HTTPS certificates.Certificate Authorities (CAs) use a process called domain control validation (DCV) to ensure that they only issue certificates to websites owners who legitimately request them.Some CAs do this validation manually, which is secure, but can’t scale to the total number of websites deployed today.More progressive CAs have automated this validation process, but rely on insecure methods (HTTP and DNS) to validate domain ownership.Without ubiquitous cryptography in place (DNSSEC may never reach 100% deployment), there is no completely secure way to bootstrap this system. So, let’s look at how to distribute trust using other methods.One tool at our disposal is the distributed nature of the Cloudflare network.Cloudflare is global. We have locations all over the world connected to dozens of networks. That means we have different vantage points, resulting in different ways to traverse networks. This diversity can prove an advantage when dealing with BGP hijacking, since an attacker would have to hijack multiple routes from multiple locations to affect all the traffic between Cloudflare and other distributed parts of the Internet. The natural diversity of the network raises the cost of the attacks.A distributed set of connections to the Internet and using them as a quorum is a mighty paradigm to distribute trust, with or without cryptography.Distributed TrustThis idea of distributing the source of trust is powerful. 
Last year we announced the Distributed Web Gateway thatEnables users to access content on the InterPlanetary File System (IPFS), a network structured to reduce the trust placed in any single party.Even if a participant of the network is compromised, it can’t be used to distribute compromised content because the network is content-addressed.However, using content-based addressing is not the only way to distribute trust between multiple independent parties.Another way to distribute trust is to literally split authority between multiple independent parties. We’ve explored this topic before. In the context of Internet services, this means ensuring that no single server can authenticate itself to a client on its own. For example,In HTTPS the server’s private key is the lynchpin of its security. Compromising the owner of the private key (by hook or by crook) gives an attacker the ability to impersonate (spoof) that service. This single point of failure puts services at risk. You can mitigate this risk by distributing the authority to authenticate the service between multiple independently-operated services.TLS doesn’t protect against server compromiseWith distributed trust, multiple parties combine to protect the connectionAn attacker that has compromised one of the servers cannot break the security of the system‌‌The Internet barge is old and slow, and we’ve only been able to improve it through the meticulous process of patching it piece by piece. Another option is to build new secure systems on top of this insecure foundation. IPFS is doing this, and IPFS is not alone in its design. There has been more research into secure systems with decentralized trust in the last ten years than ever before. The result is radical new protocols and designs that use exotic new algorithms. These protocols do not supplant those at the core of the Internet (like TCP/IP), but instead, they sit on top of the existing Internet infrastructure, enabling new applications, much like HTTP did for the web.Gaining TractionSome of the most innovative technical projects were considered failures because they couldn’t attract users. New technology has to bring tangible benefits to users to sustain it: useful functionality, content, and a decent user experience. Distributed projects, such as IPFS and others, are gaining popularity, but have not found mass adoption. This is a chicken-and-egg problem. New protocols have a high barrier to entry—users have to install new software—and because of the small audience, there is less incentive to create compelling content. Decentralization and distributed trust are nice security features to have, but they are not products. Users still need to get some benefit out of using the platform.An example of a system to break this cycle is the web. In 1992 the web was hardly a cornucopia of awesomeness. What helped drive the dominance of the web was its users.The growth of the user base meant more incentive for people to build services, and the availability of more services attracted more users. It was a virtuous cycle.It’s hard for a platform to gain momentum, but once the cycle starts, a flywheel effect kicks in to help the platform grow.The Distributed Web Gateway project Cloudflare launched last year in Crypto Week is our way of exploring what happens if we try to kickstart that flywheel. 
By providing a secure, reliable, and fast interface from the classic web with its two billion users to the content on the distributed web, we give the fledgling ecosystem an audience.If the advantages provided by building on the distributed web are appealing to users, then the larger audience will help these services grow in popularity.This is somewhat reminiscent of how IPv6 gained adoption. It started as a niche technology only accessible using IPv4-to-IPv6 translation services.IPv6 adoption has now grown so much that it is becoming a requirement for new services. For example, Apple is requiring that all apps work in IPv6-only contexts.Eventually, as user-side implementations of distributed web technologies improve, people may move to using the distributed web natively rather than through an HTTP gateway. Or they may not! By leveraging Cloudflare’s global network to give users access to new technologies based on distributed trust, we give these technologies a better chance at gaining adoption.Happy Crypto WeekAt Cloudflare, we always support new technologies that help make the Internet better. Part of helping make a better Internet is scaling the systems of trust that underpin web browsing and protect them from attack. We provide the tools to create better systems of assurance with fewer points of vulnerability. We work with academic researchers of security to get a vision of the future and engineer away vulnerabilities before they can become widespread. It’s a constant journey.Cloudflare knows that none of this is possible without the work of researchers. From award-winning researcher publishing papers in top journals to the blog posts of clever hobbyists, dedicated and curious people are moving the state of knowledge of the world forward. However, the push to publish new and novel research sometimes holds researchers back from committing enough time and resources to fully realize their ideas. Great research can be powerful on its own, but it can have an even broader impact when combined with practical applications. We relish the opportunity to stand on the shoulders of these giants and use our engineering know-how and global reach to expand on their work to help build a better Internet.So, to all of you dedicated researchers, thank you for your work! Crypto Week is yours as much as ours. If you’re working on something interesting and you want help to bring the results of your research to the broader Internet, please contact us at research@cloudflare.com. We want to help you realize your dream of making the Internet safe and trustworthy.If you're a research-oriented engineering manager or student, we're also hiring in London and San Francisco!

Security Compliance at Cloudflare

CloudFlare Blog -

Cloudflare believes trust is fundamental to helping build a better Internet. One way Cloudflare is helping our customers earn their users’ trust is through industry standard security compliance certifications and regulations. Security compliance certifications are reports created by independent, third-party auditors that validate and document a company’s commitment to security. These external auditors will conduct a rigorous review of a company’s technical environment and evaluate whether or not there are thorough controls - or safeguards - in place to protect the security, confidentiality, and availability of information stored and processed in the environment. SOC 2 was established by the American Institute of CPAs and is important to many of our U.S. customers, as it is a standardized set of requirements a company must meet in order to demonstrate compliance. Additionally, PCI and ISO 27001 are international standards. Cloudflare cares about achieving certifications because our adherence to these standards gives customers across the globe confidence that we are committed to security. So, the Security team has been hard at work obtaining these meaningful compliance certifications. Since the beginning of this year, we have renewed our PCI DSS certification in February, achieved SOC 2 Type 1 compliance in March, and obtained our ISO 27001 certification in April, and today we are proud to announce we are SOC 2 Type 2 compliant!
Our SOC 2 Journey
SOC 2 is a compliance certification that focuses on internal controls of an organization related to five trust services criteria. These criteria are: Security, Confidentiality, Availability, Processing Integrity, and Privacy. Each criterion presents a set of control standards that are established by the American Institute of Certified Public Accountants (AICPA) and are to be used to implement controls on the information systems of a company. Cloudflare’s Security team made the decision to evaluate our company’s controls around three of the five criteria. We decided to pursue our SOC 2 compliance by evaluating our controls around Security, Confidentiality, and Availability across our entire organization. We first worked across the company to design and implement strong controls that meet the requirements set forth by the AICPA. This took effort and collaboration between teams in Engineering, IT, Legal, and HR to create strong controls that also make sense for our environment. Our external auditors then performed an audit of Cloudflare’s controls, and determined our security controls were suitably designed as of January 31, 2019. Three months after obtaining SOC 2 Type 1 compliance, the next step for Cloudflare was to demonstrate that the controls we designed were actually operating effectively. Our SOC 2 Type 2 audit tested the operating effectiveness of Cloudflare’s security controls over this three-month period. Cloudflare’s SOC 2 Type 2 report is available upon request and describes the design of Cloudflare’s internal control framework around security, confidentiality, and availability, as well as the products and services in scope for our certification.
What else?
SOC 3
In addition to SOC 2 Type 2, Cloudflare also obtained our SOC 3 report from our independent external auditors. SOC 3 is a report for public consumption on the external auditor’s opinion and a narrative of Cloudflare’s control environment.
Cloudflare’s Security team decided to obtain our SOC 3 report so all customers and prospects could access our auditor’s opinion of our implementation of security, confidentiality, and availability controls.
ISO/IEC 27001:2013
Prior to Cloudflare’s SOC audit, Cloudflare was working to mature our organization’s Information Security Management System in order to obtain our ISO/IEC 27001:2013 certification. ISO 27001 is an international management system standard developed by the International Organization for Standardization (ISO) and is an industry-wide accepted information security certification. Cloudflare pursued ISO/IEC 27001:2013 certification to demonstrate to our customers that we are committed to preserving the confidentiality, integrity, and availability of information on a global scale. The primary focus of the ISO 27001:2013 requirements is the implementation of an Information Security Management System (ISMS) and a comprehensive risk management program. Cloudflare worked across the organization to implement the ISMS to ensure sensitive company information remains secure. Cloudflare’s ISMS was assessed by a third-party auditor, A-LIGN, and we received our ISO 27001:2013 certification in April 2019. Cloudflare’s ISO 27001:2013 certificate is also available to customers upon request.
PCI DSS v3.2.1
Although Cloudflare has been PCI certified as a Level 1 Service Provider since 2014, our latest certification adheres to the newest security standards. The Payment Card Industry Data Security Standard (PCI DSS) is a global financial information security standard that ensures customers’ credit card data is safe and secure. Maintaining PCI DSS compliance is important for Cloudflare because not only are we evaluated as a merchant, but we are also a service provider. Cloudflare’s WAF product satisfies PCI requirement 6.6, and may be used by Cloudflare’s customers as a solution to prevent web-based attacks in front of public-facing web applications. Early in 2019, Cloudflare was audited by an independent Qualified Security Assessor to validate our adherence to the PCI DSS security requirements. Cloudflare’s latest PCI Attestation of Compliance (AOC) is available to customers upon request.
Compliance Page on the Website
Cloudflare is committed to helping our customers earn their users’ trust by ensuring our products are secure. The Security team is committed to adhering to security compliance certifications and regulations that maintain the security, confidentiality, and availability of company and client information. In order to help our customers keep track of the latest certifications, Cloudflare has launched our compliance certification page - www.cloudflare.com/compliance. Today, you can view our status on all compliance certifications and download our SOC 3 report.

A free Argo Tunnel for your next project

CloudFlare Blog -

Argo Tunnel lets you expose a server to the Internet without opening any ports. The service runs a lightweight process on your server that creates outbound tunnels to the Cloudflare network. Instead of managing DNS, network, and firewall complexity, Argo Tunnel helps administrators serve traffic from their origin through Cloudflare with a single command.

We built Argo Tunnel to remove the burden of securing and connecting servers to the Internet. This new model makes it easier to run a service in multi-cloud and hybrid deployments by replacing manual and error-prone work with a process that adds intelligence to the last mile between Cloudflare and your origins or clusters. However, the service was previously only available to users with Cloudflare accounts. We want to make Argo Tunnel more accessible for any project.

Starting today, any user, even those without a Cloudflare account, can try this new method of connecting their server to the Internet. Argo Tunnel can now be used in a free model that will create a new URL, known only to you, that will proxy traffic to your server. We're excited to make connecting a server to the Internet more accessible for everyone.

What is Argo Tunnel?

Argo Tunnel replaces legacy models of connecting a server to the Internet with a secure, persistent connection to Cloudflare. Since Cloudflare first launched in 2010, customers have added their sites to our platform by changing their name servers at their domain's registrar to ones managed by Cloudflare. Administrators then create a DNS record in our dashboard that points visitors to their domain to their origin server.

When requests are made for those domains, the queries hit our data centers first. We're able to use that position to block malicious traffic like DDoS attacks. However, if attackers discovered the origin IP, they could bypass Cloudflare's security features and attack the server directly. Adding additional protections against that risk introduced more hassle and configuration.

One year ago, Cloudflare launched Argo Tunnel to solve those problems. Argo Tunnel connects your origin server to the Cloudflare network by running a lightweight daemon on your machine that only makes outbound calls. The process generates DNS records in the dashboard for you, removing the need to manually configure records and origin IP addresses.

Most importantly, Argo Tunnel helps shield your origin by simplifying the firewall rules you need to configure. Argo Tunnel makes outbound calls to the Cloudflare network and proxies requests back to your server. You can then disable all ingress to the machine and ensure that Cloudflare's security features always stand between your server and the rest of the Internet. In addition to making it secure, we made it fast: the connection uses our Argo Smart Routing technology to find the most performant path from your visitors to your origin.
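As a concrete illustration of that locked-down posture, here is a minimal sketch for a Linux origin that is reached only through an Argo Tunnel, assuming ufw is the host firewall (the admin IP range is a placeholder):

    # Hedged sketch: block all unsolicited inbound traffic on an origin that is
    # served exclusively through an Argo Tunnel. cloudflared only dials outbound,
    # so no inbound port needs to be open for web traffic.
    sudo ufw default deny incoming     # drop inbound connections by default
    sudo ufw default allow outgoing    # cloudflared's outbound tunnels keep working
    # Keep a management path (placeholder range) so you do not lock yourself out:
    sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp
    sudo ufw enable

With rules like these in place, the only way to reach the web service is through the tunnel, so Cloudflare's protections always sit between the server and the rest of the Internet.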
How can I use the free version?

Argo Tunnel is now available to all users, even without a Cloudflare account. All that is needed is the Cloudflare daemon, cloudflared, running on your machine. With a single command, cloudflared will generate a random subdomain of "trycloudflare.com" and begin proxying traffic to your server.

1. Install cloudflared on your web server or laptop; instructions are available here. If you have an older copy, you'll first need to update to the latest version (2019.6.0).
2. Launch a web server.
3. Run the terminal command below to start a free tunnel. cloudflared will begin proxying requests to your localhost server; no additional flags are needed.

$ cloudflared tunnel

The command above will proxy traffic to port 8080 by default, but you can specify a different port with the --url flag:

$ cloudflared tunnel --url localhost:7000

cloudflared will generate a random subdomain when connecting to the Cloudflare network and print it in the terminal for you to use. This makes whatever server you are running on your local machine accessible to the world through a public URL known only to you.

How can I use it?

- Run a web server on your laptop to share a project with collaborators on different networks
- Test mobile browser compatibility for a new site
- Perform speed tests from different regions

Why is it free?

We want more users to experience the speed and security improvements of Argo Tunnel (and Argo Smart Routing). We hope you'll feel the same way about those benefits after testing it with the free version and that you'll start using it for your production sites.

We also don't guarantee any SLA or uptime for the free service - we plan to test new Argo Tunnel features and improvements on these free tunnels. This provides us with a group of connections to test before we deploy to production customers. Free tunnels are meant to be used for testing and development, not for deploying a production website.

What's next?

You can read our guide here to start using the free version of Argo Tunnel. Got feedback? Please send it here.
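To close with a worked example: a minimal end-to-end session, assuming a laptop with cloudflared 2019.6.0 or newer installed, might look roughly like the sketch below. The trycloudflare.com hostname in the comment is invented for illustration; cloudflared assigns a different random subdomain every time it connects.

    # Hedged sketch: serve the current directory locally, then expose it
    # through a free Argo Tunnel.
    python3 -m http.server 7000 &          # any local web server will do

    cloudflared tunnel --url localhost:7000
    # cloudflared prints the generated public URL, for example (made-up value):
    #   https://random-words-1234.trycloudflare.com
    # Anyone who knows that URL can now reach the server listening on port 7000.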

Project Galileo: the view from the front lines

CloudFlare Blog -

Growing up in the age of technology has made it too easy for me to take the presence of the Internet for granted. It's hard to imagine not being able to go online and connect with anyone in the world, whether I'm speaking with family members or following activists planning global rallies in support of a common cause. I find that as I forget the wonder of being connected, I become jaded. I imagine that many of you reading this blog feel the same way. I doubt you have gone a month, or even a week, this year without considering that the world might be better off without the Internet, or without parts of the Internet, or that your life would be better with a digital cleanse. Project Galileo is my antidote. For every person online who abuses their anonymity, there is an organization that literally could not fulfill their purpose without it. And they are doing amazing work.

Working with Participants

As program manager for Project Galileo, Cloudflare's initiative to provide free services to vulnerable voices on the Internet, a large portion of my time is spent interacting with the project's participants and partners. This includes a variety of activities. In my organizational role, I reach out to our partnering organizations, such as the National Democratic Institute and the Center for Democracy and Technology, about sponsoring new recipients. I also help recipients onboard their websites and technically explain our product and how it works. Answering emails from Project Galileo recipients is my favorite part of every day.

I can still remember when the sense of wonder truly set in. A few weeks into my time at Cloudflare, I received a request from a local community healthcare clinic that was under attack. I was new, I didn't have all the permissions I have now, and I didn't fully understand how all of our systems worked (I still don't, but I'm much better at figuring out who does). I started reaching out to other teams, all of whom eagerly volunteered their time. Within a few hours, a website that had been down for a week was back up, and best practices were being discussed to help them stay online in the future. About a week later I received a wonderful thank you message from the group, and made sure I sent it to those who had helped out and were invested. I treasure these little reminders in my day that what I'm doing makes a difference. In fact, I frequently question my luck in receiving all the praise for a project that functions thanks to the work of countless engineers, and other teams, who work tirelessly to make our product better. I try to find ways to pass these small moments on.

It makes me laugh when participants who joined while I've been working on the project email me with an introduction along the lines of "I don't know if you remember us, but…". It makes sense, in the abstract. I receive a lot of emails, and around half of all recipients have joined since I started organizing the project. Still, I remember almost everyone who I've written to. How could I forget the person who signed off all their emails with something joyful they were doing at the moment, or the one who told me that they had finally made it through a week without their website going down? In many ways, on Project Galileo I interact less with organizations and more with a set of extremely passionate people.
The purpose and drive of these individuals infect me with a sense of wonder and excitement, even when our only communications are virtual.

Project Galileo partners

Internal Commitment

Project Galileo doesn't just bring out the best of the Internet through our recipients, it also brings out the best in Cloudflare. Working on Project Galileo has given me a lot of leeway to explore all aspects of the company. We don't have a large team in DC, and most of us are on the Policy team. To do my job, I rely on being able to contact teams globally, from Support to Trust and Safety to Solutions Engineering. I've chatted with Support team members at 2am to fix an emergency situation, and had a Solutions Engineer on call from 11pm to 1am on a Friday night to support an organization during an event. Even when frustrating or anxiety-provoking, these times make me proud to work for an organization that not only vocally supports this project, but whose members commit their time to it despite competing priorities.

At the risk of being overly grandiose, there are a lot of hopes and dreams tied up in Project Galileo. There is the dream that the Internet is a place for vulnerable voices, no matter how small, to advocate for change. There is the dream that companies will use their products to help deserving groups who may not otherwise be able to afford them. As for me, I hope that every day I do something that makes the world a little better. It is an honor to carry these hopes and dreams within the company, and I strive to be a good steward.

Happy 5th Birthday, Project Galileo! Here's to many more.

Protecting Project Galileo websites from HTTP attacks

CloudFlare Blog -

Yesterday, we celebrated the fifth anniversary of Project Galileo. More than 550 websites are part of this program, and they have something in common: each and every one of them has been subject to attacks in the last month. In this blog post, we will look at the security events we observed between 23 April 2019 and 23 May 2019.

Project Galileo sites are protected by the Cloudflare Firewall and Advanced DDoS Protection, which contain a number of features that can be used to detect and mitigate different types of attack and suspicious traffic. The following table shows how each of these features contributed to the protection of sites on Project Galileo.

Firewall Feature           Requests Mitigated   Distinct Originating IPs   Sites Affected (approx.)
Firewall Rules             78.7M                396.5K                     ~ 30
Security Level             41.7M                1.8M                       ~ 520
Access Rules               24.0M                386.9K                     ~ 200
Browser Integrity Check    9.4M                 32.2K                      ~ 500
WAF                        4.5M                 163.8K                     ~ 200
User-Agent Blocking        2.3M                 1.3K                       ~ 15
Hotlink Protection         2.0M                 686.7K                     ~ 40
HTTP DoS                   1.6M                 360                        1
Rate Limit                 623.5K               6.6K                       ~ 15
Zone Lockdown              9.7K                 2.8K                       ~ 10

WAF (Web Application Firewall)

Although not the most impressive in terms of blocked requests, the WAF is the most interesting, as it identifies and blocks malicious requests based on heuristics and rules that are the result of seeing attacks across all of our customers and learning from them. The WAF is available to all of our paying customers, protecting them against 0-days, SQL injection, XSS exploits, and more. For the Project Galileo customers, the WAF rules blocked more than 4.5 million requests in the month we looked at - matching over 130 WAF rules, at approximately 150k requests per day.

Heat map showing the attacks seen on customer sites (rows) per day (columns)

This heat map may initially appear confusing, but reading one is easy once you know what to expect, so bear with us! It is a table where each row is a website on Project Galileo and each column is a day. The color represents the number of requests triggering WAF rules, on a scale from 0 (white) to a lot (dark red). The darker the cell, the more requests were blocked on that day.

We observe malicious traffic on a daily basis for most websites we protect. The average Project Galileo site saw malicious traffic on 27 days of the month observed, and for almost 60% of the sites we noticed daily events.

Fortunately, the vast majority of websites only receive a few malicious requests per day, likely from automated scanners. In some cases, we notice a net increase in attacks against some websites - and a few websites are under a constant influx of attacks.

Heat map showing the attacks blocked for each WAF rule (rows) per day (columns)

This heat map shows the WAF rules that blocked requests by day. At first, it seems some rules are useless as they never match malicious requests, but this plot makes it obvious that some attack vectors suddenly become active (isolated dark cells). This is especially true for 0-days: malicious traffic starts once an exploit is published and is most active in the first few days. The dark, active rows are the most common malicious requests, and these WAF rules protect against things like XSS and SQL injection attacks.

DoS (Denial of Service)

A DoS attack prevents legitimate visitors from accessing a website by flooding it with bad traffic. Due to the way Cloudflare works, websites protected by Cloudflare are immune to many DoS vectors out of the box. We block layer 3 and 4 attacks, which include SYN floods and UDP amplification attacks.
DNS nameservers, often described as the Internet's phone book, are fully managed and protected by Cloudflare, so visitors always know how to reach the websites.

Line plot - requests per second to a website under DoS attack. Can you spot the attack?

As for layer 7 attacks (for instance, HTTP floods), we rely on Gatebot, an automated tool that detects, analyses, and blocks DoS attacks, so you can sleep. The graph shows the requests per second we received on a zone, and whether or not they reached the origin server. As you can see, the bad traffic was identified automatically by Gatebot, and more than 1.6 million requests were blocked as a result.

Firewall Rules

For websites with specific requirements, we provide tools that allow customers to block traffic to precisely fit their needs. Customers can easily implement complex logic using Firewall Rules to filter out specific chunks of traffic, and block IPs, networks, or countries using Access Rules - and Project Galileo sites have done just that. Let's see a few examples.

Firewall Rules allow website owners to challenge or block as much or as little traffic as they desire, and this can be done as a surgical tool ("block just this request") or as a general tool ("challenge every request").

For instance, a well-known website used Firewall Rules to prevent twenty IPs from fetching specific pages. Three of these IPs were then used to send a total of 4.5 million requests over a short period of time, and the following chart shows the requests seen for this website. When this happened, Cloudflare mitigated the traffic, ensuring that the website remained available.

Cumulative line plot - requests per second to a website

Another website, built with WordPress, is using Cloudflare to cache their webpages. As POST requests are not cacheable, they always hit the origin machine and increase the load on the origin server - that's why this website is using Firewall Rules to block POST requests, except on their administration backend. Smart!

Website owners can also deny or challenge requests based on the visitor's IP address, Autonomous System Number (ASN), or country. Dubbed Access Rules, these rules are enforced on all pages of a website - hassle-free.

For example, a news website is using Cloudflare's Access Rules to challenge visitors from countries outside of their geographic region. We enforce the rules globally, even for cached resources, and take care of GeoIP database updates for them, so they don't have to.

The Zone Lockdown utility restricts a specific URL to specific IP addresses. This is useful to protect an internal but public path from being accessed by external IP addresses. A non-profit based in the United Kingdom is using Zone Lockdown to restrict access to their WordPress admin panel and login page, hardening their website without relying on unofficial plugins. Although it does not prevent very sophisticated attacks, it shields them against automated attacks and phishing attempts - even if credentials are stolen, they can't be used as easily.

Rate Limiting

Cloudflare acts as a CDN, caching resources and happily serving them, reducing the bandwidth used by the origin server … and indirectly the costs. Unfortunately, not all requests can be cached and some requests are very expensive to handle. Malicious users may abuse this to increase the load on the server, so website owners can rely on our Rate Limiting feature to help: they define thresholds, expressed in requests over a time span, and we make sure to enforce them.
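To make this more concrete, here is a hedged sketch of how a site owner might create two rules of the kinds described above - a Firewall Rule like the WordPress POST example, and a rate limit on an expensive, uncacheable endpoint - through Cloudflare's v4 API. The endpoint paths and payload shapes are assumptions based on the public API documentation, and every identifier, path, and threshold is invented for illustration rather than taken from any Project Galileo site.

    # Hedged sketch only - identifiers, paths, and thresholds are invented.
    ZONE_ID="REPLACE_WITH_ZONE_ID"

    # A Firewall Rule, written in the Rules expression language, that blocks POST
    # requests everywhere except an assumed /wp-admin administration path:
    curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/firewall/rules" \
      -H "Authorization: Bearer $CF_API_TOKEN" \
      -H "Content-Type: application/json" \
      --data '[{
        "action": "block",
        "description": "Block POST outside the admin backend",
        "filter": { "expression": "http.request.method eq \"POST\" and not http.request.uri.path contains \"/wp-admin\"" }
      }]'

    # A Rate Limit that challenges a client sending more than 30 requests to an
    # expensive, uncacheable endpoint within 60 seconds:
    curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rate_limits" \
      -H "Authorization: Bearer $CF_API_TOKEN" \
      -H "Content-Type: application/json" \
      --data '{
        "match": { "request": { "url": "*example.org/donate*" } },
        "threshold": 30,
        "period": 60,
        "action": { "mode": "challenge" }
      }'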
A non-profit fighting against poverty relies on rate limits to protect their donation page, and we are glad to help!

Security Level

Last but not least, one of Cloudflare's greatest assets is our threat intelligence. With such a wide view of the threat landscape, Cloudflare uses our Firewall data, combined with machine learning, to curate our IP reputation databases. This data is provided to all Cloudflare customers and is configured through our Security Level feature. Customers may then define their threshold sensitivity, ranging from Essentially Off to I'm Under Attack. For every incoming request, we ask visitors to complete a challenge if the score is above the customer-defined threshold. This system alone is responsible for 25% of the requests we mitigated: it's extremely easy to use, and it constantly learns from the other protections.

Conclusion

When taken together, the Cloudflare Firewall features provide our Project Galileo customers with comprehensive and effective security that enables them to ensure their important work remains available. The majority of security events were handled automatically, and that is our strength - security that is always on, always available, always learning.

Project Galileo: Lessons from 5 years of protecting the most vulnerable online

CloudFlare Blog -

Today is the 5th anniversary of Cloudflare's Project Galileo. Through the Project, Cloudflare protects - at no cost - nearly 600 organizations around the world engaged in some of the most politically and artistically important work online. Because of their work, these organizations are attacked frequently, often with some of the fiercest cyber attacks we've seen.

Since it launched in 2014, we haven't talked about Galileo much externally because we worry that drawing more attention to these organizations may put them at increased risk. Internally, however, it's a source of pride for our whole team and is something we dedicate significant resources to. And, for me personally, many of the moments that mark my most meaningful accomplishments were born from our work protecting Project Galileo recipients.

The promise of Project Galileo is simple: Cloudflare will provide our full set of security services to any politically or artistically important organizations at no cost so long as they are either non-profits or small commercial entities. I'm still on the distribution list that receives an email whenever someone applies to be a Project Galileo participant, and those emails remain the first I open every morning.

The Project Galileo Backstory

Five years ago, Project Galileo was born out of a mistake we made. At the time, Cloudflare's free service didn't include DDoS mitigation. If a free customer came under attack, our operations team would generally stop proxying their traffic. We did this to protect our own network, which was much smaller than it is today.

Usually this wasn't a problem. Most sites that got attacked at the time were companies or businesses that could pay for our services. Every morning I'd receive a report of the sites that were kicked off Cloudflare the night before. One morning in late February 2014 I was reading the report as I walked to work. One of the sites listed as having been dropped stood out as familiar but I couldn't place it.

I tried to pull up the site on my phone but it was offline, presumably because we were no longer shielding the site from attack. Still curious, I did a quick search and found a Wikipedia page describing the site. It was an independent newspaper in Ukraine and had been covering the ongoing Russian invasion of Crimea.

I felt sick.

When Nation States Attack

What we later learned was that this publication had come under a significant attack, most likely directly from the Russian government. The newspaper had turned to Cloudflare for protection. Their IT director actually tried to pay for our higher tier of service, but the bank tied to the publication's credit card had had its systems disrupted by a cyber attack as well, and the payment failed. So they'd signed up for the free version of Cloudflare and, for a while, we mitigated the attack.

The attack was large enough that it triggered an alert in our Network Operations Center (NOC). A member of our Systems Reliability Engineering (SRE) team who was on call investigated and found a free customer being pummeled by a major attack. He followed our run book and triggered a FINT (which stands for "Fail Internal"), directing traffic from the site directly back to its origin rather than passing through Cloudflare's protective edge. Instantly the site was overwhelmed by the attack and, effectively, fell off the Internet.

Broken Process

I should be clear: the SRE didn't do anything wrong. He followed the procedures we had established at the time exactly.
He was a great computer scientist, but not a political scientist, so he didn't recognize the site or understand why, given the situation in Crimea at the time, a newspaper covering it might come under attack. But, the next morning, as I read the report on my walk in to work, I did.

Cloudflare's mission is to help build a better Internet. That day we failed to live up to that mission. I knew we had to do something.

Politically or Artistically Important?

It was relatively easy for us to decide to provide Cloudflare's security services for free to politically or artistically important non-profits and small commercial entities. We were confident that we could stand up to even the largest attacks. What we were less confident about was our ability to determine who was "politically or artistically important."

While Cloudflare runs infrastructure all around the world, our team is largely based in San Francisco, Austin, London, and Singapore. That certainly gives us a viewpoint, but it isn't a particularly globally representative viewpoint. We're also a very technical organization. If we surveyed our team to determine which organizations deserved protection, we'd no doubt identify a number of worthy organizations close to home and close to our interests, but we'd miss many others.

We also worried that it was dangerous for an infrastructure provider like Cloudflare to start making decisions about what content was "good." Doing so would inherently imply that we were in a position to make decisions about what content was "bad." While moderating content and curating communities is appropriate for some more visible platforms, the deeper you go into Internet infrastructure, the less transparent, accountable, and consistent those decisions inherently become.

Turning to the Experts

So, rather than making the determination of who was politically or artistically important ourselves, we turned to civil society organizations that were experts in exactly that. Initially, we partnered with 15 organizations, including:

- Access Now
- American Civil Liberties Union (ACLU)
- Center for Democracy and Technology (CDT)
- Centre for Policy Alternatives
- Committee to Protect Journalists (CPJ)
- Electronic Frontier Foundation (EFF)
- Engine Advocacy
- Freedom of the Press Foundation
- Meedan
- Mozilla
- Open Tech Fund
- Open Technology Institute

We agreed that if any partner said that a non-profit or small commercial entity that applied for protection was "politically or artistically important," then we would extend our security services and protect them, no matter what.

With that, Project Galileo was born. Nearly 600 organizations are currently being protected under Project Galileo. We've never removed an organization from protection, in spite of occasional political pressure as well as frequent, extremely large attacks.

Organizations can apply directly through Cloudflare for Project Galileo protection or can be referred by a partner. Today, we've grown the list of partners to 28, adding:

- Anti-Defamation League
- Amnesty International
- Business & Human Rights Resource Centre
- Council of Europe
- Derechos Digitales
- Fourth Estate
- Frontline Defenders
- Institute for War & Peace Reporting (IWPR)
- LION Publishers
- National Democratic Institute (NDI)
- Reporters Sans Frontières
- Social Media Exchange (SMEX)
- Sontusdatos.org
- Tech Against Terrorism
- World Wide Web Foundation
- X-Lab

Cloudflare's Mission: Help Build a Better Internet

Some companies start with a mission. Cloudflare was not one of those companies.
When Michelle, Lee, and I started building Cloudflare, it was because we thought we'd identified a significant business opportunity. Truth be told, I thought the idea of being "mission driven" was kind of hokum.

I clearly remember the day that changed for me. The director of one of the Project Galileo partners called me to say that he had three journalists who had received protection under Project Galileo visiting San Francisco, and asked if it would be okay to bring them by our office. I said sure and carved out a bit of time to meet with them.

The three journalists turned out to all be covering alleged government corruption in their home countries. One was from Angola, one was from Ethiopia, and they wouldn't tell me the name or home country of the third because he was "currently being hunted by death squads." All three of them hugged me. One had tears in his eyes. And then they proceeded to tell me about how they couldn't do their work as journalists without Cloudflare's protection.

There are incredibly brave people doing important work and risking their lives around the world. Some of them use the Internet to reach their audience. Whether it's African journalists covering alleged government corruption, LGBTQ communities in the Middle East providing support, or human rights workers in repressive regimes, unfortunately they all face the risk that the powerful forces that oppose them will use cyber attacks to silence them.

I'm proud of the work we've done through Project Galileo over the last five years, lending the full weight of Cloudflare to protect these politically and artistically important organizations. It has defined our mission to help build a better Internet.

While we respect the confidentiality of the organizations that receive support under the Project, I'm thankful that a handful have allowed us to tell their stories. I encourage you to read about our newest recipients of the Project:

- Majal
- Women's March Global
- VOST Portugal
- BullyingCanada

And, finally, if you know of an organization that needs Project Galileo's protection, please let them know we're here and happy to help.
