Corporate Blogs

Adventures in Timezones: How a Server’s Timezone Can Go Wrong

Nexcess Blog -

For the average American living in Chicago, being able to tell the time in New York is easy. Simply take the time in Chicago and add one hour: 10am becomes 11am. Yet timezones become more complicated when geopolitics is involved, and for any task that involves time processing, knowledge of the correct timezone is vital.… Continue reading →
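As the excerpt suggests, fixed-offset arithmetic breaks down once daylight saving time and politically shifting timezone rules enter the picture; robust time handling goes through a timezone database instead. A minimal sketch of the idea in Python, using the standard-library zoneinfo module (Python 3.9+); the dates and cities are illustrative, not from the original post:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # IANA tz database, Python 3.9+

# A wall-clock time in Chicago...
chicago = datetime(2019, 6, 25, 10, 0, tzinfo=ZoneInfo("America/Chicago"))

# ...converted via the tz database instead of a hard-coded "+1 hour".
new_york = chicago.astimezone(ZoneInfo("America/New_York"))
print(new_york.strftime("%H:%M %Z"))  # 11:00 EDT
```

Because the conversion consults the IANA database at runtime, the same code keeps working when a region changes its rules, which is exactly where the add-one-hour shortcut fails.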

Join Cloudflare & Moz at our next meetup, Serverless in Seattle!

CloudFlare Blog -

Photo by oakie / Unsplash

Cloudflare is organizing a meetup in Seattle on Tuesday, June 25th, and we hope you can join. We’ll be bringing together members of the developer community and Cloudflare users for an evening of discussion about serverless compute and the infinite number of use cases for deploying code at the edge.

To kick things off, our guest speaker Devin Ellis will share how Moz uses Cloudflare Workers to reduce time to first byte 30-70% by caching dynamic content at the edge. Kirk Schwenkler, Solutions Engineering Lead at Cloudflare, will facilitate this discussion and share his perspective on how to grow and secure businesses at scale. Next up, Developer Advocate Kristian Freeman will take you through a live demo of Workers and highlight new features of the platform. This will be an interactive session where you can try out Workers for free and develop your own applications using our new command-line tool.

Food and drinks will be served until close, so grab your laptop and a friend and come on by!

View Event Details & Register Here

Agenda:

5:00 pm: Doors open, food and drinks
5:30 pm: Customer use case by Devin and Kirk
6:00 pm: Workers deep dive with Kristian
6:30 - 8:30 pm: Networking, food and drinks

Liquid Web Becomes a Million Kilowatt Hour Efficiency Partner

Liquid Web Official Blog -

Liquid Web was recently honored with the Million Kilowatt Hour Efficiency Partner Award from the Lansing Board of Water and Light/Hometown Energy Savers. This award recognized efforts made in 2018 which significantly reduced the energy footprint of our data center and the equipment running within it. Other winners of the award this year included Meijer, Lansing Mall, East Lansing Public Schools, and the State of Michigan. Past recipients have included GM, Auto-Owners Insurance, Boji Towers, Lansing Schools, and McLaren Hospital, to name a few.

Liquid Web was recognized at a ceremony in Lansing, Michigan, where the award was presented to Aaron Reif, the Data Center Project Manager, and Kearn Reif, one of our fantastic Maintenance Technicians. Both individuals contributed greatly to this achievement by leading the energy projects that reduced the wattage used by our data center in Lansing. “It was good for the team’s hard work, planning, and management to be recognized,” stated Scott Haraburda, Liquid Web’s Director of Facilities and Infrastructure.

While the award was a pleasant surprise to the team at Liquid Web, it was hard earned. Our team has been busy making energy improvements for the last 18 months, including a complete company-wide LED lighting conversion, along with a replacement of the HVAC equipment that cools the data center and the server equipment, as part of the Lean and Green Michigan PACE project. When asked for an update on the PACE improvements to the cooling equipment at the Lansing data center, Haraburda said it was going very well. “It is a very unique project solving a unique problem. Total replacement of an HVAC system in a live data center is tricky; creating a new way to cool the site and having non-standard situations make it even more complex,” commented Haraburda.

Liquid Web is committed to an aggressive energy savings campaign through continuous energy improvements and the reduction of the carbon footprint left by our data centers. And as always, we are committed to providing world-class infrastructure and service to all of our customers.

The post Liquid Web Becomes a Million Kilowatt Hour Efficiency Partner appeared first on Liquid Web.

How Will My Hosting Plan Affect My SEO?

InMotion Hosting Blog -

Can your website hosting plan affect SEO? Absolutely. When it comes to SEO, one of the last things people consider is the web host – but your choice of provider can matter a lot. Why? Because your hosting service and the plan you choose can greatly influence the performance – and ultimately the traffic levels – of your website. In this article, you’ll learn how your web hosting can affect your search engine rankings and how to choose a plan that will actually improve your SEO. Continue reading How Will My Hosting Plan Affect My SEO? at The Official InMotion Hosting Blog.

A Guide to SEO Basics for Beginners

The Domain.com Blog -

SEO: just another buzzword? If that’s what you’re thinking, we’re delighted to tell you that nothing could be further from the truth. If you have a website, you’ve likely heard of SEO, and with good reason — it isn’t going anywhere. Understanding and implementing SEO fundamentals directly contributes to increased digital and business success, so it’s time you learned what SEO means and how it works. In this guide, we’re covering the SEO basics you need to know to help optimize your website. We’ll discuss:

What is SEO?
Why does SEO matter? How will SEO help me?
The anatomy of a SERP.
How to track your progress.
Simple SEO strategies you can start today.
What not to do with SEO.
Where can I learn more about SEO?

Let’s jump in, shall we?

What does SEO mean?

SEO is an abbreviation that stands for Search Engine Optimization. SEO is the practice of positively influencing your search engine result rankings, thereby increasing the quantity and quality of your website traffic. To put it simply, SEO gets your website in front of more people on search engines (like Google, Bing, or DuckDuckGo) without needing to pay for ads. Although search engine optimization sounds like you’d be making changes to the search engines themselves, the enhancements you’ll be making will be to your website, blog, or content.

Why does SEO matter, and does it affect my business?

Need more convincing as to why you should implement an SEO strategy? Consider these facts gathered from Search Engine Journal:

91.5 percent: the average traffic share generated by the sites listed on the first Google search results page.
51 percent of all website traffic comes from organic search, 10 percent from paid search, 5 percent from social, and 34 percent from all other sources. Over half of all website traffic comes from organic search — this is website traffic you AREN’T paying for, so refining your SEO strategy can save you money.
4 in 5 consumers use search engines to find local information.
~2 trillion: the estimated number of searches Google handles per year worldwide. That breaks down to 63,000 searches per second; 3.8 million searches per minute; 228 million searches per hour; 5.5 billion searches per day; and 167 billion searches per month.
~20: the number of times more traffic opportunity SEO has than PPC (Pay-Per-Click) on both mobile and desktop.

Does SEO affect your business? Without question, yes. But exactly how much it affects your business is up to you. If you don’t do anything to optimize and edit your website and content for SEO, then it can’t work for you. But if you take a few minutes to optimize your website, you’ll reap the benefits of SEO — an increase in the quantity and quality of traffic to your site due to improved search result rankings. SEO differs from other forms of digital marketing in that, with SEO, people are already searching for you. They need your services or products and they’re going to a search engine to figure out where they can get them. With SEO, you aren’t paying for ads in an attempt to woo fickle prospects back to your site — these people are already interested in what you’re selling, so help them find you by implementing an SEO strategy before your competitor does.

The anatomy of a SERP

What happens after you click “Search” on a search engine? You’re taken to the SERP, or Search Engine Results Page. (We’ve pulled the following SERP examples from Google because it dominates the search engine market worldwide with a 90.46% market share.)
Depending on your search terms, your SERP could include different types of results; however, there are some components on the results page that don’t change. Here’s what’s always included:

Paid Ads (or PPC, Pay-Per-Click): These results appear first because the businesses they advertise have paid money for their top placements.
Organic Search Results: Organic, or owned, search results aren’t paid for; instead, these results appear further up or down on the page depending on how well they’re optimized for SEO.

Both paid and organic results can also display as:

Basic search results: These results display as links with metadata (the description under the URL). Basic results don’t include images, graphs, or shopping suggestions on the main SERP. Pro tip: If you do decide to pay for ads, avoid clicking on those search results yourself. You’ll cost yourself money since you’re charged per click on those results.
Enriched search results: This is the most common SERP you’ll see, although it won’t always look the same. Enriched search results can include paid ads, organic results, sponsored links, local packs (local businesses that meet your search criteria), product carousels, and more. Google is always making updates and changes to its SEO algorithms to display the most relevant search results, so enriched search results won’t always show the same things.

If you click on a local search result, it will take you to a page where you can find out more about those businesses. Pro tip: If you have a business, claim your “Google My Business” listing so you can control and edit the information displayed about your business. “Add missing information” isn’t a good look when trying to attract visitors to your site.

Before we continue, when was the last time you performed an online search to see how your business or website ranks? If you haven’t done that in a while, we recommend doing so. It’s a good idea to know where you stand in search rankings so you can better gauge your SEO efforts and improvements.

Can I measure my SEO efforts?

You certainly can! And with Google Search Console — it’s free. Google Search Console gives you deep insight into your website. You can discover how people are getting to your site — where they’re coming from, what device they’re using — and what the most popular, or heavily trafficked, pages of your website are. The Search Console allows you to submit your sitemap or individual URLs for search engine crawling, alerts you to issues with your site, and more. If you haven’t used it before, don’t fret. Click this link to get to the Search Console. Then, click “Start now.” On the next page, you’ll need to input your Domain(s) and/or URL Prefix(es).

If you choose the Domain option, you will have to verify your pages using DNS to prove that you’re the owner of the domain and all its subdomains. Verifying your site and pages is for your security. Google Search Console provides great insight into your website, and that’s information only you should have. By requiring verification, Google ensures a competitor won’t have access to your website data. If you choose the URL Prefixes method, you’ll have a few options to verify your account; you can upload an HTML file (a bit more advanced, as it requires access to the site’s root directory), or if you already have Google Analytics set up, you can verify your site on Search Console that way. This beginner’s guide to Google Search Console by Moz walks you through all the ways you can verify your site.

What SEO tactics can I implement now?
Here are three ways you can vastly improve your SEO.

Write good content

Good content pays off when it comes to search engine results rankings. What makes for good content?

It’s linkable. Search engines like content that can be linked to from other pages. If you create content but have it gated (i.e., it can only be accessed after logging in or completing a similar action), then search engines won’t rank it as highly. They’re in the business of providing information to those who are seeking it, so make your content discoverable and linkable.
Aim for at least 1,000 words. Search engines reward robust content, so that 300-word blog post you’re hoping rises to the top of the search results? It needs to be fleshed out, and with relevant, valuable content.
Valuable, informative content drives demand. Search engines reward in-demand content with improved search result rankings. So if all you’ve done is write 1,000+ words that no one cares to read, and that don’t address your audience’s needs, you’ve wasted your time, as the content won’t rank highly in search results. You can figure out what your audience wants to know and what’s in demand by doing keyword research. Use WordPress? There are many free SEO tools and plugins that can help you and provide suggestions as you work, like Yoast or ThirstyAffiliates.

Keyword research

Why is keyword research important? If you know what your desired audience is searching for, you know what words and terms to include in your content — thereby giving yourself a boost in results ranking. There are a variety of free tools that exist to help you identify trending keywords, like Google Trends. This tool allows you to search keywords and terms (and compare them against one another) to discover how well-searched those terms are. This information can influence what keywords you use in your content. If there’s a term that’s searched a lot and relates to your content, use it. Here’s a list of 10 free keyword research tools put together by Ahrefs, many of which provide an even deeper level of insight into the keywords you should use.

On-page SEO

Moz describes on-page SEO as “… the practice of optimizing individual web pages in order to rank higher and earn more relevant traffic in search engines.” So what are the optimizable components of your individual webpages?

Content, which we touched upon earlier.
Title Tag: Title tags are important because they dictate the display title on SERPs (search engine results pages). It’s likely the first thing people will see when they scan their search results, so a good title tag can draw them in and get them to click on the result. Trying to write a good title tag? Avoid ALL CAPS, don’t stuff as many keywords as possible into it, and keep it under 60 characters. Some characters take up more space than others, so you can use free title tag preview tools to help visualize what your title tag will really look like.
URL structure: It’s easy to make sure your URLs are working for you on search engines instead of against you. How’s that? Make sure your URLs display page hierarchy. By doing so, your URL is easily read by search engines and explains where the content or page can be found on your site. What does a good URL look like? www.domain.com/domains/transfer, where each segment of the path reflects where the page sits in the site’s hierarchy. Now, imagine if the URL listed above looked something like “www.domain.com/int489/trans74087.” What does that tell the search engines? Not a whole lot, and definitely not where the page resides on your site.
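Some of these on-page checks are easy to automate. Below is a small illustrative Python sketch of the two rules just described; the 60-character limit comes from the guidance above, while the function names and sample values are our own:

```python
def title_tag_ok(title: str, max_len: int = 60) -> bool:
    """Check a title tag against the ~60-character display limit and ALL CAPS."""
    return 0 < len(title) <= max_len and title != title.upper()

def hierarchical_url(domain: str, *segments: str) -> str:
    """Build a URL whose path mirrors the site's page hierarchy."""
    path = "/".join(s.strip("/").lower().replace(" ", "-") for s in segments)
    return f"https://{domain}/{path}"

print(title_tag_ok("A Guide to SEO Basics for Beginners"))  # True
print(hierarchical_url("www.domain.com", "domains", "transfer"))
# https://www.domain.com/domains/transfer
```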
For more information on on-page SEO ranking factors, take a look here.

What should I avoid when getting started with SEO?

For every piece of good SEO advice out there, there are a few bad pieces floating around. No matter whose friend’s cousin’s uncle tells you it’s a good idea, avoid the following practices.

Keyword stuffing: Search engines are constantly improving and refining their algorithms to make sure the most valuable content is surfaced first. You can’t fool them by stuffing your content full of keywords and calling it a day.
Duplicate content: When the same piece of content appears on the internet in various places using different URLs, it’s considered duplicate content. It may seem like having your content available in more places, with different URLs, is a good idea — more ways for people to find you, right? It isn’t. Duplicate content confuses search engines. Which URL is the primary or correct one for the content? Should they split the results and show half the searchers one URL and the other half another? What page, or URL, ends up getting the credit for the traffic? Instead of dealing with all of that, chances are you’ll suffer a loss of traffic because the search engine won’t surface all of the duplicates.
Writing for search engines instead of people: Search engines are in the business of getting the correct and best information to the people who need it, or search for it. If you’re writing choppy, keyword-stuffed sentences, they’ll be pretty painful for a human to read, so they won’t. If you don’t have people reading or interested in your content, there’s no demand. No demand = poor search result rankings.
Thin content: You should never create content for the sake of creating content. Make sure it’s quality content — relevant to your audience and at least 1,000 words long — so search engines are more likely to surface it higher on SERPs.

Where can I learn more about SEO?

This introduction to SEO serves to get you acquainted with search engine optimization and lay down the groundwork, but don’t forget: the more you invest in SEO, the better off your website will be. Once you’re familiar with the topics we’ve discussed here, challenge yourself to take it to the next level with these topics.

White Hat vs. Black Hat SEO

You know how in movies the bad guys normally wear dark, drab colors while the good guys wear bright, or white, colors? You can think of white hat and black hat SEO in the same way. Black hat SEO tactics may seem to pay off at first, but just like with bad guys, what you do will come back to haunt you (like getting blacklisted from search engines!). Google, for instance, is constantly updating and refining its search algorithms. If it notices questionable behavior (like keyword stuffing), it will penalize those behaviors in its updates — so that “hack” you discovered that allows you to rank on page 1 of search results? It won’t work once the algorithm is changed, and you’ll lose your authority. Good SEO habits, or white hat SEO, won’t put you at risk of being penalized by search engines, so your authority will continue to climb.

Off-page SEO

Unlike on-page SEO, off-page SEO (or off-site SEO) consists of tactics to improve your search engine result rankings that aren’t done on your site. There are a variety of things you can do, but link-building is the most well-known. The more links that exist to your site and content, the better (within reason; if you spam every website you can think of with your links in comments, that’s not OK).
Link building happens in a variety of ways: naturally, when someone finds your content to be relevant and links to it in one of their posts or pages; manually, when you deliberately work to increase the number of links that exist for your site, say by asking clients or associates to link to your content; and self-created. Self-created links, including links to your site or content in random social media posts and blog comments, can be good in moderation. Too many spammy posts or comments venture into black hat SEO territory, so tread carefully.

Putting it all together

If you work on improving your SEO tactics, your website and business will thank you. A good SEO strategy increases the likelihood of your content and pages displaying higher in search engine results. When your content shows up sooner in search results, you get more website traffic and better quality website traffic; after all, those are people already searching for what you have to offer. As you dive into SEO, remember to take stock of where your pages and content show up in SERPs today so you can gauge your progress and SEO results tomorrow. Use this introduction to SEO to help you write better content, create informative URL structures, and understand the SEO tactics to avoid.

The post A Guide to SEO Basics for Beginners appeared first on Domain.com | Blog.

What Type of Hosting Will Be Best for Increasing Site Speed?

InMotion Hosting Blog -

Let’s look at how VPS hosting can have a positive impact on the speed of your website. Some of us can remember the old days of dial-up internet connections and how slowly pages loaded. You could type in a URL, go make yourself a sandwich, and the page would still be loading by the time you got back. But today, internet speeds continue to get much faster, and website load times have to keep up if sites want to stay competitive with consumers. Continue reading What Type of Hosting Will Be Best for Increasing Site Speed? at The Official InMotion Hosting Blog.

Introducing time.cloudflare.com

CloudFlare Blog -

This is a guest post by Aanchal Malhotra, a Graduate Research Assistant at Boston University and former Cloudflare intern on the Cryptography team.

Cloudflare has always been a leader in deploying secure versions of insecure Internet protocols and making them available for free for anyone to use. In 2014, we launched one of the world’s first free, secure HTTPS services (Universal SSL) to go along with our existing free HTTP plan. When we launched the 1.1.1.1 DNS resolver, we also supported the new secure versions of DNS (DNS over HTTPS and DNS over TLS). Today, we are doing the same thing for the Network Time Protocol (NTP), the dominant protocol for obtaining time over the Internet.

This announcement is personal for me. I’ve spent the last four years identifying and fixing vulnerabilities in time protocols. Today I’m proud to help introduce a service that would have made my life from 2015 through 2019 a whole lot harder: time.cloudflare.com, a free time service that supports both NTP and the emerging Network Time Security (NTS) protocol for securing NTP. Now, anyone can get time securely from all our datacenters in 180 cities around the world.

You can use time.cloudflare.com as the source of time for all your devices today with NTP, while NTS clients are still under development. NTPsec includes experimental support for NTS. If you’d like to get updates about NTS client development, email us asking to join at time-updates@cloudflare.com. To use NTS to secure time synchronization, reach out to your vendors and inquire about NTS support.

A small tale of “time” first

Back in 2015, as a fresh graduate student interested in Internet security, I came across this mostly esoteric Internet protocol called the Network Time Protocol (NTP). NTP was designed to synchronize time between computer systems communicating over unreliable and variable-latency network paths. I was actually studying Internet routing security, in particular attacks against the Resource Public Key Infrastructure (RPKI), and kept hitting a dead end because of a cache-flushing issue. As a last-ditch effort I decided to roll back the time on my computer manually, and the attack worked.

I had discovered the importance of time to computer security. Most cryptography uses timestamps to limit certificate and signature validity periods. When connecting to a website, knowledge of the correct time ensures that the certificate you see is current and is not compromised by an attacker. When looking at logs, time synchronization makes sure that events on different machines can be correlated accurately. Certificates and logging infrastructure can break with minutes, hours, or months of time difference. Other applications, like caching and Bitcoin, are sensitive to even very small differences in time, on the order of seconds. Two-factor authentication using rolling codes also relies on accurate clocks. All of this creates the need for computer clocks to have access to reasonably accurate time that is securely delivered. NTP is the most commonly used protocol for time synchronization on the Internet. If an attacker can leverage vulnerabilities in NTP to manipulate time on computer clocks, they can undermine the security guarantees provided by these systems.
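To make the two-factor point concrete, here is a minimal sketch of how time-based rolling codes (TOTP, per RFC 6238) are derived from the clock; shift a machine’s clock by even a few 30-second steps and it computes a different code. The secret below is a made-up example, not a real credential:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, at: float, step: int = 30, digits: int = 6) -> str:
    """Standard TOTP: HMAC the time-step counter, then dynamic truncation."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(at) // step)  # clock drives everything
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"  # example secret, not a real credential
now = time.time()
print(totp(secret, now))        # what a correctly synchronized machine computes
print(totp(secret, now - 90))   # what a machine 90 seconds slow computes
```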
Motivated by the severity of the issue, I decided to look deeper into NTP and its security. Since the need for synchronizing time across networks was visible early on, NTP is a very old protocol. The first standardized version of NTP dates back to 1985, while the latest, NTP version 4, was completed in 2010 (see RFC 5905).

In its most common mode, NTP works by having a client send a query packet out to an NTP server that then responds with its clock time. The client then computes an estimate of the difference between its clock and the remote clock and attempts to compensate for network delay in this estimate. An NTP client queries multiple servers and implements algorithms to select the best estimate, rejecting clearly wrong answers.

Request/response flow of NTP
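For the curious, this query/response exchange can be reproduced in a few lines. Here is a minimal SNTP sketch in Python: it sends a mode-3 client packet and applies the textbook offset estimate; real clients add the multi-server selection and filtering described above, and this sketch ignores the fractional-second fields:

```python
import socket, struct, time

NTP_EPOCH_DELTA = 2208988800  # seconds between 1900 (NTP) and 1970 (Unix)

def ntp_offset(server: str = "time.cloudflare.com") -> float:
    """One SNTP exchange; returns the estimated clock offset in seconds."""
    packet = bytearray(48)
    packet[0] = (0 << 6) | (4 << 3) | 3  # LI=0, version 4, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2)
        t1 = time.time()                 # client transmit time
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
        t4 = time.time()                 # client receive time
    # Server receive (t2) and transmit (t3) timestamps, whole seconds only.
    t2 = struct.unpack(">I", data[32:36])[0] - NTP_EPOCH_DELTA
    t3 = struct.unpack(">I", data[40:44])[0] - NTP_EPOCH_DELTA
    # Standard NTP offset estimate, assuming symmetric network delay.
    return ((t2 - t1) + (t3 - t4)) / 2

print(f"clock offset: {ntp_offset():+.3f} s")
```

The halving in the last line is the protocol’s assumption that network delay is symmetric; as this post notes later, asymmetry between the two path directions is the biggest source of inaccuracy.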
Surprisingly enough, research on NTP and its security was not very active at the time. Before this, in late 2013 and early 2014, high-profile Distributed Denial of Service (DDoS) attacks were carried out by amplifying traffic from NTP servers; attackers able to spoof a victim’s IP address were able to funnel copious amounts of traffic to overwhelm the targeted domains. This caught the attention of some researchers. However, these attacks did not exploit flaws in the fundamental protocol design. The attackers simply used NTP as a boring bandwidth multiplier. Cloudflare wrote extensively about these attacks and you can read about it here, here, and here.

I found several flaws in the core NTP protocol design and its implementation that can be exploited by network attackers to launch much more devastating attacks by shifting time or denying service to NTP clients. What is even more concerning is that these attackers do not need to be a Monster-In-The-Middle (MITM), where an attacker can modify traffic between the client and the server, to mount these attacks. A set of recent papers authored by one of us showed that an off-path attacker present anywhere on the network can shift time or deny service to NTP clients. One of the ways this is done is by abusing IP fragmentation.

Fragmentation is a feature of the IP layer where a large packet is chopped into several smaller fragments so that they can pass through networks that do not support large packets. Basically, any random network element on the path between the client and the server can send a special “ICMP fragmentation needed” packet to the server telling it to fragment the packet to, say, X bytes. Since the server is not expected to know the IP addresses of all the network elements on its path, this packet can be sent from any source IP.

Fragmentation attack against NTP

In our attack, the attacker exploits this feature to make the NTP server fragment its NTP response packet for the victim NTP client. The attacker then spoofs carefully crafted overlapping response fragments from off-path that contain the attacker’s timestamp values. By further exploiting the reassembly policies for overlapping fragments, the attacker fools the client into assembling a packet with legitimate fragments and the attacker’s insertions. This evades the authenticity checks that rely on values in the original parts of the packet.

NTP’s past and future

At the time of NTP’s creation back in 1985, there were two main design goals for the service provided by NTP. First, they wanted it to be robust enough to handle networking errors and equipment failures. So it was designed as a service where a client can gather timing samples from multiple peers over multiple communication paths and then average them to get a more accurate measurement.

The second goal was load distribution. While every client would like to talk to time servers that are directly attached to high-precision time-keeping devices like atomic clocks, GPS, etc., and thus have more accurate time, those devices can serve only so many clients. So, to reduce protocol load on the network, the service was designed in a hierarchical manner. At the top of the hierarchy are servers connected to non-NTP time sources; these distribute time to other servers, which further distribute time to even more servers. Most computers connect to either these second- or third-level servers.

The stratum hierarchy of NTP

The original specification (RFC 958) also states the “non-goals” of the protocol, namely peer authentication and data integrity. Security wasn’t considered critical in the relatively small and trusting early Internet, and the protocols and applications that rely on time for security didn’t exist then. Securing NTP came second to improving the protocol and implementation.

As the Internet has grown, more and more core Internet protocols have been secured through cryptography to protect against abuse: TLS, DNSSEC, and RPKI are all steps toward ensuring the security of all communications on the Internet. These protocols use “time” to provide security guarantees. Since the security of the Internet hinges on the security of NTP, it becomes even more important to secure NTP.

This research clearly showed the need for securing NTP. As a result, there was more work in the standards body for Internet protocols, the Internet Engineering Task Force (IETF), toward cryptographically authenticating NTP. At the time, even though NTPv4 supported both symmetric and asymmetric cryptographic authentication, these were rarely used in practice due to limitations of both approaches.

NTPv4’s symmetric approach to securing synchronization doesn’t scale, as the symmetric key must be pre-shared and configured manually: if every client on earth needed a special secret key for each server it wanted to get time from, the organizations that run those servers would have to do a great deal of work managing keys. This makes the solution quite cumbersome for public servers that must accept queries from arbitrary clients. For context, NIST operates important public time servers and distributes symmetric keys only to users that register, once per year, via US mail or facsimile; the US Naval Observatory does something similar.

The first attempt to solve the problem of key distribution was the Autokey protocol, described in RFC 5906. Many public NTP servers do not support Autokey (e.g., the NIST and USNO time servers, and many servers in pool.ntp.org). The protocol is badly broken, as any network attacker can trivially retrieve the secret key shared between the client and server. The authentication mechanisms are non-standard and quite idiosyncratic. The future of the Internet is a secure Internet, which means an authenticated and encrypted Internet. But until now NTP has remained mostly insecure, despite continuing protocol development, while more and more services have come to depend on it.

Timeline of NTP development

Fixing the problem

Following the release of our paper, there was a lot more enthusiasm for improving the state of NTP security, both in the NTP community at the IETF and outside it. As a short-term fix, the ntpd reference implementation software was patched for several vulnerabilities that we found.
And for a long-term solution, the community realized the dire need for a secure, authenticated time synchronization protocol based on public-key cryptography, which enables encryption and authentication without requiring the sharing of key material beforehand. Today we have a Network Time Security (NTS) draft at the IETF, thanks to the work of dozens of dedicated individuals at the NTP working group.

In a nutshell, the NTS protocol is divided into two phases. The first phase is the NTS key exchange, which establishes the necessary key material between the NTP client and the server. This phase uses the Transport Layer Security (TLS) handshake and relies on the same public key infrastructure as the web. Once the keys are exchanged, the TLS channel is closed and the protocol enters the second phase. In this phase, the results of the TLS handshake are used to authenticate NTP time synchronization packets via extension fields. The interested reader can find more information in the Internet draft.
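As an illustration of phase one, the key exchange is an ordinary TLS session with a dedicated ALPN label. The Python sketch below only opens that channel and prints what was negotiated; the "ntske/1" label comes from the NTS drafts, the port is the one given later in this post, and parsing of the NTS-KE records (and all of phase two) is omitted:

```python
import socket, ssl

HOST, PORT = "time.cloudflare.com", 1234  # NTS-KE endpoint named in this post

# Phase one: a TLS 1.3 handshake with the NTS-KE ALPN label. The keys that
# protect phase two's NTP extension fields are derived from this session.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["ntske/1"])
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection((HOST, PORT), timeout=5) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        print("TLS version:", tls.version())
        print("ALPN agreed:", tls.selected_alpn_protocol())  # "ntske/1"
```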
Cloudflare’s new service

Today, Cloudflare announces its free time service to anyone on the Internet. We intend to solve the limitations of the existing public time services, in particular by increasing availability, robustness, and security.

We use our global network to provide an advantage in latency and accuracy. Our 180 locations around the world all use anycast to automatically route your packets to our closest server. All of our servers are synchronized with stratum 1 time service providers and then offer NTP to the general public, similar to how other public NTP providers function. The biggest source of inaccuracy for time synchronization protocols is network asymmetry: a difference between the travel time from the client to the server and from the server back to the client. However, our servers’ proximity to users means there will be less jitter — a measurement of variance in latency on the network — and less possible asymmetry in packet paths. We also hope that in regions with a dearth of NTP servers our service significantly improves the capacity and quality of the NTP ecosystem.

Cloudflare servers obtain authenticated time by using a shared symmetric key with our stratum 1 upstream servers. These upstream servers are geographically spread and ensure that our servers have accurate time in our datacenters. But this approach to securing time doesn’t scale. We had to exchange emails individually with the organizations that run stratum 1 servers, as well as negotiate permission to use them. While this is a solution for us, it isn’t a solution for everyone on the Internet.

As a secure time service provider, Cloudflare is proud to announce that we are among the first to offer a free and secure public time service based on Network Time Security. We have implemented the latest NTS IETF draft. As this draft progresses through the Internet standards process, we are committed to keeping our service current.

Most NTP implementations are currently working on NTS support, and we expect that the next few months will see broader introduction as well as advancement of the current draft protocol to an RFC. Currently we have interoperability with NTPsec, which has implemented draft 18 of NTS. We hope that our service will spur faster adoption of this important improvement to Internet security. Because this is a new service with no backwards compatibility requirements, we are requiring the use of TLS v1.3 with it to promote adoption of the most secure version of TLS.

Use it

If you have an NTS client, point it at time.cloudflare.com:1234. Otherwise, point your NTP client at time.cloudflare.com. More details on configuration are available in the developer docs.

Conclusion

From our Roughtime service to Universal SSL, Cloudflare has played a role in expanding the availability and use of secure protocols. Now, with our free public time service, we provide a trustworthy, widely available alternative to another insecure legacy protocol. It’s all a part of our mission to help build a faster, more reliable, and more secure Internet for everyone.

Thanks to the many other engineers who worked on this project, including Watson Ladd, Gabbi Fisher, and Dina Kozlov.

Meet a Helpful Human – Taylor Frye

Liquid Web Official Blog -

We’re the employees you would hire if you could. Responsive, helpful, and dedicated in ways automation simply can’t be. We’re your team. Each month we recognize one of our Most Helpful Humans in Hosting.

Meet Taylor Frye

Why did you join Liquid Web?

My father had a friend who worked in Liquid Web’s Sales Department as a manager. He had been with the company since its early start in 2000. I had met him several times outside work and was excited to get to work with him in the web hosting industry. He got me an interview and I’ve never looked back!

Is there something specific at Liquid Web that you just love?

The technology, the employees, and the personalities. I’m always learning something new at Liquid Web, but what really makes me love this place is the friends I have gained after being here for nine years, not to mention the business connections! There is always a subject I can contribute to or learn from. The place is full of geeks just like myself, and it’s awesome to be around others who share the same interests as I do.

In your eyes, what’s the difference between Liquid Web and other employers?

Management is understanding and flexible when it comes to family needs. I’m the father of two daughters, ages 2 and 4. There can be times when I just need to work from home and help my wife. Liquid Web has been flexible and accommodating with me instead of discouraging, as past employers I’ve worked for have been. As long as you are doing your role effectively, you have nothing to worry about, which brings a nice peace of mind.

What draws you to the hosting industry as a career?

I have an interest in technology as a whole. I do some web and graphic design on the side, so my foot was already partially in the door, technologically speaking. Once I saw how web design and web hosting relied on each other, my interest and knowledge grew with the web hosting industry.

What is the biggest milestone you’ve accomplished?

In the roles that I have fulfilled at Liquid Web, one of my biggest milestones was running a successful third-shift sales team. I had a small team, a mix of seasoned and new employees, that I coached and watched grow. During this time, there was active sales management training. This allowed me to work effectively with my team on their sales skills, such as helping them pitch our products via phone call or live chat. Pushing confidence and getting rid of wishy-washy terms allowed us to hit our sales goal several months in a row. Growing the sales team and watching them become successful was spectacular for me and gave me a sense of accomplishment that I hadn’t felt in my career before. It was an amazing opportunity that I was grateful Liquid Web gave me.

Tell us about a truly rewarding experience you’ve had with a customer.

I was working with a customer who needed an upgrade to their existing infrastructure with us. While upgrading, the customer was migrating their email data and accidentally ran a terminal command which deleted a bulk of their email. They had no backups and were desperate to get this data back. We went above and beyond by contacting a smaller data recovery partner (at the time), who we sent the drives out to, and were able to successfully recover their data. The customer was ecstatic to get the data back and even made an additional purchase of our Remote Backup product after the incident. This was extremely rewarding for both myself and the customer, as well as a great sales opportunity that materialized due to our service and support.
What are you known for at Liquid Web? What do people specifically come to you for?

I’ve been called a Swiss Army knife due to the fact that I have had so many different roles at Liquid Web, such as Inbound Solutions Representative, Solutions Mentor, Solutions Manager, Customer Success Specialist, Managed WordPress On-Boarding Specialist, and Jr. Solutions Architect. I now serve as an Install Base Solutions Consultant. Folks come to me with all types of questions that may be related to my current role or past roles I have fulfilled. I love being able to help so many people out with the diverse knowledge and background I’ve gained during my time at Liquid Web.

What is one thing you wish our customers knew about their hosting?

I wish our customers knew how dangerous using out-of-date hardware and software truly is. Since I started at the company, we have vastly expanded the security offerings for our products. If customers knew more about the free security and speed boost they receive with us through Cloudflare, or the Web Application Firewalls we can offer, they wouldn’t have to seek third-party providers to help fill these potential security gaps.

Work aside, what are some of your hobbies?

I’m a big gamer and run an online gaming community to help with charities and gamers who may not have folks to game with. I also still do some graphic and web design on the side, and video editing work when the time is available. I really enjoy digital media, as well as streaming games on my twitch.tv channel.

What is your favorite TV show?

Game of Thrones. My wife pulled me into it and we can’t get enough!

If you could have dinner with one famous person [dead or alive], who would it be?

Adam Sandler, founder of Happy Madison Productions. I’m a huge fan of Happy Gilmore, Big Daddy, and Little Nicky. He’s a great comedian, but I truly appreciate his acting talents more.

You can follow Taylor on LinkedIn. We hope you enjoyed our series, and stay tuned for the next Most Helpful Human in Hosting profile.

The post Meet a Helpful Human – Taylor Frye appeared first on Liquid Web.

Factors To Consider When Creating A Business Site Budget

InMotion Hosting Blog -

When creating a business website, people often think that development and design are the only significant expenses – but there is so much more to it than that. Domain name, web hosting, website maintenance, and more are all factors that need to be worked into your long-term budget. Below, we’re going to go over the different costs that play into website development and what you can reasonably expect to pay. Just remember – costs can vary greatly depending on where you purchase your services and the type of plan you sign up for, so you should always do your own research! Continue reading Factors To Consider When Creating A Business Site Budget at The Official InMotion Hosting Blog.

cPanel Application Manager and App Deployment 101

cPanel Blog -

Researching another piece I’ve been writing, I realized that I was grossly unfamiliar with a portion of the cPanel & WHM product. For a bit of background, I’ve been using cPanel & WHM for about nine years now, mostly from the end user and system administrator perspectives. Admittedly, I am not a developer, nor do I pretend to be one. Between you and me, I have immense respect for developers and the dark arts magic that ...

Ace the Interview: Tackle Tough Questions and Prep Like a Pro

LinkedIn Official Blog -

Interviewing for a new job can be nerve-wracking. With so much riding on making a great impression, it’s easy to feel overwhelmed. So, it’s no surprise that two-thirds (67%) of millennials feel uneasy about job interviews. Almost 40% would rather spend an entire weekend cleaning out their garage than meet with a hiring manager, 15% of millennials feel so nervous they could throw up before every interview, and 80% admit to being stumped by interview questions. But don’t let your fear leading up... .

How to Create a Freelance Writer Website That Actually Gets You Writing Gigs

DreamHost Blog -

The future is freelance. Did you know? By 2020, 50% of the U.S. workforce will do some type of freelance work — and it’s predicted that by 2027, freelancers will make up the majority. Whether you work exclusively freelance or take on additional side projects in conjunction with your full-time work, you’re joining an ever-growing population of successful, flexible, untethered, and creative craftspeople. What’s more, the innovation and growth of technology have made the work environment more fruitful for freelancers: 64% of freelancers found work online — a 22-point increase in the last five years. And you freelance writers, bloggers, and web content writers — we see you. We know you’re out there, coloring the world with your beautiful language and lightbulb ideas. But because freelancers must do their own marketing legwork, you need to take advantage of every tool available to you in building a prolific writing business. One of the biggest weapons in your arsenal? A relevant web presence. Forget scouring the wanted ads to find work — establishing an online presence and showing off a strong virtual CV is vital for getting seen and earning $$$. How to put your best foot — and word — forward online? A top-of-the-class website. For writers, a killer freelance writer website is a make-it-or-break-it tool for getting you leads on quality writing gigs. And we’re going to show you how to do it. Here’s what we’ll cover in this guide (in case you want to jump ahead):

Why Having a Freelance Writer Website Is Important
How to Build Your Freelance Writer Website
Mistakes to Avoid When Setting Up Your Website
Handy Resources for Starting a Writer Website

With a website, you can flaunt your talent and personality, create sustainable sales, build your writing portfolio, and connect with potential and return customers, building your business and financial success — all in one place.

Build Your Online Portfolio with DreamHost: We make sure your freelance writing website is fast, secure, and always up so you never miss a gig. Plans start at $2.59/mo. Choose Your Plan.

Why is Having a Good Freelance Writer Website Important?

You’re a writer — you know, good ol’ pen and paper. Why do you even need a website in the first place? With a well-built freelance writer website, you can:

Showcase Your Online Portfolio. One of the most significant advantages of creating a freelance writer website is having a living, breathing portfolio that is easily accessible online. Prospective clients can access your work, and through a broad range of content, get a feel for your style, voice, and writing ability. They can view your previous work and a wealth of relevant content that will help them trust their business to you.

Increase Brand Visibility. Your website is a visible showcase of your writing ability and a crucial tool for establishing awareness of your brand. With a powerful online presence, visitors don’t have to go digging around to discover info on your offerings. Not only do you make it possible for people to find you online, but your website also helps you build likability. With great, engaging content, visitors start to care about you and your work and will entertain the prospect of working with you. It illustrates your legitimacy as a writing professional and helps you position yourself as an authority in your field. By making your work accessible, you broaden your visibility and provide social proof which, in turn, increases your chances of getting rewarding freelance writing work.
Strengthen Brand Legitimacy. Let’s be real. Companies without a website or an internet presence tend to raise some red flags in the e-commerce ecosystem, right? Everything’s on the web. These days, a dot-com is an essential requirement in the biz world. If internet users can’t find your virtual corner of the web, customers seeking out a particular product or service will instantly think: can we trust that business if they’re not online in an everything-digital age? It’s a no-brainer that if you want to do business and market a product or service in the world we live in, potential clients need to be able to find you with just a couple of clicks from their browser. So on a very basic level, having a website helps establish your brand as a legitimate business, rather than operating as an amateur or letting customers rely on what they gather from your social media presence. What’s more, the better you are at outfitting your site with great content and strong visuals, the more that legitimacy will increase and work in your favor. To bless your bottom line and earn trust from internet visitors, it’s crucial to demonstrate not only your tech-savvy web skills but also your ability to establish a professional and valuable web presence.

We know you’re wondering: Do I have to have a freelance writer website if I’m just getting started? The short answer: No. BUT — having an established site for your freelance writing (your services and a showcasing portfolio) is the best way to build a marketing funnel and establish a legitimate, cohesive, and authoritative brand. It’s a clear way to put your best foot forward and secure quality writing jobs.

OK, but hold up. It’s 2019, you say. Can’t I just use social media, like a LinkedIn company page, instead of a website to promote my writing business? Sure. But a website, even a simple one, is a good idea. With a well-established freelance writer website, you build authority as a brand and increase your chances of getting seen by potential clients. Plus, you’ll own all the content on your site — something that isn’t always true on social media sites.

Perhaps building a high-performing and snazzy-looking freelance writer website seems like an overwhelming task. But putting in the effort to set up a website is an investment with guaranteed returns. A site to be admired — and to get you hired.

Related: Want to Build a Website in 2019? Here’s Your Game Plan

How to Build a Great Freelance Writer Website (7 Steps)

Like we said, creating a great-looking freelance writer website doesn’t have to be rocket science or overly time-intensive. We’ll show you how to set up a website in seven easily manageable steps.

1. Brand Your Business

Time to pick a name, business owner! If you’re branding yourself and marketing your skills, you can use your own name, but ask yourself a few of the following big-picture questions before nailing down a moniker: Would you ever sell your business? Even if you’re not entirely sure of your long-term business plan, you probably have an idea if you ever intend to pass the torch on your writing business or include others’ services or products in conjunction with your business. If you’ve entertained the idea of selling your brand one day or partnering up, don’t brand yourself with your own name. Obviously, that is unique to you and won’t transfer. Also, if your name is difficult to spell, pronounce, or remember, consider the possible confusion using your name might cost your business.
But then again, your personal name might help brand you uniquely, as potential clients can differentiate you from other common-name writing businesses. So consider your options before jumping into a brand or business name haphazardly. You never know how you’ll grow, adapt, and change in your freelance writing business. You’ll want to choose carefully in order to set yourself up for long-term success.

2. Choose a Content Management System

Now that you’ve got your brand’s fancy new name tag, you need a content management system (CMS) to facilitate the creation and publication of your content on the web. The best part? You don’t have to know how to program a single line of code to use one! Take WordPress, one of the web’s most popular content management systems (it powers 30% of the internet!).

Related: What Is WordPress? Everything You Need to Know About the Platform

With the WordPress platform, you can create and manage your web content without the pressure of a deep learning curve — you can get a website set up with little-to-no technical know-how.

3. Register a Domain and Set up Hosting

OK, you’ve decided you want to use WordPress, and you’re full of great content ideas. Good to go, right? Well, first, you need to find your site a home on the web so that visitors can actually view and engage with your content. All those great ideas won’t amount to anything if your website isn’t available online. That means you need two very critical components: a domain and a hosting provider.

A domain is the unique web address where your website can be found. This is what visitors will type into their browser to navigate to your site (for example, www.dreamhost.com). Your domain is unique to your website and should match your brand or business name. You should also consider your choice of top-level domain — meaning .com or .blog or dot-whatever — in order to position yourself as an authority in search engine rankings. Whatever domain name you choose, you purchase it through a registrar.

Next, you need a hosting provider. Hosting companies sell unique-to-you plans that include space on a server so that your website has a place to live online. Without a server, your website won’t be available to visit. For the best chance at scoring quality gigs, you need a quality hosting provider. There are a lot of providers out there, but only DreamHost can offer you the best of the best: one-of-a-kind features, high-performance tech, and responsive support. Plus, we make things easy: domain registration and hosting services under one roof, and one-click WordPress installs. With Shared Hosting, just check the “Pre-Install WordPress” box during sign-up and boom! We install it for you. Shared Hosting provides ambitious WordPress beginners everything they need to create a killer freelance writing website that gets them hired. Even better? Our Shared Hosting plans start at just $2.59 per month.

4. Choose a WordPress Theme

Time to outfit your website with a WordPress theme. The theme you select doesn’t just dictate the overall appearance of your site (though it does do that); it also determines what sort of functionality your site will have. The right theme will allow you to control and customize your website to your exact specifications and niche. Browse the WordPress Theme Directory or search WordPress theme developers to find and install your perfect theme.

5. Decide What Content Your Site Needs

So what does your freelance writer website need?
What are the must-have content and features relevant to your niche? Time to make a plan. While you have the freedom to customize your website according to your brand and personality, there are a few essential pages that your site should have to set you up for the best possible business success:

Homepage: An easy-to-navigate and attractive landing page that can direct visitors and potential clients to important parts of your website.
Online Portfolio: Your website should be a solid, structured way to demonstrate your skills as a professional writer. A vital feature — nay, asset — of your website is an easy-to-find, specially dedicated portfolio section where you can showcase relevant published work and prove your capabilities as a writer.
Services: Nearly 50% of website visitors check out a company’s product or services page before any other section of the site. That’s big. What do you offer? Give potential clients a clear and detailed description of the specific writing services you offer.
About: Don’t be a robot behind the computer screen. Demonstrate your writing chops, let potential clients and visitors get to know you, and help them get acquainted with your unique voice with an engaging and humanizing Get-to-Know-Me section. Showcase your accomplishments and passion for what you do, but also share what makes you unique.
Contact: How can potential clients get in touch with you? Make your contact information easy to find and use.

Now that you’ve got your essential pages set up, you can go above and beyond to bring your freelance writer website to the next level. While you should avoid non-essentials, you can consider adding the following optional (but helpful) pages:

Clients: Name-dropping your current clients on your website is a great way to demonstrate social proof and establish your authority in the field. Think of it as a virtual word-of-mouth recommendation. Speaker, writer, and consultant Hillary Weiss proudly displays the well-known brands that believe in her work.
Testimonials: The power of a good review cannot be overstated, especially in an online environment. Confidently showcasing positive feedback you’ve received from clients in your field about your writing services can be great fodder for snagging new clients and more writing jobs. It’s OK to toot your own horn. Writer and speaker Colleen M. Story inspires confidence with a visible display of reader testimonials.
Blog: In addition to your portfolio, you can showcase your writing chops and your unique voice with a content-rich blog. The extra effort and value you’re providing your visitors with relevant blog content can be an investment with rich returns.
Resume: Allow visitors and potential clients to check out a bulleted list of your skills and achievements with an easy-to-view CV.
FAQs: If you want to answer potentially common questions about your work or services, or provide more specific details to potential clients about what you offer, consider adding a FAQ section.
Downloads/Freebies: Making free, downloadable goodies available to your visitors on your site shows that you’re going above and beyond to offer value, demonstrating the high-quality nature of your freelance business.

Lastly, consider pricing: if you want to be explicit on your site about the cost of your services, be transparent, upfront, and confident in the value of your work. Or, if you have adjust-to-fit service options, you can keep costs mum and invite interested visitors to contact you for a quote.

6. Create the Content

Time to get creating!
You know the adage: content is king. Live by it. You need to fill your website with rich content to attract traffic and prove your worthiness as a business. Fill in the content on your must-have pages first, then continue to provide valuable content regularly. Just as important as creating content is creating it smartly — meaning, using it to get found by potential clients. How do you do that? By using keywords. Consider: what are relevant topics and search terms related to your field? Being smart about how you use phrasing and common search terms in your content will allow you to position yourself for good rankings and stronger search engine optimization. So do your research and incorporate common search terms into your content. Use tools like Google’s comprehensive (and free!) Keyword Planner to create high-traffic website content with smart keyword research and build a strong content marketing strategy. Also, consider the tone of your content. Does it appropriately and uniquely represent your brand? Does it showcase your expertise and/or personality? One of the most marketable tools in your writer repertoire is your voice — use it smartly.
7. Launch
Celebrate! Toast to yourself, do a little dance, pat yourself on the back. You did it! Your website is up and running! You should be proud. We know that having something living, breathing out there on the web can be nerve-wracking. Don’t worry about your website not being perfect. The important thing is that it’s out there. Remember, you can always perfect and tweak over time. Most importantly, people can start finding you — and you have something you can improve on.
7 Mistakes to Avoid When Setting Up Your Writer Website
When you’re starting out with your website, it’s inevitable that you’ll face a learning curve. Some things just take time to learn. You will improve over time. But guess what? We want you to succeed — as soon as possible. So we’re giving you some inside knowledge: a list of thou-shalt-nots when setting up your freelance writer website. Avoid these major whoopsies, and you’ll be one step ahead in attracting quality writing jobs.
1. Bad Visuals
Let’s talk a little science. Did you know 90% of the information processed by the brain is visual? What’s more, 80% of people remember what they see (compared to 10% of what they hear and 20% of what they read). Lastly, know that visuals help grow traffic — content creators who feature visual content grow traffic 12 times faster than those who don’t. Not having visuals as part of your freelance writer website is a BIG no-no. But even more, having bad visuals can torpedo your chance at building a successful freelance writing business. Judgments of a company’s credibility are 75% based on the company’s website design, so take seriously the first impression you’re making with your visuals. Your visuals should reflect the quality of the work you offer, proving to potential clients that you can be trusted with their money. To benefit from the traffic-building and engaging powers of excellent visuals, select quality images, build a robust visual structure, and remember: white space is good space.
2. CTA Issues
When visitors come to your website, you want them to do something. But if you don’t ask them to do anything, they will click away, and you won’t get any business. Not ideal. Even if you have kick-butt writing skills and excellent website design, having confusing, conflicting, or nonexistent CTAs (70% of small biz websites lack a CTA) will damage your chances of growing your business.
So think: what do you need visitors to do to get writing gigs for your business? Whether it’s subscribing to an email list, filling out a contact form, or viewing your portfolio of work, make sure that your CTA is visible, clear, and focused. Elna of Innovative Ink has a clear CTA front and center — visitors know just what to do.
3. Sloppy Formatting
You’re not just a freelancer — you are a brand. As such, your potential clients expect a level of professionalism from you, and they need to see it the minute they click onto your site. Along with clear navigation, focused visual structure, and a frictionless contact funnel, your website needs to be fine-tuned, sleek, and polished. Even as a freelancer, an entrepreneurial free spirit, you need to channel those suit-and-tie vibes on your website to gain the trust of potential clients. No sloppy formatting, no error-filled copy, no overly-casual design. Concern yourself with the details. If you want people to trust you with their dollars, you need to be professional. Not only does meticulous formatting help your site design make a killer first impression (remember the eye-opening stats about visuals?), but it helps people view you as a trustworthy business.
Related: How to Create a Brand Style Guide for Your Website
4. TMI (Too Much Information)
Don’t get us wrong; it’s great to be personable and relatable. A critical part of your brand’s success is your likability. You want to be a person to visitors and potential clients, not just a robot writer behind a screen. But your website is not your online diary. Refrain from sharing too much personal info or content irrelevant to your field. Focus your content and be strategic about what you choose to share, keeping it all in the service of building your business and earning clients.
5. No Target Audience
You have a brand-spankin’-new freelance writer website and are ready to bring in traffic and, ideally, new business. But who are you trying to reach through your website? What kinds of people are you looking to attract? In simple terms: who is your target audience? Your success is hugely determined by how you focus your efforts in building a business. If you cast too wide a net, you won’t be able to effectively target the high-quality clients that you want. So before you start seeking to build traffic, identify your target.
6. Weak Copy
You’re a writer. Skilled wordsmithing is your talent, your money-making tool, and your passion. That being said, every aspect of your website should reflect your abilities as a writer. Weak, lackluster copy will not earn you clients, build trust, or engage visitors. In fact, it will send potential clients to your competitors. Take special, even meticulous care in making sure that your copy is strong, engaging, and polished. Whether you’re writing blog posts, articles, or landing page copy, don’t just wing it — write and rewrite, seek a second pair of eyes for outside observation, and edit, edit, edit. The strength of your copy will make or break your business.
7. Infrequent Updates
Reality check: creating a money-making freelance writer website isn’t a one-and-done affair. Just like software needs regular updates, so does your website. Not only do periodic refreshes help you out SEO-wise, but they keep things relevant and professional. Update blog content, test plugins, solicit feedback, and use site analytics frequently to adjust how your site operates for maximum UX.
Know that you won’t always get things right the first time — continually look to improve all aspects of your website.
Related: The Complete Guide to Cleaning Up Your WordPress Website
Handy Resources for Starting a Writer Website
Don’t worry — we’re not going to just throw you out to the web’s wolves without a few more top-tier tools for your burgeoning freelance writer website. Here, we offer you a well-curated roundup, a well-stocked toolbox of handy virtual resources destined to help you reach your goals.
Web Hosting
We know we’ve mentioned this before, but a good web hosting provider can make all the difference for the success of your freelance writing business. It’s true. Not only can a reliable hosting provider make creating content easy, but it can make the management of your website a snap, leaving you to focus on the most crucial aspects of running your writing business. With DreamHost Shared Hosting plans, we offer you those benefits and more — including 24/7 support, high-performance tech, and budget-friendly options. Choosing a hosting provider is one of the first choices you’ll make on your journey — make it a smart choice with DreamHost.
Logo
Like we’ve said, your freelance writing business is just that: a business. And most companies out there are easily identified by a unique marker — their logo. Think about any famous company: Nike, Apple, McDonald’s — you can quickly picture their logo just by seeing the name, right? Or you’d be able to pick the company out easily if you just saw the logo’s telltale visual? Having your own logo is an integral part of establishing and building your brand. It’s essential for consistency, visibility, and growth. But don’t worry; making one that your visitors will love isn’t hard to do.
Brand Colors
In addition to your logo, you should establish a color palette that is unique to your brand. This will help your website and materials feel cohesive and professional, and it can even help you grow your business by highlighting relevant sections or CTAs with specific colors. Picking your brand colors is as easy as 1-2-3, but remember to be intentional about your personal branding choices.
Stock Images
We’ve already emphasized how significant visuals are for helping bring in traffic and engage visitors. So where do you get professional-looking images and other visuals? Try Pexels or Unsplash for high-res, royalty-free photos, or find a photographer to take some for you. If you’re ambitious, follow a DIY at-home photography guide to snap your own for cheap. And remember, copyright rules rule, so keep things legal. Give credit where necessary and don’t steal.
Photo Editing
You don’t have to be a Photoshop master to give your images that extra oomph. Crop, adjust, and enhance your photos to improve composition and make your website visuals a powerful tool in earning your business. Try a few simple photo editing tricks in the software of your choice.
Icons
As another type of visual, icons or symbols on your website can make it easy for visitors to find exactly what they’re looking for — whether it be your social media pages, your portfolio, or your contact form — without even having to navigate menus or copy. They’re a universal language! Get great-looking icons on sites like The Noun Project, Creative Market, or, for free, on Flat Icon.
Design
Your freelance writer website should have its own unique feel. After all, you are your unique brand. Your design incorporates not only your layout but also the style of your copy, visuals, and navigation.
A well-designed website is carefully thought out for ultimate functionality and aesthetics, and we’ve got the guide to help you make it look snazzy. If you don’t have an eye for design, DreamHost can help. We’ve partnered with the experts at RipeConcepts, a leading web design firm, to offer professional web design services to our users.
Professional Website Design Made Easy
Make your site stand out with a professional design from our partners at RipeConcepts. Packages start at $299. Get a Free Consultation
The Final Word
Now, we’ll reveal the results of our crystal ball reading: we see a bright (and prolific) freelance writing career in your future! Getting quality writing gigs may take some website-building legwork, but with a well-built site, you’re well on your way to new clients and a growing portfolio. Because your success is our success, DreamHost offers you the perfect beginning-of-the-journey hosting packages to get you on your feet. Check out our comprehensive Shared Hosting plans to start taking your career to the next level with a freelance writing website.
The post How to Create a Freelance Writer Website That Actually Gets You Writing Gigs appeared first on Website Guides, Tips and Knowledge.

The Quantum Menace

CloudFlare Blog -

Over the last few decades, the word ‘quantum’ has become increasingly popular. It is common to find articles, reports, and many people interested in quantum mechanics and the new capabilities and improvements it brings to the scientific community. This topic not only concerns physics, since the development of quantum mechanics impacts several other fields, such as chemistry, economics, artificial intelligence, operations research, and, undoubtedly, cryptography.
This post begins a trio of blog posts describing the impact of quantum computing on cryptography, and how to use stronger algorithms resistant to the power of quantum computing.
- This post introduces quantum computing, describes the main aspects of this new computing model and its devastating impact on security standards, and summarizes some approaches to securing information using quantum-resistant algorithms.
- Due to the relevance of this matter, our second post presents our experiments on a large-scale deployment of quantum-resistant algorithms.
- Our third post introduces CIRCL, an open-source Go library featuring optimized implementations of quantum-resistant algorithms and elliptic curve-based primitives.
All of this is part of Cloudflare’s Crypto Week 2019, so fasten your seatbelt and get ready to make a quantum leap.
What is Quantum Computing?
Back in 1981, Richard Feynman raised the question of what kind of computer could be used to simulate physics. Some physical phenomena, such as quantum mechanical ones, cannot be simulated efficiently using a classical computer, so he conjectured the existence of a computer model that behaves under quantum mechanics rules, which opened a field of research now called quantum computing. To understand the basics of quantum computing, it is necessary to recall how classical computers work, and from that, shine a spotlight on the differences between these computational models.
Fellows of the Royal Society: John Maynard Smith, Richard Feynman & Alan Turing
In 1936, Alan Turing and Emil Post independently described models that gave rise to the foundation of the computing model known as the Post-Turing machine, which describes how computers work and allowed further determination of limits for solving problems. In this model, the units of information are bits, which store one of two possible values, usually denoted by 0 and 1. A computing machine contains a set of bits and performs operations that modify the values of the bits, also known as the machine’s state. Thus, a machine with N bits can be in one of 2ᴺ possible states. With this in mind, the Post-Turing computing model can be abstractly described as a state machine, in which running a program translates into machine transitions along the set of states.
A paper David Deutsch published in 1985 describes a computing model that extends the capabilities of a Turing machine based on the theory of quantum mechanics. This computing model introduces several advantages over the Turing model for processing large volumes of information. It also presents unique properties that deviate from the way we understand classical computing. Most of these properties come from the nature of quantum mechanics. We’re going to dive into these details before approaching the concept of quantum computing.
Superposition
One of the most exciting properties of quantum computing that provides an advantage over the classical computing model is superposition.
In physics, superposition is the ability to produce valid states from the addition or superposition of several other states that are part of a system. Applying these concepts to computing information, it means that there is a system in which it is possible to generate a machine state that represents a (weighted) sum of the states 0 and 1; in this case, the term weighted means that the state keeps track of “the quantity of” 0 and 1 present in the state. In the classical computation model, one bit can only store either the state 0 or 1, not both; even using two bits, they cannot represent the weighted sum of these states. Hence, to make a distinction from the basic states, quantum computing uses the concept of a quantum bit (qubit), a unit of information denoting the superposition of two states. This is a cornerstone concept of quantum computing, as it provides a way of tracking more than a single state per unit of information, making it a powerful tool for processing information.
Classical computing – A bit stores only one of two possible states: ON or OFF.
Quantum computing – A qubit stores a combination of two or more states.
So, a qubit represents the sum of two parts: the 0 or 1 state plus the amount each 0/1 state contributes to produce the state of the qubit. In Dirac notation, a qubit is written as \( | \Psi \rangle = A | 0 \rangle + B | 1 \rangle \), an explicit sum indicating that the qubit represents the superposition of the states 0 and 1, where \(A\) and \(B\) are complex numbers known as the amplitudes of the states 0 and 1, respectively. The basic states are themselves represented as \( | 0 \rangle = 1 | 0 \rangle + 0 | 1 \rangle \) and \( | 1 \rangle = 0 | 0 \rangle + 1 | 1 \rangle \), where the left-hand side of each equation is the abbreviated notation for these special states.
Measurement
In a classical computer, the values 0 and 1 are implemented as digital signals. Measuring the current of the signal automatically reveals the status of a bit. This means that at any moment the value of the bit can be observed or measured. The state of a qubit, by contrast, is maintained in a physically closed system, meaning that properties of the system, such as superposition, require no interaction with the environment; any interaction, like performing a measurement, can disturb the state of the qubit. Measuring a qubit is a probabilistic experiment. The result is a bit of information that depends on the state of the qubit. The bit, obtained by measuring \( | \Psi \rangle = A | 0 \rangle + B | 1 \rangle \), will be equal to 0 with probability \( |A|^2 \), and equal to 1 with probability \( |B|^2 \), where \( |x| \) represents the absolute value of \(x\). From statistics, we know that the sum of the probabilities of all possible events is always equal to 1, so it must hold that \( |A|^2 + |B|^2 = 1 \).
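To make the amplitude-to-probability rule concrete, here is a minimal Go sketch (our own toy classical simulation, not quantum hardware or any quantum library) that stores a qubit as two complex amplitudes and samples measurements according to \( |A|^2 \) and \( |B|^2 \):

```go
package main

import (
	"fmt"
	"math"
	"math/cmplx"
	"math/rand"
)

// A toy qubit |ψ⟩ = A|0⟩ + B|1⟩ stored as two complex amplitudes.
type Qubit struct{ A, B complex128 }

// Measure simulates a measurement: it returns 0 with probability |A|²
// and 1 with probability |B|², collapsing the superposition.
func (q Qubit) Measure() int {
	p0 := math.Pow(cmplx.Abs(q.A), 2)
	if rand.Float64() < p0 {
		return 0
	}
	return 1
}

func main() {
	// The "fair coin" qubit: A = B = 1/√2, so |A|² + |B|² = 1/2 + 1/2 = 1.
	q := Qubit{complex(1/math.Sqrt2, 0), complex(1/math.Sqrt2, 0)}

	counts := [2]int{}
	for i := 0; i < 1000; i++ {
		counts[q.Measure()]++
	}
	fmt.Println("0s and 1s out of 1000 measurements:", counts) // roughly 500/500
}
```

Out of 1,000 simulated measurements, roughly half return 0 and half return 1, exactly what these amplitudes predict.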
The normalization equation \( |A|^2 + |B|^2 = 1 \) motivates representing qubits as the points of a circle of radius one, and more generally, as the points on the surface of a sphere of radius one, which is known as the Bloch sphere.
The qubit state is analogous to a point on a unitary circle.
The Bloch Sphere by Smite-Meister - Own work, CC BY-SA 3.0.
Let’s break it down: if you measure a qubit, you also destroy its superposition, resulting in a state collapse where it assumes one of the basic states, providing your final result. Another way to think about superposition and measurement is through the coin tossing experiment. Toss a coin in the air and you give people a random choice between two options: heads or tails. Now, don’t focus on the randomness of the experiment; instead, note that while the coin is rotating in the air, participants are uncertain which side will face up when the coin lands. However, once the coin stops with a random side facing up, participants are 100% certain of the status. How does it relate? Qubits are similar to the participants. When a qubit is in a superposition of states, it is tracking the probability of heads or tails, which is the participants’ uncertainty quotient while the coin is in the air. However, once you start to measure the qubit to retrieve its value, the superposition vanishes, and a classical bit value sticks: heads or tails. Measurement is that moment when the coin is static with only one side facing up.
A fair coin is a coin that is not biased. Each side (assume 0=heads and 1=tails) of a fair coin has the same probability of sticking after a measurement is performed. The qubit \( \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle \) describes the probabilities of tossing a fair coin. Note that squaring either of the amplitudes results in ½, indicating that there is a 50% chance of either heads or tails sticking. It would be interesting to be able to load a fair coin at will while it is in the air. Although this is the magic of a professional illusionist, this task, in fact, can be achieved by performing operations over qubits. So, get ready to become the next quantum magician!
Quantum Gates
A logic gate represents a Boolean function operating over a set of inputs (on the left) and producing an output (on the right). A logic circuit is a set of connected logic gates, a convenient way to represent bit operations. The NOT gate is a single-bit operation that flips the value of the input bit. Other gates include AND, OR, XOR, and NAND. A set of gates is universal if it can generate all other gates. For example, the NOR and NAND gates are each universal, since any circuit can be constructed using only one of them.
Quantum computing also admits a description using circuits. Quantum gates operate over qubits, modifying the superposition of the states. For example, there is a quantum gate analogous to the NOT gate, the X gate. The X quantum gate interchanges the amplitudes of the states of the input qubit. The Z quantum gate flips the sign of the amplitude of state 1. Another quantum gate is the Hadamard gate, which generates an equiprobable superposition of the basic states. Using our coin tossing analogy, the Hadamard gate has the action of tossing a fair coin into the air. In quantum circuits, a triangle represents measuring a qubit, and the resulting bit is indicated by a double wire. Other gates, such as the CNOT, Pauli, Toffoli, and Deutsch gates, are slightly more advanced.
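Since a single-qubit gate is just a 2x2 unitary matrix acting on the vector of amplitudes, the gates above are easy to sketch in code. The following toy Go example (an illustration under the same simplified model as before, not any quantum SDK) implements X, Z, and Hadamard, and shows that applying H twice returns the original state:

```go
package main

import (
	"fmt"
	"math"
)

// A single-qubit state |ψ⟩ = A|0⟩ + B|1⟩ stored as two complex amplitudes.
type Qubit [2]complex128

// Gate is a 2x2 unitary matrix acting on a qubit.
type Gate [2][2]complex128

// Apply performs the matrix-vector product Gate · Qubit.
func (g Gate) Apply(q Qubit) Qubit {
	return Qubit{
		g[0][0]*q[0] + g[0][1]*q[1],
		g[1][0]*q[0] + g[1][1]*q[1],
	}
}

var (
	// X swaps the amplitudes of |0⟩ and |1⟩ (the quantum NOT).
	X = Gate{{0, 1}, {1, 0}}
	// Z flips the sign of the |1⟩ amplitude.
	Z = Gate{{1, 0}, {0, -1}}
	// H produces an equiprobable superposition from a basis state.
	H = Gate{
		{complex(1/math.Sqrt2, 0), complex(1/math.Sqrt2, 0)},
		{complex(1/math.Sqrt2, 0), complex(-1/math.Sqrt2, 0)},
	}
)

func main() {
	zero := Qubit{1, 0}                            // |0⟩
	fmt.Println("X|0⟩  =", X.Apply(zero))          // |1⟩
	fmt.Println("H|0⟩  =", H.Apply(zero))          // the "coin toss": (|0⟩+|1⟩)/√2
	fmt.Println("HH|0⟩ =", H.Apply(H.Apply(zero))) // back to |0⟩: H undoes itself
}
```

Applying H to \( |0\rangle \) produces the fair-coin superposition from the previous section; applying it a second time rolls the state back, a first taste of the reversibility discussed below.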
Quirk, the open-source playground, is a fun sandbox where you can construct quantum circuits using all of these gates.
Reversibility
An operation is reversible if there exists another operation that rolls back the output state to the initial state. For instance, a NOT gate is reversible, since applying a second NOT gate recovers the initial input. In contrast, the AND, OR, and NAND gates are not reversible. This means that some classical computations cannot be reversed by a classical circuit that uses only the output bits. However, if you insert additional bits of information, the operation can be reversed. Quantum computing mainly focuses on reversible computations, because there’s always a way to construct a reversible circuit to perform an irreversible computation. The reversible version of a circuit could require the use of ancillary qubits as auxiliary (but not temporary) variables. Due to the nature of composed systems, it is possible for these ancillas (extra qubits) to become correlated with the qubits of the main computation. This correlation makes it infeasible to reuse ancillas, since any modification could have side effects on the operation of the reversible circuit. This is like memory assigned to a process by the operating system: the process cannot use memory from other processes or it could cause memory corruption, and processes cannot release their assigned memory to other processes. You could use garbage collection mechanisms for ancillas, but performing reversible computations increases your qubit budget.
Composed Systems
In quantum mechanics, a single qubit can be described as a single closed system: a system that has no interaction with the environment or with other qubits. Letting qubits interact with others leads to a composed system where more states are represented. The state of a 2-qubit composite system is denoted as \( A_0|00\rangle+A_1|01\rangle+A_2|10\rangle+A_3|11\rangle \), where the \( A_i \) values correspond to the amplitudes of the four basic states 00, 01, 10, and 11. For example, the state \( \tfrac{1}{2}|00\rangle+\tfrac{1}{2}|01\rangle+\tfrac{1}{2}|10\rangle+\tfrac{1}{2}|11\rangle \) represents the uniform superposition of these basic states, each having the same probability of being obtained after measuring the two qubits. In the classical case, the state of N bits represents only one of 2ᴺ possible states, whereas a composed state of N qubits represents all 2ᴺ states, but in superposition. This is one big difference between these computing models, as it carries two important properties: entanglement and quantum parallelism.
Entanglement
According to the theory behind quantum mechanics, some composed states can be described through the description of their constituents. However, there are composed states where no such description is possible, known as entangled states.
Bell states are examples of entangled qubits.
The entanglement phenomenon was pointed out by Einstein, Podolsky, and Rosen in the so-called EPR paradox. Suppose there is a composed system of two entangled qubits, in which performing a measurement on one qubit affects the measurement of the second. This interference occurs even when the qubits are separated by a long distance, which seems to mean that some information transfer happens faster than the speed of light. This is how quantum entanglement appears to conflict with the theory of relativity, where information cannot travel faster than the speed of light. The EPR paradox motivated further investigation aimed at deriving new interpretations of quantum mechanics and resolving the paradox.
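What this correlation means operationally can be seen in a toy simulation. The Go sketch below (a classical illustration of the statistics only; classical code cannot reproduce entanglement at a distance) repeatedly measures the Bell state \( \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \) by sampling from its four amplitudes:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

func main() {
	s := 1 / math.Sqrt2
	// Amplitudes of the 2-qubit basis states |00⟩, |01⟩, |10⟩, |11⟩.
	// The Bell state (|00⟩+|11⟩)/√2 is entangled: it cannot be written
	// as a product of two independent single-qubit states.
	bell := []complex128{complex(s, 0), 0, 0, complex(s, 0)}

	// Simulate measuring both qubits: sample an outcome with
	// probability |amplitude|².
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		r, acc := rand.Float64(), 0.0
		for idx, a := range bell {
			acc += real(a)*real(a) + imag(a)*imag(a)
			if r < acc {
				counts[fmt.Sprintf("%02b", idx)]++
				break
			}
		}
	}
	// Only "00" and "11" ever occur: the two measurements are
	// perfectly correlated.
	fmt.Println(counts)
}
```

The outcomes 01 and 10 never appear: once one qubit is read, the other’s value is fixed, which is exactly the correlation at the heart of the EPR paradox.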
Quantum entanglement can help to transfer information at a distance by following a communication protocol. The following protocol examples rely on the fact that Alice and Bob each possess one of a pair of entangled qubits:
- The superdense coding protocol allows Alice to communicate a 2-bit message \(m_0,m_1\) to Bob using a quantum communication channel, for example, using fiber optics to transmit photons. All Alice has to do is operate on her qubit according to the value of the message and send the resulting qubit to Bob. Once Bob receives the qubit, he measures both qubits, noting that the collapsed 2-bit state corresponds to Alice’s message.
Superdense coding protocol.
- The quantum teleportation protocol allows Alice to transmit a qubit to Bob without using a quantum communication channel. Alice measures the qubit she wants to send together with her entangled qubit, resulting in two bits. Alice sends these bits to Bob, who operates on his entangled qubit according to the bits received and notes that the resulting state matches the original state of Alice’s qubit.
Quantum teleportation protocol.
Quantum Parallelism
Composed systems of qubits allow representation of more information per composed state. Note that operating on a composed state of N qubits is equivalent to operating over a set of 2ᴺ states in superposition. This procedure is quantum parallelism. In this setting, operating over a large volume of information gives the intuition of performing operations in parallel, like in the parallel computing paradigm; one big caveat is that superposition is not equivalent to parallelism. Remember that a composed state is a superposition of several states, so a computation that takes a composed state of inputs will result in a composed state of outputs. The main divergence between classical and quantum parallelism is that quantum parallelism can obtain only one of the processed outputs. Observe that a measurement of the output of a composed state causes the qubits to collapse to only one of the outputs, making it unattainable to calculate all computed values. Although quantum parallelism does not match precisely with the traditional notion of parallel computing, you can still leverage this computational power to get related information.
Deutsch-Jozsa Problem: Assume \(F\) is a function that takes as input N bits, outputs one bit, and is either constant (always outputs the same value for all inputs) or balanced (outputs 0 for half of the inputs and 1 for the other half). The problem is to determine if \(F\) is constant or balanced.
The quantum algorithm that solves the Deutsch-Jozsa problem uses quantum parallelism. First, N qubits are initialized in a superposition of 2ᴺ states. Then, in a single shot, it evaluates \(F\) for all of these states (some normalization factors are omitted here for simplicity). The result of applying \(F\) appears in the exponent of the amplitude of the all-zero state. Note that this amplitude is either +1 or -1 only when \(F\) is constant. If the result of measuring the N qubits is an all-zeros bitstring, then there is a 100% certainty that \(F\) is constant. Any other result indicates that \(F\) is balanced. A deterministic classical algorithm solves this problem using \( 2^{N-1}+1 \) evaluations of \(F\) in the worst case. Meanwhile, the quantum algorithm requires only one evaluation.
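The amplitude bookkeeping can be traced classically. After the Hadamard–oracle–Hadamard sequence, the amplitude of the all-zero state equals \( \tfrac{1}{2^N}\sum_x (-1)^{F(x)} \); the Go sketch below (our own illustration) computes that sum directly. Note that the classical simulation must evaluate \(F\) for all 2ᴺ inputs, which is precisely the work the quantum circuit performs in one shot:

```go
package main

import "fmt"

// allZeroAmplitude returns the amplitude of the all-zero state after the
// Deutsch-Jozsa circuit: H^N |0...0⟩, then the phase oracle (-1)^F(x),
// then H^N again, which leaves amplitude (1/2^N) · Σ_x (-1)^F(x) on |0...0⟩.
func allZeroAmplitude(n uint, f func(x uint64) int) float64 {
	total := uint64(1) << n
	sum := 0.0
	for x := uint64(0); x < total; x++ {
		if f(x) == 0 {
			sum++
		} else {
			sum--
		}
	}
	return sum / float64(total)
}

func main() {
	constant := func(x uint64) int { return 1 }          // always outputs 1
	balanced := func(x uint64) int { return int(x & 1) } // 0 for half of the inputs

	// |amplitude|² is the probability of measuring all zeros:
	// 1 for a constant F, 0 for a balanced F.
	a := allZeroAmplitude(8, constant)
	b := allZeroAmplitude(8, balanced)
	fmt.Printf("constant: P(0...0) = %.1f, balanced: P(0...0) = %.1f\n", a*a, b*b)
}
```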
The Deutsch-Jozsa problem exemplifies the exponential advantage of a quantum algorithm over classical algorithms.
Quantum Computers
The theory of quantum computing is supported by investigations in the field of quantum mechanics. However, constructing a quantum machine requires a physical system that allows representing qubits and manipulating states in a reliable and precise way. The DiVincenzo criteria require that a physical implementation of a quantum computer must:
- Be scalable and have well-defined qubits.
- Be able to initialize qubits to a state.
- Have long decoherence times in order to apply quantum error-correcting codes. Decoherence of a qubit happens when the qubit interacts with the environment, for example, when a measurement is performed.
- Use a universal set of quantum gates.
- Be able to measure single qubits without modifying others.
Physical implementations of quantum computers face huge engineering obstacles to satisfy these requirements. The most important challenge is to guarantee low error rates during computation and measurement. Lowering these rates requires techniques for error correction, which add a significant number of qubits specialized for this task. For this reason, the number of qubits of a quantum computer should not be compared directly with the number of bits of a classical system. In a classical computer, all of the bits are effective for performing a calculation, whereas the number of qubits of a quantum computer is the sum of the effective qubits (those used to make calculations), plus the ancillas (used for reversible computations), plus the error correction qubits. Current implementations of quantum computers only partially satisfy the DiVincenzo criteria. Quantum adiabatic computers fit in this category, since they do not operate using quantum gates. For this reason, they are not considered to be universal quantum computers.
Quantum Adiabatic Computers
A recurrent problem in optimization is to find the global minimum of an objective function. For example, a route-traffic control system can be modeled as a function that reduces the cost of routing to a minimum. Simulated annealing is a heuristic procedure that provides a good solution to these types of problems. Simulated annealing finds the solution state by slowly introducing changes (the adiabatic process) to the variables that govern the system. Quantum annealing is the analogous quantum version of simulated annealing. The qubits are initialized into a superposition of states representing all possible solutions to the problem. Here, the Hamiltonian operator is used, which is the sum of the potential and kinetic energies of the system. Hence, the objective function is encoded using this operator, which describes the evolution of the system over time. Then, if the system is allowed to evolve very slowly, it will eventually land on a final state representing the optimal value of the objective function. Currently, there exist adiabatic computers on the market, such as the D-Wave systems, featuring hundreds of qubits; however, their capabilities are somewhat limited to problems that can be modeled as optimization problems. The limits of adiabatic computers were studied by van Dam et al., showing that despite solving local search problems and even some instances of the max-SAT problem, there exist harder search problems this computing model cannot efficiently solve.
Nuclear Magnetic Resonance
Nuclear Magnetic Resonance (NMR) is a physical phenomenon that can be used to represent qubits.
The spins of the atomic nuclei of molecules are perturbed by an oscillating magnetic field. A 2001 report describes a successful implementation of Shor’s algorithm in a 7-qubit NMR quantum computer, an iconic result since this computer was able to factor the number 15.
Nucleus spinning induced by a magnetic field, Darekk2 - CC BY-SA 3.0
NMR Spectrometer by UCSB
Superconducting Quantum Computers
One way to physically construct qubits is based on superconductors, materials that conduct electric current with zero resistance when exposed to temperatures close to absolute zero. The Josephson effect, in which current flows across the junction of two superconductors separated by a non-superconducting material, is used to physically implement a superposition of states.
A Josephson junction - Public Domain
When a magnetic flux is applied to this junction, the current flows continuously in one direction. But, depending on the quantity of magnetic flux applied, the current can also flow in the opposite direction. There exists a quantum superposition of currents going both clockwise and counterclockwise, leading to a physical implementation of a qubit called the flux qubit. The complete device is known as a Superconducting Quantum Interference Device (SQUID) and can be easily coupled, scaling up the number of qubits. Thus, SQUIDs are like the transistors of a quantum computer.
SQUID: Superconducting Quantum Interference Device. Image by Kurzweil Network and original source.
Examples of superconducting computers include:
- D-Wave’s adiabatic computers, which perform quantum annealing for solving diverse optimization problems.
- Google’s 72-qubit computer, recently announced along with several engineering issues still to be solved, such as achieving lower temperatures.
- IBM’s IBM Q Tokyo, a 20-qubit computer, and the IBM Q Experience, a cloud-based system for exploring quantum circuits.
D-Wave Cooling System by D-Wave Systems Inc.
IBM Q System
IBM Q System One cryostat at CES.
The Imminent Threat of Quantum Algorithms
The quantum zoo website tracks problems that can be solved using quantum algorithms. As of mid-2018, more than 60 problems appear on this list, targeting diverse applications in the areas of number theory, approximation, simulation, and searching. As terrific as it sounds, some of the problems that quantum computing solves easily concern the security of information.
Grover’s Algorithm
Tales of a quantum detective (fragment). A pair of detectives have the mission of finding the one culprit in a group of suspects who always respond to this question honestly: “are you guilty?”
Detective C follows a classic interrogation method, interviewing every person one at a time until finding the first one who confesses. Detective Q proceeds in a different way. First, he gathers all the suspects in a completely dark room, and then asks them: are you guilty? A steady sound comes from the room saying “No!” while, at the same time, a single voice mixed in the air responds “Yes!” Since everybody is submerged in darkness, the detective cannot see the culprit. However, detective Q knows that as the interrogation advances, the culprit will grow desperate and start to speak louder and louder, and so he continues asking the same question. Suddenly, detective Q turns on the lights, enters the room, and captures the culprit. How did he do it?
The task of the detectives can be modeled as a search problem: given a Boolean function \( f \) that takes N bits and produces one bit, find the unique input \(x\) such that \( f(x)=1 \).
A classical algorithm (detective C) finds \(x\) using \(2^N-1\) function evaluations in the worst case. However, the quantum algorithm devised by Grover, corresponding to detective Q, searches quadratically faster, using around \(2^{N/2}\) function evaluations.
The key intuition of Grover’s algorithm is increasing the amplitude of the state that represents the solution while maintaining the other states at a lower amplitude. In this way, a system of N qubits, which is a superposition of 2ᴺ possible inputs, can be continuously updated using this intuition until the solution state has an amplitude close to 1. Hence, after updating the qubits many times, there will be a high probability of measuring the solution state.
Initially, a superposition of 2ᴺ states (horizontal axis) is prepared; each state has an amplitude (vertical axis) close to 0. The qubits are updated so that the amplitude of the solution state increases more than the amplitude of the other states. By repeating the update step, the amplitude of the solution state gets closer to 1, which boosts the probability of collapsing to the solution state after measuring.
Image taken from D. Bernstein’s slides.
Grover’s Algorithm (pseudo-code):
1. Prepare an N-qubit register \(|x\rangle\) as a uniform superposition of 2ᴺ states.
2. Update the qubits by performing the core operation $$ |x\rangle \mapsto (-1)^{f(x)} |x\rangle $$ The result of \( f(x) \) flips only the amplitude of the searched state.
3. Negate the amplitudes of the N-qubit register about their average.
4. Repeat Steps 2 and 3 \( (\tfrac{\pi}{4}) 2^{N/2} \) times.
5. Measure the register and return the bits obtained.
Alternatively, the second step can be better understood as a conditional statement:
IF f(x) = 1 THEN
    Negate the amplitude of the solution state.
ELSE
    /* nothing */
ENDIF
Grover’s algorithm considers the function \(f\) a black box, so with slight modifications the algorithm can also be used to find preimages and collisions of the function using asymptotically fewer operations than a brute-force algorithm. For instance, a quantum computer running Grover’s algorithm could find a preimage of SHA256 performing only 2¹²⁸ evaluations of a reversible circuit of SHA256. The natural protection for hash functions is to double the output size. More generally, most symmetric key encryption algorithms will survive the power of Grover’s algorithm if the size of their keys is doubled.
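The amplitude-amplification loop above is simple enough to simulate classically. This Go sketch (a toy simulation with real-valued amplitudes and an arbitrarily chosen target; real executions would run on quantum hardware) performs steps 2 and 3 the prescribed number of times:

```go
package main

import (
	"fmt"
	"math"
)

// A toy state-vector simulation of Grover's update for N = 8 qubits,
// searching for the single x with f(x) = 1 (here, hypothetically, x = 42).
func main() {
	const n = 8
	size := 1 << n
	target := 42

	// Step 1: uniform superposition of all 2^N states.
	amp := make([]float64, size)
	for i := range amp {
		amp[i] = 1 / math.Sqrt(float64(size))
	}

	rounds := int(math.Pi / 4 * math.Sqrt(float64(size)))
	for r := 0; r < rounds; r++ {
		// Step 2: flip the amplitude of the solution state.
		amp[target] = -amp[target]

		// Step 3: negate every amplitude about the average.
		avg := 0.0
		for _, a := range amp {
			avg += a
		}
		avg /= float64(size)
		for i := range amp {
			amp[i] = 2*avg - amp[i]
		}
	}

	// The probability of measuring the solution is now close to 1.
	fmt.Printf("after %d rounds, P(target) = %.4f\n", rounds, amp[target]*amp[target])
}
```

After \( (\tfrac{\pi}{4})\sqrt{256} \approx 12 \) rounds on this 8-qubit register, the probability of measuring the target is very close to 1, compared with 1/256 before the loop.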
The scenario for public-key algorithms, however, is devastating in the face of Peter Shor’s algorithm.
Shor’s Algorithm
Multiplying integers is an easy task to accomplish; however, finding the factors that compose an integer is difficult. The integer factorization problem is to decompose a given integer number into its prime factors. For example, 42 has three factors, 2, 3, and 7, since \( 2\times 3\times 7 = 42 \). As the numbers get bigger, integer factorization becomes more difficult to solve, and the hardest instances of integer factorization are those where the factors are two different large primes. Thus, given an integer number \(N\), finding primes \(p\) and \(q\) such that \( N = p \times q \) is known as integer splitting. Factoring integers is like cutting wood, and the specific task of splitting integers is analogous to using an axe to split the log in two parts. There exist many different tools (algorithms) for accomplishing each task. For integer factorization, trial division, Pollard’s rho method, and the elliptic curve method are common algorithms. Fermat’s method and the quadratic and rational sieves lead to the (general) number field sieve (NFS) algorithm for integer splitting. The latter relies on finding a congruence of squares, that is, writing \(N\) as a difference of squares such that $$ N = x^2 - y^2 = (x+y)\times(x-y) $$ The complexity of NFS is mainly determined by the number of pairs \((x, y)\) that must be examined before finding a pair that factors \(N\). The NFS algorithm has subexponential complexity in the size of \(N\), meaning that the time required for splitting an integer increases significantly as the size of \(N\) grows. For large integers, the problem becomes intractable for classical computers.
The Axe of Thor Shor
Olaf Tryggvason - Public Domain
The many different guesses of the NFS algorithm are analogous to hitting the log with a dulled axe; after subexponentially many tries, the log is split in half. Using a sharper axe, however, allows you to split the log faster. This sharpened axe is the quantum algorithm proposed by Shor in 1994.
Let \(x\) be an integer less than \(N\) with order \(k\), that is, \(k\) is the smallest positive integer such that \( x^k \equiv 1 \pmod{N} \). Then, if \(k\) is even, there exists an integer \(q\) so that \(qN\) can be factored as $$ qN = x^k - 1 = (x^{k/2} - 1)(x^{k/2} + 1) $$ This approach has some issues: for example, the factorization could correspond to \(q\), not \(N\), and the order of \(x\) is unknown. Here is where Shor’s algorithm enters the picture, finding the order of \(x\).
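Everything surrounding the order finding is classical arithmetic. The Go sketch below (a toy illustration in which the order \(k\) is supplied by hand, standing in for the quantum step) performs the reduction from order finding to factoring for the small example N = 21:

```go
package main

import (
	"fmt"
	"math/big"
)

// splitWithOrder recovers factors of N given the order k of x modulo N
// (the part Shor's algorithm computes with a quantum Fourier transform).
// If k is even, N divides x^k - 1 = (x^(k/2)-1)(x^(k/2)+1), so taking a
// gcd with either factor may split N.
func splitWithOrder(N, x, k int64) (int64, int64) {
	if k%2 != 0 {
		return 0, 0 // odd order: retry with a different x
	}
	// h = x^(k/2) mod N
	h := new(big.Int).Exp(big.NewInt(x), big.NewInt(k/2), big.NewInt(N))
	p := new(big.Int).GCD(nil, nil, new(big.Int).Sub(h, big.NewInt(1)), big.NewInt(N))
	q := new(big.Int).GCD(nil, nil, new(big.Int).Add(h, big.NewInt(1)), big.NewInt(N))
	return p.Int64(), q.Int64()
}

func main() {
	// Toy example: x = 2 has order 6 modulo N = 21, since 2^6 = 64 ≡ 1 (mod 21).
	p, q := splitWithOrder(21, 2, 6)
	fmt.Println(p, q) // 7 3: gcd(2^3-1, 21) = 7 and gcd(2^3+1, 21) = 3
}
```

In unlucky cases the gcds come out as 1 and \(N\); the algorithm then retries with a different \(x\).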
The internals of Shor’s algorithm rely on encoding the order \(k\) into a periodic function, so that its period can be obtained using the quantum version of the Fourier transform (QFT). The order of \(x\) can be found using a polynomial number of quantum evaluations of Shor’s algorithm. Therefore, splitting integers using this quantum approach has polynomial complexity in the size of \(N\). Shor’s algorithm carries strong implications for the security of the RSA encryption scheme, because its security relies on integer factorization. A large-enough quantum computer can efficiently break RSA for current instances.
Alternatively, one may resort to elliptic curves, used in cryptographic protocols like ECDSA or ECDH. Moreover, all TLS ciphersuites use a combination of elliptic curve groups, large prime groups, and RSA and DSA signatures. Unfortunately, these algorithms all succumb to Shor’s algorithm: it only takes a few modifications for Shor’s algorithm to solve the discrete logarithm problem on finite groups. This sounds like a catastrophic story where all of our encrypted data and privacy are no longer secure with the advent of a quantum computer, and in some sense this is true.
On one hand, it is a fact that the quantum computers constructed as of 2019 are not large enough to run, for instance, Shor’s algorithm for the RSA key sizes used in standard protocols. For example, a 2018 report shows experiments on the factorization of a 19-bit number using 94 qubits; the authors also estimate that 147,456 qubits would be needed to factor a 768-bit number. Hence, these numbers indicate that we are still far from breaking RSA.
What if we increase RSA key sizes to be resistant to quantum algorithms, just as for symmetric algorithms? Bernstein et al. estimated that RSA public keys would need to be as large as 1 terabyte to keep RSA secure even in the presence of quantum factoring algorithms. So, for public-key algorithms, increasing the size of keys does not help. A recent investigation by Gidney and Ekerå shows improvements that accelerate the evaluation of quantum factorization. In their report, the cost of factoring 2048-bit integers is estimated to take a few hours using a quantum machine of 20 million qubits, which is far from any current development. Something worth noting is that the number of qubits needed is two orders of magnitude smaller than the estimates given in previous works from this decade. Under these estimates, current encryption algorithms will remain secure for several more years; however, consider the following not-so-unrealistic situation.
Information currently encrypted with, for example, RSA, can be easily decrypted with a quantum computer in the future. Now, suppose that someone records encrypted information and stores it until a quantum computer is able to decrypt the ciphertexts. Although this could be as far as 20 years from now, the forward-secrecy principle is violated. A 20-year gap into the future is sometimes difficult to imagine, so let’s think backwards: what would happen if everything you did on the Internet at the end of the 1990s could be revealed 20 years later -- today? How does this impact the security of your personal information? What if the ciphertexts were company secrets or business deals? In 1999, most of us were concerned about the effects of the Y2K problem; now we’re facing Y2Q (years to quantum): the advent of quantum computers.
Post-Quantum Cryptography
Although the current capacity of physical implementations of quantum computers is far from being a real threat to secure communications, a transition to stronger problems for protecting information has already started. This wave emerged as post-quantum cryptography (PQC). The core idea of PQC is finding problems difficult enough that no quantum (or classical) algorithm can solve them. A recurrent question is: what does a problem that even a quantum computer cannot solve look like?
These so-called quantum-resistant algorithms rely on different hard mathematical assumptions, some of them as old as RSA, others more recently proposed. For example, the McEliece cryptosystem, formulated in the late 70s, relies on the hardness of decoding a linear code (in the sense of coding theory). The practical use of this cryptosystem didn’t become widespread since, with the passing of time, other cryptosystems superseded it in efficiency. Fortunately, the McEliece cryptosystem remains immune to Shor’s algorithm, giving it renewed relevance in the post-quantum era. Post-quantum cryptography presents several families of alternatives: lattice-based, hash-based, isogeny-based, code-based, and multivariate-based cryptography.
In 2017, NIST started an evaluation process that tracks possible alternatives for next-generation secure algorithms. From a practical perspective, all candidates present different trade-offs in implementation and usage. The time and space requirements are diverse; at this moment, it’s too early to say which will succeed RSA and elliptic curves. An initial round collected around 70 algorithms for deploying key encapsulation mechanisms and digital signatures. As of early 2019, 26 of these survive and are currently in the analysis, investigation, and experimentation phase.
Cloudflare’s mission is to help build a better Internet. As a proactive action, our cryptography team is preparing experiments on the deployment of post-quantum algorithms at Cloudflare scale.
Watch our blog post for more details.

Towards Post-Quantum Cryptography in TLS

CloudFlare Blog -

We live in a completely connected society. A society connected by a variety of devices: laptops, mobile phones, wearables, self-driving or self-flying things. We have standards for a common language that allows these devices to communicate with each other. This is critical for wide-scale deployment – especially in cryptography, where the smallest detail has great importance.
One of the most important standards-setting organizations is the National Institute of Standards and Technology (NIST), which is hugely influential in determining which standardized cryptographic systems see worldwide adoption. At the end of 2016, NIST announced it would hold a multi-year open project with the goal of standardizing new post-quantum (PQ) cryptographic algorithms secure against both quantum and classical computers.
Many of our devices have very different requirements and capabilities, so it may not be possible to select a “one-size-fits-all” algorithm during the process. NIST mathematician Dustin Moody indicated that the institute will likely select more than one algorithm:
“There are several systems in use that could be broken by a quantum computer - public-key encryption and digital signatures, to take two examples - and we will need different solutions for each of those systems.”
Initially, NIST selected 82 candidates for further consideration from all submitted algorithms. At the beginning of 2019, this process entered its second stage. Today, there are 26 algorithms still in contention.
Post-quantum cryptography: what is it really and why do I need it?
In 1994, Peter Shor made a significant discovery in quantum computation. He found an algorithm for integer factorization and computing discrete logarithms, both believed to be hard to solve in classical settings. Since then it has become clear that the ‘hard problems’ on which cryptosystems like RSA and elliptic curve cryptography (ECC) rely – integer factoring and computing discrete logarithms, respectively – are efficiently solvable with quantum computing.
A quantum computer can help to solve some of the problems that are intractable on a classical computer. In theory, it could efficiently solve some fundamental problems in mathematics. This amazing computing power would be highly beneficial, which is why companies are actually trying to build quantum computers. At first, Shor’s algorithm was merely a theoretical result – quantum computers powerful enough to execute it did not exist – but this is quickly changing. In March 2018, Google announced a 72-qubit universal quantum computer. While this is not enough to break, say, RSA-2048 (still more is needed), many fundamental problems have already been solved.
In anticipation of wide-spread quantum computing, we must start the transition from classical public-key cryptography primitives to post-quantum (PQ) alternatives. It may be that consumers will never get to hold a quantum computer, but a few powerful attackers who do get one can still pose a serious threat. Moreover, under the assumption that current TLS handshakes and ciphertexts are being captured and stored, a future attacker could crack these stored individual session keys and use those results to decrypt the corresponding individual ciphertexts. Even strong security guarantees, like forward secrecy, do not help out much there.
In 2006, the academic research community launched a conference series dedicated to finding alternatives to RSA and ECC.
This so-called post-quantum cryptography should run efficiently on a classical computer, but it should also be secure against attacks performed by a quantum computer. As a research field, it has grown substantially in popularity. Several companies, including Google, Microsoft, Digicert and Thales, are already testing the impact of deploying PQ cryptography. Cloudflare is involved in some of this, but we want to be a company that leads in this direction. The first thing we need to do is understand the real costs of deploying PQ cryptography, and that’s not obvious at all.
What options do we have?
Many submissions to the NIST project are still under study. Some are very new and little understood; others are more mature and already standardized as RFCs. Some have been broken or withdrawn from the process; others are more conservative or illustrate how far classical cryptography would need to be pushed so that a quantum computer could not crack it within a reasonable cost. Some are very slow and big; others are not. But most cryptographic schemes can be categorized into these families: lattice-based, multivariate, hash-based (signatures only), code-based, and isogeny-based.
Nevertheless, for some algorithms there is a fear they may be too inconvenient to use with today’s Internet. We must also be able to integrate new cryptographic schemes with existing protocols, such as SSH or TLS. To do that, designers of PQ cryptosystems must consider these characteristics:
- Latency caused by encryption and decryption on both ends of the communication channel, assuming a variety of devices, from big and fast servers to slow and memory-constrained IoT (Internet of Things) devices
- Small public keys and signatures to minimize bandwidth
- Clear design that allows cryptanalysis and determining weaknesses that could be exploited
- Use of existing hardware for fast implementation
The work on post-quantum public key cryptosystems must be done in full view of organizations, governments, cryptographers, and the public. Emerging ideas must be properly vetted by this community to ensure widespread support.
Helping Build a Better Internet
To better understand the post-quantum world, Cloudflare began experimenting with these algorithms and used them to provide confidentiality in TLS connections. With Google, we are proposing a wide-scale experiment that combines client- and server-side data collection to evaluate the performance of key-exchange algorithms on actual users’ devices. We hope that this experiment helps choose an algorithm with the best characteristics for the future of the Internet. With Cloudflare’s highly distributed network of access points and Google’s Chrome browser, both companies are in a very good position to perform this experiment.
Our goal is to understand how these algorithms act when used by real clients over real networks, particularly candidate algorithms with significant differences in public-key or ciphertext sizes. Our focus is on how different key sizes affect handshake time in the context of Transport Layer Security (TLS) as used on the web over HTTPS. Our primary candidates are an NTRU-based construction called HRSS-SXY (by Hülsing - Rijneveld - Schanck - Schwabe, and Tsunekazu Saito - Keita Xagawa - Takashi Yamakawa) and an isogeny-based Supersingular Isogeny Key Encapsulation (SIKE). Both algorithms are described in more detail in the section "Dive into post-quantum cryptography" below. This table shows a few characteristics of both algorithms.
Performance timings were obtained by running the BoringSSL speed test on an Intel Skylake CPU.

KEM        Public key (bytes)  Ciphertext (bytes)  Secret size (bytes)  KeyGen (op/sec)  Encaps (op/sec)  Decaps (op/sec)  NIST level
HRSS-SXY   1138                1138                32                   3952.3           76034.7          21905.8          1
SIKE/p434  330                 346                 16                   367.1            228.0            209.3            1

Currently, the most commonly used key exchange algorithm (according to Cloudflare’s data) is the non-quantum X25519. Its public keys are 32 bytes, and BoringSSL can generate 49301.2 key pairs and perform 19628.6 key agreements every second on my Skylake CPU. Note that HRSS-SXY shows a significant speed advantage, while SIKE has a size advantage. In our experiment, we will deploy these two algorithms on the server side using Cloudflare’s infrastructure, and on the client side using Chrome Canary; both sides will collect telemetry information about TLS handshakes using these two PQ algorithms to see how they perform in practice.
What do we expect to find?
In 2018, Adam Langley conducted an experiment with the goal of evaluating the likely latency impact of a post-quantum key exchange in TLS. Chrome was augmented with the ability to include a dummy, arbitrarily-sized extension in the TLS ClientHello (a fixed number of bytes of random noise). After taking into account the performance and key sizes offered by different types of key-exchange schemes, he concluded that constructions based on structured lattices may be most suitable for future use in TLS. However, Langley also observed a peculiar phenomenon: client connections measured at the 95th percentile had much higher latency than the median. This means that in those cases, isogeny-based systems may be a better choice. In the "Dive into post-quantum cryptography" section, we describe the difference between the isogeny-based SIKE and lattice-based NTRU cryptosystems.
In our experiment, we want to more thoroughly evaluate and ascribe root causes to these unexpected latency increases. We would particularly like to learn more about the characteristics of those networks: What causes increased latency? How does the performance cost of isogeny-based algorithms impact the TLS handshake? We want to answer key questions like:
- What is a good ratio for speed-to-key size (or how much faster could SIKE get to achieve the client-perceived performance of HRSS)?
- How do network middleboxes behave when clients use new PQ algorithms, and which networks have problematic middleboxes?
- How do the different properties of client networks affect TLS performance with different PQ key exchanges? Can we identify specific autonomous systems, device configurations, or network configurations that favor one algorithm over another? How is performance affected in the long tail?
Experiment Design
Our experiment will involve both server- and client-side performance statistics collection from real users around the world (all the data is anonymized). Cloudflare is operating the server-side TLS connections. We will enable the CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE + X25519) key-agreement algorithms on all TLS-terminating edge servers. In this experiment, the ClientHello will contain a CECPQ2 or CECPQ2b public key (but never both). Additionally, Chrome will always include X25519 for servers that do not support post-quantum key exchange.
The post-quantum key exchange will only be negotiated in TLS version 1.3 when both sides support it. Since Cloudflare only measures the server side of the connection, it is impossible to determine the time it takes for a ClientHello sent from Chrome to reach Cloudflare’s edge servers; however, we can measure the time it takes for the TLS ServerHello message containing the post-quantum key exchange to reach the client, and for the client to respond. On the client side, Chrome Canary will operate the TLS connection. Google will enable either CECPQ2 or CECPQ2b in Chrome for the following mix of architectures and OSes: x86-64: Windows, Linux, macOS, ChromeOS; aarch64: Android.
Our high-level expectation is to get similar results to Langley’s original experiment in 2018 — slightly increased latency for the 50th percentile and higher latency for the 95th. Unfortunately, data collected purely from real users’ connections may not suffice for diagnosing the root causes of why some clients experience excessive slowdown. To this end, we will perform follow-up experiments based on per-client information we collect server-side.
Our primary hypothesis is that excessive slowdowns, like those Langley observed, are largely due to in-network events, such as middleboxes or bloated/lossy links. As a first-pass analysis, we will investigate whether the slowed-down clients share common network features, like common ASes, common transit networks, common link types, and so on. To determine this, we will run a traceroute from vantage points close to our servers back toward the clients (not overloading any particular links or hosts) and study whether some client locations are subject to slowdowns for all destinations or just for some.
Dive into post-quantum cryptography
Be warned: the details of PQ cryptography may be quite complicated. In some cases it builds on classical cryptography, and in other cases it is completely different math. It would be rather hard to describe the details in a single blog post. Instead, we are giving you an intuition for post-quantum cryptography rather than deep academic-level descriptions. We’re skipping a lot of details for the sake of brevity. Nevertheless, settle in for a bit of an epic journey because we have a lot to cover.
Key encapsulation mechanism
NIST requires that all key-agreement algorithms have the form of a key encapsulation mechanism (KEM). The KEM is a simplified form of public key encryption (PKE). Like PKE, it allows agreement on a secret, but in a slightly different way. The idea is that the session key is an output of the encryption algorithm, in contrast to public key encryption schemes, where the session key is an input to the algorithm. In a KEM, Alice generates a random key and uses the pre-generated public key from Bob to encrypt (encapsulate) it. This results in a ciphertext sent to Bob. Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the random key. The idea was initially introduced by Cramer and Shoup. Experience shows that such constructs are easier to design, analyze, and implement, as the scheme is limited to communicating a fixed-size session key. Leonardo da Vinci said, “Simplicity is the ultimate sophistication,” which is very true in cryptography.
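As a sketch, the KEM flow can be captured in a small Go interface. The snippet below is purely illustrative: the names and signatures are our own simplification and do not correspond to BoringSSL, CIRCL, or any NIST candidate’s actual API:

```go
// Package kem sketches a generic KEM interface. All names here are
// illustrative simplifications, not the API of any real library.
package kem

type KEM interface {
	// GenerateKeyPair creates a fresh key pair.
	GenerateKeyPair() (publicKey, privateKey []byte)
	// Encapsulate picks a random session key and encrypts (encapsulates)
	// it under the peer's public key, returning the ciphertext and the key.
	Encapsulate(publicKey []byte) (ciphertext, sharedSecret []byte)
	// Decapsulate recovers the session key from the ciphertext using
	// the private key.
	Decapsulate(privateKey, ciphertext []byte) (sharedSecret []byte)
}

// Handshake shows a KEM used as a key exchange, as in our TLS experiment.
func Handshake(k KEM) ([]byte, []byte) {
	// Alice generates a key pair and sends the public key to Bob.
	alicePub, alicePriv := k.GenerateKeyPair()

	// Bob encapsulates a fresh session key under Alice's public key and
	// sends the resulting ciphertext back to her.
	ciphertext, bobSecret := k.Encapsulate(alicePub)

	// Alice decapsulates with her private key; for a correct KEM both
	// secrets are now identical and can key a symmetric cipher.
	aliceSecret := k.Decapsulate(alicePriv, ciphertext)
	return aliceSecret, bobSecret
}
```

TLS 1.3 then feeds the resulting shared secret into its usual key derivation, which is why a KEM slots into the handshake so naturally.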
The key exchange (KEX) protocol, like Diffie-Hellman, is a different construct: it allows two parties to agree on a shared secret that can be used as a symmetric encryption key. For example, Alice generates a key pair and sends the public key to Bob. Bob does the same and uses his own key pair together with Alice's public key to generate the shared secret. He then sends his public key to Alice, who can now generate the same shared secret. What's worth noticing is that both Alice and Bob perform exactly the same operations.

A KEM construction can be converted to a KEX: Alice performs key generation and sends the public key to Bob; Bob uses it to encapsulate a symmetric session key and sends the ciphertext back to Alice; Alice decapsulates the ciphertext received from Bob and gets the symmetric key. This is actually what we do in our experiment, to make integration with the TLS protocol less complicated.

NTRU Lattice-based Encryption

We will enable on our servers the CECPQ2 implementation written by Adam Langley of Google; he described the implementation in detail here. This key exchange uses the HRSS algorithm, which is based on the NTRU (N-th degree TRUncated polynomial ring) algorithm. Forgoing too much detail, I am going to explain how NTRU works with simplified examples, and finally compare it to HRSS.

NTRU is a cryptosystem based on a polynomial ring. This means that we do not operate on numbers modulo a prime (as in RSA), but on polynomials of degree \(N\), where the degree of a polynomial is the highest exponent of its variable. For example, \(x^7 + 6x^3 + 11x^2\) has degree 7.

One can add polynomials in the ring in the usual way, by simply adding their coefficients modulo some integer; in NTRU this integer is called \(q\). Polynomials can also be multiplied, but remember, you are operating in a ring, so the result of a multiplication is always a polynomial of degree less than \(N\): the exponents of the resulting polynomial are reduced modulo \(N\). In other words, polynomial ring arithmetic is very similar to modular arithmetic, but instead of working with a set of numbers less than \(N\), you are working with a set of polynomials of degree less than \(N\).

To instantiate the NTRU cryptosystem, three domain parameters must be chosen:

- \(N\) - the degree of the polynomial ring; in NTRU the principal objects are polynomials of degree \(N-1\).
- \(p\) - a small modulus, used during key generation and decryption for reducing message coefficients.
- \(q\) - a large modulus, used during algorithm execution for reducing coefficients of the polynomials.

First, we generate a pair of public and private keys. To do that, two polynomials \(f\) and \(g\) are chosen from the ring in such a way that their randomly generated coefficients are much smaller than \(q\). Then key generation computes two inverses of \(f\): $$ f_p = f^{-1} \bmod p \\ f_q = f^{-1} \bmod q $$ The last step is to compute $$ pk = p\cdot f_q\cdot g \bmod q $$ which we will use as the public key \(pk\). The private key consists of \(f\) and \(f_p\). The polynomial \(f_q\) is not part of any key; however, it must also remain secret.

It might happen that after choosing \(f\), the inverses modulo \(p\) and \(q\) do not exist. In this case, the algorithm has to start from the beginning and generate another \(f\). That's unfortunate, because calculating the inverse of a polynomial is a costly operation. HRSS brings an improvement here, since its parameters ensure that those inverses always exist, making key generation faster than in NTRU as originally proposed.
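All of the computations above are additions and multiplications in the ring. As a toy illustration of that arithmetic — a naive schoolbook sketch in \(Z_q[x]/(x^N - 1)\), assuming inputs already reduced; real implementations are heavily optimized and constant-time:

    // polyMul multiplies two polynomials in Z_q[x]/(x^N - 1):
    // coefficients are reduced mod q and exponents wrap around mod N.
    func polyMul(a, b []int, N, q int) []int {
        c := make([]int, N)
        for i := 0; i < N; i++ {
            for j := 0; j < N; j++ {
                c[(i+j)%N] = (c[(i+j)%N] + a[i]*b[j]) % q
            }
        }
        return c
    }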
The encryption of a message \(m\) proceeds as follows. First, the message \(m\) is converted to a ring element \(pt\) (there exists an algorithm for performing this conversion in both directions). During encryption, NTRU randomly chooses one polynomial \(b\), called the blinder; the goal of the blinder is to produce a different ciphertext on each encryption. The ciphertext \(ct\) is obtained as $$ ct = (b\cdot pk + pt) \bmod q $$ Decryption looks a bit more complicated, but it can also be easily understood. It uses the secret values \(f\) and \(f_p\) to recover the plaintext as $$ v = f \cdot ct \bmod q \\ pt = v \cdot f_p \bmod p $$ To see why decryption works, note that \( f\cdot ct = p\cdot b\cdot g + f\cdot pt \pmod q \), since \(f\cdot f_q = 1 \bmod q\); all of these polynomials have small coefficients, so no reduction modulo \(q\) actually takes place, and reducing \(v\cdot f_p\) modulo \(p\) removes the term \(p\cdot b\cdot g\) (a multiple of \(p\)) and leaves \(f_p\cdot f\cdot pt = pt \bmod p\). After obtaining \(pt\), the message \(m\) is recovered by inverting the conversion function.

The underlying hardness assumption is that, given two polynomials \(f\) and \(g\) whose coefficients are short compared to the modulus \(q\), it is difficult to distinguish \( pk = p\cdot\frac{g}{f} \) from a random element of the ring. In other words, it's hard to find \(f\) and \(g\) given only the public key \(pk\).

Lattices

The NTRU cryptosystem is a grandfather of lattice-based encryption schemes. The idea of using difficult lattice problems for cryptographic purposes is due to Ajtai, and his work evolved into a whole area of research aiming to create more practical, lattice-based cryptosystems. What is a lattice, and why can it be used for post-quantum crypto? The picture below visualizes a lattice as points in two-dimensional space. A lattice is defined by the origin \(O\) and base vectors \( \{ b_1, b_2 \} \); every point in the lattice is an integer linear combination of the base vectors, for example \( V = -2b_1 + b_2 \).

There are two classical NP-hard problems in lattice-based cryptography:

- Shortest Vector Problem (SVP): given a lattice, find the shortest non-zero vector in it. In the graph, the vector \(s\) is the shortest one. (The SVP problem is NP-hard only under some assumptions.)
- Closest Vector Problem (CVP): given a lattice and a vector \(V\) (not necessarily in the lattice), find the lattice vector closest to \(V\). For example, the closest vector to \(t\) is \(z\).

In the two-dimensional picture it is easy to solve SVP and CVP by simple inspection. However, the lattices used in cryptography have much higher dimensions, say above 1000, as well as highly non-orthogonal basis vectors. On such instances the problems become extremely hard to solve; it is believed that even future quantum computers will have it tough.

NTRU vs HRSS

HRSS, which we use in our experiment, is based on NTRU but is a slightly better instantiation. The main improvements are:

- A faster key generation algorithm.
- NTRU encryption can produce ciphertexts that are impossible to decrypt (true for many lattice-based schemes); HRSS fixes this problem.
- HRSS is a key encapsulation mechanism.

CECPQ2b - Isogeny-based Post-Quantum TLS

Following CECPQ2, we have integrated into BoringSSL another hybrid key exchange mechanism relying on SIKE, called CECPQ2b, and we will use it in our TLS 1.3 experimentation. SIKE is a key encapsulation method based on Supersingular Isogeny Diffie-Hellman (SIDH); read more about SIDH in our previous post. The math behind SIDH is related to elliptic curves, and a comparison between SIDH and the classical Elliptic Curve Diffie-Hellman (ECDH) is given below.
An elliptic curve is a set of points that satisfy a specific mathematical equation. The equation of an elliptic curve may take multiple forms; the standard form is called the Weierstrass equation $$ y^2 = x^3 + ax + b $$ and its shape can look like the red curve pictured in the original post.

An interesting fact about elliptic curves is that they have a group structure: the set of points on the curve has an associated binary operation called point addition, and the set is closed under it — adding two points results in another point that is also on the elliptic curve.

If we can add two different points on a curve, then we can also add a point to itself. Doing so multiple times gives an operation known as scalar multiplication, denoted \( Q = k\cdot P = P + P + \dots + P \) for an integer \(k\). Scalar multiplication is commutative, meaning two scalar multiplications can be evaluated in any order: \( \color{darkred}{k_a}\cdot\color{darkgreen}{k_b} = \color{darkgreen}{k_b}\cdot\color{darkred}{k_a} \); this is an important property that makes ECDH possible.

It turns out that if the elliptic curve is chosen carefully, scalar multiplication is easy to compute but extremely hard to reverse: given two points \(Q\) and \(P\) such that \(Q = k\cdot P\), finding the integer \(k\) is a difficult task known as the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem is suitable for cryptographic purposes.

Alice and Bob agree on a shared secret as follows. Alice generates a private key \(k_a\). Then she uses some publicly known point \(P\) and calculates her public key as \( Q_a = k_a\cdot P \). Bob proceeds in a similar fashion and gets \(k_b\) and \( Q_b = k_b\cdot P \). To agree on a shared secret, each party multiplies the other party's public key by their own private key; the result is the shared secret. Key agreement as described above works thanks to the fact that scalars commute: $$ \color{darkgreen}{k_a} \cdot Q_b = \color{darkgreen}{k_a} \cdot \color{darkred}{k_b} \cdot P = \color{darkred}{k_b} \cdot \color{darkgreen}{k_a} \cdot P = \color{darkred}{k_b} \cdot Q_a $$

There is a vast theory behind elliptic curves. An introduction to elliptic curve cryptography was posted before, and more details can be found in this book.
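As a quick, runnable illustration of this commutativity — using Go's x/crypto implementation of Curve25519, purely for demonstration:

    package main

    import (
        "bytes"
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/curve25519"
    )

    func main() {
        // Private keys: random 32-byte scalars.
        ka := make([]byte, 32)
        kb := make([]byte, 32)
        rand.Read(ka)
        rand.Read(kb)

        // Public keys: Qa = ka·P and Qb = kb·P for the fixed base point P.
        Qa, _ := curve25519.X25519(ka, curve25519.Basepoint)
        Qb, _ := curve25519.X25519(kb, curve25519.Basepoint)

        // Shared secret: ka·Qb = ka·kb·P = kb·ka·P = kb·Qa.
        s1, _ := curve25519.X25519(ka, Qb)
        s2, _ := curve25519.X25519(kb, Qa)
        fmt.Println("secrets match:", bytes.Equal(s1, s2))
    }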
Now, let's describe SIDH and compare it with ECDH.

Isogenies on Elliptic Curves

Before explaining the details of the SIDH key exchange, I'll explain the three most important concepts: the j-invariant, the isogeny, and its kernel.

Each curve has a number associated with it; let's call this number the j-invariant. This number is not unique per curve — many curves have the same j-invariant — but it can be viewed as a way to group elliptic curves into disjoint sets. We say that two curves are isomorphic if they are in the same set, called an isomorphism class. The j-invariant is a simple criterion for determining whether two curves are isomorphic. The j-invariant of a curve \(E\) in Weierstrass form \( y^2 = x^3 + ax + b \) is given as $$ j(E) = 1728\frac{4a^3}{4a^3 + 27b^2} $$

When it comes to isogenies, think of an isogeny as a map between two curves: each point on a curve \(E\) is mapped to a point on the isogenous curve \(E'\). We denote a mapping from curve \(E\) to \(E'\) by an isogeny \(\phi\) as $$ \phi: E \rightarrow E' $$ Whether the two curves are also isomorphic depends on the map. Many such mappings may exist, and each curve used in SIDH has a small number of isogenies to other curves. The natural question is how to compute such an isogeny; this is where the kernel of an isogeny comes in. The kernel uniquely determines an isogeny (up to isomorphism). Formulas for calculating an isogeny from its kernel were initially given by J. Vélu, and methods for computing them efficiently have since been extended.

To finish, I will summarize what was said above with a picture. There are two isomorphism classes in the picture: curves \(E_1\) and \(E_2\) are isomorphic and have j-invariant 6, while curves \(E_3\) and \(E_4\) have j-invariant 13 and are therefore in a different isomorphism class. There exists an isogeny \(\phi_2\) between curves \(E_3\) and \(E_2\), so the two are isogenous. Curves \(E_1\) and \(E_2\) are isomorphic, and there is an isogeny \(\phi_1\) between them. Curves \(E_1\) and \(E_4\) are neither isomorphic nor isogenous.

For brevity I'm skipping many important details, like the choice of the finite field, the fact that isogenies must be separable, and that the kernel is finite; curious readers can find a number of academic research papers on these topics on the Internet.

Big picture: similarities with ECDH

Let's generalize the ECDH algorithm described above so that we can swap out some elements and arrive at Supersingular Isogeny Diffie-Hellman. What actually happens during an ECDH key exchange is:

- We have a set of points on an elliptic curve, S.
- We have a group of integers used for point multiplication, G.
- We use an element from G to act on an element from S to get another element from S: $$ G \cdot S \rightarrow S $$

Now the question is: what are our G and S in an SIDH setting? For SIDH to work, we need a big set of elements and something secret that will act on the elements from that set. This "group action" must also be resistant to attacks performed by quantum computers. In the SIDH setting, those two sets are defined as follows:

- The set S is a set (graph) of j-invariants such that all the curves are supersingular: \( S = [j(E_1), j(E_2), j(E_3), \dots, j(E_n)] \)
- The set G is a set of isogenies acting on elliptic curves and transforming, for example, the elliptic curve \(E_1\) into \(E_n\).

Random walk on supersingular graph

When we talk about isogeny-based cryptography, as a topic distinct from elliptic curve cryptography, we usually mean algorithms and protocols that rely fundamentally on the structure of isogeny graphs. An example of such a (small) graph is pictured in the original post (animation based on Chloe Martindale's slide deck). Each vertex of the graph represents a different j-invariant of a set of supersingular curves, and the edges between vertices represent isogenies converting one elliptic curve to another. As you may notice, the graph is strongly connected, meaning every vertex can be reached from every other vertex. In the context of isogeny-based crypto, we call such a graph a supersingular isogeny graph. I'll skip some technical details about the construction of this graph (look for those here or here) and instead describe how it can be used.

As the graph is strongly connected, it is possible to walk the whole graph by starting from any vertex, randomly choosing an edge, following it to the next vertex, and then repeating the process from the new vertex. Such a way of visiting the edges of this graph is called a random walk.

The random walk is the key concept that makes isogeny-based crypto feasible. When you look closely at the graph, you can notice that each vertex has a small number of edges incident to it; this is why we can compute the isogenies efficiently. But it also means that from any vertex there is only a limited number of isogenies to choose from, which doesn't look like a good base for a cryptographic scheme. The key question is: where exactly does the security of the scheme come from?
To get security, it is necessary to visit a couple hundred vertices. What this means in practice is that the secret isogeny (of large degree) is constructed as a composition of multiple isogenies of small, prime degree: $$ \phi = \phi_n \circ \phi_{n-1} \circ \dots \circ \phi_1 $$ This property, together with the properties of the isogeny graph, is what makes some of us believe the scheme has a good chance of being secure. More specifically, there is no known efficient way of finding a path that connects \(E_0\) with \(E_n\), even with a quantum computer at hand. The security level of the system depends on the value \(n\) — the number of steps taken during the walk.

The random walk is the core process used both when generating public keys and when computing shared secrets. It starts with a party generating a random value \(m\) (see more below), a starting curve \(E_0\), and points \(P\) and \(Q\) on this curve. Those values are used to compute the kernel of an isogeny, generated by \(R_1\), in the following way: $$ R_1 = P + m \cdot Q $$ Thanks to the formulas given by Vélu, the point \(R_1\) can now be used to compute the isogeny that the party will use to move from one vertex to the next. After the isogeny \( \phi_{R_1} \) is calculated, it is applied to \(E_0\), which results in a new curve \(E_1\): $$ \phi_{R_1}: E_0 \rightarrow E_1 $$ The isogeny is also applied to the points \(P\) and \(Q\). Once on \(E_1\), the process is repeated. It is applied \(n\) times in total, and at the end the party ends up on some curve \(E_n\), which defines an isomorphism class and hence a j-invariant.

Supersingular Isogeny Diffie-Hellman

The core idea of SIDH is to compose two random walks on an isogeny graph of elliptic curves in such a way that both walks end in the same isomorphism class.

To do this, the scheme fixes public parameters: a starting curve \(E_0\) and two pairs of base points on this curve, \( (PA, QA) \) and \( (PB, QB) \). Alice generates her random secret key \(m\) and calculates a secret isogeny \( \phi_a \) by performing a random walk as described above. The walk finishes with three values: the elliptic curve \(E_a\) she has ended up on, and the pair of points \( \phi_a(PB) \) and \( \phi_a(QB) \) obtained by pushing Bob's base points through her secret isogeny. Bob proceeds analogously, which results in the triple \( \{E_b, \phi_b(PA), \phi_b(QA)\} \). Each triple forms a public key, and the two are exchanged between the parties.

The picture in the original post visualizes the operation: black dots represent curves, grouped into isomorphism classes represented by light blue circles. Alice takes the orange path, ending up on a curve \(E_a\) in a separate isomorphism class from Bob, who takes the dark blue path ending on \(E_b\). SIDH is parametrized in such a way that Alice and Bob always end up in different isomorphism classes.

Upon receipt of the triple \( \{E_a, \phi_a(PB), \phi_a(QB)\} \) from Alice, Bob uses his secret value \(m\) to calculate a new kernel — but instead of using the points \(PB\) and \(QB\), he now uses the images \( \phi_a(PB) \) and \( \phi_a(QB) \) received from Alice: $$ R'_1 = \phi_a(PB) + m \cdot \phi_a(QB) $$ Afterwards, he uses \(R'_1\) to start the walk again, resulting in the isogeny \( \phi'_b: E_a \rightarrow E_{ab} \). Alice proceeds analogously, resulting in the isogeny \( \phi'_a: E_b \rightarrow E_{ba} \). With isogenies calculated this way, both Alice and Bob converge in the same isomorphism class.
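A brief sketch of why both walks converge, in the notation above (writing \(m_a\), \(m_b\) for Alice's and Bob's secrets, and \(R_A = PA + m_a\cdot QA\), \(R_B = PB + m_b\cdot QB\) for their kernel generators on \(E_0\)): because an isogeny is a group homomorphism, Bob's second-stage kernel generator satisfies $$ R'_1 = \phi_a(PB) + m_b\cdot\phi_a(QB) = \phi_a(PB + m_b\cdot QB) = \phi_a(R_B) $$ so each party ends up quotienting \(E_0\) by the subgroup generated by both secrets: $$ E_{ab} \cong E_0/\langle R_A, R_B \rangle \cong E_{ba} $$ and the two final curves therefore share a j-invariant.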
The math may seem complicated; hopefully the description of the exchange makes it easier to understand. Bob computes a new isogeny and starts his random walk from the curve \(E_a\) received from Alice, ending up on some curve \(E_{ba}\). Similarly, Alice calculates a new isogeny, applies it to the curve \(E_b\) received from Bob, and her random walk ends on some curve \(E_{ab}\). The curves \(E_{ab}\) and \(E_{ba}\) are unlikely to be the same, but the construction guarantees that they are isomorphic. As mentioned earlier, isomorphic curves have the same j-invariant; hence the shared secret is the j-invariant \( j(E_{ab}) \).

Coming back to the differences between SIDH and ECDH, we can split them into four categories: the elements we operate on, the cornerstone computation required to agree on a shared secret, the elements representing secret values, and the hard problem on which the security relies (comparison based on Craig Costello's slide deck). In ECDH the secret key is an integer scalar; in SIDH it is a secret isogeny, which is also generated from an integer scalar. In ECDH one multiplies a point on a curve by a scalar; in SIDH one performs a random walk in an isogeny graph. In ECDH the public key is a point on a curve; in SIDH the public part is a curve itself, plus the images of some points after applying the isogeny. The shared secret in ECDH is a point on a curve; in SIDH it is a j-invariant.

SIKE: Supersingular Isogeny Key Encapsulation

SIDH could potentially be used as a drop-in replacement for the ECDH protocol. We have actually implemented a proof of concept, added it to our implementation of TLS 1.3 in the tls-tris library, and (together with Mozilla) described the implementation details in this draft. Nevertheless, there is a problem with SIDH: the keys can be used only once. In 2016, a few researchers came up with an active attack on SIDH which works only when public keys are reused. In the context of TLS this is not a big problem, because a fresh key pair is generated for each session (ephemeral keys), but it may not be true for other applications.

SIKE is an isogeny-based key encapsulation mechanism that solves this problem. Bob can generate SIKE keys, upload the public part somewhere on the Internet, and then anybody can use it whenever they want to communicate with Bob securely. SIKE reuses SIDH: internally, both sides of the connection always perform SIDH key generation and SIDH key agreement, and apply some other cryptographic primitives in order to convert SIDH into a KEM. SIKE is implemented in a few variants, each corresponding to a security level based on 128-, 192-, or 256-bit secret keys; a higher security level means a longer running time. More details about SIKE can be found here. SIKE is also one of the candidates in the NIST post-quantum "competition".

I've skipped many important details to give a brief description of how isogeny-based crypto works. If you're curious and hungry for details, look at either of these Cloudflare meetups, where Deirdre Connolly talked about isogeny-based cryptography, or this talk by Chloe Martindale during the PQ Crypto School 2017. And if you would like to know more about quantum attacks on this scheme, I highly recommend this work.

Conclusion

Quantum computers that can break meaningful cryptographic parameter settings do not exist yet, and they won't be built for at least the next few years. Nevertheless, they have already changed the way we look at current cryptographic deployments.
There are at least two reasons why it's worth investing in PQ cryptography:

- It takes a lot of time to build secure cryptography, and we don't actually know when today's classical cryptography will be broken. A good mathematical base is needed first: an initial idea of what may be secure against something that doesn't exist yet. If you have an idea, you also need a good implementation: constant-time, resistant to timing and cache side channels, DFA, DPA, EM, and a bunch of other abbreviations indicating side-channel resistance. Then there is deployment: algorithms based on elliptic curves, for example, were introduced in 1985 but only started to be widely used in production during the last decade, 20 or so years later. And obviously, the implementation must be blazingly fast. Last, but not least, there is integration: we need time to develop standards that allow PQ cryptography to be integrated with protocols like TLS.
- Even though efficient quantum computers probably won't exist for another few years, the threat is real: data encrypted with current cryptographic algorithms can be recorded now, in the hope of breaking it in the future.

Cloudflare is motivated to help build the Internet of tomorrow with the tools at hand today. Our interest is in cryptographic techniques that can be integrated into existing protocols and widely deployed on the Internet as seamlessly as possible. PQ cryptography, like the rest of cryptography, includes many cryptosystems that can be used for communication on today's Internet; Alice and Bob need to perform some computation, but they do not need to buy new hardware to do it.

Cloudflare sees great potential in these algorithms and believes that some of them can serve as a safe replacement for classical public-key cryptosystems. Time will tell if we're justified in this belief!

Introducing CIRCL: An Advanced Cryptographic Library

CloudFlare Blog -

As part of Crypto Week 2019, today we are proud to release the source code of a cryptographic library we've been working on: a collection of cryptographic primitives written in Go, called CIRCL. This library includes a set of packages that target cryptographic algorithms for post-quantum (PQ) cryptography, elliptic curve cryptography, and hash functions for prime groups. Our hope is that it's useful for a broad audience. Get ready to discover how we made CIRCL unique.

Cryptography in Go

We use Go a lot at Cloudflare. It offers a good balance between ease of use and performance; the learning curve is very light, and after a short time, any programmer can get good at writing fast, lightweight backend services. And thanks to the possibility of implementing performance-critical parts in Go assembly, we can try to 'squeeze the machine' and get every bit of performance.

Cloudflare's cryptography team designs and maintains security-critical projects. It's not a secret that security is hard. That's why we are introducing the Cloudflare Interoperable Reusable Cryptographic Library - CIRCL. There are multiple goals behind CIRCL. First, we want to concentrate our efforts to implement cryptographic primitives in a single place. This makes it easier to ensure that proper engineering processes are followed. Second, Cloudflare is an active member of the Internet community: we are trying to improve and propose standards to help make the Internet a better place. Cloudflare's mission is to help build a better Internet. For this reason, we want CIRCL to help the cryptographic community create proofs of concept, like the post-quantum TLS experiments we are doing. Over the years, lots of ideas have been put on the table by cryptographers (for example, homomorphic encryption, multi-party computation, and privacy-preserving constructions). Recently, we've seen those concepts picked up and exercised in a variety of contexts. CIRCL's implementations of cryptographic primitives create a powerful toolbox for developers wishing to use them.

The Go language provides native packages for several well-known cryptographic algorithms, such as key agreement algorithms, hash functions, and digital signatures. There are also packages maintained by the community under golang.org/x/crypto that provide a diverse set of algorithms for supporting authenticated encryption, stream ciphers, key derivation functions, and bilinear pairings. CIRCL doesn't try to compete with golang.org/x/crypto in any sense. Our goal is to provide a complementary set of implementations that are more aggressively optimized, or that may be less commonly used but have a good chance at being very useful in the future.

Unboxing CIRCL

Our cryptography team worked on a fresh proposal to augment the capabilities of Go users with a new set of packages. You can get them by typing:

    $ go get github.com/cloudflare/circl

The contents of CIRCL are split across different categories:

- Post-Quantum Cryptography — SIDH (isogeny-based cryptography; SIDH provides key exchange mechanisms using ephemeral keys) and SIKE (a key encapsulation mechanism, KEM). Applications: key agreement protocols.
- Key Exchange — X25519 and X448 (RFC 7748 provides new key exchange mechanisms based on Montgomery elliptic curves; applications: TLS 1.3, Secure Shell) and FourQ (one of the fastest elliptic curves at the 128-bit security level; experimental for key agreement and digital signatures).
- Digital Signatures — Ed25519 (RFC 8032 provides new digital signature algorithms based on twisted Edwards curves). Applications: digital certificates and authentication methods.
- Hash to Elliptic Curve Groups — several algorithms: Elligator2, Ristretto, SWU, Icart. Protocols based on elliptic curves require hash functions that map bit strings to points on an elliptic curve. Useful in protocols such as Privacy Pass, OPAQUE, PAKE, and verifiable random functions.
- Optimization — Curve P-384. Our optimizations reduce the burden when moving from P-256 to P-384. Applications: ECDSA and ECDH using Suite B at the top secret level.
SIKE, a Post-Quantum Key Encapsulation Method

To better understand the post-quantum world, we started experimenting with post-quantum key exchange schemes and using them for key agreement in TLS 1.3. CIRCL contains the sidh package, an implementation of Supersingular Isogeny-based Diffie-Hellman (SIDH), as well as CCA2-secure Supersingular Isogeny-based Key Encapsulation (SIKE), which is based on SIDH.

CIRCL makes playing with PQ key agreement very easy. Below is an example of the SIKE interface that can be used to establish a shared secret between two parties for use in symmetric encryption. The example uses a key encapsulation mechanism (KEM): Alice generates a random secret key and then uses Bob's pre-generated public key to encrypt (encapsulate) it. The resulting ciphertext is sent to Bob. Then, Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the secret key. See more details about SIKE in this Cloudflare blog.

Let's see how to do this with CIRCL:

    // Bob's key pair
    prvB := NewPrivateKey(Fp503, KeyVariantSike)
    pubB := NewPublicKey(Fp503, KeyVariantSike)
    // Generate private key
    prvB.Generate(rand.Reader)
    // Generate public key
    prvB.GeneratePublicKey(pubB)
    publicKeyBytes := make([]byte, pubB.Size())
    privateKeyBytes := make([]byte, prvB.Size())
    pubB.Export(publicKeyBytes)
    prvB.Export(privateKeyBytes)
    // Encode public key to JSON
    // Save privateKeyBytes on disk

Bob uploads the public key to a location accessible by anybody. When Alice wants to establish a shared secret with Bob, she performs encapsulation, which results in two parts: a shared secret and the ciphertext that encapsulates it.

    // Read JSON to bytes
    // Bob's public key, imported by Alice
    pubB := NewPublicKey(Fp503, KeyVariantSike)
    pubB.Import(publicKeyBytes)
    kem := sike.NewSike503(rand.Reader)
    // ciphertext and sharedSecret are pre-allocated byte slices
    kem.Encapsulate(ciphertext, sharedSecret, pubB)
    // send ciphertext to Bob

Bob now receives the ciphertext from Alice and decapsulates the shared secret:

    kem := sike.NewSike503(rand.Reader)
    kem.Decapsulate(sharedSecret, prvB, pubB, ciphertext)

At this point, both Alice and Bob can derive a symmetric encryption key from the secret generated. The SIKE implementation contains:

- Two different field sizes: Fp503 and Fp751. The choice of field is a trade-off between performance and security.
- Code optimized for AMD64 and ARM64 architectures, as well as generic Go code. For AMD64, we detect the micro-architecture and, if it's recent enough (e.g., it supports the ADOX/ADCX and BMI2 instruction sets), we use different multiplication techniques to make execution even faster.
- Code implemented in constant time; that is, the execution time doesn't depend on secret values.

We also took care to keep a low heap-memory footprint, so that the implementation uses a minimal amount of dynamically allocated memory. In the future, we plan to provide multiple implementations of post-quantum schemes.
Currently, our focus is on algorithms useful for key exchange in TLS. SIDH/SIKE are interesting because the key sizes produced by those algorithms are relatively small (compared with other PQ schemes); nevertheless, performance is not all that great yet, so we'll continue looking. We plan to add lattice-based algorithms, such as NTRU-HRSS and Kyber, to CIRCL. We will also add another, more experimental algorithm called cSIDH, which we would like to try in other applications. CIRCL doesn't currently contain any post-quantum signature algorithms, which is also on our to-do list. After our experiment with TLS key exchange completes, we're going to look at post-quantum PKI. But that's a topic for a future blog post, so stay tuned. Last, we must admit that our code is largely based on the implementation from the NIST submission along with the work of former intern Henry de Valence, and we would like to thank both Henry and the SIKE team for their great work.

Elliptic Curve Cryptography

Elliptic curve cryptography brings short key sizes and faster evaluation of operations when compared to algorithms based on RSA. Elliptic curves were standardized during the early 2000s and have recently gained popularity as a more efficient way of securing communications. Elliptic curves are used in almost every project at Cloudflare: not only for establishing TLS connections, but also for certificate validation, certificate revocation (OCSP), Privacy Pass, certificate transparency, and AMP Real URL.

The Go language provides native support for NIST-standardized curves, the most popular of which is P-256. In a previous post, Vlad Krasnov described the relevance of optimizing several cryptographic algorithms, including the P-256 curve. When working at Cloudflare scale, small performance issues are significantly magnified. This is one reason why Cloudflare pushes the boundaries of efficiency.

A similar thing happened with the chained validation of certificates. For some certificates, we observed performance issues when validating the chain. Our team successfully diagnosed the issue: certificates with signatures from the P-384 curve — the curve corresponding to the 192-bit security level — were taking up 99% of CPU time! It is common for certificates closer to the root of the chain of trust to rely on stronger security assumptions, for example by using larger elliptic curves. Our first-aid response came in the form of an optimized implementation written by Brendan McMillion that reduced the time of elliptic curve operations by a factor of 10. The code for P-384 is also available in CIRCL.

The latest developments in elliptic curve cryptography have caused a shift toward elliptic curve models with faster arithmetic operations. The best example is undoubtedly Curve25519; other examples are the Goldilocks and FourQ curves. CIRCL supports all of these curves, allowing instantiation of Diffie-Hellman exchanges and Edwards digital signatures. Although this slightly overlaps the native Go libraries, CIRCL has architecture-dependent optimizations.
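For a sense of what P-384 key agreement looks like from application code — using Go's standard crypto/elliptic package here purely for illustration, not CIRCL's optimized implementation:

    package main

    import (
        "crypto/elliptic"
        "crypto/rand"
        "fmt"
    )

    func main() {
        curve := elliptic.P384()

        // Each party generates a key pair on P-384.
        dA, xA, yA, _ := elliptic.GenerateKey(curve, rand.Reader)
        dB, xB, yB, _ := elliptic.GenerateKey(curve, rand.Reader)

        // ECDH: each side multiplies the peer's public point by its own
        // scalar; the shared secret is the x-coordinate of the result.
        sAx, _ := curve.ScalarMult(xB, yB, dA)
        sBx, _ := curve.ScalarMult(xA, yA, dB)
        fmt.Println("shared secrets match:", sAx.Cmp(sBx) == 0)
    }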
Hashing to Groups

Many cryptographic protocols rely on the hardness of solving the Discrete Logarithm Problem (DLP) in special groups, one of which is the multiplicative group of integers modulo a large prime. To guarantee that the DLP is hard to solve, the modulus must be a large prime number; increasing its size boosts security but also makes operations more expensive. A better approach is to use elliptic curve groups, since they provide faster operations.

In some cryptographic protocols, it is common to use a function with the properties of a cryptographic hash function that maps bit strings into elements of the group. This is easy to accomplish when, for example, the group is the set of integers modulo a large prime. However, it is not so clear how to perform this function using elliptic curves. In the cryptographic literature, several methods have been proposed, using the terms hashing to curves and hashing to a point interchangeably.

The main issue is that there is no general method for deterministically finding points on an arbitrary elliptic curve; the closest available are methods that target special curves and parameters. This is a problem for implementers of cryptographic algorithms, who have a hard time figuring out a suitable method for hashing to points of an elliptic curve. Compounding that, the chances of doing this wrong are high: there are many different methods, elliptic curves, and security considerations to analyze. For example, a vulnerability in the WPA3 handshake protocol exploited a non-constant-time hashing method, resulting in the recovery of keys. Currently, an IETF draft tracks work in progress that provides hashing methods, unifying the requirements with curves and their parameters.

Corresponding to this problem, CIRCL will include implementations of hashing methods for elliptic curves. Our development accompanies the evolution of the IETF draft, so users of CIRCL get this added value: the methods implement ready-to-go functionality covering the needs of some cryptographic protocols.

Update on Bilinear Pairings

Bilinear pairings are sometimes regarded as a tool for cryptanalysis; however, pairings can also be used constructively, allowing instantiation of advanced public-key algorithms — for example, identity-based encryption, attribute-based encryption, blind digital signatures, and three-party key agreement, among others.

An efficient way to instantiate a bilinear pairing is to use elliptic curves. Note that only a special class of curves can be used: so-called pairing-friendly curves have specific properties that enable the efficient evaluation of a pairing.

Some families of pairing-friendly curves were introduced by Barreto-Naehrig (BN), Kachisa-Schaefer-Scott (KSS), and Barreto-Lynn-Scott (BLS). BN256 is a BN curve using a 256-bit prime and is one of the fastest options for implementing a bilinear pairing. The native Go library supports this curve in the package golang.org/x/crypto/bn256. In fact, the BN256 curve is used by Cloudflare's Geo Key Manager, which allows distributing encrypted keys around the world. At Cloudflare, high performance is a must, and with this motivation, in 2017 we released an optimized implementation of the BN256 package that is 8x faster than Go's native package. The success of these optimizations reached several other projects, such as the Ethereum protocol and the Randomness Beacon project.
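To illustrate the bilinearity that makes these constructions possible — a small, runnable check using the golang.org/x/crypto/bn256 package mentioned above:

    package main

    import (
        "bytes"
        "crypto/rand"
        "fmt"
        "math/big"

        "golang.org/x/crypto/bn256"
    )

    func main() {
        // Random scalar k modulo the group order.
        k, _ := rand.Int(rand.Reader, bn256.Order)

        P := new(bn256.G1).ScalarBaseMult(big.NewInt(1))
        Q := new(bn256.G2).ScalarBaseMult(big.NewInt(1))

        // Bilinearity: e(k·P, Q) == e(P, Q)^k.
        left := bn256.Pair(new(bn256.G1).ScalarBaseMult(k), Q)
        right := new(bn256.GT).ScalarMult(bn256.Pair(P, Q), k)

        fmt.Println("bilinear:", bytes.Equal(left.Marshal(), right.Marshal()))
    }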
Recent improvements in solving the DLP over extension fields GF(pᵐ), for p prime and m > 1, have impacted the security of pairings, forcing a recalculation of the parameters used for pairing-friendly curves. Before these discoveries, the BN256 curve provided a 128-bit security level; now, larger primes are needed to target the same level. That does not mean that the BN256 curve has been broken: BN256 still gives about 100 bits of security — that is, approximately 2¹⁰⁰ operations are required to cause real danger, which is still infeasible with current computing power.

With our CIRCL announcement, we also want to announce our plans for research and development to obtain efficient curve(s) that can become a stronger successor to BN256. According to the estimation by Barbulescu-Duquesne, a BN curve must use primes of at least 456 bits to match a 128-bit security level. However, the recalculation of parameters brings the BLS and KSS curves back to the main scene as efficient alternatives. To this end, a standardization effort at the IETF is in progress, with the aim of defining parameters and pairing-friendly curves that match different security levels.

Note that regardless of the curve(s) chosen, there is an unavoidable performance downgrade when moving from BN256 to a stronger curve. Actual timings were presented by Aranha, who described the evolution of the race for high-performance pairing implementations. The purpose of our continuous development of CIRCL is to minimize this impact through fast implementations.

Optimizations

Go itself is very easy to learn and use for systems programming, and yet it makes it possible to use assembly so that you can stay close "to the metal". We have blogged about improving performance in Go a few times in the past (see these posts about encryption, ciphersuites, and image encoding).

When developing CIRCL, we crafted the code to get the best possible performance from the machine. We leverage the capabilities provided by the architecture and architecture-specific instructions. This means that in some cases we need to get our hands dirty and rewrite parts of the software in Go assembly, which is not easy but definitely worth the effort when it comes to performance. We focused on x86-64, as this is our main target, but we also think it's worth looking at the ARM architecture, and in some cases (like SIDH or P-384) CIRCL has optimized code for this platform.

We also try to ensure that the code uses memory efficiently, crafting it so that fast allocations on the stack are preferred over expensive heap allocations. In cases where heap allocation is needed, we tried to design the APIs so that they allow pre-allocating memory ahead of time and reusing it across multiple operations.

Security

The CIRCL library is offered as-is and without guarantee; changes to the code, repository, and API are expected in the future. We recommend caution before using this library in a production application, since part of its content is experimental.

As new attacks and vulnerabilities arise over time, the security of software should be treated as a continuous process. In particular, the assessment of cryptographic software is critical; it requires expertise from several fields, not only computer science. Cryptography engineers must be aware of the latest vulnerabilities and methods of attack in order to defend against them.

The development of CIRCL follows best practices for secure development. For example, if the execution time of the code depends on secret data, an attacker could leverage those irregularities and recover secret keys.
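A classic illustration of the problem — not code from CIRCL, just a minimal sketch: a byte-by-byte comparison that returns early leaks, through timing, how many leading bytes matched, while Go's crypto/subtle package provides a constant-time alternative.

    import "crypto/subtle"

    // leakyEqual returns early on the first mismatch, so its running
    // time reveals how many leading bytes of a secret were guessed
    // correctly — exactly the irregularity described above.
    func leakyEqual(a, b []byte) bool {
        if len(a) != len(b) {
            return false
        }
        for i := range a {
            if a[i] != b[i] {
                return false
            }
        }
        return true
    }

    // safeEqual examines every byte regardless of content, so its
    // running time is independent of where the inputs differ.
    func safeEqual(a, b []byte) bool {
        return subtle.ConstantTimeCompare(a, b) == 1
    }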
In our code, we take care to write constant-time code and hence prevent timing-based attacks.

Developers of cryptographic software must also be aware of optimizations performed by the compiler and/or the processor, since these optimizations can lead to insecure binaries in some cases. All of these issues could be exploited in real attacks aimed at compromising systems and keys. Therefore, software changes must be tracked through thorough code reviews. Static analyzers and automated testing tools also play an important role in the security of the software.

Summary

CIRCL is envisioned as an effective tool for experimenting with modern cryptographic algorithms while providing high-performance implementations. Today marks the starting point of a continuous machinery of innovation and contribution to the community in the form of a cryptographic library. There are still several other applications, such as homomorphic encryption, multi-party computation, and privacy-preserving protocols, that we would like to explore.

We are a team of cryptography, security, and software engineers working to improve and augment Cloudflare products. Our team keeps the communication channels open for receiving comments, including improvements, and merging contributions. We welcome opinions and contributions! If you would like to get in contact, check out our GitHub repository for CIRCL: github.com/cloudflare/circl. We want to share our work and hope it makes someone else's job easier as well. Finally, special thanks to all the contributors who have either directly or indirectly helped to implement the library - Ko Stoffelen, Brendan McMillion, Henry de Valence, Michael McLoughlin, and all the people who invested their time in reviewing our code.

WP Engine Announces Smart Plugin Manager, World’s Only Comprehensive WordPress Plugin Manager

WP Engine -

AUSTIN, Texas – June 20, 2019 – WP Engine, the WordPress Digital Experience Platform (DXP), today announced the upcoming launch of the Smart Plugin Manager, the only comprehensive WordPress plugin manager on the market. The Smart Plugin Manager automates some of the most tedious yet necessary tasks associated with running a site, such as regularly… The post WP Engine Announces Smart Plugin Manager, World’s Only Comprehensive WordPress Plugin Manager appeared first on WP Engine.

HIPAA Rules and Considerations for Dedicated and Tandem Database Hosting

Liquid Web Official Blog -

Whether you're in the healthcare industry or your business model lends itself to clients in the healthcare industry, HIPAA is likely at the forefront of your thoughts. But what is it, and how does it affect your data specifically? The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is legislation establishing rules, regulations, and potential levies around the treatment and use of Protected Health Information (PHI). That's a mouthful! Translated into lay-speak, that sentence amounts to this: "If you touch private medical data, it's your job to ensure it is kept safe." Often there are misconceptions about lines of responsibility, which have caused several well-documented issues, including tens of millions of dollars in fines and settlements. Avoiding these fines and settlements is of paramount importance to the health of your business, and the first step is learning your responsibilities.

HIPAA compliance is broken into four Rules, which govern four major points of compliance:

- Access
- Handling
- Notification
- Reach

Each aspect requires its own processes and procedures to maintain compliance. Subscribe to the Liquid Web weekly newsletter to get more content on HIPAA compliance sent to your inbox.

Access: The HIPAA Privacy rule notes several stipulations around who can access PHI, including the patient or the legal guardian of the patient, as well as detailing health care providers' steps to deny or allow access to that data. This rule also includes requirements for documentation around training staff on how to handle data, and attestation of completion of that training. The Takeaway: Only the people who need access to private health care data should be granted access. This includes health care service industry employees and hosting business employees. Anyone who may come in contact with PHI should be scrutinized for need and granted access appropriately. If a specific team or individual doesn't need access to PHI, they should not have it.

Handling: The HIPAA Security rule lays out standards for how data should be handled to maintain its integrity. This includes how PHI is stored, how it's accessed once stored, how it's transmitted, and even how the devices are physically maintained and monitored while in a data center. Further, this rule notes requirements for logging of access and proper means of disposal of data, if disposal is ever required. The Takeaway: No one outside controlled members of the organization should be able to see PHI. While the data is at rest, it should be encrypted. Backups of the data should be encrypted, the means of access and transmission should be encrypted, and the physical security of your machines needs to be maintained and controlled at all times. Logs need to be diligently kept for every time PHI is accessed, changed, updated, or moved. Lastly, once you're done with the data, be it at account termination or a migration, any physical copies of the data (i.e., hard drives) need to be appropriately disposed of to ensure complete data integrity.

Notification: Breaches usually cause the most confusion. The HIPAA Breach Notification rule sets standards for how PHI data breaches must be handled should the unthinkable happen. In general, a breach is defined as any uncontrolled access to unencrypted PHI. For example, if an encrypted transmission is intercepted but no one can actually see the specific data, that is not a breach.
However, if a laptop with access to PHI is stolen and is used to view that PHI, that is a breach and needs to be reported. Breaches are further broken into two types: minor breaches, which affect fewer than 500 individuals, and meaningful breaches, which affect more than 500 individuals.

Breaches do not necessarily equal violations. A violation is when a breach comes as a result of a poorly defined, partially implemented, loosely maintained, or generally incomplete compliance process, or as a result of a direct violation of properly implemented processes and procedures. Calling back to our laptop example: if a laptop with access to PHI is stolen, it is a breach. This stolen laptop incident becomes a violation if the company didn't have documented processes and procedures surrounding the use of that laptop OR if the owner of the computer was negligent with the device. The Takeaway: Not all breaches are violations, but all breaches need to be reported.

Reach: The HIPAA Omnibus rule is a catch-all rule that controls compliance as it extends to other parties. In today's internet business, we all understand that it's rarely feasible to handle all processes in house, including hosting. This rule allows firms to extend HIPAA compliance responsibilities to other parties, so long as those other parties are also HIPAA compliant and the two companies enter into a Business Associate Agreement (BAA), a contract that outlines each party's responsibilities regarding the handling of PHI. The Takeaway: It's possible to maintain HIPAA compliance even when parts of your processes are outsourced to other companies. Just make sure the other company is also HIPAA compliant and you have executed a BAA before allowing access to PHI.

How Does This All Apply to Databases?

Now that all of those points are laid out, how does this affect databases specifically? Databases are the most likely place where PHI will be stored. Most modern applications and storage formats are database driven and thereby rely on databases as their source. It's essential to understand the structure of the app you're using as you consider maintaining HIPAA compliance and database hosting. There are two primary styles of database hosting: dedicated database hosting and tandem database hosting.

Dedicated database hosting is the more complex and expensive of the two options. It requires a separate, segregated server that's dedicated strictly to hosting your database service and nothing else. This server is usually connected to a private network and not open to the public internet. To access the data on this server, the application is typically configured to make an external connection to the database server, run its query, and receive the response, which is then processed as appropriate.

Tandem database hosting runs your database service on the same machine as other services and in conjunction with those services. This is the approach many less resource-intensive applications take, as its deployment is less expensive and less complicated. The database service is usually configured to accept only local connections, and it performs the same queries without having to send the request or response outside the server.

- Dedicated Database Hosting — Pros: 1. Easily scalable. 2. Designed to handle large databases. 3. Hardware can be customized for databases. Cons: 1. More expensive. 2. Requires more hardware. 3. More difficult to administrate.
- Tandem Database Hosting — Pros: 1. Less complex. 2. Deployed by default. 3. No additional configuration. Cons: 1. Shared resources. 2. Could be affected by other services. 3. Scaling requires taking resources from other services.
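The difference shows up directly in how an application connects to its database. A minimal sketch in Go — the DSNs, credentials, database name, and the go-sql-driver/mysql driver choice are all illustrative assumptions:

    import (
        "database/sql"

        _ "github.com/go-sql-driver/mysql" // hypothetical driver choice
    )

    func openDedicated() (*sql.DB, error) {
        // Dedicated hosting: reach the database server over a private
        // network; the connection itself should be TLS-encrypted.
        return sql.Open("mysql", "app:secret@tcp(10.0.0.5:3306)/ehr?tls=true")
    }

    func openTandem() (*sql.DB, error) {
        // Tandem hosting: the database runs on the same machine and
        // accepts only local connections via the UNIX socket.
        return sql.Open("mysql", "app:secret@unix(/var/run/mysqld/mysqld.sock)/ehr")
    }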
Whether you use a dedicated database server or a database service running on a web server, if PHI will be stored there, the entire server is required to follow all compliance guidelines. These guidelines fall into four categories:

- Data Handling
- Backups
- Physical Safeguards
- Logging

Data Handling: Data handling refers to data that is ready to be accessed, data that is being accessed, and data that's moving so it can be accessed once received. All of these processes are governed by one concept: encryption. According to the HIPAA Security rule, no one should simply be able to see PHI; data should be encrypted while at rest and in transit. Encryption for databases exists at several levels and is available on all database platforms: there are means to encrypt entire database warehouses, whole databases, full tables, or even individual columns. It's best to investigate your current application deployments and choose the approach that fits your access patterns with the least interruption. Data in transit also requires encryption, even across a private network connection. Again, there are many ways to move data while maintaining the encryption requirement: SSH, rsync, FTPS, and SFTP all qualify, and even dumping databases to a file and encrypting those files is acceptable. As long as you adhere to the Security rule's requirement for encryption, you're on the right track. The last consideration is how the data is accessed; once more, encryption is vital. SSH, database administration tools, or the secured implementations of FTP work well.

Considerations for FTP: FTP and its encrypted implementations move data from a source to a destination. If someone is using FTP to pull data to their local machine and then re-upload that data, it's imperative that the destination machine follows all HIPAA compliance requirements, no matter the type: laptop, tablet, workstation, what have you. If they are not following HIPAA requirements, or requirements are not set up around this type of access, the organization is in violation and at risk of legal action.

Considerations for Database Administration Tools: There are many stand-alone and web-driven database administration tools, all with their own pros and cons. No matter the application — DBeaver, SQLite, MySQL Workbench, phpMyAdmin, or SQL Server Management Studio — they all need to make fully encrypted connections and follow the same access standards: controlled, logged, encrypted. This means all web-driven applications need an SSL certificate at a minimum.

Backups: Database backups are paramount to a company's survival, and the governing bodies understand this, which is why HIPAA compliance has stipulations specifically for maintaining backups. First: backups are required. A means by which to back up data and databases is not only encouraged, it's required; not having backups is a direct violation of HIPAA compliance. Further, those backups must follow the encryption policies for data handling: they must be encrypted, accessed only via encrypted means, and kept encrypted in transit.
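As one concrete approach to the "dump and encrypt" option mentioned above, here is a minimal Go sketch that encrypts a database dump with AES-256-GCM before it is stored. Key management (for example, an HSM or KMS) is out of scope here, and the function names and paths are illustrative:

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "io/ioutil"
    )

    // encryptDump encrypts a plaintext database dump so the backup at
    // rest never contains readable PHI. key must be 32 bytes (AES-256).
    func encryptDump(key []byte, inPath, outPath string) error {
        plaintext, err := ioutil.ReadFile(inPath)
        if err != nil {
            return err
        }
        block, err := aes.NewCipher(key)
        if err != nil {
            return err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return err
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := rand.Read(nonce); err != nil {
            return err
        }
        // Prepend the nonce so the file is self-contained for decryption.
        ciphertext := gcm.Seal(nonce, nonce, plaintext, nil)
        return ioutil.WriteFile(outPath, ciphertext, 0600)
    }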
Backups also require testing. What good are backups if they don't work, and how do we know they're failing if we don't test them? These are essential questions, and their answers are built into HIPAA compliance: all backups must be checked regularly, those backups need to be verified, and the testing and verification must be logged so they can be submitted to your HIPAA compliance officer at the time of an audit.

Physical Safeguards: This is a point of much contention. In the hosting world, almost everyone pays to avoid needing physical access to a server. We rely on our hosts to handle that part, and any access to the server, physical access included, requires the same scrutiny as other access. According to the Security rule, physical access must be controlled and logged. Luckily, per the Omnibus rule, a third party can handle almost any aspect of your compliance, so long as they're HIPAA compliant and a BAA has been executed — and this includes physical access! Liquid Web offers attestation via a BAA covering its processes, documentation, and logged procedures surrounding physical access, its maintenance, and logging. That attestation can then be submitted to an auditor and serves as proof of compliance when you need it.

Logging: Logging is another point that seems blurry to most clients and is absolutely crucial to maintaining HIPAA compliance. As part of the HIPAA compliance audit process, a compliance officer will require documentation showing that all of the above points are followed. This means all access to your databases needs to be logged, and those logs need to be maintained. You'll have to provide logged details about each person who can access the data; every time the data was accessed, and by whom; the reason the data was accessed; and the outcome of accessing said data. You'll need to record every time the physical hardware was accessed, as well as by whom. You'll also need to show logs of your backup periods, verification of employee training, all breach-awareness testing, and any breaches. That's a lot to log! But it's crucial: not keeping these logs leaves you in violation of HIPAA compliance regulations and susceptible to fines and actions.

There's a lot to consider as you move toward HIPAA compliance, but Liquid Web has you covered. We offer HIPAA compliant backup solutions and HIPAA compliant server packages, and we can also provide a BAA for attestation. This is only the first step toward HIPAA compliance, but it's an important one. The post HIPAA Rules and Considerations for Dedicated and Tandem Database Hosting appeared first on Liquid Web.
