Corporate Blogs

The Network is the Computer: A Conversation with Greg Papadopoulos

Cloudflare Blog -

I spoke with Greg Papadopoulos, former CTO of Sun Microsystems, to discuss the origins and meaning of The Network is the Computer®, as well as Cloudflare's role in the evolution of the phrase. During our conversation, we considered the inevitability of latency, the slowness of the speed of light, and the future of Cloudflare's newly acquired trademark. Listen to our conversation here and read the full transcript below.

[00:00:08] John Graham-Cumming: Thank you so much for taking the time to chat with me. I've got Greg Papadopoulos, who was CTO of Sun and is currently a venture capitalist. Tell us about "The Network is the Computer."

[00:00:22] Greg Papadopoulos: Well, certainly from a Sun perspective, the very first Sun-1 was connected via Internet protocols, and at that time there was a big war about what should win from a networking point of view. And there was a dedication there that everything we made was going to interoperate on the network over open standards; from day one in the company, it was always that thought. It's really about the collection of these machines and how they interact with one another, and of course that puts the network in the middle of it. And then it becomes hard to, you know, where's the line? But it is one of those things where I think even if you asked most people at Sun, "Okay, explain to me 'The Network is the Computer,'" it would get rather meta. People would see that phrase and sort of react to it in their own way. But it would always come back to something similar to what I said, I think, in the earlier days.

[00:01:37] Graham-Cumming: I remember it very well, because it was obviously plastered everywhere in Silicon Valley for a while. And it sounded incredibly cool, but I was never quite sure what it meant. It sounded like it was one of those things that was super deep, but I couldn't dig deep enough.
But it sort of seems like this whole vision has come true, because if you dial back to, I think, 2006, you wrote a blog post about how the world was only going to need five or seven or some small number of computers. And that was also linked to this as well, wasn't it?

[00:02:05] Papadopoulos: Yeah. I think as things began to evolve into what we would call cloud computing today, you could put substantial resources on the other side of the network, and from the end user's perspective those could be as effective or more effective than something you'd have in front of you. And so there was this idea that you really could provide these larger-scale computing services (in the early days, you know, grid was the term used before cloud), if you followed that logic and watched what was happening to the improvements of the network. Dave Patterson at Cal was very fond of saying, in that era and in the 90s, that networks were getting to the place where the disk connected to another machine is transparent to you. I mean, it could be your own; in fact, somebody else's memory may be closer to you than your own disk. And that's a pretty interesting thought. And so where we ended up going was really a complete realization that these things we would call servers were actually just components of this network computer. And so it was very mysterious, "The Network is the Computer," and it actually grew into itself in this way. And I'll say, looking at Cloudflare, you see this next level of scale happening.
It's not just what are those things that you build inside a data center and how do you connect to it; in fact, it's the network that is the computer that is the network.

[00:04:26] Graham-Cumming: It's interesting, though, that there have been these waves of centralization, and then pushing the computing power to the edge and the PCs at some point, and then Larry Ellison came along and he was going to have this network computer thing, and it sort of seems to swing back and forth. So where do you think we are in this swinging?

[00:04:44] Papadopoulos: You know, I don't think so much swinging. I think it's a spiral upwards, and we come to a place and we look down and it looks familiar. You know, where you'll say, oh I see, here's a 3270 connected to a mainframe. Well, that looks like a browser connected to a web server. And you know, here's the device; it's connected to the web service. And they look similar, but there are some very important differences as we're traversing this helix of sorts. If you look back, for example, the 3270 was inextricably bound to the single server that hosted it. And now our devices really have the ability to connect to any other computer on the network. So while I think we're seeing something that looks like a pendulum there, it's really a refactoring question: what software belongs where, and how hard is it to maintain where it is? And naturally, I think the Internet protocol clearly is a peer-to-peer protocol, so it doesn't take sides on this. Whether we end up in one state, with more on the client or less on the client, I think really has to do with how well we've figured out distributed computing and how well we can deliver code in a management-free way. And that's a longer conversation.

[00:06:35] Graham-Cumming: Well, it's an interesting conversation.
One thing is what you talked about with Sun Grid, which then we end up with Amazon Web Services and things like that: there was sort of the device, be it your handheld or your laptop, talking to some cloud computing. And then what Cloudflare has done with this Workers product is to say, well, actually I think there are three places where code could exist. There's something you can put inside the network.

[00:07:02] Papadopoulos: Yes. And by extension that could grow to another layer too. And it goes back to, I think it's Dave Clark who I first remember saying: you can get all the bandwidth you want, that's money, but you can't reduce latency. That's God, right? So I think there are certainly things there, and as I see the Workers architecture, there are two things going on. There's clearly something to be said about latency, having distributed points of presence and getting closer to the clients. And there's IBM with interaction there too, but it is also something that is around management of software, and how we should be thinking about delivery of applications, which ultimately, I believe, in the limit become more distributed-looking than they are now. It's just that it's really hard to write distributed applications in kind of the general way we think about it.

[00:08:18] Graham-Cumming: Yes, that's one of these things, isn't it: it is exceedingly hard to actually write these things, which is why I think we're going through a bit of a transition right now, where people are trying to figure out where that code should actually execute and what should execute where.

[00:08:31] Papadopoulos: Yeah. You had graciously pointed out this blog from a dozen years ago on, hey, this is inevitable that we're going to have this concentration of computing, for economic reasons as much as anything else. But it's both a hammer and a nail. You know, cloud stuff in some ways is unnatural, in that why should we expect computing to get concentrated like it is?
If you really look into it more deeply, I think it has to do with management and control and capital cycles, really things that are on the economic and administrative side of things, and not about what's truth and beauty and the destination for where applications should be.

[00:09:27] Graham-Cumming: And I think you also see some companies now starting to wrestle with the economics of the cloud, where they realize that they are kind of locked into their cloud provider and are paying rent, kind of thing; it becomes entirely economic at that point.

[00:09:41] Papadopoulos: Well, it does. And you know, this was also something I was pretty vocal about, although I got misinterpreted for a while there as being, you know, anti-cloud or something, which I'm not; I think I'm pragmatic about it. One of the dangers, certainly as people yield particularly to SaaS products, is that your data, unless you have explicit contracts and abilities to disgorge that data from that service, becomes more and more captive. And that's the part that I think is actually the real question here: what's the switching cost from one service to another, from one cloud to another?

[00:10:35] Graham-Cumming: Yes, absolutely. That's one of the things that we faced, and one of the reasons why we worked on this thing called the Bandwidth Alliance: one of the ways in which stuff gets locked into clouds is that the egress fee is so large that you don't want to get your data out.

[00:10:50] Papadopoulos: Exactly. And then there is always the, you know, well, we have these particular features in our particular cloud that are very seductive to developers, and you write to them and it's kind of hard to undo; you know, just the physics of moving things around. So what you all have been doing there is, I think, necessary and quite progressive. But we can do more.

[00:11:17] Graham-Cumming: Yes, definitely.
Just to go back to the thought about latency and bandwidth: I have a jokey pair of slides where I show the average broadband network you can buy over time, and it going up, and then the change in the speed of light over the same period, which of course is entirely flat: zero progress in the speed of light. Looking back through your biography, you worked at Thinking Machines, and I assume that fighting latency at a much shorter distance of cabling must have been interesting in those machines, because of the speeds at which they were operating.

[00:11:54] Papadopoulos: Yes. It surprises most people when you say it, but, you know, computer architects complain that the speed of light is really slow. And you know, Grace Hopper, who is really one of the pioneers of modern programming languages and COBOL (I think she was a rear admiral), would walk around with a wire that was a foot long and say, "this is a nanosecond." And that seemed pretty short for a while, but, you know, a nanosecond is an eternity these days.

[00:12:40] Graham-Cumming: Yes, it's an eternity. People don't quite appreciate, if they're not thinking about it, how long it is. I had someone who was new to the computing world, learning about it, come to me with a book which was talking about fiber optics, and in the book it said there is a laser that flashes on and off a billion times a second to send data down the fiber optic. And he came to me and said, "This can't possibly be true; it's just too fast."

[00:13:09] Papadopoulos: No, it's too slow!

[00:13:12] Graham-Cumming: Right? And I thought, well, that's slow. And then I stepped back and thought, you know, to the average person, that is a ridiculous statement, that somehow we humans have managed to control time at this ridiculously small level. And then we keep pushing and pushing and pushing it, and people don't appreciate how fast, and actually how slow, the light is, really.

[00:13:33] Papadopoulos: Yeah.
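(An aside on the arithmetic behind Hopper's wire: her foot-long nanosecond is easy to check. A minimal sketch, where the constant is the defined speed of light in a vacuum and the 0.66 velocity factor for copper cable is a typical assumed value, not a figure from the conversation:)

```python
# How far does a signal travel in one nanosecond?
C_VACUUM = 299_792_458  # speed of light in a vacuum, m/s (defined value)

def light_distance(seconds: float, velocity_factor: float = 1.0) -> float:
    """Distance covered in `seconds`, scaled by the medium's velocity factor."""
    return C_VACUUM * seconds * velocity_factor

ns = 1e-9
print(f"1 ns in vacuum: {light_distance(ns):.3f} m")            # ~0.300 m, about a foot
print(f"1 ns in copper (~0.66c): {light_distance(ns, 0.66):.3f} m")
```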
And I think, if it actually comes down to it, if you want to get into a very pure reckoning of this: latency is the only thing that matters. And one can look at bandwidth as a component of latency; you can see bandwidth as a serialization delay, and that kind of goes back to the Clark thing. You know, yeah, I can buy that, but I can't bribe God on the other side, so I'm fundamentally left with this problem that we have. Thank you, Albert Einstein, right? It's kind of hopeless to think about sending information faster than that.

[00:14:09] Graham-Cumming: Yeah, exactly. There are information limits, which is driving why we have such powerful phones: because in fact the latency to the human is very low if you have it in your hand.

[00:14:23] Papadopoulos: Yes, absolutely. This is where the edge architecture and the Worker structure that you guys are working on becomes really interesting too, because, as you talked about earlier, we're now introducing this new tier, and it gives me a really close place, from a latency point of view, to have some intimate relationship with a device, and at the same time be well-connected to the network.

[00:14:55] Graham-Cumming: Right. And I think the other thing that is interesting about that is that your device fundamentally is an insecure thing. So you know, if you put code on that thing, you can't put secrets in it, like cryptographic secrets, because the end user has access to them. Normally you would keep those in the server somewhere. But then the other funny thing is, if you have this intermediary tier which is both secure and low latency to the end user, you suddenly have a different world in which you can put secrets, you can put code that is privileged, but it can interact with the user very, very rapidly because of the low latency.

[00:15:30] Papadopoulos: Yeah. And that essence of, where's my trust domain?
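(An aside making the "bandwidth is a component of latency" point concrete. In this sketch, total transfer time is propagation delay plus serialization delay; buying more bandwidth shrinks only the second term. The link numbers are illustrative, not from the conversation:)

```python
# Total one-way transfer time = propagation delay + serialization delay.
# Signal speed in fiber is roughly 2e8 m/s (about two-thirds of c).

def transfer_latency_s(size_bytes: int, bandwidth_bps: float,
                       distance_m: float, signal_speed: float = 2e8) -> float:
    propagation = distance_m / signal_speed          # set by physics
    serialization = size_bytes * 8 / bandwidth_bps   # set by money
    return propagation + serialization

# 1 MB over 5,000 km of fiber:
slow = transfer_latency_s(1_000_000, 100e6, 5_000_000)  # 100 Mbps -> 0.105 s
fast = transfer_latency_s(1_000_000, 10e9, 5_000_000)   # 10 Gbps  -> ~0.0258 s
# A hundred-fold more bandwidth, but the 25 ms propagation floor never moves.
```

This is Clark's split: the serialization term is negotiable, the propagation term is not.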
Now I've seen all kinds of, like, oh my gosh, I cannot believe somebody is doing it: putting their S3 credentials down on a device and having it hold, you know, the login for a database or something. You must be kidding. I mean, that trust proxy point at low latency is a really key thing.

[00:16:02] Graham-Cumming: Yes, I think people just need to start thinking about that architecture. Is there a sort of parallel with things that were going on with very high-performance computing, with sort of the massively parallel stuff, and what's happening today? What lessons can we take from work done in the 70s and 80s and apply to the Internet of today?

[00:16:24] Papadopoulos: Well, as we talked about, there are a couple of fundamental issues here. One we've been speaking about is latency. The other one is synchronization, and this comes up in a bunch of different ways. You know, whether it's when one looks at the CAP theorem kinds of things that Eric Brewer has been famous for (can I get consistency and availability and survive partitionability, all at the same time?), you end up in this kind of place, and it goes back to Einstein a bit, where knowing when things have happened, and when state has actually been changed or committed, is a pretty profound problem.

[00:17:15] Graham-Cumming: It is, and what order things have happened in.

[00:17:18] Papadopoulos: Yes. And that order is going to be relative to an observer here as well. So if you're insisting on some total ordering, then you're insisting on slowing things down as well. And that really is fundamental. We were pushing into that in the massively parallel stuff, and you'll see that at Internet scale. You know, there's another thing, if I could.
This is one of my greatest "aha"s about networks, and it's due to a fellow at Sun, Rob Gingell, who actually ended up being chief engineer at Sun and was one of the real pioneers of the software development framework that brought Solaris forward. Rob would talk about this thing that I label as network entropy. It's basically: what happens when you connect systems to networks? What do networks do to those systems? And this is a little bit of a philosophical question, not a physical one. Rob observed that over time, networks have this property of wanting to decompose things into constituent parts, have those parts get specialized, and then reintegrated. So let me make that less abstract. In the early days of connecting systems to networks, one of the natural observations was: well, why don't we take the storage out of those desktop or server systems and put it on the other side of at least a local network, into a file server or storage server? And so you could see that computer sort of get pulled apart between its computing and its storage pieces. And then that storage piece, in Rob's next step, would go on and get specialized, so we had whole companies start, like Network Appliance, Pure Storage, EMC. And, you know, the original routers were routing software running on workstations, and Cisco went and took that and made it into something of its own. So you now see this effect happen at the next scale. One of the things that really got me excited when I first saw Cloudflare a decade ago was: wow, okay, just as in those early days, a component like a network firewall can get pulled away, created as its own network entity, and specialized.
And I think one of the most profound things, at least from my history of Cloudflare, was that as you guys went in and separated off these functions early on, the fear was that this was going to introduce latency, and in fact things got faster. Figure that.

[00:20:51] Graham-Cumming: Part of that, of course, is caching, and then there's dealing with the speed of light by being close to people. But also, if you say your company makes things faster, and you do all these different things including security, you are forced to optimize the whole thing to live up to the claim. Whereas if you try and chain things together, nobody's really responsible for the overall latency budget. It becomes natural that you have to do it.

[00:21:18] Papadopoulos: Yes. And you all have done it brilliantly, you know, in sort of Gingell's view: okay, so this piece got decomposed and now specialized, meaning optimized like heck, because that's what you do. And you can see that over and over again; you see it in terms of even Twilio or something. You know, here's a messaging service. I'm just pulling my applications apart, letting people specialize. But the final piece, and this is really the punchline, the final piece is, as Rob would talk about it, the value is in the reintegration of it. And so, you know, what are those unifying forces that are creating, if you will, the operating system for "The Network is the Computer"? You were asking about the massively parallel scale. Well, we had an operating system we wrote for this. As you get up to the higher scale, you get into these more distributed circumstances, where the complexity goes up by some important number of orders of magnitude, and now, what's that reintegration? And so I come back and look at what Cloudflare is doing here. You're entering into that phase now of actually being that reintegrator, almost that operating system for the computer that is the network.

[00:23:06] Graham-Cumming: I think that's right.
We often talk about actually being an operating system on the Internet, so very similar kinds of thoughts.

[00:23:14] Papadopoulos: Yes. And you know, as we were talking earlier about how developers make sense of this pendulum or cycle or whatever it is: having this idea of an operating system, of a place where I can have ground truths and trust and sort of fixed points in all this, is terribly important.

[00:23:44] Graham-Cumming: Absolutely. So do you have any final thoughts? It must be 30 years on from when "The Network is the Computer" was a Sun trademark. Now it's a Cloudflare trademark. What's the future going to look like for that slogan, and who's going to trademark it in 30 years' time?

[00:24:03] Papadopoulos: Well, it could be interplanetary at that point.

[00:24:13] Graham-Cumming: Well, if you talk about the latency problems of going interplanetary, we definitely have to solve the latency.

[00:24:18] Papadopoulos: Yeah. People do understand that. They go, wow, it's like seven minutes between here and Mars at close approach.

[00:24:28] Graham-Cumming: The earthly equivalent of that is New Zealand. If you speak to people from New Zealand, and they come on holiday to Europe or they move to the US, they suddenly say that the Internet works so much better here. And it's just that it's closer. Now, the Australians have figured this out, because Australia is actually drifting northwards, so they're actually going to get closer. That's going to fix it for them, but New Zealand is stuck.

[00:24:56] Papadopoulos: I do ask my physicist friends for one of two things. You know, either give me a faster speed of light — so far they have not delivered — or another dimension I can cut through. Maybe we'll keep working on the latter.

[00:25:16] Graham-Cumming: All right. Well, listen, Greg, thank you for the conversation. Thank you for thinking about this stuff many, many years ago. I think we're getting there slowly on some of this work.
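(An aside on the New Zealand example: it has a hard lower bound you can compute. Assuming roughly 18,000 km of path from Auckland to London and signal propagation at about two-thirds of c in fiber, both round figures rather than measurements from the conversation, no amount of engineering gets the round trip below roughly 180 ms:)

```python
# Minimum round-trip time imposed by propagation alone (no queueing,
# no serialization, a straight-line fiber path: all optimistic).
FIBER_SPEED = 2e8  # m/s, roughly two-thirds of the speed of light

def min_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km * 1_000 / FIBER_SPEED * 1_000

print(f"Auckland-London floor: {min_rtt_ms(18_000):.0f} ms")  # 180 ms
print(f"Same city (50 km):     {min_rtt_ms(50):.2f} ms")      # 0.50 ms
```

Which is the whole argument for moving code closer: the floor scales with distance, so the only way to lower it is to shorten the path.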
And yeah, good talking to you.

[00:25:27] Papadopoulos: Well, you too. And thank you for carrying the torch forward. I think everyone from Sun who listens to this, and John, and everybody should feel really proud about the part they played in the evolution of this great invention.

[00:25:48] Graham-Cumming: It's certainly the case that a tremendous amount of work was done at Sun that was really fundamental, and, you know, perhaps some of that was ahead of its time, but here we are.

[00:25:57] Papadopoulos: Thank you.

[00:25:58] Graham-Cumming: Thank you very much.

[00:25:59] Papadopoulos: Cheers.

Interested in hearing more? Listen to my conversations with John Gage and Ray Rothrock of Sun Microsystems:

- John Gage
- Ray Rothrock

To learn more about Cloudflare Workers, check out the use cases below:

- Optimizely - Optimizely chose Workers when updating their experimentation platform to provide faster responses from the edge and support more experiments for their customers.
- Cordial - Cordial used a "stable of Workers" to do custom Black Friday load shedding as well as using it as a serverless platform for building scalable customer-facing
- - used Workers to avoid significant code changes to their underlying platform when migrating from a legacy provider to a modern cloud backend.
- Pwned Passwords - Troy Hunt's popular "Have I Been Pwned" project benefits from cache hit ratios of 94% on its Pwned Passwords API due to Workers.
- Timely - Using Workers and Workers KV, Timely was able to safely migrate application endpoints using simple value updates to a distributed key-value store.
- Quintype - Quintype was an eager adopter of Workers to cache content they previously considered un-cacheable and improve the user experience of their publishing platform.

The Network is the Computer

Cloudflare Blog -

We recently registered the trademark for The Network is the Computer®, to encompass how Cloudflare is utilizing its network to pave the way for the future of the Internet.

The phrase was first coined in 1984 by John Gage, the 21st employee of Sun Microsystems, where he was credited with building Sun's vision around "The Network is the Computer." When Sun was acquired in 2010, the trademark was not renewed, but the vision remained. Take it from him:

"When we built Sun Microsystems, every computer we made had the network at its core. But we could only imagine, over thirty years ago, today's billions of networked devices, from the smallest camera or light bulb to the largest supercomputer, sharing their packets across Cloudflare's distributed global network.

We based our vision of an interconnected world on open and shared standards. Cloudflare extends this dedication to new levels by openly sharing designs for security and resilience in the post-quantum computer world.

Most importantly, Cloudflare is committed to immediate, open, transparent accountability for network performance. I'm a dedicated reader of their technical blog, as the network becomes central to our security infrastructure and the global economy, demanding even more powerful technical innovation."

Cloudflare's massive network, which spans more than 180 cities in 80 countries, enables the company to deliver its suite of security, performance, and reliability products, including its serverless edge computing offerings. In March of 2018, we launched our serverless solution, Cloudflare Workers, to allow anyone to deploy code at the edge of our network. We also recently announced advancements to Cloudflare Workers in June of 2019 to give application developers the ability to do away with cloud regions, VMs, servers, containers, and load balancers; all they need to do is write the code, and we do the rest.
With each of Cloudflare's data centers acting as a highly scalable application origin to which users are automatically routed via our Anycast network, code is run within milliseconds of users worldwide.

In honor of registering Sun's former trademark, I spoke with John Gage; Greg Papadopoulos, former CTO of Sun Microsystems; and Ray Rothrock, former Director of CAD/CAM Marketing at Sun Microsystems, to learn more about the history of the phrase and what it means for the future:

- John Gage
- Ray Rothrock
- Greg Papadopoulos

Is Backing Up Your Website as Fun as Singing to Your Neighbors?

InMotion Hosting Blog -

Singing to your neighbors is hard to beat, but backing up your WordPress website just might be the thing to do it. What is in a backup? While browsing different web hosts, or even articles about websites, you may be wondering what exactly a backup is. Thankfully, it is a very simple procedure to explain. Basically, a backup is the act of saving your website's data in its current form and storing it on a secure server or some other external location. Continue reading Is Backing Up Your Website as Fun as Singing to Your Neighbors? at The Official InMotion Hosting Blog.

Taking the Holiday Leap with WP Engine

WP Engine -

iFly has been empowering humans to experience the freedom and thrill of flight since the company was founded in 1998. Today, more than 10 million people have traveled to one of iFly’s 80 locations across the globe to fly in one of their wind tunnels. The iFly experience has also grown into a popular gift… The post Taking the Holiday Leap with WP Engine appeared first on WP Engine.

Out of Office: How to Actually Disconnect During Your Summer Vacation

LinkedIn Official Blog -

With summer in full swing, vacation is on the minds – and calendars – of most of us. So much so that it's one of the three most important benefits professionals want when considering a new job. Nearly 75% of professionals would turn down a job offer if the vacation policy didn't meet their expectations, according to new LinkedIn research released today. Regardless, nearly half (46%) of professionals admit to not taking all their vacation time last year, pointing to reasons like having too much work…

How to Choose a Web Host: A 15-Point Checklist

DreamHost Blog -

Choosing a web host can be challenging — especially if you're just starting your first website. There's a lot of information to digest about hosting your site, and it's easy to forget something important when you're weighing the pros and cons of various providers. However, if you know the right questions to ask, you can navigate the waters of web hosting without fear. There are many excellent plans to pick from. Making the right choice is simply a matter of considering your needs alongside what each service provider has to offer.

In this post, we'll discuss why it's necessary to determine your site's hosting needs before you begin shopping. Then we'll share a 15-point checklist to help decide which web hosting provider is right for you. Let's get going!

Why It's Vital to Identify Your Hosting Needs Upfront

There's no such thing as one-size-fits-all web hosting. Every website has different needs when it comes to storage, performance, features, and price. So before you start looking at plans, you'll want to determine your site's hosting requirements. By knowing what you need ahead of time, you can narrow down your choices more quickly and avoid making costly mistakes when selecting your host.

Some questions you might ask include:

- How large is your website, and what are its storage needs?
- On average, how much traffic do you expect each month?
- What's your hosting budget?
- What are your current website management skills? What might you need help with?
- Apart from storing your site, what services will you need from your hosting provider?

Your answers to these questions will eliminate some hosts right away. Then, you can use the checklist below to determine if other hosting options are a smart match for your site.
How to Choose a Web Host (A 15-Point Checklist)

There are many aspects to consider when choosing a hosting provider, and the process can seem overwhelming at first. That's why we've listed out the 15 most important questions to ask when evaluating a hosting provider:

1. How Reliable Are the Host's Servers?
2. Is It Easy to Upgrade Your Plan?
3. Can You Easily Add a Domain?
4. Are There Significant Differences in the Sign-Up and Renewal Costs?
5. Does the Host Have a Generous Refund Policy?
6. Is There a One-Click Installer?
7. Will Your Host Provide Email Addresses for Your Domain?
8. Will You Have Easy SFTP Access?
9. How Difficult Is It to Find and Edit .htaccess?
10. What E-Commerce Features Are Included (If Any)?
11. Can You Easily Navigate and Use the Control Panel?
12. Are SSL Certificates Included?
13. How Often Will You Have to Renew Your Subscription?
14. Does the Web Host Offer Easy Site Backups?
15. Can You Quickly Access Support 24/7?

Now, let's dive into each question in more detail to guide you towards the best host for your situation.

1. How Reliable Are the Host's Servers?

Performance and uptime can make or break your website. Your website's performance influences Search Engine Optimization (SEO), bounce and conversion rates, and how trustworthy your site appears to visitors. We're not exaggerating when we say that the reliability of your server has a direct impact on your website's bottom line.

Any provider you consider should have an uptime guarantee of at least 99%. At DreamHost, our uptime guarantee is 100%, as per our Terms of Service. It's also wise to check out what performance-related features a given host offers. This can include built-in caching, access to a Content Delivery Network (CDN), and more.

2.
Is It Easy to Upgrade Your Plan?

If you've created a website with all the elements it needs to succeed, chances are it's going to grow. With any luck, you'll see an increase in traffic and conversion rates. This will likely mean you'll have to upgrade your web hosting plan.

Related: When Should You Upgrade Your Hosting Plan?

Most new sites start on a shared, low-cost plan. As your online presence expands, however, you'll need more resources, bandwidth, and disk space to maintain your site for all its users. A host that offers easy upgrades to a Virtual Private Server (VPS), Managed WordPress, or Dedicated Hosting plan can make this process smoother. If you choose a host that makes it difficult to change your plan, you could find yourself migrating to a new provider just a few months after launching your site.

3. Can You Easily Add a Domain?

As your digital brand grows, you may find that you not only want to expand your current site but start a new one as well. Alternatively, perhaps you simply like collecting domain names, or you want to get into website flipping. Whatever the reason, if you're going to purchase additional domains, you'll need a host that makes it simple to acquire and manage them. Choosing a provider that offers unlimited domains ensures that you won't ever run out of space.

Related: The Complete Guide to New Top-Level Domains (TLDs)

4. Are There Significant Differences in the Sign-Up and Renewal Costs?

It's important to choose an affordable host. However, be careful when signing up, as you don't want to get roped into a plan that's more expensive than it seems on the surface. Some companies will offer attractive sign-up deals for new customers.
Then, when it comes time to renew, they’ll raise the price. Make sure to look into your potential host’s renewal fees as well as the initial sign-up cost. Some difference between these two is an industry norm. However, you’ll want to keep the contrast as low as possible and avoid a higher renewal rate entirely if you can.

5. Does the Host Have a Generous Refund Policy?

In an ideal world, you’ll choose the perfect host the first time around, your website will flourish, and you’ll never need to cancel your service. However, things don’t always go according to plan. If you need to cancel your hosting for any reason, you’ll want to avoid excessive fees. It’s also wise to choose a host that offers a trial period so that if things don’t work out in the first few weeks of service, you can cancel without penalty.

6. Is There a One-Click Installer?

As the most popular Content Management System (CMS) on the web, WordPress often receives additional support from hosting companies. Managed WordPress plans and WordPress-related features can be especially helpful if this is the platform you intend to use. A particularly useful feature that some hosts offer is a one-click WordPress installer. Better yet, some hosts will pre-install WordPress for you. This can save you a lot of time during the initial setup. You can also find one-click installers for other platforms, such as Joomla and Zen Cart. Related: What Is a WordPress One-Click Install?

7. Will Your Host Provide Email Addresses for Your Domain?

Whether you have a business site, a blog, an e-commerce store, or some other type of website, your visitors will probably need a way to get in touch. Having an email address that’s associated with your site’s domain appears more professional and is easier for users to remember. Checking out a potential host’s email services is a must if you want to incorporate this feature into your online presence.
Choosing a host that includes this service in its web hosting packages or provides it for a low cost means you won’t have to set up custom email addresses manually. 8. Will You Have Easy SFTP Access? File Transfer Protocol (FTP) and Secure File Transfer Protocol (SFTP) are vital tools for website maintenance. At some point, you’ll likely have to use one or the other to resolve an error, customize your site, and carry out different tasks. Your host should provide credentials so that you can use FTP or SFTP via a client such as FileZilla. This information should be easy to locate so that you can access it at any time. Additionally, some hosts will provide their own FTP clients for your use as well. This is a nice bonus and can be an easier and more secure option than third-party FTP clients. 9. How Difficult Is It to Find and Edit .htaccess? For WordPress users, the .htaccess file is a crucial part of your site. It contains a wealth of configuration information that influences permalink structure, caching, 301 redirects, file accessibility, and more. You may need to edit .htaccess at some point to resolve an error, tighten security, or carry out other tasks to improve your site. Unfortunately, this isn’t always easy, since .htaccess is a hidden file. Even if you can find the file, editing it via SFTP can be risky. It’s helpful if your web host provides a file manager for editing .htaccess, to minimize the risks to the rest of your site. 10. What E-Commerce Features Are Included (If Any)? All websites have the same basic needs. However, if you’re running an e-commerce site, you’ll need some unique features. For instance, you’ll probably want more frequent backups and a Content Delivery Network (CDN) to reach customers around the world. A specialized e-commerce website hosting plan can help you get the support your online store needs at an affordable rate. 
Some plans, including our own e-commerce plans, will even pre-install WooCommerce and the Storefront theme for WordPress retailers. Related: How to Start an Online Store in 1 Hour with WooCommerce

11. Can You Easily Navigate and Use the Control Panel?

You’ll be spending a lot of time in your hosting control panel. Being able to navigate around your account easily can make managing your website much less challenging. Plus, you won’t have to rely on support as much when you’re figuring out tasks such as billing and upgrading. Choosing a host that offers a custom control panel can save you a lot of headaches in the long run. Our control panel, for instance, offers clear navigation menus. That way, you can easily find information on your site, contact support, or edit your account information.

12. Are SSL Certificates Included?

Secure Sockets Layer (SSL) certificates are vital for keeping your site and its users safe. This is particularly true if you’re dealing with sensitive information such as credit card details and other personal data. Adding an SSL certificate to your site is usually an additional expense. However, some hosting providers will include one in your plan at no extra cost. Choosing one of these hosts can save you a little extra money while helping to keep your site secure.

13. How Often Will You Have to Renew Your Subscription?

Many hosts require a monthly subscription from their customers. There’s nothing wrong with that model, and if your fees are low enough, you might not mind paying monthly. However, this option isn’t always the most cost-effective. Other hosts will offer one-year or even three-year plans. By paying for a longer term upfront, you can often save some money down the line. When comparing prices between hosts, make sure to take this into account. Don’t forget that you’ll have to renew your domain name as well. This is usually an annual occurrence, although you can find options for two- and three-year registrations here at DreamHost.
You can also sign up for an auto-renewal program to avoid forgetting to renew your domain. 14. Does the Web Host Offer Easy Site Backups? We all like to think the worst will never happen to us. However, it’s best to be prepared. Accidents and attacks happen, and if you’re in a position where your site has been destroyed, you’ll want a way to restore it. Backups ensure that you have a way to bring your site back if it’s lost. While there are many methods available for backing up a website, one of the easiest is to do it through your web host. It’s even more convenient if your host offers automated daily backups for your site, along with one-click on-demand backups. 15. Can You Quickly Access Support 24/7? Your relationship with your web host will hopefully be a long one. Reliable customer support is key if that relationship is going to be mutually beneficial. Making sure any host you’re considering has multiple contact methods and a 24/7 support team can guarantee that someone will be available whenever you need help. Additionally, specific support for WordPress, e-commerce, or other niches can come in handy. Choosing a host with a team that is knowledgeable about the tools you use will ensure that your site has the best support possible. For example, if you opt for DreamPress, our WordPress-specific managed hosting, you’ll get priority access to our elite squad of in-house WordPress experts. Finding the Right Web Hosting Service When it comes to choosing a web host, it can be easy to get overwhelmed. There are many factors to consider, and your decision could ultimately determine your website’s success or failure. However, if you go into your web hosting search with your needs clearly outlined, you’ll eventually find the best provider for you. Asking careful questions about the quality of the host’s services and equipment, the additional features it offers, and its pricing will steer you in the right direction. 
If you’re a WordPress user, that direction just might be DreamHost’s Starter Shared Hosting plan. This plan is a low-cost option that’s ideal for small business owners or those just starting out. With Shared Hosting, there’s no limit to the amount of disk space you can use for your site, and unlimited bandwidth means that when your site goes viral, you don’t have to stress about traffic limits. Most importantly, with any DreamHost plan, you’ll be able to answer “Yes!” to each of the questions on this checklist. The post How to Choose a Web Host: A 15-Point Checklist appeared first on Website Guides, Tips and Knowledge.

A gentle introduction to Linux Kernel fuzzing

CloudFlare Blog -

For some time I’ve wanted to play with coverage-guided fuzzing. Fuzzing is a powerful testing technique where an automated program feeds semi-random inputs to a tested program. The intention is to find inputs that trigger bugs. Fuzzing is especially useful in finding memory corruption bugs in C or C++ programs.

Image by Patrick Shannon CC BY 2.0

Normally it's recommended to pick a well-known, but little explored, library that is heavy on parsing. Historically things like libjpeg, libpng and libyaml were perfect targets. Nowadays it's harder to find a good target - everything seems to have been fuzzed to death already. That's a good thing! I guess the software is getting better! Instead of choosing a userspace target I decided to have a go at the Linux Kernel netlink machinery. Netlink is an internal Linux facility used by tools like "ss", "ip", and "netstat". It's used for low-level networking tasks - configuring network interfaces, IP addresses, routing tables and such. It's a good target: it's an obscure part of the kernel, and it's relatively easy to automatically craft valid messages. Most importantly, we can learn a lot about Linux internals in the process. Bugs in netlink aren't going to have security impact though - netlink sockets usually require privileged access anyway.

In this post we'll run the AFL fuzzer, driving our netlink shim program against a custom Linux kernel, all of it running inside KVM virtualization. This blog post is a tutorial. With the easy-to-follow instructions, you should be able to quickly replicate the results. All you need is a machine running Linux and 20 minutes.

Prior work

The technique we are going to use is formally called "coverage-guided fuzzing".
There's a lot of prior literature:

- The Smart Fuzzer Revolution by Dan Guido, and the LWN article about it
- Effective file format fuzzing by Mateusz “j00ru” Jurczyk
- honggfuzz by Robert Swiecki, a modern, feature-rich coverage-guided fuzzer
- ClusterFuzz
- Fuzzer Test Suite

Many people have fuzzed the Linux Kernel in the past. Most importantly:

- syzkaller (aka syzbot) by Dmitry Vyukov, a very powerful CI-style continuously running kernel fuzzer, which has found hundreds of issues already. It's an awesome machine - it will even report the bugs automatically!
- Trinity fuzzer

We'll use AFL, everyone's favorite fuzzer. AFL was written by Michał Zalewski. It's well known for its ease of use, speed and very good mutation logic. It's a perfect choice for people starting their journey into fuzzing! If you want to read more about AFL, the documentation is in a couple of files:

- Historical notes
- Technical whitepaper
- README

Coverage-guided fuzzing

Coverage-guided fuzzing works on the principle of a feedback loop:

- the fuzzer picks the most promising test case
- the fuzzer mutates the test into a large number of new test cases
- the target code runs the mutated test cases, and reports back code coverage
- the fuzzer computes a score from the reported coverage, and uses it to prioritize the interesting mutated tests and remove the redundant ones

For example, let's say the input test is "hello". The fuzzer may mutate it into a number of tests, for example: "hEllo" (bit flip), "hXello" (byte insertion), or "hllo" (byte deletion). If any of these tests yields interesting code coverage, it will be prioritized and used as a base for the next generation of tests. The specifics of how mutations are done, and how to efficiently compare code coverage reports of thousands of program runs, are the fuzzer's secret sauce. Read AFL's technical whitepaper for the nitty-gritty details. The code coverage reported back from the binary is very important.
It allows the fuzzer to order the test cases, and identify the most promising ones. Without code coverage the fuzzer is blind. Normally, when using AFL, we are required to instrument the target code so that coverage is reported in an AFL-compatible way. But we want to fuzz the kernel! We can't just recompile it with "afl-gcc"! Instead we'll use a trick: we'll prepare a binary that will trick AFL into thinking it was compiled with its tooling. This binary will report back the code coverage extracted from the kernel.

Kernel code coverage

The kernel has at least two built-in coverage mechanisms - GCOV and KCOV:

- Using gcov with the Linux kernel
- KCOV: code coverage for fuzzing

KCOV was designed with fuzzing in mind, so we'll use it. Using KCOV is pretty easy. We must compile the Linux kernel with the right settings. First, enable the KCOV kernel config option:

    cd linux
    ./scripts/config \
        -e KCOV \
        -d KCOV_INSTRUMENT_ALL

KCOV is capable of recording code coverage from the whole kernel; that behavior is controlled by the KCOV_INSTRUMENT_ALL option. It has disadvantages though - it would slow down the parts of the kernel we don't want to profile, and would introduce noise in our measurements (reduce "stability"). For starters, let's disable KCOV_INSTRUMENT_ALL and enable KCOV selectively on the code we actually want to profile. Today we focus on the netlink machinery, so let's enable KCOV on the whole "net" directory tree:

    find net -name Makefile | xargs -L1 -I {} bash -c 'echo "KCOV_INSTRUMENT := y" >> {}'

In a perfect world we would enable KCOV only for the couple of files we are really interested in. But netlink handling is peppered all over the network stack code, and we don't have time for fine-tuning it today. With KCOV in place, it's worth adding the "kernel hacking" toggles that will increase the likelihood of reporting memory corruption bugs. See the README for the list of Syzkaller-suggested options - most importantly KASAN. With that set we can compile our KCOV and KASAN enabled kernel.
Oh, one more thing. We are going to run the kernel under KVM, using "virtme", so we need a couple of toggles:

    ./scripts/config \
        -e VIRTIO -e VIRTIO_PCI -e NET_9P -e NET_9P_VIRTIO -e 9P_FS \
        -e VIRTIO_NET -e VIRTIO_CONSOLE -e DEVTMPFS

(see the README for the full list)

How to use KCOV

KCOV is super easy to use. First, note that code coverage is recorded in a per-process data structure. This means you have to enable and disable KCOV within a userspace process, and it's impossible to record coverage for non-task things, like interrupt handling. This is totally fine for our needs. KCOV reports data into a ring buffer. Setting it up is pretty simple, see our code. Then you can enable and disable it with a trivial ioctl:

    ioctl(kcov_fd, KCOV_ENABLE, KCOV_TRACE_PC);
    /* profiled code */
    ioctl(kcov_fd, KCOV_DISABLE, 0);

After this sequence the ring buffer contains the list of %rip values of all the basic blocks of the KCOV-enabled kernel code. To read the buffer, just run:

    n = __atomic_load_n(&kcov_ring[0], __ATOMIC_RELAXED);
    for (i = 0; i < n; i++) {
        printf("0x%lx\n", kcov_ring[i + 1]);
    }

With tools like addr2line it's possible to resolve each %rip to a specific line of code. We won't need that though - the raw %rip values are sufficient for us.

Feeding KCOV into AFL

The next step in our journey is to learn how to trick AFL. Remember, AFL needs a specially-crafted executable, but we want to feed in the kernel code coverage. First we need to understand how AFL works. AFL sets up an array of 64K 8-bit numbers. This memory region is called "shared_mem" or "trace_bits" and is shared with the traced program. Every byte in the array can be thought of as a hit counter for a particular (branch_src, branch_dst) pair in the instrumented code. It's important to notice that AFL prefers random branch labels, rather than reusing the %rip values to identify the basic blocks. This is to increase entropy - we want the hit counters in the array to be uniformly distributed.
The algorithm AFL uses is:

    cur_location = <COMPILE_TIME_RANDOM>;
    shared_mem[cur_location ^ prev_location]++;
    prev_location = cur_location >> 1;

In our case with KCOV we don't have compile-time-random values for each branch. Instead we'll use a hash function to generate a uniform 16-bit number from the %rip recorded by KCOV. This is how to feed a KCOV report into the AFL "shared_mem" array:

    n = __atomic_load_n(&kcov_ring[0], __ATOMIC_RELAXED);
    uint16_t prev_location = 0;
    for (i = 0; i < n; i++) {
        uint16_t cur_location = hash_function(kcov_ring[i + 1]);
        shared_mem[cur_location ^ prev_location]++;
        prev_location = cur_location >> 1;
    }

Reading test data from AFL

Finally, we need to actually write the test code hammering the kernel netlink interface! First we need to read the input data from AFL. By default AFL sends a test case to stdin:

    /* read AFL test data */
    char buf[512*1024];
    int buf_len = read(0, buf, sizeof(buf));

Fuzzing netlink

Then we need to send this buffer into a netlink socket. But we know nothing about how netlink works! Okay, let's use the first 5 bytes of input as the netlink protocol and group id fields. This will allow AFL to figure out and guess the correct values of these fields. The code testing netlink (simplified):

    netlink_fd = socket(AF_NETLINK, SOCK_RAW | SOCK_NONBLOCK, buf[0]);

    struct sockaddr_nl sa = {
        .nl_family = AF_NETLINK,
        .nl_groups = (buf[1] << 24) | (buf[2] << 16) | (buf[3] << 8) | buf[4],
    };
    bind(netlink_fd, (struct sockaddr *) &sa, sizeof(sa));

    struct iovec iov = { &buf[5], buf_len - 5 };
    struct sockaddr_nl sax = {
        .nl_family = AF_NETLINK,
    };
    struct msghdr msg = { &sax, sizeof(sax), &iov, 1, NULL, 0, 0 };
    r = sendmsg(netlink_fd, &msg, 0);
    if (r != -1) {
        /* sendmsg succeeded! great I guess... */
    }

That's basically it! For speed, we will wrap this in a short loop that mimics the AFL "fork server" logic. I'll skip the explanation here, see our code for details.
The resulting code of our AFL-to-KCOV shim looks like:

    forksrv_welcome();
    while(1) {
        forksrv_cycle();
        test_data = afl_read_input();
        kcov_enable();
        /* netlink magic */
        kcov_disable();
        /* fill in shared_map with tuples recorded by kcov */
        if (new_crash_in_dmesg) {
            forksrv_status(1);
        } else {
            forksrv_status(0);
        }
    }

See the full source code.

How to run the custom kernel

We're missing one important piece - how to actually run the custom kernel we've built. There are three options:

"native": You can totally boot the built kernel on your server and fuzz it natively. This is the fastest technique, but pretty problematic. If the fuzzing succeeds in finding a bug you will crash the machine, potentially losing the test data. Sawing off the branch we're sitting on is best avoided.

"uml": We could configure the kernel to run as User Mode Linux. Running a UML kernel requires no privileges; the kernel just runs as a user space process. UML is pretty cool, but sadly it doesn't support KASAN, so the chances of finding a memory corruption bug are reduced. Finally, UML is a pretty magical special environment - bugs found in UML may not be relevant in real environments. Interestingly, UML is used by Android's network_tests framework.

"kvm": We can use KVM to run our custom kernel in a virtualized environment. This is what we'll do. One of the simplest ways to run a custom kernel in a KVM environment is to use the "virtme" scripts. With them we can avoid having to create a dedicated disk image or partition, and just share the host file system. This is how we can run our code:

    virtme-run \
        --kimg bzImage \
        --rw --pwd --memory 512M \
        --script-sh "<what to run inside kvm>"

But hold on. We forgot about preparing input corpus data for our fuzzer!

Building the input corpus

Every fuzzer takes carefully crafted test cases as input, to bootstrap the first mutations. The test cases should be short, and cover as large a part of the code as possible. Sadly - I know nothing about netlink.
How about we don't prepare the input corpus... Instead we can ask AFL to "figure out" what inputs make sense. This is what Michał did back in 2014 with JPEGs, and it worked for him. With this in mind, here is our input corpus:

    mkdir inp
    echo "hello world" > inp/01.txt

Instructions for how to compile and run the whole thing are on our GitHub. It boils down to:

    virtme-run \
        --kimg bzImage \
        --rw --pwd --memory 512M \
        --script-sh "./afl-fuzz -i inp -o out -- fuzznetlink"

With this running you will see the familiar AFL status screen:

Further notes

That's it. Now you have a custom hardened kernel, running a basic coverage-guided fuzzer, all inside KVM. Was it worth the effort? Even with this basic fuzzer, and no input corpus, after a day or two the fuzzer found an interesting code path: NEIGH: BUG, double timer add, state is 8. With a more specialized fuzzer, some work on improving the "stability" metric, and a decent input corpus, we could expect even better results. If you want to learn more about what netlink sockets actually do, see the blog post by my colleague Jakub Sitnicki, Multipath Routing in Linux - part 1. There is also a good chapter about it in the Linux Kernel Networking book by Rami Rosen.

In this blog post we haven't mentioned:

- details of the AFL shared_memory setup
- the implementation of AFL persistent mode
- how to create a network namespace to isolate the effects of weird netlink commands, and improve the "stability" AFL score
- the technique for reading dmesg (/dev/kmsg) to find kernel crashes
- the idea of running AFL outside of KVM, for speed and stability - currently the tests aren't stable after a crash is found

But we achieved our goal - we set up a basic, yet still useful, fuzzer against the kernel. Most importantly, the same machinery can be reused to fuzz other parts of Linux subsystems - from file systems to the bpf verifier. I also learned a hard lesson: tuning fuzzers is a full-time job. Proper fuzzing is definitely not as simple as starting it up and idly waiting for crashes.
There is always something to improve, tune, and re-implement. A quote at the beginning of the mentioned presentation by Mateusz Jurczyk resonated with me: "Fuzzing is easy to learn but hard to master." Happy bug hunting!

Streamline Your Online Donation Process with These 9 Steps

HostGator Blog -

The post Streamline Your Online Donation Process with These 9 Steps appeared first on HostGator Blog. Fundraising for your nonprofit group, school, or personal cause is usually more productive when it’s super-easy for people to donate. But online fundraising faces some of the same challenges as online retail. People often start a transaction, then quit because they get frustrated or distracted. As many as 60% of the people who go to a donation page abandon the process before they complete their online donation. That’s not great, but the best practices that reduce retail cart abandonment can cut donor abandonment, too. Here’s how to make your online donation process easier to complete. 9 Steps to Hassle-Free Online Donations Google’s Retail UX Playbook makes recommendations for eCommerce checkout that you can adapt to streamline your online donation process, too. 1. Make it easy for visitors to stay on the donation page. “Limit exit points” in the payment process, like links to social media accounts and related content, so you don’t lose potential donors to distractions. 2. Show donors how far along they are in the donation process. Have you ever started an online donation, then immediately wondered how long it’s going to take you to get it done, and maybe bailed out because you’re not sure you have time to complete it before your Uber arrives/baby wakes up/boss starts the meeting? It’s not just you (or me). People like to know what they’re getting into, even when what they’re getting into is a relatively short online payment process. Google recommends using a progress bar on the page if the conversion flow has more than 2 steps. 3. Remind your potential donors of why they’re entering their data. Your donation checkout pages should include your fundraising goal, so people are more likely to see the process through to the end. 
The example above, from the ASPCA, includes three clear reminders of why this person is donating: in the header, in the touching puppy photo, and in the paragraph on the side. By donating, they can be a lifesaver to animals. 4. Interruptions happen, but you can make it easy for donors to finish later. Your checkout page should let people complete their donation on another device, either by emailing themselves a link or saving their data to come back to on your site. These first four steps focus on what should and should not be part of your online donation process. The next four steps focus on how your online donation form can move people through the process to complete their donation. 5. Make sure that your online donation form only includes required fields. We’re talking about the fields that are required to verify donors’ identity and payment information. The longer your form is, and the more information prospective donors must enter, the more likely they are to abandon it. 6. Give users instant feedback as they fill out the donation form. Inline validation prevents the frustrating experience of filling out a form completely and then seeing it rejected because of a data entry error. Set up your form to show a check mark when fields like email addresses, credit card numbers, and billing zip codes are entered properly, and your visitors won’t have to scroll back up the page to fix errors. In the example below, from the Red Cross, correctly completed fields receive a green checkmark, while incomplete fields get highlighted in red with an X. 7. Enable autofill for your form fields. The less information people must enter by hand, the more likely they are to complete your donation form. That’s especially true if they’re visiting your site on a mobile phone. 8. Make your donation form mobile-friendly. Your donation form’s fields for card numbers, phone numbers, CVVs, and zip codes should use a numeric keypad.
Is there anything more frustrating than trying to enter a credit card number on a typewriter-style keyboard? Especially on your phone? After you set up your form, preview it on several different browsers and devices—especially mobile browsers. When your form is live, it’s a good idea to run A/B tests to see which format delivers the highest conversion rate. 9. Say thank you! Finally, there’s one more thing your donation process should do. Always thank your donors immediately after they contribute. It’s a good idea to follow up again later via email with a progress report or results on your fundraiser. Hold On to Your Donor Data Even if you’re only fundraising for one project right now, hold on to your list of donors (and keep that data secure). Besides sending thank-you notes and project updates, you may want to reach out to those contributors if you have other fundraising projects in the future. And if you’re raising money for a nonprofit organization or political campaign, you’ll need good donor records to comply with reporting rules. Just make sure you abide by GDPR and request their permission to be contacted in the future. A donation plugin like the ones we’ll look at next can help you store and manage your donor information. Donation Plugins for Your WordPress Website The fastest and easiest way to start taking donations is to install a donation plugin on your WordPress site. Here are a few of the most popular WordPress plugins for nonprofits. 1. Give Give lets you customize your donation forms, accept one-time and recurring donations, and accept donations in honor of or in memory of someone. Give’s dashboard helps manage your donor information for receipts, tax reporting, and more. The basic plugin is free. Add-ons for upgraded features, credit-card processing, and branded payment gateways like Stripe and PayPal are available as monthly bundle subscriptions or individually. 2. 
Seamless Donations Seamless Donations offers a quick setup to link donations to your PayPal account. Seamless also lets donors choose between one-time and recurring contributions. You can buy premium extensions to add functions like custom donation levels, enhanced thank-you notices for donors, and a widget pack that lets you display recent donations, total donations, and other data on your site. 3. Charitable Charitable integrates with WordPress and has a free theme of its own that you can apply to your site. The free basic plugin lets you direct contributions to your PayPal account, and it allows you to set up multiple fundraising campaigns. Premium packages add more payment gateways, email marketing integrations, and more. Ready to Set Up Your Fundraising Website? Get started today with HostGator’s shared hosting plan that keeps your costs low and includes a free SSL certificate to protect your donors’ personal information. Find the post on the HostGator Blog

4 Smart Ways to Make Remote Work, Work for You

Pickaweb Blog -

The future of remote work has never looked brighter. More and more people are working remotely, and there are opportunities of all kinds for them. You aren’t just limited to telecommuting: you can keep your job and work from home, start freelancing and travel the world, or change locations frequently and have a remote… The post 4 Smart Ways to Make Remote Work, Work for You appeared first on Pickaweb.

Amazon Aurora PostgreSQL Serverless – Now Generally Available

Amazon Web Services Blog -

The database is usually the most critical part of a software architecture, and managing databases, especially relational ones, has never been easy. For this reason, we created Amazon Aurora Serverless, an auto-scaling version of Amazon Aurora that automatically starts up, shuts down, and scales up or down based on your application workload. The MySQL-compatible edition of Aurora Serverless has been available for some time now. I am pleased to announce that the PostgreSQL-compatible edition of Aurora Serverless is generally available today. Before moving on with the details, I take the opportunity to congratulate the Amazon Aurora development team, which has just won the 2019 Association for Computing Machinery’s (ACM) Special Interest Group on Management of Data (SIGMOD) Systems Award! When you create a database with Aurora Serverless, you set the minimum and maximum capacity. Your client applications transparently connect to a proxy fleet that routes the workload to a pool of resources that are automatically scaled. Scaling is very fast because resources are “warm” and ready to be added to serve your requests. Aurora Serverless does not change how Aurora manages storage. The storage layer is independent from the compute resources used by the database, and there is no need to provision storage in advance. The minimum storage is 10GB and, based on the database usage, Amazon Aurora storage will automatically grow, up to 64 TB, in 10GB increments with no impact on database performance.

Creating an Aurora Serverless PostgreSQL Database

Let’s start an Aurora Serverless PostgreSQL database and see the automatic scalability at work. From the Amazon RDS console, I select to create a database using Amazon Aurora as the engine. Currently, Aurora Serverless is compatible with PostgreSQL version 10.5. When I select that version, the serverless option becomes available.
I give the new DB cluster an identifier, choose my master username, and let Amazon RDS generate a password for me. I will be able to retrieve my credentials during database creation. I can now select the minimum and maximum capacity for my database, in terms of Aurora Capacity Units (ACUs), and in the additional scaling configuration I choose to pause compute capacity after 5 minutes of inactivity. Based on my settings, Aurora Serverless automatically creates scaling rules for thresholds for CPU utilization, connections, and available memory. Testing Some Load on the Database To generate some load on the database I am using sysbench on an EC2 instance. There are a couple of Lua scripts bundled with sysbench that can help generate an online transaction processing (OLTP) workload: The first script, parallel_prepare.lua, generates 100,000 rows per table for 24 tables. The second script, oltp.lua, generates workload against those data using 64 worker threads. By using those scripts, I start generating load on my database cluster. As you can see from this graph, taken from the RDS console monitoring tab, the serverless database capacity grows and shrinks to follow my requirements. The metric shown on this graph is the number of ACUs used by the database cluster. First it scales up to accommodate the sysbench workload. When I stop the load generator, it scales down and then pauses. Available Now Aurora Serverless PostgreSQL is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). With Aurora Serverless, you pay on a per-second basis for the database capacity you use when the database is active, plus the usual Aurora storage costs. For more information on Amazon Aurora, I recommend this great post explaining why and how it was created: Amazon Aurora ascendant: How we designed a cloud-native relational database It’s never been so easy to use a relational database in production. 
I am so excited to see what you are going to use it for!
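As a footnote to the load test above, the sysbench invocations can be sketched roughly as follows. This is a sketch assuming sysbench 0.5-style options (--test, --oltp-tables-count, --oltp-table-size, --num-threads); exact flags vary between sysbench versions, so treat them as assumptions rather than a verified command line:

```python
def sysbench_cmd(script: str, *, tables: int = 24, rows: int = 100_000,
                 threads: int = 1, action: str = "run") -> list:
    """Build a sysbench 0.5-style command line for a bundled Lua script."""
    return [
        "sysbench",
        f"--test={script}",
        f"--oltp-tables-count={tables}",
        f"--oltp-table-size={rows}",
        f"--num-threads={threads}",
        action,
    ]

# Prepare 24 tables of 100,000 rows, then run the OLTP workload with 64 threads,
# mirroring the walkthrough above (connection options omitted for brevity).
prepare = sysbench_cmd("parallel_prepare.lua")
workload = sysbench_cmd("oltp.lua", threads=64)
```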

The Best and the Worst Website Hosting

InMotion Hosting Blog -

Choosing a host for your website can be a tricky decision. With so many hosts to navigate, it's hard to separate the bad from the good. Here's the lowdown on selecting the best website hosting for your current project. There are three categories you should look at when evaluating a host, along with what the best hosts do compared to what the worst hosting providers are guilty of. Continue reading The Best and the Worst Website Hosting at The Official InMotion Hosting Blog.

What Is the Fastest Web Hosting

HostGator Blog -

The post What Is the Fastest Web Hosting appeared first on HostGator Blog.

If you have a slow loading website, you're killing your site before it's even had a chance to succeed. Today, having a fast web host is no longer an option. It's a necessity. With a slow loading site you'll be hurting your search engine rankings, impairing your user experience, and likely losing a lot of potential sales. All the onsite optimization in the world won't matter if your web host is slow. While on-page improvements like image optimization make a big difference, your web host sets the stage for your site speed. You can think of your hosting company as either making it easier or harder to achieve blazing fast loading speeds. Below we not only examine why a fast loading website is so important, but also show you what to look for in fast web hosting services.

Why You Need a Fast Web Host

If you haven't heard the news, attention spans are down. You can't expect your users to sit around and wait for your site to load. In fact, if your site takes longer than 2 seconds to load, they'll click the back button and head over to one of your competitors. Your web host can either greatly enhance your site speed or make it very difficult to achieve even subpar loading speeds. Here are some of the biggest reasons you'll want fast web hosting services behind you:

1. Rank Higher in the Search Engine Rankings

Google loves websites that load fast. Think about it: Google's mission is to provide search engine users with the most relevant and highest quality results. When your site loads slowly, your visitors are immediately put off. They start moving their mouse (or trackpad) toward the back button as fast as possible. Site loading speed is a Google ranking factor. Keep in mind that it's not a big one, but everything adds up when it comes to SEO. Plus, if your site takes too long to load, Google will send crawlers to your website less often.
So, they won't know you've published that new blog post you spent hours on. Beyond the ranking factor itself, a slow loading website will also hurt user experience signals like your bounce rate. A very high bounce rate tells Google that your site is low-quality, so you can say goodbye to those hard-earned rankings.

2. Offer a Positive User Experience

If you want your site to succeed in the long term, you need to create a positive experience for your visitors. Face it, the web is a crowded place, and a great way to stand out is with your user experience. Create a super enjoyable experience for your visitors and they'll come back to your site again and again (and maybe tell their friends about it too!). As you've already seen, today's internet users are impatient. We all are. We all expect things to happen instantly. So, if your site fails to meet this expectation, you're going to have a lot of disappointed visitors on your hands. Also, as mentioned above, a great user experience improves key website stats like bounce rate and dwell time. This has a compounding effect, making it easier for your site to rank and helping turn visitors into customers, subscribers, and more.

3. Create a Stellar First Impression

The moment a visitor lands on your website, they're judging you. This doesn't mean you need to feel self-conscious; it's something we all do. Visitors immediately make judgments about your site. Should they trust you? Is this site worth their time? Are you a professional? These judgments happen instantly. One of the best ways to bring visitors over to your side is a fast loading website. Sure, there are other important elements of your site that need to be in place too. But users tend to view fast loading sites as more trustworthy.
On the other hand, when your site loads slowly, it does the exact opposite of inspiring confidence. In fact, if your site loads too slowly, your visitors may never return.

4. Improve Your Conversions

Most people will leave a slow loading site and never return. According to some experts, 40% of people will abandon your site if it takes more than 3 seconds to load. So, let's say your site gets 10,000 visitors per month. With a slow loading site, 4,000 of those visitors will leave! That's 4,000 visitors who could have become customers. If your site generates revenue, every lost visitor is a potential lost customer. Of course, not every visitor will buy something from you. But losing 40% of your visitors to a fixable loading problem isn't something you want to experience. The more traffic your site gets, the more tragic this figure becomes.

5. Make Speed Optimization Easier

Finally, when you choose a fast hosting service, you make your life as a webmaster much easier. No matter what kind of hosting plan you're on, there are steps you can take to get the highest performance out of your site. But if you're on a slow web host, even big changes will lead to only small improvements. It will be nearly impossible to overcome the slow baseline set by your host. Some web hosts are so fast that they deliver incredible loading speeds without any onsite speed optimization. Ideally, you'll have a host that values speed, along with proper onsite optimization, to truly get the highest level of performance out of your site.

What to Look for in a Fast Web Host

Now that you understand the importance of a fast loading website, let's look at what you should look for in a web host. Note that some of these factors are features of the hosting company, while others depend more on the needs of your site.
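The abandonment arithmetic from the conversions section above is easy to sketch. Note that the 40% rate is the article's cited estimate, not a measured constant:

```python
def visitors_lost(monthly_visitors: int, abandon_rate: float = 0.40) -> int:
    """Estimate monthly visitors lost to a slow-loading site (> 3 s),
    using the cited 40% abandonment rate as the default."""
    return round(monthly_visitors * abandon_rate)

# 10,000 visitors per month at a 40% abandonment rate -> 4,000 lost
lost = visitors_lost(10_000)
```

The same function makes it easy to see how the figure scales with traffic: double the visitors, double the loss.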
The following factors will help you find a host with the fastest web hosting:

1. Choosing the Right Plan

One of the biggest influences on the performance of your site is choosing the right type of hosting and plan for your needs. For example, if you're on a basic shared hosting plan but your site gets a large volume of monthly traffic, this will probably lead to poor performance. There's no way around it. It's important to choose a host that can actually support your website's needs. Most hosting plans have limits, and when you hit them your site's performance will suffer. First, ensure that you choose a hosting plan that can support your existing traffic and storage levels while giving you room to grow or upgrade. Second, consider the type of hosting best suited to your site. For example, if you run a WordPress site whose traffic is increasing, you might want to consider moving to WordPress hosting.

2. Server Location

One thing that's typically overlooked is the physical location of the servers. If you primarily get traffic from the UK but your servers are based in Los Angeles, your loading times won't be as fast as they could be. If you can select the location of your servers, choose the location closest to your website visitors, not to your own physical location. If you can't, at least make sure there's a bundled CDN, or that you have the opportunity to use one.

3. Quality Server Hardware

The server hardware will also influence your site's performance. Look for a host that uses the latest web server hardware, not one trying to get by with older equipment. Typically, the higher the level of your plan, the better the web server hardware you'll have access to.
This includes things like the quality of the processors, integrated caching, unlimited bandwidth, and support for faster software updates, like PHP 7.

4. Type of Drives

When you purchase hosting services, you're basically buying a place to store your site's files. If possible, find a host that uses solid state drives (SSDs). This is an upgrade from traditional hard disk drives (HDDs): SSDs offer a much higher level of performance and can execute server commands at a much faster rate.

5. Bundled CDN

Using a CDN will help your site load faster across the board. With a CDN behind you, cached versions of your site are stored on servers across the globe, and each visitor is served from the server in the closest physical proximity.

Choosing the Fastest Web Host for Your Needs

Even though speed is important, there's a lot more to choosing a web host than evaluating speed alone. Here are a few factors you'll want alongside fast performance:

1. Fits Your Budget

Even if you've found the fastest web host in the world, it won't matter if you can't afford it. Usually, a VPS or dedicated server plan will be faster than a shared hosting plan. You'll need to find the fastest form of hosting available within your budget. In time, as your site and traffic levels grow, you can upgrade to more expensive forms of hosting that also offer better performance.

2. Quality of Support

A rock-solid customer support team should be mandatory when deciding between web hosting providers. The chances are high that you'll need to reach out to support at some point to resolve an issue with your hosting or your website. Look for a support team that offers help through your preferred channel, and that is actually responsive and helpful.
When an issue arises with your website, you'll want to be confident you can get it resolved as quickly as possible.

3. Ability to Grow With You

Chances are your site will grow over time, and so will your hosting needs. Even if you're building your first site, choose a hosting company that can grow with you. Be on the lookout for hosts that offer multiple plans, as well as multiple forms of hosting services.

4. Bundled Tools and Features

When looking for a host, make sure they can meet all of your needs, not just your need for speed. For example, maybe you want a host that lets you manage and send email, or you know you want to use cPanel to manage your web server, or you want a host with a website builder. Take stock of your additional needs before you begin researching different web hosting services.

5. High Level of Reliability

Site uptime is incredibly important. Even if your site loads in 10 ms, that won't matter if it's offline all the time. Look for a host that offers an uptime guarantee of 99.99%. This uptime percentage refers to the share of time your site will be online. It might not seem crucial, but think about it from the perspective of a visitor who comes to your site while it's offline, or one who came intending to make a purchase but sees only an offline site. Look for a host that offers a high level of uptime, and even an uptime guarantee that compensates you if availability drops below their baseline percentage.

Choose a Fast Web Host for Your Website

Hopefully, you now have a better understanding of why fast web hosting is so important. With a slow loading host behind you, you're going to face an uphill battle from the very beginning. Using the information above, you'll be on your way to finding the best blazing-fast web hosting package for your website.
Whether you go for shared hosting, dedicated server hosting, or any of our other web hosting packages, our team at HostGator will help you find the best service for you and your website. If you’re looking for a web host that values load time speed and also checks all the boxes above, then explore HostGator’s hosting plans today. Find the post on the HostGator Blog
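As a quick footnote to the reliability point above, an uptime guarantee translates directly into an annual downtime budget; a minimal sketch:

```python
def downtime_minutes_per_year(uptime_pct: float) -> float:
    """Convert an uptime guarantee (as a percentage) into the annual
    downtime it still permits, in minutes."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - uptime_pct / 100)

# A 99.99% guarantee still allows roughly 52.6 minutes of downtime per year
budget = downtime_minutes_per_year(99.99)
```

This is why the difference between 99.9% and 99.99% matters: one allows around 8.8 hours of downtime per year, the other under an hour.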

New gTLD Report – June 2019

Reseller Club Blog -

The second quarter of this year has drawn to an end, and just like that, we're through with the first half of 2019! The month of June closed with a good 31% rise in the total number of new gTLD registrations. While it wasn't surprising to see .SITE, .TOP, .ONLINE and .XYZ retain their spots in the top 5 new gTLDs with the highest number of registrations, we did see a remarkable improvement in the performance of a couple of other new gTLDs. Read on to learn about the new gTLDs that made it to the top 15 in June 2019. Here's a look at the top 15 most registered new gTLDs that took the lead in the month of June. *Registration numbers are facilitated by ResellerClub.

.SITE: .SITE was the top contributor to this list, with a whopping 211% spike in registrations as compared to the month of May. The promo price of $1.99, which drove registrations in global markets, was the primary reason for this new gTLD's enormous growth in June.

.ONLINE: With a 16% share of the overall new gTLD registrations, .ONLINE has been a constant among the top 5 new gTLDs for the sixth month in a row. The fact that this new gTLD has kept up such an impressive streak through every month of 2019 so far goes to show how widely preferred .ONLINE is!

.TOP: The promo price of $0.99 drove the sales of .TOP in June, helping it contribute to our top 5 new gTLDs list with a total share of 16%. Compared to the registrations that took place in May, .TOP registrations increased by over 25% in the month of June.

.XYZ: As in the month of May, .XYZ continued to retain its place in the top 5 new gTLD list in June. We are positive that this new gTLD will remain in the top 5 in July as well, since .XYZ will be available at an exciting promo price throughout the month of July. Stay tuned for more details on this promo!

.FUN: .FUN made its way back to the top 5 registered new gTLDs in June.
The promo price of $0.99, which was active throughout the month of June, ensured that the new gTLD held on to its top 5 spot in June as well. Not only that, .FUN registrations also grew by a good 64% in the month of June. On the other hand, registrations of .SPACE saw a phenomenal spike of 412%. The other new gTLDs that outshone the rest included .LIVE with a contribution of 3.41%, .CLUB with a contribution of 3.25%, and .STORE, .LIFE, .WORLD, .WEBSITE, .ICU and .GLOBAL with a cumulative contribution of 7%. Here's a quick overview of the exciting domain promos we've lined up for July, so that you can make the most of them:

.XYZ at $0.69: Offer this versatile new gTLD to your customers at an incredibly low price and help them give their website a short and memorable domain extension.

.SITE at $4.99: Enable your customers to establish their online identity with a .SITE domain name at this exciting promo price!

.SHOP at $6.99: Have customers with e-commerce or brick-and-mortar stores? Offer them .SHOP so they can establish a prominent identity for their online shop.

That's all, folks! Check our list of all the active domain promos, offer them to your customers, and let them choose the right one for their business. You can also head to our Facebook or Twitter pages for updates about our ongoing domain promos. Remember to look out for posts with the hashtag #domainpromos. The post New gTLD Report – June 2019 appeared first on ResellerClub Blog.

A Look at The WP Engine Summit 2019

WP Engine -

Just over a week ago, brand trailblazers, agency innovators, and industry visionaries came together in Austin, Texas for the 2019 WP Engine Summit. The three-day event, held at The Fairmont in downtown Austin, included inspiring and informative sessions, product demos, and a little bit of fun. This was WP Engine's 4th annual Summit; we were honored… The post A Look at The WP Engine Summit 2019 appeared first on WP Engine.

Meet one of our Customer Success Supervisors

InMotion Hosting Blog -

Dan recently joined our team as a customer success supervisor and we’re super thankful for all he does to make sure all teams and shifts run smoothly. Learn a bit more about Dan in the interview below: How did you join the InMotion Hosting team? Dan: I was happy in my old job, I had a good team and good management. But I wanted to live closer to friends and family. Continue reading Meet one of our Customer Success Supervisors at The Official InMotion Hosting Blog.

5 Takeaways From IRCE 2019

Nexcess Blog -

The year of content has passed. That doesn't mean content is no longer a priority; it just means that other areas are starting to require more attention: the adoption of omnichannel, the creation of unique and memorable purchasing experiences, and the creation and delivery of content in the best way possible. This year's IRCE saw all… Continue reading →

