Corporate Blogs

Podcast #299: February 2019 Updates

Amazon Web Services Blog -

Simon guides you through lots of new features, services and capabilities that you can take advantage of, including the new AWS Backup service, more powerful GPU capabilities, new SLAs and much, much more!

Chapters:
- Service Level Agreements 0:17
- Storage 0:57
- Media Services 5:08
- Developer Tools 6:17
- Analytics 9:54
- AI/ML 12:07
- Database 14:47
- Networking & Content Delivery 17:32
- Compute 19:02
- Solutions 21:57
- Business Applications 23:38
- AWS Cost Management 25:07
- Migration & Transfer 25:39
- Application Integration 26:07
- Management & Governance 26:32
- End User Computing 29:22

Additional Resources

Topic || Service Level Agreements 0:17
- Amazon Kinesis Data Firehose Announces 99.9% Service Level Agreement
- Amazon Kinesis Data Streams Announces 99.9% Service Level Agreement
- Amazon Kinesis Video Streams Announces 99.9% Service Level Agreement
- Amazon EKS Announces 99.9% Service Level Agreement
- Amazon ECR Announces 99.9% Service Level Agreement
- Amazon Cognito Announces 99.9% Service Level Agreement
- AWS Step Functions Announces 99.9% Service Level Agreement
- AWS Secrets Manager Announces Service Level Agreement
- Amazon MQ Announces 99.9% Service Level Agreement

Topic || Storage 0:57
- Introducing AWS Backup
- Introducing Amazon Elastic File System Integration with AWS Backup
- AWS Storage Gateway Integrates with AWS Backup
- Amazon EBS Integrates with AWS Backup to Protect Your Volumes
- AWS Storage Gateway Volume Detach and Attach
- AWS Storage Gateway – Tape Gateway Performance
- Amazon FSx for Lustre Offers New Options and Faster Speeds for Working with S3 Data

Topic || Media Services 5:08
- AWS Elemental MediaConvert Adds IMF Input and Enhances Caption Burn-In Support
- AWS Elemental MediaLive Adds Support for AWS CloudTrail
- AWS Elemental MediaLive Now Supports Resource Tagging
- AWS Elemental MediaLive Adds I-Frame-Only HLS Manifests and JPEG Outputs

Topic || Developer Tools 6:17
- Amazon Corretto is Now Generally Available
- AWS CodePipeline Now Supports Deploying to Amazon S3
- AWS Cloud9 Supports AWS CloudTrail Logging
- AWS CodeBuild Now Supports Accessing Images from Private Docker Registry
- Develop and Test AWS Step Functions Workflows Locally
- AWS X-Ray SDK for .NET Core is Now Generally Available

Topic || Analytics 9:54
- Amazon Elasticsearch Service doubles maximum cluster capacity with 200 node cluster support
- Amazon Elasticsearch Service announces support for Elasticsearch 6.4
- Amazon Elasticsearch Service now supports three Availability Zone deployments
- Now bring your own KDC and enable Kerberos authentication in Amazon EMR
- Source code for the AWS Glue Data Catalog client for Apache Hive Metastore is now available for download

Topic || AI/ML 12:07
- Amazon Comprehend is now Integrated with AWS CloudTrail
- Object Bounding Boxes and More Accurate Object and Scene Detection are now Available for Amazon Rekognition Video
- Amazon Elastic Inference Now Supports TensorFlow 1.12 with a New Python API
- New in AWS Deep Learning AMIs: Updated Elastic Inference for TensorFlow, TensorBoard 1.12.1, and MMS 1.0.1
- Amazon SageMaker Batch Transform Now Supports TFRecord Format
- Amazon Transcribe Now Supports US Spanish Speech-to-Text in Real Time

Topic || Database 14:47
- Amazon Redshift now runs ANALYZE automatically
- Introducing Python Shell Jobs in AWS Glue
- Amazon RDS for PostgreSQL Now Supports T3 Instance Types
- Amazon RDS for Oracle Now Supports T3 Instance Types
- Amazon RDS for Oracle Now Supports SQLT Diagnostics Tool Version 12.2.180725
- Amazon RDS for Oracle Now Supports January 2019 Oracle Patch Set Updates (PSU) and Release Updates (RU)
- Amazon DynamoDB Local Adds Support for Transactional APIs, On-Demand Capacity Mode, and 20 GSIs

Topic || Networking and Content Delivery 17:32
- Network Load Balancer Now Supports TLS Termination
- Amazon CloudFront announces six new Edge locations across United States and France
- AWS Site-to-Site VPN Now Supports IKEv2
- VPC Route Tables Support up to 1,000 Static Routes

Topic || Compute 19:02
- Announcing a 25% price reduction for Amazon EC2 X1 Instances in the Asia Pacific (Mumbai) AWS Region
- Amazon EKS Achieves ISO and PCI Compliance
- AWS Fargate Now Has Support For AWS PrivateLink
- AWS Elastic Beanstalk Adds Support for Ruby 2.6
- AWS Elastic Beanstalk Adds Support for .NET Core 2.2
- Amazon ECS and Amazon ECR now have support for AWS PrivateLink
- GPU Support for Amazon ECS now Available
- AWS Batch now supports Amazon EC2 A1 Instances and EC2 G3s Instances

Topic || Solutions 21:57
- Deploy Micro Focus Enterprise Server on AWS with New Quick Start
- AWS Public Datasets Now Available from UK Meteorological Office, Queensland Government, University of Pennsylvania, Buildzero, and Others
- Quick Start Update: Active Directory Domain Services on the AWS Cloud
- Introducing the Media2Cloud solution

Topic || Business Applications 23:38
- Alexa for Business now offers IT admins simplified workflow to set up shared devices

Topic || AWS Cost Management 25:07
- Introducing Normalized Units Information for Amazon EC2 Reservations in AWS Cost Explorer

Topic || Migration and Transfer 25:39
- AWS Migration Hub Now Supports Importing On-Premises Server and Application Data to Track Migration Progress

Topic || Application Integration 26:07
- Amazon SNS Message Filtering Adds Support for Multiple String Values in Blacklist Matching

Topic || Management and Governance 26:32
- AWS Trusted Advisor Expands Functionality With New Best Practice Checks
- AWS Systems Manager State Manager Now Supports Management of In-Guest and Instance-Level Configuration
- AWS Config Increases Default Limits for AWS Config Rules
- Introducing AWS CloudFormation UpdateReplacePolicy Attribute
- Automate WebSocket API Creation in Amazon API Gateway Using AWS CloudFormation
- AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise Now Support AWS CloudFormation
- VPC Route Tables Support up to 1,000 Static Routes
- Amazon CloudWatch Agent Adds Support for Procstat Plugin and Multiple Configuration Files
- Improve Security Of Your AWS SSO Users Signing In To The User Portal By Using Email-based Verification

Topic || End User Computing 29:22
- Introducing Amazon WorkLink
- AppStream 2.0 enables custom scripts before session start and after session termination

About the AWS Podcast

The AWS Podcast is a cloud platform podcast for developers, dev ops, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives and interviews. Whether you're building machine learning and AI models, open source projects, or hybrid cloud solutions, the AWS Podcast has something for you. Like the Podcast? Rate us on iTunes and send us your suggestions, show ideas, and comments. We want to hear from you!

The 5 Top Takeaways From Magento Live Australia 2019

Nexcess Blog -

Magento Live Australia has come and gone, and another year of informative sessions, actionable strategies, and future predictions has passed. For merchants, changes to Magento, such as the 2.3.1 update, came with a promise of increased accessibility and improved integration. For developers, roundtables, discussions, and future developments came with personal, professional, and business recommendations… Continue reading →

SOCKMAP - TCP splicing of the future

CloudFlare Blog -

Recently we stumbled upon the holy grail for reverse proxies: a TCP socket splicing API. This caught our attention because, as you may know, we run a global network of reverse proxy services. Proper TCP socket splicing reduces the load on userspace processes and enables more efficient data forwarding. We realized that the Linux kernel's SOCKMAP infrastructure can be reused for this purpose. SOCKMAP is a very promising API and is likely to cause a tectonic shift in the architecture of data-heavy applications like software proxies.

(Image by Mustad Marine, public domain)

But let's rewind a bit.

Birthing pains of L7 proxies

Transmitting large amounts of data from userspace is inefficient. Linux provides a couple of specialized syscalls that aim to address this problem. For example, the sendfile(2) syscall (which Linus doesn't like) can be used to speed up transferring large files from disk to a socket. Then there is splice(2), which traditional proxies use to forward data between two TCP sockets. Finally, vmsplice can be used to stick a memory buffer into a pipe without copying, but it is very hard to use correctly. Sadly, sendfile, splice and vmsplice are very specialized, synchronous and solve only one part of the problem - they avoid copying the data to userspace. They leave other efficiency issues unaddressed.

                between                      avoids user-space memory copy   zerocopy
    sendfile    disk file --> socket        yes                             no
    splice      pipe <--> socket            yes                             yes?
    vmsplice    memory region --> pipe      no                              yes

Processes that forward large amounts of data face three problems:

- Syscall cost: making multiple syscalls for every forwarded packet is costly.
- Wakeup latency: the user-space process must be woken up often to forward the data. Depending on the scheduler, this may result in poor tail latency.
- Copying cost: copying data from kernel to userspace and then immediately back to the kernel is not free and adds up to a measurable cost.

Many tried

Forwarding data between TCP sockets is a common practice.
It's needed for:

- Transparent forward HTTP proxies, like Squid.
- Reverse caching HTTP proxies, like Varnish or NGINX.
- Load balancers, like HAProxy, Pen or Relayd.

Over the years there have been many attempts to reduce the cost of dumb data forwarding between TCP sockets on Linux. This issue is generally called "TCP splicing", "L7 splicing", or "socket splicing". Let's compare the usual ways of doing TCP splicing. To simplify the problem, instead of writing a rich Layer 7 TCP proxy, we'll write a trivial TCP echo server. It's not a joke. An echo server can illustrate TCP socket splicing well. You know - "echo" basically splices the socket… with itself!

Naive: read write loop

The naive TCP echo server would look like:

    while data:
        data = read(sd, 4096)
        write(sd, data)

Nothing simpler. On a blocking socket this is a totally valid program, and it will work just fine. For completeness I prepared full code here.

Splice: specialized syscall

Linux has an amazing splice(2) syscall. It can tell the kernel to move data between a TCP buffer on a socket and a buffer on a pipe. The data remains in the buffers, on the kernel side. This solves the problem of needlessly having to copy the data between userspace and kernel-space. With the SPLICE_F_MOVE flag the kernel may be able to avoid copying the data at all! Our program using splice() looks like:

    pipe_rd, pipe_wr = pipe()
    fcntl(pipe_rd, F_SETPIPE_SZ, 4096);
    while n:
        n = splice(sd, pipe_wr, 4096)
        splice(pipe_rd, sd, n)

We still need to wake up the userspace program and make two syscalls to forward any piece of data, but at least we avoid all the copying. Full source.

io_submit: Using Linux AIO API

In a previous blog post about io_submit() we proposed using the AIO interface with network sockets. Read the blog post for details, but here is the prepared program that has the echo server loop implemented with only a single syscall.
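To make the naive read/write loop concrete, here is a runnable Python sketch of the same idea (Python stands in for the C of the linked full code, a socketpair stands in for an accepted TCP connection, and the `naive_echo` name is mine):

```python
import socket
import threading

def naive_echo(sd, bufsize=4096):
    # the naive loop: two syscalls and two kernel<->userspace copies per block
    while True:
        data = sd.recv(bufsize)   # copy kernel -> userspace
        if not data:              # peer closed the connection
            break
        sd.sendall(data)          # copy userspace -> kernel

# socketpair stands in for a real TCP connection
client, server = socket.socketpair()
t = threading.Thread(target=naive_echo, args=(server,))
t.start()

client.sendall(b"hello")
received = client.recv(4096)
print(received)                   # b'hello'

client.close()                    # makes recv() in the echo loop return b''
t.join()
```

Every echoed block costs two syscalls and two copies here, which is exactly the overhead the rest of the post tries to eliminate.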
(Image by jrsnchzhrs, CC By-Nd 2.0)

SOCKMAP: The ultimate weapon

In recent years the Linux kernel introduced an eBPF virtual machine. With it, user-space programs can run specialized, non-Turing-complete bytecode in the kernel context. Nowadays it's possible to select eBPF programs for dozens of use cases, ranging from packet filtering to policy enforcement. From kernel 4.14 Linux got new eBPF machinery that can be used for socket splicing - SOCKMAP. It was created by John Fastabend, exposing the Strparser interface to eBPF programs. Cilium uses SOCKMAP for Layer 7 policy enforcement, and all the logic it uses is embedded in an eBPF program. The API is not well documented, requires root and, from our experience, is slightly buggy. But it's very promising. Read more:

- LPC2018 - Combining kTLS and BPF for Introspection and Policy Enforcement (Paper, Video, Slides)
- Original SOCKMAP commit

This is how to use SOCKMAP: SOCKMAP, or specifically "BPF_MAP_TYPE_SOCKMAP", is a type of an eBPF map. This map is an "array" - indices are integers. All this is pretty standard. The magic is in the map values - they must be TCP socket descriptors. This map is very special - it has two eBPF programs attached to it. You read it right: the eBPF programs live attached to a map, not attached to a socket, cgroup or network interface as usual. This is how you would set up SOCKMAP in a user program:

    sock_map = bpf_create_map(BPF_MAP_TYPE_SOCKMAP, sizeof(int), sizeof(int), 2, 0)

    prog_parser = bpf_load_program(BPF_PROG_TYPE_SK_SKB, ...)
    prog_verdict = bpf_load_program(BPF_PROG_TYPE_SK_SKB, ...)

    bpf_prog_attach(bpf_parser, sock_map, BPF_SK_SKB_STREAM_PARSER)
    bpf_prog_attach(bpf_verdict, sock_map, BPF_SK_SKB_STREAM_VERDICT)

Ta-da! At this point we have an established sock_map eBPF map, with two eBPF programs attached: parser and verdict. The next step is to add a TCP socket descriptor to this map.
Nothing simpler:

    int idx = 0;
    int val = sd;
    bpf_map_update_elem(sock_map, &idx, &val, BPF_ANY);

At this point the magic happens. From now on, each time our socket sd receives a packet, prog_parser and prog_verdict are called. Their semantics are described in strparser.txt and the introductory SOCKMAP commit. For simplicity, our trivial echo server only needs the minimal stubs. This is the eBPF code:

    SEC("prog_parser")
    int _prog_parser(struct __sk_buff *skb)
    {
        return skb->len;
    }

    SEC("prog_verdict")
    int _prog_verdict(struct __sk_buff *skb)
    {
        uint32_t idx = 0;
        return bpf_sk_redirect_map(skb, &sock_map, idx, 0);
    }

Side note: for the purposes of this test program, I wrote a minimal eBPF loader. It has no dependencies (no bcc, libelf, or libbpf) and can do basic relocations (like resolving the sock_map symbol mentioned above). See the code.

The call to bpf_sk_redirect_map is doing all the work. It tells the kernel: for the received packet, please oh please redirect it from the receive queue of some socket to the transmit queue of the socket living in sock_map under index 0. In our case, these are the same sockets! Here we achieved exactly what the echo server is supposed to do, but purely in eBPF. This technology has multiple benefits. First, the data is never copied to userspace. Secondly, we never need to wake up the userspace program. All the action is done in the kernel. Quite cool, isn't it? We need one more piece of code, to hang the userspace program until the socket is closed. This is best done with good old poll(2):

    /* Wait for the socket to close. Let SOCKMAP do the magic. */
    struct pollfd fds[1] = {
        {.fd = sd, .events = POLLRDHUP},
    };
    poll(fds, 1, -1);

Full code.
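The same wait-until-the-peer-hangs-up step can be sketched in Python with the standard select module. This is an illustrative stand-in for the C snippet above: POLLRDHUP is Linux-specific and not every Python build exposes the constant, so the sketch falls back to its Linux value, and the `wait_for_hangup` name is mine:

```python
import select
import socket

# POLLRDHUP (0x2000 on Linux) reports the peer closing its half of the
# connection; fall back to the raw value where the constant is not exposed.
POLLRDHUP = getattr(select, "POLLRDHUP", 0x2000)

def wait_for_hangup(sd, timeout_ms=2000):
    """Block until the peer hangs up, mirroring the poll(2) snippet above."""
    p = select.poll()
    p.register(sd.fileno(), POLLRDHUP | select.POLLHUP)
    return p.poll(timeout_ms)

a, b = socket.socketpair()
b.close()                      # the "peer" hangs up
events = wait_for_hangup(a)
print(len(events) > 0)         # a hangup event was reported
```

In the real program the poll() call is the only thing the userspace process does; all forwarding happens in the kernel.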
The benchmarks

At this stage we have presented four simple TCP echo servers:

- naive read-write loop
- splice
- io_submit
- SOCKMAP

To recap, we are measuring the cost of three things: syscall cost, wakeup latency (mostly visible as tail latency), and the cost of copying data. Theoretically, SOCKMAP should beat all the others:

                       syscall cost   waking up userspace   copying cost
    read write loop    2 syscalls     yes                   2 copies
    splice             2 syscalls     yes                   0 copies (?)
    io_submit          1 syscall      yes                   2 copies
    SOCKMAP            none           no                    0 copies

Show me the numbers

This is the part of the post where I'm supposed to show you the breathtaking numbers, clearly distinguishing the different approaches. Sadly, benchmarking is hard, and well… SOCKMAP turned out to be the slowest. It's important to publish negative results, so here they are. Our test rig was as follows:

- Two bare-metal Xeon servers connected with a 25Gbps network.
- Both have turbo-boost disabled, and the testing programs are CPU-pinned.
- For better locality we localized RX and TX queues to one IRQ/CPU each.
- The testing server runs a script that sends 10k batches of fixed-sized blocks of data. The script measures how long it takes for the echo server to return the traffic.
- We do 10 separate runs for each measured echo-server program.
- TCP: "cubic" and NONAGLE=1.
- Both servers run the 4.14 kernel.

Our analysis of the experimental data identified some outliers. We think some of the worst times, manifested as long echo replies, were caused by unrelated factors such as network packet loss. In the charts presented we, perhaps controversially, skip the worst 1% of outliers in order to focus on what we think is the important data. Furthermore, we spotted a bug in SOCKMAP. Some of the runs were delayed by up to a whopping 64ms.
Here is one of the tests:

    Values min:236.00 avg:669.28 med=390.00 max:78039.00 dev:3267.75 count:2000000
    Values:
     value |-------------------------------------------------- count
         1 |                                                          0
         2 |                                                          0
         4 |                                                          0
         8 |                                                          0
        16 |                                                          0
        32 |                                                          0
        64 |                                                          0
       128 |                                                          0
       256 |                                                       3531
       512 |************************************************** 1756052
      1024 | *****                                               208226
      2048 |                                                      18589
      4096 |                                                       2006
      8192 |                                                          9
     16384 |                                                          1
     32768 |                                                          0
     65536 |                                                      11585
    131072 |                                                          1

The great majority of the echo runs (of 128KiB in this case) finished in the 512us band, while a small fraction stalled for 65ms. This is pretty bad and makes comparison of SOCKMAP to other implementations pretty meaningless. This is a second reason why we are skipping the worst 1% of results from all the runs - it makes the SOCKMAP numbers way more usable. Sorry.

2MiB blocks - throughput

The fastest of our programs was doing ~15Gbps over one flow, which seems to be a hardware limit. This is very visible in the first iteration, which shows the throughput of our echo programs. This test measures the time to transmit and receive 2MiB blocks of data via our tested echo server. We repeat this 10k times, and run the test 10 times. After stripping the worst 1% of numbers we get the following latency distribution:

This chart shows that both the naive read+write and io_submit programs were able to achieve a 1500us mean round trip time for a TCP echo server of 2MiB blocks. Here we clearly see that splice and SOCKMAP are slower than the others. They were CPU-bound and unable to reach the line rate. We have raised the unusual splice performance problems in the past, but perhaps we should debug it one more time. For each server we run the tests twice: without and with the SO_BUSYPOLL setting. This setting should remove the "wakeup latency" and greatly reduce the jitter. The results show that the naive and io_submit tests are almost identical. This is perfect! BUSYPOLL does indeed reduce the deviation and latency, at a cost of more CPU usage. Notice that neither splice nor SOCKMAP is affected by this setting.
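The outlier trimming used for these charts (dropping the slowest 1% of samples before computing statistics) can be sketched as follows; the `trim_worst` helper and the toy numbers are mine, chosen to mimic the 512us band plus a few 65ms SOCKMAP stalls:

```python
def trim_worst(samples, frac=0.01):
    # sort ascending and drop the largest `frac` of latency samples
    s = sorted(samples)
    keep = len(s) - int(len(s) * frac)
    return s[:keep]

# toy latencies in microseconds: mostly ~512us, plus a few 65ms stalls
latencies = [512] * 990 + [65000] * 10
trimmed = trim_worst(latencies)
print(max(latencies), max(trimmed))  # 65000 512
```

With the stalls removed, max and deviation reflect the typical behaviour rather than the bug, which is exactly why the post warns that the "max" values shown are artificially shortened.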
16KiB blocks - wakeup time

Our second run of tests was with much smaller data sizes, sending tiny 16KiB blocks at a time. This test should illustrate the "wakeup time" of the tested programs. In this test the non-BUSYPOLL runs of all the programs look quite similar (min and max values), with SOCKMAP being the exception. This is great - we can speculate the wakeup time is comparable. Surprisingly, splice has a slightly better median time than the others. Perhaps this can be explained by CPU artifacts, like having better CPU cache locality due to less data copying. SOCKMAP is, again, the slowest, with the worst max and median times. Boo. Remember we truncated the worst 1% of the data - we artificially shortened the "max" values.

TL;DR

In this blog post we discussed the theoretical benefits of SOCKMAP. Sadly, we noticed it's not ready for prime time yet. We compared it against splice, which we noticed didn't benefit from BUSYPOLL and had disappointing performance. We noticed that the naive read/write loop and io_submit approaches have exactly the same performance characteristics and do benefit from BUSYPOLL to reduce jitter (wakeup time). If you are piping data between TCP sockets, you should definitely take a look at SOCKMAP. While our benchmarks show it's not ready for prime time yet, with poor performance, high jitter and a couple of bugs, it's very promising. We are very excited about it. It's the first technology on Linux that truly allows the user-space process to offload TCP splicing to the kernel. It also has potential for much better performance than other approaches, ticking all the boxes of being async, kernel-only and totally avoiding needless copying of data. This is not everything. SOCKMAP is able to pipe data across multiple sockets - you can imagine a full mesh of connections being able to send data to each other. Furthermore it exposes the strparser API, which can be used to offload basic application framing.
Combined with kTLS, it can even provide transparent encryption. Furthermore, there are rumors of adding UDP support. The possibilities are endless. Recently the kernel has been exploding with eBPF innovations. It seems like we've only just scratched the surface of the possibilities exposed by the modern eBPF interfaces. Many thanks to Jakub Sitnicki for suggesting SOCKMAP in the first place, writing the proof of concept and now actually fixing the bugs we found. Go strong Warsaw office!

Increase Your WordPress Speed

InMotion Hosting Blog -

These days, customers expect fast and reliable service on every website they visit. In fact, the average web user will wait no more than three seconds for a page to load before moving on to another site. For comparison, that’s just about how long it takes to have a sip of coffee (which isn’t very long at all). That’s why performance is one of the most important factors when it comes to the success of your website. Continue reading Increase Your WordPress Speed at The Official InMotion Hosting Blog.

Save Time with a WordPress Automatic Backup Plugin

InMotion Hosting Blog -

Who among us isn’t constantly looking for ways to simplify tasks in order to save time and energy? If you’re running your own website, one way to simplify a task is by using a WordPress automatic backup plugin. Having a backup of your website is a great insurance policy against disasters — namely hackers, network crashes, or even something as simple as an error on your part. (You would be amazed at how many times people accidentally delete necessary files while trying to add some feature or do an upgrade.) If someone or something should take your website down or corrupt the files that run it, then you can use your backup to quickly and easily restore it to its last saved point. Continue reading Save Time with a WordPress Automatic Backup Plugin at The Official InMotion Hosting Blog.

Frequently Asked Questions about Website Creator

InMotion Hosting Blog -

WordPress has been the go-to website creator for years. In fact, over 75 million websites use the platform, renowned for its ease-of-use and simple customization. Now, Website Creator makes creating a WordPress website even easier. With a simple drag-and-drop format, even beginners can have a new site up and running within a few hours. Keep reading below for answers to some of our most frequently asked questions. But first, let’s go over what Website Creator is (and why it’s the best website builder tool on the market): What is Website Creator? Continue reading Frequently Asked Questions about Website Creator at The Official InMotion Hosting Blog.

How to Write Your First Blog Post in WordPress

InMotion Hosting Blog -

Writing your first blog post can be intimidating – and it’s not just the technical aspects of it that are scary. Many writers worry that they won’t be able to connect with their audience, or they don’t really know what to write about. While we may not be able to help you with the actual writing, we can tell you everything else, from how to select the best topics to how to create and publish your post. Continue reading How to Write Your First Blog Post in WordPress at The Official InMotion Hosting Blog.

Own Multiple Websites? Did You Know There Are WordPress Multisite Backup Plugins?

InMotion Hosting Blog -

While a lot of attention has been placed on how to handle a backup for a single WordPress website, what if you need to use a WordPress multisite backup plugin? Many businesses may have multiple websites that they maintain to perform different duties. Similarly, some folks keep a separate website for blogging and another one for their family with news and pictures, for example. Regardless of which category you fall into, there are several plugins that can help you with a multi-site backup. Continue reading Own Multiple Websites? Did You Know There Are WordPress Multisite Backup Plugins? at The Official InMotion Hosting Blog.

Is Blogging Better For Business Than Social Media?

InMotion Hosting Blog -

We know how easy and fun it is to engage with your readers, customers, or friends on various social media channels. But what’s happening on your blog? It’s all too often that we see business people making frequent updates on their social media accounts while posts on their blog have sat lingering for two or three years. In this article, we’re going to give you a handful of reasons why your blog needs more attention. Continue reading Is Blogging Better For Business Than Social Media? at The Official InMotion Hosting Blog.

Introducing Cf-Terraforming

CloudFlare Blog -

Ever since we implemented support for configuring Cloudflare via Terraform, we've been steadily expanding the set of features and services you can manage via this popular open-source tool. If you're unfamiliar with how Terraform works with Cloudflare, check out our developer docs. We are Terraform users ourselves, and we believe in the stability and reproducibility that can be achieved by defining your infrastructure as code.

What is Terraform?

Terraform is an open-source tool that allows you to describe your infrastructure and cloud services (think virtual machines, servers, databases, network configurations, Cloudflare API resources, and more) as human-readable configurations. Once you've done this, you can run the Terraform command-line tool and it will figure out the difference between your desired state and your current state, and make the API calls in the background necessary to reconcile the two. Unlike other solutions, Terraform does not require you to run software on your hosts, and instead of spending time manually configuring machines, creating DNS records, and specifying Page Rules, you can simply run:

    terraform apply

and the state described in your configuration files will be built for you.

Enter Cloudflare Terraforming

Terraform is a tremendous time-saver once you have your configuration files in place, but what do you do if you're already a Cloudflare user and you need to convert your particular setup, records, resources and rules into Terraform config files in the first place? Today, we're excited to share a new open-source utility to make the migration of even complex Cloudflare configurations into Terraform simple and fast. It's called cf-terraforming and it downloads your Cloudflare setup, meaning everything you've defined via the Cloudflare dashboard and API, into Terraform-compliant configuration files in a few commands.

Getting up and running quickly

Cf-terraforming is open-source and available on GitHub now.
You need a working Golang installation and a Cloudflare account with some resources defined. That's it! Let's first install cf-terraforming, while also pulling down all dependencies and updating them as necessary:

    $ go get -u

Cf-terraforming is a command line tool that you invoke with your Cloudflare credentials, some zone information and the resource type that you want to export. The output is a valid Terraform configuration file describing your resources. To use cf-terraforming, first get your API key and Account ID from the Cloudflare dashboard. You can find your account ID at the bottom right of the overview page for any zone in your account. It also has a quick link to get your API key as well. You can store your key and account ID in environment variables to make it easier to work with the tool:

    export CLOUDFLARE_TOKEN="<your-key>"
    export CLOUDFLARE_EMAIL="<your-email>"
    export CLOUDFLARE_ACCT_ID="<your-id>"

Cf-terraforming can create configuration files for any of the resources currently available in the official Cloudflare Terraform provider, but sometimes it's also handy to export individual resources as needed. Let's say you're migrating your Cloudflare configuration to Terraform and you want to describe your Spectrum applications. You simply call cf-terraforming with your credentials, zone, and the spectrum_application command, like so:

    go run cmd/cf-terraforming/main.go --email $CLOUDFLARE_EMAIL --key $CLOUDFLARE_TOKEN --account $CLOUDFLARE_ACCT_ID spectrum_application

Cf-terraforming will contact the Cloudflare API on your behalf and define your resources in a format that Terraform understands:

    resource "cloudflare_spectrum_application" "1150bed3f45247b99f7db9696fffa17cbx9" {
      protocol = "tcp/8000"
      dns = {
        type = "CNAME"
        name = ""
      }
      ip_firewall = "true"
      tls = "off"
      origin_direct = [
        "tcp://",
      ]
    }

You can redirect the output to a file and then start working with Terraform.
First, ensure you are in the cf-terraforming directory, then run:

    go run cmd/cf-terraforming/main.go --email $CLOUDFLARE_EMAIL --key $CLOUDFLARE_TOKEN --account $CLOUDFLARE_ACCT_ID spectrum_application >

The same goes for Zones, DNS records, Workers scripts and routes, security policies and more.

Which resources are supported?

Currently cf-terraforming supports every resource type that you can manage via the official Cloudflare Terraform provider:

- access_application
- access_rule
- access_policy
- account_member
- custom_pages
- filter
- firewall_rule
- load_balancer
- load_balancer_pool
- load_balancer_monitor
- rate_limit
- record
- spectrum_application
- waf_rule
- worker_route
- worker_script
- zone
- zone_lockdown
- zone_settings_override

Get involved

We're looking for feedback and any issues you might encounter while getting up and running with cf-terraforming. Please open any issues against the GitHub repo. Cf-terraforming is open-source, so if you want to get involved feel free to pick up an open issue or make a pull request.

Looking forward

We'll continue to expand the set of Cloudflare resources that you can manage via Terraform, and that you can export via cf-terraforming. Be sure to keep an eye on the cf-terraforming repo for updates.

How to Protect Your Website Content: Disable Right Click in WordPress

InMotion Hosting Blog -

Piracy and theft have been around since the dawn of recorded time, and the Internet has only made them more prevalent and easier. Unfortunately, copyrighted information is now easier than ever to steal, and that can be a major headache for creative artists who are trying to earn a living from their work. Fortunately, there are a few steps that WordPress site owners can take to protect their copyrighted material. Here’s how you can keep others from stealing your content and pictures from your website: Let’s Talk About Copyright Laws First, let’s say a few things about copyright laws. Continue reading How to Protect Your Website Content: Disable Right Click in WordPress at The Official InMotion Hosting Blog.

SEO Best Practices with Cloudflare Workers, Part 2: Implementing Subdomains

CloudFlare Blog -

Recap

In Part 1, the merits and tradeoffs of subdirectories and subdomains were discussed. The subdirectory strategy is typically superior to subdomains because subdomains suffer from keyword and backlink dilution. The subdirectory strategy more effectively boosts a site's search rankings by ensuring that every keyword is attributed to the root domain instead of diluting across subdomains.

Subdirectory Strategy without the NGINX

In the first part, our friend Bob set up a hosted Ghost blog that he connected to using a CNAME DNS record. But what if he wanted his blog to live in a subdirectory to gain the SEO advantages of subdirectories? A reverse proxy like NGINX is normally needed to route traffic from subdirectories to remotely hosted services. We'll demonstrate how to implement the subdirectory strategy with Cloudflare Workers and eliminate our dependency on NGINX. (Cloudflare Workers are serverless functions that run on the Cloudflare global network.)

Back to Bobtopia

Let's write a Worker that proxies traffic from a subdirectory to a remotely hosted platform. This means that if I go to the blog subdirectory, I should see the content of the hosted blog, but my browser should still think it's on the root domain.

Configuration Options

In the Workers editor, we'll start a new script with some basic configuration options.

    // keep track of all our blog endpoints here
    const myBlog = {
      hostname: "",
      targetSubdirectory: "/articles",
      assetsPathnames: ["/public/", "/assets/"]
    }

The script will proxy traffic from myBlog.targetSubdirectory to Bob's hosted Ghost endpoint, myBlog.hostname. We'll talk about myBlog.assetsPathnames a little later.

(Screenshot: requests are proxied from the subdirectory to the hosted blog. The error shown is because the hosted Ghost blog doesn't actually exist.)

Request Handlers

Next, we'll add a request handler:

    async function handleRequest(request) {
      return fetch(request)
    }

    addEventListener("fetch", event => {
      event.respondWith(handleRequest(event.request))
    })

So far we're just passing requests through handleRequest unmodified.
Let's make it do something:

```javascript
async function handleRequest(request) {
  // ... helper functions omitted ...

  // if the request is for blog html, get it
  if (requestMatches(myBlog.targetSubdirectory)) {
    console.log("this is a request for a blog document", parsedUrl.pathname)
    const targetPath = formatPath(parsedUrl)
    return fetch(`https://${myBlog.hostname}/${targetPath}`)
  }

  // ...

  console.log("this is a request to my root domain", parsedUrl.pathname)
  // if it's not a request for blog-related stuff, do nothing
  return fetch(request)
}

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})
```

In the above code, we added a conditional statement to handle traffic to myBlog.targetSubdirectory. Note that we've omitted our helper functions here; the relevant code lives inside the if block near the top of the function. The requestMatches helper checks if the incoming request path contains targetSubdirectory. If it does, a request is made to myBlog.hostname to fetch the HTML document, which is returned to the browser.

When the browser parses the HTML, it makes additional asset requests required by the document (think images, stylesheets, and scripts). We'll need another conditional statement to handle these kinds of requests:

```javascript
// if it's blog assets, get them
if (myBlog.assetsPathnames.some(requestMatches)) {
  console.log("this is a request for blog assets", parsedUrl.pathname)
  const assetUrl = request.url.replace(parsedUrl.hostname, myBlog.hostname)
  return fetch(assetUrl)
}
```

This similarly shaped block checks if the request matches any pathnames enumerated in myBlog.assetsPathnames and fetches the assets required to fully render the page. Assets happen to live in /public and /assets on a Ghost blog.
You'll be able to identify your assets directories when you fetch the HTML and see logs for scripts, images, and stylesheets.

[Figure: logs show the various scripts and stylesheets required by Ghost live in /assets and /public.]

The full script, with helper functions included, is:

```javascript
// keep track of all our blog endpoints here
const myBlog = {
  hostname: "", // the hosted Ghost endpoint (value omitted in the source)
  targetSubdirectory: "/articles",
  assetsPathnames: ["/public/", "/assets/"]
}

async function handleRequest(request) {
  // returns an empty string or a path if one exists
  const formatPath = (url) => {
    const pruned = url.pathname.split("/").filter(part => part)
    return pruned && pruned.length > 1 ? `${pruned.join("/")}` : ""
  }

  const parsedUrl = new URL(request.url)
  const requestMatches = match => new RegExp(match).test(parsedUrl.pathname)

  // if it's blog html, get it
  if (requestMatches(myBlog.targetSubdirectory)) {
    console.log("this is a request for a blog document", parsedUrl.pathname)
    const targetPath = formatPath(parsedUrl)
    return fetch(`https://${myBlog.hostname}/${targetPath}`)
  }

  // if it's blog assets, get them
  if (myBlog.assetsPathnames.some(requestMatches)) {
    console.log("this is a request for blog assets", parsedUrl.pathname)
    const assetUrl = request.url.replace(parsedUrl.hostname, myBlog.hostname)
    return fetch(assetUrl)
  }

  console.log("this is a request to my root domain", parsedUrl.pathname)
  // if it's not a request for blog-related stuff, do nothing
  return fetch(request)
}

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})
```

Caveat

There is one important caveat about the current implementation that bears mentioning: this script will not work if your hosted service's assets are stored in a folder that shares a name with a route on your root domain.
For example, if you're serving assets from the root directory of your hosted service, any request made to the home page will be masked by these asset requests, and the home page won't load. The solution involves modifying the blog assets block to handle asset requests without using paths. I'll leave it to the reader to solve this, but a more general solution might involve changing myBlog.assetsPathnames to myBlog.assetFileExtensions, a list of all asset file extensions (like .png and .css). The assets block would then handle requests whose paths contain those file extensions instead of matching on pathnames.

Conclusion

Bob is now enjoying the same SEO advantages as Alice after converting his subdomains to subdirectories using Cloudflare Workers. Bobs of the world, rejoice!
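As a rough sketch of the extension-based matching suggested in the caveat above (the names myBlog.assetFileExtensions and isAssetRequest are assumptions for illustration, not code from the article):

```javascript
// Hypothetical sketch: match asset requests by file extension instead of by
// pathname, so assets served from the root of the hosted service no longer
// shadow routes like the home page on the root domain.
const myBlog = {
  hostname: "", // hosted Ghost endpoint, omitted here as in the article
  targetSubdirectory: "/articles",
  assetFileExtensions: [".png", ".jpg", ".css", ".js"]
}

// true when the request pathname ends with a known asset extension
const isAssetRequest = pathname =>
  myBlog.assetFileExtensions.some(ext => pathname.endsWith(ext))
```

The assets block in the Worker would then test isAssetRequest(parsedUrl.pathname) rather than matching against assetsPathnames.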

SEO Best Practices with Cloudflare Workers, Part 1: Subdomain vs. Subdirectory

CloudFlare Blog -

Subdomain vs. Subdirectory: 2 Different SEO Strategies

Alice and Bob are budding blogger buddies who met up at a meetup and purchased some root domains to start writing. Alice and Bob decided against WordPress because it's what their parents use, and purchased subscriptions to a popular cloud-based Ghost blogging platform instead. Bob decides his blog should live on a subdomain of his root domain. Alice keeps it old school and builds hers in a subdirectory of hers.

Subdomains and subdirectories are different strategies for instrumenting root domains with new features (think a blog or a storefront). Alice and Bob chose their strategies on a whim, but which strategy is technically better? The short answer is: it depends. But the long answer can actually improve your SEO. In this article, we'll review the merits and tradeoffs of each. In Part 2, we'll show you how to convert subdomains to subdirectories using Cloudflare Workers.

Setting Up Subdomains and Subdirectories

Setting up subdirectories is trivial on basic websites. A web server treats its subdirectories (aka subfolders) the same as regular old folders in a file system. In other words, basic sites are already organized using subdirectories out of the box. No setup or configuration is required. In the old school site above, we'll assume the blog folder contains an index.html file. The web server renders blog/index.html when a user navigates to the subdirectory.
But Alice and Bob's sites don't have a blog folder because their blogs are hosted remotely, so this approach won't work. On the modern Internet, subdirectory setup is more complicated because the services that comprise a root domain are often hosted on machines scattered across the world.

Because DNS records only operate at the domain level, records like CNAME have no effect on a URL's subdirectory path. And because her blog is hosted remotely, Alice needs to install NGINX or another reverse proxy and write some configuration code that proxies traffic from her blog subdirectory to her hosted blog. It takes time, patience, and experience to connect her domain to her hosted blog.

[Figure: a location block in NGINX is necessary to proxy traffic from a subdirectory to a remote host.]

Bob's subdomain strategy is the easier approach with his remotely hosted blog. A DNS CNAME record is often all that's required to connect Bob's blog to his subdomain. No additional configuration is needed if he can remember to pay his monthly subscription.

[Figure: configuring a DNS record to point a hosted service at your blog subdomain.]

To recap, subdirectories are already built into simple sites that serve structured content from the same machine, but modern sites often rely on various remote services. Subdomain setup is comparatively easy for sites that take advantage of hosted, cloud-based platforms.

Are Subdomains or Subdirectories Better for SEO?

Subdomains are neat. If you ask me, a blog subdomain is more appealing than a blog subdirectory. But if we want to make an informed decision about the best strategy, where do we look? If we're interested in SEO, we ought to consult the Google Bot. Subdomains and subdirectories are equal in the eyes of the Google Bot, according to Google itself. This means that Alice and Bob have the same chance at ranking in search results, because Alice's root domain and Bob's subdomain each build their own sets of keywords. Relevant keywords help your audience find your site in a search.
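For illustration, Alice's NGINX location block described above might look something like the following sketch (the upstream hostname is a placeholder, not taken from the article):

```nginx
# Hypothetical sketch: proxy the /blog/ subdirectory of the root domain to a
# remotely hosted Ghost blog. ghost-host.example is a placeholder hostname.
location /blog/ {
    proxy_pass https://ghost-host.example/;
    proxy_set_header Host ghost-host.example;
}
```

Bob's setup, by contrast, is a single CNAME record pointing his blog subdomain at the hosted platform's hostname, with no proxy configuration at all.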
There is one important caveat to point out for Bob: a subdomain is equal to, and distinct from, a root domain. This means that a subdomain's keywords are treated separately from the root domain's.

What does this mean for Bob? Let's imagine Bob's root domain is already a popular online platform for folks named Bob to seek kinship with other Bobs. In this peculiar world, searches that rank for the root domain wouldn't automatically rank for the blog subdomain, because each domain has its own separate keywords. The lesson here is that keywords are diluted across subdomains. Each additional subdomain decreases the likelihood that any particular domain ranks in a given search. A high-ranking subdomain does not imply that your root domain ranks well.

[Figure: in a search for "Cool Blog", the root domain suffers from keyword dilution; it doesn't rank because its blog keyword is owned by the subdomain.]

Subdomains also suffer from backlink dilution. A backlink is simply a hyperlink that points back to your site. Alice's link to a post on the etymology of Bob on Bob's blog subdomain does not help Bob's root domain, because the subdomain is treated as separate but equal. If Bob used subdirectories instead, his blog posts would feed the authority of his root domain, and Bobs everywhere would rejoice.

[Figure: the authority of the blog subdomain is increased when Alice links to Bob's interesting blog post, but the authority of the root domain is not affected.]

Although search engines have improved at identifying subdomains and attributing keywords back to the root domain, they still have a long way to go. A prudent marketer would avoid risk by assuming search engines will always be bad at cataloguing subdomains.

So when would you want to use subdomains? A good use case is companies interested in expanding into foreign markets. Pretend there's an American company whose website is in English.
Their English keywords won't rank well in German searches, so they translate their site into German to begin building new keywords on a German-language subdomain. Erfolg!

Other use cases for subdomains include product stratification (think global brands with a presence across many markets) and corporate internal tools (think productivity and organization tools that aren't user-facing). But unless you're a huge corporation or just finished your Series C round of funding, subdomaining your site into many silos is not helping your SEO.

Conclusion

If you're a startup or small business looking to optimize your SEO, consider subdirectories over subdomains. Boosting the authority of your root domain should be a universal goal of any organization. The subdirectory strategy concentrates your keywords onto a single domain, while the subdomain strategy spreads your keywords across multiple distinct domains. In a word, the subdirectory strategy results in better root domain authority. Higher domain authority leads to better search rankings, which translates to more engagement.

Consider the multitude of disruptive PaaS startups with docs and blog subdomains. Why not switch to /docs and /blog subdirectories to boost the authority of your root domain with all those docs searches and StackOverflow backlinks?

Want to Switch Your Subdomains to Subdirectories?

Interested in switching your subdomains to subdirectories without a reverse proxy? In Part 2, we'll show you how using Cloudflare Workers.

Top 7 HR Software Solutions for Your Business

Pickaweb Blog -

The human resources (HR) software market is continuously growing, gaining more popularity each year, and is predicted to reach 10.9 billion U.S. dollars by 2023, according to a Statista survey. The overarching aim of such software solutions is to automate HR tasks that were previously done manually. That’s why business leaders should keep… The post Top 7 HR Software Solutions for Your Business appeared first on Pickaweb.

How to Audition Plugins For WordPress (The Right Way)

InMotion Hosting Blog -

When it comes to searching for and installing plugins for WordPress, you’ve got a whole world of options. But you want to carefully pick just the right ones to complement (rather than detract from) your website. Hot Tip: If you’re on our WordPress Hosting (and, if not, then you should really consider it) we recommend installing the WordPress Nginx Helper Plugin to manage your caching right from within the WordPress admin area. Continue reading How to Audition Plugins For WordPress (The Right Way) at The Official InMotion Hosting Blog.

Our 2019 Sitecore MVPs Turn Technical Expertise into High-Value Business Outcomes

The Rackspace Blog & Newsroom -

Sitecore Experience Platform is an industry leader for a reason. It offers a comprehensive suite of marketing tools, a holistic view of customer data and machine learning-generated insights to personalize experiences across channels. With that level of sophistication, however, comes a certain amount of complexity. Managing your Sitecore platform on-premises means continual attention to planning, […] The post Our 2019 Sitecore MVPs Turn Technical Expertise into High-Value Business Outcomes appeared first on The Official Rackspace Blog.

5 Email List Building Mistakes That Kill Your Sales (and How to Avoid Them)

HostGator Blog -

The post 5 Email List Building Mistakes That Kill Your Sales (and How to Avoid Them) appeared first on HostGator Blog. Building your email list is the key to boosting your sales. Email marketing is an opportunity to directly engage with potential customers. With this communication channel, you become a trusted friend in your subscribers’ pursuit to find the right product solution. Entrepreneur VIP contributor Susan Gunelius offers her perspective: “Email marketing doesn’t work unless you build a list of people to send messages to who are interested in your products or services. If you’ve captured email addresses from your prior customers, then you have a great head start.” Steer clear of roadblocks when building your list. Here are five mistakes to avoid. Mistake #1: Buying Email Subscribers As a business, it’s tempting to take the easy route. You’re juggling multiple responsibilities, and a quick growth hack seems reliable. Most companies will attempt to buy their email subscribers. But honestly, that’s not a sound business idea. For starters, these subscribers didn’t sign up to receive messages from your brand. Sending unsolicited emails may result in legal violations, while annoying people. Subscribers who haven’t expressed interest in your products are less likely to engage with your messages. Everyone involved loses and lots of precious time gets wasted. So, what happens to your unsolicited messages? They end up in a person’s spam folder, never to be read. The result equals no sales for your business and a poor brand image. Rather than purchasing subscribers, work with your team to capture consumers when they visit your blog, exit a product page, or scroll down a sales page. Building a co-marketing campaign with another brand is also a creative way to cultivate your list. This strategy will introduce new buyers to your product offerings and get potential consumers excited to receive your emails. 
Are you seriously thinking about purchasing subscribers to build your list? Skip the hassle and grow your list in an organic way.   Mistake #2: Asking for Too Many Details List building is very much like a friendship. When you’re getting to know someone, you don’t bombard the individual with intimate questions. If that happens, you may startle the person and never hear from him or her again. In a similar manner, you can scare away potential subscribers by requesting too much information up front. It’s not necessary on the first encounter to ask for an individual’s mailing address or phone number. “It sounds counterintuitive, but more choices is not better for your users. In fact, the more choices you give people, the less likely they are to take action. And even if they do ultimately make a decision to take action, they will be less happy with that decision than if you had only given them one choice,” writes Mary Fernandez, a professional blogger. Moreover, you want to minimize the time it takes to subscribe. Requiring only a name and email address takes a few seconds, while a laundry list of form fields may take a few minutes. Progressive profiling is one solution to gaining more details about your subscribers. It’s the process of requesting additional information at specific points in the consumer relationship. For instance, you may send an email talking about the origin of your business, leading your brand to ask for the subscriber’s birthdate. Be mindful of when and how you ask for consumer information. Give the subscriber time to learn about your brand.   Mistake #3: Offering a Weak Incentive Nowadays, your consumers understand how marketing works. You can’t trick someone (nor should you) into being part of your mailing list. It will quickly damage your brand reputation. You can entice customers with an incentive. But if you’re wanting to give away a superficial trinket, your business should rethink that strategy. 
Competition is stiff across several industries. So, copying your competitors’ tactics will not work for your business either. To join your newsletter, consumers want more than empty promises. Instead, they desire information that will strengthen the brand-customer relationship. Your action plan may translate into offering 15% coupons, invitations to brand events, or even access to exclusive product launches. The goal is to give subscribers a compelling reason to sign up and stay on your list. Below is a pop-up box on the Nike website. The footwear and apparel company tempts consumers with “exclusives, offers, and the latest” from the brand. Strong incentives will satisfy your subscribers and persuade them to buy from your business. Plus, your consumers will likely spread the word to their friends and family members, resulting in more sales. It’s time to drop any and all weak incentives. Do the research to learn what will attract consumers to join your brand family.   Mistake #4: Failing to Send a Welcome Email Once a consumer signs up, your team’s job isn’t over. You must follow through on your promise to send an incredible email marketing campaign. Let’s begin with the basics. You need a welcome email that will deliver your incentive and intrigue your new subscribers to not touch the delete button. Treat your welcome email as a greeting and as an add-on to the onboarding process. Subscribers should feel delighted to join your brand’s journey. Bria Sullivan, Constant Contact contributor, explains in more detail: “A welcome email is the perfect way to greet your new subscribers and ease them into your list before they start getting your regular communications. With a welcome email, you increase the likelihood that your subscriber stays engaged with your business and becomes a great, loyal customer.” A captivating welcome includes an engaging subject line, relevant visuals, concise copy, and a clear call to action.
If you promised a $10 off promo code, be sure to add it to the message. Welcome emails serve a distinct purpose in email marketing. Use them to your advantage to connect with consumers and earn their trust for future sales.   Mistake #5: Forgetting to Ask for Feedback Your email list is only as valuable as the insight you receive from subscribers. Learning how and why they remain on your list and buy your products can help you make better business decisions. Feedback loops are an integral part of your marketing and sales funnel. It’s the cycle of asking for feedback and receiving it. When asking for feedback, stick to one topic. You don’t want to flood your consumers with various questions. Also, keep your feedback survey short. It should take less than 5 minutes to complete. Below is a feedback email Little Black Bag sent to its subscribers. It expresses how much the brand values the consumers’ thoughts. Learning about your flaws isn’t helpful to customers if you don’t take action. After you receive their suggestions, you’ll want to take steps to rectify their concerns. For instance, customers may demand your support team offer more ways to communicate. If your team adds a live chat feature as a response, you’ll want to notify your customers of the improvements. Feedback is a valuable asset for your brand. By learning from your subscribers, you walk the path to increasing your revenue.   Don’t Make the Same Mistake Twice Email marketing plays an essential role in growing your company’s sales. It’s your chance to connect with your target audience. Stay away from buying subscribers who will delete your emails anyway. Avoid offering a sign-up incentive that doesn’t correlate with the consumers’ needs. And always immediately send a welcome email. Build your email list, and boost your sales without the mistakes. Find the post on the HostGator Blog

How to Launch Your Website Using Gator Website Builder

HostGator Blog -

The post How to Launch Your Website Using Gator Website Builder appeared first on HostGator Blog. HostGator’s new product, Gator Website Builder, is an easy-to-use, drag-and-drop website builder for anyone who has an idea for a website and wants to get started quickly. Gator is a full-featured solution that includes the website builder and website hosting in a convenient package. Your package comes with HostGator’s powerful cloud hosting included, which means you have the ability to upgrade your web hosting package as your business grows. In addition, the Gator website builder comes with security basics like an SSL certificate and a free domain name if you don’t already have one. While some other website builders limit what you can do, Gator Website Builder delivers a complete package to fit every need. No more shopping around for a site builder that offers either blog or eCommerce functionality: Gator Website Builder supports both. No matter what type of website you want to start, you’ll be ready to go in a few steps with this easy website builder. 6 quick steps to launch your website using Gator Website Builder:     1. Decide which plan is right for you. Gator by HostGator has three different plan choices. The starter plan comes with everything you need for your new website – a free domain, access to more than 200 professionally-designed templates, a frustration-free drag-and-drop editor, and integrated website analytics. If you want access to priority support, choose the premium plan. If you’re starting an eCommerce business with an online store, choose the eCommerce plan. Each package comes with free cloud hosting included. Once you’ve decided which plan is right for you, click “buy now.” You’ll be directed to a page to set up your account. 2. Set up your domain. The domain is the web address that your business will be known by.
While you can create a 301 redirect and change this in the future, be sure to choose a domain address that is easy to remember and represents your business. Need some help? We put together a list of ideas for how to choose the perfect domain name for your business. If you don’t already have a domain, Gator Website Builder comes with a free domain. Start typing in the “find a new domain name” box to see if your top choice is available. If you already have a domain, you can quickly connect it to your Gator website with the “connect it here” button. 3. Create your account. Now that you have selected the perfect domain name, it’s time to set up your account. Gator makes it easy – you can create an account with your email address or quickly connect to your current Gmail or Facebook account. Select your preferred billing cycle, enter your payment information, and you’re ready to start building. 4. Choose a template. After you create your account, you’ll be directed to the “choose a template” page. This is where you’ll choose the visual design for your site. Gator comes with more than 200 professionally-designed templates included for free. Scroll through all the options available and choose the one that best fits your business. You can sort the templates by categories such as music and entertainment, photography, portfolio, online store, wedding, professional services, and more. All of the designs are fully customizable so you can change the fonts, colors, or text style to match your business’ brand. Click the full screen preview to see all the features and secondary page layout options for your favorite themes. All of the professional design templates included with Gator come with a mobile-friendly version installed. You don’t need to do anything to activate the mobile version, but with Gator, you can control the content if you want to.
You can even edit content in the mobile view without affecting your main website. 5. Add content to your website. Once you have selected a theme for your website and clicked the button to “Start Editing,” you will be directed to your main account dashboard. At first glance, you’ll see that a few pages have already been created. You can add, edit, or delete any of these pages by clicking the “pages” button on the left side of your dashboard. Gator comes with an easy step-by-step guide to show you how to set up the different sections of your site. Click the menu icon next to the Gator by HostGator logo and select the “getting started tour.” This tour will guide you through the steps to edit pages and add elements such as text blocks, images, buttons, and more. You can customize your pages by adding more elements. Click on the elements tab to choose the type of element you want, such as an image, text block, or button. If you want to start a blog… Gator comes with an easy blogging feature integrated. Some website builders make you choose either a basic website or a blog function. Gator offers both. Select the “blog” tab from the left sidebar and then click “start a blog.” Not ready to start a blog now? Check out these five reasons to start blogging whenever you’re ready. The blog feature comes with all Gator packages and is available for anyone to easily add a blog as their business grows. If you want to start an eCommerce business with an online store… Choose the “eCommerce plan” (or upgrade your account to the eCommerce plan) to access the online store feature. Click the Store button from the left side of your dashboard to add a store. You’re now ready to add and manage your own store. The website builder will automatically populate the store with example products so you can see what the store will look like when it’s done. Follow the next set of instructions to complete the setup process for your store. 6. Review and launch your website.
When you’re done adding information and are ready to “go live,” the process to publish is simple. First, you’ll want to do a final review by clicking the “preview” button at the top of your dashboard. Click through the pages on your website and make sure the design and content look great. When you’re finished previewing, click the “finish preview” button at the top and then the “publish website” button at the top of the dashboard. Follow the steps to go live. If you have an eCommerce Store upgrade, you’ll see a pop-up asking you to add products now or after you publish your website pages. If you choose to go live without your store products added, no problem, simply select “Publish Without Store.” This means people will be able to see your website’s pages (or storefront), but they won’t be able to shop your products or purchase. Otherwise, you can select “Setup Store Now” if you would rather set up your store for selling before you go live. Now your website is live! Congratulations! Now that your website is published, you’re ready to grow your online business or website and build your network. What did you think about building your website with Gator? What’s the number one Gator feature you want to try on your new website? Let us know in the comments below. Find the post on the HostGator Blog

