Industry Buzz

YouTube Changes Public Subscriber Counts

Social Media Examiner -

Welcome to this week’s edition of the Social Media Marketing Talk Show, a news show for marketers who want to stay on the leading edge of social media. On this week’s Social Media Marketing Talk Show, we explore upcoming changes to YouTube’s public-facing subscriber counts and marketers’ reactions to Facebook Ads Manager issues with special […] The post YouTube Changes Public Subscriber Counts appeared first on Social Media Marketing | Social Media Examiner.

The Serverlist Newsletter: Connecting the Serverless Ecosystem

CloudFlare Blog -

Check out our fifth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend. Sign up below to have The Serverlist sent directly to your mailbox.

Webby for Good: Sisterh>>d by Girls Who Code

WP Engine -

Since 1996, The Webby Awards have celebrated the best of the Internet. In the web’s infancy, that meant recognizing trail-blazing websites. Today, the Webbys honor the best of video, advertising, media & public relations, social, apps, mobile, voice, games, and podcasts. Two awards are granted per category: The Webby Award and The Webby People’s Voice… The post Webby for Good: Sisterh>>d by Girls Who Code appeared first on WP Engine.

How to Create a Style Guide for Your Website in 5 Steps

HostGator Blog -

The post How to Create a Style Guide for Your Website in 5 Steps appeared first on HostGator Blog. Websites are online storefronts for small businesses. Because they play a pivotal role in the customer experience, your team must make yours a priority. A style guide helps your small business develop a cohesive look for your website. Without a clear branding style, customers will disengage and leave your site. Style guides also ensure there aren't any discrepancies in your branding strategy. Let's streamline your online presence. Here are 5 elements to consider in your website style guide.

1. Brand Voice

Branding is the overall perception of your small business. It's how you differentiate your products and services from others in the market. Brand voice is part of building your website. You get to show visitors your brand personality and unique qualities. Voice can range from casual and calm to vibrant and risky. In the chart below, each voice characteristic corresponds with suggested actions (and inactions) for businesses. For instance, a company aiming for an authentic voice should portray honesty, own its mistakes, and stay away from marketing jargon. A description of your brand voice isn't always enough. When developing your style guide, you also should include explicit examples for your team to follow. This tactic eliminates any uncertainty when posting copy to your site. Web design affects many internal departments. Your sales team needs to know the appropriate messaging to secure customers. The finance team is interested in the actual costs, and human resources wants to attract new employees. Therefore, it's helpful to get input from your entire team when making key brand decisions. Choose a brand voice that inspires your customers. Then, you can start developing a website that represents your brand story.

2. Navigation

Laying out your website is just as critical as selecting the right words and images.
When visitors land on your site, they should easily tell where to go next. It's vital that your team craft a straightforward roadmap for their visit. For starters, keep your main heading options under six. Too many choices can overwhelm visitors and cause them to take no action at all. Drop-down menus also can offer structure, giving visitors access to additional pages without multiple clicks. When mapping out your navigation, conduct customer research and examine data from conversion optimization tools like heatmaps. You'll want to begin with what's important. Andy Crestodina, the co-founder and CMO of Orbit Media, provides his perspective: "In website navigation, just like any list, items at the beginning and the end are most effective, because this is where attention and retention are highest. Always seek to put the things that are most important to visitors in the most visually prominent places." Effective navigation helps customers buy your products. So, streamline the navigation bar to increase engagement.

3. Colors

Red, blue, purple, yellow. The colors on your website matter to your visitors. They can either spark an invitation to stay or ignite a reaction to leave your site immediately. Colors influence consumers' perceptions of your brand. While each color represents something different for every individual, humans do recognize specific colors as representing different emotions. Yet, studies recommend that companies select colors that support the brand personality they want to portray, instead of aligning with stereotypical color associations. Your team then can add meaning to the chosen colors through other branding aspects. The diagram below shows the connection between a color and a meaning. For example, lime green can translate into competence with a brand personality of reliability and intelligence. Colors relay an essential message to your customers. Don't force your brand to adhere to the traditional norms of what a color embodies.
Find the right palette for your small business.

4. Fonts

Fonts are usually the last thing on a small business owner's mind. However, fonts help communicate your brand's voice. Script fonts can portray a young, playful company, while a slab font can signal a bold, established brand. Google Fonts is an interactive library of more than 900 fonts. It's an easy-to-use tool to experiment with fonts and compare your top choices. Avoid fonts that aren't legible or clear. Consumers shouldn't have to squint to read your text or take a second look just to be certain. Jill Chongva, a WordPress website designer, says: "It's best to use fonts that complement each other and work together without being jarring for the reader. This usually means choosing a combination of a serif font and a sans serif font that don't fight for the reader's attention." It's also wise not to select fonts similar to well-known brands, like Coca-Cola or Nike. You want a distinct font that separates your small business from the competition. What font expresses your brand? Do your research and select one that will grab your consumers' attention.

5. Images

Images impact how consumers see your small business. With a couple of pictures, buyers can quickly determine whether they can see themselves with your product. In your style guide, outline the type of images that are acceptable for brand promotion. Specify the recommended file format and display size. You also may want to limit the number of images per page, leaving some white space. That way, your visitors don't get bombarded with too many visuals at once. Invest in quality product photography. You want images that display the fine details of your product. For example, if you sell purses, consumers should see every pattern design. The image should give them a sense of how the product would look and feel in real life. Customers can become accustomed to the same old stock photos.
For your website to stand out, you may want to shoot your own photos. Most smartphones are capable of taking high-quality pictures. So, encourage your team to share their photos from the last company retreat or team-building outing. Choose your images carefully. The image specifications make a huge difference for your website.

Your Website's Style Guide

Websites are open invitations for customers to learn about your small business. Style guides create a roadmap to establish your brand. With the right elements, your team can build a better customer experience.

How to Write Facebook Ads That Sell

Social Media Examiner -

Do you want to create ad copy that sells? Wondering how you can improve your Facebook ads? To explore how to write Facebook ad copy that converts, I interview Ken Moskowitz. Ken is the author of Jab Till It Hurts and founder of Ad Zombies, one of the world’s top flat-fee ad copywriting services. Ken […] The post How to Write Facebook Ads That Sell appeared first on Social Media Marketing | Social Media Examiner.

New – Opt-in to Default Encryption for New EBS Volumes

Amazon Web Services Blog -

My colleagues on the AWS team are always looking for ways to make it easier and simpler for you to protect your data from unauthorized access. This work is visible in many different ways, and includes the AWS Cloud Security page, the AWS Security Blog, a rich collection of AWS security white papers, an equally rich set of AWS security, identity, and compliance services, and a wide range of security features within individual services. As you might recall from reading this blog, many AWS services support encryption at rest and in transit, logging, IAM roles and policies, and so forth.

Default Encryption

Today I would like to tell you about a new feature that makes the use of encrypted Amazon EBS (Elastic Block Store) volumes even easier. This launch builds on some earlier EBS security launches, including: EBS Encryption for Additional Data Protection, Encrypting EBS Snapshots Via Copying, Encrypted EBS Boot Volumes, Encryption with Custom Keys at Instance Launch Time, and Sharing of Encrypted AMIs Across AWS Accounts. You can now specify that you want all newly created EBS volumes to be created in encrypted form, with the option to use the default key provided by AWS, or a key that you create. Because keys and EC2 settings are specific to individual AWS regions, you must opt in on a region-by-region basis. This new feature will help you reach your protection and compliance goals by making it simpler and easier to ensure that newly created volumes are created in encrypted form. It will not affect existing unencrypted volumes. If you use IAM policies that require the use of encrypted volumes, you can use this feature to avoid launch failures that would occur if unencrypted volumes were inadvertently referenced when an instance is launched. Your security team can enable encryption by default without having to coordinate with your development team, and with no other code or operational changes.
Encrypted EBS volumes deliver the specified instance throughput, volume performance, and latency, at no extra charge. I open the EC2 Console, make sure that I am in the region of interest, and click Settings to get started. Then I select Always encrypt new EBS volumes. I can click Change the default key and choose one of my keys as the default. Either way, I click Update to proceed. One thing to note here: this setting applies to a single AWS region; I will need to repeat the steps above for each region of interest, checking the option and choosing the key. Going forward, all EBS volumes that I create in this region will be encrypted, with no additional effort on my part. When I create a volume, I can use the key that I selected in the EC2 Settings, or I can select a different one. Any snapshots that I create are encrypted with the key that was used to encrypt the volume. If I use the volume to create a snapshot, I can use the original key or I can choose another one.

Things to Know

Here are some important things that you should know about this important new AWS feature:

Older Instance Types – After you enable this feature, you will not be able to launch any more C1, M1, M2, or T1 instances or attach newly encrypted EBS volumes to existing instances of these types. We recommend that you migrate to newer instance types.

AMI Sharing – As I noted above, we recently gave you the ability to share encrypted AMIs with other AWS accounts. However, you cannot share them publicly, and you should use a separate account to create community AMIs, Marketplace AMIs, and public snapshots. To learn more, read How to Share Encrypted AMIs Across Accounts to Launch Encrypted EC2 Instances.

Other AWS Services – AWS services such as Amazon Relational Database Service (RDS) and Amazon WorkSpaces that use EBS for storage perform their own encryption and key management and are not affected by this launch.
Services such as Amazon EMR that create volumes within your account will automatically respect the encryption setting, and will use encrypted volumes if the always-encrypt feature is enabled.

API / CLI Access – You can also access this feature from the EC2 CLI and the API.

No Charge – There is no charge to enable or use encryption. If you are using encrypted AMIs and create a separate one for each AWS account, you can now share the AMI with other accounts, leading to a reduction in storage utilization and charges.

Per-Region – As noted above, you can opt in to default encryption on a region-by-region basis.

Available Now

This feature is available now and you can start using it today in all public AWS regions and in GovCloud. It is not available in the AWS regions in China.

— Jeff;

How to Block an IP Address

The Blog -

Of all the metaphors used to describe the internet, one of the most appropriate might be the "Wild West." The Wild West, just like the internet, was expansive, difficult to regulate, and filled with bandits and marauders who would take advantage of someone without batting an eye. While technological progress has fortified internet security, in reality there are still many ways for bad actors to infiltrate a business's or person's website, email, or online persona in order to wreak havoc.

How to Block an IP Address

Just as it would have been in the Wild West, it's important to learn how to protect yourself from external threats. The basic security offered by internet servers can ward off some infiltration attempts, but crafty criminals often slip through the cracks. Learning how to identify and block the IP address of an online pest is perhaps the best way to improve your security on the internet.

What is an IP Address?

Blocking IP addresses might be the most effective way to bolster your internet security, but what good is that knowledge if you don't know what an IP address is? The best way to think of an IP address is by comparing it to a street address. Think about your place of residence—you receive bills and packages, and you guide friends to your house, by giving out a combination of numbers and letters. That combination—your address—is used to single out your location in relation to all other possible locations. IP addresses work in the exact same way. Each device that's connected to the internet is assigned a unique IP address. A device's IP address allows it to interact with, receive information from, and otherwise contact other devices and networks on the internet. Simply put, an IP address places internet users on the grid. Without it, they would be unable to communicate with other networks.

What do IP Addresses Look Like?
Even though most internet users connect to the internet using an IP address on a daily basis, the vast majority of people don't know what an IP address looks like. There are two forms that an IP address can take. The first is IPv4, which stands for "Internet Protocol version 4." The second is IPv6, which stands for — can you guess? — "Internet Protocol version 6."

IPv4

Invented all the way back in the 1970s, IPv4 was the first wave of IP addresses. Most devices are still connected to the internet using an IPv4 address, but that started to change in 2011 with the release of IPv6. IPv4 addresses are composed of four numbers between 0 and 255, separated by periods. An IPv4 address might look like:

From the inception of the internet, IP addresses were provided using the IPv4 model. However, all of the available IPv4 addresses have been allocated, necessitating the move to IPv6.

IPv6

On June 6, 2012, IPv6 was launched by organizations like the Internet Society, among others. IPv6 addresses are written as groups of hexadecimal digits separated by colons, and so may include letters. The number of conceivable IPv6 addresses is enormous and won't run out anytime soon. An IPv6 address might look like: 2001:0db8:85a3:0000:0000:8a2e:0370:7334. The complexity of an IPv6 address means that the internet will be prepared to host an even larger number of connected devices in the future.

Why Block an IP Address?

There are several reasons a business, educational institution, or internet user would attempt to block an IP address. In general, the most common reasons are:

Blocking Bots, Spammers, and Hackers: When bots, spammers, and hackers attempt to infiltrate your website, it can put a heavy strain on your bandwidth and decrease the speed with which you and other users can access your website. If you run a business online, this can be detrimental to sales.
Limiting Website Access: Many academic institutions and businesses use IP blocking to limit the websites that students or employees can visit. The goal is typically to increase productivity by limiting distractions.

Protecting Data: Hackers often attempt to infiltrate websites to steal data or other important information. That information can be used to blackmail or otherwise undermine a company.

Maintaining Confidentiality: Many academic institutions and companies that keep sensitive records—like transcripts, health records, etc.—are regularly targeted by hackers. Identifying threatening IP addresses and placing them on a blacklist is an essential step to keep those records safe and confidential.

This list should only be seen as the tip of the iceberg. There are countless reasons that an individual or organization might want to block certain IP addresses, and there should be no underestimating how malicious certain internet hackers can be.

How to Block an IP Address

Ultimately, blocking an IP address allows administrators and website owners to control website traffic. The process of blocking an IP address—or several—changes depending on the operating system that's being used. While there are several different operating systems, the most common are Windows and Mac. We'll cover the steps for blocking an IP address using both of these systems, which achieve the same goal through slightly different means.

Blocking an IP Address for Mac Users

To block an IP address on your Mac computer, you're going to need access to your wireless router (or LAN router, which connects to the internet using an Ethernet cable). Knowing the router's password is essential; it can often be found printed or stuck on the outside of the modem.

System Preferences: Find the Apple menu, represented as the Apple logo in the top left corner of your computer screen.
Open the dropdown menu and select "System Preferences." Once your System Preferences menu appears, find the icon labeled "Network." Then, press the "Advanced…" bar at the bottom of the screen. Navigate to the TCP/IP tab, where you should find your IPv4 or IPv6 address.

Access Router: Next, you're going to have to log in to your router. Again, password information can typically be found on the outside of the router, but if you're having trouble you can always contact your network administrator.

Restrict Access: Once you've logged into your router, a list of enabled and disabled IP addresses should appear. From there, most routers will give you the option to deny access to unique IP addresses or an entire range of addresses. You should also have the option to block a website. After blocking the IP address, your network will be protected from that address.

Blocking an IP Address for Windows Users

Blocking IP addresses on a Windows computer requires going through the "Windows Firewall." In tech terms, a firewall is a component that allows your computer to block access to your network without inhibiting your ability to communicate with outside networks. This guide is going to explain how to locate and block the IP address of a website. Windows Firewall makes this a relatively simple process. If you already know the IP address you want to block, begin with step 3.

1 – Locate Website to Block: Open your internet browser and locate the website you want to block. Highlight and copy everything that comes after the "www" in the web address.

2 – Open Command Prompt: Navigate to your start menu and open "Command Prompt (Admin)." Run a lookup command such as ping or nslookup followed by the domain name; the output should reveal the website's IP address. Highlight and copy the IPv4 or IPv6 address. Return to your internet browser, paste it into the search bar, and press enter. Confirm that it takes you back to the website.
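The lookup in step 2 can also be scripted. Here is a minimal sketch using Python's standard library; "localhost" is used so the example works offline, and you would substitute the domain you want to block.

```python
import socket

def website_ip(hostname):
    """Resolve a hostname to its IPv4 address, mirroring the
    manual Command Prompt lookup described in step 2."""
    return socket.gethostbyname(hostname)

# "localhost" resolves without network access; a real domain such
# as "example.com" works the same way when you are online.
print(website_ip("localhost"))  # 127.0.0.1
```

The address this returns is what you would paste into the firewall rule in the steps that follow.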
3 – Open Windows Firewall: Open the start menu. Locate "Control Panel." From there, find "Windows Firewall." Open it.

4 – Advanced Settings + Windows Inbound Rules: With Windows Firewall open, locate and click on "Advanced settings" on the left of the screen. Then, locate "Inbound Rules," which should also be found near the top left of the screen. This should change the menu options. On the right portion of the window, find and click on "New Rule…"

5 – New Rule: With the New Rule tab open, select the "Custom" option and press "Next." Advance by pressing Next two more times, until you arrive at a window which asks "Which remote IP addresses does this rule apply to?" Click the option that reads, "These IP Addresses."

6 – Add IP Addresses: Click on the "Add…" button. From there, you can paste the website's IP address (or any other IP address) into the box that reads "This IP address or subnet:" Repeat this process, adding all IP addresses you wish to block. Once they're added, click "Next" at the bottom of the screen.

7 – Block: Three options should appear on the next page. The bottom option will read "Block the connection." Click this and advance to a page which prompts you to "Name" the blocked IP addresses. After you've named it, press Next until the "Finish" bar appears. Click Finish.

8 – Repeat Process with "Outbound Rules": Return to the Advanced settings window and repeat the process you completed under "Inbound Rules" with "Outbound Rules." Once steps 1-8 are complete, the IP address or addresses that you've isolated will be blocked from your network.

Why Have I Been Blocked?

If you've attempted to visit a website and discovered that you've been blocked or have otherwise been denied access, there are several potential reasons.
The most common include:

Viruses on your Device
Software Extensions
History of Illegal Actions
Inappropriate Website Content

Viruses on your Device

One of the most common reasons that IP addresses are blocked from accessing remote servers is that the remote server has detected malicious traffic, often generated by a virus on your device, coming from your IP address. It's often the case that internet users don't even know that they have picked up a virus. Once you've removed the virus from your network, feel free to reach out to the website you attempted to access and explain why you should be removed from the blacklist.

Software Extensions

There are many ways to customize your internet browser. Some of the extensions that you can add will eliminate pop-up ads from websites or attempt to detect viruses that might be hiding within a website. While there's nothing illegal about adding extensions to your browser, some websites will ban users who run ad-blockers. They may see this as a disruption of their revenue flow.

History of Illegal Actions

If you have a history of conducting illegal activity online, many website admins will block your IP address as a preventative measure, deeming you untrustworthy. Online illegal activities may include illicit trade, activity in the dark web, or cyber-crimes.

Inappropriate Website Content

If you operate a website that contains potentially offensive content like pornographic material or illegal trade, you will likely be blacklisted from many websites on the grounds that your content is subjectively inappropriate. While you may disagree with the decision of another admin to blacklist your website, there is often no way around the blacklist outside of a direct appeal to the admin.

Recapping How to Block an IP Address

To recap, IP addresses are used to connect devices to the internet at large. They help locate a connected device in relation to all other devices.
By discovering the IP address of a device or website that is causing trouble, an internet user can block the address using a rather straightforward process. The process of blocking an IP address may change depending on the operating system used by the internet-connected device. While there are more steps required for PC users, the process is equally straightforward, and perhaps even easier than the process required by Mac users. If your IP address has been blocked, there are several possible reasons. The first, and most common, is that your IP address is associated with a virus—usually one that you've picked up by accident. By using antivirus software, you can purge that virus from your computer and then appeal to the website admin to remove you from the IP blacklist. The post How to Block an IP Address appeared first on the Blog.

AWS Ground Station – Ready to Ingest & Process Satellite Data

Amazon Web Services Blog -

Last fall I told you about AWS Ground Station and gave you a sneak preview of the steps that you would take to downlink data from a satellite. I am happy to report that the first two ground stations are now in operation, and that you can start using AWS Ground Station today.

Using AWS Ground Station

As I noted at the time, the first step is to Add satellites to your AWS account by sharing the satellite's NORAD ID and other information with us. The on-boarding process generally takes a couple of days. For testing purposes, the Ground Station team added three satellites to my account:

Terra (NORAD ID 25994) – This satellite was launched in 1999 and orbits at an altitude of 705 km. It carries five sensors that are designed to study the Earth's surface.

Aqua (NORAD ID 27424) – This satellite was launched in 2002 and also orbits at an altitude of 705 km. It carries six sensors that are designed to study surface water.

NOAA-20 (NORAD ID 43013) – This satellite was launched in 2017 and orbits at an altitude of 825 km. It carries five sensors that observe both land and water.

While the on-boarding process is under way, the next step is to choose the ground station that you will use to receive your data. This depends on the path your satellite takes as it orbits the Earth and the time at which you want to receive data. Our first two ground stations are located in Oregon and Ohio, with other locations under construction. Each ground station is associated with an adjacent AWS region, and you need to set up your AWS infrastructure in that region ahead of time. I'm going to use the US East (Ohio) Region for this blog post.
Following the directions in the AWS Ground Station User Guide, I use a CloudFormation template to set up my infrastructure within my VPC. The stack includes an EC2 instance, three Elastic Network Interfaces (ENIs), and the necessary IAM roles, EC2 security groups, and so forth. The EC2 instance hosts Kratos DataDefender (a lossless UDP transport mechanism). I can also use the instance to host the code that processes the incoming data stream. DataDefender makes the incoming data stream available on a Unix domain socket at port 55892. My code is responsible for reading the raw data, splitting it into packets, and then processing each packet. You can also create one or more Mission Profiles. Each profile outlines the timing requirements for a contact, lists the resources needed for the contact, and defines how data flows during the contact. You can use the same Mission Profile for multiple satellites, and you can also use different profiles (as part of distinct contacts) for the same satellite.

Scheduling a Contact

With my satellite configured and my AWS infrastructure in place, I am ready to schedule a contact! I open the Ground Station Console, make sure that I am in the AWS Region that corresponds to the ground station that I want to use, and click Contacts. I review the list of upcoming contacts, select the desired one (if you are not accustomed to thinking in Zulu time, a World Clock / Converter is helpful), and click Reserve contact. Then I confirm my intent by clicking Reserve. The status of the contact goes to SCHEDULING and then to SCHEDULED, all within a minute or so. The next step is to wait for the satellite to come within range of the chosen ground station.
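The "split the raw stream into packets" step can be sketched in a few lines of Python. The length-prefixed framing here is an assumption for illustration only; the actual wire format delivered by DataDefender is not specified in the post.

```python
import struct

def split_packets(stream):
    """Split a raw downlink byte stream into packets.

    Framing is a hypothetical example: each packet is prefixed with a
    4-byte big-endian length. The real DataDefender format may differ.
    """
    packets = []
    offset = 0
    while offset + 4 <= len(stream):
        # Read the 4-byte length header, then slice out the payload.
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        packets.append(stream[offset:offset + length])
        offset += length
    return packets

# Two toy packets framed with length prefixes:
raw = struct.pack(">I", 3) + b"abc" + struct.pack(">I", 2) + b"xy"
print(split_packets(raw))  # [b'abc', b'xy']
```

Each extracted packet could then be forwarded to Kinesis Data Streams or written to S3, as described below the contact walkthrough.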
During this time, I can connect to the EC2 instance in two ways:

SSH – I can SSH to the instance's IP address, verify that my code is in place and ready to run, and confirm that DataDefender is running.

Web – I can open up a web browser and see the DataDefender web interface.

One thing to note: you may need to edit the security group attached to the instance in order to allow it to be accessed from outside of the VPC.

3-2-1 Contact!

Ok, now I need to wait for Terra to come within range of the ground station that I selected. While not necessary, it can be fun (and educational) to use a real-time satellite tracker. When my satellite comes into range, DataDefender shows me that the data transfer is under way (at an impressive 781 Mbps), as indicated by the increased WAN Data Rate. As I noted earlier, the incoming data stream is available within the instance in real time on a Unix domain socket. After my code takes care of all immediate, low-level processing, it can route the data to Amazon Kinesis Data Streams for real-time processing, store it in Amazon S3 for safe-keeping or further analysis, and so forth.

Customer Perspective – Spire

While I was writing this blog post I spoke with Robert Sproles, a Program Manager with AWS customer Spire, to learn about their adoption of Ground Station. Spire provides data & analytics from orbit, and runs the space program behind it. They design and build their own cubesats in-house, and currently have about 70 in orbit. Collectively, the satellites have more sensors than any of Spire's competitors, and collect maritime, aviation, and weather data. Although Spire already operates a network of 30 ground stations, they were among the first to see the value of (and to start using) AWS Ground Station.
In addition to being able to shift from a CapEx (capital expense) to an OpEx (operating expense) model, Ground Station gives them the ability to collect fresh data more quickly, with the goal of making it available to their customers even more rapidly. Spire’s customers are wide-ranging and global, but can all benefit from rapid access to high-quality data. Their LEMUR (Low Earth Multi-Use Repeater) satellites go around the globe every 90 minutes, but this is a relatively long time when the data is related to aviation or weather. Robert told me that they can counter this by adding additional satellites in the same orbit or by making use of additional ground stations, all with the goal of reducing latency and delivering the freshest possible data.

Spire applies machine learning to the raw data, with the goal of going from a “lump of data” to actionable insights. For example, they use ML to make predictions about the future positions of cargo ships, using a combination of weather and historical data. The predicted ship positions can be used to schedule dock slots and other scarce resources ahead of time.

Now Available

You can get started with AWS Ground Station today. We have two ground stations in operation, with ten more in the works and planned for later this year.

— Jeff;

Three Tools That Test WordPress Themes For Code Quality and Accessibility

Nexcess Blog -

WordPress contributor teams recently released Theme Sniffer and WP Theme Auditor, tools that help developers to create themes that adhere to coding and accessibility best practices. There are thousands of free WordPress themes and thousands more premium themes. Some are excellent, and some are terrible, but most are somewhere in-between on the quality scale. Installing… Continue reading →

What Is a Domain Name Registrar?

HostGator Blog -

The post What Is a Domain Name Registrar? appeared first on HostGator Blog.

Every website you visit online has a domain name, which means that every website owner went through the process of buying and registering that domain name. It’s one of the first necessary steps involved in starting a new website, along with getting web hosting and building out your site. And it’s a step that requires working with a domain registrar.

What Is a Domain Registrar?

A domain registrar, sometimes called a DNS registrar (short for domain name server), is a business that sells domain names and handles the business of registering them. Domain names are the main address a website uses on the web—they’re the thing that usually starts with www and most often ends with .com.

While technically computers identify websites with a different sort of address—an IP address that’s a long string of numbers separated by periods—humans wouldn’t be much good at remembering and using that kind of address. So for us, websites also have an address made up of alphanumeric characters that usually spell out a word or brand name.

And there’s a specific process behind how people claim domain names. There are registries that manage the different top-level domains. The registries are large, centralized databases with information about which domain names have been claimed and by whom. But the registries don’t sell the names directly; they delegate that job to DNS registrars. Registrars must be accredited by the Internet Corporation for Assigned Names and Numbers (ICANN). Then, each time they sell a domain to a customer, they’re expected to register it with the appropriate registry by updating a record with your information.

Domain Registration FAQs

For the most part, this process happens behind the scenes for website owners. Part of the service a good domain name registrar provides is making the process of finding, buying, and managing a domain (or multiple) simple and intuitive.
You don’t have to know how the sausage is made, but if you’re curious to learn more, we’ve got the answers to the most common questions about domain name registrars.

What is the role of a domain name registrar?

The domain name registrar handles the process of updating the registry when a customer purchases a new domain name. As part of that, they keep track of which domain names are available and typically provide customers with an intuitive search tool to find out what options they have. They handle the financial transaction with the customer, and provide the tools needed to maintain the domain name subscription over time.

You can’t buy a domain name outright; you can only rent it for up to ten years at a time. DNS registrars usually provide the option of annual renewals or multi-year subscriptions, sometimes offering a discount for registering the name for a longer period of time upfront. Domain registrars will often provide a user account where you can keep up with your domain registration status, and features like automatic renewal or email reminders.

What is a domain registrant?

That’s you! Well, assuming that you, the person reading this, are planning to buy a domain name or already have one. Once you take the step of selecting and purchasing a domain name from a domain registrar, you become the domain registrant. And the title will continue to apply for as long as you keep up your domain subscription. In most contexts, though, people are more likely to call a “domain registrant” a domain owner, or a website owner once their site is up.

What is a domain registry?

A domain registry is the database that includes all the information about a specific top-level domain (TLD). The term is also sometimes used to refer to the organization that manages the database, as well as the database itself. Domain registries have relationships with domain registrars, who submit domain name registration requests and record updates to them on behalf of customers.
One of the biggest examples of a domain registry is Verisign, which manages the databases for several of the most common TLDs, including .com, .net, .gov, and .edu.

What is private domain name registration?

Part of the domain registration process includes providing the registrant’s information to the database of domain owners. In addition to the domain registries, the WHOIS directory tracks information on every website domain that’s registered, who owns it, and their main contact information. That’s because someone needs to be able to identify website owners who use their site for illegal purposes.

But in our age of high-profile data breaches and growing concern around internet privacy issues, not every website owner wants to put their name and contact information out on the open web. And it shouldn’t be a requirement for running a website. Thanks to the private domain name registration options now offered by many DNS registrars, it’s not. Domain registrars usually charge a little more in order to shield you from having your own name and information included in the directory. They provide enough contact information to the WHOIS to keep you on the right side of the law, typically an email address associated with the privacy service, and keep the rest of it private.

What is a domain name server?

We talked earlier about how computers don’t use domain names to recognize website addresses; they use IP addresses. Domain name servers are the technology that translates between the two. The domain name system is the protocol established to ensure machines exchange the right data, so that the average internet user sees the correct webpage when they type a domain name into their browser or click on a link. Domain name servers play an important role in that system, storing all the information required to connect a specific domain name address to the correct IP address.
Each time a computer queries a domain name server for a particular domain name, it finds the appropriate IP address to serve up.

How do I register a new domain name?

Now that we’ve covered much of the back-end technical stuff, you’re probably wondering how this all translates into what you, a would-be website owner, need to do to get the domain name that you want for your site. Luckily, the process for you is pretty easy. Start by finding a domain registrar you want to work with (more on how to do that in a bit). Most of them make it easy to search for available names, see the different top-level domain options you can consider, and go through the purchasing process. Provide your name, contact, and payment information through a secure form on the registrar’s website, and you should be set!

How do I find an available domain name?

This part can be trickier. With billions of websites already out there, each with a unique domain name, a lot of your options are already taken. Finding an available domain name that’s easy to remember and describes what your website does can take some work and creativity. Expect to spend some time using your domain registrar’s domain name search tool. Try out different variations on the names you have in mind. Consider synonyms and creative spellings. While a .com is usually the easiest option for visitors to remember, consider whether you’re willing to go with another top-level domain like .website or .biz. The TLDs that aren’t as common will have more domain name options available.

What is a top-level domain?

A top-level domain is the last part of the domain that follows a period, such as .com or .net. ICANN controls which TLDs are available, and used to be pretty strict about opening up new ones. Early on, most specialty TLDs related to a specific industry, type of website, or geographic location. For example, .com was for commercial businesses, .gov for government websites, and .org for nonprofit websites.
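The name-to-IP translation that domain name servers perform, described above, can be demonstrated with Python's standard library, which asks the system resolver to do the DNS lookup:

```python
import socket


def resolve(domain):
    """Ask the system resolver (which queries DNS name servers) for the
    IP addresses behind a domain name."""
    infos = socket.getaddrinfo(domain, 80, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})


print(resolve("localhost"))  # typically includes '127.0.0.1' and/or '::1'
```

Running `resolve()` on any registered domain returns the numeric addresses that browsers actually connect to behind the scenes.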
But as the internet has grown, the need for more available domain names has caused ICANN to lift the restrictions on how many TLDs are available, and who can use different ones. As such, when you do a domain name search on your chosen registrar’s website, you’ll see an array of TLD options at different price points. If the name you want isn’t available as a .com, you may be able to get it more cheaply at a .us or .site address.

How does domain name transfer work?

When you choose a domain registrar to purchase your domain name with, you don’t have to make a long-term commitment to working with them. You have the option of switching over to a different registrar down the line, although you have to wait at least 60 days, due to an ICANN policy designed to reduce domain hijacking. If you’re past that 60-day point, you can transfer your domain name to a new provider by unlocking your domain name at your current registrar, disabling any other privacy protections such as WHOIS domain name privacy, and obtaining a domain authorization code from your current registrar. Once that’s done, follow the domain transfer steps provided by the new registrar you’re switching to. For HostGator, you can start the domain name transfer process here.

What to Look for in a Domain Registrar

Now that you know the ins and outs of what a domain registrar is and how domain registration works, you’re probably ready to find a good domain registrar and get started. You have a lot of different options. Some companies only provide domain registration services. Others, like HostGator, offer domain registration along with other services like web hosting, so you can take care of multiple basic website needs all under one account. With so many options to choose from, you need to know what to look for. Here are some of the most important factors to consider.

1. Pricing

Some of the cost of registering a new domain name is related to the name you choose.
In particular, different top-level domains come at different prices. But you’ll also see some variety in what different companies charge. When considering the pricing of different domain registrars, there are a couple of important things to keep in mind. First, the prices advertised are generally for a one-year period, but you should check to be sure. A domain name isn’t a one-time purchase; you have to plan on continuing to pay for as long as you keep your website. You want to make sure you’re comparing apples to apples, and not putting one company’s one-year price against the price another advertises for a longer period. Also, it’s fairly normal for companies to advertise an introductory price for year one that goes up in the second year. Don’t just consider what you’re paying right now; think about what you can afford on an ongoing basis. And as with most things, sometimes a cheaper price will mean you pay in other ways, such as weaker customer service or a worse customer experience. Don’t just jump at the first low price you see without researching the company to find out if they’re cheap for a reason.

2. Reputation

While domain name management doesn’t involve that much interaction with the company, you still want to choose a domain registrar that will be easy to work with and reliable. Spend some time reading customer reviews and doing general research on the company. Are they well known as a legitimate domain registrar? Do they have a reputation for solid customer service? Do people find the registration and renewal processes intuitive? Your domain name is an important part of running your website and maintaining it over time. You can always transfer your domain later, but you’ll be better off picking the right DNS registrar from day one.

3. Extras

Most domain name registrars provide services beyond just domain name registration.
It’s very common for registrars to also be web hosting providers, and bundling the two services can make both easier to manage. Other good add-ons to look for are:

Domain name privacy, which helps you avoid spam and any risk that comes with making your personal information more public.

Auto-renewals, which allow you to put the renewal process on autopilot so you don’t have to worry about forgetting or doing any extra work to keep your domain name registration up to date.

Email addresses that you can set up for yourself and people in your organization at the domain, making your communications look more official.

A multi-year purchase option, so you can secure your domain name for longer without worrying about renewal.

If any of these are features you know you want, find a domain registrar that provides them.

Register Your Domain Today

As you know by now, HostGator is a domain name registrar that provides an intuitive domain name search function and an easy registration process. We offer domain name privacy, automatic renewals, and the option to buy your domain for up to three years at a time. And on top of all that, we’re one of the most respected web hosting providers in the industry. If you want the convenience of managing your web hosting and domain name registration in one place, you can count on HostGator to be a reliable option for both. If you’re ready to move forward and buy a new domain name, get started searching.

Find the post on the HostGator Blog

An Introduction to Load Balancing

Liquid Web Official Blog -

Traffic Means Business

You want your company to be popular. You want to be #trending. Today, it’s a part of doing business. However, trending means traffic, and traffic means a heavy load on your servers. Can your servers—your site—handle viral marketing campaigns and social media campaigns where incoming end users can spike dramatically? Can you host a live stream or media event without having to worry about slowdowns or (we shudder even thinking about it) a total systems failure?

One way you can make sure that you’re ready for whatever comes your way (well, your site’s way, anyway) is to have a load balancer in place. A load balancer uses a series of algorithms to evenly distribute your end users across multiple instances—across multiple servers—of your website, ensuring consistent performance and preventing crashes. Also acting as an automatic failover device, the load balancer is an essential component of your infrastructure.

Why is Load Balancing Important to You?

As of Friday, April 12, 2019, at 12:09 p.m., there are 4.2 billion internet users worldwide. Since January first of this year, they’ve sent 74 trillion tweets, 25.5 quadrillion emails, and made 646.8 trillion Google searches. Oh, and there are 2.5 billion active Facebook users as of 12:18 p.m. Is your website ready for the potential that these numbers represent?

With so many internet users and the ever-rising popularity (and ubiquity) of social media, a small nudge in the right direction could have a significant impact on your site traffic—and with an increase in traffic comes an increase in risk to your ecosystem. You need a way to make sure everyone who visits your site does so in an orderly way, one that doesn’t risk the performance or integrity of your servers. That’s what a load balancer does: it acts like an attended parking lot.
Remember the last time you went to an event where you had to pay for parking? There was, most likely, a single entrance, a person taking money, and a person directing people into parking spots one by one, row by row. A load balancer does much the same thing—your website’s iterations (across multiple servers) are the parking lot, your end users are the cars, and your load balancer is the attendant.

Take a minute and imagine what it would look like if the parking lot at the event had several entrances and no attendants. It would be complete chaos. (I can see it now: fist fights, fender benders, and an eventual full-scale riot. The police would come, the event would get shut down, and no one gets to see whatever it is they were there to see in the first place.) Okay, so maybe that’s a bit of a stretch, but without a load balancer, a spike in traffic can bring your website to a screeching halt. A screeching halt is bad for business.

For every minute of IT downtime—website, servers, database, and the like—companies lose an average of $5,600 (thanks, Gartner, Inc.). That’s somewhere between $140,000 and $300,000 an hour depending on the size and model of your company. The modest investment it takes to put a load balancing solution in place pales in comparison to the losses your enterprise could take if your server(s) crash.

Your Company Will Benefit

According to the Aberdeen Group, the average business will experience 14.1 hours of IT downtime annually, and those 14.1 hours translate into $1.55 million in lost revenue. Revenue loss only increases as your company’s reliance on IT increases. For example, Dun & Bradstreet estimates $6.4 million in losses per hour for the average online brokerage company.
Finally, if you consider that 81% of companies report that they can only shoulder 8.76 hours of downtime annually (this one’s from Information Technology and Intelligence Corp), it becomes abundantly clear how important uptime is to the overall health of your business and the businesses around you. Regardless of the size of your enterprise, a load balancing solution will pay for itself. Even a single averted hour of downtime can be the difference between a good year and a bad year, considering that small businesses average only $390,000 in revenue a year (according to the U.S. Census 2014 Survey of Entrepreneurs).

In 2016, Medium put together a comprehensive report on eCommerce. This report made plain the impact a website outage—or even a slowdown—has on revenue. They even put the top 50 eCommerce websites (Ikea, Macy’s, Nike, etc.) through their paces, measuring connectivity around the clock for a week straight. Given that eCommerce company websites, as Medium puts it, “…are not only an important source of information but the source of income for the companies themselves…” these numbers are pretty drastic. As connectivity, website speed, and performance are increasingly integral to all enterprises, crashing under a heavy load is simply not good for business.

Here’s the skinny, according to Medium:

A whopping 73% of mobile internet users report coming across websites that were simply too slow to load, while 38% reported a 404.

Is your page not loading? If so, 90% of users will (if it’s an option) go to a competitor.

On average (over the 7 days Medium measured), uptime amongst the top 50 was only 99.03% (two 9s), somewhat below the industry’s recognized standard of 99.9% (three 9s) and well below the industry’s gold standard of 99.999% (five 9s).

Short but frequent outages—not prolonged downtime—were most common amongst the top 50 sites.
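The "nines" mentioned above translate directly into annual downtime budgets; a quick calculation shows why the gap between two, three, and five 9s matters (note that three 9s works out to exactly the 8.76 hours cited earlier):

```python
HOURS_PER_YEAR = 365 * 24  # 8,760


def annual_downtime_hours(uptime_pct):
    """Hours of downtime per year implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)


for pct in (99.03, 99.9, 99.999):
    print(f"{pct}% uptime allows {annual_downtime_hours(pct):.2f} hours down per year")
```

At 99.03% uptime, the top-50 average Medium measured, a site can be dark for roughly 85 hours a year; at five 9s, barely five minutes.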
Obviously these numbers—both revenue earned and revenue lost as a result of downtime—are going to change depending on the size, shape, and model of your company. However, one thing is for sure: your business is probably online, which means you have a server, and any time those things go down you’re losing money. You don’t want to lose money.

How a Load Balancer Works

Ok, so you definitely want a load balancer. But even if you’re not designing, buying, and maintaining your own hardware and software, it’s a good idea to know how your hosting service is implementing the technology. Why? So you can stay agile. In most cases, you can work with your host to make changes (sometimes big, sometimes small) to your IT infrastructure to better suit your unique needs. Typically, hosts that provide load balancing will have options that you can choose from. These options fall primarily into two categories:

Algorithms and methods

Hosting dedication

Algorithms & Methods

Load balancing works by employing an algorithm that determines the method by which site traffic is distributed between servers. The 9 algorithms and methods below represent the most common ways load balancing is done.

1. The Round Robin Method

The round robin method is perhaps the least complex of the balancing methods. Traffic is evenly distributed by simply forwarding requests to each server in the infrastructure in turn, one by one. When the algorithm has made it through the list of instances/servers in its entirety, it goes back to the top of the list and begins again. For example, in a 3-server system, a request is made, and the load balancer directs the request to server A, then B, then C, and then A again, and so on. The round robin method is best applied in scenarios in which all the server hardware in the infrastructure is similarly capable (in computing power and capacity).

2. The Least Connections Method

A default load balancing algorithm, the least connections method will assign incoming requests to the server with the fewest active connections. This is the default load balancing method because it offers the best performance in most cases. The least connections method is best suited for situations in which server engagement time (the amount of time a connection stays active) varies. With the round robin method, it is conceivable that one server could get overloaded—for example, if more connections stay active for longer on server A than on server B, server A could come under strain. With the least connections method, this can’t happen.

3. Weighted Least Connections

Also available with the round robin method (it’s called the weighted round robin method, go figure), the weighted least connections algorithm allows each server to be assigned a priority status. For example, if you have one server that has more capacity than another, you might weight the higher-capacity server more heavily. This means that the algorithm would assign an incoming request to the more heavily weighted server in the case of a tie (or some other active connection metric), ensuring a reduced load on the server with less capacity.

4. Source IP Hash

When a load balancer uses a source IP hash, each request coming in from a unique IP is assigned a key, and that key is assigned a server. This not only evenly distributes traffic across the infrastructure, but also allows for server consistency in the case of a disconnection/reconnection: a unique IP, once assigned, will always connect to the same server. According to Citrix, “Caching requests reduces request and response latency, and ensures better resource (CPU) utilization, making caching popular on heavily used Web sites and application servers.”

5. URL Hash

Almost identical to the source IP hash method, the URL hash method assigns keys based on the requested URL, not the incoming IP.

6. The Least Response Time Method

Similar to the least connections method, the least response time method assigns requests based on both the number of connections on the server and the shortest average response time, thus reducing load by incorporating two layers of balancing.

7. The Bandwidth and Packets Method

A method of virtual server balancing, the bandwidth and packets method has the load balancer assign requests based on which server is handling the least amount of traffic (bandwidth).

8. Custom Load

A complex algorithm that requires a load monitor, the custom load method uses an array of server metrics (CPU usage, memory, and response time, among other things) to determine request assignments.

9. Least Pending Requests (LPR)

With the least pending requests method, HTTP/S requests are monitored and distributed to the most available server. The LPR method can simultaneously handle a surge of requests while monitoring the availability of each server, making for even distribution across the infrastructure.

As you can see, there are a lot of solutions to the same issue. One of them is bound to be the solution for you and your company’s unique needs. If you aren’t sure what the best algorithm/solution for you is, you can always work with your hosting provider to help you make the call.

What We Offer at Liquid Web

At Liquid Web, we offer shared or dedicated load balancers. Both options are fully managed. From design to implementation, administration, and monitoring, our network engineers will help make sure you are operating optimally.

Shared Load Balancers

Our managed shared load balancers—think many clients across a hardware/software/network infrastructure—are cost-effective, high performing, and easily scalable (additional web servers can be added to the existing pool of load balanced servers). You’ll have full redundancy with automatic failover built right in. A shared solution is perfect for sites that have grown beyond a single web server.
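To make a few of the balancing methods described above concrete, here is a minimal sketch of round robin, least connections, and source IP hash selection. It is illustrative only, not any vendor's implementation; the server names are placeholders and md5 is just an arbitrary stable hash:

```python
import hashlib
import itertools


class RoundRobin:
    """Method 1: hand each request to the next server in a fixed rotation."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)


class LeastConnections:
    """Method 2: send each request to the server with the fewest active
    connections; callers report completed requests via release()."""
    def __init__(self, servers):
        self.active = {server: 0 for server in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1


def source_ip_hash(servers, client_ip):
    """Method 4: hash the client IP so the same client always lands on
    the same server across disconnections and reconnections."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]


rr = RoundRobin(["A", "B", "C"])
print([rr.pick() for _ in range(4)])  # ['A', 'B', 'C', 'A']
```

Note how least connections degenerates to round robin when all connections finish instantly, but diverges as soon as some connections stay open longer than others.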
Managed Shared Load Balancers are economical plans that include 1Gbps throughput, 100,000 concurrent sessions, 2-10 servers, and 1-10 virtual IPs.

Managed Dedicated Load Balancers

At Liquid Web, our dedicated load balancers are exactly that: completely dedicated to your enterprise. A dedicated solution comes with all of the benefits of shared load balancing, but also features advanced traffic scripting options, a complete API, high-performance SSL, and a full set of resources committed to your infrastructure 24/7/365. With dedicated hardware, you’re guaranteed high performance, low latency, and no bottlenecking. Managed Dedicated Load Balancers are robust solutions that include up to 10Gbps throughput, 100,000 (starting at) concurrent sessions, and unlimited servers and IPs.

Cloud Load Balancers

As more and more companies operate within (at least in part) a cloud environment, a balancing solution within the same environment—as best practice dictates—becomes necessary. Say hello to cloud load balancers. Just like their physical counterparts, cloud load balancers distribute site traffic across redundant virtual nodes, ensuring uptime and mitigating performance issues that result from high traffic. A distinct advantage of the cloud load balancer over physical appliances is the ease and cost-effectiveness of scaling up to meet demand. Simply put, it’s quicker and cheaper to scale up in a cloud environment. At Liquid Web, we’ve got you covered regardless of the environment.

Algorithms

We offer a variety of algorithms, including the round robin method, the least connections method, and the least response time method.

A Final Word About Load Balancing

So, no matter what your goal, if you’ve moved beyond a single web server (or are about to), you would benefit from a load balancer—it will keep your website and your data up, running, highly available, and performing at peak levels.
Whether you’re going to implement it yourself or are looking for a managed system, you’ll be better equipped to make decisions that benefit your company if you have an understanding of your needs, your current systems, and where you ultimately want to go. An HA system (of which load balancing is a part) has to be thought of as not simply improving uptime, but mitigating downtime, the death knell of a company in today’s always-on, 24/7/365 digital economy.

With a load balancer solution in place (physical, virtual, or both), you’ll be on your way toward a lean, mean, HA machine. However, are your other systems HA? Do they have the proper redundancies? We can help with that, too. Read The Ultimate High Availability Checklist for Any Website; it will help you take stock of your infrastructure, identify vulnerabilities in your systems, and work towards a truly HA environment so that you can avoid downtime.

The post An Introduction to Load Balancing appeared first on Liquid Web.

Agency Spotlight Series: Power Digital Marketing

WP Engine -

A key part of our business at WP Engine is the partnerships we’ve built with digital agencies. With emerging technologies and trends, increasing competitiveness, and the pressure to deliver memorable digital experiences, agencies have enough to worry about. WP Engine allows agencies to focus on creation and execution instead of worrying about performance and security.… The post Agency Spotlight Series: Power Digital Marketing appeared first on WP Engine.

NGINX structural enhancements for HTTP/2 performance

CloudFlare Blog -

Introduction

My team, the Cloudflare PROTOCOLS team, is responsible for termination of HTTP traffic at the edge of the Cloudflare network. We deal with features related to TCP, QUIC, TLS and Secure Certificate management, and HTTP/1 and HTTP/2. Over Q1, we were responsible for implementing the Enhanced HTTP/2 Prioritization product that Cloudflare announced during Speed Week.

This is a very exciting project to be part of, and doubly exciting to see the results of, but during the course of the project, we had a number of interesting realisations about NGINX: the HTTP-oriented server onto which Cloudflare currently deploys its software infrastructure. We quickly became certain that our Enhanced HTTP/2 Prioritization project could not achieve even moderate success if the internal workings of NGINX were not changed.

Due to these realisations we embarked upon a number of significant changes to the internal structure of NGINX in parallel to the work on the core prioritization product. This blog post describes the motivation behind the structural changes, how we approached them, and what impact they had.
We also identify additional changes that we plan to add to our roadmap, which we hope will improve performance further.

Background

Enhanced HTTP/2 Prioritization aims to do one thing to web traffic flowing between a client and a server: it provides a means to shape the many HTTP/2 streams as they flow from upstream (server or origin side) into a single HTTP/2 connection that flows downstream (client side).

Enhanced HTTP/2 Prioritization allows site owners and the Cloudflare edge systems to dictate the rules about how various objects should combine into the single HTTP/2 connection: whether a particular object should have priority and dominate that connection and reach the client as soon as possible, or whether a group of objects should evenly share the capacity of the connection and put more emphasis on parallelism.

As a result, Enhanced HTTP/2 Prioritization allows site owners to tackle two problems that exist between a client and a server: how to control the precedence and ordering of objects, and how to make the best use of a limited connection resource, which may be constrained by a number of factors such as bandwidth, volume of traffic, and CPU workload at the various stages on the path of the connection.

What did we see?

The key to prioritisation is being able to compare two or more HTTP/2 streams in order to determine which one’s frame is to go down the pipe next. The Enhanced HTTP/2 Prioritization project necessarily drew us into the core NGINX codebase, as our intention was to fundamentally alter the way that NGINX compared and queued HTTP/2 data frames as they were written back to the client.

Very early in the analysis phase, as we rummaged through the NGINX internals to survey the site of our proposed features, we noticed a number of shortcomings in the structure of NGINX itself, in particular: how it moved data from upstream (server side) to downstream (client side) and how it temporarily stored (buffered) that data in its various internal stages.
The main conclusion of our early analysis was that NGINX largely failed to give the stream data frames any 'proximity': either streams were processed in the NGINX HTTP/2 layer in isolated succession, or frames of different streams spent very little time in the same place (a shared queue, for example). The net effect was a reduction in the opportunities for useful comparison.

We coined a new, barely scientific but useful measurement, Potential, to describe how effectively the Enhanced HTTP/2 Prioritization strategies (or even the default NGINX prioritization) can be applied to queued data streams. Potential is not so much a measurement of the effectiveness of prioritization per se (that metric would come later in the project) as a measurement of the level of participation during the application of the algorithm. In simple terms, it considers the number of streams, and frames thereof, that are included in an iteration of prioritization, with more streams and more frames leading to higher Potential.

What we could see from early on was that, by default, NGINX displayed low Potential, rendering prioritization instructions fairly useless, whether they came from the browser (as in the traditional HTTP/2 prioritization model) or from our Enhanced HTTP/2 Prioritization product.

What did we do?

With the goal of fixing the specific problems related to Potential, and also of improving the general throughput of the system, we identified some key pain points in NGINX.
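As a rough illustration of the Potential idea, consider counting how many streams and frames actually participate in one prioritization iteration. The function and queue layout below are invented for this example, not taken from NGINX.

```python
# Toy measure of "Potential": how much material one prioritization
# iteration has to compare. More streams and more frames visible
# together means more useful comparisons.

def potential(queues):
    """queues: dict of stream_id -> list of queued frames."""
    participating = {sid: frames for sid, frames in queues.items() if frames}
    num_streams = len(participating)
    num_frames = sum(len(frames) for frames in participating.values())
    return num_streams * num_frames

# One stream's frames at a time: almost nothing to compare.
low = potential({"a": ["f1"], "b": [], "c": []})

# Frames of several streams queued together: real choices to make.
high = potential({"a": ["f1", "f2"], "b": ["f3"], "c": ["f4", "f5"]})
```

The exact formula does not matter; what matters is that processing streams in isolated succession keeps this number near its floor.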
These points, which are described below, have either been worked on and improved as part of our initial release of Enhanced HTTP/2 Prioritization, or have branched out into meaningful projects of their own that we will put engineering effort into over the course of the next few months.

HTTP/2 frame write queue reclamation

Write queue reclamation shipped with our release of Enhanced HTTP/2 Prioritization, and ironically it wasn't a change made to the original NGINX; it was a change made against our own Enhanced HTTP/2 Prioritization implementation when we were part way through the project. It serves as a good example of something one might call conservation of data, which is a good way to increase Potential.

Like the original NGINX, our Enhanced HTTP/2 Prioritization algorithm places a cohort of HTTP/2 data frames into a write queue as the result of an iteration of the prioritization strategies applied to them. The contents of the write queue are destined to be written to the downstream TLS layer. Also like the original NGINX, the write queue may only be partially written to the TLS layer, due to back-pressure from a network connection that has temporarily reached write capacity.

Early in the project, if the write queue was only partially written to the TLS layer, we would simply leave the frames in the write queue until the backlog cleared, then re-attempt to write that data to the network in a future write iteration, just like the original NGINX.

The original NGINX takes this approach because the write queue is the only place that waiting data frames are stored.
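To make the partial-write situation concrete before describing our change, here is a toy model of reclamation. All structures and names are hypothetical, not NGINX code: frames left over after a partial flush are returned to per-stream queues so they can join a later prioritization round instead of sitting in the write queue.

```python
# Toy write-queue flush with reclamation of unwritten frames.

def flush(write_queue, stream_queues, capacity):
    """write_queue: list of (stream_id, frame) pairs, already in
    priority order. capacity: how many frames the network accepts
    before back-pressure. Unwritten frames go back to the front of
    their per-stream queues."""
    written = write_queue[:capacity]
    for stream_id, frame in reversed(write_queue[capacity:]):
        stream_queues[stream_id].insert(0, frame)   # reclaim
    write_queue.clear()
    return written

streams = {"a": [], "b": []}
wq = [("a", "a1"), ("b", "b1"), ("a", "a2"), ("b", "b2")]

# The network only accepts two frames; the rest are reclaimed and can
# be re-compared against any frames that arrive later.
sent = flush(wq, streams, capacity=2)
```

In the original approach the two leftover frames would simply wait in the write queue; reclaiming them instead is what lets them participate in another round of prioritization.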
However, in our NGINX modified for Enhanced HTTP/2 Prioritization, we have a structure the original NGINX lacks: per-stream data frame queues, where we temporarily store data frames before our prioritization algorithms are applied to them.

We came to the realisation that, in the event of a partial write, we could restore the unwritten frames back into their per-stream queues. If a subsequent data cohort arrived behind the partially unwritten one, the previously unwritten frames could then participate in an additional round of prioritization comparisons, raising the Potential of our algorithms.

The following diagram illustrates this process:

We were very pleased to ship Enhanced HTTP/2 Prioritization with the reclamation feature included, as this single enhancement greatly increased Potential and made up for the fact that we had to withhold the next enhancement from Speed Week due to its delicacy.

HTTP/2 frame write event re-ordering

In Cloudflare infrastructure, we map the many streams of a single HTTP/2 connection from the eyeball onto multiple HTTP/1.1 connections to the upstream Cloudflare control plane.

As a note: it may seem counterintuitive that we downgrade protocols like this, and doubly counterintuitive when I reveal that we also disable HTTP keepalive on these upstream connections, resulting in only one transaction per connection. However, this arrangement offers a number of advantages, particularly in the form of improved CPU workload distribution.

When NGINX monitors its upstream HTTP/1.1 connections for read activity, it may detect readability on many of those connections and process them all in a batch.
However, within that batch, each of the upstream connections is processed sequentially, one at a time, from start to finish: from the HTTP/1.1 connection read, to framing in the HTTP/2 stream, to the HTTP/2 connection write to the TLS layer.

The existing NGINX workflow is illustrated in this diagram:

By committing each stream's frames to the TLS layer one stream at a time, many frames may pass entirely through the NGINX system before back-pressure on the downstream connection allows the queue of frames to build up; it is that build-up which would place frames in proximity and give the prioritization logic something to work on. This negatively impacts Potential and reduces the effectiveness of prioritization.

The Cloudflare Enhanced HTTP/2 Prioritization modified NGINX aims to re-arrange the internal workflow described above into the following model:

Although we continue to frame upstream data into HTTP/2 data frames in separate iterations for each upstream connection, we no longer commit these frames to a single write queue within each iteration; instead, we arrange the frames into the per-stream queues described earlier. We then post a single event to the end of the per-connection iterations, and perform the prioritization, queuing and writing of the HTTP/2 data frames of all streams in that single event.

This single event finds the cohort of data conveniently stored in their respective per-stream queues, all in close proximity, which greatly increases the Potential of the edge prioritization algorithms.

In a form closer to actual code, the core of this modification changes something like this:

```c
ngx_http_v2_process_data(ngx_http_v2_connection *h2_conn,
                         ngx_http_v2_stream *h2_stream,
                         ngx_buffer *buffer)
{
    while ( !ngx_buffer_empty(buffer) ) {
        ngx_http_v2_frame_data(h2_conn, h2_stream->frames, buffer);
    }
    ngx_http_v2_prioritise(h2_conn->queue, h2_stream->frames);
    ngx_http_v2_write_queue(h2_conn->queue);
}
```

To this:

```c
ngx_http_v2_process_data(ngx_http_v2_connection *h2_conn,
                         ngx_http_v2_stream *h2_stream,
                         ngx_buffer *buffer)
{
    while ( !ngx_buffer_empty(buffer) ) {
        ngx_http_v2_frame_data(h2_conn, h2_stream->frames, buffer);
    }
    ngx_list_add(h2_conn->active_streams, h2_stream);
    ngx_call_once_async(ngx_http_v2_write_streams, h2_conn);
}

ngx_http_v2_write_streams(ngx_http_v2_connection *h2_conn)
{
    ngx_http_v2_stream *h2_stream;

    while ( !ngx_list_empty(h2_conn->active_streams) ) {
        h2_stream = ngx_list_pop(h2_conn->active_streams);
        ngx_http_v2_prioritise(h2_conn->queue, h2_stream->frames);
    }
    ngx_http_v2_write_queue(h2_conn->queue);
}
```

There is a high level of risk in this modification, for even though it is remarkably small, we are taking the well-established and well-debugged event flow in NGINX and switching it around to a significant degree. Like taking a number of Jenga pieces out of the tower and placing them in another location, we risk race conditions, event misfires and event black holes leading to lockups during transaction processing.

Because of this level of risk, we did not release this change in its entirety during Speed Week, but we will continue to test and refine it for future release.

Upstream buffer partial re-use

NGINX has an internal buffer region to store connection data it reads from upstream. To begin with, the entirety of this buffer is Ready for use. When data is read from upstream into the Ready buffer, the part of the buffer that holds the data is passed to the downstream HTTP/2 layer.
Since the HTTP/2 layer takes responsibility for that data, the portion of the buffer holding it is marked as Busy, and it will remain Busy for as long as it takes the HTTP/2 layer to write the data into the TLS layer, a process that may take some time (in computer terms!).

During this gulf of time, the upstream layer may continue to read more data into the remaining Ready sections of the buffer, and continue to pass that incremental data to the HTTP/2 layer, until there are no Ready sections available.

When Busy data is finally finished with in the HTTP/2 layer, the buffer space that contained it is marked as Free.

The process is illustrated in this diagram:

You may ask: when the leading part of the upstream buffer is marked as Free (in blue in the diagram), even though the trailing part of the upstream buffer is still Busy, can the Free part be re-used for reading more data from upstream?

The answer to that question is: no.

Because even a small part of the buffer is still Busy, NGINX refuses to allow any of the buffer space to be re-used for reads. Only when the entire buffer is Free can it be returned to the Ready state and used for another iteration of upstream reads. So, in summary, data can be read from upstream into Ready space at the tail of the buffer, but not into Free space at the head of the buffer.

This is a shortcoming in NGINX, and clearly undesirable, as it interrupts the flow of data into the system. We asked: what if we could cycle through this buffer region and re-use parts at the head as they became Free? We seek to answer that question in the near future by testing the following buffering model in NGINX:

TLS layer Buffering

On a number of occasions in the text above, I have mentioned the TLS layer and how the HTTP/2 layer writes data into it.
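Before moving on to the TLS layer, the buffer-cycling idea above can be sketched as a ring buffer. This is a deliberately simplified model with invented names, far simpler than NGINX's real buffer chains; the point is only that Free space at the head becomes usable again immediately.

```python
# Toy ring buffer: head space freed by the HTTP/2 layer becomes Ready
# for new upstream reads even while a later region is still Busy.

class RingBuffer:
    def __init__(self, size):
        self.size = size
        self.head = 0      # next Busy byte to be freed
        self.tail = 0      # next position for an upstream read
        self.used = 0      # bytes currently Busy

    def ready(self):
        """Space currently available for upstream reads."""
        return self.size - self.used

    def read_upstream(self, n):
        """Mark n freshly read bytes as Busy."""
        assert n <= self.ready()
        self.tail = (self.tail + n) % self.size
        self.used += n

    def free(self, n):
        """HTTP/2 layer has finished with n bytes at the head."""
        assert n <= self.used
        self.head = (self.head + n) % self.size
        self.used -= n

buf = RingBuffer(10)
buf.read_upstream(8)   # 8 bytes Busy, 2 Ready
buf.free(5)            # head 5 bytes freed while 3 remain Busy

# Unlike the current NGINX model, those freed head bytes count as
# Ready again straight away:
reusable = buf.ready()
```

In the current NGINX behaviour the equivalent of `ready()` would stay at 2 until every Busy byte was freed; the cycled model keeps data flowing into the system continuously.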
In the OSI network model, TLS sits just below the protocol (HTTP/2) layer, and in many consciously designed networking software systems, such as NGINX, the software interfaces are separated in a way that mimics this layering.

The NGINX HTTP/2 layer collects the current cohort of data frames, places them in priority order into an output queue, and submits this queue to the TLS layer. The TLS layer makes use of a per-connection buffer to collect HTTP/2 layer data before performing the actual cryptographic transformations on that data.

The purpose of the buffer is to give the TLS layer a more meaningful quantity of data to encrypt, for if the buffer were too small, or the TLS layer simply relied on the units of data from the HTTP/2 layer, then the overhead of encrypting and transmitting a multitude of small blocks could negatively impact system throughput.

The following diagram illustrates this undersize buffer situation:

If the TLS buffer is too big, then an excessive amount of HTTP/2 data is committed to encryption, and if it fails to write to the network due to back-pressure, it is locked into the TLS layer and not available to return to the HTTP/2 layer for the reclamation process, reducing the effectiveness of reclamation. The following diagram illustrates this oversize buffer situation:

In the coming months, we will embark on a process to find the 'goldilocks' spot for TLS buffering: to size the TLS buffer so it is big enough to maintain the efficiency of encryption and network writes, but not so big as to reduce responsiveness to incomplete network writes and the efficiency of reclamation.

Thank you - Next!

The Enhanced HTTP/2 Prioritization project has the lofty goal of fundamentally re-shaping how we send traffic from the Cloudflare edge to clients, and as the results of our testing and the feedback from some of our customers show, we have certainly achieved that!
However, one of the most important things we took away from the project was the critical role that internal data flow within our NGINX software infrastructure plays in the traffic observed by our end users. We found that changing a few lines of (albeit critical) code could significantly affect the effectiveness and performance of our prioritization algorithms. Another positive outcome is that, in addition to improving HTTP/2, we look forward to carrying our newfound skills and lessons learned over to HTTP/3 over QUIC.

We are eager to share our modifications to NGINX with the community, so we have opened this ticket, through which we will discuss upstreaming the event re-ordering change and the buffer partial re-use change with the NGINX team.

As Cloudflare continues to grow, our requirements on our software infrastructure also shift. Cloudflare has already moved beyond proxying HTTP/1 over TCP to supporting termination and Layer 3 and 4 protection for any UDP and TCP traffic. Now we are moving on to other technologies and protocols, such as QUIC and HTTP/3, and to full proxying of a wide range of other protocols, such as messaging and streaming media.

For these endeavours we are looking at new ways to answer questions on topics such as scalability, localised performance, wide-scale performance, introspection and debuggability, release agility, and maintainability.

If you would like to help us answer these questions, and you know a bit about hardware and software scalability, network programming, asynchronous event- and futures-based software design, TCP, TLS, QUIC, HTTP, RPC protocols, Rust, or maybe something else, then have a look here.

4 Free or Inexpensive Resources to Help You Start Your Online Business

HostGator Blog -

The post 4 Free or Inexpensive Resources to Help You Start Your Online Business appeared first on HostGator Blog.

There’s a lot to learn before (and after) you start your own business, and if you don’t have a business degree or previous experience running an online business, your exciting plans can feel a bit overwhelming. So can sorting through all the advice and information out there for new and would-be business owners. To help you get off to a strong start on a small budget, here are some reliable free and low-cost resources to help you plan, launch, and grow your new business.

1. Mentoring from Experienced Professionals

Want answers to specific business questions or insights from someone who’s been there and done that? SCORE is a nonprofit supported by the US Small Business Administration that provides free, confidential mentoring for entrepreneurs in person, online, and by phone. With more than 10,000 volunteers providing advice nationwide, the odds are good that you can connect with someone in your niche. You can enter your location on SCORE’s Find a Mentor page to see all the SCORE volunteer mentors near you, search for mentors by industry or keyword, and find the closest SCORE office.

The SCORE website also has a resource library full of blog posts, webinars, podcasts, videos, and templates on thousands of topics. Some of the webinars charge a small fee, but most of the resources are free.

2. Courses to Build Your Business Skills

Khan Academy has a group of videos in its Careers section that feature different small business owners and freelancers talking about what they do, how much they earn, how they work, and how they got started. The range of careers covered is relatively small, but even if your niche isn’t included, there’s good advice on running a business in several of the presentations, and you can get an idea of all the tasks that go into being your own boss.
If you’re ready to tackle business topics at the college level, check out OpenCourseWare from the Massachusetts Institute of Technology. The site provides free access to the materials for most of MIT’s undergraduate and graduate-level courses. You can search by academic department for classes on accounting, marketing, and other business topics. Or you can explore OpenCourseWare’s Entrepreneurship portal, which includes dozens of classes covering planning, pricing, finance and accounting, marketing, patents, sales, operations, and much more. The only catch? It’s up to you to download and work through the course materials on your own.

Coursera also offers college-level instruction, and it provides graded assignments and feedback in courses from universities around the world. Unlike traditional distance-learning classes, Coursera courses don’t come with a traditional tuition price tag. Some courses can be audited for free, and if you want to earn a certificate or access all the course features, a subscription plan runs about $50 per month. One Coursera option for budding business owners is Michigan State University’s six-course specialization How to Start Your Own Business, which is designed to walk students through the process of starting a business as they launch it.

The classes you may need will depend on the type of business you want to run. Planning an e-commerce business? OpenCourseWare’s undergrad-level Economics and E-Commerce course materials cover pricing, sales taxes, different types of e-commerce, advertising, and search.

One recommendation from me: if you’re planning a service business like freelance design or writing, event planning, or repairs, it’s a good idea to learn as much as you can about negotiation before you begin, both to earn what you’re worth and to build good relationships with good clients.
Becoming a good negotiator can help you in many areas of your business, from setting rates and writing bids to working with vendors and hammering out the fine print in contracts. Coursera offers more than 50 negotiation courses, and MIT OpenCourseWare offers materials for several negotiation classes from the Sloan School of Management’s curriculum.

Whatever you decide to study now, remember that successful business owners are always learning. Free and low-cost courses are a low-stress way to keep up with trends and innovations in your niche.

3. Guidance for Building a User-Friendly eCommerce Website

In late 2018, Google published its UX Playbook for Retail: Collection of best practices to delight your users. Google reviewed hundreds of retail sites to come up with its recommendations, and the result is probably the best free resource you’ll find for learning what to include on your site and why. The free-to-download playbook uses Sephora, Warby Parker, Boots, ThredUp, and other best-in-class e-commerce sites to show you exactly what works for six key areas: the homepage or landing page, menus and navigation, search, products and categories, conversion, and forms. For each area, there are details on what to include and what to avoid, to help you create a site that looks professional and is frustration-free for shoppers. There are also charts showing the ease of implementation, impact, and key metrics to track for each suggestion in the playbook.

Don’t let the playbook’s 108-page length discourage you from diving in. The guide’s design, with lots of screenshots, checklists, and charts, makes it a fast, informative read you can consult as you plan each section of your site.

4. Easy Tools to Create Your Website

DIY website design used to be reserved for hardy amateurs who enjoy coding and don’t mind spending time tinkering and consulting support forums.
For the rest of us, website builders have opened up high-quality site design to anyone who can drag and drop. Site builders like Gator Website Builder make setting up a small business website or even an online store fast and easy by packaging everything you need to get started and making the design process a snap.

For example, every Gator plan includes site hosting, domain name registration, an SSL certificate to protect your data and your customers, analytics to help you measure and improve your site’s performance, and support. You also get unlimited pages, storage, and bandwidth, so there’s no limit to how much your site can grow as you add products, services, and testimonials from your best customers. You can also upgrade to Gator Premium for priority support, or to Gator eCommerce for priority support plus online store functionality.

Ready to get started? Choose your Gator Website Builder plan now.

Find the post on the HostGator Blog

How Do WordPress Caching Plugins Work?

Liquid Web Official Blog -

With loading speed being one of the crucial factors that make or break the success of a WordPress website, WordPress caching plugins are all over the place these days. There are free caching plugins and premium ones. Companies bombard us with their marketing, explaining why their WordPress caching plugin is the best one. Ever wonder: how do those WordPress caching plugins actually work? Why do they make our sites load faster? Why do they sometimes break the entire layout of our websites?

In this article, I’m walking you through the actions performed by caching plugins in the background. Don’t worry, I’m not going to lose myself in tech speak. You’ll be able to understand this article just fine even if you cannot write code.

To set a standard: by slow loading times, I mean sites that take more than three seconds to load. Based on data collected by Pingdom, the average loading time of websites in 2017 was 3.21 seconds. Google said in a study from 2018 that 53% of mobile visitors will leave a website that takes longer than three seconds to load. If your site is taking longer than three seconds to load, chances are that you could double your mobile traffic by making it load faster.

Why WordPress Sites Load Slow

Let’s start with why WordPress sites load slowly in the first place. The problem comes down to how WordPress works. Let me explain.

WordPress is nothing more than a collection of files and a database located on your web hosting account. The files somewhat magically create your website, and the database contains all the texts, logins, settings, etc. You can think of the database as an Excel spreadsheet (just a bit more complicated).

When a visitor comes to your website, your web host starts sending some of those files to the browser of your visitor and loads the data that makes up your website from the database. If the visitor opens your Home page, the web host will load all the data that builds the home page.
If the visitor goes on to your Contact page, the web host will load the contents of your Contact page from the database and send them to the browser of your visitor. Now, this is where the potential for slow loading times begins. If your web hosting company is not optimized for WordPress (or if you are on shared hosting), loading the contents from the database can take a long time.

Let me walk you through this step by step so that you understand how WordPress caching plugins can speed up this process:

1. Your website visitor clicks on a link to your website (e.g. she found you in Google’s search results or saw one of your ads on social media).
2. The browser of your visitor sends a request to your web hosting company. It’s as if the browser were saying, “Hey, please send me the website of Jan Koch.”
3. Your hosting company says back, “OK, let me grab all the data necessary to load Jan’s website and send it over to you.” At this point, your web host starts processing the files that make up your website, searches for the contents in your database, and packages everything up nicely in the form of your website.
4. Once your web host has loaded all the files and found all the contents, the website gets sent to your visitor.
5. Finally, the browser of your visitor receives the data and can display the website.

Even though this process is very simplified, I hope it illustrates the multiple problems that can potentially make your website load slowly. There are three metrics you’ll see in most speed analysis tools which summarize how well a website is loading: the number of requests, the page size, and the load time itself.

This is a screenshot of a speed test I ran against my site using the Pingdom Website Speed Test. You can see that the load time of my page is 351 ms, the page size is 757.2KB, and it takes 29 requests to load the website. Those results are pretty good, and what I consider to be the optimum of what’s possible with reasonable effort.
I tested many WordPress caching plugins to get to these results, settling on Swift Performance. I use their paid version, but Swift Performance Lite is pretty good too! What I’ve done to achieve these numbers is pretty simple. Let me show you how.

How WordPress Caching Plugins Work

Whenever you try to make your WordPress site load faster, you’ll inevitably touch WordPress caching plugins at some point. That’s because those plugins can intercept and improve the loading process I described above quite heavily!

This graphic shows the types of files a website consists of. It’s part of the speed analysis with Pingdom I shared above. You can see that my website (like yours) consists of:

- Image files
- CSS files (these control how your site looks)
- Scripts (these control how your site works and sometimes parts of the design)
- Fonts (your text should look beautiful)
- XHR data (data being transferred between your host and a browser)
- HTML (the structural code of your website)

Each of these file types adds to the loading time of your site. You can see that most of the page size for WP Mastery is images. Though not exactly a caching issue, I want you to know that I use Imagify to optimize my images for fast loading times.

Page Caching and Preloading

Let’s start with two of the most important features of WordPress caching plugins: page caching and cache preloading. Don’t worry, you don’t need to be a rocket scientist to understand how they work. Remember my explanation of how your website gets loaded when a visitor accesses your domain? That whole process of loading the data from the database, gathering all the files, and all the other stuff gets circumvented when a page cache is enabled and the cache is preloaded. Instead of loading all information from the database, your website now has a copy of every single page ready to send to your visitors directly.
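In essence, a page cache is a lookup table keyed by URL. This toy sketch (not how any particular plugin is implemented) shows why a cache hit skips the expensive database work entirely.

```python
# Toy page cache: a dict from URL to the finished HTML for that page.

page_cache = {}

def render_from_database(url):
    # Stand-in for the slow path: database queries, templates, plugins...
    return "<html>page for " + url + "</html>"

def serve(url):
    if url in page_cache:             # cache hit: send the stored copy
        return page_cache[url], "HIT"
    html = render_from_database(url)  # cache miss: do the slow work once
    page_cache[url] = html            # ...and keep the result
    return html, "MISS"

first = serve("/contact")    # built from the database
second = serve("/contact")   # served straight from the cache
```

Preloading simply means running the slow path for every page ahead of time, so even the first real visitor gets a hit.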
Your web host doesn’t have to look for the correct information in the database anymore; it can simply send the data directly.

This is my current page cache status at WP Mastery. You can see that I have 559 individual pages on my site, including blog posts, archives, and other contents. My website has preloaded every single content piece into a cache so that it can deliver the website quickly to visitors who access the page. Preloading the page cache is a highly automated process, in which you don’t have to do anything other than start it. You’ll want to check the cache status occasionally and restart the preloading if something went wrong. But that’s all; the magic happens in the background.

File Minification and Combination

Similar to how Imagify reduces the file size of the images on a website, caching plugins can reduce the size of CSS files, JavaScript files, and HTML content. These functions are called “minification” and are included in most WordPress caching plugins. By minifying files, caching plugins automatically remove whitespace, line breaks, and other unnecessary markup. With just the necessary code, and none of the styling that makes the code more readable for humans, WordPress caching plugins can reduce your overall page size.

I always start my optimization processes by reducing the page size of the website I’m working on. So if you can set up your caching plugin with minification, you’ll have a head start already. One word of caution, though: minification can sometimes cause problems and break your layout! Be very careful when applying minification to your website, and enable one option at a time.

Here’s a screenshot of my settings for minifying and optimizing CSS delivery in Swift Performance Pro. You’ll likely see similar options in your WordPress caching plugin if you’re using a different one. Activate just one option at a time, clear the cache, and then test your site in multiple browsers to ensure it loads correctly.
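As a toy illustration of what minification does (real plugins use proper CSS and JS parsers, not naive regexes like these), here is a tiny CSS minifier that strips comments and collapses whitespace:

```python
import re

def minify_css(css):
    """Naive CSS minifier for illustration only."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # drop comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)     # trim around syntax
    return css.strip()

css = """
/* main heading */
h1 {
    color: #333;
    margin: 0;
}
"""
small = minify_css(css)
```

The rules are identical before and after; the browser simply has fewer bytes to download. It is also easy to see from this sketch how an over-aggressive rule could mangle valid code, which is exactly why plugins let you exclude files.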
Almost every WP caching plugin will allow you to exclude stylesheets or JavaScript files from minification. Make good use of this option in case your layout breaks when minifying all files.

Alongside minifying files, most WordPress caching plugins will also allow you to merge multiple files into one, especially when working with CSS and JavaScript files. That function is a fantastic way to reduce the number of requests needed to load a website. Without that merging of files, it would take more than 100 requests to load my site. By enabling the merging of JS files and CSS files, I got it down to 29 requests.

Setting Expires Headers

Another very useful core functionality in WordPress caching plugins is setting so-called “Expires” headers. These are snippets of information that tell web browsers whether a file has changed since the browser last visited a website. Most often, Expires headers are set for files that don’t change often; images, JavaScript, and CSS files are common examples.

These headers work as follows: the first time a visitor comes to your site, the browser downloads all the files necessary to display the page. When the visitor opens a new page on your site (say, going from your Home page to your Services page), her browser knows that some files don’t need to be downloaded again, because those files are marked with Expires headers. The browser then loads only the new data and thus can display the new page faster.

From the definition in the MDN web docs (Mozilla): "The Expires header contains the date/time after which the response is considered stale." Browsers know that, if an Expires header has not yet expired, the information contained in the file is still valid. Since you would usually set Expires headers to at least one month (sometimes even one year), these headers deliver great results for recurring visitors to your website. Expires headers are a simple way to improve your website speed; I highly encourage you to give them a try!
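For illustration, here is what producing an Expires header looks like on the wire. The header format and date syntax are standard HTTP; the helper function and its one-year lifetime are just for this example, roughly what a caching plugin emits for static files.

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def expires_header(lifetime_days):
    """Build an Expires header set lifetime_days in the future,
    in the HTTP date format browsers expect."""
    expiry = datetime.now(timezone.utc) + timedelta(days=lifetime_days)
    return "Expires: " + format_datetime(expiry, usegmt=True)

# Produces something like:
#   Expires: Thu, 21 May 2020 10:00:00 GMT
header = expires_header(365)
```

Until that date passes, a returning visitor's browser will reuse its local copy of the file instead of requesting it again.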
What Caching Plugins to Use

After highlighting just three of the main features WordPress caching plugins bring to your website, I want to leave you with a few recommendations and tips on choosing a caching plugin for your site. Ultimately, it’s a decision of free plugins vs. paid ones.

Popular free caching plugins are:

- W3 Total Cache (1+ million installs)
- WP Super Cache (2+ million installs)
- WP Fastest Cache (900,000+ installs)

My favorite free caching plugin, however, is Swift Performance Lite, with a mere 10,000+ installations. I’ve gotten the best speed results with this plugin.

When it comes to paid caching plugins, there are really just two plugins to be taken seriously: WP Rocket and Swift Performance Pro. Asking which plugin is better is similar to asking whether you like Windows or Mac more; it’s almost a religious debate in the WordPress world. Liquid Web recommends WP Rocket in their Knowledge Base, and I’ve personally been able to make WooCommerce shops load in less than one second using WP Rocket. My own site runs Swift Performance Pro and also loads extremely fast. So you cannot go wrong with either of them.

Try Managed WordPress for Better Results

Managed WordPress Hosting takes care of image compression, automatic updates for plugins and the platform, automatic daily backups, automatic SSL, and staging environments, as well as access to developer tools and no pageview/traffic limits.

The post How Do WordPress Caching Plugins Work? appeared first on Liquid Web.

3 Types of Social Video That Work for Any Business

Social Media Examiner -

Want to add more video to your social media marketing? Wondering how other businesses use video? In this article, you’ll discover three types of video that work for any business on IGTV, Twitter, and LinkedIn.

Why Social Video Matters to Your Business

According to a 2019 survey by Wyzowl, 87% of marketers see video as […]

The post 3 Types of Social Video That Work for Any Business appeared first on Social Media Marketing | Social Media Examiner.

5 Best Online Payment Gateways in 2019 for your E-commerce Website

Reseller Club Blog -

Running an online e-commerce business takes a lot of strategizing and planning: setting up your website, selecting the right web hosting, and, most importantly, choosing a payment gateway for your store. To make your work easier, we have compiled a list of the 5 best online payment gateways in 2019.

Do Payment Gateways Affect your Business?

Choosing the right payment gateway is crucial to the success of your e-commerce store. Payment gateways are like a bridge between the buyer and the seller. They permit fund transfers directly to the seller, keeping the security and comfort of the buyer in mind. As per a survey by the Baymard Institute, 6% of customers abandoned their carts because there weren't enough payment methods available. Added to this, most users these days prefer mobile payment options, as they are quick and effortless. Thus, it is imperative that you choose a payment gateway keeping these points in mind:

- Secure
- Well-known
- Easy to use

Let us have a look at the top 5 online payment gateways to look forward to in 2019.

PayPal

PayPal is a global online payments platform that assists in online money transfer. It currently has over 277 million users worldwide and operates in 202 markets. Moreover, PayPal allows customers to send, receive, and hold funds in 25 currencies worldwide. One of the advantages of PayPal is that customers need not have a PayPal account to process a payment. This is a great advantage for e-commerce stores, as they need not worry about whether their customer has a PayPal account or not.

Key Features of PayPal:
- Doesn't require users to have a PayPal account to process payment
- Supports international payments and credit cards
- Multi-currency support
- No withdrawal fee
- Fast mobile payment

PayU

PayU is a prominent online payment service provider that processes payments faster for both merchants and buyers. PayU covers 18 markets across Asia, Central and Eastern Europe, the Middle East, Latin America, and Africa, catering to over 2.3 billion consumers. It offers over 300 payment methods for fast, simple, and secure electronic payments across platforms. It supports one-click buy, which allows users to purchase with a single click, thus improving customer conversion rates on your e-commerce website.

Key Features of PayU:
- Easily integrate and receive all local payment methods instantly
- Supports one-click buy, allowing users to purchase with a single click
- Mobile integration
- Web checkout
- Multi-currency support

Amazon Payments

Amazon Payments is a payment service offered by the e-commerce giant Amazon. The payment gateway is available to Amazon users, both sellers and buyers, to help smoothen their online purchase process. With Amazon Pay, merchants can accept payments either online or on mobile. Moreover, it allows customers to reuse the details from their Amazon account on the merchant's site, so they don't need to re-enter their name, shipping address, credit card details, etc., without compromising security. This smoothens and speeds up the payment process.

Key Features of Amazon Payments:
- Faster checkout process
- Top-notch security
- Merchant website integration
- Supports automatic payments
- Fraud protection

Braintree

Braintree is an online payment gateway and a division of PayPal, designed to make your payment process simpler. Braintree supports over 45 countries and 130+ currencies worldwide. One of the benefits of Braintree is that merchants can tailor their checkout flows any way they would like while remaining PCI compliant. Moreover, it saves your customers the time and hassle of re-entering their payment information every time they make a purchase.

Key Features of Braintree:
- Merchants can customize their checkout workflow
- Easy data migration
- Dynamic control panel
- Easy and faster repeat billing
- Advanced fraud protection

Authorize.Net

Authorize.Net is a payment gateway platform powering over 3 lakh (300,000+) customers. It provides the security and complex infrastructure needed to enable fast, secure, and reliable data transfer. It offers its users plenty of options for both accepting and processing payments, online as well as at retail locations. The online payment system accepts credit cards and electronic cheques from websites and deposits the money directly into the merchant account.

Key Features of Authorize.Net:
- Supports multiple payment options, viz. mobile, retail, mail, and phone payment
- Employs an advanced Fraud Detection Suite
- Supports recurring billing
- Does not have a fixed enterprise pricing scheme
- Supports sync with QuickBooks

Conclusion:

We at ResellerClub use the PayPal and PayU payment gateways, as we have a global presence. However, as an e-commerce store owner, you need to figure out which online payment gateway is the best option for you. It might be one from our top 5 list or any other payment gateway. The right choice is the one that is most beneficial to your customers. Satisfied customers equal increased and improved conversions, which, in turn, lead to improved business.

Which online payment gateway do you use? Is it one of these or something else? Tell us in the comments section below.

The post 5 Best Online Payment Gateways in 2019 for your E-commerce Website appeared first on ResellerClub Blog.
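The "bridge between buyer and seller" role described above can be sketched as a minimal interface. Everything here is hypothetical and for illustration only: the class names, fields, and the 2.9% + $0.30 fee are my own placeholders, not any real provider's API or pricing.

```python
from dataclasses import dataclass

@dataclass
class Charge:
    amount: float   # what the buyer pays, in the store's currency
    currency: str
    approved: bool
    payout: float   # what the seller receives after the gateway's cut

class PaymentGateway:
    """Sits between buyer and seller: validates the payment, deducts a
    processing fee, and forwards the remainder to the merchant."""

    def __init__(self, fee_rate: float = 0.029, fixed_fee: float = 0.30):
        self.fee_rate = fee_rate    # hypothetical percentage fee
        self.fixed_fee = fixed_fee  # hypothetical per-transaction fee

    def charge(self, amount: float, currency: str = "USD") -> Charge:
        if amount <= 0:
            return Charge(amount, currency, approved=False, payout=0.0)
        payout = round(amount - (amount * self.fee_rate + self.fixed_fee), 2)
        return Charge(amount, currency, approved=True, payout=payout)
```

Real gateways add fraud checks, currency conversion, and PCI-compliant card handling on top of this basic flow, which is why the feature lists above matter when choosing one.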

bingbot Series: Easy set-up guide for Bing’s Adaptive URL submission API

Bing's Webmaster Blog -

In February, we announced the launch of the adaptive URL submission capability. As called out during the launch, as an SEO manager or website owner, you do not need to wait for the crawler to discover new links; you can simply submit those links to Bing automatically to get your content indexed as soon as it is published. Who in SEO didn't dream of that? In the last few months we have seen rapid adoption of this capability, with thousands of websites submitting millions of URLs and getting them indexed on Bing instantly. At the same time, a few webmasters have asked for guidance on integrating the adaptive URL submission API. This blog shows how easy it is to set up the adaptive URL submission API.

Step 1: Generate an API Key

Webmasters need an API key to be able to access and use Bing Webmaster APIs. This API key can be generated from Bing Webmaster Tools by following these steps:

1. Sign in to your account on Bing Webmaster Tools. In case you do not already have a Bing Webmaster account, sign up today using any Microsoft, Google or Facebook ID.
2. Add and verify the site that you want to submit URLs for through the API, if not already done.
3. Select and open any verified site through the My Sites page on Bing Webmaster Tools and click on Webmaster API in the left-hand navigation menu.
4. If you are generating the API key for the first time, click Generate to create an API key. Otherwise, you will see the key generated previously.

Note: Only one API key can be generated per user. You can change your API key at any time; the change is picked up by the system within 30 minutes.

Step 2: Integrate with your website

You can use any of the below protocols to easily integrate the Submit URL API into your system.

JSON request sample:

    POST /webmaster/api.svc/json/SubmitUrl?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
    Content-Type: application/json; charset=utf-8
    Host:

    {
      "siteUrl": "http:\/\/",
      "url": "http:\/\/\/url1.html"
    }

XML request sample:

    POST /webmaster/api.svc/pox/SubmitUrl?apikey=sampleapikey341CC57365E075EBC8B6 HTTP/1.1
    Content-Type: application/xml; charset=utf-8
    Host:

    <SubmitUrl xmlns="">
      <siteUrl></siteUrl>
      <url></url>
    </SubmitUrl>

If the URL submission is successful, you will receive an HTTP 200 response. This ensures that your pages will be discovered for indexing, and if Bing webmaster guidelines are met, the pages will be crawled and indexed in real time. Using either of the above methods, you can directly and automatically let Bing know whenever new links are created on your website. We encourage you to integrate such a solution into your web content management system to let Bing auto-discover your new content at publication time.

In case you face any challenges during the integration, you can reach out to raise a service ticket. Feel free to contact us if your website requires more than 10,000 URLs submitted per day; we will adjust the limit as needed.

Thanks!
Bing Webmaster Tools team
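The JSON request sample above can be sketched in Python roughly as follows. Note the assumptions: the API host was elided from the samples, so `API_HOST` below is a placeholder you must replace with the endpoint shown in your Bing Webmaster Tools account, and the API key is the sample key from the post.

```python
import json
import urllib.request

API_HOST = "https://example-bing-api-host"   # placeholder, NOT the real endpoint
API_KEY = "sampleapikeyEDECC1EA4AE341CC8B6"  # sample key from the post

def build_submit_request(site_url: str, page_url: str) -> urllib.request.Request:
    """Build the POST request for the JSON SubmitUrl endpoint shown above."""
    endpoint = f"{API_HOST}/webmaster/api.svc/json/SubmitUrl?apikey={API_KEY}"
    body = json.dumps({"siteUrl": site_url, "url": page_url}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )

def submit_url(site_url: str, page_url: str) -> int:
    """Send the request; an HTTP 200 status means the URL was accepted."""
    with urllib.request.urlopen(build_submit_request(site_url, page_url)) as resp:
        return resp.status
```

Calling `submit_url` from your CMS's publish hook is one way to implement the "submit at publication time" integration the post recommends.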

New – Updated Pay-Per-Use Pricing Model for AWS Config Rules

Amazon Web Services Blog -

AWS Config rules give you the power to perform Dynamic Compliance Checking on your Cloud Resources. Building on the AWS Resource Configuration Tracking provided by AWS Config, you can use a combination of predefined and custom rules to continuously and dynamically check that all changes made to your AWS resources are compliant with the conditions specified in the rules, and to take action (either automatic or manual) to remediate non-compliant resources.

You can currently select from 84 different predefined rules, with more in the works. These are managed rules that are refined and updated from time to time. Here are the rules that match my search for EC2:

Custom rules are built upon AWS Lambda functions, and can be run periodically or triggered by a configuration change. Rules can optionally be configured to execute a remediation action when a noncompliant resource is discovered. There are many built-in actions, and the option to write your own action using AWS Systems Manager documents as well.

New Pay-Per-Use Pricing

Today I am happy to announce that we are switching to a new, pay-per-use pricing model for AWS Config rules. Effective August 1st, 2019, you will be charged based on the number of rule evaluations that you run each month. Here is the new pricing for AWS Public Regions:

    Rule Evaluations Per Month    Price Per Evaluation
    0 - 100,000                   $0.0010
    100,001 - 500,000             $0.0008
    500,001 and above             $0.0005

You will no longer pay for active Config rules, which can grow costly when used across multiple accounts and regions. You will continue to pay for configuration items recorded, and any additional costs such as use of S3 storage, SNS messaging, and the invocation of Lambda functions. The pricing works in conjunction with AWS Consolidated Billing, and is designed to provide almost all AWS customers with a significant reduction in their Config Rules bill.
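As a sketch of how the tiers quoted above apply, the monthly charge fills each tier in order before moving to the next. This calculator is my own illustration of that arithmetic, not an official AWS billing tool.

```python
# Tiered-pricing calculator for the AWS Config rule-evaluation prices
# quoted above: (tier size, price per evaluation).
TIERS = [
    (100_000, 0.0010),       # first 100,000 evaluations
    (400_000, 0.0008),       # next 400,000 (100,001 - 500,000)
    (float("inf"), 0.0005),  # everything above 500,000
]

def monthly_cost(evaluations: int) -> float:
    """Total monthly charge for the given number of rule evaluations."""
    cost, remaining = 0.0, evaluations
    for size, price in TIERS:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining == 0:
            break
    return cost

# Example: 250,000 evaluations
#   100,000 * $0.0010 + 150,000 * $0.0008 = $220.00
```

So a workload of 250,000 evaluations per month would cost $220 under the new model, regardless of how many rules are active.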
The new model will let you expand globally and cost-effectively, and will probably encourage you to make even more use of AWS Config rules! — Jeff;  



Subscribe to Complete Hosting Guide aggregator