Industry Buzz

TierPoint Announces Seattle Data Center Expansion Plan

My Host News -

SEATTLE – TierPoint, a leading provider of secure, connected data center and cloud solutions at the edge of the internet, today announced plans to expand its state-of-the-art data center in Seattle’s KOMO Plaza. The nearly 18,000 sq. ft. expansion will include new raised floor, office and support space, featuring fully redundant and generator-backed power; high-efficiency cooling; multi-layer physical security, meeting stringent regulatory compliance standards; and diverse network connectivity through a group of 15 carriers and onramp providers, including AWS Direct Connect.

“We already have commitments from customers for some of the expanded capacity, and additional room to support the robust demand we’re seeing for colocation and cloud solutions in the Pacific Northwest,” said TierPoint Region Vice President Boyd Goodfellow. “Seattle is a key market for us and one of the fastest-growing markets for IT and other technology companies in the country.”

TierPoint expects the expansion to be completed and available to clients later this year, with the total facility then featuring nearly 3.5 MW of installed critical load capacity, scalable to 5.0 MW.

About TierPoint

Meeting clients where they are on their journey to IT transformation, TierPoint (tierpoint.com) is a leading provider of secure, connected data center and cloud solutions at the edge of the internet. The company has one of the largest customer bases in the industry, with thousands of clients ranging from the public to private sectors, from small businesses to Fortune 500 enterprises. TierPoint also has one of the largest and most geographically diversified footprints in the nation, with over 40 world-class data centers in 20 U.S. markets and 8 multi-tenant cloud pods, connected by a coast-to-coast network. Led by a proven management team, TierPoint’s highly experienced IT professionals offer a comprehensive solution portfolio of private, multitenant, managed hyperscale, and hybrid cloud, plus colocation, disaster recovery, security, and other managed IT services.

Equinix Expands Dallas Infomart Campus with New $142M Data Center and 5G Proof of Concept Center

My Host News -

REDWOOD CITY, CA – Equinix, Inc. (Nasdaq: EQIX), the global interconnection and data center company, today announced the expansion of its Dallas Infomart Data Center campus with the opening of a new $142M International Business Exchange (IBX®) data center and the launch of its 5G and Edge Proof of Concept Center (POCC). The moves support the growing demand for companies to accelerate their evolution from traditional to digital businesses by rapidly scaling their infrastructure, easily adopting hybrid multicloud architectures and interconnecting with strategic business partners within the Platform Equinix® global ecosystem of nearly 10,000 customers.

The Dallas region is a major communications hub for the southern United States, with a concentration of telecommunications companies. Many of these companies are part of the dense and diverse ecosystem of carriers, clouds and enterprises at Equinix’s Dallas Infomart campus. This ecosystem makes Equinix Dallas an ideal location for companies seeking to test and validate new 5G and edge innovations. The Equinix 5G and Edge Proof of Concept Center (POCC) will provide a 5G and edge “sandbox” environment, enabling Mobile Network Operators (MNOs), cloud platforms, technology vendors and enterprises to directly connect with the largest edge data center platform in order to test, demonstrate and accelerate complex 5G and edge deployment and interoperability scenarios. The Equinix 5G and Edge POCC aims to:

- Develop 5G and edge architectures that leverage ecosystems already resident at Equinix.
- Explore hybrid multicloud interconnectivity scenarios between MNOs, public clouds and private infrastructures.
- Develop multiparty business models, partnering strategies and go-to-market motions for the nascent 5G and edge market.

The DA11 IBX is the ninth data center for Equinix in the Dallas metro area, and the second building on the growing Dallas Infomart campus. It is a four-story, state-of-the-art data center designed to deliver both small- and large-capacity deployments. The innovative, modular construction incorporates Equinix’s Flexible Data Center (FDC) principles, which leverage common design elements for space, power and cooling to reduce capital cost while ensuring long-term maintenance predictability. For Equinix customers, this approach supports delivery of the highest standards for uptime and availability while lowering operating risk and complexity. It will provide needed capacity for businesses seeking to architect hybrid multicloud infrastructures within a dense ecosystem of customers and partners.

The $142 million first phase of DA11 provides a capacity of 1,975 cabinets and colocation space of approximately 72,000 square feet. Upon completion of the planned future phases, the facility is expected to provide a total capacity of more than 3,850 cabinets and colocation space of more than 144,000 square feet. The Dallas metro represents one of the largest enterprise and colocation markets in the Americas and includes nine Equinix IBX data centers that house more than 135 network service providers, more than any other data center provider in the Dallas metro area. Directly connected to Equinix’s Infomart Data Center, the fifth-most-dense interconnection hub in the United States, these colocation facilities provide proximity to banking, commerce, telecommunications, computer technology, energy, healthcare and medical research, transportation and logistics companies in the metro area.
Dallas is a major interconnection point for Latin America traffic, with key terrestrial routes serving Central and South America. In combination with its operations in Miami, Los Angeles, Mexico, Bogotá, Sao Paulo and Rio de Janeiro, Equinix continues to expand solutions for enterprise, cloud and content providers looking to address the Latin American market. According to the 2019 Global Interconnection Index (GXI) Report published by Equinix, enterprise consumption of interconnection bandwidth is expected to grow by 63 percent CAGR in LATAM by 2022 and will contribute up to 11 percent of interconnection bandwidth globally. In this region, content and digital media is expected to outpace other regions in interconnection bandwidth adoption.

The Equinix Dallas IBX data centers offer access to Equinix Cloud Exchange Fabric (ECX Fabric), an on-demand platform that enables Equinix customers to discover and dynamically connect to any other customer across any Equinix location globally. Offered through an easy-to-use portal and a single connection to the Equinix platform, ECX Fabric offers access to more than 2,100 of the world’s largest enterprises, cloud service providers (including Alibaba Cloud, Amazon Web Services, Google Cloud Platform, IBM Cloud, Microsoft Azure and Oracle Cloud) and SaaS providers (including Salesforce, SAP and ServiceNow, among others). By reaching their entire digital ecosystem through a single private and secure connection, companies can rapidly scale their digital business operations globally. Customers can also locate their data close to the edge of their network, increasing performance by keeping data near consumption points.

Equinix is a leader in data center sustainability and in greening the supply chains of its customers. Equinix’s long-term goal of using 100% clean and renewable energy for its global platform has resulted in significant increases in renewable energy coverage globally, including 100% renewable throughout the United States. Equinix continues to make advancements in the way it designs, builds and operates its data centers with high energy efficiency standards. DA11 customers will benefit from reductions of their CO2 footprint through Equinix’s renewable energy procurement strategy and the use of energy-efficient systems throughout the facility.

In the Americas, Equinix now operates more than 90 IBX data centers strategically located in Brazil, Canada, Colombia, Mexico and the United States. Globally, Platform Equinix comprises more than 210 IBX data centers across 56 markets and 26 countries, providing data center and interconnection services for more than 9,700 of the world’s leading businesses.

About Equinix

Equinix, Inc. (Nasdaq: EQIX) connects the world’s leading businesses to their customers, employees and partners inside the most-interconnected data centers. On this global platform for digital business, companies come together across more than 55 markets on five continents to reach everywhere, interconnect everyone and integrate everything they need to create their digital futures. Equinix.com.

Making the WAF 40% faster

CloudFlare Blog -

Cloudflare’s Web Application Firewall (WAF) protects against malicious attacks aiming to exploit vulnerabilities in web applications. It is continuously updated to provide comprehensive coverage against the most recent threats while ensuring a low false positive rate.

As with all Cloudflare security products, the WAF is designed to not sacrifice performance for security, but there is always room for improvement. This blog post provides a brief overview of the latest performance improvements that were rolled out to our customers.

Transitioning from PCRE to RE2

Back in July of 2019, the WAF transitioned from using a regular expression engine based on PCRE to one inspired by RE2, which is based around using a deterministic finite automaton (DFA) instead of backtracking algorithms. This change came as a result of an outage where an update added a regular expression which backtracked enormously on certain HTTP requests, resulting in exponential execution time.

After the migration was finished, we saw no measurable difference in CPU consumption at the edge, but noticed execution time outliers in the 95th and 99th percentiles decreased, something we expected given RE2’s guarantees of a linear execution time with the size of the input.

As the WAF engine uses a thread pool, we also had to implement and tune a regex cache shared between the threads to avoid excessive memory consumption (the first implementation turned out to use a staggering amount of memory).

These changes, along with others outlined in the post-mortem blog post, helped us improve reliability and safety at the edge and have the confidence to explore further performance improvements. But while we’ve highlighted regular expressions, they are only one of the many capabilities of the WAF engine.

Matching Stages

When an HTTP request reaches the WAF, it gets organized into several logical sections to be analyzed: method, path, headers, and body. These sections are all stored in Lua variables. If you are interested in more detail on the implementation of the WAF itself, you can watch this old presentation.

Before matching these variables against specific malicious request signatures, some transformations are applied. These transformations are functions that range from simple modifications like lowercasing strings to complex tokenizers and parsers looking to fingerprint certain malicious payloads. As the WAF currently uses a variant of the ModSecurity syntax, this is what a rule might look like:

SecRule REQUEST_BODY "@rx /\x00+evil" "drop, t:urlDecode, t:lowercase"

It takes the request body stored in the REQUEST_BODY variable, applies the urlDecode() and lowercase() functions to it and then compares the result with the regular expression signature \x00+evil. In pseudo-code, we can represent it as:

rx( "/\x00+evil", lowercase( urlDecode( REQUEST_BODY ) ) )

Which in turn would match a request whose body contained percent-encoded NULL bytes followed by the word "evil", e.g.:

GET /cms/admin?action=post HTTP/1.1
Host: example.com
Content-Type: text/plain; charset=utf-8
Content-Length: 16

thiSis%2F%00eVil

The WAF contains thousands of these rules and its objective is to execute them as quickly as possible to minimize any added latency to a request. And to make things harder, it needs to run most of the rules on nearly every request. That’s because almost all HTTP requests are non-malicious and no rules are going to match.
So we have to optimize for the worst case: execute everything!

To help mitigate this problem, one of the first matching steps executed for many rules is pre-filtering. By checking if a request contains certain bytes or sets of strings we are able to potentially skip a considerable number of expressions. In the previous example, doing a quick check for the NULL byte (represented by \x00 in the regular expression) allows us to completely skip the rule if it isn’t found:

contains( "\x00", REQUEST_BODY )
and
rx( "/\x00+evil", lowercase( urlDecode( REQUEST_BODY ) ) )

Since most requests don’t match any rule and these checks are quick to execute, overall we aren’t doing more operations by adding them. Other steps can also be used to scan through and combine several regular expressions and avoid execution of rule expressions. As usual, doing less work is often the simplest way to make a system faster.

Memoization

Which brings us to memoization: caching the output of a function call to reuse it in future calls. Let’s say we have the following expressions:

1. rx( "\x00+evil", lowercase( url_decode( body ) ) )
2. rx( "\x00+EVIL", remove_spaces( url_decode( body ) ) )
3. rx( "\x00+evil", lowercase( url_decode( headers ) ) )
4. streq( "\x00evil", lowercase( url_decode( body ) ) )

In this case, we can reuse the result of the nested function calls in (1) as they’re the same in (4). By saving intermediate results we are also able to take advantage of the result of url_decode( body ) from (1) and use it in (2) and (4). Sometimes it is also possible to swap the order in which functions are applied to improve caching, though in this case we would get different results. A naive implementation of this system can simply be a hash table with each entry having the function(s) name(s) and arguments as the key and its output as the value.

Some of these functions are expensive and caching the result does lead to significant savings. To give a sense of magnitude, one of the rules we modified to ensure memoization took place saw its execution time reduced by about 95%:

[Chart: Execution time per rule]

The WAF engine implements memoization and the rules take advantage of it, but there’s always room to increase cache hits.

Rewriting Rules and Results

Cloudflare has a very regular cadence of releasing updates and new rules to the Managed Rulesets. However, as more rules are added and new functions implemented, the memoization cache hit rate tends to decrease. To improve this, we first looked into the rules taking the most wall-clock time to execute using some of our performance metrics:

[Chart: Execution time per rule]

Having these, we cross-referenced them with the ones having cache misses (output is truncated with [...]):

moura@cf $ ./parse.py --profile
Hit Ratio:
-------------
0.5608

Hot entries:
-------------
[urlDecode, replaceComments, REQUEST_URI, REQUEST_HEADERS, ARGS_POST]
[urlDecode, REQUEST_URI]
[urlDecode, htmlEntityDecode, jsDecode, replaceNulls, removeWhitespace, REQUEST_URI, REQUEST_HEADERS]
[urlDecode, lowercase, REQUEST_FILENAME]
[urlDecode, REQUEST_FILENAME]
[urlDecode, lowercase, replaceComments, compressWhitespace, ARGS, REQUEST_FILENAME]
[urlDecode, replaceNulls, removeWhitespace, REQUEST_URI, REQUEST_HEADERS, ARGS_POST]
[...]

Candidates:
-------------
100152A - replace t:removeWhitespace with t:compressWhitespace,t:removeWhitespace
100214 - replace t:lowercase with (?i)
100215 - replace t:lowercase with (?i)
100300 - consider REQUEST_URI over REQUEST_FILENAME
100137D - invert order of t:replaceNulls,t:lowercase
[...]
After identifying more than 40 rules, we rewrote them to take full advantage of memoization and added pre-filter checks where possible. Many of these changes were not immediately obvious, which is why we’re also creating tools to aid analysts in creating even more efficient rules. This also helps ensure they run in accordance with the latency budgets the team has set.

This change resulted in an increase of the cache hit percentage from 56% to 74%, which crucially included the most expensive transformations. Most importantly, we also observed a sharp decrease of 40% in the average time the WAF takes to process and analyze an HTTP request at the Cloudflare edge:

[Chart: WAF Request Processing - Time Average]

A comparable decrease was also observed for the 95th and 99th percentiles. Finally, we saw a drop of CPU consumption at the edge of around 4.3%.

Next Steps

While the Lua WAF has served us well throughout all these years, we are currently porting it to use the same engine powering Firewall Rules. It is based on our open-sourced wirefilter execution engine, which uses a filter syntax inspired by Wireshark®. In addition to allowing more flexible filter expressions, it provides better performance and safety.

The rule optimizations we've described in this blog post are not lost when moving to the new engine, however, as the changes were deliberately not specific to the current Lua engine’s implementation. And while we're routinely profiling, benchmarking and making complex optimizations to the Firewall stack, sometimes just relatively simple changes can have a surprisingly huge effect.
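To make the pre-filtering and memoization ideas above concrete, here is a minimal, illustrative Python sketch. It is not the WAF's actual Lua implementation: the transform names and the per-request cache keyed on the transformation chain and variable name simply mirror the behavior described in the post, and the substring pre-filter is a simplified stand-in for the engine's byte checks.

import re
from urllib.parse import unquote

# Toy stand-ins for the WAF transformation functions discussed above.
def url_decode(s: str) -> str:
    return unquote(s)

def lowercase(s: str) -> str:
    return s.lower()

def transform(cache: dict, chain, var_name: str, value: str) -> str:
    # Apply a chain of transforms, caching every intermediate result so
    # other rules that share a prefix of the chain can reuse it.
    result = value
    for i in range(len(chain)):
        key = (tuple(f.__name__ for f in chain[: i + 1]), var_name)
        if key in cache:
            result = cache[key]
        else:
            result = chain[i](result)
            cache[key] = result
    return result

def evaluate(request_body: str) -> bool:
    cache: dict = {}  # memoization cache scoped to a single request

    # Pre-filter: a cheap substring check on the raw body lets the engine
    # skip the transforms and the regex for most (non-malicious) requests.
    if "\x00" not in request_body and "%00" not in request_body:
        return False

    # Two rule expressions sharing the url_decode step; the second call
    # reuses the decoded body cached by the first.
    body_lower = transform(cache, (url_decode, lowercase), "REQUEST_BODY", request_body)
    body_plain = transform(cache, (url_decode,), "REQUEST_BODY", request_body)
    return (re.search(r"/\x00+evil", body_lower) is not None
            or "\x00EVIL" in body_plain)

print(evaluate("thiSis%2F%00eVil"))  # True
print(evaluate("action=post"))       # False: skipped by the pre-filter

The real engine keys its shared regex and transformation caches more carefully and runs thousands of rules, but the shape is the same: cheap checks first, then cached transforms, then the expensive match.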

How to Encourage Employees to Share Your LinkedIn Content: 4 Tips

Social Media Examiner -

Need more visibility on LinkedIn? Wondering how to get employees involved with your LinkedIn content strategy? In this article, you’ll discover four ways to help your employees share more company content with their personal networks on LinkedIn. Why Encourage Employees to Share Company Content on Your LinkedIn Page? Getting your colleagues involved with your LinkedIn […] The post How to Encourage Employees to Share Your LinkedIn Content: 4 Tips appeared first on Social Media Examiner | Social Media Marketing.

FindMyHost Releases July 2020 Editors’ Choice Awards

My Host News -

OKLAHOMA CITY, OK – Web Hosting Directory and Review site www.FindMyHost.com released the July 2020 Editors’ Choice Awards today. Web hosting companies strive to provide their customers with the very best service and support, and we want to take this opportunity to acknowledge the hosts in each category who have excelled in their field. The FindMyHost Editors’ Choice Awards are chosen based on editor and consumer reviews. Customers who wish to submit positive reviews for a current or past web host can do so by visiting the customer review section of FindMyHost.com. By doing so, you nominate your web host for next month’s Editors’ Choice Awards.

We would like to congratulate all the web hosts who participated and, in particular, the following who received top honors in their field:

Dedicated Servers: GlowHost.com
Business Hosting: KnownSRV.com
SSD Hosting: KVCHosting.net
VPS: MightWeb.net
Secure Hosting: VPSFX.com
Cloud Hosting: BudgetVM.com
Reseller Hosting: ZipServers.com
Website Monitoring: UptimeSpy.com

About FindMyHost

FindMyHost, Inc. is an online magazine that provides editor reviews, consumer hosting news, interviews, discussion forums and more. FindMyHost.com was established in January 2001 to protect web host consumers and web developers from making the wrong choice when choosing a web host. FindMyHost.com showcases a selection of web hosting companies who have undergone its approved host program testing and provides reviews from customers. FindMyHost’s extensive website can be found at www.FindMyHost.com.

AWS App2Container – A New Containerizing Tool for Java and ASP.NET Applications

Amazon Web Services Blog -

Our customers are increasingly developing their new applications with containers and serverless technologies, and are using modern continuous integration and delivery (CI/CD) tools to automate the software delivery life cycle. They also maintain a large number of existing applications that are built and managed manually or using legacy systems. Maintaining these two sets of applications with disparate tooling adds to operational overhead and slows down the pace of delivering new business capabilities. As much as possible, they want to be able to standardize their management tooling and CI/CD processes across both their existing and new applications, and see the option of packaging their existing applications into containers as the first step towards accomplishing that goal. However, containerizing existing applications requires a long list of manual tasks such as identifying application dependencies, writing Dockerfiles, and setting up build and deployment processes for each application. These manual tasks are time consuming, error prone, and can slow down modernization efforts.

Today, we are launching AWS App2Container, a new command-line tool that helps containerize existing applications that are running on-premises, in Amazon Elastic Compute Cloud (EC2), or in other clouds, without needing any code changes. App2Container discovers applications running on a server, identifies their dependencies, and generates relevant artifacts for seamless deployment to Amazon ECS and Amazon EKS. It also provides integration with AWS CodeBuild and AWS CodeDeploy to enable a repeatable way to build and deploy containerized applications.

AWS App2Container generates the following artifacts for each application component: application artifacts such as application files/folders, Dockerfiles, container images in Amazon Elastic Container Registry (ECR), ECS task definitions, Kubernetes deployment YAML, CloudFormation templates to deploy the application to Amazon ECS or EKS, and templates to set up a build/release pipeline in AWS CodePipeline, which also leverages AWS CodeBuild and CodeDeploy.

Starting today, you can use App2Container to containerize ASP.NET (.NET 3.5+) web applications running in IIS 7.5+ on Windows, and Java applications running on Linux: standalone JBoss, Apache Tomcat, and generic Java applications such as Spring Boot, IBM WebSphere, Oracle WebLogic, etc. By modernizing existing applications using containers, you can make them portable, increase development agility, standardize your CI/CD processes, and reduce operational costs. Now let’s see how it works!

AWS App2Container – Getting Started

AWS App2Container requires that the following prerequisites be installed on the server(s) hosting your application: AWS Command Line Interface (CLI) version 1.14 or later, Docker tools, and (in the case of ASP.NET) PowerShell 5.0+ for applications running on Windows. Additionally, you need to provide appropriate IAM permissions to App2Container to interact with AWS services.

For example, let’s look at how you containerize your existing Java applications. The App2Container CLI for Linux is packaged as a tar.gz archive. The archive provides an interactive shell script, install.sh, to install the App2Container CLI. Running the script guides users through the installation steps and also updates the user’s path to include the App2Container CLI commands. First, you can begin by running a one-time initialization of the App2Container CLI on the installed server with the init command.
$ sudo app2container init
Workspace directory path for artifacts[default: /home/ubuntu/app2container/ws]:
AWS Profile (configured using 'aws configure --profile')[default: default]:
Optional S3 bucket for application artifacts (Optional)[default: none]:
Report usage metrics to AWS? (Y/N)[default: y]:
Require images to be signed using Docker Content Trust (DCT)? (Y/N)[default: n]:
Configuration saved

This sets up a workspace to store application containerization artifacts (a minimum of 20 GB of disk space should be available). Artifacts can optionally be uploaded to an Amazon Simple Storage Service (S3) bucket using the AWS profile you configured.

Next, you can view the Java processes that are running on the application server by using the inventory command. Each Java application process has a unique identifier (for example, java-tomcat-9e8e4799) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

$ sudo app2container inventory
{
    "java-jboss-5bbe0bec": {
        "processId": 27366,
        "cmdline": "java ... /home/ubuntu/wildfly-10.1.0.Final/modules org.jboss.as.standalone -Djboss.home.dir=/home/ubuntu/wildfly-10.1.0.Final -Djboss.server.base.dir=/home/ubuntu/wildfly-10.1.0.Final/standalone ",
        "applicationType": "java-jboss"
    },
    "java-tomcat-9e8e4799": {
        "processId": 2537,
        "cmdline": "/usr/bin/java ... -Dcatalina.home=/home/ubuntu/tomee/apache-tomee-plume-7.1.1 -Djava.io.tmpdir=/home/ubuntu/tomee/apache-tomee-plume-7.1.1/temp org.apache.catalina.startup.Bootstrap start ",
        "applicationType": "java-tomcat"
    }
}

You can also run App2Container for ASP.NET applications in an administrator PowerShell session on Windows Servers with IIS version 7.0 or later. Note that Docker tools and container support are available on Windows Server 2016 and later versions. You can choose to run all app2container operations on the application server with Docker tools installed, or use a worker machine with Docker tools using Amazon ECS-optimized Windows Server AMIs.

PS> app2container inventory
{
    "iis-smarts-51d2dbf8": {
        "siteName": "nopCommerce39",
        "bindings": "http/*:90:",
        "applicationType": "iis"
    }
}

The inventory command displays all IIS websites on the application server that can be containerized. Each IIS website process has a unique identifier (for example, iis-smarts-51d2dbf8) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

You can choose a specific application by referring to its application ID and generate an analysis report for the application by using the analyze command.

$ sudo app2container analyze --application-id java-tomcat-9e8e4799
Created artifacts folder /home/ubuntu/app2container/ws/java-tomcat-9e8e4799
Generated analysis data in /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/analysis.json
Analysis successful for application java-tomcat-9e8e4799
Please examine the same, make appropriate edits and initiate containerization using "app2container containerize --application-id java-tomcat-9e8e4799"

You can use the analysis.json template generated by the application analysis to gather information about the analyzed application, including its system dependencies, from the analysisInfo section, and to update containerization parameters that customize the container images generated for the application using the containerParameters section.
$ cat java-tomcat-9e8e4799/analysis.json
{
    "a2CTemplateVersion": "1.0",
    "createdTime": "2020-06-24 07:40:5424",
    "containerParameters": {
        "_comment1": "*** EDITABLE: The below section can be edited according to the application requirements. Please see the analysisInfo section below for details discovered regarding the application. ***",
        "imageRepository": "java-tomcat-9e8e4799",
        "imageTag": "latest",
        "containerBaseImage": "ubuntu:18.04",
        "coopProcesses": [ 6446, 6549, 6646 ]
    },
    "analysisInfo": {
        "_comment2": "*** NON-EDITABLE: Analysis Results ***",
        "processId": 2537,
        "appId": "java-tomcat-9e8e4799",
        "userId": "1000",
        "groupId": "1000",
        "cmdline": [...],
        "os": {...},
        "ports": [...]
    }
}

You can also run the $ app2container extract --application-id java-tomcat-9e8e4799 command to generate an application archive for the analyzed application. This step uses the analysis.json file generated earlier in the application’s workspace folder and adheres to any containerization parameter updates specified there. By using the extract command, you can continue the workflow on a worker machine after running the first set of commands on the application server.

Now you can use the containerize command to generate Docker images for the selected application.

$ sudo app2container containerize --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Extracted container artifacts for application
Entry file generated
Dockerfile generated under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated dockerfile.update under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated deployment file at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json
Containerization successful. Generated docker image java-tomcat-9e8e4799
You're all set to test and deploy your container image.

Next Steps:
1. View the container image with "docker images" and test the application.
2. When you're ready to deploy to AWS, please edit the deployment file as needed at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json.
3. Generate deployment artifacts using app2container generate app-deployment --application-id java-tomcat-9e8e4799

After running this command, you can view the generated container images using the docker images command on the machine where the containerize command was run. You can use the docker run command to launch the container and test application functionality. Note that in addition to generating container images, the containerize command also generates a deployment.json template file that you can use with the next generate app-deployment command. You can edit the parameters in the deployment.json template file to change the image repository name to be registered in Amazon ECR, the ECS task definition parameters, or the Kubernetes App name.
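If you script this part of the workflow, those deployment.json edits can also be applied programmatically before generating deployment artifacts. The following is a minimal, illustrative Python sketch, not part of the App2Container tooling itself: the field names mirror the template shown below, the workspace path reuses the example above, and the tag and sizing values are purely illustrative.

import json
from pathlib import Path

# Workspace path and application ID reused from the example above.
APP_ID = "java-tomcat-9e8e4799"
DEPLOYMENT = Path(f"/home/ubuntu/app2container/ws/{APP_ID}/deployment.json")

def tune_deployment(path: Path, repo_tag: str, cpu: int, memory: int) -> None:
    # Adjust the ECR tag and the ECS task definition sizing in deployment.json.
    data = json.loads(path.read_text())
    data["ecrParameters"]["ecrRepoTag"] = repo_tag
    data["ecsParameters"]["cpu"] = cpu
    data["ecsParameters"]["memory"] = memory
    path.write_text(json.dumps(data, indent=4))

# Example: tag the image "v1" and keep the sizing shown in the template before
# running "app2container generate app-deployment --application-id java-tomcat-9e8e4799".
tune_deployment(DEPLOYMENT, repo_tag="v1", cpu=2, memory=4096)

The full deployment.json template generated for this example looks like the following.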
$ cat java-tomcat-9e8e4799/deployment.json
{
    "a2CTemplateVersion": "1.0",
    "applicationId": "java-tomcat-9e8e4799",
    "imageName": "java-tomcat-9e8e4799",
    "exposedPorts": [
        {
            "localPort": 8090,
            "protocol": "tcp6"
        }
    ],
    "environment": [],
    "ecrParameters": {
        "ecrRepoTag": "latest"
    },
    "ecsParameters": {
        "createEcsArtifacts": true,
        "ecsFamily": "java-tomcat-9e8e4799",
        "cpu": 2,
        "memory": 4096,
        "dockerSecurityOption": "",
        "enableCloudwatchLogging": false,
        "publicApp": true,
        "stackName": "a2c-java-tomcat-9e8e4799-ECS",
        "reuseResources": {
            "vpcId": "",
            "cfnStackName": "",
            "sshKeyPairName": ""
        },
        "gMSAParameters": {
            "domainSecretsArn": "",
            "domainDNSName": "",
            "domainNetBIOSName": "",
            "createGMSA": false,
            "gMSAName": ""
        }
    },
    "eksParameters": {
        "createEksArtifacts": false,
        "applicationName": "",
        "stackName": "a2c-java-tomcat-9e8e4799-EKS",
        "reuseResources": {
            "vpcId": "",
            "cfnStackName": "",
            "sshKeyPairName": ""
        }
    }
}

At this point, the application workspace where the artifacts are generated serves as an iteration sandbox. You can choose to edit the Dockerfile generated here to make changes to your application and use the docker build command to build new container images as needed. You can generate the artifacts needed to deploy the application containers in Amazon EKS by using the generate app-deployment command.

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
EKS Cloudformation templates and additional deployment artifacts generated successfully for application java-tomcat-9e8e4799

You're all set to use AWS Cloudformation to manage your application stack.
Next Steps:
1. Edit the cloudformation template as necessary.
2. Create an application stack using the AWS CLI or the AWS Console. AWS CLI command:
   aws cloudformation deploy --template-file /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml --capabilities CAPABILITY_NAMED_IAM --stack-name java-tomcat-9e8e4799
3. Setup a pipeline for your application stack: app2container generate pipeline --application-id java-tomcat-9e8e4799

This command works based on the deployment.json template file produced by the containerize command. App2Container now generates ECS/EKS CloudFormation templates as well, with an option to deploy those stacks. The command registers the container image in the user-specified ECR repository and generates CloudFormation templates for Amazon ECS and EKS deployments. You can register the ECS task definition with Amazon ECS, or use kubectl to launch the containerized application on an existing Amazon EKS or self-managed Kubernetes cluster using the App2Container-generated amazon-eks-master.template.yaml. Alternatively, you can deploy the containerized application directly to Amazon EKS with the --deploy option.

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799 --deploy
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
Initiated Cloudformation stack creation. This may take a few minutes.
Please visit the AWS Cloudformation Console to track progress.
Deploying application to EKS

Handling ASP.NET Applications with Windows Authentication

Containerizing ASP.NET applications is almost the same process as for Java applications, but Windows containers cannot be directly domain joined. They can, however, still use Active Directory (AD) domain identities to support various authentication scenarios. App2Container detects if a site is using Windows authentication, accordingly makes the IIS site’s application pool run as the network service identity, and generates new CloudFormation templates for Windows-authenticated IIS applications. Creating the gMSA and AD security group, domain joining the ECS nodes, and making the containers use this gMSA are all taken care of by those templates. App2Container also provides two PowerShell scripts as output of the $ app2container containerize command, along with an instruction file on how to use them. The following is an example output:

PS C:\Windows\system32> app2container containerize --application-id iis-SmartStoreNET-a726ba0b
Running AWS pre-requisite check...
Running Docker pre-requisite check...
Container build complete. Please use "docker images" to view the generated container images.
Detected that the Site is using Windows Authentication.
Generating powershell scripts into C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts required to setup Container host with Windows Authentication
Please look at C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts\WindowsAuthSetupInstructions.md for setup instructions on Windows Authentication.
A deployment file has been generated under C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b
Please edit the same as needed and generate deployment artifacts using "app2container generate-deployment"

The first PowerShell script, DomainJoinAddToSecGroup.ps1, joins the container host and adds it to an Active Directory security group. The second script, CreateCredSpecFile.ps1, creates a Group Managed Service Account (gMSA), grants access to the Active Directory security group, generates the credential spec for this gMSA, and stores it locally on the container host. You can execute these PowerShell scripts on the ECS host. The following is an example usage of the scripts:

PS C:\Windows\system32> .\DomainJoinAddToSecGroup.ps1 -ADDomainName Dominion.com -ADDNSIp 10.0.0.1 -ADSecurityGroup myIISContainerHosts -CreateADSecurityGroup:$true
PS C:\Windows\system32> .\CreateCredSpecFile.ps1 -GMSAName MyGMSAForIIS -CreateGMSA:$true -ADSecurityGroup myIISContainerHosts

Before executing the app2container generate-deployment command, edit the deployment.json file to change the value of dockerSecurityOption to the name of the CredentialSpec file that the CreateCredSpecFile script generated, for example:

"dockerSecurityOption": "credentialspec:file://dominion_mygmsaforiis.json"

Effectively, any access to a network resource made by the IIS server inside the container for the site will now use this gMSA to authenticate. The final step is to authorize the gMSA account on the network resources that the IIS server will access; a common example is authorizing the gMSA inside SQL Server.

Finally, if the application must connect to a database to be fully functional and you run the container in Amazon ECS, ensure that the application container created from the Docker image generated by the tool has connectivity to the same database.
You can refer to the documentation for migration options: MS SQL Server from Windows to Linux on AWS, the Database Migration Service, and backing up and restoring your MS SQL Server to Amazon RDS.

Now Available

AWS App2Container is offered free of charge. You pay only for your actual usage of AWS services such as Amazon EC2, ECS, EKS, and S3. For details, please refer to the App2Container FAQs and documentation. Give this a try, and please send us feedback either through your usual AWS Support contacts, on the AWS Forum for ECS, the AWS Forum for EKS, or on the container roadmap on GitHub. — Channy;

Amazon RDS Proxy – Now Generally Available

Amazon Web Services Blog -

At AWS re:Invent 2019, we launched the preview of Amazon RDS Proxy, a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. Following the preview for the MySQL engine, we extended it with PostgreSQL compatibility. Today, I am pleased to announce that RDS Proxy is now generally available for both engines.

Many applications, including those built on modern serverless architectures using AWS Lambda, Fargate, Amazon ECS, or EKS, can have a large number of open connections to the database server, and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency, application scalability, and security. With RDS Proxy, failover times for Amazon Aurora and RDS databases are reduced by up to 66%, and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM). Amazon RDS Proxy can be enabled for most applications with no code changes. You don’t need to provision or manage any additional infrastructure, and you only pay per vCPU of the database instance for which the proxy is enabled.

Amazon RDS Proxy – Getting started

You can get started with Amazon RDS Proxy in just a few clicks by going to the AWS management console and creating an RDS Proxy endpoint for your RDS databases. In the navigation pane, choose Proxies and Create proxy. You can also see the proxy panel below.

To create your proxy, specify the Proxy identifier, a unique name of your choosing, and choose the database engine, either MySQL or PostgreSQL. Choose the encryption setting if you want the proxy to enforce TLS/SSL for all connections between the application and the proxy, and specify a time period that a client connection can be idle before the proxy can close it. A client connection is considered idle when the application doesn’t submit a new request within the specified time after the previous request completed. The underlying connection between the proxy and database stays open and is returned to the connection pool. Thus, it’s available to be reused for new client connections.

Next, choose one RDS DB instance or Aurora DB cluster in Database to access through this proxy. The list only includes DB instances and clusters with compatible database engines, engine versions, and other settings. Specify Connection pool maximum connections, a value between 1 and 100. This setting represents the percentage of the max_connections value that RDS Proxy can use for its connections. If you only intend to use one proxy with this DB instance or cluster, you can set it to 100. For details about how RDS Proxy uses this setting, see Connection Limits and Timeouts.

Choose at least one Secrets Manager secret associated with the RDS DB instance or Aurora DB cluster that you intend to access with this proxy, and select an IAM role that has permission to access the Secrets Manager secrets you chose. If you don’t have an existing secret, please click Create a new secret before setting up the RDS Proxy. After setting VPC Subnets and a security group, click Create proxy. If you want to configure more settings in detail, please refer to the documentation. After waiting a few minutes, you can see the new RDS Proxy and point your application to the RDS Proxy endpoint. That’s it!
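As a concrete illustration of pointing an application at the proxy endpoint, here is a minimal Python sketch of a client that authenticates with a short-lived IAM token instead of a stored password. It is not part of the RDS Proxy announcement: the endpoint and user name reuse the channy-proxy example shown in the demo below, and the pymysql/boto3 client libraries and the CA bundle path are assumptions.

import boto3
import pymysql

# Endpoint and user reused from the example below; the CA bundle path is an assumption.
PROXY_ENDPOINT = "channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com"
DB_USER = "admin_user"
REGION = "us-east-1"
CA_BUNDLE = "/opt/rds-combined-ca-bundle.pem"

def connect_via_proxy():
    # Generate a short-lived IAM authentication token instead of using a static password.
    token = boto3.client("rds", region_name=REGION).generate_db_auth_token(
        DBHostname=PROXY_ENDPOINT, Port=3306, DBUsername=DB_USER, Region=REGION
    )
    # TLS is required when authenticating with IAM tokens.
    return pymysql.connect(
        host=PROXY_ENDPOINT,
        user=DB_USER,
        password=token,
        port=3306,
        ssl={"ca": CA_BUNDLE},
        connect_timeout=5,
    )

conn = connect_via_proxy()
with conn.cursor() as cur:
    cur.execute("SELECT @@aurora_server_id")
    print(cur.fetchone())

Because the token is generated per connection and TLS is enforced, no long-lived database password has to be embedded in the application.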
You can also create an RDS Proxy easily via the AWS CLI:

aws rds create-db-proxy \
    --db-proxy-name channy-proxy \
    --role-arn iam_role \
    --engine-family { MYSQL|POSTGRESQL } \
    --vpc-subnet-ids space_separated_list \
    [--vpc-security-group-ids space_separated_list] \
    [--auth ProxyAuthenticationConfig_JSON_string] \
    [--require-tls | --no-require-tls] \
    [--idle-client-timeout value] \
    [--debug-logging | --no-debug-logging] \
    [--tags comma_separated_list]

How RDS Proxy works

Let’s see an example that demonstrates how open connections continue working during a failover when you reboot a database or it becomes unavailable due to a problem. This example uses a proxy named channy-proxy and an Aurora DB cluster with DB instances instance-8898 and instance-9814. When the failover-db-cluster command is run from the Linux command line, the writer instance that the proxy is connected to changes to a different DB instance. You can see that the DB instance associated with the proxy changes while the connection remains open.

$ mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
Enter password:
...
mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814 |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ # Initially, instance-9814 is the writer.
$ aws rds failover-db-cluster --db-cluster-id cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-8898 is the writer.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-8898 |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ aws rds failover-db-cluster --db-cluster-id cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-9814 is the writer again.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814 |
+--------------------+
1 row in set (0.01 sec)

+---------------+---------------+
| Variable_name | Value |
+---------------+---------------+
| hostname | ip-10-1-3-178 |
+---------------+---------------+
1 row in set (0.02 sec)

With RDS Proxy, you can build applications that can transparently tolerate database failures without needing to write complex failure handling code. RDS Proxy automatically routes traffic to a new database instance while preserving application connections. You can review the demo for an overview of RDS Proxy and the steps you need take to access RDS Proxy from a Lambda function. If you want to know how your serverless applications maintain excellent performance even at peak loads, please read this blog post. For a deeper dive into using RDS Proxy for MySQL with serverless, visit this post.

The following are a few things that you should be aware of:

Currently, RDS Proxy is available for the MySQL and PostgreSQL engine family. This engine family includes RDS for MySQL 5.6 and 5.7, and PostgreSQL 10.11 and 11.5.
In an Aurora cluster, all of the connections in the connection pool are handled by the Aurora primary instance. To perform load balancing for read-intensive workloads, you still use the reader endpoint directly for the Aurora cluster.
Your RDS Proxy must be in the same VPC as the database. Although the database can be publicly accessible, the proxy can’t be.
Proxies don’t support compressed mode. For example, they don’t support the compression used by the --compress or -C options of the mysql command.

Now Available!

Amazon RDS Proxy is generally available in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney) and Asia Pacific (Tokyo) regions for Aurora MySQL, RDS for MySQL, Aurora PostgreSQL, and RDS for PostgreSQL, and it includes support for Aurora Serverless and Aurora Multi-Master. Take a look at the product page, pricing, and the documentation to learn more. Please send us feedback either in the AWS forum for Amazon RDS or through your usual AWS support contacts. – Channy;

Employee Spotlight: Rachel Noonan

WP Engine -

In this ongoing blog series, we speak with WP Engine employees around the globe to learn more about their roles, what they love about the cities they work in, and what they like most about working at WP Engine.  In this interview, we talk to Rachel Noonan, an Engineering Manager at WP Engine’s Limerick office,… The post Employee Spotlight: Rachel Noonan appeared first on WP Engine.

Helping 25 Million Job Seekers Get Back to Work

LinkedIn Official Blog -

This year is unlike any other. We’re enduring a global pandemic that’s taken lives and impacted the well-being, health, and jobs of so many people around the globe. And with the reality of systemic racism in our country front and center, we have a critical and shared responsibility to create a better, more inclusive future for the Black community. We are all also facing an economy that has changed dramatically. In the U.S., the unemployment rate swung from 50-year lows to 70-year highs in just...

New Features to Give and Get Help From Your Community

LinkedIn Official Blog -

When it comes to your career, just one person opening a door can make all the difference. That’s why our community of millions of members across the globe are such a valuable resource to job seekers. And, in challenging times like these, that resource is one of the best ways to support one another. We’ve added some new features on LinkedIn that make it easier to give and get help. Share That You’re Open To Work A good first step to finding a new opportunity is to let others know you’re looking...

3 Surprising Ways Bloggers Can Drive More Site Traffic

HostGator Blog -

The post 3 Surprising Ways Bloggers Can Drive More Site Traffic appeared first on HostGator Blog.

Don’t let what I’m about to say scare you (because, secretly, it’s awesome for you). Recent stats suggest one-fifth of bloggers report that it has become more challenging to get traffic from Google, and 50% of bloggers say it’s gotten harder to get traffic from Facebook. This makes sense considering there were more than 500 million existing blogs in 2019, and the number of bloggers in the U.S. is expected to increase to 31.7 million in 2020.

But does the abundance of bloggers mean you should throw your hands up and abandon all plans? Can I get a loud and resounding “no” here? The reason more and more people are building blogs is that it’s a surefire way to grow your business, establish credibility, capture more email subscribers, and help people find your business or side hustle online.

While there is an abundance of bloggers, and some bloggers find it more difficult to get more traffic, it’s essential to remember a couple of things. First, Google’s algorithm is doing a better job than ever of delivering relevant content to search engine users. This means if you are writing helpful content on your blog, Google will reward you, and you won’t be one of the one-fifth worried about traffic. There are also specific, proven strategies you can employ in your blog posts to help Google understand what your site is about, so Google can deliver your content to the people who want to hear from you. These conventional strategies include writing helpful on-topic content consistently, engaging in off-page SEO, and optimizing your on-page content for the search engine results pages (SERPs). But, surprise! We’re not going to talk about those today. This post covers three of the less obvious ways you can drive more traffic to your site via your blog posts.

1. Include images or videos in all of your blog posts

Blogging is all about the written word, right? Nope. Stats show that articles that include images get 94% more views than blog posts that don’t use any visuals. The wild thing is only 19% of bloggers are now including video in their posts. This means the second you start adding images to your blog posts, you are giving yourself a competitive edge over bloggers in the same niche. Here are a few insider tips that will lead to a boost in website performance.

Use real images in your blog posts

You may be tempted to pay for a stock image. While stock images are better than no images at all, research shows that real photos can result in a 35% increase in conversion. You may not be a pro photographer, but that doesn’t matter. People want to see a real picture of your garden, sourdough starter, art project, or whatever it is you specialize in.

Use a free image design service

If you want to design images to use in your blog post (like the one above), there are tons of free resources on the internet like Canva and Crello. The cool thing about these free image design services is they offer thousands of templates you can customize to fit the tone and style of your blog. That means you don’t have to be a designer to create a fun and shareable image. Additionally, you can count on the dimensions being correct for social shares. This means if someone goes to your blog and decides to share your post, the image will automatically be the correct size for a social share.
Optimize your image for search engines

If you’re a novice blogger and only know the basics of how the internet works, don’t worry. I’ve got you covered. Remember, Google reads words, not pictures. The best way for Google to understand what your images are is to label them appropriately. In SEO terms, this is called using an alt tag. All you have to do to become a professional SEO alt tagger is save the picture you are going to use to your desktop, then right-click on your picture and select “rename.” Instead of “img10393” (or whatever mumbo jumbo the image is named), rename it with your primary keyword. Here is an example of a recent image relabeled with the primary keyword “gardening tips.” If I include the “gardening tips” image instead of the “screenshot…” image, Google knows what my picture is. Magic.

If you’re really ambitious, you can also include a video in your content. Video is particularly strategic, as 43% of consumers increasingly want video content, and video content is 50X more likely to drive organic search traffic than text only.

2. Follow blog headline best practices

Writing a blog title can be one of the most difficult parts of publishing a blog post. It’s challenging to think of one line that summarizes your entire post, draws readers in, and makes them want to click. However, having an awesome headline is one of the best ways to bring traffic to your website. In fact, making appropriate headline changes has the power to provide a 10% increase in clicks, according to MarketingExperiments. Here is a quick list of what makes a good headline.

Include your target keyword in the first part of the headline

When a searcher types a keyword into Google, Google’s algorithm searches through all relevant posts on the internet to deliver the right content. Google looks at headlines to understand the theme of a post. That’s why it’s critical to include your primary keyword in your headline.

Consider using a list-based headline (they are wildly popular)

People browse the internet when they are looking for how to do something or for best practices. A list-based article is an outstanding way to present information. Lists are easy to scan, informative, and quickly give readers the information they seek. If you write a list-based article, then your headline should show readers the post is a list (e.g., 7 Top Gardening Tips for Novice Green Thumbs). According to ConversionXL, 36% of people prefer list-based headlines. As an added bit of advice, odd-numbered lists tend to outperform even-numbered listicles by 20%.

Hit the headline word-count sweet spot (6-8 words, but up to 13)

If a headline is too short, Google will have a hard time determining the relevance of the post. Google will have the same problem if the headline is too long. So, what’s the sweet spot? Stats show that headlines with 6-13 words attract the most traffic, and if your headline is between 6-8 words, it can increase your click-through rate by 21%.

3. Publish blog posts regularly

The last tip for driving more traffic to your website is to publish blog posts regularly. Research shows that companies that publish 16+ blog posts per month get nearly 3.5x more traffic than those that publish 0-4 monthly. There are a couple of reasons why Google rewards more active blogs with better search result rankings and why they get more traffic. Maybe the reasons are obvious, but since posting regularly is critical to the success of your blog, let’s cover them anyway.
The first reason is that the more content you produce, the more pages Google’s algorithm has to sort through. When you have more content, it’s easier for Google to understand your particular niche, find relevant keywords, and deliver your content to the audience that is looking for you. The next reason is that the more high-quality content you have on your blog, the more it establishes you as an industry leader. As you provide excellent blog posts, people will come back and visit your site often for more of your expertise.  Not to mention, businesses that maintain blogs receive twice as much email traffic as those that don’t. Email is yet another way to reach your audience and drive meaningful traffic to your website. Start your blog with HostGator today! Blogging is undoubtedly one of the best ways to capture the attention (and business) of your target audience. That’s why there are over 500 million blogs on the internet.  If you follow SEO best practices as well as the three tips listed above, your blog will be a successful tool in capturing customers. To get started with your blog, sign up with HostGator today. You can either use HostGator’s Website Builder, or you can start a WordPress blog via HostGator. We can’t wait for you to build your website. Find the post on the HostGator Blog

Liquid Web Acquires ServerSide, a leading Microsoft Windows CMS Hosting Provider

Liquid Web Official Blog -

LANSING, Mich., June 30th, 2020 – Liquid Web, LLC, (https://www.liquidweb.com), the market leader in managed hosting and managed application services to SMBs and entrepreneurs, is excited to announce the acquisition of ServerSide, adding proven experience in hosting the leading Microsoft Windows Content Management solutions to Liquid Web’s portfolio. The acquisition of ServerSide bolsters Liquid Web’s VMware cloud hosting capabilities for small to medium businesses launched in 2019. It also accelerates the company’s entrance into the Progress Sitefinity, Kentico, and Sitecore hosting market. The ServerSide team, including founder and CEO Steve Oren, has joined Liquid Web and has helped lead the effort to migrate customers onto the Liquid Web platform. “The acquisition of ServerSide supports Liquid Web’s mission to power leading content management platforms. With ServerSide, we are excited about building upon the relationships ServerSide had with Sitefinity, Kentico, and Sitecore and their ecosystem partners,” said Joe Oesterling, CTO. “We are excited about joining the Liquid Web team. We’ve successfully migrated our customers to Liquid Web’s platform, and we are working hand in hand to deploy our VMware architecture more broadly within Liquid Web,” said Steve Oren, Former CEO at ServerSide. “We look forward to using Liquid Web’s scale to be a bigger player in the leading Windows CMS ecosystems,” said Oren. To learn more about the Liquid Web Private Cloud powered by VMware and NetApp visit: https://www.liquidweb.com/products/private-cloud/. To learn more about the Liquid Web Windows CMS offerings, visit:
Kentico: https://www.liquidweb.com/products/add-ons/software/kentico/
Progress® Sitefinity: https://www.liquidweb.com/products/add-ons/software/sitefinity/
ElcomCMS: https://www.liquidweb.com/products/add-ons/software/elcom/
Sitecore: https://www.liquidweb.com/products/add-ons/software/sitecore/
About the Liquid Web Family of Brands
Building on over 20 years of success, our Liquid Web Brand Family consists of four companies (Liquid Web, Nexcess, iThemes, and InterWorx), delivering software, solutions, and managed services for mission-critical sites, stores, and applications to SMBs and the designers, developers, and agencies who create for them. With more than 1.5 million sites under management, The Liquid Web Family of Brands serves over 45,000 customers spanning 150 countries. Collectively, the companies have assembled a world-class team of industry experts, provide unparalleled service from a dedicated group of solution engineers available 24/7/365, and own and manage 10 global data centers. As an industry leader in customer service*, the rapidly expanding brand family has been recognized among INC. Magazine’s 5000 Fastest-Growing Companies for twelve years. For more information, please visit https://www.liquidweb.com/. *2019 Net Promoter Score of 67 The post Liquid Web Acquires ServerSide, a leading Microsoft Windows CMS Hosting Provider appeared first on Liquid Web.

How to test HTTP/3 and QUIC with Firefox Nightly

CloudFlare Blog -

HTTP/3 is the third major version of the Hypertext Transfer Protocol, which takes the bold step of moving away from TCP to the new transport protocol QUIC in order to provide performance and security improvements. During Cloudflare's Birthday Week 2019, we were delighted to announce that we had enabled QUIC and HTTP/3 support on the Cloudflare edge network. This was joined by support from Google Chrome and Mozilla Firefox, two of the leading browser vendors and partners in our effort to make the web faster and more reliable for all. A big part of developing new standards is interoperability, which typically means different people analysing, implementing and testing a written specification in order to prove that it is precise, unambiguous, and actually implementable. At the time of our announcement, Chrome Canary had experimental HTTP/3 support and we were eagerly awaiting a release of Firefox Nightly. Now that Firefox supports HTTP/3, we thought we'd share some instructions to help you enable and test it yourselves.
How do I enable HTTP/3 for my domain? Simply go to the Cloudflare dashboard and flip the switch from the "Network" tab manually.
Using Firefox Nightly as an HTTP/3 client: Firefox Nightly has experimental support for HTTP/3. In our experience things are pretty good, but be aware that you might experience some teething issues, so bear that in mind if you decide to enable and experiment with HTTP/3. If you're happy with that responsibility, you'll first need to download and install the latest Firefox Nightly build. Then open Firefox and enable HTTP/3 by visiting "about:config" and setting "network.http.http3.enabled" to true. There are some other parameters that can be tweaked, but the defaults should suffice. (about:config can be filtered by using a search term like "http3".)
Once HTTP/3 is enabled, you can visit your site to test it out. A straightforward way to check if HTTP/3 was negotiated is to check the Developer Tools "Protocol" column in the "Network" tab (on Windows and Linux the Developer Tools keyboard shortcut is Ctrl+Shift+I; on macOS it's Command+Option+I). This "Protocol" column might not be visible at first, so to enable it, right-click one of the column headers and check "Protocol". Then reload the page and you should see that "HTTP/3" is reported.
The aforementioned teething issues might cause HTTP/3 not to show up initially. When you enable HTTP/3 on a zone, we add a header field such as alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400 to all responses for that zone. Clients see this as an advertisement to try HTTP/3 out and will take up the offer on the next request. So to make this happen you can reload the page, but make sure that you bypass the local browser cache (via the "Disable Cache" checkbox, or use the Shift-F5 key combo) or else you'll just see the protocol used to fetch the resource the first time around. Finally, Firefox provides the "about:networking" page, which provides a table of all visited zones and the HTTP version that was used to load them; for example, this very blog.
Sometimes browsers can get sticky to an existing HTTP connection and will refuse to start an HTTP/3 connection. This is hard to detect by humans, so sometimes the best option is to close the app completely and reopen it.
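If you would rather confirm the advertisement outside the browser, a minimal Java sketch along these lines (Java 11+; example.com is a placeholder for a zone where HTTP/3 has been enabled) fetches a response and prints any alt-svc header it finds:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AltSvcCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder zone: replace with a domain where HTTP/3 has been enabled.
        String zone = args.length > 0 ? args[0] : "https://example.com/";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(zone)).GET().build();

        // The request itself goes over HTTP/1.1 or HTTP/2; we only inspect the
        // alt-svc response header, which is where HTTP/3 ("h3-...") is advertised.
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());

        System.out.println("Negotiated protocol: " + response.version());
        System.out.println("alt-svc: " + response.headers().firstValue("alt-svc").orElse("(not present)"));
    }
}

The presence of h3-* entries in alt-svc is what tells an HTTP/3-capable client, such as Firefox Nightly, to switch protocols on its next request.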
Finally, we've also seen some interactions with Service Workers that make it appear that a resource was fetched from the network using HTTP/1.1, when in fact it was fetched from the local Service Worker cache. In such cases, if you're keen to see HTTP/3 in action, you'll need to deregister the Service Worker. If you're in doubt about what is happening on the network, it is often useful to verify things independently, for example by capturing a packet trace and dissecting it with Wireshark.
What's next? The QUIC Working Group recently announced a "Working Group Last Call", which marks an important milestone in the continued maturity of the standards. From the announcement: After more than three and a half years and substantial discussion, all 845 of the design issues raised against the QUIC protocol drafts have gained consensus or have a proposed resolution. In that time the protocol has been considerably transformed; it has become more secure, much more widely implemented, and has been shown to be interoperable. Both the Chairs and the Editors feel that it is ready to proceed in standardisation.
The coming months will see the specifications settle and we anticipate that implementations will continue to improve their QUIC and HTTP/3 support, eventually enabling it in their stable channels. We're pleased to continue working with industry partners such as Mozilla to help build a better Internet together. In the meantime, you might want to check out our guides to testing with other implementations such as Chrome Canary or curl. As compatibility becomes proven, implementations will shift towards optimizing their performance; you can read about Cloudflare's efforts on comparing HTTP/3 to HTTP/2 and the work we've done to improve performance by adding support for CUBIC and HyStart++ to our congestion control module.

How to Set Up and Measure a TikTok Influencer Marketing Campaign

Social Media Examiner -

Want to get your product in front of TikTok’s growing audience? Wondering how to partner with influential creators on TikTok? In this article, you’ll discover tips and tools to set up and analyze a TikTok influencer marketing campaign. How TikTok Influencer Campaigns Work While still in its infancy, TikTok has become ripe for businesses interested […] The post How to Set Up and Measure a TikTok Influencer Marketing Campaign appeared first on Social Media Examiner | Social Media Marketing.

Find Your Most Expensive Lines of Code – Amazon CodeGuru Is Now Generally Available

Amazon Web Services Blog -

Bringing new applications into production, maintaining their code base as they grow and evolve, and at the same time responding to operational issues is a challenging task. For this reason, you can find many ideas on how to structure your teams, which methodologies to apply, and how to safely automate your software delivery pipeline. At re:Invent last year, we introduced in preview Amazon CodeGuru, a developer tool powered by machine learning that helps you improve your applications and troubleshoot issues with automated code reviews and performance recommendations based on runtime data. During the last few months, many improvements have been launched, including a more cost-effective pricing model, support for Bitbucket repositories, and the ability to start the profiling agent using a command line switch, so that you no longer need to modify the code of your application, or add dependencies, to run the agent. You can use CodeGuru in two ways:
CodeGuru Reviewer uses program analysis and machine learning to detect potential defects that are difficult for developers to find, and recommends fixes in your Java code. The code can be stored in GitHub (now also in GitHub Enterprise), AWS CodeCommit, or Bitbucket repositories. When you submit a pull request on a repository that is associated with CodeGuru Reviewer, it provides recommendations for how to improve your code. Each pull request corresponds to a code review, and each code review can include multiple recommendations that appear as comments on the pull request.
CodeGuru Profiler provides interactive visualizations and recommendations that help you fine-tune your application performance and troubleshoot operational issues using runtime data from your live applications. It currently supports applications written in Java virtual machine (JVM) languages such as Java, Scala, Kotlin, Groovy, Jython, JRuby, and Clojure. CodeGuru Profiler can help you find the most expensive lines of code, in terms of CPU usage or introduced latency, and suggest ways you can improve efficiency and remove bottlenecks. You can use CodeGuru Profiler in production, and when you test your application with a meaningful workload, for example in a pre-production environment.
Today, Amazon CodeGuru is generally available with the addition of many new features. In CodeGuru Reviewer, we included the following:
Support for GitHub Enterprise – You can now scan your pull requests and get recommendations against your source code on GitHub Enterprise on-premises repositories, together with a description of what’s causing the issue and how to remediate it.
New types of recommendations to solve defects and improve your code – For example, checking input validation, to avoid issues that can compromise security and performance, and looking for multiple copies of code that do the same thing.
In CodeGuru Profiler, you can find these new capabilities:
Anomaly detection – We automatically detect anomalies in the application profile for those methods that represent the highest proportion of CPU time or latency.
Lambda function support – You can now profile AWS Lambda functions just like applications hosted on Amazon Elastic Compute Cloud (EC2) and containerized applications running on Amazon ECS and Amazon Elastic Kubernetes Service, including those using AWS Fargate.
Cost of issues in the recommendation report – Recommendations contain actionable resolution steps which explain what the problem is, the CPU impact, and how to fix the issue.
To help you better prioritize your activities, you now have an estimation of the savings introduced by applying the recommendation.
Color-my-code – In the visualizations, to help you easily find your own code, we are coloring your methods differently from frameworks and other libraries you may use.
CloudWatch metrics and alerts – To keep track of and monitor efficiency issues that have been discovered.
Let’s see some of these new features at work!
Using CodeGuru Reviewer with a Lambda Function
I create a new repo in my GitHub account, and leave it empty for now. Locally, where I am developing a Lambda function using the Java 11 runtime, I initialize my Git repo and add only the README.md file to the master branch. In this way, I can add all the code as a pull request later and have it go through a code review by CodeGuru.
git init
git add README.md
git commit -m "First commit"
Now, I add the GitHub repo as origin, and push my changes to the new repo:
git remote add origin https://github.com/<my-user-id>/amazon-codeguru-sample-lambda-function.git
git push -u origin master
I associate the repository in the CodeGuru console. When the repository is associated, I create a new dev branch, add all my local files to it, and push it remotely:
git checkout -b dev
git add .
git commit -m "Code added to the dev branch"
git push --set-upstream origin dev
In the GitHub console, I open a new pull request by comparing changes across the two branches, master and dev. I verify that the pull request is able to merge, then I create it. Since the repository is associated with CodeGuru, a code review is listed as Pending in the Code reviews section of the CodeGuru console. After a few minutes, the code review status is Completed, and CodeGuru Reviewer issues a recommendation on the same GitHub page where the pull request was created. Oops! I am creating the Amazon DynamoDB service object inside the function invocation method. In this way, it cannot be reused across invocations. This is not efficient. To improve the performance of my Lambda function, I follow the CodeGuru recommendation, and move the declaration of the DynamoDB service object to a static final attribute of the Java application object, so that it is instantiated only once, during function initialization. Then, I follow the link in the recommendation to learn more best practices for working with Lambda functions.
Using CodeGuru Profiler with a Lambda Function
In the CodeGuru console, I create a MyServerlessApp-Development profiling group and select the Lambda compute platform. Next, I give the AWS Identity and Access Management (IAM) role used by my Lambda function permissions to submit data to this profiling group. Now, the console is giving me all the info I need to profile my Lambda function. To configure the profiling agent, I use a couple of environment variables:
AWS_CODEGURU_PROFILER_GROUP_ARN to specify the ARN of the profiling group to use.
AWS_CODEGURU_PROFILER_ENABLED to enable (TRUE) or disable (FALSE) profiling.
I follow the instructions (for Maven and Gradle) to add a dependency, and include the profiling agent in the build. Then, I update the code of the Lambda function to wrap the handler function inside the LambdaProfiler provided by the agent. To generate some load, I start a few scripts invoking my function using the Amazon API Gateway as trigger. After a few minutes, the profiling group starts to show visualizations describing the runtime behavior of my Lambda function.
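Before moving on to the Profiler visualizations, here is a minimal sketch of the kind of change that Reviewer recommendation implies, assuming the AWS SDK for Java v2 and a hypothetical handler class and table name (this is not the actual code from the walkthrough):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;

import java.util.Map;

// Hypothetical handler illustrating the Reviewer recommendation: build the
// DynamoDB client once, during function initialization, not on every invocation.
public class OrderLookupHandler implements RequestHandler<Map<String, String>, String> {

    // Created once per execution environment and reused across invocations.
    // The flagged anti-pattern would be calling DynamoDbClient.create() inside
    // handleRequest(), rebuilding credentials and connection pools each time.
    private static final DynamoDbClient DYNAMO_DB = DynamoDbClient.create();

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        GetItemRequest request = GetItemRequest.builder()
                .tableName("Orders") // hypothetical table name
                .key(Map.of("orderId", AttributeValue.builder()
                        .s(event.getOrDefault("orderId", "unknown")).build()))
                .build();
        return DYNAMO_DB.getItem(request).item().toString();
    }
}

The only difference between the flagged version and this one is where the client is created; moving it to a static final field means the initialization cost is paid once per execution environment rather than on every invocation.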
In the visualizations, for example, I can see how much CPU time is spent in the different methods of my function. At the bottom, there are the entry point methods. As I scroll up, I find methods that are called deeper in the stack trace. I right-click and hide the LambdaRuntimeClient methods to focus on my code. Note that my methods are colored differently than those in the packages I am using, such as the AWS SDK for Java. I am mostly interested in what happens in the handler method invoked by the Lambda platform. I select the handler method, and now it becomes the new “base” of the visualization. As I move my pointer over each of my methods, I get more information, including an estimation of the yearly cost of running that specific part of the code in production, based on the load experienced by the profiling agent during the selected time window. In my case, the handler function cost is estimated to be $6. If I select the two main functions above, I have an estimation of $3 each. The cost estimation works for code running on Lambda functions, EC2 instances, and containerized applications. Similarly, I can visualize Latency, to understand how much time is spent inside the methods in my code. I keep the Lambda function handler method selected to drill down into what is under my control, and see where time is being spent the most. The CodeGuru Profiler is also providing a recommendation based on the data collected. I am spending too much time (more than 4%) managing encryption. I can use a more efficient crypto provider, such as the open source Amazon Corretto Crypto Provider, described in this blog post. This should lower the time spent to what is expected, about 1% of my profile. Finally, I edit the profiling group to enable notifications. In this way, if CodeGuru detects an anomaly in the profile of my application, I am notified via one or more Amazon Simple Notification Service (SNS) topics.
Available Now
Amazon CodeGuru is available today in 10 regions, and we are working to add more regions in the coming months. For regional availability, please see the AWS Region Table. CodeGuru helps you improve your application code and reduce compute and infrastructure costs with an automated code reviewer and application profiler that provide intelligent recommendations. Using visualizations based on runtime data, you can quickly find the most expensive lines of code in your applications. With CodeGuru, you pay only for what you use. Pricing is based on the lines of code analyzed by CodeGuru Reviewer, and on sampling hours for CodeGuru Profiler. To learn more, please see the documentation or check out the video by Jeff! — Danilo
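As a footnote to the crypto recommendation above, switching a JVM application to the Amazon Corretto Crypto Provider is typically a small registration step at startup. A minimal sketch, assuming the ACCP dependency is on the classpath (the SHA-256 lookup is only there to verify which provider is being picked up):

import com.amazon.corretto.crypto.provider.AmazonCorrettoCryptoProvider;
import java.security.MessageDigest;

public class CryptoProviderSetup {
    public static void main(String[] args) throws Exception {
        // Register ACCP as the highest-priority JCA provider so that standard
        // java.security / javax.crypto lookups resolve to the faster native code.
        AmazonCorrettoCryptoProvider.install();
        AmazonCorrettoCryptoProvider.INSTANCE.assertHealthy();

        // Existing code does not change; it simply picks up the new provider.
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        System.out.println("SHA-256 now served by: " + digest.getProvider().getName());
    }
}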

Let WordPress Redirect Your 404 Pages Automatically

InMotion Hosting Blog -

404 pages are part and parcel of a fully functional website. It’s not entirely bad if your website has some 404 pages, especially if it contains tons of pages or posts. This may be comforting to an extent, but it doesn’t warrant totally neglecting them, especially when WordPress can redirect your 404 pages automatically. The way you handle 404 errors determines how much impact they have on your SEO and conversion rate in general. Continue reading Let WordPress Redirect Your 404 Pages Automatically at InMotion Hosting Blog.

The 6 Best WordPress Plugins Every Blog Needs

HostGator Blog -

The post The 6 Best WordPress Plugins Every Blog Needs appeared first on HostGator Blog. Blogs are a must-have to establish an online presence. With blogging, you get to communicate with your visitors about the latest trends, showcase your products and services, and even ask for their feedback.  But blogging is more than just writing a few words or posting a couple of videos. To run a successful blog, you also must consider search engine optimization, security measures, and promotion.  Luckily, WordPress plugins can help you with these pressing needs. Check out the six best WordPress plugins for blogs below. 1. Yoast SEO Blogging is a long-term strategy to bring attention to your brand. It’s a combination of writing relevant content and getting people excited to visit your website.  Search engine optimization plays a huge role in helping visitors discover your content on the web. Using the right keywords can ensure you’re attracting the right people. Adam Enfroy, a full-time blogger and affiliate marketing expert, says: “Optimizing your blog posts is not about stuffing as many relevant keywords into the article as you can (that can actually hurt your SEO now). It’s about writing for humans first, and search engines second.” Yoast SEO is an essential WordPress plugin to get your blog content ranked on search engines. This plugin comes with a readability analysis feature, title and meta description templating, breadcrumb controls, and XML Sitemaps functionality. 2. Newsletter Visitor engagement doesn’t stop when people land on your blog. The next step is to capture their email address, so you can send visitors more relevant content. That way, you can build a quality relationship with your audience.  Newsletter is a WordPress plugin that helps you with list building and sending emails. This email marketing tool allows you to create responsive newsletters with its drag-and-drop composer. There’s even a subscription spam check to block unwanted bots. Experts suggest building an email list as soon as you create your blog. It’s also wise to try different methods to boost your subscribers. Belle Beth Cooper, the first content crafter at Buffer, writes: “When you’re asking readers to sign up for your email list, you might want to try experimenting with a different language. Willy Franzen found that his subscription rate jumped 254% higher when he changed his call-to-action from ‘subscribe by email’ to ‘get jobs by email’.” 3. Wordfence Security Reports indicate that 43% of cyber-attacks are made against small businesses. One reason for this staggering statistic is the lack of security infrastructure. Similar to adding an alarm system to your new home, your website needs tools to protect it from potential breaches and suspicious attackers. There’s no better time to add security to your site than right now.  Wordfence Security keeps your website safe with its firewall and malware scanner. This plugin identifies and blocks malicious traffic and checks core files for malware, bad URLs, and SEO spam. You’ll get access to a dashboard with an overview of your site’s security including notifications and total attacks blocked.  This tool also comes with two-factor authentication and CAPTCHA to stop bots from logging into your site. If you upgrade to the premium version, you’ll get real-time malware signature updates along with checks to see if your site has been blacklisted for malicious activity. 4. wpDiscuz A blog serves as a central location for your brand to discuss topics relevant to your audience. 
Your blog posts will give insight into your business’s culture, products, and team.  But it’s also important to get feedback. The comment section of your blog gives readers a chance to express their opinions directly to you. Every once in a while, it’s okay to get a little controversial. “Begin a conversation in which you share your position and invite others to disagree. Be careful of overdoing this, though, as being contentious all the time can get weary. It can look like you’re just trying to pick a fight,” writes Jeff Goins, best-selling author of five books. Supercharge your blog comments with wpDiscuz. This plugin adds an interactive comment box on your posts. You can accept and deny specific comments, sort the comments by newest or oldest, and enable comment voting. 5. Google Analytics Dashboard for WP Getting traffic to your blog matters to your brand. So much so that there’s been a 93% increase in blogs using promotional techniques to drive traffic to their posts. You need a way to observe your traffic as it comes in. Google Analytics Dashboard for WP helps you set up all your tracking features without writing any code or hiring a developer. No more leaving WordPress to view key stats in Google Analytics; now, you can monitor them inside your dashboard. Get real-time stats of who’s viewing your website, where they’re coming from, how they found your site, and how long they’re staying on your site.  You also can automatically track clicks on affiliate links and track every file download with just one click. Haven’t set up Google Analytics for your WordPress blog just yet? Read our step-by-step guide. 6. Social Media Share Buttons & Social Sharing Icons Writing great content is only one part of a successful blog. The other part is actually getting people to read and engage with your blog posts. Beyond SEO, you will need additional content distribution channels to attract visitors to your blog.  Social media is an effective way to spread the word about your website. Ben Sailer, inbound marketing lead at CoSchedule, states:  “Another way to connect your audience to your content and encourage them to share it is to create content that revolves around their values. Your audience wants to know that the values of your company or product align with theirs.” Encourage your current visitors to share your blog posts with the Social Media Share Buttons & Social Sharing Icons plugin. You can pick from 16 different designs to match your brand’s site.  This tool gives you the option to make your social media icons static or dynamic. You also can add a counting feature to the buttons. Upgrade Your Blog With These WordPress Blog Plugins It’s time to attract new visitors to your blog. Use these six WordPress plugins to boost your SEO results, gain traffic from social media, and track your site’s analytics.  Find the post on the HostGator Blog

Liquid Web Vs. InMotion

Liquid Web Official Blog -

Comparing Hosting with Liquid Web vs InMotion? All too many hosting providers offer low prices in an effort to attract customers who don’t know exactly what they need. And then once that customer is through the door and has signed up, add-on fees and a la carte services come to light, stacking up and negating those earlier cost savings entirely. InMotion Hosting provides excellent infrastructure for sure, but the original appealing price is almost never the final bill. While price is always a consideration when making infrastructure investments, it can be difficult to compare one provider to another since details can vary so widely between one host and another. The entire exercise can be confusing to even the savviest technologist. So what’s the solution when you don’t want to pay too much, but also don’t want to miss out on capabilities and functionality that you need? How do you even know what those capabilities and functions are in the first place? At Liquid Web, we remove the confusion by always offering full management of your infrastructure. From hardware to software, your infrastructure is under the watchful care of the Most Helpful Humans in Hosting. Rather than nickel and dime customers around the world for things like server administration and optimization, we offer complete management and monitoring so your team can focus on other parts of your business. When considering a hosting provider like InMotion, it is critical that you know what you’re getting with your purchase…and what you aren’t getting but probably need. Choosing InMotion for your infrastructure means choosing to pay additional fees for server management and performance monitoring. It means performing your own migration and potentially being left out in the cold when things start to go wrong. Learn More About Our Hosting Products
Liquid Web vs InMotion: Dedicated and Virtual Private Servers
Liquid Web is the world’s most loved hosting company for a reason. Our industry-leading web hosting solutions are built on best-in-class hardware and verified by independent testing to consistently outperform competitors. In fact, you’ll find that Liquid Web’s base server configurations outclass top-of-the-line offerings from InMotion. Our Dedicated Hosting and VPS Hosting are competitively priced, contract-free, and don’t require a long-term commitment or expensive add-ons to work as advertised. See for yourself how Liquid Web compares to InMotion Hosting:
Fully Managed – Additional Fee
Full Server Stack Support – Additional Fee
24/7/365 Support – Included
Support Request SLA – 59 Seconds or Less for Phone or Chat; 59 Minutes for Email
Includes cPanel/WHM/Plesk – cPanel
Performance Optimization – Additional Fee
Service Monitoring – Proactive Monitoring Included
Outgoing Bandwidth – 5 TB / 4 TB
100% Network Uptime Guarantee
100% Power Uptime Guarantee
SLA Remedy – 1000% / Unpublished
Predictable Billing? – Yes, Monthly / Yes, Monthly
Migrations Included?
Backups Included?
Nobody Includes More Than Liquid Web
Managed Dedicated Server Hosting and Virtual Private Server Hosting at Liquid Web is engineered for peace of mind, with a full suite of performance, reliability, and security solutions included at no extra charge.
CloudFlare® CDN – We provide full management for one of the world’s most popular CDNs, and full support when your site is added to CloudFlare through our interface. CloudFlare will not only speed up your site, but also provide a further boost to security.
Built-in Backups – Local backups are always included at no extra charge.
For an extra layer of backup protection, you can add our Acronis Cyber Backups, off-server backups especially made for our Dedicated and VMware product lineup.
Enhanced Security – Security is paramount, which is why we include ServerSecure with every Fully Managed server. Your server will be protected by a range of proprietary security enhancements to block unwanted access and keep your data secure.
DDoS Attack Protection – We provide free basic protection from small volumetric DDoS attacks with every server on our network. Best of all, it’s always on and ready to go. For larger and more sophisticated attacks, comprehensive protection and mitigation is available.
The World’s Most-Loved Hosting Company
Nobody delights customers more than Liquid Web. Our Net Promoter Score (NPS®) of 67 puts us among the world’s most loved brands — and makes us No. 1 in the hosting industry. What makes us special? Our customers say it best: “Liquid Web’s support team goes above and beyond all my expectations. They helped me transition from one e-commerce platform to another and fixed all the bugs on the way. I call them every time I need any advice or help because they are experts at what they do and I trust them.” — Alex Genson “Wow! What a refreshing surprise in a world filled with mediocrity and poor customer service. I recently changed to Liquid Web … for hosting after nightmares with almost every other hosting company you can imagine. Kudos to Liquid Web and especially Alexander Houston who just expertly answered my questions and made essential changes to my account in a matter of minutes rather than days in a simple knowledgeable LiveChat session.” — Barry C. McLawhorn
Backed By The Most Helpful Humans in Hosting and the Best Guarantees in the Industry
With more than two decades of helping small and mid-size businesses reach their goals, our team has the experience and expertise to keep you growing into the future. From small businesses starting their first web project to large enterprises running mission-critical applications, you can count on Liquid Web to provide the right solution for whatever’s next in your digital journey. Isn’t it time to find out what makes Liquid Web the most reliable hosting provider on earth?
24/7 Support from The Most Helpful Humans in Hosting – It’s easy to say you have the best support, but we have the numbers to back it up. Our Support ranks No. 1 in customer satisfaction.
59 Second Initial Response Guarantee: Phone and Chat – We’re committed to answering your call or connecting to your LiveChat within 59 seconds.
59 Minute Initial Response Guarantee: Email – HelpDesk tickets receive an initial response via email within 59 minutes, guaranteed.
100% Network Uptime Guarantee – All major routing devices within our network will be reachable from the global Internet 100% of the time.
100% Power Uptime Guarantee – By owning — not leasing — our infrastructure, we can guarantee that power to your rack will always be online.
Learn More About Our Hosting Products The post Liquid Web Vs. InMotion appeared first on Liquid Web.
