Industry Buzz

7 Best WordPress Quiz Plugins

HostGator Blog -

The post 7 Best WordPress Quiz Plugins appeared first on HostGator Blog.

The purpose of your website is to engage visitors. With interactive content like quizzes, you can connect with your visitors and add them to your lead funnel. A quiz is a powerful tool for getting to know your audience without being intrusive. Asking simple (and funny) questions can build a brand relationship. It also breaks the monotony of reading yet another blog post. WordPress plugins make it possible to add quizzes to your website. Check out these seven plugins below to enhance the visitor experience.

1. WP Quiz

Interactive content generates more than twice as many conversions for your sales pipeline as passive content. The WP Quiz plugin helps you usher in new leads with professional and engaging quizzes, polls, and surveys. This plugin offers multiple options for creating unique quizzes. You can add video, text, images, or a combination of all three to any poll. You can place your quiz on a single page or extend it across several pages. The image credit option also lets you acknowledge the content creator. WP Quiz comes with a restart feature that lets quiz takers clear their results and start over. When done, they can use the social media buttons to share their quiz results with family and friends.

2. Quiz and Survey Master

Quiz and Survey Master is the ultimate choice for seamlessly integrating quizzes, surveys, and polls into your website. From customer satisfaction surveys to employee polls, you can customize the experience for your quiz takers. Jodi Harris, the director of editorial content and curation at the Content Marketing Institute, explains why this matters: “Interactive content enables users to personalize and participate in the content presented to them. By helping consumers see themselves in the brand experience, the technique offers the potential to deepen engagement and drive greater satisfaction.” This WordPress quiz plugin lets you select from a number of question formats, including multiple-choice, drop-down menus, checkboxes, and fill-in-the-blank. To make things easier for the quiz taker, you can even enable hints for each question. There’s also the option to set a time limit on each quiz.

3. Quiz Cat

SnapApp reports that 53% of content marketers use interactive content to influence the buyer’s journey. Quizzes can engage your website visitors and move them into the sales funnel. With the Quiz Cat plugin, you can create remarkable content to capture qualified leads. This WordPress plugin offers a built-in landing page for each quiz with a headline, subheadline, image, and “Start Quiz” button. You can set up as many quizzes as you desire, and your quizzes can include multiple-choice questions with two to four possible answers. Quiz Cat lets you create a custom message to display when your visitors complete the quiz, helping you build a more personalized experience. Plus, you can use WordPress shortcodes to embed the quiz into any post or page.

4. HD Quiz

HD Quiz gives you the power to create unlimited unique quizzes for your website. You can add featured images and tooltips to every question, and quiz takers can share their quiz results on Facebook and Twitter. “Interactive content increases your chances of going viral—or at least getting more exposure. And if you deliver a satisfying, enjoyable, entertaining, or educational experience, you’ll win viewers’ loyalty,” states Mike Kamo, CEO and co-founder of Hello Bar and Neil Patel Digital. You can configure the plugin to randomize the order of both the questions and the answers. To make the quiz extra challenging, you can set a time limit. Lastly, get creative by adding animated GIFs, images, and links to your quiz.

5. Riddle Quiz Maker

Riddle Quiz Maker is a WordPress plugin for building quizzes, personality tests, and surveys. Users rave about the tool’s easy-to-use interface and customizability. The plugin offers 14 different types of surveys, quizzes, and polls. With more than 75 customization options, you can personalize your quiz to match your website’s colors and fonts. It also includes built-in image editing to crop or add a filter to a picture. Go viral by adding social sharing buttons to your quizzes. Riddle Quiz Maker comes with branching logic to show different questions to each quiz taker. You can collect visitors’ email addresses and add them to your lead pipeline. Then, you can automatically export those leads into your CRM software.

6. Watu Quiz

Watu Quiz is a feature-rich WordPress plugin offering several ways to create quizzes and exams. You can set up a quiz with required questions or have them pulled from a pool of questions. A basic bar chart is available to show quiz takers their points versus the average points of others. “These quizzes not only boost engagement—they also help you get to know your audience. So the next time you construct a quiz, ask yourself what you’d like to know about them. You may gain some essential insights,” says Amy Balliett, co-founder and CEO of Killer Visual Strategies. The plugin also notifies you when someone takes a quiz. You get a list of who took the exam along with their answers. Then, you can export the results to a CSV file to add to your CRM.

7. Chained Quiz

Quizzes serve as a pathway to connect website visitors with your content. Adding quizzes to your WordPress website is easy with Chained Quiz, a conditional logic quiz plugin that determines the next question based on the previous answer. You get an unlimited number of quizzes, questions, and results. Each question can be answered with radio buttons, checkboxes, or a text box. This WordPress quiz plugin also lets you assign points to each answer. Depending on the number of points accumulated, the quiz can direct the person to a specific results page. It’s a unique way to guide potential customers into a particular buyer’s journey.

Build Your Sales Funnel with WordPress Quiz Plugins

Quizzes are an effective way to engage your website visitors and capture new leads. Get creative by asking relevant questions and adding high-quality images to your quizzes. It’s simple to do with these WordPress quiz plugins.

How to Use Instagram Quick Replies to Streamline Engagement

Social Media Examiner -

Do you use Instagram Direct Messages to engage with customers or followers? Want to save time spent answering the same questions over and over? In this article, you’ll learn how to use Instagram Quick Replies for business and find out how to turn past direct messages into quick replies. To learn how to set up […] The post How to Use Instagram Quick Replies to Streamline Engagement appeared first on Social Media Examiner | Social Media Marketing.

Virtual Interning Offers Unique Challenges and Opportunities

CloudFlare Blog -

I am in my third year at Northeastern University, pursuing an undergraduate degree in Marketing and Psychology. Five months ago I joined Cloudflare as an intern on the APAC Marketing team in the beautiful Singapore office. When searching for internships, Cloudflare stood out as a place where I could gain skills in marketing, learn from amazing mentors, and have space to take ownership of projects. As a young but well-established company, Cloudflare provides the resources for its interns to work cross-functionally and creatively and truly be a part of the exponential growth of the company.

My experience at Cloudflare

Earlier this week, I hopped on a virtual meeting with a few coworkers, thinking everything was set to record a webinar. As I shared my screen to explain how to navigate the platform, I realised the setup was incorrect and we couldn’t start on time. Due to the virtual nature of the meeting, my coworkers didn’t see the panic on my face and had no idea what was going on. I corrected the issue and set up an additional trial-run session, issuing apologies to both coworkers. They both took it in stride and expressed that it happens to the best of us. At Cloudflare, everyone is understanding of hiccups and encourages me to find a solution. This understanding attitude has allowed me to reach out of my comfort zone and work on new skills. Still, there is no doubt that working remotely can lead to additional stressors for employees. For interns, who are prone to making mistakes since this is often our first exposure to the workplace, having limited access to coworkers increases the challenge. Though there have been some challenges, virtual interning still provides many opportunities. Over my time here, I have worked with my team to develop the trust and autonomy to lead projects and learn new systems and software. I had the opportunity to create and run campaigns, including setup, execution, and promotion. I took charge of our recent APAC-wide webinars. I promoted the webinars on social platforms and worked with vendors. Through this process, I learned to analyse the quality of leads from different sources, which gave me the ability to develop post-quarter analyses looking at webinar performance and discerning lessons we can take into future quarters.

I also conducted various data analysis projects, beginning with data extraction and leading to analysis of the holistic business impact. For instance, I led a detailed data analysis project looking into the performance of events and how they might be improved. I learned new software, such as Salesforce, and how to tell a story with data. Through analysis of the sales cycle and conversion rates, we were able to pinpoint key areas for improving the execution of events. Among these many exciting projects, I have also learned from my experienced teammates about how to work smart, and I have been lucky to be part of a great company. As I come up on my final month as an intern at Cloudflare, I am excited to take the lessons I have learned over the past five months into my final years in school and into whatever I end up doing after.

A guide for those beginning their virtual intern experience

Cloudflare has provided a seamless transition to remote work for full-time employees, interns, and new hires. They have provided resources, such as virtual fitness classes and fireside chats, for us to stay healthy mentally, physically, and professionally. Even so, during these tumultuous times, it can be stressful to start an internship (possibly your first) in a remote setting. With one month left and seeing many of my fellow college students begin their own summer internships, I’m reflecting on the multitude of lessons I have learned at Cloudflare. While I was lucky to have three months working with the team in the office, I know many interns are worried about starting internships that are now fully remote.
As I have been working from home for the past two months, I hope to provide incoming interns with some guidance on how to excel during a remote internship.

Set up a LOT of meetings and expand your network

Recently, I was curious to learn more about what the different teams were doing without being able to make in-person sales calls. I asked my manager if I could listen in on a few more meetings, and he quickly agreed. I have since built a better picture of the different teams’ activities and initiated conversations with my manager that led to a deeper understanding of the sales cycle. Being engaged, interested, and forward with my request to attend more meetings provided me with additional learning experiences. Don’t wait around for people to set up meetings with you or give you tasks. Your co-workers still have a full-time job to do, so finding time to train you might slip their mind, especially since they can’t see you. When I first started my internship, my manager encouraged me to reach out to my team (and other teams) and come prepared with lots of questions. I started filling my calendar with short 15-30 minute meetings to get to know the different teams in the office. This is even more crucial for those working remotely. You may not have the opportunity to speak with co-workers in the elevator or the All Hands room. Make up for this by setting up introductory meetings in your first few weeks, and don’t be afraid to ask to be part of meetings. You will be able to learn more about your organisation and what interests you.

Speak up and don’t stay on mute

As an intern, I am usually the most inexperienced individual in the meeting, which can make it nerve-wracking to unmute myself and speak up. With all meetings now in a video conference format, it can be easy to say “hi,” mute yourself, and spend the rest of the time listening to everyone else speak. I have learned that I won’t get the most out of my experience unless I offer my opinion and ask questions.
Often, I am wrong, but my teammates explain why. For example, I came to a meeting with my manager prepared with a draft of an email. He was able to help me edit it and make it even more effective. He then provided me with extra reading materials and templates to help me improve in the future. Because of the questions and opinions I share during these meetings, I now have a greater understanding of branding and how to position a company in the market. As an intern starting out in a virtual environment, be fully engaged in meetings so your team can learn from your opinions and vice versa. Work to overcome the intimidation you may be feeling and take the initiative to show your team what you have to offer. Making sure your video is on during every meeting can help you stay present and focused.

Everyone is dealing with unique circumstances; use this to get to know your coworkers

In many companies, almost all employees are working from home, providing a unique commonality. It is an easy talking point to start with in any meeting and helps you get to know your coworkers. Use this as an opportunity to get to know them on a deeper level and share something about yourself. You can discuss interesting books you have read or TV shows you love. It is also a great opportunity to set up fun virtual activities. My manager recently set up a “Fancy Dress Happy Hour” where we all dressed up as our favourite fictional characters and chatted about life stuck at home. Don’t be afraid to set up activities like this. Chances are, the rest of your team is just as tired of being stuck at home as you are.

Recognising this could be the new working reality (for a while more)

The events of 2020 have led to drastic changes in the business world. Everyone is learning a new way to work and adapting to change. It may be too soon to know what a fully remote internship will look like, but it is a great opportunity to find new and innovative ways to intern.
Being an intern is a unique experience where you are not only allowed, but encouraged, to try new things, even those not included in your job description. Virtual interning offers many unique challenges, but it also provides the opportunity to learn how to quickly adapt and find new opportunities.

Cloudflare is a company that has urged me to gain a better grasp of my goals and provided me with opportunities to act towards fulfilling them. It is a great place to understand what a post-university job will look like, and it exemplifies how much fun work can be. This summer, they have doubled their intern class and are working to amplify interns' voices so they are a meaningful part of the company. If you are interested in being part of an innovative, collaborative environment, consider applying for an internship at Cloudflare here.

Why InMotion Hosting Chose a Privacy First Architecture

InMotion Hosting Blog -

Protecting User Data in the Era of Public Clouds

Personal information is more valuable and more under siege than ever before. Public clouds, governments, private companies, and organizations of all sizes possess and lose more personal data than at any time in history. Unfortunately, turning over some degree of personal information is unavoidable in the modern digital economy. Some governments have attempted to legislate privacy, but it may take years before legacy public cloud and technology providers can update their infrastructure and their software to accommodate better privacy. Continue reading Why InMotion Hosting Chose a Privacy First Architecture at InMotion Hosting Blog.

Working Together to Create a Just and Equitable Future

LinkedIn Official Blog -

We’re experiencing a moment in history, a collective reckoning with the impact of racial oppression that feels both familiar in its tragic frequency and different in its scale and scope. The movement goes beyond the protests of the horrific death of George Floyd in Minneapolis or João Pedro Matos Pinto in Rio de Janeiro, and demands for accountability for those responsible. It represents a global call to overturn systems of inequity that have cost Black people their lives, their freedom, and…

New – A Shared File System for Your Lambda Functions

Amazon Web Services Blog -

I am very happy to announce that AWS Lambda functions can now mount an Amazon Elastic File System (EFS), a scalable and elastic NFS file system that stores data within and across multiple Availability Zones (AZs) for high availability and durability. In this way, you can use a familiar file system interface to store and share data across all concurrent execution environments of one or more Lambda functions. EFS supports full file system access semantics, such as strong consistency and file locking.

To connect an EFS file system to a Lambda function, you use an EFS access point: an application-specific entry point into an EFS file system that includes the operating system user and group to use when accessing the file system and the file system permissions, and that can limit access to a specific path in the file system. This helps keep file system configuration decoupled from the application code.

You can access the same EFS file system from multiple functions, using the same or different access points. For example, using different EFS access points, each Lambda function can access different paths in a file system, or use different file system permissions.

You can share the same EFS file system with Amazon Elastic Compute Cloud (EC2) instances, containerized applications using Amazon ECS and AWS Fargate, and on-premises servers. Following this approach, you can use different computing architectures (functions, containers, virtual servers) to process the same files. For example, a Lambda function reacting to an event can update a configuration file that is read by an application running on containers. Or you can use a Lambda function to process files uploaded by a web application running on EC2.

In this way, some use cases are much easier to implement with Lambda functions. For example:

- Processing or loading data larger than the space available in /tmp (512 MB).
- Loading the most recent version of files that change frequently.
- Using data science packages that require storage space to load models and other dependencies.
- Saving function state across invocations (using unique file names or file system locks).
- Building applications requiring access to large amounts of reference data.
- Migrating legacy applications to serverless architectures.
- Interacting with data-intensive workloads designed for file system access.
- Partially updating files (using file system locks for concurrent access).
- Moving a directory and all its content within a file system with an atomic operation.

Creating an EFS File System

To mount an EFS file system, your Lambda functions must be connected to an Amazon Virtual Private Cloud that can reach the EFS mount targets. For simplicity, I am using here the default VPC that is automatically created in each AWS Region.

Note that, when connecting Lambda functions to a VPC, networking works differently. If your Lambda functions are using Amazon Simple Storage Service (S3) or Amazon DynamoDB, you should create a gateway VPC endpoint for those services. If your Lambda functions need to access the public internet, for example to call an external API, you need to configure a NAT Gateway. I usually don’t change the configuration of my default VPCs. If I have specific requirements, I create a new VPC with private and public subnets using the AWS Cloud Development Kit, or use one of these AWS CloudFormation sample templates. In this way, I can manage networking as code.

In the EFS console, I select Create file system and make sure that the default VPC and its subnets are selected. For all subnets, I use the default security group, which gives network access to other resources in the VPC using the same security group. In the next step, I give the file system a Name tag and leave all other options at their default values. Then, I select Add access point. I use 1001 for the user and group IDs and limit access to the /message path.
In the Owner section, used to create the folder automatically when first connecting to the access point, I use the same user and group IDs as before, and 750 for permissions. With these permissions, the owner can read, write, and execute files; users in the same group can read but not write; other users have no access. I go on, and complete the creation of the file system.

Using EFS with Lambda Functions

To start with a simple use case, let’s build a Lambda function implementing a MessageWall API to add, read, or delete text messages. Messages are stored in a file on EFS so that all concurrent execution environments of that Lambda function see the same content.

In the Lambda console, I create a new MessageWall function and select the Python 3.8 runtime. In the Permissions section, I leave the default. This will create a new AWS Identity and Access Management (IAM) role with basic permissions. When the function is created, in the Permissions tab I click on the IAM role name to open the role in the IAM console. Here, I select Attach policies to add the AWSLambdaVPCAccessExecutionRole and AmazonElasticFileSystemClientReadWriteAccess AWS managed policies. In a production environment, you can restrict access to a specific VPC and EFS access point.

Back in the Lambda console, I edit the VPC configuration to connect the MessageWall function to all subnets in the default VPC, using the same default security group I used for the EFS mount points. Now, I select Add file system in the new File system section of the function configuration. Here, I choose the EFS file system and the access point I created before. For the local mount point, I use /mnt/msg and Save. This is the path where the access point will be mounted, and it corresponds to the /message folder in my EFS file system. In the Function code editor of the Lambda console, I paste the following code and Save.
```python
import os
import fcntl

MSG_FILE_PATH = '/mnt/msg/content'


def get_messages():
    try:
        with open(MSG_FILE_PATH, 'r') as msg_file:
            fcntl.flock(msg_file, fcntl.LOCK_SH)
            messages = msg_file.read()
            fcntl.flock(msg_file, fcntl.LOCK_UN)
    except:
        messages = 'No message yet.'
    return messages


def add_message(new_message):
    with open(MSG_FILE_PATH, 'a') as msg_file:
        fcntl.flock(msg_file, fcntl.LOCK_EX)
        msg_file.write(new_message + "\n")
        fcntl.flock(msg_file, fcntl.LOCK_UN)


def delete_messages():
    try:
        os.remove(MSG_FILE_PATH)
    except:
        pass


def lambda_handler(event, context):
    method = event['requestContext']['http']['method']
    if method == 'GET':
        messages = get_messages()
    elif method == 'POST':
        new_message = event['body']
        add_message(new_message)
        messages = get_messages()
    elif method == 'DELETE':
        delete_messages()
        messages = 'Messages deleted.'
    else:
        messages = 'Method unsupported.'
    return messages
```

I select Add trigger and in the configuration I select the Amazon API Gateway. I create a new HTTP API. For simplicity, I leave my API endpoint open. With the API Gateway trigger selected, I copy the endpoint of the new API I just created. I can now use curl to test the API (the actual endpoint URL was dropped from this copy of the post, so a placeholder stands in for it below):

```shell
$ curl <API_ENDPOINT>
No message yet.
$ curl -X POST -H "Content-Type: text/plain" -d 'Hello from EFS!' <API_ENDPOINT>
Hello from EFS!
$ curl -X POST -H "Content-Type: text/plain" -d 'Hello again :)' <API_ENDPOINT>
Hello from EFS!
Hello again :)
$ curl <API_ENDPOINT>
Hello from EFS!
Hello again :)
$ curl -X DELETE <API_ENDPOINT>
Messages deleted.
$ curl <API_ENDPOINT>
No message yet.
```

It would be relatively easy to add unique file names (or specific subdirectories) for different users and extend this simple example into a more complete messaging application. As a developer, I appreciate the simplicity of using a familiar file system interface in my code. However, depending on your requirements, EFS throughput configuration must be taken into account. See the section Understanding EFS Performance later in the post for more information.
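Because the function relies only on plain POSIX file locking, the same get/add logic can be tried locally without EFS. This is a minimal standalone sketch of that pattern, with a local temporary file standing in for the EFS mount:

```python
import fcntl
import os
import tempfile

# Local stand-in for the EFS-backed path used by the Lambda function.
msg_path = os.path.join(tempfile.mkdtemp(), 'content')

def add_message(new_message):
    with open(msg_path, 'a') as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # exclusive lock while appending
        f.write(new_message + "\n")
        fcntl.flock(f, fcntl.LOCK_UN)

def get_messages():
    try:
        with open(msg_path, 'r') as f:
            fcntl.flock(f, fcntl.LOCK_SH)  # shared lock while reading
            messages = f.read()
            fcntl.flock(f, fcntl.LOCK_UN)
    except OSError:
        messages = 'No message yet.'
    return messages

add_message('Hello from EFS!')
add_message('Hello again :)')
print(get_messages())  # Hello from EFS!\nHello again :)\n
```

On a shared EFS mount, the same shared/exclusive lock pair is what keeps concurrent execution environments from interleaving partial writes.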
Now, let’s use the new EFS file system support in AWS Lambda to build something more interesting. For example, let’s use the additional space available with EFS to build a machine learning inference API that processes images.

Building a Serverless Machine Learning Inference API

To create a Lambda function implementing machine learning inference, I need to be able, in my code, to import the necessary libraries and load the machine learning model. Often, when doing so, the overall size of those dependencies goes beyond the current AWS Lambda limits on the deployment package size. One way of solving this is to accurately minimize the libraries to ship with the function code, and then download the model from an S3 bucket straight to memory (up to 3 GB, including the memory required for processing the model) or to /tmp (up to 512 MB). This custom minimization and download of the model has never been easy to implement. Now, I can use an EFS file system.

The Lambda function I am building this time needs access to the public internet to download a pre-trained model and the images to run inference on. So I create a new VPC with public and private subnets, and configure a NAT Gateway and the route table used by the private subnets to give access to the public internet. Using the AWS Cloud Development Kit, it’s just a few lines of code.

I create a new EFS file system and an access point in the new VPC using similar configurations as before. This time, I use /ml for the access point path. Then, I create a new MLInference Lambda function with the same setup as before for permissions, and connect the function to the private subnets of the new VPC. Machine learning inference is quite a heavy workload, so I select 3 GB for memory and 5 minutes for timeout. In the File system configuration, I add the new access point and mount it under /mnt/inference.
The machine learning framework I am using for this function is PyTorch, and I need to put the libraries required to run inference in the EFS file system. I launch an Amazon Linux EC2 instance in a public subnet of the new VPC. In the instance details, I select one of the Availability Zones where I have an EFS mount point, and then Add file system to automatically mount the same EFS file system I am using for the function. For the security groups of the EC2 instance, I select the default security group (to be able to mount the EFS file system) and one that gives inbound access to SSH (to be able to connect to the instance).

I connect to the instance using SSH and create a requirements.txt file containing the dependencies I need:

```
torch
torchvision
numpy
```

The EFS file system is automatically mounted by EC2 under /mnt/efs/fs1. There, I create the /ml directory and change the owner of the path to the user and group I am using now that I am connected (ec2-user):

```shell
$ sudo mkdir /mnt/efs/fs1/ml
$ sudo chown ec2-user:ec2-user /mnt/efs/fs1/ml
```

I install Python 3 and use pip to install the dependencies in the /mnt/efs/fs1/ml/lib path:

```shell
$ sudo yum install python3
$ pip3 install -t /mnt/efs/fs1/ml/lib -r requirements.txt
```

Finally, I give ownership of the whole /ml path to the user and group I used for the EFS access point:

```shell
$ sudo chown -R 1001:1001 /mnt/efs/fs1/ml
```

Overall, the dependencies in my EFS file system are using about 1.5 GB of storage.

I go back to the MLInference Lambda function configuration. Depending on the runtime you use, you need to find a way to tell it where to look for dependencies if they are not included with the deployment package or in a layer. In the case of Python, I set the PYTHONPATH environment variable to /mnt/inference/lib.

I am going to use PyTorch Hub to download this pre-trained machine learning model to recognize the kind of bird in a picture. The model I am using for this example is relatively small, about 200 MB.
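Setting PYTHONPATH works because Python adds those directories to sys.path before resolving imports. The same effect can be reproduced locally by extending sys.path at runtime; in this sketch, the mydep module is a made-up stand-in for the dependencies installed on the EFS mount:

```python
import os
import sys
import tempfile

# Simulate a dependency directory like /mnt/inference/lib.
lib_dir = tempfile.mkdtemp()
with open(os.path.join(lib_dir, 'mydep.py'), 'w') as f:
    f.write("VERSION = '1.0'\n")

# Equivalent to running with PYTHONPATH=<lib_dir> in the environment:
# the directory is searched before the standard locations.
sys.path.insert(0, lib_dir)

import mydep  # resolved from lib_dir, not from site-packages
print(mydep.VERSION)  # 1.0
```

In the Lambda case, the runtime reads PYTHONPATH at startup and performs this insertion for you, so the libraries under /mnt/inference/lib import exactly as if they were packaged with the function.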
To cache the model on the EFS file system, I set the TORCH_HOME environment variable to /mnt/inference/model. All dependencies are now in the file system mounted by the function, and I can type my code straight in the Function code editor. I paste the following code to have a machine learning inference API:

```python
import urllib.request
import json
import os
import torch
from PIL import Image
from torchvision import transforms

transform_test = transforms.Compose([
    transforms.Resize((600, 600), Image.BILINEAR),
    transforms.CenterCrop((448, 448)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

model = torch.hub.load('nicolalandro/ntsnet-cub200', 'ntsnet', pretrained=True,
                       **{'topN': 6, 'device': 'cpu', 'num_classes': 200})
model.eval()


def lambda_handler(event, context):
    url = event['queryStringParameters']['url']
    img = Image.open(urllib.request.urlopen(url))
    scaled_img = transform_test(img)
    torch_images = scaled_img.unsqueeze(0)
    with torch.no_grad():
        top_n_coordinates, concat_out, raw_logits, concat_logits, part_logits, \
            top_n_index, top_n_prob = model(torch_images)
        _, predict = torch.max(concat_logits, 1)
        pred_id = predict.item()
        bird_class = model.bird_classes[pred_id]
        print('bird_class:', bird_class)
    return json.dumps({
        "bird_class": bird_class,
    })
```

I add the API Gateway as a trigger, similarly to what I did before for the MessageWall function. Now, I can use the serverless API I just created to analyze pictures of birds. I am not really an expert in the field, so I looked for a couple of interesting images on Wikipedia: an Atlantic puffin and a western grebe. I call the API to get a prediction for these two pictures (the endpoint and image URLs were dropped from this copy of the post, so placeholders stand in for them):

```shell
$ curl '<API_ENDPOINT>?url=<PUFFIN_IMAGE_URL>'
{"bird_class": "106.Horned_Puffin"}
$ curl '<API_ENDPOINT>?url=<GREBE_IMAGE_URL>'
{"bird_class": "053.Western_Grebe"}
```

It works! Looking at the Amazon CloudWatch Logs for the Lambda function, I see that the first invocation, when the function loads and prepares the pre-trained model for inference on CPUs, takes about 30 seconds.
To avoid a slow response, or a timeout from the API Gateway, I use Provisioned Concurrency to keep the function ready. The next invocations take about 1.8 seconds.

Understanding EFS Performance

When using EFS with your Lambda function, it is very important to understand how EFS performance works. For throughput, each file system can be configured to use bursting or provisioned mode.

When using bursting mode, all EFS file systems, regardless of size, can burst at least to 100 MiB/s of throughput. Those over 1 TiB in the standard storage class can burst to 100 MiB/s per TiB of data stored in the file system. EFS uses a credit system to determine when file systems can burst. Each file system earns credits over time at a baseline rate that is determined by the size of the file system stored in the standard storage class, and it uses credits whenever it reads or writes data. The baseline rate is 50 KiB/s per GiB of storage.

You can monitor the use of credits in CloudWatch; each EFS file system has a BurstCreditBalance metric. If you see that you are consuming all your credits, and the BurstCreditBalance metric is going to zero, you should enable provisioned throughput mode for the file system, from 1 to 1024 MiB/s. There is an additional cost when using provisioned throughput, based on how much throughput you are adding on top of the baseline rate.

To avoid running out of credits, you should think of the throughput as the average you need during the day. For example, if you have a 10 GB file system, you have 500 KiB/s of baseline rate, and every day you can read/write 500 KiB/s * 3600 seconds * 24 hours = 43.2 GiB. If the libraries and everything your function needs to load during initialization are about 2 GiB, and you only access the EFS file system during function initialization, like in the MLInference Lambda function above, that means you can initialize your function (for example because of updates or scaling-up activities) about 20 times per day.
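As a quick sanity check, the bursting-mode arithmetic above can be reproduced in a few lines (following the post's own round figures, which treat one "GiB" as one million KiB):

```python
SECONDS_PER_DAY = 3600 * 24

# Bursting mode: baseline of 50 KiB/s per GiB stored.
# A 10 GB file system therefore accrues credits at 500 KiB/s.
baseline_kib_per_s = 50 * 10

daily_kib = baseline_kib_per_s * SECONDS_PER_DAY  # 43,200,000 KiB
daily_gb = daily_kib / 1_000_000                  # ~43.2, matching the text
print(daily_gb)  # 43.2

# Each cold start reads ~2 GiB of dependencies from EFS.
inits_per_day = daily_gb // 2
print(inits_per_day)  # 21.0, i.e. about 20 initializations per day
```

The same formula, with the baseline replaced by the provisioned rate, gives the daily budget for provisioned throughput mode.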
That’s not a lot, and you would probably need to configure provisioned throughput for the EFS file system. If you have 10 MiB/s of provisioned throughput, then every day you have 10 MiB/s * 3600 seconds * 24 hours = 864 GiB to read or write. If you only use the EFS file system at function initialization, to read about 2 GiB of dependencies, it means that you can have about 400 initializations per day. That may be enough for your use case.

In the Lambda function configuration, you can also use the reserved concurrency control to limit the maximum number of execution environments used by a function.

If, by mistake, the BurstCreditBalance goes down to zero, and the file system is relatively small (for example, a few GiBs), there is the possibility that your function gets stuck and can’t execute fast enough before reaching the timeout. In that case, you should enable (or increase) provisioned throughput for the EFS file system, or throttle your function by setting the reserved concurrency to zero to block all invocations until the EFS file system has enough credits.

Understanding Security Controls

When using EFS file systems with AWS Lambda, you have multiple levels of security controls. I’m doing a quick recap here because they should all be considered during the design and implementation of your serverless applications. You can find more info on using IAM authorization and access points with EFS in this post.

To connect a Lambda function to an EFS file system, you need:

- Network visibility in terms of VPC routing/peering and security group.
- IAM permissions for the Lambda function to access the VPC and mount (read only or read/write) the EFS file system. You can specify in the IAM policy conditions which EFS access point the Lambda function can use.
- The EFS access point can limit access to a specific path in the file system.
- File system security (user ID, group ID, permissions) can limit read, write, or executable access for each file or directory mounted by a Lambda function.
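As a sketch of the IAM layer described above, the function's execution role could be restricted to mounting through a single access point with a policy statement along these lines (the ARNs, region, and account are placeholders; `elasticfilesystem:ClientMount`/`ClientWrite` and the `elasticfilesystem:AccessPointArn` condition key are the EFS client-access permissions):

```json
{
  "Effect": "Allow",
  "Action": [
    "elasticfilesystem:ClientMount",
    "elasticfilesystem:ClientWrite"
  ],
  "Resource": "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-01234567",
  "Condition": {
    "StringEquals": {
      "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0"
    }
  }
}
```

The function also needs the usual VPC networking permissions, for example via the AWSLambdaVPCAccessExecutionRole managed policy.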
The Lambda function execution environment and the EFS mount point use industry-standard Transport Layer Security (TLS) 1.2 to encrypt data in transit. You can provision Amazon EFS to encrypt data at rest. Data encrypted at rest is transparently encrypted while being written, and transparently decrypted while being read, so you don’t have to modify your applications. Encryption keys are managed by the AWS Key Management Service (KMS), eliminating the need to build and maintain a secure key management infrastructure.

Available Now

This new feature is offered in all regions where AWS Lambda and Amazon EFS are available, with the exception of the regions in China, where we are working to make this integration available as soon as possible. For more information on availability, please see the AWS Region table. To learn more, please see the documentation. EFS for Lambda can be configured using the console, the AWS Command Line Interface (CLI), the AWS SDKs, and the Serverless Application Model.

This feature allows you to build data-intensive applications that need to process large files. For example, you can now unzip a 1.5 GB file in a few lines of code, or process a 10 GB JSON document. You can also load libraries or packages that are larger than the 250 MB package deployment size limit of AWS Lambda, enabling new machine learning, data modelling, financial analysis, and ETL job scenarios.

Amazon EFS for Lambda is supported at launch in AWS Partner Network solutions, including Epsagon, Lumigo, Datadog, HashiCorp Terraform, and Pulumi. There is no additional charge for using EFS from Lambda functions. You pay the standard price for AWS Lambda and Amazon EFS.

Lambda execution environments always connect to the right mount target in an AZ and not across AZs. You can connect to EFS in the same AZ via cross-account VPC, but there can be data transfer costs for that. We do not support cross-region or cross-AZ connectivity between EFS and Lambda.

— Danilo

Introducing Cache Analytics

CloudFlare Blog -

Today, I’m delighted to announce Cache Analytics: a new tool that gives deeper exploration capabilities into what Cloudflare’s caching and content delivery services are doing for your web presence.

Caching is the most effective way to improve the performance and economics of serving your website to the world. Unsurprisingly, customers consistently ask us how they can optimize their cache performance to get the most out of Cloudflare.

With Cache Analytics, it’s easier than ever to learn how to speed up your website and reduce traffic sent to your origin. Some of my favorite capabilities include:

- See what resources are missing from cache, expired, or never eligible for cache in the first place
- Slice and dice your data as you see fit: filter by hostnames, or see a list of top URLs that miss cache
- Switch between views of requests and data transfer to understand both performance and cost

An overview of Cache Analytics

Cache Analytics is available today for all customers on our Pro, Business, and Enterprise plans. In this blog post, I’ll explain why we built Cache Analytics and how you can get the most out of it.

Why do we need analytics focused on caching?

If you want to scale the delivery of a fast, high-performance website, then caching is critical. Caching has two main goals:

First, caching improves performance. Cloudflare data centers are within 100ms of 90% of the planet; putting your content in Cloudflare’s cache gets it physically closer to your customers and visitors, meaning that visitors will see your website faster when they request it! (Plus, reading assets from our edge SSDs is really fast, rather than waiting for origins to generate a response.)

Second, caching helps reduce bandwidth costs associated with operating a presence on the Internet.
Origin data transfer is one of the biggest expenses of running a web service, so serving content out of Cloudflare’s cache can significantly reduce costs incurred by origin infrastructure.

Because it’s not safe to cache all content (we wouldn’t want to cache your bank balance by default), Cloudflare relies on customers to tell us what’s safe to cache with HTTP Cache-Control headers and page rules. But even with page rules, it can be hard to understand what’s actually getting cached, or, more importantly, what’s not getting cached, and why. Is a resource expired? Or was it even eligible for cache in the first place?

Faster or cheaper? Why not both!

Cache Analytics was designed to help users understand how Cloudflare’s cache is performing, but it can also be used as a general-purpose analytics tool. Here I’ll give a quick walkthrough of the interface.

First, at the top left, you should decide whether you want to focus on requests or data transfer.

Cache Analytics enables you to toggle between views of requests and data transfer.

As a rule of thumb, requests (the default view) is more useful for understanding performance, because every request that misses cache results in a performance hit. Data transfer is useful for understanding cost, because most hosts charge for every byte that leaves their network; every gigabyte served by Cloudflare translates into money saved at the origin. You can always toggle between these two views while keeping filters enabled.

A filter for every occasion

Let’s say you’re focused on improving the performance of a specific subdomain on your zone. Cache Analytics allows flexible filtering of the data that’s important to you.

Cache Analytics enables flexible filtering of data.

Filtering is essential for zooming in on the chunk of traffic that you’re most interested in. You can filter by cache status, hostname, path, content type, and more.
This is helpful, for example, if you’re trying to reduce data transfer for a specific subdomain, or are trying to tune the performance of your HTML pages.

Seeing the big picture

When analyzing traffic patterns, it’s essential to understand how things change over time. Perhaps you just applied a configuration change and want to see the impact, or just launched a big sale on your e-commerce site.

“Served by Cloudflare” indicates traffic that we were able to serve from our edge without reaching your origin server. “Served by Origin” indicates traffic that was proxied back to origin servers. (It can be really satisfying to add a page rule and see the amount of traffic “Served by Cloudflare” go up!)

Note that this graph will change significantly when you switch between “Requests” and “Data Transfer.” Revalidated requests are particularly interesting; because Cloudflare checks with the origin before returning a result from cache, these count as “Served by Cloudflare” for the purposes of data transfer, but as “Served by Origin” for the purposes of requests.

Slicing the pie

After the high-level summary, we show an overview of cache status, which explains why traffic might be served from Cloudflare or from origin. We also show a breakdown of cache status by Content-Type to give an overview of how different components of your website perform.

Cache statuses are also essential for understanding what you need to do to optimize cache ratios. For example:

Dynamic indicates that a request was never eligible for cache and went straight to origin. This is the default for many file types, including HTML. Learn more about making more content eligible for cache using page rules; fixing this is one of the fastest ways to reduce origin data transfer cost.

Revalidated indicates content that was expired but, after Cloudflare checked the origin, was still fresh!
If you see a lot of revalidated content, it’s a good sign you should increase your Edge Cache TTLs through a page rule or a max-age origin directive. Updating TTLs is one of the easiest ways to make your site faster.

Expired resources are ones that were in our cache but had expired. Consider whether you can extend TTLs on these, or at least support revalidation at your origin.

A miss indicates that Cloudflare has not seen that resource recently. These can be tricky to optimize, but there are a few potential remedies: enable Argo Tiered Caching to check another data center’s cache before going to origin, or use a Custom Cache Key to make multiple URLs match the same cached resource (for example, by ignoring the query string).

For a full explanation of each cache status, see our help center.

To the Nth dimension

Finally, Cache Analytics shows a number of what we call “Top Ns”: various ways to slice and dice the above data on useful dimensions. It’s often helpful to apply filters (for example, to a specific cache status) before looking at these lists. For example, when trying to tune performance, I often filter to just “expired” or “revalidated,” then see if there are a few URLs that dominate these stats.

But wait, there’s more

Cache Analytics is available now for customers on our Pro, Business, and Enterprise plans. Pro customers have access to up to 3 days of analytics history. Business and Enterprise customers have access to up to 21 days, with more coming soon.

This is just the first step for Cache Analytics. We’re planning to add more dimensions to drill into the data, and even more essential statistics, for example about how cache keys are being used.

Finally, I’m really excited about Cache Analytics because it shows what we have in store for Cloudflare Analytics more broadly. We know that you’ve asked for many features, like per-hostname analytics or the ability to see top URLs, for a long time, and we’re hard at work on bringing these to Zone Analytics.
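As a concrete footnote to the cache statuses above: the max-age origin directive is just an HTTP response header. A header like the following (values illustrative) lets browsers keep a resource for an hour while the edge may cache it for a day; Cloudflare prefers s-maxage over max-age for its edge TTL when both are present:

```http
Cache-Control: public, max-age=3600, s-maxage=86400
```

Raising these values at the origin is often the quickest way to turn "revalidated" and "expired" requests into cache hits.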
Stay tuned!

What is WordCamp?

Nexcess Blog -

If you’ve never heard of WordCamp before, you might think it involves playing lots of Scrabble in tents in the woods. But WordCamps actually have nothing to do with camping, and nothing specific to do with words or spelling. So, what is WordCamp? A WordCamp is (in non-pandemic times) an in-person gathering of WordPress fans in a specific geographic region, with the goal of learning more about WordPress.

Who is WordCamp For?

WordCamps are for anyone who wants to learn more about WordPress. You could be a blogger looking for the best ways to edit, schedule, and update your posts. Or you could be a plugin or theme developer seeking information on security, performance, and best practices. Or you could be interested in starting a business on WordPress, like someone who wants to start their own WooCommerce store.

In short: if you want to use WordPress, you can go to a WordCamp. There’s no secret handshake and no entry test. Just come to a WordCamp and mingle with fellow WordPress fans!

What Topics are Covered at WordCamps?

WordCamps truly cover anything and everything related to WordPress. If you want to browse some of the content yourself, you can check out where most WordCamps upload their videos.
But to give you just a taste, here are talks you might see at your local WordCamp:

Beginner Topics
- vs by Tim Covell
- Building Your Privacy Policy by Ronnie Burt

Blogging / Writing / Content Marketing
- Why You Should Own Your Own Voice by David Wolfpaw
- Creating a Content Calendar by April Wier

Business
- Growing Your Business While You’re Busy with Client Work by Nathan Ingram
- Steps for Dealing with Difficult Clients by Kathy Drewien
- Big Mistakes in Life by our very own Chris Lema

Development
- Find That Bug You Made Months Ago with Git Bisect by David Needham
- The WordPress Developer’s Guide to Caching by Micah Wood

Design
- The Ethics of Web Design by Morten Rand-Hendriksen
- Squash and Stretch and Good UX: Using Animation To Enhance User Experience by Michelle Schulp

WordCamps are Locally Organized

Every WordCamp is a little different and can have a different focus. That’s because they’re locally organized by volunteers, and each local community has a different focus. So your local WordCamp will focus on issues that matter in that community.

Meet Your Local Community

WordCamps also feature speakers from your local community. You won’t be learning from a plugin developer from New York City or San Francisco. You’ll be learning from someone who lives down the street. That way, it’s much easier to reach out to them, partner with them, or even hire them.

To share a personal story, I met Brian Richards at WordCamp Chicago in 2013. We kept in touch for years, shared advice back and forth, and in 2018, when the stars aligned, we launched a collaborative project called WooSesh, which we’re still running today.

How Much Does It Cost To Attend WordCamp?

If you’ve been to other tech conferences, you know they can cost hundreds or thousands of dollars. Tech conferences are great but incredibly expensive. Something that sets WordCamps apart from other events is that they’re organized by volunteers, and there’s no corporation trying to make a ton of money.
That means they’re incredibly cheap for attendees. WordCamp tickets are limited to $25 per day, so if you have a three-day WordCamp, the maximum it costs is $75. One of my first technology conferences was three days and cost $2,000! Clearly, you get incredible value from a WordCamp.

WordCamps in a Pandemic

Up until this point I’ve focused on what WordCamps are like in typical times, but we’re in the middle of a global pandemic, so WordCamps have become virtual. Obviously, an online conference feels different. You don’t have those hallway chats like you do at an in-person event. But they’re also more flexible. You can view the schedule and jump in for just a session or two if you like. And of course you don’t have to drive or reserve a hotel room. This means they’re a lot cheaper. In fact, virtual WordCamps are entirely free. That’s right: a big fat zero dollars.

Find Your Local WordCamp

Are you ready to try a WordCamp? You can find a schedule of WordCamps on the WordCamp Central website. You can also try WordCamp Denver, which is virtual (and free) June 26-27. The post What is WordCamp? appeared first on Nexcess Blog.

How to Choose The Best Web Hosting Package

InMotion Hosting Blog -

So you have determined you need web hosting and you may have even picked out who you want your provider to be, but now you have to pick a hosting package and you don’t know where to start.  When it comes to web hosting, choosing a provider is one of the most important decisions you can make, but once you’ve decided on a provider, figuring out which web hosting package is best for you can be quite the challenge.  Continue reading How to Choose The Best Web Hosting Package at InMotion Hosting Blog.

eukhost Sponsors Let’s Encrypt

My Host News -

LEEDS, England – Web host eukhost has announced its sponsorship of Let’s Encrypt, the certificate authority that provides free SSL and TLS certificates for websites. The sponsorship deal will last for an initial 12 months, with the company having the option to extend its support over the long term.

Let’s Encrypt, which is operated by the non-profit Internet Security Research Group (ISRG), aims to provide better internet security and privacy by giving free digital certificates to websites. SSL and TLS play a key role in online security, encrypting communications between websites and users’ browsers. Websites which have the certificates have addresses beginning with HTTPS, and their security is highlighted to internet users via a padlock icon in the browser.

eukhost fully supports the aims of Let’s Encrypt: as a web host, it recognises how its certificates protect both the website and the internet user from data theft. Easy to obtain, securely configured and automatically renewed, the free Let’s Encrypt certificates can be installed by eukhost customers from within their control panels.

Robert King, Director at eukhost, said, “Encryption is no longer an option but a necessity for websites. We have been encouraging our customers to get SSL certificates for many years. Through our sponsorship of Let’s Encrypt, we can contribute to this very important project and, hopefully, persuade even more of our customers to install their certificates and make their sites more secure.”

Let’s Encrypt has issued over one billion SSL and TLS certificates since its formation in 2014 and, today, serves over 200 million websites. During that time, global page loads using HTTPS have grown from 58% to 81%. eukhost joins a list of other sponsors, including Chrome, Facebook, IBM and Mozilla.

About eukhost

eukhost Ltd. is a web hosting solutions provider which operates from its registered office in Leeds and data centres in Wakefield, Nottingham, Maidenhead and York.
It has over 18 years’ presence in the web hosting industry and hosts over 250,000 domains for more than 35,000 customers. It was one of the first companies in Europe to offer fully automated web hosting solutions backed by 24/7 live chat support.

3W Infra Moves HQs to maincubes Amsterdam AMS01 Data Center, Deploys Private Suite

My Host News -

Amsterdam, the Netherlands – 3W Infra, a global IaaS hosting provider from the Netherlands, has moved its headquarters to the office space in the maincubes AMS01 colocation data center in Amsterdam Schiphol-Rijk, while deploying a private suite in this data center for its global network backbone and comprehensive server infrastructure. According to 3W Infra’s founder and CEO, Murat Bayhan, 3W Infra selected the maincubes AMS01 facility “as maincubes is the only true independent and European data center operator left in the Amsterdam area able to meet 3W Infra’s flexibility and scalability requirements.”

3W Infra previously had its HQ, as well as its server and network infrastructure, located in another facility in Amsterdam. During its tenancy, this facility was acquired and became part of a global data center conglomerate. 3W Infra delivers highly customer-specific IaaS hosting infrastructures, closely tailored to individual needs, meaning it expects the same high-flexibility approach from its data center operator. According to 3W Infra, the existing facility wasn’t able to fully respond to these dynamic demands.

As a privately owned European data center operator with energy-efficient and sustainably operated colocation facilities located in Frankfurt, Germany, and Amsterdam, the Netherlands, maincubes provides colocation services from its AMS01 facility for small up to large (multiple MW) deployments, from single racks and cages up to private suites and data-center-in-data-center setups. maincubes is able to provide 3W Infra with the flexibility and scalability options the company is looking for. The private data center suite offered to 3W Infra was commissioned at the end of May. At the same time, 3W Infra moved its headquarters to maincubes AMS01.

Network Backbone Upgrade

3W Infra owns and operates a high-volume global backbone with a redundant design.
The company’s dedicated dark fiber ring in the Amsterdam region is at the basis of this network, connecting 5 data center Points-of-Presence (PoPs) in Amsterdam. As a hyperconnected data center, the maincubes AMS01 facility will now be used as the main network PoP for 3W Infra’s global backbone and be home to its newly purchased network equipment, distributing 3W Infra’s new multiple 100G transit ports across all PoPs in the Amsterdam area. The migration of this network equipment will take place during the coming weeks.

“maincubes AMS01 in Amsterdam Schiphol-Rijk will now become our flagship data center, providing us with the highest colocation service flexibility we could wish for,” said Murat Bayhan, founder and CEO of 3W Infra. “The fact that maincubes is privately owned and probably the only true independent and European data center operator left in the Amsterdam area works to their advantage. That’s my opinion. They are willing to adapt to the dynamic and unpredictable client needs to which we must respond as an IaaS hosting company. They are perfectly able to meet 3W Infra’s flexibility and scalability requirements, also quickly, allowing us to rapidly deploy highly customized IaaS hosting setups. maincubes acts as a strategic business partner for us, so to say.”

New HQs, OCP, Immersion Cooling

The maincubes AMS01 data center is also home to the European Open Compute Project (OCP) Experience Center, operated by data center vendor Rittal and official OCP solutions provider Circle B. It is available as a demo center, and it can also be used to test new ‘OCP Accepted’ and ‘OCP Inspired’ data center environments as well as telecom solutions. In addition, maincubes offers ‘immersion cooling’ (liquid cooling) solutions in dedicated immersion cooling colocation suites in its Amsterdam AMS01 data center, aimed at HPC, AI and machine learning workload deployments. “As an IaaS hosting provider, we definitely like these extras,” added Mr. Bayhan.
“It’s not just colocation maincubes is providing us, and it’s quite an innovative and absolutely unique approach, I have to say. I’ve never seen a colocation provider offering so many extras under one roof, with different sustainable and state-of-the-art technology solutions to select and use.”

“We’re providing Murat and his team with the ability to scale their infrastructure whenever needed,” said Joris te Lintelo, Vice President Sales at maincubes. “We’re also empowering them with a highly flexible approach. And if they would like to, they can test these new and innovative technologies or offer them to their clients. We’ll make sure they will feel right at home here. On top of that, the office and storage space we offer 3W Infra at maincubes AMS01 allow their engineers to quickly respond to provisioning requests or any requests that may occur.”

About maincubes

maincubes is part of German investor and real estate developer Art-Invest, which is part of the German construction conglomerate Zech Group. maincubes has data centers in Frankfurt and Amsterdam, and a network of high-availability data centers of various sizes and types in Europe, enabling it to provide colocation services and secure ecosystems for the digital future of customers across various industries. Via the secureexchange® digital platform, customers and partners of maincubes can use IT services worldwide, such as IoT, (cyber) security and connectivity as well as cloud services, to expand their business opportunities. maincubes offers secure, efficient and user-friendly services, and a secure home for your data. To learn more about maincubes, visit

About 3W Infra

Founded in 2014 by Internet and hosting industry veterans, 3W Infra is a global Infrastructure-as-a-Service (IaaS) hosting provider with strong engineering knowledge and skills, headquartered in Amsterdam, the Netherlands. Its global network backbone now exceeds 320 Gigabit/sec (Gbps) of available bandwidth.
The company’s enterprise-grade, ISO 27001 and PCI DSS certified hosting solutions are tailored to the specific needs of each customer. 3W Infra’s infrastructural solutions are engineered for scalability and cost-efficiency, with cloud-enabling services including colocation, dedicated servers, IP connectivity, and high-level customer support. These solutions come with 3W Infra’s Remote Hands, including relocation engineering services, at the world’s main Internet hubs in Amsterdam, Frankfurt and London. 3W Infra’s customer base includes some of the largest Internet, gaming, broadcasting and cloud services companies in Europe and beyond. To learn more about 3W Infra, visit

ASEOHosting Reports Remote Work Will Impact Digital Marketing

My Host News -

HUDSON, FL – Last month, Daniel Page, Director of Business Development at search engine optimization focused hosting provider ASEOHosting, said COVID-19 has impacted digital marketing because consumer habits have shifted significantly, perhaps irrevocably.

“It’s inevitable that, when the dust settles, we’re going to see more people working remotely, and more organizations supporting a digital workplace,” Page said. “There is also little doubt in my mind that consumer attitudes towards online shopping and delivery have also shifted. As for how this impacts marketing, I see a few broad trends taking form.”

“First, tolerance for manipulative marketing that plays on negative emotions like FOMO or anger will reach an all-time low,” he said. “Most of us are going through one of the most stressful, difficult experiences of our lives. I strongly advise brands to avoid any attempts to generate traffic and leads through the creation of further stress.”

“Second, as more people work remotely, we may see a shift in screen time,” Page said. “On the one hand, we may see more people browsing the web and using social media during working hours who then disconnect after the workday ends. On the other, we may see people spending significantly more time online in general, meaning more opportunities for brands to grab their attention.”

“Finally, brands that offer online shopping and order fulfillment will likely start to see greater success than those that don’t,” said Page. “Brick-and-mortar retail has already suffered significantly during COVID-19. Businesses that fail to pivot and offer digital options may suffer further hardships.”

“Note, however, that these are simply predictions, based on my understanding of marketing and search,” he said. “It’s ultimately impossible to know with certainty what the future holds.
The best any of us can do is be prepared and stay informed.”

According to marketing publication Marketing Dive, which compiled research from several different sources, the following changes are immediately apparent:

- Consumers are likely to pay more for local products, trusted brands, and ethical brands, respectively.
- Until the pandemic ends, there are four broad consumer segments: those who are spending less across the board due to strained finances, those who are largely unaffected, those who are pessimistically saving and stockpiling, and those whose spending has increased.
- Consumer attitudes towards privacy have evolved, and people are becoming more willing to share personal data in the interest of defeating the pandemic. There is a possibility this may translate into marketing.

About ASEOHosting

Launched in 2002, ASEOHosting is a leader in providing SEO Hosting, including Shared SEO Hosting, Dedicated SEO Hosting, US Dedicated SEO Servers, and EU Dedicated SEO Servers. Based in Orlando, FL, and Detroit, MI, ASEOHosting has established one of the web’s premier solutions for reseller web hosting, multiple IP hosting, dedicated servers, and VPS hosting. For more information, visit

Hybrid Cloud Provider Partners with ScienceLogic to Deliver High Security Multi-Cloud Monitoring and Management for Federal Agencies

My Host News -

HILLSBORO, OR – Opus Interactive, a leading provider of complex hybrid cloud hosting services, announces a partnership with ScienceLogic, a leading provider of AI-driven monitoring solutions for multi-cloud management, to deliver highly secure hybrid and multi-cloud monitoring and management. The joint solution offers federal agencies the ability to acquire the DISA-approved ScienceLogic SL1 platform hosted inside of the OpusGov FedRAMP Moderate Ready environment, which resides in FISMA High rated data centers.

Similar to the commercial sector, hybrid cloud is the new norm for federal agencies, who are moving forward with modernization efforts while mobilizing staff and resources in the COVID response. Survey results published in MeriTalk’s Juggling the Clouds: What Are Agencies Learning report show that 81% of Federal IT decision makers say their agency uses multiple cloud platforms (private cloud, 77%; public cloud, 57%; and edge, 20%). Reasons listed include increased performance, reliability, compliance/security, and flexibility at reduced/predictable cost. Challenges they were anticipating pre-COVID included security, governance, interoperability, regulatory compliance, and budget overruns. Post-COVID challenges facing federal agencies include the 80% of agency staff and contractors now working remotely, as well as the added security and compliance needs of telework, healthcare, and communications.

“The need for security and compliance for federal agency solutions has never been more important,” says Shannon Hulbert, Opus Interactive CEO.
“We’ve spent over 24 years building resilient solutions in the commercial sector and are excited to partner with ScienceLogic to offer that to federal agencies.”

The joint Opus and ScienceLogic offering delivers real-time visibility and control across complex IT environments, providing reliability and high security by integrating the DISA-approved ScienceLogic SL1 platform with FedRAMP Moderate Ready infrastructure housed inside of FISMA High-rated facilities, backed up in redundant geographies on separate energy grids.

“As the first end-to-end IT infrastructure monitoring company ever to conform to the rigorous security and interoperability standards of DoD UC APL, combined with our close partnership with Opus to meet the standards of FedRAMP, ScienceLogic is fully committed to securing agencies’ digital transformation journey,” said Dave Link, CEO of ScienceLogic. “Whether improving the digital experience or minimizing the costs and risks of adopting the cloud, cross-agency teams need real-time insight into mission-critical services, and we are excited to fuel these initiatives.”

FedRAMP is a government-wide program that provides a standardized approach to security assessment, authorization and continuous monitoring for cloud products and services. Through this framework, FedRAMP enables efficiencies in cost and time by enabling rapid procurement of information systems and services, streamlining assessment and ensuring consistent application of information security standards across government organizations. For more information on how Opus and ScienceLogic enable high-security, high-compliance agency solutions to see, contextualize, and act on IT operational data in real time, please visit:

About Opus Interactive

Founded in 1996, Opus Interactive has earned a reputation for custom IT solutions that fit the unique security, scalability, cost, and future growth needs of its customers.
An accredited member of the International Managed Services Provider Alliance, the company operates from Tier III+ data centers located in Hillsboro, Portland, Dallas, and Northern Virginia. Through close partnerships with industry leaders and a commitment to customer satisfaction, Opus delivers custom solutions for Cloud Hosting & IaaS, Colocation, DRaaS & Backup, Object Storage, VDI, and Public Cloud Monitoring & Management. Opus Interactive is a woman- and minority-owned enterprise that has worked closely with VMware and HPE partnership programs since 2005. With past performance that includes more than 20 years of proven results and current compliance with PCI-DSS, HIPAA, FedRAMP Moderate and SSAE 18 SOC 2, Opus helps customers reduce cost and optimize resources through efficient operations. For more information please visit

About ScienceLogic

ScienceLogic is a leader in IT Operations Management, providing modern IT operations with actionable insights to predict and resolve problems faster in a digital, ephemeral world. Its solution sees everything across multi-cloud and distributed architectures, contextualizes data through relationship mapping, and acts on this insight through integration and automation. Trusted by thousands of organizations across the globe, ScienceLogic’s technology was designed for the rigorous security requirements of the United States Department of Defense, proven for scale by the world’s largest service providers, and optimized for the needs of large enterprises.

1623 Farnam Announces Expansion of Omaha Edge Data Center

My Host News -

OMAHA, NE – 1623 Farnam, a regional leader in network-neutral edge interconnection and data center services, today announces the details of its $40 million edge data center expansion. The expansion includes significant upgrades to the facility’s electrical power infrastructure and increases colocation capacity by converting floors six through nine into usable data center space. This expansion further supports the increase in demand for interconnected edge data centers, and comes after 1623 Farnam’s initial $10 million expansion last year to build out the facility’s fifth-floor space, which it is currently filling. The new construction will upgrade the facility’s interconnection capabilities by adding additional cabling, vaults and new redundant electrical plans that will support up to 8MW of power to the facility.

“Our prime location in Omaha is at the nexus of the country’s east-west and north-south cable routes, making it important that we enable sufficient interconnection capabilities for our customers and partners,” says Todd Cushing, President of 1623 Farnam. “We are also increasing our capacity with build-outs to the sixth through ninth floors of the building to better accommodate existing and new customers.”

“There has been increasing demand in the data center space, especially now in the current global climate,” says Bill Severn, Executive Vice President of 1623 Farnam. “This build is being carried out largely with our customers and partners in mind. Our new vaults have new conduit access to enable rapid deployment and provide ease of access for establishing new fiber connections. We understand that the quicker and easier we can make it for our customers to get into the building, the better.”

The details of the upgraded facility are as follows:

- 75,000 gross sq. ft. available
- 9 floors total (plus penthouse, lower level and sub-basement)
- 8 data center floors (2-9)
- 6,400 gross sq. ft. per floor
- 5,400 sq. ft. total “white” data center space per floor
- 3,100 sq. ft. for cabinets per floor
- 154-cabinet capacity per floor
- Both chilled water and air cooling options
- N+1 Concurrently Maintainable
- 8MW of power on a uniquely redundant power grid
- SOC 2 Type 2, SOC 2 Type 1, ISO 27001 and PCI certifications
- Cloud on-ramps for Telia Carrier, Megaport, Google Cloud Connect, AWS and Microsoft Azure

The global edge data center market is currently booming, with a predicted YoY growth rate of 8.93% by the end of 2023, as noted by MarketWatch. This makes the expansion of 1623 Farnam’s edge interconnection facility crucial to serving this rapidly increasing demand, especially at a time when global internet usage has increased by as much as 70% due to the global COVID-19 pandemic. 1623 Farnam’s prime location at the center of the United States, in proximity to the largest Google Cloud node in North America and other significant hyperscale builds in the Omaha metro area, makes it an ideal location for increased network and cloud capacity. 1623 Farnam is also host to the Omaha IX, offering robust peering capabilities. The expansion will be deployed in stages throughout the third and fourth quarters of 2020. To learn more, please contact 1623 Farnam’s VP of Sales and Marketing, Linn Gowen. Follow 1623 Farnam on Twitter and LinkedIn, and visit

About 1623 Farnam

1623 Farnam is the leading network interconnect point providing secure direct edge connectivity to fiber and wireless network providers, major cloud and CDN properties, content providers and Fortune 500 enterprises. We support mission-critical infrastructure and applications with the highest levels of availability, enabling maximum levels of application performance. As the regional leader in network-neutral edge interconnection, 1623 Farnam offers access to 50 network companies with local, regional, national and international reach.
Located in the heart of the Midwest, 1623 Farnam serves over five million eyeballs and multiple Fortune 500 companies in our region. Nebraska is the 15th fastest-growing tech state and the 20th fastest-growing state by population in the nation. We pride ourselves on consistently earning high customer satisfaction scores, giving our customers peace of mind. For more information, please visit

How to Set Up a Content Calendar to Grow Your Business

HostGator Blog -

The post How to Set Up a Content Calendar to Grow Your Business appeared first on HostGator Blog.

Small business marketing requires a lot of content—for your blog, your email campaigns and your social media accounts. How can you stay on top of all of it, avoid repeating yourself and stick to a schedule—without hiring an assistant or devoting hours every week to content management? Build and use a content calendar for your business.

Content calendars, also called editorial calendars, aren’t just for full-time bloggers and online magazines. An organized schedule for content marketing helps all kinds of businesses, from mom-and-pop local service providers to multinational conglomerates—and it can help you, too. Best of all, you can find templates or build your own content calendar for next to nothing, or actually nothing. Here’s what a content calendar can do for your business marketing, how to set one up, what to include in your content calendar and where to find free content calendar templates.

What kinds of content can go on your content calendar?

You can create a separate calendar for content in each marketing channel, but your content calendar will be most helpful if you include all the content channels you use, such as:

- Blog posts
- Social media posts
- Email marketing campaigns

Within those channels, variety can keep your audience engaged—as long as you’re offering content that’s useful or entertaining (ideally, both).
Here are a few ideas to kick-start your creativity:

- Blog posts on evergreen topics (core issues that your audience will probably always face)
- Blog posts on timely topics
- Case studies featuring your customers
- Contests and giveaways
- Email series that go in-depth on topics your audience cares about
- FAQs
- Infographics, which are easier to make than you might expect
- Interviews with experts
- New product writeups, photos and videos
- Podcasts
- Polls of your blog readers or social media followers
- Promotions for upcoming launches and releases
- Quizzes
- Tutorials
- User-generated content, like customer unboxing videos
- Webinar videos

This isn’t a complete list of all possible content types. And not every kind of content will appeal to your audience. Start by picking a few types of content that you think your audience will like, and then branch out as you get a sense of what they prefer.

The benefits of using a content calendar for your business

Content calendars take time to set up, but they can save you more time by helping you keep track of what you’re planning to share with your followers and email to your list. With a calendar, you know what you’re posting and when. No more scrolling back through your blog or your feed to see what you’ve already done.

Calendars also help your marketing stay on track. When you’re a solopreneur, it can be easy to fall behind on content marketing when you’re busy or when your idea well runs dry. A calendar can help you focus—and it can help you avoid last-minute scrambles to write posts.

A calendar can also show you gaps in your marketing that you can fill in. For example, if your November calendar doesn’t have any holiday shopping content, or if your August calendar lacks back-to-school content, you can fix that.

How to set up your small business content calendar

The most common content calendar formats are an actual calendar layout and a spreadsheet. Some project planning programs, like Trello and Evernote, have their own content calendar templates.
You can also create content calendars yourself using tools like Google Sheets and Word templates. The format you choose will depend on which layout you prefer, the tools you already have access to and your budget.

Whatever format you choose, your calendar should include some basic information for each piece of content you plan to share:

- Content title or topic: You may blog often on the same topic, so be specific. For example, instead of “Flowers for Mom,” a florist might title a post “2020 Mother’s Day Flowers Gift Guide.”
- Content channel: Is this a blog post, social media post, email or something else?
- Content type: Is this a quiz? A video? A newsletter? Describe the kind of content here.
- Publish date: When do you plan to share this content?
- Author: Who’s writing it?
- Audience/customer persona: Who are you creating this content for? For example, first-time buyers? Repeat customers?
- Keywords: How would a prospective customer search for this topic?
- Content purpose: Is this designed to introduce people to a new service, get people to call your office, or encourage people to “buy now”? Every piece of content you create should have a goal.

Resources for free content calendar templates and tools

The best content calendar is one you customize to meet the specific needs of your business. However, you probably won’t know the details of what you need in a content calendar until you’ve been working with one for a while. Free content calendar templates and tools can help you get started, save you time and improve your marketing now, while you get a better idea of how your ideal content calendar should be structured later on. Here are a few options to consider.

Calendars for WordPress Users

If your small business blogs on WordPress, you can install the free Editorial Calendar plugin to plan, manage and schedule your blog posts and drafts. You can adjust the plugin settings to display up to eight weeks at a time. Moving a post to a different day is a simple drag-and-drop.
Editorial Calendar is a great solution for WordPress-powered business blogging, but it doesn’t cover email and social media marketing.

Content Calendars for Google Users

If you’re already a Google Calendar user, you can create a content calendar, configure it to be private and share access with employees or contractors who will help you with content. If you go this route, make sure you include all the relevant information in each content “event” you create. And remember that content doesn’t usually happen in a day. You can plan ahead by creating events for your outlines, drafts and final versions.

Content Calendars for Spreadsheet Users

Are you a spreadsheet person? (We know you’re out there!) You can build your own calendar or use a template. Smartsheet has a great free content calendar template you can download in their format, as an Excel file or for Google Sheets. Save your own copy and you’ve got a file with a tab for each month and columns for content type, author, category, target audience buying stage, goal and publish date. (And because you can edit your file, you can add other columns, like keywords and images you want to include with each piece of content.)

One other item for your content calendar…

At least once a month, review your site traffic, conversions, shares and other analytics to see which pieces of content are performing well, which had a spike and then dropped off, and which didn’t get the traction you’d hoped for. Regular reviews can help you grow your content marketing program and your business by helping you focus on the content that works best with your customers.

Find the post on the HostGator Blog
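If you take the build-your-own spreadsheet route described above, a few lines of Python can stamp out a starter calendar with the columns discussed earlier. This is only an illustrative sketch: the file name, column order, and sample row are invented, not part of any template mentioned in the post.

```python
import csv

# Columns drawn from the checklist earlier in the post;
# the sample row is invented for illustration only.
COLUMNS = ["Title", "Channel", "Type", "Publish date", "Author",
           "Audience", "Keywords", "Purpose"]

rows = [
    ["2020 Mother's Day Flowers Gift Guide", "Blog post", "Gift guide",
     "2020-04-27", "Dana", "First-time buyers",
     "mother's day flowers", "Drive pre-orders"],
]

# Write a starter calendar you can open in Google Sheets or Excel.
with open("content-calendar.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)   # header row
    writer.writerows(rows)     # one line per planned piece of content
```

From there you can add a row per planned post and extra columns (keywords, images, status) as your process matures.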

Shark Tank’s Robert Herjavec Inspires at WP Engine Summit/2020

WP Engine -

Here’s a little-known fact: before Shark Tank was the massive TV hit we all know, it was a Canadian television program called Dragon’s Den, and it included only two members of the cast who are on the show today—Kevin O’Leary and Robert Herjavec. The latter—who in addition to his position on Shark Tank is the… The post Shark Tank’s Robert Herjavec Inspires at WP Engine Summit/2020 appeared first on WP Engine.

8 Ways to Use the LinkedIn Mobile App for Business

Social Media Examiner -

Do you use the LinkedIn app? Wondering how to get more done for your business using the app? In this article, you’ll discover eight ways to maximize productivity with the LinkedIn mobile app. #1: Get More Characters in Your LinkedIn Headline via the Mobile App Whenever you search for connections or view content on LinkedIn, […] The post 8 Ways to Use the LinkedIn Mobile App for Business appeared first on Social Media Examiner | Social Media Marketing.

Amazon EKS Now Supports EC2 Inf1 Instances

Amazon Web Services Blog -

Amazon Elastic Kubernetes Service (EKS) has quickly become a leading choice for machine learning workloads. It combines the developer agility and scalability of Kubernetes with the wide selection of Amazon Elastic Compute Cloud (EC2) instance types available on AWS, such as the C5, P3, and G4 families. As models become more sophisticated, hardware acceleration is increasingly required to deliver fast predictions at high throughput. Today, we’re very happy to announce that AWS customers can now use Amazon EC2 Inf1 instances on Amazon Elastic Kubernetes Service, for high performance and the lowest prediction cost in the cloud.

A primer on EC2 Inf1 instances

Inf1 instances were launched at AWS re:Invent 2019. They are powered by AWS Inferentia, a custom chip built from the ground up by AWS to accelerate machine learning inference workloads. Inf1 instances are available in multiple sizes, with 1, 4, or 16 AWS Inferentia chips, up to 100 Gbps of network bandwidth, and up to 19 Gbps of EBS bandwidth.

An AWS Inferentia chip contains four NeuronCores. Each one implements a high-performance systolic-array matrix-multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, saving I/O time in the process. When several AWS Inferentia chips are available on an Inf1 instance, you can partition a model across them and store it entirely in cache memory. Alternatively, to serve multi-model predictions from a single Inf1 instance, you can partition the NeuronCores of an AWS Inferentia chip across several models.

Compiling Models for EC2 Inf1 Instances

To run machine learning models on Inf1 instances, you need to compile them to a hardware-optimized representation using the AWS Neuron SDK. All tools are readily available on the AWS Deep Learning AMI, and you can also install them on your own instances.
You’ll find instructions in the Deep Learning AMI documentation, as well as tutorials for TensorFlow, PyTorch, and Apache MXNet in the AWS Neuron SDK repository. In the demo below, I will show you how to deploy a Neuron-optimized model on an EKS cluster of Inf1 instances, and how to serve predictions with TensorFlow Serving. The model in question is BERT, a state-of-the-art model for natural language processing tasks. This is a huge model with hundreds of millions of parameters, making it a great candidate for hardware acceleration.

Building an EKS Cluster of EC2 Inf1 Instances

First of all, let’s build a cluster with two inf1.2xlarge instances. I can easily do this with eksctl, the command-line tool to provision and manage EKS clusters. You can find installation instructions in the EKS documentation.

Here is the configuration file for my cluster. Eksctl detects that I’m launching a node group with an Inf1 instance type, and will start my worker nodes using the EKS-optimized Accelerated AMI.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster-inf1
  region: us-west-2
nodeGroups:
  - name: ng1-public
    instanceType: inf1.2xlarge
    minSize: 0
    maxSize: 3
    desiredCapacity: 2
    ssh:
      allow: true
```

Then, I use eksctl to create the cluster. This process will take approximately 10 minutes.

```
$ eksctl create cluster -f inf1-cluster.yaml
```

Eksctl automatically installs the Neuron device plugin in your cluster. This plugin advertises Neuron devices to the Kubernetes scheduler, which can be requested by containers in a deployment spec. I can check with kubectl that the device plug-in container is running fine on both Inf1 instances.
```
$ kubectl get pods -n kube-system
NAME                                   READY  STATUS   RESTARTS  AGE
aws-node-tl5xv                         1/1    Running  0         14h
aws-node-wk6qm                         1/1    Running  0         14h
coredns-86d5cbb4bd-4fxrh               1/1    Running  0         14h
coredns-86d5cbb4bd-sts7g               1/1    Running  0         14h
kube-proxy-7px8d                       1/1    Running  0         14h
kube-proxy-zqvtc                       1/1    Running  0         14h
neuron-device-plugin-daemonset-888j4   1/1    Running  0         14h
neuron-device-plugin-daemonset-tq9kc   1/1    Running  0         14h
```

Next, I define AWS credentials in a Kubernetes secret. They will allow me to grab my BERT model stored in S3. Please note that both keys need to be base64-encoded.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-s3-secret
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <base64-encoded value>
  AWS_SECRET_ACCESS_KEY: <base64-encoded value>
```

Finally, I store these credentials on the cluster.

```
$ kubectl apply -f secret.yaml
```

The cluster is correctly set up. Now, let’s build an application container storing a Neuron-enabled version of TensorFlow Serving.

Building an Application Container for TensorFlow Serving

The Dockerfile is very simple. We start from an Amazon Linux 2 base image. Then, we install the AWS CLI, and the TensorFlow Serving package available in the Neuron repository.

```dockerfile
FROM amazonlinux:2
RUN yum install -y awscli
RUN echo $'[neuron] \n\
name=Neuron YUM Repository \n\
baseurl= \n\
enabled=1' > /etc/yum.repos.d/neuron.repo
RUN rpm --import
RUN yum install -y tensorflow-model-server-neuron
```

I build the image, create an Amazon Elastic Container Registry repository, and push the image to it.

```
$ docker build . -f Dockerfile -t tensorflow-model-server-neuron
$ docker tag IMAGE_NAME
$ aws ecr create-repository --repository-name inf1-demo
$ docker push
```

Our application container is ready. Now, let’s define a Kubernetes service that will use this container to serve BERT predictions. I’m using a model that has already been compiled with the Neuron SDK. You can compile your own using the instructions available in the Neuron SDK repository.
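As noted above, the two values in the secret’s data section must be base64-encoded. A quick way to produce them, sketched in Python — the key strings here are placeholders, not real credentials:

```python
import base64

# Placeholder values -- never put real credentials in source code.
access_key_id = "AKIAEXAMPLE"
secret_access_key = "EXAMPLESECRETKEY"

def b64(value: str) -> str:
    """Return the base64 text a Kubernetes Secret's data field expects."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

print("AWS_ACCESS_KEY_ID:", b64(access_key_id))
print("AWS_SECRET_ACCESS_KEY:", b64(secret_access_key))
```

Alternatively, `kubectl create secret generic aws-s3-secret --from-literal=AWS_ACCESS_KEY_ID=… --from-literal=AWS_SECRET_ACCESS_KEY=…` performs the encoding for you.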
Deploying BERT as a Kubernetes Service

The deployment manages two containers: the Neuron runtime container and my application container. The Neuron runtime runs as a sidecar container and is used to interact with the AWS Inferentia chips. At startup, the application container configures the AWS CLI with the appropriate security credentials. Then, it fetches the BERT model from S3. Finally, it launches TensorFlow Serving, loading the BERT model and waiting for prediction requests. For this purpose, the HTTP and grpc ports are open. Here is the full manifest.

```yaml
kind: Service
apiVersion: v1
metadata:
  name: eks-neuron-test
  labels:
    app: eks-neuron-test
spec:
  ports:
    - name: http-tf-serving
      port: 8500
      targetPort: 8500
    - name: grpc-tf-serving
      port: 9000
      targetPort: 9000
  selector:
    app: eks-neuron-test
    role: master
  type: ClusterIP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: eks-neuron-test
  labels:
    app: eks-neuron-test
    role: master
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eks-neuron-test
      role: master
  template:
    metadata:
      labels:
        app: eks-neuron-test
        role: master
    spec:
      volumes:
        - name: sock
          emptyDir: {}
      containers:
        - name: eks-neuron-test
          image:
          command: ["/bin/sh", "-c"]
          args:
            - "mkdir ~/.aws/ && \
               echo '[eks-test-profile]' > ~/.aws/credentials && \
               echo AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID >> ~/.aws/credentials && \
               echo AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY >> ~/.aws/credentials; \
               /usr/bin/aws --profile eks-test-profile s3 sync s3://jsimon-inf1-demo/bert /tmp/bert && \
               /usr/local/bin/tensorflow_model_server_neuron --port=9000 --rest_api_port=8500 --model_name=bert_mrpc_hc_gelus_b4_l24_0926_02 --model_base_path=/tmp/bert/"
          ports:
            - containerPort: 8500
            - containerPort: 9000
          imagePullPolicy: Always
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  key: AWS_ACCESS_KEY_ID
                  name: aws-s3-secret
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: AWS_SECRET_ACCESS_KEY
                  name: aws-s3-secret
            - name: NEURON_RTD_ADDRESS
              value: unix:/sock/neuron.sock
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: "1"
              memory: 1Gi
          volumeMounts:
            - name: sock
              mountPath: /sock
        - name: neuron-rtd
          image:
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
                - IPC_LOCK
          volumeMounts:
            - name: sock
              mountPath: /sock
          resources:
            limits:
              hugepages-2Mi: 256Mi
              aws.amazon.com/neuron: 1
            requests:
              memory: 1024Mi
```

I use kubectl to create the service.

```
$ kubectl create -f bert_service.yml
```

A few seconds later, the pods are up and running.

```
$ kubectl get pods
NAME                               READY  STATUS   RESTARTS  AGE
eks-neuron-test-5d59b55986-7kdml   2/2    Running  0         14h
eks-neuron-test-5d59b55986-gljlq   2/2    Running  0         14h
```

Finally, I redirect service port 9000 to local port 9000, to let my prediction client connect locally.

```
$ kubectl port-forward svc/eks-neuron-test 9000:9000 &
```

Now, everything is ready for prediction, so let’s invoke the model.

Predicting with BERT on EKS and Inf1

The inner workings of BERT are beyond the scope of this post. This particular model expects a sequence of 128 tokens, encoding the words of two sentences we’d like to compare for semantic equivalence. Here, I’m only interested in measuring prediction latency, so dummy data is fine. I build 100 prediction requests, each storing a sequence of 128 zeros. I send them to the TensorFlow Serving endpoint via grpc, and I compute the average prediction time.
```python
import time

import numpy as np
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

if __name__ == '__main__':
    channel = grpc.insecure_channel('localhost:9000')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'bert_mrpc_hc_gelus_b4_l24_0926_02'
    i = np.zeros([1, 128], dtype=np.int32)
    request.inputs['input_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))
    request.inputs['input_mask'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))
    request.inputs['segment_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))

    latencies = []
    for i in range(100):
        start = time.time()
        result = stub.Predict(request)
        latencies.append(time.time() - start)
        print("Inference successful: {}".format(i))
    print("Ran {} inferences successfully. Latency average = {}".format(len(latencies), np.average(latencies)))
```

On average, prediction took 5.92ms. As far as BERT goes, this is pretty good!

```
Ran 100 inferences successfully. Latency average = 0.05920819044113159
```

In real life, we would certainly batch prediction requests in order to increase throughput. If needed, we could also scale to larger Inf1 instances supporting several Inferentia chips, and deliver even more prediction performance at low cost.

Getting Started

Kubernetes users can deploy Amazon Elastic Compute Cloud (EC2) Inf1 instances on Amazon Elastic Kubernetes Service today in the US East (N. Virginia) and US West (Oregon) regions. As Inf1 deployment progresses, you’ll be able to use them with Amazon Elastic Kubernetes Service in more regions. Give this a try, and please send us feedback either through your usual AWS Support contacts, on the AWS Forum for Amazon Elastic Kubernetes Service, or on the container roadmap on Github.

- Julien
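A footnote on the batching suggestion in the article above: batching simply stacks several token sequences into one input tensor, so each round trip to the server amortizes its overhead across many predictions. A minimal sketch with the same dummy data as the latency test; the batch size of 8 is an arbitrary choice for illustration, and the commented call mirrors the tensor names used by the grpc client above:

```python
import numpy as np

seq_len = 128
batch_size = 8

# One request per sequence (what the latency test above does)...
single = np.zeros([1, seq_len], dtype=np.int32)

# ...versus one request carrying eight sequences at once.
batch = np.zeros([batch_size, seq_len], dtype=np.int32)

# The batched array would feed the same inputs as before, e.g.:
#   request.inputs['input_ids'].CopyFrom(
#       tf.contrib.util.make_tensor_proto(batch, shape=batch.shape))
print(single.shape, batch.shape)
```

The server then returns one result per row, so throughput scales with the batch size while the per-request overhead is paid only once.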

