Corporate Blogs

Women in Technology: Lindsey Miller

Liquid Web Official Blog -

Liquid Web's Partner Marketing Manager on building community, nurturing relationships, and putting her time to good use. "I know that I make a difference in people's businesses," says Miller, "and that motivates me to come to work every day and do a great job."

Lindsey Miller is no stranger to enterprise. "I started my first business in kindergarten!" Miller made seasonal crafts (turkeys drawn from the outline of her hand, Christmas trees) and sold them to family members over holiday meals. "Yes," she says, "I sold my grandparents my drawings instead of giving them away!" That Miller was so resourceful at such a young age is unsurprising; she grew up on 200 acres in Oologah, Oklahoma, the birthplace of Oklahoma's Favorite Son, Will Rogers. "My conversations around cattle can surprise a lot of people," she says.

Her tech journey began almost ten years ago when she was working as a political fundraiser and met her now-husband, Cory. "He had a WordPress plugin company and he got me started by blogging about politics." Then, in 2011, Miller started a non-profit called The Div, teaching kids to code. Her path in tech was solidified after that, working with WordPress and empowering businesses around the use of the platform. Though she now lives a few hours away from Oologah in Oklahoma City, Lindsey Miller puts her ingenuity to use as Liquid Web's Partner Marketing Manager, investing in community and relationships. "I have been involved in the WordPress community for a long time," says Miller. "I truly care about WordPress and those who build their businesses around it." She takes pride that now, in her role at Liquid Web, she gets to help those who rely on WordPress to grow.

Miller loves working in tech for the innovation that it entails. "I was a part of the team that brought the very first WooCommerce Hosting product to market. There was so much creativity! At Liquid Web, we're encouraged to think outside of the box. That's very exciting to me," she says. But for Miller, success is about more than inventiveness. It's about people. She loves exploring ways she can help those who turn to Liquid Web as they build their business. Miller is currently creating education opportunities like webinars, documents, and blueprints which businesses can use to reach their goals and increase revenue. Miller wants to build a community around people who create on the web and take Liquid Web beyond just being a hosting company. "If I can help our partners and their businesses," she says, "then I feel that I will have accomplished something great. I have a strong perspective on how to build a relationship and create a community. It starts with caring about people over profit."

She recognizes the power of community and strong relationships in her own life, as well. "Much of my success, I attribute to people who believed in me." She credits the many mentors, leaders, and colleagues who inspired and taught her along the way. "I am lucky to have had many wonderful advisors put me under their wings over the years and help me continue to grow and learn," she says. Among those who have impacted her profoundly in both her personal and professional life are her husband, Cory ("He is the reason I learned as much as I have to get where I am in my career.") and the leadership team at Liquid Web.
"As my first real corporate job, I did not expect to get much attention from anyone other than my team and supervisors," says Miller, "but I regularly connect with our leadership team, including tremendous women like Terry Trout and Carrie Wheeler, among all of the talented colleagues that I learn from every day. I have grown tenfold since starting at Liquid Web two-and-a-half years ago." Chris Lema, Liquid Web's VP of Products and Innovation, has also been instrumental in Miller's growth, challenging her to continue expanding her skill set. "If it wasn't for Chris recognizing the skills learned in politics and developed during my time with Cory, I wouldn't be here."

It's been an important two-and-a-half years for Miller, who takes great care about how she spends her time and who she spends it with. "Time is so precious," she says. "I don't want to waste it." This outlook will come as no surprise to her colleagues. Having spent her formative work years in politics, Miller learned to work quickly and diligently, always under a deadline. She jokes about the impact those experiences have had on her work style. "When asked when I need something, I tease my co-workers that my answer is always 'as soon as possible'. Maybe my next career lesson will be learning how to wait."

Miller encourages young women considering a career in tech to focus on building relationships. "You will get further in life and work by growing with others instead of in spite of others or on the backs of others. Create relationships. Champion other people, as well as yourself. Working together makes everything better, personally and professionally." A career in tech, she says, is also an exciting way to see the palpable outcome of hard work. "A great thing about working in tech is that there are not arbitrary results. What you do and your work product is there for everyone to see. For young women, it is a very tangible way to work towards something that takes intelligence and creativity." And, Miller says, tech offers incredible space for growth. "It is a vast industry. The opportunities are endless."

The post Women in Technology: Lindsey Miller appeared first on Liquid Web.

Understanding the Architecture and Setup of VPS Hosting

Reseller Club Blog -

From our previous articles on what VPS Hosting is, the types of VPS Hosting, and how to install or enable certain plugins, we've covered a lot of ground. However, there are two things we haven't covered: the architecture and the setup. For anything to be built or to function properly, there needs to be a process or an architecture in place, and that is true of your web hosting as well. The aim of this article is to help you understand how VPS Hosting works and how to set up VPS Hosting on your hosting package.

What is VPS Hosting?

VPS (Virtual Private Server) Hosting is a type of hosting where several websites are hosted on a single physical server, yet each user gets the experience of an isolated server. Each individual virtual server gets its own resources like CPU, RAM, and OS, and users have complete root access. VPS Hosting is therefore often described as a combination of Shared and Dedicated Hosting.

How VPS Hosting Works

To segregate a physical server into multiple virtual servers, your hosting provider uses virtualization software known as a hypervisor. The hypervisor acts as a virtualization layer: it abstracts the resources of the physical server and lets your customers access a virtual replica of the original server. This server is known as a Virtual Machine (VM). Each VM has its own dedicated resources like CPU, RAM, OS, and individual applications.

In this virtual architecture, a single physical server is divided into several separate servers, with a layer of virtualization sitting between the operating system and the physical hardware, and all of these servers are isolated from each other. The advantage of VPS Hosting is that each user has full root access due to the isolated nature of the servers, which ensures privacy and better security.

Now that we've seen how VPS Hosting works, let's move on to setting up a Virtual Private Server. For this walkthrough, we will be provisioning a VPS on a ResellerClub hosting package. Let's begin!

Setting up VPS Hosting

Login to your Reseller Account
Log in to the ResellerClub Control Panel using your Reseller ID and Password. Go to the top right side of the dashboard and click on Buy to purchase orders.

Place an Order
To purchase VPS Hosting you first need a domain name linked to it, so we will purchase both the domain and the VPS Hosting.

Purchasing a Domain Name
- Go to 'Select Product' and select Domain Registration from the drop-down list.
- Enter the domain you want and check if it is available.
- Should you want Privacy Protection, you can add it at an added cost.

Purchasing VPS Hosting
- After you've purchased your domain name, it is time to link it to your choice of hosting. Refresh the page and, in the same 'Select Product' drop-down, select Linux KVM VPS.
- Type the domain name you want to link the hosting with, along with all the product specification details (we will link it with the domain we purchased).
- Next, choose any Add-ons you want: the control panels (cPanel and Plesk) and the WHMCS (Billing) Add-on are available with VPS Hosting. We have selected cPanel and WHMCS. If you don't want any Add-on, select None.

Accessing your VPS Hosting
After purchase, your domain name and VPS Hosting are automatically added to your control panel. To access the orders, go to the main dashboard and click on Products → List All Orders → click on the order you want to access.
We will be choosing VPS Hosting.

Setting up your VPS Hosting
With ResellerClub, your VPS server is provisioned instantly once the order is purchased, and you do not need to set it up manually. To access your VPS server, click on the 'Admin Details' tab and a new window opens. You can now access the Server Management Panel, WHMCS, and cPanel to manage your orders.

Conclusion
With this, we come to the end of our series on VPS Hosting. We hope you now know how VPS Hosting works, as well as how to set it up. With ResellerClub, setting up a VPS is very easy. If you have any suggestions, queries, or questions, feel free to leave a comment below and we'll get back to you.

The post Understanding the Architecture and Setup of VPS Hosting appeared first on ResellerClub Blog.
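Once the VPS is provisioned, a quick way to confirm the isolated resources described above is to log in over SSH and inspect the machine. The following is only a minimal sketch, assuming a Linux KVM VPS; the IP address and key path are placeholders that you would replace with the values shown in your Admin Details tab.

# Placeholder IP and key; take the real values from the Admin Details tab.
ssh -i ~/.ssh/my_vps_key root@203.0.113.10

# Once connected, verify that this is an isolated KVM guest with its own resources:
systemd-detect-virt   # should report "kvm" on a Linux KVM VPS
nproc                 # number of vCPUs allocated to this VPS
free -m               # dedicated RAM in MB
df -h /               # provisioned disk space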

Join Cloudflare India Forum in Bangalore on 6 June 2019!

CloudFlare Blog -

Please join us for an exclusive gathering to discover the latest in cloud solutions for Internet security and performance.

Cloudflare Bangalore Meetup
Thursday, 6 June 2019, 15:30 - 20:00
Location: The Oberoi, 37-39 MG Road, Yellappa Garden, Yellappa Chetty Layout, Sivanchetti Gardens, Bengaluru

We will discuss the newest security trends and introduce serverless solutions. We have invited renowned leaders across industries, including big brands and some of the fastest-growing startups. You will learn the insider strategies and tactics that will help you protect your business, accelerate performance, and identify the quick wins in a complex Internet environment.

Speakers:
Vaidik Kapoor, Head of Engineering, Grofers
Nithyanand Mehta, VP of Technical Services & GM India, Catchpoint
Viraj Patel, VP of Technology, BookMyShow
Kailash Nadh, CTO, Zerodha
Trey Guinn, Global Head of Solution Engineering, Cloudflare

Agenda:
15:30 - 16:00 - Registration and Refreshments
16:00 - 16:30 - DDoS Landscapes and Security Trends
16:30 - 17:15 - Workers Overview and Demo
17:15 - 18:00 - Panel Discussion: Best Practices for a Successful Cyber Security and Performance Strategy
18:00 - 18:30 - Keynote #1: The Future of Edge Computing
18:30 - 19:00 - Keynote #2: Cyber Attacks Are Evolving, So Should You: How to Adopt a Quick-Win Security Policy
19:00 - 20:00 - Happy Hour

View Event Details & Register Here »

We look forward to meeting you there!

Amazon Managed Streaming for Apache Kafka (MSK) – Now Generally Available

Amazon Web Services Blog -

I am always amazed at how our customers are using streaming data. For example, Thomson Reuters, one of the world's most trusted news organizations for businesses and professionals, built a solution to capture, analyze, and visualize analytics data to help product teams continuously improve the user experience. Supercell, the social game company providing games such as Hay Day, Clash of Clans, and Boom Beach, is delivering in-game data in real time, handling 45 billion events per day.

Since we launched Amazon Kinesis at re:Invent 2013, we have continually expanded the ways in which customers work with streaming data on AWS. Some of the available tools are:

Kinesis Data Streams, to capture, store, and process data streams with your own applications.
Kinesis Data Firehose, to transform and collect data into destinations such as Amazon S3, Amazon Elasticsearch Service, and Amazon Redshift.
Kinesis Data Analytics, to continuously analyze data using SQL or Java (via Apache Flink applications), for example to detect anomalies or for time series aggregation.
Kinesis Video Streams, to simplify processing of media streams.

At re:Invent 2018, we introduced in open preview Amazon Managed Streaming for Apache Kafka (MSK), a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data. I am excited to announce that Amazon MSK is generally available today!

How it works

Apache Kafka (Kafka) is an open-source platform that enables customers to capture streaming data like clickstream events, transactions, IoT events, and application and machine logs, and to build applications that perform real-time analytics, run continuous transformations, and distribute this data to data lakes and databases in real time. You can use Kafka as a streaming data store to decouple applications producing streaming data (producers) from those consuming streaming data (consumers).

While Kafka is a popular enterprise data streaming and messaging framework, it can be difficult to set up, scale, and manage in production. Amazon MSK takes care of these management tasks and makes it easy to set up, configure, and run Kafka, along with Apache ZooKeeper, in an environment following best practices for high availability and security. Your MSK clusters always run within an Amazon VPC managed by the MSK service. Your MSK resources are made available to your own VPC, subnet, and security group through elastic network interfaces (ENIs) which appear in your account.

Customers can create a cluster in minutes, use AWS Identity and Access Management (IAM) to control cluster actions, authorize clients using TLS private certificate authorities fully managed by AWS Certificate Manager (ACM), encrypt data in transit using TLS, and encrypt data at rest using AWS Key Management Service (KMS) encryption keys. Amazon MSK continuously monitors server health and automatically replaces servers when they fail, automates server patching, and operates highly available ZooKeeper nodes as part of the service at no additional cost. Key Kafka performance metrics are published in the console and in Amazon CloudWatch. Amazon MSK is fully compatible with Kafka versions 1.1.1 and 2.1.0, so you can continue to run your applications, use Kafka's admin tools, and use Kafka-compatible tools and frameworks without having to change your code.
Based on our customer feedback during the open preview, Amazon MSK has added many features, such as:

- Encryption in transit via TLS between clients and brokers, and between brokers
- Mutual TLS authentication using ACM private certificate authorities
- Support for Kafka version 2.1.0
- 99.9% availability SLA
- HIPAA eligibility
- Cluster-wide storage scale-up
- Integration with AWS CloudTrail for MSK API logging
- Cluster tagging and tag-based IAM policy application
- Defining custom, cluster-wide configurations for topics and brokers

AWS CloudFormation support is coming in the next few weeks.

Creating a cluster

Let's create a cluster using the AWS Management Console. I give the cluster a name, select the VPC I want to use the cluster from, and choose the Kafka version. I then choose the Availability Zones (AZs) and the corresponding subnets to use in the VPC. In the next step, I select how many Kafka brokers to deploy in each AZ. More brokers allow you to scale the throughput of a cluster by allocating partitions to different brokers.

I can add tags to search and filter my resources, apply IAM policies to the Amazon MSK API, and track my costs. For storage, I leave the default storage volume size per broker. I select to use encryption within the cluster and to allow both TLS and plaintext traffic between clients and brokers. For data at rest, I use the AWS-managed customer master key (CMK), but you can select a CMK in your account, using KMS, to have further control. You can use private TLS certificates to authenticate the identity of clients that connect to your cluster. This feature uses Private Certificate Authorities (CA) from ACM. For now, I leave this option unchecked.

In the advanced settings, I leave the default values. For example, I could have chosen a different instance type for my brokers here. Some of these settings can be updated using the AWS CLI.

I create the cluster and monitor the status from the cluster summary, including the Amazon Resource Name (ARN) that I can use when interacting via the CLI or SDKs. When the status is active, the client information section provides the specific details needed to connect to the cluster:

- The bootstrap servers I can use with Kafka tools to connect to the cluster.
- The ZooKeeper connect list of hosts and ports.

I can get similar information using the AWS CLI:

aws kafka list-clusters to see the ARNs of your clusters in a specific region
aws kafka get-bootstrap-brokers --cluster-arn <ClusterArn> to get the Kafka bootstrap servers
aws kafka describe-cluster --cluster-arn <ClusterArn> to see more details on the cluster, including the ZooKeeper connect string

Quick demo of using Kafka

To start using Kafka, I create two EC2 instances in the same VPC; one will be a producer and one a consumer. To set them up as client machines, I download and extract the Kafka tools from the Apache website or any mirror. Kafka requires Java 8 to run, so I install Amazon Corretto 8.
On the producer instance, in the Kafka directory, I create a topic to send data from the producer to the consumer:

bin/kafka-topics.sh --create --zookeeper <ZookeeperConnectString> \
  --replication-factor 3 --partitions 1 --topic MyTopic

Then I start a console-based producer:

bin/kafka-console-producer.sh --broker-list <BootstrapBrokerString> \
  --topic MyTopic

On the consumer instance, in the Kafka directory, I start a console-based consumer:

bin/kafka-console-consumer.sh --bootstrap-server <BootstrapBrokerString> \
  --topic MyTopic --from-beginning

The original post includes a recording of a quick demo where I create the topic and then send messages from a producer (top terminal) to a consumer of that topic (bottom terminal).

Pricing and availability

Pricing is per Kafka broker-hour and per provisioned storage-hour. There is no cost for the ZooKeeper nodes used by your clusters. AWS data transfer rates apply for data transfer in and out of MSK. You will not be charged for data transfer within the cluster in a region, including data transfer between brokers and data transfer between brokers and ZooKeeper nodes.

You can migrate your existing Kafka cluster to MSK using tools like MirrorMaker (which comes with open-source Kafka) to replicate data from your clusters into an MSK cluster. Upstream compatibility is a core tenet of Amazon MSK: our code changes to the Kafka platform are released back to open source.

Amazon MSK is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), EU (Frankfurt), EU (Ireland), EU (Paris), and EU (London). I look forward to seeing how you are going to use Amazon MSK to simplify building and migrating streaming applications to the cloud!
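As a programmatic complement to the CLI commands shown earlier in this post, here is a small shell sketch that looks up the cluster's connection strings and feeds them straight into the Kafka tools from the demo. It is only a sketch: the cluster name is a placeholder, and the --query paths assume the current output shape of the aws kafka commands.

# Placeholder cluster name; adjust to your own cluster.
CLUSTER_ARN=$(aws kafka list-clusters --cluster-name-filter demo-cluster \
  --query 'ClusterInfoList[0].ClusterArn' --output text)

# Bootstrap brokers and ZooKeeper connect string for this cluster.
BROKERS=$(aws kafka get-bootstrap-brokers --cluster-arn "$CLUSTER_ARN" \
  --query 'BootstrapBrokerString' --output text)
ZOOKEEPER=$(aws kafka describe-cluster --cluster-arn "$CLUSTER_ARN" \
  --query 'ClusterInfo.ZookeeperConnectString' --output text)

# Reuse the values with the Kafka tools from the demo above.
bin/kafka-topics.sh --create --zookeeper "$ZOOKEEPER" \
  --replication-factor 3 --partitions 1 --topic MyTopic
bin/kafka-console-producer.sh --broker-list "$BROKERS" --topic MyTopic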

Now Available – AWS IoT Things Graph

Amazon Web Services Blog -

We announced AWS IoT Things Graph last November and described it as a tool to let you build IoT applications visually. Today I am happy to let you know that the service is now available and ready for you to use!

As you will see in a moment, you can represent your business logic in a flow composed of devices and services. Each web service and each type of device (sensor, camera, display, and so forth) is represented in Things Graph as a model. The models hide the implementation details that are peculiar to a particular brand or model of device, and allow you to build flows that can evolve along with your hardware. Each model has a set of actions (inputs), events (outputs), and states (attributes). Things Graph includes a set of predefined models, and also allows you to define your own. You can also use mappings as part of your flow to convert the output from one device into the form expected by other devices. After you build your flow, you can deploy it to the AWS Cloud or to an AWS IoT Greengrass-enabled device for local execution. The flow, once deployed, orchestrates interactions between locally connected devices and web services.

Using AWS IoT Things Graph

Let's take a quick walk through the AWS IoT Things Graph Console! The first step is to make sure that I have models which represent the devices and web services that I plan to use in my flow. I click Models in the console navigation to get started. The console outlines the three steps that I must follow to create a model, and also lists my existing models.

The presence of aws/examples in the URN for each of the devices listed above indicates that they are predefined, and part of the public AWS IoT Things Graph namespace. I click on Camera to learn more about this model; I can see its Properties, Actions, and Events. The model is defined using GraphQL; I can view it, edit it, or upload a file that contains a model definition. This model defines an abstract Camera device. The model, in turn, can reference definitions for one or more actual devices, as listed in the Devices section. Each of the devices is also defined using GraphQL. Of particular interest is the use of MQTT topics & messages to define actions.

Earlier, I mentioned that models can also represent web services. When a flow that references a model of this type is deployed, activating an action on the model invokes a Greengrass Lambda function.

Now I can create a flow. I click Flows in the navigation, and click Create flow. I give my flow a name and enter a description. I start with an empty canvas, and then drag nodes (Devices, Services, or Logic) onto it. For this demo (which is fully explained in the AWS IoT Things Graph User Guide), I'll use a MotionSensor, a Camera, and a Screen. I connect the devices to define the flow, and then configure and customize it. There are lots of choices and settings, so I'll show you a few highlights and refer you to the User Guide for more info. I set up the MotionSensor so that a change of state initiates this flow, and I also (not shown) configure the Camera to perform the Capture action, and the Screen to display it.
I could also make use of the predefined Services, and I can add Logic to my flow. Like the models, my flow is ultimately defined in GraphQL (I can view and edit it directly if desired). At this point I have defined my flow, and I click Publish to make it available for deployment. The next steps are:

Associate – This step assigns an actual AWS IoT Thing to a device model. I select a Thing, and then choose a device model, and repeat this step for each device model in my flow.
Deploy – I create a Flow Configuration, target it at the Cloud or Greengrass, and use it to deploy my flow (read Creating Flow Configurations to learn more).

Things to Know

I've barely scratched the surface here; AWS IoT Things Graph provides you with a lot of power and flexibility, and I'll leave you to discover more on your own! Here are a couple of things to keep in mind:

Pricing – Pricing is based on the number of steps executed (for cloud deployments) or deployments (for edge deployments), and is detailed on the AWS IoT Things Graph Pricing page.
API Access – In addition to console access, you can use the AWS IoT Things Graph API to build your models and flows.
Regions – AWS IoT Things Graph is available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

— Jeff;
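For readers who prefer the API access mentioned above, here is a hedged sketch using the iotthingsgraph namespace of the AWS CLI. The commands assume that namespace is available in your CLI version, and the flow URN at the end is a placeholder whose exact form depends on your account's namespace.

# List the flows (workflows) published in your namespace.
aws iotthingsgraph search-flow-templates

# List the device models available to your flows, including the aws/examples ones.
aws iotthingsgraph search-entities --entity-types DEVICE_MODEL

# Fetch the GraphQL definition of a specific flow (placeholder URN).
aws iotthingsgraph get-flow-template \
  --id "urn:tdm:REGION/ACCOUNT/default:Workflow:MyDemoFlow"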

New – Data API for Amazon Aurora Serverless

Amazon Web Services Blog -

If you have ever written code that accesses a relational database, you know the drill. You open a connection, use it to process one or more SQL queries or other statements, and then close the connection. You probably used a client library that was specific to your operating system, programming language, and your database. At some point you realized that creating connections took a lot of clock time and consumed memory on the database engine, and soon after found out that you could (or had to) deal with connection pooling and other tricks. Sound familiar?

The connection-oriented model that I described above is adequate for traditional, long-running programs where the setup time can be amortized over hours or even days. It is not, however, a great fit for serverless functions that are frequently invoked and that run for time intervals that range from milliseconds to minutes. Because there is no long-running server, there's no place to store a connection identifier for reuse.

Aurora Serverless Data API

In order to resolve this mismatch between serverless applications and relational databases, we are launching a Data API for the MySQL-compatible version of Amazon Aurora Serverless. This API frees you from the complexity and overhead that come along with traditional connection management, and gives you the power to quickly and easily execute SQL statements that access and modify your Amazon Aurora Serverless Database instances.

The Data API is designed to meet the needs of both traditional and serverless apps. It takes care of managing and scaling long-term connections to the database and returns data in JSON form for easy parsing. All traffic runs over secure HTTPS connections. It includes the following functions:

ExecuteStatement – Run a single SQL statement, optionally within a transaction.
BatchExecuteStatement – Run a single SQL statement across an array of data, optionally within a transaction.
BeginTransaction – Begin a transaction, and return a transaction identifier. Transactions are expected to be short (generally 2 to 5 minutes).
CommitTransaction – End a transaction and commit the operations that took place within it.
RollbackTransaction – End a transaction without committing the operations that took place within it.

Each function must run to completion within 1 minute, and can return up to 1 megabyte of data.

Using the Data API

I can use the Data API from the Amazon RDS Console, the command line, or by writing code that calls the functions that I described above. I'll show you all three in this post. The Data API is really easy to use! The first step is to enable it for the desired Amazon Aurora Serverless database. I open the Amazon RDS Console, find and select the cluster, and click Modify. Then I scroll down to the Network & Security section, click Data API, and Continue. On the next page I choose to apply the settings immediately, and click Modify cluster.

Now I need to create a secret to store the credentials that are needed to access my database. I open the Secrets Manager Console and click Store a new secret. I leave Credentials for RDS selected, enter a valid database user name and password, optionally choose a non-default encryption key, and then select my serverless database. Then I click Next. I name my secret and tag it, and click Next to configure it. I use the default values on the next page, click Next again, and now I have a brand new secret.

Now I need two ARNs, one for the database and one for the secret.
I fetch both from the console, first for the database and then for the secret. The pair of ARNs (database and secret) provides me with access to my database, and I will protect them accordingly!

Using the Data API from the Amazon RDS Console

I can use the Query Editor in the Amazon RDS Console to run queries that call the Data API. I open the console and click Query Editor, and create a connection to the database. I select the cluster, enter my credentials, and pre-select the table of interest. Then I click Connect to database to proceed. I enter a query and click Run, and view the results within the editor.

Using the Data API from the Command Line

I can exercise the Data API from the command line:

$ aws rds-data execute-statement \
    --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL" \
    --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1" \
    --database users \
    --sql "show tables" \
    --output json

I can use jq to pick out the part of the result that is of interest to me:

... | jq .records
[
  {
    "values": [
      { "stringValue": "users" }
    ]
  }
]

I can query the table and get the results (the SQL statement is "select * from users where userid='jeffbarr'"):

... | jq .records
[
  {
    "values": [
      { "stringValue": "jeffbarr" },
      { "stringValue": "Jeff" },
      { "stringValue": "Barr" }
    ]
  }
]

If I specify --include-result-metadata, the query also returns data that describes the columns of the result (I'll show only the first one in the interest of frugality):

... | jq .columnMetadata[0]
{
  "type": 12,
  "name": "userid",
  "label": "userid",
  "nullable": 1,
  "isSigned": false,
  "arrayBaseColumnType": 0,
  "scale": 0,
  "schemaName": "",
  "tableName": "users",
  "isCaseSensitive": false,
  "isCurrency": false,
  "isAutoIncrement": false,
  "precision": 15,
  "typeName": "VARCHAR"
}

The Data API also allows me to wrap a series of statements in a transaction, and then either commit or roll back. Here's how I do that (I'm omitting --secret-arn and --resource-arn for clarity):

$ ID=`aws rds-data begin-transaction --database users --output json | jq .transactionId`
$ echo $ID
"ATP6Gz88GYNHdwNKaCt/vGhhKxZs2QWjynHCzGSdRi9yiQRbnrvfwF/oa+iTQnSXdGUoNoC9MxLBwyp2XbO4jBEtczBZ1aVWERTym9v1WVO/ZQvyhWwrThLveCdeXCufy/nauKFJdl79aZ8aDD4pF4nOewB1aLbpsQ=="

$ aws rds-data execute-statement --transaction-id $ID --database users --sql "..."
$ ...
$ aws rds-data execute-statement --transaction-id $ID --database users --sql "..."
$ aws rds-data commit-transaction --transaction-id $ID

If I decide not to commit, I invoke rollback-transaction instead.

Using the Data API with Python and Boto

Since this is an API, programmatic access is easy. Here's some very simple Python / Boto code:

import boto3

client = boto3.client('rds-data')

response = client.execute_statement(
    secretArn   = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL',
    database    = 'users',
    resourceArn = 'arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1',
    sql         = 'select * from users'
)

for user in response['records']:
    userid     = user[0]['stringValue']
    first_name = user[1]['stringValue']
    last_name  = user[2]['stringValue']
    print(userid + ' ' + first_name + ' ' + last_name)

And the output:

$ python data_api.py
jeffbarr Jeff Barr
carmenbarr Carmen Barr

Genuine, production-quality code would reference the table columns symbolically using the metadata that is returned as part of the response. By the way, my Amazon Aurora Serverless cluster was configured to scale capacity all the way down to zero when not active.
Here's what the scaling activity looked like while I was writing this post and running the queries (charted in the original post).

Now Available

You can make use of the Data API today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. There is no charge for the API, but you will pay the usual price for data transfer out of AWS.

— Jeff;
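One detail the examples above gloss over: the Data API also supports parameterized SQL, which avoids building statements by string concatenation. A minimal sketch follows, assuming the same users table and the same placeholder ARNs used throughout the post.

# Parameterized query with the Data API (same placeholder ARNs as above).
aws rds-data execute-statement \
  --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL" \
  --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1" \
  --database users \
  --sql "select * from users where userid = :id" \
  --parameters '[{"name": "id", "value": {"stringValue": "jeffbarr"}}]' \
  --include-result-metadata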

Why (and how) WordPress Works for Us

WP Engine -

Today WP Engine is the digital experience platform for WordPress used by 95,000 customers across 150 countries. But we didn't start that way. In 2010, I founded WP Engine based on the knowledge that there was a need for a premium WordPress service that would deliver the speed, scalability, and security that websites required. The idea… The post Why (and how) WordPress Works for Us appeared first on WP Engine.

New – AWS IoT Events: Detect and Respond to Events at Scale

Amazon Web Services Blog -

As you may have been able to tell from many of the announcements that we have made over the last four or five years, we are working to build a wide-ranging set of Internet of Things (IoT) services and capabilities. Here's a quick recap:

October 2015 – AWS IoT Core – A fundamental set of Cloud Services for Connected Devices.
June 2017 – AWS Greengrass – The ability to Run AWS Lambda Functions on Connected Devices.
November 2017 – AWS IoT Device Management – Onboarding, Organization, Monitoring, and Remote Management of Connected Devices.
November 2017 – AWS IoT Analytics – Advanced Data Analysis for IoT Devices.
November 2017 – Amazon FreeRTOS – An IoT Operating System for Microcontrollers.
April 2018 – Greengrass ML Inference – The power to do Machine Learning Inference at the Edge.
August 2018 – AWS IoT Device Defender – A service that helps to Keep Your Connected Devices Safe.

Last November we also announced our plans to launch four new IoT services:

AWS IoT SiteWise to collect, structure, and search data from industrial equipment at scale.
AWS IoT Events to detect and respond to events at scale.
AWS IoT Things Graph to build IoT applications visually.
AWS IoT Greengrass Connectors to simplify and accelerate the process of connecting devices.

You can use these services individually or together to build all sorts of powerful, connected applications!

AWS IoT Events Now Available

Today we are making AWS IoT Events available in production form in four AWS Regions. You can use this service to monitor and respond to events (patterns of data that identify changes in equipment or facilities) at scale. You can detect a misaligned robot arm, a motion sensor that triggers outside of business hours, an unsealed freezer door, or a motor that is running outside of tolerance, all with the goal of driving faster and better-informed decisions.

As you will see in a moment, you can easily create detector models that represent your devices, their states, and the transitions (driven by sensors and events, both known as inputs) between the states. The models can trigger actions when critical events are detected, allowing you to build robust, highly automated systems. Actions can, for example, send a text message to a service technician or invoke an AWS Lambda function.

You can access AWS IoT Events from the AWS IoT Events Console or by writing code that calls the AWS IoT Events API functions. I'll use the Console, and I will start by creating a detector model. I click Create detector model to begin. I have three options; I'll go with the demo by clicking Launch demo with inputs. This shortcut creates an input and a model, and also enables some "demo" functionality that sends data to the model.

Before examining the model, let's take a look at the input. I click on Inputs in the left navigation to see them. I can see all of my inputs at a glance; I click on the newly created input to learn more. This input represents the battery voltage measured from a device that is connected to a particular powerwallId.

OK, let's return to (and dissect) the detector model! I return to the navigation, click Detector models, find my model, and click it. There are three Send options at the top; each one sends data (an input) to the detector model. I click on Send data for Charging to get started. This generates a message; I click Send data to do just that. Then I click Send data for Charged to indicate that the battery is fully charged.
The console shows me the state of the detector. Each time an input is received, the detector processes it. Let's take a closer look at the detector. It has three states (Charging, Charged, and Discharging). The detector starts out in the Charging state, and transitions to Charged when the Full_charge event is triggered. The event definition includes trigger logic, which is evaluated each time an input is received (your IoT app must call BatchPutMessage to inform AWS IoT Events). If the trigger logic evaluates to a true condition, the model transitions to the new (destination) state, and it can also initiate an event action. This transition has no actions; I can add one (or more) by clicking Add action. My choices are:

Send MQTT Message – Send a message to an MQTT topic.
Send SNS Message – Send a message to an SNS target, identified by an ARN.
Set Timer – Set, reset, or destroy a timer. Times can be expressed in seconds, minutes, hours, days, or months.
Set Variable – Set, increment, or decrement a variable.

Returning (once again) to the detector, I can modify the states as desired. For example, I could fine-tune the Discharging aspect of the detector by adding a LowBattery state.

After I create my inputs and my detector, I Publish the model so that my IoT devices can use and benefit from it. I click Publish and fill in a few details. The Detector generation method has two options: I can Create a detector for each unique key value (if I have a bunch of devices), or I can Create a single detector (if I have one device). If I choose the first option, I need to choose the key that separates one device from another. Once my detector has been published, I can send data to it using AWS IoT Analytics, IoT Core, or from a Lambda function.

Get Started Today

We are launching AWS IoT Events in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions and you can start using it today!

— Jeff;
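To make the BatchPutMessage step mentioned above concrete, here is a hedged sketch of sending one input message from the command line. The input name and payload fields are placeholders modeled on the demo input, and with AWS CLI v2 the payload blob must be base64-encoded unless you pass --cli-binary-format raw-in-base64-out as shown.

# Send a single demo reading to AWS IoT Events (placeholder input name and payload).
aws iotevents-data batch-put-message \
  --cli-binary-format raw-in-base64-out \
  --messages '[{
      "messageId": "msg-0001",
      "inputName": "DemoPowerwallInput",
      "payload": "{\"powerwallId\": \"pw-1234\", \"batteryVoltage\": 76}"
    }]'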

Reseller Hosting Is Your Business Model Delivered

InMotion Hosting Blog -

Our Reseller Hosting is a Linux-powered, open-source juggernaut. Yes, we're proud of it. And we stand behind the features and addons that make our service not only meet but exceed industry standards and expectations. If you're not familiar with the "reseller" hosting model and how it differs from an individual hosting account, we're going to go through it point by point.

Reseller Hosting is a Model For Your Own Hosting Business

Have you ever thought about starting your own business? Continue reading Reseller Hosting Is Your Business Model Delivered at The Official InMotion Hosting Blog.

What Is VPS Used For?

HostGator Blog -

You may be here because you've heard the term VPS thrown around a lot, and you're wondering what this acronym actually means. VPS stands for Virtual Private Server, and the term is usually used when referring to VPS hosting. It's also often confused with VPN, although VPS and VPN are two different things. Of course, there's a lot more you can do with a VPS server than just host a website, but we'll get into that below. VPS hosting is typically a natural next step after you've run into the limitations of a traditional shared hosting plan. Below we'll answer in depth what VPS stands for and what a VPS is used for. By the end of this post, you'll know if a VPS is going to be right for your needs.

What Does VPS Stand For?

As you learned in the introduction, VPS stands for Virtual Private Server. This kind of server environment operates similarly to a dedicated server. However, instead of a single physical server, you're sharing multiple physical servers, which are linked together through virtualization technologies. You can think of a VPS like a cross between a shared server and a dedicated server. You're sharing a physical server with other website owners, but it's divided up in a way so there are multiple virtual dedicated servers present, hence the "virtual" aspect of VPS.

What is VPS Hosting?

If you have asked yourself "What is VPS hosting?", this section will provide you with an in-depth look at this hosting service. VPS hosting is a step up from basic shared hosting. When you're just getting started online, shared hosting will probably be the form of hosting you start with. With shared hosting, you have a single physical server which is then divided up between multiple different user accounts. In this scenario, you're splitting physical server resources, which helps to keep costs low.

On a basic level, VPS hosting has a similar setup. When you sign up for VPS hosting you have a guaranteed amount of server resources allocated to you, but you're still sharing space on a physical server with other users. There are many differences between VPS hosting and shared hosting due to the virtualization technologies employed in VPS hosting. Even though you might be sharing the same physical server, there won't be any overlap in resource use, and the other VPS accounts won't affect your site in any way. Think of it as a single dedicated server that's split over multiple physical server environments.

VPS hosting is a great choice for website owners who have outgrown shared hosting, yet aren't quite ready for the price tag or features offered by a dedicated server. You can easily migrate from shared hosting to VPS hosting while still staying within a reasonable price point.

Pros of VPS Hosting

For some website owners, VPS hosting will be a godsend. Acting as the intermediary between shared and dedicated hosting, VPS can provide you with a lot of benefits. Here are the most common reasons website owners decide to upgrade to VPS hosting.

1. High Level of Performance

If you currently have a slow-loading website, then you're doing a disservice to your visitors and your website as a whole. If you've been utilizing shared hosting and have been noticing a drop in performance, then one of the first things you'll notice after upgrading is an improvement in your loading speeds and overall site performance. VPS hosting is equipped to handle higher traffic levels right out of the gate.
Plus, you have the ability to scale your server resources if your needs expand over time.

2. Improved Overall Security

When your site starts to grow in popularity, there's a chance you'll start to experience more security threats. Even if you've done everything in your power to harden your site's security, you could still be experiencing issues. In this case, it's time to upgrade your hosting. VPS hosting offers you very high levels of security. You're not only completely protected from other sites using the same physical server, but you'll be able to implement other security hardening protocols as well.

3. Great Value Pricing

VPS hosting might not be in everyone's budget, but it offers great value for the resources you have access to. Essentially, you're getting access to a dedicated server at a fraction of the cost. Plus, with VPS hosting you'll be enabling higher levels of performance and elevating the security protocols surrounding your site. When compared to shared hosting, you're getting a serious upgrade in hosting quality without a massive jump in price.

4. Greater Server Access and Customization

VPS web hosting will generally provide you with a greater level of server access, along with the ability to customize your server environment as you see fit. Some plans, like WordPress VPS hosting, will have certain restrictions on plugin use and overall configuration. However, others will operate more or less like a clean slate, allowing you to choose your operating system and build whatever configuration will supercharge your site the most. Keep in mind that some hosts will also offer managed VPS web hosting, which means that the majority of the technical tasks required to manage your server will be taken care of by their teams. This option will help to free up your time and ensure your server is always fully optimized according to your website's specifications.

Cons of VPS Hosting

Even though VPS hosting seems pretty great, it's not the perfect fit for every kind of website owner. Here are some of the most common reasons people decide not to go with VPS hosting:

1. Prohibitive Pricing

Even though VPS hosting is quite cost-effective, especially with all of the features, the pricing can still be steep for some website owners. If a basic shared hosting plan is stretching your budget, then VPS might not be the right option for you. VPS hosting does seem cheap when compared to the more expensive dedicated hosting plans. However, it's still a pretty sizable step up from shared hosting.

2. Poor Resource Allocation With Low-Quality Hosts

VPS hosting relies upon proper resource allocation. If you're using a low-quality host, another site on the same physical server may impact your site, or your site otherwise won't be able to perform at the level you've grown used to. However, using a high-quality host should help you easily avoid either of these issues.

What is VPS Used For?

Beyond hosting a website, VPS servers have a myriad of other uses. Even if you're currently happy with your existing hosting plan, you might want to check out VPS hosting for the other types of scenarios it supports. Here are the most common VPS use cases beyond your standard hosting plan:

1. Hosting Your Own Personal Server

There's a multitude of reasons to run your own server environment, outside of simply hosting your website. A VPS server gives you your own virtual playground for additional online activities. For example, maybe you want your own dedicated servers for games?
For some people, the cost of a dedicated server might be prohibitive; instead, you could run a VPS server to host smaller game matches or create your own custom game environment. Not every hosting company will allow you to run a gaming server via VPS, so make sure you read the terms and conditions, or contact support, before you decide to go this route.

2. Testing New Applications

If you regularly deploy web applications or test out custom server setups, you'll need your own server environment to test these things out. But an entire dedicated server might be too expensive to warrant simple testing. In this case, a VPS will fit the bill perfectly. This will give you a playground to do whatever you wish without incurring high monthly costs.

3. Additional File Storage

Sometimes you want to create another backup of your files, but using cloud storage accounts can become expensive. If you want to create secure and easily accessible backups, then consider using a VPS server (see the backup sketch at the end of this post). Overall, this might end up being cheaper than a cloud hosting account, depending on the overall volume of the files you need stored. However, keep in mind that not every hosting provider will allow their VPS accounts to be used for pure file storage, so double-check the terms and conditions before you move forward.

VPS Hosting Showdown

By now you understand what a VPS hosting solution is, and the other reasons you might want to deploy a Virtual Private Server. Now it's time to see how VPS hosting compares to the other forms of hosting out there. For those thinking about upgrading their current hosting package, this section is for you.

1. VPS vs Shared Hosting

We went into shared hosting a bit above, but it's worth digging into it in a bit more detail. With shared hosting, you're renting space on a physical server that's being shared with multiple other users. The server is partitioned between users, but there is a chance that other sites on the same server could impact your site. With a VPS hosting solution you're still sharing a physical server with other users, but the underlying technology is much different. A VPS utilizes what's known as a hypervisor, which ensures that you always have access to the guaranteed level of server resources specified in your hosting plan. Shared hosting is a great place to start, but once you've run into its limits, VPS is a great next step. Plus, VPS hosting has the added benefit of being able to scale with your site.

2. VPS vs Dedicated Hosting

Dedicated hosting is pretty simple. You're renting an entire physical server that's yours to do with as you want. It's one of the more expensive forms of hosting available, but it'll provide you with very high levels of performance and security while offering you the ability to customize your server however you see fit. A VPS server behaves differently from a dedicated server in that you have your own virtualized dedicated server to use how you see fit, but you don't have your own physical dedicated server, just a virtual one. If you have a very high-traffic website, or require very high levels of security, then a dedicated server might be a better fit. However, keep in mind that you'll need a larger budget when compared to VPS hosting. But if you don't have the budget for a dedicated host, then VPS hosting will suit you fine until it's possible to upgrade.

3. VPS vs Cloud Hosting

Cloud hosting is one of the newer forms of hosting on the block.
Overall, cloud hosting is similar to VPS in that it uses virtualization technologies to create a server environment. However, with cloud hosting there's a network of servers that are grouped together to create a cloud server cluster. This setup provides you with very high levels of reliability and scalability. So, if your traffic levels swing up and down from month to month, then this style of hosting could be advantageous. VPS hosting operates in a similar fashion by creating a virtualized web server environment across a few physical servers (if your resource needs require it). However, with VPS hosting you should have a more stable volume of traffic per month, even if it's rising on a consistent basis.

In Closing: Do You Need to Use VPS?

VPS hosting is a perfect fit for those who require the resources that a dedicated server can provide, but aren't quite ready for a dedicated web server. When it comes to your website, using VPS hosting will offer you higher levels of performance, storage, and scalability if the need arises. However, you might also think about utilizing a VPS for deploying and testing projects, running your own personal server, or even for additional file storage or website backups. Whether or not you need to upgrade to a VPS depends on whether you've currently hit the limits of your existing hosting package, or want to test out a VPS for any of the reasons highlighted above. Hopefully, you now have a better understanding of what a VPS is used for, even beyond the realm of hosting your website. If you've currently hit the limits of your shared hosting account, then upgrading to VPS hosting can be a great decision for the future of your website.

Find the post on the HostGator Blog
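Here is the backup sketch referenced in the file-storage use case above: a minimal, hedged example of pushing a site backup to a VPS over SSH with rsync. The host name, user, key path, and directories are all placeholders.

# Push a backup of a local directory to the VPS (placeholders throughout).
rsync -az --delete -e "ssh -i ~/.ssh/backup_key" \
  /var/www/mysite/ backup@vps.example.com:/srv/backups/mysite/

# Restoring works the same way in reverse.
rsync -az -e "ssh -i ~/.ssh/backup_key" \
  backup@vps.example.com:/srv/backups/mysite/ /var/www/mysite/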

Cloudflare Repositories FTW

CloudFlare Blog -

This is a guest post by Jim "Elwood" O'Gorman, one of the maintainers of Kali Linux. Kali Linux is a Debian-based GNU/Linux distribution popular amongst the security research communities.

Kali Linux turned six years old this year! In this time, Kali has established itself as the de-facto standard open source penetration testing platform. On a quarterly basis, we release updated ISOs for multiple platforms, pre-configured virtual machines, Kali Docker, WSL, Azure, AWS images, tons of ARM devices, Kali NetHunter, and on and on and on. This has led to Kali being trusted and relied on to always be there for both security professionals and enthusiasts alike.

But that popularity has always led to one complication: how to get Kali to people? With so many different downloads plus the apt repository, we have to move a lot of data. To accomplish this, we have always relied on our network of first- and third-party mirrors. The way this works is, we run a master server that pushes out to a number of mirrors. We then pay to host a number of servers that are geographically dispersed and use them as our first-party mirrors. Then, a number of third parties donate storage and bandwidth to operate third-party mirrors, ensuring that we have even more systems that are geographically close to you. When you go to download, you hit a redirector that will send you to a mirror that is close to you, ideally allowing you to download your files quickly.

This solution has always been pretty decent, however it has some drawbacks. First, our network of first-party mirrors is expensive. Second, some mirrors are not as good as others. Nothing is worse than trying to download Kali and getting sent to a slow mirror, where your download might drag on for hours. Third, we always, always need more mirrors as Kali continues to grow in popularity.

This situation led to us encountering Cloudflare, thanks to some extremely generous outreach from Justin (@xxdesmus) on Twitter in June 2018 (https://t.co/k6M5UZxhWF).

I will be honest, we are a bunch of security nerds, so we were a bit skeptical at first. We have some pretty unique needs, we use a lot of bandwidth, syncing an apt repository to a CDN is no small task, and, well, we are paranoid. We have an average of 1,000,000 downloads a month on just our ISO images. Add in our apt repos and you are talking some serious, serious traffic. So how much help could we really expect from Cloudflare anyway? Were we really going to be able to put this to use, or would this just be a nice fancy front end to our website and nothing else? On the other hand, it was a chance to use something new and shiny, and it is an expensive product, so of course we dove right in to play with it.

Initially we had some sync issues. A package repository is a mix of static data (binary and source packages) and dynamic data (package lists are updated every 6 hours). To make things worse, the cryptographic sealing of the metadata means that we need atomic updates of all the metadata (the signed top-level 'Release' file contains checksums of all the binary and source package lists). The default behavior of a CDN is not appropriate for this purpose, as it caches all files for a certain amount of time after they have been fetched for the first time. This means that you could have different versions of various metadata files in the cache, resulting in invalid checksum errors returned by apt-get.
So we had to implement a few tweaks to make it work and reap the full benefits of Cloudflare's CDN network. First we added an "Expires" HTTP header to disable expiration of all files that will never change. Then we added another HTTP header to tag all metadata files so that we could manually purge those files from the CDN cache through an API call that we integrated at the end of the repository update procedure on our backend server. With nginx in our backend, the configuration looks like this:

location /kali/dists/ {
    add_header Cache-Tag metadata,dists;
}
location /kali/project/trace/ {
    add_header Cache-Tag metadata,trace;
    expires 1h;
}
location /kali/pool/ {
    add_header Cache-Tag pool;
    location ~ \.(deb|udeb|dsc|changes|xz|gz|bz2)$ {
        expires max;
    }
}

The API call is a simple shell script launched by a hook of the repository mirroring script:

#!/bin/sh
curl -sS -X POST "https://api.cloudflare.com/client/v4/zones/xxxxxxxxxxx/purge_cache" \
    -H "Content-Type:application/json" \
    -H "X-Auth-Key:XXXXXXXXXXXXX" \
    -H "X-Auth-Email:your-account@example.net" \
    --data '{"tags":["metadata"]}'

With this simple yet powerful feature, we ensure that the CDN cache always contains consistent versions of the metadata files. Going further, we might want to configure Prefetching so that Cloudflare downloads all the package lists as soon as a user downloads the top-level 'Release' file.

In short, we were using this system in a way that was never intended, but it worked! This really reduced the load on our backend, as a single server could feed the entire CDN, putting the files geographically close to users and allowing the classic apt dist-upgrade to occur much, much faster than ever before. A huge benefit, and it was not really a lot of work to set up. Sevki Hasirci was there with us the entire time as we worked through this process, ensuring any questions we had were answered promptly. A great win.

However, there was just one problem. Looking at our logs, while the apt repo was working perfectly, our image distribution was not so great. None of those images were getting cached, and our origin server was dying. Talking with Sevki, it turned out there were limits to how large a file Cloudflare would cache. He upped our limit to the system capacity, but that still was not enough for how large some of our images are. At this point, we just assumed that was that: we could use this solution for the repo, but for our image distribution it would not help. However, Sevki told us to wait a bit. He had a surprise in the works for us.

After some development time, Cloudflare pushed out an update to address our issue, allowing us to cache very large files. With that in place, everything just worked with no additional tweaking. Even items like partial downloads for users using download accelerators worked just fine. Amazing!

To show an example of what this translated into, let's look at some graphs. Once the very large file support was added and we started to push out our images through Cloudflare, there was not a real increase in requests; looking at bandwidth, however, there was a clear increase, and after it had been implemented for a while we saw a clear pattern. This pushed us from around 80 TB a week when we had just the repo, to now around 430 TB a month with the repo and images.
As you can imagine, that’s an amazing bandwidth savings for an open source project such as ours. Performance is great, and with a cache hit rate of over 97% (amazingly high considering how frequently files in our repo change), we could not be happier. So what’s next? That’s the question we are asking ourselves. This solution has worked so well that we are looking at other ways to leverage it, and there are a lot of options. One thing is for sure: we are not done with this. Thanks to Cloudflare, Sevki, Justin, and Matthew for helping us along this path. It is fair to say this is the single largest contribution to Kali that we have received outside of the support of Offensive Security. The support we received from Cloudflare was amazing. The Kali project and community thank you immensely every time they update their distribution or download an image.
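As an aside for anyone replicating a setup like this: a quick way to confirm that a given file is being served from the CDN cache is to inspect Cloudflare’s standard CF-Cache-Status response header. The hostname and path below are placeholders, not Kali’s actual mirror; substitute your own repository URL.

# Fetch only the response headers and look at the caching-related ones.
curl -sI https://repo.example.org/kali/dists/kali-rolling/Release | grep -iE 'cf-cache-status|cache-control|expires'
# CF-Cache-Status: HIT means the edge answered without touching the origin;
# MISS or EXPIRED means the request went back to the backend server.

A HIT on package files combined with a MISS on freshly purged metadata is exactly the behavior the Cache-Tag purge described above is meant to produce.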

Thinking Global Business? You need a .GLOBAL Domain name

BigRock Blog -

The Internet has brought the world closer. All you need is a website to start tapping the global market. But you need to choose the right domain name extension for attracting international visitors. And you will want to clear up a few apprehensions as well: Should I go with the “.com” domain? Can a ccTLD work well in different markets? If I choose a new generic TLD, will it rank the same way? When you try to find answers, you may even think of having separate ccTLDs to cater to each country-specific market. But there is a better way to meet your global aspirations.
Go with the .global Domain Extension
It is a generic top-level domain that came into existence in 2014. With it, you can reach out to a worldwide audience and portray your business as a global brand. Whether your business offers products or services, you can use the “.global” extension. Thousands of websites have already been registered with this domain in the short period since its inception. Some compelling reasons to choose this extension are:
Global Image
When you have this generic top-level domain with your domain name, for instance www.domain-name.global, it instantly helps you convey the global branding message that you want your visitors to see. What if you had chosen “.com” or some other extension? It simply can’t create the same effect.
Global Traffic
When you are doing business across the globe and dealing across international borders, using this extension will help. Foreign visitors coming to the site can easily connect with the domain. You will surely not want them to ignore your website due to country restrictions. When the website is accessible to any user without any limitation of location, the “.global” extension can help attract traffic from visitors across the globe.
Global SEO
With this extension, search engine optimization for the site improves. Visitors looking for companies with a global presence may use the word “global” while searching on the internet. When you have the “.global” domain extension, it serves as a keyword as well, which results in an improved rank on the search results.
Use Case For This Extension
You are a multinational organization which has business interests in many countries across the globe. You want to differentiate the local websites from the international one. Here is what you can do:
Global Website: www.businessdomainname.global
Country Sites: www.businessdomainname.in, www.businessdomainname.sg, www.businessdomainname.uk
You can use the site with the “.global” extension to cater to international users. Once they reach this site, they come across your global brand. However, if they are looking for something location-specific, you can guide them to the country-specific site and connect them with the local offices. With the use of this extension for the global site, you are showcasing your diversity, which allows international site visitors to connect instantly. The word ‘global’ is recognized in most languages, so there are minimal linguistic issues in sending the branding message.
Can You Register Your Site With It?
You can easily register your site with no restrictions at all. Whether you are a global organization or an individual, you can register with ease. There are thousands of sites which use this domain extension without any issues. There is little chance that you will find a domain name ‘unavailable’ with this extension.
Leading Sites Using This Extension
https://mobian.global/ – It is a marketplace for mobility providers and mobility resellers.
Here they can connect quickly. https://urc.global/ – This company has a presence in 90 countries. It provides healthcare, social services, and health education across the world. http://h2go.global – The company provides clean drinking water to remote and rural areas. More than 1.7 million people across the globe are using its products.
Benefits of Registering a “.global” Domain Extension with BigRock
An instant global brand image for the domain name. Availability of domains is not an issue. Improved SEO. Helps attract global traffic. Search Now or Call @ 1800 266 7625

What Happens When There’s a Catastrophic Failure in Your Infrastructure

Liquid Web Official Blog -

The Horrific Reality of Catastrophic Failure
The Exorcist doesn’t hold a candle to the idea of a catastrophic failure wiping out your data, your web presence… your entire operation (cue the vomit). It should scare you. Our livelihoods—our lives—are increasingly digital. Your IT infrastructure is integral to your operations. Whether it’s your website, your database, or your inter-office communications and operations, downtime is intolerable. A catastrophe-level shutdown is unfathomable. Fortunately, there are plenty of ways to safeguard your business from the worst. You can read about how to prevent a disaster with redundancies, a high availability (HA) infrastructure, and other solutions, here, here, and here. However, things happen: even the best-laid plans can be undone when a tornado comes through and takes out your data center. In the event that something catastrophic does occur, you need to be ready, and the best way to be ready is to understand exactly what happens if (and with the right protection that’s a pretty big if) the walls you’ve built around your business come tumbling down. You need to expect the unexpected, so you’re prepared for anything that comes your way. Subscribe to the Liquid Web weekly newsletter to get the latest on high availability technology sent straight to your inbox.
Failures Occur. When?
There isn’t an infrastructure out there (no matter how well designed, implemented, or maintained) that is impervious to failure. Failures happen. That’s why HA systems are a thing; it’s why you have redundancies, backups, and other preventative measures. But where do they occur? When do they occur? Well, there are five particularly vulnerable points in your infrastructure—housing, hardware, ISP, software, and data. Your first vulnerable point, housing, is your physical accommodations and includes the building that houses your servers/computers, your climate controls, and your electrical supply. Your housing is only vulnerable in highly specific instances (natural disasters, brownouts, blackouts, etc.) and is pretty easily mitigated. For example, two separate sources of power, uninterruptible power supplies, battery backups, restricted access to server rooms, routine building maintenance, and the like can reliably safeguard this vulnerability in your infrastructure. This goes for your ISP (fiber, cable, wireless) and other vendors, as well. Thoroughly vetted, high-quality vendors will have their own HA systems in place, making this vulnerability in your infrastructure a low probability for catastrophic failure. However, your hardware, software, and data are significantly more vulnerable, even though there are steps your company can take to prevent failures. Servers, computers, peripherals, and network equipment age, break down, and fail; it’s just the reality of physical systems. But non-physical systems (productivity and communication software, websites, applications, etc.) are also open to certain failures, including external attacks such as DDoS and hacking, as well as bugs, viruses, and human error. Finally, your data can get corrupted on its own or can fail as a result of another failure in the chain; a hardware failure, for example, could wipe out your data. While some failures can be predicted and prevented—regular maintenance and replacement of equipment to prevent breakdowns, for example—others simply can’t be anticipated. A sudden equipment failure, power outages, natural disasters, a DDoS attack; these can all occur seemingly out of nowhere.
You simply have to have a plan in place to react to these events in case they do (almost inevitably) happen. A good rule of thumb is to create an infrastructure that doesn’t have (or at least attempts to eliminate) a single point of failure. All of these vulnerability points—housing, hardware, ISP, software, and data—are susceptible to single points of failure. Housing? Make sure you have a physical space you can use in case the first space becomes unviable. Hardware? Make sure you have redundant equipment you can swap in, in case of a failure. ISP, software, data? Redundancies, backups, and backups of backups. Be prepared.
What is the Worst Case Scenario?
In 2007, according to the Los Angeles Times, “a malfunctioning network interface card on a single desktop computer in the Tom Bradley International Terminal at LAX” brought international air travel to an absolute standstill for nine hours. For nine hours, 17,000 passengers were stranded on board—because this was software used by U.S. Customs to authorize entry and exit, no one was allowed to disembark. Not only did this stop international travel in its tracks; U.S. Customs and the airlines themselves had to supply food, water, and diapers to passengers, and had to keep refueling to keep the environmental controls on the aircraft operating. Oh, and shortly after the system was restored, again according to the Los Angeles Times, it gave out again: “The second outage was caused by a power supply failure.” Now that’s a worst case scenario. You’re not U.S. Customs or LAX, but you can relate. Almost nine hours of downtime in a single day exceeds what 81% of businesses said they could tolerate in a single year (thanks, Information Technology and Intelligence Corp). Everyone’s worst case scenario is different, but a massive failure that cripples your infrastructure for even a few hours in a single day can have lasting adverse effects on your revenue, your workflow, and your relationship with your clients/customers. Any significant downtime should be a cause for concern. Is it a worst case scenario? Maybe not, but a few days in a row—or even over the course of a year—could be.
Automatic Failover vs. No Automatic Failover
While a systems failure is a spectrum of what can go wrong, there are two scenarios on either end—an automatic failover, and a catastrophic failure in which a failover doesn’t take place either manually or automatically. Failover systems themselves can fail, but it’s more likely that there isn’t a system in place to automate a switch to a redundant system. What follows is a look into what actually happens during an automatic failover and what would happen if such a system wasn’t in place.
What Happens During an Automatic Failover
Several scenarios can trigger a failover—your secondary node(s) do not receive a heartbeat signal; a primary node experiences a hardware failure; a network interface fails; your HA monitor detects a significant dip in performance; or a failover command is manually sent. In the event that a secondary node does not receive a heartbeat signal (the synchronous, two-way monitor of server operation and performance), there are several possible causes, including a network failure, a hardware failure, or a software crash/reboot. As you can see, an automatic failover is triggered (predominantly) by an equipment failure. Any time a piece of equipment stops operating—or even begins to perform below its expected values—a failover will be triggered.
It should be noted that there is a difference between a switchover and a failover. A switchover is simply a role reversal of the primary and a secondary node; a secondary node is chosen to become the primary node, and the primary node becomes a secondary node. This is almost always anticipated and done intentionally. A common switchover scenario is maintenance and upgrading. In a switchover, there is no data loss. A failover, on the other hand, is a role reversal of the primary node and a secondary node in the event of a systems failure (network, hardware, software, power, etc.). A failover may result in data loss depending on the safeguards in place. So, what does happen in an automatic failover? Let’s break it down (a simple illustrative sketch of this loop appears later in this article):
1. An event occurs that initiates failover. This could be a network failure, a power outage, a software failure, or a hardware failure. In all cases, the heartbeat link between the primary node and the elected secondary node is severed and failover is initiated.
2. An error log (why was a failover initiated?) is created.
3. The elected secondary node takes on the role of the primary node.
4. The primary node is removed from the cluster.
What Happens With No Automatic Failover
Ok, so you don’t have an automatic failover safeguard in place and something breaks—or, even worse, a lot of things break. What happens? Well, that’s going to depend on what systems you have in place. If you have working backups, but no automatic failover system in place, you’ll retain your data. However, depending on your infrastructure, the amount of time it takes to recognize a failure and the amount of time it takes to manually switch over will be much longer than with an automatic solution. However, if your system is sketchy and there are vulnerabilities throughout, things get significantly more complicated and need to be addressed on a case-by-case basis. We can, though, examine what happens in systems with one or more single points of failure at critical junctures. You’re sure to remember housing, hardware, ISP, software, and data. Housing. In May of 2011, a major tornado ripped through Joplin, MO. In the tornado’s path were a hospital and the hospital’s adjoining data center. The data center held both electronic and physical records. Fortuitously, the hospital IT staff was in the middle of mass digitization and data migration to an off-site central center with redundant satellites. That meant most of the data was saved (although some records were irrevocably destroyed) and the hospital was able to mobilize services quickly. However, if the tornado had come any earlier, the data loss would have been extreme. While this scenario (indeed, any IT housing disaster) is rare, it does happen, and there are ways to safeguard your equipment and your data. According to Pergravis (offsite backups notwithstanding), the best data center is constructed from reinforced concrete and is designed as a box—the data center—within a shell—the structure surrounding the data center—which creates a secondary barrier. This is, obviously, a pie-in-the-sky scenario, but Pergravis does offer simpler solutions for shoring up an existing data center. For example, they suggest locating your data center in the middle of your facility, away from exterior walls. If that’s not an option, however, removing and sealing exterior windows will help safeguard your equipment from weather damage. Hardware. The key to any secure system (the key to HA, as we’ve discussed here and here) is redundancy.
That includes redundant hardware that you might not immediately think of. A few years ago, Microsoft Azure cloud services in Japan went down for an extended period of time because of a bad rotary uninterruptible power supply (RUPS). As the temperatures in the data center rose, equipment began shutting itself off in order to preserve data, disrupting cloud service in the Japan East region. It’s not always going to be a storage device or even a network appliance that fails. Besides, most systems are over-engineered in terms of server component, data backup, and network equipment redundancies. It’s up to you to work with your company to conceive of, prepare for, and shore up any weaknesses in your IT infrastructure—if you prepare for the worst, it will never come. ISP. According to the Uptime Institute, between 2016 and 2018, 27% of all data center outages were network-related. As more and more systems migrate to the cloud and more and more services are network-dependent, redundant network solutions are becoming increasingly important. In some cases, that could mean two or more providers or two or more kinds of services—fiber, cable, and wireless, as an example. Software. Whether it’s unintended consequences (Y2K) or a straight-up engineering faceplant—in 1999, NASA lost the Mars Climate Orbiter because a subcontractor used imperial units instead of metric like they were supposed to—software is vulnerable. When software goes bad, there’s usually a human to blame, and that’s true for cyber attacks, too; DDoS attacks and other cyber intrusions are on the rise. According to IndustryWeek, in 2018 there was “…a 350% increase in ransomware attacks, a 250% increase in spoofing or business email compromise (BEC) attacks and a 70% increase in spear-phishing attacks in companies overall.” What does this mean for you? It means defensive redundancies—threat detection, firewalls, encryption, etc. It also means having a robust HA infrastructure in case you do come under attack. With an HA system with automatic failover, you can quickly take down the affected systems and bring up clean ones. Data. In 2015, a Google data center in Belgium was struck—multiple times in quick succession—by lightning. While most of the servers were unaffected, some users lost data. Data redundancy is the cornerstone of any HA infrastructure, and new and improved options for data retention are constantly emerging. With the increase in virtual networks, virtual machines, and cloud computing, your company needs to consider both physical and virtual solutions—redundant physical servers, redundant virtual servers—in addition to multiple geographical locations.
How the Right Protection Saves You
As has been mentioned, it’s up to you and your company to examine and identify single points of failure—and other weak spots—in your infrastructure. A firm grasp of where vulnerabilities most often occur (housing, hardware, ISP, software, and data) will give you a better understanding of your own system’s limitations, flaws, and gaps. While you can’t prepare for (or predict) everything, you can eliminate single points of failure and shore up your IT environment. With an HA system that has plenty of redundancies, no single points of failure, and automatic failover, you’ll not only safeguard your revenue stream; you’ll maintain productivity and inter-office operations, keep staff on other tasks, and get better sleep at night (you know, from less anxiety about everything coming to a grinding halt).
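To make the automatic failover sequence described earlier a bit more concrete, here is a deliberately simplified, illustrative sketch of a heartbeat-style monitor. This is not Liquid Web's actual tooling; the node address is made up, and promote_secondary stands in for whatever mechanism a given cluster uses to reassign the primary role (for example, moving a floating IP).

#!/bin/sh
# Illustrative only: poll the primary node every 5 seconds and, after three
# consecutive missed heartbeats, log the event and promote the secondary.
PRIMARY=10.0.0.10      # hypothetical primary node address
MISSED=0
while true; do
    if ping -c 1 -W 2 "$PRIMARY" > /dev/null 2>&1; then
        MISSED=0
    else
        MISSED=$((MISSED + 1))
    fi
    if [ "$MISSED" -ge 3 ]; then
        echo "$(date): primary $PRIMARY unreachable, initiating failover" >> /var/log/failover.log
        promote_secondary   # hypothetical command that reassigns the primary role
        break
    fi
    sleep 5
done

Production HA monitors check far more than reachability (service health, replication lag, performance), but the loop above mirrors the four steps listed earlier: detect the lost heartbeat, log why, promote the elected secondary, and drop the failed primary from the cluster.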
What We Offer at Liquid Web
At Liquid Web, we worry about catastrophic failures (preventing them, primarily, but recovering from them too) so you don’t have to. To this end, we make automatic failovers—and cluster monitoring for the shortest and most seamless transitions—a top priority. Heartbeat, our multi-node monitor and the industry standard, keeps a close eye on the health of your systems, automatically performing failovers when needed. Heartbeat can quickly and accurately identify critical failures and seamlessly transition to an elected secondary node. The automatic failover system in place at Liquid Web is one of many components that comprise our HA infrastructure and uptime guarantee. We offer 1000% compensation as outlined in our SLA’s 100% uptime guarantee. What does this mean? It means that if you experience downtime, we will credit you at 10x the amount of time you were down. At Liquid Web, we also continue to operate at 99.999% uptime (or five 9s), a gold standard for the industry—this equates to only 5.26 minutes of downtime a year, 25.9 seconds of downtime per month, and 6.05 seconds of downtime a week. Five 9s is incredibly efficient, and we are proud to operate in that range. However, we are constantly striving for more efficiency, more uptime, and further optimization.
A Final Reminder: Failures Do Happen
Failures do happen. If Google is susceptible to a catastrophic failure, everyone is susceptible to a catastrophic failure. You can, however, mitigate the frequency and severity of catastrophic failures with a thorough accounting of your infrastructure, a shoring up of your systems, a solid and sensible recovery plan, and plenty of redundancies. Oh, and don’t forget an automatic failover system; it will save you time (and data) when you have to transition from a failing primary node to a healthy secondary node. The post What Happens When There’s a Catastrophic Failure in Your Infrastructure appeared first on Liquid Web.

Local Development in WordPress [Webinar]

WP Engine -

A local development environment enables you to build and test your web development projects right on your computer, which gives you the freedom to focus on coding first without the fear of a problematic deployment. Local dev environments are ideal if you’ve been tasked with building a website or app for someone else or if… The post Local Development in WordPress [Webinar] appeared first on WP Engine.

Com vs Net: Which Should You Choose?

The Domain.com Blog -

There are two basic components within a website address. First, there’s the domain name: it’s what connects the website to a company or individual. It usually contains the name of the business, or speaks to what the business offers, or both. Then, there’s the domain name extension: it identifies what kind of website it is. There are over a thousand domain extensions, although these are the most common: .com, .net, .org, .edu, and .gov. The two most frequently used domain extensions (.com and .net) are used by individuals and businesses who are trying to expand their reach online. Having a website allows you to buy and sell products online, offer research into a specific topic, and spread a captivating message. So with both .com and .net being so common, which domain extension should you use? It all starts with a great domain. Get yours at Domain.com.
3 Factors to Consider When Choosing the Right Domain Extension
Whether you’re a for-profit business, a blogger, or a conspiracy theory debunker, the right domain extension sets the proper expectation for users accessing your site. Imagine trying to purchase shoes online and seeing that the domain extension is a .org. One might make the logical leap that purchasing these shoes is in some way benefiting a nonprofit (as most nonprofits and charities will use the extension .org). While at first this sounds great — even more reason to buy those shoes! — some might consider that a dishonest use of a domain extension. (Not that there are many requirements as to which TLD (or top-level domain) businesses can use, but there are certain expectations and connotations for each one.) To properly utilize the .com or .net domain extension, consider these three factors.
What is the Purpose of the Website
Are you selling a product? Are you offering information? Are you trying to save a species of animals? These are important questions because they strike at the heart of your business and determine which domain extension is appropriate. Here is a breakdown of the most common domain extensions:
.com – Usually offers a product or service. “Com” is short for “Commercial.” Commercial businesses, for-profit companies, personal blogs, and non-personal blogs are all standard for owning a .com domain. That being said, because of its generality, almost any website is acceptable as a .com.
.net – Stands for “Network,” and is generally associated with “umbrella” sites — sites that are home to a wide range of smaller websites. Network sites were initially created for services like internet providers, emailing services, and internet infrastructure. If a business’s desired .com domain name is taken, .net can be considered an alternative.
Other commonly used domain extensions have a more specific purpose:
.org – Short for “Organization.” These sites are generally associated with nonprofits, charities, and other informational organizations that are trying to drive traffic not for commercial purposes. Other organizations who use .org are sports teams, religious groups, and community organizations.
.edu – Short for “Education.” Schools, universities, and other educational sites will utilize the .edu domain extension for an air of authority in the education space.
.gov – Short for “Government.” These sites are required to be part of the U.S. Government. Anything related to U.S. government programs or other departments must have a .gov domain extension.
How Common is Your Business Name
Imagine: A business offers standard products like sewing equipment and materials.
The name of the company is something equally familiar, like Incredible Sewing. Because “incredible” and “sewing” are two commonly used words, the chances that the appropriate domain is available with a .com extension are much lower than with .net. (Although, as of this writing, Incredible Sewing is available in the domain space.) The reason for this is how frequently each domain extension is used. In 2018, upwards of 46% of all registered domains used the .com TLD, while only 3.7% used .net. When trying to come up with the perfect web address, sometimes it feels like every one-word or two-word .com domain name is already taken. This is one reason why some individuals and businesses will choose to use a .net extension versus a .com. (Note: It might be beneficial to check if your desired domain is available before moving forward with a project or company. Going to great lengths to plan in the beginning will save time and prevent you from having to remake those business cards due to an unavailable domain name! If you’re wondering how to search for your domain, check out Domain.com.)
Memorability: Com vs Net
Has this ever happened to you: an advertisement is playing, and you barely catch the tail end of it? You type in the website address only to have it come up blank. Later, you find out you had put in .com when it was a .net, or some other, domain. The fact is, the basic assumption about websites is that they all have a .com domain extension. This is because the second most common top-level domain (.org) is only used about 5% of the time. By going with the tried and true .com, companies can ditch this confusion and not worry about decreased traffic. If this seems absurd, consider this: most cell phone keyboards now come with a “.com” button, though none come with .net, .org, or any other domain extension attached to them.
Other Considerations for Creating a Web Address
While both .com and .net are resourceful domains, there are other considerations to think about when creating a web address. Some of those center around: traditional vs nontraditional domains, domain protection, and SEO (how each performs).
Traditional vs Nontraditional Domains
For most businesses, straddling the traditional and nontraditional is part of the balancing act. While companies want to seem edgy and unique, unconventional approaches can be viewed negatively by more traditional businesses and customers. In the web domain space, there are now over a thousand domain extensions available to the consumer. All but a handful are looked at as “nontraditional.” So, while it might seem valuable to stand out, be sure to consider how it may be viewed professionally.
New TLDs
Back in 2012, ICANN decided to allow businesses to apply for unique domain extensions. This quickly raised the number of TLDs from the original 22. Some of the early applications for domain extensions involved words such as .design, .lol, .love, .book, and .tech. Some of these new TLDs offered immediate value to businesses and consumers who wanted a new and noteworthy domain. Others seemed more like gag websites (hence the stereotype of new TLDs being unprofessional). Either way, these new TLDs have exploded into a comprehensive list. Now, if you’re a yoga company, you can use .yoga. Sell yachts? Make tech? Play tennis? Eat soy? These are all available as domain extensions. That means not only can you create more unique web addresses, but you can also be more specific.
If having a new TLD sounds perfect for your business, be sure to check through the full list to find one that fits your needs.
Domain Protection
Depending on what you want to accomplish with your business website, it might be worth registering both .com and .net. In this way, you can protect yourself from competing companies taking a very similar domain. Otherwise, another company can ride off your success and potentially drive traffic away. As companies grow, they become more susceptible to being confronted with these sorts of schemes. They are then forced to decide whether to buy out the competing website or to let it be. Needless to say, the larger the company, the more they’re going to have to pay. What else should you look out for when it comes to people using similar domain names?
Typosquatting
Typosquatting is when individuals purchase web domains based on common misspellings of words. From our last example of Incredible Sewing, they might take the web domain by spelling “incredible” as “incredibel.” By systematically using misspellings, these forms of leeching can drive substantial traffic away from the intended website. These typosquatters can then offer to be bought out, or they’ll just continue to steer traffic to other organizations that they own. As of right now, the most viable option for protecting yourself is to purchase multiple domains, although this is becoming more difficult with each new TLD.
SEO: How Each Performs
Search engine optimization has to do with complex algorithms that determine how relevant your website is to a given search. In terms of which domain extension you should pick (between com vs net), there is no evidence that suggests one does better than the other. It can be noted, however, that having certain keywords within your web domain can improve your SEO ranking. Having “sewing” within your domain will make your site more relevant for keyword searches around sewing. It’s that straightforward.
Com: Pros and Cons
As an overview, let’s run through the benefits and pitfalls of using a .com domain extension:
Pros – Using a “commercial” extension, companies and individuals can signify their intention. Whether that’s to sell a product or service or to promote your work, the .com does this in a manner that’s professional and can be trusted. Also, there’s no worrying over your web address being confused.
Cons – Because nearly half of all websites are based on .com, finding the perfect domain name that isn’t already in use can be tough. It can be pricey to buy out an existing domain and time-consuming to find one that’s available.
Net: Pros and Cons
Originally designed for network organizations like internet providers and email sites, .net sites have been rising in popularity as an alternative to .com.
Pros – Many fewer .net website domains have been registered than .com domains. This means there’s a higher chance of getting your ideal web domain. Also, because of its original design, .net sites are often associated with having a community around them. This can promote a positive image.
Cons – These websites will need to market harder to compete with a similar .com site. Automatically, people think that any website is a .com site, which means businesses can lose traffic due to confusion.
It all starts with a great domain. Get yours at Domain.com.
How to Create the Perfect Domain Name
Once you’ve decided whether you’re going with a .com or a .net domain extension, it’s then important to make sure it’s paired with the perfect domain name.
The ideal address will do one of three things: state your business, state what your business does, or incite intrigue. The first two are preferred, while the third is more of a backup strategy. Because many .com and .net sites have already been taken, sometimes a roundabout domain will be the best solution. A domain name should also have a few decisive characteristics. Try creating a web address that is some combination of clear, concise, unmistakable, and short.
Straightforward Approach
The first step is always to check if the business name is available as a domain. If your business name has been taken, check to see how up-to-date the website is. If it’s not current and doesn’t look like it’s being used, it might be possible to purchase the domain name from whoever owns it. Having the business name as the domain name is ideal because it’s the logical extension of that business. Starbucks has Starbucks.com. Apple has apple.com. If the business name is unavailable, sometimes it helps to add a modifier word. If starbucks.com was already taken, the next logical domain would be starbuckscoffee.com. In the same way, Apple would be able to use appleelectronics.com. It’s not as short as only having the business name, but it is still clear, concise, and unmistakable.
Branding a Unique Term
Another idea for getting the perfect website domain is to coin a term that’s unique to your business. Then you can use that term within your brand’s website. By doing this, you not only craft a unique web identity; the result can also be concise and short.
Conclusion
When determining which domain extension is better, com vs net, always be sure to look inward first. Acknowledge the purpose of putting your content online. Whether it’s to market a brand, sell a product, or connect various smaller sites by theme, each domain extension has its proper setting. By crafting the perfect domain name with the suitable domain extension, you can have a web address that is memorable, unique, and fitting for your business.
More Information
To find out more about the differences between new TLDs and gTLDs, check out our domain blog today! There you’ll find other resources like How to Block an IP Address, How to Design a Website, and more. The post Com vs Net: Which Should You Choose? appeared first on Domain.com | Blog.
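For readers who like the command line, a rough first pass at the “is the name taken?” question can be done with standard DNS and WHOIS tools before heading to a registrar search. This is only an illustrative sketch; incrediblesewing.com is just the example name used in the article, and WHOIS output formats vary by registry, so a registrar lookup (for example, Domain.com’s search) remains the authoritative check.

# If the name has nameservers delegated, it is almost certainly registered.
dig +short NS incrediblesewing.com
# No output does not guarantee availability, so confirm with WHOIS:
whois incrediblesewing.com | head -n 20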

What Are Live-State Snapshots with My VPS Plan

InMotion Hosting Blog -

If you have VPS hosting, then you probably already know how important the safety and back-up features are for keeping your website secure. One of the more recent additions to this is the ability to take a snapshot of your VPS. Many different companies offer their own snapshot products. Essentially, they offer the same features. The software will take a single backup of the VPS and give you the option to restore back to the original save point at any time. Continue reading What Are Live-State Snapshots with My VPS Plan at The Official InMotion Hosting Blog.

Meet The British History Podcast: “History, the Way It’s Meant to Be Heard”

DreamHost Blog -

Think you hate history? You’re probably wrong, says Jamie Jeffers, founder of The British History Podcast. The problem, he says, isn’t that history is dry or boring — the problem is that it is taught that way, with rote memorization and little relevance to the modern world. “People are people,” Jeffers says. The stories of history, even ancient history, “are relevant and compelling on their own. They are only made irrelevant by poor storytellers who forget that simple truth — that history is the story of humanity. It’s about all of us.” With his podcast, which has been in production for almost a decade and has cultivated a loyal fan base over hundreds of episodes, Jeffers tells the stories of British history by tapping into that humanity. In his chronological retelling, you won’t hear lists of names, treaties, and battles, but rather tales of the cultural underpinnings behind the actions of kings and the day-to-day lives of the people of Britain. In Jeffer’s words, it was a happy convergence of “transatlantic immigration, global financial collapse, and ancient human traditions” that took him from unemployed lawyer to full-time podcaster creating the ultimate passion project, one that draws on his own personal history, builds his future, and connects us all to the past. Related: Step-by-Step Guide: How to Start a Podcast With WordPress History Through Storytelling It’s all his grandfather’s fault, Jeffers says. Jeffers, who moved to the US from the UK when he was a kid, learned the history of his homeland from his grandfather, who wanted to make sure young Jeffers heard stories of his ancestors alongside his American education. “He took it upon himself to teach me what he knew about British History as I was growing up,” Jeffers says. “He was an amazing storyteller, and so my first experience with history was through hearing about amazing events and figures. It was learning history as people traditionally taught it, as an oral history.” His grandfather’s storytelling taught Jeffers to love history — at least until he actually studied the subject in school. “I went to high school, and history was suddenly reduced to memorizing dates and names for a test,” Jeffers says. “No context, no nuance, no wonder at our shared past. It was such a disappointing experience that I lost interest in the study of history.” Eventually, Jeffers went on to study English in college and then become a lawyer. For the most part, Jeffers tabled his interest in history — that is, until the recession forced it back into his life. Global Financial Collapse The 2008 financial crisis wasn’t kind to most people — Jeffers included. As money got tight, he looked around for cheap sources of entertainment, leading him straight into the world of podcasting. “The first show I found was The Memory Palace, which is still going, and it became a regular companion when I was at the gym or taking my dog, Kerouac, for a walk. The host, Nate DiMeo, couldn’t have known it, but the way he talked about little odd stories from history made me feel like I was reconnecting with part of my childhood.” But a search for podcasts about British history was disappointing, to say the least. It brought him to a “show that was done by a guy who seemed to be reading random entries off Wikipedia. Incorrect entries, for that matter.” Back in those pre-Serial days, podcasting was a new thing — it was “pretty punk rock,” he says. 
“Few people knew about it, and even fewer people did it, which meant that many topics weren’t being covered and those that were weren’t being covered well. Quality was definitely a problem.” Jeffers did find a history show or two but occasionally found himself wishing for a good podcast that took on a chronological history of Britain. Then one day the financial collapse hit closer to home, and Jeffers lost his job as an attorney. “The part that people rarely talk about with unemployment is how boring it is,” he says. “So I decided that any time that I wasn’t job searching would go towards making that show I always wanted.” The podcast launched with its first season in May 2011, beginning with the Ice Age and prehistoric Britannia and moving into the Roman conquest of Britain. At first, Jeffers’ vision was nothing more than a fun hobby that only his parents would listen to. “Eight years later, it’s my life’s work,” Jeffers says. “Oh, and my parents still don’t listen to it. But a lot of other people do.” Today, the podcast boasts more than 3,000 reviews on iTunes and shows up on lists such as recommended podcasts for fans of Serial and Parade’s list of top history podcasts.
Beyond the Battle
Search your favorite podcasting app these days, and you’ll find history shows aplenty. But the British History Podcast (BHP) isn’t your run-of-the-mill history podcast, Jeffers says. “Many history podcasts are dry accounts that only perk up when they can talk about men swinging swords. They skip over the culture of the time, other than as it pertains to kings and generals, and then give you incredibly granular details of men killing other men in battle.” What interests Jeffers (and his audience) are the stories behind the conflicts. To truly understand and care about an action-packed battle, audiences need to appreciate the stakes. “There’s a reason why The Phantom Menace sucked, and it wasn’t the fight choreography,” he says. “Context is king, and that’s where our focus is.” That’s why the BHP works through, at length, the political, social, and cultural realities that drive the “action scenes” of history. Another way the show’s different: “We talk about women. It’s strange how often they’re written out.” Jeffers cites one of his favorite little-known figures from history: Lady Æthelflaed of Mercia, who reigned in an era when women were so overlooked, even vilified, that there weren’t any queens — just women known as “the king’s wife.” “And then you have the noble daughter of Alfred the Great, a woman named Æthelflaed, who ruled Mercia on her own after her husband died. She led armies. She fought off a massive force of Vikings at Chester by throwing everything, up to and including the town’s beehives, at them.
This woman was so influential that after she died, even though the culture was deeply misogynistic, the Mercians chose to follow her daughter.” Jeffers’ favorite era of British history is the Middle Ages — “which I’m sure most of our listeners already know since we’ve spent about seven years in them so far.” The BHP is currently detailing the reign of King Æthelred Unræd (aka King Ethelred the Unready), who is often blamed for the downfall of the Anglo-Saxons — “though I think there’s plenty of blame to go around.” Jeffers is most looking forward to covering the 15th-century Wars of the Roses, a series of English civil wars: “the diaries we have out of that era are stunning and show the real human toll that this conflict was taking on the population.” The planned finish line is the dawn of WWII, which could take another decade to reach. Until then, Jeffers is dedicated to dissecting and retelling as many stories and cultural tidbits as he finds relevant — a quest that fits nicely in the podcasting sphere. “Can you imagine The History Channel allowing me to do over 300 episodes of British History and spend literally hours just talking about how food was handled in the middle ages? Part of what makes podcasting so amazing is that it allows for niche shows like the BHP to exist.” Want to meet more awesome site owners? Subscribe to the DreamHost Digest for inside scoops, expert tips, and exclusive deals.
Behind the Scenes
Jeffers is quick to remind his audience that he isn’t a professional historian, though his “magpie approach to education” serves him well as a “history communicator.” “My educational background has a common throughline of narrative building and research,” Jeffers says. “I studied storytelling in college, getting a degree in creative writing while also spending a lot of time taking courses in subjects like critical and cultural theory. As for law, my focus was as a litigator. What many people don’t realize about litigators is that a lot of what you do is tell stories to the judge or jury. You do a lot of deep research and then turn it into an easy-to-digest narrative for why your side should win. Turns out that these skills serve very well for teaching history — especially little-known history.” Each 25- to 40-minute episode takes about 40 to 50 hours to produce. As for structuring the stories, Jeffers rarely finds a clear “pop history narrative” to build around because the history of medieval Britain he aims to create simply doesn’t exist elsewhere. Instead, he digs through secondary sources, fact-checks primary sources, scans and fact-checks scholarly articles for alternative theories, and then looks into “any rabbit holes that pop up during the research.” The lengthy editing process is a collaboration between Jeffers and his partner and co-producer Meagan Zurn — or Zee, as she prefers. “Then I finally record the episode, do sound editing, and launch. It’s quite a process.” There’s no way Jeffers could juggle a full-time job with all of the research and planning involved. But thanks to a dedicated community of listeners, the podcast moved from passion project to day job. He doesn’t even need to run ads — it’s funded entirely through donations and a membership, which grants paying listeners access to exclusive content. “I’ve really lucked out in the community that has developed around the podcast,” Jeffers says. In fact, he says his favorite part of producing the podcast is connecting and collaborating with the community.
“They’re really supportive and enthusiastic people.” The British History Podcast official web page, complete with membership content and a full archive of eight years’ worth of podcasting, is proudly hosted by DreamHost. Like the podcast itself, the website has been a DIY project: “When you’re a small project like this, anything you can do yourself, you do.” The site uses DreamPress Pro with Cloudflare Plus, “which has allowed us to have a rather stable user experience even during high load times like on launch days,” Jeffers says. “The tech support team has been really helpful in finding solutions to some of the more thorny problems of running a podcast site with a membership component.” Do What You Love with DreamPress: DreamPress’ automatic updates and strong security defenses take server management off your hands so you can focus on creating great content.
A Romance for the Ages
Jeffers says he’s met some incredible people through the podcast community, including his producer — and now wife — Zee. In addition to co-producing the BHP, Jamie and Zee are partnering up for a new venture: parenting. Back in the early days of the BHP, Jeffers used an “old clunky Frankenstein computer that kept breaking down. I had a hard drive crash, a power supply short, a motherboard fry. I swear that damn computer had gremlins, and as a result, I repeatedly had to go on our community page and apologize for episodes getting delayed.” The community ganged up and insisted his problems stemmed from using a PC — all except one person, who stood her ground against the Mac fans. “I believe her exact phrase was, ‘You’re all caught up in a marketing gimmick,’” Jeffers says. A few months later, when he had an idea for a side project and wanted honest feedback, he remembered this listener’s well-researched, uncompromising arguments. “And half a world away, in Southern England, Zee got a message out of the blue,” Jeffers says. “It ended up being the smartest thing I’ve ever done. The person I reached out to was a Ph.D. candidate in sociology and media with a background in anthropology and archaeology. She understood on an intrinsic level the ethics of the show, the long-term strategy, the purpose of it, and what it could be going forward.” And just like that, Jeffers had a collaborator: “One day, I was doing the show entirely on my own; the next day I was running all my ideas by her, and I structured my life so that I could work with her.” They discussed the show daily; Zee reviewed Jeffers’ scripts and prompted heated debates over the content. “And through that, the show dramatically improved in tone and style. She also became my best friend. Truth be told, I think she was my best friend from the first time we talked.” “Much later, we met in person, and it was clear my ferociously intelligent best friend was also really attractive. Eventually, we started dating. Then she proposed to me one Christmas morning, and now we’re expecting our son this July.” By the way, Jeffers still uses a PC.
Looking Forward
Overall, creating the podcast has been a rewarding creative outlet for both Jeffers and Zee — but the work can be draining. “It’s very satisfying but very intensive work to hit the quality we demand of ourselves.” For now, outside the show, his and Zee’s primary focus is preparing for parenthood. The podcast is likewise approaching a monumental milestone: the Norman Conquest of 1066.
“This invasion changed everything, and it’s going to usher in a whole new era of the podcast as well,” Jeffers says. “We have a whole new culture to talk about, along with larger-than-life characters to introduce. The story is about to get a whole lot bigger.” What’s your next great idea? Tell the world (wide web) about it with DreamHost’s Managed WordPress Hosting, built to bring your dream to life without breaking the bank — or making any compromises in quality. The post Meet The British History Podcast: “History, the Way It’s Meant to Be Heard” appeared first on Website Guides, Tips and Knowledge.

What Is ASP.NET Hosting?

HostGator Blog -

The post What Is ASP.NET Hosting? appeared first on HostGator Blog. One of the most important decisions every website owner must make is choosing the right type of web hosting services. And there are a lot of different types of hosting plans out there. Selecting the best web hosting solution for your website depends on a number of different factors, including the programs you use to build and maintain your website. For a certain subset of website owners, that makes considering ASP.NET web hosting services an important part of the process of finding the best plan for you. Before we can provide a good explanation of what ASP.NET web hosting is and who it’s right for, we need to define what ASP.NET is.
What Is ASP.NET?
ASP.NET is an open source framework programmers can use to build dynamic websites, apps, games, and online services with the .NET platform. In ASP.NET, programmers build web forms that become the building blocks of the larger website or app they work to create. While ASP.NET is not as commonly used as PHP—the most ubiquitous of the programming languages used to build websites—it provides some distinct benefits for web designers that make it a strong choice for many websites.
10 Pros of Using ASP.NET
ASP.NET isn’t for everybody, which is why it has a much smaller market share than PHP. But the pros of using ASP.NET to build your website or app are notable enough to make it well worth consideration. Here are ten top reasons to consider using ASP.NET.
1. It’s open source. As an open-source framework, any developer or programmer can make changes to the ASP.NET architecture to make it work the way they need. And often developers will share any updates or improvements they make with the larger community, so you can benefit from the work being done by a wide number of talented, skilled ASP.NET programmers. Any open source piece of software or program gets the benefit of all the great minds that use it. Every programmer who sees a way to make it more flexible, secure, or feature-rich can contribute to it. With over 60,000 active contributors, you can count on ASP.NET to just keep getting better.
2. It’s known for being high speed. ASP.NET makes it easier to build a site while using less code than other programming options. With less code to process, websites and apps load faster and more efficiently. ASP.NET also uses compiled code rather than interpreted code. Compiled code is translated into object code once, then executed, and every time after that it loads faster. In contrast, interpreted code has to be read and interpreted every time a user accesses it, which slows things down. While you always have options for speeding up your website, no matter what you build it with, ASP.NET means you’re starting off with a website that will work and load that much faster than with other options you could choose.
3. It’s low cost. In addition to being open source, ASP.NET is also free. You can download the latest version of the software from the website for nothing. You can write ASP.NET code in any simple text editor, including free options like Microsoft’s Visual Studio application. In some cases, as with Visual Studio, the most useful text editors have a free basic plan you can use to start, and paid versions that provide more useful features for the common needs of big businesses, such as collaboration options. You may end up spending some money to get the full use of it you need, but businesses on a budget have the option of using ASP.NET for free.
4. It’s relatively easy to use. While PHP has a reputation for being easier to use, ASP.NET also has many features that make it intuitive for programmers or reduce the amount of work required to create a website or app. For one thing, programming with ASP.NET requires creating less code than most other options. That means both less time spent working on code for developers and faster page loads, because there is less code to process. For another, it offers code-behind mode, which separates the design and the code. This creates separate files for the design part of a page and the code part of a page, making it easier to test things out and make changes as you go without messing anything up. Finally, ASP.NET allows for template-based page development and server-side caching, both of which mean you can make the design elements you build go further and easily re-use them for different parts of the website or application. While ASP.NET is primarily a resource for professional developers rather than beginners, there is a range of free resources available for those who want to learn the ropes.
5. It has a large developer community. Even though ASP.NET is relatively easy to use, many website owners will want to hire a professional developer to help with the particulars of building out a website or app. Luckily, the ASP.NET community is big enough that finding a skilled developer to hire who has experience in using the framework shouldn’t be a problem in most cases. And having a large community also means that, as open source software, there are more smart minds working to improve upon ASP.NET on a regular basis. Many of the issues it had in the past have been fixed, and anything about it you don’t like today may well be taken care of in the months or years to come.
6. It requires less setup for Windows users. If your business already uses Windows products, then picking a Windows framework to build your website or app on will make the overall process easier on your team. Since it’s made by Microsoft, ASP.NET works seamlessly with other Windows applications. Getting your various products to play nice together and work in tandem will be simple. And you won’t have to worry about an update to ASP.NET or any of your other Windows applications screwing up compatibility. Microsoft will make sure that updated versions of its various products and applications still work well together, even as they all evolve over time.
7. It offers support for multiple languages. Programmers using ASP.NET have a couple of different programming languages they can choose from: C# and VB.NET. C# in particular is a popular option with many developers because it’s powerful, flexible, and easy to learn. It’s one of the most popular programming languages today and is known for being particularly well suited to building Microsoft applications, games, and mobile development.
8. It’s now compatible with all servers. Some articles on ASP.NET list one of the main disadvantages as being that it only works with Windows servers. In fact, several years ago Microsoft released ASP.NET Core, which made the framework compatible with all types of servers—Linux, macOS, and Windows. While it still may work best with a Windows server, since it was initially designed with that compatibility in mind, you can use ASP.NET no matter which type of web server you prefer.
9. It’s supported by Microsoft. Microsoft is one of the biggest and most powerful tech companies in the world.
Any product that has their backing can count on regular maintenance, updates, and improvements. With some free products, there’s always the risk that their creators will stop supporting them and anyone dependent on them will have to start from scratch, but ASP.NET has the power of a company that’s not going anywhere behind it.
10. It’s got a great reputation for security. One of the main areas where most experts agree that ASP.NET beats PHP is security. The framework supports multi-factor authentication protocols that allow users to control who has access to the website or app they create with it. And ASP.NET includes built-in features that protect against common attacks like cross-site scripting (XSS), SQL (structured query language) injection, open redirect attacks, and cross-site request forgery (CSRF). Website security is an increasingly important issue for all website owners to consider, especially as hacks and high-profile data breaches become more common. Choosing ASP.NET is one of several steps you can take to make your website more secure.
5 Cons of Using ASP.NET
That’s a long list of pros, which may have you wondering why so many people still choose PHP over ASP.NET. It’s not all positives; there are a few downsides to choosing ASP.NET as well.
1. It’s compatible with fewer CMSes than PHP. One of the main reasons that some people prefer PHP is that it works with popular content management systems like WordPress. For people more comfortable using a CMS, which makes creating and updating a website easier if you don’t know how to code, ASP.NET puts a serious limitation in their path. With over a quarter of the entire internet running on WordPress, and content management systems like Drupal and Joomla powering much of the web as well, that makes PHP the natural choice for a majority of websites.
2. It has fewer templates and plugins. Because ASP.NET has fewer users, it also has fewer extras. With fewer people to develop useful features like templates and plugins, there just aren’t as many available to users of ASP.NET. These kinds of extras extend the functionality of a program and can make it easier for people to create the exact kind of website or app they want. While there are still definitely options you can take advantage of with ASP.NET, fewer choices means getting your website where you want it to be will be harder.
3. It’s potentially expensive if you’re not already using Windows. As we already mentioned, using ASP.NET is technically free. But using it tends to make the most sense for companies that already have access to a number of Windows products. One of the big benefits it offers is working seamlessly with all those other Windows solutions, so if you need something a Windows product offers while working on your website in ASP.NET, you’ll likely have to shell out for an additional product. Not everyone who uses ASP.NET will feel the need to spend money on other Windows solutions, but some will. If you end up deciding you need the additional functionality various Windows products provide, the cost can quickly add up.
4. It has a smaller community than PHP. While ASP.NET has a community that’s devoted, it’s much smaller than the community that uses PHP. That means fewer support resources and fewer developers working to make the framework better. It also means businesses will find it harder to find professional developers who are skilled in ASP.NET than in PHP (although far from impossible).
5 Cons of Using ASP.NET
That’s a long list of pros, which may have you wondering why so many people still choose PHP over ASP.NET. It’s not all positives; there are a few downsides to choosing ASP.NET as well.

1. It’s compatible with fewer CMSes than PHP.
One of the main reasons some people prefer PHP is that it works with popular content management systems like WordPress. For people more comfortable using a CMS, which makes creating and updating a website easier if you don’t know how to code, ASP.NET puts a serious limitation in their path. With over a quarter of the entire internet running on WordPress, and content management systems like Drupal and Joomla powering much of the web as well, PHP is the natural choice for a majority of websites.

2. It has fewer templates and plugins.
Because ASP.NET has fewer users, it also has fewer extras. With fewer people developing useful add-ons like templates and plugins, there just aren’t as many available to ASP.NET users. These kinds of extras extend the functionality of a framework and can make it easier to create the exact kind of website or app you want. While there are still plenty of options you can take advantage of with ASP.NET, fewer choices mean getting your website where you want it to be will be harder.

3. It’s potentially expensive if you’re not already using Windows.
As we already mentioned, using ASP.NET is technically free. But it tends to make the most sense for companies that already have access to a number of Microsoft products. One of the big benefits it offers is working seamlessly with those other Microsoft solutions, so if you need something a Microsoft product offers while working on your website in ASP.NET, you’ll likely have to shell out for an additional product. Not everyone who uses ASP.NET will feel the need to spend money on other Microsoft solutions, but some will, and if you decide you need the additional functionality those products provide, the cost can quickly add up.

4. It has a smaller community than PHP.
While the ASP.NET community is devoted, it’s much smaller than the community that uses PHP. That means fewer support resources and fewer developers working to make the framework better. It also means businesses will find it harder (although far from impossible) to find professional developers skilled in ASP.NET than in PHP, and you won’t have as many forums or user groups to turn to with questions. While that is an inconvenience, there is enough of a community out there that you may not feel the lack if you do choose ASP.NET. But if a supportive community is an important part of your decision about what to build your website or app with, other options beat ASP.NET in this category.

5. It’s harder to learn than PHP.
ASP.NET is relatively easy for developers to learn, but it has more of a learning curve than PHP. And because you can’t pair it with intuitive content management systems like WordPress, it’s generally out of reach for beginners who can’t afford to learn a programming language themselves or hire a professional to build out their website. For big businesses with a budget to put toward building a website or app, this is likely to be a non-issue, since finding skilled ASP.NET programmers to hire won’t be too hard. But for smaller businesses and individuals building a more basic website, it’s a good reason to pick a simpler solution.

What Is ASP.NET Hosting?
Now that we’ve covered the basics of what ASP.NET itself is, we come back around to the main question at hand: what is ASP.NET web hosting? ASP.NET hosting is any web hosting plan designed to be compatible with ASP.NET. In many cases that means Windows hosting, but since ASP.NET is now compatible with other types of servers, it doesn’t have to. Two main things define ASP.NET hosting services:

1. It promises compatibility with ASP.NET and all associated web applications.
ASP.NET hosting solutions must provide seamless compatibility with ASP.NET itself. But you’ll also want to make sure your web hosting plan is compatible with the other web applications you’re likely to use alongside ASP.NET, such as the Plesk control panel and any other Windows products you rely on.

2. It has an easy installation option.
A good ASP.NET hosting plan will include simple one-click installation that lets you add ASP.NET to your web hosting platform within seconds. You have enough work to do building your website, game, or app; you don’t have time to spend on a complicated installation process. A good ASP.NET hosting option ensures you don’t have to spend any longer on this step than necessary.

What to Look for in an ASP.NET Web Hosting Plan
If you determine that ASP.NET is the best option for your website, then an ASP.NET hosting plan is a smart choice. When researching your options, look for a web hosting plan that includes:

A 99.9% Uptime Guarantee – Uptime is the amount of time your website is working and accessible to visitors. It’s one of the main differentiating factors between web hosting companies. The best companies promise at least 99.9% uptime and back that claim up with a money-back guarantee (the quick arithmetic sketch after this list shows what 99.9% actually allows).

24/7 Customer Support – The moment you have an issue with your website, you want it fixed. 24/7 customer support means you can reach someone right away and get the problem taken care of faster.

Plenty of Bandwidth – Look for an ASP.NET hosting provider that offers plans at different levels, especially if your website or app will need a significant amount of bandwidth. If you need it, make sure you can get an enterprise-level plan compatible with ASP.NET.

A Reputation for Security – Choosing ASP.NET to build your website is one smart step you can take for security; choosing the right web hosting provider is another. A web hosting provider that uses strong firewalls and offers security features like an SSL certificate provides an extra level of protection that keeps your website and its visitors safer.
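To put the 99.9% figure in perspective, here is a quick back-of-the-envelope calculation, written as a small C# console sketch and not tied to any particular host’s terms: a 99.9% uptime guarantee still permits roughly 8.8 hours of downtime per year, or about 44 minutes a month.

    using System;

    class UptimeBudget
    {
        static void Main()
        {
            double uptimeGuarantee = 0.999;     // a 99.9% uptime promise
            double hoursPerYear = 365 * 24;     // 8,760 hours in a non-leap year

            // Downtime the guarantee still allows.
            double downtimeHoursPerYear = (1 - uptimeGuarantee) * hoursPerYear;

            Console.WriteLine($"Allowed downtime per year:  {downtimeHoursPerYear:F1} hours");             // ~8.8 hours
            Console.WriteLine($"Allowed downtime per month: {downtimeHoursPerYear * 60 / 12:F0} minutes"); // ~44 minutes
        }
    }

A 99.99% guarantee, by contrast, shrinks that budget to under an hour per year, which is why the uptime number and the money-back guarantee behind it are worth reading closely.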
HostGator’s ASP.NET web hosting services offer everything on that list. We make it easy to add ASP.NET to your hosting account so you can get started faster, and we have one of the top reputations of any web hosting company in the industry. If you’re still not sure about the right web hosting provider for your ASP.NET website, our sales representatives and support team are available 24/7 to answer any questions you have. And if you’re looking into a different service, like dedicated server hosting, cloud hosting, or shared hosting plans, our experienced team can help you find the best package for your needs.

Find the post on the HostGator Blog
