Industry Buzz

Using cPanel Webmail for Branded Email Accounts

cPanel Blog -

You need a professional email address for your business, and here's how to make that happen with cPanel webmail. Putting your best foot forward as both an individual and a business can start with something as simple as having a professional-looking email address. For a bit of context, think back to the email address you had in high school or college. How many of you had a favorite movie or band or sports team in ...

WordPress 5.4 “Adderley”

WordPress.org News -

Here it is! Named "Adderley" in honor of Nat Adderley, the latest and greatest version of WordPress is available for download or update in your dashboard. Say hello to more and better. More ways to make your pages come alive. With easier ways to get it all done and looking better than ever—and boosts in speed you can feel.

Welcome to WordPress 5.4
Every major release adds more to the block editor. More ways to make posts and pages come alive with your best images. More ways to bring your visitors in, and keep them engaged, with the richness of embedded media from the web's top services. More ways to make your vision real, and put blocks in the perfect place—even if a particular kind of block is new to you. More efficient processes. And more speed everywhere, so as you build sections or galleries, or just type in a line of prose, you can feel how much faster your work flows.

Two new blocks. And better blocks overall.
- Two brand-new blocks: Social Icons and Buttons make adding interactive features fast and easy.
- New ways with color: gradients in the Buttons and Cover block, toolbar access to color options in Rich Text blocks, and, for the first time, color options in the Group and Columns blocks.
- Guess a whole lot less! Version 5.4 streamlines the whole process for placing and replacing multimedia. Now it works the same way in almost every block.
- And if you've ever thought your image in the Media+Text block should link to something else—perhaps a picture of a brochure should download that brochure as a document? Well, now it can.

Cleaner UI, clearer navigation—and easier tabbing!
- Clearer block navigation with block breadcrumbs, and easier selection once you get there.
- For when you need to navigate with the keyboard, better tabbing and focus. Plus, you can tab over to the sidebar of nearly any block.
- Speed! 14% faster loading of the editor, 51% faster time-to-type!
- Tips are gone. In their place, a Welcome Guide window you can bring up when you need it—and only when you need it—again and again.
- Know at a glance whether you're in a block's Edit or Navigation mode. Or, if you have restricted vision, your screen reader will tell you which mode you're in.

Of course, if you want to work with the very latest tools and features, install the Gutenberg plugin. You'll get to be the first to use new and exciting features in the block editor before anyone else has seen them!

Your fundamental right: privacy
5.4 helps with a variety of privacy issues around the world. So when users and stakeholders ask about regulatory compliance, or how your team handles user data, the answers should be a lot easier to get right. Take a look:
- Personal data exports now include users' session information and users' location data from the community events widget. Plus, a table of contents!
- See progress as you process export and erasure requests through the privacy tools.
- Little enhancements throughout give the privacy tools a cleaner look. Your eyes will thank you!

Just for developers
Add custom fields to menu items—natively. Two new actions let you add custom fields to menu items—without a plugin and without writing custom walkers. On the Menus admin screen, wp_nav_menu_item_custom_fields fires just before the move buttons of a nav menu item in the menu editor. In the Customizer, wp_nav_menu_item_custom_fields_customize_template fires at the end of the menu-items form-fields template.
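To make the new action concrete, here is a minimal, hypothetical sketch that renders an extra "subtitle" text field on every menu item in the menu editor. The field name, meta key, function name, and text domain are illustrative, the code that saves the field is omitted, and the version check mirrors the advice that follows for plugins that previously shipped a custom walker.

```php
<?php
/*
 * Hypothetical sketch: add a "subtitle" field to menu items via the
 * wp_nav_menu_item_custom_fields action introduced in WordPress 5.4.
 * Field name, meta key, and names below are illustrative only.
 */
function myplugin_menu_item_subtitle_field( $item_id, $item, $depth, $args ) {
	$subtitle = get_post_meta( $item_id, '_myplugin_subtitle', true );
	?>
	<p class="description description-wide">
		<label for="edit-menu-item-subtitle-<?php echo esc_attr( $item_id ); ?>">
			<?php esc_html_e( 'Subtitle', 'myplugin' ); ?><br />
			<input type="text" class="widefat"
				id="edit-menu-item-subtitle-<?php echo esc_attr( $item_id ); ?>"
				name="menu-item-subtitle[<?php echo esc_attr( $item_id ); ?>]"
				value="<?php echo esc_attr( $subtitle ); ?>" />
		</label>
	</p>
	<?php
}

if ( version_compare( get_bloginfo( 'version' ), '5.4', '>=' ) ) {
	// WordPress 5.4+: let core fire the new action inside the menu editor.
	add_action( 'wp_nav_menu_item_custom_fields', 'myplugin_menu_item_subtitle_field', 10, 4 );
} else {
	// Older WordPress: fall back to the plugin's pre-5.4 custom walker (not shown).
}
```

Persisting the submitted value would typically hook an action such as wp_update_nav_menu_item, which is outside the scope of this sketch.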
Check your code and see where these new actions can replace your custom code, and if you're concerned about duplication, add a check for the WordPress version.

Blocks! Simpler styling, new APIs and embeds
- Radically simpler block styling. Negative margins and default padding are gone! Now you can style blocks the way you need them. And a refactor got rid of four redundant wrapper divs.
- If you build plugins, now you can register collections of your blocks by namespace across categories—a great way to get more brand visibility.
- Let users do more with two new APIs: block variations and gradients.
- In embeds, the block editor now supports TikTok—and CollegeHumor is gone.

There's lots more for developers to love in WordPress 5.4. To discover more and learn how to make these changes shine on your sites, themes, plugins and more, check the WordPress 5.4 Field Guide.

The Squad
This release was led by Matt Mullenweg, Francesca Marano, and David Baumwald. They were enthusiastically supported by a release squad:
- Editor Tech: Jorge Filipe Costa (@jorgefelipecosta)
- Editor Design: Mark Uraine (@mapk)
- Core Tech: Sergey Biryukov (@sergeybiryukov)
- Design: Tammie Lister (@karmatosed)
- Docs Coordinator: JB Audras (@audrasjb)
- Docs & Comms Wrangler: Mary Baum (@marybaum)

The squad was joined throughout the release cycle by 552 generous volunteer contributors who collectively worked on 361 tickets on Trac and 1226 pull requests on GitHub. Put on a Nat Adderley playlist, click that update button (or download it directly), and check the profiles of the fine folks that helped: 0v3rth3d4wn, 123host, 1naveengiri, @dd32, Aaron Jorbin, Abhijit Rakas, abrightclearweb, acosmin, Adam Silverstein, adamboro, Addie, adnan.limdi, Aezaz Shaikh, Aftab Ali Muni, Aki Björklund, Akib, Akira Tachibana, akshayar, Alain Schlesser, Albert Juhé Lluveras, Alex Concha, Alex Mills, AlexHolsgrove, alexischenal, alextran, alishankhan, allancole, Allen Snook, alpipego, Amir Seljubac, Amit Dudhat, Amol Vhankalas, Amr Gawish, Amy Kamala, Anantajit JG, Anders Norén, Andrés, Andrea Fercia, Andrea Tarantini, andreaitm, Andrei Draganescu, Andrew Dixon, Andrew Duthie, Andrew Nacin, Andrew Ozz, Andrew Serong, Andrew Wilder, Andrey Savchenko, Andy Fragen, Andy Meerwaldt, Andy Peatling, Angelika Reisiger, Ankit Panchal, Anthony Burchell, Anthony Ledesma, apedog, Apermo, apieschel, Aravind Ajith, archon810, arenddeboer, Ari Stathopoulos, Arslan Ahmed, ashokrd2013, Ataur R, Ate Up With Motor, autotutorial, Ayesh Karunaratne, BackuPs, bahia0019, Bappi, Bart Czyz, Ben Greeley, benedictsinger, Benjamin Intal, bibliofille, bilgilabs, Birgir Erlendsson, Birgit Pauli-Haack, BMO, Boga86, Boone Gorges, Brad Markle, Brandon Kraft, Brent Swisher, Cameron Voell, Carolina Nymark, ceyhun0, Chetan Prajapati, Chetan Satasiya, Chintesh Prajapati, Chip Snyder, Chris Klosowski, Chris Trynkiewicz (Sukces Strony), Chris Van Patten, Christian Sabo, Christiana Mohr, clayisland, Copons, Corey McKrill, crdunst, Csaba (LittleBigThings), Dademaru, Damián Suárez, Daniel Bachhuber, Daniel James, Daniel Llewellyn, Daniel Richards, Daniele Scasciafratte, daniloercoli, Darren Ethier (nerrad), darrenlambert, Dave Mackey, Dave Smith, daveslaughter, DaveWP196, David Artiss, David Binovec, David Herrera, David Ryan, David Shanske, David Stone, Debabrata Karfa, dekervit, Delowar Hossain, Denis Yanchevskiy, Dhaval kasavala, dhurlburtusa, Dilip Bheda, dingo-d, dipeshkakadiya, djp424, dominic_ks, Dominik Schilling, Dotan Cohen, dphiffer, dragosh635, Drew Jaynes, eclev91, ecotechie, eden159, Edi Amin,
edmundcwm, Eduardo Toledo, Ella van Durpe, Ellen Bauer, Emil E, Enrique Piqueras, Enrique Sánchez, equin0x80, erikkroes, Estela Rueda, Fabian, Fabian Kägy, Fahim Murshed, Faisal Alvi, Felipe Elia, Felipe Santos, Felix Arntz, Fernando Souza, fervillz, fgiannar, flaviozavan, Florian TIAR, Fotis Pastrakis, Frank Martin, Gal Baras, Garrett Hyder, Gary Jones, Gary Pendergast, Gaurang Dabhi, George Stephanis, geriux, Girish Panchal, Gleb Kemarsky, Glenn, Goto Hayato, grafruessel, Greg Rickaby, Grzegorz Ziółkowski, Grzegorz.Janoszka, Gustavo Bordoni, gwwar, hamedmoodi, hAmpzter, happiryu, Hareesh Pillai, Harry Milatz, Haz, helgatheviking, Henry Holtgeerts, Himani Lotia, Hubert Kubiak, i3anaan, Ian Belanger, Ian Dunn, ianatkins, ianmjones, IdeaBox Creations, Ihtisham Zahoor, intimez, Ipstenu (Mika Epstein), Isabel Brison, ispreview, Jake Spurlock, Jakub Binda, James Huff, James Koster, James Nylen, jameslnewell, Janki Moradiya, Jarret, Jasper van der Meer, jaydeep23290, jdy68, Jean-Baptiste Audras, Jean-David Daviet, Jeff Bowen, Jeff Ong, Jeff Paul, Jeffrey Carandang, jeichorn, Jenil Kanani, Jenny Wong, jepperask, Jer Clarke, Jeremy Felt, Jeremy Herve, Jeroen Rotty, Jerry Jones, Jessica Lyschik, Jip Moors, Joe Dolson, Joe Hoyle, Joe McGill, Joen Asmussen, John Blackbourn, John James Jacoby, johnwatkins0, Jon, Jon Quach, Jon Surrell, Jonathan Desrosiers, Jonathan Goldford, Jonny Harris, Jono Alderson, Joonas Vanhatapio, Joost de Valk, Jorge Bernal, Jorge Costa, Josepha Haden, JoshuaWold, Joy, jqz, jsnajdr, Juanfra Aldasoro, Julian Weiland, julian.kimmig, Juliette Reinders Folmer, Julio Potier, Junko Nukaga, jurgen, justdaiv, Justin Ahinon, K. Adam White, kaggdesign, KalpShit Akabari, Kantari Samy, Kaspars, Kelly Dwan, Kennith Nichol, Kevin Hagerty, Kharis Sulistiyono, Khushbu Modi, killerbishop, kinjaldalwadi, kitchin, Kite, Kjell Reigstad, kkarpieszuk, Knut Sparhell, KokkieH, Konstantin Obenland, Konstantinos Xenos, Krystyna, kubiq, kuflievskiy, Kukhyeon Heo, kyliesabra, Laken Hafner, leandroalonso, leogermani, lgrev01, linuxologos, lisota, Lorenzo Fracassi, luisherranz, luisrivera, lukaswaudentio, Lukasz Jasinski, Luke Cavanagh, Lydia Wodarek, M A Vinoth Kumar, maciejmackowiak, Mahesh Waghmare, Manzoor Wani, marcelo2605, Marcio Zebedeu, MarcoZ, Marcus Kazmierczak, Marek Dědič, Marius Jensen, Marius84, Mark Jaquith, Mark Marzeotti, Mark Uraine, Martin Stehle, Marty Helmick, Mary Baum, Mat Gargano, Mat Lipe, Mathieu Viet, Matt Keys, Matt van Andel, mattchowning, Matthew Kevins, mattnyeus, maxme, mayanksonawat, mbrailer, Mehidi Hassan, Mel Choyce-Dwan, mensmaximus, Michael Arestad, Michael Ecklund, Michael Panaga, Michelle Schulp, miette49, Miguel Fonseca, Miguel Torres, mihdan, Miina Sikk, Mikael Korpela, Mike Auteri, Mike Hansen, Mike Schinkel [WPLib Box project lead], Mike Schroder, mikejdent, Mikko Saari, Milan Patel, Milan Petrovic, mimi, mircoraffinetti, mjnewman, mlbrgl, Morgan Estes, Morteza Geransayeh, mppfeiffer, mryoga, mtias, Muhammad Usama Masood, mujuonly, Mukesh Panchal, Nadir Seghir, nagoke, Nahid Ferdous Mohit, Nate Finch, Nazmul Ahsan, nekomajin, NextScripts, Nick Daugherty, Nick Halsey, Nicklas Sundberg, Nicky Lim, nicolad, Nicolas Juen, nicole2292, Niels Lange, nikhilgupte, nilamacharya, noahtallen, noyle, nsubugak, oakesjosh, oldenburg, Omar Alshaker, Otto Kekäläinen, Ov3rfly, page-carbajal, pagewidth, Paragon Initiative Enterprises, Pascal Birchler, Pascal Casier, Paul Bearne, Paul Biron, Paul Kevin, Paul Schreiber, pcarvalho, Pedro Mendonça, perrywagle, Peter Wilson, Philip 
Jackson, Pierre Gordon, Pierre Lannoy, pikamander2, Prashant Singh, Pratik Jain, Presskopp, Priyanka Behera, Raam Dev, Rachel Cherry, Rachel Peter, ragnarokatz, Rami Yushuvaev, raoulunger, razamalik, Remco Tolsma, rephotsirch, rheinardkorf, Riad Benguella, Ricard Torres, Rich Tabor, rimadoshi, Rinku Y, Rob Cutmore, rob006, Robert Anderson, Roi Conde, Roland Murg, Rostislav Wolný, Roy Tanck, Russell Heimlich, Ryan, Ryan Fredlund, Ryan McCue, Ryan Welcher, Ryo, Sébastien SERRE, sablednah, Sampat Viral, Samuel Wood (Otto), SamuelFernandez, Sander, santilinwp, Sathiyamoorthy V, Schuhwerk, Scott Reilly, Scott Taylor, scruffian, scvleon, Sebastian Pisula, Sergey Biryukov, Sergio de Falco, sergiomdgomes, sgastard, sgoen, Shaharia Azam, Shannon Smith, shariqkhan2012, Shawntelle Coker, sheparddw, Shital Marakana, Shizumi Yoshiaki, simonjanin, sinatrateam, sirreal, skorasaurus, smerriman, socalchristina, Soren Wrede, spenserhale, sproutchris, squarecandy, starvoters1, SteelWagstaff, steevithak, Stefano Minoia, Stefanos Togoulidis, steffanhalv, Stephen Bernhardt, Stephen Edgar, Steve Dufresne, Steve Grunwell, stevenlinx, Stiofan, straightvisions GmbH, stroona.com, Subrata Mal, Subrata Sarkar, Sultan Nasir Uddin, swapnild, Sybre Waaijer, Sérgio Estêvão, Takayuki Miyauchi, Takeshi Furusato, Tammie Lister, Tanvirul Haque, TBschen, tdlewis77, Tellyworth, Thamaraiselvam, thefarlilacfield, ThemeZee, Tim Havinga, Tim Hengeveld, timon33, Timothée Brosille, Timothy Jacobs, Tkama, tmanoilov, tmatsuur, tobifjellner (Tor-Bjorn Fjellner), Tom Greer, Tom J Nowell, tommix, Toni Viemerö, Toro_Unit (Hiroshi Urabe), torres126, Torsten Landsiedel, Towhidul Islam, tristangemus, tristanleboss, tsuyoring, Tung Du, Udit Desai, Ulrich, upadalavipul, Utsav tilava, Vaishali Panchal, Valentin Bora, varunshanbhag, Veminom, Vinita Tandulkar, virgodesign, Vlad. S., vortfu, waleedt93, WebMan Design | Oliver Juhas, websupporter, Weston Ruter, William Earnhardt, William Patton, wpgurudev, WPMarmite, wptoolsdev, xedinunknown-1, yale01, Yannicki, Yordan Soares, Yui, zachflauaus, Zack Tollman, Zebulan Stanphill, Zee, and zsusag. Many thanks to all of the community volunteers who contribute in the support forums. They answer questions from people across the world, whether they are using WordPress for the first time or since the first release. These releases are more successful for their efforts! Finally, thanks to all the community translators who worked on WordPress 5.4. Their efforts bring WordPress fully translated to 46 languages at release time, with more on the way. If you want to learn more about volunteering with WordPress, check out Make WordPress or the core development blog.

Helping health organizations make COVID-19 information more accessible

Google Webmaster Central Blog -

Health organizations are busier than ever providing information to help with the COVID-19 pandemic. To better assist them, Google has created a best practices article to guide health organizations to make COVID-19 information more accessible on Search. We've also created a new technical support group for eligible health organizations.

Best practices for search visibility
By default, Google tries to show the most relevant, authoritative information in response to any search. This process is more effective when content owners help Google understand their content in appropriate ways. To better guide health-related organizations in this process (known as SEO, for "search engine optimization"), we have produced a new help center article with some important best practices, with emphasis on health information sites, including:
- How to help users access your content on the go
- The importance of good page content and titles
- Ways to check how your site appears for coronavirus-related queries
- How to analyze the top coronavirus-related user queries
- How to add structured data for FAQ content

New support group for health organizations
In addition to our best practices help page, health organizations can take part in our new technical support group, which focuses on helping health organizations that publish COVID-19 information with Search-related questions. We'll be approving requests for access on a case-by-case basis. At first we'll be accepting only domains under national health ministries and US state-level agencies. We'll announce future expansions here in this blog post and on our Twitter account. You'll need to register using either an email under those domains (e.g. name@health.gov) or have access to the website's Search Console account. Fill out this form to request access to the COVID-19 Google Search group.

The group was created to respond to the current needs of health organizations, and we intend to deprecate the group as soon as COVID-19 is no longer considered a Public Health Emergency by the WHO or a similar de-escalation is widely in place. Everyone is welcome to use our existing webmaster help forum, and if you have any questions or comments, please let us know on Twitter.

Posted by Daniel Waisberg, Search Advocate & Ofir Roval, Search Console Lead PM
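One of the items above, structured data for FAQ content, maps directly to markup. The sketch below shows one way a PHP-rendered page could emit schema.org FAQPage data as JSON-LD; the questions and answers are placeholders, and the snippet is an illustration rather than part of Google's help center article.

```php
<?php
// Hypothetical example: emit schema.org FAQPage structured data (JSON-LD)
// for a COVID-19 FAQ page. The questions and answers are placeholders.
$faq = [
	'@context'   => 'https://schema.org',
	'@type'      => 'FAQPage',
	'mainEntity' => [
		[
			'@type'          => 'Question',
			'name'           => 'What are the symptoms of COVID-19?',
			'acceptedAnswer' => [
				'@type' => 'Answer',
				'text'  => 'Common symptoms include fever, cough, and shortness of breath.',
			],
		],
		[
			'@type'          => 'Question',
			'name'           => 'How can I protect myself?',
			'acceptedAnswer' => [
				'@type' => 'Answer',
				'text'  => 'Wash your hands often, keep physical distance, and follow local health guidance.',
			],
		],
	],
];

// Print the JSON-LD block inside the page's <head> or <body>.
echo '<script type="application/ld+json">' . json_encode( $faq, JSON_UNESCAPED_SLASHES ) . '</script>';
```

The resulting markup can be checked with Google's Rich Results Test before publishing.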

Amazon Detective – Rapid Security Investigation and Analysis

Amazon Web Services Blog -

Almost five years ago, I blogged about a solution that automatically analyzes AWS CloudTrail data to generate alerts upon sensitive API usage. It was a simple and basic solution for security analysis and automation. But demanding AWS customers have multiple AWS accounts and collect data from multiple sources, and simple searches based on regular expressions are not enough to conduct in-depth analysis of suspected security-related events. Today, when a security issue is detected, such as compromised credentials or unauthorized access to a resource, security analysts cross-analyze several data logs to understand the root cause of the issue and its impact on the environment. In-depth analysis often requires scripting and ETL to connect the dots between data generated by multiple siloed systems. It requires skilled data engineers to answer basic questions such as "is this normal?". Analysts use Security Information and Event Management (SIEM) tools, third-party libraries, and data visualization tools to validate, compare, and correlate data to reach their conclusions. To further complicate matters, new AWS accounts and new applications are constantly introduced, forcing analysts to constantly reestablish baselines of normal behavior and to understand new patterns of activities every time they evaluate a new security issue.

Amazon Detective is a fully managed service that empowers users to automate the heavy lifting involved in processing large quantities of AWS log data to determine the cause and impact of a security issue. Once enabled, Detective automatically begins distilling and organizing data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud Flow Logs into a graph model that summarizes the resource behaviors and interactions observed across your entire AWS environment. At re:Invent 2019, we announced a preview of Amazon Detective. Today, it is our pleasure to announce its availability for all AWS customers.

Amazon Detective uses machine learning models to produce graphical representations of your account behavior and helps you to answer questions such as "is this an unusual API call for this role?" or "is this spike in traffic from this instance expected?". You do not need to write code, or to configure or tune your own queries.

To get started with Amazon Detective, I open the AWS Management Console, type "detective" in the search bar, and select Amazon Detective from the results to launch the service. I enable the service and let the console guide me to configure "member" accounts to monitor and the "master" account in which to aggregate the data. After this one-time setup, Amazon Detective immediately starts analyzing AWS telemetry data and, within a few minutes, I have access to a set of visual interfaces that summarize my AWS resources and their associated behaviors such as logins, API calls, and network traffic. I search for a finding or resource from the Amazon Detective search bar and, after a short while, I am able to visualize the baseline and current value for a set of metrics. I select the resource type and ID and start to browse the various graphs. I can also investigate an Amazon GuardDuty finding by using the native integrations within the GuardDuty and AWS Security Hub consoles. I click the "Investigate" link from any GuardDuty finding and jump directly into the Amazon Detective console, which provides related details, context, and guidance to investigate and to respond to the issue.
In the example below, GuardDuty reports unauthorized access that I decide to investigate. The Amazon Detective console opens, and I scroll down the page to check the graph of failed API calls. I click a bar in the graph to get the details, such as the IP addresses where the calls originated. Once I know the source IP addresses, I click New behavior: AWS role and observe where these calls originated from, to compare with the automatically discovered baseline.

Amazon Detective works across your AWS accounts: it is a multi-account solution that aggregates data and findings from up to 1,000 AWS accounts into a single security-owned "master" account, making it easy to view behavioral patterns and connections across your entire AWS environment. There are no agents, sensors, or additional software to deploy in order to use the service. Amazon Detective retrieves, aggregates, and analyzes data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud Flow Logs. Amazon Detective collects existing logs directly from AWS without touching your infrastructure, thereby not causing any impact to cost or performance.

Amazon Detective can be administered via the AWS Management Console or via the Amazon Detective management APIs. The management APIs enable you to build Amazon Detective into your standard account registration, enablement, and deployment processes.

Amazon Detective is a regional service. I activate the service in every AWS Region in which I want to analyze findings. All data is processed in the AWS Region where it is generated. Amazon Detective maintains data analytics and log summaries in the behavior graph for a 1-year rolling period from the date of log ingestion. This allows for visual analysis and deep dives over a large data set for a long period of time. When I disable the service, all data is expunged to ensure no data remains.

There are no additional charges or upfront commitments required to use Amazon Detective. We charge per GB of data ingested from AWS CloudTrail, Amazon Virtual Private Cloud Flow Logs, and Amazon GuardDuty findings. Amazon Detective offers a 30-day free trial. As usual, check the pricing page for the details.

Amazon Detective is available in these 14 AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Europe (Stockholm), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo). You can start to use it today.

-- seb
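For teams that want to fold Detective into the account registration and deployment processes mentioned above, the sketch below shows the general shape of enabling it through the management APIs with the AWS SDK for PHP. It is only an illustration under assumptions: it presumes your installed SDK version ships the Detective client, and the Region, account ID, and email address are placeholders.

```php
<?php
// Sketch only: enable Amazon Detective and invite a member account using the
// AWS SDK for PHP. Assumes an SDK version that includes the Detective client;
// the Region, account ID, and email address are placeholders.
require 'vendor/autoload.php';

use Aws\Detective\DetectiveClient;

$detective = new DetectiveClient([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// Create the behavior graph in the security-owned "master" account.
$graph = $detective->createGraph([]);

// Invite a member account whose telemetry should flow into the graph.
$detective->createMembers([
    'GraphArn' => $graph['GraphArn'],
    'Message'  => 'Please accept the Amazon Detective invitation.',
    'Accounts' => [
        [
            'AccountId'    => '111122223333',
            'EmailAddress' => 'security@example.com',
        ],
    ],
]);
```

Member accounts still need to accept the invitation before their data flows into the graph.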

20 Ways to Stay Social in an Age of Social Distancing

DreamHost Blog -

A few months ago, working from home sounded like a dream. Now, thanks to a global pandemic and the ever-looming threat of COVID-19, students, parents, workers, and business owners are stuck at home, doing their part to #flattenthecurve. Many of you are under shelter-in-place orders, leaving home only for the essentials, and the rest are carefully practicing social distancing, avoiding gatherings of more than 10 people and staying at least six feet apart. We're right there with you. Our DreamHost offices in California and Oregon shut down, sending our diligent employees home to support you remotely. With no real end in sight to all this social distancing, the weeks (and months) are stretching ahead rather bleakly.

"Loneliness is psychologically poisonous; it increases sleeplessness, depression, as well as immune and cardiovascular problems," says Stanford psychologist Jamil Zaki. "In fact, chronic loneliness produces a similar mortality risk to smoking 15 cigarettes a day. We must do the right thing for public health and shelter-in-place now, but if doing so produces chronic, widespread loneliness, a long-term mental and physical health crisis might follow this viral one."

Social distancing doesn't have to mean social isolation — Zaki suggests reframing it as "socializing from a distance." Thanks to the internet, there are plenty of ways to connect with friends, family, and coworkers, all while keeping everyone safe and fighting to reduce coronavirus infections. How can you socialize for the sake of your sanity — and your relationships — while safely flattening the curve? We have a few ideas.

Essential Apps
Before you can bring your social life to the digital world, you've got to build up your tool kit. Chances are you already use many of these, and others may be less familiar. Either way, these apps, plus your social media accounts, will keep you connected to others while physically apart.

Smartphone Video Calling
Your smartphone probably already has one of your best tools for video calling: FaceTime for iPhones, Google Duo for Android. FaceTime, only available on iOS, hosts up to 32 people; on Google Duo, up to eight people can chat. These apps are best for one-on-one conversations with friends and family.

Zoom
Thanks to school and workplace closures sending the masses to work and learn at home, video conferencing software Zoom has become a surprising hero in the age of quarantine — and the inspiration for a wave of memes. It really shines for professional uses, such as connecting with clients, coworkers, and classrooms, but it can work for friend hangouts too. Download it onto your phone or tablet or use it on your computer for free one-on-one chats or up to 40-minute meetings; upgrade to $14.99/month for longer meetings.

Google Hangouts
Sign in to Hangouts with your Google account (you already have one if you use Gmail) to video chat with friends for free. Up to 25 people can video chat at once, and 150 can join a voice-only group. If your friends or coworkers are all on Google (or willing to get an account), this is an easy option for some group facetime.

Related: The 30 Best Web Apps for Small Businesses in 2020

Skype
A staple of online communication for years, Skype is free to download and use on phones, tablets, and computers with web cameras. Video call up to 10 people at once, depending on connection speeds, and easily share screens. You can also instant message and make voice calls on Skype. This app is great for a virtual hangout with friends, no matter what devices they use.

WhatsApp
Facebook-owned WhatsApp is an excellent option for free one-on-one messaging, video calling, and voice calls on both iOS and Android. It uses end-to-end encryption for added security, and its popularity around the world makes it a fantastic way to connect with friends and family in other countries.

Marco Polo
When video chats are hard to coordinate between conference calls and Netflix-a-thons, Marco Polo can help you still connect "face to face." Leave a video voicemail of sorts — send a video message to a friend, who will watch and respond when they are ready. This is a helpful app for those with friends and family in different time zones.

Neighborhood Groups
Find an online meeting place for your neighborhood and community. Some neighborhoods are more active on Facebook groups, others on Nextdoor. Find your people and use the forum to meet neighbors, connect with friends holed up in their apartment down the block, and trade war stories about tracking down toilet paper.

Related: The 7 Best Web Management Tools for Small Businesses

How to Use Tech to Socialize from a Distance
Armed with an internet connection and a webcam or smartphone, plus one or more of the handy apps above, you're ready for a world of virtual socializing. Try out these ideas with your friends and family.

1. Meet with Your Book Club
Move your meetings online, maybe to Skype or Google Hangouts, or start your own group from scratch. Book clubs are a great way to make sure you meet regularly with your friends — and, with all the staying inside you're doing, you might actually read the book this time.

Related: Bibliophiles, Unite! Meet the DreamHost Customers Behind Silent Book Club

2. Throw a Birthday Party
People with birthdays in the next two months (or more!) can still celebrate with family and friends, albeit digitally. Gather on a group video chat with the birthday boy or girl, each party-goer with their own dessert, to sing "Happy Birthday," blow out candles, have a dance party, and share memories.

3. Go on a Date
There's no reason your dating life has to fizzle out. You definitely shouldn't meet up with a stranger in person right now, but don't delete your Tinder and Bumble accounts: schedule video chat dates with matches for a chance to connect with someone new.

4. Play Games
Don't cancel game night — a number of your favorite board games (including the ever-timely Pandemic) and party games are available online. If you and your friends have a copy of the same physical game, you can play together, moving the pieces in sync. Also try Houseparty, a social media app that lets you play digital games over video chat.

5. Try a Table-Top RPG
Maybe you and your friends have been Adventurers for years — if so, move your Dungeons and Dragons game online. If you've never played an RPG, there's no better time to try. D&D offers a short version of the rules online for free. Players only need a pencil, paper, and dice; this guide can help you start your first game.

6. Host a Movie Night
The Netflix Party Chrome extension lets you and your friends watch a movie or TV show in sync while hosting a chat session. You each need the extension and your own Netflix account. Pop some popcorn, argue over what to watch, and settle down to enjoy together.

7. Sing Karaoke
The bars are closed, so take the party to your video app of choice. Get music inspiration from this list of the 50 best karaoke songs, search for karaoke versions of your song choice on YouTube, and sing like no one is listening — but they are, because you invited them into your Skype chatroom.

8. Take a Zoom Happy Hour
After a long day of teleconferencing, you and your coworkers could do with a celebration. Schedule a Zoom meeting (or whatever platform your organization uses) just for happy hour, and relax a bit together, while safely separate.

9. Chat Around the Zoom Watercooler
No office is strictly business all the time. Make time for your coworkers to take breaks together while working from home, around the proverbial watercooler. Create a Zoom channel or meeting just to chat about anything other than work. These moments can help build the solidarity and connection that are so important to a healthy team.

Related: 10 Ways You Can Create 'Watercooler Moments' While Working Remotely

10. Eat Out Together
Log in to your favorite video chat app to host a remote dinner party. Get some takeout (support a local business, as long as social distancing guidelines in your area allow), or pick a recipe that everyone can cook and then eat together.

11. Play Together
If you have kids stuck at home with you, chances are they could use some social connection too. Connect with the parents of their friends and hold a virtual playdate. They can color together, play Pictionary, and share quarantine adventures. The family chat app Caribu lets kids read and play games together.

12. Share on Social Media
Use Instagram Live or Facebook Live to share something you're skilled at with your friends who are also stuck at home. Give a concert, read poetry (your own or a favorite poet's), give a walking tour of local landmarks, teach a few Japanese lessons — the sky's the limit. Doing a little good for others will go a long way in helping you feel less lonely.

Related: 10 Social Media Marketing Tips for Your Small Business

Low-Tech Ways to Connect
You can still maintain ties without the smartphone, all while staying a safe distance away from other people. These low-tech ways to connect will build solidarity between neighbors, communities, and friends — just make sure before trying any that you're following local recommendations and keeping high-risk groups safe.

13. Plan a Neighborhood Art Walk
Use your community Facebook group or Nextdoor to put on a neighborhood art walk. Have everyone hang posters, drawings, and messages on their doors or windows, or draw outside with sidewalk chalk, and then take a walk and enjoy your neighbors' creativity.

14. Dance with Your Neighbors
Channel the quarantined Italians who sang together from balconies by putting on your own neighborhood dance party or singalong.

15. Cheer at 8 p.m.
A Twitter campaign called #solidarityat8 encourages Americans to stand on their porches or balconies at 8 p.m. every night to applaud the healthcare workers on the frontlines of COVID-19. Stand with your neighbors, wave at them, chat a bit from a safe distance, and cheer together for the workers who don't get to stay home.

16. Run a Virtual 5K
So many walks and runs this spring (and possibly summer) are getting canceled. Instead of throwing in the towel, plan a 5K with your friends, family, or neighbors. Get out and run or walk, wherever you are, at the same time (six feet apart, of course!), and enjoy some solidarity and exercise.

17. Throw a Social Distancing Concert
If you live in a suburban neighborhood and play a loud instrument, put on a concert for your neighbors. Stand out in your lawn or backyard and play for all within earshot. Tell your neighbors about it ahead of time, so they can come outside and enjoy — or know when to turn up the volume on Netflix.

18. Send Snail Mail and Play Window Tic Tac Toe
The elderly are already at high risk for loneliness and depression, so if there's an age 60+ loved one in your life or neighborhood, don't forget about them. Call them on the phone, buy them groceries, send letters and cards in the mail — you might head to their home and write messages on their windows, or even play tic tac toe with them on the other side of the glass.

19. Deck the Halls
Hallmark is bringing back Christmas to spread some cheer through the coronavirus gloom — why not do the same for your neighbors? Put your Christmas lights back up and bust out your tackiest decorations. Walk the dog in your Halloween costume, put Valentines on doors, hang giant paper bunnies in your window — anything to entertain and surprise your neighbors.

20. Be a Helper
Fred Rogers, AKA Mr. Rogers, told children, "When I was a boy and I would see scary things in the news, my mother would say to me, 'Look for the helpers. You will always find people who are helping.'" Anything to help, from giving to a food bank, to ordering takeout at a local restaurant, to donating blood, will do some good and help you feel connected to your neighborhood and community.

Now's the Time
Between your remote working and your distant socializing, stave off quarantine-induced loneliness and boredom by tackling a project you've been putting off. Now is a great time to finally build the website you always dreamed of. Your DreamHost team may be working from home for now, but we are still here to help you get your website up and running. The post 20 Ways to Stay Social in an Age of Social Distancing appeared first on Website Guides, Tips and Knowledge.

Announcing the Results of the 1.1.1.1 Public DNS Resolver Privacy Examination

CloudFlare Blog -

On April 1, 2018, we took a big step toward improving Internet privacy and security with the launch of the 1.1.1.1 public DNS resolver — the Internet's fastest, privacy-first public DNS resolver. And we really meant privacy first. We were not satisfied with the status quo and believed that secure DNS resolution with transparent privacy practices should be the new normal. So we committed to our public resolver users that we would not retain any personal data about requests made using our 1.1.1.1 resolver. We also built in technical measures to facilitate DNS over HTTPS to help keep your DNS queries secure. We've never wanted to know what individuals do on the Internet, and we took technical steps to ensure we can't know.

We knew there would be skeptics. Many consumers believe that if they aren't paying for a product, then they are the product. We don't believe that has to be the case. So we committed to retaining a Big 4 accounting firm to perform an examination of our 1.1.1.1 resolver privacy commitments. Today we're excited to announce that the 1.1.1.1 resolver examination has been completed and a copy of the independent accountants' report can be obtained from our compliance page.

The examination process
We gained a number of observations and lessons from the privacy examination of the 1.1.1.1 resolver. First, we learned that it takes much longer to agree on terms and complete an examination when you ask an accounting firm to do what we believe is a first-of-its-kind examination of custom privacy commitments for a recursive resolver.

We also observed that privacy by design works. Not that we were surprised -- we use privacy by design principles in all our products and services. Because we baked anonymization best practices into the 1.1.1.1 resolver when we built it, we were able to demonstrate that we didn't have any personal data to sell. More specifically, in accordance with RFC 6235, we decided to truncate the client/source IP at our edge data centers so that we never store in non-volatile storage the full IP address of the 1.1.1.1 resolver user. We knew that a truncated IP address would be enough to help us understand general Internet trends and where traffic is coming from. In addition, we further improved our privacy-first approach by replacing the truncated IP address with the network number (the ASN) for our internal logs. On top of that, we committed to only retaining those anonymized logs for a limited period of time. It's the privacy version of belt plus suspenders plus another belt.

Finally, we learned that aligning our examination of the 1.1.1.1 resolver with our SOC 2 report most efficiently demonstrated that we had the appropriate change control procedures and audit logs in place to confirm that our IP truncation logic and limited data retention periods were in effect during the examination period. The 1.1.1.1 resolver examination period of February 1, 2019, through October 31, 2019, was the earliest we could go back to while relying on our SOC 2 report.

Details on the examination
When we launched the 1.1.1.1 resolver, we committed that we would not track what individual users of our 1.1.1.1 resolver are searching for online. The examination validated that our system is configured to achieve what we think is the most important part of this commitment -- we never write the querying IP addresses together with the DNS query to disk and therefore have no idea who is making a specific request using the 1.1.1.1 resolver.
This means we don't track which sites any individual visits, and we won't sell your personal data, ever.

We want to be fully transparent that during the examination we uncovered that our routers randomly capture up to 0.05% of all requests that pass through them, including the querying IP address of resolver users. We do this separately from the 1.1.1.1 service for all traffic passing into our network, and we retain such data for a limited period of time for use in connection with network troubleshooting and mitigating denial of service attacks. To explain -- if a specific IP address is flowing through one of our data centers a large number of times, then it is often associated with malicious requests or a botnet. We need to keep that information to mitigate attacks against our network and to prevent our network from being used as an attack vector itself. This limited subsample of data is not linked up with DNS queries handled by the 1.1.1.1 service and does not have any impact on user privacy.

We also want to acknowledge that when we made our privacy promises about how we would handle non-personally identifiable log data for 1.1.1.1 resolver requests, we made what we now see were some confusing statements about how we would handle those anonymous logs. For example, our blog post commitment about retention of anonymous log data was not written clearly enough, and our previous statements were not as clear as they could have been because we referred to temporary logs, transactional logs, and permanent logs in ways that could have been better defined. Our 1.1.1.1 resolver privacy FAQs stated that we would not retain transactional logs for more than 24 hours but that some anonymous logs would be retained indefinitely; however, our blog post announcing the public resolver didn't capture that distinction. You can see a clearer statement about our handling of anonymous logs on our privacy commitments page mentioned below.

With this in mind, we updated and clarified our privacy commitments for the 1.1.1.1 resolver as outlined below. The most critical part of these commitments remains unchanged: We don't want to know what you do on the Internet — it's none of our business — and we've taken the technical steps to ensure we can't.

Our 1.1.1.1 public DNS resolver commitments
We have refined our commitments to 1.1.1.1 resolver privacy as part of our examination effort. The nature and intent of our commitments remain consistent with our original commitments. These updated commitments are what was included in the examination:
- Cloudflare will not sell or share public resolver users' personal data with third parties or use personal data from the public resolver to target any user with advertisements.
- Cloudflare will only retain or use what is being asked, not information that will identify who is asking it. Except for randomly sampled network packets captured from at most 0.05% of all traffic sent to Cloudflare's network infrastructure, Cloudflare will not retain the source IP from DNS queries to the public resolver in non-volatile storage (more on that below). The randomly sampled packets are solely used for network troubleshooting and DoS mitigation purposes.
- A public resolver user's IP address (referred to as the client or source IP address) will not be stored in non-volatile storage. Cloudflare will anonymize source IP addresses via IP truncation methods (last octet for IPv4 and last 80 bits for IPv6). Cloudflare will delete the truncated IP address within 25 hours.
- Cloudflare will retain only the limited transaction and debug log data ("Public Resolver Logs") for the legitimate operation of our Public Resolver and research purposes, and Cloudflare will delete the Public Resolver Logs within 25 hours.
- Cloudflare will not share the Public Resolver Logs with any third parties except for APNIC pursuant to a Research Cooperative Agreement. APNIC will only have limited access to query the anonymized data in the Public Resolver Logs and conduct research related to the operation of the DNS system.

Proving privacy commitments
We created the 1.1.1.1 resolver because we recognized significant privacy problems: ISPs, WiFi networks you connect to, your mobile network provider, and anyone else listening in on the Internet can see every site you visit and every app you use — even if the content is encrypted. Some DNS providers even sell data about your Internet activity or use it to target you with ads. DNS can also be used as a tool of censorship against many of the groups we protect through our Project Galileo.

If you use DNS-over-HTTPS or DNS-over-TLS to our 1.1.1.1 resolver, your DNS lookup request will be sent over a secure channel. This means that if you use the 1.1.1.1 resolver then, in addition to our privacy guarantees, an eavesdropper can't see your DNS requests. We promise we won't be looking at what you're doing.

We strongly believe that consumers should expect their service providers to be able to show proof that they are actually abiding by their privacy commitments. If we were able to have our 1.1.1.1 resolver privacy commitments examined by an independent accounting firm, we think other organizations can do the same. We encourage other providers to follow suit and help improve privacy and transparency for Internet users globally. And for our part, we will continue to engage well-respected auditing firms to audit our 1.1.1.1 resolver privacy commitments. We also appreciate the work that Mozilla has undertaken to encourage entities that operate recursive resolvers to adopt data handling practices that protect the privacy of user data.

Details of the 1.1.1.1 resolver privacy examination and our accountant's opinion can be found on Cloudflare's Compliance page. Visit https://developers.cloudflare.com/1.1.1.1/ from any device to get started with the Internet's fastest, privacy-first DNS service.

PS Cloudflare has traditionally used tomorrow, April 1, to release new products. Two years ago we launched the 1.1.1.1 free, fast, privacy-focused public DNS resolver. One year ago we launched WARP, our way of securing and accelerating mobile Internet access. And tomorrow?
Then three key changes
One before the weft, also
Safety to the roost
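The IP truncation commitment above (dropping the last octet of an IPv4 address and the last 80 bits of an IPv6 address) is easy to picture with a small sketch. The PHP below illustrates that anonymization idea only; it is not Cloudflare's actual implementation.

```php
<?php
// Illustration of the truncation described above (not Cloudflare's code):
// zero the last octet of an IPv4 address, or the last 80 bits of an IPv6
// address, before anything would be written to non-volatile storage.
function truncate_ip( string $ip ): ?string {
	$packed = inet_pton( $ip );
	if ( $packed === false ) {
		return null; // not a valid IP address
	}
	if ( strlen( $packed ) === 4 ) {
		// IPv4: keep the first three octets, zero the last one.
		$packed = substr( $packed, 0, 3 ) . "\x00";
	} else {
		// IPv6: keep the first 48 bits (6 bytes), zero the remaining 80 bits.
		$packed = substr( $packed, 0, 6 ) . str_repeat( "\x00", 10 );
	}
	return inet_ntop( $packed );
}

echo truncate_ip( '203.0.113.57' ), "\n";               // 203.0.113.0
echo truncate_ip( '2001:db8:1234:5678:9abc::1' ), "\n"; // 2001:db8:1234::
```

Zeroing the low-order bits keeps the result a valid, aggregatable address for trend analysis while discarding the bits that identify the individual host.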

Why You Should ONLY Load jQuery from Google Libraries

HostGator Blog -

The post Why You Should ONLY Load jQuery from Google Libraries appeared first on HostGator Blog. You probably already know that it's better to load static files from a Content Distribution Network (CDN). JavaScript, CSS, and image files fall into this category. However, there's another step beyond a CDN – hosted libraries. These hosted libraries are high-speed, geographically distributed servers that serve as content distribution networks for popular, open source JavaScript libraries. You can call on these well-known JavaScript libraries and add them to your site with a small bit of code. There are many well-known hosted libraries – the two most famous being Google and CDNJS. It might seem sensible to serve all your JavaScript files from these libraries, but that isn't always a good idea. In this article, I'll show you that the most important use case for them is jQuery, and ONLY from Google's network at that.

Buy HostGator Plans with an In-Built CDN
Both hosted libraries and CDNs share the same goals. They serve your content from servers located geographically close to users at very high speeds. But not all web hosting plans include a dedicated CDN. For example, HostGator's optimized WordPress packages use the SiteLock CDN network, but not the shared hosting plans, which rely on Cloudflare. Here's the HostGator coupon list to help you find the best deal! However, there are a few differences between hosted public libraries and traditional CDNs.

DNS Resolution
Some CDNs like Cloudflare use a "reverse proxy" setup. What this means is that your static files will be served over your site's URL like this: https://www.yoursite.com/jquery.js, instead of this: https://yoursite.somecdnnetwork.com/jquery.js. This has an important side effect. It means that for the first URL, the browser won't have to perform an additional DNS lookup to retrieve the file jquery.js. The second URL, however, is hosted on a different domain name than your own site, so there is added lag while the browser gets the IP address for the new domain. Ideally, we want to reduce the number of DNS lookups as much as possible. Internal testing has convinced me that in most cases, the additional DNS lookup isn't worth it. Therefore, I prefer CDNs that function on a reverse proxy model like Cloudflare instead of traditional networks that change the static file URLs. However, public libraries are by definition hosted on an external URL. Google's URLs start with "ajax.googleapis.com" and CDNJS URLs are "cdnjs.cloudflare.com". This means that unless the browser has already cached the response from another site, there will always be an additional DNS lookup. This makes public libraries tricky to use compared to reverse proxy CDNs or just hosting the files on your own server.

Globally Distributed Networks Comparison
Not all CDNs are built the same. While most try and do a fairly decent job of spreading out their servers across the globe, there are some locations that are chronically underserved. Africa is one glaring example. None of the well-known retail CDNs that I've tested serve the African continent well enough. They usually have just one location in Johannesburg and that's it. However, publicly hosted libraries like Google and CDNJS have a much stronger network than most CDNs. CDNJS now uses Cloudflare's network, which means it has a very strong presence across the globe with multiple server locations for any given area.
Bottom line: Large public JavaScript libraries are faster than ordinary CDN networks.

Not All JavaScript is Equally Important – jQuery is Unique
jQuery holds a distinguished position amongst JavaScript libraries. Currently, almost 75% of all websites use it. It's also quite large compared to the other external JavaScript files on your site. So if you had to choose one JavaScript library to speed up, it would undoubtedly be jQuery.

jQuery is Usually Render-Blocking
I'd written earlier on the HostGator blog about how to optimize your site for speed. There we see that you should "defer" or "async" all your JavaScript so that it doesn't block your site from rendering. Unfortunately, jQuery is referenced often by both external and inline scripts. This means that generally speaking, you should keep jQuery loaded in the header, and this slows down your page rendering. If you ignore my advice and load jQuery via "defer" or "async", your site will break one day, and you won't know why. Just trust me on this. I would love to defer the loading of jQuery, but it's just too unstable to do so. For this reason, I want to use every means possible to speed up the delivery of jQuery. And for that, publicly hosted libraries are the best. This is true for two reasons.

Public Libraries Are Ideal for Browser Caching
Perhaps the biggest difference between a traditional CDN network and a public library is that the former is accessed by only your website, and the latter is accessed by thousands – even millions – of people. Browsers typically cache the JavaScript they receive for differing periods of time – even up to a year! The idea is that if the browser sees the same URL again, it doesn't need to download it again. It can simply use its cached copy and bypass the process entirely. This is the absolute best-case scenario for us. Ideally, the visitor's browser will already have jQuery cached in its memory and thus solve our render-blocking problems in one go. But for this, we need to use well-known public libraries that everyone else is using. A private CDN will not bring the same caching and performance benefits. This is one huge advantage in favor of publicly hosted libraries.

The Same Goes for DNS Lookups
Browsers cache not just files, but also DNS lookups. So if millions of people are using a certain public library, the chances are that an average user will already have the DNS entry in their browser, and thus avoid the lookup altogether. This sidesteps the penalty of DNS lookups. But again – it will only work with a public hosted library where everyone uses the same URL, not a traditional CDN.

Which Public Library is the Best?
To test this, I downloaded a simple program that probes the cached files in a variety of browsers. I searched for the two most well-known hosted libraries in today's market – Google and CDNJS. Comparing the results for Google's library with the results for CDNJS, you can see that both Google and CDNJS files are cached in my browser from one site or the other. So from a DNS resolution point of view, both Google and CDNJS are on par. Both public libraries are likely to have been used, and they're both spared the penalty of a DNS lookup.

But Google Wins for jQuery
Look at the results more closely. Out of the two, CDNJS has only one version of jQuery cached (2.2.4), whereas Google's library has 11 of them! So unless your website uses jQuery version 2.2.4, the Google library will be far better for you than CDNJS.
This is because, for one reason or another, more people use Google's libraries to download jQuery than any other file. I don't know why this is the case, but that's just the way it is.

WordPress Doesn't Use the Latest jQuery Version
At the time of this writing, WordPress still uses jQuery version 1.12.4. This is for compatibility, since a lot of plugins rely on the older versions and WordPress doesn't want to break them. Looking at the results above, only Google's library has served jQuery version 1.12.4. If I were to use CDNJS as my source, most browsers wouldn't have it in their cache and would need to download it. So it's not enough for a hosted library to serve jQuery. It needs to be popular across a lot of different versions of jQuery, to maximize the chances that any particular version will be in a random browser's cache.

Using Google's Library to Serve jQuery on WordPress
The procedure will be different for each software framework. But if you want to use Google's library for jQuery with WordPress, paste the following code into your theme's functions.php file.

function load_google_jquery() {
    if ( is_admin() ) {
        return; // keep the bundled jQuery in the WordPress admin
    }
    global $wp_scripts;
    if ( isset( $wp_scripts->registered['jquery']->ver ) ) {
        $ver = $wp_scripts->registered['jquery']->ver;
        $ver = str_replace( '-wp', '', $ver );
    } else {
        $ver = '1.12.4';
    }
    wp_deregister_script( 'jquery' );
    wp_register_script( 'jquery', "//ajax.googleapis.com/ajax/libs/jquery/$ver/jquery.min.js", false, $ver );
}
add_action( 'init', 'load_google_jquery' );

This code checks the version of jQuery that WordPress is using, and then constructs the URL for use with Google's library.

Moral of the Story: Google is Best for jQuery
I have nothing against CDNJS. In fact, I prefer them from a philosophical point of view since they're FOSS, and partner with Cloudflare – another company I like. But numbers are numbers, and technology doesn't allow for sentimentality. In the contest for which library is better to serve jQuery, Google comes out head and shoulders above the competition. And as we've seen above, jQuery is the one JavaScript library that shouldn't be deferred or asynced. And so using Google libraries is a no-brainer. For the remaining JavaScript files, it doesn't matter that much since they don't block your page. But pay special attention to jQuery – it can make or break your page load times! Find the post on the HostGator Blog

How to Guide Your Employees to Post More on Social Media

Social Media Examiner -

Want your employees to share more about your business on social media? Wondering how best to guide their social media posts? In this article, you’ll discover how to develop guidelines to help employees post more on social media and find examples of types of posts employees can model. #1: Create Clear Social Media Guidelines for […] The post How to Guide Your Employees to Post More on Social Media appeared first on Social Media Marketing | Social Media Examiner.

Design Series Part II – Selecting the Right Colours & Fonts for Your Website

BigRock Blog -

In our previous article of the design series, we talked at length about selecting the right theme for your website. However, a theme is not everything, is it? There are several elements that help design an appealing website. In our second article of the series, we will explore colours and fonts, and talk in depth about how they can help transform your website.

Do Colours and Fonts Matter?
Colours can impact us – a sunshine yellow can cheer you up on a sad day, or a blue may literally make you feel blue. Bright, cheery colours give me fun and exciting vibes, as opposed to subtle colours like grey and black that induce practicality. This visual feeling even applies to the websites you visit. In fact, there is a whole field of colour psychology that examines how colours sway emotions, help form perceptions and, when it comes to a website, impact conversion rates. In short, your website colour can be a deal-breaker when it comes to capturing your audience's interest when they land on your website. Similarly, fonts too are important. If the text is not readable or doesn't go with the kind of content your website has, it can have a negative impact. Thus, it is safe to say that both fonts and colours matter.

Choosing the Right Colour
Decide the number of colours you need. Too much salt can spoil the soup, and so can too many colours on your website. Take a look at the image below. The web design is a mix of colours and none of them complement each other. This goes for the background colour, the font colour, as well as the headings. How does it look: appealing or too much? Well, if I were to visit such a website I wouldn't return, even though the colours are not harsh but pleasing to the eye. The simple reason for me would be too many colours used when not required, and none of them in sync with each other. So, is there any rule for the recommended number of colours? Well, most designers tend to follow something called the 60-30-10 rule. For simplicity, think of a man in a business suit: 60% is the blazer and pants, 30% is the shirt, and 10% is the tie (the pocket square will match the tie). Similarly, if your website is divided into three colours, it would not only look presentable but also engage the user in reading the content. For this reason, there are three sections or types where colours are used, namely: the primary colour, the secondary colour and the background colour.

Primary Colour
The primary colour, otherwise known as the dominant colour, is the colour associated with your brand. This is the colour of your logo and will be scattered throughout your design. Take, for example, our website. The brand logo comprises three colours, with black and orange being more prominent. However, it is the orange colour that takes precedence visually. Throughout the website colour scheme, you can notice how the colour is used to capture customer attention. If you're still unsure or confused about your primary brand colour, take this quiz to help you select the best fit for your website.

Secondary Colour
Finding the right secondary colour can be a bigger struggle than getting the primary colour right. You may wonder why. Well, simply because you need to match the colour scheme. Visualise yellow as your primary colour and light green or red as the secondary colour: clashing, isn't it? Hence, it is equally important to select the right secondary colour. Secondary colours are usually used for links, buttons and more, so that they stand out.
Head to Colour Lovers or ColorSpace to find out which colours work well with each other. Simple, isn't it?
Background colour
Now that we've chosen our primary and secondary colours, it is time to choose the background colour. Ideally, the choice of background colour depends on the kind of website you have; however, most websites choose neutral colours – black, white and grey – as their background. The background colour should hold the entire web design together. If you run an e-commerce website or a content-heavy blog, it is best to stick to white. Dark colours like grey and black usually appeal more to tech, game-based or photography websites.
Source: https://whatshotblog.com/
Font Selection Process
Now that you've decided on the colour scheme of your website or blog, it is time to choose the fonts.
How to choose a font
Choosing a font can be a tedious task, especially with numerous fonts available. One thing you must remember is that your fonts should be consistent throughout your website. In fact, it is fair to say your fonts are part of the brand guidelines you should adhere to. Here are some basic guidelines when choosing a font:
Use as few fonts as possible, with four as the maximum
Try using different fonts for your website name, the headings and the body content
Experiment with fonts for the brand/website name – it makes you stand out, but make sure it remains readable
For the body text, it is advisable to choose a font from the sans-serif family
Pairing font with content
Pairing the font with your content is extremely important, as mentioned in the second point above. Imagine you run a fashion blog and the fonts you have chosen for your website are plain. There is nothing wrong with choosing a simple font like Times New Roman, but given that you have a fashion website, a more creative font style could add to your brand. Similarly, if you run a technical blog, cursive or similarly decorative fonts are not advisable. There is no rule that you can't choose them; what matters is how well you can merge them with your content.
Source: https://www.glorialo.design/
Conclusion
While colours and fonts are important, keeping them in line with your brand and engaging for your customers is just as important. In a nutshell, research well and experiment with colours and fonts to find out what works for you. So, what are you waiting for? Unleash your creativity and paint your website in the colour of your choice! Until next time!

Webinar: Retention is the New Growth

WP Engine -

Driving organic traffic to your website is a critical part of your digital marketing strategy, and SEO remains a major focus for doing just that. But SEO is getting harder, additional strategies like paid search are becoming more expensive, and making an impact on social seems like a never-ending—and costly—challenge. Now more than ever, it’s… The post Webinar: Retention is the New Growth appeared first on WP Engine.

New – Use AWS IAM Access Analyzer in AWS Organizations

Amazon Web Services Blog -

Last year at AWS re:Invent 2019, we released AWS Identity and Access Management (IAM) Access Analyzer, which helps you understand who can access resources by analyzing permissions granted using policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues. AWS IAM Access Analyzer uses automated reasoning, a form of mathematical logic and inference, to determine all possible access paths allowed by a resource policy. We call these analytical results provable security, a higher level of assurance for security in the cloud.
Today I am pleased to announce that you can create an analyzer in the AWS Organizations master account or a delegated member account with the entire organization as the zone of trust. For each analyzer, you can now set the zone of trust to be either a particular account or an entire organization, defining the logical bounds the analyzer bases its findings on. This helps you quickly identify when resources in your organization can be accessed from outside of your AWS Organization.
AWS IAM Access Analyzer for AWS Organizations – Getting started
You can enable IAM Access Analyzer for your organization with one click in the IAM console. Once enabled, IAM Access Analyzer analyzes policies and reports a list of findings for resources that grant public or cross-account access from outside your AWS Organization, in the IAM console and through APIs. When you create an analyzer on your organization, it recognizes your organization as a zone of trust, meaning all accounts within the organization are trusted to have access to AWS resources. Access Analyzer then generates a report that identifies access to your resources from outside of the organization. For example, if you create an analyzer for your organization, it provides active findings for resources such as S3 buckets in your organization that are accessible publicly or from outside the organization. When policies change, IAM Access Analyzer automatically triggers a new analysis and reports new findings based on the policy changes. You can also trigger a re-evaluation manually, and you can download the details of findings into a report to support compliance audits.
Analyzers are specific to the Region in which they are created, so you need to create a separate analyzer for each Region where you want to enable IAM Access Analyzer. You can create multiple analyzers for your entire organization in your organization's master account. Additionally, you can choose a member account in your organization as a delegated administrator for IAM Access Analyzer; that member account then has permission to create analyzers within the organization. Individual accounts can also create analyzers to identify resources accessible from outside those accounts.
IAM Access Analyzer sends an event to Amazon EventBridge for each generated finding, for a change to the status of an existing finding, and when a finding is deleted, so you can monitor findings with EventBridge. All IAM Access Analyzer actions are also logged by AWS CloudTrail, and findings are sent to AWS Security Hub. Using the information collected by CloudTrail, you can determine the request that was made to Access Analyzer, the IP address from which the request was made, who made the request, when it was made, and additional details.
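If you would rather script this than click through the console, the same analyzer can be created programmatically. The snippet below is a minimal sketch using the AWS SDK for Go v2; the accessanalyzer package layout, the TypeOrganization constant and the analyzer name "org-analyzer" are our assumptions for illustration and may differ slightly from the SDK version you have installed.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/accessanalyzer"
	"github.com/aws/aws-sdk-go-v2/service/accessanalyzer/types"
)

func main() {
	ctx := context.Background()

	// Load credentials and Region from the environment or shared config.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	client := accessanalyzer.NewFromConfig(cfg)

	// Create an analyzer whose zone of trust is the whole organization.
	// This call must be made from the Organizations master account or a
	// delegated administrator account.
	out, err := client.CreateAnalyzer(ctx, &accessanalyzer.CreateAnalyzerInput{
		AnalyzerName: aws.String("org-analyzer"),
		Type:         types.TypeOrganization,
	})
	if err != nil {
		log.Fatalf("create analyzer: %v", err)
	}
	fmt.Println("created analyzer:", aws.ToString(out.Arn))
}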
Now available!
This integration is available in all AWS Regions where IAM Access Analyzer is available. There is no extra cost for creating an analyzer with an organization as the zone of trust. You can learn more from the AWS re:Invent 2019 talks Dive Deep into IAM Access Analyzer and Automated Reasoning on AWS, and take a look at the feature page and the documentation for more detail. Please send us feedback either in the AWS forum for IAM or through your usual AWS support contacts. – Channy

4 Ways to Stay Cybersecure When You Start Working from Home

HostGator Blog -

The post 4 Ways to Stay Cybersecure When You Start Working from Home appeared first on HostGator Blog. Are you one of the millions of workers who’ve suddenly gone from days at the office to working at home because of the coronavirus outbreak? Working from home takes some adjustment, especially on short notice. And one of the most important adjustments is thinking differently about cybersecurity at home.  When you work in an office, your company’s IT people focus on keeping hackers out of the system. When you work at home, it’s your responsibility, too.  That’s because hackers are exploiting the rapid shift to remote work by targeting employees with malware and phishing attacks. Often, they’re doing it by impersonating health officials and setting up fake websites that say they provide news about covid-19. Ugh. It’s a lot to deal with all at once. But taking these four steps can protect your company—and your livelihood—while everyone hunkers down at home. 1. Got a company-issued laptop or phone? Keep it safe If you’re lucky enough to have tech tools provided by your employer, protect them from data thieves. Here are three keys to locking hackers out of your company-issued gear. Store your company laptop and phone securely when you’re not using them. Thieves will break into cars to steal electronics, and sometimes those robberies lead to data breaches that cost companies their reputation, customers and fines or settlements. Use your company tech only for work. Save the social media and personal emails for your own phone and computer. Why? There’s a world of phishing websites, social media scams and email phishing fraud related to the covid-19 pandemic.  If you accidentally click on one of those traps, you could end up with malware on your company device—and in your company’s network. Worst case scenario, ransomware locks up your company’s databases until your employer pays up or shuts down.  Don’t install any new software or apps on work devices without company approval. Every new application comes with vulnerabilities, a responsibility to keep them updated and the risk of installing something corrupted. Stick with what your company wants you to use.   2. Connect to work securely Ideally, your company will have a virtual private network (VPN) that you must use to log in to your work email and files. If so, you’ve got a secure, encrypted connection to work, and no one can see the data you’re sending and receiving.  If your company has a VPN but you don’t have to use it to log in, use it anyway. Yes, it will likely slow down your connection, but it will cover any gaps in your home internet security (which we’ll look at in a bit). If you’re using a public Wi-Fi network for work, yikes. You’re putting your company’s data at risk—including things like your email ID and password—unless you use a VPN.  Check your cybersecurity setup at home. Many of us are relaxed about cybersecurity at home because we don’t think cyberthieves go for small targets. However, thanks to the magic of the internet, hackers can search online for vulnerable IP addresses and go after them from anywhere.  Stepping up your home internet security makes your personal information safer. And when you work from home, it protects your company, too. Here’s what to check: 1. Do you have malware protection on your devices? This is important whether you’re using a company-issued computer or your own. 
Regular scans and firewall protection can keep viruses and other crud off your computer and phone, where they could otherwise find their way into your employer’s system. 2. Do you keep your operating systems, apps and programs up to date? It’s true—Windows and Android updates can take longer than you’d like when you’re busy with work. But when a security update announces itself, the time to install it is now.  That’s because by the time the company sends that security update out, hackers know about it, too—and they’re busy looking for machines that aren’t updated yet so they can break into them. (Unpatched software is how the Equifax hack happened.) 3. Is your home Wi-Fi network password strong and unique? A strong, unique password will keep snoops and opportunists out of your home network—and out of your work at home. Especially if you live in a crowded area where plenty of people nearby can see your network when they search for Wi-Fi, you need a good password.  Strong means your password is at least 8 characters long, with a random mix of letters, numbers and characters. Unique means you only use that password for your home Wi-Fi network, not for any other accounts like email and social media. That’s because if someone guesses your Wi-Fi password, they could then also get into those other accounts. 4. Can anyone with an internet connection log in to your home internet gateway? You might be surprised. Even if you’ve created a strong home Wi-Fi password, you should still check your internet hardware.  That’s because your router may have arrived with default login credentials of “admin/admin.” Those are weak, but who’s going to get close enough to your router to mess with it? Anybody who cares to look it up. Hackers can search for IP addresses with default router login credentials, log in and take over—all from the comfort of wherever they happen to be.  If that happens, attackers can see everything that happens on your network. That means they can easily steal your work email login information and then go on to hack your employer. Here’s a basic walk-through of how to change the password on your router and other network hardware. 3. Step up your password security Strong and unique passwords aren’t just for your home Wi-Fi network. Ideally, you would use a unique password for every single account you have and use a password manager to keep track of them all. But at the very least, you need strong, unique passwords for your work email and other work-related accounts, plus your personal email, social media, banking and utility accounts.  When you use a different password for each account, it prevents hackers from using a stolen password of yours like a skeleton key to unlock your other accounts. And that can keep criminals out of your company data as well as your personal information. 4. Watch out for phishing  There’s so much malware out there right now related to the coronavirus. Scammers are going after people in their inbox with fake cures and offers of “new information” designed to trick victims into giving up their email or Office365 login information.  Other coronavirus scams are dumping ransomware into health care providers’ systems at the worst possible time. And still others are tricking workers into paying fake invoices related in some way to the coronavirus outbreak. What can you do? Practice good email hygiene.  Check the sender’s email address (not just the sender’s name) before clicking on links or attachments in an email, especially if you didn’t expect to receive it. 
Scammers often impersonate company owners or executives to trick employees into making funds transfers.
Verify unusual or urgent email requests from others in the company by phone, video or chat before you act. Scammers know that creating a sense of urgency can cause people to rush into actions they wouldn't otherwise take.
Don't click on any links, attachments or pop-up boxes if you're not certain who sent them and why. You could end up with malware or stolen login credentials.
Be careful about visiting unfamiliar websites, especially if you're looking for covid-19 information. A lot of malicious websites with "coronavirus" in the domain name have cropped up in recent weeks, designed to steal visitor information or spread malware.
Report suspected phishing emails to your company's IT people. You may not be the only employee who's getting them.
So that's the basics of cybersecurity for the new remote worker: protect your company-issued devices, connect securely, use strong and unique passwords and watch out for phishing. By following these steps, you can protect your company and everyone who works for it. Run a small business? Read our checklist for securing your employees' remote workspaces. Find the post on the HostGator Blog

Now Open – Third Availability Zone in the AWS Canada (Central) Region

Amazon Web Services Blog -

When you start an EC2 instance, it's easy to underestimate what an AWS Region is. Right now, we have 22 across the world, and while they look like dots on a global map, they are architected to let you run applications and store data with high availability and fault tolerance. In fact, each of our Regions is made up of multiple data centers, which are geographically separated into what we call Availability Zones (AZs). Today, I am very happy to announce that we added a third AZ to the AWS Canada (Central) Region to support our customer base in Canada. It's important to note that Amazon S3 storage classes that replicate data across a minimum of three AZs always do so, even when fewer than three AZs are publicly available.
This third AZ provides customers with additional flexibility to architect scalable, fault-tolerant, and highly available applications, and will support additional AWS services in Canada. We opened the Canada (Central) Region in December 2016, just over 3 years ago, and we've more than tripled the number of available services as we bring on this third AZ. Each AZ is in a separate and distinct geographic location, with enough distance to significantly reduce the risk of a single event impacting availability in the Region, yet near enough for business continuity applications that require rapid failover and synchronous replication. For example, our Canada (Central) Region is located in the Montreal area of Quebec, and the upcoming new AZ will be on the mainland more than 45 km (28 miles) away from the next-closest AZ as the crow flies.
Where we place our Regions and AZs is a deliberate and thoughtful process that takes into account not only latency or distance, but also risk profiles. To keep the risk profile low, we look at decades of data related to floods and other environmental factors before we settle on a location. Montreal was heavily impacted in 1998 by a massive ice storm that crippled the power grid and brought down more than 1,000 transmission towers, leaving four million people in neighboring provinces and some areas of New York and Maine without power. To ensure that AWS infrastructure can withstand inclement weather such as this, half of the AZ interconnections use underground cables and are out of reach of potential ice storms. In this way, every AZ is connected to the other two AZs by at least one 100% underground fiber path.
We're excited to bring a new AZ to Canada to serve our incredible customers in the region. Here are some examples from different industries, courtesy of my colleagues in Canada:
Healthcare – AlayaCare delivers cloud-based software to home care organizations across Canada and all over the world. As a home healthcare technology company, they need in-country data centers to meet regulatory requirements.
Insurance – Aviva is delivering a world-class digital experience to its insurance clients in Canada, and the expansion of the AWS Region is welcome as they continue to move more of their applications to the cloud.
E-Learning – D2L leverages various AWS Regions around the world, including Canada, to deliver a seamless experience for their clients. They have been on AWS for more than four years, and recently completed an all-in migration.
With this launch, AWS now has 70 AZs within 22 geographic Regions around the world, plus five new Regions on the way. We are continuously looking at expanding our infrastructure footprint globally, driven largely by customer demand.
To see how we use AZs in Amazon, have a look at this article on Static stability using Availability Zones by Becky Weiss and Mike Furr. It's part of the Amazon Builders' Library, a place where we share what we've learned over the years. For more information on our global infrastructure, and the custom hardware we use, check out this interactive map. — Danilo

Introducing Quicksilver: Configuration Distribution at Internet Scale

CloudFlare Blog -

Cloudflare's network processes more than fourteen million HTTP requests per second at peak for Internet users around the world. We spend a lot of time thinking about the tools we use to make those requests faster and more secure, but a secret sauce which makes all of this possible is how we distribute configuration globally. Every time a user makes a change to their DNS, adds a Worker, or makes any of hundreds of other changes to their configuration, we distribute that change to 200 cities in 90 countries where we operate hardware. And we do that within seconds. The system that does this needs to not only be fast, but also impeccably reliable: more than 26 million Internet properties are depending on it. It also has had to scale dramatically as Cloudflare has grown over the past decade.
Historically, we built this system on top of the Kyoto Tycoon (KT) datastore. In the early days, it served us incredibly well. We contributed support for encrypted replication and wrote a foreign data wrapper for PostgreSQL. However, what worked for the first 25 cities was starting to show its age as we passed 100. In the summer of 2015 we decided to write a replacement from scratch. This is the story of how and why we outgrew KT, learned we needed something new, and built what was needed.
How KT Worked at Cloudflare
Where should traffic to example.com be directed? What is the current load balancing weight of the second origin of this website? Which pages of this site should be stored in the cache? These are all questions which can only be answered with configuration, provided by users and delivered to our machines around the world which serve Internet traffic. We are massively dependent on our ability to get configuration from our API to every machine around the world.
It was not acceptable for us to make Internet requests on demand to load this data, however; the data had to live in every edge location. The architecture of the edge which serves requests is designed to be highly failure tolerant. Each data center must be able to successfully serve requests even if cut off from any source of central configuration or control.
Our first large-scale attempt to solve this problem relied on deploying Kyoto Tycoon (KT) to thousands of machines. Our centralized web services would write values to a set of root nodes which would distribute the values to management nodes living in every data center. Each server would eventually get its own copy of the data from a management node in the data center in which it was located (the diagram in the original post shows this data flow from the API to data centres and individual machines).
Doing at least one read from KT was on the critical path of virtually every Cloudflare service. Every DNS or HTTP request would send multiple requests to a KT store, and each TLS handshake would load certificates from it. If KT was down, many of our services were down; if KT was slow, many of our services were slow. Having a service like KT was something of a superpower for us, making it possible for us to deploy new services and trust that configuration would be fast and reliable. But when it wasn't, we had very big problems.
As we began to scale, one of our first fixes was to shard KT into different instances. For example, we put Page Rules in a different KT than DNS records. In 2015, we used to operate eight such instances storing a total of 100 million key-value (KV) pairs, with about 200 KV values changed per second. Across our infrastructure, we were running tens of thousands of these KT processes.
Kyoto Tycoon is, to say the least, very difficult to operate at this scale. It's clear we pushed it past its limits and beyond what it was designed for. To put that in context it's valuable to look back at the description of KT provided by its creators:
[It] is a lightweight datastore server with auto expiration mechanism, which is useful to handle cache data and persistent data of various applications. (http://fallabs.com/kyototycoon/)
It seemed likely that we were not uncovering design failures of KT, but rather we were simply trying to get it to solve problems it was not designed for. Let's take a deeper look at the issues we uncovered.
Exclusive write lock… or not?
To talk about the write lock it's useful to start with another description provided by the KT documentation:
Functions of API are reentrant and available in multi-thread environment. Different database objects can be operated in parallel entirely. For simultaneous operations against the same database object, rwlock (reader-writer lock) is used for exclusion control. That is, while a writing thread is operating an object, other reading threads and writing threads are blocked. However, while a reading thread is operating an object, reading threads are not blocked. Locking granularity depends on data structures. The hash database uses record locking. The B+ tree database uses page locking.
At first glance this sounds great! There is no exclusive write lock over the entire DB. But there are no free lunches; as we scaled we started to detect poor performance when writing and reading from KT at the same time.
In the world of Cloudflare, each KT process replicated from a management node and was receiving anywhere from a few writes per second to a thousand writes per second. The same process would serve thousands of read requests per second as well. When heavy write bursts were happening we would notice an increase in the read latency from KT. This was affecting production traffic, resulting in slower responses than we expect of our edge.
Here are the percentiles for read latency of a script reading the same 20 key/value pairs from KT in an infinite loop. Each key and each value is 2 bytes. We never update these key/value pairs so we always get the same value.
Without doing any writes, the read performance is somewhat acceptable even at high percentiles:
P99: 9ms
P99.9: 15ms
When we add a writer sequentially adding a 40kB value, however, things get worse. After running the same read performance test, our latency values have skyrocketed:
P99: 154ms
P99.9: 250ms
Adding 250ms of latency to a request through Cloudflare would never be acceptable. It gets even worse when we add a second writer; suddenly our latency at the 99.9th percentile (the 0.1% slowest reads) is over a second!
P99: 701ms
P99.9: 1215ms
These numbers are concerning: writing more increases the read latency significantly. Given how many sites change their Cloudflare configuration every second, it was impossible for us to imagine a world where write loads would not be high. We had to track down the source of this poor performance. There must be either resource contention or some form of write locking, but where?
The Lock Hunt
After looking into the code, the issue seems to be that when reading from KT, the accept function in the file kcplandb.h of Kyoto Cabinet acquires a lock (shown in the first code screenshot of the original post). The same lock is also acquired in the synchronize function of kcfile.cc, which is in charge of flushing data to disk (second screenshot). This is where we have a problem.
Flushing to disk blocks all reads, and flushing is slow. So in theory the storage engine can handle parallel requests, but in reality we found at least one place where this is not true. Based on this and other experiments and code review, we came to the conclusion that KT was simply not designed for concurrent access. Due to the exclusive write lock implementation of KT, I/O writes degraded read latency to unacceptable levels.
In the beginning, the occurrence of that issue was rare and not a top priority. But as our customer base grew at rocket speed, all related datasets grew at the same pace. The number of writes per day was increasing constantly and this contention started to have an unacceptable impact on performance.
As you can imagine, our immediate fix was to do fewer writes. We were able to make small-scale changes to writing services to reduce their load, but this was quickly eclipsed by the growth of the company and the launch of new products. Before we knew it, our write levels were right back to where they began!
As a final step we disabled the fsync which KT was doing on each write. This meant KT would only flush to disk on shutdown, introducing potential data corruption which required its own tooling to detect and repair.
Unsynchronized Swimming
Continuing our theme of beginning with the KT documentation, it's worth looking at how they discuss non-durable writes:
If an application process which opened a database terminated without closing the database, it involves risks that some records may be missing and the database may be broken. By default, durability is settled when the database is closed properly and it is not settled for each updating operation.
At Cloudflare scale, kernel panics or even processor bugs happen and unexpectedly kill services. By turning off syncing to improve performance we began to experience database corruption. KT comes with a mechanism to repair a broken DB which we used successfully at first. Sadly, on our largest databases, the rebuilding process took a very long time and in many cases would not complete at all. This created a massive operational problem for our SRE team.
Ultimately we turned off the auto-repair mechanism so KT would not start if the DB was broken, and each time we lost a database we copied it from a healthy node. This syncing was being done manually by our SRE team. That team's time is much better spent building systems and investigating problems; the manual work couldn't continue.
Not syncing to disk caused another issue: KT had to flush the entire DB when it was being shut down. Again, this worked fine at the beginning, but with the DB getting bigger and bigger, the shutdown time started to sometimes hit the systemd grace period and KT was terminated with a SIGKILL. This led to even more database corruption.
Because all of the KT DB instances were growing at the same pace, this issue went from minor to critical seemingly overnight. SREs wasted hours syncing DBs from healthy instances before we understood the problem and greatly increased the grace period provided by systemd.
We also experienced numerous random instances of database corruption. Too often KT was shut down cleanly without any error, but when restarted the DB was corrupted and had to be restored. In the beginning, with 25 data centers, it happened rarely. Over the years we added thousands of new servers to Cloudflare infrastructure, and it was occurring multiple times a day.
Writing More
Most of our writes are adding new KV pairs, not overwriting or deleting.
We can see this in our key count growth:
In 2015, we had around 100 million KV pairs
In 2017, we passed 200 million
In 2018, we passed 500 million
In 2019, we exceeded 1 billion
Unfortunately, in a world where the quantity of data is always growing, it's not realistic to think you will never flush to disk. As we write new keys the page cache quickly fills. When it's full, it is flushed to disk. I/O saturation was leading to the very same contention problems we experienced previously.
Each time KT received a heavy write burst, we could see the read latency from KT increasing in our DC. At that point it was obvious to us that the KT DB locking implementation could no longer do the job for us, with or without syncing. Storage wasn't our only problem, however; the other key function of KT and our configuration system is replication.
The Best Effort Replication Protocol
The KT replication protocol is based solely on timestamps. If a transaction fails to replicate for any reason and the failure is not detected, the timestamp will continue to advance forever, missing that entry.
How can we have missing log entries? KT replicates data by sending an ordered list of transaction logs. Each log entry details what change is being made to our configuration database. These logs are kept for a period of time, but are eventually 'garbage collected', with old entries removed.
Let's think about a KT instance being down for days; it then restarts and asks for the transaction log from the last one it got. The management node receiving the request will send the nearest entries to this timestamp, but there could be missing transaction logs due to garbage collection. The client would not get all the updates it should, and this is how we quietly end up with an inconsistent DB.
Another weakness we noticed happens when the timestamp file is being written. Here is a snippet of the file ktserver.cc where the client replication code is implemented (shown as a screenshot in the original post): the code runs the loop as long as replication is working fine. The timestamp file is only going to be written when the loop terminates. The call to write_rts (the function writing the last applied transaction log to disk) can be seen at the bottom of the screenshot.
If KT terminates unexpectedly in the middle of that loop, the timestamp file won't be updated. When this KT restarts, and if it successfully repairs the database, it will replicate from the last value written to the rts file. KT could end up replaying days of transaction logs which were already applied to the DB, and values written days ago could be made visible again to our services for some time before everything gets back up to date!
We also regularly experienced databases getting out of sync without any reason. Sometimes these caught up by themselves, sometimes they didn't. We have never been able to properly identify the root cause of that issue. Distributed systems are hard, and distributed databases are brutal. They require extensive observability tooling to deploy properly, which didn't exist for KT.
Upgrading Kyoto Tycoon in Production
Multiple processes cannot access one database file at the same time. A database file is locked by reader-writer lock while a process is connected to it.
We release hundreds of software updates a day across our many engineering teams. However, we only very rarely deploy large-scale updates and upgrades to the underlying infrastructure which runs our code.
This frequency has increased over time, but in 2015 we would do a "CDN Release" once per quarter.
To perform a CDN Release we reroute traffic from specific servers and take the time to fully upgrade the software running on those machines all the way down to the kernel. As this was only done once per quarter, it could take several months for an upgrade to a service like KT to make it to every machine.
Most Cloudflare services now implement a zero downtime upgrade mechanism where we can upgrade the service without dropping production traffic. With this we can release a new version of our web or DNS servers outside of a CDN release window, which allows our engineering teams to move much faster. Many of our services are now actually implemented with Cloudflare Workers, which can be deployed even faster (using KT's replacement!).
Unfortunately this was not the case in 2015 when KT was being scaled. Problematically, KT does not allow multiple processes to concurrently access the same database file, so starting a new process while the previous one was still running was impossible. One idea was that we could stop KT, hold all incoming requests and start a new one. Unfortunately stopping KT would usually take over 15 minutes, with no guarantee regarding the DB status. Because stopping KT is very slow and only one KT process can access the DB, it was not possible to upgrade KT outside of a CDN release, locking us into that aging process.
High-ish Availability
One final quote from the KT documentation:
Kyoto Tycoon supports "dual master" replication topology which realizes higher availability. It means that two servers replicate each other so that you don't have to restart the survivor when one of them crashed. [...] Note that updating both of the servers at the same time might cause inconsistency of their databases. That is, you should use one master as a "active master" and the other as a "standby master".
In other words: when dual master is enabled, all writes should always go to the same root node, and a switch should be performed manually to promote the standby master when the root node dies.
Unfortunately that violates our principles of high availability. With no capability for automatic zero-downtime failover, it wasn't possible to handle the failure of the KT top root node without some amount of configuration propagation delay.
Building Quicksilver
Addressing these issues in Kyoto Tycoon wasn't deemed feasible. The project had no maintainer, the last official update being from April 2012, and was composed of a code base of 100k lines of C++. We looked at alternative open source systems at the time, none of which fit our use case well.
Our KT implementation suffered from some fundamental limitations:
No high availability
Weak replication protocol
Exclusive write lock
Not zero downtime upgrade friendly
It was also unreliable; critical replication and database functionality would break quite often. At some point, keeping KT up and running at Cloudflare was consuming 48 hours of SRE time per week.
We decided to build our own replicated key value store tailored for our needs and we called it Quicksilver. As of today Quicksilver powers an average of 2.5 trillion reads each day with an average latency in microseconds.
Fun fact: the name Quicksilver was picked by John Graham-Cumming, Cloudflare's CTO. The terrible secret that only very few members of humankind know is that he originally named it "Velocireplicator". It is a secret though. Don't tell anyone.
Thank you.
Storage Engine
One major complication with our legacy system, KT, was the difficulty of bootstrapping new machines. Replication is a slow way to populate an empty database; it's much more efficient to be able to instantiate a new machine from a snapshot containing most of the data, and then only use replication to keep it up to date. Unfortunately KT required nodes to be shut down before they could be snapshotted, making this challenging. One requirement for Quicksilver then was to use a storage engine which could provide running snapshots. Even further, as Quicksilver is performance critical, a snapshot must also not have a negative impact on other services that read from Quicksilver. With this requirement in mind we settled on a datastore library called LMDB after extensive analysis of different options.
LMDB's design makes taking consistent snapshots easy. LMDB is also optimized for low read latency rather than write throughput. This is important since we serve tens of millions of reads per second across thousands of machines, but only change values relatively infrequently. In fact, systems that switched from KT to Quicksilver saw drastically reduced read response times, especially on heavily loaded machines. For example, for our DNS service, the 99th percentile of reads dropped by two orders of magnitude!
LMDB also allows multiple processes to concurrently access the same datastore. This is very useful for implementing zero downtime upgrades for Quicksilver: we can start the new version while still serving current requests with the old version. Many data stores implement an exclusive write lock which requires only a single user to write at a time, or even worse, restricts reads while a write is conducted. LMDB does not implement any such lock.
LMDB is also append-only, meaning it only writes new data; it doesn't overwrite existing data. Beyond that, nothing is ever written to disk in a state which could be considered corrupted. This makes it crash-proof: after any termination it can immediately be restarted without issue. This means it does not require any type of crash recovery tooling.
Transaction Logs
LMDB does a great job of allowing us to query Quicksilver from each of our edge servers, but it alone doesn't give us a distributed database. We also needed to develop a way to distribute the changes made to customer configurations into the thousands of instances of LMDB we now have around the world. We quickly settled on a fan-out type distribution where nodes would query master-nodes, who would in turn query top-masters, for the latest updates.
Unfortunately there is no such thing as a perfectly reliable network or system. It is easy for a network to become disconnected or a machine to go down just long enough to miss critical replication updates. Conversely though, when users make changes to their Cloudflare configuration it is critical that they propagate accurately whatever the condition of the network. To ensure this, we used one of the oldest tricks in the book and included a monotonically increasing sequence number in our Quicksilver protocol:
…
< 0023 SET “hello” “world”
< 0024 SET “lorem” “ipsum”
< 0025 DEL “42”
…
It is now easily possible to detect whether an update was lost, by comparing the sequence number and making sure it is exactly one higher than the last message we have seen. The astute reader will notice that this is simply a log. This process is pleasantly simple because our system does not need to support global writes, only reads.
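As a rough illustration of that gap check (our own sketch, not Quicksilver's actual code), a replica can simply refuse to apply an entry whose index is not exactly one higher than the last one it applied, and fall back to resynchronization instead:

package replication

import "fmt"

// LogEntry is a simplified stand-in for a replicated update.
type LogEntry struct {
	Index uint64 // monotonically increasing sequence number
	Op    string // e.g. "SET" or "DEL"
	Key   string
	Value string
}

// Replica tracks the last sequence number it has successfully applied.
type Replica struct {
	lastIndex uint64
}

// Apply checks the sequence number before applying an entry. If an entry
// was lost in transit, the gap is detected immediately and the caller can
// trigger a resync from its master instead of silently diverging.
func (r *Replica) Apply(e LogEntry) error {
	if e.Index != r.lastIndex+1 {
		return fmt.Errorf("gap detected: expected index %d, got %d", r.lastIndex+1, e.Index)
	}
	// ... apply the entry to the local datastore here ...
	r.lastIndex = e.Index
	return nil
}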
Writes are relatively infrequent, and it is easy for us to elect a single data center to aggregate writes and ensure the monotonicity of our counter.
One of the most common failure modes of a distributed system is configuration errors. An analysis of how things could go wrong led us to realize that we had missed a simple failure case: since we are running separate Quicksilver instances for different kinds of data, we could corrupt a database by misconfiguring it. For example, nothing would prevent the DNS database from being updated with changes for the Page Rules database. The solution was to add unique IDs for each database and to require these IDs when initiating the replication protocol.
Through our experience with our legacy system, KT, we knew that replication does not always scale as easily as we would like. With KT it was common for us to saturate IO on our machines, slowing down reads as we tried to replay the replication log. To solve this with Quicksilver we decided to engineer a batch mode where many updates can be combined into a single write, allowing them all to be committed to disk at once. This significantly improved replication performance by reducing the number of disk writes we have to make. Today, we are batching all updates which occur in a 500ms window, and this has made highly-durable writes manageable.
Where should we store the transaction logs? For design simplicity we decided to store these within a different bucket of our LMDB database. By doing this, we can commit the transaction log and the update to the database in one shot. Originally the log was kept in a separate file, but storing it with the database simplifies the code.
Unfortunately this came at a cost: fragmentation. LMDB does not naturally fragment values; it needs to store every value in a sequential region of the disk. Eventually the disk begins to fill and the large regions which offer enough space to fit a particularly big value start to become hard to find. As our disks begin to fill up, it can take minutes for LMDB to find enough space to store a large value. Unfortunately we exacerbated this problem by storing the transaction log within the database. This fragmentation issue was not only causing high write latency, it was also making the databases grow very quickly. When the reasonably-sized free spaces between values start to become filled, less and less of the disk becomes usable. Eventually the only free space on disk lies in regions too small to store any of the actual values we need to store. If all of this space were compacted into a single region, however, there would be plenty of space available.
The compaction process requires rewriting an entire DB from scratch. This is something we do after bringing data centers offline, and its benefits last for around 2 months, but it is far from a perfect solution. To do better we began fragmenting the transaction log into page-sized chunks in our code to improve the write performance. Eventually we will also split large values into chunks that are small enough for LMDB to happily manage, and we will handle assembling these chunks in our code into the actual values to be returned.
We also implemented a key-value level CRC. The checksum is written when the transaction log is applied to the DB and checked when the KV pair is read. This checksum makes it possible to quickly identify and alert on any bugs in the fragmentation code.
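A per-KV checksum of this kind can be as simple as storing a CRC alongside each value and verifying it on every read. The sketch below is our own illustration of the idea using Go's standard hash/crc32 package, not Quicksilver's actual implementation:

package kvcheck

import (
	"encoding/binary"
	"errors"
	"hash/crc32"
)

// Seal appends a CRC32 of key+value to the stored bytes when the
// transaction log is applied to the database.
func Seal(key, value []byte) []byte {
	crc := crc32.NewIEEE()
	crc.Write(key)
	crc.Write(value)

	out := make([]byte, len(value)+4)
	copy(out, value)
	binary.LittleEndian.PutUint32(out[len(value):], crc.Sum32())
	return out
}

// Open verifies the checksum on read and returns the original value,
// flagging any corruption introduced by bugs in the storage path.
func Open(key, stored []byte) ([]byte, error) {
	if len(stored) < 4 {
		return nil, errors.New("stored value too short to contain a checksum")
	}
	value := stored[:len(stored)-4]
	want := binary.LittleEndian.Uint32(stored[len(stored)-4:])

	crc := crc32.NewIEEE()
	crc.Write(key)
	crc.Write(value)
	if crc.Sum32() != want {
		return nil, errors.New("checksum mismatch: key/value pair is corrupted")
	}
	return value, nil
}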
Within the QS team we are usually against this kind of defensive measure; we prefer focusing on code quality instead of defending against the consequences of bugs. But database consistency is so critical to the company that even we couldn't argue against playing defense.
LMDB stability has been exceptional. It has been running in production for over three years. We have experienced only a single bug and zero data corruption. Considering we serve over 2.5 trillion read requests and 30 million write requests a day on over 90,000 database instances across thousands of servers, this is very impressive.
Optimizations
Transaction logs are a critical part of our replication system, but each log entry ends up being significantly larger than the size of the values it represents. To prevent our disk space from being overwhelmed we use Snappy to compress entries. We also periodically garbage collect entries, only keeping the most recent required for replication.
For safety purposes we also added an incremental hash within our transaction logs. The hash helps us to ensure that messages have not been lost or incorrectly ordered in the log.
One other potential misconfiguration which scares us is the possibility of a Quicksilver node connecting to, and attempting to replicate from, itself. To prevent this we added a randomly generated process ID which is also exchanged in the handshake:

type ClientHandshake struct {
	// Unique ID of the client. Verifies that client and master have distinct IDs. Defaults to a per-process unique ID.
	ID uuid.ID
	// ID of the database to connect to. Verifies that client and master have the same ID. Disabled by default.
	DBID uuid.ID
	// At which index to start replication. First log entry will have Index = LastIndex + 1.
	LastIndex logentry.Index
	// At which index to stop replication.
	StopIndex logentry.Index
	// Ignore LastIndex and retrieve only new log entries.
	NewestOnly bool
	// Called by Update when a new entry is applied to the db.
	Metrics func(*logentry.LogEntry)
}

Each Quicksilver instance has a list of primary servers and secondary servers. It will always try to replicate from a primary node, which is often another QS node near it. There are a variety of reasons why this replication may not work, however. For example, if the target machine's database is too old it will need a larger changeset than exists on the source machine. To handle this, our secondary masters store a significantly longer history to allow machines to be offline for a full week and still be correctly resynchronized on startup.
Upgrades
Building a system is often much easier than maintaining it. One challenge was being able to do weekly releases without stopping the service. Fortunately the LMDB datastore supports multiple processes reading and writing to the DB file simultaneously. We use systemd to listen on incoming connections, and immediately hand the sockets over to our Quicksilver instance. When it's time to upgrade we start our new instance, and pass the listening socket over to the new instance seamlessly.
We also control the clients used to query Quicksilver. By adding an automatic retry to requests we are able to ensure that momentary blips in availability don't result in user-facing failures.
After years of experience maintaining this system we came to a surprising conclusion: handing off sockets is really neat, but it might involve more complexity than is warranted.
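In Go, that kind of handoff is usually built on systemd socket activation: systemd owns the listening socket and hands it to whichever process is currently running. The sketch below shows the common pattern using the coreos/go-systemd activation package; it is our own illustrative example under that assumption, not Quicksilver's code.

package main

import (
	"log"
	"net"

	"github.com/coreos/go-systemd/v22/activation"
)

func main() {
	// Ask systemd for the sockets it is holding for this service
	// (declared in a corresponding .socket unit). Because systemd owns
	// the socket, a new instance of the service can be started and take
	// over the listener while the old one drains and exits.
	listeners, err := activation.Listeners()
	if err != nil {
		log.Fatalf("cannot retrieve systemd listeners: %v", err)
	}
	if len(listeners) == 0 {
		log.Fatal("no sockets passed by systemd; is the .socket unit enabled?")
	}

	serve(listeners[0])
}

func serve(l net.Listener) {
	for {
		conn, err := l.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		// Handle each replication or read connection concurrently.
		go handle(conn)
	}
}

func handle(conn net.Conn) {
	defer conn.Close()
	// ... speak the application protocol here ...
}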
A Quicksilver restart happens in single-digit milliseconds, making it more acceptable than we would have thought to allow connections to momentarily fail without any downstream effects. We are currently evaluating the wisdom of simplifying the update system, as in our experience simplicity is often the best true proxy for reliability.
Observability
It's easy to overlook monitoring when designing a new system. We have learned, however, that a system is only as good as our ability to know how well it is working and to debug issues as they arise. Quicksilver is as mission-critical as anything possibly can be at Cloudflare, so it is worth the effort to ensure we can keep it running.
We use Prometheus for collecting our metrics and we use Grafana to monitor Quicksilver. Our SRE team uses a global dashboard, one dashboard for each data center, and one dashboard per server to monitor its performance and availability. Our primary alerting is also driven by Prometheus and distributed using PagerDuty.
We have learned that detecting availability is rather easy: if Quicksilver isn't available, countless alerts will fire in many systems throughout Cloudflare. Detecting replication lag is trickier, as systems will appear to continue working until it is discovered that changes aren't taking effect. We monitor our replication lag by writing a heartbeat at the top of the replication tree and computing the time difference on each server (a small sketch of this appears at the end of this post).
Conclusion
Quicksilver is, on one level, an infrastructure tool. Ideally no one, not even most of the engineers who work here at Cloudflare, should have to think twice about it. On another level, the ability to distribute configuration changes in seconds is one of our greatest strengths as a company. It makes using Cloudflare enjoyable and powerful for our users, and it becomes a key advantage for every product we build. This is the beauty and art of infrastructure: building something which is simple enough to make everything built on top of it more powerful, more predictable, and more reliable.
We are planning on open sourcing Quicksilver in the near future and hope it serves you as well as it has served us. If you're interested in working with us on this and other projects, please take a look at our jobs page and apply today.
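As a closing illustration of the heartbeat-based lag monitoring described above (our own sketch under stated assumptions, not Cloudflare's code), the root of the replication tree can periodically write a timestamp under a reserved key, and every server can report the age of that replicated timestamp as a gauge. The map and the "__heartbeat__" key below are hypothetical stand-ins for the local datastore.

package lagmon

import (
	"encoding/binary"
	"time"
)

// WriteHeartbeat runs at the top of the replication tree: it stores the
// current time under a reserved key, and the value then flows down
// through normal replication like any other update.
func WriteHeartbeat(store map[string][]byte) {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, uint64(time.Now().UnixNano()))
	store["__heartbeat__"] = buf
}

// ReplicationLag runs on each server: it reads the replicated heartbeat
// and reports how far behind the root this replica is. The result can be
// exported as a Prometheus gauge and alerted on.
func ReplicationLag(store map[string][]byte) (time.Duration, bool) {
	raw, ok := store["__heartbeat__"]
	if !ok || len(raw) != 8 {
		return 0, false // no heartbeat seen yet
	}
	written := time.Unix(0, int64(binary.BigEndian.Uint64(raw)))
	return time.Since(written), true
}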

How to Choose the Right Facebook Attribution Model

Social Media Examiner -

Are you struggling to track the impact of your Facebook ads? Wondering which Facebook attribution model to use? In this article, you’ll discover seven different Facebook ad attribution models to assess your campaigns’ performance. About Facebook Attribution Models The Facebook attribution tool gives you insights into your customers’ purchasing journey and the roles of different […] The post How to Choose the Right Facebook Attribution Model appeared first on Social Media Marketing | Social Media Examiner.

Dogfooding from Home: How Cloudflare Built our Cloud VPN Replacement

CloudFlare Blog -

It's never been more crucial to help remote workforces stay fully operational — for the sake of countless individuals, businesses, and the economy at large. In light of this, Cloudflare recently launched a program that offers our Cloudflare for Teams suite for free to any company, of any size, through September 1. Some of these firms have been curious about how Cloudflare itself uses these tools. Here's how Cloudflare's next-generation VPN alternative, Cloudflare Access, came to be.
Rewind to 2015. Back then, as with many other companies, all of Cloudflare's internally-hosted applications were reached via a hardware-based VPN. When one of our on-call engineers received a notification (usually on their phone), they would fire up a clunky client on their laptop, connect to the VPN, and log on to Grafana. It felt a bit like solving a combination lock with a fire alarm blaring overhead. But for three of our engineers, enough was enough. Why was a cloud network security company relying on clunky on-premise hardware? And thus, Cloudflare Access was born.
A Culture of Dogfooding
Many of the products Cloudflare builds are a direct result of the challenges our own team is looking to address, and Access is a perfect example. Development on Access originally began in 2015, when the project was known internally as EdgeAuth.
Initially, just one application was put behind Access. Engineers who received a notification on their phones could tap a link and, after authenticating via their browser, they would immediately have access to the key details of the alert in Grafana. We liked it a lot — enough to get excited about what we were building.
Access solved a variety of issues for our security team as well. Using our identity provider of choice, we were able to restrict access to internal applications at L7 using Access policies. This once onerous process of managing access control at the network layer with a VPN was replaced with a few clicks in the Cloudflare dashboard.
After Grafana, our internal Atlassian suite including Jira and Wiki, and hundreds of other internal applications, the Access team began working to support non-HTTP based services. Support for git allowed Cloudflare's developers to securely commit code from anywhere in the world in a fully audited fashion. This made Cloudflare's security team very happy. (The original post shows a slightly modified example of a real authentication event that was generated while pushing code to our internal git repository.)
It didn't take long for more and more of Cloudflare's internal applications to make their way behind Access. As soon as people started working with the new authentication flow, they wanted it everywhere. Eventually our security team mandated that we move our apps behind Access, but for a long time it was totally organic: teams were eager to use it.
Incidentally, this highlights a perk of utilizing Access: you can start by protecting and streamlining the authentication flows for your most popular internal tools — but there's no need for a wholesale rip-and-replace. For organizations that are experiencing limits on their hardware-based VPNs, it can be an immediate salve that is up and running after just one setup call with a Cloudflare onboarding expert (you can schedule a time here). That said, there are some upsides to securing everything with Access.
Supporting a Global Team
VPNs are notorious for bogging down Internet connections, and the one we were using was no exception.
After Grafana came our internal Atlassian suite, including Jira and Wiki, and hundreds of other internal applications; then the Access team began working to support non-HTTP-based services. Support for git allowed Cloudflare’s developers to securely commit code from anywhere in the world in a fully audited fashion, which made Cloudflare’s security team very happy.

It didn’t take long for more and more of Cloudflare’s internal applications to make their way behind Access. As soon as people started working with the new authentication flow, they wanted it everywhere. Eventually our security team mandated that we move our apps behind Access, but for a long time adoption was entirely organic: teams were eager to use it.

Incidentally, this highlights a perk of using Access: you can start by protecting and streamlining the authentication flows for your most popular internal tools — but there’s no need for a wholesale rip-and-replace. For organizations that are hitting the limits of their hardware-based VPNs, it can be an immediate salve that is up and running after just one setup call with a Cloudflare onboarding expert (you can schedule a time here). That said, there are some upsides to securing everything with Access.

Supporting a Global Team

VPNs are notorious for bogging down Internet connections, and the one we were using was no exception. When connecting to internal applications, having all of our employees’ Internet connections pass through a standalone VPN was a serious performance bottleneck and single point of failure.

Cloudflare Access is a much saner approach. Authentication occurs at our network edge, which extends to 200 cities in over 90 countries. Rather than routing all of their traffic through a single network appliance, employees connecting to internal apps connect to a data center just down the road.

As we support a globally-distributed workforce, our security team is committed to protecting our internal applications with the most secure and usable authentication mechanisms. With Cloudflare Access we’re able to rely on the strong two-factor authentication mechanisms of our identity provider, which was much more difficult to do with our legacy VPN.

On-Boarding and Off-Boarding with Confidence

One of the trickiest things for any company is ensuring everyone has access to the tools and data they need — but no more than that. That challenge only grows as a team scales. As employees and contractors leave, it is similarly essential to ensure that their permissions are swiftly revoked. Managing these access controls is a real challenge for IT organizations around the world — and it’s greatly exacerbated when each employee has multiple accounts strewn across different tools in different environments. Before using Access, our team had to put in a lot of time to make sure every box was checked.

Now that Cloudflare’s internal applications are secured with Access, on- and offboarding is much smoother. Each new employee and contractor is quickly granted rights to the applications they need, and they can reach them via a launchpad that makes them readily accessible. When someone leaves the team, one configuration change gets applied to every application, so there isn’t any guesswork.

Access is also a big win for network visibility. With a VPN, you get minimal insight into the activity of users on the network: you know their username and IP address, but that’s about it. If someone manages to get in, it’s difficult to retrace their steps. Cloudflare Access is based on a zero-trust model, which means that every request is authenticated. It allows us to assign granular permissions to employees and contractors via Access Groups. And it gives our security team the ability to detect unusual activity across any of our applications, with extensive logging to support analysis. Put simply: it makes us more confident in the security of our internal applications.

But It’s Not Just for Us

With the massive transition to a remote work model for many organizations, Cloudflare Access can make you more confident in the security of your internal applications — while also driving increased productivity in your remote employees. Whether you rely on Jira, Confluence, SAP, or custom-built applications, Access can secure those applications and can be live in minutes.

Cloudflare has made Access completely free to all organizations, all around the world, through September 1. If you’d like to get started, follow our quick start guide. Or, if you’d prefer to onboard with one of our specialists, schedule a 30-minute call at this link: calendly.com/cloudflare-for-teams/onboarding?month=2020-03
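One practical consequence of the zero-trust model described above is that an application sitting behind Access can verify, on every request, that the visitor really did pass through Cloudflare’s edge. The sketch below shows one way an internal app might do that in Python, based on my understanding of how Access attaches a signed JWT to each request; TEAM_DOMAIN and POLICY_AUD are placeholders for your own values, and the header name and key URL should be confirmed against Cloudflare’s documentation.

# Sketch: verify the Access JWT inside an app that sits behind Cloudflare Access.
# Assumptions: Access forwards a signed JWT in the Cf-Access-Jwt-Assertion header
# and publishes its signing keys at <team>.cloudflareaccess.com/cdn-cgi/access/certs;
# TEAM_DOMAIN and POLICY_AUD are placeholders, not Cloudflare's own values.
import jwt  # PyJWT >= 2.0 (provides PyJWKClient)

TEAM_DOMAIN = "https://yourteam.cloudflareaccess.com"   # placeholder
POLICY_AUD = "your-application-audience-tag"            # placeholder
_jwks = jwt.PyJWKClient(f"{TEAM_DOMAIN}/cdn-cgi/access/certs")

def verify_access_token(headers: dict) -> dict:
    """Return the token claims if the request came through Access, else raise."""
    token = headers.get("Cf-Access-Jwt-Assertion")
    if not token:
        raise PermissionError("request did not carry an Access token")
    signing_key = _jwks.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=POLICY_AUD,
        options={"require": ["exp", "iat"]},
    )

Checking the token inside the application is what turns "authenticated at the edge" into a property the app itself can rely on, rather than trusting the network path.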

Migrating from VPN to Access

CloudFlare Blog -

With so many people at Cloudflare now working remotely, it's worth stepping back and looking at the systems we use to get work done and how we protect them. Over the years we've migrated from a traditional "put it behind the VPN!" company to a modern zero-trust architecture. Cloudflare hasn’t completed its journey yet, but we're pretty darn close. Our general strategy: protect every internal app we can with Access (our zero-trust access proxy), and simultaneously beef up our VPN’s security with Spectrum (a product that proxies arbitrary TCP and UDP traffic and protects it from DDoS).

Before Access, we had many services behind the VPN (a Cisco ASA running AnyConnect) to enforce strict authentication and authorization. But the VPN always felt clunky: it's difficult to set up, maintain (securely), and scale on the server side, and each new employee we onboarded needed to learn how to configure their client. Migration takes time, though, and involves many different teams, so we moved services one by one, focusing on the highest-priority services first and working our way down. Until the last service is moved to Access, we still maintain our VPN, keeping it protected with Spectrum.

Some of our services didn't run over HTTP or other Access-supported protocols and still required the VPN: source control (git+ssh) was a particular sore spot. If any of our developers needed to commit code, they'd have to fire up the VPN to do so. To help in our new-found goal to kill the pinata, we introduced support for SSH over Access, which allowed us to replace the VPN as a protection layer for our source control systems.

Over the years, we've been whittling away at our services, one by one. We're nearly there, with only a few niche tools remaining behind the VPN rather than Access. As of this year, we no longer require new employees to set up the VPN as part of their company onboarding, and we can see the shift in our Access logs, with more users logging into more apps every month.

During this transition period from VPN to Access, we've had to keep our VPN service up and running. Because the VPN is a key tool for people doing their work while remote, it's extremely important that this service is highly available and performant.

Enter Spectrum: our DDoS protection and performance product for any TCP- or UDP-based protocol. We put Spectrum in front of our VPN very early on and saw an immediate improvement in our security posture and availability, all without any change in end-user experience. With Spectrum sitting in front of our VPN, we now use the entire Cloudflare edge network to protect our VPN endpoints against DDoS and to improve performance for VPN end-users. Setup was a breeze, with only minimal configuration needed.
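To make "minimal configuration" concrete, here is a rough sketch of what creating Spectrum applications for a VPN endpoint might look like through Cloudflare’s HTTP API in Python: one TCP application for AnyConnect’s HTTPS authentication and one UDP application for its DTLS data channel. The endpoint path, field names, hostname, and origin address are all assumptions for illustration and should be checked against the current Spectrum API documentation; this is not the configuration Cloudflare itself runs.

# Sketch: front a VPN endpoint with Cloudflare Spectrum via the HTTP API.
# Assumptions: endpoint path and JSON field names follow my reading of the
# Cloudflare v4 Spectrum API; ZONE_ID, vpn.example.com, and 192.0.2.10 are
# placeholders, not Cloudflare's real configuration.
import requests

API = "https://api.cloudflare.com/client/v4"
ZONE_ID = "your-zone-id"                              # placeholder
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder

def create_spectrum_app(protocol: str) -> dict:
    """Create one Spectrum app proxying `protocol` (e.g. 'tcp/443') to the origin."""
    return requests.post(
        f"{API}/zones/{ZONE_ID}/spectrum/apps",
        headers=HEADERS,
        json={
            "protocol": protocol,  # outer protocol/port terminated at the edge
            "dns": {"type": "CNAME", "name": "vpn.example.com"},
            "origin_direct": [f"{protocol.split('/')[0]}://192.0.2.10:443"],
            "proxy_protocol": "off",
            "ip_firewall": True,
        },
    ).json()

# AnyConnect authenticates over HTTPS (TCP/443), then tunnels data over DTLS (UDP/443),
# so the VPN needs both a TCP and a UDP Spectrum application.
for proto in ("tcp/443", "udp/443"):
    create_spectrum_app(proto)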
Cisco AnyConnect uses HTTPS (TCP) to authenticate, after which the actual data is tunneled over a DTLS-encrypted UDP protocol. Although configuration and setup were a breeze, actually getting it to work was definitely not. Our early users quickly noted that although authenticating worked just fine, they couldn't actually see any data flowing through the VPN. We quickly realized our arch nemesis, the MTU (maximum transmission unit), was to blame. As some of our readers might remember, we have historically set a very small MTU for IPv6. We did this because there might be IPv6-to-IPv4 tunnels between eyeballs and our edge; by setting it very low we prevented PTB (packet too big) messages from ever being sent back to us, which cause problems with our ECMP routing inside our data centers.

But a VPN always increases packet size because of its encapsulation headers, which means the 1280-byte MTU we had set would never be enough to run a UDP-based VPN. We ultimately settled on an MTU of 1420, which we still run today and which allows us to protect our VPN entirely with Spectrum.

Over the past few years this has served us well: we know our VPN infrastructure is safe and people will be able to continue to work remotely no matter what happens. All in all this has been a very interesting journey, whittling down one service at a time and getting closer and closer to the day we can officially retire our VPN. To us, Access represents the future, with Spectrum + VPN to tide us over and protect our remaining services until they've migrated. In the meantime, as of the start of 2020, new employees no longer get a VPN account by default!
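To see why 1280 was too tight once the tunnel overhead is added, here is a small back-of-the-envelope calculation. The per-packet overhead figures (outer IPv6 header, UDP header, and an approximate DTLS record overhead) are assumptions chosen for illustration, not measurements from Cloudflare's deployment.

# Back-of-the-envelope: how much room is left for the inner (tunneled) packet
# once a DTLS-over-UDP VPN wraps it, at different edge MTUs.
# Overhead values are illustrative assumptions, not measured figures.
OUTER_IPV6_HEADER = 40   # bytes, fixed IPv6 header on the outer packet
UDP_HEADER = 8           # bytes
DTLS_OVERHEAD = 25       # bytes, rough allowance for DTLS record header, MAC, padding

def max_inner_packet(edge_mtu: int) -> int:
    """Largest tunneled packet that still fits in one outer packet at this MTU."""
    return edge_mtu - OUTER_IPV6_HEADER - UDP_HEADER - DTLS_OVERHEAD

for mtu in (1280, 1420):
    print(f"edge MTU {mtu}: inner packets up to ~{max_inner_packet(mtu)} bytes")

# With a 1280-byte edge MTU, inner packets must stay around 1207 bytes, below the
# 1280 bytes that IPv6 itself guarantees end hosts, so tunneled traffic gets dropped.
# Raising the edge MTU to 1420 leaves roughly 1347 bytes of headroom for the tunnel.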

Social Media Use Surges: How Marketers Should Respond

Social Media Examiner -

Welcome to this week’s edition of the Social Media Marketing Talk Show, a news show for marketers who want to stay on the leading edge of social media. On this week’s Social Media Marketing Talk Show, we explore what global spikes in social media usage mean for marketing strategies and LinkedIn’s new conversation ads format […] The post Social Media Use Surges: How Marketers Should Respond appeared first on Social Media Marketing | Social Media Examiner.
