Amazon Web Services Blog

The Wide World of Microsoft Windows on AWS

You have been able to run Microsoft Windows on AWS since 2008 (my ancient post, Big Day for Amazon EC2: Production, SLA, Windows, and 4 New Capabilities, shows you just how far AWS has come in a little over a decade). According to IDC, AWS has nearly twice as many Windows Server instances in the cloud as the next largest cloud provider. Today, we believe that AWS is the best place to run Windows and Windows applications in the cloud. You can run the full Windows stack on AWS, including Active Directory, SQL Server, and System Center, while taking advantage of 61 Availability Zones across 20 AWS Regions. You can run existing .NET applications and you can use Visual Studio or VS Code to build new, cloud-native Windows applications using the AWS SDK for .NET.

Wide World of Windows
Starting from this amazing diagram drawn by my colleague Jerry Hargrove, I’d like to explore the Windows-on-AWS ecosystem in detail:

1 – SQL Server Upgrades
AWS provides first-class support for SQL Server, encompassing all four Editions (Express, Web, Standard, and Enterprise), with multiple versions of each edition. This wide-ranging support has helped SQL Server to become one of the most popular Windows workloads on AWS. The SQL Server Upgrade Tool (an AWS Systems Manager script) makes it easy for you to upgrade an EC2 instance that is running SQL Server 2008 R2 SP3 to SQL Server 2016. The tool creates an AMI from a running instance, upgrades the AMI to SQL Server 2016, and launches the new AMI. To learn more, read about the AWSEC2-CloneInstanceAndUpgradeSQLServer action. Amazon RDS makes it easy for you to upgrade your DB Instances to new major or minor versions of SQL Server. The upgrade is performed in-place, and can be initiated with a couple of clicks. For example, if you are currently running SQL Server 2014, you have the following upgrades available: You can also opt in to automatic upgrades to new minor versions that take place within your preferred maintenance window: Before you upgrade a production DB Instance, you can create a snapshot backup, use it to create a test DB Instance, upgrade that instance to the desired new version, and perform acceptance testing. To learn more about upgrades, read Upgrading the Microsoft SQL Server DB Engine (there's also a short CLI sketch after item 3 below).

2 – SQL Server on Linux
If your organization prefers Linux, you can run SQL Server on Ubuntu, Amazon Linux 2, or Red Hat Enterprise Linux using our License Included (LI) Amazon Machine Images. Read the most recent launch announcement or search for the AMIs in AWS Marketplace using the EC2 Launch Instance Wizard: This is a very cost-effective option since you do not need to pay for Windows licenses. You can use the new re-platforming tool (another AWS Systems Manager script) to move your existing SQL Server databases (2008 and above, either in the cloud or on-premises) from Windows to Linux.

3 – Always-On Availability Groups (Amazon RDS for SQL Server)
If you are running enterprise-grade production workloads on Amazon RDS (our managed database service), you should definitely enable this feature! It enhances availability and durability by replicating your database between two AWS Availability Zones, with a primary instance in one and a hot standby in another, with fast, automatic failover in the event of planned maintenance or a service disruption. You can enable this option for an existing DB Instance, and you can also specify it when you create a new one: To learn more, read Multi-AZ Deployments Using Microsoft SQL Mirroring or Always On.
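If you drive Amazon RDS for SQL Server from the command line, the in-place upgrade described in item 1 can also be scripted with the AWS CLI. This is a minimal sketch, not a recipe: the instance identifier and target engine version below are placeholders, so check describe-db-engine-versions for the versions actually offered, and test against a snapshot-based copy first as described above:

# List the SQL Server engine versions that RDS currently offers (Standard Edition shown)
$ aws rds describe-db-engine-versions \
    --engine sqlserver-se \
    --query "DBEngineVersions[].EngineVersion"

# Upgrade a DB Instance in place; --allow-major-version-upgrade is needed for major version jumps
$ aws rds modify-db-instance \
    --db-instance-identifier my-sqlserver-db \
    --engine-version 13.00.5216.0.v1 \
    --allow-major-version-upgrade \
    --apply-immediately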
4 – Lambda Support
Let’s talk about some features for developers! Launched in 2014, and the subject of continuous innovation ever since, AWS Lambda lets you run code in the cloud without having to own, manage, or even think about servers. You can choose from several .NET Core runtimes for your Lambda functions, and then write your code in either C# or PowerShell: To learn more, read Working with C# and Working with PowerShell in the AWS Lambda Developer Guide. Your code has access to the full set of AWS services, and can make use of the AWS SDK for .NET; read the Developing .NET Core AWS Lambda Functions post for more info.

5 – CDK for .NET
The AWS CDK (Cloud Development Kit) for .NET lets you define your cloud infrastructure as code and then deploy it using AWS CloudFormation. For example, this code (stolen from this post) will generate a template that creates an Amazon Simple Queue Service (SQS) queue and an Amazon Simple Notification Service (SNS) topic:

var queue = new Queue(this, "MyFirstQueue", new QueueProps
{
    VisibilityTimeoutSec = 300
});

var topic = new Topic(this, "MyFirstTopic", new TopicProps
{
    DisplayName = "My First Topic Yeah"
});

6 – EC2 AMIs for .NET Core
If you are building Linux applications that make use of .NET Core, you can use our Amazon Linux 2 and Ubuntu AMIs. With .NET Core, PowerShell Core, and the AWS Command Line Interface (CLI) preinstalled, you’ll be up and running, and ready to deploy applications, in minutes. You can find the AMIs by searching for core when you launch an EC2 instance:

7 – .NET Dev Center
The AWS .NET Dev Center contains materials that will help you to learn how to design, build, and run .NET applications on AWS. You’ll find articles, sample code, 10-minute tutorials, projects, and lots more:

8 – AWS License Manager
We want to help you to manage and optimize your Windows and SQL Server applications in new ways. For example, AWS License Manager helps you to manage the licenses for the software that you run in the cloud or on-premises (read my post, New AWS License Manager – Manage Software Licenses and Enforce Licensing Rules, to learn more). You can create custom rules that emulate those in your licensing agreements, and enforce them when an EC2 instance is launched: The License Manager also provides you with information on license utilization so that you can fine-tune your license portfolio, possibly saving some money in the process!

9 – Import, Export, and Migration
You have lots of options and choices when it comes to moving your code and data into and out of AWS. Here’s a very brief summary:
TSO Logic – This new member of the AWS family (we acquired the company earlier this year) offers an analytics solution that helps you to plan, optimize, and save money as you make your journey to the cloud.
VM Import/Export – This service allows you to import existing virtual machine images to EC2 instances, and export them back to your on-premises environment. Read Importing a VM as an Image Using VM Import/Export to learn more; there's also a brief CLI sketch after this list.
AWS Snowball – This service lets you move petabyte-scale data sets into and out of AWS. If you are at exabyte scale, check out the AWS Snowmobile.
AWS Migration Acceleration Program – This program encompasses AWS Professional Services and teams from our partners. It is based on a three-step migration model that includes a readiness assessment, a planning phase, and the actual migration.
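As a small illustration of the VM Import/Export option above, importing an exported Windows VM image generally looks like the following. A hedged sketch: the bucket, object key, and license type are placeholders, and the vmimport service role described in the VM Import/Export documentation must already exist in your account:

# Import a VHD that has already been uploaded to S3, producing an AMI
$ aws ec2 import-image \
    --description "Windows Server app server" \
    --license-type BYOL \
    --disk-containers "Format=vhd,UserBucket={S3Bucket=my-import-bucket,S3Key=appserver.vhd}"

# Track the import task until it completes (task ID comes from the previous command's output)
$ aws ec2 describe-import-image-tasks --import-task-ids import-ami-0123456789abcdef0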
10 – 21st Century Applications
AWS gives you a full-featured, rock-solid foundation and a rich set of services so that you can build tomorrow’s applications today! You can go serverless with the .NET Core support in Lambda, make use of our Deep Learning AMIs for Windows, host containerized apps on Amazon ECS or Amazon EKS, and write code that makes use of the latest AI-powered services. Your applications can make use of recommendations, forecasting, image analysis, video analysis, text analytics, document analysis, text to speech, translation, transcription, and more.

11 – AWS Integration
Your existing Windows applications, both cloud-based and on-premises, can make use of Windows file system and directory services within AWS:
Amazon FSx for Windows Server – This fully managed native Windows file system is compatible with the SMB protocol and NTFS. It provides shared file storage for Windows applications, backed by SSD storage for fast & reliable performance. To learn more, read my blog post.
AWS Directory Service – Your directory-aware workloads and AWS Enterprise IT applications can use this managed Active Directory that runs in the AWS Cloud.

Join our Team
If you would like to build, manage, or market new AWS offerings for the Windows market, be sure to check out our current openings. Here’s a sampling:
Senior Digital Campaign Marketing Manager – Own the digital tactics for product awareness and run adoption campaigns.
Senior Product Marketing Manager – Drive communications and marketing, create compelling content, and build awareness.
Developer Advocate – Drive adoption and community engagement for SQL Server on EC2.

Learn More
Our freshly updated Windows on AWS and SQL Server on AWS pages contain case studies, quick starts, and lots of other useful information.

— Jeff;

Docker, Amazon ECS, and Spot Fleets: A Great Fit Together

Guest post by AWS Container Hero Tung Nguyen. Tung is the president and founder of BoltOps, a consulting company focused on cloud infrastructure and software on AWS. He also enjoys writing for the BoltOps Nuts and Bolts blog.

EC2 Spot Instances allow me to use spare compute capacity at a steep discount. Using Amazon ECS with Spot Instances is probably one of the best ways to run my workloads on AWS. By using Spot Instances, I can save 50–90% on Amazon EC2 instances. You would think that folks would jump at a huge opportunity like a Black Friday sale. However, most folks either seem to not know about Spot Instances or are hesitant. This may be due to some fallacies about Spot.

Spot Fallacies
With the Spot model, AWS can remove instances at any time. It can be due to a maintenance upgrade, high demand for that instance type, an older instance type, or any reason whatsoever. Hence the first fear and fallacy that people quickly point out with Spot: What do you mean that the instance can be replaced at any time? Oh no, that must mean that within 20 minutes of launching the instance, it gets killed. I felt the same way too initially. The Spot Instance Advisor website actually states: The average frequency of interruption across all Regions and instance types is less than 5%. From my own usage, I have seen instances run for weeks. Need proof? Here’s a screenshot from an instance in one of our production clusters. If you’re wondering how many days that is… yes, that is 228 continuous days. You might not get these same long uptimes, but it disproves the fallacy that Spot Instances are usually interrupted within 20 minutes of launch.

Spot Fleets
With Spot Instances, I place a single request for a specific instance type in a specific Availability Zone. With Spot Fleets, instead of requesting a single instance type, I can ask for a variety of instance types that meet my requirements. For many workloads, as long as the CPU and RAM are close enough, many instance types do just fine. So, I can spread my instance bets across instance types and multiple zones with Spot Fleets. Using Spot Fleets makes the system dramatically more robust, on top of the already low interruption rate. Also, I can run an On-Demand cluster to provide additional safeguard capacity.

ECS and Spot Fleets: A Great Fit Together
This is one of my favorite ways to run workloads because it gives me a scalable system at a ridiculously low cost. The technologies are such a great fit together that one might think they were built for each other. Docker provides a consistent, standard binary format to deploy. If it works in one Docker environment, then it works in another. Containers can be pulled down in seconds, making them an excellent fit for Spot Instances, where containers might move around during an interruption. ECS provides a great ecosystem to run Docker containers. ECS supports a feature called container instance draining that allows me to tell ECS to relocate the Docker containers to other EC2 instances. EC2 sends a two-minute warning before it terminates a Spot Instance, letting me know that the interruption is coming. These are the necessary pieces I need for building an ECS cluster on top of Spot Fleet. I use the two-minute warning to start container instance draining, and ECS automatically moves containers to another instance in the fleet. Here’s a CloudFormation template that achieves this: ecs-ec2-spot-fleet. Because the focus is on understanding Spot Fleets, the VPC is designed to be simple.
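Here is a minimal sketch of the glue described above: a small script, run on each container instance, that watches for the two-minute interruption warning in the instance metadata and then puts the instance into the DRAINING state. It assumes the ECS agent's introspection endpoint is available on the instance, that jq and the AWS CLI are installed, and that the instance role allows ecs:UpdateContainerInstancesState; adapt it before relying on it in production:

#!/bin/bash
# Poll for the Spot two-minute interruption warning. When it appears, mark this
# container instance as DRAINING so that ECS relocates its tasks to other instances.
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
while true; do
  if curl -sf http://169.254.169.254/latest/meta-data/spot/instance-action > /dev/null; then
    CLUSTER=$(curl -s http://localhost:51678/v1/metadata | jq -r '.Cluster')
    INSTANCE_ARN=$(curl -s http://localhost:51678/v1/metadata | jq -r '.ContainerInstanceArn')
    aws ecs update-container-instances-state \
      --region "$REGION" \
      --cluster "$CLUSTER" \
      --container-instances "$INSTANCE_ARN" \
      --status DRAINING
    break
  fi
  sleep 5
done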
The template specifies two instance types in the Spot Fleet: t3.small and t3.medium with 2 GB and 4 GB of RAM, respectively. The template weights the t3.medium twice as much as the t3.small. Essentially, the Spot Fleet TargetCapacity value equals the total RAM to provision for the ECS cluster. So if I specify 8, the Spot Fleet service might provision four t3.small instances or two t3.medium instances. The cluster adds up to at least 8 GB of RAM. To launch the stack, I run the following command:

$ aws cloudformation create-stack --stack-name ecs-spot-demo --template-body file://ecs-spot-demo.yml --capabilities CAPABILITY_IAM

The CloudFormation stack launches container instances and registers them to an ECS cluster named development by default. I can change this with the EcsCluster parameter. For more information on the parameters, see the README and the template source. When I deploy the application, the deploy tool creates the ECS cluster itself. Here are the Spot Instances in the EC2 console.

Deploy the demo app
After the Spot cluster is up, I can deploy a demo app on it. I wrote a tool called Ufo that is useful for these tasks: build the Docker image, register the ECS task definition, register and deploy the ECS service, and create the load balancer. Docker should be installed as a prerequisite. First, I create an ECR repo and set some variables:

ECR_REPO=$(aws ecr create-repository --repository-name demo/sinatra | jq -r '.repository.repositoryUri')
VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values="demo vpc" | jq -r '.Vpcs[].VpcId')

Now I’m ready to clone the demo repo and deploy a sample app to ECS with ufo.

git clone https://github.com/tongueroo/demo-ufo.git demo
cd demo
ufo init --image $ECR_REPO --vpc-id $VPC_ID
ufo current --service demo-web
ufo ship # deploys to ECS on the Spot Fleet cluster

Here’s the ECS service running: I then grab the Elastic Load Balancing endpoint from the console or with ufo ps.

$ ufo ps
Elb: develop-Elb-12LHJWU4TH3Q8-597605736.us-west-2.elb.amazonaws.com

Now I test with curl:

$ curl develop-Elb-12LHJWU4TH3Q8-597605736.us-west-2.elb.amazonaws.com
42

The application returns “42,” the meaning of life, successfully. That’s it! I now have an application running on ECS with Spot Fleet Instances.

Parting thoughts
One additional advantage of using Spot is that it encourages me to think about my architecture in a highly available manner. The Spot “constraints” ironically result in much better sleep at night, as the system must be designed to be self-healing. Hopefully, this post opens the world of running ECS on Spot Instances to you. It’s a core part of the systems that BoltOps has been running on its own production system and for customers. I still get excited about the setup today. If you’re interested in Spot architectures, contact me at BoltOps. One last note: Auto Scaling groups also support running multiple instance types and purchase options. Jeff mentions in his post that weight support is planned for a future release. That’s exciting, as it may streamline the usage of Spot with ECS even further.
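As a follow-up to the TargetCapacity discussion earlier in this post, the fleet can also be resized on demand without touching the template. A hedged sketch (the Spot Fleet request ID is a placeholder; look yours up with the first command):

# Find the Spot Fleet request that the stack created
$ aws ec2 describe-spot-fleet-requests \
    --query "SpotFleetRequestConfigs[].[SpotFleetRequestId,SpotFleetRequestState]"

# Double the weighted capacity (roughly the total GB of RAM in this setup)
$ aws ec2 modify-spot-fleet-request \
    --spot-fleet-request-id sfr-01234567-89ab-cdef-0123-456789abcdef \
    --target-capacity 16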

In the Works – AWS Region in Indonesia

Last year we launched two new AWS Regions—a second GovCloud Region in the United States, and our first Nordic Region in Sweden—and we announced that we are working on regions in Cape Town, South Africa and Milan, Italy.

Jakarta in the Future
Today, I am happy to announce that we are working on the AWS Asia Pacific (Jakarta) Region in Indonesia. The new region will be based in Greater Jakarta, will consist of three Availability Zones, and will give AWS customers and partners the ability to run their workloads and store their data in Indonesia. The AWS Asia Pacific (Jakarta) Region will be our ninth region in Asia Pacific, joining existing regions in Beijing, Mumbai, Ningxia, Seoul, Singapore, Sydney, Tokyo, and an upcoming region in Hong Kong SAR. AWS customers are already making use of 61 Availability Zones across 20 infrastructure regions worldwide. Today’s announcement brings the total number of global regions (operational and in the works) up to 25. We are looking forward to serving new and existing customers in Indonesia and working with partners across Asia Pacific. The addition of the AWS Asia Pacific (Jakarta) Region will enable more Indonesian organizations to leverage advanced technologies such as Analytics, Artificial Intelligence, Database, Internet of Things (IoT), Machine Learning, Mobile services, and more to drive innovation. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Indonesia. We are already working to help prepare developers in Indonesia for the digital future, with programs like AWS Educate and AWS Activate. Dozens of universities and business schools across Indonesia are already participating in our educational programs, as are a plethora of startups and accelerators.

Stay Tuned
I’ll be sure to share additional news about this and other upcoming AWS regions as soon as I have it, so stay tuned!

— Jeff;

Learn about AWS Services & Solutions – April AWS Online Tech Talks

Join us this April to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Blockchain
May 2, 2019 | 11:00 AM – 12:00 PM PT – How to Build an Application with Amazon Managed Blockchain – Learn how to build an application on Amazon Managed Blockchain with the help of demo applications and sample code.

Compute
April 29, 2019 | 1:00 PM – 2:00 PM PT – How to Optimize Amazon Elastic Block Store (EBS) for Higher Performance – Learn how to optimize performance and spend on your Amazon Elastic Block Store (EBS) volumes.
May 1, 2019 | 11:00 AM – 12:00 PM PT – Introducing New Amazon EC2 Instances Featuring AMD EPYC and AWS Graviton Processors – See how new Amazon EC2 instance offerings that feature AMD EPYC processors and AWS Graviton processors enable you to optimize performance and cost for your workloads.

Containers
April 23, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive on AWS App Mesh – Learn how AWS App Mesh makes it easy to monitor and control communications for services running on AWS.
March 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into Container Networking – Dive deep into microservices networking and how you can build, secure, and manage the communications into, out of, and between the various microservices that make up your application.

Databases
April 23, 2019 | 1:00 PM – 2:00 PM PT – Selecting the Right Database for Your Application – Learn how to develop a purpose-built strategy for databases, where you choose the right tool for the job.
April 25, 2019 | 9:00 AM – 10:00 AM PT – Mastering Amazon DynamoDB ACID Transactions: When and How to Use the New Transactional APIs – Learn how Amazon DynamoDB’s new transactional APIs simplify the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables.

DevOps
April 24, 2019 | 9:00 AM – 10:00 AM PT – Running .NET applications with AWS Elastic Beanstalk Windows Server Platform V2 – Learn about the easiest way to get your .NET applications up and running on AWS Elastic Beanstalk.

Enterprise & Hybrid
April 30, 2019 | 11:00 AM – 12:00 PM PT – Business Case Teardown: Identify Your Real-World On-Premises and Projected AWS Costs – Discover tools and strategies to help you as you build your value-based business case.

IoT
April 30, 2019 | 9:00 AM – 10:00 AM PT – Building the Edge of Connected Home – Learn how AWS IoT edge services are enabling smarter products for the connected home.

Machine Learning
April 24, 2019 | 11:00 AM – 12:00 PM PT – Start Your Engines and Get Ready to Race in the AWS DeepRacer League – Learn more about reinforcement learning, how to build a model, and compete in the AWS DeepRacer League.
April 30, 2019 | 1:00 PM – 2:00 PM PT – Deploying Machine Learning Models in Production – Learn best practices for training and deploying machine learning models.
May 2, 2019 | 9:00 AM – 10:00 AM PT – Accelerate Machine Learning Projects with Hundreds of Algorithms and Models in AWS Marketplace – Learn how to use third party algorithms and model packages to accelerate machine learning projects and solve business problems.

Networking & Content Delivery
April 23, 2019 | 9:00 AM – 10:00 AM PT – Smart Tips on Application Load Balancers: Advanced Request Routing, Lambda as a Target, and User Authentication – Learn tips and tricks about important Application Load Balancer (ALB) features that were recently launched.

Productivity & Business Solutions
April 29, 2019 | 11:00 AM – 12:00 PM PT – Learn How to Set up Business Calling and Voice Connector in Minutes with Amazon Chime – Learn how Amazon Chime Business Calling and Voice Connector can help you with your business communication needs.
May 1, 2019 | 1:00 PM – 2:00 PM PT – Bring Voice to Your Workplace – Learn how you can bring voice to your workplace with Alexa for Business.

Serverless
April 25, 2019 | 11:00 AM – 12:00 PM PT – Modernizing .NET Applications Using the Latest Features on AWS Development Tools for .NET – Get a deep dive and demonstration of the latest updates to the AWS SDK and tools for .NET to make development even easier, more powerful, and more productive.
May 1, 2019 | 9:00 AM – 10:00 AM PT – Customer Showcase: Improving Data Processing Workloads with AWS Step Functions’ Service Integrations – Learn how innovative customers like SkyWatch are coordinating AWS services using AWS Step Functions to improve productivity.

Storage
April 24, 2019 | 1:00 PM – 2:00 PM PT – Amazon S3 Glacier Deep Archive: The Cheapest Storage in the Cloud – See how Amazon S3 Glacier Deep Archive offers the lowest cost storage in the cloud, at prices significantly lower than storing and maintaining data in on-premises magnetic tape libraries or archiving data offsite.

New – Advanced Request Routing for AWS Application Load Balancers

AWS Application Load Balancers have been around since the summer of 2016! They support content-based routing, work well for serverless & container-based applications, and are highly scalable. Many AWS customers are using the existing host- and path-based routing to power their HTTP and HTTPS applications, while also taking advantage of other ALB features such as port forwarding (great for container-based applications), health checks, service discovery, redirects, fixed responses, and built-in authentication.

Advanced Request Routing
The host-based routing feature allows you to write rules that use the Host header to route traffic to the desired target group. Today we are extending and generalizing this feature, giving you the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the query string, and the source IP address. We are also making the rules and conditions more powerful; rules can have multiple conditions (AND’ed together), and each condition can specify a match on multiple values (OR’ed). You can use this new feature to simplify your application architecture, eliminate the need for a proxy fleet for routing, and block unwanted traffic at the load balancer. Here are some use cases:
Separate bot/crawler traffic from human traffic.
Assign customers or groups of customers to cells (distinct target groups) and route traffic accordingly.
Implement A/B testing.
Perform canary or blue/green deployments.
Route traffic to microservice handlers based on method (PUTs to one target group and GETs to another, for example).
Implement access restrictions based on IP address or CDN.
Selectively route traffic to on-premises or in-cloud target groups.
Deliver different pages or user experiences to various types and categories of devices.

Using Advanced Request Routing
You can use this feature with your existing Application Load Balancers by simply editing your existing rules. I will start with a simple rule that returns a fixed, plain-text response (the examples in this post are for testing and illustrative purposes; I am sure that yours will be more practical and more interesting): I can use curl to test it:

$ curl http://TestALB-156468799.elb.amazonaws.com
Default rule reached!

I click Insert Rule to set up some advanced request routing: Then I click Add condition and examine the options that are available to me: I select Http header, and create a condition that looks for a cookie named user with value jeff. Then I create an action that returns a fixed response: I click Save, wait a few seconds for the change to take effect, and then issue a pair of requests:

$ curl http://TestALB-156468799.elb.amazonaws.com
Default rule reached!
$ curl --cookie "user=jeff" http://TestALB-156468799.elb.amazonaws.com
Hello Jeff

I can also create a rule that matches one or more CIDR blocks of IP addresses:

$ curl http://TestALB-156468799.elb.amazonaws.com
Hello EC2 Instance

I can match on the query string (this is very useful for A/B testing):

$ curl http://TestALB-156468799.elb.amazonaws.com?ABTest=A
A/B test, option A selected

I can also use a wildcard if all I care about is the presence of a particular field name: I can match a standard or custom HTTP method.
Here, I will invent one called READ:

$ curl --request READ http://TestALB-156468799.elb.amazonaws.com
Custom READ method invoked

I have a lot of flexibility (not new, but definitely worth reviewing) when it comes to the actions:
Forward to routes the request to a target group (a set of EC2 instances, a Lambda function, or a list of IP addresses).
Redirect to generates a 301 (permanent) or 302 (found) response, and can also be used to switch between HTTP and HTTPS.
Return fixed response generates a static response with any desired response code, as I showed you earlier.
Authenticate uses Amazon Cognito or an OIDC provider to authenticate the request (applicable to HTTPS listeners only).

Things to Know
Here are a few other things that you should know about this cool and powerful new feature:
Metrics – You can look at the Rule Evaluations and HTTP fixed response count CloudWatch metrics to learn more about activity related to your rules (learn more):
Programmatic Access – You can also create, modify, examine, and delete rules using the ALB API and CLI (CloudFormation support will be ready soon).
Rule Matching – The rules are powered by string matching, so test well and double-check that your rules are functioning as intended. The matched_rule_priority and actions_executed fields in the ALB access logs can be helpful when debugging and testing (learn more).
Limits – Each ALB can have up to 100 rules, not including the defaults. Each rule can reference up to 5 values and can use up to 5 wildcards. The number of conditions is limited only by the number of unique values that are referenced.

Available Now
Advanced request routing is available now in all AWS regions at no extra charge (you pay the usual prices for the Application Load Balancer).

— Jeff;
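To illustrate the programmatic access mentioned under Things to Know, here is a hedged sketch of creating the cookie-based rule from this post with the AWS CLI. The listener ARN and priority are placeholders, and the condition shape assumes the header-based matching introduced with this launch:

$ aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/TestALB/1234567890abcdef/1234567890abcdef \
    --priority 10 \
    --conditions '[{"Field":"http-header","HttpHeaderConfig":{"HttpHeaderName":"Cookie","Values":["*user=jeff*"]}}]' \
    --actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"200","ContentType":"text/plain","MessageBody":"Hello Jeff"}}]'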

AWS App Mesh – Application-Level Networking for Cloud Applications

AWS App Mesh helps you to run and monitor HTTP and TCP services at scale. You get a consistent way to route and monitor traffic, giving you insight into problems and the ability to re-route traffic after failures or code changes. App Mesh uses the open source Envoy proxy, giving you access to a wide range of tools from AWS partners and the open source community. Services can run on AWS Fargate, Amazon EC2, Amazon ECS, Amazon Elastic Container Service for Kubernetes, or Kubernetes. All traffic in and out of each service goes through the Envoy proxy so that it can be routed, shaped, measured, and logged. This extra level of indirection lets you build your services in any desired language without having to use a common set of communication libraries.

App Mesh Concepts
Before we dive in, let’s review a couple of important App Mesh concepts and components:
Service Mesh – A logical boundary for network traffic between the services that reside within it. A mesh can contain virtual services, virtual nodes, virtual routers, and routes.
Virtual Service – An abstraction (logical name) for a service that is provided directly (by a virtual node) or indirectly (through a virtual router). Services within a mesh use the logical names to reference and make use of other services.
Virtual Node – A pointer to a task group (an ECS service or a Kubernetes deployment) or a service running on one or more EC2 instances. Each virtual node can accept inbound traffic via listeners, and can connect to other virtual nodes via backends. Also, each node has a service discovery configuration (currently a DNS name) that allows other nodes to discover the IP addresses of the tasks, pods, or instances.
Virtual Router – A handler for one or more virtual services within a mesh. Each virtual router listens for HTTP traffic on a specific port.
Route – Routes use prefix-based matching on URLs to route traffic to virtual nodes, with optional per-node weights. The weights can be used to test new service versions in production while gradually increasing the amount of traffic that they handle.
Putting it all together, each service mesh contains a set of services that can be accessed by URL paths specified by routes. Within the mesh, services refer to each other by name. I can access App Mesh from the App Mesh Console, the App Mesh CLI, or the App Mesh API. I’ll show you how to use the Console and take a brief look at the CLI.

Using the App Mesh Console
The console lets me create my service mesh and the components within it. I open the App Mesh Console and click Get started: I enter the name of my mesh and my first virtual service (I can add more later), and click Next: I define the first virtual node: I can click Additional configuration to specify service backends (other services that this one can call) and logging: I define my node’s listener via protocol (HTTP or TCP) and port, set up an optional health check, and click Next: Next, I define my first virtual router and a route for it: I can apportion traffic across several virtual nodes (targets) on a percentage basis, and I can use prefix-based routing for incoming traffic: I review my choices and click Create mesh service: The components are created in a few seconds and I am just about ready to go: The final step, as described in the App Mesh Getting Started Guide, is to update my task definitions (Amazon ECS or AWS Fargate) or pod specifications (Amazon EKS or Kubernetes) to reference the Envoy container image and the proxy container image.
If my service is running on an EC2 instance, I will need to deploy Envoy there.

Using the AWS App Mesh Command Line
App Mesh lets you specify each type of component in a simple JSON form and provides you with command-line tools to create each one (create-mesh, create-virtual-service, create-virtual-node, and create-virtual-router). For example, I can define a virtual router in a file:

{
  "meshName": "mymesh",
  "spec": {
    "listeners": [
      {
        "portMapping": {
          "port": 80,
          "protocol": "http"
        }
      }
    ]
  },
  "virtualRouterName": "serviceA"
}

And create it with one command:

$ aws appmesh create-virtual-router --cli-input-json file://serviceA-router.json

Now Available
AWS App Mesh is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul) Regions.

— Jeff;
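Continuing the CLI sketch above, a route attached to that virtual router might look like the following. This is illustrative only: the node names, weights, and file name are placeholders, and the weighted targets assume that two versions of the service are already registered as virtual nodes:

$ cat > serviceA-route.json <<'EOF'
{
  "meshName": "mymesh",
  "virtualRouterName": "serviceA",
  "routeName": "serviceA-route",
  "spec": {
    "httpRoute": {
      "match": { "prefix": "/" },
      "action": {
        "weightedTargets": [
          { "virtualNode": "serviceA-v1", "weight": 9 },
          { "virtualNode": "serviceA-v2", "weight": 1 }
        ]
      }
    }
  }
}
EOF
$ aws appmesh create-route --cli-input-json file://serviceA-route.json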

New – AWS Deep Learning Containers

We want to make it as easy as possible for you to learn about deep learning and to put it to use in your applications. If you know how to ingest large datasets, train existing models, build new models, and perform inferences, you’ll be well-equipped for the future!

New Deep Learning Containers
Today I would like to tell you about the new AWS Deep Learning Containers. These Docker images are ready to use for deep learning training or inferencing using TensorFlow or Apache MXNet, with other frameworks to follow. We built these containers after our customers told us that they are using Amazon EKS and ECS to deploy their TensorFlow workloads to the cloud, and asked us to make that task as simple and straightforward as possible. While we were at it, we optimized the images for use on AWS with the goal of reducing training time and increasing inferencing performance. The images are pre-configured and validated so that you can focus on deep learning, setting up custom environments and workflows on Amazon ECS, Amazon Elastic Container Service for Kubernetes, and Amazon Elastic Compute Cloud (EC2) in minutes! You can find them in AWS Marketplace and Elastic Container Registry, and use them at no charge. The images can be used as-is, or can be customized with additional libraries or packages. Multiple Deep Learning Containers are available, with names based on the following factors (not all combinations are available):
Framework – TensorFlow or MXNet.
Mode – Training or Inference. You can train on a single node or on a multi-node cluster.
Environment – CPU or GPU.
Python Version – 2.7 or 3.6.
Distributed Training – Availability of the Horovod framework.
Operating System – Ubuntu 16.04.

Using Deep Learning Containers
In order to put an AWS Deep Learning Container to use, I create an Amazon ECS cluster with a p2.8xlarge instance:

$ aws ec2 run-instances --image-id ami-0ebf2c738e66321e6 \
  --count 1 --instance-type p2.8xlarge \
  --key-name keys-jbarr-us-east ...
I verify that the cluster is running, and check that the ECS Container Agent is active: Then I create a task definition in a text file (gpu_task_def.txt):

{
  "requiresCompatibilities": [
    "EC2"
  ],
  "containerDefinitions": [
    {
      "command": [
        "tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=saved_model_half_plus_two_gpu --model_base_path=/models/saved_model_half_plus_two_gpu"
      ],
      "entryPoint": [
        "sh",
        "-c"
      ],
      "name": "EC2TFInference",
      "image": "841569659894.dkr.ecr.us-east-1.amazonaws.com/sample_tf_inference_images:gpu_with_half_plus_two_model",
      "memory": 8111,
      "cpu": 256,
      "resourceRequirements": [
        {
          "type": "GPU",
          "value": "1"
        }
      ],
      "essential": true,
      "portMappings": [
        {
          "hostPort": 8500,
          "protocol": "tcp",
          "containerPort": 8500
        },
        {
          "hostPort": 8501,
          "protocol": "tcp",
          "containerPort": 8501
        },
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/TFInference",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "volumes": [],
  "networkMode": "bridge",
  "placementConstraints": [],
  "family": "Ec2TFInference"
}

I register the task definition and capture the revision number (3): Next, I create a service using the task definition and revision number: I use the console to navigate to the task: Then I find the external binding for port 8501: Then I run three inferences (this particular model was trained on the function y = ax + b, with a = 0.5 and b = 2):

$ curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://xx.xxx.xx.xx:8501/v1/models/saved_model_half_plus_two_gpu:predict
{
    "predictions": [2.5, 3.0, 4.5]
}

As you can see, the inference predicted the values 2.5, 3.0, and 4.5 when given inputs of 1.0, 2.0, and 5.0. This is a very, very simple example but it shows how you can use a pre-trained model to perform inferencing in ECS via the new Deep Learning Containers. You can also launch a model for training purposes, perform the training, and then run some inferences.

— Jeff;
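If you prefer the CLI to the console for the verification steps above, here is a hedged sketch of confirming that the instance joined the cluster and registered its GPUs. The cluster name ("default") and the container instance ARN are placeholders, and the exact shape of registeredResources can vary with the ECS agent version:

# List the container instances in the cluster, then inspect one of them
$ aws ecs list-container-instances --cluster default
$ aws ecs describe-container-instances \
    --cluster default \
    --container-instances <container-instance-arn> \
    --query "containerInstances[].{status:status,gpus:registeredResources[?name=='GPU']}"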

New – Concurrency Scaling for Amazon Redshift – Peak Performance at All Times

Amazon Redshift is a data warehouse that can expand to exabyte-scale. Today, tens of thousands of AWS customers (including NTT DOCOMO, Finra, and Johnson & Johnson) use Redshift to run mission-critical BI dashboards, analyze real-time streaming data, and run predictive analytics jobs. A challenge arises when the number of concurrent queries grows at peak times. When a multitude of business analysts all turn to their BI dashboards or long-running data science workloads compete with other workloads for resources, Redshift will queue queries until enough compute resources become available in the cluster. This ensures that all of the work gets done, but it can mean that performance is impacted at peak times. Two options present themselves:
Overprovision the cluster to meet peak needs. This option addresses the immediate issue, but wastes resources and costs more than necessary.
Optimize the cluster for typical workloads. This option forces you to wait longer for results at peak times, possibly delaying important business decisions.

New Concurrency Scaling
Today I would like to offer a third option. You can now configure Redshift to add more query processing power on an as-needed basis. This happens transparently and in a matter of seconds, and provides you with fast, consistent performance even as the workload grows to hundreds of concurrent queries. Additional processing power is ready in seconds and does not need to be pre-warmed or pre-provisioned. You pay only for what you use, with per-second billing, and you also accumulate one hour of concurrency scaling cluster credits every 24 hours while your main cluster is running. The extra processing power is removed when it is no longer needed, making this a perfect way to address the bursty use cases that I described above. You can allocate the burst power to specific users or queues, and you can continue to use your existing BI and ETL applications. Concurrency Scaling Clusters are used to handle many forms of read-only queries, with additional flexibility in the works; read about Concurrency Scaling to learn more.

Using Concurrency Scaling
This feature can be enabled for an existing cluster in minutes! We recommend starting with a fresh Redshift Parameter Group for testing purposes, so I start by creating one: Then I edit my cluster’s Workload Management Configuration, select the new parameter group, set the Concurrency Scaling Mode to auto, and click Save: I will use the Cloud Data Warehouse Benchmark Derived From TPC-DS as a source of test data and test queries. I download the DDL, customize it with my AWS credentials, and use psql to connect to my cluster and create the test data:

sample=# create database sample;
CREATE DATABASE
sample=# \connect sample;
psql (9.2.24, server 8.0.2)
WARNING: psql version 9.2, server version 8.0.
         Some psql features might not work.
SSL connection (cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256)
You are now connected to database "sample" as user "awsuser".
sample=# \i ddl.sql

The DDL creates the tables and populates them using data stored in an S3 bucket:

sample=# \dt
                 List of relations
 schema |          name          | type  |  owner
--------+------------------------+-------+---------
 public | call_center            | table | awsuser
 public | catalog_page           | table | awsuser
 public | catalog_returns        | table | awsuser
 public | catalog_sales          | table | awsuser
 public | customer               | table | awsuser
 public | customer_address       | table | awsuser
 public | customer_demographics  | table | awsuser
 public | date_dim               | table | awsuser
 public | dbgen_version          | table | awsuser
 public | household_demographics | table | awsuser
 public | income_band            | table | awsuser
 public | inventory              | table | awsuser
 public | item                   | table | awsuser
 public | promotion              | table | awsuser
 public | reason                 | table | awsuser
 public | ship_mode              | table | awsuser
 public | store                  | table | awsuser
 public | store_returns          | table | awsuser
 public | store_sales            | table | awsuser
 public | time_dim               | table | awsuser
 public | warehouse              | table | awsuser
 public | web_page               | table | awsuser
 public | web_returns            | table | awsuser
 public | web_sales              | table | awsuser
 public | web_site               | table | awsuser
(25 rows)

Then I download the queries and open up a bunch of PuTTY windows so that I can generate a meaningful load for my Redshift cluster: I run an initial set of parallel queries and then ramp up over time. I can see them in the Cluster Performance tab for my cluster: I can see the additional processing power come online as needed, and then go away when no longer needed, in the Database Performance tab: As you can see, my cluster scales as needed in order to handle all of the queries as expeditiously as possible. The Concurrency Scaling Usage shows me how many seconds of additional processing power I have consumed (as I noted earlier, each cluster accumulates a full hour of concurrency credits every 24 hours). I can use the parameter max_concurrency_scaling_clusters to control the number of Concurrency Scaling Clusters that can be used (the default limit is 10, but you can request an increase if you need more).

Available Today
You can start making use of Concurrency Scaling Clusters today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions, with more to come later this year.

— Jeff;
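If you script your cluster configuration, the same knobs are available from the CLI. A hedged sketch: the parameter group and cluster names are placeholders, and the per-queue concurrency scaling mode itself is part of the WLM configuration (the wlm_json_configuration parameter), which is not shown here:

# Cap the number of Concurrency Scaling Clusters that Redshift may add (the default limit is 10)
$ aws redshift modify-cluster-parameter-group \
    --parameter-group-name cs-demo \
    --parameters ParameterName=max_concurrency_scaling_clusters,ParameterValue=5

# Attach the parameter group to the cluster
$ aws redshift modify-cluster \
    --cluster-identifier my-redshift-cluster \
    --cluster-parameter-group-name cs-demo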

New AMD EPYC-Powered Amazon EC2 M5ad and R5ad Instances

Last year I told you about our New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances. Built on the AWS Nitro System, these instances are powered by custom AMD EPYC processors running at 2.5 GHz. They are priced 10% lower than comparable EC2 M5 and R5 instances, and give you a new opportunity to balance your instance mix based on cost and performance. Today we are adding M5ad and R5ad instances, both powered by custom AMD EPYC 7000 series processors and built on the AWS Nitro System.

M5ad and R5ad Instances
These instances add high-speed, low-latency local (physically connected) block storage to the existing M5a and R5a instances that we launched late last year. M5ad instances are designed for general purpose workloads such as web servers, app servers, dev/test environments, gaming, logging, and media processing. They are available in 6 sizes:

Instance Name | vCPUs | RAM | Local Storage | EBS-Optimized Bandwidth | Network Bandwidth
m5ad.large | 2 | 8 GiB | 1 x 75 GB NVMe SSD | Up to 2.120 Gbps | Up to 10 Gbps
m5ad.xlarge | 4 | 16 GiB | 1 x 150 GB NVMe SSD | Up to 2.120 Gbps | Up to 10 Gbps
m5ad.2xlarge | 8 | 32 GiB | 1 x 300 GB NVMe SSD | Up to 2.120 Gbps | Up to 10 Gbps
m5ad.4xlarge | 16 | 64 GiB | 2 x 300 GB NVMe SSD | 2.120 Gbps | Up to 10 Gbps
m5ad.12xlarge | 48 | 192 GiB | 2 x 900 GB NVMe SSD | 5 Gbps | 10 Gbps
m5ad.24xlarge | 96 | 384 GiB | 4 x 900 GB NVMe SSD | 10 Gbps | 20 Gbps

R5ad instances are designed for memory-intensive workloads: data mining, in-memory analytics, caching, simulations, and so forth. The R5ad instances are available in 6 sizes:

Instance Name | vCPUs | RAM | Local Storage | EBS-Optimized Bandwidth | Network Bandwidth
r5ad.large | 2 | 16 GiB | 1 x 75 GB NVMe SSD | Up to 2.120 Gbps | Up to 10 Gbps
r5ad.xlarge | 4 | 32 GiB | 1 x 150 GB NVMe SSD | Up to 2.120 Gbps | Up to 10 Gbps
r5ad.2xlarge | 8 | 64 GiB | 1 x 300 GB NVMe SSD | Up to 2.120 Gbps | Up to 10 Gbps
r5ad.4xlarge | 16 | 128 GiB | 2 x 300 GB NVMe SSD | 2.120 Gbps | Up to 10 Gbps
r5ad.12xlarge | 48 | 384 GiB | 2 x 900 GB NVMe SSD | 5 Gbps | 10 Gbps
r5ad.24xlarge | 96 | 768 GiB | 4 x 900 GB NVMe SSD | 10 Gbps | 20 Gbps

Again, these instances are available in the same sizes as the M5d and R5d instances, and the AMIs work on either, so go ahead and try both! Here are some things to keep in mind about the local NVMe storage on the M5ad and R5ad instances:
Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.
Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.
Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

M5ad and R5ad instances are available in the US East (N. Virginia), US West (Oregon), US East (Ohio), and Asia Pacific (Singapore) Regions in On-Demand, Spot, and Reserved Instance form.

— Jeff;
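Since the local NVMe volumes arrive unformatted, here is a minimal sketch of putting one to use on Linux. The device name is typical but not guaranteed (EBS volumes also appear as nvme devices on Nitro instances), so check lsblk first, and remember that anything written here disappears when the instance stops or terminates:

# Identify the instance store volume, then format and mount it
$ lsblk
$ sudo mkfs -t xfs /dev/nvme1n1     # assumes /dev/nvme1n1 is the local NVMe SSD
$ sudo mkdir -p /scratch
$ sudo mount /dev/nvme1n1 /scratch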

New Amazon S3 Storage Class – Glacier Deep Archive

Many AWS customers collect and store large volumes (often a petabyte or more) of important data but seldom access it. In some cases raw data is collected and immediately processed, then stored for years or decades just in case there’s a need for further processing or analysis. In other cases, the data is retained for compliance or auditing purposes. Here are some of the industries and use cases that fit this description:
Financial – Transaction archives, activity & audit logs, and communication logs.
Health Care / Life Sciences – Electronic medical records, images (X-Ray, MRI, or CT), genome sequences, records of pharmaceutical development.
Media & Entertainment – Media archives and raw production footage.
Physical Security – Raw camera footage.
Online Advertising – Clickstreams and ad delivery logs.
Transportation – Vehicle telemetry, video, RADAR, and LIDAR data.
Science / Research / Education – Research input and results, including data relevant to seismic tests for oil & gas exploration.
Today we are introducing a new and even more cost-effective way to store important, infrequently accessed data in Amazon S3.

Amazon S3 Glacier Deep Archive Storage Class
The new Glacier Deep Archive storage class is designed to provide durable and secure long-term storage for large amounts of data at a price that is competitive with off-premises tape archival services. Data is stored across 3 or more AWS Availability Zones and can be retrieved in 12 hours or less. You no longer need to deal with expensive and finicky tape drives, arrange for off-premises storage, or worry about migrating data to newer generations of media. Your existing S3-compatible applications, tools, code, scripts, and lifecycle rules can all take advantage of Glacier Deep Archive storage. You can specify the new storage class when you upload objects, alter the storage class of existing objects manually or programmatically, or use lifecycle rules to arrange for migration based on object age. You can also make use of other S3 features such as Storage Class Analysis, Object Tagging, Object Lock, and Cross-Region Replication. The existing S3 Glacier storage class allows you to access your data in minutes (using expedited retrieval) and is a good fit for data that requires faster access. To learn more about the entire range of options, read Storage Classes in the S3 Developer Guide. If you are already making use of the Glacier storage class and rarely access your data, you can switch to Deep Archive and begin to see cost savings right away.

Using Glacier Deep Archive Storage – Console
I can switch the storage class of an existing S3 object to Glacier Deep Archive using the S3 Console. I locate the file and click Properties: Then I click Storage class: Next, I select Glacier Deep Archive and click Save: I cannot download the object or edit any of its properties or permissions after I make this change: In the unlikely event that I need to access this 2013-era video, I select it and choose Restore from the Actions menu: Then I specify the number of days to keep the restored copy available, and choose either bulk or standard retrieval:

Using Glacier Deep Archive Storage – Lifecycle Rules
I can also use S3 lifecycle rules. I select the bucket and click Management, then select Lifecycle: Then I click Add lifecycle rule and create my rule.
I enter a name (ArchiveOldMovies), and can optionally use a path or tag filter to limit the scope of the rule: Next, I indicate that I want the rule to apply to the Current version of my objects, and specify that I want my objects to transition to Glacier Deep Archive 30 days after they are created:

Using Glacier Deep Archive – CLI / Programmatic Access
I can use the CLI to upload a new object and set the storage class:

$ aws s3 cp new.mov s3://awsroadtrip-videos-raw/ --storage-class DEEP_ARCHIVE

I can also change the storage class of an existing object by copying it over itself:

$ aws s3 cp s3://awsroadtrip-videos-raw/new.mov s3://awsroadtrip-videos-raw/new.mov --storage-class DEEP_ARCHIVE

If I am building a system that manages archiving and restoration, I can opt to receive notifications on an SNS topic, an SQS queue, or a Lambda function when a restore is initiated and/or completed:

Other Access Methods
You can also use the Tape Gateway configuration of AWS Storage Gateway to create a Virtual Tape Library (VTL) and configure it to use Glacier Deep Archive for storage of archived virtual tapes. This will allow you to move your existing tape-based backups to the AWS Cloud without making any changes to your existing backup workflows. You can retrieve virtual tapes archived in Glacier Deep Archive to S3 within twelve hours. With Tape Gateway and S3 Glacier Deep Archive, you no longer need on-premises physical tape libraries, and you don’t need to manage hardware refreshes and rewrite data to new physical tapes as technologies evolve. For more information, visit the Test Your Gateway Setup with Backup Software page of the Storage Gateway User Guide.

Now Available
The S3 Glacier Deep Archive storage class is available today in all commercial regions and in both AWS GovCloud regions. Pricing varies by region, and the storage cost is up to 75% less than for the existing S3 Glacier storage class; visit the S3 Pricing page for more information.

— Jeff;
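Initiating a retrieval can also be scripted, which pairs nicely with the CLI examples above. A hedged sketch, reusing the bucket and key from those examples: the retention period is a placeholder, and Deep Archive offers Standard (within 12 hours) and Bulk (within 48 hours) retrieval tiers:

# Ask S3 to restore a temporary copy of the archived object for 7 days, using the Bulk tier
$ aws s3api restore-object \
    --bucket awsroadtrip-videos-raw \
    --key new.mov \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'

# Check the restore status later
$ aws s3api head-object --bucket awsroadtrip-videos-raw --key new.mov --query Restore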

New – Gigabit Connectivity Options for Amazon Direct Connect

AWS Direct Connect gives you the ability to create private network connections between your datacenter, office, or colocation environment and AWS. The connections start at your network and end at one of 91 AWS Direct Connect locations and can reduce your network costs, increase throughput, and deliver a more consistent experience than an Internet-based connection. In most cases you will need to work with an AWS Direct Connect Partner to get your connection set up. As I prepared to write this post, I learned that my understanding of AWS Direct Connect was incomplete, and that the name actually encompasses three distinct models. Here’s a summary:
Dedicated Connections are available with 1 Gbps and 10 Gbps capacity. You use the AWS Management Console to request a connection, after which AWS will review your request and either follow up via email to request additional information or provision a port for your connection. Once AWS has provisioned a port for you, the remaining work to complete the connection, which is handled by the AWS Direct Connect Partner, can take anywhere from days to weeks. A Dedicated Connection is a physical Ethernet port dedicated to you. Each Dedicated Connection supports up to 50 Virtual Interfaces (VIFs). To get started, read Creating a Connection.
Hosted Connections are available with 50 to 500 Mbps capacity, and connection requests are made via an AWS Direct Connect Partner. After the AWS Direct Connect Partner establishes a network circuit to your premises, capacity to AWS Direct Connect can be added or removed on demand by adding or removing Hosted Connections. Each Hosted Connection supports a single VIF; you can obtain multiple VIFs by acquiring multiple Hosted Connections. The AWS Direct Connect Partner provisions the Hosted Connection and sends you an invite, which you must accept (with a click) in order to proceed.
Hosted Virtual Interfaces are also set up via AWS Direct Connect Partners. A Hosted Virtual Interface has access to all of the available capacity on the network link between the AWS Direct Connect Partner and an AWS Direct Connect location. The network link between the AWS Direct Connect Partner and the AWS Direct Connect location is shared by multiple customers and could possibly be oversubscribed. Due to the possibility of oversubscription in the Hosted Virtual Interface model, we no longer allow new AWS Direct Connect Partner service integrations using this model and recommend that customers with workloads sensitive to network congestion use Dedicated or Hosted Connections.

Higher Capacity Hosted Connections
Today we are announcing Hosted Connections with 1, 2, 5, or 10 Gbps of capacity. These capacities will be available through a select set of AWS Direct Connect Partners who have been specifically approved by AWS. We are also working with AWS Direct Connect Partners to implement additional monitoring of the network link between the AWS Direct Connect Partners and AWS. Most AWS Direct Connect Partners support adding or removing Hosted Connections on demand. Suppose that you archive a massive amount of data to Amazon Glacier at the end of every quarter, and that you already have a pair of resilient 10 Gbps circuits from your AWS Direct Connect Partner for use by other parts of your business. You could then create a pair of resilient 1, 2, 5 or 10 Gbps Hosted Connections at the end of the quarter, upload your data to Glacier, and then delete the Hosted Connections.
You pay AWS for the port-hour charges while the Hosted Connections are in place, along with any associated data transfer charges (see the Direct Connect Pricing page for more info). Check with your AWS Direct Connect Partner for the charges associated with their services. You get a cost-effective, elastic way to move data to the cloud while creating Hosted Connections only when needed.

Available Now
The new higher capacity Hosted Connections are available through select AWS Direct Connect Partners after they are approved by AWS.

— Jeff;

PS – As part of this launch, we are reducing the prices for the existing 200, 300, 400, and 500 Mbps Hosted Connection capacities by 33.3%, effective March 1, 2019.
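For the Hosted Connection workflow described above, the connection that your partner provisions shows up in your account and must be accepted before you can use it; both that step and the creation of a virtual interface can be done from the CLI. A hedged sketch: the connection ID, VLAN, ASN, and virtual private gateway ID are placeholders that come from your partner and your own network setup:

# See your connections, including a pending Hosted Connection sent by a partner
$ aws directconnect describe-connections

# Accept the Hosted Connection invite
$ aws directconnect confirm-connection --connection-id dxcon-fexample1

# Create a private virtual interface on the accepted connection
$ aws directconnect create-private-virtual-interface \
    --connection-id dxcon-fexample1 \
    --new-private-virtual-interface virtualInterfaceName=quarterly-archive,vlan=101,asn=65000,virtualGatewayId=vgw-0example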

In the Works – EC2 Instances (G4) with NVIDIA T4 GPUs

I’ve written about the power and value of GPUs in the past, and I have written posts to launch many generations of GPU-equipped EC2 instances including the CG1, G2, G3, P2, P3, and P3dn instance types. Today I would like to give you a sneak peek at our newest GPU-equipped instance, the G4. Designed for machine learning training & inferencing, video transcoding, and other demanding applications, G4 instances will be available in multiple sizes and also in bare metal form. We are still fine-tuning the specs, but you can look forward to:
AWS-custom Intel CPUs (4 to 96 vCPUs)
1 to 8 NVIDIA T4 Tensor Core GPUs
Up to 384 GiB of memory
Up to 1.8 TB of fast, local NVMe storage
Up to 100 Gbps networking
The brand-new NVIDIA T4 GPUs feature 320 Turing Tensor cores, 2,560 CUDA cores, and 16 GB of memory. In addition to support for machine learning inferencing and video processing, the T4 includes RT Cores for real-time ray tracing and can provide up to 2x the graphics performance of the NVIDIA M60 (watch Ray Tracing in Games with NVIDIA RTX to learn more). I’ll have a lot more to say about these powerful, high-end instances very soon, so stay tuned!

— Jeff;

PS – If you are interested in joining a private preview, sign up now.

AWS Heroes: Putting AWS security services to work for you

Guest post by AWS Community Hero Mark Nunnikhoven. Mark is the Vice President of Cloud Research at long-time APN Advanced Technology Partner Trend Micro. In addition to helping educate the AWS community about modern security and privacy, he has spearheaded Trend Micro’s launch-day support of most of the AWS security services and attended every AWS re:Invent!

Security is a pillar of the AWS Well-Architected Framework. It’s critical to the success of any workload. But it’s also often misunderstood. It’s steeped in jargon and talked about in terms of threats and fear. This has led to security getting a bad reputation. It’s often thought of as a roadblock and something to put up with. Nothing could be further from the truth. At its heart, cybersecurity is simple. It’s a set of processes and controls that work to make sure that whatever I’ve built works as intended… and only as intended. How do I make that happen in the AWS Cloud?

Shared responsibility
It all starts with the shared responsibility model. The model defines the line where responsibility for day-to-day operations shifts from AWS to me, the user. AWS provides the security of the cloud and I am responsible for security in the cloud. As I move to more abstracted types of services, more and more of my responsibilities shift to AWS. My tinfoil hat would be taken away if I didn’t mention that everyone needs to verify that AWS is holding up their end of the deal (#protip: they are and at world-class levels). This is where AWS Artifact enters the picture. It is an easy way to download the evidence that AWS is fulfilling their responsibilities under the model. But what about my responsibilities under the model? AWS offers help there in the form of various services under the Security, Identity, & Compliance category.

Security services
The trick is understanding how all of these security services fit together to help me meet my responsibilities. Based on conversations I’ve had around the world and helping teach these services at various AWS Summits, I’ve found that grouping them into five subcategories makes things clearer: authorization, protected stores, authentication, enforcement, and visibility. A few of these categories are already well understood. Authentication services help me identify my users. Authorization services allow me to determine what they—and other services—are allowed to do and under what conditions. Protected stores allow me to encrypt sensitive data and regulate access to it. Two subcategories aren’t as well understood: enforcement and visibility. I use the services in these categories daily in my security practice and they are vital to ensuring that my apps are working as intended.

Enforcement
Teams struggle with how to get the most out of enforcement controls and it can be difficult to understand how to piece these together into a workable security practice. Most of these controls detect issues, essentially raising their hand when something might be wrong. To protect my deployments, I need a process to handle those detections. By remembering the goal of ensuring that whatever I build works as intended and only as intended, I can better frame how each of these services helps me. AWS CloudTrail logs nearly every API action in an account but mining those logs for suspicious activity is difficult. Enter Amazon GuardDuty. It continuously scours CloudTrail logs—as well as Amazon VPC flow logs and DNS logs—for threats and suspicious activity at the AWS account level.
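As a concrete illustration of the detection flow described above, GuardDuty can be switched on and queried with a few CLI calls. A minimal sketch: findings only appear once there is something to report, and the finding ID below is a placeholder:

# Turn GuardDuty on for this account and Region
$ aws guardduty create-detector --enable

# Later: list and inspect whatever it has found
$ DETECTOR_ID=$(aws guardduty list-detectors --query 'DetectorIds[0]' --output text)
$ aws guardduty list-findings --detector-id "$DETECTOR_ID"
$ aws guardduty get-findings --detector-id "$DETECTOR_ID" --finding-ids <finding-id>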
Amazon EC2 instances have the biggest potential for security challenges as they are running a full operating system and applications written by various third parties. All that complexity added up to over 13,000 reported vulnerabilities last year. Amazon Inspector runs on-demand assessments of your instances and raises findings related to the operating system and installed applications that include recommended mitigations.

Despite starting from a locked-down state, teams often make mistakes and sometimes accidentally expose sensitive data in an Amazon S3 bucket. Amazon Macie continuously scans targeted buckets looking for sensitive information and misconfigurations. This augments additional protections like S3 Block Public Access and Trusted Advisor checks.

AWS WAF and AWS Shield work on AWS edge locations and actively stop attacks that they are configured to detect. AWS Shield targets DDoS activity and AWS WAF takes aim at layer seven, or web, attacks.

Each of these services supports the work teams do in hardening configurations and writing quality code. They are designed to help highlight areas of concern for taking action. The challenge is prioritizing those actions.

Visibility

Prioritization is where the visibility services step in. As previously mentioned, AWS Artifact provides visibility into AWS’ activities under the shared responsibility model. The new AWS Security Hub helps me understand the data generated by the other AWS security, identity, and compliance services, along with data generated by key APN Partner solutions. The goal of AWS Security Hub is to be the first stop for any security activity. All data sent to the hub is normalized in the Amazon Finding Format, which includes a standardized severity rating. This provides context for each finding and helps me determine which actions to take first.

This prioritized list of findings quickly translates into a set of responses to undertake. At first, these might be manual responses, but as with anything in the AWS Cloud, automation is the key to success. Using AWS Lambda to react to AWS Security Hub findings is a wildly successful and simple way of modernizing an approach to security (see the sketch at the end of this post). This automated workflow sits atop a pyramid of security controls:

• Core AWS security services and APN Partner solutions at the bottom
• The AWS Security Hub providing visibility in the middle
• Automation as the crown jewel on top

What’s next?

In this post, I described my high-level approach to security success in the AWS Cloud. This aligns directly with the AWS Well-Architected Framework and thousands of customer success stories. When you understand the shared responsibility model and the value of each service, you’re well on your way to demystifying security and building better in the AWS Cloud.
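Here is the sketch mentioned above: a minimal, hypothetical AWS Lambda handler (my own illustration, not from Mark's post) that assumes Security Hub findings are routed to the function through a CloudWatch Events rule, and that an SNS topic ARN for alerts is supplied through an environment variable:

import json
import os
import boto3

sns = boto3.client("sns")
ALERT_TOPIC_ARN = os.environ.get("ALERT_TOPIC_ARN")  # hypothetical configuration value

def handler(event, context):
    # Security Hub delivers findings to CloudWatch Events in batches under detail.findings,
    # using the normalized finding format with a 0-100 severity score.
    for finding in event.get("detail", {}).get("findings", []):
        title = finding.get("Title", "Unknown finding")
        severity = finding.get("Severity", {}).get("Normalized", 0)
        if severity >= 70 and ALERT_TOPIC_ARN:
            # High-severity findings go to a human (or to further automation) via SNS.
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject=title[:100],
                Message=json.dumps(finding, default=str),
            )
        else:
            # Everything else is simply logged for later review.
            print("Lower-severity finding:", title)

A real remediation function would go further (isolating an instance, rotating a key, and so on), but the pattern of normalize, prioritize, and respond stays the same.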

New – Open Distro for Elasticsearch

Elasticsearch is a distributed, document-oriented search and analytics engine. It supports structured and unstructured queries, and does not require a schema to be defined ahead of time. Elasticsearch can be used as a search engine, and is often used for web-scale log analytics, real-time application monitoring, and clickstream analytics.

Originally launched as a true open source project, some of the more recent additions to Elasticsearch are proprietary. My colleague Adrian explains our motivation to start Open Distro for Elasticsearch in his post, Keeping Open Source Open. As strong believers in, and supporters of, open source software, we believe this project will help continue to accelerate open source Elasticsearch innovation.

Open Distro for Elasticsearch

Today we are launching Open Distro for Elasticsearch. This is a value-added distribution of Elasticsearch that is 100% open source (Apache 2.0 license) and supported by AWS. Open Distro for Elasticsearch leverages the open source code for Elasticsearch and Kibana. This is not a fork; we will continue to send our contributions and patches upstream to advance these projects.

In addition to Elasticsearch and Kibana, the first release includes a set of advanced security, event monitoring & alerting, performance analysis, and SQL query features (more on those in a bit). In addition to the source code repo, Open Distro for Elasticsearch and Kibana are available as RPMs and Docker containers, with separate downloads for the SQL JDBC driver and the PerfTop CLI. You can run this code on your laptop, in your data center, or in the cloud. Contributions are welcome, as are bug reports and feature requests.

Inside Open Distro for Elasticsearch

Let’s take a quick look at the features that we are including in Open Distro for Elasticsearch. Some of these are currently available in Amazon Elasticsearch Service; others will become available in future updates.

Security – This plugin supports node-to-node encryption, five types of authentication (basic, Active Directory, LDAP, Kerberos, and SAML), role-based access controls at multiple levels (clusters, indices, documents, and fields), audit logging, and cross-cluster search so that any node in a cluster can run search requests across other nodes in the cluster. Learn More…

Event Monitoring & Alerting – This feature notifies you when data from one or more Elasticsearch indices meets certain conditions. You could, for example, notify a Slack channel if an application logs more than five HTTP 503 errors in an hour. Monitoring is based on jobs that run on a defined schedule, checking indices against trigger conditions and raising alerts when a condition has been triggered. Learn More…

Deep Performance Analysis – This is a REST API that allows you to query a long list of performance metrics for your cluster. You can access the metrics programmatically, or you can visualize them using PerfTop and other performance tools. Learn More…

SQL Support – This feature allows you to query your cluster using SQL statements. It is an improved version of the elasticsearch-sql plugin, and supports a rich set of statements.

This is just the beginning; we have more in the works, and we also look forward to your contributions and suggestions!

— Jeff;
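As a quick, unofficial illustration of the SQL feature described above, here is a small Python sketch of my own (not part of the announcement). It assumes a local cluster started from the Open Distro Docker image, which ships with the security plugin enabled, a demo admin user, and a self-signed certificate, and it sends a query to the SQL plugin's _opendistro/_sql endpoint (the index name logs is just a placeholder):

import requests

resp = requests.post(
    "https://localhost:9200/_opendistro/_sql",
    json={"query": "SELECT status, COUNT(*) AS hits FROM logs GROUP BY status"},
    auth=("admin", "admin"),   # demo credentials; change them on any real cluster
    verify=False,              # the demo config uses a self-signed certificate
)
print(resp.json())

The same statement could also be run through the SQL JDBC driver mentioned above if you prefer to connect from existing BI or reporting tools.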

Building serverless apps with components from the AWS Serverless Application Repository

Guest post by AWS Serverless Hero Aleksandar Simovic. Aleksandar is a Senior Software Engineer at Science Exchange and co-author of “Serverless Applications with Node.js” with Slobodan Stojanovic, published by Manning Publications. He also writes on Medium about both the business and technical aspects of serverless.

Many of you have built a user login or an authorization service from scratch a dozen times. And you’ve probably built another dozen services to process payments and another dozen to export PDFs. We’ve all done it, and we’ve often all done it redundantly. Using the AWS Serverless Application Repository, you can now spend more of your time and energy developing business logic to deliver the features that matter to customers, faster.

What is the AWS Serverless Application Repository?

The AWS Serverless Application Repository allows developers to deploy, publish, and share common serverless components among their teams and organizations. Its public library contains community-built, open-source, serverless components that are instantly searchable and deployable with customizable parameters and predefined licensing. They are built and published using the AWS Serverless Application Model (AWS SAM), an infrastructure-as-code YAML language used for templating AWS resources.

How to use AWS Serverless Application Repository in production

I wanted to build an application that enables customers to select a product and pay for it. Sounds like a substantial effort, right? Using AWS Serverless Application Repository, it didn’t actually take me much time. Broadly speaking, I built:

A product page with a Buy button, automatically tied to the Stripe Checkout SDK. When a customer chooses Buy, the page displays the Stripe Checkout payment form.
A Stripe payment service with an API endpoint that accepts a callback from Stripe, charges the customer, and sends a notification for successful transactions.

For this post, I created a pre-built sample static page that displays the product details and has the Stripe Checkout JavaScript on the page. Even with the pre-built page, integrating the payment service is still work. But many other developers have built a payment application at least once, so why should I spend time building identical features? This is where AWS Serverless Application Repository came in handy.

Find and deploy a component

First, I searched for an existing component in the AWS Serverless Application Repository public library. I typed “stripe” and opted in to see applications that created custom IAM roles or resource policies. From the results, I selected the application titled api-lambda-stripe-charge and chose Deploy on the component’s detail page. Before I deployed any component, I inspected it to make sure it was safe and production-ready.

Evaluate a component

The recommended approach for evaluating an AWS Serverless Application Repository component is a four-step process:

1. Check component permissions.
2. Inspect the component implementation.
3. Deploy and run the component in a restricted environment.
4. Monitor the component’s behavior and cost before using it in production.

This might appear to negate the quick-delivery benefits of AWS Serverless Application Repository, but in reality, you only verify each component one time. Then you can easily reuse and share the component throughout your company. Here’s how to apply this approach while adding the Stripe component.

1. Check component permissions

There are two types of components: public and private.
Public components are open source, while private components do not have to be. In this case, the Stripe component is public. I reviewed the code to make sure that it doesn’t request unnecessary permissions that could potentially compromise security. In this case, the Stripe component is on GitHub. On the component page, I opened the template.yaml file. There was only one AWS Lambda function there, so I found the Policies attribute and reviewed the policies that it uses.

CreateStripeCharge:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs8.10
    Timeout: 10
    Policies:
      - SNSCrudPolicy:
          TopicName: !GetAtt SNSTopic.TopicName
      - Statement:
          Effect: Allow
          Action:
            - ssm:GetParameters
          Resource: !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${SSMParameterPrefix}/*

The component was using a predefined AWS SAM policy template and a custom one. These predefined policy templates are sets of AWS permissions that are verified and recommended by the AWS security team. Using these policies to specify resource permissions is one of the recommended practices for serverless components on AWS Serverless Application Repository. The other, custom IAM policy allows the function to retrieve AWS Systems Manager parameters, which is the best practice for storing secure values, such as the Stripe secret key.

2. Inspect the component implementation

I wanted to ensure that the component’s main business logic did only what it was meant to do, which was to create a Stripe charge. It’s also important to look out for unknown third-party HTTP calls to prevent leaks. Then I reviewed this project’s dependencies. For this inspection, I used PureSec, but tools like those offered by Protego are another option.

The main business logic was in the charge-customer.js file. It revealed straightforward logic: invoke the Stripe create charge and then publish a notification with the created charge. I saw this reflected in the following code:

return paymentProcessor.createCharge(token, amount, currency, description)
  .then(chargeResponse => {
    createdCharge = chargeResponse;
    return pubsub.publish(createdCharge, TOPIC_ARN);
  })
  .then(() => createdCharge)
  .catch((err) => {
    console.log(err);
    throw err;
  });

The paymentProcessor and pubsub values are adapters for the communication with Stripe and Amazon SNS, respectively. I always like to look and see how they work.

3. Deploy and run the component in a restricted environment

Maintaining a separate, restricted AWS account in which to test your serverless applications is a best practice for serverless development. I always ensure that my test account has strict AWS Billing and Amazon CloudWatch alarms in place. I signed in to this separate account, opened the Stripe component page, and manually deployed it. After deployment, I needed to verify how it ran. Because this component only has one Lambda function, I looked for that function in the Lambda console and opened its details page so that I could verify the code.

4. Monitor behavior and cost before using a component in production

When everything works as expected in my test account, I usually add monitoring and performance tools to my component to help diagnose any incidents and evaluate component performance. I often use Epsagon and Lumigo for this, although adding those steps would have made this post too long. I also wanted to track the component’s cost. To do this, I added a strict Billing alarm that tracked the component cost and the cost of each AWS resource within it.
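As a complement to this manual review (my own sketch, not something Aleksandar describes in the post), you could also pull a component's metadata programmatically with boto3 before deploying it, to confirm its author, license, source repository, and required capabilities:

import boto3

# ARN of the public component reviewed above.
APP_ARN = "arn:aws:serverlessrepo:us-east-1:375983427419:applications/api-lambda-stripe-charge"

serverlessrepo = boto3.client("serverlessrepo", region_name="us-east-1")
app = serverlessrepo.get_application(ApplicationId=APP_ARN)

print("Name/Author/License:", app["Name"], app["Author"], app.get("SpdxLicenseId"))
print("Source code:", app["Version"].get("SourceCodeUrl"))
print("Latest version:", app["Version"]["SemanticVersion"])
print("Required capabilities:", app["Version"]["RequiredCapabilities"])

Checking RequiredCapabilities up front tells you whether the component needs to create IAM resources, which is exactly what step 1 is meant to surface.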
After the component passed these four tests, I was ready to deploy it by adding it to my existing product-selection application.

Deploy the component to an existing application

To add my Stripe component into my existing application, I re-opened the component’s Review, Configure, and Deploy page and chose Copy as SAM Resource. That copied the necessary template code to my clipboard. I then added it to my existing serverless application by pasting it into my existing AWS SAM template, under Resources. It looked like the following:

Resources:
  ShowProduct:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Events:
        Api:
          Type: Api
          Properties:
            Path: /product/:productId
            Method: GET
  apilambdastripecharge:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:375983427419:applications/api-lambda-stripe-charge
        SemanticVersion: 3.0.0
      Parameters:
        # (Optional) Cross-origin resource sharing (CORS) Origin. You can specify a single origin, all origins with "*", or leave it empty and no CORS is applied.
        CorsOrigin: YOUR_VALUE
        # This component assumes that the Stripe secret key needed to use the Stripe Charge API is stored as SecureStrings in Parameter Store under the prefix defined by this parameter. See the component README.
        # SSMParameterPrefix: lambda-stripe-charge # Uncomment to override the default value
Outputs:
  ApiUrl:
    Value: !Sub https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Stage/product/123
    Description: The URL of the sample API Gateway

I copied and pasted an AWS::Serverless::Application AWS SAM resource, which points to the component by ApplicationId and its SemanticVersion. Then, I defined the component’s parameters. I set CorsOrigin to “*” for demonstration purposes. I didn’t have to set the SSMParameterPrefix value, as it picks up a default value. But I did set up my Stripe secret key in the Systems Manager Parameter Store, by running the following command:

aws ssm put-parameter --name lambda-stripe-charge/stripe-secret-key --value --type SecureString --overwrite

In addition to parameters, components also contain outputs. An output is an externalized component resource or value that you can use with other applications or components. For example, the output for the api-lambda-stripe-charge component is SNSTopic, an Amazon SNS topic. This enables me to attach another component or business logic to get a notification when a successful payment occurs. For example, a lambda-send-email-ses component that sends an email upon successful payment could be attached, too.

To finish, I ran the following two commands:

aws cloudformation package --template-file template.yaml --output-template-file output.yaml --s3-bucket YOUR_BUCKET_NAME
aws cloudformation deploy --template-file output.yaml --stack-name product-show-n-pay --capabilities CAPABILITY_IAM

For the second command, you could add parameter overrides as needed. My product-selection and payment application was successfully deployed!

Summary

AWS Serverless Application Repository enables me to share and reuse common components, services, and applications so that I can really focus on building core business value. In a few steps, I created an application that enables customers to select a product and pay for it. It took a matter of minutes, not hours or days! You can see that it doesn’t take long to cautiously analyze and check a component.
That component can now be shared with other teams throughout my company so that they can eliminate their redundancies, too. Now you’re ready to use AWS Serverless Application Repository to accelerate the way that your teams develop products, deliver features, and build and share production-ready applications.

Learn about AWS Services & Solutions – March AWS Online Tech Talks

Join us this March to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Compute
March 26, 2019 | 11:00 AM – 12:00 PM PT – Technical Deep Dive: Running Amazon EC2 Workloads at Scale – Learn how you can optimize your workloads running on Amazon EC2 for cost and performance, all while handling peak demand.
March 27, 2019 | 9:00 AM – 10:00 AM PT – Introduction to AWS Outposts – Learn how you can run AWS infrastructure on-premises with AWS Outposts for a truly consistent hybrid experience.
March 28, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on OpenMPI and Elastic Fabric Adapter (EFA) – Learn how to scale tightly coupled high performance computing (HPC) workloads on Amazon EC2 using OpenMPI and the Elastic Fabric Adapter (EFA).

Containers
March 21, 2019 | 11:00 AM – 12:00 PM PT – Running Kubernetes with Amazon EKS – Learn how to run Kubernetes on AWS with Amazon EKS.
March 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into Container Networking – Dive deep into microservices networking and how you can build, secure, and manage the communications into, out of, and between the various microservices that make up your application.

Data Lakes & Analytics
March 19, 2019 | 9:00 AM – 10:00 AM PT – Fuzzy Matching and Deduplicating Data with ML Transforms for AWS Lake Formation – Learn how to use ML Transforms for AWS Glue to link and de-duplicate matching records.
March 20, 2019 | 9:00 AM – 10:00 AM PT – Customer Showcase: Perform Real-time ETL from IoT Devices into your Data Lake with Amazon Kinesis – Learn best practices for how to perform real-time extract-transform-load into your data lake with Amazon Kinesis.
March 20, 2019 | 11:00 AM – 12:00 PM PT – Machine Learning Powered Business Intelligence with Amazon QuickSight – Learn how Amazon QuickSight leverages powerful ML and natural language capabilities to generate insights that help you discover the story behind the numbers.

Databases
March 18, 2019 | 9:00 AM – 10:00 AM PT – What’s New in PostgreSQL 11 – Find out what’s new in PostgreSQL 11, the latest major version of the popular open source database, and learn about AWS services for running highly available PostgreSQL databases in the cloud.
March 19, 2019 | 1:00 PM – 2:00 PM PT – Introduction on Migrating your Oracle/SQL Server Databases over to the Cloud using AWS’s New Workload Qualification Framework – Get an introduction on how AWS’s Workload Qualification Framework can help you with your application and database migrations.
March 20, 2019 | 1:00 PM – 2:00 PM PT – What’s New in MySQL 8 – Find out what’s new in MySQL 8, the latest major version of the world’s most popular open source database, and learn about AWS services for running highly available MySQL databases in the cloud.
March 21, 2019 | 9:00 AM – 10:00 AM PT – Building Scalable & Reliable Enterprise Apps with AWS Relational Databases – Learn how AWS relational databases can help you build scalable & reliable enterprise apps.

DevOps
March 19, 2019 | 11:00 AM – 12:00 PM PT – Introduction to Amazon Corretto: A No-Cost Distribution of OpenJDK – Learn about Amazon Corretto, a no-cost, production-ready distribution of OpenJDK, and how to run your Java workloads with it.

End-User Computing
March 28, 2019 | 9:00 AM – 10:00 AM PT – Fireside Chat: Enabling Today’s Workforce with Cloud Desktops – Learn how cloud desktop solutions such as Amazon WorkSpaces can help you give today’s workforce secure, flexible access to their desktops.

Enterprise
March 26, 2019 | 1:00 PM – 2:00 PM PT – Speed Your Cloud Computing Journey With the Customer Enablement Services of AWS: ProServe, AMS, and Support – Learn how to accelerate your cloud journey with AWS’s Customer Enablement Services.

IoT
March 26, 2019 | 9:00 AM – 10:00 AM PT – How to Deploy AWS IoT Greengrass Using Docker Containers and Ubuntu-snap – Learn how to bring cloud services to the edge using containerized microservices by deploying AWS IoT Greengrass to your device using Docker containers and Ubuntu snaps.

Machine Learning
March 18, 2019 | 1:00 PM – 2:00 PM PT – Orchestrate Machine Learning Workflows with Amazon SageMaker and AWS Step Functions – Learn how ML workflows can be orchestrated with the rich features of Amazon SageMaker and AWS Step Functions.
March 21, 2019 | 1:00 PM – 2:00 PM PT – Extract Text and Data from Any Document with No Prior ML Experience – Learn how to extract text and data from any document with no prior machine learning experience.
March 22, 2019 | 11:00 AM – 12:00 PM PT – Build Forecasts and Individualized Recommendations with AI – Learn how you can build accurate forecasts and individualized recommendation systems using our new AI services, Amazon Forecast and Amazon Personalize.

Management Tools
March 29, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Inventory Management and Configuration Compliance in AWS – Learn how AWS helps with effective inventory management and configuration compliance management of your cloud resources.

Networking & Content Delivery
March 25, 2019 | 1:00 PM – 2:00 PM PT – Application Acceleration and Protection with Amazon CloudFront, AWS WAF, and AWS Shield – Learn how to secure and accelerate your applications using AWS’s edge services in this demo-driven tech talk.

Robotics
March 28, 2019 | 11:00 AM – 12:00 PM PT – Build a Robot Application with AWS RoboMaker – Learn how to improve your robotics application development lifecycle with AWS RoboMaker.

Security, Identity, & Compliance
March 27, 2019 | 11:00 AM – 12:00 PM PT – Remediating Amazon GuardDuty and AWS Security Hub Findings – Learn how to build and implement remediation automations for Amazon GuardDuty and AWS Security Hub.
March 27, 2019 | 1:00 PM – 2:00 PM PT – Scaling Accounts and Permissions Management – Learn how to scale your accounts and permissions management efficiently as you continue to move your workloads to the AWS Cloud.

Serverless
March 18, 2019 | 11:00 AM – 12:00 PM PT – Testing and Deployment Best Practices for AWS Lambda-Based Applications – Learn best practices for testing and deploying AWS Lambda based applications.

Storage
March 25, 2019 | 11:00 AM – 12:00 PM PT – Introducing a New Cost-Optimized Storage Class for Amazon EFS – Come learn how the new Amazon EFS storage class and Lifecycle Management automatically reduce cost by up to 85% for infrequently accessed files.

New – RISC-V Support in the FreeRTOS Kernel

FreeRTOS is a popular operating system designed for small, simple processors often known as microcontrollers. It is available under the MIT open source license and runs on many different Instruction Set Architectures (ISAs). Amazon FreeRTOS extends FreeRTOS with a collection of IoT-oriented libraries that provide additional networking and security features including support for Bluetooth Low Energy, Over-the-Air Updates, and Wi-Fi. RISC-V is a free and open ISA that was designed to be simple, extensible, and easy to implement. The simplicity of the RISC-V model, coupled with its permissive BSD license, makes it ideal for a wide variety of processors, including low-cost microcontrollers that can be manufactured without incurring license costs. The RISC-V model can be implemented in many different ways, as you can see from the RISC-V cores page. Development tools, including simulators, compilers, and debuggers, are also available. Today I am happy to announce that we are now providing RISC-V support in the FreeRTOS kernel. The kernel supports the RISC-V I profile (RV32I and RV64I) and can be extended to support any RISC-V microcontroller. It includes preconfigured examples for the OpenISA VEGAboard, QEMU emulator for SiFive’s HiFive board, and Antmicro’s Renode emulator for the Microchip M2GL025 Creative Board. You now have a powerful new option for building smart devices that are more cost-effective than ever before! — Jeff;  

Get to know the newest AWS Heroes – Winter 2019

AWS Heroes are superusers who possess advanced technical skills and are early adopters of emerging technologies. Heroes are passionate about sharing their extensive AWS knowledge with others. Some get involved in person by running meetups and workshops and speaking at conferences, while others share with online AWS communities via social media, blog posts, and open source contributions.

2019 is off to a roaring start and we’re thrilled to introduce you to the latest AWS Heroes:

Aileen Gemma Smith
Ant Stanley
Gaurav Kamboj
Jeremy Daly
Kurt Lee
Matt Weagle
Shingo Yoshida

Aileen Gemma Smith – Sydney, Australia

Community Hero Aileen Gemma Smith is the founder and CEO of Vizalytics Technology. The team at Vizalytics serves public and private sector clients worldwide in transportation, tourism, and economic development. She shared their story in the Building Complex Workloads in the Cloud session at the AWS Canberra Summit in 2017. Aileen has a keen interest in diversity and inclusion initiatives and is constantly working to elevate the work and voices of underestimated engineers and founders. At the AWS Public Sector Summit Canberra in 2018, she was a panelist for We Power Tech, Inclusive Conversations with Women in Technology. She has supported and encouraged the creation of internships and mentoring programs for high school and university students, with a focus on building out STEAM initiatives.

Ant Stanley – London, United Kingdom

Serverless Hero Ant Stanley is a consultant and community organizer. He founded and currently runs the Serverless London user group, and he is part of the ServerlessDays London organizing team and the global ServerlessDays leadership team. Previously, Ant was a co-founder of A Cloud Guru and was responsible for organizing the first Serverlessconf event in New York in May 2016. Living in London since 2009, Ant’s background before serverless was primarily as a solutions architect at various organizations, from managed service providers to Tier 1 telecommunications providers. His current focus is serverless, GraphQL, and Node.js.

Gaurav Kamboj – Mumbai, India

Community Hero Gaurav Kamboj is a cloud architect at Hotstar, India’s leading OTT provider with a global concurrency record for live streaming to 11Mn+ viewers. At Hotstar, he loves building cost-efficient infrastructure that can scale to millions in minutes. He is also passionate about chaos engineering and cloud security. Gaurav holds the original “all-five” AWS certifications, is co-founder of the AWS User Group Mumbai, and speaks at local tech conferences. He also conducts guest lectures and workshops on cloud computing for students at engineering colleges affiliated with the University of Mumbai.

Jeremy Daly – Boston, USA

Serverless Hero Jeremy Daly is the CTO of AlertMe, a startup based in NYC that uses machine learning and natural language processing to help publishers better connect with their readers. He began building cloud-based applications with AWS in 2009. After discovering Lambda, he became a passionate advocate for FaaS and managed services. He now writes extensively about serverless on his blog, jeremydaly.com, and publishes Off-by-none, a weekly newsletter that focuses on all things serverless. As an active member of the serverless community, Jeremy contributes to a number of open-source serverless projects, and has created several others, including Lambda API, Serverless MySQL, and Lambda Warmer.
Kurt Lee – Seoul, South Korea

Serverless Hero Kurt Lee works at Vingle Inc. as their tech lead. As one of the original team members, he has been involved in nearly all of the backend applications there. Most recently, he led Vingle’s full migration to serverless, cutting 40% of the server cost. He’s known for sharing his experience of adopting serverless, along with its technical and organizational value, on Medium. He and his team maintain multiple open-source projects, which they developed during the migration. Kurt hosts TechTalk@Vingle regularly, and often presents at AWSKRUG about various aspects of serverless and pushing more things to serverless.

Matt Weagle – Seattle, USA

Serverless Hero Matt Weagle leverages machine learning, serverless techniques, and a servicefull mindset at Lyft to create innovative transportation experiences in an operationally sustainable and secure manner. Matt looks to serverless as a way to increase collaboration across development, operational, security, and financial concerns and to support rapid business-value creation. He has been involved in the serverless community for several years. Currently, he is the organizer of Serverless – Seattle and a co-organizer of the ServerlessDays Seattle event. He writes about serverless topics on Medium and Twitter.

Shingo Yoshida – Tokyo, Japan

Serverless Hero Shingo Yoshida is the CEO of Section-9, CTO of CYDAS, as well as a founder of Serverless Community(JP) and a member of JAWS-UG (AWS User Group – Japan). Since 2012, Shingo has not only built systems with just AWS, but has also built with a cloud-native architecture to make his customers happy. Serverless Community(JP) was established in 2016, and meetups have been held 20 times in Tokyo, Osaka, Fukuoka, and Sapporo, including three full-day conferences. Through this community, thousands of participants have discovered the value of serverless. Shingo has contributed to the serverless scene with many blog posts and books about serverless, including Serverless Architectures on AWS.

There are now 80 AWS Heroes worldwide. Learn about all of them and connect with an AWS Hero.

Podcast #299: February 2019 Updates

Simon guides you through lots of new features, services and capabilities that you can take advantage of, including the new AWS Backup service, more powerful GPU capabilities, new SLAs and much, much more!

Chapters:
Service Level Agreements 0:17
Storage 0:57
Media Services 5:08
Developer Tools 6:17
Analytics 9:54
AI/ML 12:07
Database 14:47
Networking & Content Delivery 17:32
Compute 19:02
Solutions 21:57
Business Applications 23:38
AWS Cost Management 25:07
Migration & Transfer 25:39
Application Integration 26:07
Management & Governance 26:32
End User Computing 29:22

Additional Resources

Topic || Service Level Agreements 0:17
Amazon Kinesis Data Firehose Announces 99.9% Service Level Agreement
Amazon Kinesis Data Streams Announces 99.9% Service Level Agreement
Amazon Kinesis Video Streams Announces 99.9% Service Level Agreement
Amazon EKS Announces 99.9% Service Level Agreement
Amazon ECR Announces 99.9% Service Level Agreement
Amazon Cognito Announces 99.9% Service Level Agreement
AWS Step Functions Announces 99.9% Service Level Agreement
AWS Secrets Manager Announces Service Level Agreement
Amazon MQ Announces 99.9% Service Level Agreement

Topic || Storage 0:57
Introducing AWS Backup
Introducing Amazon Elastic File System Integration with AWS Backup
AWS Storage Gateway Integrates with AWS Backup
Amazon EBS Integrates with AWS Backup to Protect Your Volumes
AWS Storage Gateway Volume Detach and Attach
AWS Storage Gateway – Tape Gateway Performance
Amazon FSx for Lustre Offers New Options and Faster Speeds for Working with S3 Data

Topic || Media Services 5:08
AWS Elemental MediaConvert Adds IMF Input and Enhances Caption Burn-In Support
AWS Elemental MediaLive Adds Support for AWS CloudTrail
AWS Elemental MediaLive Now Supports Resource Tagging
AWS Elemental MediaLive Adds I-Frame-Only HLS Manifests and JPEG Outputs

Topic || Developer Tools 6:17
Amazon Corretto is Now Generally Available
AWS CodePipeline Now Supports Deploying to Amazon S3
AWS Cloud9 Supports AWS CloudTrail Logging
AWS CodeBuild Now Supports Accessing Images from Private Docker Registry
Develop and Test AWS Step Functions Workflows Locally
AWS X-Ray SDK for .NET Core is Now Generally Available

Topic || Analytics 9:54
Amazon Elasticsearch Service doubles maximum cluster capacity with 200 node cluster support
Amazon Elasticsearch Service announces support for Elasticsearch 6.4
Amazon Elasticsearch Service now supports three Availability Zone deployments
Now bring your own KDC and enable Kerberos authentication in Amazon EMR
Source code for the AWS Glue Data Catalog client for Apache Hive Metastore is now available for download

Topic || AI/ML 12:07
Amazon Comprehend is now Integrated with AWS CloudTrail
Object Bounding Boxes and More Accurate Object and Scene Detection are now Available for Amazon Rekognition Video
Amazon Elastic Inference Now Supports TensorFlow 1.12 with a New Python API
New in AWS Deep Learning AMIs: Updated Elastic Inference for TensorFlow, TensorBoard 1.12.1, and MMS 1.0.1
Amazon SageMaker Batch Transform Now Supports TFRecord Format
Amazon Transcribe Now Supports US Spanish Speech-to-Text in Real Time

Topic || Database 14:47
Amazon Redshift now runs ANALYZE automatically
Introducing Python Shell Jobs in AWS Glue
Amazon RDS for PostgreSQL Now Supports T3 Instance Types
Amazon RDS for Oracle Now Supports T3 Instance Types
Amazon RDS for Oracle Now Supports SQLT Diagnostics Tool Version 12.2.180725
Amazon RDS for Oracle Now Supports January 2019 Oracle Patch Set Updates (PSU) and Release Updates (RU)
Amazon DynamoDB Local Adds Support for Transactional APIs, On-Demand Capacity Mode, and 20 GSIs

Topic || Networking and Content Delivery 17:32
Network Load Balancer Now Supports TLS Termination
Amazon CloudFront announces six new Edge locations across United States and France
AWS Site-to-Site VPN Now Supports IKEv2
VPC Route Tables Support up to 1,000 Static Routes

Topic || Compute 19:02
Announcing a 25% price reduction for Amazon EC2 X1 Instances in the Asia Pacific (Mumbai) AWS Region
Amazon EKS Achieves ISO and PCI Compliance
AWS Fargate Now Has Support For AWS PrivateLink
AWS Elastic Beanstalk Adds Support for Ruby 2.6
AWS Elastic Beanstalk Adds Support for .NET Core 2.2
Amazon ECS and Amazon ECR now have support for AWS PrivateLink
GPU Support for Amazon ECS now Available
AWS Batch now supports Amazon EC2 A1 Instances and EC2 G3s Instances

Topic || Solutions 21:57
Deploy Micro Focus Enterprise Server on AWS with New Quick Start
AWS Public Datasets Now Available from UK Meteorological Office, Queensland Government, University of Pennsylvania, Buildzero, and Others
Quick Start Update: Active Directory Domain Services on the AWS Cloud
Introducing the Media2Cloud solution

Topic || Business Applications 23:38
Alexa for Business now offers IT admins simplified workflow to setup shared devices

Topic || AWS Cost Management 25:07
Introducing Normalized Units Information for Amazon EC2 Reservations in AWS Cost Explorer

Topic || Migration and Transfer 25:39
AWS Migration Hub Now Supports Importing On-Premises Server and Application Data to Track Migration Progress

Topic || Application Integration 26:07
Amazon SNS Message Filtering Adds Support for Multiple String Values in Blacklist Matching

Topic || Management and Governance 26:32
AWS Trusted Advisor Expands Functionality With New Best Practice Checks
AWS Systems Manager State Manager Now Supports Management of In-Guest and Instance-Level Configuration
AWS Config Increases Default Limits for AWS Config Rules
Introducing AWS CloudFormation UpdateReplacePolicy Attribute
Automate WebSocket API Creation in Amazon API Gateway Using AWS CloudFormation
AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise Now Support AWS CloudFormation
Amazon CloudWatch Agent Adds Support for Procstat Plugin and Multiple Configuration Files
Improve Security Of Your AWS SSO Users Signing In To The User Portal By Using Email-based Verification

Topic || End User Computing 29:22
Introducing Amazon WorkLink
AppStream 2.0 enables custom scripts before session start and after session termination

About the AWS Podcast

The AWS Podcast is a cloud platform podcast for developers, dev ops, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives and interviews. Whether you’re building machine learning and AI models, open source projects, or hybrid cloud solutions, the AWS Podcast has something for you. Subscribe with one of the following:

Like the Podcast? Rate us on iTunes and send your suggestions, show ideas, and comments to awspodcast@amazon.com. We want to hear from you!

Podcast 298: [Public Sector Special Series #6] – Bringing the White House to the World

Dr. Stephanie Tuszynski (Director of the Digital Library, White House Historical Association) speaks about how they used AWS to bring the experience of the White House to the world.

Additional Resources

White House History

About the AWS Podcast

The AWS Podcast is a cloud platform podcast for developers, dev ops, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives and interviews. Whether you’re building machine learning and AI models, open source projects, or hybrid cloud solutions, the AWS Podcast has something for you. Subscribe with one of the following:

Like the Podcast? Rate us on iTunes and send your suggestions, show ideas, and comments to awspodcast@amazon.com. We want to hear from you!
