Microsoft Azure Blog

Azure Backup update – New features in IaaS VM backup support

Today we are excited to announce new features in Azure Backup’s support for Azure IaaS VM backup, which was previewed earlier this year. The new set of features includes support for virtual machine backup with more data disks, long-term retention, and more. These features strengthen Azure Backup’s ability to back up Azure IaaS virtual machines in a simple and reliable way.

New features in Azure Backup

Improved data disk limit on backup virtual machines
- Support for virtual machine backup with 16 data disks in addition to the OS disk. This improved support comes with a more predictable backup time.

Support for long-term retention
- Virtual machine backups can be retained for up to 99 years. The flexible, industry-standard GFS (grandfather-father-son) schema provides powerful customization of retention choices for backup copies.

[Screenshot: example backup retention settings for retaining backups for up to 99 years]

Enhanced monitoring and reporting
- A downloadable summary report gives a snapshot of backup and restore operations on a daily, weekly and monthly basis.
- Every backup job now includes the backup data size transferred, so you can track the storage consumption of a specific backup job.
- Export job functionality provides detailed information on jobs triggered in specified timeframes and can be customized to get job details per specified filters.

Addition of powerful choices
- Offline VM backup: ability to configure protection on an offline VM.
- Cancel job: ability to cancel an in-progress backup or restore job.
- Built-in backup policy: every backup vault created comes with a built-in backup policy to save a few clicks when setting up backup at scale.
- Ability to restore the virtual machine to a storage account of your choice.

Three simple steps to set up backup for Azure Virtual Machines

Setting up backup for Azure virtual machines can be achieved in three simple steps (a PowerShell sketch of these steps appears at the end of this post):
- Discover the machines that can be protected in the Azure Backup vault.
- Register the discovered virtual machines with the Azure Backup vault.
- Protect the registered virtual machines by associating them with a policy that defines the backup schedule and how long you want to retain the backup copies.

If you don’t have a vault yet, start by creating an Azure Backup vault in the same region as the virtual machines you want to back up. You can also watch this video by Corey Sanders on IaaS VM backup.

Related links and additional content:
- Learn more about Azure Backup.
- Looking for documentation? Check out the Azure IaaS VM backup documentation.
- Click for a free Azure trial subscription.
- Need help? Reach out to the Azure Backup forum for support.
- Tell us how we can improve Azure Backup: contribute new ideas and up-vote existing ones.
- Follow us on Twitter and Channel9 for the latest news and updates.
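To make the three steps above concrete, here is a minimal PowerShell sketch. It is illustrative only: it assumes the Azure Backup cmdlets that ship with the current Azure PowerShell module (Get-AzureBackupVault, Register-AzureBackupContainer, Enable-AzureBackupProtection and friends), the vault, cloud service, VM, and policy names are placeholders, and the exact parameter names should be treated as indicative since they may differ across module versions.

# Placeholders: "myvault" is an existing Azure Backup vault in the same region
# as the VM, "myvm" is the virtual machine, "mycloudservice" is its cloud service.
$vault = Get-AzureBackupVault -Name "myvault"

# Steps 1 and 2: discover and register the IaaS VM with the backup vault
Register-AzureBackupContainer -Vault $vault -Name "myvm" -ServiceName "mycloudservice"

# Step 3: protect the registered VM by associating it with a backup policy
$container = Get-AzureBackupContainer -Vault $vault -Name "myvm"
$policy    = Get-AzureBackupProtectionPolicy -Vault $vault -Name "DefaultPolicy"
Get-AzureBackupItem -Container $container | Enable-AzureBackupProtection -Policy $policy

Once protection is enabled, backup jobs run on the policy’s schedule and appear in the vault’s job list alongside the monitoring and reporting features described above.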

New Azure IT Workload: Web-based, line of business application

The new High-Availability Line of Business Application content set guides you through the end-to-end process so you can:
- Understand the value of hosting a web-based, line of business application in Azure infrastructure services.
- Create a proof-of-concept configuration or a dev-test environment.
- Configure the production workload in a cross-premises virtual network.

The result of this process is a highly available, web-based, intranet line of business application that is accessible to on-premises users. This configuration corresponds to the Line of Business Applications architecture blueprint.

The end-to-end configuration of this production workload using the Azure Resource Manager deployment model consists of the following phases:
- Phase 1: Configure Azure. Create a resource group, storage accounts, a cross-premises virtual network, and availability sets.
- Phase 2: Configure domain controllers. Create and configure replica Active Directory Domain Services (AD DS) domain controllers.
- Phase 3: Configure SQL Server infrastructure. Create and configure the SQL Server and majority node server virtual machines, and then create the cluster.
- Phase 4: Configure web servers. Configure an internal load balancer and the two web servers, and then load your web application on them.
- Phase 5: Create the Availability Group and add the application databases. Prepare the line of business application databases and add them to a SQL Server AlwaysOn Availability Group.

These phases are designed to align with IT departments or typical areas of expertise. For example:
- Phase 1 can be done by networking infrastructure staff.
- Phase 2 can be done by identity management staff.
- Phases 3 and 5 can be done by database administrators.
- Phase 4 can be done by web server administrators and web application developers.

To make the Azure configuration foolproof, Phases 1 and 2 contain configuration tables for you to fill out with all of the required settings. For example, Table V captures the cross-premises virtual network settings from Phase 1.

To make the configuration of the Azure elements as fast as possible, the phases use Resource Manager-based Azure PowerShell command blocks wherever possible and prompt you to insert the configuration table settings as variables. Here is an example of the Azure PowerShell command block for creating the first replica domain controller.
# Set up subscription and key variables
$subscr="<name of the Azure subscription>"
Set-AzureSubscription -SubscriptionName $subscr
Switch-AzureMode AzureResourceManager
$rgName="<resource group name>"
$locName="<Azure location of your resource group>"
$saName="<Table ST – Item 2 – Storage account name column>"
$vnetName="<Table V – Item 1 – Value column>"
$avName="<Table A – Item 1 – Availability set name column>"

# Create the first domain controller
$vmName="<Table M – Item 1 - Virtual machine name column>"
$vmSize="<Table M – Item 1 - Minimum size column>"
$staticIP="<Table V – Item 6 - Value column>"
$vnet=Get-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $rgName
$nic=New-AzureNetworkInterface -Name ($vmName +"-NIC") -ResourceGroupName $rgName -Location $locName -SubnetId $vnet.Subnets[1].Id -PrivateIpAddress $staticIP
$avSet=Get-AzureAvailabilitySet -Name $avName -ResourceGroupName $rgName
$vm=New-AzureVMConfig -VMName $vmName -VMSize $vmSize -AvailabilitySetId $avSet.Id
$diskSize=<size of the extra disk for AD DS data in GB>
$storageAcc=Get-AzureStorageAccount -ResourceGroupName $rgName -Name $saName
$vhdURI=$storageAcc.PrimaryEndpoints.Blob.ToString() + "vhds/" + $vmName + "-ADDSDisk.vhd"
Add-AzureVMDataDisk -VM $vm -Name "ADDSData" -DiskSizeInGB $diskSize -VhdUri $vhdURI -CreateOption empty
$cred=Get-Credential -Message "Type the name and password of the local administrator account for the first domain controller."
$vm=Set-AzureVMOperatingSystem -VM $vm -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
$vm=Set-AzureVMSourceImage -VM $vm -PublisherName MicrosoftWindowsServer -Offer WindowsServer -Skus 2012-R2-Datacenter -Version "latest"
$vm=Add-AzureVMNetworkInterface -VM $vm -Id $nic.Id
$storageAcc=Get-AzureStorageAccount -ResourceGroupName $rgName -Name $saName
$osDiskUri=$storageAcc.PrimaryEndpoints.Blob.ToString() + "vhds/" + $vmName + "-OSDisk.vhd"
$vm=Set-AzureVMOSDisk -VM $vm -Name "OSDisk" -VhdUri $osDiskUri -CreateOption fromImage
New-AzureVM -ResourceGroupName $rgName -Location $locName -VM $vm

If you have any feedback on this new content set or approach to documenting Azure IT workloads, please comment on this blog post or leave Disqus comments on the individual articles.

Cloud Foundry on Azure Preview 2 Now Available

Following in the footsteps of the Cloud Foundry on Azure Preview, we are excited to announce the release of an update that incorporates feedback received from the community over the past few months. Today, Cloud Foundry on Azure Preview 2 is available. Cloud Foundry on Azure Preview 2 takes advantage of the newest open source Cloud Foundry technologies and builds on the Bosh-Init framework. It also integrates with the latest Azure resource management framework and supports multiple Cloud Foundry VMs. With these updates, customers can deploy a standard Cloud Foundry infrastructure on Azure using Bosh-Init.

To set up and deploy Cloud Foundry on Azure using step-by-step commands, refer to Cloud Foundry on Azure Preview 2 Step-by-Step Guidance. To set up and deploy Cloud Foundry on Azure using the Azure Resource Manager template, refer to Cloud Foundry on Azure Preview 2 Template Guidance.

In the coming weeks, we will upstream the Azure code to the Cloud Foundry community branch. As always, we are looking to make improvements, so please try it out and send in your feedback. We look forward to hearing about your Azure Cloud Foundry experiences and suggestions!

Row-Level Security for SQL Database is Generally Available

Row-Level Security (RLS) for SQL Database is now generally available. RLS enables you to store data for many users in a single database and table, while restricting row-level access based on a user’s identity, role, or execution context. RLS centralizes access logic within the database itself, which simplifies your application code and reduces the risk of error.

RLS can help customers develop secure applications for a variety of scenarios. For instance:
- Restricting access to financial data based on an employee’s region and role
- Ensuring that tenants of a multi-tenant application can only access their own rows of data
- Enabling different analysts to report on different subsets of data based on their position

As a concrete example, we’d like to highlight how K2 Appit is leveraging RLS today to ensure isolation of individual users’ data:

Before the advent of RLS, we would have had to meet this requirement via diligent use of query predicates, but this mode of security enforcement is onerous and prone to bugs. By using RLS, we were able to accelerate our development while ensuring the policy-based management applies across all database-access vectors. Furthermore, the data access layer and business logic are able to evolve independently from the RLS policy logic; this separation of concerns improves code quality. The developers could use a policy language they were familiar with – T-SQL – and as such we were productive on RLS from day one. Ultimately, being able to state that row-level security is taken care of by Azure SQL Database itself gives both the customer and our teams confidence in how we protect our customers’ data.
- Grant Dickinson, Architect, K2

Thanks to valuable input from our preview customers, we’ve incorporated a significant amount of customer feedback into this first version of RLS. We’re continuing this iterative process, too, so stay tuned for future announcements on new RLS capabilities. If you have any feedback or questions for our team, please leave us a comment below.

Lots of customers are using RLS today, and we encourage you to try it, too. To get started, check out our MSDN documentation. You can also find additional demos, tips, and tricks in this introductory post, Channel9 video, and on our SQL Security Blog.
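To make the region- and tenant-based scenarios above concrete, here is a minimal, illustrative sketch of an RLS filter predicate and security policy. The dbo.Sales table, its SalesRep column, and the server, database, and credentials are all hypothetical, and the T-SQL is sent from PowerShell with Invoke-Sqlcmd (SQL Server PowerShell module) purely for convenience; the same batches can be run from any T-SQL client.

# Hypothetical example: each sales rep sees only their own rows in dbo.Sales,
# while the 'Manager' database user sees everything. Each batch is sent
# separately because CREATE SCHEMA / CREATE FUNCTION must start a batch.
$batches = @(
    "CREATE SCHEMA Security;",

    "CREATE FUNCTION Security.fn_securitypredicate(@SalesRep AS sysname)
         RETURNS TABLE
     WITH SCHEMABINDING
     AS
         RETURN SELECT 1 AS fn_securitypredicate_result
         WHERE @SalesRep = USER_NAME() OR USER_NAME() = N'Manager';",

    "CREATE SECURITY POLICY Security.SalesFilter
         ADD FILTER PREDICATE Security.fn_securitypredicate(SalesRep) ON dbo.Sales
         WITH (STATE = ON);"
)

foreach ($batch in $batches) {
    # Placeholder server, database, and credentials
    Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" -Database "SalesDb" `
                  -Username "youradmin" -Password "yourpassword" -Query $batch
}

Once the policy is enabled, every query against dbo.Sales is transparently filtered for the connected database user; the policy can be switched off at any time with ALTER SECURITY POLICY Security.SalesFilter WITH (STATE = OFF).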

Azure Mobile Apps August 2015 update

Last month, we released an update to the Azure Mobile Apps .NET server SDK, making it easier to get started and facilitating use of the SDK with any ASP.NET project. Today, we’re announcing changes to the Mobile App portal, further streamlining the experience and significantly simplifying the process for building web and mobile apps. With these updates, we offer a set of mobile capabilities like push, mobile authentication, and offline sync, which customers can now add to any App Service app.

Continuous Web and Mobile experience

One of the great things about Azure App Service is that it offers one integrated platform for running both Web and Mobile Apps, among others. We often hear from customers that their web and mobile solutions usually go hand in hand. It is becoming increasingly rare to find a mobile-only or a web-only experience. To support this, you can now simply add any mobile capabilities to your web app (or any other app type). Just click on Settings, and enable any features you like from the “Mobile App” section.

Create a Mobile App

The “Create Mobile App” gesture has been greatly simplified and now mirrors the “Create Web App” flow. When you create a Mobile App in the Azure Management Portal, you get a resource that looks just like a Web App. You can then add a database, notification hub, or authentication features via the Settings blade.

Add features

Each of the mobile features may be enabled individually from the “Mobile App” settings category, in any order and combination of your choosing. The “Quickstart” option gives you access to complete Mobile App projects that show you an end-to-end application, both client and server, which you can use to start your development. These projects require a SQL database, but the quickstart will prompt you for one if you do not already have it.

You can create and connect data stores through the Mobile App Settings’ “Data” option. Right now, only SQL is available, but more providers are coming soon. For more about working with data and enabling offline capabilities, see the Add offline sync to your app tutorial.

Push notifications in App Service are handled through Notification Hubs. You can provision a notification hub and link it to your application by selecting “Push” under the Mobile App settings category. Once a hub has been linked, the menu item will let you configure the push notification services for that hub. To learn more about working with notifications, see the Add push to your app tutorial.

You can also set up authentication through an App Service gateway. If your resource group already has a gateway, you will see configuration settings for your identity providers. If not, you will be prompted to create a new gateway. Please note that apps can also enable authentication through the “Authentication/Authorization” option in the settings blade. These scenarios will be converging, but for now, Mobile and API Apps should leverage the Mobile App category’s “User Authentication” option. For more, see the Add authentication to your app tutorial.

Mobile Apps is still in preview, but we hope you’ll try things out! Please let us know what you think in the comments, forums, or @AzureMobile. If you have suggestions for something you’d like to see in the product, you can also let us know on our feedback site.

Why Your Private Cloud is Failing – Join Me at OpenStack Silicon Valley

Gartner analyst Tom Bittman recently posted a blog showing that the majority of corporate private clouds are failing: “I was a little surprised that 95% of the 140 respondents (who had private clouds in place) said something was wrong with their private cloud. [1]” At that high a ratio, it doesn’t matter what technology you use for the private cloud – Microsoft, OpenStack, VMware or something else. His report gives a variety of reasons for these failures, but the net-net boils down to this: the average corporate private cloud environment wasn’t built to the specifications of its internal customers but, instead, was a reflection of what IT was comfortable doing and wanted the private cloud to be.

Most of these clouds aren’t cloudy at all but are closer to what Forrester Research Inc. Senior Analyst Lauren Nelson calls “Enhanced Virtualization” environments. This means they are traditional static virtual machine environments with a little automation to drive up IT’s efficiency. They aren’t self-service to the developer, don’t provide fast access to fully configured environments, aren’t metered (let alone pay-per-use) and typically don’t support Chef or Ansible-based deployments, let alone containers.

For private cloud administrators to capture the new enterprise applications, you need to rethink your approach and make the radical and culturally difficult shift from infrastructure management to service delivery. You need to learn from the clouds, but more importantly, reflect what your developers want from your private cloud solution. Too many IT infrastructure managers see public cloud as the enemy and thus don’t bother to either understand why their developers are using it, or get their hands dirty understanding just how these environments work. This mentality serves only to reinforce the long-standing views held by these administrators, and if you are trying to appeal to your front-line developers, this approach is just wrong.

A cloudwashed virtual server environment that takes two days to deploy new workloads, fulfills requests through the help desk and has no cost transparency will lose every day to a public cloud. And don’t even try the security or reliability card. Really? You honestly think your static VM environment hiding behind outdated firewall-based security, hosted out of a 1990s-era data center on servers bought in 2013, is going to trump a public cloud with clusters of next generation data centers in 19 geographic regions protected by a team of top security professionals? And by pitting the public cloud as the enemy, you forgo any opportunity to partner with your developers around helping manage and monitor the new and often very important applications they are deploying to public cloud environments, which is where IT pros should really be concentrating their energies.

What you should be doing instead is teaching your infrastructure admin community how to evolve from static in-house vs. cloud to a true hybrid cloud portfolio. Help your IT infrastructure administrators understand why the front-line developers value self-service so much and how it doesn’t breed chaos in the data center — but just the opposite. Help them understand that a career path towards service definition and cost transparency is better than fighting to try and keep the company from using more public cloud – that ship sailed long ago. Don’t agree? Agree but need more guidance to get this right? Let’s have the discussion.
Come join me at the OpenStack Silicon Valley forum on Wednesday, August 26th, where I’ll be leading a talk on this topic and debating the merits of this approach with colleagues from Google and CoreOS. IT Pros are critical to enterprise cloud success, but you have to evolve to own this opportunity. We can help and want to engage with you to ensure your success.

[1] Gartner Blog, Problems Encountered by 95% of Private Clouds, Tom Bittman, February 5, 2015

Meet AzureCon

We’re excited to announce AzureCon, a virtual event on September 29. AzureCon is for the Azure community—developers and IT professionals who want to learn about the latest Azure innovation and understand how customers are using the platform to transform their business. AzureCon builds on last year’s AzureConf and will feature even more technical sessions, opportunities to hear directly from the engineering team behind the Azure platform, and a look at what’s next with Azure. Following keynotes by Azure executives, members of our product team and community will participate in live Q&A sessions and deliver technical sessions. The wide range of sessions is designed to meet your needs—whether you’re just getting started or want to dive deep into the unlimited possibilities enabled by Azure.

When you join us at AzureCon, you’ll be among the first to hear about what’s next with Azure. New innovation in compute, storage, and networking will help you easily build, deploy, and manage apps at scale and on your terms. Enabling hybrid, secure, and easy-to-manage cloud environments is core to our ongoing investments based on your feedback. Market-leading innovation in areas such as machine learning, IoT, containers, and more will take center stage at AzureCon. Don’t miss it! And expect to hear directly from our customers as they share their stories and best practices using Azure. Startups and enterprises continue to accelerate time to market, differentiate from the competition, and reduce costs by taking advantage of the breadth and depth of the Azure platform.

Event Experience

AzureCon will include live, interactive and on-demand sessions. All of the content from AzureCon will be available on-demand after the event, so you can watch the sessions at your convenience.
- Live keynotes delivered by Scott Guthrie, Jason Zander, Bill Staples, and other executives.
- Interactive Q&As with keynote speakers, technical leaders, and partners.
- Technical lap-around sessions presented by Mark Russinovich, Scott Hanselman, and other technical leaders.
- More than 50 on-demand deep-dive technical sessions that drill into Azure features and capabilities, led by members of our product team and community members. We will be sharing a view of these sessions in the coming weeks.

Local Experience

We encourage members of the Azure community to organize viewing parties in partnership with local Microsoft teams or Meetups to watch the live event as a group. This is a great opportunity to learn about the future of Azure while meeting other members of the community.

More information about AzureCon is available on our AzureCon website. We invite you to follow us on Twitter: @Azure and look out for our event hashtag #AzureCon to get the latest updates about the event. Stay tuned for more AzureCon announcements and register today!

Azure Site Recovery is now available in Central US, North Central US, South Central US, East US2

We are pleased to announce that we have expanded the availability of Azure Site Recovery (ASR) to all non-government regions in the United States. To meet the ongoing demand from our customers, we have consistently grown our service around the globe to cover most Azure regions. With this expansion, Azure Site Recovery is now available in 17 regions worldwide: Australia East, Australia Southeast, Brazil South, Central US, East Asia, East US, East US2, Japan East, Japan West, North Europe, North Central US, Southeast Asia, South Central US, West Europe, West US, North East China, and East China.

Customers can now select any of the above regions to deploy ASR. Irrespective of the region you choose to deploy in, ASR guarantees the same reliability and performance levels as set forth in the ASR SLA.

To learn more about Azure Site Recovery, see Getting started with Azure Site Recovery. For more information about the regional availability of our services, visit the Azure Regions page.

Azure DocumentDB: JavaScript as Modern Day T-SQL

Azure DocumentDB’s database engine is purposefully designed from the ground up to provide first class support for JSON and JavaScript. Our vision is for JavaScript developers to build applications without having to deal with entity mappers, schemas, code generation tools, type adornments and other duct tape. Our deep commitment to JSON and JavaScript inside the database engine is surfaced in the following ways:

- Schema Agnostic Indexing: DocumentDB does not require developers to specify schemas or secondary indexes in order to support consistent queries. The database engine is designed to automatically index every property inside every JSON document, while simultaneously ingesting a sustained volume of rapid writes. This is a crucial step to removing the friction between ever-evolving modern applications and the database.
- SQL query dialect rooted in the JavaScript type system: DocumentDB’s SQL query dialect is based on JavaScript’s type system. This in turn not only removes the type system mismatch between JavaScript applications and DocumentDB, but also enables seamless invocation of user defined functions (UDFs) written entirely in JavaScript from within a SQL query.
- JavaScript Language Integrated Transactions: As part of our bet on JavaScript, we allow developers to register stored procedures and triggers written in JavaScript with DocumentDB collections. These stored procedures and triggers are executed in a sandboxed manner inside the database engine within an ambient database transaction. The stored procedure (or trigger) can update multiple documents transactionally. The database transaction is committed upon successful completion of the stored procedure (or trigger); the database transaction is aborted when the JavaScript “throw” keyword is executed.
- JavaScript Language Integrated Queries: Today, we are pleased to announce we’re taking this vision another step further by introducing a JavaScript language integrated query API in our JavaScript server-side SDK.
JavaScript Language Integrated Queries: The no SQL alternative for NoSQL

“DocumentDB’s fluent JS queries allow for a JS chaining syntax that’s easy to pick up and familiar to those who’ve used ES5’s array built-ins or JS libraries like Lodash or Underscore.” – John-David Dalton, Creator of lodash

Consider the following two JSON documents (and their tree representations) in a collection:

{
  "locations": [
    { "country": "Germany", "city": "Berlin" },
    { "country": "France", "city": "Paris" }
  ],
  "headquarter": "Belgium",
  "exports": [
    { "city": "Moscow" },
    { "city": "Athens" }
  ]
}

{
  "locations": [
    { "country": "Germany", "city": "Bonn", "revenue": 200 }
  ],
  "headquarter": "Italy",
  "exports": [
    { "city": "Berlin", "dealers": [ { "name": "Hans" } ] },
    { "city": "Athens" }
  ]
}

Previously, you could execute a stored procedure that queries for all “exports” from documents whose “headquarter” is “Belgium”:

function() {
    var filterQuery = 'SELECT * from companies c where c.headquarter = "Belgium"';
    var isAccepted = __.queryDocuments(__.getSelfLink(), filterQuery, function(err, docs, options) {
        if (err) throw new Error(err.number + err.message);
        __.response.setBody(docs);
    });
    if (!isAccepted) __.response.setBody('Query timed out');
}

Upon execution, the stored procedure returns the response document with the exports:

{
  "results": [
    {
      "locations": [
        { "country": "Germany", "city": "Berlin" },
        { "country": "France", "city": "Paris" }
      ],
      "headquarter": "Belgium",
      "exports": [
        { "city": "Moscow" },
        { "city": "Athens" }
      ]
    }
  ]
}

Notice that the queries issued from the stored procedure are still plain SQL strings. With today’s announcement of language integrated queries, you no longer need to write SQL query strings in JavaScript. Contrast this with the new language integrated query below:

function() {
    var resp = __.filter(function(company) {
        return company.headquarter == 'Belgium';
    });
    if (!resp.isAccepted) __.response.setBody('Query timed out');
}

Cool, isn’t it?

SQL to JavaScript Query API Cheat Sheet

The following examples illustrate a few equivalent queries using DocumentDB’s SQL grammar and the new JavaScript query API.

DocumentDB SQL:

__.queryDocuments(__.getSelfLink(),
    "SELECT * FROM docs WHERE ARRAY_CONTAINS(docs.Tags, 123)",
    function(err, docs, options) { __.response.setBody(docs); });

JavaScript language integrated query:

__.filter(function(x) { return x.Tags && x.Tags.indexOf(123) > -1; });

DocumentDB SQL:

__.queryDocuments(__.getSelfLink(),
    "SELECT docs.id, docs.message AS msg FROM docs WHERE docs.id='X998_Y998'",
    function(err, docs, options) { __.response.setBody(docs); });

JavaScript language integrated query:

__.chain()
    .filter(function(doc) { return doc.id === "X998_Y998"; })
    .map(function(doc) { return { id: doc.id, msg: doc.message }; })
    .value();

DocumentDB SQL:

__.queryDocuments(__.getSelfLink(),
    "SELECT VALUE tag FROM docs JOIN tag IN docs.Tags ORDER BY docs._ts",
    function(err, docs, options) { __.response.setBody(docs); });

JavaScript language integrated query:

__.chain()
    .filter(function(doc) { return doc.Tags && Array.isArray(doc.Tags); })
    .sortBy(function(doc) { return doc._ts; })
    .pluck("Tags")
    .flatten()
    .value();

To get started with the DocumentDB server-side JavaScript SDK, sign up for DocumentDB and check out our documentation here. If you need any help, please reach out to us on Stack Overflow, the Azure DocumentDB MSDN Developer Forums, or schedule a 1:1 chat with the DocumentDB engineering team. To stay up to date on the latest DocumentDB news and features, follow us on Twitter.

Microsoft and Mesosphere partner to bring Mesos container orchestration across Windows and Linux worlds

In April, we announced Azure’s support for the Mesosphere Datacenter Operating System (DCOS), which expanded the options for container deployment and orchestration solutions on Azure. Since that time, we have continued to build on our close partnership with Mesosphere, and today we are excited to announce that Microsoft and Mesosphere are partnering to bring Mesos’ highly scalable and elastic container orchestration to the world by porting it directly to Windows Server.

Mesos on Windows Server is an open source project and is part of core Apache Mesos. While it is being developed by both Mesosphere and Microsoft, we’re inviting others in the community to join the effort. The code will be freely available, making it easy to adopt by all users of Mesos as well as the Mesosphere DCOS, which is built atop Mesos.

To learn more about this announcement, check out the Server and Cloud blog.

Microsoft Showcases Software Defined Networking Innovation at SIGCOMM

Microsoft’s Albert Greenberg, Distinguished Engineer, Networking Development, has been chosen to receive the 2015 ACM SIGCOMM Award for pioneering the theory and practice of operating carrier and data center networks. ACM SIGCOMM’s highest honor, the award recognizes lifetime achievement and contributions to the field of computer networking. It is awarded annually to a person whose work, over the course of his or her career, represents a significant contribution and a substantial influence on the work and perceptions of others in the field. Albert will accept the award at the 2015 ACM SIGCOMM conference in London, UK, where he delivered the conference keynote address. Below, Albert shares more about Microsoft’s innovative approach to networking and what’s going on at SIGCOMM.

Microsoft Showcases Software Defined Networking Innovation at SIGCOMM

This week I had the privilege of delivering the keynote address at ACM SIGCOMM, one of our industry’s premier networking events. My colleagues and I are onsite to talk about and demonstrate some of Microsoft’s latest research and innovations, and to share more about how we leverage software defined networking (SDN) to power Microsoft Azure.

Microsoft Bets Big on SDN

To meet growing cloud demand for Azure, we have invested over $15 billion in building our global cloud infrastructure since we opened our first datacenter in 1989. Our datacenters hold over a million physical servers, and it is unthinkable to run infrastructure at this scale using the legacy designs the industry produced prior to the cloud revolution. In my keynote, I discussed how we applied the principles of virtualized, scale-out, partitioned cloud design and central control to everything from the Azure compute plane implementation to cloud compute, storage, and of course, networking. Given the scale we had to build to and the need to create software defined datacenters for millions of customers, we had to change everything in networking, and so we did – from optical to server to NIC to datacenter networks to WAN to Edge/CDN to last mile.

It is always a pleasure to speak at SIGCOMM, since key ideas of hyperscale SDN were put forward in the VL2 (Virtual Layer 2) paper at SIGCOMM 2009: (a) build a massive, uniform, high-bandwidth Clos network to provide the physical datacenter fabric, and (b) build for each and every customer a virtual network through software running on every host. Together, these two ideas enabled SDN and Network Function Virtualization (NFV), through iterative development by amazing teams of talented Microsoft Azure engineers. In particular, over the last ten years we have revised the physical network design every six months, constantly improving scale and reliability. Through virtualizing the network in the host, we ship new network virtualization capabilities weekly, updating the capabilities of services such as Azure ExpressRoute.

First, in the keynote, I talked about the challenges of managing massive-scale Clos networks built with commodity components (in optics, in merchant silicon, and in switches) to achieve 100X improvements in capex and opex compared to prior art. Indeed, we now think back on the prior art of data center networking as ‘snowflakes’ — closed, scale-up designs, lacking real vendor interoperability, each specialized and fragile, requiring frequent human intervention to manage.
In contrast, our cloud scale physical networks are managed via a simple, common platform (the network state service), which abstracts away the complexity of individual networks and allows us to build applications for infrastructure deployment, fault management, and traffic engineering as loosely coupled applications on the platform. I talked about the Switch Abstraction Interface (SAI) and the Azure Cloud Switch (Microsoft’s own switching software that runs on top of the Switch Abstraction Interface) inside the switch. At Microsoft, we are big believers in the power of open technology, and networking is no exception. The SAI is the first open-standard C API for programming network switching ASICs. With ASIC vendors innovating ferociously fast, the formerly strict coupling of switch hardware to protocol stack software prevented us from choosing the best combination of hardware and software to build networks, because we couldn’t port our software fast enough. SAI enables software to program multiple switch chips without any changes, making the base router platform simple and consistent. Tomorrow, we will give a SAI and Azure Cloud Switch demonstration with many industry collaborators.

Managing massive-scale Clos networks has called for new innovations in management, monitoring, and analytics. We took on those challenges by using technologies developed for cloud scale — leveraging the same big data and monitoring capabilities that Azure makes available to our customers. At cloud scale, component failures will happen, and Azure is fine with that as we scale out across numerous components. Our systems detect, pinpoint, isolate, and route around the faulty components. At SIGCOMM this year, we talk about two such technologies — PingMesh and EverFlow — used every day to manage the Azure network.

In the second part of the talk, I focused on network virtualization, which allows customers to create, on shared cloud infrastructure, the full range of networking capabilities that are available on private, dedicated infrastructure. Through virtual networks (VNets), customers can seamlessly span their enterprises to the cloud, which allows them to protect existing investments while upgrading to the cloud at their own pace. To make VNets work at higher levels of reliability, we needed to develop two technologies: (a) scalable controllers capable of managing 500,000 servers per regional datacenter, and (b) fast packet processing technologies on every host that light up the functionality through controller APIs. All of this is stitched together through the same principles of cloud design applied to our physical network, as well as to our compute and storage services.

Again, we leverage cloud technologies to build the Azure SDN. In particular, Azure Service Fabric provides the micro-service platform on which we have built our Virtual Networking SDN controller. Service Fabric takes care of scale-up, scale-down, load balancing, fault management, leader election, key-value stores, and more, so that our controllers can focus on the key virtual functions needed to light up networking features on demand and at huge scale. ExpressRoute, where we essentially create a datacenter-scale router through virtualization and networking capabilities on every host, enables customers that have an ISP or IXP partner to attach Azure immediately to their enterprises, for their VNets and Azure’s native compute and storage services.
It’s been a little over a year since we announced ExpressRoute, and in that time the adoption has been phenomenal, with new ISP and IXP partners onboarding at an amazing pace. This gives me the opportunity to talk about our Virtual Filtering Platform (VFP). In VFP we have developed the SDN extensions that run on every host in the data center. VFP provides a simple set of networking APIs and abstractions that allow Microsoft to introduce new networking features in an agile and efficient way, through chaining typed match-action tables. VFP is fast and simple because it focuses on packet transformations and forwarding. All service semantics are removed from the host and located in the SDN controller.

That said, there are limits to what can be done purely in software with tight controls on cost and low latency. As a result, we introduced new super-low-latency RDMA technologies and a new protocol that we run in Azure’s NICs, DCQCN, also being presented at this year’s SIGCOMM. I showcased performance measurements taken for Bing, showing that we dramatically improve latency, going from 700us to 90us at the 99th percentile.

That brings me to the question of how we can leverage hardware to offload the VFP technologies and get even better performance as networking continues on its journey to ever greater speed, richer feature sets, and support for larger numbers of VMs and containers. Azure SmartNIC meets these challenges. Our SmartNIC incorporates Field Programmable Gate Arrays (FPGAs), which enable reconfigurable network hardware. Through FPGAs we can create new hardware capabilities at the speed of software. No one knows what SDN capabilities will be needed a year from now. Our FPGA-based SmartNIC allows us to reprogram the hardware to meet new needs as they appear — reprogramming, not redeploying, hardware. To make the enormous potential clear to the audience, I demonstrated encrypted VNets, providing strong security for all communications within a VNet. I will update this post soon with a link to my keynote and slides, so keep an eye on this space.

Celebrating Networking Innovation

What I am most excited about at SIGCOMM is seeing many of my colleagues recognized for innovative, ground-breaking research. Microsoft has a proud history of publishing early and working with the industry to deliver innovation. This year marks the ten year anniversary of the paper “A Clean Slate 4D Approach to Network Control and Management,” which I wrote with Gisli Hjalmtysson, David A. Maltz, Andy Myers, Jennifer Rexford, Geoffrey Xie, Hong Yan, Jibin Zhan, and Hui Zhang. This paper was selected by SIGCOMM for this year’s “Test of Time Award.” In the paper, we proposed the key design principles of centralizing control and programming the network to meet network-wide objectives, based on network-wide views. This research gave rise to SDN and NFV. It also illustrates that the best way to have impact is to imagine the future and then work on the engineering and products to make it happen. We provided the scenario, the team, and the systems and tools to turn the dream of the 4D approach into the SDN and NFV reality for Azure.

This year’s SIGCOMM is no different, with many publications from our Microsoft Research colleagues and collaborators in academia being presented, providing insightful measurements and opening up new challenges and promising innovations. I hope you’ll check out these papers and let us know what you think in the comments.
- Pingmesh: A Large-Scale System for Data Center Network Latency Measurement and Analysis
- Network-Aware Scheduling for Data-Parallel Jobs: Plan When You Can
- Low Latency Geo-distributed Data Analytics
- Silo: Predictable Message Latency in the Cloud
- Hopper: Decentralized Speculation-aware Cluster Scheduling at Scale
- Packet-Level Telemetry in Large Datacenter Networks
- Enabling End-Host Network Functions
- Congestion Control for Large-Scale RDMA Deployments
- R2C2: A Network Stack for Rack-scale Computers
- Programming Protocol-Independent Packet Processors

Azure Government Achieves Significant Compliance Milestones

In April, Microsoft announced four new industry certifications for Microsoft Azure – CDSA for the digital media and entertainment industry, FISC for Japanese financial services organizations, DISA Level 2 for the US defense sector, and MTCS Level 3 for the Singapore government. Today I’m excited to share another four milestones specific to Azure Government:
- FedRAMP Moderate Provisional Authority to Operate (P-ATO)
- DISA Level 2 Provisional Authorization (PA)
- HIPAA Business Associate Agreement (BAA)
- Support for federal tax workloads under IRS Publication 1075

All of these achievements apply to Azure Government customers immediately, enabling trusted cloud scenarios across a broad range of services. Azure Government delivers on the criteria necessary for government agencies and their partners to use cloud services, adding assurance that data will remain in US facilities, datacenter personnel have been screened according to strict guidelines, and continuous monitoring ensures effective incident detection and response.

FedRAMP Moderate P-ATO

Azure Government—including identity services (Azure Active Directory and Multi-Factor Authentication)—is now certified for US government customers. Receiving the P-ATO for Azure Government provides independent attestation that the cloud platform meets the rigorous standards and security requirements laid out in NIST 800-53.

DISA Level 2 PA

As part of our FedRAMP authorization, Azure Government has also been granted a PA for DISA Level 2 by the FedRAMP Joint Authorization Board (JAB). Department of Defense customers can place non-sensitive information and defense applications that require DISA Level 2 into Azure Government.

HIPAA BAA

Microsoft now contractually commits to meeting HIPAA requirements in Azure Government by providing a BAA addendum to enterprise agreements. US Government customers and partners can have confidence that PHI will be protected with best-in-class security and privacy capabilities and processes.

IRS Publication 1075

Many of our customers need the ability to process federal tax information. Azure Government provides the features, processes, and transparency that enable customers to achieve compliance with IRS 1075. Customers can review Azure Government’s IRS 1075 Safeguard Security Report, as well as a controls matrix that defines distributed accountabilities for certifying their solutions on Azure Government.

Looking to the Future

I’m excited about what these certifications mean for customers seeking to deploy environments on the most broadly validated cloud platform on the market today. With these announcements, Azure holds the largest number of industry, government and international certifications of any commercial cloud provider. For more information, please visit our Azure Trust Center.

Announcing Backup of Windows 10 machines using Azure Backup

Windows 10 client machines can now be backed up seamlessly to the cloud with the Azure Backup service. Customers with machines running the Windows 10 (64-bit) operating system can protect their important file and folder data to Azure in a secure manner and restore it on any machine. Azure Backup provides the following capabilities:
- Granular backup of files and/or folders from your machines to Azure.
- Customizable backup schedules, with backups happening as frequently as three times a day.
- Efficient utilization of bandwidth by transferring incremental backup copies, i.e., transferring only the data changed since the previous backup point.
- Encryption of data before it even leaves your machine boundary.
- Retention of backup copies in Azure for up to 99 years.
- Easy browsing of the files and folders in backup copies at Azure, restoring only the desired files.
- Seamless restore of files from backup copies at Azure to any other machine.
- Monitoring of all of the above activities, such as backup schedules and backup and recovery jobs, in a familiar MMC console.
- Use of the Azure Backup service across 17 regions worldwide, so you can back up files to the region nearest you.

[Screenshots: Windows 10 file and folder backup, restore of backed-up files, and backup job monitoring using Azure Backup]

Watch the ‘How-to’ videos below on Azure Backup to learn how to configure backup and restore using Azure Backup.

To get started and learn more about Azure Backup, browse the ‘Getting started with Azure Backup’ blog. To learn about the best practices for protecting Windows client machines, browse the Azure Backup announcement on support for client operating systems. Install the latest Azure Backup agent on your Windows 10 client machines to start protecting your files and folders to Azure; if you prefer scripting, a PowerShell sketch follows at the end of this post.

Related links/content:
- Learn more about Azure Backup
- Click for a free Azure trial subscription, and download the latest Azure Backup agent to get started
- Need help? Reach out to the Azure Backup forum for support
- Tell us how we can improve Azure Backup: contribute new ideas and up-vote existing ones
- Follow us on Twitter and Channel9 to get the latest news and updates
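For readers who prefer scripting, here is a minimal sketch of a file and folder backup policy using the PowerShell cmdlets that ship with the Azure Backup (MARS) agent. It assumes the agent is installed and the machine is already registered with your backup vault; the folder path, schedule, and retention values are placeholders, and parameter names may vary slightly across agent versions.

# Requires the MSOnlineBackup module installed with the Azure Backup agent,
# on a machine already registered with an Azure Backup vault.
Import-Module MSOnlineBackup

$policy    = New-OBPolicy
$fileSpec  = New-OBFileSpec -FileSpec @("C:\Users\Public\Documents")    # placeholder folder to protect
$schedule  = New-OBSchedule -DaysOfWeek Monday, Wednesday, Friday -TimesOfDay "18:00"
$retention = New-OBRetentionPolicy -RetentionDays 30                    # placeholder retention window

Add-OBFileSpec -Policy $policy -FileSpec $fileSpec
Set-OBSchedule -Policy $policy -Schedule $schedule
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
Set-OBPolicy -Policy $policy -Confirm:$false

# Kick off an on-demand backup using the newly configured policy
Get-OBPolicy | Start-OBBackup

The same backup and recovery jobs then show up in the MMC console described above, so scripted and console-based management can be mixed freely.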

Containers: Docker, Windows and Trends

You can’t have a discussion on cloud computing lately without talking about containers. Organizations across all business segments, from banks and major financial service firms to e-commerce sites, want to understand what containers are, what they mean for applications in the cloud, and how to best use them for their specific development and IT operations scenarios. From the basics of what containers are and how they work, to the scenarios they’re being most widely used for today, to emerging trends supporting “containerization”, I thought I’d share my perspectives to better help you understand how to best embrace this important cloud computing development to more seamlessly build, test, deploy and manage your cloud applications.

Containers Overview

In abstract terms, all of computing is based upon running some “function” on a set of “physical” resources, like processor, memory, disk, network, etc., to accomplish a task, whether a simple math calculation, like 1+1, or a complex application spanning multiple machines, like Exchange. Over time, as the physical resources became more and more powerful, applications often did not utilize even a fraction of the resources provided by the physical machine. Thus “virtual” resources were created to simulate underlying physical hardware, enabling multiple applications to run concurrently – each utilizing fractions of the physical resources of the same physical machine. We commonly refer to these simulation techniques as virtualization. While many people immediately think of virtual machines when they hear virtualization, that is only one implementation of virtualization. Virtual memory, a mechanism implemented by all general purpose operating systems (OSs), gives applications the illusion that a computer’s memory is dedicated to them and can even give an application the experience of having access to much more RAM than the computer has available.

Containers are another type of virtualization, also referred to as OS virtualization. Today’s containers on Linux create the perception of a fully isolated and independent OS to the application. To the running container, the local disk looks like a pristine copy of the OS files, the memory appears only to hold files and data of a freshly booted OS, and the only thing running is the OS. To accomplish this, the “host” machine that creates a container does some clever things.

The first technique is namespace isolation. Namespaces include all the resources that an application can interact with, including files, network ports and the list of running processes. Namespace isolation enables the host to give each container a virtualized namespace that includes only the resources that it should see. With this restricted view, a container can’t access files not included in its virtualized namespace regardless of their permissions because it simply can’t see them. Nor can it list or interact with applications that are not part of the container, which fools it into believing that it’s the only application running on the system when there may be dozens or hundreds of others.

For efficiency, many of the OS files, directories and running services are shared between containers and projected into each container’s namespace. Only when an application makes changes to its container, for example by modifying an existing file or creating a new one, does the container get distinct copies from the underlying host OS – but only of those portions changed, using Docker’s “copy-on-write” optimization.
This sharing is part of what makes deploying multiple containers on a single host extremely efficient. Second, the host controls how much of the host’s resources can be used by a container. Governing resources like CPU, RAM and network bandwidth ensures that a container gets the resources it expects and that it doesn’t impact the performance of other containers running on the host. For example, a container can be constrained so that it cannot use more than 10% of the CPU. That means that even if the application within it tries, it can’t access the other 90%, which the host can assign to other containers or keep for its own use. Linux implements such governance using a technology called “cgroups.” Resource governance isn’t required in cases where containers placed on the same host are cooperative, allowing for standard OS dynamic resource assignment that adapts to changing demands of application code.

The combination of instant startup that comes from OS virtualization and reliable execution that comes from namespace isolation and resource governance makes containers ideal for application development and testing. During the development process, developers can quickly iterate. Because its environment and resource usage are consistent across systems, a containerized application that works on a developer’s system will work the same way on a different production system. The instant-start and small footprint also benefit cloud scenarios, since applications can scale out quickly and many more application instances can fit onto a machine than if they were each in a VM, maximizing resource utilization.

Comparing a similar scenario that uses virtual machines with one that uses containers highlights the efficiency gained by the sharing. In the example shown below, the host machine has three VMs. In order to provide the applications in the VMs complete isolation, they each have their own copies of OS files, libraries and application code, along with a full in-memory instance of an OS. Starting a new VM requires booting another instance of the OS, even if the host or existing VMs already have running instances of the same version, and loading the application libraries into memory. Each application VM pays the cost of the OS boot and the in-memory footprint for its own private copies, which also limits the number of application instances (VMs) that can run on the host.

The figure below shows the same scenario with containers. Here, containers simply share the host operating system, including the kernel and libraries, so they don’t need to boot an OS, load libraries or pay a private memory cost for those files. The only incremental space they take is any memory and disk space necessary for the application to run in the container. While the application’s environment feels like a dedicated OS, the application deploys just like it would onto a dedicated host. The containerized application starts in seconds and many more instances of the application can fit onto the machine than in the VM case.

Docker’s Appeal

The concept of namespace isolation and resource governance related to OSs has been around for a long time, going back to BSD Jails, Solaris Zones and even the basic UNIX chroot (change root) mechanism. However, by creating a common toolset, packaging model and deployment mechanism, Docker greatly simplified the containerization and distribution of applications that can then run anywhere on any Linux host.
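As a small illustration of the namespace isolation and resource governance described above, the commands below (placeholders; runnable from any shell, including PowerShell, with the Docker client pointed at a Linux Docker host) start two containers from the same public image with a memory cap and a CPU-shares weight, then watch their resource usage.

docker pull nginx                                                       # fetch a public image from Docker Hub
docker run -d --name web1 -m 256m --cpu-shares 512 -p 8080:80 nginx    # cgroup-backed memory cap and CPU weight
docker run -d --name web2 -m 256m --cpu-shares 512 -p 8081:80 nginx    # second isolated instance, same shared image layers
docker stats web1 web2                                                  # live per-container CPU, memory and I/O usage

Both containers share the host kernel and the read-only image layers; each gets only a thin writable layer of its own, which is the copy-on-write behavior described above.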
This ubiquitous technology not only simplifies management by offering the same management commands against any host, it also creates a unique opportunity for seamless DevOps. From a developer’s desktop to a testing machine to a set of production machines, a Docker image can be created that will deploy identically across any environment in seconds. This story has created a massive and growing ecosystem of applications packaged in Docker containers, with DockerHub, the public containerized-application registry that Docker maintains, currently publishing more than 180,000 applications in the public community repository. Additionally, to guarantee the packaging format remains universal, Docker recently organized the Open Container Initiative (OCI), aiming to ensure container packaging remains an open and foundation-led format, with Microsoft as one of the founding members.

Windows Server and Containers

To bring the power of containers to all developers, last October we announced plans to implement container technology in Windows Server. To give developers who use Linux Docker containers the exact same experience on Windows Server, we also announced our partnership with Docker to extend the Docker API and toolset to support Windows Server Containers. For us, this was an opportunity to benefit all of our customers, Linux and Windows alike. As I recently demonstrated at DockerCon, we are excited to create a unified and open experience for developers and system administrators to deploy their containerized applications comprising both Windows Server and Linux. We are developing this in the open Docker GitHub repository.

In Windows Server 2016, we will be releasing two flavors of containers, both of which will be deployable using Docker APIs and the Docker client: Windows Server Containers and Hyper-V Containers. Linux containers require Linux APIs from the host kernel and Windows Server Containers require the Windows APIs of a host Windows kernel, so you cannot run Linux containers on a Windows Server host or a Windows Server Container on a Linux host. However, the same Docker client can manage all of these containers, and while you can’t run a packaged Windows container on Linux, a Windows container package works with both Windows Server Containers and Hyper-V Containers because they both utilize the Windows kernel.

There’s the question of when you might want to use a Windows Server Container versus a Hyper-V Container. While the sharing of the kernel enables fast start-up and efficient packing, Windows Server Containers share the OS with the host and each other. The amount of shared data and APIs means that there may be ways, whether by design or because of an implementation flaw in the namespace isolation or resource governance, for an application to escape out of its container or deny service to the host or other containers. Local elevation of privilege vulnerabilities that operating system vendors patch are an example of a flaw that an application could leverage. Thus, Windows Server Containers are great for scenarios where the OS trusts the applications that will be hosted on it, and all the applications also trust each other. In other words, the host OS and applications are within the same trust boundary. That’s true for many multi-container applications, applications that make up a shared service of a larger application, and sometimes applications from the same organization.
There are cases where you may want to run applications from different trust boundaries on the same host, however. One example is if you are implementing a multitenant PaaS or SaaS offering where you allow your customers to supply their own code to extend the functionality of your service. You don’t want one customer’s code to interfere with your service or gain access to the data of your other customers, but you need a container that is more agile than a VM and that takes advantage of the Docker ecosystem. We have several examples of such services in Azure, like Azure Automation and Machine Learning. We call the environment they run in “hostile multi-tenancy,” since we have to assume that there are customers that deliberately seek to subvert the isolation. In these types of environments, the isolation of Windows Server Containers may not provide sufficient assurance, which motivated our development of Hyper-V Containers.

Hyper-V Containers take a slightly different approach to containerization. To create more isolation, Hyper-V Containers each have their own copy of the Windows kernel and have memory assigned directly to them, a key requirement of strong isolation. We use Hyper-V for CPU, memory and IO isolation (like network and storage), delivering the same level of isolation found in VMs. As with VMs, the host only exposes a small, constrained interface to the container for communication and sharing of host resources. This very limited sharing means Hyper-V Containers have a bit less efficiency in startup times and density than Windows Server Containers, but they provide the isolation required to allow untrusted and “hostile multi-tenant” applications to run on the same host.

So aren’t Hyper-V Containers the same as VMs? Besides the optimizations to the OS that result from it being fully aware that it’s in a container and not a physical machine, Hyper-V Containers will be deployed using the magic of Docker and can use the exact same packages that run in Windows Server Containers. Thus, the tradeoff between level of isolation and efficiency/agility is a deploy-time decision, not a development-time decision – one made by the owner of the host.

Orchestration

As they’ve adopted containers, customers have discovered a challenge. When they deploy dozens, hundreds or thousands of containers that make up an application, tracking and managing the deployment requires advances in both management and orchestration. Container orchestration has become an exciting new area of innovation with multiple options and solutions. Container orchestrators are assigned a pool of servers (VMs or bare metal servers), commonly called a “cluster,” and “schedule” deployment of containers onto those servers. Some orchestrators go further and configure networking between containers on different servers, while some include load balancing, container name resolution, rolling updates and more. Some are extensible and enable application frameworks to bring these additional capabilities. While a deeper discussion on orchestration solutions might require a whole other post on its own, here’s a quick outline of a few of the technologies, all supported on top of Azure:

- Docker Compose enables the definition of simple multi-container applications. Docker Swarm manages and organizes Docker containers across multiple hosts via the same API used by a single Docker host. Swarm and Compose come together to offer a complete orchestration technology built by Docker.
Mesos is an orchestration and management solution that actually predates Docker, but it has recently added support for Docker to its built-in application framework, Marathon. It is an open and community-driven solution built by Mesosphere. We recently demonstrated integration with Mesos and DCOS on Azure. Kubernetes is an open-source solution built by Google that offers container grouping into “Pods” for management across multiple hosts. It is also supported on Azure. Deis is an open-source PaaS platform for deploying and managing applications integrated with Docker. We have an easy way to deploy a Deis cluster on Azure. We are excited to have support in Azure for most of the popular orchestration solutions and expect to get more engaged in these communities as we see interest and usage increase over time.

Microservices

The most immediately lucrative use of containers has focused on simplifying DevOps with easy developer-to-test-to-production flows for services deployed in the cloud or on-premises. But there is another growing scenario where containers are becoming very compelling. Microservices is an approach to application development in which every part of the application is deployed as a fully self-contained component, called a microservice, that can be individually scaled and updated. For example, the subsystem of an application that receives requests from the public Internet might be separate from the subsystem that puts those requests onto a queue for a backend subsystem to read and drop into a database. When the application is constructed using microservices, each subsystem is a microservice. In a dev/test environment on a single box, the microservices might each have one instance, but when run in production each can scale out to a different number of instances across a cluster of servers, depending on its resource demands as customer request levels rise and fall. If different teams produce them, the teams can also update them independently.

Microservices is not a new approach to programming, nor is it tied explicitly to containers, but the benefits of Docker containers are magnified when applied to a complex microservice-based application. Agility means that a microservice can quickly scale out to meet increased load, the namespace and resource isolation of containers prevents one microservice instance from interfering with others, and use of the Docker packaging format and APIs unlocks the Docker ecosystem for the microservice developer and application operator. With a good microservice architecture, customers can solve the management, deployment, orchestration and patching needs of a container-based service with reduced risk of availability loss while maintaining high agility.

Today there are several solutions for building application models using microservices, and we partner with many of them on Azure; Docker Compose and Mesosphere Marathon are two examples. Shortly before //build, we announced and then released a developer preview of Service Fabric, our own microservices application platform. It includes a rich collection of microservice lifecycle management capabilities, including rolling update with rollback, partitioning, placement constraints and more. Notably, in addition to stateless microservices, it supports stateful microservices, which are differentiated by the fact that the microservice manages data that is co-resident with it on the same server.
In fact, Service Fabric is the only PaaS platform that offers stateful microservices with state management and replication frameworks built directly into its cluster management. We developed this application model so that internal services could scale to hyperscale with stateful replication, and services like Cortana, Azure SQL Database and Skype for Business are built on it. We will release a public preview of Service Fabric later this year, but in the meantime you can check out more on Service Fabric here; a brief sketch of what a stateful microservice can look like in this model appears at the end of this post. I hope the above helps paint a useful picture of Microsoft’s container vision, the most common container use cases, and some of the emerging industry trends around containers. As always, we’d love your feedback, particularly if there are any areas where you’d like to learn more.
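For the curious, here is a minimal, illustrative sketch of a stateful microservice in the Reliable Services programming model. The class name, dictionary name and counter logic are assumptions chosen for illustration, and the API surface shown (StatefulService, IReliableDictionary, the reliable state manager) reflects the shape of the Reliable Services model rather than a definitive sample for the current developer preview; consult the Service Fabric documentation for the authoritative API.

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

// A stateful microservice: its data (a reliable dictionary) lives on the same
// nodes as the code and is replicated by the platform for high availability.
internal sealed class VisitCounterService : StatefulService
{
    public VisitCounterService(StatefulServiceContext context)
        : base(context)
    {
    }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // State is kept co-resident with the service and replicated across the cluster.
        var counts = await this.StateManager
            .GetOrAddAsync<IReliableDictionary<string, long>>("visitCounts");

        while (!cancellationToken.IsCancellationRequested)
        {
            using (var tx = this.StateManager.CreateTransaction())
            {
                // Increment a counter transactionally; the write is committed to the
                // replica set before the transaction completes.
                await counts.AddOrUpdateAsync(tx, "home", 1, (key, value) => value + 1);
                await tx.CommitAsync();
            }

            await Task.Delay(TimeSpan.FromSeconds(30), cancellationToken);
        }
    }
}

Because the dictionary is replicated by the cluster itself, the service needs no external store for this state, which is the property that distinguishes stateful microservices in the description above.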

Dependency call stack and Application Insights SDK labs

We have released an experimental package that traces the call stack for every dependency call collected by Visual Studio Application Insights. With the existing SDK for .NET, you can track calls from your app to external dependencies such as databases or REST APIs, and find slow or failing calls. But with the current SDK, you can’t easily find out which functions in your app made those calls. Our experimental NuGet package addresses this problem: it extends the Application Insights SDK for .NET to provide call stack information for every dependency collected.

Requirements

An ASP.NET web application project, with the Application Insights SDK for .NET Web Applications installed in the project (this is the SDK you get if you use Visual Studio to add Application Insights to your project). Our SDK adds one custom property to each dependency telemetry data point; a conceptual sketch of this mechanism appears later in this post.

Installation

The dependency call stack package is hosted in the new SDK Labs NuGet gallery. If you’re using Visual Studio to manage NuGet packages in your project, here’s what to do: add the Application Insights SDK Labs package source to NuGet (source: https://www.myget.org/F/applicationinsights-sdk-labs/), then add the prerelease package Microsoft.ApplicationInsights.DependencyCallstacks to your project. Remember to include prerelease packages. That’s it! Rebuild and run your app. If you are using the command-line package manager, this is all you need:

> Install-Package "Microsoft.ApplicationInsights.DependencyCallstacks" -Source "https://www.myget.org/F/applicationinsights-sdk-labs/" -Pre

Viewing Call Stacks

Installing this experimental package changes your experience in the Azure portal to surface the new data. When you view requests in Application Insights that have related dependencies, you will now get a view showing the relationship between your dependencies. Click on any dependency in this new view to bring up the dependency details blade. It now contains a call stack section, complete with class and function names as well as file names and line numbers, when available.

To stop getting this experimental experience, remove the package from your project. After you remove the package, all new requests will be displayed in the default UI in the Azure portal. Old requests made while the package was installed will continue to be displayed in the view above.

Current Limitations

Some asynchronously called dependencies may not have call stack information collected. Only the “Just My Code” version of the call stack is collected.

About SDK Labs

With this project we are introducing the concept of Application Insights SDK Labs. The project is open source on GitHub and released on the MyGet gallery. Our strategy: every experimental project is a standalone NuGet package; we publish projects to the dedicated MyGet feed; we monitor the number of downloads and the customers engaged with the experimental feature; and we then decide whether the experiment is successful. If it doesn’t prove beneficial, we will discontinue the project and remove it from the repository. Successful projects are promoted to their own repository or into the Application Insights SDK.

Summary

This project enables some extra diagnostics scenarios. It improves Application Insights by providing more detail and extending the functionality of an existing feature. We believe this addition will be very useful. The package was created as an intern project, and we hope it will gain enough interest for us to include it in our SDK by default.
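To give a sense of the general mechanism, here is a minimal, hypothetical sketch of how a telemetry initializer could attach a call stack to dependency telemetry as a custom property. This is an illustration only, not the actual implementation of the Microsoft.ApplicationInsights.DependencyCallstacks package, and the “CallStack” property name is an assumption made for the example.

using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Hypothetical initializer: stamps the current call stack onto every
// dependency telemetry item as a custom property.
public class CallStackTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var dependency = telemetry as DependencyTelemetry;
        if (dependency == null)
        {
            return; // only dependency calls are annotated
        }

        // "CallStack" is an illustrative property name, not the one the package uses.
        if (!dependency.Properties.ContainsKey("CallStack"))
        {
            dependency.Properties.Add("CallStack", Environment.StackTrace);
        }
    }
}

An initializer like this would be registered in TelemetryConfiguration.Active.TelemetryInitializers at application startup; the custom property then shows up alongside the dependency in the portal, which is roughly the experience the package provides out of the box.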
Please help us evaluate this feature by providing your feedback on GitHub: https://github.com/Microsoft/ApplicationInsights-SDK-Labs. Want more features? Ask us.

Credits

I would like to thank Osvaldo Rosado, the author of this package, for preparing this blog post.

Azure Media Player update with multi-audio stream support

You asked for it, and we listened! With an overwhelming number of requests on the Azure Media Player UserVoice forum for the ability to switch between multiple audio streams, Azure Media Player 1.3.0 now supports seamless same-codec audio switching on the AzureHtml5JS and FlashSS techs! Check out this awesome feature in action with two of our demo streams: Elephant’s Dream with English and Spanish audio streams, and Sintel with Stereo, Surround and Instrumental versions of the audio streams.

Audio stream switching can be achieved through the default skin or programmatically. It is easy to configure how the audio track button is displayed on the skin, allowing you to enable or disable the button and choose how the track label is generated. The track labels can be generated automatically using the language and bitrate information, or from the title of the audio stream as outlined in the manifest. Check out the sample to see how to do this in JavaScript.

This release also includes several bug fixes. You might have missed some features and bug fixes in the previous release, so be sure to update to the latest version to get all of these changes. Check out the change log for a full list of updates and features. We also added a nifty update you might have missed: you no longer need to put the path to the fallback techs in the head of your page. Azure Media Player will now automatically look at the relative path from the azuremediaplayer.min.js file as structured in the release:

<link href="//amp.azure.net/libs/amp/1.3.0/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet"> <script src="//amp.azure.net/libs/amp/1.3.0/azuremediaplayer.min.js"></script>

See the documentation for more information.

Providing Feedback

Azure Media Player will continue to grow and evolve, adding more features and enabling more scenarios. To help serve you better, we are always open to feedback and new ideas, and we appreciate any bug reports so we can continue to provide an amazing service with the latest technologies. Remember to read through the documentation and check out the samples first; they’re there to help make your development easier. To request new features or provide ideas and feedback, use the Azure Media Player UserVoice forum. If you have any specific issues, questions or find any bugs, drop us a line at ampinfo@microsoft.com.

Sign up for the latest news and updates

Sign up here so you never miss a release and to stay up-to-date with everything Azure Media Player has to offer.

Additional Resources

Learn More
License
Documentation
Samples
Demo page
UserVoice
Sign Up

Announcing Geospatial support in Azure DocumentDB!

We are excited to announce support for geospatial indexing and querying in Azure DocumentDB! The latest DocumentDB service update includes support for automatic indexing of geospatial data, as well as SQL support for performing proximity queries against location data stored in DocumentDB. If you download version 1.4.0 of the .NET SDK, you’ll also find new types for representing points and polygons, support for creating and enabling spatial indexing on DocumentDB collections, and new LINQ query operators for performing spatial queries. As we described in the “Schema Agnostic Indexing with Azure DocumentDB” paper, we designed DocumentDB’s database engine to be truly schema agnostic and provide first-class support for JSON. The write-optimized database engine of DocumentDB now also natively understands spatial data represented in the GeoJSON standard.

Query geospatial data with SQL

What kinds of queries can you perform with DocumentDB? Try it for yourself at the DocumentDB playground! We’ve pre-created a DocumentDB collection in the playground with a scientific dataset containing JSON data about volcanoes. For example, here’s a query that returns all volcanoes within 100 km of Redmond, WA using the built-in function ST_DISTANCE (a sketch of how a query like this can be issued from the .NET SDK appears at the end of this post). Find locations within a certain radius. You can also use ST_WITHIN to check whether a point lies within a polygon. Polygons are commonly used to represent boundaries like zip codes, state boundaries or natural formations. Again, if you include spatial indexing in your indexing policy, “within” queries will be served efficiently through the index. Find locations within a polygon boundary. Here’s what one of our customers had to say about the feature:

“Having the backend do the heavy lifting of “location and distance” math married with the power of querying geospatial data via LINQ makes DocumentDB a perfect backend for modern location based applications.” – Ryan Groom, Founder, Trekkit.com

How does spatial indexing work in Azure DocumentDB?

In a nutshell, the geometry is projected from geodetic coordinates onto a 2D plane and then divided progressively into cells using a quadtree. These cells are mapped to 1D based on the location of each cell within a Hilbert space-filling curve, which preserves the locality of points. Additionally, when location data is indexed, it goes through a process known as tessellation: all the cells that intersect a location are identified and stored as keys in the DocumentDB index. At query time, arguments like points and polygons are also tessellated to extract the relevant cell ID ranges, which are then used to retrieve data from the index. Learn more about this feature here.

Get started with the SDKs

Get started with querying spatial data by downloading version 1.4.0 of the DocumentDB .NET SDK from NuGet here, or one of the other supported platforms (Node.js, Java, Python or JavaScript) here. We also created a GitHub project containing code samples for indexing and querying spatial data. If you need any help or have questions, please reach out to us on the developer forums on Stack Overflow or schedule a 1:1 chat with the DocumentDB engineering team. Stay up-to-date on the latest DocumentDB news and features by following us on Twitter @DocumentDB.
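As promised above, here is a minimal sketch of issuing a proximity query from the .NET SDK. The account endpoint, collection link, document type and coordinates are illustrative assumptions; ST_DISTANCE works in meters, so 100 km is expressed as 100000.

using System;
using System.Linq;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Spatial;

// Illustrative document type: a volcano with a GeoJSON point location.
public class Volcano
{
    public string Name { get; set; }
    public Point Location { get; set; }
}

// Hypothetical endpoint, key and id-based collection link.
var client = new DocumentClient(new Uri("https://myaccount.documents.azure.com:443/"), "<authKey>");
var collectionLink = "dbs/VolcanoDb/colls/Volcanoes";

// SQL form: return volcanoes within 100 km of a point near Redmond, WA.
var sql = "SELECT * FROM volcanoes v " +
          "WHERE ST_DISTANCE(v.Location, {'type': 'Point', 'coordinates': [-122.19, 47.68]}) < 100000";
var nearRedmondSql = client.CreateDocumentQuery<Volcano>(collectionLink, sql).ToList();

// Equivalent LINQ form using the spatial types introduced in SDK 1.4.0.
var nearRedmondLinq = client.CreateDocumentQuery<Volcano>(collectionLink)
    .Where(v => v.Location.Distance(new Point(-122.19, 47.68)) < 100 * 1000)
    .ToList();

Either form is served from the spatial index when spatial indexing is enabled on the collection’s indexing policy.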

Azure DocumentDB bids fond farewell to Self-Links

OK, I admit it, that title is misleading, because we’re not getting rid of self-links entirely. I’ll explain more in a bit, but first let’s pause for a moment to reflect on how and why DocumentDB uses self-links in the first place. Azure DocumentDB is a fully managed JSON document database service hosted on Microsoft Azure, accessible via a REST interface. As a REST service, every resource in DocumentDB is addressable by a uniform resource identifier (Uri). The DocumentDB resource model explains how all resources relate to each other and how these Uris are structured following this resource model.

A resource id is a unique, immutable, system-generated value. If you inspect any resource in DocumentDB you will see some system properties, identified by an underscore (“_”) character. One of these system properties, _rid, is this resource id. Another system property is _self, which is the self-link for the resource. The self-link is a Uri in the form “dbs/{0}/colls/{1}/docs/{2}”, where {0} is the database _rid, {1} is the _rid for the document collection, and {2} is the _rid for the document. If you have done any work with DocumentDB, you are likely quite familiar with Uris that look like this, because they are used any time you do anything with any resource in the database. Being able to address a resource with a Uri like a self-link is a great thing because it provides a stable identifier for that resource. So why are we getting rid of them?

Well, we haven’t removed self-links. Not entirely. We’ve heard from many customers that these _rid-based self-links are really difficult to work with, because nobody knows what the resource id is offhand. You have to query for the resource just to get its self-link before you can do anything with that resource. Customers ask, “Every resource has an id, which I set when creating the resource, so why can’t I use that id when addressing the resource?” We agree with this sentiment, which is why today we’re announcing a big change in this area. Documents will still have a _self property, but we’re now adding the ability to build up an alternative link to a resource that is based on the id, not the _rid. (Note: the ability to use the existing self-link on the resource is still supported.) This means that the following Uri is now a valid way to reference the document we looked at earlier:

dbs/MyDatabaseId/colls/MyCollectionId/docs/MyDocumentId

NB – notice how this Uri does not end with a trailing ‘/’ character. This is a subtle but important difference. Not only is this easier on the eye, but more importantly you can now use the id you supplied when creating the resource as part of this Uri.
Up until now, if you wanted to do a simple operation such as deleting a document, you needed to write code similar to the following:

// Get a Database by querying for it by id
Database db = client.CreateDatabaseQuery()
    .Where(d => d.Id == "SalesDb")
    .AsEnumerable()
    .Single();

// Use that Database's SelfLink to query for a DocumentCollection by id
DocumentCollection coll = client.CreateDocumentCollectionQuery(db.SelfLink)
    .Where(c => c.Id == "Catalog")
    .AsEnumerable()
    .Single();

// Use that Collection's SelfLink to query for a Document by id
Document doc = client.CreateDocumentQuery(coll.SelfLink)
    .Where(d => d.Id == "prd123")
    .AsEnumerable()
    .Single();

// Now that we have the doc, use its SelfLink property to delete it
await client.DeleteDocumentAsync(doc.SelfLink);

That’s a lot of code, the majority of which is boilerplate that looks up resources just to get the self-links needed. The snippet makes three calls to the service, each costing you request units and network round trips, before you can perform the operation you actually wanted to do. With this release, you can replace that code with the following two lines of code:

// Build up a link manually using ids.
// If you are building up links manually, ensure that
// the link does not end with a trailing '/' character.
var docLink = string.Format("dbs/{0}/colls/{1}/docs/{2}", "SalesDb", "Catalog", "prd123");

// Use this constructed link to delete the document
await client.DeleteDocumentAsync(docLink);

Not only does this result in less code to write and maintain, but it also results in fewer database operations, leaving those RUs for more important things like writing documents. The code above still requires you to understand how to build up the Uri correctly from the resource model. You also need to do things like escape whitespace and encode special characters to ensure you have a valid Uri, so we went one step further in the SDK and added a simple helper factory to do this for you:

// Use UriFactory to build the document link
Uri docUri = UriFactory.CreateDocumentUri("SalesDb", "Catalog", "prd123");

// Use this constructed Uri to delete the document
await client.DeleteDocumentAsync(docUri);

With this new UriFactory class, all you have to know is the kind of Uri you need and the appropriate ids. To take advantage of these changes and get access to the UriFactory class, you will need to update your applications to use version 1.4.0 (or greater) of the .NET SDK. For our Node.js, Python, and Java SDKs, no updates are required; just craft your new alternate links and enjoy! The same id-based addressing works for reads and queries as well as deletes; a short sketch appears at the end of this post. If you would like a more comprehensive sample that demonstrates the use of this id-based routing, try this project we put together. Please keep the feedback coming for the features that are most important to you using our feedback forum. It’s this feedback that helps us improve the service with updates just like this one. To stay up to date on the latest DocumentDB news and features, follow us on Twitter @DocumentDB. If you haven’t tried DocumentDB yet, get started here and point your browser at the learning path to help you get on your way.
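For completeness, here is the short sketch mentioned above, showing the same id-based addressing used for a read and a query. It follows the same snippet conventions as the examples in this post (the DocumentClient already exists and the usual usings are in place), and the database, collection and document ids are the illustrative ones used earlier.

// Read a single document directly by its id-based Uri
Uri docUri = UriFactory.CreateDocumentUri("SalesDb", "Catalog", "prd123");
Document doc = await client.ReadDocumentAsync(docUri);

// Query a collection addressed by its id-based Uri (no self-link lookup required)
Uri collUri = UriFactory.CreateDocumentCollectionUri("SalesDb", "Catalog");
var products = client.CreateDocumentQuery<Document>(collUri)
    .Where(d => d.Id == "prd123")
    .AsEnumerable()
    .ToList();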

How Well Do You Use Cloud Economics In Your Cloud Strategy?

Is your cloud strategy centered on saving money or on fueling revenue growth? Where you land on this question could say a lot about your experience level with cloud services and what guidance you should be giving to your application developers and IT Ops teams. According to our customer research, the majority of CIOs would vote for the savings, seeing cloud computing as an evolution of outsourcing and hosting that can drive down capital and operations expenses. In some cases this is correct, but in many situations the opposite will result: using the cloud the wrong way may raise your costs. But this isn’t the crux of the issue, because it’s the exploration of the use cases where the cloud does save you money that bears the real fruit. And it’s through this experience that you can start shifting your thinking from cost savings to revenue opportunities.

Forrester surveys show that the top reason developers (and the empowered non-developers in your business units) tap into cloud services is to rapidly deploy new apps and capabilities. The drivers behind these efforts are new services, better customer experience and improved productivity. Translation: revenues and profits. If the cloud is bringing new money in the door, does it really matter whether it’s the cheaper solution? Not at first. But over time, using the cloud as a revenue engine doesn’t necessarily mean high margins on that revenue. That’s where your experience with the cost-advantaged uses of cloud comes in. Here is where Azure Marketplace partners such as Cloudyn, CloudCruiser and others can really help you optimize your cloud spend. You can also go it alone by leveraging the recently published Azure billing and rate card APIs.

There is a discrete thought process and experiential path CIOs go through to reach these conclusions, which Forrester has documented in my research report, “The Three Stages Of Cloud Economics.” As with many maturity models, your organization must gain experience in the first stage to understand and start reaping the gains from the latter two, which are where savings turn into profits. You can’t afford to pass up the opportunity cloud computing presents for turning IT from a cost center into a revenue driver. Get your hands dirty and start evolving your thinking and your cloud use.

Update: Azure Media Indexer v1.3.2

Today we released an update to Azure Media Indexer with some minor fixes for specific workflows:

FIX: AIB file now supports querying of strings with special characters/punctuation
FIX: Old configuration schema now returns correct files
FIX: Compatibility issues with certain filetypes

As always, feel free to reach out with any questions or comments at indexer@microsoft.com. Not sure what Azure Media Indexer is? Check out the introductory blog post here!
