Rethinking Paradigms in Networking: Firewalls in the Public Cloud

If you have ever implemented a firewall in a traditional network, it almost certainly had at least two network interfaces: one on an untrusted side, perhaps directly on the Internet, and the other on a trusted side. The goal, of course, is to keep unwanted traffic from reaching the trusted network. There are more complex implementations, but this serves for illustration's sake.

Traditional firewall configuration

I bring up these foundational topics to point out one way in which the public cloud makes us rethink our paradigms…in this case, that of the firewall. Continue reading

Why Does My SQL Server Availability Group Need a Load Balancer in Azure?

When deploying a SQL Server Always On Availability Group in Azure, you must create an Azure Load Balancer to properly route client connections made to the Availability Group Listener to the primary replica. In every class I teach, someone usually asks a question that boils down to something along the lines of "Why do I need to set up an Internal Load Balancer if I am using an Availability Group Listener? Shouldn't the Availability Group Listener take care of that for you?" The reason is that Azure blocks all gratuitous ARPs (a form of broadcast). A gratuitous ARP is an ARP request or reply that isn't strictly necessary per the specification. However, a gratuitous ARP reply is commonly used in clustered environments to notify nearby machines that an IP address has moved to another network interface, so that machines receiving the ARP packet can update their ARP tables with the new MAC address. Azure, like most cloud environments, blocks this type of broadcast traffic for security reasons.

Gratuitous ARP behavior during failover of a typical SQL Server Availability Group not running in Azure. Note that the gratuitous ARP occurs any time the IP is brought online, not just during failover.
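
Since gratuitous ARP is off the table, the internal load balancer and its health probe do the job of steering listener traffic to whichever node currently owns the listener IP. As a rough sketch only (the resource group, VNet, subnet, and resource names below are made up, the location is arbitrary, and probe port 59999 simply matches the value used in the cluster script later in this post), creating such a load balancer with the AzureRM PowerShell cmdlets looks something like this:

# Hypothetical names and addresses; substitute values from your own environment.
$vnet     = Get-AzureRmVirtualNetwork -Name "SQLVNet" -ResourceGroupName "SQLRG"
$subnet   = Get-AzureRmVirtualNetworkSubnetConfig -Name "SQLSubnet" -VirtualNetwork $vnet
$frontEnd = New-AzureRmLoadBalancerFrontendIpConfig -Name "AGFrontEnd" -PrivateIpAddress "10.0.2.7" -SubnetId $subnet.Id
$backEnd  = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "AGBackEnd"
$probe    = New-AzureRmLoadBalancerProbeConfig -Name "AGProbe" -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
$rule     = New-AzureRmLoadBalancerRuleConfig -Name "AGRule" -FrontendIpConfiguration $frontEnd -BackendAddressPool $backEnd `
              -Probe $probe -Protocol Tcp -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP
New-AzureRmLoadBalancer -Name "AGListenerILB" -ResourceGroupName "SQLRG" -Location "East US" `
    -FrontendIpConfiguration $frontEnd -BackendAddressPool $backEnd -Probe $probe -LoadBalancingRule $rule

You would still need to add each replica's NIC to the backend pool, and the front-end private IP should match the listener IP you configure on the cluster.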

OK, so great: I have deployed my SQL Server Availability Group in Azure and have already created a listener, but it isn't working correctly. How do I fix this? Microsoft's documentation on configuring the internal load balancer for an Always On Availability Group assumes that you are not creating the listener when you run the New Availability Group Wizard. The wizard gives you the option to create the Availability Group Listener on the fly, and in my experience most people create the listener up front while running the wizard. If you have already deployed your availability group and set up the listener, you can still follow the Microsoft instructions for creating the Internal Load Balancer, but when you get to the "Configure the cluster to use the load balancer IP address" section, follow these instructions instead:

1. Open an administrative PowerShell ISE session on the primary SQL Server instance. Copy and paste the PowerShell script below into the script window, but do not execute it yet.

$ClusterNetworkName = "Cluster Network 1" 
$IPResourceName = "AdventureWorks_10.0.2.7"
$SQLAGListenerName = "AdventureWorks" 
$ILBIP = "10.0.2.7" 
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName |
      Set-ClusterParameter -Multiple @{"Address"="$ILBIP";  `
                           "ProbePort"="59999";             `
                           "SubnetMask"="255.255.255.255";  `
                           "Network"="$ClusterNetworkName"; `
                           "EnableDhcp"=0}
Stop-ClusterResource -Name $IPResourceName
Start-ClusterResource -Name $SQLAGListenerName

2.  Modify the following variables with appropriate values for your environment.

$ClusterNetworkName is the name of the cluster network. You can find this in Failover Cluster Manager under Networks.

$IPResourceName is the name of your listener IP resource. You can find this in Failover Cluster Manager under Roles: choose your Availability Group, expand the Server Name resource, right-click the IP Address resource, and choose Properties. Use the Name value shown there.

$SQLAGListenerName is the name of your SQL Availability Group Listener. You can find this in Failover Cluster Manager under Roles: choose your Availability Group and locate the Availability Group Listener name. Use the Name value shown there.

$ILBIP is the static IP address you assigned to the internal load balancer.

The last two lines of the script simply restart the cluster resources so that the changes take effect. We take the IP resource offline first because that also takes offline all of the resources that depend on it. We then bring the Availability Group listener name online, which automatically brings all of its dependencies back online without our having to issue individual commands for each resource.

3.  Execute the PowerShell script to configure your cluster for the probe port. You only need to execute this one time. You do not need to run it on each node of the cluster.

4.  Verify your configuration by connecting to the secondary instance and making a connection to the SQL AG Listener. You should also fail over your Availability Group and test connections to the Listener from the other nodes of the cluster.
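
If you want to double-check what the script actually set before testing failover, you can read the cluster parameters back with the same FailoverClusters module, reusing the variables from the script above:

# Confirm the probe port, address, and subnet mask were applied to the listener IP resource
Get-ClusterResource $IPResourceName | Get-ClusterParameter -Name Address, ProbePort, SubnetMask, EnableDhcp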

NEW CLASS! Dynamics CRM Online Customization and Configuration

We are excited to announce the publication of our latest online, on-demand course, Dynamics CRM Online Customization and Configuration, by Opsgility author and Microsoft Dynamics CRM MVP, Britta Scampton.

In this course, you will build on the knowledge gained from the Getting Started with Dynamics CRM course and dive into the array of options available for tailoring Dynamics CRM to the specific needs of an organization. Upon completion of this course, students will have hands-on experience customizing and automating Dynamics CRM, as well as applying proper security and packaging their customizations for transfer to other CRM environments and organizations.

For more information on this course, including a full agenda, please click here.  This course is also available for dedicated, instructor-led delivery.  Please contact us for more information.

Set up continuous deployment with unit tests to your Azure App Service

If you want to set up continuous deployment for an App Service Web app in Microsoft Azure, you can either set up a build server (or service) on your own and have it copy the final build artifacts into the folder structure of the Web app, or you can enable the built-in continuous deployment features of the App Service to pull, build, and deploy code from a nice list of available source code locations.

App service publishing settings

All you have to do to enable this is to click on the Settings blade of the App Service and click on the Deployment source link. From there, you will see a list of common source code locations. Once you select one of these, provide the correct credentials, and then choose the right repository and branch, Azure will enable a service called Kudu that will immediately copy the source code into the App service, start a build, and then deploy the code straight into the live site (or deployment slot).

So far so good, and also amazingly easy! When I was doing this, I decided to run an experiment and see what would happen when I pushed a commit that included a failing unit test. To my shock and frustration, Kudu took my updated code with the broken test and happily built the app and published it to the live site without any hesitation. That's right, it pushed bad code straight through without checking the unit tests. Not good.

I did some research online and came up with practically nothing. I found only one blog post on the matter, and it was someone complaining about the same problem without a good alternative solution. I did some more digging of my own and finally found a workable solution. Yes, you can make Kudu build and run your tests! Read on to see how.

Web app blade Tools icon

The first thing you need to do is navigate to the blade for your Web app in the Azure portal and click on the Tools icon in the blade header.

Kudu blade

From there, click the Kudu link and then click the link called Go.

This will take you to the Kudu portal page where you can access the command line of the App service server and see details about deployments, app settings, and a lot of other interesting information. What you need from this screen, specifically, is to download the deployment script that Kudu is using when it builds and deploys your application’s source code. To get this, click the Tools link and then click the Download deployment script link that shows up below it. When you click this, it will download a zip file to your computer that contains two files: deploy.cmd and .deployment. Open the deploy.cmd file in a text editor.

Download the deployment script

In the deploy.cmd file, find the “Deployment” section that looks like this screen shot.

Deployment section in the script

Solution structure of my app

In between steps 2 and 3, you need to add the instructions to build your test project and to run the tests in it. In the screenshot and the code sample below, my solution was called WebWithTests and I had two projects: a web application project called WebWithTests and a unit test project called WebWithTests.Tests. All you need to do is configure the build script appropriately for your solution. It should involve building the test project or projects (this is Step 3 that I added in the code sample below) and then running the test framework, passing it the location of the final .dll files from the build step (Step 4 that I added in the code sample below). Since you can check the exit code of the test framework executable, you can stop the build if any of the tests failed. For your test runner, you should be able to use three different test frameworks natively: vstest, NUnit, and xUnit. Make sure the "KuduSync" step still appears at the end of the steps you added, just as it did before you modified the file.

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Deployment
:: ----------

echo Handling .NET Web Application deployment.

:: 1. Restore NuGet packages
IF /I "WebWithTests.sln" NEQ "" (
  call :ExecuteCmd nuget restore "%DEPLOYMENT_SOURCE%\WebWithTests.sln"
  IF !ERRORLEVEL! NEQ 0 goto error
)

:: 2. Build to the temporary path
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\WebWithTests\WebWithTests.csproj" /nologo /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release;UseSharedCompilation=false /p:SolutionDir="%DEPLOYMENT_SOURCE%\.\\" %SCM_BUILD_ARGS%
) ELSE (
  call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\WebWithTests\WebWithTests.csproj" /nologo /verbosity:m /t:Build /p:AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release;UseSharedCompilation=false /p:SolutionDir="%DEPLOYMENT_SOURCE%\.\\" %SCM_BUILD_ARGS%
)

IF !ERRORLEVEL! NEQ 0 goto error

:: ADDED THIS BELOW --------------------------
:: 3. Building test projects 
echo Building test projects 
"%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\WebWithTests.sln" /p:Configuration=Release;VisualStudioVersion=14.0 /verbosity:m /p:Platform="Any CPU" 

IF !ERRORLEVEL! NEQ 0 (
  echo Build failed with ErrorLevel !ERRORLEVEL!
  goto error
)

:: 4. Running tests 
echo Running tests 
vstest.console.exe "%DEPLOYMENT_SOURCE%\WebWithTests.Tests\bin\Release\WebWithTests.Tests.dll" 
IF !ERRORLEVEL! NEQ 0 goto error 
:: ADDED THIS ABOVE --------------------------

:: 5. KuduSync
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
  IF !ERRORLEVEL! NEQ 0 goto error
)

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Once you have updated this file on your machine with your own solution and project names, you need to add these files (deploy.cmd and .deployment) to your source code so that the Kudu service will use them when it builds and deploys your app in the cloud. The trick here is that you need to add them at the root of the solution, in the same place as the solution file and the .gitignore file.
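
For reference, the .deployment file from the downloaded zip is just a small configuration file that tells Kudu which script to run. It typically contains nothing more than the following (keep the version you downloaded if yours differs):

[config]
command = deploy.cmd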

Deployment scripts in the folder

Then you just need to commit these files to your repository and push. Kudu will see your new commit and start a build and deploy process. You can see in the console output of the Deployment source blade that it is running your tests. Push up a failing test and try it out; the build should fail and the bad code will not be deployed into the slot.
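
From a command prompt in the repository root, that last step would look roughly like this (the commit message is just an example):

git add .deployment deploy.cmd
git commit -m "Add custom Kudu deployment script that builds and runs unit tests"
git push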

Remote Debugging Azure Web Apps from Visual Studio

The ability to effectively debug and step through the source code of a web application is essential to troubleshooting and fixing many of the bugs that can occur. This is especially true when troubleshooting some crazy edge case that came about in production, where applications meet real users. Remote debugging isn't a new feature of Visual Studio and IIS, but how can it be done with a Web App hosted in Azure? This article will step through how to configure the Azure Web App, as well as how to connect the Visual Studio debugger running on your local machine. Continue reading

Azure Marketplace Free Trials and Subscription Spending Limits

If you have tried to deploy a 'Free Trial' Azure Marketplace image within an MSDN subscription, you have probably seen this error:

The key text of the deployment error is: "We could not find a credit card on file for your azure subscription. Please make sure your azure subscription has a credit card."

The Azure Marketplace FAQ gives the reason why you get this failure:

Do I need to have a payment instrument (e.g. credit card) on file to deploy Free Tier offerings?
No. A payment instrument is not required to deploy Free Tier offerings. However, for Free Trial offerings a payment instrument is required.

The issue here is twofold. Continue reading

Move Resources via ARM Portal

Have you ever deployed a resource or resources in Azure only to realize you've deployed them into an incorrect resource group, or perhaps you misspelled the name of the resource group and want to correct it? I have. Many times. Moving those resources was cumbersome at best, frustrating at worst. It could be done via a number of PowerShell cmdlets and the full resource ID. This was helpful, but not extremely user friendly.
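
For context, here is roughly what that PowerShell approach looks like with the AzureRM cmdlets. The resource and group names are the ones used in the walkthrough below, so treat this as a sketch rather than a copy-and-paste recipe:

# Look up the resource to get its full resource ID, then move it to the target group
$vm = Get-AzureRmResource -ResourceName "SimpleWindowsVM" -ResourceType "Microsoft.Compute/virtualMachines" -ResourceGroupName "TempRG"
Move-AzureRmResource -DestinationResourceGroupName "ProdRG" -ResourceId $vm.ResourceId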

When I was doing some onsite training a few weeks ago, one of the students stumbled upon a way to move those resources via the Azure Portal. What?! I didn't know this was even a feature. Wow! I asked him to show me how he did it, and after he showed me, we shared it with the rest of the students. I then investigated a little further and found that it can be done multiple ways.

To prepare for example resource group moves, I utilized the GitHub Quick Start Templates located here: https://github.com/Azure/azure-quickstart-templates to deploy a Simple Windows and a Simple Linux VM environment into two separate resource groups.
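
If you want to reproduce a similar starting point, deploying one of those quickstart templates from PowerShell looks roughly like this (the template URI points at the simple Linux VM sample and may change as the repository is reorganized; the location is arbitrary, and you will be prompted for any parameters you do not supply):

New-AzureRmResourceGroup -Name "TemplateRG1" -Location "East US"
New-AzureRmResourceGroupDeployment -ResourceGroupName "TemplateRG1" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-linux/azuredeploy.json"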

The first resource group is named TemplateRG1 and contains an Ubuntu Linux VM and the associated resources for this virtual machine. NOTE: I did have to change the names of the NIC, public IP, and VNET due to potential naming conflicts for my moving exercise.

The second resource group is named TempRG and contains a Windows VM and the associated resources for this virtual machine.

Let's say I want to move the resources from TempRG to a new resource group called ProdRG. The following are the ways I've found to move resources (and their associated resources) from one resource group to another, even creating the new resource group in the process if needed.

Option one:

  1. In the Azure Portal, click on the resource you would like to move. In my case, SimpleWindowsVM is the name of the virtual machine I’d like to relocate.
  2. When the information blade appears, notice the “pencil” icon next to the Computer name.
  3. Click on this “pencil” icon.
  4. You will now be prompted with a Move resources blade.
  5. The VM is listed as the resource that will be moved and you have the optional choice to move the related resources along with the VM.
  6. In my case, I check the checkbox beside all resources as I’m going to move them all to a new resource group.
  7. Then I click on the Create a new group option to create my ProdRG resource group and type in the name of the new resource group. Otherwise, I could choose an existing resource group from the drop down menu if desired.
  8. After checking the checkbox acknowledging that this action will result in new resource IDs, and that scripts and tools may need to be updated, I click the OK button to continue the process.
  9. After the blade validates the move, the portal takes care of the rest. Watch the notifications that indicate the resources are being moved, and then finally, they have been moved successfully.
  10. Voilà! You have successfully moved a virtual machine and all the associated resources to a brand new resource group! NOTE: You might want to clean up your empty resource group now just for housecleaning purposes.

Now let’s say that I’d like to move all the resources from the TemplateRG1 to my newly created ProdRG group because they should be located in this resource group along with my Windows VM and its resources.

Option two:

  1. Again, in the Azure Portal, click on the resource you would like to move. In my case, MyUbuntuVM is the name of the virtual machine I’d like to relocate this time.
  2. In the Settings blade under the GENERAL heading, select Properties this time.
  3. In the resulting properties information, find the RESOURCE ID property. Notice right below it there is an option for Change resource group.
  4. Once you click this, you will be presented with the Move resources blade as in Option one above.
  5. The VM is listed as the resource that will be moved and you have the optional choice to move the related resources along with the VM.
  6. In my case, I check the checkbox beside all resources as I’m going to move them all to an existing resource group.
  7. Choose ProdRG from the drop-down list of existing resource groups, check the checkbox acknowledging the new resource IDs, and click the OK button to begin the process.
  8. After the blade validates the move, the portal takes care of the rest. Watch the notifications that indicate the resources are being moved, and then finally, the notification that they have been moved successfully.
  9. Presto! You have successfully moved a virtual machine and all the associated resources to an existing resource group! NOTE: You might want to clean up your empty resource group now just for housecleaning purposes.

Using the two options presented, I have consolidated the two resource groups into one.

There you have it. Multiple ways to relocate resources to a new or existing resource group. Be sure to check for resources that may not fully support moving (start here) and be aware of potential impact during the move process. I have tested this on running VMs and they were not required to restart and did not lose connectivity, but test fully before performing this on production workloads for safety!

@EdwardFBear

Learn how to Implement Office 365 With Our New Online Course!

Check out the latest online, on-demand offering from Opsgility by Opsgility author Ben Stegink:

Implementing Office 365 – Requirements and Getting Started

This course gives you an introduction to Microsoft's Office 365 offering and the core services offered within. The course begins by walking you through some of the planning and considerations that go into creating your tenant, and then walks you through all the steps of getting your tenant up and running. This course will get you well on your way to earning your MCSA: Office 365. The course outline is heavily geared toward preparing you for the topics covered in Microsoft's 70-346 exam.

You will find all the details for this course by clicking on the hyperlink above or by visiting https://www.opsgility.com/courses.

Trying out Backup and Site Recovery (OMS): Lessons Learned

Summary of the items I learned:

  • Reminder: Azure changes often
  • Register Resource Providers
  • Chocolatey is a cool tool
  • ARMClient is a cool tool, too

For details keep reading below!

I wanted to take a little time and share some of the things I've learned as I have started to utilize the new items in Azure Resource Manager, specifically the Backup and Site Recovery (OMS) feature that is now generally available. The first thing I have learned is that I need to be flexible when looking for an item in ARM, because of Rule #1: things change rapidly in Azure.

Originally, I had been evaluating this item in ARM as Recovery Services but it recently was renamed to Backup and Site Recovery (OMS). Okay, that’s maybe not a huge revelation to many of you, but a gentle reminder to all of us: Don’t get too used to things in Azure being static. It is a dynamic environment to be sure!

The first thing I ran into was that in my MSDN subscription, the Location was actually blank. What? Why can't I deploy a Recovery Services Vault in ARM with my MSDN subscription? I had another subscription that I used for some of my training courses, and using that one, I saw Location populated with all the regions in which I could create a Recovery Services Vault.

After some investigation, attempting other subscriptions and reaching out to peers (Thank you, @mscloud_stever!), it was discovered that the Resource Provider for Recovery Services was not registered. Even though I had Site Recovery registered, it was not enough to display the Location regions for my Recovery Services Vault. So, learn from my discovery: Make sure to register your Recovery Services Provider Namespace.

To check your Registered Providers, run the following command in PowerShell:

Get-AzureRmResourceProvider -ListAvailable | Where-Object -FilterScript { $PSItem.RegistrationState -eq 'Registered'; }

When I ran this command against my subscription, Microsoft.RecoveryServices was not in the list of registered providers. If that is the case for you, run the following command to register this provider for your subscription:

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.RecoveryServices -Verbose
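
Registration is asynchronous, so if you would rather poll just this one namespace instead of re-listing everything, a quick variation is:

Get-AzureRmResourceProvider -ProviderNamespace Microsoft.RecoveryServices |
    Select-Object ProviderNamespace, RegistrationState -Unique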

Once this completes, run the command to list the registered providers again, and you should now see Microsoft.RecoveryServices registered.

Now that you see this, you may need to refresh your portal or log out and log back in for the region options to show up under Location. NOTE: It took about an hour for mine to show correctly, so you might need to wait a little while for the portal to recognize the registration, though most of the time it is immediate.

After doing this, I could see the regions and deploy a Recovery Services Vault without issue from within the ARM portal. Now on to some other stuff I learned.

After registering a Windows 10 client for Files and Folders backup, an Azure VM for backup, a System Center VMM server for protection, and a VMware vCenter Server for protection, I performed backups, test failovers, and so on. All of this is great stuff in the ARM portal. After seeing the backups, replicated VMs, and test failovers individually, I started the process of removal and deletion to clean up my environment.

All was great until, in the process of deleting the resource group, the Site Recovery vault would not delete. Nothing I attempted allowed me to remove it. It kept telling me that there were registered servers in the vault that I needed to remove before attempting to delete the vault.

In the Azure portal, I could find no servers listed in any area of the vault. I tried many things, but nothing I did would show a server still registered in the vault so that I could remove it. I opened a support incident with Azure and was contacted by Microsoft. They gave me an ARMClient command to run to clean up the vault. You might be asking yourself, as I did, "How is this done?" So, read on!

First, if you have not done so, install Chocolatey on your Windows machine. If you're not familiar with Chocolatey, here is where to get installation instructions: https://chocolatey.org/ – From the website: "Chocolatey NuGet is a Machine Package Manager, somewhat like apt-get, but built with Windows in mind."

Apt-get for Windows? Cool! So, I browsed to the homepage and found the PowerShell command to download and install via the easy install on the main page. Bingo! Once you install Chocolatey, check out the packages you can install: https://chocolatey.org/packages – You mean I can install the Sysinternals tools with just a command line!? C:\> choco install sysinternals – Okay, I'm distracted easily. Back to the topic at hand.
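
At the time of writing, the PowerShell install command published on the Chocolatey home page looked roughly like the following; check chocolatey.org for the current version before running it, since the exact command has changed over time:

Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))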

Okay, now that I've installed Chocolatey, which I hope you'll agree is a cool tool, I need to install ARMClient. ARMClient is a simple command-line tool for invoking the Azure Resource Manager API. On the Chocolatey packages page, type ARMClient into the Search Packages prompt and click the magnifying glass to perform the search.

Once you find the results, click on the ARMClient 1.1.1 to read about the package and the commands to both install and upgrade the package. NOTE: The version number may change should there be any updates after this article was published.
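
In practice, the install and upgrade boil down to two commands:

choco install armclient
choco upgrade armclient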

Once I ran choco install armclient and ARMClient was installed, I typed ARMClient at the command prompt, which displayed the initial help screen and some sample commands.

After I ran ARMClient login, I entered the credentials for my Azure subscription and successfully authenticated. ARMClient enumerated my tenants and the subscriptions associated with those tenants in the resulting output. Now comes the fun part. I had been provided a command to try to remove the registered servers in the vault. Some of the information I had to gather and enter was specific to my situation, but here is the command that worked for me:

ARMClient.exe delete subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RecoveryServices/Vaults/<vaultName>/backupContainers/<serverFQDNname>/UnRegisterContainer?api-version=2015-03-15 -verbose

The values in angle brackets are the edits I needed to make, filling in the information that corresponded to my specific subscription, resource group, vault, and server FQDN. After modifying the command with my information, I was able to run it, and once it completed I was able to delete the vault without issue.

So, all in all, this was a great learning exercise for me. Not only did I learn a lot about Recovery Services, Backup, ARM, and resource providers, I also picked up a few cool new tools: Chocolatey and ARMClient. I hope you find this information useful and learn along with me!

@EdwardFBear

Automate finding a unique Azure storage account name (ARM)

When building ARM-based virtual machines in Azure with PowerShell, one important step involves creating a storage account. And because the storage account name must be unique within Azure, you first need to find an unused name. To accomplish this we can use the Get-AzureRmStorageAccountNameAvailability cmdlet.

This is great, but I may have to manually try several names before I stumble upon one that is unique. The last time I was working through this process I thought it would be helpful to let PowerShell find a unique name for me, retrying as needed until a usable name was determined. I looked around and to my surprise I could not find something that was already written to accomplish this. Continue reading
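
The rest of the post walks through the full script, but the core idea is a simple retry loop. Here is a minimal sketch, not the finished script from the post; the base name is made up, and remember that storage account names must be 3–24 lowercase letters and digits:

# Keep generating candidate names until Azure reports one as available
$baseName = "opsgilitydemo"   # made-up prefix; use your own
do {
    $candidate = ($baseName + (Get-Random -Minimum 10000 -Maximum 99999)).ToLower()
    $availability = Get-AzureRmStorageAccountNameAvailability -Name $candidate
} until ($availability.NameAvailable)
Write-Output "Found an available storage account name: $candidate"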