Run Azure CLI as a Docker Container

The Azure Command Line Interface (CLI) tool is very useful for administrators who are looking to interact with the Azure platform beyond the Management Portal. The beauty of the Azure CLI is that the commands are the same across every platform. This means that you only have one set of commands to get familiar with, and they can be used across Windows, Linux, Mac, etc.

One of the best things about the Azure CLI tool is that Microsoft publishes it as a Docker image, so you don't even have to go through the pain of installing it on your Linux box. With just a few simple commands you will have full access to Azure by running it in a container.

In this blog post, I'll get you going with Docker and the Azure CLI on Ubuntu 16.04.1.

So, from an Ubuntu desktop, the first thing we need to do is install Docker if it isn't already installed.

To install Docker on your Ubuntu Linux desktop, use this command. You may be prompted for the administrator password, and you will need to respond with 'Y' to agree to the disk space that will be used to install Docker.

sudo apt install docker.io


Once the installer completes without any errors, run the following command to check the installation:

docker version


Notice the message that says “Cannot connect to the Docker daemon.  Is the docker daemon running on this host?”

This means that Docker isn't currently running. To start Docker, run the following command:

sudo service docker start

Now, to download the Azure CLI Docker image and run the container, use the following command:

sudo docker run -it microsoft/azure-cli


Notice that the image needs to be downloaded the first time. Once the image is downloaded, the container starts. You can tell that you are now in the container because the prompt changes to show root@<container-name>:/#.
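
One practical tweak worth knowing: anything you do inside the container, including your Azure login, is lost when the container exits. Here is a minimal sketch, assuming the microsoft/azure-cli image above, that mounts a host directory over the CLI's config path so your session survives container restarts:

```shell
# Persist the Azure CLI session across container runs by mounting a host
# directory over /root/.azure, where the CLI stores its profile and tokens.
AZURE_CONFIG_DIR="$HOME/.azure"
mkdir -p "$AZURE_CONFIG_DIR"

# -i and -t give you an interactive terminal inside the container;
# -v maps the host directory into the container's filesystem.
DOCKER_CMD="sudo docker run -it -v $AZURE_CONFIG_DIR:/root/.azure microsoft/azure-cli"
echo "Start the container with: $DOCKER_CMD"
```

The volume mount is optional, but without it you will have to run `azure login` again every time you start a fresh container.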

At the new command prompt, type the following command:



The command runs successfully, which means you can now interact with Azure using the CLI running in a Docker container!

Now the next step will be to authenticate to your Azure subscription. To do this, run the following command:

azure login


The command will point you to a sign-in page and give you a code to enter. When you get to the webpage, type in the code that was provided in the terminal output:

Once you press Continue, you will need to enter the sign-in credentials for your Azure subscription.

Once you have successfully authenticated to Azure, you will receive the following message on the Azure sign-in page.

Then, after about 15 seconds, the authentication will register with your container and the following output will appear in the terminal window.


Now that you are authenticated you can run commands using the Azure CLI.  Below is the output of running the following command:

azure help


A great example: to see all of the VMs in your subscription, run this command:

azure vm list
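
You don't have to keep an interactive shell open, either. As a sketch (assuming you persist the CLI profile by mounting ~/.azure into the container, so a previous login is reused), you can hand the container a single command to run and exit:

```shell
# One-shot invocation: --rm removes the container when the command finishes,
# and the mounted ~/.azure lets the CLI reuse an earlier "azure login".
ONE_SHOT="sudo docker run --rm -v $HOME/.azure:/root/.azure microsoft/azure-cli azure vm list"
echo "Example one-off command: $ONE_SHOT"
```

This pattern is handy for scripting and cron jobs, where you don't want a long-lived interactive container.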


I hope this helps you get started with the Azure CLI and that you enjoy using the tool as a Docker container!



Come and meet us at Microsoft Ignite!


The Opsgility team will be at Ignite!!!

Stop by the expo hall (booth #134) to discuss some of the many options we have available to prepare your company for the Microsoft Cloud!

Enterprise readiness takes time to plan. If you would like to schedule a meeting ahead of time, contact us at

Let us know and we'll be happy to set up some time!

Hope to see you there!
The Opsgility Team

Raspberry Pi GPIO Pin Reference

Microsoft Azure IoT Suite can be used to build extremely scalable Internet of Things (IoT) solutions. With any IoT solution, the cloud platform is only half of what needs to be built. The other half resides on physical IoT hardware devices, such as a Raspberry Pi, that are connected to some combination of sensors and / or actuators to provide the real-world integration side of the IoT solution. Both the Raspberry Pi 2 and 3 offer a 40-pin GPIO header to allow many different components to be connected, in addition to the capability of providing both 3.3 volt and 5 volt output to those components. Each component needs to be connected to the correct pins, so a proper reference diagram is always necessary to ensure correct pin locations. Continue reading

Can an Azure Resource Manager policy be overridden?

Azure Resource Manager (ARM) policies are a powerful governance capability that allows administrators to accomplish many things, such as:

  • Define a ‘service catalog’ (which services may be created)
  • Control the Azure regions in which resources may be deployed
  • Enforce the use of ARM tags on resources
  • etc.

The use cases for ARM policies are limited only (well, almost only) by your imagination. However, I had a question about the possibility of overriding policies. You see, an ARM policy defined at the subscription scope will be 'inherited' by all resource groups and resources within the subscription. Could someone with the 'Owner' role on a resource group remove the policy association from their resource group? Or could they put in place a policy on their resource group that overrode something defined by the policy at the subscription scope? Let's try it and find out.

On my subscription I define and associate a policy that limits the resource providers which can be created to compute, network, storage, and resources (for resource groups). My ServiceCatalog.json file looks like this:


My ServiceCatalog.json
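
As a sketch of what such a rule can look like (this is illustrative, not my exact file), the ARM policy 'if/then' syntax denies any resource whose type falls outside the allowed provider namespaces:

```json
{
  "if": {
    "not": {
      "anyOf": [
        { "field": "type", "like": "Microsoft.Compute/*" },
        { "field": "type", "like": "Microsoft.Network/*" },
        { "field": "type", "like": "Microsoft.Storage/*" },
        { "field": "type", "like": "Microsoft.Resources/*" }
      ]
    }
  },
  "then": {
    "effect": "deny"
  }
}
```

Any attempt to deploy a resource type outside those four providers is denied at the subscription scope.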

Now we switch the context to the resource group owner. This user has the ‘owner’ role on a resource group called ‘MyRG’. Here is how the application of the subscription-scoped policy looks from this user’s perspective:

Scope of ServiceCatalog policy

You can see that the policy applies to the scope of the user's resource group. So can they un-associate it? It certainly appears so:

Removing the ServiceCatalog assignment

However, the association is not truly removed: subsequent runs of 'Get-AzureRmPolicyAssignment' with a scope of MyRG show it is still in place. This answers the question of whether a resource group owner can remove a policy 'inherited' from the subscription. What about overwriting a setting put in place by the subscription-scoped policy? Let's try that too.

While signed on as the resource group owner within PowerShell, I attempt to define and associate another policy called ServiceCatalogUpdate. You can see below that this JSON file adds another resource provider:



Notice what happens when this user tries to define the policy (a prerequisite to applying it):

Results of overwrite attempt

The error calls out that the user is trying to ‘write over scope’ of the subscription policy. So even though the user has the ‘Owner’ role on the resource group, they cannot overwrite a policy defined at the subscription level.

One more question seems appropriate. What if the resource group administrator is attempting to define and assign a policy which does not override any settings from a subscription-scoped policy? Here is another policy definition JSON file that requires tags on resources. This policy should not contradict anything in the subscription-scoped policy:

JSON with no contradictory settings
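
A tag-enforcement rule like that can be very small in the ARM policy syntax. As an illustrative sketch (not my exact file), denying any resource created without tags looks roughly like this:

```json
{
  "if": {
    "field": "tags",
    "exists": "false"
  },
  "then": {
    "effect": "deny"
  }
}
```

Nothing in this rule touches resource providers, so it should not contradict the subscription-scoped service catalog policy.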

Here are the results when the resource group owner tries to define and apply this policy:

Policy definition results

The results are the same. So it appears that a resource group owner cannot apply an ARM policy on their resource group if a policy is defined at the subscription level. The takeaway seems to be, when designing solutions using ARM policies, apply your policies at the lowest level possible to achieve your goals.

For more information on Azure Resource Manager policies, visit the Azure documentation page for this powerful feature.

Dynamics CRM Online Training now Available

If your organization uses Dynamics CRM Online, we have the training for you! We have two new classes authored by Microsoft MVP Britta Scampton.

Getting Started with Dynamics CRM Online

To accelerate adoption and to make your team more efficient, our Getting Started with Dynamics CRM Online class is the perfect starting point. This class is targeted at end users and IT Professionals who are new to Dynamics CRM Online.

In addition to increasing your team's skill level, it will also help prepare them for these exams:

  • Microsoft Dynamics CRM 2016 Sales – MB2-713
  • Microsoft Dynamics CRM 2016 Customer Service – MB2-714

This class can be delivered onsite at your location or is available self-paced on-demand through our online training service. For more details see the agenda: Getting Started with Dynamics CRM

Dynamics CRM Customization and Configuration

This class is designed for IT Professionals or Developers that are responsible for configuration of a Dynamics CRM Online deployment. This class can also help you prepare for the following certification exam:

  • Dynamics CRM Customization and Configuration – MB2-712

This class can be delivered onsite at your location or is available self-paced on-demand through our online training service. For more details see the agenda: Dynamics CRM Customization and Configuration

If you are interested in scheduling one or more deliveries, please contact us at or you can start a 7-day free trial today to take the classes online.


Azure Advanced Data Center Bootcamp class now available

In the past 6 months we have taught over 7,500 students worldwide our Microsoft Azure Infrastructure as a Service Training. The widespread adoption of Azure is clearly accelerating into more advanced workloads.

To manage these workloads and to start implementing Azure at scale, a new set of skills is needed.

These involve planning for Azure governance, to ensure that you still have control over your data center even if it happens to reside in the cloud. Another key skill is advanced networking: efficient routing and configuration of virtual firewalls to protect your workloads is critical as more important and sensitive workloads make their way into Azure. Other important skills are hybrid management and monitoring, as well as implementing a dependable and understandable plan for business continuity.

With these goals in mind, the experts here at Opsgility have developed the next wave of Azure training to take your infrastructure teams to the next level. We call it the "Azure Advanced Data Center Bootcamp".

This class is a full 5-day instructor-led class that focuses on the following topics:

  • Azure Governance in the Enterprise
  • Advanced Azure Networking
  • Implementing Monitoring and Automation with Operations Management Suite
  • Business Continuity with Site Recovery and Backup

The target audience for this class is IT Professionals who have significant experience with Microsoft Azure Infrastructure as a Service. So if you can deploy and configure virtual machines and virtual networks, automate workloads with the command line or templates, and need to move to the next level, this is the class for you.

The full agenda for the class is online now.

This class is now available to schedule for private deliveries. An open enrollment schedule will be announced soon.

If you are interested in learning more about this class you can contact us at or email me directly at


Rethinking Paradigms in Networking: Firewalls in the Public Cloud

If you have ever implemented a firewall in a traditional network, it almost certainly had at least two network interfaces. One was on an untrusted side, perhaps directly on the Internet, and the other was on a trusted side. The goal, of course, is to keep unwanted traffic from reaching the trusted network. There are more complex implementations, but this serves for illustration's sake.

Traditional firewall configuration

I bring up these foundational topics to point out one way in which the public cloud makes us rethink our paradigms: in this case, that of the firewall. Continue reading

Why Does My SQL Server Availability Group Need a Load Balancer in Azure?

When deploying a SQL Server Always On Availability Group in Azure, you must create an Azure Load Balancer to properly route client connections through the Availability Group Listener to the primary replica. In every class I teach, someone usually asks a question that basically boils down to something along the lines of "Why do I need to set up an Internal Load Balancer if I am using an Availability Group Listener? Shouldn't the Availability Group Listener take care of that for you?". The reason is that Azure blocks all gratuitous ARPs (sometimes referred to as a broadcast). A gratuitous ARP is basically an ARP request or reply that isn't necessary per the specification. However, a gratuitous ARP reply is commonly used in clustered environments to notify nearby machines that an IP address has been moved to another network interface, so that machines receiving the ARP packet can update their ARP tables with the new MAC address. Azure and most cloud environments block this type of broadcast traffic for security reasons.


Gratuitous ARP behavior during failover of a typical SQL Server Availability Group not running in Azure. Note that the gratuitous ARP occurs any time the IP is brought online, not just during failover.

OK, so great: I have deployed my SQL Server Availability Group in Azure and have already created a listener, but it isn't working correctly. How do I fix this? Microsoft's documentation on configuring the internal load balancer for an Always On Availability Group assumes that you are not creating the listener when you run the New Availability Group Wizard. The wizard gives you the option to create the Availability Group Listener on the fly, and in my experience most people create the listener up front while running the wizard. If you have already deployed your availability group and set up the listener, you can still follow the Microsoft instructions for creating the Internal Load Balancer, but when you get to the "Configure the cluster to use the load balancer IP address" section, follow these instructions:

1. Open an administrative PowerShell ISE session on the primary SQL instance. Copy and paste the PowerShell script below into the script window, but do not execute it yet.

$ClusterNetworkName = "Cluster Network 1"
$IPResourceName = "AdventureWorks_10.0.2.7"
$SQLAGListenerName = "AdventureWorks"
$ILBIP = ""   # the static IP address you assigned to the internal load balancer

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | `
      Set-ClusterParameter -Multiple @{"Address"="$ILBIP";       `
                           "ProbePort"="59999";                  `
                           "SubnetMask"="255.255.255.255";       `
                           "Network"="$ClusterNetworkName";      `
                           "EnableDhcp"=0}

Stop-ClusterResource -Name $IPResourceName
Start-ClusterResource -Name $SQLAGListenerName

2.  Modify the following variables with appropriate values for your environment.

$ClusterNetworkName is the name of the cluster network. You can find this in Failover Cluster Manager under Networks.


$IPResourceName is the name of your listener IP resource. You can find this in Failover Cluster Manager under Roles, then choose your Availability Group, expand the Server Name resource, right-click the IP Address and choose properties. Use the Name value specified here.


$SQLAGListenerName is the name of your SQL Availability Group Listener. You can find this in the Failover Cluster Manager under Roles, then choose your Availability Group, then right-click the Availability Group Listener name, use the name value specified here.


$ILBIP is the static IP address you assigned to the internal load balancer.

The last two lines of the script are simply to restart the cluster resources so that the changes take effect. We take the IP resource offline first because that will cause all of the upstream resources that are dependent on it to be taken offline as well. Then we bring the Availability Group name online because that will automatically bring all of its downstream dependencies online for us without issuing individual commands for each resource.

3.  Execute the PowerShell script to configure your cluster for the probe port. You only need to execute this one time. You do not need to run it on each node of the cluster.

4.  Verify your configuration by connecting to the secondary instance, and making a connection to the SQL AG Listener. You should also failover your Availability Group and test connections to the Listener from the other nodes of the cluster. 
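
If you want a quick way to confirm the probe behavior from another machine on the virtual network, here is a hedged sketch using bash's /dev/tcp redirection. SQLNODE1 and SQLNODE2 are hypothetical host names (substitute your own), and 59999 matches the ProbePort value set above. Only the node currently holding the listener IP should answer:

```shell
# Check whether a host answers on the load balancer probe port.
# Only the cluster node that owns the listener IP listens on this port,
# which is exactly how the Azure load balancer finds the primary replica.
probe() {
  host=$1
  port=${2:-59999}
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host answers on probe port $port (likely owns the listener IP)"
  else
    echo "$host does not answer on probe port $port"
  fi
}

# Hypothetical node names; run this before and after a failover and the
# answering node should switch.
probe SQLNODE1
probe SQLNODE2
```
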

NEW CLASS! Dynamics CRM Online Customization and Configuration


We are excited to announce the publication of our latest online, on-demand course, Dynamics CRM Online Customization and Configuration, by Opsgility author and Microsoft Dynamics CRM MVP, Britta Scampton.

In this course, you will build on the knowledge gained from the Getting Started with Dynamics CRM course, and dive into the array of options available for tailoring Dynamics CRM to the specific needs of an organization. Upon completion of this course students will have hands on experience customizing and automating Dynamics CRM as well as applying proper security and packaging their customizations for transfer to other CRM environments and organizations.

For more information on this course, including a full agenda, please click here.  This course is also available for dedicated, instructor-led delivery.  Please contact us for more information.

Set up continuous deployment with unit tests to your Azure App Service

If you want to set up continuous deployment for an App Service Web app in Microsoft Azure, you can either set up a build server (or service) on your own and have it copy the final build artifacts into the folder structure of the Web app, or you can enable the built-in continuous deployment features of the App Service to pull, build, and deploy code from a nice list of available source code locations.

App service publishing settings

All you have to do to enable this is to click on the Settings blade of the App Service and click on the Deployment source link. From there, you will see a list of common source code locations. Once you select one of these, provide the correct credentials, and then choose the right repository and branch, Azure will enable a service called Kudu that will immediately copy the source code into the App service, start a build, and then deploy the code straight into the live site (or deployment slot).

So far so good, and also amazingly easy! When I was doing this, I decided to do an experiment and see what would happen when I pushed a commit that included a failing unit test. To my shock and frustration, Kudu took my updated code with the broken test and happily built the app and published it to the live site without any hesitation. That's right, it pushed bad code straight through without checking the unit tests. Not good.

I did some research online and came up with practically nothing. I only found one blog post on the matter and it was someone complaining about the same problem but there wasn’t a good alternative solution provided. I did some more research of my own and finally found a workable solution. Yes, you can make Kudu build and run your tests! Read on to see how.

Web app blade Tools icon

The first thing you need to do is navigate to the blade for your Web app in the Azure portal and click on the Tools icon in the blade header.

Kudu blade


From there, click the Kudu link and then click the link called Go.

This will take you to the Kudu portal page where you can access the command line of the App service server and see details about deployments, app settings, and a lot of other interesting information. What you need from this screen, specifically, is to download the deployment script that Kudu is using when it builds and deploys your application’s source code. To get this, click the Tools link and then click the Download deployment script link that shows up below it. When you click this, it will download a zip file to your computer that contains two files: deploy.cmd and .deployment. Open the deploy.cmd file in a text editor.

Download the deployment script

In the deploy.cmd file, find the “Deployment” section that looks like this screen shot.

Deployment section in the script

Solution structure of my app

In between steps 2 and 3, you need to add the instructions to build your test project and to run the tests in it. In the screenshot and the code sample below, my solution was called WebWithTests and I had two projects: a web application project called WebWithTests and a unit tests project called WebWithTests.Tests. All you need to do is configure the build script appropriately for your solution. It should involve building the test project or projects (this is step 3 that I added in the code sample below) and then running the test framework by passing it the location of the final .dll files from the build step (step 4 that I added in the code sample below). Since you can check the result of the test framework executable, you can stop the build if any of the tests fail. For your test framework tool, you should be able to use three different test frameworks natively: vstest, NUnit, and xUnit. Make sure the "KuduSync" step still appears at the end of the steps you added, just like it was before you modified the file.

:: Deployment
:: ----------

echo Handling .NET Web Application deployment.

:: 1. Restore NuGet packages
IF /I "WebWithTests.sln" NEQ "" (
  call :ExecuteCmd nuget restore "%DEPLOYMENT_SOURCE%\WebWithTests.sln"
  IF !ERRORLEVEL! NEQ 0 goto error
)

:: 2. Build to the temporary path
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\WebWithTests\WebWithTests.csproj" /nologo /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release;UseSharedCompilation=false /p:SolutionDir="%DEPLOYMENT_SOURCE%\.\\" %SCM_BUILD_ARGS%
) ELSE (
  call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\WebWithTests\WebWithTests.csproj" /nologo /verbosity:m /t:Build /p:AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release;UseSharedCompilation=false /p:SolutionDir="%DEPLOYMENT_SOURCE%\.\\" %SCM_BUILD_ARGS%
)

IF !ERRORLEVEL! NEQ 0 goto error

:: ADDED THIS BELOW --------------------------
:: 3. Building test projects
echo Building test projects
"%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\WebWithTests.sln" /p:Configuration=Release;VisualStudioVersion=14.0 /verbosity:m /p:Platform="Any CPU"
IF !ERRORLEVEL! NEQ 0 (
  echo Build failed with ErrorLevel !ERRORLEVEL!
  goto error
)

:: 4. Running tests
echo Running tests
vstest.console.exe "%DEPLOYMENT_SOURCE%\WebWithTests.Tests\bin\Release\WebWithTests.Tests.dll"
IF !ERRORLEVEL! NEQ 0 goto error
:: ADDED THIS ABOVE --------------------------

:: 5. KuduSync
call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
IF !ERRORLEVEL! NEQ 0 goto error


Once you have updated this file on your machine with your own solution and project names, you need to add these files (deploy.cmd and .deployment) to your source code so that the Kudu service will use them when it builds and deploys your app in the cloud. The trick here is that you need to add them at the root of the solution, in the same place as the solution file and the .gitignore file.
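
For reference, the .deployment file is just a tiny INI fragment that tells Kudu which script to run; the downloaded one typically looks like this:

```ini
[config]
command = deploy.cmd
```

When Kudu finds this file at the repository root, it skips its auto-generated script and runs your customized deploy.cmd instead.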

Deployment scripts in the folder

Then you just need to commit these files to your repository and push. Kudu will see your new commit and start a build and deploy process. You can see in the console output of the Deployment source blade that it is running your tests. Push up a failing test and try it out; the build should fail and the bad code won't be deployed into the slot.