Azure Powershell

Quick Tip: Listing your Shared key in Azure VPN with Multisite configuration


If you have ever configured an Azure VPN, you may have used the trick of downloading the device configuration script from the portal to obtain the shared secret for your VPN device. However, if you have a multi-site configuration, that procedure is not effective, since there is a different secret for each network. In that case, PowerShell to the rescue!


All you need to know is the name of your Azure VPN gateway; then run the one-liner below. Make sure you are logged on to the Azure subscription containing the gateway (Add-AzureAccount) and that you have selected the right subscription (Select-AzureSubscription).


(Get-AzureVNetSite -VNetName "MyVPNGateway").GatewaySites | foreach {Write-Host "Local Site: $($_.Name) Key: $((Get-AzureVNetGatewayKey -VNetName "MyVPNGateway" -LocalNetworkSiteName $_.Name).Value)" }


You'll then get a list entry for each local site.

Easy. Quick. PowerShell.


Hope this helps!

Log Analytics OMS Operations Management Suite

Exporting OMS Log Analytics Alerts and Importing into Another Workspace


One of the problems you may face when using Microsoft's Operations Management Suite Log Analytics (I'm glad there is no acronym for all that) is replicating some configurations to another workspace. If you provide services to multiple customers, you know exactly how challenging that can be. If you have a Dev or QA environment, you may also need to move your configuration.

Currently, the OMS Log Analytics console won't allow you to move your alerts and search queries. For the saved searches, I've written a couple of scripts for that purpose (see here). More recently, Microsoft made the Alert REST API documentation available here, and with that, the alerts can also be exported and imported.

For that, I’ve written two scripts:

Export-Alerts.ps1 – it will cycle through your tenants, identify all saved searches that have an action and a schedule (an alert) assigned to them, and export them to a file.

Import-Alerts.ps1 – it will take the previously generated file and import those alerts into any workspace you select.
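As a rough sketch of what the export side does, the snippet below enumerates saved searches and their schedules through the REST API. The endpoint paths and api-version are my assumptions based on the documented Alert REST API; $accessToken is assumed to be a bearer token obtained from your Azure session, and the subscription, resource group and workspace names are placeholders:

```powershell
# Illustrative only: enumerate saved searches and their schedules (alerts)
$sub = "<subscription-id>"; $rg = "<resource-group>"; $ws = "<workspace-name>"
$base = "https://management.azure.com/subscriptions/$sub/resourceGroups/$rg" +
        "/providers/Microsoft.OperationalInsights/workspaces/$ws"
$headers = @{ Authorization = "Bearer $accessToken" }

$searches = (Invoke-RestMethod -Uri "$base/savedSearches?api-version=2015-03-20" `
                               -Headers $headers).value
$alerts = foreach ($s in $searches) {
    $id = ($s.id -split '/')[-1]
    # A saved search that has one or more schedules behind it is an alert
    (Invoke-RestMethod -Uri "$base/savedSearches/$id/schedules?api-version=2015-03-20" `
                       -Headers $headers).value
}
$alerts | Export-Clixml -Path .\alerts.xml   # same idea as the script's alerts.xml
```

The actual scripts add the tenant/subscription pickers shown below and also capture the actions tied to each schedule.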

Let’s see how it works. First, exporting:

When you run the script, you must enter your credentials:

image

Then pick your tenant:

image

and your subscription:

image

Once done, it will generate a file (alerts.xml by default):

image

Now, to import, the steps are similar. Run the Import-Alerts.ps1 script and pick your tenant:

image

Then the subscription:

image

And the target workspace:

image

And lastly, the alerts.xml file:

image

Once done, you should see the alerts in your target workspace, as well as the saved searches!

image

Hope this helps!

Azure Recovery Services

Removing ASR Protection (V2 in the new Azure Portal)


Once you are done with your ASR pilot, there is a certain order you should follow to properly disconnect your on-premises Hyper-V host.

The first general rule is: do not remove the agent before cleaning things up in the portal, or the cleanup becomes much harder. Start by deleting the configuration in this order.

– Unprotect any VMs you may be protecting:

1. Go to Replicated items in the Azure portal:

image

Click on the machine, then More Commands, then Delete:

image

Select as below:

image

This will disable the protection only; if needed, this machine will still be manageable later.

Wait until the protection is disabled:

image

image

Now for the Hyper-V server. Click on Site Recovery server:

image

Then select your Hyper-V server:

image

Click Delete and OK.

image

This will remove the configuration from the local Hyper-V host.

Now you would be ready to reconfigure your host, but I will go all the way and remove the agents from it:

image

image

image

And the second one:

image

And you are in the clear! Hope this helps!

Policies Resource Manager

Azure Resource Manager Policies


In a real-world scenario, policies and restrictions are something you are going to need on a daily basis. In times of infinite-capacity clouds, it is very important to control what, and how much, can be deployed. In the previous Azure portal, that task was very hard. The addition of RBAC (Role-Based Access Control) brought an important capability to the mix, but it was still not enough to give granular control over what kinds of resources could be deployed.

Enter ARM Policies. With Policies you can essentially define the conventions for specific subscriptions, resource groups or resources, in terms of what is and is not allowed.

With Policies, you can, for example, determine what types of resources a user (authorized with RBAC) can deploy, and to which regions.

Let’s take a look at how it is done. In my example, I will create a resource group, then restrict the types of resources you can deploy in it.

First, creating the RG:

Add-AzureRmAccount
$RG=New-AzureRmResourceGroup -Location "East US" -Name "PolicyRG"

Now, let’s define a policy. Each policy contains basically conditions and effects:

$PolicyDef1=@"
{
  "if": {
    "not" : {
      "field" : "tags",
      "containsKey" : "costCenter"
    }
  },
  "then" : {
    "effect" : "deny"
  }
}
"@

This particular policy only allows deployment of resources that have a costCenter tag.

The next step is to create the actual policy object:

$policy = New-AzureRmPolicyDefinition -Name tagPolicyDefinition -Description "Policy to allow resource creation only with Tags" -Policy $PolicyDef1

And apply it to a certain scope. In this case, my resource group:

New-AzureRmPolicyAssignment -Name tagPolicyAssignment -PolicyDefinition $policy -Scope $RG.ResourceId

Now if you try to deploy any resource without the specific tag, you will be blocked:

image

If you use PowerShell and create, for example, an external IP with a tag, you will be fine:

$publicIP = New-AzureRmPublicIpAddress -Name $PublicIpName -ResourceGroupName $rgName -Location $locName -AllocationMethod Static -DomainNameLabel $domName -Tag @{Name="costCenter";Value="Sales"}

image

If you want a complete log of what has been denied:

Get-AzureRmLog | where {$_.OperationName -eq "Microsoft.Authorization/policies/deny/action"} 

This is great stuff. The portal doesn't let you set a tag at creation time, so you may need to leverage PowerShell for that. Another example is in regard to what kinds of resources you want people to deploy. Often enough, groups will only work with infrastructure elements (Compute, Storage, etc.). You don't want them to accidentally spin up a SQL Database or a Logic App. The policy below only allows specific types of resources:

{
  "if" : {
    "not" : {
      "anyOf" : [
        {
          "field" : "type",
          "like" : "Microsoft.Resources/*"
        },
        {
          "field" : "type",
          "like" : "Microsoft.Compute/*"
        },
        {
          "field" : "type",
          "like" : "Microsoft.Storage/*"
        },
        {
          "field" : "type",
          "like" : "Microsoft.Network/*"
        }
      ]
    }
  },
  "then" : {
    "effect" : "deny"
  }
}

Let’s apply this policy (and first remove the previous one) and test creating something fancy in our resource group. First, removing:

Get-AzureRmPolicyAssignment -Name "tagPolicyAssignment" -Scope $RG.ResourceId | Remove-AzureRmPolicyAssignment -Scope $RG.ResourceId

($RG contains my resource group object).

You will get this confirmation dialog:

image

Say yes.

Now let's add another policy (policy 3, in this case).

$PolicyDef3=@"
{
  "if" : {
    "not" : {
      "anyOf" : [
        {
          "field" : "type",
          "like" : "Microsoft.Resources/*"
        },
        {
          "field" : "type",
          "like" : "Microsoft.Compute/*"
        },
        {
          "field" : "type",
          "like" : "Microsoft.Storage/*"
        },
        {
          "field" : "type",
          "like" : "Microsoft.Network/*"
        }
      ]
    }
  },
  "then" : {
    "effect" : "deny"
  }
}
"@
$policy3 = New-AzureRmPolicyDefinition -Name tagPolicyDefinition3 -Description "Policy to allow creation of only certain resource types" -Policy $PolicyDef3
New-AzureRmPolicyAssignment -Name ResourcePolicyAssignment -PolicyDefinition $policy3 -Scope $RG.ResourceId
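As a quick sanity check (optional, using the same cmdlets as above), you can list which assignments are now in effect at the resource group scope:

```powershell
# List the policy assignments applied at the resource group scope
Get-AzureRmPolicyAssignment -Scope $RG.ResourceId | Select-Object Name
```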

Let’s try and add a Network security group:

image

All good:

image

Now, let’s try, say, a SQL Database.

image

Details:

image

And bam! Denied!

image

image

In a nutshell, combining RBAC and Azure Resource Manager Policies gives you a lot of control and the ability to create (and enforce) governance over subscriptions, resource groups and resources.

Hope this helps!

Azure Recovery Services Resource Manager

Azure Site Recovery–Onboarding in the New Azure portal–PREVIEW


As with many Azure features that come out, you just stumble upon them while casually browsing the (extensive) Azure portal. This was the case with the preview of Azure Site Recovery. Previously, you could see a reference to the ASM version, but it would throw you back (in time) to the old portal.

Now a real interface to configure the service has been made available. Note that this is a preview and shouldn't be used in production.

It starts with creating a Vault:

image

(isn’t the little alien guy funny?)

Next you need to pick which scenario you want to use:

image

I'm going with standalone Hyper-V, since that's all I can do at this time.

Next, create a Site:

image

Now you will need to install the bits on your Hyper-V hosts and use the credentials file as suggested:

image

Install the provider:

image

image

Register the Vault:

image

Done:

image

Now, to the portal! And there it is:

image

Add a Replication policy:

image

I’ve noticed the naming is more consistent with the PowerShell commands:

image

Create a Compute configuration. This is new:

image

Done:

image

Now moving to a different blade and option:

image

Enable replication through these steps:

image

Picking my usual suspect: CoreOS

image

Select storage account and OS:

image

And Replication policies:

image

[tense music plays]

image

Job completed:

image

And here is my VM being synchronized:

image

Hope this helps. I will be back with the testing procedures and how to set this up using PowerShell!

Azure PowerBI

Using Power BI to view your Azure Usage


One of the challenges of understanding your Azure usage is deciphering the usage report from the Azure portal. And I really needed that, since I was consistently going past my monthly cap. Since I'm not an Enterprise user who can use that FREE amazing tool, I decided to figure out what was going on with my MSDN subscription. My first step was to download the usage report from the Azure account portal:

image

then:

image

Pick Version 2 – Preview.

Once done, the CSV file you download has two parts. The first shows the summary of utilization per Meter type.

image

Actually, it is based on these 3 items:

image

These 3 together are the key to finding the utilization per resource. Column O has the Rate we need in order to find the final cost of a single resource. But why is that necessary? Look at the example below, which comes from the second part of the CSV file:

image

Note that you don't have a cost per line, only the Consumed Quantity. So, how can we know? The answer is in columns D, E and F of the second table, which are exactly the same ones used in the first table:

image

Now, if I could grab the Rate from the first table and assign it to each line on the second one based on these 3 columns, wouldn’t it be great?

Enter a slight Excel tweak and Power BI. The first thing is to extract the first piece of the CSV file and turn it into a separate tab. Let's call it RateTable:

image

Now, the remaining rows need to be alone in another tab. Let’s call it azureusage:

image

Now let’s save the CSV as an Excel file and leave it ready.

If you don’t have Power BI Desktop, go here to get it. Once there, you can just add data to it:

image

Select Excel and point to your file:

image

You should see both tabs:

image

Now click on Edit and PBI will take you to the query editor view. There, we will need to execute a few steps to get the proper information out of our data.

1. The first thing is to remove blank rows from the RateTable query:

image

Also make sure you remove unused rows, like the "Daily usage" title that comes from the original CSV file.

2. Next, we need to create a custom column in both queries, to build a unique key (so we can relate the two of them). We will start with the query we have open. Select the Meter Name, Meter Sub-Category and Meter Zone columns, in this order, and select Merge Columns:

image

Give it a name:

image

You should now see a new column there:

image

3. Repeat the process for the azureusage query:

image

4. Now let’s create a relationship between the two queries. Click on Close and Apply to save your changes:

image

Once there, click on the relationship icon: image

Once there, you can try to detect the relationship. Click on Manage Relationships at the top and then Autodetect:

image

Isn't it cool?

You could have added it manually or even just connected the fields between the two tables:

image

6. Great. Back to Edit Queries. In this step we will add the corresponding rate for each usage line, based on the type of meter (the MeterKey we have just created). To do that, go as follows:

– Click on Merge Queries up-top while in the azureusage query. This dialog will show:

image

Select as below (MeterKey on both of them):

image

This creates a new column. We don't need the whole table returned. To select what we need (Rate), click on the arrows icon:

image

And select the Rate only:

image

Rename the column to Rate.

7. Now, all you need is something that calculates the cost for each entry, by multiplying the Rate by the Consumed Quantity. To do that, click on Add Column:

image

And Add a Custom column:

image
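For reference, the custom column is just a product of the two fields. Assuming the column names match the V2 CSV headers ("Rate" and "Consumed Quantity"), the Power Query formula would look like this:

```
= [Rate] * [Consumed Quantity]
```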

Click Ok. Now set the type of the data in the column:

image

Now close and Apply.

8. Back in the main canvas, select a type of visualization and check the Cost and Instance ID fields on the right side:

image

And there you have it: your cost per individual resource in Azure:

image

It is kind of a long tutorial, but it might be a good way to visualize your detailed cost per instance.


Hope this helps!

Authoring SCOM VSAE

Service Discovery and Monitoring with Operations Manager


One of the most frequent requests we get from customers is to create monitors for application services. Often enough you will find management packs for well-known applications, but when you can't, you will need to create the monitors yourself. You then have basically two options: use the provided authoring template in the SCOM console, which has been extensively described on the internet, or create monitors in the same authoring area of SCOM but using an existing target, like Windows Operating System or Windows Computer.

The first option is good because SCOM will not only monitor the services, it will also create a discovery for them and make them available as independent objects, listed in a State view, for example. The con of this approach is that if you have a lot of services, a lot of work is required to create all the monitors. It also uses a lot more resources to discover the services, since a discovery is added for each monitored service. This template is also good if you want CPU and memory monitoring for the services, which are available through the template as well.

With the second option, which is much leaner in terms of resources, the con is that the services do not become objects themselves. The monitors for each one of them will be visible in the Health Explorer only. Alerts will work normally, though.

What should you do then?

Well, there is a third option, which requires some XML editing and authoring skills. I've been using this with different customers and it has received good feedback. To build this solution, I'm using Visual Studio 2015 with the Management Pack authoring extensions.

It all starts with a Class definition:

<ClassType ID="Company.Application.Class.Computer" Accessibility="Public" Abstract="false" Base="Windows!Microsoft.Windows.ComputerRole" Hosted="true" Singleton="false" />

This one defines a computer class that will host the services. And now the services themselves:

<ClassType ID="Company.Application.Class.Service" Accessibility="Public" Abstract="false" Base="Windows!Microsoft.Windows.LocalApplication" Hosted="true" Singleton="false">
  <Property ID="ServiceName" Type="string" Key="false" CaseSensitive="false" MaxLength="256" MinLength="0" />
  <Property ID="ServiceDisplayName" Type="string" Key="true" CaseSensitive="false" MaxLength="256" MinLength="0" />
  <Property ID="ServiceProcessName" Type="string" Key="false" CaseSensitive="false" MaxLength="256" MinLength="0" />
  <Property ID="StartMode" Type="string" Key="false" CaseSensitive="false" MaxLength="256" MinLength="0" />
  <Property ID="LogOnAs" Type="string" Key="false" CaseSensitive="false" MaxLength="256" MinLength="0" />
</ClassType>

Next, I will need two discoveries: one to discover the computers and another to discover the services. This could be condensed into a single script discovery, but WMI is less expensive than scripts in terms of CPU cycles.

First the computer discovery:

image

Make sure you pick the right service prefix in the WMI query part, to properly identify the computers that belong to that class.

This discovery will then scan all computers that are part of the Windows Server Operating System class every 15 minutes. Once a machine with the services mentioned above is found, a new instance of the Company.Application.Class.Computer class will be created.
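Since the screenshot isn't searchable, here is a hedged sketch of what such a WMI-based computer discovery can look like in the MP XML. The module type is the standard WMI snapshot provider from the Windows library; the 900-second frequency, the 'MyApp%' service-name prefix and all IDs are placeholders for this example:

```xml
<!-- Sketch only: adjust IDs, prefix and frequency to your application -->
<Discovery ID="Company.Application.Discovery.Computer" Enabled="true"
           Target="Windows!Microsoft.Windows.Server.OperatingSystem"
           ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="Company.Application.Class.Computer" />
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.WmiProviderWithClassSnapshotDataMapper">
    <NameSpace>root\cimv2</NameSpace>
    <Query>SELECT Name FROM Win32_Service WHERE Name LIKE 'MyApp%'</Query>
    <Frequency>900</Frequency>
    <ClassId>$MPElement[Name="Company.Application.Class.Computer"]$</ClassId>
    <InstanceSettings>
      <Settings>
        <Setting>
          <Name>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Name>
          <Value>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Value>
        </Setting>
      </Settings>
    </InstanceSettings>
  </DataSource>
</Discovery>
```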

And the service discovery itself:

image

This discovery will scan all the previously discovered computers belonging to the Company.Application.Class.Computer class and look for the services according to the WMI query. Once any of the services is found, a new member of Company.Application.Class.Service is discovered and the properties are mapped:

image

Having the service objects as entities in themselves makes monitoring easy, since you can create a single monitor that targets all the objects:

image

And that is pretty much it. The remaining pieces of the MP are references, presentation and display strings. Make sure to customize the IDs and messages according to your needs.

The final MP can be found here.

Hope this helps!

Azure Resource Manager

Azure Resource Manager– Posts Reference


Azure Resource Manager

Journey to ARM – Part V – Adding an external IP to an existing VM


Unlike the classic model, when you create a VM it won't have an external IP (if you create it using the portal, the portal will add one for you). In my case, I migrated my VMs from the classic model using the method described in my previous articles, so no external IP for me. However, you may want to temporarily enable access to such a VM.

So, without further delay, here’s how you do it.

First, as usual, some variables:

image

Then, create the actual external IP:

image

Next, you need to assign the IP to the NIC:

image

And let us not forget about security. I will create a Network Security Group, create rules to allow RDP and deny everything else from the Internet and assign it to the NIC:

image
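Putting the steps above together, a minimal sketch could look like the following. The resource and rule names are assumptions for this example; also note that an NSG denies unsolicited inbound Internet traffic by default, so an explicit allow rule for RDP is enough:

```powershell
# Sketch only: substitute your own resource names
$rgName = "MyRG"; $location = "East US"

# 1. Create the public IP
$pip = New-AzureRmPublicIpAddress -Name "MyVM-pip" -ResourceGroupName $rgName `
        -Location $location -AllocationMethod Dynamic

# 2. Assign it to the VM's NIC
$nic = Get-AzureRmNetworkInterface -Name "MyVM-nic" -ResourceGroupName $rgName
$nic.IpConfigurations[0].PublicIpAddress = $pip

# 3. Create an NSG that allows RDP in (everything else inbound is denied by default)
$rdp = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-RDP" -Protocol Tcp `
        -Direction Inbound -Priority 100 -SourceAddressPrefix Internet `
        -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow
$nsg = New-AzureRmNetworkSecurityGroup -Name "MyVM-nsg" -ResourceGroupName $rgName `
        -Location $location -SecurityRules $rdp
$nic.NetworkSecurityGroup = $nsg

# Apply the changes to the NIC
Set-AzureRmNetworkInterface -NetworkInterface $nic
```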

Once applied, it should look like this:

image

Yes! It will take a minute, and there is no downtime.

Once you are done, you might want to remove the IP and the Network Security Group:

image

You can find the script here.

Hope this helps!

Azure Resource Manager

Journey to ARM–Part IV – Creating a VM from an existing VHD


In my last blog, I showed you how to copy the storage from your previous classic storage account to a brand new and shiny one. Now, supposedly, all you need is to create a new VM using that VHD.

So, a few assumptions before we get down to the nitty-gritty:

– I already have a VNET to connect my VM to:

– You know the name of the VHD the VM is going to use

The script starts by setting some variables:

image

Then I get some VNET and subnet information:

image

Create the NIC.

IMPORTANT NOTE! Make sure you don't name your NIC just 'nic' like I did on my first try. You may have multiple NICs inside the same resource group and you won't know which one is which.

image

Then create the VM:

image
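For reference, a hedged end-to-end sketch of those steps follows. The VNET, subnet, NIC and VM names, the VM size and the storage URI are all placeholders; the key part is attaching the existing VHD with -CreateOption Attach:

```powershell
# Sketch only: substitute your own names and VHD URI
$rgName = "MyRG"; $location = "East US"

# Get VNET and subnet information
$vnet   = Get-AzureRmVirtualNetwork -Name "MyVNET" -ResourceGroupName $rgName
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "default" -VirtualNetwork $vnet

# Create the NIC (with a descriptive name!)
$nic = New-AzureRmNetworkInterface -Name "MyVM-nic" -ResourceGroupName $rgName `
        -Location $location -SubnetId $subnet.Id

# Build the VM configuration and attach the existing VHD as the OS disk
$vmConfig = New-AzureRmVMConfig -VMName "MyVM" -VMSize "Standard_A2"
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
$vhdUri   = "https://mystorage.blob.core.windows.net/vhds/myvm.vhd"
$vmConfig = Set-AzureRmVMOSDisk -VM $vmConfig -Name "MyVM-osdisk" -VhdUri $vhdUri `
        -CreateOption Attach -Windows

# Create the VM
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vmConfig
```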

Make sure you get rid of the previous VM in the classic model.

Find the final script here.

Hope this helps!