Category Archives: Azure

Azure Resource Manager

Journey to ARM–Design and Migrate


I’ve recently committed myself to migrating my (now former) Azure classic VM environment to the new Azure Resource Manager model. I then found out that there is no easy or ‘no downtime’ way to do it. There is some documentation and some interesting projects around to help with that, like the ASM2ARM project. Since I wanted to learn how the sausage is made, I tried to come up with my own way, better or worse. So I took the Sinatra approach: I did it my way!

What is all that?

If you look at your current environment (and by that I mean the new Portal), everything is in a resource group already. Cloud Services got resource groups of their own, where you can see your old VMs, along with a Cloud Service object:

image

In the new ARM model, a VM like this requires a few more items, like public IPs, NICs, etc. The old model made a few things easier by deploying cloud services more or less automatically, but it didn’t create clear relationships and dependencies between the objects. Before we dive in deep, here is how I’ve planned my environment.

Planning

Yes, Azure is all about flexibility and having things ready to be used. However, you still need to know what you are doing! Surprise! Well, here’s how I did it. Since it is all new, I’m probably wrong somewhere, but it is all about the learning.

The things I usually keep ready in my Azure lab are:

– A Domain Controller – This DC is part of a domain, split between on-premises and the cloud (connected through a VPN).

– System Center Servers – SCOM, SCSM, SCORCH, VMM.

– Other things – test machines, Linux, websites, etc.

So, my first attempt at all this will be having a basic infrastructure resource group, with my Storage Account, my Virtual Network (and VPN connection), as well as the domain controller:

image

Everything else that I build, unless it requires something special, should point to this infrastructure for Storage and Connectivity.
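If you prefer to put these bones in place with PowerShell rather than the portal, here is a minimal sketch using the AzureRM module; the group, account and network names are my own picks, so adjust to taste:

Login-AzureRmAccount
# One resource group to hold the shared infrastructure pieces
New-AzureRmResourceGroup -Name 'rg-infrastructure' -Location 'East US'
# The shared storage account...
New-AzureRmStorageAccount -ResourceGroupName 'rg-infrastructure' -Name 'mylabstorage01' -Location 'East US' -Type Standard_LRS
# ...and the virtual network everything else will point to
New-AzureRmVirtualNetwork -ResourceGroupName 'rg-infrastructure' -Name 'lab-vnet' -Location 'East US' -AddressPrefix '10.0.0.0/16'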

For the System Center servers, I have created another resource group, for all of them. One could argue that having separate ones would make things easier to manage later. That might be true, and if the number of components were bigger, I’d probably go for that. In this case, most of the VMs will have only a VM and a NIC resource:

image

I have added a Network Resource Group to one of the VMs basically to allow external access.

For the remainder, I will probably create separate small resource groups, or maybe a one-size-fits-all RG called “other” or “miscellaneous”.

And there you have it: my whole Azure environment is fully designed. Of course, this is a very simple environment, but it can get you started in the ARM way of thinking. In my next article, I will start with the basic connection between ARM and on-premises using a VPN gateway. Stay tuned!


Hope this helps!

Automation Azure Orchestrator

Azure Automation versus Orchestrator


Fight!

A while ago, I created a solution for a customer using System Center Orchestrator. Although it was OK, most of the solution was built with PowerShell scripts; I leveraged Orchestrator basically to monitor a folder for certain types of files. Since this is all local to the customer’s infrastructure, the first impulse was to use Orchestrator, since it is hard to leverage a cloud solution for on-premises data, right? Wrong!

Challenge accepted! Let’s try to do the same using Azure Automation!

The first part of the challenge is to mimic the basic Orchestrator side of the solution:

image

My first thought was to use the .NET FileSystemWatcher object with PowerShell. However, this object is only active whilst your PowerShell session is open, making it unsuitable for automation. So, let’s take the simplest approach: polling every X minutes for files should do the trick, for now.

Now let’s try to do that in Azure Automation! Let’s start with a variable asset holding the file path to be monitored:

image

Now for the Runbook:

image
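For reference, here is a stripped-down sketch of what such a polling runbook might contain; the variable asset name and the file filter are just examples of mine:

# Read the monitored path from the variable asset created above
$folder = Get-AutomationVariable -Name 'MonitoredFolder'
# Poll the folder; anything still sitting there is fair game
$files = Get-ChildItem -Path $folder -Filter '*.csv' -File
foreach ($file in $files) {
    Write-Output "Found file: $($file.FullName)"
}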

The testing runbook looks like this:

image

For testing, let’s use the test pane:

image

Notice it might be queued for a while.

After a bit, if you look at the results:

image

These are the two files in the folder, as expected.

The last step of this part is to add a schedule. For that, we’ll create a schedule asset and assign this runbook to run on it. Don’t forget to save and publish the runbook.
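Creating the schedule can also be scripted. A rough sketch with the AzureRM.Automation cmdlets (the account, runbook and schedule names are hypothetical):

# Note: Automation schedules have an hourly floor, which is why the more
# granular Scheduler approach below is interesting
$common = @{ ResourceGroupName = 'rg-automation'; AutomationAccountName = 'MyAutomation' }
New-AzureRmAutomationSchedule @common -Name 'PollFolder' -StartTime (Get-Date).AddMinutes(10) -HourInterval 1
Register-AzureRmAutomationScheduledRunbook @common -RunbookName 'Watch-Folder' -ScheduleName 'PollFolder'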

Update: So much to learn, so little time! I was just reviewing some articles on automation and there is a much better way to do this. Let’s try this again.

First we will create a Webhook for our dear published runbook:

image
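A webhook is just an HTTPS endpoint, so anything that can POST to it can start the runbook. For instance, from PowerShell (the URI below is a placeholder; the real token is shown only once, at creation time):

$uri = 'https://s1events.azure-automation.net/webhooks?token=...'
Invoke-RestMethod -Method Post -Uri $uri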

And make sure you set the run settings for a hybrid worker:

image

Now you’ll need to create a scheduler job. In the new Azure Portal, go to:

image

Set the Job collection:

image

Configure the Action settings:

image

And the (much more granular) schedule:

image

Note that the Scheduler Service is not free. If you need granularity, keep that in mind. Review this to check the pricing.

And there is my first run:

image

Then the second:

image

image

On the next article, I will finalize the execution of my script through Azure Automation to disable and enable jobs.

I intended to write a second post, but when I started it, I noticed it would be much simpler to add my second script (the one in the second PowerShell .NET box in SCO) as a function in my main script than to make it a second Azure Automation runbook and have to handle remote authentication from the worker against Azure (certificates, publish settings file, etc.).

So, here’s what I did. Here is the script as it looks in the PowerShell ISE, with the main function (Process-File) collapsed:

image

As you can see, this does everything Orchestrator was doing in the original workflow.

The main changes when using this through Azure Automation concern authentication (how to use the variables) and keeping in mind that it will run locally on a remote computer. That is easy to forget, since you probably just moved the script from your local ISE, so it shouldn’t be too different!

My original lines looked like this:

image

Of course, in Orchestrator I was using an encrypted variable. But with Azure Automation, I can have a credential asset:

image

Neat, eh? Note also that I’m using Invoke-Command so I can run under those credentials, since I can’t specify the credentials the hybrid worker runs the script as; otherwise it would run as Local System. I had to do the same in Orchestrator anyway, since the native .NET PowerShell activity won’t allow credentials to be set.
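Put together, the relevant fragment looks roughly like this (the asset name is hypothetical; Process-File is the main function mentioned above, and remoting must be enabled on the worker):

# Fetch the credential asset defined in Azure Automation
$cred = Get-AutomationPSCredential -Name 'FileProcessingCred'
# Run the work as that user instead of Local System
Invoke-Command -ComputerName localhost -Credential $cred -ScriptBlock {
    Process-File
}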

So, let’s test this!

image

And done!

image

I started the job at about 1:56 pm, so I should have a log file in the local folder of the worker machine:

That looks about right!

image

And the contents are consistent with the purpose of the script:

image

After enabling the scheduler, the runbook ran and generated output (and logs) whenever a file was found in the folder:

image

So, although the script may be slightly different, this setup required no Orchestrator, no SQL and virtually no setup, besides the hybrid worker. Let’s go, Azure Automation!

Hope this helps!

Automation Azure OMS Powershell

Using MSOMS Alerts with Remediation Runbooks



Microsoft recently put the Operations Management Suite Alerts feature in public preview. The official announcement is here.

One of the greatest features, along with alerting itself, is the possibility of triggering Azure Automation runbooks to remediate an issue found by an alert.

First of all, make sure you enable the feature since it is a preview:

image

Let’s create a simple alert that will for sure be triggered, in order to have some data. Suppose I want to be alerted when computers talk to more than 5 remote IPs. OK, I know, it doesn’t make much sense, but I want a query that is sure to bring back data, and not a lot of it.

For example:

Type=WireData Direction=Outbound | measure count() by RemoteIP

Got some interesting numbers:

image

Now, let’s save this search, for future use:

image

After that, we can create an alert:

image

Notice you can pick the current search or a previously created search.

Next, you will need to pick a threshold and the window of time for the query. It can’t go back further than 60 minutes.

image

Notice also that OMS gives you a preview of the results. I love that!

Select the Subject and Recipient of the notification, should you need one, as below:

image

The next step is to set up some remediation:

image

If you look at the New Azure Portal, you will notice a webhook:

image

If you want your remediation to run on premises via a Hybrid Worker, you will need to set it up here:

image

And there you have it. Once the alert is triggered, you will see the log:

image

Notice the Input:

image

And there is your data, in a JSON format:

image

image

Now you can grab the data using the standard runbook procedure, as described here.
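The skeleton of such a runbook is small: $WebhookData.RequestBody carries the alert’s search results as JSON. Any property names beyond that are best confirmed against the Input blade shown above:

param ([object]$WebhookData)
if ($WebhookData) {
    # Deserialize the alert payload and work with it from here
    $payload = ConvertFrom-Json -InputObject $WebhookData.RequestBody
    Write-Output $payload
}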


Hope this helps!

Azure Operations Management Suite SCOM

Updated Extended Agent Info Management Pack


A while ago I wrote this article to help with SCOM side-by-side migrations from SCOM 2007. With the new Operations Management Suite wave and the possibility of agents reporting to an OMS workspace independently, visualizing which agents have been configured and/or have the OMS direct agent installed seems like something useful.

So, I have updated the management pack and it can be found here.

The basic difference is that you can see more information in the view:

image

As you can see above, some agents report to multiple management groups as well as an OMS workspace.

Next in my backlog are tasks to configure an agent’s OMS settings (enable, disable, change workspace) and perhaps even upgrade an agent with the OMS binaries.


Hope this helps.

Azure Operations Management Suite

Importing Saved Searches into your OMS workspaces


In my previous article, I showed how to extract saved searches from your Azure OMS workspaces. With that file in hand, you can use the script below to import the results into another workspace.

First, get a hold of the script here. Also make sure you have all the prerequisites mentioned in the previous article.
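For reference, under the hood the import boils down to one armclient PUT per saved search, roughly like this (the angle-bracket placeholders are yours to fill in, and the api-version is the one current as I write this):

armclient put "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>/savedSearches/<searchId>?api-version=2015-03-20" @search.json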

When you first run it, you will be prompted for authentication:

image

If you have multiple tenants, you will be asked which tenant to use:

Then about the subscription:

image

Next, you should be prompted about the workspace you would like to target:

image

And finally, the file to be imported, which has been exported with the script in the previous post:

image

Once the script is done, you should see a new category in your workspace:

image

And that is it!

Hope this helps!

Azure OMS Operations Management Suite Uncategorized

Exporting Saved Searches from your OMS workspaces


I have been studying OMS for a while now, and although there is gradually more and more content about it, here’s another piece of code that can help you with your daily OMS management.

If you don’t know what OMS is, go here.

If you do, you may know that you can save searches that you find interesting and even add them to your workspace for future or daily use.

image or image, for example.

The problem comes when you need to move your searches to another environment. You don’t want to create hundreds of queries manually in the portal.

Enter PowerShell. You can find the documentation on the initial setup here. With a great head start from Richard Rundle of Microsoft, I have completed the script to export the saved searches.

Once you have Chocolatey and armclient configured, you can go ahead and use the script below.
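At its core, the export is a single armclient call against the workspace, something along these lines (placeholders in angle brackets are yours; the api-version is the one current as I write this):

armclient get "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>/savedSearches?api-version=2015-03-20"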

Here’s a little walkthrough.

As soon as you run it,

image

you will be prompted by the login screen:

image

If you are like me, using a user that has access to multiple tenants, you’ll be prompted for the tenant:

image

You will be then prompted for the subscription:

image

The script will show you a list of queries you may want to extract, and then extract the ones that match a certain criterion specified in the script:

image

The criterion is the name of the category:

image

And as you can see, the queries following the list match that category only:

image

The script will also create a file named after the search category:

image

image

Keep that file handy, since we are going to use it in the next article, to import the searches into another environment.

You can find the script here.


Keep on rocking in the cloud world!

Azure Backup

Azure Backup–Restoring VM backups


In my previous article on Azure Backup for Azure VMs, I had set up backup for my Azure SCOM VM.

Once the scheduled time came, there it was:

image

The details of the job give us a bit more insight into what was done:

image

Great! So now, as with any backup solution, it is important to verify that it works and to know how (and what) to restore, should the very unlikely situation arrive.

In that window, nothing can be done. If you now go to the protected items window, you can see the (magic!) Restore button:

image

Also notice that you can backup the VM immediately from here by clicking on “Backup Now”.

But let’s try a restore.

image

I can then select a recovery point in time. Since I have only one, it’s an easy choice. At least there is one… :) Once I click Next, I’m presented with a new screen asking for information about the new VM that will be created. So, it seems you don’t have to worry about overwriting your current VM:

image

Once I fill up the necessary data:

image

The process starts:

image

And after 40 minutes:

image

There is our VM!

image

Make sure your previous VM is turned off, since the new VM will be exactly like the old one; you will have all sorts of networking conflicts if you restore it to the same network with the same name, SID, etc.


Hope this helps!

Azure

Azure Online backup–Virtual Machines


Since storage is one of the most attractive aspects of cloud computing, when Microsoft announced Azure VM backup, it immediately rang a bell (or two). So, here’s my experience configuring it for my humble laboratory.

Let’s start with the basics. Here’s where you will find the documentation.

The first step in configuring Azure VM Backup is to discover the virtual machine. For that, navigate to the portal, under Recovery Services, and click on Registered Items:

image

Once there, click on Discover in the lower bar:

image

The next step is to register your VMs. In my case, when I clicked on the Register button, I got nothing. I assumed it was because all my VMs were stopped. However, that was not the case: for some crazy reason, I had created this vault in another region. So, a new vault shall be created!

image

Now we are cooking:

image

So, I have selected my SCOM machine (ascom).

image

For the last main step, you have to actually protect the VM. Once you click on Protect, you are presented with the VMs that you registered before:

image

You may define your backup policy:

image

And a retention policy:

image

For testing, I have just made the selections below:

image

The job then comes to life:

image

Summary: the process to set up the backup is pretty straightforward. I will be back once I have something restored.

Hope this helps!

Azure Resource Manager

Using JSON Edit to edit my ARM Templates


I have just seen a tweet about JSONEdit and decided to give it a try. It can be a bit challenging to edit JSON using Visual Studio: although you can see the tree, you can’t really interact with it, like copying and editing content. My expectation is that I will be able to do that with JSONEdit.

You can download it from here: http://tomeko.net/software/JSONedit/

Installation? Well, there was no installation, actually. When I ran it, I got this very screamy page:

image

I believe I can trust it, so let’s run it anyway.

And there you have it. Let’s try opening some files. I have the files I used for my previous ARM articles. On the first try, I got this error:

image

And it really seems that there is something funny there:

image

Even after removing the characters, it won’t recognize this as a JSON file when asked to reformat the code:


image

Let’s try editing it with VS and then pasting it directly into JSONEdit. Once I did that, I could easily see the tree:

image

The nice thing about the tree here is that you can edit the content:

image

You can also reorder the nodes and, even better, copy and paste (as child or sibling):

image

Once you have changed it, you can paste it back into Visual Studio for testing and deployment.

In summary, it seems like a nice tool to have if you want to make sure you are replicating whole sections, or want to better visualize some variables and resources. It seems Microsoft stores more data in the JSON files than JSONEdit understands, but it is still good to have around when you are editing ARM templates.


Hope this helps!

Azure Resource Manager

Azure Resource Manager–Step 3–The Load Balancer


If you checked my previous articles about the two VMs with external IPs, you may have noticed that both VMs get an external IP and that there is no TCP port restriction on them. That won’t usually be the desired situation. Very commonly, you will want something balancing the load between those two identical machines, as well as some control over which ports can be accessed. In order to accomplish that, we will first create a SINGLE public IP and then attach it to a load balancer entity.

First things first. The Public IP configuration. What I will do is remove the loop and make it a single public IP. This is what I had:

image

Now, after changing:

image
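De-looped, the public IP resource boils down to something like this sketch (the apiVersion is what I’d expect for that era of Microsoft.Network, so double-check it against your own template):

{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Network/publicIPAddresses",
  "name": "[variables('PublicIPName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "publicIPAllocationMethod": "Dynamic"
  }
}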

I have also changed the variable names to better represent what we need (names, not prefixes).

image

Next, I will remove the reference from the NICs, since the VMs themselves won’t have public IPs:

image

However, you will need to add a dependency on the Load Balancer and assigned NAT rules and backend LB pool:

image

Second, we should add the load balancer itself. It is a tough cookie, this one, so let’s take the “Jack the Ripper” approach: let’s cut it into pieces.

But first, let’s take a look from a high level. Here’s the skeleton of the beast:

image

Important information:

1. “type”: “Microsoft.Network/loadBalancers”, –> sort of obvious.

2. “dependsOn”: [“[concat(‘Microsoft.Network/publicIPAddresses/’, variables(‘PublicIPName’))]”] –> it needs the external IP to work.

3. “frontendIPConfigurations” –> Contains the name of the external LB IP and a reference to the external IP we have created before.

image

4. “backendAddressPools” –> This configuration will have the name and the backend IP addresses. In this case, the names are sort of hardcoded (allowing only two IPs).

image

5. “inboundNatRules” –> as the name states, this will create NAT rules to allow certain protocols through the load balancer. This used to be done with a cloud service in the old service model.

image

Notice that I’m basically mapping ports 50001 and 50002 to 3389, through the same external IP, to the respective internal VM IPs.

6.  “loadBalancingRules” –> here’s where you’ll define which ports (services) will be load-balanced:

image

7. “probes” –> And finally, how to detect the availability of the load-balanced services:

image
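Pieced back together, the whole beast looks roughly like the sketch below. The resource and configuration names are stand-ins of mine, and I have elided the repetitive bits, so treat it as an outline of the structure rather than the exact template:

{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Network/loadBalancers",
  "name": "[variables('LBName')]",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Network/publicIPAddresses/', variables('PublicIPName'))]"
  ],
  "properties": {
    "frontendIPConfigurations": [ {
      "name": "LBFrontEnd",
      "properties": {
        "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('PublicIPName'))]" }
      }
    } ],
    "backendAddressPools": [ { "name": "LBBackEnd" } ],
    "inboundNatRules": [ {
      "name": "RDP-VM1",
      "properties": { "protocol": "Tcp", "frontendPort": 50001, "backendPort": 3389 }
    } ],
    "loadBalancingRules": [ {
      "name": "HTTPRule",
      "properties": { "protocol": "Tcp", "frontendPort": 80, "backendPort": 80 }
    } ],
    "probes": [ {
      "name": "HTTPProbe",
      "properties": { "protocol": "Tcp", "port": 80, "intervalInSeconds": 15, "numberOfProbes": 2 }
    } ]
  }
}

(In the real template, each NAT rule and load-balancing rule also references the frontend IP configuration, backend pool and probe by resource ID; I have left those out for brevity.)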

I have also added an availability set, just so I can get the guaranteed 99.95% availability:

image

Its location:

image

And assigned the VMs:

image

Once deployed, you’ll hopefully see this:

image

And this:

image


Now for some quick testing. Let’s deploy IIS to both VMs, change the default website and test the LB. Notice that because I have the NAT rules in place, I can connect to the VM:

image

image

Just accept the next prompt and there you are:

image

Let’s add IIS to both VMs:

Add-WindowsFeature Web-Server, Web-Mgmt-Console

And add something to identify each one of the VMs:

image

image
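In my case that was just a tweak to the default IIS page on each box, along these lines (the path and text are illustrative):

Set-Content -Path 'C:\inetpub\wwwroot\iisstart.htm' -Value '<h1>Hello from VM1</h1>'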

Now, when opening the page from the outside:

image and

image

So! That concludes our tutorial! You can find the template here.

I hope this helps!