Overcast » Blog Archives


Azure Resource Manager

Journey to ARM–Part III – Copying Storage


Previously, in the ASM2ARM saga, I created the VPN gateway I will need to connect my Azure VMs to my on-premises resources. Today I will show you how to move existing VHDs from classic storage to new ARM-based storage blobs. In my case, I have made a few assumptions:

– You have a machine in the classic model, with storage in classic mode.

– The machine is stopped

Here are a few things you will need:

– A new storage account, provisioned in ARM;

– Name and storage keys for the classic and ARM storage accounts;

– Name of the old machine and Cloud Service

The script, which was based on this article here, goes like this:

First things first, some variable definitions:

image

In this example, I'm copying the OS hard disks only. Next, I will define my source and destination storage accounts and keys (don't worry, these are not the real keys):

image
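Since the screenshots don't copy and paste well, here is a minimal sketch of what these first two steps might look like. All names, containers and keys below are placeholders, not the actual values or variable names from the original script:

```powershell
# Sketch only: the blob, container and account names are placeholders.
$vhdName       = "myoldvm-os-disk.vhd"   # OS disk VHD to copy
$srcContainer  = "vhds"
$destContainer = "vhds"

# Source (classic) and destination (ARM) storage accounts and keys
$srcAccountName  = "classicstorageaccount"
$srcAccountKey   = "<classic storage account key>"
$destAccountName = "armstorageaccount"
$destAccountKey  = "<ARM storage account key>"

# Storage contexts are built the same way for classic and ARM accounts:
# all they need is the account name and one of its keys.
$srcContext  = New-AzureStorageContext -StorageAccountName $srcAccountName  -StorageAccountKey $srcAccountKey
$destContext = New-AzureStorageContext -StorageAccountName $destAccountName -StorageAccountKey $destAccountKey
```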

Then the actual copy:

image

This might take a while, depending on where your storage accounts are located; since we are switching modes (classic to ARM), the copies will likely take some time. You can use the last part of the script to monitor the progress of the copy:

image
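For reference, the copy and the monitoring loop might look roughly like this, reusing the contexts sketched above (again, an approximation rather than the exact script from the download link):

```powershell
# Kick off the asynchronous, server-side copy from the classic account to the ARM account.
$copy = Start-AzureStorageBlobCopy -SrcContainer $srcContainer -SrcBlob $vhdName -Context $srcContext `
                                   -DestContainer $destContainer -DestBlob $vhdName -DestContext $destContext

# Poll the destination blob until the copy completes.
do {
    Start-Sleep -Seconds 30
    $state = $copy | Get-AzureStorageBlobCopyState
    Write-Output ("{0} of {1} bytes copied - {2}" -f $state.BytesCopied, $state.TotalBytes, $state.Status)
} while ($state.Status -eq "Pending")
```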

Find the final script here.

In my next bit of ARM awesomeness, I'll show you how to create the new VM from the VHD already stored in an ARM storage account.

Stay tuned!

Hope this helps!

Azure Resource Manager

Journey to ARM–Part II – Creating the VPN gateway


The starting point for creating a connection between an Azure VNET and your on-premises environment is a VPN gateway. In the classic Azure portal, the experience is relatively easy and well documented on the internet. In the ARM model, however, as you may or may not know, there is no user interface to create the VPN gateway, so you have to use PowerShell. Below you will find a script that will do it for you. Before you jump to it, take some time to understand the steps. For demo purposes, I will detail the creation of the gateway for a test VNET called overcastvnet in a resource group called demorg.

Let’s create the Resource Group and the VNET:

image
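A sketch of what that step could look like with the names used in this post (the address spaces are made up for the example):

```powershell
# Resource group for the demo
New-AzureRmResourceGroup -Name "demorg" -Location "West Europe"

# VNET with a regular subnet plus the mandatory GatewaySubnet
$subnet1  = New-AzureRmVirtualNetworkSubnetConfig -Name "Subnet1"       -AddressPrefix "10.1.0.0/24"
$gwSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.1.255.0/28"

New-AzureRmVirtualNetwork -Name "overcastvnet" -ResourceGroupName "demorg" -Location "West Europe" `
    -AddressPrefix "10.1.0.0/16" -Subnet $subnet1, $gwSubnet
```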

If your VNet already exists and you just need the gateway subnet to be added, you can run these lines below:

image
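Something along these lines, assuming the same VNET and an address range that is still free:

```powershell
# Add the GatewaySubnet to an existing VNET and push the change back to Azure
$vnet = Get-AzureRmVirtualNetwork -Name "overcastvnet" -ResourceGroupName "demorg"
Add-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.1.255.0/28" -VirtualNetwork $vnet
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
```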

The next step is to create a local network, which basically tells the gateway which networks are on the other side of the connection.

image
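Roughly like this; the gateway IP and address prefix are placeholders for your on-premises VPN device and networks:

```powershell
# The "local network" describes the on-premises side: the public IP of its VPN device
# and the address ranges reachable behind it.
New-AzureRmLocalNetworkGateway -Name "onpremnetwork" -ResourceGroupName "demorg" -Location "West Europe" `
    -GatewayIpAddress "203.0.113.10" -AddressPrefix "192.168.0.0/24"
```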

After that, we need to create an external IP for the Azure gateway. Once provisioned, this will be the IP you are going to use on the other end of the connection (on-premises or another VNET).

image
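For example (the IP name is arbitrary; VPN gateways require a dynamically allocated public IP):

```powershell
# Public IP for the Azure end of the tunnel
New-AzureRmPublicIpAddress -Name "gatewaypip" -ResourceGroupName "demorg" -Location "West Europe" `
    -AllocationMethod Dynamic
```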

Next, select which subnet will be used for the gateway and assign the configuration to the gateway:

image

And finally, create the gateway. Make sure you select the right VPN type, static (policy-based) or dynamic (route-based):

image
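A combined sketch of these last two steps, reusing the objects created earlier (the gateway and configuration names are illustrative):

```powershell
# Grab the GatewaySubnet and the public IP created earlier and bind them together
$vnet     = Get-AzureRmVirtualNetwork -Name "overcastvnet" -ResourceGroupName "demorg"
$gwSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$pip      = Get-AzureRmPublicIpAddress -Name "gatewaypip" -ResourceGroupName "demorg"
$ipConfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $gwSubnet.Id -PublicIpAddressId $pip.Id

# Create the gateway itself. RouteBased corresponds to the old "dynamic" type,
# PolicyBased to "static". This is the step that takes a long time to provision.
New-AzureRmVirtualNetworkGateway -Name "overcastgw" -ResourceGroupName "demorg" -Location "West Europe" `
    -IpConfigurations $ipConfig -GatewayType Vpn -VpnType RouteBased
```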

This should take a while.

The last step is to establish the actual connection:

image
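The connection ties the Azure gateway to the local network definition with a shared key, more or less like this (the connection name and key are placeholders):

```powershell
$gateway = Get-AzureRmVirtualNetworkGateway -Name "overcastgw"    -ResourceGroupName "demorg"
$local   = Get-AzureRmLocalNetworkGateway   -Name "onpremnetwork" -ResourceGroupName "demorg"

New-AzureRmVirtualNetworkGatewayConnection -Name "overcastconnection" -ResourceGroupName "demorg" `
    -Location "West Europe" -VirtualNetworkGateway1 $gateway -LocalNetworkGateway2 $local `
    -ConnectionType IPsec -SharedKey "<the same shared key configured on your VPN device>"
```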

And there you have it!

image

Find the script here.

The next article will discuss copying storage from your legacy storage accounts to the new ARM storage.

Hope it helps!

Azure Resource Manager

Journey to ARM–Design and Migrate


I've recently committed myself to migrating my (now former) Azure classic VM environment to the new Azure Resource Manager model. I then found out that there is no easy or 'no downtime' way to do it. There is some documentation and there are some interesting projects around to help with that, like the ASM2ARM project. Since I wanted to learn how the sausage is made, I tried to come up with my own way, better or worse, so I took the Sinatra approach: I did it my way!

What is all that?

If you look at your current environment (and by that I mean in the new portal), everything is in a resource group already. Cloud Services get resource groups of their own, where you can see your old VMs along with a Cloud Service object:

image

In the new ARM model, a VM like this requires a few more items, like IPs, NICs, etc. The old model made a few things easier by deploying cloud services more or less automatically, but it wouldn't create clear relationships and dependencies between the objects. Before we dive in deeply, here is how I've planned my environment.

Planning

Yes, Azure is all about flexibility and having things ready to be used. However, you still need to know what you are doing! Surprise! So here is how I did it. Since it is all new, I'm probably wrong about some of it, but it is all about the learning.

The things I usually keep ready in my Azure lab are:

– A Domain Controller – This DC is part of a domain, split between on-premises and the cloud (connected through a VPN).

– System Center Servers – SCOM, SCSM, SCORCH, VMM.

– Other things – test machines, Linux boxes, websites, etc.

So, my first attempt at all this will be having a basic infrastructure resource group, with my Storage Account, my Virtual Network (and VPN connection), as well as the domain controller:

image

Everything else that I build, unless it requires something special, should point to this infrastructure for Storage and Connectivity.

For the System Center servers, I have created another resource group for all of them. One could argue that having separate ones would make things easier to manage later. That might be true. If the number of components were bigger, I'd probably go for that. In this case, most of the machines will have only a VM and a NIC resource:

image

I have added a Network Resource Group to one of the VMs basically to allow external access.

For the remainder, I will probably create separate small resource groups or maybe a one-size-fits-all RG called "other" or "miscellaneous".

And there you have it: my whole Azure environment is fully designed. Of course, this is a very simple environment, but it can get you started in the ARM way of thinking. In my next article, I will start with the basic connection between ARM and on-premises using a VPN gateway. Stay tuned!

 

Hope this helps!

Authoring SCOM

Yet another update to the Extended Agent Info Management Pack


I have recently updated my Extended Agent Info MP to include information about Operations Management Suite (OMS). I have now added a task to configure the agent to use OMS.

The new task shows in the tasks pane when you click on any agent or agents in the Extended Agents View:

image

Once clicked, you will need to override the name of the workspace (it should really be the GUID, I know) and the key for that workspace:

image

Once configured, click Override and then Run. Once completed, the agent (as long as it supports OMS, i.e. version 7.2 and higher) will be configured as below:

image
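For the curious, the heavy lifting a task like this does on the agent can be sketched with the Microsoft Monitoring Agent scripting API (this is not necessarily the exact code inside the management pack task, and the workspace values are placeholders):

```powershell
# Attach a 7.2+ agent to an OMS workspace; both values below are placeholders.
$workspaceId  = "<workspace ID (GUID)>"
$workspaceKey = "<workspace primary key>"

$agentCfg = New-Object -ComObject "AgentConfigManager.MgmtSvcCfg"
$agentCfg.AddCloudWorkspace($workspaceId, $workspaceKey)   # register the OMS workspace
$agentCfg.ReloadConfiguration()                            # apply the new configuration
```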

I have also fixed an issue where, once a management group was removed, the Monitoring Service wouldn't start. I found this great piece of code here from Matty T and incorporated it. Thanks, Matty!

The new version can be found here!

Automation Azure Orchestrator

Azure Automation versus Orchestrator


Fight!

A while ago, I created a solution using System Center Orchestrator for a customer. Although it was OK, most of the solution was built with PowerShell scripts; I leveraged Orchestrator basically to monitor a folder for certain types of files. Since this all runs locally in the customer's infrastructure, the first impulse was to use Orchestrator, since it is hard to leverage a cloud solution for on-premises data, right? Wrong!

Challenge accepted! Let’s try to do the same using Azure Automation!

The first part of the challenge is to mimic the basic Orchestrator side of the solution:

image

My first thought was to use the FileSystemWatcher object with PowerShell. However, this object is only active whilst your PowerShell session is open, making it impractical for automation. So, taking the simplest approach: polling for files every X minutes should do the trick, for now.

Now let's try to do the same in Azure Automation! Let's start with a variable asset holding the file path to be monitored:

image

Now for the Runbook:

image
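Since the runbook only lives in a screenshot, here is a minimal sketch of what a polling runbook like this could look like; the asset name FolderToWatch is just an example:

```powershell
# Read the variable asset holding the path to poll ("FolderToWatch" is an example name)
$folderToWatch = Get-AutomationVariable -Name "FolderToWatch"

# List whatever is sitting in the folder right now; this runs on the hybrid worker
$files = Get-ChildItem -Path $folderToWatch -File

if ($files) {
    foreach ($file in $files) {
        Write-Output ("Found file: {0}" -f $file.FullName)
    }
}
else {
    Write-Output "No files found in $folderToWatch"
}
```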

The testing runbook looks like this:

image

For testing, let’s use the test pane:

image

Notice it might be queued for a while.

After a bit, if you look at the results:

image

These are the two files in the folder, as expected.

The last step of this part is to add a schedule. For that, we'll create a schedule asset and assign this runbook to it. Don't forget to save and publish the runbook.

Update: So much to learn, so little time! I was just reviewing some articles on automation and there is a much better way to do this. Let’s try this again.

First we will create a Webhook for our dear published runbook:

image
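A webhook is just an HTTPS endpoint that starts the runbook whenever it receives a POST, so you can test it by hand before wiring up the Scheduler. The URL below is a placeholder; Azure shows the real one only once, at creation time:

```powershell
# Fire the webhook manually and capture the job ID(s) it queues
$webhookUri = "https://s2events.azure-automation.net/webhooks?token=<token>"

$response = Invoke-RestMethod -Method Post -Uri $webhookUri
$response.JobIds
```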

And make sure you set the run settings for a hybrid worker:

image

Now you’ll need to create a scheduler job. In the new Azure Portal, go to:

image

Set the Job collection:

image

Configure the Action settings:

image

And the (much more granular) schedule:

image

Note that the Scheduler Service is not free. If you need granularity, keep that in mind. Review this to check the pricing.

And there is my first run:

image

Then the second:

image

In the next article, I will finalize the execution of my script through Azure Automation to disable and enable jobs.

I intended to write a second post, but when I started on it, I realized it would be much simpler to add my second script (the one in the second PowerShell .NET activity in SCO) as a function in my main script than to make it a second Azure Automation runbook and have to handle remote authentication from the worker against Azure (certificates, publish settings file, etc.).

So, here's what I did. Here is the script as it looks in PowerShell ISE, with the main function (Process-File) collapsed:

image

As you can see, this does all Orchestrator was doing in the original workflow.

The main changes when running this through Azure Automation concern the authentication (how the variables are used) and keeping in mind that the script will run locally on a remote computer. Since you probably just moved the script straight from your local ISE, it shouldn't be too different!

My original lines looked like this:

image

Of course, I was using an Orchestrator encrypted variable. But with Azure Automation, I can have an asset that is a credential:

image

Neat, eh? Note also that I'm using Invoke-Command so I can run under these credentials, since I can't specify the credentials the hybrid worker runs the script as, and it would otherwise run as the local system. I had to do the same in Orchestrator anyway, since the native .NET PowerShell activity won't allow credentials to be set.
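In runbook code, that combination looks roughly like this; the asset name and folder are illustrative, not the ones from the customer script:

```powershell
# Pull the credential asset instead of an Orchestrator encrypted variable
$cred = Get-AutomationPSCredential -Name "ServiceAccount"

# The hybrid worker runs runbooks as Local System, so wrap the real work in
# Invoke-Command to execute it under the desired account instead.
Invoke-Command -ComputerName "localhost" -Credential $cred -ScriptBlock {
    # Process-File and the rest of the original script would run here
    Get-ChildItem -Path "C:\DropFolder"
}
```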

So, let’s test this!

image

And done!

image

I started the job at about 1:56 pm, so I should have a log file in the local folder of the worker machine:

That looks about right!

image

And the contents are consistent with the purpose of the script:

image

After enabling the scheduler, the runbook ran and generated output (and logs) whenever a file was found in the folder:

image

So, although the script may be slightly different, this solution required no Orchestrator, no SQL and no installation at all, besides the hybrid worker. Let's go, Azure Automation!

Hope this helps!

Authoring SCSM

SCSM–A tale of two customizations


This is a story with a happy ending. Not all Service Manager Data Warehouse customization stories end the same way. Embark with me on this incredible journey!

Once upon a time, there was a System Center Service Manager installation that required some customizations. A reference to a list of customers and to the scope of the customizations was required. And so it happened. Management Pack 1 was created and implemented. Victory!

Months later, filled with courage, a new modification was requested. This time, with new properties and interface customizations. And so it happened. Management Pack 2 came to be.

The previous endeavors, however, were not enough, and more customizations were required. To accomplish them, not only did new fields need to be created, but the interface had to be modified. As you may or may not know, Service Manager will only support a single management pack with customizations for a given form. In light of that, and considering the prior customizations were not being used fully, it was agreed that part of the previous modifications, MP2, was to be removed and redesigned. And that's when the problems began.

As soon as MP2 was removed and the Data Warehouse jobs started to run, errors showed up:

clip_image002

clip_image004

clip_image006

And the funny part is that the errors referred to columns defined in MP1 (not MP2, the one that was removed) as invalid. That puzzled all the wizards of the realm. Again, since the service requests were not being used fully, it was decided that MP1 and all related MPs were to be removed, to start fresh.

However, that didn't help. The errors continued, and no data was being loaded into the data warehouse, not even for other types of work items.

In the past, the villagers had asked for the help of the mighty gods of Service Manager and many times had heard: “Thou shalt drop the data warehouse and attach a new one”.

Fearing the worst, the frightened SCSM admin decided to re-import both management packs. And reboot the DW servers, of course. That had a strange effect: now the load jobs would run, but there were still no transform jobs. Even weirder, the properties generating the errors were now the ones defined in MP2:

image

Despair and fear took over the administrator. The council had already ruled that the best course would be to sacrifice three years of data to save daily operations and reporting. In a last, desperate attempt, the SCSM admin removed all the MPs once more, rebooted the Data Warehouse server, poured a glass of the finest ale and waited.

And then the miracle came. No more errors in the data warehouse jobs. All fully synchronized. All the data backlog (8 days) was synchronized successfully. A great relief took over the NOC and all the analysts danced, feasted and drank (soft drinks) all night to celebrate. And they lived happily ever after.

It might not be something that you can do in your installation, but it fixed our problem with the data.

True story.

Hope this helps!

PS: an old enemy was lurking in the dark: the Cubes! But that is another story.

Automation Azure OMS Powershell

Using MSOMS Alerts with Remediation Runbooks


 

Microsoft recently put the Operations Management Suite Alerts feature into public preview. The official announcement is here.

One of the greatest features, along with alerting itself, is the possibility of triggering Azure Automation runbooks to remediate issues found by the alerts.

First of all, make sure you enable the feature since it is a preview:

image

Let's create a simple alert that is sure to be triggered, in order to have some data. Suppose I want to be alerted when computers talk to more than 5 remote IPs. OK, I know, it doesn't make much sense, but I want a query that is certain to bring back some data, and not too much of it.

For example:

Type=WireData Direction=Outbound | measure count() by RemoteIP

Got some interesting numbers:

image

Now, let’s save this search, for future use:

image

After that, we can create an alert:

image

Notice you can pick the current search or a previously created search.

Next, you will need to pick a threshold and the time window for the query. It can't go back further than 60 minutes.

image

Notice also that OMS gives you a preview of the results. I love that!

Select the Subject and Recipient of the notification, should you need one, as below:

image

The next step is to set up some remediation:

image

If you look in the new Azure portal, you will notice a webhook has been created:

image

If you want your remediation to run on-premises, via a Hybrid Worker, you will need to set it up here:

image

And there you have it. Once the alert is triggered, you will see the log:

image

Notice the Input:

image

And there is your data, in a JSON format:

image

image

Now you can grab the data using the standard runbook procedure, as described here.
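Inside the remediation runbook, that JSON arrives through the WebhookData input parameter. A sketch of unpacking it is below; the property names follow the preview payload, so check them against the Input shown above before relying on them:

```powershell
# Remediation runbook skeleton: unpack the OMS alert payload
param (
    [object]$WebhookData
)

if ($WebhookData -ne $null) {
    # The alert details travel as JSON in the request body
    $requestBody   = ConvertFrom-Json -InputObject $WebhookData.RequestBody
    $searchResults = $requestBody.SearchResults.value   # confirm the property names against your payload

    foreach ($result in $searchResults) {
        Write-Output ("RemoteIP {0} was seen {1} times" -f $result.RemoteIP, $result.AggregatedValue)
    }
}
```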

 

Hope this helps!

Azure Operations Management Suite SCOM

Updated Extended Agent Info Management Pack


A while ago, I wrote this article to help with SCOM side-by-side migrations from SCOM 2007. With the new Operations Management Suite wave and the possibility of agents reporting to an OMS workspace independently, visualizing which agents have been configured for OMS and/or have the OMS direct agent installed seems like something that will be useful.

So, I have updated the management pack and it can be found here.

The basic difference is that you can see more information in the view:

image

As you can see above, some agents report to multiple management groups as well as an OMS workspace.

Next steps in my backlog are tasks to configure an agent that already has the OMS agent installed (enable, disable, change workspace) and perhaps even to upgrade an agent with the OMS binaries.

 

Hope this helps.

Azure Operations Management Suite

Importing Saved Searches into your OMS workspaces


In my previous article, I showed how to extract saved searches from your Azure OMS workspaces. With that file in hand, you can use the script below to import the results into another workspace.

First, get hold of the script here. Also make sure you have all the prerequisites mentioned in the previous article.

When you first run it, you will be prompted for authentication:

image

If you have multiple tenants, you will be asked which tenant to use:

Then you will be asked about the subscription:

image

Next, you should be prompted about the workspace you would like to target:

image

And finally, you will be asked for the file to be imported, which was exported with the script from the previous post:

image
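Under the hood, the import boils down to one armclient call per saved search in that file, roughly along these lines (the names, search ID and api-version are illustrative and may not match the actual script):

```powershell
# Illustrative only: push a single saved search into the target workspace with armclient
$subscriptionId = "<subscription ID>"
$resourceGroup  = "<workspace resource group>"
$workspace      = "<target workspace name>"
$searchId       = "importedsearch01"   # ID of the saved search being imported

$body = @{
    properties = @{
        Category    = "My Category"
        DisplayName = "Computers talking to many remote IPs"
        Query       = "Type=WireData Direction=Outbound | measure count() by RemoteIP"
        Version     = 1
    }
} | ConvertTo-Json -Depth 3

# Write the body to a file and let armclient pick it up with the @file syntax
$body | Out-File -FilePath "search.json" -Encoding ascii
armclient put "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.OperationalInsights/workspaces/$workspace/savedSearches/$($searchId)?api-version=2015-03-20" "@search.json"
```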

Once the script is done, you should see a new category in your workspace:

image

And that is it!

Hope this helps!

Azure OMS Operations Management Suite Uncategorized

Exporting Saved Searches from your OMS workspaces


I have been studying OMS for a while now and although there is gradually more and more content about it, here’s another piece of code that can help you with your daily OMS management.

If you don’t know what OMS is, go here.

If you do, you may know that you can save searches that you find interesting and even add them to your workspace for future or daily use.


The problem comes when you need to move your searches to another environment. You don’t want to create hundreds of queries manually in the portal.

Enter PowerShell. You can find the documentation on the initial setup here. With a great head start from Richard Rundle from Microsoft, I have completed the script to export the saved searches.

Once you have Chocolatey and armclient configured, you can go ahead and use the script below.

Here’s a little walkthrough.

As soon as you run it,

image

you will be prompted by the login screen:

image

If you are like me, using a user that has access to multiple tenants, you'll be prompted for the tenant:

image

You will then be prompted for the subscription:

image

The script will show you a list of queries you may want to extract and then extract the ones that match certain criteria specified in the script:

image

The criterion is the name of the category:

image

And as you can see, the queries listed afterwards match that category only:

image

The script will also create a file named after the search category:

image

image
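For reference, the core of the export is a single armclient call that returns every saved search in the workspace as JSON; the script then keeps the ones in the chosen category and writes them to the file. A rough sketch (names and api-version are illustrative):

```powershell
# Illustrative only: list all saved searches in a workspace and keep one category
$subscriptionId = "<subscription ID>"
$resourceGroup  = "<workspace resource group>"
$workspace      = "<source workspace name>"
$category       = "My Category"   # the category the script filters on

$raw = armclient get "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.OperationalInsights/workspaces/$workspace/savedSearches?api-version=2015-03-20" | Out-String

$searches = (ConvertFrom-Json $raw).value |
    Where-Object { $_.properties.Category -eq $category }

# Save the filtered searches to a file named after the category, ready for the import script
$searches | ConvertTo-Json -Depth 5 | Out-File "$category.json"
```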

Keep that file handy, since we are going to use it in the next article, to import the searches into another environment.

You can find the script here.

 

Keep on rocking in the cloud world!