Uncategorized

Overcast.info


Recently I decided to present my blog more as a brand than under my actual name, so I have created a new domain and a new Twitter account. If you like the work I have been doing over the last few years, please update your links to http://overcast.info and follow me at @overcastinfo. All the technical content will be there.

If you don’t like it, let me know why and I will try to make it better!

 

Thank you,

Authoring SCSM

SCSM–Using PowerShell to Create DW Outriggers and Dimensions


image

Update (Dec 3rd, 2015): I found a noob/doing-this-late-at-night bug in the script and fixed it. The option to create a dimension (or not) now works. Nothing like eating your own dog food…

Certain technologies look so complex when we first approach them that, as Arthur C. Clarke famously said, they are indistinguishable from magic. However, as history has proven, if you spend time studying and understanding them, these phenomena stop being so mystifying. For me, although I have been working with SCSM for a while, sending custom information to the Data Warehouse was always something I would avoid as much as I could. Often enough, though, there is no way around it.

So, I have decided to face the challenge and not only produce one-offs. I wanted something I could use multiple times.

With that said, I’m not covering all possibilities here. Service Manager has endless opportunities for expansion, which won’t always be easy to implement. Here’s the scenario I wanted to have ready, since it happens often enough in SCSM Implementations:

– The customer needs a new field on a form, typically Incident or Service Request: something like a list of customers, internal departments, etc.

– You use the Authoring Console to create the new fields and lists

– Once the new (sealed) MP is imported, you can see the text fields in the warehouse DB, but all the lists (enumerations) show up as GUIDs.

To fix this, Service Manager requires you to create what is called an outrigger, very well described here. It is, however, a dry subject and it takes a while to sink in.

Let’s look at an example:

Here’s my new class property, on top of incidents. I actually have a couple more, but I’m focusing on the Clients one:

image

If you look at the database (data warehouse):

image

Notice Clients is there, but:

image

Not too helpful. Enter the outrigger:

image

It is actually simpler than it looks, but can be confusing. The key part is the line:

<Attribute ID="Clients" PropertyPath="$Context/Property[Type='ref1!ClassExtension_ced5b84f_54a9_4ff5_b681_0d071e879d94']/Clients$" />

To compose that line, you need to make sure you have a reference (called ref1 here) to the MP you used to extend the class.

image

You need to know the PublicKeyToken as well to write this manually.

You also need to know the name of the class extension (of Incident, in this case) from your original MP:

image

Fortunately, you can get all this information using PowerShell and that’s exactly what I did in my script.
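To give an idea of where those pieces come from, here is a minimal sketch, assuming the community SMLets module is installed (the actual script does more and discovers the class for you; the class name below is just the example from this post, so replace it with your own):

# Minimal sketch, assuming SMLets is installed. Replace the class name with your extension class.
Import-Module SMLets
$class = Get-SCSMClass -Name 'ClassExtension_ced5b84f_54a9_4ff5_b681_0d071e879d94'
# The MP that defines the extension gives you the reference (alias) details
$mp = $class.GetManagementPack()
"MP name:          {0}" -f $mp.Name
"Public key token: {0}" -f $mp.KeyToken
"MP version:       {0}" -f $mp.Version
# Enumeration (list) properties defined on the extension are the outrigger candidates
$class.PropertyCollection | Where-Object { $_.Type -eq 'enum' } | Select-Object Name, Type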

The script also lets you generate a dimension, which is essentially a new composite class in the DW. There is a flag at the beginning of the script to control this.

Once generated, the script will also try to seal the MP according to the configuration in the header of the script:

image

Once you run it, it will prompt you for the class you need to create the outrigger from:

image

It looks a bit messy, but the PowerShell grid view allows you to search quickly, so note the Identifier column on the right:

image

Once you select the class and click OK, you'll be prompted for the field or fields you want to turn into outriggers:

image

Again, a bit messy to look at, but effective.

And there you have it.

Now you have a new table in the database (DWDatamart):

image

That you can relate by the EnumTypeId field:

image

And that you can use for parameters in Reporting Services. Neat, huh?

You can find the script here!

 

Points for improvement: there is a lot more that could be done, including fact tables for relationships, more options, error control and cube data (next version), but this is version 1.0 and I think it will help with a common, simple task.

A few tips: once you import your new class, make sure you synchronize the DW once before importing the outrigger extensions. You can use this script from Travis Wright to speed up the process. Also make sure you run the script in a Service Manager PowerShell window, as administrator. If FastSeal fails for some reason, you HAVE to fix it in order to seal the MP; you can't use unsealed MPs in the Data Warehouse.

 

Hope this helps!

SCOM

Configure the SQL Agent Job Monitor in Operations Manager


I had to configure the SQL Agent Job Monitor for a customer and had some very interesting experiences while doing it.

The first thing to be aware of is that jobs are not discovered by default. I believe Microsoft did that to avoid spending unnecessary cycles, since not all jobs are critical enough to monitor.

If you want to enable the discovery of the jobs, you will have to first go to the Authoring area and scope as below:

image

Once you do that, your options should be as below when you select Discoveries on the Left side:

image

To enable the discovery, apply an override to the SQL 2008 Agent, the SQL 2012 Agent, or both:

image

In this case, I created a group containing the SQL Servers whose jobs were important to monitor and applied the override to that group only. If you prefer to script this, see the sketch below.
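This is a hedged sketch using the OperationsManager module: the display names are examples from my environment, and you should check Get-Help Enable-SCOMDiscovery for the exact parameter set in your SCOM version.

# Enable the SQL Agent Jobs discovery for a group of SQL servers,
# storing the override in an unsealed override MP (display names are examples).
Import-Module OperationsManager
$overrideMp = Get-SCOMManagementPack -DisplayName 'SQL Server Customizations' | Where-Object { -not $_.Sealed }
$group = Get-SCOMGroup -DisplayName 'SQL Servers with Monitored Jobs'
Get-SCOMDiscovery -DisplayName '*Agent Jobs*' | Enable-SCOMDiscovery -Group $group -ManagementPack $overrideMp -Enforce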

Once the discovery runs, you should see the Agent Job State being populated:

image

image

Once you have the SQL Agent jobs discovered, the monitors will run by default, both Last Run Status and Job Duration.

image

That’s when the fun starts. These are very peculiar monitors. Let’s take a look at each aspect of them.

1. The monitors are enabled by default, but don’t generate alerts. If you are looking to have alerts from them, you will need to apply an override:

image

2. For the Last Run Status monitor, the default behaviour is to send alerts (if you enable them) when the monitor is in a critical state. But surprise, this monitor never goes into a critical state!

image

So, even if you override it to send alerts, you will need an extra override for it to actually work:

image

3. Although the default value of the Alert Severity property is set to Critical, when you get an alert it will be a Warning alert, not Critical. It's not clear to me why, since all the configuration seems OK. If you really want the alert to be critical, you'll need another override:

image

It really seems redundant, but it fixes the problem.

4. It seems that the Auto-Resolve Alert property also doesn't work as expected. I reset the health of the monitor and the alert closed by itself, which I wouldn't expect with Auto-Resolve Alert set to false. The likely reason can be seen in the monitor properties:

image

So, if you want to change that, you will need to force the override below:

image

Once you get the alerts, they are a bit cryptic and not very informative:

image

image

For the SQL admin, though, those steps will likely make sense.

 

Hope this helps!

Azure Backup

Azure Backup–Restoring VM backups


In my previous article on Azure Backup for Azure VMs, I had set up backup for my Azure SCOM VM.

Once the scheduled time came, there it was:

image

The details of the job give us a bit more insight into what was done:

image

Great! Now, as with any backup solution, it is important to verify that it works and to know how (and what) to restore, should the unlikely situation arise.

In that window, nothing can be done. If you now go to the protected items window, you can see the (magic!) Restore button:

image

Also notice that you can back up the VM immediately from here by clicking on "Backup Now".

But let’s try a restore.

image

I can then select a recovery point in time. Since I have only one, it's an easy choice. At least there is one… Once I click Next, I'm presented with a new screen asking for information about the new VM that will be created. So, it seems you don't have to worry about overwriting your current VM:

image

Once I fill in the necessary data:

image

The process starts:

image

And after 40 minutes:

image

There is our VM!

image

Make sure your previous VM is turned off, since the new VM will be exactly like the original one; you will get all sorts of networking conflicts if you restore it to the same network with the same name, SID, etc.

 

Hope this helps!

Azure

Azure Online backup–Virtual Machines


Since storage is one of the most attractive aspects of cloud computing, when Microsoft announced Azure VM backup it immediately rang a bell (or several). So, here's my experience configuring it for my humble laboratory.

Let’s start with the basics. Here’s where you will find the documentation.

The first step to configure Azure VM Backup is to discover the virtual machine. For that you have to navigate to the portal, under Recovery Services and click on Registered Items:

image

Once there, click on Discover in the lower bar:

image

The next step is to register your VMs. In my case, when I clicked on the Register button, I got nothing. I assumed it was because all my VMs were stopped, but that was not the case. For some crazy reason, I had created this vault in another region. So, a new vault shall be created!

image

Now we are cooking:

image

So, I selected my SCOM machine (ascom).

image

For the last main step, you have to actually Protect the VM. Once you click on Protect, you are presented with the VMs that you had registered before:

image

You may define your backup policy:

image

And a retention policy:

image

For testing, I have just selected as below:

image

The job then comes to life:

image

Summary: the process to set up the backup is pretty straightforward. I will be back once I have something to restore.

Hope this helps!

Azure Resource Manager

Using JSON Edit to edit my ARM Templates


I have just seen a tweet about JSONEdit and decided to give it a try. It can be a bit challenging to edit JSON using Visual Studio: although you can see the tree, you can't really interact with it, like copying and editing content. My expectation is that I will be able to do that with JSONEdit.

You can download it from here: http://tomeko.net/software/JSONedit/

Installation… well, there was no installation, actually. When I ran it, I got this very screamy warning page:

image

I believe I can trust it, so, let’s run it anyways.

And there you have it. Let's try opening some files. I have the files I used for my previous ARM articles. On the first try, I got this error:

image

And it really seems that there is something funny there:

image

Even after removing the characters, it still doesn't recognize this as a JSON file when asked to reformat the code:

 

image

Let's try editing it with VS and then pasting it directly into JSONEdit. Once I did that, I could easily see the tree:

image

The nice thing about the tree here is that you can edit the content:

image

You can also reorder the nodes and, even better, copy and paste (as child or sibling):

image

Once you have made your changes, you can paste it back into Visual Studio for testing and deployment.

In summary, it seems like a nice tool to have if you want to make sure you are replicating whole sections or want to better visualize some variables and resources. It seems Microsoft stores extra data in its JSON files that JSONEdit doesn't fully understand, but it is still good to have around when you are editing ARM templates.

 

Hope this helps!

SCOM

SCOM Event Log monitoring–Event Source vs EventSourceName


This is an old subject and EVERYBODY should know how to create an alerting rule that detects a certain event and triggers an alert. However, the way things are laid out in SCOM can make your daily life difficult. I just ran into an issue yesterday that was giving me (more) gray hair.

The requirement was simple: detect abnormal BSODs or power-related shutdowns. Easy as pie, right?

The events are fairly easy to pinpoint. Say, for example, event ID 1001:

image

Cool. All you have to do is create an alerting rule in SCOM with the following criteria:

image

image

Event ID: 1001

Source: BugCheck

image

Right?

Wrong!

Here's what I experienced. When testing the rule, I ran a simple PowerShell command to create a fake event:

Write-EventLog -LogName System -Source "BugCheck" -EntryType Error -EventId 1001 -Message "This is a test message."

The event is pretty similar:

image

That should have triggered my alert. It didn't, however. Since I 'faked' the event, the message shows a bit more than just 'This is a test message.':

image

Now, notice this:

image

Why does it say the source is Microsoft-Windows-WER-SystemErrorReporting when the source is supposed to be “BugCheck”?

So, I’ve decided to change the rule to:

image

Bingo! Now the alert was generated correctly. In summary, what SCOM matches as the event source is the Provider Name recorded in the event details, not necessarily the Source you see in Event Viewer. The same applies to Kernel-Power, for example:

image

Now the reason for that is in the details of the event:

image

In fact, the EventSourceName is 'BugCheck'. The Provider Name is what SCOM considers the source, as you can see above and below:

image
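If you want to see both values for yourself outside SCOM, a quick look at the raw event XML with PowerShell shows them side by side (a small sketch; the commented values are what I would expect for event 1001):

# Grab the most recent 1001 event from the System log and compare the Provider Name
# (what SCOM treats as the event source) with the EventSourceName attribute.
$evt = Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 1001 } -MaxEvents 1
$xml = [xml]$evt.ToXml()
$xml.Event.System.Provider.Name             # Microsoft-Windows-WER-SystemErrorReporting
$xml.Event.System.Provider.EventSourceName  # BugCheck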

The way to work around it, if you want to match on the EventSourceName, is to use a custom field. Notice that SCOM doesn't provide a native 'EventSourceName' option:

image

You can then use:

image

And there you have it!

 

Hope this helps!

Uncategorized

Operations Manager 2016 TP 3–What is new


Recently Microsoft made System Center 2016 TP3 available, along with a Windows Server 2016 preview. This post will evaluate the installation experience and what has changed in TP3.

Let's start by installing it. In order to do that, I have conveniently deployed a VM with SQL in Azure (Windows Server 2012 R2 in this case, just to have SQL pre-deployed). Ideally, Windows Server 2016 would be the OS, but then I would need to install SQL Server too, so I decided to keep the demo focused.

After downloading, you get the usual:

image

After extracting the files, setup looks like this:

image

I think I’ve seen this before…but the splash has something new:

image

Let’s install all the roles:

image

I have purposely not installed any prerequisites, so it complains about what seem to be the same things as before:

image

SQL 2012 components still seem to be required:

image

Let’s add the required web roles:

Web-Server,Web-WebServer,Web-Common-Http,Web-Default-Doc,Web-Dir-Browsing,Web-Http-Errors,Web-Static-Content,Web-Health,Web-Http-Logging,Web-Custom-Logging,Web-Request-Monitor,Web-Performance,Web-Stat-Compression,Web-Security,Web-Filtering,Web-Basic-Auth,Web-Windows-Auth,Web-App-Dev,Web-Net-Ext,Web-Net-Ext45,Web-Asp-Net,Web-Asp-Net45,Web-ISAPI-Ext,Web-ISAPI-Filter,Web-Mgmt-Tools,Web-Mgmt-Console,Web-Mgmt-Compat,Web-Metabase,Web-Scripting-Tools,Web-Mgmt-Service,NET-Framework-Features,NET-Framework-Core,NET-HTTP-Activation,NET-Framework-45-Features,NET-Framework-45-Core,NET-Framework-45-ASPNET,NET-WCF-Services45,NET-WCF-HTTP-Activation45,NET-WCF-TCP-PortSharing45

You can split this list, feed it to Get-WindowsFeature and pipe the result into Install-WindowsFeature, as sketched below.
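Something like this, run from an elevated PowerShell window (paste the full comma-separated list from above into the variable; it is truncated here for readability):

# Install all the required roles and features in one shot.
$featureList = 'Web-Server,Web-WebServer,Web-Common-Http'   # ...continue with the full list above
Get-WindowsFeature -Name ($featureList -split ',') | Install-WindowsFeature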

After installing the pre-requisites:

image

Let’s create a new MG:

image

Standard SQL:

image

image

A note on the Azure pre-deployed SQL images: SSRS is not configured by default, so you will need to go through that; the standard configuration will do. Also, the SQL Server Agent service is set to Manual and stopped, so make sure you start it and set it to Automatic.
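For the SQL Server Agent part, a quick sketch (this assumes a default SQL instance, where the service name is SQLSERVERAGENT; named instances use SQLAgent$<InstanceName>):

# Set the SQL Server Agent service to start automatically and start it now.
Set-Service -Name 'SQLSERVERAGENT' -StartupType Automatic
Start-Service -Name 'SQLSERVERAGENT'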

image

image

image

The accounts screen is the same. Some disclaimers:

image

And there we go!

image

Well, I have to say it is more or less the same. Even compared to TP2, there is nothing really new. The official list of changes is in fact short: https://technet.microsoft.com/en-us/library/dn997273.aspx

But here is my due diligence.

Hope this helps!

Azure Resource Manager

Azure Resource Manager–Step 3–The Load Balancer


If you checked my previous articles about the two VMs with external IPs, you may have noticed that both VMs get an external IP and that there is no TCP port restriction on them. That is not likely to be the normal situation. Very commonly, you will want something balancing the load between those two identical machines, as well as some control over which ports can be accessed. To accomplish that, we will first create a SINGLE public IP and then attach it to a load balancer.

First things first. The Public IP configuration. What I will do is remove the loop and make it a single public IP. This is what I had:

image

Now, after changing:

image

I have also changed the variable names to better represent what we need (names, not prefixes).

image

Next, I will remove the reference from the NICs, since the VMs themselves won't have public IPs:

image

However, you will need to add a dependency on the load balancer, along with the assigned NAT rules and the backend LB pool:

image

Second, we should add the load balancer itself. This one is a tough cookie, so let's take the "Jack the Ripper" approach and cut it into pieces.

But first, let’s take a look from a high level. Here’s the skeleton of the beast:

image

Important information:

1. "type": "Microsoft.Network/loadBalancers" –> sort of obvious.

2. "dependsOn": ["[concat('Microsoft.Network/publicIPAddresses/', variables('PublicIPName'))]"] –> it needs the external IP to work.

3. "frontendIPConfigurations" –> contains the name of the external LB IP and a reference to the external IP we created before.

image

4. "backendAddressPools" –> this configuration will have the name and the backend IP addresses. In this case, the names are sort of hardcoded (allowing only two IPs).

image

5. "inboundNatRules" –> as the name states, this will create NAT rules to allow certain protocols through the load balancer. This used to be done with a cloud service in the old service model.

image

Notice that I'm basically mapping ports 50001 and 50002 to 3389 on the respective internal VM IPs, through the same external IP.

6. "loadBalancingRules" –> here's where you'll define which ports (services) will be load-balanced:

image

7. "probes" –> and finally, how to detect the availability of the load-balanced services:

image

I have also added an availability set, so I can get the guaranteed 99.95% availability SLA:

image

Its location:

image

And assigned the VMs:

image
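For reference, this is roughly how I push the template to Azure from PowerShell. A hedged sketch: the cmdlets are from the AzureRM module of that era, and the resource group and file names are just examples.

# Create (or reuse) a resource group and deploy the template into it.
New-AzureRmResourceGroup -Name 'overcast-lb-demo' -Location 'East US'
New-AzureRmResourceGroupDeployment -ResourceGroupName 'overcast-lb-demo' -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json -Verbose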

Once deployed, you’ll hopefully see this:

image

And this:

image

 

Now for some quick testing. Let's deploy IIS to both VMs, change the default website and test the LB. Notice that because I have the inbound NAT rules in place, I can connect (RDP) to each VM:

image

image

Just accept the next prompt and there you are:

image

Let’s add IIS to both VMs:

Add-WindowsFeature Web-Server,web-mgmt-console

And add something to identify each one of the VMs:

image

image

Now, when opening the page from the outside:

image and

image
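You can also hit the balanced endpoint from the command line. A small sketch, assuming the DNS label I gave the public IP (replace the URL with yours):

# Request the page a few times; with each VM serving a slightly different page,
# the responses should alternate between the two backends.
1..4 | ForEach-Object { (Invoke-WebRequest -Uri 'http://overcastlbdemo.eastus.cloudapp.azure.com' -UseBasicParsing).Content }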

So! That concludes our tutorial! You can find the template here.

I hope this helps!

Azure Resource Manager

Azure Resource Manager– Step 2–Copy and Public IPs


Previously on Overcast: Azure Resource Manager–First Steps

For my next trick, I will try to deploy two VMs on the same VNET and add external access endpoints. For that, we will require two new items for our collection: loops and external endpoints.

Let’s start with the loops.

Loops are implemented by the copy directive:

image

For the number of instances, as you may have noticed, I have created a variable to make things easier.

The interesting part is that the index of the copy is available for you to use, for example, in the name of the objects:

image

Same for the VMs:

image

Also note that you need to think in loops, so each iteration will create a dependency on its specific NIC:

image

And each VM needs a different VHD:

image

OK, so let’s give it a spin:
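The spin itself is just a template deployment from PowerShell. Here is a hedged sketch (AzureRM-era cmdlets, example names) that validates the template first and then deploys it:

# Validate the template and parameters before actually deploying.
Test-AzureRmResourceGroupDeployment -ResourceGroupName 'overcast-arm-demo' -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json
# If validation comes back clean, run the real deployment.
New-AzureRmResourceGroupDeployment -ResourceGroupName 'overcast-arm-demo' -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json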

image

Looks OK:

image

And it seems fine!

image

and here:

image

Now, the next step is to add an external IP to the VM. The IP is actually added to each NIC.

When you use the wizard, it is not smart enough to know you are using copy and creating multiple interfaces, but it is a good start. Again, you have to think in loops and variables, so you’ll need to change a few things.

First, the name:

image

I've created a publicIPName variable and added the copyindex, so the names will be like <publicIPName>0, <publicIPName>1, etc.

I have also created a DNS prefix instead of a name, since I will add a copyindex to it:

image

Remember the rules: the DNS name must be unique and lowercase.

The last part is to assign the IP to a NIC. First the dependency:

image

Then the actual IP:

image

After the deployment finishes, here’s what you get:

image
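You can also check the result from PowerShell. A quick hedged sketch (AzureRM-era cmdlet, example resource group name) that lists the public IPs and DNS names the loop created:

# One public IP per VM should show up, named <publicIPName>0, <publicIPName>1, ...
Get-AzureRmPublicIpAddress -ResourceGroupName 'overcast-arm-demo' | Select-Object Name, IpAddress, @{ Name = 'Fqdn'; Expression = { $_.DnsSettings.Fqdn } }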

And since I have an external IP, I can even RDP to the VM:

image

Summary

– We have learned how to use the copy directive to create multiple instances of an object

– We have learned how to add public IPs to the VMs

You can find the final Deployment file here.

Hope this helps!