Monitor your Logic Apps with Log Analytics (Preview)

This is a continuation of the blog series that fellow MVP Billy York and I are putting together on the journey from on-prem SCORCH to automation in Azure. In this post we will look at a new preview Log Analytics solution that will help you monitor and respond to the issues that will invariably happen with your Logic Apps.

Logic App Management lets heavy users of Logic Apps get both ‘at-a-glance’ updates on the success or failure of their apps and the ability to drill down deeply into individual runs for details such as start/stop time, execution duration, action durations, etc… It is especially useful when you have a very large set of Logic Apps. Getting a single pane of glass across dozens or hundreds of Logic Apps is difficult, and that is exactly what this exciting preview feature provides.

To add this solution to Log Analytics, you will need…..wait for it…..a Log Analytics workspace! Your existing Log Analytics workspace will do just fine – no need to create a new one. In fact, there are a lot of reasons why you wouldn’t want to! Once you have an eligible workspace, it’s time to add the solution. You can get to this either by adding a resource to the resource group, or by adding the solution directly from the workspace – both will start the same wizard.

One option to add the solution is to do it directly from your Log Analytics workspace.
Searching for the solution…
The actual solution is still in preview, but is very stable.

The actual wizard is very simple – just pick your workspace and click create. Once you have the solution provisioned, you can turn it on or off per Logic App by selecting “Diagnostic settings” on the Logic App and editing the setting below – if you want to turn it off, just uncheck the “Send to Log Analytics” setting.
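If you would rather script this than click through the portal for every Logic App, a minimal Az PowerShell sketch might look like the following – the resource group, Logic App, and workspace names here are placeholders, so substitute your own:

# Grab the Logic App and the Log Analytics workspace (names are examples)
$logicApp  = Get-AzResource -ResourceGroupName 'MyResourceGroup' -ResourceType 'Microsoft.Logic/workflows' -Name 'MyLogicApp'
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName 'MyResourceGroup' -Name 'MyWorkspace'

# Send the WorkflowRuntime diagnostic logs to the workspace
Set-AzDiagnosticSetting -ResourceId $logicApp.ResourceId `
    -WorkspaceId $workspace.ResourceId `
    -Category 'WorkflowRuntime' `
    -Enabled $true

Unchecking “Send to Log Analytics” in the portal is equivalent to flipping -Enabled to $false here.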

For this article, I set up a couple of simple Logic Apps – both of them query RSS feeds, but one has a good feed URL and the other doesn’t. I varied their start times a bit just to show some contrasting data.

The good logic app…..
And the bad one – this URL doesn’t actually work.

I let these apps run for a couple of days. When I look at my Log Analytics workspace, I am greeted with this new tile, and can drill down into it to get a nice Logic App run overview:

New tile!
And some awesome graphs!!!

This is a great overview screen for your apps. Notice that the central graph renders slightly off – I have tried it in multiple browsers with multiple themes, and it shows the same in each. Hence the ‘preview’ moniker on the solution, I guess. Regardless, at a glance I can see the Logic Apps that have succeeded or failed, get a list of the errors, and see a visual comparison of the status of my apps. Clicking on the ‘Runs’ tab brings up a bit more detail, and if I click on the “See All” link in the bottom left of each tile, I get a familiar looking interface 🙂

A bit more detail……
Our old friend – Kusto!

For anyone wanting to know the actual query, here it is:

AzureDiagnostics
| where Category == "WorkflowRuntime"
| where OperationName == "Microsoft.Logic/workflows/workflowRunCompleted"
| join kind = rightouter
(
    AzureDiagnostics
    | where Category == "WorkflowRuntime"
    | where OperationName == "Microsoft.Logic/workflows/workflowRunStarted"
    | where resource_runId_s in (( AzureDiagnostics
    | where Category == "WorkflowRuntime"
    | where OperationName == "Microsoft.Logic/workflows/workflowTriggerCompleted"
    | project resource_runId_s ))
    | project WorkflowStartStatus=status_s, WorkflowNameFromInnerQuery=resource_workflowName_s, WorkflowIdFromInnerQuery=workflowId_s, resource_runId_s
)
on resource_runId_s
| extend WorkflowStatus=iff(isnotempty(status_s), status_s, WorkflowStartStatus)
| extend WorkflowName=iff(isnotempty(resource_workflowName_s), resource_workflowName_s, WorkflowNameFromInnerQuery)
| extend WorkflowId=iff(isnotempty(workflowId_s), workflowId_s, WorkflowIdFromInnerQuery)
| summarize Count=count() by WorkflowId, WorkflowName, WorkflowStatus

The benefits of having this data in the “Gateway Drug” (trademark pending) of Azure – Log Analytics – should be obvious. Want to export your Logic App data to Power BI, or, even better, set up alerts when a Logic App fails? Combine your Logic App data with feeds from AAD, Graph, Office 365, etc…. Having all of this data in an easily queried repository is priceless.

Automation with Azure Event Grid

This is another post in a series of posts by fellow MVP Billy York and myself on migration from on-prem automation to Azure. Check out his post here to see the full list.

One of the challenges that needs to be addressed when moving off an on-prem tool such as Orchestrator is how to trigger your automations. Almost all of the tools individually have a method to call them remotely – things such as webhooks, watchers, API endpoints, etc… In this post I would like to highlight how another Azure tool – Azure Event Grid – can prove to be useful for correlation and centralization. If you need a quick refresher on Event Grid, check out this post. I’m not going to dig into the concepts of Event Grid, but I will walk through how to do a quick setup of Event Grid and get it ready to trigger your workflows.

The first thing we are going to do is create an Event Grid Topic – go to the appropriate resource group, and create a new resource – pick Event Grid Topic, and click ‘Create’.

Specify the event topic name, subscription, resource group, location, etc… The deployment will take a minute or two.

When it’s created, you should see something like this. Take note of the “+ Event Subscription” and the Topic Endpoint.

The topic endpoint is important – this is where you forward events from your on-prem resources so that Event Grid can pick them up. This URL accepts a JSON payload (events up to 64kb are covered in the generally available version, with larger events billed in 64kb increments – a 1mb version is in public preview now). Because I am who I am, I wrote a quick PowerShell script that could be used by your resources as a quick and dirty integration.

Connect-AzAccount -Credential (get-credential)
$body = @{
    id= 123743
    eventType="recordInserted"
    subject="App01/Database/TLogFull"
    eventTime= (get-date).ToUniversalTime()
    data= @{
        database="master"
        version="2019"
        percent="93.5"
    }
    dataVersion="1.0"
}
$body = "["+(ConvertTo-Json $body)+"]"

$topicname="EventGridTopic01"
try{
    $endpoint = (Get-AzEventGridTopic -ResourceGroupName AzureAutomationOptions -Name $topicname).Endpoint
    $keys = Get-AzEventGridTopicKey -ResourceGroupName AzureAutomationOptions -Name $topicname
    Invoke-WebRequest -Uri $endpoint -Method POST -Body $body -Headers @{"aeg-sas-key" = $keys.Key1}
}
catch{
    write-output $_
}

The important bits:

  • Subject – the event subject, which is what we will key off of later for subscriptions – this is a personal preference. You can trigger from the event type in the basic editor, but I prefer the subject.
  • eventTime – Required, in UTC
  • id – Important, but only if you want unique event identifiers.
  • data – The important bits from your on-prem resources – the bits we will want to pass to the Azure Automation resources.

After sending a couple of events, you can look at the ‘Published Events’ metric to ensure they are coming in as expected:

Now it’s time to give the event grid something to do when the events arrive. There are several ways to accomplish this – all done with a ‘subscription’ to the events. Some of the ‘automation’ flavored options include:

  • Azure Functions
  • WebHooks
  • Sending to Event Hubs
  • Service Bus Queues and Topics

WebHooks are pretty self-explanatory, but also one of the most powerful. For example, you can create a Logic App that is triggered by an HTTP request, and from there break out and perform any type of automation that you want. I will go over that in another post. For this post, I will show a slightly more difficult to set up, but equally powerful, method to start automation – an Event Grid-triggered Azure Function. Head over to your Azure Function app, and let’s add a new function:

Select the “Azure Event Grid trigger” and give it a name – in my case I gave it the descriptive name “EventGridTrigger1”. After it’s created, you should see something like this:
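For reference, the generated run.ps1 for a PowerShell Event Grid trigger looks roughly like this (the exact template can vary by Functions runtime version):

param($eventGridEvent, $TriggerMetadata)

# The trigger hands the incoming event to the function as an object - dump it to the log
$eventGridEvent | Out-String | Write-Host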

You can see the parameters – eventGridEvent and TriggerMetadata. Keep those in mind. Now head back over to the Event Grid Topic, and let’s add a new subscription. When you select the endpoint in the new subscription, you should see your EventGridTrigger function:

Great – since the function is created already, the endpoint is something we can select. If the function wasn’t created, there would be no option to create a new one.

Now we can dig into the actual subscription properties – notice the 3 tabs: Basic, Filters, and Advanced Features. Filters is where we will do most of the filtering for automation, although some can be done on the Basic tab via the event type. For now, just set the Event Type to ‘recordInserted’, since that is what we put in the PowerShell code; then we can switch over to the Filters tab and do the rest of the work.

On the Filters tab, the first thing we want to do in this example is check the box marked “Enable subject filtering”. If you remember, in the PowerShell we set the subject to “App01/Database/TLogFull”. You could imagine this being the unique identifier that triggers the appropriate automation – almost something like a unique monitor ID or an automation trigger ID. In this example, let’s set that as our filter. We will do this simple one in this post, but branch out and look at advanced features in a future one:

When you are ready to create your new subscription, head back to the ‘Basic’ tab and give it a name – numbers, letters, and dashes only. Do it right, and you will be greeted with something like this:

Now that we have the subscription created, let’s send some events and see if they trigger the subscription! If you run that PowerShell snippet a couple of times, wait a minute or two, and check out the ‘monitor’ tab of the EventGridTrigger function, you will see something like this:

And clicking on the details:


From here, we can see the data fields that were passed, which could then be used in our PowerShell function.
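To give an idea of how those fields could be consumed, here is a hedged sketch of pulling out the values we sent earlier inside the function – the property names match the PowerShell payload from the start of this post:

param($eventGridEvent, $TriggerMetadata)

# Pull out the subject and the custom data fields we sent from on-prem
$subject  = $eventGridEvent.subject           # e.g. App01/Database/TLogFull
$database = $eventGridEvent.data.database     # e.g. master
$percent  = $eventGridEvent.data.percent      # e.g. 93.5

Write-Host "Received '$subject' - database '$database' is at $percent% capacity"
# From here you could call a runbook, another function, or any other automation you like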

Hopefully you can see how using Event Grid can help trigger your automations, especially when dealing with IoT and monitoring situations, or when a simple webhook is all you have to work with. It offers an easy way to take those PowerShell workflows from Orchestrator and directly import them into Azure with little modification, while at the same time providing a centralized method of tracking.

Automation Tools in Azure – Q1 2020 Edition

Whether you are well into your automation journey or just starting out, it’s important to know what options are available. Moving a manual workload to the wrong automation engine can be just as disruptive as automating a bad workload. Luckily Microsoft has a plethora of tools, so you can be sure to pick the right tool for the right job.

Azure Automation – Process Automation

It’s tough to start an article about automation tools in Azure without starting with Azure Automation – so I won’t try. Azure Automation is going to be the first place you look if you are migrating things like:

  • PowerShell scripts
  • Python scripts
  • System Center Orchestrator runbooks
  • Simple commands called repeatedly (restarting services, for example)

Azure Automation uses runbooks and jobs, which will immediately be familiar to Orchestrator admins. Azure Automation supports PowerShell and Python 2 scripts, along with PowerShell workflows. The automation jobs can be run either on-prem via Hybrid Workers, or in the cloud. A little known secret about Azure Automation – it runs a lot of the backend processes that power Azure!
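To give a flavor of what kicking off a migrated runbook looks like (the account, runbook, and Hybrid Worker group names below are placeholders), a call from PowerShell might be as simple as:

# Start a runbook in an Automation account, targeting an on-prem Hybrid Worker group
Start-AzAutomationRunbook -AutomationAccountName 'MyAutomationAccount' `
    -ResourceGroupName 'MyResourceGroup' `
    -Name 'Restart-Service' `
    -Parameters @{ ServiceName = 'spooler' } `
    -RunOn 'OnPremHybridWorkers'

Drop the -RunOn parameter and the same job runs in the cloud instead.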

There is another piece to Azure Automation worth calling out – it’s CHEAP. Azure gives you 500 run-time minutes for free each month, with each additional minute costing only $0.002. Watcher tasks are even cheaper – I will go over those in another blog post.

Azure Functions

The serverless powerhouse of the automation options in Azure – Functions are designed for scale, speed, and complete extensibility. Deploy code or Docker containers for your function, and build your functions with .NET Core, Node.js, Python, Java, and even PowerShell Core.

With the language options available, moving on-prem workloads should be a breeze. Access your functions from anywhere via API or schedule them to run automatically. Customize your compute stack, secure the functions with multiple keys, and monitor your runs with Log Analytics and App Insights.

You can build your functions in VSCode or any other code editor you choose, or edit and test your function right in the Azure portal. Each Function App can have multiple functions, and scaling can occur manually or automatically. There are so many options available for Azure Functions that they deserve their own blog series.

As with Azure Process Automation, Functions are priced really competitively. Check out the pricing list here.

Azure Logic Apps

Anyone coming from a tool like System Center Orchestrator, or other automation tools like MicroFocus Operations Orchestration, will tell you one thing those tools have that the tools I have previously mentioned don’t – a UI that shows logic flow. Microsoft’s answer to that is Logic Apps. Logic Apps are a personal favorite of mine, and I use them extensively.

Building a Logic App couldn’t be simpler. You can start with a blank app, or choose from a LARGE selection of templates that are pre-built. Once in the Logic App Editor, it’s practically drag and drop automation creation. Logic Apps are started with ‘Triggers’, which lead to ‘Actions’. The apps can access services via ‘connections’, of which there are hundreds. If you do happen to find a 3rd party service that doesn’t have a built-in connector, build a custom one!

Logic Apps make it easy to build complex automations by helping you with things like automatically creating loops when arrays are detected, allowing you to control parallelism, offering you hundreds of ways to call your app, and more. You can add conditions, switches, do-until loops, etc… There isn’t much they can’t do.

Of course you get the enterprise controls you would expect – version controls, IP access restrictions, full metrics and logging, diagnostics, etc. We run a massive Systems Management and Monitoring conference almost entirely with Logic Apps.

If you are considering migrating from Orchestrator (or other 3rd party automation tool), then look no further. This should be one of the first Azure tools you do a proof of concept with.

Power Apps/Power BI/Power Automate

The tools I have talked about so far are focused on you – the enterprise system admin. But Power Apps gives your organization an exciting and probably under-utilized automation opportunity – your business users! Even the biggest automation organizations don’t have the resources to automate everything their users want, so why not let them handle some of that automation on their own?

Power Apps lets you or your users create new desktop or mobile business applications in a matter of minutes or hours. These can be self-contained, or they can reach out to tools like Azure Functions to extend these simple-to-build apps into something truly enterprise worthy.

Power BI gives world class data visualizations and business intelligence to the average business user. Using Power BI you can allow your users to create their own dashboards or become their own data scientists directly from their desktop.

Power Automate is the tool formerly known as Flow. If you are familiar with Logic Apps, then Power Automate will look almost identical – and for good reason! Flow was originally built from Logic App code. There are a couple of big differences, though:

  • Power Automate has an amazing mobile app. Start a flow, or even create one from your phone.
  • Power Automate can now simulate screen clicks – remember AutoIt?

Configuration and Update Management

I am going to lump these two into one description, mainly because each is slightly meta. Configuration management is like PowerShell DSC for your Azure and on-prem resources. Describe what your environment should look like, and determine if you want auto-remediation or not. Expect more information on this in future blog posts.
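To make “describe what your environment should look like” concrete, here is a minimal DSC-style configuration sketch – the node and resource names are just examples of the kind of document you would compile and assign to your machines:

Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Declare the desired state - IIS should be installed
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        # And a service we always want running
        Service Spooler {
            Name  = 'Spooler'
            State = 'Running'
        }
    }
}

# Compiling produces a .mof that Azure Automation State Configuration can assign to nodes
WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'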

Update management is patching for all of your resources – on-prem or in Azure. Group your servers logically and schedule OS and app updates, or trigger update management from Log Analytics, runbooks, or functions.

The great thing about Configuration and Update management? The cost. Update and configuration management is practically free – only pay for the data ingestion used by Log Analytics. Update management is even ‘free’ for on-prem resources, including Linux! Configuration management does have a cost for on-prem resources, but the cost is still low.

Event Grid and Hub

Although not automation in the strictest sense of the term, Event Grid and Event Hubs are prime examples of triggers for automation. For most use cases, Event Grid is going to be the best trigger – Event Hubs and even Service Bus are geared more toward telemetry and high-volume data, while Event Grid is designed to handle reactionary data. Filter events as they come into the grid, and create subscriptions based off the filtered events. Those subscriptions can trigger Azure Functions, generic webhooks, Automation runbooks, Logic Apps, and more! Send your events to an endpoint API, and you are set to start your automation flows automatically!

Meta – ARM

What’s the first thing you need to automate if you are moving to Azure? The automation workflows themselves, of course! Whether it’s configuration or full deployments, ARM is your best friend.
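As a taste of what that looks like in practice (the resource group and file names below are placeholders), deploying an ARM template from PowerShell is a one-liner:

# Deploy an ARM template (and its parameter file) into an existing resource group
New-AzResourceGroupDeployment -ResourceGroupName 'MyAutomationRG' `
    -TemplateFile '.\logicapp-template.json' `
    -TemplateParameterFile '.\logicapp-parameters.json' `
    -Verbose

Keep the templates in source control, and your automation workflows become just as repeatable as the workloads they manage.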

Migrating from Orchestrator to Azure – A new blog series

Over the next several posts I’ll be teaming up with Microsoft MVP Billy York to show you various ways and technologies to accomplish your automation tasks in a more modern way. We will explore automation options, show you how to prepare your Orchestrator environment for migration, pick the right tool for the right job, and even add new features and enhancements to your workloads. Check back tomorrow for Automation Options in Azure – Q1 2020 Edition, and watch Billy’s page as we prepare our migration journey!

Orchestrator is Dead, Long Live Automation

If you are reading this blog and are considering installing Orchestrator 2016/2019 – stop. Don’t. Do not pass go, do not collect your salary. Save your time and energy. Seriously, we know other consultants that are still getting requests for proposals to install System Center Orchestrator, but now is not the time for new installations of Orchestrator. It’s time to migrate from Orchestrator. For those of you that have used Orchestrator 2012 R2 and installed the upgrades to 2016/2019, you know that it wasn’t much of an upgrade.

For those of you who are already heavily invested in Orchestrator, it’s time to start considering your migration path. Time to consider what product(s) you’re going to leverage, what your options are, and how to move to new tools. Orchestrator isn’t literally dead in the sense that MS is going to reach into your environment and kill your SCORCH server(s), but don’t count on getting any new cutting-edge features or updates. Without new features it’s going to become more and more difficult to keep up with automating new products and services. Luckily Azure provides a myriad of ways to take on your automation workloads.

Disaster leads to Azure

How a sinister act leads to great things – in about 15 minutes

This Sunday, as the wife and I traveled back from Dallas to Austin after a weekend away, I got a text from an automated website monitor. My WordPress blog – this blog – was either offline or not responding correctly. Happens occasionally. When I got home I popped up the site, and immediately got a message about PHP being a really old version – like 5.x – when it normally runs a 7.x version. I decided to log into my provider and check it out. I was not prepared for what I saw.

It was obvious that I had been hacked, big time. Redirects to shady pharma sites in Russia, CSS injection on every post, random hacked PHP files in practically every directory – not just in this domain, but in 8 sub-domains as well. I was well and truly up the creek. To make it even better, the hosting service I use seems to think that UI enhancements are forbidden – hence my inability to download a current backup.

I had to do something, and quickly. This blog contains some of my contributions to the community, and does (shockingly) show up in search results. I needed to get it back up and running fast. The only thing I had was an XML export of the blog from a couple of weeks back. I immediately decided to use Azure to get it running again.

For those who don’t know, the easiest way to export from a WordPress blog is to go to Tools – Export – All content. I suggest you do it often.

I jumped over to Azure, and provisioned a new resource. Since this was going to be an actual production thing, and not just some testing resource for a conference or for a post, I decided to use a new resource group.

Note – I chose to use the in-app MySQL database – mainly because I needed this up and running quickly, and I don’t have traffic substantial enough to warrant scaling backend MySQL instances. For larger instances, I would recommend using ‘Azure Database for MySQL’ – that allows options like scaling, larger instances, etc…

The deployment of the WordPress instance was honestly the longest part of the process – it took between 7 and 10 minutes – but once it was up and running you are presented with a brand new WordPress instance:

Click on the URL, and you are presented with the interface for your brand new WordPress instance. Now we continue with the WordPress setup.

Now update your WordPress – immediately. After updating, log out and back in, just in case the WordPress database needs to update as well. Once everything is updated, it’s time to import the WordPress XML.

When you install the importer, the ‘Run Importer’ option will appear. Upload your XML file and let the importer run. In my case it took about 5 minutes. A great thing about this import – it takes all of your settings: preferences, link post IDs, media, etc… This was in many ways much better than doing a restore with my normal hosting provider – I have my blog back up and running (minus some theme customization), and I have the entire power of Azure behind it! I get Azure Monitor, Azure Sentinel, App Insights, Log Analytics, and more!

Next, it was time to redirect my old-and-busted blog to the new blog website. This is going to differ by hosting provider, but in my case it was fairly simple.

What are the next steps? In my case, it is going to be adding a custom domain to my App service, so that I no longer have to rely on my old hosting provider, and use them solely as domain registrar. That will be in an upcoming post.

In my case, something as horrible as a hack has led to a great outcome – hosting in Azure, really cheaply, with an absolute glut of new features at my disposal. This hack might have been the best thing to happen to this blog in a while. Who knows – maybe I will just continue to add new Azure features and see how it turns out.

2 Great Experimental Features in Core 7

In this blog post, I want to drill into a couple of very useful experimental features that are available in PowerShell 7 preview 6.  The two features I want to dive into are the pipeline chain operators (‘PSPipelineChainOperators’) and skipping the error check on web cmdlets.  The former feature was available in preview 5, and the latter was made available in preview 6.  Both of these are (as of the writing of this blog) experimental features.  In order to play around with these features, I suggest doing an ‘Enable-ExperimentalFeature *’ and restarting your PowerShell session.  When you are done with your testing, you can always do a “Disable-ExperimentalFeature” to get your session back to normal.
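For reference, here is roughly what that looks like – these cmdlets ship with the PowerShell 7 previews, though the feature list you see will differ by build:

# List the experimental features available in this build
Get-ExperimentalFeature

# Enable everything (as suggested above), then restart the session
Enable-ExperimentalFeature *

# When you are finished testing, disable them again and restart
Disable-ExperimentalFeature *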

What exactly is ‘PSPipelineChainOperators’?  This feature refers to the ‘&&’ and ‘||’ operators, which deal with continuing or stopping a chain of commands depending on the success or failure of the previous command.  The ‘&&’ operator will execute what is on its right if the command on its left succeeded, and conversely the ‘||’ operator will execute what is on its right if the command on its left failed.  Probably the best way to picture this is with an example.

 

Write-Host "This is a good command" && Write-Host "Therefore this command will run"

This is a good command

Therefore this command will run

The first write-host succeeded, and so the second write-host was run.  Now, let’s look at ‘||’.

write-badcmdlet "This is a bad command" || Write-Host “Therefore this command will run”
write-badcmdlet: The term 'write-badcmdlet' is not recognized as the name of a cmdlet, function, script file, or operable program.

Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

Therefore this command will run

Here we can see that the ‘write-badcmdlet’ cmdlet doesn’t exist (if it does, we need to talk about your naming conventions), so the second command was run.  This can replace 3 or 4 lines of code easily – you would normally have to do an if-else block to check the $? variable and see if it was true or false.  You can also get fancy and chain these together for even more checks.

write-badcmdlet "This is a bad command" || Write-Host “Therefore this command will run” && "And so will this one"




write-badcmdlet: The term 'write-badcmdlet' is not recognized as the name of a cmdlet, function, script file, or operable program.

Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

Therefore this command will run

And so will this one

You can see how the 3rd command worked because the 2nd command ran successfully.  Very handy, and dead simple to consolidate code.  They even work with OS commands!  Look at this example using notepad and the non-existent notepad2:

notepad && "Worked fine!"

Worked fine!

notepad2 && "Worked fine!"




notepad2: The term 'notepad2' is not recognized as the name of a cmdlet, function, script file, or operable program.

Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

notepad2 || "Failed!"




notepad2: The term 'notepad2' is not recognized as the name of a cmdlet, function, script file, or operable program.

Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

Failed!

 

Now on to the second experimental feature – and one that I find amazingly handy.  Oftentimes when you are working with REST APIs, or really any sort of web request, you are dealing with Invoke-RestMethod and Invoke-WebRequest.  One major pain point with these cmdlets is that if you get a non-success status code back, the cmdlet throws an actual error object.  Consider this:
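Something along these lines, using a hypothetical non-existent endpoint:

# Without the experimental feature, a 404 surfaces as a terminating error
Invoke-RestMethod -Uri 'https://example.com/api/does-not-exist'
# Invoke-RestMethod: Response status code does not indicate success: 404 (Not Found).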

That URL doesn’t exist, but the way the error is returned can be a massive pain!  What if I want to know whether it was a 404 vs a 500 vs a 403 and actually do something about it?  Dealing with error objects is not the most intuitive, and I much prefer the way the new experimental feature ‘-SkipHttpErrorCheck’ handles this.  With this switch, a returned error code from these cmdlets will not be surfaced as an error.  Combine this with ‘-StatusCodeVariable’ and you can now act on the actual status code without large try/catch blocks.  You can use switch blocks to route properly based on status code without diving into error objects.  This is a very welcome feature!
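Here is a sketch of the new behavior, again with a hypothetical URL – the status code lands in a variable you name, and no error is thrown:

$response = Invoke-RestMethod -Uri 'https://example.com/api/does-not-exist' `
    -SkipHttpErrorCheck -StatusCodeVariable 'statusCode'

# Route on the status code directly - no try/catch or error-object spelunking
switch ($statusCode) {
    200     { 'Success - process the $response content' }
    403     { 'Forbidden - check the API key' }
    404     { 'Not found - check the URL' }
    default { "Unexpected status code: $statusCode" }
}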

PoshAzurelab – starting azure machines in order

aka – please be careful of asking me something

Recently a friend sent me an email asking how to start a series of machines in Azure in a certain order.  These particular machines were all used for labs and demos – some of them needed to be started and running before others.  This is probably something that a lot of people can relate to; imagine needing to start domain controllers before bringing up SQL servers, before bringing up ConfigMgr servers, before starting demo client machines.  You get the idea.

So, even though we got something for my friend up and running, I decided to go ahead and build a repo for this.  The idea right now is that this repo will  start machines in order, but I will expand it later to also stop them.  Eventually I will expand it to other resources that have a start/stop action.  Thanks a lot, Shaun.

The basics of the current repo are 2 files – start-azurelab.ps1 and config.json.  Start-AzureLab performs the actual heavy lifting, but the important piece is really config.json.  Let’s examine the basic one:

{
    "TenantId": "fffffff-5555-4444-fake-tenantid",
    "subscriptions": [
        {
            "subscription_name": "Pay-As-You-Go",
            "resource_groups": [
                {
                    "resource_group_name": "AzureLabStartup001",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":false,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":false,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                }
            ]
        }
    ]
}

The first thing you see is the ‘TenantId’.  You will want to replace this with your personal tenant ID from Azure.  To find your tenant ID, click Help and then Show diagnostics:

Next, you can see the subscription name.  If you are familiar with JSON, you can also see that you can enter multiple subscriptions – more on this later.  Enter your subscription name (not ID). 

Next, you can see the Resource_Group_Name.  Again, you can have multiple resource groups, but in this simple example there is only one.  Put in your specific resource group name.  Now we get down to the meat of the config.

"data": {
         "virtual_machines": [
          {
                "vm_name": "Server001",
                "wait":false,
                "delay_after_start": "1"
           },
           {
                 "vm_name": "Server002",
                 "wait":false,
                 "delay_after_start": "2"
            }
     ]
}

This is where we put in the VM names.  Place them in the order you want them to start.  The two other properties have special meaning – “wait” and “delay_after_start”.  Let’s look at “wait”.

When you start an Azure VM with start-azvm, there is a property you specify that tells the cmdlet to either start the VM and keep checking until the machine is running, or start the VM and immediately return back to the terminal.  If you set the “wait” property in this config to ‘false’, then when that VM is started the script will immediately return back to the terminal and process the next instruction.  This is important when dealing with machines that take a bit to start  – i.e. large Windows servers.

Now – combine the “wait” property with the “delay_after_start” property, and you have the capability to really customize your lab start-ups.  For example, maybe you have a domain controller, a DNS server, and a SQL server in your lab, plus a bunch of client machines.  You want the DC to come up first, and probably want to wait 20-30 seconds after it’s running to make sure your DC services are all up and running.  Same with the DNS and SQL boxes, but maybe you don’t need to wait so long after the box is running.  The client machines, though – you don’t need to wait at all, and you might set the delay to 0 or 1.  Just get them up and running and be done with it.
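To make the wait/delay behavior concrete, here is a simplified sketch of the core loop the script runs for each VM entry – this is an illustration, not the actual repo code:

foreach ($vm in $resourceGroup.data.virtual_machines) {
    if ($vm.wait) {
        # Block until Azure reports the VM as running
        Start-AzVM -ResourceGroupName $resourceGroup.resource_group_name -Name $vm.vm_name
    }
    else {
        # Fire the start request and return to the terminal immediately
        Start-AzVM -ResourceGroupName $resourceGroup.resource_group_name -Name $vm.vm_name -NoWait
    }

    # Give the machine time to settle before starting the next one
    Start-Sleep -Seconds ([int]$vm.delay_after_start)
}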

So now we can completely control how our lab starts, but say you have multiple resource groups or multiple subscriptions to deal with.  The JSON can be configured to allow for both!  In the github repo, there are 2 additional examples – config-multigroup.json and config-multisub.json.    A fully baked JSON might look like this:

 

{
    "TenantId": "ff744de5-59a0-4b5b-a181-54d5efbb088b",
    "subscriptions": [
        {
            "subscription_name": "Pay-As-You-Go",
            "resource_groups": [
                {
                    "resource_group_name": "AzureLabStartup001",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":true,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":false,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                },
                {
                    "resource_group_name": "AzureLabStartup002",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":true,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":true,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                }
            ]
        },
        {
            "subscription_name": "SubID2",
            "resource_groups": [
                {
                    "resource_group_name": "AzureLabStartup001",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":true,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":true,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                },
                {
                    "resource_group_name": "AzureLabStartup002",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":true,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":true,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                }
            ]
        }
    ]
}

So there you have it – hope you enjoy the repo, and keep checking back because I will be updating it to add more features and improve the actual code.  Hope this helps!  

 

 

Add multiple PowerShell versions to VSCode

Here is the quick and dirty way to add multiple PowerShell versions to VSCode, and switch between them quickly.

It’s oftentimes advantageous to quickly switch between multiple versions of a programming language when coding, to ensure that your code works on multiple platforms.  For me, that is switching between Windows PowerShell and multiple versions of Microsoft PowerShell (PowerShell Core).  Here’s how to easily do it in VSCode.

First, I am going to assume you have the PowerShell extension already installed in VSCode.  If not, hit F1 (Ctrl-Shift-P) to open the command palette, and type this

 

ext install powershell

Now, let’s get some different PowerShell versions.  This is the link to PowerShell 7 preview 5.  Download that and extract it to any directory you want.  In this example I will also be using the latest stable version – 6.2.3.  I have downloaded them both to the c:\pwsh directory.

Open VSCode, and create a new file.  Save the file with a .ps1 extension.  Notice how the terminal automatically starts PowerShell, and that the little green number in the lower right says 5.1?  That’s because VSCode is smart enough to look in common locations for PowerShell versions, and will add those automatically to your session options.  Since we are putting the versions in a non-typical location, we have to edit our settings.json.  The easiest way to do this is to hit F1 (Ctrl-Shift-P) and type ‘language’.  Select “Configure Language Specific Settings”

 

You will open the settings.json for your user and language.  If you haven’t done much with VSCode, it probably looks something like this:

 

{
    "workbench.iconTheme": "vscode-icons",
    "team.showWelcomeMessage": false,
    "markdown.extension.toc.githubCompatibility": true,
    "git.enableSmartCommit": true,
    "git.autofetch": true,
    "[powershell]": {}
} 

To add our new versions, we need to add just a bit to this file.  We are going to add the powershell.powerShellAdditionalExePaths setting – in our case with 2 entries.  The important parts are to make sure you escape the “\” slashes with another slash, and to follow normal JSON formatting requirements.  Continuing with my example, my settings.json looks like this:

{
    "workbench.iconTheme": "vscode-icons",
    "team.showWelcomeMessage": false,
    "markdown.extension.toc.githubCompatibility": true,
    "git.enableSmartCommit": true,
    "git.autofetch": true,
    "[powershell]": {},
    
    "powershell.powerShellAdditionalExePaths": [
        {
            "exePath": "C:\\pwsh\\7.p5\\pwsh.exe",
            "versionName": "PowerShell Core 7p5"
        },
        {
            "exePath": "C:\\pwsh\\6.2.3\\pwsh.exe",
            "versionName": "PowerShell Core 6.2.3"
        }        
    ],
    "powershell.powerShellDefaultVersion": "PowerShell Core 7p5",
    "powershell.powerShellExePath": "C:\\pwsh\\7.p5\\pwsh.exe"
} 

Save the settings file, and if you have an open terminal, either exit it or reload the window.  This will force VSCode to reload the settings file.  If you have done it correctly, when you click on the little green version number in the lower right, you should see something like this:

Switch between them, and verify you see the various versions with $PSVersionTable.

SCOM Incremental discovery

Limitless possibilities – very little code

One of the biggest pet peeves I’ve had with SCOM discovery is that it has always been very agent-biased.  If you wanted to discover IIS, for example, the agent would look for the bits and pieces of IIS installed on a server.  While that works fine for most classes and objects, it does pose some limitations.  What if I wanted to add additional properties to an existing object, or mark a server with a custom attribute?  A common workaround was to ‘tattoo’ a system with a reg key or perhaps a file, and then discover that via the normal mechanism.  That seems like extra steps for something that should be relatively easy.  I want to run discovery on something other than the agent, and have that discovery data roll up to the agent.

Why would someone want to perform discovery ‘remotely’?  Well, maybe you have a CMDB or an application to server relationship database.  Would you rather put a regkey on every server with the application information, or just query the database?  Imagine being able to have application owner properties (email, phone, manager, etc…) in SCOM as a property for every server, but without having to touch the server at all – no more regkeys or property files.

Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalDiscoveryData

This class – IncrementalDiscoveryData – is an amazingly powerful tool.  The idea is that you can submit additional discovery data for an existing object, or even create brand new objects like normal.  The best way to imagine this is to look at the code itself:


$data = (invoke-sql @DBParams -method query -statement "select * from SomeDataTable").tables.rows
New-SCOMManagementGroupConnection -ComputerName "localhost"
$mg = Get-SCOMManagementGroup

foreach ($row in $data){
    $class = get-scomclass -displayname $row.ClassName
    $ClassObject=New-Object Microsoft.EnterpriseManagement.Common.CreatableEnterpriseManagementObject($mg,$class)
    $servername = $row.servername
    $ClassObject[$Class.FindHostClass(),"PrincipalName"].Value = "$servername"
    $ClassInstanceDisplayName=$Servername
    $ClassObject[$Class,"AppID"].Value = $row.AppID
    $ClassObject[$Class,"MonitorCI"].Value = $row.MonitorCI
    $discovery=New-Object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalDiscoveryData
    $discovery.Add($ClassObject)
    $discovery.Overwrite($mg)
}

The first 3 lines are pretty straightforward: a simple function that queries a database (invoke-sql), creating a connection to the local management server, and getting the management group.

Then we go row by row with what’s in the table.  In this case the table looks something like this:

ClassName                   ServerName   AppID     MonitorCI
Microsoft.Windows.Computer  server01     COTS1     AssignmentGroup1
Microsoft.Windows.Computer  server02     Custom2   AssignmentGroup2

The code is going to get the class (in this case $row.ClassName, which is Microsoft.Windows.Computer), then create a new instance of that class (even if the object exists – we won’t overwrite anything not in the discovery data).  We find the host using the server name, add a couple of our new attributes – AppID and MonitorCI – and then create our discovery object.  That is where the magic happens – 

$discovery=New-Object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalDiscoveryData

This is the bit that creates the actual discovery object.  Since it’s incremental, we aren’t going to mess with anything already in that object, just update the bits that we add to the discovery object.  In this case, we are adding the $ClassObject that we populated with our additional bits of data – AppID and MonitorCI.    Two more lines, and we have our remotely discovered data as properties in our server object.

    $discovery.Add($ClassObject)
    $discovery.Overwrite($mg)

With just that little bit of code we are able to store attributes, properties, configurations, or anything else in a central database, and have a simple PowerShell script that updates the objects in SCOM with that discovery data!  Not only that, those items are not rolled up to some random object – we roll them up to the exact object we want.  No more regkey tattoos, or configuration files that you have to keep up to date on servers.

Powershell and Parsing html code in Core

When working on a PowerShell web service, I came across an interesting problem that I think is only going to crop up more and more.  This web service takes a JSON payload, performs some simple manipulation on the data, kicks off some automation, and logs some events.  Part of the payload is HTML code, however.  Of course the JSON payload doesn’t care, but when it came into PowerShell it was being treated as a plain string.  That meant that when you looked at the string, you literally saw HTML code:

<BODY><H3>OPEN Problem 256 in environment <I>EPG</I></H3>
                               <HR>
                               <B>1 impacted infrastructure component</B>
                               <HR>
                               <BR>
                               <DIV><SPAN>Process</SPAN><BR><B><SPAN style="FONT-SIZE: 120%; COLOR: #dc172a">bosh-dns-health</SPAN></B><BR>
                               <P style="MARGIN-LEFT: 1em"><B><SPAN style="FONT-SIZE: 110%">Network problem</SPAN></B><BR>Packet retransmission rate for process bosh-dns-health on host 
                               cloud_controller/<GUID> has increased to 18 %</P></DIV>
                               <HR>
                               Root cause
                               <HR>
                               
                               <DIV><SPAN>Process</SPAN><BR><B><SPAN style="FONT-SIZE: 120%; COLOR: #dc172a">bosh-dns-health</SPAN></B><BR>
                               <P style="MARGIN-LEFT: 1em"><B><SPAN style="FONT-SIZE: 110%">Network problem</SPAN></B><BR>Packet retransmission rate for process bosh-dns-health on host 
                               cloud_controller/<GUID> has increased to 18 %</P></DIV>
                               <HR>
                               
                               <P><A href="https://redacted.somewhere.com/e/<GUID>/#problems/problemdetails;pid=-<GUID>">Open in Browser</A></P></BODY> 

I wanted to put this data in the events I was logging, but I obviously didn’t want it to look like this.  There isn’t a PowerShell cmdlet for ‘ConvertFrom-Html’, although that would be great.  I could have tried to parse the text and remove the formatting, but that would be a pain – trying to escape the right characters and account for all of the HTML tags.  I found several ways online to handle this – namely creating an ‘HTMLFile’ COM object, writing the HTML code into it, and then extracting the innerText property of the object.  This works fine in some environments, but throws a weird error if you don’t have Office installed.  Here is the code:

$HTML = New-Object -Com "HTMLFile"
$html.IHTMLDocument2_write($htmlcode)

And the error it throws when Office isn’t installed:

Method invocation failed because [System.__ComObject] does not contain a method named 'IHTMLDocument2_write'.

There is also a catch here – that error will also be thrown if you are trying this in PowerShell Core/Microsoft PowerShell (i.e. anything above 5.x – not Windows PowerShell)!  Fortunately, there is a quick fix:

$HTML = New-Object -Com "HTMLFile"
try {
    $html.IHTMLDocument2_write($htmlcode)
}
catch {
    $encoded = [System.Text.Encoding]::Unicode.GetBytes($htmlcode)
    $html.write($encoded)
}
$text = ($html.all | Where-Object { $_.tagname -eq 'body' } | Select-Object -Property innerText).innertext

A simple try/catch, and it works great.  I am going to create a new repo in GitHub and make this a new function.  Hopefully you haven’t wasted too much time trying to track this down – it took me a while, and it wasn’t until I stumbled upon a similar solution on StackOverflow (https://stackoverflow.com/questions/46307976/unable-to-use-ihtmldocument2) that I was able to wrap it up.  Expect the cmdlet/function soon!