2 Great Experimental Features in Core 7

In this blog post, I want to drill into a couple of very useful experimental features that are available in PowerShell 7 preview 6.  The two features I want to dive into are the new pipeline chain operators ('&&' and '||') and the skip-error-check switch on the web cmdlets.  The former was available in preview 5, and the latter arrived in preview 6.  Both of these are (as of the writing of this blog) experimental features.  In order to play around with these features, I suggest running 'Enable-ExperimentalFeature *' and restarting your PowerShell session.  When you are done with your testing, you can always run 'Disable-ExperimentalFeature' to get your session back to normal.
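
A quick sketch of that enable/disable workflow, assuming you want everything turned on for testing:

# see which experimental features this preview ships and whether they are enabled
Get-ExperimentalFeature

# enable everything for testing, then restart your pwsh session
Get-ExperimentalFeature | Enable-ExperimentalFeature

# when you are done, turn them back off (and restart again)
Get-ExperimentalFeature | Disable-ExperimentalFeature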

What exactly are these pipeline chain operators?  They are the '&&' and '||' operators, which continue or stop a chain of commands depending on the success or failure of the command before them.  The '&&' operator will execute what is on its right if the command on its left succeeded, and conversely the '||' operator will execute what is on its right if the command on its left failed.  Probably the best way to picture this is with an example.

 

Write-Host "This is a good command" && Write-Host "Therefore this command will run"

This is a good command

Therefore this command will run

The first Write-Host succeeded, so the second Write-Host ran.  Now, let's look at '||'.

write-badcmdlet "This is a bad command" || Write-Host “Therefore this command will run”
write-badcmdlet: The term 'write-badcmdlet' is not recognized as the name of a cmdlet, function, script file, or operable program.

Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

Therefore this command will run

Here we can see that the 'write-badcmdlet' cmdlet doesn't exist (if it does, we need to talk about your naming conventions), so the second command was run.  This can easily replace 3 or 4 lines of code – normally you would have to write an if/else block that checks the $? variable to see whether it was true or false.  You can also get fancy and chain these together for even more checks.
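
For reference, here is roughly the if/else version that a single '||' replaces:

write-badcmdlet "This is a bad command"
if (-not $?) {
    Write-Host "Therefore this command will run"
}

And here is what chaining looks like: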

write-badcmdlet "This is a bad command" || Write-Host “Therefore this command will run” && "And so will this one"

write-badcmdlet: The term 'write-badcmdlet' is not recognized as the name of a cmdlet, function, script file, or operable program.

Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

Therefore this command will run

And so will this one

You can see how the third command ran because the second command succeeded.  Very handy, and dead simple for consolidating code.  They even work with native OS commands!  Look at this example using notepad and the non-existent notepad2:

notepad && "Worked fine!"

Worked fine!

notepad2 && "Worked fine!"

notepad2: The term 'notepad2' is not recognized as the name of a cmdlet, function, script file, or operable program.

Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

notepad2 || "Failed!"

notepad2: The term 'notepad2' is not recognized as the name of a cmdlet, function, script file, or operable program.

Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

Failed!

 

Now on to the second experimental feature – and one that I find amazingly handy.  Oftentimes when you are working with REST APIs or really any sort of web request, you are dealing with Invoke-RestMethod and Invoke-WebRequest.  One major pain point with these cmdlets is that if you get a non-200 status code back, the cmdlet throws an actual error object instead of handing you the response.  Consider this:
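
Something along these lines (the URL here is made up, and the exact error text varies by endpoint):

Invoke-RestMethod -Uri 'https://api.contoso.com/missing'

Invoke-RestMethod: Response status code does not indicate success: 404 (Not Found).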

That URL doesn't exist, but the way the error is surfaced can be a massive pain!  What if I want to know whether it was a 404 vs a 500 vs a 403 and actually do something about it?  Dealing with error objects is not the most intuitive, and I much prefer the way the new experimental '-SkipHttpErrorCheck' switch handles this.  With this switch, a non-success status code from these cmdlets will not be treated as an error.  Combine it with '-StatusCodeVariable' and you can now act on the actual status code without large try/catch blocks.  You can use a switch block to route properly based on status code without diving into error objects.  This is a very welcome feature!
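
Here is a rough sketch of how the two switches combine (the URL is a placeholder again):

$response = Invoke-RestMethod -Uri 'https://api.contoso.com/missing' -SkipHttpErrorCheck -StatusCodeVariable status

switch ($status) {
    200     { 'All good - work with $response here' }
    403     { 'Forbidden - check your token or credentials' }
    404     { 'Not found - check the URL' }
    500     { 'Server error - maybe retry' }
    default { "Unexpected status code: $status" }
}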

PoshAzureLab – starting Azure machines in order

aka – please be careful of asking me something

Recently a friend sent me an email asking how to start a series of machines in Azure in a certain order.  These particular machines were all used for labs and demos – some of them needed to be started and running before others.  This is probably something that a lot of people can relate to; imagine needing to start domain controllers before bringing up SQL servers, before bringing up ConfigMgr servers, before starting demo client machines…  You get the idea.

So, even though we got something for my friend up and running, I decided to go ahead and build a repo for this.  The idea right now is that this repo will  start machines in order, but I will expand it later to also stop them.  Eventually I will expand it to other resources that have a start/stop action.  Thanks a lot, Shaun.

The basics of the current repo are two files – Start-AzureLab.ps1 and config.json.  Start-AzureLab performs the actual heavy lifting, but the important piece is really config.json.  Let's examine the basic one:

{
    "TenantId": "fffffff-5555-4444-fake-tenantid",
    "subscriptions": [
        {
            "subscription_name": "Pay-As-You-Go",
            "resource_groups": [
                {
                    "resource_group_name": "AzureLabStartup001",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":false,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":false,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                }
            ]
        }
    ]
}

The first thing you see is the 'TenantId'.  You will want to replace this with your own tenant id from Azure.  To find your tenant id in the Azure portal, click the Help icon and choose 'Show diagnostics':

Next, you can see the subscription name.  If you are familiar with JSON, you can also see that you can enter multiple subscriptions – more on this later.  Enter your subscription name (not ID). 

Next, you can see the resource_group_name.  Again, you can have multiple resource groups, but in this simple example there is only one.  Put in your specific resource group name.  Now we get down to the meat of the config.

"data": {
         "virtual_machines": [
          {
                "vm_name": "Server001",
                "wait":false,
                "delay_after_start": "1"
           },
           {
                 "vm_name": "Server002",
                 "wait":false,
                 "delay_after_start": "2"
            }
     ]
}

This is where we put in the VM names.  Place them in the order you want them to start.  The two other properties have special meaning – “wait” and “delay_after_start”.  Let’s look at “wait”.

When you start an Azure VM with Start-AzVM, there is a switch (-NoWait) that controls whether the cmdlet keeps checking until the machine is running, or returns to the terminal immediately after the start is issued.  If you set the "wait" property in this config to 'false', then when that VM is started the script will immediately return to the terminal and process the next instruction.  This is important when dealing with machines that take a while to start – e.g. large Windows servers.

Now – combine the "wait" property with the "delay_after_start" property, and you have the capability to really customize your lab start-ups.  For example, maybe you have a domain controller, a DNS server, and a SQL server in your lab, plus a bunch of client machines.  You want the DC to come up first, and probably want to wait 20-30 seconds after it's running to make sure your DC services are all up and running.  Same with the DNS and SQL boxes, but maybe you don't need to wait as long after each box is running.  The client machines, though – you don't need to wait at all, and you might set the delay to 0 or 1.  Just get them up and running and be done with it.
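
Under the covers, those two properties roughly map to Start-AzVM's -NoWait switch plus a sleep.  A minimal sketch of that logic (assuming $resourceGroup holds one entry from the resource_groups array, and treating the delay as seconds) might look like this:

foreach ($vm in $resourceGroup.data.virtual_machines) {
    if ($vm.wait) {
        # block until Azure reports the VM as running
        Start-AzVM -ResourceGroupName $resourceGroup.resource_group_name -Name $vm.vm_name
    }
    else {
        # fire and forget - return to the script immediately
        Start-AzVM -ResourceGroupName $resourceGroup.resource_group_name -Name $vm.vm_name -NoWait
    }
    # honor the configured pause before moving on to the next VM
    Start-Sleep -Seconds ([int]$vm.delay_after_start)
}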

So now we can completely control how our lab starts, but say you have multiple resource groups or multiple subscriptions to deal with.  The JSON can be configured to allow for both!  In the GitHub repo, there are two additional examples – config-multigroup.json and config-multisub.json.  A fully baked JSON might look like this:

 

{
    "TenantId": "ff744de5-59a0-4b5b-a181-54d5efbb088b",
    "subscriptions": [
        {
            "subscription_name": "Pay-As-You-Go",
            "resource_groups": [
                {
                    "resource_group_name": "AzureLabStartup001",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":true,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":false,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                },
                {
                    "resource_group_name": "AzureLabStartup002",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":true,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":true,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                }
            ]
        },
        {
            "subscription_name": "SubID2",
            "resource_groups": [
                {
                    "resource_group_name": "AzureLabStartup001",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":true,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":true,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                },
                {
                    "resource_group_name": "AzureLabStartup002",
                    "data": {
                        "virtual_machines": [
                            {
                                "vm_name": "Server001",
                                "wait":true,
                                "delay_after_start": "1"
                            },
                            {
                                "vm_name": "Server002",
                                "wait":true,
                                "delay_after_start": "2"
                            }
                        ]
                    }
                }
            ]
        }
    ]
}

So there you have it – hope you enjoy the repo, and keep checking back, because I will be updating it to add more features and improve the actual code.  Hope this helps!


Add multiple PowerShell versions to VSCode

Here is the quick and dirty way to add multiple PowerShell versions to VSCode, and switch between them quickly.

It's often advantageous to quickly switch between multiple versions of a programming language when coding, to ensure that your code works on multiple platforms.  For me, that means switching between Windows PowerShell and multiple versions of Microsoft PowerShell (PowerShell Core).  Here's how to easily do it in VSCode.

First, I am going to assume you have the PowerShell extension already installed in VSCode.  If not, hit F1 (Ctrl-Shift-P) to open the command palette, and type this:

 

ext install powershell

Now, let's get some different PowerShell versions.  This is the link to PowerShell 7 preview 5.  Download that and extract it to any directory you want.  I will also be using the latest stable version – 6.2.3.  In this example I have downloaded them both to the c:\pwsh directory.

Open VSCode, and create a new file.  Save the file with a .ps1 extension.  Notice how the terminal automatically starts PowerShell, and that the little green number in the lower right says 5.1?  That's because VSCode is smart enough to look in common locations for PowerShell versions, and will add those automatically to your session options.  Since we are putting these versions in non-typical locations, we have to edit our settings.json.  The easiest way to do this is to hit F1 (Ctrl-Shift-P), and type 'language'.  Select "Configure Language Specific Settings"

 

You will open the settings.json for your user and language.  If you haven’t done much with VSCode, it probably looks something like this:

 

{
    "workbench.iconTheme": "vscode-icons",
    "team.showWelcomeMessage": false,
    "markdown.extension.toc.githubCompatibility": true,
    "git.enableSmartCommit": true,
    "git.autofetch": true,
    "[powershell]": {}
} 

To add our new versions, we need to add just a bit to this file.  We are going to add powershell.powerShellAdditionalExePaths settings – in our case two of them.  The important parts are to make sure you escape each backslash with another backslash ("\\") and follow normal JSON formatting requirements.  Continuing with my example, my settings.json looks like this:

{
    "workbench.iconTheme": "vscode-icons",
    "team.showWelcomeMessage": false,
    "markdown.extension.toc.githubCompatibility": true,
    "git.enableSmartCommit": true,
    "git.autofetch": true,
    "[powershell]": {},
    
    "powershell.powerShellAdditionalExePaths": [
        {
            "exePath": "C:\\pwsh\\7.p5\\pwsh.exe",
            "versionName": "PowerShell Core 7p5"
        },
        {
            "exePath": "C:\\pwsh\\6.2.3\\pwsh.exe",
            "versionName": "PowerShell Core 6.2.3"
        }        
    ],
    "powershell.powerShellDefaultVersion": "PowerShell Core 7p5",
    "powershell.powerShellExePath": "C:\\pwsh\\7.p5\\pwsh.exe"
} 

Save the settings file, and if you have an open terminal, either exit it or reload the window.  This will force VSCode to reload the settings file.  If you have done it correctly, clicking on the little green version number in the lower right should show something like this:

Switch between them, and verify you see the various versions with $PSVersionTable.

SCOM Incremental discovery

Limitless possibilities – very little code

One of the biggest pet peeves that I've had with SCOM discovery is that it has always been very agent-biased.  If you wanted to discover IIS, for example, the agent would look for the bits and pieces of IIS installed on a server.  While that works fine for most classes and objects, it does pose some limitations.  What if I wanted to add additional properties to an existing object, or mark a server with a custom attribute?  A common workaround was to 'tattoo' a system with a reg key or perhaps a file, and then discover that via the normal mechanism.  That seems like extra steps for something that should be relatively easy.  I want to run discovery on something other than the agent, and have that discovery data roll up to the agent.

Why would someone want to perform discovery 'remotely'?  Well, maybe you have a CMDB or an application-to-server relationship database.  Would you rather put a regkey on every server with the application information, or just query the database?  Imagine being able to have application owner properties (email, phone, manager, etc.) in SCOM as a property for every server, but without having to touch the server at all – no more regkeys or property files.

Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalDiscoveryData

This class – IncrementalDiscoveryData – is an amazingly powerful tool.  The idea is that you can submit additional discovery data for an existing object, or even create brand new objects like normal.  The best way to imagine this is to look at the code itself:


# pull the discovery data from a central table and connect to the management group
$data = (invoke-sql @DBParams -method query -statement "select * from SomeDataTable").tables.rows
New-SCOMManagementGroupConnection -ComputerName "localhost"
$mg = Get-SCOMManagementGroup

foreach ($row in $data){
    # the class we are adding properties to (e.g. Microsoft.Windows.Computer)
    $class = get-scomclass -displayname $row.ClassName
    $ClassObject = New-Object Microsoft.EnterpriseManagement.Common.CreatableEnterpriseManagementObject($mg,$class)
    $servername = $row.servername
    # tie the instance to its hosting computer by PrincipalName
    $ClassObject[$Class.FindHostClass(),"PrincipalName"].Value = "$servername"
    $ClassInstanceDisplayName = $Servername
    # our custom properties
    $ClassObject[$Class,"AppID"].Value = $row.AppID
    $ClassObject[$Class,"MonitorCI"].Value = $row.MonitorCI
    # submit everything as incremental discovery data
    $discovery = New-Object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalDiscoveryData
    $discovery.Add($ClassObject)
    $discovery.Overwrite($mg)
}

The first three lines are pretty straightforward: a simple function that queries a database (invoke-sql), creating a connection to the local management server, and getting the management group.

Then we go row by row with what’s in the table.  In this case the table looks something like this:

ClassName                   ServerName  AppID    MonitorCI
Microsoft.Windows.Computer  server01    COTS1    AssignmentGroup1
Microsoft.Windows.Computer  server02    Custom2  AssignmentGroup2

The code is going to get the class (in this case $row.ClassName, which is Microsoft.Windows.Computer), then create a new instance of that class (even if the object exists – we won't overwrite anything not in the discovery data).  We find the host using the servername, add a couple of our new attributes – AppID and MonitorCI – and then create our discovery object.  That is where the magic happens:

$discovery=New-Object Microsoft.EnterpriseManagement.ConnectorFramework.IncrementalDiscoveryData

This is the bit that creates the actual discovery object.  Since it's incremental, we aren't going to mess with anything already in that object, just update the bits that we add to the discovery object.  In this case, we are adding the $ClassObject that we populated with our additional bits of data – AppID and MonitorCI.  Two more lines, and we have our remotely discovered data as properties on our server object.

    $discovery.Add($ClassObject)
    $discovery.Overwrite($mg)

With just that little bit of code we are able to store attributes, properties, configurations, or anything else in a central database and have a simple PowerShell script that updates the objects in SCOM with that discovery data!  Not only that, those items are not rolled up to some random object – we can roll them up to the exact object we want.  No more regkey tattoos, or configuration files that you have to keep up to date on servers.

PowerShell and parsing HTML code in Core

When working on a PowerShell webservice, I came across an interesting problem that I think is only going to crop up more and more.  This webservice takes a JSON payload, performs some simple manipulation on the data, kicks off some automation, and logs some events.  Part of the payload is HTML code, however.  Of course the JSON payload doesn't care, but when it came into PowerShell it was being interpreted as a string.  That meant that when you looked at the string, you literally saw HTML code:

<BODY><H3>OPEN Problem 256 in environment <I>EPG</I></H3>
                               <HR>
                               <B>1 impacted infrastructure component</B>
                               <HR>
                               <BR>
                               <DIV><SPAN>Process</SPAN><BR><B><SPAN style="FONT-SIZE: 120%; COLOR: #dc172a">bosh-dns-health</SPAN></B><BR>
                               <P style="MARGIN-LEFT: 1em"><B><SPAN style="FONT-SIZE: 110%">Network problem</SPAN></B><BR>Packet retransmission rate for process bosh-dns-health on host 
                               cloud_controller/<GUID> has increased to 18 %</P></DIV>
                               <HR>
                               Root cause
                               <HR>
                               
                               <DIV><SPAN>Process</SPAN><BR><B><SPAN style="FONT-SIZE: 120%; COLOR: #dc172a">bosh-dns-health</SPAN></B><BR>
                               <P style="MARGIN-LEFT: 1em"><B><SPAN style="FONT-SIZE: 110%">Network problem</SPAN></B><BR>Packet retransmission rate for process bosh-dns-health on host 
                               cloud_controller/<GUID> has increased to 18 %</P></DIV>
                               <HR>
                               
                               <P><A href="https://redacted.somewhere.com/e/<GUID>/#problems/problemdetails;pid=-<GUID>">Open in Browser</A></P></BODY> 

I wanted to put this data in the events I was logging, but I obviously didn't want it to look like this.  There isn't a PowerShell cmdlet for 'ConvertFrom-Html', although that would be great.  I could have tried to parse the text and remove the formatting, but that would be a pain trying to escape the right characters and account for all of the HTML tags.  I found several ways online to handle this – namely creating an 'HTMLFile' COM object, writing the HTML code into it, and then extracting the innerText property of the object.  This works fine in some environments, but throws a weird error if you don't have Office installed.  Here is the code:

$HTML = New-Object -Com "HTMLFile"
$html.IHTMLDocument2_write($htmlcode)

And the error it throws when Office isn’t installed:

Method invocation failed because [System.__ComObject] does not contain a method named 'IHTMLDocument2_write'.

There is also a catch here – that error will also be thrown if you are trying this in PowerShell Core/Microsoft PowerShell (i.e. version 6.x and later, as opposed to Windows PowerShell 5.x)!  Fortunately, there is a quick fix:

$HTML = New-Object -Com "HTMLFile"
try {
    $html.IHTMLDocument2_write($htmlcode)
}
catch {
    $encoded = [System.Text.Encoding]::Unicode.GetBytes($htmlcode)
    $html.write($encoded)
}
$text = ($html.all | Where-Object { $_.tagname -eq 'body' } | Select-Object -Property innerText).innertext

A simple try/catch, and it works great.  I am going to create a new repo in GitHub and make this a new function.  Hopefully you haven’t wasted too much time trying to track this down – it took me a while, and it wasn’t until I stumbled upon a similar solution on StackOverflow (https://stackoverflow.com/questions/46307976/unable-to-use-ihtmldocument2) that I was able to wrap it up.  Expect the cmdlet/function soon!
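
In the meantime, a minimal function wrapper might look something like this (the name and parameter are my own, not the eventual repo's):

function ConvertFrom-HtmlString {
    # Strips HTML tags from a string by loading it into an HTMLFile COM object
    # and returning the innerText of the body. Windows-only, since it relies on COM.
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [string]$HtmlCode
    )
    process {
        $html = New-Object -ComObject 'HTMLFile'
        try {
            # works in Windows PowerShell / when Office is installed
            $html.IHTMLDocument2_write($HtmlCode)
        }
        catch {
            # fallback for PowerShell Core or machines without Office
            $encoded = [System.Text.Encoding]::Unicode.GetBytes($HtmlCode)
            $html.write($encoded)
        }
        ($html.all | Where-Object { $_.tagName -eq 'body' }).innerText
    }
}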

Azure ARM Fragments

SCOM Trick adopted for Azure ARM

I am working on a project for a very popular conference – working on backend automation that controls everything from speaker coordination to session scheduling.  The current task at hand involves deploying the entire suite of Logic Apps, Azure Automation accounts, Azure Functions, etc…  These would all be deployed to a new Resource Group.  When deploying these resources for a new conference a few things change – changing the conference name from ‘Jazz’ to ‘Midway’ for example, or changing the backend data sources.  

Normally you would think to use Azure ARM template parameters to pass these values when you deploy the resources, and you would be absolutely right!  They are powerful assets to have in your pocket.  It does get a bit dodgy, however, when you start to deploy Logic Apps in the ARM template and those Logic Apps have parameters of their own.

Logic Apps have parameters and variables all their own, and they are defined just like parameters and variables in ARM templates.  When you want to deploy a Logic App that has parameters you can put them in the ARM Template and reference them in the Logic App, or you can use an ARM template expression in the Logic App.  The latter is not a popular idea, and the former doesn’t evaluate the parameters until the execution of the Logic App.  That obviously makes it difficult to work on the Logic App after it’s deployed.

That’s when I got the idea of using a popular SCOM tool – SCOM Fragments by Kevin Holman – in order to get the best of both worlds.  I wanted a quick way to deploy, and a quick way to edit after deployment.  Cake and all that….

The idea is simple – find/replace what you want to change before you deploy.  Sounds simple, and it is!  That is essentially how the SCOM fragments work, and the same idea can be utilized here.  Say, for example, you have a simple conference name parameter you want to change.  The first thing you would do is build your Logic App like normal, keeping in place a static conference name.  Export that template.  Now, at the top of the template, add a new parameter to the parameters section, like this:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "Conference_Name": {
            "type": "string",
            "defaultValue": "###Awesome_Conference_Name###",
            "metadata": {
                "description": "Name of the conference - i.e. Management Conference 2019"
            }
        },

Now, find your static conference name down in the code for the Logic App itself, and replace the name with ###Awesome_Conference_Name###.  That's it!  That is all that is needed to prepare your template for a rapid deployment.  When you need to deploy this template, simply find/replace ###Awesome_Conference_Name### with whatever text you want – i.e. "My Super Conference 2019".  It will update both the parameters section and the code itself.  Do we really need the parameter at the top of the ARM template, especially if we are just going to replace the text ourselves before we deploy?  The answer is no, but it does help immensely with keeping track when you have a ton of parameters:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "Conference_Name": {
            "type": "string",
            "defaultValue": "###Awesome_Conference_Name###",
            "metadata": {
                "description": "Name of the conference - i.e. Management Conference 2019"
            }
        },
        "Sharepoint_Base_URL": {
            "defaultValue": "https://www.sharepoint.com/sites/Communications"
        },
        "Sharepoint_Speaker_ListName": {"defaultValue": "###Sharepoint_Speaker_ListName###"},
        "Sharepoint_Session_ListName": {"defaultValue": "###Sharepoint_Session_ListName###"},
        "Sharepoint_Session_Selection_Team_ListName": {"defaultValue": "###Sharepoint_Session_Selection_Team_ListName###"},
        "Sharepoint_Approved_Sessions_ListName": {"defaultValue": "###Sharepoint_Approved_Sessions_ListName###"},
        "Speaker_Agreement_URL": {"defaultValue": "###Speaker_Agreement_URL###"},
        "Session_Submission_FormName": {"defaultValue": "###Session_Submission_FormName###"},
        "Sched_Speakers_API": {"defaultValue": "###Sched_Speakers_API###"},
        "Sched_Sessions_API": {"defaultValue": "###Sched_Sessions_API###"},
        "Sched_Base_URL": {"defaultValue": "###Sched_Base_URL###"},
        "Sched_API_Key": {"defaultValue": "###Sched_API_Key###"},
        "MediaPack_URL": {"defaultValue": "###MediaPack_URL###"},

By putting the parameters in your code, even though you aren’t going to use them in the way they are intended, you can easily see which ones you need to replace in one place.  
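
When it is time to deploy, the find/replace itself can be scripted.  A rough sketch (file names and replacement values here are just examples) might be:

# stamp a deployable copy of the template with the real values
$template = Get-Content .\template.json -Raw
$template = $template -replace '###Awesome_Conference_Name###', 'My Super Conference 2019'
$template = $template -replace '###Sharepoint_Speaker_ListName###', 'Speakers2019'
$template | Set-Content .\template.deploy.json

# then deploy it like any other ARM template
New-AzResourceGroupDeployment -ResourceGroupName 'MySuperConference2019' -TemplateFile .\template.deploy.json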

PowerShell – what modules are in use?

There comes a time when you are neck deep in code and come to the realization that one of your modules could really use a tweak.  There is just one problem – what other scripts are using this module?  What if you wanted to inventory your scripts and the modules each one is using?  That was the main use case for this post.  I have a scripts directory, and want a quick inventory of what modules each script is using.

First, the function:

#requires -version 3.0
Function Test-ScriptFile {
    <#
    .Synopsis
    Test a PowerShell script for cmdlets
    .Description
    This command will analyze a PowerShell script file and display a list of detected commands such as PowerShell cmdlets and functions. Commands will be compared to what is installed locally. It is recommended you run this on a Windows 8.1 client with the latest version of RSAT installed. Unknown commands could also be internally defined functions. If in doubt view the contents of the script file in the PowerShell ISE or a script editor.
    You can test any .ps1, .psm1 or .txt file.
    .Parameter Path
    The path to the PowerShell script file. You can test any .ps1, .psm1 or .txt file.
    .Example
    PS C:\> test-scriptfile C:\scripts\Remove-MyVM2.ps1
     
    CommandType Name                                   ModuleName
    ----------- ----                                   ----------
        Cmdlet Disable-VMEventing                      Hyper-V
        Cmdlet ForEach-Object                          Microsoft.PowerShell.Core
        Cmdlet Get-VHD                                 Hyper-V
        Cmdlet Get-VMSnapshot                          Hyper-V
        Cmdlet Invoke-Command                          Microsoft.PowerShell.Core
        Cmdlet New-PSSession                           Microsoft.PowerShell.Core
        Cmdlet Out-Null                                Microsoft.PowerShell.Core
        Cmdlet Out-String                              Microsoft.PowerShell.Utility
        Cmdlet Remove-Item                             Microsoft.PowerShell.Management
        Cmdlet Remove-PSSession                        Microsoft.PowerShell.Core
        Cmdlet Remove-VM                               Hyper-V
        Cmdlet Remove-VMSnapshot                       Hyper-V
        Cmdlet Write-Debug                             Microsoft.PowerShell.Utility
        Cmdlet Write-Verbose                           Microsoft.PowerShell.Utility
        Cmdlet Write-Warning                           Microsoft.PowerShell.Utility

    .Notes
    Original script provided by Jeff Hicks at (https://www.petri.com/powershell-problem-solver-find-script-commands)

    #>
     
    [cmdletbinding()]
    Param(
        [Parameter(Position = 0, Mandatory = $True, HelpMessage = "Enter the path to a PowerShell script file,",
            ValueFromPipeline = $True, ValueFromPipelineByPropertyName = $True)]
        [ValidatePattern( "\.(ps1|psm1|txt)$")]
        [ValidateScript( { Test-Path $_ })]
        [string]$Path
    )
     
    Begin {
        Write-Verbose "Starting $($MyInvocation.Mycommand)"  
        Write-Verbose "Defining AST variables"
        New-Variable astTokens -force
        New-Variable astErr -force
    }
     
    Process {
        Write-Verbose "Parsing $path"
        $AST = [System.Management.Automation.Language.Parser]::ParseFile($Path, [ref]$astTokens, [ref]$astErr)

        #group tokens and turn into a hashtable
        $h = $astTokens | Group-Object tokenflags -AsHashTable -AsString
     
        $commandData = $h.CommandName | where-object { $_.text -notmatch "-TargetResource$" } | 
        ForEach-Object {
            Write-Verbose "Processing $($_.text)" 
            Try {
                $cmd = $_.Text
                $resolved = $cmd | get-command -ErrorAction Stop
                if ($resolved.CommandType -eq 'Alias') {
                    Write-Verbose "Resolving an alias"
                    #manually handle "?" because Get-Command and Get-Alias won't.
                    Write-Verbose "Detected the Where-Object alias '?'"
                    if ($cmd -eq '?') { 
                        Get-Command Where-Object
                    }
                    else {
                        $resolved.ResolvedCommandName | Get-Command
                    }
                }
                else {
                    $resolved
                }
            } 
            Catch {
                Write-Verbose "Command is not recognized"
                #create a custom object for unknown commands
                [PSCustomobject]@{
                    CommandType = "Unknown"
                    Name        = $cmd
                    ModuleName  = "Unknown"
                }
            }
        } 

        write-output $CommandData
    }

    End {
        Write-Verbose -Message "Ending $($MyInvocation.Mycommand)"
    }
}

 

This function originally came from Jeff Hicks.  I made a few tweaks, but the majority of this is his code.  As always, Jeff puts out amazing work.

Next, a simple script to call the function on all of the scripts in a certain directory:

# module path comes from the registry on Windows (or Windows PowerShell, where $IsWindows doesn't exist), from a settings file elsewhere
if ($IsWindows -or $null -eq $IsWindows){$ModulesDir = (Get-ItemProperty HKLM:\SOFTWARE\PWSH).PowerShell_Modules}
else{$ModulesDir = (Get-Content '/var/opt/tifa/settings.json' | ConvertFrom-Json).PowerShell_Modules}  # property name assumed to mirror the registry value
$log = $PSScriptRoot + '\'+ ($MyInvocation.MyCommand.Name).split('.')[0] + '.log'
$ModulesToImport = 'logging', 'SQL', 'PoshKeePass', 'PoshRSJob', 'SCOM', "OMI", 'DynamicMonitoring', 'Nix','Utils'
foreach ($module in $ModulesToImport) {Get-ChildItem $ModulesDir\$module\*.psd1 -Recurse | resolve-path | ForEach-Object { import-module $_.providerpath -force }}

$files = (Get-ChildItem 'D:\pwsh\' -Include *.ps1 -Recurse | where-object { $_.FullName -notmatch "\\modules*" }).fullname

[System.Collections.ArrayList]$AllData = @()
foreach ($file in $files) {
    [array]$data = Test-ScriptFile -Path $file
    foreach ($dataline in $data) {
        $path = $Dataline.module.path
        $version = $dataline.Module.version
        $dataobj = [PSCustomObject]@{
            File          = $File;
            Type          = $dataline.CommandType;
            Name          = $dataline.Name;
            ModuleName    = $dataline.ModuleName;
            ModulePath    = $path;
            ModuleVersion = $version;
        }
        $AllData.add($dataobj)|out-null
    }
}
$AllData|select-object -Property file, type, name, modulename,modulepath,moduleversion -Unique|out-gridview

There are a few gotchas you want to keep an eye out for – in order to correctly identify the cmdlets, you will need to load the modules that contain said cmdlets.  You can see where I am loading several modules – SCOM, SQL, logging, etc.  The modules in my script are loaded from a path stored in a reg key – HKLM:\SOFTWARE\PWSH.  Load your modules in any way that works for you.  Also note that the last line sends the output to a gridview – change that output to something more useful unless you are just previewing it.
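
If you want to mimic the registry approach, creating that key is quick (the path and value here just mirror my setup):

# one-time setup: point HKLM:\SOFTWARE\PWSH at your modules folder (run elevated)
New-Item -Path 'HKLM:\SOFTWARE\PWSH' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\PWSH' -Name 'PowerShell_Modules' -Value 'D:\pwsh\modules' -PropertyType String -Force | Out-Null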

And there you have it!  Thanks to Jeff for the core function, and I hope this helps keep your modules organized!

PowerShell – Security in your profile

If you have done much with Invoke-WebRequest, and if your endpoints have an inkling of security-minded people watching them, then chances are you have run into a small issue:

Invoke-WebRequest : The request was aborted: Could not create SSL/TLS secure channel.

What's happening here? Well, chances are that the endpoint you are attempting to access has turned off TLS 1.0 and 1.1, and for good reason! There is an easy fix, however. Simply place a single line of code in your script above the Invoke-WebRequest:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

Great! Post done, walk away.

But…..I have about 10000 scripts…..

That one line works great if you have just a handful of scripts that you run, but what if you need to do this for a large company – maybe a large enterprise? Well, it turns out that your profile can help.

First off – there are multiple PowerShell profiles on a system, but for this instance, let's focus on the All-Users/All-Hosts profile (also sometimes referred to as the system profile). Depending on the flavor of PowerShell you are running – Microsoft vs Windows – the system profile will be in different locations. Not to fear, however, because $PSHome will show you where the profile is located. Create your profile (if you haven't already) in the $PSHome directory. The name of the file should be "profile.ps1".

Now – place the Net.ServicePointManager line you would normally put in a single script into your system profile and save it. Whenever an Invoke-WebRequest is run from this system, it will automatically use the TLS 1.2 protocol. Updating a few systems that run your scripts is a lot easier than updating thousands of scripts, and this will save you a ton of time.
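
A quick way to drop that line into the All-Users/All-Hosts profile (run from an elevated session of the PowerShell flavor you want to change):

$profilePath = Join-Path $PSHOME 'profile.ps1'
if (-not (Test-Path $profilePath)) { New-Item -Path $profilePath -ItemType File -Force | Out-Null }
Add-Content -Path $profilePath -Value '[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12'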

SQL Process Automation with PowerShell

Here is a handy way to handle SQL processes that you find yourself needing to schedule. Of course you can always set up that kind of scheduling via the SQL Server Agent, but there are two good reasons to do this kind of scheduling via PowerShell.

1: You don't have rights to add jobs via the SQL Server Agent. Some security teams will restrict non-DBA access to the agent, or insist on setting the agent to a manual start.
2: You wish to have easier tracking, easier configuration, and just want to do something cool with PowerShell.

The other option you have when trying to schedule something like SQL processes would be to simply use Task Scheduler. Indeed – in my solution I actually use Task Scheduler as a base engine to run every minute or so. What I don't like about Task Scheduler is trying to put SQL command lines in it. It's flat out a pain. So, I built something that is easy to configure – even by someone who is not skilled in PowerShell – easy to implement, and has all of the typical good stuff you want with PowerShell.

The solution utilizes three main pieces. First is a script that we will schedule to run every minute via Task Scheduler. Second, a configuration file in JSON format – I chose JSON since it's simple to read and easy to write. Lastly, we will have an XML file that tracks the last time each statement ran. Let's examine each piece:

The script file is fairly straightforward, and there are only a couple of pieces that need explaining. In simple terms, this script file will be used as the engine that calls the various SQL commands we specify in the config.json file. We import a few modules, set up a couple of base variables, and then loop through the SQL commands, updating the runhistory.xml file with the timestamp. Schedule this file to run every minute via Task Scheduler.

$log = $PSScriptRoot + '\'+ ($MyInvocation.MyCommand.Name).split('.')[0] + '.log'
$ModulesDir = 'D:\pwsh\modules'
$ModulesToImport = 'DDTLogging','SQL'
foreach ($module in $ModulesToImport){Get-ChildItem $ModulesDir\$module\*.psd1 -Recurse | resolve-path | ForEach-Object { import-module $_.providerpath -force }}
$Date = Get-Date
$Commands = (get-content $PSScriptRoot\config.json|convertfrom-json).commands
[xml]$RunHistory = get-content "$PSScriptRoot\runhistory.xml"
foreach ($command in $Commands){
    $commandname = $command.name
    $LastRunTime = ($runhistory.Catalog.dataset|Where-Object {$_.name -eq $commandname}|Select-Object -Property time).time
    # run the command if it has never run, or if its time period has elapsed
    if ($null -eq $LastRunTime -or [datetime]$LastRunTime -lt $Date.AddMinutes(-$command.TimePeriod))
    {
        if ($null -eq $LastRunTime){
            # first run - add a new dataset node for this command
            $newdata = $RunHistory.Catalog.AppendChild($RunHistory.CreateElement("dataset"))
            $newdata.SetAttribute("name","$commandname")
            $newdata.SetAttribute("time","$date")
            $RunHistory.Save("$PSScriptRoot\runhistory.xml")
        }
        else {
            # update the existing node with the new timestamp
            $RunHistory.Catalog.dataset|Where-Object {$_.name -eq $commandname}|ForEach-Object {$_.time = [string]$date}
            $RunHistory.Save("$PSScriptRoot\runhistory.xml")
        }
        try{
            invoke-sql -server $command.server -database $command.database -method $command.commandtype -integratedsecurity $true -statement $command.statement
        }
        catch{
            write-log -text "Error in $commandname.  $_" -level ERROR -log $log
        }
    }
}

If you are wondering about the Invoke-SQL command, it’s something that I wrapped up in a quick module. You can get the module here.

Now that we have the base script, let's look at the config.json file. This is where the meat of the information about your commands comes from:

{
    "Commands":
    [
        {
            "name":"SQL_Views_Stored_Proc",
            "timeperiod":15,
            "statement":"exec refreshviews",
            "server":"sqlserver1.constoso.com",
            "database":"customers",
            "commandtype":"execute"
        },
        {
            "name":"Update_Time",
            "timeperiod":15,
            "statement":"update cogsuppliers set time = getdate()",
            "server":"sqlserver2.constoso.com",
            "database":"suppliers",
            "commandtype":"update"
        }       
    ]
}

As you can see, it's pretty straightforward. Put in the frequency you would like the statement to run (timeperiod, in minutes), the statement itself, the server and database to run it on, and finally the 'type' of command it is. The type just tells the SQL module whether or not to load the returned data into a data table. That's it! Anyone with a basic knowledge of how to write a JSON file can add to or remove from this config file quickly!

The final piece is the runhistory.xml file. This simply lets the main script keep track of the last time each statement was run. You shouldn't have to ever update this file manually.

<Catalog>
  <dataset name="SQL_Views_Stored_Proc" time="11/15/2018 10:49:12" />
  <dataset name="Update_Time" time="11/15/2018 10:49:12" />
</Catalog>

For full transparency, there are a few things I need to swing back around and address. First – the commands run sequentially. Long-running SQL statements might cause others to fail or fall behind. The plan for that is to use something like PoshRSJob to run the jobs in parallel, each in its own runspace. Secondly, there is a small chance that if someone were to manually edit the runhistory.xml file and remove one of the lines, the script could error. I will update the script with a catch to make sure this doesn't happen.
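
For the parallel piece, one way that might look with PoshRSJob (just a sketch, not tested against this script) is:

# kick each statement off in its own runspace instead of running them one at a time
$jobs = foreach ($command in $Commands) {
    Start-RSJob -Name $command.name -ArgumentList $command -ScriptBlock {
        param($command)
        # note: the module that provides invoke-sql has to be loaded inside the runspace as well
        invoke-sql -server $command.server -database $command.database -method $command.commandtype -integratedsecurity $true -statement $command.statement
    }
}
$jobs | Wait-RSJob | Receive-RSJob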

SCOM 1807, Linux omiagent service stopped, 403 error in discovery and more

I hope this post saves you some time and pain – I lost 2 days to troubleshooting and opening cases. Here is the setup:

OpsMgr 1807
Latest Linux/Unix MP (updated in August, 2018)
OEL 7.x client
Followed the instructions here: SCOM 1801/1807 Install

The agent install went fine – install completed, cert created, conf file created and configured, cert signed by the OpsMgr server and replaced on the client, and the agent restarted. That's when I checked 'scxadmin -status'. Omiserver was running, but omiagent was stopped. Only one log was created – omiserver.log, and nothing for the agent. Trying to start or restart via scxadmin yielded nothing. No feedback, no updates to logs, nothing. Even doing the install with --debug or setting the logs to verbose did nothing. So what happened?

Turns out, the omiagent process won't start until a successful discovery is run in the console. Who knew? Turns out that I didn't. Two days lost troubleshooting. I knew that a discovery was needed, but I assumed that the agent processes would be running ahead of time. Turns out they don't all run before the discovery.

So, run over to the console and try to do the discovery, only to immediately get a 403 error. Specifically, "The WinRM client received an HTTP status code of 403 from the remote WS-Management service." Turns out this one is an easy fix, at least in my instance. My environment, like most in the corporate world, uses proxy servers. On the MS, a quick "netsh winhttp set proxy proxy-server="http=" bypass-list="*.yourdomain.com"" was all it took to fix the issue.

Once the discovery ran fine, it took about five minutes before the omiagent on the Linux machine was up and running on its own.

I hope this saves someone a few minutes of troubleshooting – if it does, shoot me a tweet!