Customize your PowerShell profile for useful startup actions

Did you know you can make PowerShell run any commands you want when you start a shell? This is amazingly useful for gathering information, changing settings, or kicking off processes – all at shell startup. There are plenty of places that talk about the profiles, so I won’t go into each type, but long story short there are almost 10 different profiles when you account for 32-bit and 64-bit PowerShell. Some of them are explained in detail here.

For my example here, I am going to deal with the %UserProfile%\My Documents\WindowsPowerShell\profile.ps1 profile, which affects the current user, but all shells. This is useful when you are switching back and forth between the ISE and the console – say, when you are testing new scripts. By default this file won’t exist, so you will have to create it.

#See if your profile file exists.  Checks the 'My Documents\WindowsPowerShell' directory
Test-Path $profile
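
If that returns False, creating the file is a one-liner – a minimal sketch (New-Item will build any missing parent folders too):

# Create the profile file (and any missing parent folders) if it isn't there yet
if (-not (Test-Path $profile)) { New-Item -ItemType File -Path $profile -Force }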

The profile is really just a .ps1 file – you can put any PowerShell you want in it. Say you want to get a random cat fact every time you start a shell? (Who am I to judge?)

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
((((invoke-webrequest -uri https://catfact.ninja/fact -Method get).content).split(':')[1])).trim('","length"')
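
That string surgery works against the current response format, but it’s brittle – a fact containing a colon will throw it off. If you’d rather let PowerShell parse the JSON for you, here is a sturdier sketch against the same endpoint:

# Invoke-RestMethod parses the JSON response into an object with .fact and .length properties
(Invoke-RestMethod -Uri 'https://catfact.ninja/fact' -Method Get).fact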

There are some really useful things you can do now that you know you can run anything. For example – this will show the number of running PowerShell processes, along with WinRM service status:

write-host -ForegroundColor Green "PowerShell Processes: " (Get-process 'PowerShell').count
write-host -ForegroundColor Green "WinRM Status: " (Get-service 'winrm').status

And this will show the PowerShell module paths:

write-host -ForegroundColor Green "Module Paths:"
foreach ($module in ($env:PSModulePath).Split(";")){write-host -ForegroundColor Green "    "$module}

Notice that I also added a simple “cd c:\blog” to my profile – really useful for starting straight in your scripts directory (no more “cd c:\workspaces\app\development\scripts….” every time you start a console).

But you can do even more! Full functions can be loaded into your profile and are available immediately. One of my favorites is one that will add a Start-RDP function, so I can initiate a remote desktop session without ever touching the start menu! How cool is this?

write-host -ForegroundColor Green "PowerShell Processes: " (Get-process 'PowerShell').count
write-host -ForegroundColor Green "WinRM Status: " (Get-service 'winrm').status
write-host -ForegroundColor Green "Module Paths:"
foreach ($module in ($env:PSModulePath).Split(";")){write-host -ForegroundColor Green "    "$module}
cd C:\Blog
function Start-RDP
{
    param  
    (  
        [Parameter(
            Position = 0,
            ValueFromPipeline=$true,
            Mandatory=$true
        )]
        [ValidateNotNullOrEmpty()]
        [string]
        $ServerName
    )
    mstsc /v:$ServerName 
}
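
Once the profile loads, the function is available in every new shell. Usage is exactly what you’d expect (the server name below is made up – substitute your own):

Start-RDP -ServerName server01.contoso.com
# ServerName accepts pipeline input, so this works too:
'server01.contoso.com' | Start-RDP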

There is also a special function you can put in your profile – the ‘prompt’ function. This will change the PowerShell command-line prompt to whatever you want! Just create a function called ‘prompt’, write-host anything you want inside it, and make sure you put a ‘return " "’ at the end – it’s that simple! You can put some great data right on the prompt – for example, you can make the current time show up each time the prompt is drawn. This is really useful for eyeballing how long something took to run when you don’t want to pull out Measure-Object. Here is my prompt, along with the full profile:

function prompt
{
    $time = "("+(get-date).ToLongTimeString()+")"
    write-host -NoNewline -ForegroundColor Green "PS " (Get-Location).Path $time ">"
    return " "
}
write-host -ForegroundColor Green "PowerShell Processes: " (Get-process 'PowerShell').count
write-host -ForegroundColor Green "WinRM Status: " (Get-service 'winrm').status
write-host -ForegroundColor Green "Module Paths:"
foreach ($module in ($env:PSModulePath).Split(";")){write-host -ForegroundColor Green "    "$module}
cd C:\Blog
function Start-RDP
{
    param  
    (  
        [Parameter(
            Position = 0,
            ValueFromPipeline=$true,
            Mandatory=$true
        )]
        [ValidateNotNullOrEmpty()]
        [string]
        $ServerName
    )
    mstsc /v:$ServerName 
}

And the outcome:

It’s that simple! Customize your profile, and start being productive quicker! Leave a comment below to tell me what your favorite profile modifications are!!

Austin PowerShell User Group Survey Responses!

Last week, I sent out a survey asking questions about the Austin PowerShell User Group. Basically, we are trying to find out what you all want when it comes to location, meeting times, etc. Well, here are the results!!

To begin with we had around 40 responses, which is excellent! Thank you all for responding!

The first question – “Would you be interested in attending an Austin PowerShell User Group meeting in the future?”. I know – I know – softball question. If someone is responding, then they are probably interested in attending. Not surprisingly, the percent that answered yes was 100%! That’s great!

When we asked what days worked best, a couple of clear winners emerged – Friday and Doesn’t Matter.

Next – How often should we meet? Again, a pretty clear winner – Once every other month took almost 50% of the votes!

Now we get to the fun questions – where we should meet, and for how long. First, here are the primary and secondary choices for location. North Austin and Round Rock/Pflugerville appear to be the leaders for the primary choice, with North Austin taking the bulk of the secondary choice!


When we asked how long each meeting should be, we got some great varied responses! All Day and Afternoon took two-thirds of the primary choice, while Afternoon and Morning dominated the secondary choice!


We know we can’t pick everyone’s preferred time and/or location, so next we asked how likely you would still be able to attend if the selection didn’t go your way. All in all, everyone seemed somewhat flexible!

Now on to the free-form text! Some great suggestions on venues:

Microsoft or Dell Campus
Employer
Microsoft, Member Facilities
Just happy to be aware of this
Dave & Busters, Alamo Drafthouse, MSFT Store (free)
Domain area
eBay (Daytime only), Microsoft (daytime only), User Group Member Businesses
Private companies to host
Yes 🙂
Microsoft Austin on Stonelake

I don’t know who the smart-ass was that said “Yes” with a smiley face, but I will find you 🙂

I should have known better than to ask for open comments, but here they are.

I think this is a great idea! It would be great to meet other PS developers in the area
I could participate more easily on days I could not attend if we had live feed or if the presentations were available on you tube or something
Maybe we could expand CTSMUG and devote a session to Powershell every time we meet – We could schedule it after lunch to allow for those attendees who cant take an entire day – Or whenever during the CTSMUG Day that makes the most sense. The technologies are complimentary and it benefits the CTSMUGers as much as the PUGers. Or barring that how about a Powershell Happy Hour post CTSMUG.
Newcomers’ meeting would be cool for a start.
Meetings during business hours opens up more options for locations because you don’t have to pay for extra security or host in a Retail/Food location which may be too noisy. I work for eBay and can easily host meetings (small or large) given enough advanced notice but it must be during the week, during business hours.
Ask for more volunteers to lead the group so we can spread the load.

Here is something nice – someone thanked me!

Thanks Donnie !

And then there’s Duncan McAlynn’s comment (Yes, I know it was you)

Donnie’s an asshole.

Thanks to everyone that took the survey (except Duncan) – we REALLY appreciate it! We will munch on this data, and send out invites shortly!

Run _Anything_ with Flow. PowerShell Triggers

Want to start PowerShell commands from a Tweet? Yeah you do, and you didn’t even know you wanted to.

Earlier this month, a great Flow of the Week was posted that highlighted the ability to use a .NET FileSystemWatcher to kick off local processes. This sparked an idea – I think we can expand on this and basically run anything we want. Here’s how:

First, let’s start with the Connected Gateway. The link above goes into a bit of detail on how to configure the connection. Nothing special there.
Second, on the Connected Gateway, run this PowerShell script:

# Watch C:\temp\WatchMe for changes to Flow.txt
$FileSystemWatcher = New-Object System.IO.FileSystemWatcher
$FileSystemWatcher.Path = "C:\temp\WatchMe"
$FileSystemWatcher.Filter = "Flow.txt"
$FileSystemWatcher.EnableRaisingEvents = $true

# When the file changes, run its last line as a PowerShell command
Register-ObjectEvent $FileSystemWatcher "Changed" -Action {
    $content = Get-Content C:\temp\WatchMe\Flow.txt | Select-Object -Last 1
    powershell.exe $content
}

This script sets up a FileSystemWatcher on the C:\temp\WatchMe\Flow.txt file. The watcher only fires when the file is changed – although “Changed” is just one of several events you can register for (Created, Deleted, Renamed, Error, etc.). When it fires, the action reads the last line of C:\temp\WatchMe\Flow.txt and launches a PowerShell process with that line as its input.
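
Before wiring up Flow, you can sanity-check the watcher from another console – appending a line should fire the command (the out-file target below is just an arbitrary test path):

# Appending raises the 'Changed' event; the watcher runs the appended line
Add-Content -Path C:\temp\WatchMe\Flow.txt -Value 'get-date | out-file c:\temp\WatchMe\LastRun.txt'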

Third – this is the best part. Since we have a FileSystemWatcher, and that watcher is reading the last line of C:\temp\WatchMe\Flow.txt and kicking that process off, all we have to do is append a line to that file to start a PowerShell session. Flow has a built-in connection for the file system – you can see where this is going. Create a new Flow, and add an input action – I am fond of the Outlook.com ‘Email Arrives’ action. Supply a suitable trigger in the subject, and add the ‘Append File’ action from the FileSystem service. Here is how mine is configured:

The only catch with this particular setup is that the body of the email needs to be in plain text – Windows 10 Mail app, for example, will not send in plain text. The body of the mail is the PowerShell command we want to run. For example, maybe we want PowerShell to get a list of processes that have a certain name, and dump those to a text file for parsing later. Simply send an email that has the body of “get-process -name chrome|out-file c:\temp\ChromeProcesses.txt”. Here is what that results in:
Before we send the email:

The Email:

After a few minutes, a new folder appears:

The contents of the text file:

Handles  NPM(K)    PM(K)      WS(K)     CPU(s)     Id  SI ProcessName                                                  
-------  ------    -----      -----     ------     --  -- -----------                                                  
   1956      97   131588     191648     694.98    728   1 chrome                                                       
    249      22    34268      43356       1.63   4264   1 chrome                                                       
    381      81   307592     331312     145.16   6080   1 chrome                                                       
    149      12     2140      10076       0.05   7936   1 chrome                                                       
    632      86   277900     557484     974.00   9972   1 chrome                                                       
    431      31   147956     159404     182.11  10056   1 chrome                                                       
    219      12     2132       9608       0.08  11636   1 chrome                                                       
    283      50   135932     141512      98.05  12224   1 chrome                                                       
    396      54   133912     297432      18.58  12472   1 chrome                                                       
    253      46   107348     106752      50.13  13276   1 chrome                                                       
    381      48   114452     128836     242.89  14328   1 chrome                                                       

Think about what you could do with this – perhaps you want to do an Invoke-WebRequest every time an RSS feed updates. Maybe start a set of diagnostic commands when an item is added to SharePoint. Kick off actions with a Tweet. If you want full scripts to run instead of commands, change the Action section of the FileSystemWatcher to “PowerShell.exe -file $content”. Easy as pie.
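
To make that last option concrete, here is a sketch of the watcher registration with the action modified to treat the appended line as a path to a .ps1 file (it assumes the $FileSystemWatcher object from the earlier script):

Register-ObjectEvent $FileSystemWatcher "Changed" -Action {
    # Treat the last line as the path to a script file rather than a raw command
    $content = Get-Content C:\temp\WatchMe\Flow.txt | Select-Object -Last 1
    powershell.exe -file $content
}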

PowerShell WSMAN Configuration for Massive Scale

In my day job, I constantly strive to push PowerShell to the limit, attempting to use absolutely every bit of processor/memory/network bandwidth available. One way I do this is with PoshRSJob, written by Boe Prox. PoshRSJob is a wonderful multi-threading tool, and I use it at pretty heavy scale – typically with a 100-thread throttle.

Sometimes, when you are running a lot of concurrent threads attaching to remote machines, you will run into WinRM connection limitations. They typically show up in error messages like this when you run commands like “invoke-command -computername remoteserver01”:
“This user is allowed a maximum number of 5 concurrent shells, which has been exceeded.”

Configuring the typical WSMAN connection limits is fairly well documented, but I was running into another type of error – one that occurred even after I had upped the connection limits:
“The maximum number of concurrent shells allowed for this plugin has been exceeded.”

This was driving me crazy until I noticed the slightly different wording – the key word in the second error is “plugin”. I had to configure the limits on the plugin, not just the shell. Browsing the WSMAN PSDrive, I was eventually able to find the right settings. I have compiled them here in a small script that enables WinRM and sets the limits very high for both the shell and the plugin.

enable-psremoting -force
# Shell-level limits
cd WSMan:\localhost\Shell
set-item MaxConcurrentUsers 100
set-item MaxProcessesPerShell 10000
set-item MaxMemoryPerShellMB 1024
set-item MaxShellsPerUser 1000
# Plugin-level limits - these are the ones behind the second error
cd WSMan:\localhost\Plugin\microsoft.powershell\Quotas
set-item MaxConcurrentUsers 100
set-item MaxProcessesPerShell 10000
set-item MaxShells 1000
set-item MaxShellsPerUser 1000
restart-service winrm
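
If you want to see the values currently in place before (or after) running that, the same WSMan: drive paths can be read directly:

# Dump the current shell and plugin quota settings
Get-ChildItem WSMan:\localhost\Shell
Get-ChildItem WSMan:\localhost\Plugin\microsoft.powershell\Quotas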

Obviously, test this before you deploy it to production. I also found a neat one-liner to monitor the number of WSMan connections on a target system (set the $ComputerName variable to the target, or use localhost):

while($true){(Get-WSManInstance -ComputerName $ComputerName -ResourceURI Shell -Enumerate).count;start-sleep 1} 

Azure Runbook for Posting to the OMS API

For MMSMOA 2017, I created an Azure Runbook that could post to the OMS API. Well, it’s more than a month later, but I finally got around to writing a post about it. I’m going to skip the basics of creating a runbook, but if you need a primer, I suggest starting here.

Let’s start with the runbook itself. Here is a decent template that I modified from the OMS API documentation. This template takes an input string, parses the string into 3 different fields, and sends those fields over to OMS. Here’s the runbook:

Param
(
    [Parameter (Mandatory= $true)]
    [string] $InputString
)



$CustomerID = Get-AutomationVariable -Name "CustomerID"
$SharedKey = Get-AutomationVariable -Name "SharedKey"
write-output $customerId
write-output $SharedKey
$date = (get-date).AddHours(-1)

# Specify the name of the record type that you'll be creating
$LogType = "MyRecordType"

# Specify a field with the created time for the records
$TimeStampField = "DateValue"

# Create the function to create the authorization signature
Function Build-Signature ($customerId, $sharedKey, $date, $contentLength, $method, $contentType, $resource)
{
    $xHeaders = "x-ms-date:" + $date
    $stringToHash = $method + "`n" + $contentLength + "`n" + $contentType + "`n" + $xHeaders + "`n" + $resource

    $bytesToHash = [Text.Encoding]::UTF8.GetBytes($stringToHash)
    $keyBytes = [Convert]::FromBase64String($sharedKey)

    $sha256 = New-Object System.Security.Cryptography.HMACSHA256
    $sha256.Key = $keyBytes
    $calculatedHash = $sha256.ComputeHash($bytesToHash)
    $encodedHash = [Convert]::ToBase64String($calculatedHash)
    $authorization = 'SharedKey {0}:{1}' -f $customerId,$encodedHash
    return $authorization
}


# Create the function to create and post the request
Function Post-OMSData($customerId, $sharedKey, $body, $logType)
{
    $method = "POST"
    $contentType = "application/json"
    $resource = "/api/logs"
    $rfc1123date = [DateTime]::UtcNow.ToString("r")
    $contentLength = $body.Length
    $signature = Build-Signature `
        -customerId $customerId `
        -sharedKey $sharedKey `
        -date $rfc1123date `
        -contentLength $contentLength `
        -method $method `
        -contentType $contentType `
        -resource $resource
    $uri = "https://" + $customerId + ".ods.opinsights.azure.com" + $resource + "?api-version=2016-04-01"

    $headers = @{
        "Authorization" = $signature;
        "Log-Type" = $logType;
        "x-ms-date" = $rfc1123date;
        "time-generated-field" = $TimeStampField;
    }

    $response = Invoke-WebRequest -Uri $uri -Method $method -ContentType $contentType -Headers $headers -Body $body -UseBasicParsing
    $WhatISent = "Invoke-WebRequest -Uri $uri -Method $method -ContentType $contentType -Headers $headers -Body $body -UseBasicParsing"
    write-output $WhatISent
    return $response.StatusCode
}

# Submit the data to the API endpoint

$ComputerName = $InputString.split(';')[0]
$AlertName = $InputString.split(';')[1]
$AlertValue = $InputString.split(';')[2]

# Craft JSON
$json = @"
[{  "StringValue": "$AlertName",
    "Computer": "$computername",
    "NumberValue": "$AlertValue",
    "BooleanValue": true,
    "DateValue": "$date",
    "GUIDValue": "9909ED01-A74C-4874-8ABF-D2678E3AE23D"
}]
"@

Post-OMSData -customerId $customerId -sharedKey $sharedKey -body ([System.Text.Encoding]::UTF8.GetBytes($json)) -logType $logType  

There are a couple of things to note with this runbook. First, the date it posts into OMS will be Central Standard Time; if you want another timezone, adjust the $date = (get-date).AddHours(-1) line. Second, the runbook expects two Automation variables to exist in your account – CustomerID and SharedKey – holding your OMS workspace ID and key. Third, this script has output that you can remove; the output only shows up in the output section in Azure Automation, which makes it handy for troubleshooting. Finally, you might want to change the $LogType = "MyRecordType" line – this is the name that OMS will give the log (with one caveat mentioned below).
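
If you would rather not hard-code an hour offset, a timezone-aware replacement for that $date line might look like this (a sketch – swap in whichever Windows timezone ID you need):

# Convert the sandbox's UTC clock to a named timezone instead of using a fixed offset
$date = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId([DateTime]::UtcNow, 'Central Standard Time')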

So, create your runbook in Azure Automation, and give it a test. You will be prompted for the InputString. In my example here, I will use the input string of “Blog Test;Critical;This is a test of an Azure Runbook that calls the OMS HTTP API”

Give it a minute or so, and you are rewarded with this:

Notice the “_CL” at the end of my log name? Notice the “_S” at the end of the fields? OMS does that automatically – CL for custom log, S for string (or whatever data type you happen to pass).

There you have it – runbooks that post to OMS. Add a webhook to the Runbook, and call it from Flow. Send an email to an inbox, have Flow trigger the Runbook with some of the email data, and suddenly you have the ability to send an email and have that data appear in OMS.

OMS and Flow Integration – Best Friends Forever

The alert actions in Microsoft OMS are great, but they can be a bit limited. Sending emails can be boring, ignored, or end up in the spam folder. Runbooks are awesome, but can take some time to setup. What if you want to use a simple Microsoft Flow to perform other actions? Flow is fast, simple, reliable, and mobile ready. Here is how to make OMS and Flow best friends – and make you look like a rockstar!

Go to https://requestb.in/ and select “Create a RequestBin”
Copy the Bin URL. You will need this in a minute.
Open OMS and head to the alert you want to make awesome (Overview – Settings – Alerts).
Turn on the Webhook action, and paste the RequestBin URL into the “WebHook URL” field.
Check the box to turn on “Include Custom JSON Payload”. Paste this JSON into the field below the checkbox:

{"AlertRuleName":"#alertrulename",
"AlertThresholdOperator":"#thresholdoperator",
"AlertThresholdValue":"#thresholdvalue",
"LinkToSearchResults":"#linktosearchresults",
"ResultCount":"#searchresultcount",
"SearchIntervalEndtimeUtc":"#searchintervalendtimeutc",
"SearchIntervalInSeconds":"#searchinterval",
"SearchIntervalStartTimeUtc":"#searchintervalstarttimeutc",
"searchquery":"#searchquery",
"workspace":"#workspaceid",
"IncludeSearchResults":true
}

Note: You might want to remove the “searchquery” JSON pair – alert queries that contain special characters can really cause havoc with the webhook.
Hit “Test WebHook”. If you get an error, remove the searchquery section from the JSON and try again (you can also post a sample payload yourself from PowerShell – see the sketch after this list). If it still fails, shoot me an email and I will be glad to help.
Go back to RequestB.in and refresh your page. You should see something like this:

Copy out the “Raw Body” section.
Open Flow, and create a new Flow (My Flows, Create from Blank)
For the Trigger – pick “Request/Response – Request”. When you click on the trigger, it will open the Flow for editing.
Click on “Use sample payload to generate schema”, and paste in the data you got from RequestBin. Click “Done”. Flow has now set up the JSON schema that the rest of your Flow will use.
Add other Flow actions like you normally would – for example, you might want to add these alerts to a SharePoint list, or perhaps put them in a SQL database for archiving. The fields available to the actions that follow the Request trigger depend on the fields that were sent by the alert – I suggest you play around with Threshold vs. Metric alerts so you can see how they post to Flow.
One last step – when you save the Flow, make sure you go into the “Request” trigger and copy the URL. You will want to paste that URL into the OMS alert’s Webhook URL field (replacing the RequestBin URL). Now, when the alert triggers, it will pass the data to Flow rather than RequestBin.
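
As mentioned in the testing step above, you can also exercise the Flow side without waiting for a real alert – post a hand-built payload at the Request trigger from PowerShell (a sketch; the URL is a placeholder for the one you copied, and the field values are dummies):

# Post a sample alert payload to the Flow Request trigger
$flowUrl = '<paste the Request trigger URL here>'
$body = '{"AlertRuleName":"Test Alert","ResultCount":"5","workspace":"00000000-0000-0000-0000-000000000000"}'
Invoke-RestMethod -Method Post -Uri $flowUrl -Body $body -ContentType 'application/json'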

Once you have the alerts coming to Flow, the possibilities are endless – want to see those Alerts on your phone? Get PowerApps and in 5 minutes you can point it at the SQL database you are using to store the alerts. Want a mobile notification? Piece of cake with Flow. Hell, maybe you want to Tweet those alerts for some reason – Go for it! I won’t tell you not to.

OMS Custom JSON Payload with all Attributes

When using a custom JSON payload in an OMS Alert, knowing which attributes are available can be difficult – it took me way too long to find this. These are the attributes that I find most handy (and I believe this is a mostly complete list as of the time of this post). The odd-ball one here – IncludeSearchResults – is a simple true/false, and will send the bulk of the data to whatever webhook you are using. Without this pair, you will only get alert information, not information from the actual event that triggered the alert.

Another one to keep an eye on is the ‘searchquery’ pair. This pair (obviously) sends the Alert Search Query to the webhook. Because the queries can often contain special characters, and the webhooks can be finicky when it comes to special character translation, I will often exclude this pair to keep everyone happy. If you use the payload below, but your webhook fails when testing, remove the searchquery pair and try again.

{"AlertRuleName":"#alertrulename",
"AlertThresholdOperator":"#thresholdoperator",
"AlertThresholdValue":"#thresholdvalue",
"LinkToSearchResults":"#linktosearchresults",
"ResultCount":"#searchresultcount",
"SearchIntervalEndtimeUtc":"#searchintervalendtimeutc",
"SearchIntervalInSeconds":"#searchinterval",
"SearchIntervalStartTimeUtc":"#searchintervalstarttimeutc",
"searchquery":"#searchquery",
"workspace":"#workspaceid",
"IncludeSearchResults":true
}

Flow to the Rescue! Overcoming Azure Automation Scheduling Limitations

Azure Automation – the 800-lb gorilla in the room. If you can think of a way to accomplish a task, more than likely Azure Automation can do it. Combining the ease of use of PowerShell, the sheer power of Azure, and the multitude of integrations available, you can build enterprise-worthy automation runbooks quickly and easily.

It’s a no-brainer.
Until it isn’t.

One of the most frustrating limitations of Azure Automation is the scheduling tool. Sure – you can set up one-time and recurring schedules with ease, but what if you need something to run often – say, every 10 or 15 minutes? Unfortunately, when you go to set a schedule like that, you are greeted with this:

That means you can’t set a single schedule shorter than an hour. Sure, you can set multiple schedules – you would need four if you want to run every 15 minutes. What if you want something to run every five? Are you going to create twelve schedules? Of course not! This is where Microsoft Flow comes to the rescue.

There are 2 ways to trigger an Azure Automation job from Flow (probably more, if you dig deep), so let’s start with the simplest one. If you have a connection from Flow to Azure Automation already, then this simple Flow will start an Azure Automation Runbook:

Boom – no need for a dozen schedules here! This simple Flow probably just saved you an hour of clicking and checking. But what if you don’t have a connection to Azure Automation already established? Perhaps you want to run a Runbook that isn’t in your subscription? It’s still pretty easy. The first step is to obtain the webhook URL for the Runbook. Start by opening your Azure Automation Runbook. In the left navigation pane you should see a link for ‘Webhooks’ – click it to bring up the webhook listing page, then click the ‘Add Webhook’ button at the top of the main pane. From here on out it’s pretty straightforward: give your webhook a name, set its expiration date, and if your Runbook has any parameters you want the webhook to pass, specify those names.

IMPORTANT – Make sure you copy the URL before clicking OK. Finding that URL later is like pulling teeth – you might be able to do it, but it will be painful.

When you are satisfied with the settings, and have copied the URL, click ‘OK’.

Now that you have your webhook created in Azure Automation, it’s time to set up Flow. Luckily for you, it couldn’t be easier!
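
Under the covers, all Flow does with that webhook is POST to it – which also means you can fire the runbook straight from PowerShell to prove the webhook works (the URL is a placeholder for the one you copied earlier):

# A POST to the webhook URL queues the runbook; the response contains the resulting job ID
$webhookUrl = '<the webhook URL you copied when creating it>'
Invoke-RestMethod -Method Post -Uri $webhookUrl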

That’s it! No more Azure Automation Scheduling limitations! Run those runbooks as often as you like!

PowerShell – Runspaces and Large Enterprises

You’ve got 40gb of log files, a broken app, and a CFO reminding you how much money the company is losing per minute. You’ve got to find that one error in one log that will clue you in on how to fix the issue. You have no idea where it is, but you know you have this issue in the bag. How? Because you have PowerShell. I’m going to show you how.

We’ve all been there – a task that has to be done across hundreds of systems, a search of thousands of files, pulling a property from tens of thousands of AD user accounts. PowerShell can do it, but there is always a need to shave those seconds off. In this article we examine how to perform these large actions in the quickest possible way.

Asynchronous Processing

PowerShell has a couple of options when it comes to running tasks in a ‘multi-threaded’ fashion. The two you will primarily hear about are workflows and runspaces (jobs are another topic). Workflows are dead easy to set up, but can be picky about what they will and will not allow to happen in them. Workflows do have the nice feature of sequencing – being able to tell part of the workflow to run in sequence, then run other parts in parallel. Runspaces are more difficult to set up initially, but allow essentially any action you desire, and they allow for insane parallel processing. Personally, runspaces are always in my toolbox. Using them becomes a no-brainer when you combine the work that the PowerShell heavyweights – mainly Boe Prox and Warren F. – have done to make runspaces super easy. These guys are serious rock stars.

Invoke-Parallel – Your new best friend

When you absolutely, positively have to burn up those CPUs and flood the network, you need Invoke-Parallel. Seriously, download it now. I made that a link for a reason. Go get it. Using this beast, we can run multiple commands against 20,000 remote server nodes every evening. We can put thousands of SCOM nodes into maintenance mode in a matter of minutes, or search hundreds of directories with thousands of files in a matter of seconds. This is one function that will elevate your PowerShell game. It’s all built on runspaces, and has some amazing logic wrapped around it.

From GitHub you will get a .ps1 file. You can either pull the function out of that file and include it in your script, dot-source the whole .ps1 file (. “C:\temp\invoke-parallel.ps1”), or take the function and wrap it up in a module – that is my preferred method, since I wrap it up with other useful functions. Regardless of how you reference the function, calling it is easy. Here is a simple example:

. "c:\blog\Parallel\invoke-parallel.ps1"
$servers = 'server1','server2','server3'
Invoke-Parallel -InputObject $servers -ScriptBlock {Test-Connection $_ -Count 1 -Quiet}

This is pretty straightforward: I am dot-sourcing the .ps1, generating an array that has three servers in it, and then sending that array to the Invoke-Parallel function as the InputObject parameter. It couldn’t be easier, and guess what? You just ‘multi-threaded’ a PowerShell script. Pat yourself on the back, and then buy Boe and Warren a drink the next time you see them.

Now, invoking these runspaces doesn’t come free – there is overhead and startup time associated with starting a runspace, and that might actually be a detriment to your outcome. For example, if you have 20,000 log files that are relatively small (10MB or less) and you need to do a “select-string -pattern ‘something'”, it might not be advantageous to use invoke-parallel. Let’s look at the time it takes to find an error in one of those log files with each method. In a previous blog post, I created a function to generate a lot of log files with random data – I am using it here to create 20,000 log files of about 1MB each. (Side note: I will later expand that function to take advantage of invoke-parallel.)

I have edited a random file and added this line somewhere in the middle:
2016-08-21–ERROR–TOO MANY FILES, IDIOT.
I have no idea which one I edited. That’s how dedicated I am to this cause. Now, let’s measure how long it takes to find this string both with and without invoke-parallel.
Without:

PS C:\blog\Parallel> Measure-Command{Get-ChildItem c:\Temp\blog -recurse | Select-String -pattern "ERROR"}


Days              : 0
Hours             : 0
Minutes           : 2
Seconds           : 11
Milliseconds      : 966
Ticks             : 1319662757
TotalDays         : 0.00152738745023148
TotalHours        : 0.0366572988055556
TotalMinutes      : 2.19943792833333
TotalSeconds      : 131.9662757
TotalMilliseconds : 131966.2757

With:

Measure-Command{
	. "c:\blog\Parallel\invoke-parallel.ps1"
	$files = Get-ChildItem c:\Temp\blog -Recurse
	Invoke-Parallel -InputObject $files -Throttle 8 -ScriptBlock { $_ | Select-String -Pattern 'ERROR' }
}

Days              : 0
Hours             : 0
Minutes           : 4
Seconds           : 1
Milliseconds      : 880
Ticks             : 2418807160
TotalDays         : 0.00279954532407407
TotalHours        : 0.0671890877777778
TotalMinutes      : 4.03134526666667
TotalSeconds      : 241.880716
TotalMilliseconds : 241880.716

Because the files are small, Select-String can process them faster than we can spin up new runspaces. But if we change the size of the files – say, to something like 1GB – the difference is dramatic.
Without Invoke-Parallel:

PS C:\blog\Parallel> Measure-Command{Get-ChildItem c:\Temp\blog -recurse | Select-String -pattern "ERROR"}


Days              : 0
Hours             : 0
Minutes           : 5
Seconds           : 12
Milliseconds      : 368
Ticks             : 3123680771
TotalDays         : 0.00361537126273148
TotalHours        : 0.0867689103055556
TotalMinutes      : 5.20613461833333
TotalSeconds      : 312.3680771
TotalMilliseconds : 312368.0771

And with:

Days              : 0
Hours             : 0
Minutes           : 2
Seconds           : 15
Milliseconds      : 444
Ticks             : 1354442030
TotalDays         : 0.00156764123842593
TotalHours        : 0.0376233897222222
TotalMinutes      : 2.25740338333333
TotalSeconds      : 135.444203
TotalMilliseconds : 135444.203

It halved the time it took to process those files. Why? Because we could keep multiple long-running select-strings going at once. Each individual select-string is not CPU intensive; it just takes time. In this instance I could process 40GB of files in 2.25 minutes, whereas before I could only do 20GB in 4 minutes. This tells us that runspaces are great for commands or scripts that take a while to run but aren’t horribly CPU intensive.

All of this brings me to the title of this article – when you are dealing with an absolutely massive number of machines, or AD accounts, or large files – whatever the case may be – invoke-parallel should be in your toolbox. At my current job, I have 6 commands to run every night on around 18,000 servers. I run these through 8 jump servers – piping invoke-commands through a large invoke-parallel with a throttle of 80 – and can finish the job in about 3 hours. Prior to using Invoke-Parallel, it was taking about 18 hours to complete. That is how you utilize a network.

The main parameters we typically deal with when using Invoke-Parallel are the InputObject, the Throttle, the ScriptBlock (or ScriptFile), and the Timeout. The InputObject is the array the function works through – in essence, the function will open a runspace for each object in the array. It could be an array of servers, an array of users, or a list of files. The Throttle is how many runspaces you want running at the same time. Avoid the temptation to set this value too high – it can actually be detrimental if too many runspaces are vying for the same resources (CPU/MEM/Disk). A good rule of thumb in my environment is to limit it to the number of processors on the system running the task; if I am using multiple servers to run tasks, or if the tasks have extremely minimal requirements, I might set it higher. Timeout is how many seconds you want a runspace to run before it is killed – typically used to free up runspaces that have encountered a problem, such as hung commands. The last parameter – the ScriptBlock (or ScriptFile) – is what you actually want to happen in each runspace. Take this example:

$sites = 'www.google.com','www.cnn.com','www.microsoft.com','www.reddit.com'
. "C:\blog\Parallel\invoke-parallel.ps1"
invoke-parallel -InputObject $sites -scriptblock {Test-Connection $_ -quiet}

In this case, the ScriptBlock is a simple Test-Connection. The $_ is the reference to the current object being processed by the runspace. In this example it was a single URL, but it can also be an object with properties, which you would access like any other property ($_.Name, $_.Size, etc.). Inside the scriptblock, there are two options for accessing variables that are declared outside of it: you can either use the ‘$using:variable’ syntax, or specify the ‘-ImportVariables’ parameter on Invoke-Parallel. Along the same lines, if you want to use modules that are imported outside of the runspace, you can use the ‘-ImportModules’ parameter.

This example expands on the script block a bit, and shows how to use the -ImportVariables parameter:

$sites = 'www.google.com','www.cnn.com','www.microsoft.com','www.reddit.com', 'www.powershell.org','www.draith.com','www.bing.com','www.arstechnica.com','www.bbcnews.com'
$texttofind = 'PowerShell'
. "C:\blog\Parallel\invoke-parallel.ps1"
invoke-parallel -InputObject $sites  -ImportModules -ImportVariables -scriptblock {
    if (Test-Connection $_ -quiet -count 1)
    {
        try
        {
            $result = Invoke-WebRequest -Uri $_ -UseBasicParsing|select-string -Pattern $texttofind
            if ($result)
            {
                Write-output $_
            }
        }
        catch
        {
            Write-Verbose 'Error getting site data.'
        }
    }
}
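
For comparison, the same variable access with the ‘$using:’ syntax instead of -ImportVariables would look something like this (a trimmed-down sketch of the scriptblock above):

invoke-parallel -InputObject $sites -ScriptBlock {
    # $using: pulls $texttofind in from the parent session explicitly
    Invoke-WebRequest -Uri $_ -UseBasicParsing | Select-String -Pattern $using:texttofind
}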

These are the basics of Invoke-Parallel. If you have any questions, feel free to leave a comment or ping me via email. In a future post we will go over jobs and how they compare to runspaces. See you then!

Again – special thanks to Boe Prox and Warren F. You guys make this stuff look easy.

‘Why Not?’ Series – PowerShell, IFTTT, and Smartthings

Ever wanted to turn your kitchen lights off from a command line?

OF COURSE YOU HAVE.

In another edition of my ‘Why Not?’ series, I explore how to bring the power of IFTTT and SmartThings to PowerShell.

SmartThings is a home automation suite that consists of a central hub plus z-wave or Zigbee switches, outlets, light bulbs, smoke detectors, water sensors, etc. Thousands of devices integrate with the SmartThings system, and it has an accessible API. I swear, this is not a paid advertisement – it just happens to be the system I use for my home automation. One night I was sitting at home wondering why I had to use an app, or even my Harmony remote, to turn off the lights in my man-cave. I was working on a PowerShell script, and that’s when it hit me – ‘Why Not?’ – why can’t I use PowerShell to turn off those lights?

Let’s assume you have SmartThings set up in your home already, and that you have signed into the https://graph.api.smartthings.com/ portal at least once. If that is done – we simply need to head over to IFTTT. If you need a primer on IFTTT, head here: https://ifttt.com/wtf. Yeah, I am NOT going to change that link – it’s awesome. If you already have an IFTTT account, sign in. If not, sign up. Once signed in, we will want to add a new channel. The channel we want to add, ironically enough, is called SmartThings. Click on the “Channels” link, search for SmartThings, and click the icon (it should be the only one returned if you search correctly).

Now click on the GIANT connect button on the right-hand side. It will take you to the SmartThings API login page. Sign in, and you are greeted with this:
Pick the Hub/Location that you want to integrate with IFTTT. When you do, you are shown a list of devices connected to your SmartThings hub:
Select the devices you want to control, and press the “Authorize” button. You will be taken back to IFTTT. Don’t try to add any recipes yet – we need one more channel to make this work. In order to get IFTTT to trigger, we can use either the DO channel/app (which I am going to ignore in this demo), or we can use the Maker channel. Search for, and add the Maker channel just like we did for the SmartThings channel.
When you add the Maker Channel, a key is automatically generated for you. Keep this key handy – we will be using it soon. I am not showing you my key, cause I barely know you guys – and it is unique to my recipes.

Great – now what? Let’s add a recipe!! Click the “Create a New Recipe” button, and you are shown the typical IFTTT recipe builder page:
Click on the “this” portion, and search for/select the Maker channel:
Pick the “Receive Web Request” tile. We now need to name our trigger. These will be unique to each device, and unique to the function we are calling – for example, if I want to turn my Man-Cave lights on and off, I need to specify two unique triggers. Leave spaces out of this name.
Create the trigger. BOOM – we are back to the equation:
Click the ‘THAT’ section. We are dropped back to the channel search page – this time it’s the search for the Action channel – type in and select ‘SmartThings’. You should see a list of all the fun things that SmartThings brings to IFTTT.
Choose the ‘Switch On’ tile. You can now pick the switch you want to interact with. In my example I will choose ‘Man Cave Lights’
One final step – click “Create Recipe”. Done! Now, create another new recipe, following the same steps, except name the Maker trigger something like ‘Man_Cave_Lights_Off’, and make sure you select the ‘Switch Off’ tile in the action section. You should now have 2 recipes – one for turning the lights on, another for turning the lights off. If you check the My Recipes section, you should have something along the lines of these two:

We are done in IFTTT for the moment – let’s head over to PowerShell.

This is pretty straightforward, actually. We need to craft a URL with this format: https://maker.ifttt.com/trigger/{event}/with/key/{key}. The {event} is the Maker event we specified when we created the recipe, and the {key} is the unique key that was generated when we added the Maker channel. All we need to do is craft the URL and send the web request.

$MakerKey = 'abcdefghijklmnopqrstuvwxyz'
$BaseURL = 'https://maker.ifttt.com/trigger/'
$EndURL = "/with/key/$MakerKey"
$event = 'Man_Cave_Lights_On'
$url = $BaseURL+$event+$endurl
Invoke-WebRequest -uri $url -UseBasicParsing|Select-Object -Property content

Again, we use the -UseBasicParsing parameter to keep from firing up the IE first-run configuration. If we have done everything right, we should be greeted with something along the lines of:
Content
-------
Congratulations! You've fired the Man_Cave_Lights_On event
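
Since the only thing that changes between recipes is the event name, it’s worth wrapping the call in a function (a small sketch – the key is the fake one from above, and the function name is my own invention):

function Invoke-IftttEvent
{
    param
    (
        [Parameter(Mandatory=$true)]
        [string]$EventName
    )
    $MakerKey = 'abcdefghijklmnopqrstuvwxyz'   # your Maker channel key
    $url = "https://maker.ifttt.com/trigger/$EventName/with/key/$MakerKey"
    Invoke-WebRequest -Uri $url -UseBasicParsing | Select-Object -Property content
}

Invoke-IftttEvent -EventName 'Man_Cave_Lights_Off'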

Want proof? Here you go!


Once you add the Maker channel, it opens up a world of possibilities when it comes to IFTTT. For example, I later added my Harmony remote as a channel and was able to turn my AV system on and off and perform a large number of automations from PowerShell. Oh, the trouble we will get into….