MVP

I got a nice surprise this Sunday: I have been awarded MVP status in Cloud and Datacenter Management!! I am beyond happy – this is something I have worked towards for a while, and it feels great! I am humbled to bear the same moniker as some of the legends in our field… I don't feel like I deserve to be in the same room as some of them, but I will strive to do my best and continue to contribute to the community as best I can.

I was asked what my journey to MVP was like – my experiences, the steps I took, and what I wanted to get out of it. Here’s how it all played out.

It started a couple of years ago – I had (and still have) several MVP friends that I always looked up to. They are pillars of the tech community who willingly share their hard-fought knowledge, so I naturally wanted to join their ranks. I was on the board of the Central Texas Systems Management User Group, so I was already involved in the community to a certain extent, but I knew that I needed to up my game if I was going to qualify.

I started speaking more at user groups – CTSMUG, Austin PowerShell, DFWSMUG, etc… In a previous job at Dell, I had engaged in a lot of customer briefings (200+ in one year!), and had spoken at old-school MMS several times, so I was comfortable speaking to large groups. Speaking at the user groups really helped me adjust my tone and level of detail, though. I found out what the users wanted to hear, how much detail to put in my presentations, and how to make sure that every presentation ended with the users walking away with actionable knowledge. I went from speaking once a year to speaking to groups once a month or more. The more I spoke to these groups, the more I wanted to do it again.

At the same time I began to blog more – I had always had a WordPress blog, but I was horrible about putting fingers to keyboard and actually writing articles. Just as with the speaking engagements, I found that the more I wrote, the more I wanted to write. I would be in the middle of writing an article when suddenly an idea for _another_ article would pop into my head. OneNote became my best friend – a handy place to store those ideas and drafts and access them anywhere! I still don't write as much as I want, but it's definitely a lot more than it used to be!

MMSMOA. I cannot say enough about this conference. I submitted four session ideas, two of which were accepted. Both of my co-presenters were (still are) MVPs, and it was a wonderful experience. The feel was much closer to old-school MMS – the sessions are small and extremely technical. Managing to get through this conference, and getting excellent speaker evaluations back, was an extreme ego boost. It was at this conference that I heard, for the first time, "Wait, you aren't an MVP?". I knew at that point I needed to find out more.

I sent out some emails – to MVPs who would give me solid feedback. The email was simple: "If you think I could be an MVP, please nominate me. If not, let me know what I need to improve!" I specifically sent it to people I knew would smack me down if I deserved it. This was extremely valuable – I got honest responses back (along with some good-natured jabs) that helped me improve my blog posts and speaking engagements, and tune my message to a specific area of expertise. In addition, I got three nominations. I was elated!

The process from there on was straightforward – I was asked to register on the MVP site, and then record my community activities. Keeping a list of those as I did them would have been a life-saver here, so if you are interested in pursuing an MVP award, keep track of what you are doing! Specifically – date, location, type of activity (blog, speaking at a user group, speaking at a conference, etc…), a description of the activity, and the reach (number of user group members present, page views, etc…). I did this at the start of June, and on October 1st, I got the email that I had been waiting for!

Customize your PowerShell profile for useful startup actions

Did you know you can make PowerShell run any commands you want when you start a shell? This is amazingly useful for gathering information, making settings changes, or kicking off processes – all at shell startup. There are plenty of places that talk about the profiles, so I won't go into each type, but long story short, there are almost 10 different profiles when you account for 32- and 64-bit PowerShell. Some of them are explained in detail here.

For my example here, I am going to deal with the %UserProfile%\My Documents\WindowsPowerShell\profile.ps1 profile, which affects the current user, but all shells. This is useful when you are switching back and forth between the ISE and the console – say, when you are testing new scripts. By default this file won't exist, so you will have to create it.
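If you want to check for and create the file in one shot, something like this will do it (a small sketch – $PROFILE.CurrentUserAllHosts resolves to exactly the path above):

```powershell
# Create the CurrentUser/AllHosts profile if it doesn't already exist
if (-not (Test-Path -Path $PROFILE.CurrentUserAllHosts)) {
    New-Item -Path $PROFILE.CurrentUserAllHosts -ItemType File -Force
}
```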

The profile file is really just a .ps1 file. You can put any PowerShell you want in this file. Say you want to get a random Cat Fact every time you start a shell? (who am I to judge?)
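Something like this would do it – a minimal sketch, assuming the public catfact.ninja endpoint (any cat-fact API will work the same way):

```powershell
# Fetch and display a random cat fact at every shell startup
(Invoke-RestMethod -Uri 'https://catfact.ninja/fact').fact
```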

There are some really useful things you can do now that you know you can run anything. For example – this will show the number of running PowerShell processes, along with WinRM service status:
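The original post showed this as a screenshot; here is a sketch of what that profile snippet might look like:

```powershell
# Count running PowerShell processes and report WinRM status at startup
$psCount = (Get-Process -Name powershell* -ErrorAction SilentlyContinue).Count
Write-Host "PowerShell processes running: $psCount"
Write-Host "WinRM service status: $((Get-Service -Name WinRM).Status)"
```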

And this will show the PowerShell module paths:
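Presumably something along these lines – splitting $env:PSModulePath gives you one path per line (note the extra line at the end, explained below):

```powershell
# Show each PowerShell module path on its own line
$env:PSModulePath -split ';'

# Start every shell in my scripts directory
cd c:\blog
```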

Notice here – I also added a simple "cd c:\blog" to my profile – really useful for starting straight in your scripts directory (no more "cd c:\workspaces\app\development\scripts…." every time you start a console).

But you can do even more! Full functions can be loaded into your profile and are available immediately. One of my favorites adds a Start-RDP function, so I can initiate a remote desktop session without ever touching the Start menu! How cool is this?
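A minimal sketch of such a function – the parameter name is my own choice; mstsc.exe does the actual work:

```powershell
# Launch a Remote Desktop session to the given computer
function Start-RDP {
    param([Parameter(Mandatory = $true)][string]$ComputerName)
    mstsc.exe /v:$ComputerName
}

# Usage: Start-RDP -ComputerName Server01
```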

There is also a special function you can put in your profile – the 'prompt' function. This will change the PowerShell command-line prompt to whatever you want! Just create a new function called 'prompt', Write-Host anything you want inside the function, and make sure you put a return " " at the end – it's that simple! You can put some great data right on the prompt – for example, you can make the current time show up each time the prompt is shown! This is really useful for measuring how long something takes to run if you don't want to pull out Measure-Object! Here is my prompt, along with the full profile:
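Here is a sketch of a time-stamped prompt following that pattern (the colors and format are my own – the original post showed the author's full profile as a screenshot):

```powershell
# Show the current time before every prompt
function prompt {
    Write-Host "[$(Get-Date -Format 'HH:mm:ss')] " -NoNewline -ForegroundColor Cyan
    Write-Host "PS $($executionContext.SessionState.Path.CurrentLocation)>" -NoNewline
    return " "
}
```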

And the outcome:

It's that simple! Customize your profile, and start being productive sooner! Leave a comment below to tell me what your favorite profile modifications are!!

Austin PowerShell User Group Survey Responses!

Last week, I sent out a survey asking people about the Austin PowerShell User Group. Basically, we are trying to find out what you all want when it comes to location, meeting times, etc… Well, here are the results!!

To begin with, we had around 40 responses, which is excellent! Thank you all for responding!

The first question – "Would you be interested in attending an Austin PowerShell User Group meeting in the future?". I know – I know – softball question. If someone is responding, they are probably interested in attending. Not surprisingly, 100% answered yes! That's great!

When we asked what days worked best, a couple of clear winners emerged – Friday and Doesn’t Matter.

Next – How often should we meet? Again, a pretty clear winner – Once every other month took almost 50% of the votes!

Now we get to the fun questions – where we should meet, and for how long. First, here are the primary and secondary choices for location. North Austin and Round Rock/Pflugerville appear to be the leaders for the Primary choice, with North Austin taking the bulk of the Secondary choice!


When we asked how long each meeting should be, we got some great, varied responses! All Day and Afternoon took two-thirds of the Primary votes, while Afternoon and Morning dominated the Secondary choice!


We know we can't pick everyone's preferred time and/or location, so next we asked how likely you would be to still attend if the selection didn't go your way. All in all, everyone seemed somewhat flexible!

Now on to the free-form text! Some great suggestions on venues:

Microsoft or Dell Campus
Employer
Microsoft, Member Facilities
Just happy to be aware of this
Dave & Busters, Alamo Drafthouse, MSFT Store (free)
Domain area
eBay (Daytime only), Microsoft (daytime only), User Group Member Businesses
Private companies to host
Yes 🙂
Microsoft Austin on Stonelake

I don’t know who the smart-ass was that said “Yes” with a smiley face, but I will find you 🙂

I should have known better than to ask for open comments, but here they are.

I think this is a great idea! It would be great to meet other PS developers in the area
I could participate more easily on days I could not attend if we had live feed or if the presentations were available on you tube or something
Maybe we could expand CTSMUG and devote a session to Powershell every time we meet – We could schedule it after lunch to allow for those attendees who cant take an entire day – Or whenever during the CTSMUG Day that makes the most sense. The technologies are complimentary and it benefits the CTSMUGers as much as the PUGers. Or barring that how about a Powershell Happy Hour post CTSMUG.
Newcomers’ meeting would be cool for a start.
Meetings during business hours opens up more options for locations because you don’t have to pay for extra security or host in a Retail/Food location which may be too noisy. I work for eBay and can easily host meetings (small or large) given enough advanced notice but it must be during the week, during business hours.
Ask for more volunteers to lead the group so we can spread the load.

Here is something nice – someone thanked me!

Thanks Donnie !

And then there’s Duncan McAlynn’s comment (Yes, I know it was you)

Donnie’s an asshole.

Thanks to everyone that took the survey (except Duncan) – we REALLY appreciate it! We will munch on this data, and send out invites shortly!

Run _Anything_ with Flow: PowerShell Triggers

Want to start PowerShell commands from a Tweet? Yeah you do, and you didn’t even know you wanted to.

Earlier this month, a great Flow of the Week was posted that highlighted the ability to use a .NET FileSystemWatcher to kick off local processes. This sparked an idea – I think we can expand on this and basically run anything we want. Here's how:

First, let’s start with the Connected Gateway. The link above goes into a bit of detail on how to configure the connection. Nothing special there.
Second, on the Connected Gateway, run this PowerShell script:
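The original script was embedded in the post; here is a hedged reconstruction based on the description that follows (the paths and arguments come from the post, the event wiring is my own):

```powershell
# Watch C:\temp\WatchMe\Flow.txt for changes
$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path = 'C:\temp\WatchMe'
$watcher.Filter = 'Flow.txt'
$watcher.EnableRaisingEvents = $true

# When the file changes, read its last line and run it in a new PowerShell process
Register-ObjectEvent -InputObject $watcher -EventName Changed -Action {
    $content = Get-Content -Path 'C:\temp\WatchMe\Flow.txt' -Tail 1
    Start-Process -FilePath 'PowerShell.exe' -ArgumentList "-Command $content"
} | Out-Null
```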

This script sets up a FileSystemWatcher on the C:\temp\WatchMe\Flow.txt file. The watcher will only perform an action when the file is changed – there are several other events you could register instead: Created, Deleted, Renamed, Error, etc… Once the file changes, the watcher reads the last line of C:\temp\WatchMe\Flow.txt and launches a PowerShell process that takes that last line as its input.

Third – This is the best part. Since we have a FileSystemWatcher, and that watcher is reading the last line of the C:\temp\WatchMe\Flow.txt file and kicking that process off, all we have to do is append a line to that file to start a PowerShell session. Flow has a built-in connection for FileSystem. You can see where this is going. Create a new Flow, and add an input action – I am fond of the Outlook.com Email Arrives action. Supply a suitable trigger in the subject, and add the ‘Append File’ action from the FileSystem service. Here is how mine is configured:

The only catch with this particular setup is that the body of the email needs to be in plain text – the Windows 10 Mail app, for example, will not send plain text. The body of the email is the PowerShell command we want to run. For example, maybe we want PowerShell to get a list of processes that have a certain name and dump those to a text file for parsing later. Simply send an email with a body of "get-process -name chrome|out-file c:\temp\ChromeProcesses.txt". Here is what that results in:
Before we send the email:

The Email:

After a few minutes – a new folder appears!:

The contents of the text file:

Think about what you could do with this – perhaps you want to do an Invoke-WebRequest every time an RSS feed updates. Maybe start a set of diagnostic commands when an item is added to SharePoint. Kick off actions with a Tweet. If you want full scripts to run instead of commands, change the Action section of the FileSystemWatcher to "PowerShell.exe -file $content". Easy as pie.

PowerShell WSMAN Configuration for Massive Scale

In my day job, I constantly strive to push PowerShell to the limit, attempting to use absolutely every bit of processor/memory/network bandwidth available. One way I do this is with PoshRSJob, written by Boe Prox. PoshRSJob is a wonderful multi-threading tool, and I use it at pretty heavy scale – typically at a 100-thread throttle.

Sometimes, when you are running a lot of concurrent threads attaching to remote machines, you will run into WinRM connection limitations. They will typically show up in error messages like this when you try to run commands like "invoke-command -computername remoteserver01":
"This user is allowed a maximum number of 5 concurrent shells, which has been exceeded."

Configuring the typical WSMAN connection limits is fairly well documented, but I was running into another type of error – one that occurred even after I had upped the connection limits:
“The maximum number of concurrent shells allowed for this plugin has been exceeded.”

This was driving me crazy until I noticed the slightly different wording. Browsing the WSMAN PSDrive, I was eventually able to solve it. The key word in the second error was "plugin" – I had to configure the limits on the plugin, not just the shell. Once I realized the difference, I was able to find the right settings. I have compiled them here in a small script that will enable WinRM and set the limits very high for both the shell and the plugin.
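Here is a sketch of that script – the exact limit values below are my own (deliberately high) choices; adjust to taste:

```powershell
# Enable remoting, then raise both the shell-level and plugin-level quotas
Enable-PSRemoting -Force

# Shell-level limits (the well-documented ones)
Set-Item -Path WSMan:\localhost\Shell\MaxShellsPerUser -Value 1000
Set-Item -Path WSMan:\localhost\Shell\MaxConcurrentUsers -Value 100
Set-Item -Path WSMan:\localhost\Shell\MaxProcessesPerShell -Value 1000

# Plugin-level limits (the key to the second error)
Set-Item -Path WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas\MaxShellsPerUser -Value 1000
Set-Item -Path WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas\MaxConcurrentUsers -Value 100

# Restart WinRM so the plugin quotas take effect
Restart-Service -Name WinRM
```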

Obviously test this before you deploy to production. I also found a neat one-liner to monitor the number of WSMan connections of a target system (set the $computername variable to the target, or use localhost):
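A reconstruction of that one-liner, wrapped in a loop for continuous monitoring (the shell resource URI is an assumption based on the standard WSMan enumeration):

```powershell
# Count active WSMan shells on the target every few seconds
$computername = 'localhost'
while ($true) {
    (Get-WSManInstance -ComputerName $computername -ResourceURI Shell -Enumerate).Count
    Start-Sleep -Seconds 5
}
```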

Azure Runbook for Posting to the OMS API

For MMSMOA 2017, I created an Azure Runbook that could post to the OMS API. Well, it's more than a month later, but I finally got around to writing a post about it. I'm going to skip the basics of creating a runbook, but if you need a primer, I suggest starting here.

Let's start with the runbook itself. Here is a decent template that I modified from the OMS API documentation. This template takes an input string, parses it into three fields, and sends those fields over to OMS. Here's the runbook:
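The runbook was embedded in the original post; below is a condensed sketch built from the documented OMS HTTP Data Collector API sample. The automation variable names and the semicolon-delimited field names are my assumptions:

```powershell
param(
    [Parameter(Mandatory = $true)][string]$InputString
)

# Workspace ID and key - stored as Azure Automation variables (names are placeholders)
$CustomerId = Get-AutomationVariable -Name 'OMSWorkspaceId'
$SharedKey  = Get-AutomationVariable -Name 'OMSWorkspaceKey'
$LogType    = 'MyRecordType'

# Parse the semicolon-delimited input into three fields
$AlertName, $Severity, $Message = $InputString -split ';'
Write-Output "Parsed fields: $AlertName / $Severity / $Message"

$date = (Get-Date).AddHours(-1)   # timezone adjustment discussed below
$json = @{
    AlertName = $AlertName
    Severity  = $Severity
    Message   = $Message
    Timestamp = $date.ToString('o')
} | ConvertTo-Json

# Build the HMAC-SHA256 signature required by the Data Collector API
$method        = 'POST'
$contentType   = 'application/json'
$resource      = '/api/logs'
$rfc1123date   = [DateTime]::UtcNow.ToString('r')
$contentLength = [Text.Encoding]::UTF8.GetBytes($json).Length
$stringToHash  = "$method`n$contentLength`n$contentType`nx-ms-date:$rfc1123date`n$resource"
$hmac          = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key      = [Convert]::FromBase64String($SharedKey)
$hash          = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToHash))
$signature     = 'SharedKey {0}:{1}' -f $CustomerId, [Convert]::ToBase64String($hash)

$uri     = 'https://' + $CustomerId + '.ods.opinsights.azure.com' + $resource + '?api-version=2016-04-01'
$headers = @{
    'Authorization'        = $signature
    'Log-Type'             = $LogType
    'x-ms-date'            = $rfc1123date
    'time-generated-field' = 'Timestamp'
}

Invoke-WebRequest -Uri $uri -Method Post -ContentType $contentType -Headers $headers -Body $json -UseBasicParsing
```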

There are a few things to note with this runbook. First, the date it will post into OMS will be Central Standard Time; if you want to change to another timezone, change the $date = (get-date).AddHours(-1) line (aligning to EST). Second, this script has output, which you can remove – the output will only show up in the output section in Azure Automation, which makes it handy for troubleshooting. The third thing you might want to change is the $LogType = "MyRecordType" line. This is the name that OMS will give the log (with one caveat mentioned below).

So, create your runbook in Azure Automation, and give it a test. You will be prompted for the InputString. In my example here, I will use the input string of “Blog Test;Critical;This is a test of an Azure Runbook that calls the OMS HTTP API”

Give it a minute or so, and you are rewarded with this:

Notice the “_CL” at the end of my log name? Notice the “_S” at the end of the fields? OMS does that automatically – CL for custom log, S for string (or whatever data type you happen to pass).

There you have it – runbooks that post to OMS. Add a webhook to the Runbook, and call it from Flow. Send an email to an inbox, have Flow trigger the Runbook with some of the email data, and suddenly you have the ability to send emails and have that data appear in OMS.

OMS and Flow Integration – Best Friends Forever

The alert actions in Microsoft OMS are great, but they can be a bit limited. Sending emails can be boring, ignored, or end up in the spam folder. Runbooks are awesome, but can take some time to setup. What if you want to use a simple Microsoft Flow to perform other actions? Flow is fast, simple, reliable, and mobile ready. Here is how to make OMS and Flow best friends – and make you look like a rockstar!

Go to https://requestb.in/ and select “Create a RequestBin”
Copy the Bin URL. You will need this in a minute.
Open OMS and head to the alert you want to make awesome (Overview – Settings – Alerts).
Turn on the Webhook action, and paste the RequestB.in URL into the "WebHook URL" field.
Check the box to turn on "Include Custom JSON Payload". Paste this JSON into the field below the checkbox:

{
    "AlertRuleName": "#alertrulename",
    "AlertThresholdOperator": "#thresholdoperator",
    "AlertThresholdValue": "#thresholdvalue",
    "LinkToSearchResults": "#linktosearchresults",
    "ResultCount": "#searchresultcount",
    "SearchIntervalEndtimeUtc": "#searchintervalendtimeutc",
    "SearchIntervalInSeconds": "#searchinterval",
    "SearchIntervalStartTimeUtc": "#searchintervalstarttimeutc",
    "searchquery": "#searchquery",
    "workspace": "#workspaceid",
    "IncludeSearchResults": true
}

Note: You might want to remove the "searchquery" JSON pair – alert queries that contain special characters can really cause havoc with the webhook.
Hit "Test WebHook". If you get an error, remove the searchquery section from the JSON and try again. If it occurs again, shoot me an email and I will be glad to help.
Go back to RequestB.in and refresh your page. You should see something like this:

Copy out the “Raw Body” section.
Open Flow, and create a new Flow (My Flows, Create from Blank)
For the Trigger – pick “Request/Response – Request”. When you click on the trigger, it will open the Flow for editing.
Click on "Use sample payload to generate schema", and paste in the data you got from RequestB.in. Click "Done". Flow has now set up the JSON schema that the rest of your Flow will use.
Add other Flow actions like you normally would – for example, you might want to add these alerts to a SharePoint list, or perhaps put them in a SQL database for archiving. The fields available to the actions that follow the Request trigger depend on the fields that were sent by the alert – I suggest you play around with Threshold vs. Metric alerts so you can see how they post to Flow.
One last step – when you save the Flow, make sure you go into the "Request" trigger and copy the URL. You will want to paste that URL into the OMS alert Webhook URL field (replacing the requestb.in URL). Now, when the alert triggers, it will pass the data to Flow rather than RequestB.in.

Once you have the alerts coming to Flow, the possibilities are endless – want to see those Alerts on your phone? Get PowerApps and in 5 minutes you can point it at the SQL database you are using to store the alerts. Want a mobile notification? Piece of cake with Flow. Hell, maybe you want to Tweet those alerts for some reason – Go for it! I won’t tell you not to.

OMS Custom JSON Payload with all Attributes

When using a custom JSON payload in an OMS Alert, knowing which attributes are available can be difficult. It took me way too long to find this. These are the attributes that I find most handy (and I believe this is a mostly complete list as of the time of this post). The odd-ball one here – IncludeSearchResults – is a simple true/false, and will send the bulk of the data to whatever webhook you are using. Without this pair, you will only get alert information, not information from the actual event that triggered the alert.

Another one to keep an eye on is the ‘searchquery’ pair. This pair (obviously) sends the Alert Search Query to the webhook. Because the queries can often contain special characters, and the webhooks can be finicky when it comes to special character translation, I will often exclude this pair to keep everyone happy. If you use the payload below, but your webhook fails when testing, remove the searchquery pair and try again.

{
    "AlertRuleName": "#alertrulename",
    "AlertThresholdOperator": "#thresholdoperator",
    "AlertThresholdValue": "#thresholdvalue",
    "LinkToSearchResults": "#linktosearchresults",
    "ResultCount": "#searchresultcount",
    "SearchIntervalEndtimeUtc": "#searchintervalendtimeutc",
    "SearchIntervalInSeconds": "#searchinterval",
    "SearchIntervalStartTimeUtc": "#searchintervalstarttimeutc",
    "searchquery": "#searchquery",
    "workspace": "#workspaceid",
    "IncludeSearchResults": true
}

Flow to the Rescue! Overcoming Azure Automation Scheduling Limitations

Azure Automation – the 800-lb gorilla in the room. If you can think of a way to accomplish a task, more than likely Azure Automation can do it. Combining the ease of use of PowerShell, the sheer power of Azure, and the multitude of integrations available, you can build enterprise-worthy automation runbooks quickly and easily.

It’s a no-brainer.
Until it isn’t.

One of the most frustrating limitations of Azure Automation is the scheduling tool. Sure – you can set up one-time and recurring schedules with ease, but what if you need something to run often – say, every 10 or 15 minutes? Unfortunately, when you go to set a schedule like that, you are greeted with this:

That means you can’t set a single schedule shorter than an hour. Sure, you can set multiple schedules – you would need 4 if you want to run every 15 minutes. What if you want something to run every 5? Are you going to create 20 schedules? Of course not! This is where Microsoft Flow comes to the rescue.

There are 2 ways to trigger an Azure Automation job from Flow (probably more, if you dig deep), so let’s start with the simplest one. If you have a connection from Flow to Azure Automation already, then this simple Flow will start an Azure Automation Runbook:

Boom – no need for 20 schedules here! This simple Flow probably just saved you an hour of clicking and checking. But what if you don't have a connection to Azure Automation already established? Perhaps you want to run a Runbook that isn't in your subscription? It's still pretty easy. The first step is to obtain the webhook URL for the Runbook. Start by accessing your Azure Automation Runbook – make sure it has focus. In the left navigation pane you should see a link for 'Webhooks'. Click that to shift focus to the Webhook listing page. Click the 'Add Webhook' button at the top of the main pane. From here on out, it's pretty straightforward. Give your webhook a name, set the expiration date for the webhook, and if your Runbook has any parameters you want the webhook to pass, specify those names.

IMPORTANT – Make sure you copy the URL before clicking OK. Finding that URL later is like pulling teeth – you might be able to do it, but it will be painful.

When you are satisfied with the settings, and have copied the URL, click ‘OK’.
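As an aside, you can verify the webhook works before wiring up Flow – any HTTP POST fires it. A quick sketch (the URL and parameter body here are placeholders):

```powershell
# Fire the runbook webhook manually; $webhookUrl is the URL you just copied
$webhookUrl = 'https://s1events.azure-automation.net/webhooks?token=...'
Invoke-RestMethod -Method Post -Uri $webhookUrl -Body (@{ ComputerName = 'Server01' } | ConvertTo-Json)
```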

Now that you have your webhook created in Azure Automation, it's time to set up Flow. Luckily for you, it couldn't be easier!

That’s it! No more Azure Automation Scheduling limitations! Run those runbooks as often as you like!

PowerShell – Runspaces and Large Enterprises

You've got 40GB of log files, a broken app, and a CFO reminding you how much money the company is losing per minute. You've got to find that one error in one log that will clue you in on how to fix the issue. You have no idea where it is, but you know you have this issue in the bag. How? Because you have PowerShell. I'm going to show you how.

We've all been there – a task that has to be done across hundreds of systems, a search of thousands of files, pulling a property from tens of thousands of AD user accounts. PowerShell can do it, but there is always a need to shave those seconds off. In this article we examine how to perform these large actions in the quickest possible way.

Asynchronous Processing

PowerShell has a couple of options when it comes to running tasks in a 'multi-threaded' fashion. The two you will primarily hear about are workflows and runspaces (jobs are another topic). Workflows are dead easy to set up, but can be picky about what they will and will not allow to happen in them. Workflows do have the nice feature of sequencing – being able to tell part of the workflow to run in sequence, then run other parts in parallel. Runspaces are more difficult to set up initially, but allow essentially any action you desire, and they allow for insane parallel processing. My personal preference? Runspaces are always in my toolbox. It becomes a no-brainer when you combine some of the work that the PowerShell heavy-weights – namely Boe Prox and Warren F – have done to make runspaces super easy. These guys are serious rock-stars.

Invoke-Parallel – Your new best friend

When you absolutely, positively have to burn up those CPUs and flood the network, you need Invoke-Parallel. Seriously, download it now. I made that a link for a reason. Go get it. Using this beast, we can run multiple commands against 20,000 remote server nodes every evening. We can put thousands of SCOM nodes into maintenance mode in a matter of minutes, or search hundreds of directories with thousands of files in a matter of seconds. This is one function that will elevate your PowerShell game. It’s all built on runspaces, and has some amazing logic wrapped around it.

From GitHub you will get a .ps1 file. You can either pull the function out of that file and include it in your script, dot-source the whole .ps1 file (. "C:\temp\invoke-parallel.ps1"), or take the function and wrap it up in a module. That last one is my preferred method, since I wrap it up with other useful functions. Regardless of how you reference the function, calling it is easy. Here is a simple example:
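Something along these lines – a reconstruction of the example described below, with placeholder server names:

```powershell
# Dot-source the function, build a server list, and fan the work out
. "C:\temp\Invoke-Parallel.ps1"

$servers = 'Server01', 'Server02', 'Server03'

Invoke-Parallel -InputObject $servers -ScriptBlock {
    # $_ is the current server in this runspace
    Test-Connection -ComputerName $_ -Count 1
}
```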

This is pretty straightforward. I am dot-sourcing the ps1, generating an array that has three servers in it, and then sending that array to the Invoke-Parallel function as the InputObject parameter. This couldn't be easier, and guess what? You just 'multi-threaded' a PowerShell script. Pat yourself on the back, and then buy Boe and Warren a drink the next time you see them.

Now, invoking these runspaces doesn't come free – there is overhead and startup time associated with starting a runspace, and that might actually be a detriment to your outcome. For example, if you have 20,000 log files that are relatively small (10MB or less), and you need to do a "select-string -pattern 'something'" across them, it might not be advantageous to run Invoke-Parallel. Let's look at the time it takes to find an error in one of those log files with each method. In a previous blog post, I created a function to generate a lot of log files with random data – I am using that here to create 20,000 log files of about 1MB in size. (Side note: I will later be expanding that function to take advantage of Invoke-Parallel.)

I have edited a random file and added this line somewhere in the middle:
2016-08-21–ERROR–TOO MANY FILES, IDIOT.
I have no idea which one I edited. That's how dedicated I am to this cause. Now, let's measure how long it takes to find this string both with and without Invoke-Parallel.
Without:
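The original post showed the timing as a screenshot; a sketch of what the plain measurement might look like (path and pattern are placeholders):

```powershell
# Single-threaded search across all 20,000 files
Measure-Command {
    Get-ChildItem -Path 'C:\temp\logs' -Filter *.log |
        Select-String -Pattern 'ERROR'
}
```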

With:
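And the parallel version, again as a hedged sketch:

```powershell
# Same search, fanned out across runspaces with Invoke-Parallel
. "C:\temp\Invoke-Parallel.ps1"
Measure-Command {
    Get-ChildItem -Path 'C:\temp\logs' -Filter *.log |
        Invoke-Parallel -ScriptBlock {
            Select-String -Path $_.FullName -Pattern 'ERROR'
        }
}
```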

Because the files are small, Select-String can process them faster than we can spin up new runspaces. But if we change the size of the files – say, to something like 1GB – the difference is dramatic.
Without Invoke-Parallel:

And with:

It halved the time it took to process those files. Why? Because we could load up multiple select-strings at a time, as each one was long-running. Each individual select-string is not CPU-intensive; it just takes time. In this instance I could process 40GB of files in 2.25 minutes, whereas before I could only do 20GB in 4 minutes. This tells us that runspaces are great for commands or scripts that take a while to run and aren't horribly CPU-intensive.

All of this brings me to the title of this article – when you are dealing with an absolutely massive number of machines, or AD accounts, or large files – whatever the case may be – Invoke-Parallel should be in your toolbox. At my current job, I have 6 commands to run every night on around 18,000 servers. I can run these through 8 jump servers – I pipe invoke-commands through a large Invoke-Parallel with a throttle of 80, and can finish the job in about 3 hours. Prior to using Invoke-Parallel, it was taking about 18 hours to complete. That is how you utilize a network.

The main parameters that we typically deal with when using Invoke-Parallel are the InputObject, the Throttle, the ScriptBlock (or ScriptFile), and the Timeout. The InputObject is an array that is the basis for the function. In essence the function will open a runspace for each object in the array. It could be an array of servers, array of users, or a list of files. The throttle is how many runspaces you want running at the same time. Avoid the temptation to set this value too high – it can actually be detrimental if too many runspaces are vying for the same resources (CPU/MEM/Disk). A good rule of thumb for my environment is to limit it to the number of processors on the system running the task. If I am using multiple servers to run tasks, or if the tasks have extremely minimal requirements, I might set it higher. Timeout is how many seconds you want the runspace to run before it is killed. This is typically used to free up runspaces that have encountered a problem – hung commands and such. The last parameter – the ScriptBlock (or scriptfile) is what you want to actually happen in the runspace. Take this example:
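A sketch matching the description below (the URL list is illustrative; in the shipped function the timeout parameter is spelled -RunspaceTimeout):

```powershell
# One runspace per URL, at most 4 at a time, each killed after 60 seconds
$urls = 'microsoft.com', 'github.com', 'google.com'

Invoke-Parallel -InputObject $urls -Throttle 4 -RunspaceTimeout 60 -ScriptBlock {
    Test-Connection -ComputerName $_ -Count 1 -Quiet
}
```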

In this case, the ScriptBlock is a simple test-connection. The $_ is the reference to the current object being processed by this runspace. In this example it was a single URL, but it can also be an object with properties, which you would access like any other property ($_.name, $_.Size, etc…). Inside the scriptblock, there are two options for accessing variables that are declared outside of the scriptblock: you can either use the '$using:variable' method, or specify the '-ImportVariables' parameter for Invoke-Parallel. Along the same lines, if you want to use modules that are imported outside of the runspace, you can use the '-ImportModules' parameter.

This example expands on the script block a bit, and shows how to use the -ImportVariables parameter:
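Something like this – a sketch reusing the $servers array from earlier; $logPath is a variable I made up for the illustration:

```powershell
# A variable declared outside the scriptblock...
$logPath = 'C:\temp\PingResults.txt'

Invoke-Parallel -InputObject $servers -ImportVariables -ScriptBlock {
    # ...is available inside thanks to -ImportVariables
    $alive = Test-Connection -ComputerName $_ -Count 1 -Quiet
    # For heavy use, write per-thread files instead to avoid file contention
    Add-Content -Path $logPath -Value "$_ : $alive"
}
```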

These are the basics of Invoke-Parallel. If you have any questions, feel free to leave a comment or ping me via email. In a future post we will go over jobs and how they compare to runspaces. See you then!

Again – special thanks to Boe Prox and Warren F. You guys make this stuff look easy.
