PowerShell – Security in your profile

If you have done much with Invoke-WebRequest, and if the endpoints you hit have security-minded people watching them, then chances are you have run into a small issue:

Invoke-WebRequest : The request was aborted: Could not create SSL/TLS secure channel.

What's happening here? Well, chances are that the endpoint you are attempting to access has turned off TLS 1.0 and 1.1, and for good reason! There is an easy fix, however. Just place a single line of code in your script above the Invoke-WebRequest call:
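For reference, that single line is the standard .NET service-point setting (the same Net.ServicePointManager line referenced later in this post):

    # Force this session to use TLS 1.2 for web requests
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12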

Great! Post done, walk away.

But… I have about 10,000 scripts…

That one line works great if you have just a handful of scripts that you run, but what if you need to do this for a large company – maybe a large enterprise? Well, it turns out that your profile can help.

First off – there are multiple PowerShell profiles on a system, but for this instance, let's focus on the All-Users/All-Hosts profile (also sometimes referred to as the System profile). Depending on the flavor of PowerShell you are running – PowerShell (Core) vs. Windows PowerShell – the System profile will be in a different location. Not to fear, however, because $PSHome will show you where the profile is located. Create your profile (if you haven't already) in the $PSHome directory. The name of the file should be "profile.ps1".

Now – place the Net.ServicePointManager line you would normally place in a single script into your System profile and save it. Whenever an Invoke-WebRequest is run from this system, it will automatically use the TLS 1.2 protocol. Updating a few systems that run your scripts is a lot easier than updating thousands of scripts, and this will save you a ton of time.
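If it helps, here is a minimal sketch of setting that up from an elevated session. The only assumption is that you want to append the line rather than maintain the file by hand:

    # Path to the All-Users/All-Hosts (System) profile for this flavor of PowerShell
    $systemProfile = Join-Path $PSHOME 'profile.ps1'

    # Create the profile if it doesn't exist yet
    if (-not (Test-Path $systemProfile)) {
        New-Item -Path $systemProfile -ItemType File -Force | Out-Null
    }

    # Add the TLS 1.2 line so every session on this system gets it
    Add-Content -Path $systemProfile -Value '[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12'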

SQL Process Automation with PowerShell

Here is a handy way to handle SQL processes that you find yourself needing to schedule. Of course you can always set up that kind of scheduling via the SQL Server Agent, but there are two good reasons to do this kind of scheduling via PowerShell instead.

1: You don't have rights to add jobs via the SQL Server Agent. Some security teams will restrict non-DBA access to the agent, or insist on setting the agent service to manual start.
2: You want easier tracking, easier configuration, and just want to do something cool with PowerShell.

The other option you have when trying to schedule something like SQL processes is to simply use Task Scheduler. Indeed – in my solution I actually use Task Scheduler as the base engine, running every minute or so. What I don't like about Task Scheduler is trying to put SQL command lines in it – it's flat out a pain. So I built something that is easy to configure (even by someone who is not skilled in PowerShell), easy to implement, and has all of the typical good stuff you want with PowerShell.

The solution utilizes three main pieces. First is a script that we will schedule to run every minute via Task Scheduler. Second, a configuration file in JSON format – I chose JSON because it's simple to read and easy to write. Lastly, an XML file that tracks the last time each command ran. Let's examine each piece:

The script file is fairly straightforward, and there are only a couple of pieces that need explaining. In simple terms, this script is the engine that calls the various SQL commands we specify in the config.json file. We import a few modules, set up a couple of base variables, and then loop through the SQL commands, updating the runhistory.xml file with a timestamp after each one. Schedule this file to run every minute via Task Scheduler.
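The original script isn't embedded in this version of the post, so here is a rough sketch of the engine under some assumptions: the config field names (name, timeperiod, statement, server, database, type), the file paths, the runhistory.xml layout, and the Invoke-SQL parameters are all illustrative, not the author's exact code.

    # Engine script - scheduled to run every minute via Task Scheduler
    Import-Module InvokeSQL                                   # hypothetical name for the SQL module mentioned below

    $basePath        = 'C:\Scripts\SQLAutomation'             # assumed location of the config and history files
    $config          = Get-Content "$basePath\config.json" -Raw | ConvertFrom-Json
    [xml]$runHistory = Get-Content "$basePath\runhistory.xml"

    foreach ($command in $config.commands) {
        # Find the last time this command ran
        $entry   = $runHistory.runs.run | Where-Object { $_.name -eq $command.name }
        $lastRun = [datetime]$entry.lastrun

        # Only run it if the configured time period (in minutes) has elapsed
        if ((Get-Date) -ge $lastRun.AddMinutes($command.timeperiod)) {
            Invoke-SQL -Server $command.server -Database $command.database -Statement $command.statement -Type $command.type

            # Record the new timestamp in runhistory.xml
            $entry.lastrun = (Get-Date).ToString('o')
        }
    }

    # Save the updated run history
    $runHistory.Save("$basePath\runhistory.xml")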

If you are wondering about the Invoke-SQL command, it’s something that I wrapped up in a quick module. You can get the module here.

Now that we have the base script, let's look at the config.json file. This is where the meat of the information about your commands comes from:
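The config file itself isn't embedded here either; based on the description below, its shape would be something like this. The command names, servers, statements, and the 'type' values are made-up placeholders:

    {
        "commands": [
            {
                "name": "NightlyCleanup",
                "timeperiod": 60,
                "statement": "EXEC dbo.usp_Cleanup",
                "server": "SQLSERVER01",
                "database": "Maintenance",
                "type": "NonQuery"
            },
            {
                "name": "AgentHealthCheck",
                "timeperiod": 5,
                "statement": "SELECT * FROM dbo.AgentStatus WHERE Healthy = 0",
                "server": "SQLSERVER02",
                "database": "Monitoring",
                "type": "Query"
            }
        ]
    }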

As you can see, it's pretty straightforward. Put in the frequency you would like the statement to run (timeperiod), the statement itself, the server and database to run it against, and finally the 'type' of command it is – this just tells the SQL module whether or not to load the results into a data table. That's it! Anyone with a basic knowledge of how to write a JSON file can add to or remove from this config file quickly.

The final piece is the runhistory.xml file. This simply lets the main script keep track of the last time each statement was run. You shouldn't ever have to update this file manually.

For full transparency, there are a few things I need to swing back around and address. First, the commands run sequentially, so long-running SQL statements might cause others to fail or fall behind. The plan for that is to use something like PoshRSJob to run the jobs in parallel, each in its own runspace. Second, there is a small chance that if someone were to manually edit the runhistory.xml file and remove one of the lines, the script could error. I will update the script with a catch to make sure that doesn't happen.

SCOM 1807, Linux omiagent service stopped, 403 error in discovery and more

I hope this post saves you some time and pain – I lost 2 days to troubleshooting and opening cases. Here is the setup:

OpsMgr 1807
Latest Linux/Unix MP (updated in August, 2018)
OEL 7.x client
Followed the instructions here: SCOM 1801/1807 Install

The agent install went fine – the install completed, the cert was created, the conf file was created and configured, the cert was signed by the OpsMgr server and replaced on the client, and the agent was restarted. That's when I checked 'scxadmin -status'. Omiserver was running, but omiagent was stopped. Only one log was created – omiserver.log – and nothing for the agent. Trying to start or restart via scxadmin yielded nothing: no feedback, no updates to the logs, nothing. Even doing the install with --debug or setting the logs to verbose did nothing. So what happened?

Turns out, the omiagent process won't start until a successful discovery is run from the console. Who knew? Turns out I didn't – two days lost to troubleshooting. I knew that a discovery was needed, but I assumed that the agent processes would be running ahead of time. Turns out they don't all run before the discovery.

So, run over to the console and try the discovery, only to immediately get a 403 error. Specifically: "The WinRM client received an HTTP status code of 403 from the remote WS-Management service." Turns out this one is an easy fix, at least in my instance. My environment, like most in the corporate world, uses proxy servers. On the management server, a quick netsh winhttp set proxy proxy-server="http=" bypass-list="*.yourdomain.com" was all it took to fix the issue.
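With a placeholder for the proxy address (the original post omits the actual value), the full command looks something like this:

    netsh winhttp set proxy proxy-server="http=proxy.yourdomain.com:8080" bypass-list="*.yourdomain.com"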

Once the discovery ran fine, it took about 5 minutes before the omiagent on the Linux machine was up and running on its own.

I hope this saves someone a few minutes of troubleshooting – if it does, shoot me a tweet!

New PowerShell Module – PoshMTLogging

At the end of this post, I promised a simple module for logging that can make use of a mutex and has some other neat features. Well, here it is!

This simple module will write to a log file. This module has a couple of unique features:

– Optional 'UseMutex' switch, which helps avoid resource contention so multiple threads can write to the log at the same time
– Entry severity, so log readers like CMTrace can color-code entries automatically
– Standard line entries with automatic timestamping
– Automatic log rolling at 5 MB

So check it out! Let me know what you think, and feel free to branch/pull with any ideas!

PowerShell Logging – What about file locking?

Today I was working on one of my large-scale enterprise automation scripts when a common annoyance reared its ugly head. When dealing with multi-threaded applications – in this case runspaces – it's common to have one runspace trying to access a resource another runspace is currently using. For example: writing some output data to a log file. If two or more runspaces attempt to access the same log at the same time, you will receive an error message. This isn't a complete stoppage when dealing with log files, but if a critical resource is locked, your script could certainly fail.

So what is the easiest way to get around this? Enter the humble Mutex. There are several other posts that deal with Mutex locking, so I won’t go over the basics. Here I want to share some simple code that makes use of the mutex for writing to a log file.

The code below is a sample Write-Log function that takes four parameters. Three of them are mandatory – the text for the log entry, the name of the log file to write to, and the 'level' of the entry. Level is really only used to help with color coding and reading of the log in something like CMTrace. The last of the four parameters is 'UseMutex'. This is what tells our function whether or not to lock the resource being accessed.
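The function itself isn't embedded in this version of the post; here is a minimal sketch that matches the description above. The exact parameter names, the mutex name, and the line format are assumptions.

    function Write-Log {
        [CmdletBinding()]
        param (
            [Parameter(Mandatory = $true)][string]$Text,
            [Parameter(Mandatory = $true)][string]$Log,
            [Parameter(Mandatory = $true)][ValidateSet('INFO','WARNING','ERROR')][string]$Level,
            [switch]$UseMutex
        )

        # Simple timestamped entry - the Level value is what readers like CMTrace key off of
        $entry = "{0} [{1}] {2}" -f (Get-Date -Format 'yyyy-MM-dd HH:mm:ss'), $Level, $Text

        if ($UseMutex) {
            # A named, system-wide mutex so every thread/runspace/process takes its turn
            $LogMutex = New-Object System.Threading.Mutex($false, 'Global\WriteLogMutex')
            $LogMutex.WaitOne() | Out-Null
            try     { Add-Content -Path $Log -Value $entry }
            finally { $LogMutex.ReleaseMutex() }
        }
        else {
            Add-Content -Path $Log -Value $entry
        }
    }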

To write to the log, you use the function like this:
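Something along these lines (parameter names per the sketch above):

    Write-Log -Text 'Processing started' -Log 'C:\temp\mutex.txt' -Level 'INFO' -UseMutex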

Let’s test this. First, let’s make a script that will fail due to the file being in use. For this example, I am going to use PoshRSJob, which is a personal favorite of mine. I have saved the above function as a module to make sure I can access it from inside the Runspace. Run this script:
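A rough version of that test, assuming the function above was saved as a module named WriteLog somewhere in $env:PSModulePath:

    Import-Module PoshRSJob

    # 3000 runspaces, 250 at a time, all hammering the same log file with no mutex
    1..3000 | Start-RSJob -Throttle 250 -ModulesToImport 'WriteLog' -ScriptBlock {
        param($param)
        Write-Log -Text "$param" -Log 'C:\temp\mutex.txt' -Level 'INFO'
    } | Out-Null

    Get-RSJob | Wait-RSJob | Out-Null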

Assuming you saved your files in the same locations as mine, and the script runs, you should see a stream of errors along the lines of "The process cannot access the file because it is being used by another process" when you do a Get-RSJob | Receive-RSJob.

So what exactly happened here? Well, we told 3000 runspaces to write their number ($param) to the log file – 250 at a time (the throttle). That's obviously going to cause some contention. In fact, if we examine the output file (C:\temp\mutex.txt) and count the actual lines written to it, we will have missed a TON of log entries. On my PC, out of the 3000 that should have written, we ended up with only 2813 entries. Missing that many log entries is totally unacceptable. I've exaggerated the issue by using these large numbers, but it happens all the time with smaller sets as well. To fix this, we are going to run the same bit of code, but this time use the 'UseMutex' option on the Write-Log function. This tells each runspace to grab the mutex and attempt to write to the log; if it can't grab the mutex, it will wait until it can (in this case forever – $LogMutex.WaitOne() | Out-Null). Run this code:
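The only change from the previous test is the switch on the logging call:

    1..3000 | Start-RSJob -Throttle 250 -ModulesToImport 'WriteLog' -ScriptBlock {
        param($param)
        # Same as before, but each runspace now waits on the mutex before writing
        Write-Log -Text "$param" -Log 'C:\temp\mutex.txt' -Level 'INFO' -UseMutex
    } | Out-Null

    Get-RSJob | Wait-RSJob | Out-Null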

See the '-UseMutex' switch? That should fix our problem. This time, Get-RSJob | Receive-RSJob comes back without any file-access errors.

Success! If we examine our output file, we find that all 3000 lines have been written. Using our new Write-Log function with its mutex, we have solved our locking problem. Coming soon, I will publish the actual code on GitHub – stay tuned!

Custom Log Analytics logging with Logic Apps!

Here is a quick demo of sending data from a simple API to Log Analytics using Logic Apps. It used to be quite a pain to get data into Log Analytics – calling the API directly, or sending data via something like an Azure Function, an Azure Automation runbook, PowerShell scripts, etc. Now you can do it in about 3 minutes with no code!

First – some assumptions:
1: You have a Log Analytics workspace already set up in Azure. See this article if you need help with that.
2: Actually, there is no 2. Just make sure you have a Log Analytics workspace.

Create a Logic Apps… App. That naming convention seems wrong.

It will take a few seconds for the app to be created. When it is, enter the designer. In this example, we are going to retrieve a simple piece of data via a web API – we are going to get a current stock quote from IEXTrading.com. For those that don’t know, IEXTrading has an AMAZING web API for pulling stock data. Seriously – check it out: https://iextrading.com/developer/docs/

In this example we will set up a simple 15-minute timer, pull the data from IEXTrading, take the JSON payload from the API call, and send that to Log Analytics. It's actually really easy.

If you haven't set up a Log Analytics connection in Logic Apps yet, there are a couple of pieces of information from Log Analytics you are going to need. Go into your Log Analytics workspace, click on the 'Advanced Settings' section, and copy down the "Workspace ID" and either the "Primary" or "Secondary" key. Enter those into the connection information for the Logic Apps connector. I've shown you mine here:

Now – just let it run! It might fail for the first couple of runs – I believe this has something to do with the creation of the custom log in Log Analytics. After a short period of time, you can query for your custom log in Log Analytics. The one thing you should know is that the log name you specified in Logic Apps will have "_CL" appended to it. That stands for Custom Log, and it shows up whether you want it to or not. You can search for your log like this:
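For example, if you named the log StockQuote in the Logic App (a made-up name for this demo), the query in Log Analytics would look something like:

    StockQuote_CL
    | sort by TimeGenerated desc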

Logic Apps or Flow – String to Array

Ran across an interesting problem today – how to take a string and create a data array from it in Flow or Logic Apps. In this particular instance, the bit of data was being emailed to an Outlook.com inbox, and we wanted to parse that data and only work with 3 pieces of the string. It sounds easy, and it actually is, but it takes a small bit of knowledge ahead of time. Hopefully this helps you avoid some web searches.

So let’s say we have some data formatted like this:
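The original sample isn't shown in this version of the post, so for illustration assume the email body contains a string shaped like this (the values are made up):

    ["MYSERVER01","Processor","% Processor Time","42.5","2018-10-01T14:00:00Z"]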

This data is a sample performance metric – something you might receive from Log Analytics, for example. This data is obviously an array in theory, but right now it’s just a long string to Flow or Logic Apps. So, let’s get this into a usable array and access only the bits we want.

Here is the email:

Now, let’s create a trigger and add a “Compose” action. There is literally no configuration to the “Compose” action.

Next, initialize a variable and set its type to Array. Set the value to the output of the "Compose" action.

From here, you can reference the individual items of the array in a very straightforward way. You simply reference the index of the item you want, like this: variables('StringArray')[1] (this returns the second item in the array, since array numbering starts at 0). In this example, I pull out three pieces of data and (for no particular reason) email them back to myself.

And the email:

PowerShell – Get is optional

Here is something I learned a while back from Mr. Snover himself, and it was something I just couldn't believe. Sure enough it's true, and it's still true in PowerShell Core 6: the "Get-" part of almost all "Get-" commands is completely optional. Yeah – you heard that right. "Get" is optional for almost all of them. Get-Process is one exception that can't run correctly this way, mainly because "Process" expects some arguments. Otherwise, give it a try!
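A few quick ones to try in a console – if a bare noun isn't found as a command, PowerShell retries it with "Get-" prepended:

    Service       # resolves to Get-Service
    Date          # resolves to Get-Date
    ChildItem     # resolves to Get-ChildItem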

500 Error when setting up Windows Azure Pack

I am putting this out there so the next person doesn’t have to spend the DAYS I wasted trying to fix this. Here is the story:

You want to install SMA on your brand new Windows Server 2016 box, but obviously need Windows Azure Pack first. You grab the web installer, fire it up, verify that you have the right pre-reqs, and pick the Windows Azure Pack Express and Admin API selection. The install goes fine, and a web page launches so you can configure Azure Pack. You enter your database info, user and passphrase info, and 'next' your way to the end. You press that last checkmark and expect all of the little circles to come back green… but then you see that the Admin Authentication Site has come back with an error:

“500 Internal Server Error – Failed to configure databases and services: Some or all identity references could not be translated.”

Long story short – go to your inetpub directory and pull up the properties of the folder named "MgmtSvc-WindowsAuthSite". Un-check the Read-only box at the bottom, apply, and when prompted, tell it to apply to all sub-items.
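If you would rather script that than click through Explorer, something like this should do the same thing (assuming the default inetpub location):

    # Clear the read-only flag on everything under the WindowsAuthSite folder
    Get-ChildItem 'C:\inetpub\MgmtSvc-WindowsAuthSite' -Recurse -File |
        ForEach-Object { $_.IsReadOnly = $false }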

From what I can tell, the connection strings and users you entered during the config are written into the web.config for this site. My guess is that the setup program attempts to decrypt the web.config and create additional files (an un-encrypted version of the web.config?), but it was failing due to the read-only property. I can't verify that (I refuse to go through that setup program again), but I do know that unchecking that box finally got the config to complete successfully, and I was able to launch the Service Management Portal and get my SMA web service registered.

The literal days I wasted on this – I hope none of you have to go through that.

Super-fast mass update of management servers for OpsMgr

Here's a quick one – you want to update the failover management servers on your agents en masse, and you don't want to wait 12 years for it to complete. Why would you want to set it? Maybe you only want certain agents talking to certain data centers, or specific management servers have very limited resources. Regardless of the reason, if you do need to update the agent config, it can be a bit slow. Here is a quick little script that can make those updates a LOT quicker.

First things first – download PoshRSJob from Boe Prox. It's about the best thing since sliced bread, and I use it constantly. Download the module and place it in one of your module directories (C:\Windows\System32\WindowsPowerShell\v1.0\Modules, for example). Next, create a CSV called FailOverPairs.csv. This should have 2 columns – Primary and Failover. For example:
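The server names below are placeholders; just keep the Primary,Failover header:

    Primary,Failover
    MS01.yourdomain.com,MS02.yourdomain.com
    MS03.yourdomain.com,MS04.yourdomain.com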

You will want that header line – mainly because it saves us a couple of lines of code in PowerShell. Save that CSV and the script below in the same directory, and you are good to go! The CSV will be used to set the appropriate failover partner. Here is the script:
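The full script isn't embedded in this version of the post; here is a condensed sketch based on the walkthrough below. The throttle value, the CSV matching logic, and the use of Set-SCOMParentManagementServer are assumptions about the original, not a copy of it.

    Import-Module PoshRSJob
    Import-Module OperationsManager

    # Modules the runspaces should load - exclude things like PoshRSJob and the ISE
    $ModulesToImport = (Get-Module | Where-Object { $_.Name -notmatch 'PoshRSJob|ISE|PSReadline' }).Name

    # Every agent in the management group, plus our desired Primary/Failover pairs
    $Agents        = Get-SCOMAgent
    $FailOverPairs = Import-Csv "$PSScriptRoot\FailOverPairs.csv"

    $Agents | Start-RSJob -Name { $_.DisplayName } -Throttle 20 -ModulesToImport $ModulesToImport -ScriptBlock {
        param($Agent)
        $FailOverPairs = $using:FailOverPairs

        # Current assignment for this agent
        $CurrentPrimary  = $Agent.GetPrimaryManagementServer()
        $CurrentFailover = @($Agent.GetFailoverManagementServers())

        # The pair this agent should be using, keyed off its primary management server
        $Pair = $FailOverPairs | Where-Object { $_.Primary -eq $CurrentPrimary.DisplayName }

        if ($Pair -and ($CurrentFailover.DisplayName -notcontains $Pair.Failover)) {
            # Correct the failover server for this agent
            $Failover = Get-SCOMManagementServer -Name $Pair.Failover
            Set-SCOMParentManagementServer -Agent $Agent -FailoverServer $Failover | Out-Null
        }
    }

    # Wait for all of the runspace jobs to finish
    Get-RSJob | Wait-RSJob | Out-Null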

Let’s examine some of this – the imports are obvious. If you have any issue with unblocking files or execution policy, leave a comment and I will help you through the import. The next line is different:
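From the sketch above, that line is:

    $ModulesToImport = (Get-Module | Where-Object { $_.Name -notmatch 'PoshRSJob|ISE|PSReadline' }).Name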

What we are doing here is getting a list of the loaded modules and then excluding some of them. We are doing this because when we run this script, we are creating a ton of runspaces, and by default those runspaces need to know which modules to load. We don't need them to load PoshRSJob, and we don't need them to load things like the ISE, because the runspaces are ephemeral – they will go away after they have completed their processing. This line can be modified if you need to load a different set of modules. It will load the OperationsManager module, which is the heavy lifter of this script.

Next, we get all of the agents from the management group. This script needs to be run from a SCOM server, but you could easily modify it to run from a non-SCOM system by adding the '-ComputerName' switch to the Get-SCOMAgent command. Then we import the CSV that contains our failover pairs.

Now the fun starts – this line starts the magic:
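Again from the sketch above:

    $Agents | Start-RSJob -Name { $_.DisplayName } -Throttle 20 -ModulesToImport $ModulesToImport -ScriptBlock { ... }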

We are feeding the list of SCOM agents (via the pipeline) to the Start-RSJob cmdlet. The '-Name' parameter tells the runspaces to use the agent name as the job name, and the '-Throttle' parameter controls the number of runspaces we want running at once. I typically find that there isn't a lot of benefit to going much over 2 or 3 times the number of logical cores. Maybe if you have very long-running remote processes it might be beneficial to go up to 5-10 times the number of processors, but for this I found 2-3 to be the sweet spot. You will also see that we are telling Start-RSJob which modules to import (see above).

The rest of the script is the scriptblock we want PoshRSJob to run. This is actually pretty straightforward – we set some variables (some of them we have to get with '$using:'). Then we find the current primary and failover servers, see if they match our pairs, and if they don't, we correct them. This isn't a fast process, but if you are doing 20 of them at a time, it goes by a lot faster!

At the end of the script, we are simply waiting for the jobs to finish. In fact, if you want to track the progress, comment out this line:
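In the sketch above, that is the wait at the very end:

    Get-RSJob | Wait-RSJob | Out-Null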

If you comment that line out, you can track how fast your jobs are completing by using this:
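Something as simple as grouping the jobs by state works:

    Get-RSJob | Group-Object -Property State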

We’ve been able to check several thousand systems daily in very little time to make sure our primary and failover pairs are set correctly. I hope you guys get some use from this, and go give Boe some love for his awesome module! Leave a comment if you have any questions, or hit me up on Twitter.
