EASY PowerShell API Endpoint with FluentD

One of the biggest problems that I have had with PowerShell is that it’s just too good. I want to use it for everything. Need to perform automation based on monitoring events? Pwsh. Want to update rows in a database when someone clicks a link on a webpage? Pwsh. Want to automate annoying your friends with the push of a button on your phone? Pwsh. I use it for everything. There is just one problem.

I am lazy and impatient. Ok, that’s two problems. Maybe counting is a third.

I want things to happen instantly. I don’t want to schedule something in Task Scheduler. I don’t want to have to run a script manually. I want an API-like endpoint that will allow me to trigger my shenanigans immediately.

Oh, and I want it to be simple.

Enter FluentD. What is FluentD, you ask? From the website – “Fluentd allows you to unify data collection and consumption for a better use and understanding of data.” I don’t necessarily agree with this statement, though – I believe it’s so much more than that. I view it more like an integration engine with a large community of plug-ins that let you tie together a wide variety of toolsets. It’s simple, lightweight, and quick, and it doesn’t consume a ton of resources sitting in the background, either. You can run it on a ton of different platforms too – *nix, Windows, Docker, etc. There is even a slim version for edge devices, like IoT hardware or small containers. And I can run it all on-prem if I want.

What makes it so nice to use with PowerShell is that I can have a web API endpoint stood up in seconds that will trigger my PowerShell scripts. Literally – it’s amazingly simple. A simple config file like this is all it takes:

<source>
  @type http
  port 9880
</source>

Boom – you have a listener on port 9880 ready to accept data.
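You can smoke-test it straight from PowerShell. Here is a minimal example, assuming FluentD is running locally with the config above (the test.automation path is an arbitrary tag, and with only a source defined, FluentD will simply warn that nothing consumed the event):

$payload = @{ user = 'donnie'; action = 'annoy' } | ConvertTo-Json

# The URL path after the port becomes the FluentD event tag
Invoke-RestMethod -Uri 'http://localhost:9880/test.automation' -Method Post -Body $payload -ContentType 'application/json'

If you want to run a PowerShell script from the data it receives, just expand your config file a little.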

<source>
  @type http
  port 9880
</source>

# Outputs
<match **>
  @type exec
  # Runs at each buffer flush; FluentD appends the chunk file path as the last argument
  command "e:/tasks/pwsh/pwsh.exe -file e:/tasks/pwsh/events/start-annoyingpeople.ps1"
  <format>
    @type json
  </format>
  <buffer>
    flush_interval 2s
  </buffer>
</match>

With this config file, you are telling FluentD to listen on port 9880 for traffic (e.g. http://localhost:9880/automation, where the URL path becomes the event tag). When a JSON payload arrives via a POST request, FluentD buffers the event and, at each flush (every 2 seconds here), executes the command specified – in this case, my script to amuse me and annoy my friends. All I have to do is run this as a service on my Windows box (or a process on *nix, of course) and I have a fully functioning, PowerShell-executing web API endpoint.
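One wrinkle worth knowing: the exec output plugin doesn’t pipe events to your script’s stdin. It buffers them and passes the path of each flushed buffer chunk to your command as the last argument. Here is a minimal sketch of what start-annoyingpeople.ps1 could look like, with the payload properties (user, action) invented for illustration:

param(
    # FluentD's exec output appends the buffer chunk file path as the final argument
    [Parameter(Mandatory = $true)]
    [string]$ChunkPath
)

# With the json format block, each line in the chunk file is one event
Get-Content -Path $ChunkPath | ForEach-Object {
    $event = $_ | ConvertFrom-Json
    # Replace with your actual shenanigans
    Write-Host "Annoying $($event.user) via '$($event.action)'"
}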

It doesn’t have to just be web, either. There are over 800 plug-ins for input and output channels. Want SNMP traps to trigger your scripts? You can do it. How about an entry in a log starting your PowerShell fun? Sure! See the sketch below.
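For instance, here is what a log-watching source might look like in place of the http source. This is just a sketch: the log path, pos_file location, and tag are all invented, and the parse block treats each line as raw text.

<source>
  @type tail
  # Hypothetical log file to watch
  path /var/log/myapp/app.log
  # FluentD tracks how far it has read here
  pos_file /var/log/fluentd/myapp.pos
  tag myapp.log
  <parse>
    @type none
  </parse>
</source>

Every new line written to that log becomes an event, matches the same <match **> block, and fires your script. Seriously – take a look at FluentD and how it can up your PowerShell game immensely.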

Use a Config File with PowerShell

How many times has this occurred?


“Hey App Owner – all of these PowerShell automation scripts you had our team create started failing this weekend. Did something change?”
“Oh yeah, Awesome Automation Team. We flipped over to a new database cluster on Saturday, along with a new web front end. Our API endpoints all changed as well. Didn’t anyone tell you? How long will it take to update your code?”

Looking at dozens of scripts you personally didn’t create: “Ummmmm”

This scenario happens all of the time, and there are probably a hundred different ways to deal with it. I personally use a config file, and for very specific reasons. If you are coding in PowerShell 7+ (and you should be), then making your code cross-platform compatible should be at the top of your priority list. In the past, I would store things like database connection strings or API endpoints in the Windows registry, and my scripts would just reference that reg key. This worked great – I didn’t have the same property listed in a dozen different scripts, and I only had to update the property in one place.

Once I started coding with cross-plat in mind, I obviously had to change my thinking. This is where a simple JSON config file comes in handy. A config file can be placed anywhere, and the structured JSON format makes creating, editing, and reading it easy. Using a config file lets me take my whole codebase and move it from one system to another, regardless of platform.

Here is what a sample config file might look like:

{
    "Environments": [
        {
            "Name": "Prod",
            "CMDBConnectionString": "Data Source=cmdbdatabaseserverAG;MultiSubnetFailover=True;Initial Catalog=CMDB;Integrated Security=SSPI",
            "LoggingDBConnectionString": "Data Source=LoggingAG;MultiSubnetFailover=True;Initial Catalog=Logs;Integrated Security=SSPI",
            "ServiceNowRestAPI":"https://prodservicenowURL:4433/rest",
            "Servers": [
                {
                    "ServerName": "prodserver1",
                    "ModulesDirectory": "e:\\tasks\\pwsh\\modules",
                    "LogsDirectory": "e:\\tasks\\pwsh\\logs\\"
                },
                {
                    "ServerName": "prodserver2",
                    "ModulesDirectory": "e:\\tasks\\pwsh\\modules",
                    "LogsDirectory": "e:\\tasks\\pwsh\\logs\\"
                }
            ]
        },
        {
            "Name": "Dev",
            "CMDBConnectionString": "Data Source=testcmdbdatabaseserverAG;MultiSubnetFailover=True;Initial Catalog=CMDB;Integrated Security=SSPI",
            "LoggingDBConnectionString": "Data Source=testLoggingAG;MultiSubnetFailover=True;Initial Catalog=Logs;Integrated Security=SSPI",
            "ServiceNowRestAPI":"https://testservicenowURL:4433/rest",
            "Servers": [
                {
                    "ServerName": "testserver1",
                    "ModulesDirectory": "e:\\tasks\\pwsh\\modules",
                    "LogsDirectory": "e:\\tasks\\pwsh\\logs\\"
                },
                {
                    "ServerName": "testserver2",
                    "ModulesDirectory": "e:\\tasks\\pwsh\\modules",
                    "LogsDirectory": "e:\\tasks\\pwsh\\logs\\"
                }
            ]            
        }
    ]
}

And here is what the code that parses it looks like in each script:

# Load the shared config and pull out the values for the server running this script
$config = Get-Content -Path "$PSScriptRoot\..\config.json" | ConvertFrom-Json
$thisServer = $config.Environments.Servers | Where-Object { $_.ServerName -eq [Environment]::MachineName }
$ModulesDir = $thisServer.ModulesDirectory
$LogsDir = $thisServer.LogsDirectory
$DBConnectionString = $config.Environments | Where-Object { [Environment]::MachineName -in $_.Servers.ServerName } | Select-Object -ExpandProperty CMDBConnectionString
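From there, any script on the box resolves its own dependencies the same way. For example (the module name here is invented):

# Load shared tooling from the per-server modules directory
Import-Module (Join-Path $ModulesDir 'MyLoggingModule')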

Note that in this particular setup, I use the name of the server actually running the script to determine whether the environment is production or not. That is completely optional – your config file might be as simple as a couple of lines, something like this:

{
    "CMDBConnectionString": "Data Source=cmdbdatabaseserverAG;MultiSubnetFailover=True;Initial Catalog=CMDB;Integrated Security=SSPI",
    "LoggingDBConnectionString": "Data Source=LoggingAG;MultiSubnetFailover=True;Initial Catalog=Logs;Integrated Security=SSPI",
    "ServiceNowRestAPI":"https://prodservicenowURL:4433/rest"
}

When you use a simple file like this, you don’t need to parse the machine name – a simple “$DBConnectionString = $config | Select-Object -ExpandProperty CMDBConnectionString” would work great!
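And if you would rather pick the environment explicitly instead of inferring it from the machine name, a script parameter works just as well. Here is a quick sketch against the larger config above:

param(
    # Which environment block to pull from config.json
    [ValidateSet('Prod', 'Dev')]
    [string]$Environment = 'Dev'
)

$config = Get-Content -Path "$PSScriptRoot\..\config.json" | ConvertFrom-Json
$envConfig = $config.Environments | Where-Object { $_.Name -eq $Environment }
$DBConnectionString = $envConfig.CMDBConnectionString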

Now you can simply update the connection string or any other property in one location and start your testing. No need to “Find in all Files” or search and replace across all your scripts. An added plus of using a config file over something like a registry key is that the code stays cross-platform compatible. I’ve used JSON here out of personal preference, but you could use any type of flat file you wanted, including a simple text file. If you use a central command-and-control tool like SMA, then those properties are already stored for you, but this method is handy for those that don’t have that capability or simply want to prepare their scripts for running on K8s or Docker.

Let me know what you think about this – Do you like this approach or do you have another method you like better? Hit me up on Twitter – @donnie_taylor