Tag Archives: scripting

One of the things I do at work involves creating scripts in which each line runs a Ruby script. For each line in each script I create, I have to:

  1. Go to our secondary email program
  2. Copy a job name (a word, basically) or an entire line from an error email
  3. Change to an editing program (like the PowerShell Integrated Scripting Environment, a.k.a. ISE)
  4. Create a line containing the Ruby command, then paste the text I copied from the error email onto the end of that line

Today, I had almost 120 lines to create this way – some of them in two versions.

Previously, I did it all by hand – I duplicated the Ruby script portion as many times as I needed, then copied and pasted the error text.

Today, though, contemplating the 200+ lines to create, I decided to dig a bit deeper into the PowerShell ISE.

I discovered I could open a PowerShell script file from disk using:

PS P:\> New-Item -ItemType File test0.ps1
PS P:\> $PSISE.CurrentPowerShellTab.Files.Add("P:\test0.PS1")

I then found I could access the open files in the tabbed script panes using standard array indexing notation:

PS P:\> $PSISE.CurrentPowerShellTab.Files[7]                # or [0], [1], [2], etc

With a little more experimentation, I found I could assign the tabbed script to a variable, save its contents from the command line, and update the Text property of the script pane’s Editor:

PS P:\> $file0 = $PSISE.CurrentPowerShellTab.Files[7]
PS P:\> $file0.Editor.Text = "hello, world"
PS P:\> $file0.Editor.Text += "`n" + "Goodbye, cruel world"
PS P:\> $file0.Save()
PS P:\> Get-Content P:\test0.ps1
hello, world
Goodbye, cruel world

PS P:\>

I then discovered that I could access the current tab directly, without having to use the array indexing notation, or assign the tabbed script to a variable:

PS P:\> $PSISE.CurrentFile.Editor.Text = ""

I then adapted Recipe 8.3 (“Read and Write from the Windows Clipboard”) from the Windows PowerShell Cookbook to write a one-liner:

PS P:\> function Get-Clipboard { Add-Type -Assembly PresentationCore; [Windows.Clipboard]::GetText() }

Finally, I put the Ruby script lines into variables (for example, $ruby_script_1) and defined a new variable, $PCE:

PS P:\> $PCE = $PSISE.CurrentFile.Editor

And used the results to add lines to the currently selected script tab:

PS P:\> $PCE.Text += $ruby_script_1 + (Get-Clipboard) + "`n"
# the `n is the PowerShell way to specify a newline character
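
The append could even be wrapped in a little function – a minimal sketch, assuming the $ruby_script_1 prefix and the Get-Clipboard function defined above (the Add-JobLine name is hypothetical):

function Add-JobLine {
    # append the Ruby command prefix, the clipboard contents, and a newline
    # to the currently selected script tab
    $PSISE.CurrentFile.Editor.Text += $ruby_script_1 + (Get-Clipboard) + "`n"
}

With that defined, each new line is just an Add-JobLine at the prompt.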

Now, I just had to

  1. Switch to the email program,
  2. Copy the job name or full line out of the error email,
  3. Switch back to the PowerShell ISE, and
  4. Up-Arrow to create each new line

It looks like the same number of steps – but there are a lot fewer keypresses, so…WIN!!!

I’ve been diving back in to – well, dipping my toe in the chilly waters of – PowerShell for some scripting here at my Data Processing job.

Several years ago, I learned the hard way (i.e., after writing a couple hundred lines of Ruby script) that although much of our processing automation was written without unit tests, that does NOT apply to any automation that *I* want to write. Not if I want to put it into production, that is.

I resisted unit testing and TDD for some time (Why? Well, that’s a story for another time), but I finally got testing religion last year with some Python scripting.

I could continue with the Python, but I think PowerShell is a better fit for our environment here.

Most modern programming languages offer several testing frameworks to choose from, but for PowerShell there’s only one that I know of – Pester.

Pester can be installed through NuGet or downloaded from GitHub.

I’m not going to repeat any Pester examples here – you can find plenty of “Getting Started” guides on the web.

While looking for a TechNet “Getting Started” link, I found this post courtesy of Matt Wrock’s Hurry Up and Wait blog:

Why TDD for PowerShell? Or why pester? Or why unit test a “scripting” language?

Matt’s blog is subtitled “Tales from an Automation Engineer”, so his perspective on testing is a little different from the usual software testing guru. In particular, he points out that when it comes to infrastructure (and Data Processing, IMO), the things that are mocked / “stubbed out” in most software development environments are the things that we want to test:

But infrastructure code is different

Ok. So far I don’t think anything in this post varies with infrastructure code. As far as I am concerned, these are pretty universal rules to testing. However, infrastructure code IS different…

If I mock the infrastructure, what’s left?

So when writing more traditional style software projects (whatever the hell that is but I don’t know what else to call it), we often try to mock or stub out external “infrastructureish” systems. File systems, databases, network sockets – we have clever ways of faking these out and that’s a good thing. It allows us to focus on the code that actually needs testing.

However …if I mock away all of these layers, I may fall into the trap where I am not really testing my logic.

More integration tests

One way in which my testing habits have changed when dealing with infrastructure code is I am more willing to sacrifice unit tests for integration style tests…If I mock everything out I may just end up testing that I am calling the correct API endpoints with the expected parameters. This can be useful to some extent but can quickly start to smell like the tests just repeat the implementation.

Typically I like the testing pyramid approach of lots and lots of unit tests under a relatively thin layer of integration tests. I’ll fight to keep that structure but find that often the integration layer needs to be a bit thicker in the infrastructure domain. This may mean that coverage slips a bit at the unit level but some unit tests just don’t provide as much value and I’m gonna get more bang for my buck in integration tests.

Matt’s opinion accords with my intuition about my Data Processing environment. In the DP realm, the part of the script that can be tested without accessing the production environment (or at least a working model of the production environment) can be trivial. This is probably the main reason our existing production automation doesn’t have full testing coverage. (Well, that, and the fact that as far as I know there’s no testing framework for the automation software we use).

So I think my approach will be something like Matt’s – unit test where it’s useful and non-trivial, and more integration tests (a “thicker layer” as Matt says) to get full (or at least adequate) coverage.
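
To make that split concrete, here is a rough sketch in Pester’s Describe/It/Should syntax (the function names and paths are hypothetical):

# unit test: pure string logic, no infrastructure required
Describe "Get-JobName" {
    It "extracts the job name from an error email subject" {
        Get-JobName "ERROR: job FOO123 failed" | Should Be "FOO123"
    }
}

# integration test: exercises the real (or model) file system
Describe "New-ProcessedFolder" {
    It "creates the processed-files folder for a job" {
        New-ProcessedFolder "FOO123"
        Test-Path "N:\jobs\FOO123\processed" | Should Be $true
    }
}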

PowerShell originally started as a project called “Monad” within Microsoft.

The original Monad Manifesto [PDF] was written by Jeffrey Snover back in August 2002.

BTW, one of the major influences on Monad was a paper by John Ousterhout:
Scripting: Higher-Level Programming for the 21st Century [PDF]

It’s interesting to read Snover’s original manifesto and see how much of the original vision made it into PowerShell (and how much didn’t).

(originally posted at edward.spurlock.cc)

There comes a time in every programmer’s life when s/he has to strike out on his/her own, writing new code (instead of typing in examples from books / websites). That time has come now for me with regards to PowerShell.

But first, I have to set up my working environment.

Here at work, we have a common (i.e., shared) network directory on our Production resource server. There were no PowerShell utilities in the directory – probably because, as far as I know, I’m the first person to do anything serious with PowerShell here, with the possible exception of the IT guys (and they don’t use the Production resource server).

However, it occurred to me that that common directory (call it N:\common\utils, because that’s not its name) would be a good place to put modules meant to be shared.

How do I tell PowerShell to look for modules there, without having to specify this every time I start PowerShell?

For now, I just:

  1. created a PS subdirectory in N:\common\utils (PS for PowerShell, of course)
  2. Started PowerShell on my PC and created a profile file at the path given by $profile (per Recipe 1.6 from the Windows PowerShell Cookbook):
    New-Item -type file -force $profile
  3. edited the profile file using Notepad.exe:
    notepad $profile
  4. and added a line to add the common directory to the PSModulePath environment variable:
    $env:PSModulePath += ";N:\common\utils\PS"
    (the leading semicolon separates the new entry from the paths already in PSModulePath)
  5. exited notepad, saving $profile on the way out.

Now, whenever I start PowerShell, the $profile runs and adds the PS shared folder to the module search path.
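
One refinement: if that line ends up running more than once in a session, the path gets appended repeatedly. A minimal guard, assuming the same shared folder:

# add the shared module folder to PSModulePath only if it isn't already there
$sharedModules = "N:\common\utils\PS"
if (($env:PSModulePath -split ";") -notcontains $sharedModules) {
    $env:PSModulePath += ";" + $sharedModules
}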

To do the same thing (less step 1) for the Windows PowerShell ISE, I consulted the Microsoft TechNet article How to Use Profiles in Windows PowerShell ISE, which suggests wrapping the New-Item in step 2 in an if statement to prevent overwriting an existing profile, and using the ISE to edit the resulting profile file:

  1. (PS subdirectory already created in N:\common\utils)
  2. Started the PowerShell ISE and created a profile file at the path given by $profile:
    if (!(test-path $profile)) {New-Item -type file -path $profile -force}
  3. edited the profile file (using the ISE editor):
    psEdit $profile
  4. and added the same line to add the common directory to PSModulePath:
    $env:PSModulePath += ";N:\common\utils\PS"
  5. then closed the ISE editor tab, saving the ISE $profile file on the way out

Now I just have to figure out modules and module manifests…
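
For the curious, the basic shape appears to be a .psm1 file in its own folder under a PSModulePath entry, plus a .psd1 manifest. A minimal sketch – the MyUtils name is hypothetical, and on PowerShell 2.0 New-ModuleManifest will prompt for additional fields:

# N:\common\utils\PS\MyUtils\MyUtils.psm1 – the module itself
function Get-Greeting { "hello, world" }
Export-ModuleMember -Function Get-Greeting

# generate the manifest alongside it (ModuleToProcess names the .psm1)
New-ModuleManifest -Path N:\common\utils\PS\MyUtils\MyUtils.psd1 `
    -ModuleToProcess MyUtils.psm1

# then, from any session with the share on its PSModulePath:
Import-Module MyUtils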

(originally posted at edward.spurlock.cc)

A lot of our Production Quality Control (QC) operations where I work require checking that data has been uploaded to one of our websites, using either one of our internal tools, or our backdoor access to one of our customer-facing sites. This is all right when we’re checking a couple of customer jobs, but gets tedious VERY quickly for routine QC of dozens or hundreds of customer jobs.

A web app works well as a manual tool (“enter text to be searched for in this box, click Search, click the link for desired item in the list of items matching your search term…”), but our internal tools and customer-facing sites were never designed to be scripted.

For a while, when I was working with more cross-platform scripting languages, I was looking at Selenium. Selenium allows you to control popular web browsers from a number of programming languages, including Python, Ruby, and C# – but not directly from PowerShell. It would be possible to write my own PowerShell wrapper for C# to control Selenium, but I don’t have any experience extending PowerShell with C#, and since we’re not a C# shop, I think that would be very fragile from a long-term maintenance standpoint.

Anyway, unlike the typical Selenium application, our Production QC ops aren’t testing a single web app across multiple browsers. We’re searching for a multitude of data items, but we only have to find each one once, in a single web browser. A more robust solution would be to use something more native to PowerShell to control a single browser – which could even be Internet Explorer (perish the thought!).

I Googled “powershell web browser automation” and came up with a number of possibilities.

Web UI Automation with Windows PowerShell is an MSDN article from 2008 that talks about using COM to control Internet Explorer – something I’ve dabbled in using VBScript. My first experiment with the method wasn’t successful, though, so I looked for troubleshooting info for COM in my handy copy of Windows PowerShell in Action. As it happens, the book illustrates COM with an example of “…a COM-based tool to help manage…browser windows,” so the book probably offers a more fertile field for further research.
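
For the record, the core of the COM approach is only a few lines – a sketch of the basics (the URL is a placeholder):

# create an Internet Explorer COM object and drive it from PowerShell
$ie = New-Object -ComObject InternetExplorer.Application
$ie.Visible = $true
$ie.Navigate("http://www.example.com")
# wait for the page to finish loading before touching the DOM
while ($ie.Busy) { Start-Sleep -Milliseconds 100 }
# the loaded page is then available via $ie.Document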

A post on StackOverflow then led me to WatiN – Web Application Testing in .Net. WatiN allows control of Internet Explorer AND Firefox, so it might be even better than using COM.

(originally posted at edward.spurlock.cc)

Another resource I’ve been mining in the last week or so: PowerShell.org, an independent community for PowerShell users.

Something at PowerShell.org that you won’t find at every other PowerShell resource: the PowerScripting Podcast, currently on episode 289. As I get time, I’m paging back through the archives to find episodes that are of interest to me (a relatively new PowerShell user).

(originally posted at edward.spurlock.cc)

Microsoft’s Hey, Scripting Guy! blog

Microsoft TechNet Script Center

PowerShell Code Repository – PoshCode.org

(originally posted at edward.spurlock.cc)

Our processing automation at work creates a number of files during processing. One way we can tell when the automation hasn’t completed successfully is when the processed files directory has been created, but the files that are created at the end of processing are missing from the directory.

Here’s a PowerShell script fragment to identify subfolders (two levels down) that lack a given file (target.txt in this example):

PS > Get-ChildItem |
>>   ForEach-Object {
>>     Set-Location $_
>>     Get-ChildItem |
>>       Where-Object {!(Test-Path $_\target.txt)}
>>     Set-Location ..
>> }
>>

And here’s a one-liner version of the above:

PS > ls | %{ cd $_ ; ls | ?{!(test-path $_\target.txt)} ; cd ..}

It’s not perfect – if it encounters a file (rather than a subdirectory) one level down, it attempts to Set-Location to that filename and throws an error (because you can’t Set-Location to a file, only a folder). However, it seems to find all the folders missing the target.txt file before it attempts to Set-Location to that file.
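
One possible refinement: filtering for containers before descending avoids that error (PSIsContainer works back to PowerShell 2.0; on 3.0 and later, Get-ChildItem -Directory is tidier):

PS > Get-ChildItem | Where-Object { $_.PSIsContainer } |
>>   ForEach-Object {
>>     Set-Location $_
>>     Get-ChildItem | Where-Object { $_.PSIsContainer -and !(Test-Path $_\target.txt) }
>>     Set-Location ..
>> }
>>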

(originally posted at edward.spurlock.cc)

A recent post on the Hey, Scripting Guy! blog showed how to use PowerShell to find a network adapter’s MAC address. The post provided two ways to get the information using WMI:

Get-WmiObject win32_networkadapterconfiguration | select description, macaddress

Get-CimInstance win32_networkadapterconfiguration | select description, macaddress

I wondered about the difference between Get-WmiObject and Get-CimInstance. Happily, while exploring older Hey, Scripting Guy! posts, I found one about simplifying PowerShell scripts that addressed the difference(s):

One of the first changes I make to a script, if I can, is I change Get-WmiObject to Get-CimInstance. Since Windows PowerShell 3.0, I can use Get-CimInstance. It is faster and more robust, and it permits lots of cool things for retrieving data (such as using Cim-Sessions).

Since most of our servers here are running PowerShell 2.0 right now, it will be a while before I can routinely use Get-CimInstance instead of Get-WmiObject — but it’s something for me to keep in mind for the future.
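
When that day comes, the session-based variant would look something like this (server01 is a placeholder name):

# open a reusable CIM session to a remote server and query through it
$session = New-CimSession -ComputerName server01
Get-CimInstance -CimSession $session win32_networkadapterconfiguration |
    select description, macaddress
Remove-CimSession $session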

(originally posted at edward.spurlock.cc)

Via yesterday’s Hey, Scripting Guy! blog –

Scriptify – A navigation aid for SharePoint 2013 PowerShell Cmdlets is a web-based reference for PowerShell cmdlets related to SharePoint 2013. The cmdlets are divided into 36 categories – clicking on a category’s button takes you to a page of buttons for the individual cmdlets in that category. Clicking on a cmdlet’s button takes you to a page with parameters for the cmdlet and a link to its TechNet homepage.

(originally posted at edward.spurlock.cc)