PowerShell TDD: Testing ShouldProcess

July 12th, 2022

The built-in PowerShell feature ShouldProcess is considered best practice for commands that perform destructive operations.

function Remove-Something {
    [CmdletBinding(SupportsShouldProcess)]
    param()
    if ($PSCmdlet.ShouldProcess("Something", "Permanently remove")) {
        # Code that performs the destructive operation
    }
}
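Declaring SupportsShouldProcess also gives the command the common -WhatIf and -Confirm parameters for free. As a quick sketch of the behavior you should see with the stub above:

Remove-Something -WhatIf    # Prints something like: What if: Performing the operation "Permanently remove" on target "Something".
Remove-Something -Confirm   # Prompts for confirmation before running the destructive code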

The problem with ShouldProcess is that it may prompt the user for manual confirmation, which makes automated testing difficult or impractical. The normal approach to automating tests of processes that involve manual prompting is to mock the part that does the prompting. However, since ShouldProcess is a .NET method on an automatic context variable ($PSCmdlet), testing frameworks like Pester aren’t able to replace its implementation with a mock.

Fortunately, there is a way to make ShouldProcess-adhering commands fully testable. It just requires a little extra work, such as wrapping ShouldProcess in a PowerShell function of its own.

Wrapping ShouldProcess

Here’s a PowerShell function that wraps the ShouldProcess method and makes it possible to use mocks to bypass the prompting.

function Invoke-ShouldProcess {
    # Suppress the PSShouldProcess warning since we're wrapping ShouldProcess to be able to mock it from a Pester test.
    # Info on suppressing PSScriptAnalyzer rules can be found here:
    # https://github.com/PowerShell/PSScriptAnalyzer/blob/master/README.md#suppressing-rules
    [Diagnostics.CodeAnalysis.SuppressMessageAttribute('PSShouldProcess', '')]
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [System.Management.Automation.PSCmdlet]
        $Context,
        [Parameter(Mandatory)]
        [string]
        $Target,
        [Parameter(Mandatory)]
        [string]
        $Operation,
        [string]
        $Message
    )
    if ($Message) {
        $Context.ShouldProcess($Message, $Target, $Operation)
    } else {
        $Context.ShouldProcess($Target, $Operation)
    }
}

There are actually four overloads of the ShouldProcess method, but the code above handles only the two most useful ones (in my opinion).

The next step is to replace the ShouldProcess method in the command, and use the wrapper instead.

function Remove-Something {
    [CmdletBinding(SupportsShouldProcess)]
    param()
    if (Invoke-ShouldProcess -Context $PSCmdlet -Target "Something" -Operation "Permanently remove") {
        # Code that performs the destructive operation
    }
}

Note that you need to provide the command’s $PSCmdlet variable as the context on which ShouldProcess should be invoked.
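If you want a more descriptive confirmation message, the optional -Message parameter routes the call to the three-argument overload. A minimal sketch:

if (Invoke-ShouldProcess -Context $PSCmdlet -Target "Something" -Operation "Permanently remove" -Message "This will permanently remove Something and cannot be undone.") {
    # Code that performs the destructive operation
}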

Mocking Invoke-ShouldProcess

Now our command is ready for some Pester testing. Here are a couple of test stubs that show you how to mock the wrapper. Hopefully, you’ll be able to take it from here and start testing your own ShouldProcess-aware commands.

It 'Should perform the operation if the user confirms' {
    Mock Invoke-ShouldProcess { $true }

    Remove-Something
        
    # Code that verifies that the operation was executed
}

It 'Should not perform the operation if the user aborts' {
    Mock Invoke-ShouldProcess { $false }

    Remove-Something

    # Code that verifies that the operation was not executed
}
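To make the verification part concrete, here’s a sketch of the confirm case, assuming a hypothetical Remove-Something whose destructive operation is a Remove-Item call. The call is verified with Pester 5’s Should -Invoke (Assert-MockCalled in Pester 4):

It 'Should perform the operation if the user confirms' {
    Mock Invoke-ShouldProcess { $true }
    Mock Remove-Item { }

    Remove-Something

    # Verify that the destructive operation was executed exactly once
    Should -Invoke Remove-Item -Exactly -Times 1
}

The abort case is the mirror image: mock the wrapper to return $false and assert -Times 0.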

Introducing TDDSeams

I have created a PowerShell module named TDDSeams, which has wrappers for ShouldProcess and its sister method ShouldContinue, as well as a couple of helper functions that implement the best-practice behavior described by Kevin Marquette in his excellent post Everything you wanted to know about ShouldProcess.

You can install the TDDSeams module from an elevated PowerShell with the following command.

Install-Module TDDSeams

Also, if you are interested in test-driven PowerShell development, check out my previous post PowerShell TDD: Testing CmdletBinding and OutputType.

PowerShell TDD: Testing CmdletBinding and OutputType

July 11th, 2022

A while ago, I decided to add PowerShell to my automation toolbox. I believe the best way to learn a new language is to do test-driven development, and PowerShell has a fantastic module for this, called Pester. The framework is super intuitive, easy to do mocking in, and lets you drive your code design from tests.

However, one thing bugged me. I didn’t seem to be able to write tests that checked whether a function had declared CmdletBinding (i.e., was an advanced function), or whether it had declared OutputType.

# How can I test this?
function MyFunction {
    [CmdletBinding(SupportsShouldProcess, ConfirmImpact='High')]
    [OutputType('String')]
    ...
}

Google didn’t have anything useful on the subject, so I tried my luck on StackOverflow, where mclayton pointed me in the right direction (although, as mclayton puts it: it’s a bit of a mouthful). It turns out that I can use the built-in Abstract Syntax Tree (AST), and specifically the param block attributes, to find out if CmdletBinding is declared.
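To get a feel for what the AST exposes, you can poke at it directly from the prompt. A minimal sketch, assuming the MyFunction stub above is defined in the session:

$command = Get-Command -Name MyFunction
$command.ScriptBlock.Ast.Body.ParamBlock.Attributes | ForEach-Object { $_.TypeName.FullName }
# CmdletBinding
# OutputType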

Testing CmdletBinding

The below function takes a command as input and looks for a CmdletBinding param block attribute.

function Test-CmdletBinding {
    [CmdletBinding()]
    [OutputType([Bool])]
    param (
        # The command to inspect, e.g. from Get-Command
        [Parameter(Mandatory)]
        [System.Management.Automation.CommandInfo]
        $Command
    )

    $attribute = $Command.ScriptBlock.Ast.Body.ParamBlock.Attributes |
        Where-Object { $_.TypeName.FullName -eq 'CmdletBinding' }

    $null -ne $attribute
}

You can then use the helper function in a Pester test, like this.

It "Should be an advanced function" {
    $c = Get-Command -Name MyCommand
    Test-CmdletBinding $c | Should -BeTrue
}

Testing CmdletBinding Arguments

That’s great, but what about arguments, like SupportsShouldProcess or ConfirmImpact?

[CmdletBinding(SupportsShouldProcess, ConfirmImpact='High')]

How can I test for those? Well, that’s where the mouthful bits come in, I guess, but the good news is that it’s doable. Here’s a helper function that can test those scenarios. It takes a command, an argument name, and an optional argument value, and returns true if the command meets those conditions.

function Test-CmdletBindingArgument {
    [CmdletBinding()]
    [OutputType([Bool])]
    param (
        # The command to inspect, e.g. from Get-Command
        [Parameter(Mandatory)]
        [System.Management.Automation.CommandInfo]
        $Command,
        # The name of the CmdletBinding argument, e.g. 'SupportsShouldProcess'
        [Parameter(Mandatory)]
        [string]
        $ArgumentName,
        # The expected argument value; leave out for arguments used as switches
        [Parameter()]
        [string]
        $ArgumentValue
    )

    $attribute = $Command.ScriptBlock.Ast.Body.ParamBlock.Attributes |
        Where-Object { $_.TypeName.FullName -eq 'CmdletBinding' }

    if ($attribute) {
        $argument = $attribute.NamedArguments |
            Where-Object { $_.ArgumentName -eq $ArgumentName }

        if ($null -eq $argument) {
            # The attribute does not have the argument, return false
            $false
        } elseif ($argument.ExpressionOmitted) {
            # Implicit $true, e.g. [CmdletBinding(SupportsShouldProcess)]
            $ArgumentValue -eq '' -or $ArgumentValue -eq $true
        } elseif ($argument.Argument.Extent.Text -eq '$true') {
            # Explicit $true, e.g. [CmdletBinding(SupportsShouldProcess=$true)]
            $ArgumentValue -eq '' -or $ArgumentValue -eq $true
        } elseif ($argument.Argument.Extent.Text -eq '$false') {
            # Explicit $false, equivalent to not declaring the argument at all
            $ArgumentValue -eq $false
        } else {
            # A value argument, e.g. [CmdletBinding(ConfirmImpact='High')]
            $ArgumentValue -eq $argument.Argument.Value
        }
    } else {
        # No such attribute exists on the command, return false
        $false
    }
}

The code handles both implicit and explicit values, e.g.

[CmdletBinding(SomeArgument)]
[CmdletBinding(SomeArgument=$true)] # same as declaring the argument without a value
[CmdletBinding(SomeArgument=$false)] # same as not declaring the argument at all
[CmdletBinding(SomeArgument='Some Value')]
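As a quick sanity check, here’s a hypothetical function together with the results the helper should produce for it:

function MyCommand {
    [CmdletBinding(SupportsShouldProcess, ConfirmImpact='High')]
    param()
}

$c = Get-Command -Name MyCommand
Test-CmdletBindingArgument $c -ArgumentName 'SupportsShouldProcess'               # True (implicit $true)
Test-CmdletBindingArgument $c -ArgumentName 'ConfirmImpact' -ArgumentValue 'High' # True
Test-CmdletBindingArgument $c -ArgumentName 'ConfirmImpact' -ArgumentValue 'Low'  # False
Test-CmdletBindingArgument $c -ArgumentName 'PositionalBinding'                   # False (not declared)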

Here are a couple of examples of Pester tests using the helper function.

It "Should have CmdletBinding with ConfirmImpact set to High" {
    $c = Get-Command -Name MyCommand
    Test-CmdletBindingArgument $c -ArgumentName 'ConfirmImpact' -ArgumentValue 'High' | Should -BeTrue
}

It "Should have SupportShouldProcess declared" {
    $c = Get-Command -Name MyCommand
    Test-CmdletBindingArgument $c -ArgumentName 'SupportShouldProcess' | Should -BeTrue
}

Testing OutputType

Testing OutputType requires a slightly different approach. Since the OutputType attribute declares a type, we have to access it through the positional arguments (instead of the named arguments we used for CmdletBinding).

Here’s a helper function to verify that a given command has declared a given type as its output type.

function Test-OutputType {
    [CmdletBinding()]
    [OutputType([Bool])]
    param (
        # The command to inspect, e.g. from Get-Command
        [Parameter(Mandatory)]
        [System.Management.Automation.CommandInfo]
        $Command,
        # The name of the expected output type, e.g. 'Bool'
        [Parameter(Mandatory)]
        [string]
        $TypeName
    )

    $attribute = $Command.ScriptBlock.Ast.Body.ParamBlock.Attributes |
        Where-Object { $_.TypeName.FullName -eq 'OutputType' }

    if ($attribute) {
        $argument = $attribute.PositionalArguments | Where-Object {
            if ($_.StaticType.Name -eq 'String') {
                # Declared as a string, e.g. [OutputType('Bool')]
                $_.Value -eq $TypeName
            } elseif ($_.StaticType.Name -eq 'Type') {
                # Declared as a type literal, e.g. [OutputType([Bool])]
                $_.TypeName.Name -eq $TypeName
            } else {
                $false
            }
        }
        $null -ne $argument
    } else {
        # No OutputType attribute exists on the command, return false
        $false
    }
}

Note that a type can be declared either as a type literal or as a string, and the helper function handles both cases:

[OutputType([Bool])]
[OutputType('Bool')]

Keep in mind that the comparison is literal, so the TypeName you test for has to match the declared name exactly (e.g. 'Bool' matches [Bool], but not a declaration that spells out 'System.Boolean').

And here’s an example of how it can be used within a Pester test.

It "Should have Output type Bool" {
    $c = Get-Command -Name MyCommand
    Test-TDDOutputType $c -TypeName 'Bool' | Should -BeTrue
}

Introducing TDDUtils

I have created a PowerShell module called TDDUtils, which contains the above functions (although named with a TDD prefix) as well as more general versions that allow you to test other attributes than CmdletBinding and OutputType.

You can install it from an Administrator PowerShell with the below command.

Install-Module TDDUtils

I plan to add more useful functions to that module as I go on with my PowerShell TDD journey, and if you want to contribute or just give me some suggestions, feel free to contact me on Twitter.

iOS 6 Recipes Is Out

January 11th, 2013

Alright!

I’ve had this dream, for like ages, to become a published book author. Now I’ve done it. iOS 6 Recipes is out. (Actually, it’s been out since late November, but I’ve been so exhausted that I haven’t been able to muster the energy to blog about it.)

iOS 6 Recipes is based on another book, iOS 5 Recipes, by Colin Francis and Shawn Grimes. My job was to upgrade that book with the new features of iOS 6, but as it turned out, I had to rewrite most of the original content, keeping only the general structure and most of the old topics. I’m very proud of the result, and although there are many ways in which it could still be improved, I believe I’ve accomplished the goal of producing a better book, valuable to its readers.

But I couldn’t have pulled this off if it weren’t for the fantastic team at Apress, especially Anamika Panchoo (Coordinating Editor), Anselm Bradford (Technical Reviewer), Douglas Pundick (Developmental Editor) and Linda Seifert (Copy Editor).

If you’ve read iOS 6 Recipes and want to share your thoughts, or ask me questions, feel free to contact me, either through the comments here or by emailing me (you’ll find my email address in the book).

Cheers!


New Xcode Tutorial: Turning View-based Into Navigation-based

September 30th, 2011

The other day I started a new iPhone project to create an app that I had been thinking about for quite some time. I meant to build a Navigation-based app, but for some reason I used the general View-based application template to create my project.

Nemas problemas – I thought – transforming it into a Navigation-based project should be simple enough. Call me stupid, or stubborn, but what seemed like an easy task took me the better part of a day to achieve; Xcode can be very unintuitive at times, and the error messages not very helpful 🙂

Anyway, I thought I’d better put my achievement down in writing in case some of you out there have the same need one day. Then maybe you’ll thank me for writing the Turning View-based Into Navigation-based tutorial.

Cheers!


New Tutorial: Getting Started With UISpec on Xcode 4

July 8th, 2011

One of the most popular pages on this website is the tutorial that helps you get started with UISpec; I guess there’s a great demand for acceptance testing frameworks on Objective-C.

Anyway, I have just made the switch from Xcode 3 to Xcode 4 – a huge improvement. A lot has changed in the user interface, so I decided to upgrade the UISpec tutorial, but I kept the old one in case someone prefers the old IDE.

Cheers!

Incremental Development and Regression Testing Paper Released

May 17th, 2011

As you might know, I wrote about the regression testing problem in The Swamp of Regression Testing. One of the commenters on that post, Ricky Clarkson, thought that the extensive use of the agile word got in the way of the underlying message.

I’d love to see a copy of this article without “if you don’t do X, you won’t be agile”, because where I work, agile isn’t a goal, but quality and quick turnaround is. If I passed this around, the word ‘agile’ would get in the way of an otherwise great message.

That made so much sense that I decided to do just that. I wrote a new version, a complete rework of the original post, and published it as a separate document so that it can be easily passed around.

You’ll find it under Shared Stuff on my website, or by following the link below.

Incremental Development and Regression Testing (PDF)

Please don’t hesitate to contact me if you have any feedback to share.

Cheers!


Automatic Acceptance Testing for iPhone

February 23rd, 2011

I’ve been playing around with Xcode and the iOS SDK lately, particularly with various alternatives for achieving automatic acceptance testing. (Which I believe is the only long-term solution to fight the regression testing problem.)

One of the most promising solutions I’ve found is UISpec, modeled after Ruby’s mighty popular RSpec. Even though it still feels a bit rough-cut, UISpec provides an easy way to write tests that invoke your application’s user interface; no recording, just plain Objective-C.

Anyway, in case you’re interested, I published a tutorial on how to set UISpec up for your project: Getting Started With UISpec.


Steve and Nat: Encapsulate Generics

February 21st, 2011

I’m reading Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce. I was happy to find that the authors share a view on constructed generic types similar to my own.

Our rule of thumb is that we try to limit passing around types with generics […]. Particularly when applied to collections, we view it as a form of duplication. It’s a hint that there’s a domain concept that should be extracted into a type.

A less blunt way of saying what I tried to say in my own post.

If you’re interested in the book, I can strongly recommend it. Although I’m only halfway through it, I can already tell it’s the best TDD book I have read so far. I’ll have a brief review up in a coming post.

Cheers!


The First Rule of Holes

January 26th, 2011

We don't have time for reevaluation. Keep digging!

At some point someone made a promise. Now the team is in a bad position and struggling hard to meet the deadlines of the original plan.

I’ve been in that position more than once, and as a Scrum Master I have used the same strategy:

  1. Work with the customer to lower expectations and bring the plan more in line with reality.
  2. Find more time for the team to do its work.

The second part would make me cancel everything that wasn’t immediately helping the team accomplish its short-term goals. The retrospective meetings were always the first thing to go out the window, and I felt like I was helping the team by cancelling them.

That was before the first rule of holes was brought to my attention (by Henrik Kniberg).

The first rule of holes: When you’re in one, stop digging.

That wonderful quote by Molly Ivins really got me thinking. To bring a derailed train back on track, the solution is not to make the engines run faster. Instead, we need to make a careful evaluation of the situation and find solutions that will help us achieve the end goal: getting the passengers and cargo to their destination on time.

In software development terms, when the going gets tough, we need our retrospectives the most.

Another way I’ve violated the first rule of holes is the thing I wrote about in my previous post. Although I know that automated testing is one of the best ways to increase productivity, I came up with all sorts of excuses why “it wasn’t for us”. So I’ve let my teams dig deeper holes, making a bad situation worse.

Yet another common violation is the build-up of technical debt. We’re so busy digging for new features that we forget we have to be able to ascend from the pit. We need to stop digging and clean up the small messes before they become drainers of effort.

What holes have you created or ended up in? What did you do about it? I’d love to hear your stories.

Cheers!


The Swamp of Regression Testing

January 17th, 2011

Have you ever read a call for automation of acceptance testing? Did you find the idea really compelling, but then come to the conclusion that it was too steep a mountain for your team to climb, that you weren’t in a position to make it happen? Maybe you gave it a half-hearted try that you were forced to abandon when the team was pressured to deliver “business value”? Then chances are, like me, you heard but didn’t really listen.
You must automate your acceptance testing!

Why? Because otherwise you’re bound to get bogged down by the swamp of regression testing.

Regression Testing in Incremental Development

Suppose we’re building a system using a traditional method; we design it, build it, and then test it – in that order. For the sake of simplicity, assume that the effort of acceptance testing the system is illustrated by the box below.

[Figure: A box depicting the amount of acceptance testing using a traditional approach]

Now, suppose we’re creating the same system using an agile approach; instead of building it all in one chunk, we develop the system incrementally, for instance in three iterations.

The system is divided into parts (in this example three parts: A, B and C).

[Figure: An agile approach – the system is developed in increments, and new functionality is added with each iteration]

Potential Releasability

Now, since we’re “agile”, acceptance testing the new functionality after each iteration is not enough. To ensure “potential releasability” of our system increments, we also need to check all previously added functionality to make sure we didn’t accidentally break anything we’ve already built. Test people call this regression testing.

Can you spot the problem? Incremental development requires more testing. A lot more testing. For our three-iteration example, twice as much testing.

[Figure: The theoretical amount of acceptance testing using an agile approach and three iterations is twice as much as in Waterfall]

Unfortunately, it’s even worse than that. The amount of acceptance testing grows quadratically with the number of increments. Five iterations require three times the amount of testing, nine require five times, and so on.
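To see why, assume each of n iterations adds an equal amount of functionality (one unit) and that iteration k has to acceptance test everything built so far, i.e. k units. The total is then

1 + 2 + … + n = n(n + 1) / 2 units

compared to n units for a single big-bang test pass – a ratio of (n + 1) / 2. With n = 3 that’s twice the testing, with n = 5 three times, and with n = 9 five times.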

[Figure: The amount of acceptance testing increases rapidly with the number of increments]

This is the reason many agile teams lose velocity as they go: either they yield to an increasing burden of testing (if they do what they should), or they spend an increasing amount of time fixing bugs that cost more to fix because they’re discovered late (which is the symptom of an inadequately tested system).

A Negative Force

Of course, this heavily simplified model of acceptance testing is full of flaws. For example:

  • The nature of acceptance testing in a Waterfall method is very different from the acceptance testing in agile. (One might say, though, that Waterfall hides behind a false assumption – that all will go well – which looks good in plans but turns ugly in reality.)
  • One can’t compare the initial acceptance testing of newly implemented features with the regression testing of the same features. The former is real testing, while the latter is more like checking. (By the way, read this interesting discussion on the difference between testing and checking: Which end of the horse.)

Still, I think it’s safe to say that there is a force working against us, a force that gets greater with time. The burden of regression testing challenges any team doing incremental development.

So, should we drop the whole agile thing and run for the nearest waterfall? Of course not! We still want to build the right system. And finding that out last is not an option.

The system we want is never the system we thought we wanted.

OK, but can’t we just skip the regression testing? In practice, this is what many teams do. But it’s not a good idea. Because around The Swamp of Regression Testing lies The Forest of Poor Quality. We don’t want to go there. It’s an unpleasant and unpredictable place to be.

We really need to face the swamp. Otherwise, how can we be potentially releasable?

The only real solution is to automate testing. And I’m not talking about unit testing here. Unit testing is great but has a different purpose: unit testing verifies that the code does what you designed it to do. Acceptance testing, on the other hand, verifies that our code does what the customer (or product owner – use your favorite word for whoever pays for the stuff you build) asked for. That’s what we need to test to achieve potential releasability.

So, get out there and find a way to automate your acceptance testing. Because if you don’t, you can’t be agile.

Cheers!