James O'Neill's Blog

April 10, 2011

Ten tips for better PowerShell functions

Filed under: Uncategorized — jamesone111 @ 11:02 pm

Explaining PowerShell often involves telling people it is both an interactive shell – a replacement for the venerable CMD.EXE – and a scripting language used for single-task scripts and libraries of reusable functions. Some good practices are common to both kinds of writing – adding comments, being explicit with parameters, using full names instead of aliases and so on – but having written hundreds of "script cmdlets" I have developed some views on what makes a good function, which I wanted to share…

1. Name your function properly
It's not actually compulsory to use Verb-SingularNoun names with the standard verbs listed by Get-Verb. "Helpers" which you might pop in a profile can be better with a short name, but if your function ends up in a module, Import-Module grumbles when it sees non-standard verbs. Getting the right name can clarify your thinking about what a command should or should not do. I cite IPConfig.exe as an example of a command-line tool which didn't know when to stop: what it does changes dramatically with different switches. PowerShell tends towards multiple smaller functions whose names tell you what they will do – which is a Good Thing.
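A quick check with Get-Verb will tell you whether the verb you have in mind is on the approved list, for example:
Get-Verb | Where-Object {$_.Verb -eq "Get"}     # returns an entry: "Get" is approved
Get-Verb | Where-Object {$_.Verb -eq "Grab"}    # returns nothing: "Grab" is not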

2. Use standard, consistent and user-friendly parameters.
(a) PowerShell cmdlets give you -WhatIf and -Confirm switches before they do something irreversible – you can get these in your own functions. Put this line of code before any others in the function
[CmdletBinding(SupportsShouldProcess=$True)]
and then where you do something which is hard to undo  
If ($psCmdlet.shouldProcess("Target" , "Action")) {
    # dangerous, hard-to-undo actions go here
}
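Put together, a minimal sketch looks like this (Remove-TempFile and its logic are purely illustrative):
function Remove-TempFile {
    [CmdletBinding(SupportsShouldProcess=$True)]
    param ([string]$Path)
    # ShouldProcess handles -WhatIf and -Confirm for us
    if ($psCmdlet.ShouldProcess($Path, "Delete file")) {
        Remove-Item -Path $Path
    }
}
Remove-TempFile -Path C:\temp\old.log -WhatIf    # shows what would happen, deletes nothing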
(b) Look at the names PowerShell uses: "Path", not "FileName"; "ComputerName", not "Host"; "Force", "NoClobber" and so on – copy what has been done before unless you have a good reason to do something different. I don't use "ComputerName" when working with virtual machines because it is not clear whether it means a virtual machine or the physical machine which hosts it.
(c) If you are torn between two names, remember that "Computer" is a valid shortening of "ComputerName", and for names which are shortenings of an alternative you can define aliases, like this:
[Alias("Where","Include")]$Filter
TIP 1. You can discover all the parameter names used by cmdlets, and how popular they are, like this:
Get-Command -C cmdlet | Get-Help -Full | ForEach {$_.parameters.parameter} |
   ForEach {$_.name} | Group -NoElement | Sort Count

TIP 2. If you think "Filter" is the right name to re-use, you can see how other cmdlets use it like this:
Get-Command -C cmdlet | Where {$_.definition -match "filter"} | Get-Help -Parameter "filter"

3. Support Piping into your functions.
V2 of PowerShell greatly simplified piping. The more you use PowerShell, the stronger the sense you get that the output of one command should become the input for another. If you are writing functions, aim for the ability to pipe into them and to pipe their output into other things. Piped input becomes a parameter; all you need to do is

  • Make sure the parts of the function which run for each piped object are in a
    process {} block
  • Prefix the parameter declaration with [parameter(ValueFromPipeline=$true)].
  • If you want a property of a piped object instead of the whole object, use ValueFromPipelineByPropertyName
  • If different types of objects get piped, and they use different property names for what you want, give your parameter aliases: binding looks for the "true" name first and, if it doesn't find it, tries each alias in turn.

If you find code that looks like this
something | foreach {myFunction $_ }
it is a sign that you probably need to look at piping.
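Here is a minimal sketch pulling those points together (Get-FileLength and its parameter are illustrative):
function Get-FileLength {
    param (
        [parameter(ValueFromPipelineByPropertyName=$true)]
        [Alias("FullName")]       # piped FileInfo objects bind their FullName here
        [string]$Path
    )
    process {                     # runs once for each piped object
        (Get-Item -Path $Path).Length
    }
}
dir *.jpg | Get-FileLength        # no foreach needed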

4. Be flexible about arrays and types of parameters
Piping is one way to feed many objects into one command. In addition, many built-in cmdlets and operators will accept arrays as parameters just as happily as they would accept a single object; previously I gave the example of Get-WmiObject, whose -ComputerName parameter can specify a list of machines – it makes for simpler code.
Functions which catch being passed arrays and process them sensibly are easier to use (and see that previous post for why simply putting [String] or [FileInfo] in front of a parameter doesn't work). Actually I see it as good manners – "I handle the loop so you don't have to do it at the command line".
Accepting arrays is one case of not being over-prescriptive about types, but it isn't the only one. If I write something which deals with, say, a virtual machine, I ensure that VM names are just as valid as objects which represent VMs. For functions which work with files, it has to be just as acceptable to pass System.IO.FileInfo and System.Management.Automation.PathInfo objects, or strings containing the path (unique or wildcard, relative or absolute).
TIP: Resolve-Path will accept any of these and convert them into objects with fully-qualified paths.
It seems rude to make the user use Get-whatever to fetch the object if I can do it for them.
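A minimal sketch of the pattern (Stop-MyVM and Get-MyVM are hypothetical names):
function Stop-MyVM {
    param ($VM)
    foreach ($item in @($VM)) {           # @() makes a single item and an array look alike
        if ($item -is [string]) {$item = Get-MyVM -Name $item}    # names are as good as objects
        $item                             # ... act on the VM object here ...
    }
}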

5. Support ScriptBlock parameters.
If one parameter can be calculated from another, it is good to let the user say how to do the calculation. Consider this example with Rename-Item. I have photos named IMG_4000.JPG, IMG_4001.JPG, IMG_4002.JPG, up to IMG_4500.JPG. They were taken underwater, so I want them to be named DIVE4000.JPG etc. I can use:
dir IMG_*.JPG | Rename-Item -NewName {$_.Name -replace "IMG_","DIVE"}
In English “Get the files named IMG_*.JPG and rename them. The new name for each one is the result of replacing IMG_ with DIVE in that one’s current name.” Again you can write a loop to do it but a script block saves you the trouble.

  • The main candidates for this are functions where one parameter is piped and a second parameter is connected to a property of the Piped one.
  • When you are dealing with multiple items arriving from the pipeline, be careful what variables you set in the process{} block of the function: you can introduce some great bugs by overwriting non-piped parameters. For example, if you had to implement Rename-Item, it would be valid to handle a string piped in as the -Path parameter by converting it into a FileInfo object – doing so has no effect on the next object to come down the pipe. But if you convert a script block passed as -NewName into a string, then when the next object arrives it will get that string – I've had great fun with the bugs which result from this. The sketch after this list shows the safe pattern.
  • All you need to do to provide this functionality is
    If ($newname -is [ScriptBlock]) { $TheNewName = $newname.invoke() }
    else                            { $TheNewName = $newname }
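In context, a minimal sketch looks like this (Rename-MyItem is a hypothetical stand-in for a rename implementation; note that the evaluated name goes into a local variable, so the -NewName parameter itself is never overwritten for the next piped object):
function Rename-MyItem {
    param (
        [parameter(ValueFromPipeline=$true)]$Path,
        $NewName
    )
    process {
        if ($NewName -is [scriptblock]) {$theNewName = [string]$NewName.Invoke()}
        else                            {$theNewName = [string]$NewName}
        Rename-Item -Path $Path.FullName -NewName $theNewName    # assumes FileInfo objects are piped in
    }
}
dir IMG_*.JPG | Rename-MyItem -NewName {$_.Name -replace "IMG_","DIVE"}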

6. Don’t make input mandatory if you can set a sensible default.
Perhaps obvious, but… if I write a function named "Get-VM" which finds virtual machines with a given name, what should I do if the user doesn't give me a VM name? Return nothing? Throw an error? Or assume they want all possible VMs?
What would you mean if you typed Get-VM on its own?
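A minimal sketch of such a default (Get-MyVM is an illustrative name; the WMI class shown is the Hyper-V one on Server 2008/R2):
function Get-MyVM {
    param ([string]$Name = "*")           # no name given means "all VMs"
    Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem |
        Where-Object {$_.ElementName -like $Name}
}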

7. Don’t require the user to know too much underlying syntax.
Many of my functions query WMI; WMI uses SQL syntax; SQL syntax uses "%" as a wildcard, not "*". Logical conclusion: if users want to specify a wildcarded filter to my functions, they should learn to use % instead of *. That just seems wrong to me, so my code replaces any instance of * with %. If the user is specifying filtering or search terms, a few lines to change from the things they will instinctively do, or wish they could do, to what is required for SQL, LDAP or any other syntax can make a huge difference in usability.
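For instance, a minimal sketch of the wildcard translation (Get-MyProcess is an illustrative wrapper):
function Get-MyProcess {
    param ([string]$Name = "*")
    $wqlName = $Name -replace "\*","%"    # -replace is regex-based, so escape the *
    Get-WmiObject -Query "SELECT * FROM Win32_Process WHERE Name LIKE '$wqlName'"
}
Get-MyProcess power*                      # the user types *, the WQL query sees %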

8. Provide information with Write-Verbose , Write-debug and Write-warning
When you are trying to debug, the natural reaction is to put in Write-Host commands, fix the problem and take them out again. Instead of doing that, change $DebugPreference and/or $VerbosePreference and use Write-Debug / Write-Verbose to output information. You can leave them in and stop the output by changing the preference variables back. If your function already has
[CmdletBinding(SupportsShouldProcess=$True)]
at the start (plain [CmdletBinding()] is enough), then you get -Debug and -Verbose switches for free.
Write-Error is ugly, and if you are able to continue it's often better to use Write-Warning.
And learn to use Write-Progress when you expect something to spend a long time between screen updates.
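A minimal sketch of these in use (Copy-MyFiles is an illustrative name):
function Copy-MyFiles {
    [CmdletBinding()]                     # -Verbose and -Debug come for free
    param ($Files)
    $count = @($Files).Count ; $i = 0
    foreach ($f in @($Files)) {
        $i++
        Write-Verbose "Copying $f"        # only shown with -Verbose or $VerbosePreference
        Write-Progress -Activity "Copying files" -Status "$f" -PercentComplete (100 * $i / $count)
        # the Copy-Item call would go here
    }
}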

9. Remember: your output is someone else’s input.
(a) Point 8 didn't talk about using Write-Host: only use it to display something you want to keep out of the pipeline – its output can't be consumed by another command.
(b) Avoid formatting output in the function; try to output objects which can be consumed by something else. If you must format, turn it on or off with a -Formatted or -Raw switch.
(c) Think about the properties of the objects you emit. Many commands will understand that something is a file if it has a .Path property, so add one to the objects coming out of your function and they can be piped into Copy-Item, Invoke-Item, Resolve-Path and so on. Usually that is good – and if it might be dangerous, look at what you can do to change it. Another example: when I get objects that represent components of a virtual machine, their properties don't include the VM name, so I go to a little extra trouble to add it.
Add-Member can add properties, or aliases for properties, to an object. For example:
$obj | Add-Member -MemberType AliasProperty -Name "Height" -Value "VerticalSize"
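And a sketch of the VM-name case ($disk and $vm stand for objects the function has already fetched):
$disk | Add-Member -MemberType NoteProperty -Name "VMName" -Value $vm.ElementName -PassThru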

10. Provide help
In-line help is easy – it is just a carefully formatted comment before any of the code in your function. It isn't just there for some far-off day when you share the function with the wider world. It's for you when you are trying to figure out what you did months previously – and Murphy's law says you'll be trying to do it at 3AM when everything else is against you.
Describe what the parameters expect and what they can and can't accept.
Give examples (plural) to show different ways that the function can be called. And when you change the function in the future, check the examples still work.
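A minimal sketch of comment-based help (Get-JpegFile is a throwaway example):
function Get-JpegFile {
    <#
    .SYNOPSIS
        Lists the JPEG files under a path.
    .PARAMETER Path
        The folder to search. Wildcards are allowed; the default is the current folder.
    .EXAMPLE
        Get-JpegFile
        Lists the JPEGs in the current folder.
    .EXAMPLE
        Get-JpegFile -Path C:\photos
        Lists the JPEGs in C:\photos.
    #>
    param ($Path = ".")
    Get-ChildItem -Path $Path -Filter *.jpg
}
Get-Help Get-JpegFile -Full               # the comment becomes standard help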

2 Comments

  1. I actually think that parameters should have specific types as much as possible. Let the pipeline binding and type conversion take care of that work. In the case of Rename-Item, -NewName is a [String] parameter that takes pipeline input. This allows you to override the binding with a scriptblock, but still ensures proper typing on the value.

    Comment by Jason Archer — April 12, 2011 @ 7:17 pm

  2. Jason, the built-in Rename-Item takes -Path as piped input, not -NewName; -NewName acts on the piped parameter.

    To see why what you suggest doesn’t work with functions try this

    function test {
        param ( [parameter(ValueFromPipeline = $true)]$p , $q )
        process {
            if ($q -is [scriptblock]) {$p.name, $q.invoke()}
            else                      {$p.name, $q}
        }
    }

    PS > dir *.jpg | test -q {$_.name -replace "jpg","jpeg"}
    overLay.jpg
    overLay.jpeg

    Then try it again saying that the parameter is a string
    function test {
        param ( [parameter(ValueFromPipeline = $true)]$p , [string]$q )
        process {
            if ($q -is [scriptblock]) {$p.name, $q.invoke()}
            else                      {$p.name, $q}
        }
    }

    PS> dir *.jpg | test -q {$_.name -replace "jpg","jpeg"}
    overLay.jpg
    $_.name -replace "jpg","jpeg"

    More detail here

    Comment by jamesone111 — April 14, 2011 @ 6:39 pm


