James O'Neill's Blog

March 25, 2011

PowerShell parameters revisited.

Filed under: Powershell — jamesone111 @ 8:49 pm

A little while back, as a follow-up to a talk I’d given, I wrote a post entitled why it is better not to use PowerShell parameter validation. I repeated the talk recently and met up with Thomas, who’d organized both. His initial instinct had been that my “best practice” of NOT declaring parameter types was just wrong – not a surprising view, given what he has been exposed to…

Over a quarter of a century ago, when I was studying computer science at University, one of the lecturers wrote the following quote from Dijkstra on the board: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”
I might be pushing a heavy stone up a steep hill here, trying to overcome the “mental mutilation” of too much “good programming”. But to see why “good programmers” can get the declaration of parameter types wrong in PowerShell, consider this example:
Function Use-File {
    Param ([System.IO.FileInfo]$theFile)
    Format-List -InputObject $theFile -Property *
}

$f = ".\example.txt"   # a string holding a relative path to a file in the current folder
Use-File $f

Simple stuff: I declare a function with one parameter – a FileInfo object – which outputs a list showing all the properties of that object.
Then I pass a string to the function – a string which contains a relative path to a file held in the current folder. So… what comes out of the function?

  1. An error, or
  2. The properties for the file in the current folder, or
  3. Something else?

If you have a “good programming” background I’d expect you to say “An error”. Experience tells you that passing a type other than the specified one does that. Not here.
In a shell you don’t want to worry about the type of object which comes out of one command or goes into the next, so PowerShell smooths out differences between types. Ask it to evaluate “Fire ” + 1 and it does an implicit conversion – usually called a type cast – of the second argument to the type of the first and returns “Fire 1”.
(1 + “Fire” fails because, when the rule of converting to match the first operand is applied, “Fire” can’t be turned into a number.)
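The same rules can be checked at the prompt – a quick sketch, using nothing beyond the operators just described:

```powershell
"Fire " + 1    # 1 is cast to [string] to match the left operand: "Fire 1"
[string]1      # the same conversion, made explicit: "1"
[int]"3" + 4   # an explicit cast can make the reverse direction work: 7
# 1 + "Fire"   # fails: "Fire" cannot be converted to a number
```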
After spending a while doing only PowerShell, a compiled language like C# seems to need a huge number of “cast to string”, “isn’t null”, “isn’t empty”, “is non-zero” operations, because it requires casts to string or Boolean types to be explicit. It’s not that one is good and the other bad; the different environments impose different requirements.

In PowerShell, putting [Type] before a value is how you explicitly cast it to a different type. [FileInfo] does not say “Reject anything which isn’t a file”, as it would in C#; it says to PowerShell “Convert anything which isn’t a file…”.
So, what conversion does it attempt? The underlying FileInfo .NET object has a constructor which accepts a string, so PowerShell creates a new FileInfo object by passing the string to that constructor. Unfortunately the constructor doesn’t handle the relative path, so the object which comes back represents a non-existent, read-only file in the windows\system32 folder. Yes, you did read that correctly: it doesn’t exist and you can’t write to it.
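You can see this for yourself by doing the cast at the prompt (example.txt is a hypothetical file name; the exact folder shown depends on the process’s current directory, which PowerShell does not keep in sync with its own current location):

```powershell
$file = [System.IO.FileInfo]"example.txt"   # cast a relative path, as the function did
$file.FullName    # resolved against the process directory - e.g. windows\system32, not the current folder
$file.Exists      # False: the object points at a file which is not there
$file.IsReadOnly  # True: a FileInfo for a missing file reports itself as read-only
```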

I was taught to be as clear as the tools allowed about data types. It’s my contention that “good programmers” learn that it is dangerous to assume parameters will be an acceptable type; so when specifying types is optional they will still do so out of habit. In PowerShell that leads to a more dangerous (and hidden) assumption – that what is passed will be the correct type, or will fail if it can’t be cast correctly. But what if a casting operation yields a nonsense result, as it does here?

“Good programmers” accept writing something to handle type changes as a necessary safety mechanism; at a command line it’s unnecessary pedantry. But PowerShell is both command line AND programming environment; and when we program in PowerShell we want to do it well, don’t we?
In that previous post I argued functions should cope with being passed names (paths, in the example) and resolve them to the desired object (a file). If a user expects to supply a file to your function by typing part of its name and hitting the [TAB] key, then you have to write something after the parameter declaration (like a Resolve-Path statement) to cope with the change of type; it’s slightly different code to that which C# programmers need to write, but we don’t escape the task completely.
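A minimal sketch of that approach (assuming the parameter is a name or path that Resolve-Path can handle):

```powershell
Function Use-File {
    Param ($theFile)
    # Resolve whatever was passed - a relative path, a wildcard, even a FileInfo -
    # to real item(s) in the file system before doing anything with it.
    Get-Item (Resolve-Path $theFile) | Format-List -Property *
}
```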

You might have thought ahead and asked, “What would happen if I wrote Param ([String]$theFile) instead? If the function is passed a file, won’t casting it to a string give me the full path?” That WILL work, but here’s an example to show why you should not use [String]:

Function Use-Word {
    Param ([string]$theWord)
    Write-Host "**$theWord**"
}

$w = "Hello","world"
Use-Word $w

This function takes a single string and writes that string to the console: but I haven’t passed it a single string, I’ve passed an array containing “Hello” and “world”. I’ll pose the same question as before: what comes out?

  1. An error.
  2. **Hello** (the first item in the array “wins”)
  3. **world** (the last item in the array “wins”)
  4. **Hello world**
  5. Something else.

If your instincts said “An array of X into something which takes a single X won’t go so this should give an error” but you now doubt them, I’ve achieved something.
In fact, when PowerShell casts an array to a single string, it converts the members to strings and concatenates them with a space between each: [string]("Hello","world") returns “Hello world”. With paths, two valid paths will become one string which is not a valid path.
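For example (the file names here are hypothetical):

```powershell
[string]("Hello","world")        # members joined with a space: "Hello world"
[string](".\a.txt", ".\b.txt")   # two valid paths become ".\a.txt .\b.txt" - not a valid path
```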

A lot of PowerShell cmdlets will accept an array in place of a single parameter. Operators also work with arrays: "hello","world" -split "l" gives the same result as
"hello" -split "l" ; "world" -split "l".
For a cmdlet example, you can get a list of QFEs (hotfixes) installed on a computer with Get-WmiObject -Class Win32_QuickFixEngineering.
You can get the results from multiple machines (if they are set up for remote WMI) with
Get-WmiObject -Class Win32_QuickFixEngineering -ComputerName Server1,Server2,ServerN
because the Get-WmiObject cmdlet accepts an array in its -ComputerName parameter.
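So a wrapper can simply hand the parameter straight on (Get-QFE is a hypothetical function name):

```powershell
Function Get-QFE {
    Param ($ComputerName = ".")
    # No checking needed here: one name or an array, Get-WmiObject takes either.
    Get-WmiObject -Class Win32_QuickFixEngineering -ComputerName $ComputerName
}

Get-QFE -ComputerName "Server1","Server2"
```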

Any function which takes a -ComputerName parameter does not even need to look at it before passing it on to the Get-WmiObject cmdlet. But this seems wrong, or at least lazy: surely we should catch an error as soon as possible? Here’s one last example to show what difference it makes:

Function Use-Integer {
    Param ($theNumber)
    Start-Sleep $theNumber
}

$n = "hello"
Use-Integer $n

So here I don’t validate that $theNumber is actually a number, and I pass in “hello”, and Start-Sleep generates an error which looks like this:

Start-Sleep : Cannot bind parameter 'Seconds'.
Cannot convert value "hello" to type "System.Int32".
Error: "Input string was not in a correct format."

If I DO validate that $theNumber is a number, I get this:

Use-integer : Cannot process argument transformation on parameter 'theNumber'.
Cannot convert value "hello" to type "System.Int32".
Error: "Input string was not in a correct format."

Early validation didn’t gain anything in this case. In other cases it might save using a lot of time or system resources before hitting the error; and if $ErrorActionPreference is set to “Continue”, which it is by default, my function might carry on after the error and do something stupid, and early validation would stop that. So it is not universally wrong to use [type] to validate parameters; you just need to ask two questions: “Does what I am doing catch what I need to?” and “Does what I am doing handle that in the best way?” The best way includes handling names and paths for objects, handling arrays, and even (if you’re really bold) handling script block parameters. This post has gone quite long enough, so I’ll talk about those in another post.
