James O'Neill's Blog

December 12, 2017

Using the import Excel Module: Part 3, Pivots and charts, data and calculations

Filed under: Uncategorized — jamesone111 @ 4:43 pm

In the previous post I showed how you could export data to an XLSx file using the Export-Excel command in Doug Finke’s ImportExcel module (Install it from the PowerShell gallery!). The command supports the creation of Pivot tables and Pivot charts. Picking up from where part 2 left off, I can get data about running processes, export them to a worksheet and then set up a pivot table

$mydata = Get-Process | Select-Object -Property Name, WS, CPU, Description, Company, StartTime
$mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" -ClearSheet `
   -IncludePivotTable -PivotRows "Company" -PivotData @{"WS"="Sum"} -show

[screenshot]

To add a pivot chart the command line becomes
$mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" -ClearSheet `
    -IncludePivotTable -PivotRows "Company" -PivotData @{"WS"="Sum"} `
    -IncludePivotChart -ChartType Pie -ShowPercent -show

[screenshot]

The chart types are also suggested by intellisense; note that some of them don’t support the -ShowPercent or ‑ShowCategory options, and “bad” combinations will result in an error when the file is opened in Excel. Re-creating an existing Pivot chart can cause an error as well.
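If you would rather see the full list of chart types than scroll through intellisense, they come from an EPPlus enum which can be listed once the module is loaded – a one-line sketch:

# Assumes Import-Module ImportExcel has already loaded the bundled EPPlus assembly
[System.Enum]::GetNames([OfficeOpenXml.Drawing.Chart.eChartType]) | Sort-Object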

There is an alternative way of creating Pivot tables and charts – which is particularly useful when we want more than one in the same workbook:

del .\demo.xlsx

$xl = $mydata | Export-Excel -Path .\demo.xlsx -WorkSheetname "Processes" -PassThru

$Pt1 = New-PivotTableDefinition -PivotTableName "WS"  -PivotData @{"WS" ="Sum"} -SourceWorkSheet "Processes" `
          -PivotRows Company -IncludePivotChart -ChartType ColumnClustered -NoLegend
$Pt2 = New-PivotTableDefinition -PivotTableName "CPU" -PivotData @{"CPU"="Sum"} -SourceWorkSheet "Processes" `
          -PivotRows Company -IncludePivotChart -ChartType ColumnClustered -NoLegend

$xl = Export-Excel -ExcelPackage $xl -WorkSheetname "Processes" -PivotTableDefinition $Pt1 -PassThru
Export-Excel -ExcelPackage $xl -WorkSheetname "Processes" -PivotTableDefinition $Pt2 -Show

[screenshot]

New-PivotTableDefinition builds the table definition as a hash table – we could equally well write a large hash table with multiple pivots defined in it, like this

del .\demo.xlsx

$mydata | Export-Excel -Path .\demo.xlsx -WorkSheetname "Processes" -Show -PivotTableDefinition @{
    "WS" = @{"SourceWorkSheet"   = "Processes"      ;
             "PivotRows"         = "Company"        ;
             "PivotData"         = @{"WS"="Sum"}    ;
             "IncludePivotChart" = $true            ;
             "ChartType"         = "ColumnClustered";
             "NoLegend"          = $true};
   "CPU" = @{"SourceWorkSheet"   = "Processes"      ;
             "PivotRows"         = "Company"        ;
             "PivotData"         = @{"CPU"="Sum"}   ;
             "IncludePivotChart" = $true            ;
             "ChartType"         = "ColumnClustered";
             "NoLegend"          = $true }
}

Export-Excel allows [non-pivot] charts to be defined and passed as a parameter in a similar way – in the following example we’re going to query a database for a list of the most successful racing drivers; the SQL for the query looks like this:
$Sql = "SELECT TOP 25 WinningDriver, Count(RaceDate) AS Wins
        FROM   races 
        GROUP  BY WinningDriver  
        ORDER  BY count(raceDate) DESC"

Then we define the chart and feed the result of the query into Export-Excel (I’m using my GetSQL module from the PowerShell Gallery for this, but there are multiple ways).
$chartDef = New-ExcelChart -Title "Race Wins" -ChartType ColumnClustered `
               -XRange WinningDriver -YRange Wins -Width 1500 -NoLegend -Column 3

Get-SQL $Sql | Select-Object -property winningDriver, Wins |
  Export-Excel -path .\demo2.xlsx -AutoSize -AutoNameRange -ExcelChartDefinition $chartDef -Show

The important thing here is that the chart definition refers to named ranges in the spreadsheet – “WinningDriver” and “Wins” – and the Export-Excel command is run with -AutoNameRange, so the first column becomes a range named “WinningDriver” and the second “Wins”. You can see in the screen shot that “Wins” has been selected in the “Name” box (underneath the File menu) and the data in the Wins column is selected. The chart doesn’t need a legend and is positioned to the right of column 3.

[screenshot]

I found that the EPPlus library which Doug uses can insert a DataTable object directly into a worksheet, which should be more efficient; it also saves using a Select-Object command to remove the database housekeeping properties which are in every row of data, as I had to do in the example above. It didn’t take much to add a command to Doug’s module to put SQL data into a spreadsheet without having to pipe the data into Export-Excel from another command. And I cheated: the new command accepts the parameters found in Export-Excel, passes the worksheet object and those parameters on, and gets Export-Excel to finish the job – so I can write something like this:
Send-SQLDataToExcel -SQL $sql -Session $session -path .\demo2.xlsx -WorkSheetname "Winners" `
        -AutoSize  -AutoNameRange -ExcelChartDefinition $chartDef -Show
  

In this example I use an existing session with a database – the online help shows you how to use different connection strings with ODBC or the SQL Server native client. 

I also added commands to set values along a row or down a column. As an example, we can expand the racing data to cover not just how many wins each driver had, but also how many fastest laps and how many pole positions; we export this data and use the -PassThru switch to get an Excel Package object back:
$SQL = "SELECT top 25 DriverName,         Count(RaceDate) as Races ,
                      Count(Win) as Wins, Count(Pole)     as Poles,
                      Count(FastestLap) as Fastlaps
        FROM  Results
        GROUP BY DriverName
        ORDER BY (count(win)) desc"

$Excel = Send-SQLDataToExcel -SQL $sql -Session $session -path .\demo3.xlsx `
            -WorkSheetname "Winners" -AutoSize -AutoNameRange -Passthru

Having done this, we can add a row of averages, and columns which calculate the ratios of two pairs of existing columns:

$ws = $Excel.Workbook.Worksheets["Winners"]
Set-Row    -Worksheet $ws -Heading "Average"     -Value {"=Average($columnName`2:$columnName$endrow)"} `
           -NumberFormat "0.0" -Bold
Set-Column -Worksheet $ws -Heading "WinsToPoles" -Value {"=D$row/C$row"} -Column 6 -AutoSize -AutoNameRange
Set-Column -Worksheet $ws -Heading "WinsToFast"  -Value {"=E$row/C$row"} -Column 7 -AutoSize -AutoNameRange
Set-Format -WorkSheet $ws -Range "F2:G50" -NumberFormat "0.0%"

In the examples above the value parameter is a script block; when this is evaluated, $row and $column are available, so if the value is being inserted in row 5, {"=E$row/C$row"} becomes =E5/C5.
The script block can use $row and $column (the current row and column numbers), $columnName (the current column letter), and $StartRow/$EndRow and $StartColumn/$EndColumn (the first and last row and column numbers of the data).

If the value begins with “=” it is treated as a formula rather than a literal value – and we don’t normally want to insert the same fixed formula everywhere, which is why the script block comes in; without the “=” the value inserted down the column or across the row will be a constant.
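To make the difference concrete, here is a small sketch – the headings and column positions are invented for illustration:

Set-Column -Worksheet $ws -Heading "Season"   -Value 2017             -Column 8   # no "=", so every row gets the constant 2017
Set-Column -Worksheet $ws -Heading "PoleRate" -Value {"=D$row/B$row"} -Column 9   # "=" makes a formula, rebuilt with $row for each row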

The Set-Column command supports range naming, and both commands support formatting – or we can use the ‑PassThru switch and pipe the result of setting the column into Set-Format. There seems to be a bug in the underlying library where applying a number format to a column after formatting a row re-applies the formatting from the previous operation as well. So the example above uses a third way to apply the format, which is to specify the range of cells in Set-Format.
Finally we can output this data, and make use of the names given to the newly added columns in a new chart.

$chart = New-ExcelChart -NoLegend -ChartType XYScatter -XRange WinsToFast -YRange WinsToPoles `
           -Column 7 -Width 2000 -Height 700

Export-Excel -ExcelPackage $Excel -WorkSheetname "Winners" -Show -AutoSize -AutoNameRange `
         -ExcelChartDefinition $chart

[screenshot]

So there you have it: PowerShell objects or SQL data goes in – possibly over multiple sheets; headings and filters get added, panes arranged, extra calculated rows and columns inserted, formatting applied, and pivot tables and charts created – and if Excel itself is available you can export them. No doubt someone will ask before too long if I can get the charts out of Excel and into PowerPoint slides ready for a management meeting… And since all of this only works with XLSX files, not legacy XLS ones, there might be another post soon about reading those files.


December 11, 2017

Using the Import Excel module part 2: putting data into .XLSx files

Filed under: Office,Powershell — jamesone111 @ 3:55 pm

This is the third of a series of posts on Excel and PowerShell – the first, on getting parts of an Excel file out as images, wasn’t particularly tied to the ImportExcel module, but the last one, this one and the next one are. I started with the Import command – which seemed logical given the name of the module; the Export command is more complicated, because we may want to control the layout and formatting of the data, add titles, include pivot tables and draw charts, so I have split it into two posts. At its simplest the command looks like this:

Get-Process | Export-Excel -Path .\demo.xlsx -Show

This gets a list of processes, and exports them to an Excel file; the -Show switch tells the command to try to open the file using Excel after saving it. I should be clear here that import and export don’t need Excel to be installed – one of the main uses is to get things into Excel format, with all the extras like calculations, formatting and charts, on a computer where you don’t want to install desktop apps – so –Show won’t work in those environments. If no –WorksheetName parameter is given the command will use “Sheet1”.

Each process object has 67 properties, and in the example above they would all become columns in the worksheet; we can make things more compact and efficient by using Select-Object in the command to filter down to just the things we need:

Get-Process | Select-Object -Property Name,WS,CPU,Description,StartTime |
Export-Excel -Path .\demo.xlsx -Show
 

Failed exporting worksheet 'Sheet1' to 'demo.xlsx':
Exception calling ".ctor" with "1" argument(s):
"The process cannot access the file 'demo.xlsx' because it is being used by another process."

This often happens when you look at the file and go back to change the command and forget to close it – we can either close the file from Excel, or use the -KillExcel switch in Export‑Excel – from now on I’ll use data from a variable

$mydata = Get-Process | Select-Object -Property Name, WS, CPU, Description, Company, StartTime
$mydata | Export-Excel -KillExcel -Path .\demo.xlsx -Show

This works, but Export-Excel modifies the existing file and doesn’t remove the old data – it takes the properties of the first item that is piped into it and makes them column headings, and writes each item as a row in the spreadsheet with those properties. (If different items have different properties there is a function Update-FirstObjectProperties to ensure the first row has every property used in any row). If we are re-writing an existing sheet, and the new data doesn’t completely cover the old we may be left with “ghost” data. To ensure this doesn’t happen, we can use the ‑ClearSheet option

$mydata | Export-Excel -KillExcel -Path .\demo.xlsx -ClearSheet -Show

[screenshot]

Sometimes you don’t want to clear the sheet but to add to the end of it, and one of the first changes I gave Doug for the module was to support a –Append switch, swiftly followed by a change to make sure that the command wasn’t trying to clear and append to the same sheet.
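Appending looks something like this – a quick sketch which adds any running PowerShell processes to the end of the existing sheet:

Get-Process -Name *powershell* | Select-Object -Property Name, WS, CPU, Description, Company, StartTime |
    Export-Excel -Path .\demo.xlsx -WorkSheetname "Processes" -Append -Show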

We could make this a nicer spreadsheet – we could make the column headings look like headings, and even make them filters; we can also size the columns to fit…

$mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" -ClearSheet `
             -BoldTopRow -AutoSize -Title "My Processes" -TitleBold -TitleSize 20 -FreezePane 3 -AutoFilter -Show

[screenshot]

The screen shot above shows the headings are now in bold and the columns have been auto sized to fit. A title has been added in bold, 20-point type; and the panes have been frozen above row 3. (There are options for freezing the top row or the left column or both, as well as the option used here –FreezePane row [column]) and filtering has been turned on.
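For completeness, the other freezing switches mentioned above can be used like this (a sketch):

# Freeze the heading row and the first column together; -FreezePane 3 2 would freeze above row 3 and to the left of column 2
$mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" -ClearSheet -FreezeTopRowFirstColumn -Show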

Another way to present tabular data nicely is to use the -Table option

$mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" -ClearSheet -BoldTopRow -AutoSize `
       -TableName table -TableStyle Medium6 -FreezeTopRow -show

[screenshot]

“Medium6” is the default table style but there are plenty of others to choose from, and intellisense will suggest them

[screenshot]
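The style names come from an EPPlus enum, so they can also be listed directly – again assuming the module has been imported:

[System.Enum]::GetNames([OfficeOpenXml.Table.TableStyles])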

Sometimes it is helpful NOT to show the sheet immediately, and one of the first things I wanted to add to the module was the ability to pass on an object representing the current state of the workbook to a further command, which makes the following possible:

$xl = $mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" `
      -ClearSheet -AutoSize -AutoFilter -BoldTopRow -FreezeTopRow -PassThru

$ws = $xl.Workbook.Worksheets["Processes"]

Set-Format -WorkSheet $ws -Range "b:b" -NumberFormat "#,###"   -AutoFit
Set-Format -WorkSheet $ws -Range "C:C" -NumberFormat "#,##0.00" -AutoFit
Set-Format -WorkSheet $ws -Range "F:F" -NumberFormat "dd MMMM HH:mm:ss" -AutoFit

The first line creates a spreadsheet much like the ones above, and passes on the Excel Package object, which provides the reference to the workbook and in turn to the worksheets inside it.
The next lines select three columns from the worksheet and apply different formatting to each. The module even supports conditional formatting; for example, we could add these lines into the sequence above:

Add-ConditionalFormatting -WorkSheet $ws -Range "c2:c1000" -DataBarColor Blue
Add-ConditionalFormatting -WorkSheet $ws -Range "b2:B1000" -RuleType GreaterThan `
                          -ConditionValue '104857600' -ForeGroundColor "Red" -Bold

The first draws data bars so we can see at a glance what is using CPU time, and the second makes anything using over 100MB of memory stand out.

Finally, a call to Export-Excel will normally apply changes to the workbook and save the file, but there don’t need to be any changes – if you pass it a package object and don’t specify -PassThru, it will simply save your work. So “save and open in Excel” is done like this, once we have put the data in and formatted it the way we want:

Export-Excel -ExcelPackage $xl -WorkSheetname "Processes" -Show

[screenshot]

In the next post I’ll look at charts and Pivots, and the quick way to get SQL data into Excel

December 5, 2017

Using the Import-Excel module: Part 1 Importing

Filed under: Office,Powershell — jamesone111 @ 9:15 am

The “EPPlus” project provides .NET classes to read and write XLSx files without the need to use the Excel object model or even have Excel installed on the computer (XLSx files, like the other Office Open XML formats, are actually .ZIP files containing XML files which describe different aspects of the document – they were designed to make this sort of thing easier than the “binary” formats which went before). Doug Finke, who is well known in the PowerShell community, used EPPlus to build a PowerShell module named ImportExcel, which is on GitHub and can be downloaded from the PowerShell Gallery (by running Install-Module ImportExcel on PowerShell 5, or on PS4 with the Package Management addition installed). As of version 4.0.4 his module contains some of my contributions. This post is to act as an introduction to the module and the parts that I contributed to; there are some additional scripts bundled into the module which do require Excel itself, but the core import / export functions do not. This gives a useful way to get data on a server into Excel format, or to provide users with a workbook to enter data in an easy-to-use way and process that data on the server – without needing to install Microsoft Office or translate to and from formats like .CSV.

The Import-Excel command reads data from a worksheet in an XLSx file. By default, it assumes the data has headers and starts with the first header in cell A1 and the first row of data in row 2. It will skip columns which don’t have a header, but will include empty rows. If no worksheet name is specified it will use the first one in the workbook, so at its simplest the command looks like this:
Import-Excel -Path .\demo.xlsx  

It’s possible that the worksheet isn’t the first sheet in the workbook and/or has a title above the data, so we can specify the start point explicitly
Import-Excel -Path .\demo.xlsx -WorkSheetname winners -StartRow 2  

We can say the first row does not contain headers and either have each property (column) named P1, P2, P3 and so on, by using the ‑NoHeader switch, or specify header names with the -HeaderName parameter like this:

Import-Excel -Path .\demo.xlsx -StartRow 3 -HeaderName "Name","How Many"

The module also provides a ConvertFrom-ExcelSheet command which takes -Encoding and -Delimiter parameters and sends the data to Export-CSV with those parameters, and a ConvertFrom-ExcelToSQLInsert command which turns each row into a SQL statement: this command in turn uses a command ConvertFrom-ExcelData, which calls Import-Excel and then runs a script block which takes two parameters PropertyNames and Record.
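For illustration, those two conversion commands can be called something like this – a sketch in which the output folder and table name are made up:

# Export worksheet data to semicolon-delimited CSV files in .\csv
ConvertFrom-ExcelSheet -Path .\demo.xlsx -OutputPath .\csv -Encoding UTF8 -Delimiter ';'
# Turn each row of the "winners" sheet into an INSERT statement for a table named "Winners"
ConvertFrom-ExcelToSQLInsert -TableName "Winners" -Path .\demo.xlsx -WorkSheetname "winners"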

Because this script block can do more than convert data, I added an alias “Use-ExcelData” which is now  part of the module and can be used like this
Use-ExcelData -Path .\NewUsers.xlsx -HeaderRow 2 -scriptBlock $sb

If I define the script block as below, each column becomes a parameter for the New-AdUser command which is run for each row

$sb = {
  param($propertyNames, $record)
  $propertyNames | foreach-object -Begin {$h = @{} }  -Process {
      if ($null -ne $record.$_) {$h[$_] = $record.$_}
  } -end {New-AdUser @h -verbose}
}

The script block gets a list of property names and a row of data: it is called for each row, creates a hash table, adds an entry for each property and finally splats the parameters into a command. Any command can go in the end block, provided that the column names in Excel match its parameters – I’m sure you can come up with your own use cases.

November 25, 2017

Getting parts of Excel files as images.

Filed under: Office,Powershell — jamesone111 @ 7:54 pm

I feel old when I realise it’s more than two decades since I learnt about the object models in Word, Excel and even Microsoft Project, and how to control them from other applications. Although my preferred tool is now PowerShell rather than Access’s version of Visual Basic, the idea that “it’s all in there somewhere” means I’ll go and do stuff inside Excel from time to time…

One of the things I needed to do recently was to get performance data into a spreadsheet with charts – which the export part of Doug Finke’s ImportExcel module handles very nicely. But we had a request to display the charts on a web page without the need to open an Excel file, so it was time to have a look around in Excel’s [very hierarchical] object model.

An Excel.Application contains
…. Workbooks which contain
…. …. Worksheets which contain
…. …. …. Chartobjects each of which contains
…. …. …. …. A Chart which has
…. …. …. …. …. An Export Method

It seems I can get what I need if I get an Excel application object, load the workbook, work through the sheets, find each chart, decide a name to save it as and call its export method. The PowerShell to do that looks like this

$OutputType    = "JPG"
$excelApp      = New-Object -ComObject "Excel.Application"
$excelWorkBook = $excelApp.Workbooks.Open($path)
foreach ($excelWorkSheet in $excelWorkBook.Worksheets) {
  foreach ($excelchart in $excelWorkSheet.ChartObjects([System.Type]::Missing)) {
    $excelApp.Goto($excelchart.TopLeftCell,$true)
    $imagePath = Join-Path -Path $Destination -ChildPath ($excelWorkSheet.Name +
                        "_" + ($excelchart.Chart.ChartTitle.Text + ".$OutputType"))
    $excelchart.Chart.Export($imagePath, $OutputType, $false)    
  }
}
$excelApp.Quit()

A couple of things to note – the export method can output a PNG, JPG or GIF file, and in the final version of this code $OutputType is passed as a parameter (like $Path and $Destination – I’ve got into the habit of capitalizing parameter names, and starting normal variables with lowercase letters). There’s a slightly odd way of selecting ‘all charts’, and if a chart isn’t selected before exporting it doesn’t export properly.

I sent Doug this, and he added it to his module (along with some other additions I’d been meaning to send him for over a year!). Shortly afterwards he sent me a message:
Hello again. Someone asked me about png files from Excel. They generate a sheet, do conditional formatting and then they want to save it as a png and send that instead of the xlsx…

Back at Excel’s object model… there isn’t an Export method which applies to a range of cells or a whole worksheet – the SaveAs method doesn’t have the option to save a sheet (or part of one) as an image. Which left me asking “how would I do this manually?” I’d copy what I needed and paste it into something which can save it. From version 5 PowerShell has a Get-Clipboard cmdlet which can handle image data. (Earlier versions let you access the clipboard via the .net objects but images were painful). The Excel object model will allow a selection to be copied, so a single script can load the workbook, make a selection, copy it, receive it from the clipboard as an image and save the image.

$Format = [system.Drawing.Imaging.ImageFormat]::Jpeg
$xlApp  = New-Object -ComObject "Excel.Application"
$xlWbk  = $xlApp.Workbooks.Open($Path)
$xlWbk.Worksheets($WorkSheetname).Select()
$xlWbk.ActiveSheet.Range($Range).Select() | Out-Null
$xlApp.Selection.Copy() | Out-Null
$image = Get-Clipboard -Format Image
$image.Save($Destination, $Format)

In practice $Path, $Worksheetname, $Range, $Format and $Destination are all parameters. And the whole thing is wrapped in a function Convert-XlRangeToImage
Excel puts up a warning that there is a lot of data in the clipboard on exit and to stop that I copy a single cell before exiting.

$xlWbk.ActiveSheet.Range("a1").Select() | Out-Null
$xlApp.Selection.Copy() | Out-Null
$xlApp.Quit()

The Select and Copy methods return TRUE if they succeed so I send those to Null. The whole thing combines with Doug’s module like this

$excelPackage = $myData | Export-Excel -Path $Path -WorkSheetname $workSheetname
$workSheet    = $excelPackage.Workbook.Worksheets[$workSheetname]
$range        = $workSheet.Dimension.Address
#      << apply formatting >>
Export-Excel -ExcelPackage $excelPackage -WorkSheetname $workSheetname
Convert-XlRangeToImage -Path $Path -WorkSheetname $workSheetname -Range $range -Destination "$pwd\temp.png" -Show

I sent the new function over to Doug and, starting with version 4.0.8, it’s part of the downloadable module.

July 24, 2017

An extra for my PowerShell profile–Elevate

Filed under: Uncategorized — jamesone111 @ 7:15 pm

More than usual, in the last few days I’ve found myself starting PowerShell or the ISE only to find I wanted a session as administrator: it’s a common enough thing, but eventually I said ENOUGH! I’d seen “-Verb RunAs” used to start an executable as administrator, so I added this to my profile.

Function Elevate        {
<#
.Synopsis
    Runs an instance of the current program As Administrator
#>

    Start-Process (Get-Process -id $PID).path -verb runas
}

June 16, 2017

More on writing clear scripts: Write-output and return … good or bad ?

Filed under: Powershell — jamesone111 @ 11:26 am

My last post talked about writing understandable scripts, and then I read a piece entitled Let’s kill Write-Output by Mark Krauss (actually I found it because Thomas Lee tweeted it with “And sort out return too”).

So let’s start with one practicality: you can’t remove a command which has been in a language for 10 years unless you are prepared for a lot of pain making people re-write scripts. Its alias “echo” was put there for people who come from other scripting languages and start by asking “How do I print to the console?”. But if removing it altogether is impractical, we can advise people to avoid it, write rules to catch it in the script analyser and so on. Should we? And when is it a good idea to use it?

Mark points out he’s not talking about Write-Host, which should be kept for limited scenarios: if you want the user to see something by default but it isn’t part of the output, then that’s a job for Write-Host. For example, with my Get-SQL command, $result = Get-SQL $sqlQuery writes “42 rows returned” to the console but the output saved into $result is the 42 rows of data. Mark gives an example:
Write-Output "PowerShell Processes:"
Get-Process -Name PowerShell

and says it is better written as  
"PowerShell Processes:"
Get-Process -Name PowerShell

And this is actually a case where Write-host should be used … why ? Let’s turn that into a function.
Function Get-psProc {
  "PowerShell Processes:"
  Get-Process -Name "*PowerShell*"
}

Looks fine, doesn’t it? But it outputs two different types of object into the pipeline. All is well if we run Get-psProc on its own, but if we run
 Get-psProc | ConvertTo-Csv 
It returns
#TYPE System.String
"Length"
"21" 

The next command in the pipeline saw that the first object was a string, and that determined its behaviour. “PowerShell Processes:” is decoration you want the user to see, but it isn’t part of the output. That earlier post on understandable scripts came from a talk about writing good code, and one of the biggest problems I find in other people’s code is a fixation with printing to the screen. That leads to things like the next example – which is meant to read a file and say how many lines there are and what their average length is.

$measurement = cat $path | measure -average Length
echo ("Lines read    : {0}"    -f $measurement.Count  )
echo ("Average length: {0:n0}" -f $measurement.Average)

This runs and does the job the author intended, but I’d suggest they might be new to PowerShell and haven’t yet learnt that output is not the same as “stuff for a user to read” (as in the previous example), and so they feel their output must be printed for reading. Someone more experienced with PowerShell might just write:
cat $path| measure -average Length
If they aren’t bothered about the labels,  or if the labels really matter
cat $path | measure -average Length | select @{n="Lines Read";e={$_.count}}, @{n="Average Length";e={[math]::Round($_.Average,2)}}

If this is something we use a lot, we might change the aliases to cmdlet names, specify parameter names and save it for later use. And it is re-usable, for example if we want to do something when there are more than x lines in the file, where the previous version can only return text with the number of lines embedded in it.  Resisting the urge to print everything is beneficial and that gets rid of a lot of uses of Write-output (or echo).

Mark’s post has 3 beefs with Write-Output.

  1. Performance. It is slower but rarely noticeably so, so I’d discount this.
  2. Security / Predictability – Write-Output can be redefined, and that allows for something malign or just buggy. True, but it also allows you to redefine it for logging, debugging and so on. So you could use a proxy Write-Output for testing and the standard one in production. So this is not exclusively bad
  3. The false sense of security. He says that explicitly returning stuff is held to be better than implicit return, which implies
    Write-Output $result
          is better than just   $result            
    But no-one says you should write    cat $path | Write-Output – it’s obviously redundant; but when you don’t use it, aren’t you still implying output?

My take on the last point is that piping output into Write-Output (or Out-Default) is a tautology: “Here’s some output, take it and output it”. It makes things longer but not clearer. If using Write-Output does make things clearer, then that is a sign the script is hard to read and at least needs some comments, and possibly some redesign. Joel Bennett sums up the false sense of security part in a sentence: “while some people like it because it highlights the spots where you intentionally output something — other people argue its presence distracts you from the fact that other lines could output.” [Thanks Joel, that would have taken me a paragraph!]

This is where Thomas’ comment about return comes in. Return tells PowerShell to bail out of a function, and there are many good reasons for doing that; it also has a two-in-one syntax: return $result is the same as
$result
return

When I linked to Joel above he also asks the question whether, as the last lines of a function, this
$output = $temp + (Get-Thing $temp)
return $output

is better or worse than
$output = $temp + (Get-Thing $temp)
$output

Not many people would add return to the second example – it’s redundant. But if you store the final output in a variable there is some logic to using return (or Write-Output) to send it back. But does storing the result in a variable make things any clearer, or is it just as easy to read the following?
$temp + (Get-Thing $temp)

As with Write-Output, sometimes using return $result makes things clearer, and sometimes it’s a habit from other programming languages where functions return results in a single place, so multiple parts must be gathered and then returned. Here’s something which combines the results of 3 queries and returns them:

$result =  (Get-SQL $sqlQuery1)
$result += (Get-SQL $sqlQuery2)
$result +  (Get-SQL $sqlQuery3)

So the first line assigns an array of database rows to a variable, the second appends more rows, and the third sends these rows to the pipeline together with the results of a third query. You need to look at the operator in each line to figure out which one sends to the pipeline. Arguably it is clearer to replace the last line with this:

$result += (Get-SQL $sqlQuery3)
return $result

When there are 3 or 4 lines between introducing $result and putting it into the pipeline this is OK. But let’s say there are 50 lines of script between storing the results of the first query and appending the results of the second. Has the script been made clearer by storing a partial result… or would you see something being appended to $result and have to look further up the script for where it was originally set and anywhere it was changed? This example does nothing with the combined segments (like sorting them); we’re just following an old habit of only outputting in one place. Not outputting anything until we have everything can mean it takes a lot longer to run the script – we could have processed all the results from the first query while waiting for the second to run. I would dispense with the variable entirely and use

Get-SQL $sqlQuery1
Get-SQL $sqlQuery2
Get-SQL $sqlQuery3

If there is a lot of script between each I’d then use a #region around the lines which lead up to each query being run
#region build query and return rows for x
#etc etc
Get-SQL $sqlQuery1
#endregion 

so when I collapse the outlining regions in my editor I see
#region build query and return rows for x
#region build query and return rows for y
#region build query and return rows for z

Which gives me a very good sense of what the script is doing at a high level and then I can drill into the regions if I need to. If I do need to do something to the combined set of rows (like sorting) then my collapsed code might become
#region build query for x and keep rows for sorting later
#region build query for y and keep rows for sorting later
#region build query for z and keep rows for sorting later
#region return sorted and de-duplicated results of x,y and Z

Both outlines give a sense of where there should be output and where any output might be a bug.

In conclusion. 
When you see lots of echo / Write-Output commands that’s usually a bad sign – it’s usually an indication of too many formatted strings going into the pipeline – but Write-Output is not automatically bad when used sparingly, and used properly return isn’t bad either. But if you find yourself adding either for clarity it should make you ask “Is there a better way?”


June 2, 2017

On writing understandable scripts.

Filed under: Uncategorized — jamesone111 @ 7:20 pm

 

At two conferences recently I gave a talk on “What makes a good PowerShell module” (revisiting an earlier talk); the psconf.eu guys have posted a video of it and I’ve made the slides available (the version in the US used the same slide deck with a different template).

One of my points was Prefer the familiar way to the clever way. A lot of us like the brilliant PowerShell one-liner (I belong to the “We don’t need no stinking variables” school and will happily pipe huge chains of commands together). But sometimes breaking it into multiple commands means that when you return to it later, or someone new picks up what you have done, it is easier to understand what is happening. There are plenty of other examples, but generally clever might be opaque; opaque needs comments, and somewhere I picked up that what applies to jokes applies to programming: if you have to explain it, it isn’t that good.

Sometimes, someone doesn’t know the way which is familiar to everyone else, and they throw in something like this example which I used in the talk:
Set-Variable -Scope 1 -Name "variableName" -Value $($variableName +1)
I can’t recall ever using Set-Variable, and why would someone use it to set a variable to its current value + 1? The key must be in the -Scope parameter: –Scope 1 means “the parent scope”, and most people would write $Script:VariableName ++ or $Global:VariableName ++. When we encounter something like this, unravelling what Set-Variable is doing interrupts the flow of understanding… we have to go back and say “so what was happening when that variable was set…”

There are lots of cases where there are multiple ways to do something; some are easier to understand, but the easy one isn’t automatically the one we pick: all the following appear to do the same job

"The value is " + $variable
"The value is " + $variable.ToString()
"The value is $variable"
"The value is {0}" -f $variable
"The value is " -replace "$",$variable

You might see .ToString() and say “that’s thinking like a C# programmer”… but if $variable holds a date and the local culture isn’t US, the first two examples will produce different results (ToString() will use the local culture’s settings).
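Here is a minimal illustration of that point (the difference only shows on a machine whose culture isn’t US English):

$variable = Get-Date
"The value is " + $variable             # PowerShell's own string conversion uses the invariant (US-style) date format
"The value is " + $variable.ToString()  # ToString() with no arguments honours the local culture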
If you work a lot with the –f operator, you might use {0:d} to say “insert the first item in ‘short date’ format for the local culture” and naturally write
"File {0} is {1} bytes in size and was changed on {2}" -f $variable.Name,$variable.Length,$variable.LastWriteTime
Because the eye has to jump back and forth along the line to figure out what goes into {0} and then into {1} and so on, this loses on readability compared with concatenating the parts with + signs; it also assumes the next person to look at the script has the same familiarity with the –f operator. I can hear old hands saying “Anyone competent with PowerShell should be familiar with –f”, but who said the person trying to understand your script meets your definition of competence?
As someone who does a lot of stuff with regular expressions, I might be tempted by the last one… but replacing the “end of string” marker ($) as a way of appending excludes people who aren’t happy with regex. I’m working on something which auto-generates code at the moment and it uses this because the source that it reads doesn’t provide a way of specifying “append”, but has “replace”. I will let it slide in this case, but being bothered by it is a sign that I do ask myself “are you showing you’re clever, or writing something that can be worked on later?”. Sometimes the only practical way is hard, but if there is a way which takes an extra minute to write and pays back when you look at the code in a few months’ time, take that extra minute.

March 13, 2017

Improving PowerShell performance with hash tables.

Filed under: Powershell — jamesone111 @ 1:07 pm

Often the tasks which we do with PowerShell scripts aren’t very sensitive to performance – unless we are sitting drumming our fingers on the desk waiting for a script to complete, there isn’t a lot of value in making it faster. When I wrote about Start-Parallel, I showed that some things only become viable if they can be run reasonably quickly; and you might assume that scripts which run as scheduled tasks can take an extra minute if they need to.

That is not always the case. I’ve been working with a client who has over 100,000 Active Directory users enabled for Lync (and obviously they have a lot more AD objects than that). They want to set Lync’s policies based on membership of groups, and users are not just placed directly into the policy groups but nested via other groups. If users have a policy but aren’t in the associated group, the policy needs to be removed. There’s a pretty easy Venn diagram for what we need to do.

If you’ve worked with LDAP queries against AD, you may know how to find the nested members of a group using (memberOf:1.2.840.113556.1.4.1941:=<<Group DN>>).
The Lync/Skype for Business cmdlets won’t combine an LDAP filter for AD group membership with a non-AD property filter for policy – the user objects returned actually contain more than a dozen different policy properties, but for simplicity I’m just going to use ‘policy’ here. As pseudo-code, the natural way to find users and change policy – which someone else had already written – looks like this:
Get-CsUser -LdapFilter "(   memberOf <<nested group>> )" | Where-Object {$_.policy -ne $PolicyName} | Grant-Policy $PolicyName
Get-CsUser -LdapFilter "( ! memberOf <<nested group>>) " | Where-Object {$_.policy -eq $PolicyName} | Grant-Policy $null

(Very late in the process I found there was a way to check Lync / Skype policies from AD but it wouldn’t have changed what follows). 
You can see that we are going to get every user, those in the group in the first line and those out of it in the second. These “fan out” queries against AD can be slow – MSDN has a warning “Some such queries on subtrees may be more processor intensive, such as chasing links with a high fan-out; that is, listing all the groups that a user is a member of.”   Getting the two sets of data was taking over an hour.  But so what? This is a script which runs once a week, during quiet hours, provided it can run in the window it is given all will be well.  Unfortunately, because I was changing a production script, I had to show that the correct users are selected, the correct changes made and the right information written to a log. While developing a script which will eventually run as scheduled task, testing requires we run it interactively, step through it, check it, polish it, run it again, and a script with multiple segments which run for over an hour is, effectively, untestable (which is why I came to be changing the script in the first place!).

I found I could unpack the nested groups with a script a lot more quickly than using the “natural” 1.2.840.113556.1.4.1941 method; though it feels wrong to do so. I couldn’t find any ready-made code to do the unpack operation – any search for expanding groups comes back to using the OID method which reinforces the idea. 
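For illustration, an unpacker can be as simple as the recursive sketch below – this is not the production script, just one shape it could take (it assumes the ActiveDirectory module, doesn’t guard against circular nesting, and $groupDN is the DN of the top-level group):

function Expand-GroupMember {
    param ($GroupDN)
    foreach ($member in Get-ADGroupMember -Identity $GroupDN) {
        if ($member.objectClass -eq 'group') { Expand-GroupMember -GroupDN $member.DistinguishedName }
        else                                 { $member.DistinguishedName }
    }
}
$members = Expand-GroupMember -GroupDN $groupDN | Sort-Object -Unique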

I can also get all 100,000 Lync users – it takes a few minutes, but provided it is only done once per session it is workable (I stored the users in a global variable; if it was already present I didn’t re-fetch them).
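That caching trick is just a couple of lines – something like this sketch, with the Get-CsUser call simplified:

if (-not $Global:allLyncUsers) {                            # only pay the multi-minute cost once per session
    $Global:allLyncUsers = Get-CsUser -ResultSize Unlimited
}
$users = $Global:allLyncUsers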

So: I had a variable $users with the users and a variable $members which contained all the members of the group; I just had to work out who was in one but not in both. But I had a new problem. Some of the groups contain tens of thousands of users. Let’s assume half the users have the policy and half don’t. If I run
$users.where{$_.policy -ne $PolicyName -and $members -contains $_.DistinguishedName}
and
$users.where{$_.policy -eq $PolicyName -and $members -notcontains $_.DistinguishedName}
the -contains operation is going to have a LOT of work to do: if everybody has been given the right policy, none of the 50,000 users without it are in the group, but we have to look at all 50,000 group members to be sure – 2,500,000,000 string comparisons. For the 50,000 users who do have the policy, on average [not]contains has to look at half the group members before finding a match, so that’s 1,250,000,000 comparisons. 3.75 billion comparisons for each policy means it is still too slow for testing. Then I had a flash of inspiration, something which might work.

I learnt about Hash tables as a computer science undergraduate, and – as the on-line help puts it  – they are “very efficient for finding and retrieving data”. This can be read as they lead to neat code (which has been their attraction for me in the past) or as they minimize CPU use, which is what I need here.  Microsoft also call hash tables  “associative arrays” and often the boundary between a set of key-value pairs (a dictionary) and a “true” hash table is blurred – with a “true” hash tables the location of data in memory is based on the key value – so an item can be found without scanning the whole list. Some ways to do fast finds make tables slow to build. Things I’d never considered with PowerShell hash tables might turn out to be important at this scale. So I built a hash table to return a user’s policy given their DN:
$users | ForEach-Object -Begin {$hash=@{}} -Process {$hash[$_.distinguishedName] = "" + $_.policy}

About 100,000 users were processed in under 4 seconds, which was a relief; and the right policy came back for $hash[“cn=bob…”] – it looked instant compared with a couple of seconds with $users.where({$_.distinguishedName –eq “cn=bob…”}).policy

This hash table will return one of 3 things. If Bob isn’t set-up for Lync/Skype for Business I will get NULL; if Bob has no policy I will get an empty string (that’s why I add the policy to an empty string – it also forces the policy object to be a string), and if he has a policy I get the policy name. So it was time to see how many users have the right policy (that’s the magenta bit in the middle of the Venn diagram above)
$members.where({$hash[$_] -like $policyname}).count

I’d found ~10,000 members in one of the policy groups, and reckoned if I could get the “find time” down from 20 minutes to 20 seconds that would be OK and… fanfare… it took 0.34 seconds. If we can look up 10,000 items in a 100,000-item table in under a second, these must be proper hash tables. I can have the .where() method evaluate
{$hash[$_] -eq $Null} for the AD users who aren’t Lync users, or
{$hash[$_] -notin @($null,$policyName) } for users who need the policy to be set.
It works just as well the other way around for setting up the hash table to return “True” for all members of the AD group; non-members will return null, so we can use that to quickly find users with the policy set but who are not members of the group. 

$members | ForEach-Object -Begin {$MemberHash=@{}} -Process {$MemberHash[$_] = "" + $true}
$users.where({$_.policy -like $policyname -and -not $memberhash[$_.distinguishedName]}).count

Applying this to all the different policies slashed the time to do everything in the script from several hours down to a handful of minutes, so I could test thoroughly ahead of putting the script into production.

January 29, 2017

Sharing GetSQL

Filed under: Databases / SQL,Powershell — jamesone111 @ 8:03 pm

I’ve uploaded a tool named “GetSQL” to the PowerShell Gallery
Three of my last four posts (and one upcoming one) are about sharing stuff which I have been using for a long time and GetSQL goes back several years – every now and then I add something to it, so it seems like it might never be finished, but eventually I decided that it was fit to be shared.

When I talk about good PowerShell habits, one of the themes is avoiding massive “do-everything” functions; it’s also good to minimize the number of dependencies on the wider system state – for example by avoiding global variables. Sometimes it is necessary to put these guidelines to one side – the module exposes a single command, Get-SQL, which uses argument completers (extra script to fill in parameter values). There was a time when you needed to get Jason Shirk’s TabExpansion Plus Plus to do anything with argument completers but, if you’re running a current version of PowerShell, Register-ArgumentCompleter is now a built-in cmdlet. PowerShell itself helps complete parameter names – and Get-SQL is the only place I’ve made heavy use of parameter sets to strip out parameters which don’t apply (and I had to overcome some strange behaviour along the way); having completers to fill in the values for the names of connections, databases, tables and columns dramatically speeds up composing commands – not least by removing some silly typos.
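To show the shape of an argument completer – this is not the module’s actual code, and the table names here are invented – registering one looks roughly like this:

Register-ArgumentCompleter -CommandName Get-SQL -ParameterName Table -ScriptBlock {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameters)
    # In the real module this list would come from the current database connection
    'races', 'results', 'drivers' |
        Where-Object { $_ -like "$wordToComplete*" } |
        ForEach-Object { [System.Management.Automation.CompletionResult]::new($_, $_, 'ParameterValue', $_) }
}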

One thing about Get- commands in PowerShell is that if the parser sees a name which doesn’t match a command where a command name should be, it tries putting Get- in front of the name: so History runs Get-History, and SQL runs Get-SQL. That’s one of the reasons why the command didn’t begin life as “Invoke-SQL” and instead became a single Get-SQL command. I also wanted to be able to run lots of commands against the same database without having to reconnect each time, so the connection objects that are used are global variables, which survive from one call to the next – I can run
SQL "select id, name from customers where name like '%smith' "
SQL "Select * from orders where customerID = 1234"

without needing to make and break connections each time – note, these are shortened versions of the command which could be written out in full as: 
Get-SQL -SQL "Select * from orders where customerID = 1234"
The first time Get-SQL runs it needs to make a connection, so as well as –SQL it has a –Connection parameter (and extra switches to simplify connections to SQL Server, Access and Excel). Once I realized that I needed to talk to multiple databases, I started naming the sessions and creating aliases from which the session can be inferred (I described how here), so after running
Get-SQL -Session f1 -Excel  -Connection C:\Users\James\OneDrive\Public\F1\f1Results.xlsx
(which creates a session named F1 with an Excel file containing results of formula one races) I can dump the contents of a table with
f1 -Table "[races]"

Initially, I wanted to either run a chunk of SQL like the first example (and I added a –Paste switch to bring in SQL from the Windows clipboard), or get the whole of a table (like the example above with a –Table parameter), or see what tables had been defined and their structure, so I added –ShowTables and –Describe. The first argument completer I wrote used the “show tables” functionality to get a list of tables and fill in the name for the –Table or –Describe parameters. Sticking to the one-function-per-command rule would mean writing “Connect-SQL”, “Invoke-RawSQL”, “Invoke-SQLSelect” and “Get-SQLTable” as separate commands, but it just felt right to be able to allow
Get-SQL -Session f1 -Excel  -Connection C:\Users\James\OneDrive\Public\F1\f1Results.xlsx -showtables

It also just felt right to allow the end of a SQL statement to be appended to what the command line had built, giving a command like the following one – the output of a query can, obviously, be passed through a PowerShell Where-Object command, but it makes much more sense to filter before sending the data back.
f1 -Table "[races]" "where season = 1977"

“where season = 1977” is the –SQL parameter and –Table “[races]” builds “Select * from [Races]” and the two get concatenated to
“Select * from [Races] where season = 1977”.
With argument completers for the table name, it makes sense to fill column names in as well, so there is a –Where parameter with a completer which sees the table name and fills in the possible columns. So the same query results from this:
f1 -Table "[races]" -where "season" "= 1977"
I took it a stage further and made the comparison operators work like they do in where-object, allowing
f1 -Table "[races]" -where "season" -eq 1977

The script will sort out wrapping things in quotes, changing the normal * into SQL’s % for wildcards and so on. With select working nicely (I added –Select to choose columns, plus –OrderBy, –Distinct and –GroupBy) I moved on to insert, update (set) and delete functionality. You can see what I meant when I said I keep adding bits and it might never be “finished”.

Sometimes I just use the module as a quick way to add a SQL query to something I’m working on: need to query a Lync/Skype backend database? There’s Get-SQL. Line-of-business database? There’s Get-SQL. Need to poke round in Adobe Lightroom’s SQLite database? I do that with Get-SQL – though in these cases its simple query building is left behind, and it just goes back to its roots of running a lump of SQL which was created in some other tool. The easiest way to get data into Excel is usually the export part of Doug Finke’s excellent ImportExcel, but sometimes it is easier to do it with insert or update queries using Get-SQL. The list goes on… when your hammer seems like a good one, you find an awful lot of nails.

December 6, 2016

Do the job 100 times faster with Parallel Processing in PowerShell

Filed under: Powershell — jamesone111 @ 11:12 pm

It’s a slightly click-baity title, but I explain below where the 100-times number comes from. The module is on the PowerShell Gallery and you can install it with Install-Module -Name Start-Parallel

Some of the tasks we need to do in PowerShell involve firing off many similar requests and waiting for their answers – for example getting status from lots of computers on a network. It might take several seconds to do each one – maybe longer if the machines don’t respond. Doing them one after the other could take ages. If I want to ping all 255 addresses on my home subnet, most machines will be missing and it will take 3 seconds to time out for each of the 200+ inactive addresses. Even if I try only one ping for each address it’s going to take 10-12 minutes, and for >99% of that time my computer will be waiting for a response. Running them in parallel could speed things up massively.
Incidentally the data which comes back from ping isn’t ideal, and I prefer this to the Test-Connection cmdlet.
Function QuickPing {
    param ($LastByte)
    $P = New-Object -TypeName "System.Net.NetworkInformation.Ping"
    $P.Send("192.168.0.$LastByte") | where status -eq success | select address, roundTripTime
}

PowerShell allows you to start multiple processes using Jobs and there are places where Jobs work well. But it only takes a moment to see the flaw in jobs: if you run
Get-Process *powershell*
Start-Job -ScriptBlock {1..5 | foreach {start-sleep -Seconds 1 ; $_ } }
Get-Process *powershell*

You can see that the job creates a new instance of PowerShell… doing that for a single ping is horribly inefficient – jobs are better suited to tasks where the run time is much longer than the set-up time AND where we don’t want to run lots concurrently. In fact I’ve found creating large numbers of jobs tends to crash the PowerShell ISE, so my first attempts at parallelism involved tracking the number of jobs running and keeping to a maximum – starting new jobs only as others finished. It worked, but in the process I read this by Boe Prox and this by Ryan Witschger, which led me to a better way: RunSpaces and the RunSpace factory.
MSDN defines a RunSpace as “the operating environment where the command pipeline of the PowerShell object is invoked”, and says that the PowerShell object allows applications that programmatically use Windows PowerShell to create pipelines of commands, invoke them and access the results. The factory can create single RunSpaces, or a pool of RunSpaces. So a program (or script) can get a PowerShell object which says “Run this, with these named parameters and these unnamed arguments. Run it asynchronously (i.e. start it and don’t wait for it to complete, just give me some signal when it is done), and in a space from this pool.” If there are more things wanting to run than there are RunSpaces, the pool handles queuing for us.
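As a rough illustration of that pattern – a bare-bones sketch, not Start-Parallel itself – the moving parts look something like this:

$pool = [RunspaceFactory]::CreateRunspacePool(1, 50)       # up to 50 runspaces may run at once
$pool.Open()
$jobs = foreach ($i in 1..255) {
    $ps   = [PowerShell]::Create()
    $null = $ps.AddCommand('Test-Connection')
    $null = $ps.AddParameter('ComputerName', "192.168.0.$i").AddParameter('Count', 1).AddParameter('Quiet', $true)
    $ps.RunspacePool = $pool
    [pscustomobject]@{ Shell = $ps; Handle = $ps.BeginInvoke() }   # BeginInvoke starts it and returns at once
}
$results = foreach ($j in $jobs) { $j.Shell.EndInvoke($j.Handle); $j.Shell.Dispose() }   # collect the answers
$pool.Close()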

Thus the idea for Start-Parallel was born.  I wanted to be able to do this
Get-ListOfComputers | Start-Parallel Get-ComputerSettings.ps1
or this  
1..255 | Start-Parallel -Command QuickPing -MaxThreads 500
or even pipe PS objects or hash tables in to provide multiple parameters to the same command.

-MaxThreads in the second example says create a pool where 500 pings can be in progress, so every QuickPing can be running at the same time (performance monitor shows a spike of threads). So how long does it take to do 255 pings now? 240 inactive addresses taking 3 seconds each gave me ~720 seconds, and the version above runs in a little under 7, so that’s a 100-fold speed increase! This is pretty consistent with what I’ve found polling servers over the couple of years I’ve been playing with Start-Parallel – things that would take a morning or an afternoon run in a couple of minutes.

You can install it from the PowerShell Gallery. Some tips

  • Get-ListOfComputers | Start-Parallel Get-ComputerSettings.ps1 
    works better than
    $x = Get-ListOfComputers ; Start-Parallel -InputObject $x -Command Get-ComputerSettings.ps1
    if Get-ListOfComputers is slow, we will probably have the results for the first computer(s) before we have been told the last one on the list to check.    
  • Don’t hit the same service with many requests in parallel – unless you want to mount a denial of service attack.
  • Remember that RunSpaces don’t share anything – the parallel RunSpaces won’t load your profile, or inherit anything from the session which launches them. And there is no guarantee that every module out there always behaves as expected if run in multiple RunSpaces simultaneously. In particular, if “QuickPing” is defined in the same PS1 file which runs Start-Parallel, then Start-Parallel is defined in the global scope and can’t see QuickPing in the script scope. The workaround for this is to use
    Start-Parallel -ScriptBlock ${Function:\QuickPing}
  • Some commands by their nature specify a computer. For others it is easier to define a script block inside another script block (or a function) which takes a computer name as a parameter and runs (see the sketch after this list)
    Invoke-Command -ComputerName $computer -ScriptBlock $InnerScriptBlock
  • I don’t recommend running Start-Parallel inside itself, but based on very limited testing it does appear to work.
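A hedged sketch of the inner-script-block pattern from the tip above (Get-ListOfComputers is the same hypothetical command used earlier, and the service name is arbitrary):

$outerScriptBlock = {
    param($computer)
    # Wrapping the remoting call means each parallel RunSpace only needs a computer name
    Invoke-Command -ComputerName $computer -ScriptBlock { Get-Service -Name Spooler }
}
Get-ListOfComputers | Start-Parallel -ScriptBlock $outerScriptBlock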

You can install it by running Install-Module -Name Start-parallel

 

November 30, 2016

Powershell Piped Parameter Peculiarities (and a Palliative pattern!)

Filed under: Uncategorized — jamesone111 @ 7:33 am

Writing some notes before sharing a PowerShell module, I did a quick fact check and rediscovered a hiccup with piped parameters, and (eventually) remembered writing a simplified script to show the problem – 3 years ago, as it turns out. The script appears below: it has four parameter sets and all it does is tell us which parameter set was selected. There are four parameters: A is in all 4 sets, B is in sets 2, 3 and 4, C is only in set 3 and D is only in set 4. I’m not really a fan of parameter sets, but they help intellisense to remove choices which don’t apply.

function test { 
[CmdletBinding(DefaultParameterSetName="PS1")]
param (  [parameter(Position=0, ValueFromPipeLine=$true)]
         $A,
         [parameter(ParameterSetName="PS2")]
         [parameter(ParameterSetName="PS3")]
         [parameter(ParameterSetName="PS4")]
         $B,
         [parameter(ParameterSetName="PS3", Mandatory)]
         $C,
         [parameter(ParameterSetName="PS4", Mandatory)]
         $D
)
$PSCmdlet.ParameterSetName
}

So let’s check out what comes back for different parameter combinations
> test  1
PS1

No parameters or parameter A only gives the default parameter set. Without parameter C or D it can’t be set 3 or 4, and with no parameter B it isn’t set 2 either.

> test 1 -b 2
PS2
Parameters A & B, or parameter B only, gives parameter set 2 – having parameter B it must be set 2, 3 or 4, but 3 & 4 can be eliminated because C and D are missing.

> test 1 -b 2 –c 3 
PS3

Parameter C means it must be set 3 (and D means it must be set 4); so let’s try piping the input for parameter A
> 1 | test 
PS1
> 1 | test  -b 2 -c 3
PS3

So far it’s as we’d expect.  But then something goes wrong.
> 1 | test  -b 2
Parameter set cannot be resolved using the specified named parameters

Eh? If data is being piped in, PowerShell no longer infers a parameter set from the absent mandatory parameters. Which seems like a bug. And I thought about it: why would piping something change what you can infer about a parameter not being on the command line? Could it be uncertainty over whether values could come from properties of the piped object? I thought I’d try this hunch
   [parameter(ParameterSetName="PS3", Mandatory,ValueFromPipelineByPropertyName=$true)]
  $C,
  [parameter(ParameterSetName="PS4", Mandatory,ValueFromPipelineByPropertyName=$true)]
  $D

This does the trick – though I don’t have a convincing reason why two places not providing the values works better than one (in fact that initial hunch doesn’t seem to stand up to logic). This (mostly) solves the problem – there could be some odd results if parameter D was named “length” or “path” or anything else commonly used as a property name. I also found in the “real” function that adding ValueFromPipelineByPropertyName to too many parameters – non-mandatory ones – caused PowerShell to think a set had been selected and then complain that one of the mandatory values was missing from the piped object. So just adding it to every parameter isn’t the answer.
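For completeness, here is the quick check I mean – assuming the test function above with ValueFromPipelineByPropertyName added to C and D as shown, the previously ambiguous call now resolves:
> 1 | test -b 2
PS2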

November 19, 2016

Format-XML on the PowerShell Gallery

Filed under: Powershell — jamesone111 @ 8:08 pm
Tags: , ,

In the last post, I spoke about those bits of PowerShell we carry around and never think to share. Ages ago I wrote a function named “Format-XML” which “pretty prints” XML with nice indents. I’ve passed it on to a few people over the years -  it’s been included as a “helper” in various modules – but I hadn’t published it on its own.

I’ve got that nagging feeling  I should be crediting someone for providing the original bits but I’ve long since lost track of who. In Britain people sometimes talk about “Trigger’s broom” which classicists tend to call the Ship of Theseus – if you change a part it’s still the same thing, right? But after every part has been changed? That’s even more true of the “SU” script which will be the subject of a future post but in that case I’ve kept track of its origins.

Whatever… Format-XML is on the PowerShell Gallery – you can click Show under FileList on that page to look at the code, or use PowerShellGet (see the Gallery homepage for details) to install it, using Install-Script -Name format-xml. The licence is chosen to allow you to incorporate it into anything you want, with no strings attached.

Then you can load it with .  format-xml.ps1 – that leading “.” (dot-sourcing) matters – and run it with
Format-XML $MyXML
or $MyXML | Format-XML
Where $MyXML is any of

  • An XML object
  • Some text in XML format
  • The name of a file which contains XML, or
  • A file object where the file contains XML

Incidentally, if you have stuff to share on the PowerShell Gallery the sign-up process is quick, and the PowerShellGet module has an Update-ScriptFileInfo command to set the necessary metadata; then Publish-Script puts the script into the gallery – it couldn’t be easier.
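For anyone who hasn’t published before, the two commands look roughly like this – a minimal sketch in which the version, author, description and API key are placeholders you would replace with your own:

Update-ScriptFileInfo -Path .\format-xml.ps1 -Version 1.0.1 -Author "Your Name" -Description "Pretty-prints XML"
Publish-Script        -Path .\format-xml.ps1 -NuGetApiKey $MyGalleryApiKey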

November 13, 2016

One of those “everyday” patterns in PowerShell –splitting a list

Filed under: Powershell — jamesone111 @ 9:16 pm

For a PowerShell enthusiast, the gig I’ve been on for the last few weeks is one of those “Glass Half Full/Glass Half Empty” situations: the people I’m working with could do a lot more with PowerShell than they are (half empty) but that’s an opportunity to make things better (half full). A pattern which I take for granted took on practically life-changing powers for a couple of my team this week…. 

We had to move some … let’s just say “things”. My teammates know they run Move-Thing Name Destination, but they had been mailed several lists with maybe 100 things to move in each one. Running 100 command lines is going to be a chore. So I gave them this
@"
PASTE
YOUR
LIST
HERE
"@
   -Split  "\s*[\r\n]+\s*"  | ForEach-Object { Move-thing $_ "Destination"}

Text which is wrapped in @"<newline> and <newline> "@ is technically called a "here string" but to most people it is just a way to have a multiline string.  So pasting a list of items between the quotes is trivial, but the next bit looks like some magic spell …
PowerShell’s –split operator takes regular expressions and splits the text where it finds them (and throws the matching bits away). In regex, \r is carriage return, \n is new line and [\r\n] is “either return or newline”, so [\r\n]+ means “at least one of the line-break characters, but any number in any order”. I usually use –split this way, but here we found the lists often included spaces and tabs – adding \s* at the beginning and end adds “preceded by / followed by any number of white-space characters – even zero”.
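As a quick illustration of what that pattern does – a minimal sketch using a made-up three-item string with stray spaces and mixed line endings:

"Alpha`r`n  Beta `nGamma" -Split "\s*[\r\n]+\s*"
Alpha
Beta
Gamma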

So the multiline string is now a bunch of discrete items. The command we want to run doesn’t always need to be in a foreach {} – text piped in to many commands becomes the right parameter like this 
@"
List
"@
   -Split  "\s*[\r\n]+\s*"  | Get-Thing | Format-Table
But a foreach {} will always work, even if it is more cumbersome.

I think lots of us have these ready-made patterns – as much as anything this post is a call to think about ones you might share. It was nice to pass this one on and hear the boss’s surprise when one of the junior guys told him everything was done.

July 1, 2016

Just enough admin and constrained endpoints. Part 2: Startup scripts

Filed under: Uncategorized — jamesone111 @ 1:36 pm

In part 1 I looked at endpoints and their role in building your own JEA solution, and said applying constraints to end points via a startup script did these things

  • Loads modules
  • Hides cmdlets, aliases and functions from the user.
  • Defines which scripts and external executables may be run
  • Defines proxy functions to wrap commands and modify their functionality
  • Sets the PowerShell language mode, to further limit the commands which can be run in a session, and prevent new ones being defined.

The endpoint is a PowerShell RunSpace running under its own user account (ideally a dedicated account) and applying the constraints means a user connecting to the endpoint can do only a carefully controlled set of things. There are multiple ways to set up an endpoint; I prefer to do it using a start-up script, and below is the script I used in a recent talk on JEA. It covers all the points and works, but being an example its scope is extremely limited:

$Script:AssumedUser  = $PSSenderInfo.UserInfo.Identity.name
if ($Script:AssumedUser) {
    Write-EventLog -LogName Application -Source PSRemoteAdmin -EventId 1 -Message "$Script:AssumedUser, Started a remote Session"
}
# IMPORT THE COMMANDS WE NEED
Import-Module -Name PrintManagement -Function Get-Printer

#HIDE EVERYTHING. Then show the commands we need and add Minimum functions
if (-not $psise) {
    Get-Command -CommandType Cmdlet,Filter,Function | ForEach-Object  {$_.Visibility = 'Private' }
    Get-Alias                                       | ForEach-Object  {$_.Visibility = 'Private' }
    #To show multiple commands put the name as a comma separated list
    Get-Command -Name Get-Printer                   | ForEach-Object  {$_.Visibility = 'Public'  }

    $ExecutionContext.SessionState.Applications.Clear()
    $ExecutionContext.SessionState.Scripts.Clear()

    $RemoteServer = [System.Management.Automation.Runspaces.InitialSessionState]::CreateRestricted(
                    [System.Management.Automation.SessionCapabilities]::RemoteServer)
    $RemoteServer.Commands.Where{($_.Visibility -eq 'public') -and ($_.CommandType -eq 'Function') } |
        ForEach-Object {  Set-Item -Path "Function:\$($_.Name)" -Value $_.Definition }
}

#region Add our functions and business logic
function Restart-Spooler {
<#
.Synopsis
    Restarts the Print Spooler service on the current Computer
.Example
    Restart-Spooler
    Restarts the spooler service, and logs who did it  
#>

    Microsoft.PowerShell.Management\Restart-Service -Name "Spooler"
    Write-EventLog -LogName Application -Source PSRemoteAdmin -EventId 123 -Message "$Script:AssumedUser, restarted the spooler"
}
#endregion
#Set the language mode
if (-not $psise) {$ExecutionContext.SessionState.LanguageMode = [System.Management.Automation.PSLanguageMode]::NoLanguage}

Logging
Any action taken from the endpoint will appear to be carried out by the privileged Run-As account, so the script needs to log the name of the user who connects and runs commands. So the first few lines of the script get the name of the connected user and log the connection: I set up PSRemoteAdmin as a source in the event log by running
New-EventLog -Source PSRemoteAdmin -LogName application

Then the script moves on to the first bullet point in the list at the start of this post: loading any modules required; for this example, I have loaded PrintManagement. To make doubly sure that I don’t give access to unintended commands, Import-Module is told to load only those that I know I need.

Private functions (and cmdlets and aliases)
The script hides the commands which we don’t want the user to have access to (we’ll assume everything). You can try the following in a fresh PowerShell Session (don’t use one with anything you want to keep!)

function jump {param ($path) Set-Location -Path $path }
(Get-Command set-location).Visibility = "Private"
cd \
This defines jump as a function which calls Set-Location – functionally it is the same as the alias CD; next we can hide Set-Location and try to use CD, but this returns an error
cd : The term 'Set-Location' is not recognized
But Jump \ works: making something private stops the user calling it from the command line but allows it to be called in a Function. To stop the user creating their own functions the script sets the language mode as its final step 

To allow me to test parts of the script, it doesn’t hide anything if it is running in the PowerShell ISE, so the blocks which change the available commands are wrapped in if (-not $psise) {}. Away from the ISE the script hides internal commands first. You might think that Get-Command could return aliases to be hidden, but in practice this causes an error. Once everything has been made private, the script takes a list of commands, separated with commas, and makes them public again (in my case there is only one command in the list). Note that the script can see private commands and make them public, but at the PowerShell prompt you can’t see a private command so you can’t change it back to being public.

Hiding external commands comes next. If you examine $ExecutionContext.SessionState.Applications and $ExecutionContext.SessionState.Scripts you will see that they are both normally set to “*”, they can contain named scripts or applications or be empty. You can try the following in an expendable PowerShell session

$ExecutionContext.SessionState.Applications.Clear()
ping localhost
ping : The term 'PING.EXE' is not recognized as the name of a cmdlet function, script file, or operable program.
PowerShell found PING.EXE but decided it wasn’t an operable program.  $ExecutionContext.SessionState.Applications.Add("C:\Windows\System32\PING.EXE") will enable ping, but nothing else.

So now the endpoint is looking pretty bare: it only has one available command – Get-Printer. We can’t get a list of commands, or exit the session, and in fact PowerShell looks for “Out-Default” which has also been hidden. This is a little too bare; we need to add constrained versions of some essential commands. While the steps to hide commands can be discovered inside PowerShell if you look hard enough, the steps to put back the essential commands need to come from documentation. In the script $RemoteServer gets definitions and creates proxy functions for:

Clear-Host   
Exit-PSSession
Get-Command  
Get-FormatData
Get-Help     
Measure-Object
Out-Default  
Select-Object

I’ve got a longer explanation of proxy functions here, the key thing is that if PowerShell has two commands with the same name, Aliases beat Functions, Functions beat Cmdlets, Cmdlets beat external scripts and programs. “Full” Proxy functions create a steppable pipeline to run a native cmdlet, and can add code at the begin stage, at each process stage for piped objects and at the end stage, but it’s possible to create much simpler functions to wrap a cmdlet and change the parameters it takes; either adding some which are used by logic inside the proxy function, removing some or applying extra validation rules. The proxy function PowerShell provides for Select-Object only supports two parameters: property and InputObject, and property only allows 11 pre-named properties. If a user-callable function defined for the endpoint needs to use the “real” Select-Object – it must call it with a fully qualified name: Microsoft.PowerShell.Utility\Select-Object (I tend to forget this, and since I didn’t load these proxies when testing in the ISE, I get reminded with a “bad parameter” error the first time I use the command from the endpoint).  In the same way, if the endpoint manages active directory and it creates a Proxy function for Get-ADUser, anything which needs the Get-ADUser cmdlet should specify the ActiveDirectory module as part of the command name.
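As a small illustration of the “fully qualified name” point – this is not part of the original script, just a sketch of the kind of simple wrapper a constrained endpoint might offer; the function name and the idea of returning the first few printers are hypothetical:

function Get-TopPrinters {
    param ([int]$Count = 5)
    # the endpoint's Select-Object proxy doesn't support -First, so call the real cmdlet by its full name
    Get-Printer | Microsoft.PowerShell.Utility\Select-Object -First $Count
}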

By the end of the first if … {} block the basic environment is created. The next region defines functions for additional commands; these will fall mainly into two groups: proxy functions as I’ve just described, and functions which I group under the heading of business logic. The endpoint I was creating had “Initialize-User” which would add a user to AD from a template, give them a mailbox, set their manager and other fields which appear in the directory, give them a phone number, enable them for Skype for Business with Enterprise Voice and set up Exchange voice mail, all in one command. How many proxy and business-logic commands there will be, and how complex they are, both depend on the situation; and some commands – like Get-Printer in the example script – might not need to be wrapped in a proxy at all.
For the example I’ve created a Restart-Spooler command. I could have created a proxy to wrap Restart-Service and only allowed a limited set of services to be restarted. Because I might still do that, the function uses the fully qualified name of the hidden Restart-Service cmdlet, and I have also made sure the function writes information to the event log saying what happened. For a larger system I use 3-digit event IDs where the first digit indicates the type of object impacted (1xx for users, 2xx for mailboxes and so on) and the next two what was done (x01 for added, x02 for changed a property).
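If I did turn that into the more general proxy, it might look something like this – a minimal sketch rather than anything from the real endpoint; the whitelist of service names and the event ID are made up:

function Restart-AllowedService {
    param (
        [ValidateSet("Spooler","W32Time")]     # hypothetical whitelist of restartable services
        [string]$Name
    )
    Microsoft.PowerShell.Management\Restart-Service -Name $Name
    Write-EventLog -LogName Application -Source PSRemoteAdmin -EventId 302 -Message "$Script:AssumedUser, restarted $Name"
}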

The final step in the script is to set the language mode. There are four possible language modes. Full Language is what we normally see; Constrained Language limits calling methods and changing properties to certain allowed .NET types – the MATH type isn’t specifically allowed, so [System.Math]::pi will return the value of pi, but [System.Math]::Pow(2,3) causes an error saying you can’t invoke that method, and the SessionState type isn’t on the allowed list either, so trying to change the language back will say “Property setting is only allowed on core types”. Restricted Language doesn’t allow variables to be set and doesn’t allow access to members of an object (i.e. you can’t look at individual properties, call methods, or access individual members of an array), and certain variables (like $pid) are not accessible. No Language stops us even reading variables.
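You can see which mode a session is in by reading the same property the script sets – the comment below just lists the enumeration values:

$ExecutionContext.SessionState.LanguageMode
# One of: FullLanguage, ConstrainedLanguage, RestrictedLanguage, NoLanguage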

Once the script is saved it is a question of connecting to the end point to test it. In part one I showed setting up the end point like this
$cred = Get-Credential
Register-PSSessionConfiguration -Name "RemoteAdmin" -RunAsCredential $cred `
                                -ShowSecurityDescriptorUI `
                                -StartupScript 'C:\Program Files\WindowsPowerShell\EndPoint.ps1'
The start-up script will be read from the given path for each connection, so there is no need to do anything to the Session configuration when the script changes; as soon as the script is saved to the right place I can then get a new session connecting to the “RemoteAdmin” endpoint, and enter the session. Immediately the prompt suggests something isn’t normal:

$s = New-PSSession -ComputerName localhost -ConfigurationName RemoteAdmin
Enter-PSSession $s
[localhost]: PS>

PowerShell has a prompt function, which has been hidden. If I try some commands, I quickly see that the session has been constrained

[localhost]: PS> whoami
The term 'whoami.exe' is not recognized…

[localhost]: PS> $pid
The syntax is not supported by this runspace. This can occur if the runspace is in no-language mode...

[localhost]: PS> dir
The term 'dir' is not recognized ….

However the commands which should be present are present. Get-Command works and shows the others

[localhost]: PS> get-command
CommandType  Name                    Version    Source
-----------  ----                    -------    ------
Function     Exit-PSSession
Function     Get-Command
Function     Get-FormatData
Function     Get-Help
Function     Get-Printer                 1.1    PrintManagement                                                                                        
Function     Measure-Object
Function     Out-Default
Function     Restart-Spooler
Function     Select-Object

We can try the following to show how the Select-object cmdlet has been replaced with a proxy function with reduced functionality:
[localhost]: PS> get-printer | select-object -first 1
A parameter cannot be found that matches parameter name 'first'.

So it looks like all the things which need to be constrained are constrained; if the functions I want to deliver – Get-Printer and Restart-Spooler – work properly, I can create a module using
Export-PSSession -Session $s -OutputModule 'C:\Program Files\WindowsPowerShell\Modules\remotePrinters' -AllowClobber -force
(I use -force and -allowClobber so that if the module files exist they are overwritten, and if the commands have already been imported they will be recreated.)  
Because PowerShell automatically loads modules (unless $PSModuleAutoloadingPreference tells it not to), saving the module to a folder listed in $psModulePath means a fresh PowerShell session can go straight to using a remote command;  the first command in a new session might look like this

C:\Users\James\Documents\windowsPowershell> restart-spooler
Creating a new session for implicit remoting of "Restart-Spooler" command...
WARNING: Waiting for service 'Print Spooler (Spooler)' to start...

The message about creating a new session comes from code generated by Export-PSSession which ensures there is always a session available to run the remote command. Get-PSSession will show the session and Remove-PSSession will close it. If a fix is made to the endpoint script which doesn’t change the functions which can be called or their parameters, then removing the session and running the command again will get a new session with the new script. The module is a set of proxies for calling the remote commands, so it only needs to change to support modifications to the commands and their parameters. You can edit the module to add enhancements of your own, and I’ve distributed an enhanced module to users rather than making them export their own. 

You might have noticed that the example script includes comment-based help – eventually there will be client-side tests for the script, written in Pester, and following the logic I set out in Help=Spec=Test, the test will use any examples provided. When Export-PSSession creates the module, it includes help tags to redirect requests, so running Restart-Spooler –? locally requests help from the remote session; unfortunately requesting help relies on an existing session and won’t create a new one.

June 29, 2016

Just enough admin and constrained endpoints. Part 1 Understanding endpoints.

Filed under: DevOps,Powershell — jamesone111 @ 1:42 pm

Before we can dive into Just Enough Admin and constrained end points, I think we need to fill in some of the background on endpoints and where they fit in PowerShell remoting.

When you use PowerShell remoting, the local computer sees a session, which is connected to an endpoint on a remote computer. Originally, PowerShell installations did not enable inbound sessions, but this has changed with newer versions. If the Windows Remote Management service (also known as WSMAN) is enabled, it will listen on port 5985; you can check with
NetStat -a | where {$_ -Match 5985}
If WSMAN is not listening you can use the Enable-PSRemoting cmdlet to enable it.
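Enabling it is a one-liner from an elevated prompt (-Force suppresses the confirmation prompts):

Enable-PSRemoting -Force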

With PS remoting enabled you can try to connect. If you run
$s = New-PSSession -ComputerName localhost
from a non-elevated PowerShell session, you will get an access-denied error, but from an elevated session it should run successfully. The reason for this is explained later. When the command is successful, $s will look something like this:
Id Name     ComputerName State  ConfigurationName    Availability
-- ----     ------------ -----  -----------------    ------------
 2 Session2 localhost    Opened Microsoft.PowerShell    Available

We will see the importance of ConfigurationName later as well. The Enter-PSSession cmdlet switches the shell from talking to the local session to talking to a remote one: running
Enter-PSSession $s
will change the prompt to something like this
[localhost]: PS C:\Users\James\Documents>
showing where the remote session is connected: Exit-PSSession returns to the original (local) session; you can enter and exit the session at will, or create a temporary session on demand, by running
Enter-PsSession -ComputerName LocalHost

The Get-PSSession cmdlet shows a list of sessions and will show that there is no session left open after exiting an “on-demand” session. As well as interacting with a session you can use Invoke-Command to run commands in the session, for example
Invoke-Command -Session $s -ScriptBlock {Get-Process -id $pid}
Handles NPM(K) PM(K) WS(K) VM(M) CPU(s)   Id SI ProcessName PSComputerName
------- ------ ----- ----- ----- ------   -- -- ----------- -------------- 
    547     26 61116 77632 ...45   0.86 5788 0  wsmprovhost      localhost

At first sight this looks like a normal process object, but it has an additional property, "PSComputerName". In fact, a remote process is represented by a different type of object. Commands in remote sessions might return objects which are not recognised on the local computer. So the object is serialized – converted to a textual representation – sent between sessions, and de-serialized back into a custom object. There are two important things to note about this.

  1. De-serialized objects don’t have Methods or Script Properties. Script properties often will need access to something on the originating machine – so PowerShell tries to convert them to Note Properties. A method can only be invoked in the session where the object was created – not one which was sent a copy of the object’s data.
  2. The object type changes. The .getType() method will return PsObject, and the PSTypeNames property says the object is a Deserialized.System.Diagnostics.Process; PowerShell uses PSTypenames to decide how to format an object and will use rules defined for type X to format a Deserialized.X object.
    However, testing the object type with -is [x] will return false, and a function which requires a parameter to be of type X will not accept a Deserialized.X. In practice this works as a safety-net, if you want a function to be able to accept remote objects you should detect that they are remote objects and direct commands to the correct machine.

Invoke-Command allows commands which don’t support a -ComputerName parameter (or equivalent) to be targeted at a different machine, and also allows commands which aren’t available on the local computer to be used remotely. PowerShell provides two additional commands to make the process of using remote modules easier. Import-PSSession creates a temporary module which contains proxies for all the cmdlets and functions in the remote session that don’t already exist locally; this means that instead of having to write
Invoke-Command -Session $s -ScriptBlock {Get-ADUser}
the Get-ADUser command works much as it would with a locally installed Active Directory module. Using Invoke-Command will return a Deserialized AD user object and the local copy of PowerShell will fall back to default formatting rules to display it; but when the module is created it includes a format XML file describing how to format additional objects.
Import-PSSession adds commands to a single session using a temporary module: its partner Export-PSSession saves a module that can be imported as required – running commands from such a module sets up the remote session and gives the impression that the commands are running locally.
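A sketch of how that looks in practice – the server name DC01 and the Remote prefix are placeholders, and this assumes the remote machine has the ActiveDirectory module installed:

$s = New-PSSession -ComputerName DC01
Import-PSSession -Session $s -Module ActiveDirectory -Prefix Remote
Get-RemoteADUser -Identity bob        # runs Get-ADUser in the remote session via the generated proxy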

What about the configuration name and the need to logon as Admin?

WSMAN has multiple end points which sessions can connect to, the command Get-PSSessionConfiguration lists them – by default the commands which work with PS Sessions connect to the end point named "microsoft.powershell", but the session can connect to other end points depending on the tasks to be carried out.
Get-PSSessionConfiguration shows that, by default, the "microsoft.powershell" endpoint has StartUpScript and RunAsUser properties which are blank and a permission property of
NT AUTHORITY\INTERACTIVE        AccessAllowed,
BUILTIN\Administrators          AccessAllowed,
BUILTIN\Remote Management Users AccessAllowed

This explains why we need to be an administrator (or in the group “Remote Management Users”) to connect. It is possible to modify the permissions with
Set-PSSessionConfiguration -Name "microsoft.powershell" -ShowSecurityDescriptorUI

When Start-up script and Run-As User are not set, the session looks like any other PowerShell session and runs as the user who connected – you can see the user name by running whoami or checking the $PSSenderInfo automatic variable in the remote session.
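A quick way to see both from the local machine – assuming the $s session created earlier in this post:

Invoke-Command -Session $s -ScriptBlock { whoami; $PSSenderInfo.UserInfo.Identity.Name }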

Setting the Run-As user allows the session to run with more privileges than are granted to the connecting user: to prevent this user running amok, the endpoint is constrained – in simpler terms, we put limits on what can be done in that session. Typically, we don’t want the user to have access to every command available on the remote computer, and we may want to limit the parameters which can be used with those that are allowed. The start-up script does the following to set up the constrained environment:

  • Loads modules
  • Defines proxy functions to wrap commands and modify their functionality
  • Hides cmdlets, aliases and functions from the user.
  • Defines which scripts and external executables may be run
  • Sets the PowerShell language mode, to further limit the commands which can be run in a session, and prevent new ones being defined.

If the endpoint is to work with Active Directory, for example, it might hide Remove-ADGroupMember. (or import only selected commands from the AD module); it might use a proxy function for Add-ADGroupMember so that only certain groups can be manipulated. The DNS Server module might be present on the remote computer but the Import-Module cmdlet is hidden so there is no way to load it.

Hiding or restricting commands doesn’t stop people doing the things that their access rights allow. An administrator can use the default endpoint (or start a remote desktop session) and use the unconstrained set of tools. The goal is to give out fewer admin rights and give people Just Enough Admin to carry out a clearly defined set of tasks: so the endpoint runs as a privileged account (even a full administrator account) but other, less privileged accounts are allowed to connect and run the constrained commands that it provides.
Register-PSSessionConfiguration sets up a new endpoint and Set-PSSessionConfiguration modifies an existing one; the same parameters work with both – for example

$cred = Get-Credential
Register-PSSessionConfiguration -Name "RemoteAdmin" `
                                -RunAsCredential $cred `
                                -ShowSecurityDescriptorUI `
                                -StartupScript 'C:\Program Files\WindowsPowerShell\EndPoint.ps1'
The -ShowSecurityDescriptorUI switch pops up a permissions dialog box – to set permissions non-interactively it is possible to use -SecurityDescriptorSddl and specify the information using SDDL but writing SDDL is a skill in itself.

With the end point defined the next part is to create the endpoint script, and I’ll cover that in part 2

June 27, 2016

Technical Debt and the four most dangerous words for any project.

Filed under: Uncategorized — jamesone111 @ 9:15 am

I’ve been thinking about technical debt. I might have been trying to avoid the term when I wrote Don’t swallow the cat, or more likely I hadn’t heard it, but I was certainly describing it – to adapt Wikipedia’s definition, it is “the future work that arises when something that is easy to implement in the short run is used in preference to the best overall solution”. However, it is not confined to software development as Wikipedia suggests.
“Future work” can come from bugs (either known, or yet to be uncovered because of inadequate testing), design kludges which are carried forward, dependencies on out of date software, documentation that was left unwritten… and much more besides.

The cause of technical debt is simple: People won’t say “I (or we) cannot deliver what you want, properly, when you expect it”.
“When you expect it” might be the end of a Scrum Sprint, a promised date or “right now”. We might be dealing with someone who asks so nicely that you can’t say “No” or the powerful ogre to whom you dare not say “No”. Or perhaps admitting “I thought I could deliver, but I was wrong” is too great a loss of face. There are many variations.

I’ve written before about “What you measure is what you get” (WYMIWIG) it’s also a factor. In IT we measure success by what we can see working. Before you ask “How else do you judge success?”, Technical debt is a way to cheat the measurement – things are seen to be working before all the work is done. To stretch the financial parallel, if we collect full payment without delivering in full, our accounts must cover the undelivered part – it is a liability like borrowing or unpaid invoices.

Imagine you have a deadline to deliver a feature. (Feature could be a piece of code, or an infrastructure service however small). Unforeseeable things have got in the way. You know the kind of things: the fires which apparently only you know how to extinguish, people who ask “Can I Borrow You”, but should know they are jeopardizing your ability to meet this deadline, and so on.
Then you find that doing your piece properly means fixing something that’s already in production. But doing that would make you miss the deadline (as it is you’re doing less testing than you’d like and documentation will have to be done after delivery). So you work around the unfixed problem and make the deadline. Well done!
Experience teaches us that making the deadline is rewarded, even if you leave a nasty surprise for whoever comes next – they must make the fix AND unpick your workaround. If they are up against a deadline they will be pushed to increase the debt. You can see how this ends up in a spiral: like all debt, unless it is paid down, it increases in future cycles.

The Quiet Crisis unfolding in Software Development has a warning to beware of high performers, they may excel at the measured things by cutting corners elsewhere. It also says watch out for misleading metrics – only counting “features delivered” means the highest performers may be leaving most problems in their wake. Not a good trait to favour when identifying prospective managers.

Sometimes we can say “We MUST fix this before doing anything else.”, but if that means the whole team (or worse its manager) can’t do the thing that gets rewarded then we learn that trying to complete the task properly can be unpopular, even career limiting. Which isn’t a call to do the wrong thing: some things can be delayed without a bigger cost in the future; and borrowing can open opportunities that refusing to ever take on any debt (technical or otherwise) would deny us. But when the culture doesn’t allow delivery plans to change, even in the face of excessive debt, it’s living beyond its means and debt will become a problem.

We praise delivering on-time and on-budget, but if capacity, deadline and deliverables are all fixed, only quality is variable. Project management methodologies are designed to make sure that all these factors can be varied and give project teams a route to follow if they need to vary by too great a margin. But a lot of work is undertaken without this kind of governance. Capacity is what can be delivered properly in a given time by the combination of people, skills, equipment and so on, each of which has a cost. Increasing headcount is only one way to add capacity, but if you accept that adding people to a late project makes it later, then it needs to be done early. When we must demonstrate delivery beyond our capacity, it is technical debt that covers the gap.

Forecasting is imprecise, but it is rare to start with a plan we don’t have the capacity to deliver. I think another factor causes deadlines which were reasonable to end up creating technical debt.

The book The Phoenix Project has a gathered a lot of fans in the last couple of years, and one of its messages is that Unplanned work is the enemy of planned work. This time management piece separates Deep work (which gives satisfaction and takes thought, energy, time and concentration) from Shallow work (the little stuff). We can do more of value by eliminating shallow work and the Quiet Crisis article urges managers to limit interruptions and give people private workspaces, but some of each day will always be lost to email, helping colleagues and so on.

But Unplanned work is more than workplace noise. Some comes from Scope Creep, which I usually associate with poor specification, but unearthing technical debt expands the scope, forcing us to choose between more debt and late delivery. But if debt is out in the open then the effort to clear it – even partially – can be in-scope from the start.
Major incidents can’t be planned and leave no choice but to stop work and attend to them. But some diversions are neither noise, nor emergency. “Can I Borrow You?” came top in a list of most annoying office phrases and “CIBY” serves as an acronym for a class of diversions which start innocuously. These are the four dangerous words in the title.

The Phoenix Project begins with the protagonist being made CIO and briefed “Anything which takes focus away from Phoenix is unacceptable – that applies to the whole company”. For most of the rest of the book things are taking that focus. He gets to contrast IT with manufacturing, where a coordinator accepts or declines new work depending on whether it would jeopardize any existing commitments. Near the end he says to the CEO “Are we even allowed to say no? Every time I’ve asked you to prioritize or defer work on a project, you’ve bitten my head off. …[we have become] compliant order takers, blindly marching down a doomed path”. And that resonates. Project steering boards (or similarly named committees) exist to assign capacity to some projects and disappoint others. Without one – or if it is easy to circumvent – we end up trying to deliver everything and please everyone; “No” and “What should I drop?” are answers we don’t want to give, especially to those who’ve achieved their positions by appearing to deliver everything, thanks to technical debt.

Generally, strategic tasks don’t compete to consume all available resources. People recognise these should have documents covering

  • What is it meant to do, and for whom? (the specification / high level design)
  • How does it do it? (Low level design, implementation plan, user and admin guides)
  • How do we know it does what it is meant to? (test plan)

But “CIBY” tasks are smaller, tactical things; they often lack specifications: we steal time for them from planned work assuming we’ll get them right first time, but change requests are inevitable. Without a spec, there can be no test plan: yet we make no allowance for fixing bugs. And the work “isn’t worth documenting”, so questions have to come back to the person who worked on it.  These tasks are bound to create technical debt of their own and they jeopardize existing commitments pushing us into more debt.

Optimistic assumptions aren’t confined to CIBY tasks. We assume strategic tasks will stay within their scope: we set completion dates using assumptions about capacity (the progress for each hour worked) and about the number of hours focused on the project each day. Optimism about capacity isn’t a new idea, but I think planning doesn’t allow for shallow / unplanned work – we work to a formula like this:
TIME = SCOPE / CAPACITY
In project outcomes, debt is a fourth variable and time lost to distracting tasks a fifth. A better formula would look like this
DELIVERABLES = (TIME * CAPACITY) – DISTRACTIONS + DEBT  

Usually it is the successful projects which get a scope which properly reflects the work needed, stick to it, allocate enough time and capacity and hold on to it. It’s simple in theory, and projects which go off the rails don’t do it in practice, and fail to adjust. The Phoenix Project told how failing to deliver “Phoenix” put the company at risk. After the outburst I quoted above, the CIO proposes putting everything else on hold, and the CEO, who had demanded 100% focus on Phoenix, initially responds “You must be out of your right mind”. Eventually he agrees, Phoenix is saved and the company with it. The book is trying to illustrate many ideas, but one of them boils down to “the best way to get people to deliver what you want is to stop asking them to deliver other things”.

Businesses seem to struggle to set priorities for IT: I can’t claim to be an expert in solving this problem, but the following may be helpful

Understanding the nature of the work. Jeffrey Snover likes to say “To ship is to choose”. A late project must find an acceptable combination of additional cost, overall delay, feature cuts, and technical debt. If you build websites, technical debt is more acceptable than if you build aircraft. If your project is a New Year’s Eve firework display, delivering without some features is an option, delay is not. Some feature delays incur cost, but others don’t.

Tracking all work: have a view of what is completed, what is in progress, what is “up next”, and what is waiting to be assigned time. The next few points all relate to tracking.
Work in progress has already consumed effort, but we only get credit when it is complete. An increasing number of tasks in progress may mean people are passing work to other team members faster than their capacity to complete it, or that new tasks are interrupting existing ones.
All work should have a specification before it starts. Writing specifications takes time, and “Create specification for X” may be a task in itself.
And yes, I do know that technical people generally hate tracking work and writing specifications.
Make technical debt visible. It’s OK to split an item and categorize part as completed and the rest as something else. Adding the undelivered part to the backlog keeps it as planned work, and also gives partial credit for partial delivery – rather than credit being all or nothing. It means some credit goes to the work of clearing debt.
And I also know technical folk see “fixing old stuff” as a chore, but not counting it just makes matters worse.
Don’t just track planned work. Treat jobs which jumped the queue, that didn’t have a spec or that displaced others like defects in a manufacturing process – keep the score, and try to drive it down to zero. Incidents and “CIBY” jobs might only be recorded as an afterthought, but you want to see where they are coming from and try to eliminate them at source.

Look for process improvements. If a business is used to lax project management, it will resist attempts to channel all work through a project steering board. Getting stakeholders together in a regular “IT projects meeting” might be easier, but still gets the key result (managing the flow of work).

And finally, having grown-up conversations with customers.
Businesses should understand the consequences of pushing for delivery to exceed capacity; which means IT (especially those in management) must be able to deliver messages like these.
“For this work to jump the queue, we must justify delaying something else”
“We are not going to be able to deliver [everything] on time”, perhaps with a follow-up of “We could call it delivered when there is work remaining but … have you heard of technical debt?”

June 1, 2016

A different pitch for Pester

Filed under: DevOps,Powershell,Testing — jamesone111 @ 2:10 pm

If you work with PowerShell but don’t consider yourself to be a developer, then when people get excited by the new(ish) testing framework named Pester you might think “what has that got to do with me?” …
Pester is included with PowerShell 5 and downloadable for older versions, but most things you find written about it are by software testers for software testers – though that is starting to change. This post is for anyone who thinks programs are like sausages: you don’t want to know how either is made.

Let’s consider how we’d give someone some rules to check something physical:
“Here is a description of an elephant
It is a mammal
It is at least 1.5 M tall
It has grey wrinkly skin
It has a long flexible nose”

Simple stuff. Tell someone what you are describing, and make simple statements about it (that’s important – we don’t say “It is a large grey-skinned mammal with a long nose”). Check those statements and if they are all true you can say “We have one of those”. So let’s do the same, in PowerShell, for something we can test programmatically – this example has been collapsed down in the ISE, which shows a couple of “special” commands from Pester.

$Connections = Get-NetIPConfiguration | Where-Object {$_.netadapter.status -eq "up" }
Describe "A computer with an working internet connection on my home network" {
    It "Has a connected network interface"  {...}
    foreach ($c in $Connections)            {  
        It "Has the expected Default Gateway on the interface named  '$($C.NetAdapter.InterfaceDescription)' "   {...}
        It "Gets a 'ping' response from the default gateway for      '$($C.NetAdapter.InterfaceDescription)' "   {...} 
        It "Has an IPV4 DNS Server configured on the interface named '$($C.NetAdapter.InterfaceDescription)' "   {...}
    }
    It "Can resolve the DNS Name 'www.msftncsi.com' " {...}
    It "Fetches the expected data from the URI 'http://www.msftncsi.com/ncsi.txt' " {...}
}

So Pester can help with testing ANYTHING – it isn’t just for checking that program X gives output Y with input Z. The script starts with Describe, which says what is being tested
Describe "A computer with an working internet connection on my home network" {}
has the steps needed to perform the test inside the braces. Normally PowerShell is easier to read with parameter names included but writing this out in full as
Describe -Name "A computer with an working internet connection on my home network" -Fixture  {}
would make it harder to read, so the norm with Pester is to omit the switches.  
We describe a working connection by saying we know that it has a connected network, it has the right default gateway and so on. The It statements read just like that, with a name and a test inside the braces (again the switches are omitted for readability). When expanded, the first one in the example looks like this.

     It "Has a connected network interface"  {
        $Connections.count | Should not beNullOrEmpty
    }

Should is also defined in Pester. It is actually a PowerShell function which goes to some trouble to circumvent normal PowerShell syntax (the PowerShell purist in me doesn’t like that, but I have to remember the famous quote about “A foolish consistency is the hobgoblin of little minds”): the idea is to make the test read more like natural language than programming.
This example has a test that says there should be some connections, and then three tests inside a loop use other variations on the Should syntax.

$c.DNSServer.ServerAddresses -join "," | Should match "\d+\.\d+\.\d+\.\d+"
$c.IPv4DefaultGateway.NextHop          | Should  be "192.168.0.1"
{Test-Connection -ComputerName $c.IPv4DefaultGateway.NextHop -Count 1} | Should not throw

You can see Should allows us to check for errors being thrown (or not), empty values (or not), regular-expression matches (or not) and specific values; depending on what happens in the Should, the It command can decide if that test succeeded. When one Should test fails the script block being run by the It statement stops, so in my example it would be better to combine “has a default gateway” and “gets a ping response” into a single It, but as it stands the script generates output like this:

Describing A computer with an working internet connection on my home network
[+] Has a connected network interface 315ms
[+] Has the expected Default Gateway on the interface named  'Qualcomm Atheros AR956x Wireless Network Adapter'  56ms
[+] Gets a 'ping' response from the default gateway for      'Qualcomm Atheros AR956x Wireless Network Adapter'  524ms
[+] Has an IPV4 DNS Server configured on the interface named 'Qualcomm Atheros AR956x Wireless Network Adapter'  25ms
[+] Can resolve the DNS Name 'www.msftncsi.com'  196ms
[+] Fetches the expected data from the URI 'http://www.msftncsi.com/ncsi.txt'  603ms

Pester gives this nicely formatted output without having to do any extra work  – it can also output the results as XML so we can gather up the results for automated processing. It doesn’t allow us to test anything that couldn’t be tested before – the benefit is it simplifies turning a description of the test into a script that will perform it and give results which mirror the description.
The first example showed how a folding editor (the PowerShell ISE or any of the third-party ones) can present the script so it looks like the basic specification.
Here’s an outline of a test to confirm that a machine had been built correctly – I haven’t filled in the code to test each part.  
Describe "Server 432" {
   It "Is Registered in Active Directory"                 {}
   It "Is has an A record in DNS"                         {}
   It "Responds to Ping at the address in DNS"            {}
   It "Accepts WMI Connections and has the right name"    {}
   It "Has a drive D: with at least 100 GB of free space" {}
   It "Has Dot Net Framework installed"                   {}
}
 
This doesn’t need any PowerShell knowledge: it’s easy to take a plain text file with suitable indents and add the Describes, Its, braces and quote marks – and hand the result to someone who knows how to check DNS from PowerShell and so on, they can fill in the gaps. Even before that is done the test script still executes. 

Describing Server 432
[?] Is Registered in Active Directory 32ms
[?] Has an A record in DNS 13ms
[?] Responds to Ping at the address in DNS 4ms
[?] Accepts WMI Connections and has the right name 9ms
[?] Has a drive D: with at least 100 GB of free space 7ms
[?] Has Dot Net Framework installed 5ms

The test output uses [+] for a successful test, [-] for a failure, [!] for one it was told to skip, and [?] for one which is “pending”, i.e. we haven’t started writing it. 
I think it is good to start with a relatively simple set of tests and add to them – so for checking the state of a machine: is such-and-such a service present and running, are connections accepted on a particular port, is data returned, and so on. In fact, whenever we find something wrong which can be tested, it’s often a good idea to add a test for that to the script.
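For instance, a minimal sketch of the sort of check I mean – the service name is just an example, and the syntax matches the Pester version used above:

It "Has the Print Spooler service installed and running" {
    (Get-Service -Name Spooler).Status | Should be 'Running'
}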

So if you were in any doubt at the start, hopefully you can see now that Pester is just as valuable as a tool for Operational Validation as it is for software testing.

May 31, 2016

Help = Spec = Test

Filed under: Powershell,Testing — jamesone111 @ 2:55 pm

Going back for some years – at least as far as the talks which turned into the PowerShell Deep Dives book – I have told people ”Start Help Early” (especially when you’re writing anything that will be used by anyone else).
In the face of time pressure documentation is the first thing to be cut – but this isn’t a plea to keep your efforts from going out into the world undocumented. 
Help specifies what the command does, and help examples are User Stories – a short plain English description of something someone needs to do.
Recently I wrote something to combine the steps of setting up a Skype for Business user (don’t worry – you don’t need to know S4B to follow the remainder) – the help for one of the commands looked like this

<#
.SYNOPSIS
Sets up a Skype for business user including telephony, conference PIN and Exchange Voice mail
.EXAMPLE
Initialize-CsUser –ID bob@mydomain –PhoneExtension 1234 –pin 2468 –EnterpriseVoice
Enables a pre-existing user, with enterprise voice, determines and grants the correct voice policy,
sets a conferencing PIN, updates the Phone number in AD, and enables voice mail for them in Exchange.
#>

I’ve worked with people who would insist on writing user stories as “Alice wants to provision Bob… …to do this she …”  but the example serves well enough as both help for end users and a specification for one use case: after running the command  user “bob” will

  • Be enabled for Skype-for-Business with Enterprise Voice – including “Phone number to route” and voice policy
  • Have a PIN to allow him to use voice conferencing
  • Have a human readable “phone number to dial”  in AD
  • Have appropriate voice mail on Exchange

The starting point for a Pester test (the Pester testing module ships with PowerShell V5, and is downloadable for earlier versions) is a set of simple statements like this – the thing I love about Pester is that it is so human-readable.

Describe "Adding Skype for business, with enterprise voice, to an existing user"  {
### Code to do it and return the results goes here
    It "Enables enterprise voice with number and voice policy" {    }
    It "Sets a conference PIN"                                 {    }
    It "Sets the correct phone number in the directory"        {    }
    It "Enables voice mail"                                    {    }
}

The “doing” part of the test script is the command line from the example (though probably with different values for the parameters).
Each thing we need to check to confirm proper operation is named in an It statement with the script to test it inside the braces. Once I have my initial working function, user feedback will either add further user stories (help examples), which drive the creation of new tests or it will refine this user story leading either to new It lines in an existing test (for example “It Sets the phone number in AD in the correct format”) or to additional tests (for example “It generates an error if the phone number has been assigned to another user”)

In my example running the test a second time proves nothing, because the second run will find everything has already been configured, so a useful thing to add to the suite of tests would be something to undo what has just been done. Because help and test are both ways of writing the specification, you can start by writing the specification in the test script – a simplistic interpretation of “Test Driven Development”.  So I might write this

Describe "Removing Skype for business from a user"   {
### Code to do it and return the results goes here       
    It "Disables S4B and removes any voice number"   {    } –Skip
    It "Removes voice mail"                          {    } –Skip
}

The –Skip prevents future functionality from being tested. Instead of making each command a top-level Describe section in the Pester script, each can be a second-level Context section.

Describe "My Skype for business add-ons" {
    Context "Adding Skype for business, with enterprise voice, to an existing user"   {...}
    Context "Removing Skype for business from a user"  {...}
}

So… you can start work by declaring the functions with their help and then writing the code to implement what the help specifies, and finally create a test script based on the Help/Spec OR you can start by writing the specification as the outline of a Pester test script, and as functionality is added, the help for it can be populated with little more than a copy and paste from the test script.
Generally, the higher-level items will have a help example, and the lower-level items combine to give the explanation for the example. As the project progresses, each of the It commands has its –Skip removed and its test block populated; to-do items show up on the test output as skipped.

Describing My Skype for business add-ons
   Context Adding Skype for business, with enterprise voice, to an existing user

    [+] Sets the phone number to call in the directory 151ms
    [+] Enables enterprise voice with the phone number to route and voice policy  33ms
    [+] Sets a conference PIN  18ms
    [+] Enables voice mail  22ms

   Context Removing Skype for business from a user
    [!] Disables S4B and removed any voice number 101ms
    [!] Removes voice mail 9m
Tests completed in 347ms
Passed: 4 Failed: 0 Skipped: 2 Pending: 0
 

With larger pieces of work it is possible to use –Skip and an empty script block for an It statement to mean different things (Pester treats the empty script block as “Pending”), so the test output can show me which parts of the project are done, which are unfinished but being worked on, and which aren’t even being thought about at the moment; it complements other tools to keep the focus on doing the things that are in the specification. But when someone says “Shouldn’t it be possible to pipe users into the remove command?”, we don’t just go and write the code, and we don’t stop at writing a test either – we bring the example into the help to show that way of working.

May 23, 2016

Good and bad validation in PowerShell

Filed under: Powershell — jamesone111 @ 10:35 am
Tags:

I divide input validation into good and bad.

Bad validation on the web makes you hate being a customer of whichever organization uses it. It’s the kind which says “Names can only contain alphabetic characters”, so O’Neill isn’t a valid name.
Credit card companies print numbers in blocks of 4 digits because they are easier to read and type – but how many web sites demand an unbroken string of 16 digits?

Good validation tolerates spaces and punctuation and also spots credit card numbers which are too short or don’t checksum properly and knows the apostrophe needs special handling. Although it requires the same care on the way out as on the way in as this message from Microsoft shows.
And bad validation can be good validation paired with an unhelpful message – for example, telling you the new password you chose isn’t acceptable without saying what is.

In PowerShell, parameter declarations can include validation, but keep in mind validation is not automatically good.
Here’s good validation at work: I can write parameters like this. 
     [ValidateSet("None", "Info", "Warning", "Error")]
     [string]$Icon = "error"

PowerShell’s intellisense can complete values for the -Icon parameter, but if I’m determined to put an invalid value in here’s the error I get.
Cannot validate argument on parameter 'Icon'.
The argument "wibble" does not belong to the set "None,Info,Warning,Error" specified by the ValidateSet attribute.
Supply an argument that is in the set and then try the command again.

It might be a bit verbose, but it’s clear what is wrong and what I have to do to put it right. But PowerShell builds its messages from templates and sometimes dropping in the text from the validation attribute gives something incomprehensible, like this
Cannot validate argument on parameter 'Path'.
The argument "c:" does not match the "^\\\\\S*\\\S*$" pattern.
Supply an argument that matches "^\\\\\S*\\\S*$" and try the command again.

This is trying to use a regular expression to check for a UNC path to a share (\\Server\Share), but when I used it in a conference talk, none of the 50 or 60 PowerShell experts in the room could work that out quickly – and people without a grounding in regular expressions have no chance.
Moral: what is being checked is valid, but to get a good message, do the test in the body of the function.
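For example – a sketch only, and the function name is made up – the same pattern is much friendlier when the check is done inside the function and the message is written for humans:

function Copy-ToShare {
    param ([string]$Path)
    # the same regular expression, but the user sees plain English when it fails
    if ($Path -notmatch '^\\\\\S*\\\S*$') {
        throw "Path must be a UNC path to a share, like \\Server\Share - '$Path' isn't."
    }
    # ... the rest of the function
}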

Recently I saw this – or something like it – via a link from Twitter.

function Get-info {
  [CmdletBinding()]
  Param (
          [string]$ComputerName
  )
  Get-WmiObject -ComputerName $ComputerName -Class 'Win32_OperatingSystem'
}

Immediately I can see two things wrong with the parameter.
First is “All parameters must have a type” syndrome. ComputerName is a string, right? Wrong! Get-WmiObject allows an array of strings. Most of the time you or I or the person who wrote the example will call it with a single string, but when a comma-separated list is used, the “make sure this is a string” validation concatenates the items into a single string.
Moral: if a parameter is passed straight to something else, either copy the type from there or don’t specify a type at all.

And second, the parameter isn’t mandatory and doesn’t have a default, so if we run the function with no parameter it calls Get-WmiObject with a null computer name, which causes an error. I encourage people to get in the habit of setting defaults for parameters.
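A sketch of the parameter block with both points addressed might look like this – keeping the example’s Get-info name, and assuming the local machine is a sensible default:

function Get-info {
  [CmdletBinding()]
  Param (
          # [string[]] matches the type of Get-WmiObject's own -ComputerName parameter,
          # and the default means running with no parameter queries the local machine
          [string[]]$ComputerName = $env:COMPUTERNAME
  )
  Get-WmiObject -ComputerName $ComputerName -Class 'Win32_OperatingSystem'
}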

The author of that article goes on to show that you can use a regular expression to validate the input. As I’ve shown already, regular expressions give unhelpful error messages, and writing comprehensive ones can be an art in itself. In the example, the author used
  [ValidatePattern('^\w+$')]
But if I try
Get-info MyMachine.mydomain.com
Back comes a message telling me to
Supply an argument that matches "^\w+$" and try the command again
The author specified only “word” characters (letters, digits and the underscore) – no dots, no hyphens and so on. The regular expression can be fixed, but as it becomes more complicated, the error message grows harder to understand.

He moves on to a better form of validation: PowerShell supports a validation script for parameters, like this
[ValidateScript({ Test-Connection -ComputerName $_ -Quiet -Count 1 })]
This is a better test, because it checks whether the target machine is pingable or not. But it is still let down by a bad error message.
The " Test-Connection -ComputerName $_ -Quiet -Count 1 " validation script for the argument with value "wibble" did not return a result of True.
Determine why the validation script failed, and then try the command again.

In various PowerShell talks I’ve said that a user should not have to understand the code inside a function in order to use the function. In this case the validation code is simple enough that someone with a working knowledge of PowerShell can figure out the problem but, again, “to get a good message, do the test in the body” seems good advice. In its simplest form the test would look like this
if (Test-Connection -ComputerName $ComputerName -Quiet -Count 1) {
        Get-WmiObject -ComputerName $ComputerName -Class 'Win32_OperatingSystem'
}
else {Write-Warning "Can't connect to $computername" }

But this doesn’t cope with multiple values in ComputerName – if any are valid, the code runs for all of them – so it would be better to run:
foreach ($c in $ComputerName) {
    if (Test-Connection -ComputerName $c -Quiet -Count 1 ) {
        Get-WmiObject -ComputerName $c -Class 'Win32_OperatingSystem'
    }
    else {Write-Warning "Can't connect to $c"}
}

This doesn’t support using “.” to mean “LocalHost” in Get-WmiObject – hopefully by now you can see the problem: validation strategies can either end up stopping things working which should work, or the validation becomes a significant task in itself. If a bad parameter can result in damage, then a lot of validation might be appropriate. But this function changes nothing, so there is no direct harm if it fails; and although the validation prevents some failures, it doesn’t guarantee the command will succeed. Firewall rules might allow ping but block an RPC call, we might fail to log on, and so on. In a function which uses the result of Get-WmiObject we need to check that the result is valid before using it in something else. In other words, validating the output might be better than validating the input.
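A sketch of that approach – letting Get-WmiObject try each name and only acting on the results which actually came back (the boot-time line is just an illustration of “using the result”):

foreach ($c in $ComputerName) {
    $os = Get-WmiObject -ComputerName $c -Class 'Win32_OperatingSystem' -ErrorAction SilentlyContinue
    if ($os) {   # only use the result if we actually got one
        "{0} last booted at {1}" -f $c, $os.ConvertToDateTime($os.LastBootUpTime)
    }
    else { Write-Warning "Couldn't get OS information from $c" }
}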

Note that I say “might”: validating the output isn’t always better. Depending on the kind of things you write, validating input might be best most of the time. Think about validation rather than cranking it out on autopilot. And remember you have three duties to your users

  • Write help (very simple, comment-based help is fine) for the parameter saying what is acceptable and what is not. Often the act of writing “The computer name must only contain letters” will show you that you have lapsed into Bad validation
  • Make error messages understandable. One which requires the user to read code or decipher a regular expression isn’t, so be wary of using some of the built-in validation options.
  • Don’t break things. Work the way the user expects to work. If commands which do similar things take a list of targets, don’t force a single one.
    If “.” works, support it.
    If your code uses SQL syntax where “%” is a wildcard, think about converting “*” to “%” and doubling up single apostrophes, as in the sketch after this list (testing with my own surname is a huge help to me!)
    And if users want to enter redundant spaces and punctuation, it’s your job to remove them.
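A minimal sketch of that kind of tidying – the function, table and column names here are invented for the example:

function Get-Customer {
    param ([string]$Surname = '*')
    # accept the wildcard users expect, and double apostrophes so O'Neill doesn't break the query
    $pattern = $Surname.Trim() -replace '\*', '%' -replace "'", "''"
    $Sql     = "SELECT * FROM Customers WHERE Surname LIKE '$pattern'"
    # ... run $Sql with your preferred data-access command
}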

September 1, 2014

The start of a new chapter.

Filed under: General musings — jamesone111 @ 7:19 pm

A symbolic moment earlier, I updated my Linked-in profile. From September 1st I am Communications and Collaboration Architect at the MERCEDES AMG PETRONAS Formula One Team.
Excited doesn’t really cover it – even if there are some “new school” nerves too. I’ve spent the last three years working with people I thought the world of, so I’m sad to say goodbye to them; but this is a role I’d accept at almost any company, and it’s at a company where I’d take just about any role – I’ve joked that if they had offered me a job as senior floor sweeper I’d have asked “how senior?”. People I’ve told have said “Pretty much your dream job then?”. Yes, in a nutshell.

F1 is a discipline where you can lose a competitive advantage through careless talk: anything to do with the car, or comings and goings at the factory, is obviously off limits. Pat Symonds of Williams said in a recent interview that the intellectual property (IP) in racing “is not the design of our front wing endplate, you can take a photo of that. The IP is the way we think, the way we operate, the way we do things.” The first lesson of induction at Microsoft was ‘Never compromise the IP’ (and I learnt that IP wasn’t just the software, but included the processes used in Redmond). So although it has been part of my past jobs to talk about what the company was doing, at Mercedes it won’t be. I find F1 exciting – more so this season than the last few – and it’s not really possible to be excited and yet have no opinions about the sport, although things I’ve said in the past don’t all match my current opinions: the James Hunt / Niki Lauda battle of 1976 is almost my first F1 memory, and if I still thought of Lauda as the enemy I wouldn’t work for a company which has him as chairman. In fact one of the initial attractions of the team for me was the degree to which I found myself agreeing with what their management said in public. Commenting on the F1 issues of the day from inside a team looks like something which needs to take a lot of different sensitivities into account, and it’s something I’m more than happy to leave to those who have it in their job description.

If I have interesting things to blog about which don’t relate to motor sport or to the job – or about software where anyone working with the same products could find the information out for themselves (i.e. nothing specific to one company) – then hopefully the blog posts will continue.
