James O'Neill's Blog

June 16, 2014

A trick with Powershell aliases–making them do parameters

Filed under: Powershell — jamesone111 @ 10:37 am


The first thing that everyone learns about PowerShell Aliases is that they just replace the name of a command, aliases don’t do parameters.
DIR is an alias for Get-ChildItem ; you can’t make an alias RDIR for Get-ChildItem –Recurse. If you want that you need a function.
To quote Redmond’s most famous resident* I canna change the laws of physics, Captain, but I can find ye a loophole.

I wrote a function which I use a lot – 100+ times some days – named Get-SQL. Given an unknown command “xxx”, PowerShell will see if there is a command “Get-XXX” before reporting a “not recognized” error, so I usually just run it as “SQL” without the Get-. The function talks to databases and keeps connections open between calls as named sessions: connections tend to be named after places in South America, so to open a session I might run
> SQL -Session Peru -Connection DSN=PERU
and then to find out the definition of a table I use
> SQL -Session Peru -Describe Projects
I’ve previously written about Jason Shirk’s tab expansion++, which gets a list of tables available to -Describe (or tables which can be Selected from, updated or Inserted into) in that session, and provides tab expansion or an intellisense pick list: this is incredibly useful when you can’t remember if the table is named “project” or “Projects”. Tab expansion++ does the same for field names, so I don’t have to remember if the field I want is named “ID_Project”, “Project_ID”, “ProjectID” or simply “ID”.

Get-SQL has a default session: I might use -Session Peru 20 times in a row but still want to leave the default connection alone.
I found myself thinking ‘I want “Peru” to be an alias for Get-SQL -Session Peru. One line – New-Alias -Name $session -Value something – inside Get-SQL could set it all up for me when I make the connection.’
As I said, we all know you can’t do that with an alias, but doing this with functions is – bluntly – a nightmare: creating functions on the fly is possible but awkward, and Tab expansion++ wouldn’t know it was supposed to work with them (it does figure out aliases). Defining the functions in advance for each data source would give me a maintenance headache…

Then I had a flash of inspiration: if I needed this to work for a built-in cmdlet, I’d need to create a proxy function … but Get-SQL is already a function. So, if I can write something in the function to check how it was invoked it can say “A-ha! I was called as ‘Peru’ and ‘Peru’ is the name of a database session, so I should set $session to ‘Peru’.” Whatever the alias is, provided there is a connection of the same name it will get used. This turns out to be almost laughably easy.

In my Get-SQL function the $session Parameter is declared like this
[string]$Session = "Default"

A function can find out the name that was used to invoke it by looking at $MyInvocation.InvocationName. If Get-SQL is invoked with no value provided for -Session, $Session will be set to ‘Default’: if that is the case, and there is a database session whose name matches the invocation name, then that name should go into $Session, like this:
if ($Session -eq "Default" -and $Global:DbSessions[$MyInvocation.InvocationName])
   {$Session = $MyInvocation.InvocationName}

Of course the parameter is not part of the alias definition – but the function can detect the alias and set the parameter internally – the laws stand, but I have my loophole. Although it’s split here into two lines I think of the if statement as one line of code. When Get-SQL creates a connection it finishes by calling New-Alias -Name $session -Value Get-SQL -Force. So two lines give me what I wanted.
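Stripped of everything else, the pattern looks like this – a minimal sketch, where the session table, the Get-Demo name and the sample values are illustrative rather than the real Get-SQL:

```powershell
$Global:DbSessions = @{}    # session name -> connection, filled in when a connection opens

function Get-Demo {
    param ([string]$Session = "Default")
    # If no -Session was given, but we were invoked via an alias matching a session, use that
    if ($Session -eq "Default" -and $Global:DbSessions[$MyInvocation.InvocationName]) {
        $Session = $MyInvocation.InvocationName
    }
    "Using session '$Session'"
}

# When a connection named Peru is opened, register it and create the matching alias
$Global:DbSessions["Peru"] = "DSN=PERU"
New-Alias -Name Peru -Value Get-Demo -Force

Peru        # -> Using session 'Peru'
Get-Demo    # -> Using session 'Default'
```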
Tab expansion++ was so valuable that stopping here would mean its argument completers don’t know what to do: when they need a list of fields or tables they call Get-SQL, and that worked fine when a -Session parameter was passed – but I’ve gone to all this trouble to get rid of that parameter, so now the completers will try to get a list by calling the function using its canonical name and the default connection. There is a different way to find the invocation name inside an argument completer – by getting it from the parameter which holds the command Abstract Syntax Tree, like this:

$cmdnameused = $commandAst.ToString() -replace "^(.*?)\s.*$",'$1'
if ($Global:DbSessions[$cmdnameused]) {$session = $cmdnameused}
else {<# set $session the same way as before #>}
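In context, a completer built on this check might look like the following sketch – the five-parameter script-block signature is the standard one for argument completers, but everything inside the body is illustrative rather than the real TabExpansion++ code:

```powershell
# Standard argument-completer signature: the CommandAst carries the command line
# as typed, so the alias used to invoke the command is its first token
$tableCompleter = {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)

    $session = "Default"
    # First token as typed, e.g. "Peru" in "Peru -Describe Proj<tab>"
    $cmdnameused = $commandAst.ToString() -replace "^(.*?)\s.*$", '$1'
    if     ($Global:DbSessions[$cmdnameused]) { $session = $cmdnameused }
    elseif ($fakeBoundParameter.Session)      { $session = $fakeBoundParameter.Session }

    # ... look up the table names for $session here and emit one
    #     System.Management.Automation.CompletionResult per match ...
}
```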

> Peru -Describe 

will pop up the list of tables in that database for me to choose “Projects”. There was a fair amount of thinking about it but, as you can see, only four lines of code. Result!

* James Doohan – the actor who played “Scotty” in Star Trek – actually lived in Redmond, ‘home’ of Microsoft, though Bill Gates and other famous Microsoft folk lived elsewhere around greater Seattle. So I think it’s OK to call him the most famous resident of the place.

June 9, 2014

Screen scraping for pleasure or profit (with PowerShell’s Invoke-RestMethod)

Filed under: Powershell — jamesone111 @ 3:28 pm

Twenty-odd years ago I wrote some Excel macros to get data into Excel from one of the world’s less friendly materials management systems. It was far easier to work with the data in Excel, and the macros were masterpieces of send-keys and prayer-based parsing; it was the first time I heard the term Screen Scrape. (When I searched for “prayer-based parsing” it took me to another page on this blog, where you might develop the argument that in Unix and Linux the pipe we take for granted is little more than an automated screen scrape.)

The technology changes (and gets easier) but the same things push us to screen scrape: data would be more useful in a form other than the one in which it is made available, and there is a benefit if we can shift it from one form to the other. There are times when taking the data might break usage terms, or when there are other legal or ethical reasons why you shouldn’t do it. You need to work out those questions for yourself; here I’m just going to talk about one technique which I’ve found myself using several times recently.

A lot of the data we want today is delivered in web pages. This, of itself, can be helpful, because well-formed HTML can often be treated as XML data, so with something like PowerShell the parsing is (near enough) free. Instead of having to pull the text completely apart, there might be a small amount of preliminary tidying up, and then putting [XML] in front of something conjures up a navigable hierarchy of data. The data behind my post on Formula One statistics was gathered mainly using this method (and the odd regular expression for pulling a needle-like scrap of information from a haystack of HTML).

But when you’re lucky, you don’t even need to parse the data out of the format in which it is displayed. Recently someone sent me a link to a story about smartphone market share. It’s presented as a map, but a map is not a great way to see how share has gone up or down at different times, or to compare one place against another at the same time. These days when I see something like this I think “how do they get the data for the map?”. The easy way to find out is to press [F12] in Internet Explorer, turn on the network monitor (click the router icon and then the “play” button) and reload the page. The hope is to catch a tell-tale sign of data being requested and processed by the browser ready for display; often this data will jump out because of the way it is formatted. In this case the network trace showed some JSON-format data, and JSON or XML data is a real gift for PowerShell…

Once upon a time, if you wanted to get data from a web server you had to write a big chunk of code. Then Microsoft provided a System.Net.WebClient object which would do the fetching and carrying but left the parsing to you. In recent versions of PowerShell there are two cmdlets: Invoke-WebRequest is basically a wrapper for this, and will do some crunching of the HTML page so you can work with the document. PowerShell also has ConvertFrom-Json, so you can send the content into that and don’t have to write your own JSON parser. But it’s even easier than that: Invoke-RestMethod will get the page and, if it can parse what comes back, it does – so you don’t need a separate convert step. I can get the data back and quickly explore it like this:

> $j = Invoke-RestMethod -Uri http://www.kantarworldpanel.com/php/comtech_data.php
> $j.years
{@{year=2012; months=System.Object[]}, @{year=2013; months=System.Object[]}, @{year=2014; months=System.Obj...

> $j.years[0]
year                                                    months                                               
----                                                    ------                                               
2012                                                    {@{month=0; cities=System.Object[]}, @{month=1; cit...

> $j.years[0].months[0]
month                                                   cities                                               
-----                                                   ------                                               
0                                                       {@{name=USA; lat=40.0615504; lng=-98.51893029999997...

> $j.years[0].months[0].cities[0]
name                        lat                         lng                         platforms                
----                        ---                         ---                         ---------                
USA                         40.0615504                  -98.51893029999997          {@{name=Android; share=...

> $j.years[0].months[0].cities[0].platforms[0]
name                                                    share                                                
----                                                    -----                                                
Android                                                 43 

From there it’s a small step to make something which sends the data to the clipboard ready formatted for Excel

$j = Invoke-RestMethod -Uri "http://www.kantarworldpanel.com/php/comtech_data.php"
$(   foreach ($year  in $J.years)  {
        foreach ($month in $year.months) {
            foreach ($city in $month.cities) {
                foreach ($platform in $city.platforms) {
                    ('{0} , {1} , "{2}" , "{3}", {4}' -f $year.year ,
  ($month.month + 1) , $city.name , $platform.name , $platform.share) }}}}) | clip
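An alternative to hand-formatting the strings is to build objects and let ConvertTo-Csv do the quoting – the property names here are my own choice, not anything the feed dictates:

```powershell
$rows = foreach ($year in $j.years) {
    foreach ($month in $year.months) {
        foreach ($city in $month.cities) {
            foreach ($platform in $city.platforms) {
                [pscustomobject]@{
                    Year     = $year.year
                    Month    = $month.month + 1
                    City     = $city.name
                    Platform = $platform.name
                    Share    = $platform.share
                }
            }
        }
    }
}
$rows | ConvertTo-Csv -NoTypeInformation | clip
```

Objects also mean the same data can go to Sort-Object or Group-Object before it ever reaches the clipboard.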

Web page to Excel-ready in less time than it takes to describe it. The other big win with these two cmdlets is that they understand how to keep track of sessions (which technically means Invoke-RestMethod can also work with things which aren’t really RESTful), and Invoke-WebRequest understands forms on the page. I can log on to a photography site I use like this:
if (-not $cred) { $Global:cred  = Get-Credential -Message "Enter logon details For the site"}
$login                          = Invoke-WebRequest "$SiteRoot/login.asp" -SessionVariable SV
$login.Forms[0].Fields.email    = $cred.UserName
$login.Forms[0].Fields.password = $cred.GetNetworkCredential().Password
$homePage                       = Invoke-WebRequest ("$SiteRoot/" + $login.Forms[0].action) -WebSession $SV -Body $login.Forms[0].Fields -Method Post
if ($homePage.RawContent        -match "href='/portfolio/(\w+)'")
    {$SiteUser                    = $Matches[1]}

Provided my subsequent calls to Invoke-WebRequest and Invoke-RestMethod specify the same session variable in their -WebSession parameter, the site treats me as logged in and gives me my data (which is mostly ready-prepared JSON, with the occasional item which needs to be parsed out of the HTML) rather than just the public data. With this I can get straight to the data I want, with only the occasional need to resort to regular expressions. Once you have done one or two of these you start seeing a pattern: either pick up XML or JSON data directly, or isolate the links on a page of HTML which lead to the data you want, and then isolate the table which holds the data.
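Once logged on, a later request just quotes the same session – the URL below is a placeholder for whichever page of the site holds the data, not a real endpoint:

```powershell
# $SV was created by -SessionVariable on the login request; passing it back
# with -WebSession re-sends the site's cookies, so we stay logged in
$data = Invoke-RestMethod -Uri "$SiteRoot/mydata.asp" -WebSession $SV
```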

For example

$SeasonPage  =  Invoke-WebRequest -Uri  "http://www.formula1.com/results/season/2014/"
$racePages   =  $SeasonPage.links | where  href -Match "/results/season/2014/\d+/$"
will give me a list in $racePages of all the results pages for the 2014 F1 season – I can loop through them, setting $R to each of the different URLs.

$racePage        = Invoke-WebRequest -Uri $R
$contentmainDiv  = ([xml]($racePage.ParsedHtml.getElementById("contentmain").outerhtml -replace [char]160," ")).div
$racedate        = [datetime]($contentmainDiv.div[0].div.span -replace "^.*-","")

$raceName        = $contentmainDiv.div[0].div.h2

Sometimes finding the HTML section to convert to XML is easy – here it needed a little tweak to replace non-breaking spaces, because the [XML] type converter doesn’t like them – but once I have $contentMainDiv I can start pulling race data out. Twenty years on, Excel is still the tool of choice, and how I get data into Excel will have to wait for another day.
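Put together, the whole loop looks something like this sketch – it assumes the page structure as it was in 2014 (which may well have changed), and the relative-link handling is my guess at what the site returned:

```powershell
$SeasonPage = Invoke-WebRequest -Uri "http://www.formula1.com/results/season/2014/"
$racePages  = $SeasonPage.links | Where-Object href -Match "/results/season/2014/\d+/$"

foreach ($link in $racePages) {
    $R        = "http://www.formula1.com" + $link.href   # assumes hrefs are site-relative
    $racePage = Invoke-WebRequest -Uri $R
    # Replace non-breaking spaces (char 160), which the [XML] converter rejects
    $contentmainDiv = ([xml]($racePage.ParsedHtml.getElementById("contentmain").outerhtml `
                             -replace [char]160, " ")).div
    [pscustomobject]@{
        Date = [datetime]($contentmainDiv.div[0].div.span -replace "^.*-", "")
        Race = $contentmainDiv.div[0].div.h2
    }
}
```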

May 19, 2014

Being an F1 stats nerd. When can you say “dominant” ?

Filed under: Uncategorized — jamesone111 @ 12:16 pm

I’ve been enjoying the Formula One this season, not least because my daughter is a major fan of Lewis Hamilton and loves seeing him putting a string of wins together. Someone has probably told him already that no one has won four races in a row and not won the world drivers championship, so he’s looking good. I like the new cars – I can hear things like the noise as they go over the kerbs which was drowned out before; I like the fact they have more power but less grip, and I like the clever turbo chargers which use spare exhaust pressure to make electricity and then use electricity to run the turbo like an electric supercharger. And so on.

And then people say it’s boring. TOSH! Bahrain was the most exciting race for years; Spain had so many battles between pairs of cars up and down the field that the TV director didn’t know where to turn. “But Hamilton is winning all the time” say people who have already forgotten that last season ended with Vettel winning a record nine races in a row, tying Michael Schumacher’s record of 13 wins in a season. (Schumacher did it in a shorter season – 2004, when Ferrari won 15 out of 17 races. Ferrari also won 15 races in the 18-race 2002 season, and McLaren won 15 races back in 1988 – which was only a 16-race season.) The word dominant keeps getting trotted out, and I keep saying “too soon to say”, so to save myself from typing up why I think that in on-line discussions I thought I’d write something I can link to… hence this post.

I found that with the aid of a little PowerShell it is quite easy to get all the results from all 65 seasons of F1, and to crunch them with a bit more PowerShell and some Excel. With some summaries of the stats one should be able to see the difference between successful and dominant. The first thing I found: most years a single team wins at least half the races – it has happened in 22 of the last 30 seasons. Oddly, four of the eight times it hasn’t happened have been in the last six seasons. Winning most of the races is certainly not dominance. In fact, in the list below of teams which won half the races or more, McLaren won 10 of 19 in 2005 and Ferrari won 9 of 18 in 2006, and Renault picked up the drivers’ and constructors’ championships in both years. (Note also that since 1992 all the teams which hit the 50% of wins mark had Adrian Newey designing the car or Michael Schumacher driving it, except for McLaren in 2005 and Ferrari in 2007. Boring is knowing early on that Michael, or the better of the two guys with the Newey cars, will win again.)
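That per-season summary is a couple of pipeline steps once the results are flattened into objects – the $results shape here (one object per race, with Season and WinningTeam properties) is my own choice for holding the scraped data:

```powershell
# $results: one object per race, e.g. [pscustomobject]@{Season=1988; WinningTeam="McLaren"}
$results | Group-Object Season | ForEach-Object {
    # Most successful team within this season
    $best = $_.Group | Group-Object WinningTeam |
            Sort-Object Count -Descending | Select-Object -First 1
    [pscustomobject]@{
        Season     = $_.Name
        Team       = $best.Name
        Won        = $best.Count
        OutOf      = $_.Count
        Proportion = "{0:P0}" -f ($best.Count / $_.Count)
    }
} | Where-Object { $_.Won * 2 -ge $_.OutOf }   # seasons where one team won half or more
```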

  Year Team Won Out of Proportion
1988 McLaren 15 16 94%
2002 Ferrari 15 17 88%
2004 Ferrari 15 18 83%
2013 Red Bull Racing 13 20 65%
1984 McLaren 12 16 75%
1996 Williams 12 16 75%
2011 Red Bull Racing 12 19 63%
1995 Benetton 11 17 65%
1989 McLaren 10 16 63%
1992 Williams 10 16 63%
1993 Williams 10 16 63%
2000 Ferrari 10 17 59%
2005 McLaren 10 19 53%
1986 Williams 9 16 56%
1987 Williams 9 16 56%
1998 McLaren 9 16 56%
2001 Ferrari 9 17 53%
2007 Ferrari 9 17 53%
2006 Ferrari 9 18 50%
1991 McLaren 8 16 50%
1994 Benetton 8 16 50%
2003 Ferrari 8 16 50%

To qualify as dominant you probably need to win three-quarters of a season’s races. And if you judge based solely on wins, there is only one way to beat 1988’s “every race but one in a season”. That year and the following year, McLaren took every pole position but one; Williams did the same in 1992 and 1993, and those were seasons of 16 races. Red Bull got 18 poles out of 19 in 2011. Pole can be a better judge of outright speed, because accidents, car failures, driver errors, pit-stop mishaps and a host of other things can mean the fastest car isn’t the one which takes the chequered flag.

Mercedes have had 4 successive races with Pole, First place, and Second place. That’s a pretty good sign of dominating a race. Ferrari did that five times at the end of 2002, so if Mercedes repeat the feat  at Monaco next weekend it will equal a record. But McLaren did it 10 times in 16 races in 1988; Ferrari did it 9 times in 17 races in 2002, and 8 times in 18 races in 2004. It needs a few more clean sweeps to stand comparison with those season-long displays of dominance.

Most 1:2 finishes in a season



1-2s Out of Proportion Winning team Season With pole
10 16 63% McLaren-Honda 1988 All 10 with pole
9 17 53% Ferrari 2002 6 of 9 with pole
8 18 44% Ferrari 2004 7 of 8 with pole
6 8 75% Ferrari 1952 All 6 with pole
6 16 38% Williams-Renault 1992 All 6 with pole
6 16 38% Williams-Renault 1996 4 with Pole
5 16 31% McLaren-Mercedes 1998 4 of 5 with pole
4 7 57% Mercedes 1955 3 of 4 with pole
4 9 44% Ferrari 1953 2 of 4 with pole
3 7 43% Alfa Romeo 1950 All 3 with pole
3 8 38% Ferrari 1956 All 3 with pole
3 8 38% Ferrari 1961 All 3 with pole
3 10 30% Cooper-Climax 1960 1 of 3 with pole
Selection: > 30% 1:2 finishes; some seasons with 3 or 4 in longer seasons excluded

Looking at consecutive results was easier with a little bit of PowerShell than trying to do it in a database, and I’ll write that up another time. A team winning 5 in a row – as Mercedes have – is good, but McLaren managed 11 in a row in 1988, Ferrari did 10 in a row in 2002, and Red Bull finished last year with 9 in a row (all by Vettel).
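Counting the longest streak is a single pass over the races in date order – awkward to express in SQL, trivial in a loop. Again the $races shape (per-race objects with a Winner property, pre-sorted by date) is my own:

```powershell
# $races: race results sorted by date, each with a Winner property
$streak = 0; $prevWinner = $null
$best   = 0; $bestDriver = $null
foreach ($race in $races) {
    if ($race.Winner -eq $prevWinner) { $streak++ }
    else                              { $streak = 1; $prevWinner = $race.Winner }
    if ($streak -gt $best)            { $best = $streak; $bestDriver = $race.Winner }
}
"$bestDriver : $best wins in a row"
```

The same loop counts consecutive podiums or poles by swapping the property it compares.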

Consecutive Wins   Consecutive Podiums
Sequence Driver Up to   Sequence Driver Up to
9 Sebastian Vettel Brazilian 24/11/2013   15 Michael Schumacher Japanese 13/10/2002
7 Alberto Ascari Argentine 18/01/1953   11 Sebastian Vettel British 10/07/2011
7 Michael Schumacher Hungarian 15/08/2004   10 Fernando Alonso Canadian 25/06/2006
5 Jack Brabham Portuguese 14/08/1960   8 Nelson Piquet Italian 06/09/1987
5 Jim Clark German 01/08/1965   8 Ayrton Senna Belgian 28/08/1988
5 Nigel Mansell San Marino 17/05/1992   7 Alberto Ascari Argentine 18/01/1953
4 Alain Prost German 25/07/1993   7 Niki Lauda Monaco 30/05/1976
4 Ayrton Senna Belgian 28/08/1988   7 Alain Prost French 03/07/1988
  and Monaco 12/05/1991     and Monaco 07/05/1989
4 Damon Hill Argentine 07/04/1996   7 Nigel Mansell Monaco 31/05/1992
4 Fernando Alonso Canadian 25/06/2006   6 Jim Clark Italian 08/09/1963
4 Jenson Button Turkish 07/06/2009   6 Damon Hill Japanese 06/11/1994
4 Jochen Rindt German 02/08/1970   6 Mika Hakkinen Italian 10/09/2000
4 Lewis Hamilton Spanish 11/05/2014   6 Lewis Hamilton United States 17/06/2007
3 Alan  Jones Dutch 26/08/1979   5 Jack Brabham Portuguese 14/08/1960
  and USA West 15/03/1981   5 Graham Hill Monaco 26/05/1963
3 Jackie Stewart British 19/07/1969   5 Jackie Stewart Italian 07/09/1969
  and German 01/08/1971   5 Alan  Jones Brazilian 29/03/1981
3 Juan Manuel Fangio Italian 05/09/1954   5 Carlos Reutemann Argentine 12/04/1981
3 Mika Hakkinen Brazilian 29/03/1998   5 Rubens Barrichello Japanese 13/10/2002
3 Niki Lauda Swedish 08/06/1975   5 Nico Rosberg Spanish 11/05/2014
  and South African 06/03/1976   Including each driver’s longest streak only
3 Stirling Moss Argentine 19/01/1958          
Including each driver’s longest streak only          

Nico Rosberg has had 5 successive podiums, but Hamilton started his career with a run of 6, and both pale into insignificance against the 15 that Michael Schumacher managed in 2002. (Ferrari had a run of 53 podiums, including every race of 2000, 2001 and 2002 – they missed four in 2003, but got one in every race of 2004. They had a block of 88 races where they had at least one driver on the podium in 84, and both drivers on the podium 37 times; they won 58 races and took 53 pole positions.) Ferrari in 2001–2004, Red Bull in 2010–2013 and McLaren in 1988–1991 all won more than half the races and took more than half the poles averaged over a block of four years. Maybe one has to view true dominance as something that lasts more than one season. A quarter of the way into a season one might wonder if this will turn into something of that kind, but it’s too soon to rank it alongside them. In 4 years Vettel (with 34 wins) and Schumacher (with 39) each won more races than any other driver – except Prost and Senna – has won in their whole career. With Prost for a teammate, Senna “only” clocked up 27 wins in the 64 races of 1988–91 – the same as Jackie Stewart managed in a 99-race career, and only bettered by Alonso and Mansell in their longer careers.

Since I mentioned Nigel Mansell I have to talk about his rampancy in 1992 – in the first 14 races: 9 wins, second place 3 times, and 2 failures to finish. Even that is put in the shade by Schumacher’s start to 2004: 12 wins, a second place and a failure to finish. So far Lewis has 4 wins and a DNF – he has a rare chance to match these displays, but – and sorry to bang on about this – the job is so far only half done.

I woke to the news that Sir Jack Brabham had died: a three-times world champion, he won 14 of the 126 races he started – one more than Vettel and Schumacher each won in their most successful single seasons. For completeness, Emerson Fittipaldi and Graham Hill each won two championships with a career tally of 14 wins. When Mansell won nine races in a year it seemed superhuman, and maybe double figures was impossible; then it happened four times in 12 seasons.

It will be interesting to see if 2014 continues as it started, because then it will be a dominant season to compare with McLaren in 88/89 and 98, Williams in 92/93 and 96, Ferrari in 2002 and 2004, or Red Bull in 2011 and 2013. Of those, which driver was going to be champion was only in serious question in 88 (Prost or Senna). At least Rosberg has some chance of beating Hamilton – though don’t tell my daughter that!

March 19, 2014

Exploring the Transcend Wifi-SD card

Filed under: Linux / Open Source,Photography — jamesone111 @ 1:37 pm
Tags: , , , , ,

There are a number of variations on the saying “Never let a programmer have a soldering iron, and never let an engineer have a compiler”.

It’s been my experience over many years that hardware people are responsible for rubbish software. Years upon years of shoddy hardware drivers, dreadful software bundled with cameras (Canon, Pentax: I’m looking at both of you), printers (HP, Epson) and scanners (HP – one day I might forgive you) have provided the evidence. Since leaving Microsoft I’ve spent more time working with Linux, and every so often I get into a rant about the lack of quality control: not going back and fixing bugs, not writing proper documentation (the “who needs documentation when you’ve got Google” attitude meant that, when working on one client problem, all we could find told us it could not be solved – only a lucky accident found the solution). Anyone can program: my frustrations arise when they do it without a proper specification, testing regime, documentation and “after care”. The question is… what happens when engineers botch together an embedded Linux system?

Let me introduce you to what I believe to be the smallest commercially available Linux computer and web server.

I’ve bought this in its Transcend form, which is available for about £25. It’s a 16GB memory card, an ARM processor and a WiFi chip, all in an SD card package. Of course chip designers will be able to make it smaller, but since it’s already too easy to lose a Micro-SD card, I’m not sure there would be any point in squeezing it into a smaller form factor. Transcend aren’t the only firm to use the same hardware: there is a page on OpenWrt.org which shows that Trek’s Flu-Card and PQI’s Aircard use the same hardware and core software. The Flu card is of particular interest to me, as Pentax have just released the O-FC1: a custom version of the Flu card with additional functions, including the ability to remotely control their new K3 DSLR. Since I don’t have the K3 (yet) and the Pentax card is fairly expensive, I went for the cheap generic option.

The way these cards work is different from the better-known Eye-Fi card. They are SERVERS: they don’t upload pictures to a service by themselves; instead they expect a client to come to them, discover the files it wants and download them. The way we’re expected to do this is using HTTP, either from a web browser or from an app on a mobile device which acts as a wrapper for the same HTTP requests. If you want your pictures to be uploaded to photo-sharing sites like Flickr, Photobucket or SmugMug, online storage like Dropbox or OneDrive (née SkyDrive), or social media sites (take your pick), these cards – as shipped – won’t do that. Personally I don’t want that, so that limitation’s fine. The cards default to being an access point on their own network – which is known as “Direct Share mode” – it feels odd, but it can be changed.

Various people have reported that WiFi functionality doesn’t start if you plug the card into the SD slot of a laptop, and it’s suggested this is a function of the power supplied. Transcend supply a USB card reader in the package, and plugged into it my brand-new card soon popped up as a new wireless network. It’s not instant – there’s an OS to load – but it’s less than a minute. This has another implication for use in a camera: if the camera powers down, the network goes with it, so the camera has to stay on long enough for you to complete whatever operations are needed.

The new card names its network WIFISD, and switching to that – which has a default key of 12345678 – gave me a wireless connection with a nominal speed of 72Mbits/sec and a new IP configuration: a connection-specific DNS suffix of WifiCard, an IP address on the card’s network, and a DNS server, default gateway and DHCP server all at the same address – that’s the card itself acting as the server. The first thing I did was point my browser at that address and enter the login details (user admin, password admin), and hey presto, up came the home page. This looks like it was designed by someone with the graphic design skills of a hardware engineer, or possibly a blind person. I mean, I know the card is cheap, but effort seems to have gone into making it look cheap AND nasty.

However, with the [F12] developer tools toggled on in Internet Explorer I get to my favourite tool: the network monitor. First of all I get a list of what has been fetched, and if I look at the details for one of the requests, the response headers tell me the clock was set to 01 Jan 2012 when the system started, and the server is Boa/0.94.14rc21.

The main page has 4 other pages which are arranged as a set of frames: frame1 is the menu on the left, frame2 is the banner (it only contains Banner.jpg) and frame3 initially holds page.html, which just contains page.jpg; there is a blank.html to help the layout. Everything of interest is in frame1, and what is interesting is that you can navigate to frame1.html without entering a user name and password – and from there you can click Settings and reset the admin password.
The settings page is built by a Perl script (/cgi-bin/kcard_edit_config_insup.pl), and if you view the page source, the administrator password is there in the HTML form – so you don’t even need to reset it. Secure? Not a bit of it. Within 5 minutes of plugging the card in I’d found a security loophole (I was aware of others before I started, thanks to the OpenWrt page and Pablo’s investigation). I love the way that Linux fans tell me you can build secure systems with Linux (true) and that it can be used on tiny bits of embedded hardware where Windows just isn’t an option (obviously true here): but you don’t automatically get both at the same time. A system is only as good as the people who specified, tested, documented and patched it.

While I had the settings page open I set the card to work in “internet mode” by default and gave it the details of my access point. You can specify 3 access points; it seems that if the card can’t find a known access point it drops back to Direct Share mode so you can get in and change the settings (I haven’t tried this for myself). So now the card is on my home WiFi network, with an address from that network (the card does nothing to tell you the address, so you have to discover it for yourself). Since there is just a process of trying to connect to an access point with a pre-shared key, any hotspots which need a browser-based sign-on won’t work.

The next step was to start exploring the File / Photo / Video pages. Using the same IE monitor as before it’s quite easy to see how they work – although Files is a Perl script and Pictures & Videos are .cgi files, the result is the same: a page which calls /cgi-bin/tslist?PATH=%2Fwww%2Fsd%2FDCIM and processes the results. What’s interesting is that path, /www/sd/DCIM. It looks like an absolute path… so what is returned by changing the path to, for example, / ? A quick test showed that /cgi-bin/tslist?PATH=%2F does return the contents of the root directory. So /cgi-bin/tslist?PATH={whatever} requires no security and shows the contents of any directory.
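From PowerShell that unauthenticated directory listing is a one-liner – $cardIP here is a placeholder for whatever address your card was given on your network:

```powershell
# $cardIP holds the card's address (discover it from your router's DHCP leases)
$path = [uri]::EscapeDataString("/")     # encodes "/" as %2F
Invoke-RestMethod -Uri "http://$cardIP/cgi-bin/tslist?PATH=$path"
# returns the contents of the card's root directory - no credentials needed
```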
The Pictures page shows additional calls to /cgi-bin/tscmd?CMD=GET_EXIF&PIC={full path} and /cgi-bin/thumbNail?fn={full path}; the Files page makes calls to /cgi-bin/tscmd?CMD=GET_FILE_INFO&FILE={full path}. (The picture EXIF is a bit disappointing: it doesn’t show lens, shutter settings, camera model or exposure time, it just shows file size – at least with files we see the modified date. The thumbnail is also a disappointment: there is a copy of DCRAW included on the system which is quite capable of extracting the thumbnail stored in the raw files, but it’s not used.)
And there is a link to download the files: /cgi-bin/wifi_download?fn={name}&fd={directory}. By the way, notice the lack of consistency in parameter naming – the same role is filled by PATH=, PIC=, FILE= and fn=&fd=. Was there an organised specification for this?

Of course I wanted to use PowerShell to parse some of the data that came back from the server, and I hit a snag early on: requesting pages from the card throws an error – “The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF”.

Shock, horror! More sloppiness in the CGI scripts: the last response header is followed not by [CR][LF] but by [LF][LF]. Fortunately Lee Holmes has already got an answer for this one. I also found that the space in my folder path /www/sd/hello James caused a problem: when it ran through [System.Web.HttpUtility]::UrlEncode the space became a + sign, not %20 – and the CGI only accepts %20, so that needs to be fixed up. Grrr.
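Lee Holmes’ fix, as I remember it, flips the internal useUnsafeHeaderParsing flag via reflection; combined with the %20 fix-up it looks like this sketch (the folder path is just my example):

```powershell
# Allow malformed response headers (Lee Holmes' useUnsafeHeaderParsing trick)
$netAssembly = [Reflection.Assembly]::GetAssembly([System.Net.Configuration.SettingsSection])
if ($netAssembly) {
    $flags        = [Reflection.BindingFlags] "Static,GetProperty,NonPublic"
    $settingsType = $netAssembly.GetType("System.Net.Configuration.SettingsSectionInternal")
    $instance     = $settingsType.InvokeMember("Section", $flags, $null, $null, @())
    if ($instance) {
        $field = $instance.GetType().GetField("useUnsafeHeaderParsing",
                                              ("NonPublic", "Instance"))
        if ($field) { $field.SetValue($instance, $true) }
    }
}

# UrlEncode turns spaces into "+", but the card's CGI only understands %20
Add-Type -AssemblyName System.Web
$encoded = [System.Web.HttpUtility]::UrlEncode("/www/sd/hello James") -replace '\+', '%20'
```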

Since we can get access to any of the files on the server, we can examine all the configuration files, and those which control the start-up are of particular interest. Pablo’s post was the first that I saw where someone had spotted that init looks for an autorun.sh script in the root of the SD file system, which can start services which aren’t normally launched. There seems to be only one method quoted for starting an FTP service:
tcpsvd -E 21 ftpd -w / &
There are more ways given for starting the telnet service, and it looks for all the world as if this revision of the Transcend card has a non-working version of telnetd (a lot of the utilities are in a single busybox executable), so Pablo resorted to getting a complete busybox, installing it and using:
cp /mnt/sd/busybox-armv5l /sbin/busybox
chmod a+x /sbin/busybox
/sbin/busybox telnetd -l /bin/bash &

This was the only one which worked for me. Neither FTP nor telnet needs any credentials: with telnet access it doesn't take long to find which Linux kernel it runs, that the Wi-Fi is an Atheros AR6003 11n, and that the package is a KeyASIC WIFI-SD (searching for this turns up pages by people who have already been down this track) – or more specifically a KeyASIC Ka2000 EVM with an ARM926EJ-S CPU, which seems to be used in tablets as well.

Poking around inside the system there are various references to "Instant upload" and to G-PLUS, but there doesn't seem to be anything present to upload to any of the services I talked about before; when shooting gigabytes of photos it doesn't really make sense to send them up to the cloud before reviewing and editing them. In fact even my one-generation-behind camera creates problems of data volume. File transfer with FTP is faster than HTTP, but it is still slow: HTTP manages about 500KBytes/sec and FTP between 750 and 900KBytes/sec. That's just too slow, much too slow.  Looking at some recent studio shoots I've used 8GB of storage in 2 hours: averaging a bit more than 1MB/sec. With my K5, RAW files are roughly 22MB, so each takes about 45 seconds to transfer using HTTP – but the camera can shoot 7 frames in a second and then spend five minutes transferring the files: it's quicker to take the memory card out of the camera, plug it into the computer, copy the files and return the card to the camera. It might be fine for light use, shooting JPGs, but in those situations – which usually mean wandering round snapping a picture here and a picture there – would your WiFi-connected machine be set up and in range?
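As a quick sanity check on those figures (file size and speed in round numbers, as quoted above):

```python
raw_mb   = 22      # typical K5 RAW file size, MB
http_mbs = 0.5     # ~500 KBytes/sec over HTTP

one_file = raw_mb / http_mbs        # seconds for a single RAW file
burst    = 7 * raw_mb / http_mbs    # a full 7-frame burst

print(one_file)    # 44.0 seconds - roughly the 45 quoted
print(burst)       # 308.0 seconds - about five minutes
```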

The sweet spot seems to be running something on a laptop, tablet or phone to transfer preview JPGs – using lower than maximum resolution, and some compression rather than best quality (the worry here is forgetting to go back to best-possible JPEG and turning RAW support off). In this situation it really is a moot point which end is the client and which end is the server. Having the card upload every file to the cloud is going to run into problems with the volume of data, connecting to access points and so on. So is pulling any great number of RAW files off the card. Writing apps to do this might be fun, and of course there's a world of possible hacks for the card itself.

February 26, 2014

Depth of field

Filed under: Uncategorized — jamesone111 @ 7:51 pm

Over the years I have seen a lot written about Depth of Field and recently I’ve seen it explained wrongly but with great passion. So I thought I would post the basic formulae, show how they are derived and explain how they work in practical cases.

So first: a definition. When part of an image is out of focus, that's not an on/off state. There's massively blurred, a little blurred, and such a small amount of blur that it still looks sharply focused: if we magnify the slightly blurred parts we can see that they are not sharply focused. Depth of field is a measure of how far either side of the point of focus the image appears to be properly focused (even though it is very slightly out of focus). 


When a lens focuses an image, the lens-to-subject distance, D, and lens-to-image distance, d, are related to the focal length, f, with the equation
1/D + 1/d = 1/f

We can rearrange this to derive the distance to the subject (D) in terms of focal length (f) and image distance (d)
D = df/(d-f)

Since d is always further than f, we can write the difference as Δ and replace d with f+Δ. Putting that into the previous equation makes it
D = (f² + Δf)/Δ, which re-arranges to
D = (f²/Δ) + f

This can be rearranged into the Newtonian form of the equation:
D - f = f²/Δ, therefore
Δ(D - f) = f², and since Δ = (d - f),
(d - f)(D - f) = f²

We can work out a focus scale for a lens using D = (f²/Δ) + f. Assume we have a 60mm lens, and it moves in or out 1/3mm for each 30 degrees of turn;
When the lens is 60mm from the image we can mark ∞ at the 12 o'clock position: Δ = 0 and D = ∞;
if we turn the ∞ mark to the 1 o'clock position (30 degrees), Δ = 1/3 and D = 3600/(1/3) + 60 = 10860mm ≈ 10.9m, so we can write 10.9 at the new 12 o'clock position;
turn the ∞ mark to the 2 o'clock position (60 degrees): Δ = 2/3 and D = 3600/(2/3) + 60 = 5460mm ≈ 5.5m, so we can write 5.5 at the latest 12 o'clock position;
turn the ∞ mark to the 3 o'clock position (90 degrees): Δ = 1 and D = 3600 + 60 = 3660mm ≈ 3.7m, so this time we write 3.7 at 12 o'clock;
turn the ∞ mark to the 4 o'clock position (120 degrees): Δ = 4/3 and D = 3600/(4/3) + 60 = 2760mm ≈ 2.8m, so 2.8 goes on the scale at 12 o'clock;
turn the ∞ mark to the 5 o'clock position (150 degrees): Δ = 5/3 and D = 3600/(5/3) + 60 = 2220mm ≈ 2.2m;
turn the ∞ mark to the 6 o'clock position (180 degrees): Δ = 2 and D = 3600/2 + 60 = 1860mm ≈ 1.9m, so we can add 2.2 and 1.9 to the scale to finish the job.

And so on. For simplicity of calculation we often consider the extra 60mm insignificant, and D ≈ f²/Δ is usually close enough. It's also worth noting that the roles of D as subject distance and d as image distance can be swapped – the whole arrangement is symmetrical.
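The scale calculation above is easy to script. A minimal sketch using the same figures as the text (f = 60mm, 1/3mm of lens travel per 30 degrees of turn):

```python
f = 60.0                                    # focal length in mm; f squared = 3600
marks = []
for steps in range(1, 7):                   # 1 to 6 increments of 30 degrees
    delta = steps / 3.0                     # lens travel (Δ) in mm
    D = f * f / delta + f                   # subject distance: D = f²/Δ + f, in mm
    marks.append(round(D / 1000, 1))        # convert to metres, one decimal place

print(marks)    # [10.9, 5.5, 3.7, 2.8, 2.2, 1.9]
```

The printed list matches the scale markings worked out by hand above.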

In the diagram above, the blue lines show the lens focused at a distance D, which has a lens-to-image distance (d) of (f+Δ), with D = f²/Δ + f. Something further away than D will not come to a sharp focus at the image plane, but at some distance in front of it (something nearer than D will come to a focus behind the image plane). Focused rays of light form a cone: if the point of the cone is not on the image plane, the rays form a disc which is called a "circle of confusion". The red lines in the diagram illustrate the case for something at infinity and show how a smaller aperture width (bigger f/ number) leads to a smaller circle of confusion. 
The only factors which determine the size of the circle that is formed are focal length, aperture, and the distance between the lens and the image (i.e. the distance at which the lens is focused). Two set-ups using the same focal length and same aperture, focused at the same distance, will produce the same size circle regardless of the size of the recording medium which captures the image, the size of the image circle produced by the lens, or any other factor.

A point at infinity will form an image at a distance f behind the lens (that's the definition of focal length), so we know it forms an image Δ in front of the film/sensor in the set-up in the diagram.
The red lines form two similar triangles between the lens and the image. The “base” of the large one is w (the aperture width) and its "height" is f.
We normally write aperture as a ratio between width and focal length, e.g. f/2 means the aperture’s width is half the focal length.
Since f = aw (where a is the f/ number), we can say this triangle has a base of w and a height of w*a.

The base of smaller triangle is the circle of confusion from the mis-focused point at infinity.
This circle’s diameter is normally written as c, so using similar triangles the height of the smaller triangle must be its base * a, so:
Δ = c * a

As the lens moves further away from the image, the circle for the point at infinity gets bigger: a small enough circle looks like a point, but there comes a size where we can see it is a circle.
If we know that size, we can calculate the value of Δ as c*a, and since we know that D = (f²/Δ) + f, we can define the subject distance at which a point at infinity starts to look out of focus as
(f²/ca) + f.
This distance is known as the hyperfocal distance (H). Strictly, H = (f²/ca) + f, but it is usually accurate enough to write H ≈ f²/ca.
Later we'll use a rearrangement of this: since Δ = ca, the simplified form of the equation can be turned into Δ ≈ f²/H.


We can see that we get the same size circle if the image plane is c*a in front of where the image would focus as well as c*a behind it, so we can say:
(1) for a subject at distance D, the lens-to-image distance is approximately (f²/D) + f (more accurately it is (f²/(D-f)) + f), and
(2) the zone of apparent sharp focus runs from anything which would be in focus at (f²/D) + f - ca to anything which would be in focus at (f²/D) + f + ca.

This formula is accurate enough for most purposes, but it would be more accurate to say the range runs from ((f²/(D-f))+f)*(f+ca)/f to ((f²/(D-f))+f)*(f-ca)/f, because this accounts for Δ getting slightly bigger as d increases for nearer and nearer subjects.  The error is biggest at short distances with wide apertures.
A 35mm frame with c = 0.03 and an aperture of f/32 gives c*a ≈ 1. If we focus a 50mm lens at 1m, (f²/D) + f = 52.5,
so the simple form of the formula says an image formed 51.5–53.5mm behind the lens is in the "in focus zone". The long form gives 51.45–53.55.
So instead of the d.o.f extending from 1.716m to 0.764m, it actually goes from 1.774m to 0.754m.
Since we only measure distance to 1 or 2 significant figures, aperture to 1 or 2 significant figures (and f/22 is really f/23), and focal length to the nearest whole mm (and stated focal length can be inaccurate by 1 or 2mm), the simple formula gives the point where most people kind-of feel that the image isn't really properly focused to enough accuracy.
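The comparison above can be reproduced in a few lines (same figures: a 50mm lens focused at 1m, with c·a ≈ 1mm at f/32 on full frame):

```python
f, D, ca = 50.0, 1000.0, 1.0        # mm: 50mm lens focused at 1m; c*a ≈ 1mm (c=0.03, f/32)

d = f * f / D + f                   # simple-form image distance: 52.5mm

def back(d_img):
    # subject distance in metres for an image formed at d_img mm: D = f²/(d-f) + f
    return (f * f / (d_img - f) + f) / 1000

# simple form: the in-focus zone at the image plane is d ± ca
print(back(d - ca), back(d + ca))                      # ~1.717m and ~0.764m
# longer form: d scaled by (f ± ca)/f
print(back(d * (f - ca) / f), back(d * (f + ca) / f))  # ~1.774m and ~0.754m
```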

It's also worth noting that if we have a focus scale like the one outlined above, the same distance either side of the focus point will give the same Δ, so we can calculate Δ for each aperture mark and put depth of field scale marks on a lens.

∞    11.    5.5    3.7    2.8    2.2    1.9M 
^ | ^

If we want to work out d.o.f numbers (e.g. to make our own tables), we know that the lens-to-image distance for the far point (df) is (f²/Df) + f and for the near point (dn) it is (f²/Dn) + f;

therefore, f²/Df + f = f²/D + f – Δ (or + Δ for the near point);

we can remove + f from each side and get f²/Df = f²/D – Δ;

since Δ = f²/H, we can rewrite this as f²/Df = f²/D – f²/H;

the f² terms cancel out, so we get 1/Df = 1/D – 1/H for the far point, and for the near point 1/Dn = 1/D + 1/H;

We can rewrite these as  1/Df =(H-D)/(H*D), for the far point and for the near point 1/Dn =(H+D)/(H*D) so

Df = HD/(H-D), for the far point and for the near point Dn =HD/(H+D)

These produce an interesting series

Focus Distance (D)   Near Point (Dn)   Far Point (Df)
H                    H/2               ∞
H/2                  H/3               H
H/3                  H/4               H/2
H/4                  H/5               H/3
H/5                  H/6               H/4

In other words, if the focus distance is H/x the near point is H/(x+1) and the far point is H/(x-1).

[These formulae: Dn = H/((H/D)+1), Df = H/((H/D)-1) can be re-arranged to Dn = H/((H+D)/D), Df = H/((H-D)/D), and then to Dn = HD/(H+D), Df = HD/(H-D) – the original formulae.]

This can be useful for doing a quick mental d.o.f calculation. A 50mm lens @ f/8 on full frame has a hyperfocal distance of roughly 10m (50²/(0.03*8) + 50 ≈ 10.47m). If I focus at 1m (roughly H/10) the near point is H/11 = 0.909m and the far point is H/9 = 1.111m, so I have roughly 9cm in front and 11cm behind.
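A short sketch that reproduces the hyperfocal figure and checks the H/x pattern numerically:

```python
f, c, a = 50.0, 0.03, 8.0                  # 50mm lens, full-frame circle c = 0.03mm, f/8
H = (f * f / (c * a) + f) / 1000           # hyperfocal distance H = f²/ca + f, in metres
print(round(H, 2))                         # 10.47

near = lambda D: H * D / (H + D)           # near point Dn = HD/(H+D)
far  = lambda D: H * D / (H - D)           # far point  Df = HD/(H-D)

# focusing at H/x puts the near point at H/(x+1) and the far point at H/(x-1)
for x in range(2, 6):
    D = H / x
    assert abs(near(D) - H / (x + 1)) < 1e-9
    assert abs(far(D) - H / (x - 1)) < 1e-9

# focus at 1m: roughly 9cm in front and 11cm behind stays sharp
print(round(near(1.0), 3), round(far(1.0), 3))
```

(Using the exact H of 10.47m rather than the mental-arithmetic 10m gives 0.913m and 1.106m rather than 0.909m and 1.111m – the same answer to the precision that matters in the field.)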

Earlier I said "As the lens moves further away from the image, the circle gets bigger: a small enough circle looks like a point, but there comes a size where it starts looking like a circle. If we know that size…"

How much of the image the circle occupies determines whether it is judged to be still in focus or a long way out of focus. So the value for c must be proportional to the size of the image, after any cropping has been done.

By convention 35mm film used c = 0.03mm and APS-C crop-sensor cameras use c = 0.02mm. Changing image (sensor) size changes the allowable circle size c, and so changes Δ; the depth of field scale on a lens designed for one size of image needs to be adjusted if used on a camera where the image is a different size (on an APS-C camera, reading the scale for 1 stop wider aperture than actually set will give roughly the right reading).

The size of the circle formed does not depend on image size, but the allowable circle size does, and hyperfocal distance and apparent depth of field change when c changes.

Changing sensor size (keeping same position with the same lens and accepting a change of framing).

If we use two different cameras – i.e. use a different circle size – at the same spot, focused on the same place and we use the same focal length and same aperture on both, then the one with the smaller image has less depth of field. It doesn’t matter how we get to the smaller image, whether it is by cropping a big one or starting with a smaller film/sensor size.

We get less d.o.f because c has become smaller, so f²/ca – the hyperfocal distance – has moved further away. When you look at f²/ca, a smaller value of c needs a larger value of a to compensate.

Changing sensor size and focal length (getting the same framing from same position)

If we use two cameras – with different circle sizes – and use different focal lengths to give the same angle of view, but keep the same aperture, then the larger image will have less depth of field, because f and c have gone up by the same factor but f is squared in the equation. A larger value of a is needed to compensate for f being squared.

So: a 50mm @ f/8 on full frame has the approximate field of view and depth of field of a 35mm @ f/5.6 on APS-C. If that's what we want, the full frame camera needs to use a slower shutter speed or higher ISO to compensate, which have their own side effects.

If we want the depth that comes from the 35mm @ f/32 on APS-C, the 50mm probably doesn't stop down to the f/44 needed to give the same depth on full frame.

But if we use the 50mm @ f/1.4 to isolate a subject from the background on full frame, the 35mm probably doesn't open up to the f/1 needed to match it on APS-C.

Changing focal length and camera position

People often think of perspective as a function of the angle of view of the lens. Strictly that isn't correct: perspective is a function of the ratios of subject-to-camera distances. If you have two items the same size with one a metre behind the other and you stand a metre from the nearer one, the far one is 2m away and will appear 1/2 the size. If you stand 10 metres from the first (and therefore 11 metres from the second), the far object will appear 10/11ths of the size. It doesn't matter what else is in the frame. But if you fit a wider lens, the natural response is to move closer to the subject: it is that change of viewpoint which causes the change of perspective. Changing focal length and keeping position constant means the perspective is constant and the framing changes. Changing focal length and keeping framing constant means a change of position, and with it a change of perspective.

If you have two lenses for the same camera and a choice between standing close with a wide angle lens or further away with a telephoto (and accepting the change of perspective for the same framing) we can work out the distances.

Let’s say with the short lens, H is 10 and you stand 5 meters away.

The near point is (10 * 5) / (10+5) = 3.33 : 1.67 meters in front

The far point is (10 * 5) / (10-5) = 10 : 5 meters behind = 6.67 in total

If we double the focal length and stand twice as far away the hyperfocal distance increases 4 fold (if the circle size and aperture don’t change), so we get a d.o.f zone like this

(40*10) / (40+10) = 8 : 2 meters in front

(40*10) / (40-10) = 13.33: 3.33 meters behind =5.33 in total.

Notice the background is more out of focus with the long lens, but there is actually MORE in focus in front of the subject. The wider lens includes more "stuff" in the background and it is sharper – which is why long lenses are thought of as better at isolating a subject from the background.
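The two depth-of-field zones above can be checked with the Dn = HD/(H+D) and Df = HD/(H-D) formulae:

```python
def zone(H, D):
    """Near point, far point and total depth of field (same units as H and D)."""
    near = H * D / (H + D)
    far = H * D / (H - D)
    return near, far, far - near

wide = zone(10, 5)     # short lens: hyperfocal 10m, standing 5m away
tele = zone(40, 10)    # double the focal length (H x4), stand twice as far away

print(wide)            # roughly (3.33, 10.0, 6.67)
print(tele)            # roughly (8.0, 13.33, 5.33)
```

The tele zone reaches closer to the camera relative to the subject (2m in front versus 1.67m) but ends much sooner behind it, matching the observation above.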

Changing camera position and sensor size.

If you only have one lens and your choice is to move further away and crop the image (or use a smaller sensor), or come close and use a bigger image, we can calculate that too. Keeping the full image / close position as the first case from the previous example, we have the near point 1.6667 meters in front and the far point 5 meters behind = 6.67 in total.

If we use half the sensor width, we halve c and double H; if we double the distances we have doubled every term in the equation.

(20*10) / (20+10) = 6.6667 : 3.33 meters (in front)

(20*10) / (20-10) = 20 : 10 meters (behind) – 13.33 meters in total, so you get twice as much in the zone either side of the subject by dropping back and cropping.

July 5, 2013

PowerShell TabCompletion++ or TabCompletion#

Filed under: Databases / SQL,Powershell — jamesone111 @ 9:23 pm

One of the tricks of PowerShell V3 is that it is even cleverer with tab completion / intellisense than previous versions, though it is not immediately obvious how you can take control of it.  When I realised what was possible I had to apply it to a Get-SQL command I had written. I wanted the ISE to be able to give me something like this:


I happened to have Jan Egil Ring's article on the subject for PowerShell Magazine (where he credits another article by Tobias Weltner) open in my browser when I watched Jason Shirk give a talk covering the module he has published via GitHub named TabExpansion++. This module includes custom tab completers for some PowerShell commands which don't have them, and some for legacy programs.  Think about that for a moment: if you use NET STOP in PowerShell instead of in CMD, tab completion fills in the names of the services. Yes, I know PowerShell has a Stop-Service cmdlet, but if you've been using the NET commands since the 1980s (yes, guilty) why stop using them?

More importantly, Jason has designed a framework where you can easily add your own tab completers – which are the basis for intellisense in the ISE. On loading, his module searches all ps1 and psm1 files in the paths $env:PSModulePath and $env:PSArgumentCompleterPath for functions with the ArgumentCompleter attribute – I'll explain that shortly.  When it finds one, it extracts the function body and "imports" it into the TabExpansion++ module. If I write argument-completer functions and save them with my modules (which are in the PSModulePath) then when I load TabExpansion++ …. whoosh! my functions get tab completion.

A lot of my work at the moment involves dealing with data in a MySQL database, for which I have installed the MySQL ODBC driver, and I wrote a function named Get-SQL (which I can just invoke as SQL).
When I first wrote it, it was simple enough: leave an ODBC connection open as a global variable and pump SQL queries into it. After a while I found I was sending a lot of "Select * From table_Name" queries, and so I gave it a –Table parameter which would be built into a select query, and a –gridview parameter which would send the data to the PowerShell grid viewer. Then I found that I was doing a lot of "Desc table_name" queries, so I added a –describe parameter. One way and another the databases have ended up with long table names which are prone to mistyping; this seemed like a prime candidate for an argument completer, so I set about extending TabExpansion++ (does that make it TabExpansion#? In case you haven't noticed, with C# the # sign is ++ ++, one pair above the other).

It takes 4 things to make a tab completer function. First: one or more ArgumentCompleter attributes
[ArgumentCompleter(Parameter = 'table',
                         Command = ('SQL','Get-SQL'),
                      Description = 'Complete Table names for Get-SQL , for example: Get-SQL -GridView -Table ')]

This defines the parameter that the completer works with – which must be a single string. If the completer supports multiple parameters, you must use multiple ArgumentCompleter attributes. 
And it defines the command(s) that the completer works with. The definition can be a string, an array of strings, or even a ScriptBlock. 
The second thing needed is a param block that understands the parameters passed to a tab completer:
param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)

The main one here is $wordToComplete – the partially typed word that tab completion is trying to fill in. However as you can see in the screen shot it is possible to look at the parameters already completed and use them to produce the list of possible values.
$wordToComplete is used in the third part, which is the body that gets those possible parameter values. So in my function I have something a bit like this…
$parameters = Get-TableName | Where-Object { $_ -like "$wordToComplete*" } | Sort-Object

And the final part is to return the right kind of object to the tab completion process; Jason's module has a helper function for this:
$parameters | ForEach-Object {$tooltip = "$_"
                              New-CompletionResult $_ $tooltip}

There is the option to have different text as the tool tip – in some places $tooltip (which is shown in intellisense) would be set to something other than the value being returned. Here I've kept it in place to remind me, rather than calling New-CompletionResult $_ $_.

And that's it. Unload and reload TabExpansion++ and my SQL function now knows how to expand –table. I added a second attribute to allow the same code to handle –describe, and then wrote something to get field names so I could have a pick list for –orderby and –select as well. With –select, intellisense doesn't pop up a second time if you select a name and enter a comma to start a second one, but tab completion works. Here's the finished item:

Function SQLFieldNameCompletion {
   [ArgumentCompleter(Parameter = ('where'),
                        Command = ('SQL','Get-SQL'),
                    Description = 'Complete field names for Get-SQL , for example: Get-SQL -GridView -Table ')]
   [ArgumentCompleter(Parameter = ('select'),
                        Command = ('SQL','Get-SQL'),
                    Description = 'Complete field names for Get-SQL , for example: Get-SQL -GridView -Table ')]
   [ArgumentCompleter(Parameter = ('orderBy'),
                        Command = ('SQL','Get-SQL'),
                    Description = 'Complete field names for Get-SQL , for example: Get-SQL -GridView -Table ')]
   param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)
   If ($DefaultODBCConnection) {
       $TableName = $fakeBoundParameter['Table']
       # Describe the table, keep the columns matching the partly-typed word
       Get-SQL -describe $TableName |
           Where-Object { $_.column_name -like "$wordToComplete*" } |
           Sort-Object -Property column_name |
           ForEach-Object {$tooltip = $_.COLUMN_NAME + " : " + $_.TYPE_NAME
                           New-CompletionResult $_.COLUMN_NAME $tooltip}
   }
}

Which all saves me a few seconds a few dozen times a day.

June 30, 2013

PowerShell where two falses make a true … and other quirks.

Filed under: Powershell — jamesone111 @ 9:11 pm

I was in a conversation on Twitter recently about the way operations in PowerShell aren't always commutative. We expect addition, multiplication and equality operators to work the same way whether we write 2 x 3 or 3 x 2.
But what happens in a language like PowerShell, which tries to flatten out the differences between types and lets you multiply a text string by a number? Let me explain.

Rule 1. If the operator dictates types, then PowerShell will convert the operands to match.
The classic case is the –and and –or operators: PowerShell will convert the operands to Booleans – knowing how the conversion is done opens up some useful shortcuts and also avoids some traps:

Any non-zero number is treated as true. Only zero is false,
for example
> 3.14 -and $true

> 0 -and $true

In some programming languages 8 –or 16 would calculate a "bitwise OR" (also called a binary OR); in other words it would convert 8 to binary 0000 1000, convert 16 to binary 0001 0000, and OR each column to produce 0001 1000 – 24 in decimal. PowerShell provides this functionality through separate operators: ‑bOr, ‑bAnd, –bXor and –bNot.

Any non-empty string is true. Only empty strings are false.
For example.
> "" -and $true

> "Hello, world" -and $true

The string "FALSE" is not empty, and so is treated as the Boolean value True. If you convert $False to a string and back to a boolean it doesn’t come back to false
> [boolean][string]$false

If you need the text "true" to convert to True and "false" to convert to False, you can use the [Convert] class:
> [convert]::ToBoolean("false")

Any non-empty object is treated as true. An empty object, array, or null converts to false.

> (dir *.jpg) -and $true

> (dir *.pjg) -and $true

> @() -and $true

Any array with one element is treated as that element.
> @(0) -and $true

Any array with multiple elements is treated as true even if all those elements are false
> @($false,$false) -and $true

Rule 2. Null is less than anything except another null

> "" -gt $null

Rule 3. If the operator does not dictate types and the arguments are different types, the first operand’s type determines behaviour

This causes bewilderment in people who don’t understand it, because an operator which is normally commutative (works the same with the operands reversed) is only commutative if the operands are of the same type.
> 5 * "25"

> "25" * 5

The string “25” is converted to a number, 5×25 = 125, but if the string is placed first, the multiplication operator repeats it.
> 3.14 -eq $true

> $true -eq 3.14

Converting $true to a number returns 1. Converting a (non-zero) number to a Boolean returns true.
Similarly, converting any non-empty string to a Boolean returns true, but converting $false to a string returns "False".
> $true -eq "false"

> "false" -eq $true

Rule 4. When applied to an array, an operator which returns a Boolean when applied to single items will instead return the array of items which individually return true

> @(1,2,3,4) -gt 2

When you put this together with tests for null, confusion can result: see if you can work out what the following returns.

> @($null,1,2,"",$null) -eq $null

And why the following two return opposite results.

> [boolean](@($null,1,2,"",$null) -eq $null)

> [boolean](@($null,1,2,"") -eq $null)

Hint: if you can't work it out, try

> (@($null,1,2,"",$null) -eq $null).Count

February 8, 2013

Getting SkyDrive to sync the way I want (like Mesh)

Filed under: Windows 7 — jamesone111 @ 10:37 pm

A few days ago Geekwire ran a story entitled "Microsoft, Let's be friends". It began:

Dear Microsoft,
Can we just be friends again? Please?
It’s been exactly five years now since I left you. During our time together, I poured all the emotion and energy I had into the products I helped build for you.

Whilst it is only slightly more than two years since I left Microsoft, that grabbed my interest.  The author goes through some of the products in which he invested emotional capital but saw Microsoft kill off.

  • For 14 years, I used Microsoft Money fanatically … And then you killed it
  • [I] fell in love with FrontPage, … But you threw it under the bus for no apparent reason.
  • I bought my wife a Zune for her birthday … you gave up and now Zune is in the graveyard, may it rest in peace. Meanwhile, I feel like a sucker … again.
  • Microsoft Digital Image Suite was the best image editing package ever to have existed for consumers. Yes, better than Photoshop Elements. … I want you to know that you truly broke my heart when you buried this product.

I never used MS-Money. FrontPage passed me by. Being in a small area which Redmond calls "rest of the world", I never got to buy a Zune. But I felt the heartbreak at the loss of Digital Image Suite. It was one of the few non-Xbox things I ever bought on staff purchase and I still use it. But my present pain is for Windows Live Mesh. Since I left Microsoft this product has quietly kept half a dozen key folders backed up to the cloud and replicated to all the computers I use.
Microsoft have mailed me twice about their producticidal plans for Mesh. Worryingly, they say 40% of Mesh users are using SkyDrive, meaning 60% are not. I never used Mesh's remote access, but it can sync folders from all over my hard disk, and SkyDrive needs folders to share a common root in order to sync them.  I could move some of the folders I sync out of "My Documents" into "SkyDrive", but PowerShell (for example) insists on having its profile folder in a specific location. In short, "out of the box" SkyDrive can't do what Mesh can.

With the death of Mesh now only a week away, I decided to try something I've been meaning to do for ages: create a symbolic link from the SkyDrive folder to the "proper" folders where my files reside.


It's pretty easy. Creating links requires an administrative command prompt (unless you change the system policy on the machine), and you need a /d to tell MkLink it is a directory not a file – then it is a question of the name of the link and the place it links to and – cue drum roll – you have a link.

The folder appears under SkyDrive, and right away the SkyDrive client starts syncing in the background. Maybe it is designed not to hog bandwidth or maybe it's plain slow, but on my computer it took a fair while to copy everything.


Repeat as necessary for the other folders which need to be synced.

August 9, 2012

Getting to the data in Adobe Lightroom–with or without PowerShell

Filed under: Databases / SQL,Photography,Powershell — jamesone111 @ 7:01 am

Some Adobe software infuriates me (Flash), I don't like their PDF reader and use Foxit instead, and apps which use Adobe AIR always seem to leak memory. But I love Lightroom. It does things right – like installation – which other Adobe products get wrong. It maintains a "library" of pictures and creates virtual folders of images ("collections"), but it maintains metadata in the image files so the data stays with the pictures when they are copied somewhere else – something some other programs still get badly wrong. My workflow with Lightroom goes something like this:

  1. If I expect to manipulate the image at all I set the cameras to save in RAW (DNG) format, not JPG (with my scuba diving camera I use CHDK to get the ability to save in DNG)
  2. Shoot pictures – delete any where the camera was pointing at the floor, lens cap was on, studio flash didn’t fire etc. But otherwise don’t edit in the camera.
  3. Copy everything to the computer – usually I create a folder for a set of pictures and put DNG files into a "RAW" subfolder. I keep full memory cards in filing sleeves meant for 35mm slides.
  4. Using PowerShell I replace the IMG prefix with something which tells me what the pictures are but keeps the camera assigned image number. 
  5. Import Pictures into Lightroom – manipulate them and export to the parent folder of the “RAW” one. Make any prints from inside Lightroom. Delete “dud” images from the Lightroom catalog.
  6. Move dud images out of the RAW folder to their own folder. Backup everything. Twice. [I’ve only recently learnt to export the Lightroom catalog information to keep the manipulations with the files]
  7. Remove RAW images from my hard disk

There is one major pain: how do I know which files I have deleted in Lightroom? I don't want to delete them from the hard disk, I want to move them later. It turns out Lightroom uses a SQLite database, and there is a free Windows ODBC driver for SQLite available for download.  With this in place one can create an ODBC data source, point it at a Lightroom catalog and poke about with the data. Want a complete listing of your Lightroom data in Excel? ODBC is the answer. But let me issue these warnings:

  • Lightroom locks the database files exclusively – you can’t use the ODBC driver and Lightroom at the same time. If something else is holding the files open, Lightroom won’t start.
  • The ODBC driver can run UPDATE queries to change the data: do I need to say that is dangerous? Good.
  • There’s no support for this. If it goes wrong, expect Adobe support to say “You did WHAT ?” and start asking about your backups. Don’t come to me either. You can work from a copy of the data if you don’t want to risk having to fall back to one of the backups Lightroom makes automatically

   I was interested in 4 sets of data, shown in the following diagrams. Below is image information with the associated metadata and file information. Lightroom stores images in the Adobe_Images table; IPTC and EXIF metadata link to images – their “image” field joins to the “id_local” primary key in images. Images have a “root file” (in the AgLibraryFile table) which links to a library folder (AgLibraryFolder), which is expressed as a path from a root folder (AgLibraryRootFolder table). The link always goes to the “id_local” field. I could get information about the folders imported into the catalog just by querying these last two tables (outlined in red).


For just the folders, the SQL to fetch this data looks like this:
SELECT RootFolder.absolutePath || Folder.pathFromRoot as FullName
FROM   AgLibraryFolder     Folder
JOIN   AgLibraryRootFolder RootFolder ON RootFolder.id_local = Folder.rootFolder
ORDER BY FullName 
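To see the query working without touching a real catalog, here is a sketch using Python’s built-in sqlite3 module: the table and column names follow the Lightroom schema described above, but the row contents are invented for illustration, and the `?` marks where my PowerShell version inserts its parameter.

```python
import sqlite3

# Mock just the two folder tables from the Lightroom schema
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE AgLibraryRootFolder (id_local INTEGER PRIMARY KEY, absolutePath TEXT);
CREATE TABLE AgLibraryFolder     (id_local INTEGER PRIMARY KEY, rootFolder INTEGER, pathFromRoot TEXT);
INSERT INTO AgLibraryRootFolder VALUES (1, 'C:/Users/James/Pictures/');
INSERT INTO AgLibraryFolder     VALUES (10, 1, 'Peru/RAW/');
""")
# || is SQLite's string-concatenation operator; ? inserts a parameter safely
rows = db.execute("""
    SELECT RootFolder.absolutePath || Folder.pathFromRoot AS FullName
    FROM   AgLibraryFolder     Folder
    JOIN   AgLibraryRootFolder RootFolder ON RootFolder.id_local = Folder.rootFolder
    WHERE  RootFolder.absolutePath || Folder.pathFromRoot LIKE ?
    ORDER BY FullName""", ("%Peru%",)).fetchall()
```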

SQLite is one of the dialects of SQL which doesn’t accept AS in the FROM part of a SELECT statement. Since I run this in PowerShell I also put in a WHERE clause which inserts a parameter. To get all the metadata the query looks like this
SELECT    rootFolder.absolutePath || folder.pathFromRoot || rootfile.baseName || '.' || rootfile.extension AS fullName, 
          LensRef.value AS Lens,     image.id_global,       colorLabels,                Camera.Value       AS cameraModel,
          fileFormat,                fileHeight,            fileWidth,                  orientation,
          captureTime,               dateDay,               dateMonth,                  dateYear,
          hasGPS,                    gpsLatitude,           gpsLongitude,               flashFired,
          focalLength,               isoSpeedRating,        caption,                    copyright
FROM      AgLibraryIPTC              IPTC
JOIN      Adobe_images               image      ON      image.id_local = IPTC.image
JOIN      AgLibraryFile              rootFile   ON   rootfile.id_local = image.rootFile
JOIN      AgLibraryFolder            folder     ON     folder.id_local = rootfile.folder
JOIN      AgLibraryRootFolder        rootFolder ON rootFolder.id_local = folder.rootFolder
JOIN      AgharvestedExifMetadata    metadata   ON      image.id_local = metadata.image
LEFT JOIN AgInternedExifLens         LensRef    ON    LensRef.id_Local = metadata.lensRef
LEFT JOIN AgInternedExifCameraModel  Camera     ON     Camera.id_local = metadata.cameraModelRef

Note that since some images don’t have a camera or lens logged, the joins to those tables need to be LEFT joins, not inner joins. Again the version I use in PowerShell has a WHERE clause which inserts a parameter.
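The difference is easy to demonstrate with a tiny mock of those three tables (again built with Python’s sqlite3 purely for illustration – the data is invented): image 2 below has no lens recorded, so an inner join drops it while a LEFT join keeps it with NULL.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Adobe_images            (id_local INTEGER PRIMARY KEY);
CREATE TABLE AgharvestedExifMetadata (image INTEGER, lensRef INTEGER);
CREATE TABLE AgInternedExifLens      (id_local INTEGER PRIMARY KEY, value TEXT);
INSERT INTO Adobe_images VALUES (1),(2);
INSERT INTO AgharvestedExifMetadata VALUES (1, 100),(2, NULL);
INSERT INTO AgInternedExifLens VALUES (100, '70-200mm');
""")
base = """SELECT image.id_local, LensRef.value
          FROM Adobe_images image
          JOIN AgharvestedExifMetadata metadata ON image.id_local = metadata.image
          {} JOIN AgInternedExifLens LensRef ON LensRef.id_local = metadata.lensRef"""
inner = db.execute(base.format("")).fetchall()      # drops the lens-less image
left  = db.execute(base.format("LEFT")).fetchall()  # keeps it, lens = NULL
```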

OK, so much for file data – the other data I wanted was about collections. The list of collections is in just one table (AgLibraryCollection), so it is very easy to query, but I also wanted to know the images in each collection.


Since one image can be in many collections, and each collection holds many images, AgLibraryCollectionImage is a table which provides a many-to-many relationship. Different tables might be attached to Adobe_images depending on what information one wants about the images in a collection. I’m interested only in mapping files on disk to collections in Lightroom, so I have linked to the file information, and I have a query like this.

SELECT   Collection.name AS CollectionName ,
         RootFolder.absolutePath || Folder.pathFromRoot || RootFile.baseName || '.' || RootFile.extension AS FullName
FROM     AgLibraryCollection Collection
JOIN     AgLibraryCollectionimage cimage     ON collection.id_local = cimage.Collection
JOIN     Adobe_images             Image      ON      Image.id_local = cimage.image
JOIN     AgLibraryFile            RootFile   ON   Rootfile.id_local = image.rootFile
JOIN     AgLibraryFolder          Folder     ON     folder.id_local = RootFile.folder
JOIN     AgLibraryRootFolder      RootFolder ON RootFolder.id_local = Folder.rootFolder
ORDER BY CollectionName, FullName

Once I have an ODBC driver (or an OLE DB driver) I have a ready-made PowerShell template for getting data from the data source. So I wrote functions to let me do:
Get-LightRoomItem -ListFolders -include $pwd
To List folders, below the current one, which are in the LightRoom Library
Get-LightRoomItem  -include "dive"
To list files in LightRoom Library where the path contains  "dive" in the folder or filename
Get-LightRoomItem | Group-Object -no -Property "Lens" | sort count | ft -a count,name
To produce a summary of lightroom items by lens used. And
$paths = (Get-LightRoomItem -include "$pwd%dng" | select -ExpandProperty path)  ;   dir *.dng |
           where {$paths -notcontains $_.FullName} | move -Destination scrap -whatif

  Stores the paths of Lightroom items in the current folder ending in .DNG in $paths; then gets .DNG files in the current folder and moves those which are not in $paths (i.e. not in Lightroom). Specifying -WhatIf allows the moves to be confirmed before they happen.
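Stripped of the pipeline plumbing, the dud-finding logic is just a set difference – paths known to Lightroom versus files on disk, with the difference being what gets moved. A sketch (paths invented for illustration):

```python
# Paths the catalog knows about vs. what is actually in the RAW folder
in_lightroom = {r"E:\raw\IMG_01.dng", r"E:\raw\IMG_02.dng"}
on_disk      = {r"E:\raw\IMG_01.dng", r"E:\raw\IMG_02.dng", r"E:\raw\IMG_03.dng"}

# Files on disk that Lightroom no longer references: candidates to move out
duds = on_disk - in_lightroom
```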

Get-LightRoomCollection to list all collections
Get-LightRoomCollectionItem -include musicians | copy -Destination e:\raw\musicians    copies the original files in the “musicians” collection to another disk

I’ve shared the PowerShell code on Skydrive

August 7, 2012

The cloud, passwords, and problems of trust and reliance

Filed under: Privacy,Security and Malware — jamesone111 @ 9:02 pm

In recent days a story has been emerging of a guy called Mat Honan. Mat got hacked; the hackers wanted his Twitter account simply because he had a three-letter Twitter name. Along the way they wiped his Google mail account and (via Apple’s iCloud) his iPhone, iPad and his Macbook. Since he relied on stuff being backed up in the cloud he lost irreplaceable family photos, and lord only knows what else. There are several possible reactions: Schadenfreude (“Ha, ha, I don’t rely on Google or Apple – look what happens to people who do”), “What an idiot, not having a backup”, or “There but for the grace of God goes any of us”.

Only people who’ve never lost data can feel unsympathetic to Mat, and I’ve lost data. I’ve known tapes which couldn’t be read on a new unit after the old one was destroyed in a fire. I’ve learnt by way of a disk crash that a server wasn’t running its backups correctly. I’ve gone back to optical media which couldn’t be read. My backup drive failed a while back – though fortunately everything on it existed somewhere else, and making a new backup showed me in just how many places. I’ve had memory cards fail in the camera before I had copied the data off them, and I had some photos which existed only on a laptop and a memory card which were in the same bag that got stolen (the laptop had been backed up the day before the photos were taken). The spare memory card I carry on my key-ring failed recently, and I carry that because I’ve turned up to shoot photos with no memory card in the camera – never close the door on the camera with the battery or memory card out. I treat memory cards like film and just buy more and keep the old cards as a backstop copy. So my data practices look like a mixture of paranoia and superstition and I know, deep down, that nothing is infallible.

For many of us everything we have in the cloud comes down to one password. I don’t mean that we logon everywhere with “Secret1066!”  (no, not my password). But most of us have one or perhaps two email addresses which we use when we register.  I have one password which I use on many, many sites which require me to create an identity, but that identity doesn’t secure anything meaningful to me. It doesn’t meet the rules of some sites (and I get increasingly cross with sites which define their own standards for passwords) and on those sites I will set a one-off password, like “2dayisTuesday!”. When I come to use the site again I’ll just ask them to reset my password. Anything I have in the cloud is only as secure as my email password. 
There are some hints here. First: any site which can mail you your current password doesn’t store it properly – the proper way to store passwords is as something computed from the password, so it is only possible to tell if the right password was entered, not what the password is. And second: these computations are case sensitive and set no maximum password length, so any site which is case insensitive or limits password length probably doesn’t have your details properly secured.  Such sites are out there – Tesco for example – and if we want to use them we have to put up with their security. However if they get hacked (and you do have to ask, if they can’t keep passwords securely, what other weaknesses are there?) your user name, email and password are in the hands of the hackers, so you had better use different credentials anywhere security matters – which of course means on your mailbox.
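What “store something computed from the password” means in practice is a salted, deliberately slow hash – here is a minimal Python sketch (not how any particular site does it): verification recomputes the hash and compares, so the site can tell whether the right password was entered without ever storing the password itself, and the check is naturally case sensitive.

```python
import hashlib, hmac, os

def store(password: str):
    # A random salt per user stops identical passwords hashing identically
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest          # this pair is stored; the password is not

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute and compare in constant time
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store("2dayisTuesday!")
```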

So your email password is the one password to rule them all and obviously needs to be secure. But there is a weak link, and that seems to be where the people who hacked Mat found a scary loophole. The easiest way into someone’s mailbox might be to get an administrator to reset the password over the phone – not to guess or brute force it. The only time I had my password reset at Microsoft the new one was left on my voicemail – so I had to be able to login to that. If the provider texts the password to a mobile phone or resets it (say) to the town where you born (without saying what it is) that offers a level of protection; but – be honest – do you know what it takes to get someone at your provider to reset your password, or what the protocol is ?  In Mat’s case the provider was Apple – for whom the hacker knew an exploitable weakness – but it would be naive to think that Apple was uniquely vulnerable.

Mat’s pain may show the risk in having only a mailbox provider’s password-reset policy to keep a hacker out of your computer and/or your (only) backup. One can build up a fear of other things that stop you having access to either computer or backup without knowing how realistic they are.  I like knowing that my last few phones could be wiped easily, but would I want remote wipe of a laptop? When my laptop was stolen there wasn’t any need to wipe it remotely as it had full volume encryption with Microsoft’s BitLocker (saving me a difficult conversation with corporate security) and after this story I’ll stick to that. Cloud storage does give me off-site backup and that’s valuable – it won’t be affected if I have a fire or flood at home – but I will continue to put my faith in traditional off-line backup and I’ve just ordered more disk capacity for that.

July 31, 2012

Rotating pictures from my portfolio on the Windows 7 Logon screen

Filed under: Photography,Powershell — jamesone111 @ 12:15 pm

In the last 3 posts I outlined my Get-IndexedItem function for accessing Windows Search. The more stuff I have on my computers the harder it is to find a way of classifying it so it fits into hierarchical folders: the internet would be unusable without search, and above a certain number of items local stuff is too.  Once I got search I started leaving most mail in my Inbox and Outlook uses search to find what I want; I have one “book” in OneNote with a handful of sections and if I can’t remember where I put something, search comes to the rescue. I take the time to tag photos so that I don’t have to worry too much about finding a folder structure to put them in. So I’ll tag geographically (I only have a few pictures from India – one three-week trip – so India gets one tag, but UK pictures get divided by county, and in counties with many pictures I put something like Berkshire/Reading. Various tools will make a hierarchy with Berkshire, then Newbury, Reading etc.) People get tagged by name – Friends and Family being a prefix to group those – and so on. I use Windows’ star ratings to tag pictures I like – whether I took them or not – and Windows “use top rated pictures” for the Desktop background picks those up. I also have a tag of “Portfolio”

Ages ago I wrote about Customizing the Windows 7 logon screen. So I had the idea “Why not find pictures with the Portfolio tag, and make them logon backgrounds?”  Another old post covers PowerShell tools for manipulating images, so I could write a script to do it, and use Windows scheduled tasks to run that script each time I unlocked the computer so that the next time I went to the logon screen I would have a different picture. That was the genesis of Get-IndexedItem. And I’ve added it, together with New-LogonBackground, to the image module download on the TechNet Script Center

If you read that old post you’ll see one of the things we depend on is setting a registry value, so the function checks that the registry value is set and writes a warning if it isn’t:

if ( (Get-ItemProperty HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background
     ).oembackground -ne 1) {
        Write-Warning "Registry Key OEMBackground under
          HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background needs to be set to 1"

        Write-Warning "Run AS ADMINISTRATOR with -SetRegistry to set the key and try again."
}
So if the registry value isn’t set to 1, the function prints a warning which tells the user to run with –SetRegistry. After testing this multiple times – I found changing the Windows theme resets the value – and forgetting to run PowerShell with elevated permissions, I put in a try / catch to pick this up and say “Run Elevated”. Just as a side note here: I always find when I write try/catch it doesn’t work at first, and it takes me a moment to remember that catch only works on terminating errors, so the command you want to catch usually needs –ErrorAction Stop

if ($SetRegistry) {
  try   { Set-ItemProperty -Name oembackground -Value 1 -ErrorAction Stop `
                 -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background"
  }
  catch [System.Security.SecurityException] {
     Write-Warning "Permission Denied - you need to run as administrator"
  }
}
The function also tests that it can write to the directory where the images are stored, since this doesn’t normally have user access: if it can’t write a file, it tells the user to set the permissions. Instead of using try/catch here I use $? to see if the previous command was successful
Set-Content -ErrorAction SilentlyContinue -Path "$env:windir\System32\oobe\Info\Backgrounds\testFile.txt" `
              -Value "This file was created to test for write access. It is safe to remove"
if (-not $?) {Write-Warning "Can't create files in $env:windir\System32\oobe\Info\Backgrounds please set permissions and try again"}
else         {Remove-Item -Path "$env:windir\System32\oobe\Info\Backgrounds\testFile.txt"}

The next step is to find the size of the monitor. Fortunately, there is a WMI object for that, but not all monitor sizes are supported as bitmap sizes, so the function takes –Width and –Height parameters. If these aren’t specified it gets the values from WMI and allows for a couple of special cases – my testing has not been exhaustive, so other resolutions may need special handling. The width and height determine the filename for the bitmap, and later the function checks the aspect ratio – so it doesn’t try to crop a portrait image to fit a landscape monitor.

if (-not ($width -and $height)) {
    $mymonitor = Get-WmiObject Win32_DesktopMonitor -Filter "availability = '3'" | select -First 1
    $width, $height = $mymonitor.ScreenWidth, $mymonitor.ScreenHeight
    if ($width -eq 1366) {$width = 1360}
    if (($width -eq 1920) -and ($height -eq 1080)) {$width,$height = 1360,768}
}
if (@("768x1280" ,"900x1440" ,"960x1280" ,"1024x1280" ,"1280x1024" ,"1024x768" , "1280x960" ,"1600x1200",
      "1440x900" ,"1920x1200" ,"1280x768" ,"1360x768") -notcontains "$($width)x$($height)" ) {
    Write-Warning "Screen resolution is not one of the defaults. You may need to specify width and height"
}
$MonitorAspect = $Width / $height
$SaveName = "$env:windir\System32\oobe\Info\Backgrounds\Background$($width)x$height.jpg"

The next step is to get the image – Get-Image is part of the PowerShell tools for manipulating images .

$myimage = Get-IndexedItem -path $path -recurse -Filter "Kind=Picture","keywords='$keyword'",
                            "store=File","width >= $width ","height >= $height " |
           Where-Object {($_.width -gt $_.height) -eq ($width -gt $height)} |
           Get-Random | Get-Image

Get-IndexedItem looks for files in the folder specified by the –Path parameter – which defaults to [system.environment]::GetFolderPath([system.environment+specialFolder]::MyPictures), the approved way to find the “My Pictures” folder. –Recurse tells it to look in sub-folders, and it looks for files whose keywords match the –Keyword parameter (which defaults to “Portfolio”). It filters out pictures which are smaller than the screen, and then Where-Object filters the list down to those which have the right aspect ratio. Finally one image is selected at random and piped into Get-Image.

If this is successful, the function logs what it is doing to the event log. I set up a new log source “PSLogonBackground” in the application log by running PowerShell as administrator and using the command
New-EventLog -Source PSLogonBackground -LogName application
Then my script can use that as a source – since I don’t want to bother the user if the log isn’t configured I use -ErrorAction silentlycontinue here
write-eventlog -logname Application -source PSLogonBackground -eventID 31365 -ErrorAction silentlycontinue `
                -message "Loaded $($myImage.FullName) [ $($myImage.Width) x $($myImage.Height) ]"


The next thing the function does is to apply cropping and scaling image filters from the original image module, as required, to get the image to the right size.  When it has done that it tries to save the file, by applying a conversion filter and saving the result. The initial JPEG quality is passed as a parameter; if the file is too big, the function loops round reducing the JPEG quality until the file fits into the 250KB limit, and logs the result to the event log.

Set-ImageFilter -filter (Add-ConversionFilter -typeName "JPG" -quality $JPEGQuality -passThru) -image $myimage -Save $saveName
$item = Get-Item $saveName
while (($item.length -ge 250KB) -and ($JPEGQuality -ge 15)) {
      $JPEGQuality -= 5
      Write-Warning "File too big - Setting Quality to $JPEGQuality and trying again"
      Set-ImageFilter -filter (Add-ConversionFilter -typeName "JPG" -quality $JPEGQuality -passThru) -image $myimage -Save $saveName
      $item = Get-Item $saveName
}
if ($item.length -le 250KB) {
    Write-EventLog -LogName Application -Source PSLogonBackground -ErrorAction SilentlyContinue `
           -EventID 31366 -Message "Saved $($Item.FullName) : size $($Item.length)"
}
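The shrink-until-it-fits loop is worth seeing in miniature. In this Python sketch, fake_encode is an invented stand-in for the real Set-ImageFilter call – it just pretends each quality point costs about 4KB – so the loop’s behaviour can be observed without any imaging library:

```python
LIMIT = 250 * 1024   # the 250KB ceiling Windows imposes on logon backgrounds

def fake_encode(quality):
    # Stand-in encoder: output size grows linearly with quality
    return b"x" * (quality * 4096)

quality = 100
data = fake_encode(quality)
# Drop quality in steps of 5 until the file fits, or quality gets too low
while len(data) >= LIMIT and quality >= 15:
    quality -= 5
    data = fake_encode(quality)
```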


That’s it. If you download the module, remove the “Internet block” on the zip file, expand the files into \users\YourUserName\WindowsPowerShell\modules, and try running New-LogonBackground (with –Verbose to get extra information if you wish).
If the permissions on the folder have been set and the registry key is set, pressing [Ctrl]+[Alt]+[Del] should reveal a new image.  You might want to use a different keyword, a different path, or a higher initial JPEG quality, in which case you can run it with parameters as needed.

Then it is a matter of setting up the scheduled task: here are the settings from my scheduler




The program here is the full path to Powershell.exe and the parameters box contains
-noprofile -windowstyle hidden -command "Import-Module Image; New-logonBackground"

Lock, unlock, and my background changes. Perfect. It’s a nice talking point and a puzzle – sometimes people like the pictures (although someone said one of a graveyard was morbid) – and sometimes they wonder how the background they can see is not only not the standard one, but also not the one they saw previously.

June 30, 2012

Using the Windows index to search from PowerShell: Part three. Better function output

Filed under: Powershell — jamesone111 @ 10:53 am

Note: this was originally written for the Hey, Scripting Guy! blog, where it appeared as the 27 June 2012 episode. The code is available for download. I have some more index-related posts coming up, so I wanted to make sure everything was in one place

In part one, I introduced a function which queries the Windows Index using filter parameters like
  • "Contains(*,’Stingray’)"
  • "System.Keywords = ‘Portfolio’ "
  • "System.Photo.CameraManufacturer LIKE ‘CAN%’ "
  • "System.image.horizontalSize > 1024"

In part two, I showed how these parameters could be simplified to

  • Stingray (A word on its own becomes a contains term)
  • Keyword=Portfolio (Keyword, without the S, is an alias for System.Keywords, and quotes will be added automatically)
  • CameraManufacturer=CAN* (* will become %, and = will become LIKE, quotes will be added and CameraManufacturer will be prefixed with System.Photo)
  • Width > 1024 (Width is an alias for System.image.horizontalSize, and quotes are not added around numbers).

There is one remaining issue. PowerShell is designed so that one command’s output becomes another’s input. This function isn’t going to do much with piped input: I can’t see another command spitting out search terms for this one, nor can I see multiple paths being piped in. But the majority of items found by a search will be files, and so it should be possible to treat them like files, piping them into Copy-Item or whatever.
The following was my first attempt at transforming the data rows into something more helpful

$Provider= "Provider=Search.CollatorDSO; Extended Properties='Application=Windows';"
$adapter = New-Object system.data.oledb.oleDBDataadapter -argument $SQL, $Provider
$ds      = New-Object system.data.dataset
if ($adapter.Fill($ds))
{ foreach ($row in $ds.Tables[0])
            {if ($row."System.ItemUrl" -match "^file:")
                  {$obj = New-Object psobject -Property @{
                                Path = (($row."System.ItemUrl" -replace "^file:","") -replace "\/","\")}}
             else {$obj = New-Object psobject -Property @{Path = $row."System.ItemUrl"}}
             Add-Member -Force -InputObject $obj -Name "ToString" -MemberType ScriptMethod -Value {$this.path}
             foreach ($prop in (Get-Member -InputObject $row -MemberType property |
                                    Where-Object {$row."$($_.name)" -isnot [system.dbnull] }))
                  { Add-Member -ErrorAction SilentlyContinue -InputObject $obj -MemberType NoteProperty `
                               -Name (($prop.name -split "\." )[-1]) -Value $row."$($prop.name)" }
             foreach ($prop in ($PropertyAliases.Keys |
                                    Where-Object {$row."$($propertyAliases.$_)" -isnot [system.dbnull] }))
                  { Add-Member -ErrorAction SilentlyContinue -InputObject $obj -MemberType AliasProperty `
                               -Name $prop -Value ($propertyAliases.$prop -split "\." )[-1] }
            }
}
This is where the function spends most of its time, looping through the data creating a custom object for each row; non-file items are given a path property which holds the System.ItemURL property; for files the ItemUrl is processed into normal format (rather than file:c/users/james format) – in many cases the item can be piped into another command successfully if it just has a Path property.

Then, for each property (database column) in the row a member is added to the custom object with a shortened version of the property name and the value (assuming the column isn’t empty).
Next, alias properties are added using the definitions in $PropertyAliases.
Finally some standard members get added. In this version I’ve pared it down to a single method, because several things expect to be able to get the path for a file by calling its tostring() method.

When I had all of this working I tried to get clever. I added aliases for all the properties which normally appear on a System.IO.FileInfo object and even tried fooling PowerShell’s formatting system into treating my file items as a file object, something that only needs one extra line of code
$Obj.psobject.typenames.insert(0, "SYSTEM.IO.FILEINFO")
Pretending a custom object is actually another type seems dangerous, but everything I tried seemed happy provided the right properties were present. The formatting worked except for the "Mode" column. I found the method which calculates .Mode for FILEINFO objects, but it needs a real FILEINFO object. It was easy enough to get one – I had the path and it only needs a call to Get‑Item – but I realized that if I was getting a FILEINFO object anywhere in the process, then it made more sense to add extra properties to that object and dispense with the custom object. I added an extra switch -NoFiles to suppress this behaviour
So the code then became
$Provider ="Provider=Search.CollatorDSO; Extended Properties='Application=Windows';"
$adapter  = New-Object system.data.oledb.oleDBDataadapter -argument $SQL, $Provider
$ds       = New-Object system.data.dataset
if ($adapter.Fill($ds))
     { foreach ($row in $ds.Tables[0])
         { if (($row."System.ItemUrl" -match "^file:") -and (-not $NoFiles))
                 {$obj = Get-Item -Path (($row."System.ItemUrl" -replace "^file:","") -replace "\/","\")}
            else {$obj = New-Object psobject -Property @{Path = $row."System.ItemUrl"}
                  Add-Member -Force -InputObject $obj -Name "ToString" -MemberType ScriptMethod -Value {$this.path}}
         }
     }
The initial code was 36 lines; making the user input more friendly took it to 60 lines, and the output added about another 35 lines, bringing the total to 95.
There were 4 other kinds of output I wanted to produce:

  • Help. I added comment-based-help with plenty of examples. It runs to 75 lines making it the biggest constituent in the finished product.
    In addition I have 50 lines that are comments or blank for readability as insurance against trying to understand what those regular expressions do in a few months’ time – but there are only 100 lines of actual code.
  • A –list switch which lists the long and short names for the fields (including aliases)
  • Support for the –Debug switch – because so many things might go wrong, I have write‑debug $SQL immediately before I carry out the query, and to enable it I have
    [CmdletBinding()] before I declare the parameters.
  • A –Value switch which uses the GROUP ON… OVER… search syntax so I can see what the possible values are in a column.
    GROUP ON queries are unusual because they fill the dataset with TWO tables.
    GROUP ON System.kind OVER ( SELECT STATEMENT) will produce something like this as the first table.

-----------   -------
communication 0
document      1
email         2
folder        3
link          4
music         5
picture       6
program       7
recordedtv    8

The second table is the normal data suitably sorted. In this case it has all the requested fields grouped by kind plus one named "Chapter", which ties into the first table. I’m not really interested in the second table but the first helps me know if I should enter "Kind=image", "Kind=Photo" or "Kind=Picture"

I have a Select-List function which I use in my configurator and Hyper-V library, and with this I can choose which recorded TV program to watch, first selecting by title, and then, if there is more than one episode, by episode.
$t=(Get-IndexedItem -Value "title" -filter "kind=recordedtv" -recurse |
            Select-List -Property title).title
start (Get-IndexedItem -filter "kind=recordedtv","title='$t'" -path |

In a couple of follow up posts I’ll show some of the places I use Get-IndexedItem. But for now feel free to download the code and experiment with it.

Using the Windows index to search from PowerShell:Part Two – Helping with user input

Filed under: Uncategorized — jamesone111 @ 10:46 am

Note: this was originally written for the Hey, Scripting Guy! blog, where it appeared as the 26 June 2012 episode. The code is available for download. I have some more index-related posts coming up, so I wanted to make sure everything was in one place

In part one I developed a working PowerShell function to query the Windows index. It outputs data rows, which isn’t the ideal behaviour, and I’ll address that in part three; in this part I’ll address another drawback: search terms passed as parameters to the function must be "SQL-ready". I think that makes for a bad user experience, so I’m going to look at the half dozen bits of logic I added to allow my function to process input which is a little more human. Regular expressions are the way to recognize text which must be changed, and I’ll pay particular attention to those as I know a lot of people find them daunting.

Replace * with %

SQL statements use % for the wildcard, but selecting files at the command prompt traditionally uses *. It’s a simple matter to replace – but for the need to "escape" the * character, replacing * with % would be as simple as a –replace statement gets:
$Filter = $Filter -replace "\*","%"
For some reason I’m never sure if the camera maker is Canon or Cannon so I’d rather search for Can*… or rather Can%, and that replace operation will turn "CameraManufacturer=Can*" into "CameraManufacturer=Can%". It’s worth noting that –replace is just as happy to process an array of strings in $filter as it is to process one.

Searching for a term across all fields uses "CONTAINS (*,’Stingray’)", and if the -replace operation changes * to % inside a CONTAINS() the result is no longer a valid SQL statement. So the regular expression needs to be a little more sophisticated, using a "negative look behind"
$Filter = $Filter -replace "(?<!\(\s*)\*","%"

In order to filter out cases like CONTAINS(*… , the new regular expression qualifies "match on *" with a look behind – "(?<!\(\s*)" – which says "if it isn’t immediately preceded by an opening bracket and any spaces". In regular expression syntax (?=x) says "look ahead for x" and (?<=x) says "look behind for x"; (?!x) is “look ahead for anything EXCEPT x” and (?<!x) is “look behind for anything EXCEPT x”. These will see a lot of use in this function. Here (?<! ) is being used; the open bracket needs to be escaped so it is written as \( and \s* means 0 or more spaces.
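The same idea can be tried out in Python’s re module, with one caveat worth noting: .NET regex (which PowerShell uses) allows the variable-length lookbehind (?<!\(\s*) shown above, whereas Python requires fixed-width lookbehinds, so this sketch only guards against a * immediately after the opening bracket:

```python
import re

def star_to_percent(term):
    # Convert the command-line wildcard * to SQL's %, but leave the * that
    # belongs to CONTAINS(*, ...) alone via a negative lookbehind on "("
    return re.sub(r"(?<!\()\*", "%", term)
```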

Convert "orphan" search terms into ‘contains’ conditions.

A term that needs to be wrapped as a "CONTAINS" search can be identified by the absence of quote marks, = , < or > signs or the LIKE, CONTAINS or FREETEXT search predicates. When these are present the search term is left alone, otherwise it goes into CONTAINS, like this.
$filter = ($filter | ForEach-Object {
    if     ($_ -match "'|=|<|>|like|contains|freetext") {$_}
    else   {"Contains(*,'$_')"}
})
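The orphan-wrapping step translates almost directly into a Python sketch – terms with no operator or search predicate get wrapped in CONTAINS(*,'…'), everything else passes through untouched:

```python
import re

def wrap_orphans(filters):
    # Leave terms alone if they already contain a quote, an operator, or one
    # of the search predicates; otherwise wrap them as a CONTAINS search
    return [f if re.search(r"'|=|<|>|like|contains|freetext", f, re.IGNORECASE)
            else f"Contains(*,'{f}')"
            for f in filters]
```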

Put quotes in if the user omits them.

The next thing I check for is omitted quote marks. I said I wanted to be able to use Can*, and we’ve seen it changed to Can%, but the search term needs to be transformed into "CameraManufacturer=’Can%’". Here is a –replace operation to do that.
$Filter = $Filter -replace "\s*(=|<|>|like)\s*([^'\d][^\s']*)$",' $1 ''$2'' '
This is a more complex regular expression which takes a few moments to understand

Taking the regular expression "\s*(=|<|>|like)\s*([^'\d][^\s']*)$" a piece at a time:

  • \s* – any spaces (or none)
  • (=|<|>|like) – = or < or > or "like"
  • [^'\d] – anything which is NOT a ‘ character or a digit
  • [^\s']* – any number of non-quote, non-space characters (or none)
  • $ – end of line
  • The brackets capture the enclosed sections as matches: for "CameraManufacturer=Can%", $Matches[0] = "=Can%", $Matches[1] = "=" and $Matches[2] = "Can%"

The replacement text ' $1 ''$2'' ' replaces $Matches[0] ("=Can%") with an expression which uses the two submatches "=" and "Can%", giving " = 'Can%' ".

Note that the expression which is being inserted uses $1 and $2 to mean $Matches[1] and $Matches[2] – if it were wrapped in double quote marks, PowerShell would try to evaluate these terms before they got to the regex handler, so the replacement string must be wrapped in single quotes. But the desired replacement text contains single quote marks, so they need to be doubled up.
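The same substitution can be reproduced with Python’s re.sub, where the capture-group references are written \1 and \2 instead of $1 and $2 (and no quote-doubling is needed since Python raw strings handle the quoting):

```python
import re

# Add quotes around an unquoted comparison value: =Can% becomes = 'Can%'
pattern = r"\s*(=|<|>|like)\s*([^'\d][^\s']*)$"
fixed = re.sub(pattern, r" \1 '\2' ", "CameraManufacturer=Can%")
```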

Replace ‘=’ with ‘like’ for Wildcards

So far, =Can* has become ='Can%', which is good, but SQL needs "LIKE" instead of "=" to evaluate a wildcard. So the next operation converts "CameraManufacturer = 'Can%' " into "CameraManufacturer LIKE 'Can%' ":
$Filter = $Filter -replace "\s*=\s*(?='.+%'\s*$)" ," LIKE "

The regular expression breaks down like this, using "CameraManufacturer = 'Can%'" as the example:

  • \s*=\s*   An = sign surrounded by any spaces
  • '         A quote character
  • .+        Any characters (at least one)
  • %'        A % character followed by '
  • \s*$      Any spaces (or none), followed by end of line
  • (?= )     Look ahead for the enclosed expression but don’t include it in the match

$Matches[0] = "=" (but only if 'Can%' is present), so only the = sign and its surrounding spaces are replaced by " LIKE ".
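Applied to two sample terms (illustration only), the look-ahead means only the wildcard term has its = sign swapped:

```powershell
# = becomes LIKE only when the value is quoted and ends in the % wildcard
"CameraManufacturer = 'Can%' "  -replace "\s*=\s*(?='.+%'\s*$)", " LIKE "
# -> "CameraManufacturer LIKE 'Can%' "

# A term without a wildcard keeps its = sign
"CameraManufacturer = 'Canon' " -replace "\s*=\s*(?='.+%'\s*$)", " LIKE "
# -> "CameraManufacturer = 'Canon' "  (unchanged)
```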

Provide Aliases

The steps above reconstruct "WHERE" terms to build syntactically correct SQL, but what if I get confused and enter “CameraMaker” instead of “CameraManufacturer” or “Keyword” instead of “Keywords” ? I need Aliases – and they should work anywhere in the SQL statement – not just in the "WHERE" clause but in "ORDER BY" as well.
I defined a hash table (a.k.a. a "dictionary", or an "associative array") near the top of the script to act as a single place to store the aliases with their associated full canonical names, like this:
$PropertyAliases = @{
    Width       = "System.Image.HorizontalSize";
    Height      = "System.Image.VerticalSize";
    Name        = "System.FileName";
    Extension   = "System.FileExtension";
    Keyword     = "System.Keywords";
    CameraMaker = "System.Photo.CameraManufacturer"
}
Later in the script, once the SQL statement is built, a loop runs through the aliases replacing each with its canonical name:
$PropertyAliases.Keys | ForEach-Object {
    $SQL = $SQL -replace "(?<=\s)$($_)(?=\s*(=|>|<|,|Like))", $PropertyAliases[$_]
}
A hash table has .Keys and .Values properties which return what is on the left and right of the equals signs respectively. $hashTable.keyName or $hashtable[keyName] will return the value, so $_ will start by taking the value "Width", and its replacement will be $PropertyAliases["Width"], which is "System.Image.HorizontalSize"; on the next pass through the loop "Height" is replaced, and so on. To ensure it matches on a field name and not text being searched for, the regular expression stipulates that the name must be preceded by a space and followed by "=" or "like" and so on.

The regular expression breaks down like this, using "Width > 1024" as the example:

  • (?<=\s)          Look behind for a space, but don’t include it in the match
  • Width            The literal text "Width" (the alias currently being processed)
  • \s*              Any spaces (or none)
  • (=|>|<|,|Like)   The literal text "Like", or any of the characters comma, equals, greater-than or less-than
  • (?= )            Look ahead for the enclosed expression but don’t include it in the match

$Matches[0] = "Width" (but only if a leading space and a trailing operator such as " >" are present).
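Here is the alias loop at work on a sample WHERE clause (a sketch: only two aliases are shown, and the SQL string is made up for illustration):

```powershell
$PropertyAliases = @{Width       = "System.Image.HorizontalSize";
                     CameraMaker = "System.Photo.CameraManufacturer"}
$SQL = "SELECT System.ItemUrl FROM SYSTEMINDEX WHERE Width > 1024"
$PropertyAliases.Keys | ForEach-Object {
    # Only a field name with a leading space and a trailing operator is replaced
    $SQL = $SQL -replace "(?<=\s)$($_)(?=\s*(=|>|<|,|Like))", $PropertyAliases[$_]
}
$SQL   # -> ... WHERE System.Image.HorizontalSize > 1024
```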

If the prefix is omitted put the correct one in.

This builds on the ideas we’ve seen already. I want the list of fields and prefixes to be easy to maintain, so just after I define my aliases I define a list of field types
$FieldTypes = "System","Photo","Image","Music","Media","RecordedTv","Search"
For each type I define two variables, a prefix and a fields list: the names must be FieldtypePrefix and FieldtypeFields – the reason for this will become clear shortly, but here is what they look like:
$SystemPrefix = "System."
$SystemFields = "ItemName|ItemUrl"
$PhotoPrefix  = "System.Photo."
$PhotoFields  = "cameramodel|cameramanufacturer|orientation"
In practice the field lists are much longer – system contains 25 fieldnames not just the two shown here. The lists are written with "|" between the names so they become a regular expression meaning "ItemName or ItemUrl Or …". The following code runs after aliases have been processed
foreach ($type in $FieldTypes) {
   $fields = (Get-Variable "$($type)Fields").value
   $prefix = (Get-Variable "$($type)Prefix").value
   $sql    = $sql -replace "(?<=\s)(?=($fields)\s*(=|>|<|,|Like))" , $prefix
}
I can save repeating code by using Get-Variable in a loop to get $systemFields, $photoFields and so on, and if I want to add one more field, or a whole type I only need to change the variable declarations at the start of the script. The regular expression in the replace works like this:
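For example (a sketch using the short Photo field list above and a made-up SQL string):

```powershell
$PhotoFields = "cameramodel|cameramanufacturer|orientation"
$sql = "SELECT System.ItemUrl FROM SYSTEMINDEX WHERE CameraManufacturer LIKE 'Can%' "
# The zero-width match finds the spot just before the bare field name
# and the replacement inserts the prefix there
$sql -replace "(?<=\s)(?=($PhotoFields)\s*(=|>|<|,|Like))", "System.Photo."
# -> ... WHERE System.Photo.CameraManufacturer LIKE 'Can%'
```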

The regular expression in the replace works like this, using "CameraManufacturer LIKE 'Can%'" as the example:

  • (?<=\s)          Look behind for a space, but don’t include it in the match
  • ($Fields)        The literal text "orientation", "cameramanufacturer" and so on, from the field list
  • \s*              Any spaces (or none)
  • (=|>|<|,|Like)   The literal text "Like", or any of the characters comma, equals, greater-than or less-than
  • (?= )            Look ahead for the enclosed expression but don’t include it in the match

$Matches[0] is the zero-width point between the leading space and "CameraManufacturer LIKE", and includes neither.

We get the effect of an "insert" operator by using ‑replace with a regular expression that finds a place in the text but doesn’t select any of it.
This part of the function allows "CameraManufacturer LIKE 'Can%'" to become "System.Photo.CameraManufacturer LIKE 'Can%' " in a WHERE clause.
I also wanted "CameraManufacturer" in an ORDER BY clause to become "System.Photo.CameraManufacturer". Very sharp-eyed readers may have noticed that I look for a comma after the fieldname as well as <, >, = and LIKE. I modified the code which appeared in part one so that when an ORDER BY clause is inserted it is followed by a trailing comma, like this:
if ($orderby) { $sql += " ORDER BY " + ($OrderBy -join " , " ) + ","}

The new version will work with this regular expression, but the extra comma will cause a SQL error and so it must be removed later.
When I introduced the SQL I said the SELECT statement looks like this:

SELECT System.ItemName, System.ItemUrl,      System.FileExtension, System.FileName, System.FileAttributes, System.FileOwner, 
       System.ItemType, System.ItemTypeText , System.KindText,     System.Kind,     System.MIMEType,       System.Size

Building this clause from the field lists simplifies code maintenance, and as a bonus anything declared in the field lists will be retrieved by the query as well as accepted as input by its short name. The SELECT clause is prepared like this:
if ($First) 
     {$SQL = "SELECT TOP $First "}
else {$SQL = "SELECT "}
foreach ($type in $FieldTypes)
     {$SQL +=((get-variable "$($type)Fields").value -replace "\|",", " ) + ", "}

This replaces the "|" with a comma and puts a comma after each set of fields. This means there is a comma between the last field and the FROM – which allows the regular expression to recognise field names, but it will break the SQL, so it is removed after the prefixes have been inserted (just like the one for ORDER BY).
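The two clean-up replaces, shown here on a hypothetical assembled statement, take out those helper commas:

```powershell
$sql = "SELECT System.ItemName, System.ItemUrl, FROM SYSTEMINDEX WHERE Contains(*,'stingray') ORDER BY System.Size ,"
$sql = $sql -replace "\s*,\s*FROM\s+", " FROM "   # drop the comma before FROM
$sql = $sql -replace "\s*,\s*$", ""               # drop the trailing ORDER BY comma
$sql
# -> SELECT System.ItemName, System.ItemUrl FROM SYSTEMINDEX WHERE Contains(*,'stingray') ORDER BY System.Size
```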
This might seem inefficient, but when I checked the time it took to run the function and get the results but not output them it was typically about 0.05 seconds (50ms) on my laptop – it takes more time to output the results.
Combining all the bits in this part with the bits in part one turns my 36-line function into one of about 60 lines, as follows:

Function Get-IndexedItem{
Param ( [Alias("Where","Include")][String[]]$Filter ,
        [String]$Path,
        [Alias("Top")][int]$First,
        [String[]]$OrderBy,
        [Switch]$Recurse
)
$PropertyAliases = @{Width  = "System.Image.HorizontalSize";
                     Height = "System.Image.VerticalSize"}
$FieldTypes      = "System","Photo"
$PhotoPrefix     = "System.Photo."
$PhotoFields     = "cameramodel|cameramanufacturer|orientation"
$SystemPrefix    = "System."
$SystemFields    = "ItemName|ItemUrl|FileExtension|FileName"
if ($First)
     {$SQL = "SELECT TOP $First "}
else {$SQL = "SELECT "}
foreach ($type in $FieldTypes)
     {$SQL += ((Get-Variable "$($type)Fields").value -replace "\|",", ") + ", "}
if ($Path -match "\\\\([^\\]+)\\.")
     {$SQL += " FROM $($matches[1]).SYSTEMINDEX WHERE "}
else {$SQL += " FROM SYSTEMINDEX WHERE "}
if ($Filter)
     {$Filter = $Filter -replace "\*","%"
      $Filter = $Filter -replace "\s*(=|<|>|like)\s*([^'\d][^\s']*)$",' $1 ''$2'' '
      $Filter = $Filter -replace "\s*=\s*(?='.+%'\s*$)"," LIKE "
      $Filter = ($Filter | ForEach-Object {
          if ($_ -match "'|=|<|>|like|contains|freetext") {$_}
          else {"Contains(*,'$_')"}
      })
      $SQL += $Filter -join " AND "
     }
if ($Path)
    {if ($Path -notmatch "\w{4}:") {$Path = "file:" + $Path}
     $Path = $Path -replace "\\","/"
     if ($SQL -notmatch "WHERE\s*$") {$SQL += " AND "}
     if ($Recurse)
          {$SQL += " SCOPE = '$Path' "}
     else {$SQL += " DIRECTORY = '$Path' "}
    }
if ($SQL -match "WHERE\s*$")
     {Write-Warning "You need to specify either a path, or a filter." ; return}
if ($OrderBy) {$SQL += " ORDER BY " + ($OrderBy -join " , ") + ","}
$PropertyAliases.Keys | ForEach-Object {
     $SQL = $SQL -replace "(?<=\s)$($_)(?=\s*(=|>|<|,|Like))", $PropertyAliases.$_ }
foreach ($type in $FieldTypes)
    {$fields = (Get-Variable "$($type)Fields").value
     $prefix = (Get-Variable "$($type)Prefix").value
     $SQL    = $SQL -replace "(?<=\s)(?=($fields)\s*(=|>|<|,|Like))" , $prefix
    }
$SQL = $SQL -replace "\s*,\s*FROM\s+" , " FROM "
$SQL = $SQL -replace "\s*,\s*$" , ""
$Provider = "Provider=Search.CollatorDSO;Extended Properties='Application=Windows';"
$Adapter  = New-Object system.data.oledb.oleDBDataadapter -ArgumentList $SQL, $Provider
$DS       = New-Object system.data.dataset
if ($Adapter.Fill($DS)) {$DS.Tables[0]}
}

In part 3 I’ll finish the function by turning my attention to output.

Using the Windows index to search from PowerShell: Part one: Building a query from user input.

Filed under: Uncategorized — jamesone111 @ 10:43 am

Note: this was originally written for the Hey, Scripting Guy! blog, where it appeared as the 25 June 2012 episode. The code is available for download. I have some more index-related posts coming up, so I wanted to make sure everything was in one place.

I’ve spent some time developing and honing a PowerShell function that gets information from the Windows Index – the technology behind the search that is integrated into Explorer in Windows 7 and Vista. The Index can be queried using SQL, and my function builds the SQL query from user input, executes it and receives rows of data for all the matching items. In Part three, I’ll look at why rows of data aren’t the best thing for the function to return and what the alternatives might be. Part two will look at making user input easier – I don’t want to make an understanding of SQL a prerequisite for using the function. In this part I’m going to explore the query process.

We’ll look at how the query is built in a moment; for now, please accept that a ready-to-run query is stored in the variable $SQL. Then it only takes a few lines of PowerShell to prepare and run the query.

$Provider="Provider=Search.CollatorDSO;Extended Properties='Application=Windows';"
$adapter = new-object system.data.oledb.oleDBDataadapter -argument $sql, $Provider
$ds      = new-object system.data.dataset
if ($adapter.Fill($ds)) { $ds.Tables[0] }

The data is fetched using oleDBDataAdapter and DataSet objects; the adapter is created specifying a "provider" which says where the data will come from and a SQL statement which says what is being requested. The query is run when the adapter is told to fill the dataset. The .Fill() method returns a number indicating how many data rows were returned by the query – if this is non-zero, my function returns the first table in the dataset. PowerShell sees each data row in the table as a separate object, and these objects have a property for each of the table’s columns, so a search might return something like this:

SYSTEM.ITEMURL : file:C:/Users/James/pictures/DIVE_1771+.JPG
SYSTEM.KIND : {picture}
SYSTEM.MIMETYPE : image/jpeg
SYSTEM.SIZE : 971413

There are lots of fields to choose from, so the list might be longer. The SQL query to produce it looks something like this.

SELECT System.ItemName, System.ItemUrl,        System.FileExtension,
       System.FileName, System.FileAttributes, System.FileOwner, 
       System.ItemType, System.ItemTypeText ,  System.KindText, 
       System.Kind,     System.MIMEType,       System.Size
FROM   SYSTEMINDEX
WHERE  System.Keywords = 'portfolio' AND Contains(*,'stingray')

In the finished version of the function, the SELECT clause has 60 or so fields; the FROM and WHERE clauses might be more complicated than in the example and an ORDER BY clause might be used to sort the data.
The clauses are built using parameters which are declared in my function like this:

Param ( [Alias("Where","Include")][String[]]$Filter ,

In my functions I try to use names already used in PowerShell, so here I use -Filter and -First but I also define aliases for SQL terms like WHERE and TOP. These parameters build into the complete SQL statement, starting with the SELECT clause which uses -First

if ($First) {$SQL = "SELECT TOP $First "}
else        {$SQL = "SELECT "}
$SQL += " System.ItemName, System.ItemUrl " # and the other 58 fields

If the user specifies –First 1 then $SQL will be "SELECT TOP 1 fields"; otherwise it’s just "SELECT fields". After the fields are added to $SQL, the function adds a FROM clause. Windows Search can interrogate remote computers, so if the -path parameter is a UNC name in the form \\computerName\shareName the SQL FROM clause becomes FROM computerName.SYSTEMINDEX otherwise it is FROM SYSTEMINDEX to search the local machine.
A regular expression can recognise a UNC name and pick out the computer name, like this:

if ($Path -match "\\\\([^\\]+)\\.") {
      $sql += " FROM $($matches[1]).SYSTEMINDEX WHERE "}
else {$sql += " FROM SYSTEMINDEX WHERE "}

The regular expression in the first line of the example breaks down like this:

  • \\\\     2 \ characters: "\" is the escape character, so each one needs to be written as \\
  • [^\\]+   Any non-\ character, repeated at least once
  • \\.      A \, followed by any character
  • ( )      Capture the section which is enclosed by the brackets as a match

For "\\computerName\shareName":
$matches[0] = "\\computerName\s"
$matches[1] = "computerName"
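A quick check of the capture (with "atlas" as a made-up server name):

```powershell
# "atlas" here is a hypothetical computer name
if ("\\atlas\photos\summer" -match "\\\\([^\\]+)\\.") {
    $matches[1]   # -> atlas
}
```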

I allow the function to take different parts of the WHERE clause as a comma separated list, so that
-filter "System.Keywords = 'portfolio'","Contains(*,'stingray')"
is equivalent to
-filter "System.Keywords = 'portfolio' AND Contains(*,'stingray')"

Adding the filter just needs this:

if ($Filter) { $SQL += $Filter -join " AND "}

The folders searched can be restricted. A "SCOPE" term limits the query to a folder and all of its subfolders, and a "DIRECTORY" term limits it to the folder without subfolders. If the request is going to a remote server the index is smart enough to recognise a UNC path and return just the files which are accessible via that path. If a -Path parameter is specified, the function extends the WHERE clause, and the –Recurse switch determines whether to use SCOPE or DIRECTORY, like this:

if ($Path){
     if ($Path -notmatch "\w{4}:") {
           $Path = "file:" + (Resolve-Path -Path $Path).ProviderPath}
     if ($sql -notmatch "WHERE\s*$") {$sql += " AND " }
     if ($Recurse)                   {$sql += " SCOPE = '$Path' " }
     else                            {$sql += " DIRECTORY = '$Path' "}
}

In these SQL statements, paths are specified in the form file:c:/users/james, which isn’t how we normally write them (and the way I recognise UNC names won’t work if they are written as file://ComputerName/shareName). This is rectified by the first line inside the if ($Path) {} block, which checks for 4 "word" characters followed by a colon. Doing this prevents ‘file:’ being inserted if any protocol has been specified – the same search syntax works against HTTP:// (though not usually when searching on your workstation), MAPI:// (for Outlook items) and OneIndex14:// (for OneNote items). If a file path has been given I ensure it is an absolute one – the need to support UNC paths forces the use of .ProviderPath here. It turns out there is no need to convert \ characters in the path to /, provided the file: is included.
After taking care of that, the operation -notmatch "WHERE\s*$" sees to it that an "AND" is added if there is anything other than spaces between WHERE and the end of the line (in other words, if any conditions specified by –Filter have been inserted). If neither -Path nor -Filter was specified there will be a dangling WHERE at the end of the SQL statement. Initially I removed this with a ‑replace, but then I decided that I didn’t want the function to respond to a lack of input by returning the whole index, so I changed it to write a warning and exit. With the WHERE clause completed, the final clause in the SQL statement is ORDER BY, which – like WHERE – joins up a multi-part condition.

if ($sql -match "WHERE\s*$") {
     Write-Warning "You need to specify either a path, or a filter."
     return
}
if ($orderby) { $sql += " ORDER BY " + ($OrderBy -join " , ") }

When the whole function is put together it takes 3 dozen lines of PowerShell to handle the parameters, build and run the query and return the result. Put together they look like this:

Function Get-IndexedItem{
Param ( [Alias("Where","Include")][String[]]$Filter ,
        [String]$Path,
        [Alias("Top")][int]$First,
        [String[]]$OrderBy,
        [Switch]$Recurse
)
if ($First) {$SQL = "SELECT TOP $First "}
else        {$SQL = "SELECT "}
$SQL += " System.ItemName, System.ItemUrl " # and the other 58 fields
if ($Path -match "\\\\([^\\]+)\\.")
             {$SQL += " FROM $($matches[1]).SYSTEMINDEX WHERE "}
else         {$SQL += " FROM SYSTEMINDEX WHERE "}
if ($Filter) {$SQL += $Filter -join " AND "}
if ($Path) {
    if ($Path -notmatch "\w{4}:") {$Path = "file:" + (Resolve-Path -Path $Path).ProviderPath}
    $Path = $Path -replace "\\","/"
    if ($SQL -notmatch "WHERE\s*$") {$SQL += " AND " }
    if ($Recurse)                   {$SQL += " SCOPE = '$Path' " }
    else                            {$SQL += " DIRECTORY = '$Path' "}
}
if ($SQL -match "WHERE\s*$") {
    Write-Warning "You need to specify either a path or a filter."
    return
}
if ($OrderBy) { $SQL += " ORDER BY " + ($OrderBy -join " , " ) }
$Provider = "Provider=Search.CollatorDSO;Extended Properties='Application=Windows';"
$Adapter  = New-Object system.data.oledb.oleDBDataadapter -ArgumentList $SQL, $Provider
$DS       = New-Object system.data.dataset
if ($Adapter.Fill($DS)) { $DS.Tables[0] }
}

The -Path parameter is more user-friendly as a result of the way I handle it, but I’ve made it a general rule that you shouldn’t expect the user to know too much of the underlying syntax, and at the moment the function requires too much knowledge of SQL: I don’t want to type

Get-IndexedItem –Filter "Contains(*,'Stingray')", "System.Photo.CameraManufacturer Like 'Can%'"

and it seems unreasonable to expect anyone else to do so. I came up with this list of things the function should do for me.

  • Don’t require the user to know whether a search term is prefixed with SYSTEM., SYSTEM.DOCUMENT., SYSTEM.IMAGE. or SYSTEM.PHOTO. If the prefix is omitted, put the correct one in.
  • Even without the prefixes some fieldnames are awkward for example "HorizontalSize" and "VerticalSize" instead of width and height. Provide aliases
  • Literal text in searches needs to be enclosed in single quotes, insert quotes if the user omits them.
  • A free text search over all fields is written as Contains(*,’searchTerm’) , convert "orphan" search terms into contains conditions.
  • SQL uses % (not *) for a wildcard – replace * with % in filters to cope with users putting in the familiar *.
  • SQL requires the LIKE predicate (not =) for wildcards: replace = with LIKE for wildcards.

In Part two, I’ll look at how I do those things.

June 25, 2012

Lonesome George

Filed under: Uncategorized — jamesone111 @ 3:15 pm

At Easter I was in the Galapagos Islands; work had taken me to Ecuador, and diving in the Galapagos was too good an opportunity to miss. Mainland Ecuador was a country I knew little about, and two weeks working in the capital (Quito, just south of the Equator and 10,000 feet up in the Andes) doesn’t qualify me as an expert. The client there was a good one to work with, and what I saw of the city (a bunch of taxi rides and a bus tour on the one day we weren’t working) meant I’d go back if asked. Travel wasn’t good, and the return flights so bad that I’ve vowed never to fly with Iberia again. Flying to the islands, the plane had a problem which meant if it landed it couldn’t take off again, so having got within sight of the islands we had to go all the way back to Quito and get another plane. Down at sea level the heat was ferocious, the transportation scary and the insect bites the worst I’ve had. But the diving… different kinds of sharks (including a close encounter with a group of Hammerheads), Sea Lions, Turtles, Rays (including a Manta encounter on the very first dive which set the tone) – I’d put up with a lot for that. And if some search engine has steered you here, I dived with Scuba Iguana, and if I manage to go back I’ll dive with them again.

The scuba place was pretty much next door to the Darwin station: home of giant tortoises and a tortoise breeding programme. Galapagos comes from the Spanish word for saddle, because the shape of the giant tortoise’s shell looked like a traditional saddle. I also learnt that some languages – including Spanish – don’t have distinct words for Tortoise and (Marine) Turtle. The sex of tortoises is determined by the temperature at which the eggs incubate, and the breeding programme gathers eggs, incubates them to get extra females, and looks after the baby tortoises, keeping them safe from human-introduced species (like rats) which feed on eggs and baby tortoises. Each island’s tortoises are different, so the eggs and hatchlings are marked up so they go back to the right island. But there is no breeding programme for Pinta island (also named Abingdon island by the Royal Navy. According to a story told by Stephen Fry on QI, sailors ate giant tortoises and found them very good.)  A giant tortoise was found on Pinta, but a search over several years failed to find a second. So he – Lonesome George – was brought to the Darwin station in the 1970s. No one knows for sure how old he was then. All efforts to find a mate for him failed, so George lived out his final decades as the only known example of Geochelone nigra abingdoni, the Pinta Galapagos tortoise.
On Easter Sunday I walked up to see George and the giants from other islands who live at the station. George was keeping out of the sun; he shared an enclosure, and I wondered what he made of the other species – whether somewhere in that ancient reptile brain lurked a memory of others just like him, a template into which the other tortoises didn’t quite fit.

Later in the trip I was asked to help with some work on a survey being prepared about Hammerhead sharks. I was told they were estimated as having a 20% chance of becoming extinct in the next 100 years. This statistic is quite hard to digest: my chances of being extinct in 100 years are close to 100%, so my contribution to the survey was to suggest telling people that, if things continue as they are, the chances of seeing Hammerheads on a dive in 5 years will be so much less than today. It’s not fair, but we care more about some species than others, and I hope there will still be Hammerheads for my children to see in a few years. Sadly they won’t get the chance to see the Pinta tortoise.

June 22, 2012

Windows Phone 8. Some Known Knowns, and Known Unknowns

Filed under: Uncategorized — jamesone111 @ 10:23 am

Earlier this week Microsoft held its Windows Phone Summit, where it made a set of announcements about the next generation of Windows Phone – Windows Phone 8. In summary these were:

  • Hardware: support for multi-core processors, additional screen resolutions, removable memory cards and NFC.
  • Software: the Windows 8 core is now the base OS, support for native code written in C or C++ (meaning better games), IE 10, new mapping (from Nokia), a speech API which apps can use, better business-oriented features, VOIP support, a new “Wallet” experience and in-app payments, and a new start screen.

This summit came hot on the heels of the Surface tablet launch, which seemed to be a decision by Microsoft that making hardware for Windows 8 was too important to be left to the hardware makers. The first thing I noted about the phone announcement was the lack of a Microsoft-branded phone. I’ve said that Microsoft should make phones itself since before the first Windows Mobile devices appeared – when Microsoft was talking about “Stinger” internally; I never really bought any of the reasons given for not doing so. But I’d be astounded if Nokia didn’t demand that Microsoft promise not to make a phone (whether there’s a binding agreement or just an understanding between Messrs Elop and Ballmer we don’t know). Besides Nokia, Microsoft has 3 other device makers on board: HTC have made devices for every Microsoft mobile OS since 2000, but also have a slew of devices for Android; Samsung were a launch partner for Phone 7 but since then have done more with Android; LG were in the line-up for the Windows Phone 7 launch and are replaced by Huawei. What these 3 feel about Nokia mapping technology is a matter of guesswork, but depends on branding and licensing terms.

There are some things we think we know, but actually they are things we know that we don’t know.

  • Existing 7.x phones will not run Windows Phone 8 but will get an upgrade to Windows Phone 7.8. I have an HTC Trophy which I bought in November 2010; it has gone from 7.0 to 7.5, and I’ll be quite happy to get 7.8 on a 2-year-old phone. Someone who just bought a Nokia Lumia might not feel quite so pleased. What will be in 7.8? The new start screen has been shown. But will it have IE10? Will it have the new mapping and speech capabilities? The Wallet, in-app payments? This matters because…
  • Programs specifically targeting Windows Phone 8 won’t run on 7. Well, doh! Programs targeting Windows 7 don’t run on XP. But what programs will need to target the new OS? Phone 7 programs are .NET programs, and since .NET compiles to an intermediate language, not to CPU instructions, a program which runs on Windows 8-RT (previously called Windows on ARM) should go straight onto a Windows 8-Intel machine (but not vice versa), and Phone 7 programs will run on Phone 8. An intriguing comment at the launch says the Phone 8 emulator runs on Hyper-V; unless Hyper-V has started translating between different CPU instruction sets, this means the emulated phone has an Intel CPU – but it doesn’t matter, because it is running .NET intermediate language, not binary machine code. So how many new programs will be 8-specific? Say Skype uses the VOIP support and in-app payments for calling credit. Will users with old phones be stuck with the current version of Skype? Do they get a new version where those features don’t light up? Or do they get the same experience as someone with a new phone? If the only things which are Phone 8 specific are apps which need multiple cores (or other newly supported hardware), there would never have been any benefit in upgrading a single-core phone from 7 to 8.
  • Business support. This covers many possibilities, but what is in and what is out? Will the encryption send keys back to base as BitLocker does? Will there be support for DirectAccess? Corporate wireless security? Will adding Exchange server details auto-configure the phone to corporate settings (like the corporate app store)? Will it be possible to block updates? This list goes on and on.

It’s going to be interesting to see what this does for Microsoft and for Nokia’s turn-round.

June 19, 2012

Microsoft’s new surface. All it needs is love.

Filed under: General musings — jamesone111 @ 4:44 pm

People who buy Apple stuff worry me. Their attachment to one brand of equipment is somewhere between addiction and religious fervour. It is Love.

For over 20 years I’ve used Microsoft stuff, because it is simply a better way to get a job done. Some of the people in my office use Macs and can’t do their job without creating a virtual PC running Windows. It doesn’t reduce their love for their Macs. As the saying goes: the opposite of love is not hate, it’s indifference. And it’s hard to feel anything but indifferent to the products of any of the major PC makers. The Dell in front of me is well made, well specified and does everything I ask and more. But love it? Most PC users will tell you loving a computer is crazy (which is why Apple folk are so disturbing).

I think Microsoft’s new Surface tablets are trying to create a Windows machine which people – if they can’t actually Love – feel more than indifferent about. Surface is two machines: one uses an ARM processor and runs the new Windows RT, so won’t run all your existing software. The other uses an Intel chip – and is thicker and heavier to give it 1/3 more battery life (roughly 130cc and 225g more to get 40 Watt-hours instead of 30) – but that means it should work with existing software and USB devices. I can plug in a mouse and keyboard, attach two monitors via DisplayPort and have a system just like the one I have today. Unplug it and I can use it iPad-style, or take the “Touch Cover” keyboard* and write documents, or use the pen to annotate documents if that’s what I choose. The ARM version has Office built in, but no pen and a different video connector (so probably only 1 screen). Even with its smaller battery it will probably run for longer (though, like the shipment date and price, battery life is yet to be confirmed).

Mary-Jo Foley wrote of the launch of Surface “It’s the end of an era. Or maybe the start of a new one.” Indeed. Microsoft began by providing OEMs (including Apple) with BASIC, then with operating systems. OEMs were very much the Geese that laid golden eggs for Microsoft – during the 10 years I worked there (not in the OEM part of the business) there were times when I felt that the company forgot the problem with Geese is they produce a lot of … guano. The OEMs have been poor on design and innovation for a long time: Bloomberg Businessweek, no less, talks about how the PC industry should be shamed by Surface, and talks about recent years of PC development as “the great stagnation”. The Bloomberg piece puts some blame on Microsoft and Intel for taking too much from the OEMs; I doubt that if the chips and OS had cost less the difference would have gone into innovations that add value. That lack of added value means margins on PCs aren’t great, and that’s led manufacturers to take money to install all manner of junk on the machines they ship. The whole DoJ saga – which grew out of Microsoft trying to prevent OEMs installing software it didn’t like – left a situation where the company was required to sell Windows to anyone who wanted it and could not do anything to prevent an explosion of crapware. Lots of people are asking WHY Microsoft has chosen to get into making computers. The answer is either (a) it can make much more profit by selling computers and operating systems together, or (b) it has an idea of what a PC should be in the second decade of the 21st century and it doesn’t trust PC makers to deliver that by themselves.

If I were fielding calls from angry OEMs upset at Microsoft arriving in their market, I’d make the case that no OEM would have made a product like this: their lack of a similar product both lost them the right to complain and forced Microsoft to do something; if they do have something, Microsoft is saying they won’t undercut on price – something we won’t know for sure until the units go on sale. Some people wonder if Microsoft will aim to make the same from selling the hardware as anyone else and make the price of Windows on top of that, or if they will think $x of margin per unit sold is the same whether they sell a computer/OS combination or a licence to an OEM. The latter would make it very hard for OEMs to compete; but Microsoft trying to make desirable hardware profitably? That’s a lot less of a threat to OEMs. Apple doesn’t sell as many units as Samsung, but the profit to Apple per unit is more than the retail price of the Samsung. When the iPhone was launched I questioned whether there was sufficient market for a phone at that price point: it has actually sold more units than Apple envisaged at the start – which proves one thing: people will pay handsomely for something they love.


* There are two keyboards: the thin, pressure-sensitive “Touch cover” and a thicker, moving-key “Type cover”. You can see the difference in this picture (click for a bigger version).

May 11, 2012

Lies, damn lies and public sector pensions

Filed under: Uncategorized — jamesone111 @ 9:08 am

Every time I hear that “public sector workers” are protesting about government plans for their pensions – as happened yesterday – I think of two points I made to fellow school governors when the teachers took a day off (with the knock-on cost to parents and the economy). These were:

  • Do classroom teachers understand that their union wants them to subsidize the head’s pension from theirs? (Have the unions explained to classroom teachers that this is what they are doing?)
  • Any teacher who is part of this protest has demonstrated they have insufficient grasp of maths to teach the subject (except, perhaps, to the reception class).

This second point is the easier of the two to explain: the pension paid for over your working life is a function of:

  • The salary you earned (which in turn depends on the rate at which your salary grew)
  • What fraction of your salary was paid in to your pension fund – it might be that you gave up X%, or your employer put in Y% that you never saw, or a combination.
  • How many years you paid in for (when you started paying in, and when you retire)
  • How well your pension fund grew before it paid out
  • How long the pension is expected to pay out for (how long you live after retirement)
  • Annuity rates – the interest earned on the pension fund as it pays out.

In addition, some people receive some pension which wasn’t paid for in this way; some employers (public or private sector) make guarantees to top up either the pension fund so it will buy a given level of annuity or to top-up  pension payments bought with the fund. The total you receive is the combination of what you have paid for directly and the top-up.
Change any factor – for example how long you expect to live – and either what you have paid for changes or the other factors have to change to compensate. Since earnings and rates of return aren’t something we control, living longer means either we have to pay for a smaller pension, or we must pay in more, or retire later, or some combination of all three. Demanding that the same amount will come out of a pension for longer, without paying more in, is a demand for a guaranteed top-up in future – in the case of public sector employees that future top-up comes from future taxes, for private sector it comes from future profits.
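
To make that trade-off concrete, here is a toy sketch – my own illustrative numbers, not figures from this post, and it ignores growth of the fund during the payout phase – showing that the same pot spread over a longer retirement simply pays out less each year:

```python
pot = 300_000            # hypothetical fund at retirement (illustrative figure)

pension_15 = pot / 15    # annual pension if retirement lasts 15 years
pension_25 = pot / 25    # annual pension if retirement lasts 25 years
print(pension_15, pension_25)     # 20000.0 12000.0

# To keep the 15-year payout going for 25 years, the pot must grow by two thirds
required_pot = pension_15 * 25
print(required_pot / pot)         # ~1.67
```

The 40% drop in annual payout (or the two-thirds increase in required contributions) is exactly the “smaller pension, pay in more, or retire later” choice described above.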

It’s easy for those in Government to make pension promises because those promises don’t need to be met for 30 years or more. Teachers whose retirement is imminent would have come into the profession when Wilson, Heath or Callaghan was in Downing Street, and all three are dead and buried; so, with the exception of Denis Healey, are all the chancellors who set budgets while they were in office. It’s tempting for governments to save on spending by under-contributing to pensions today, and leave future governments with the shortfall: taken to the extreme, governments can turn pensions into a Ponzi scheme – this year’s “teachers’/police/whatever pay” bill covers what all current teachers/police officers/whoever are paid for doing the job, and all pensions paid to retired ones for doing the job cheaply in the past. Since I am, more or less, accusing governments of all colours of committing fraud, I might as well add false accounting to the charge sheet. Let’s say the Government wants to buy a squadron of new aircraft but doesn’t want to raise taxes to pay for them all this year; it borrows money, and the future liability that creates is accounted for. If the deal it makes with public sector workers is for a given amount to spend today, and a promise of a given standard of living in retirement, does it record that promise – that future liability – as part of pay today? Take a wild guess.
This wouldn’t matter – outside accounting circles – if everything was constant. But the length of time between retirement and death has increased and keeps on increasing. For the sake of a simple example: let’s assume someone born in 1920 joined the Police after World War II, served for 30 years, and retired in 1975 at age 55 expecting to die at 70. Their retirement was half their length of service. Now consider someone born in 1955, who also joined the police at age 25, served for 30 years and retired in 2010. Is anyone making plans for their pension to stop in 2025? We might reasonably expect this person to live well into their eighties – so we’ve moved from 1 retired officer for every 2 serving to a 1:1 ratio. I’m not saying that in 1975 the ratio was 1:2 and in 2012 it is 1:1, but that’s the direction of travel.

I’ve yet to hear a single teacher say their protests about pensions amount to a demand that they should under-fund their retirement as a matter of public policy and that their pupils – who will then be tax payers – should make up the difference. As one of those whose work generates the funds to pay for the public sector, I must choose a combination of lower pension, later retirement, and higher contributions than I was led to expect when I started work 25 years or so ago. And there are people demanding my taxes insulate them from having to do the same; or (if you prefer) demanding a pay rise to cover the gap between what past governments have promised them and what they are actually paying for; or (and this becomes a bit uncomfortable) that government starts telling us what it really costs to have the teachers, nurses, police officers and so on that we want.

But what of my claim that unions get low-paid staff to subsidize the pensions of higher-paid colleagues? Let’s take two teachers; I’ll call them Alice and Bob, and since this is a simplified example they’ll fit two stereotypes: Alice sticks to working in the classroom and gets a 2% cost-of-living rise every year. Bob competes for every possible promotion and gets promoted in alternate years, so he gets a 2% cost-of-living rise alternating with a 10% rise. Although they both started on the same salary, after 9 end-of-year rises Alice’s pay has gone up by 19.5% and Bob – who must be a head by now – has seen his rise by 74%.
Throughout the 10 years they pay 10% of their salary into their pension fund – to make the sums easy we’ll assume they pay the whole 10% on the last day of the year, and each year their pension fund manager earns them 10% of what was in their pension pot at the end of the previous year. After 10 years Alice has £17,184 in her pension pot, and Bob has £20,390 in his.

Alice (and her fellow classroom teachers) are told by the union rep that any attempt to change from final salary as the calculation mechanism is “an attack on your pension”; for her, this is factually wrong. If you are ever told this, you need to ask whether you are a high flier like Bob or whether your career is more like Alice’s. To see why it is wrong (and let’s put it down to the union rep being innumerate rather than dishonest), let’s pretend the pension scheme only has Alice and Bob in it. The total pot is £37,574; Alice put in 46% of that money, but if it is shared in the ratio of the final salaries (11,950 : 17,432), Alice gets 41% of the payout.
You can argue it doesn’t work like that, because Alice’s pot (at 1.44 times her final salary) might just cover the percentage of her final salary she has been promised; Bob’s pension pot is only 1.17 times his final salary, which would give him a smaller percentage, so the government steps in and boosts his pot to 1.44 times his final salary, just like Alice’s. So Bob gets a golden handshake of nearly £4,700 and Alice gets nothing.
Suppose 1.44 times final salary is nowhere near enough, and Alice and Bob need 3 years’ salary to buy a large enough annuity: the government needs to find £18,668 for Alice (108% of her pot), and £31,907 for Bob (156% of his pot). Whichever way you cut and slice it, if your salary grows quicker than your colleagues’ you will do better out of final salary than they do. If it grows more slowly you will fare worse.

                  ----------- Alice -----------      ------------ Bob ------------
          Salary      Rise   Payment       Pot       Salary      Rise   Payment       Pot
Year  1   10,000.00    2%   1,000.00   1,000.00      10,000.00   10%   1,000.00   1,000.00
Year  2   10,200.00    2%   1,020.00   2,120.00      11,000.00    2%   1,100.00   2,200.00
Year  3   10,404.00    2%   1,040.40   3,372.40      11,220.00   10%   1,122.00   3,542.00
Year  4   10,612.08    2%   1,061.21   4,770.85      12,342.00    2%   1,234.20   5,130.40
Year  5   10,824.32    2%   1,082.43   6,330.36      12,588.84   10%   1,258.88   6,902.32
Year  6   11,040.81    2%   1,104.08   8,067.48      13,847.72    2%   1,384.77   8,977.33
Year  7   11,261.62    2%   1,126.16  10,000.39      14,124.68   10%   1,412.47  11,287.53
Year  8   11,486.86    2%   1,148.69  12,149.12      15,537.15    2%   1,553.71  13,970.00
Year  9   11,716.59    2%   1,171.66  14,535.69      15,847.89   10%   1,584.79  16,951.79
Year 10   11,950.93         1,195.09  17,184.35      17,432.68         1,743.27  20,390.23

Average salary: Alice 10,949.72, Bob 13,394.10.

Split by final salary  (combined final salary 29,383.60, total pot 37,574.58):
   Alice’s share 15,282.37     Bob’s share 22,292.21
Split by average salary (combined average salary 24,343.82, total pot 37,574.58):
   Alice’s share 16,900.85     Bob’s share 20,673.73
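
The figures in the table can be reproduced with a short script – a sketch in Python using only the assumptions already stated in the post (2% rises for Alice, alternating 10%/2% rises for Bob, 10% of salary paid in at year end, 10% growth on the previous year-end balance):

```python
YEARS, GROWTH, PAY_IN = 10, 1.10, 0.10

def pot(salaries):
    """Year-end contribution of 10% of salary; 10% growth on the prior balance."""
    balance = 0.0
    for s in salaries:
        balance = balance * GROWTH + PAY_IN * s
    return balance

alice = [10_000 * 1.02 ** n for n in range(YEARS)]   # 2% rise every year
bob, salary = [], 10_000.0
for n in range(YEARS):                               # alternating 10% / 2% rises
    bob.append(salary)
    salary *= 1.10 if n % 2 == 0 else 1.02

pot_a, pot_b = pot(alice), pot(bob)
total = pot_a + pot_b
print(round(pot_a, 2), round(pot_b, 2), round(total, 2))
# 17184.35 20390.23 37574.58

# Split the combined pot by final salary, then by average salary
fs_alice  = total * alice[-1] / (alice[-1] + bob[-1])
avg_alice = total * sum(alice) / (sum(alice) + sum(bob))
print(round(fs_alice, 2), round(avg_alice, 2))
# 15282.37 16900.85
```

The two print lines match the table: under a final-salary split roughly £1,900 of what Alice paid in goes to Bob; under an average-salary split the transfer shrinks to under £300.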

What if the mechanism for calculating were average salary, not final salary? It doesn’t quite remove the gap, but it gets very close: instead of £2,000 of Alice’s money going to Bob, it’s less than £300.
A better way to look at this is to say if the amount of money in the combined Pension pot pays £5000 a year in Pensions, do we split it as roughly £2000 to Alice and £3000 to Bob (the rough ratio of their final salaries – each gets about 1/6th of their final salary) or £2250 to Alice and £2750 to Bob (the ratio of their average salaries and each gets about 1/5th of their average).
Whenever average salary is suggested as a basis, union leaders will say that pensions are calculated from a smaller number as if it reduces the amount paid. If the government wanted to take money that way it would be simpler to say “a full pension will in future be a smaller percentage of final salary”. Changing to average-based implies an increase in the percentage paid. 

That perhaps is the final irony. Rank-and-file Police officers – whose career pay is like Alice’s in the example – marched through London yesterday demanding that their pensions be left alone; you do not need to spend long reading “Inspector Gadget” to realise that, once you remove the problems created for the Police by politicians, most of the problems that are left are created by senior officers whose career pay follows the “Bob” path. Yet the “many” marching were demanding that they continue to subsidize these “few”. As Gadget himself likes to say: you couldn’t make it up.

April 22, 2012

Don’t Swallow the cat. Doing the right thing with software development and other engineering projects.

Filed under: Uncategorized — jamesone111 @ 8:30 pm

In my time at Microsoft I became aware of the saying “communication occurs only between equals”, usually couched in the terms “People would rather lie than deliver bad news to Steve Ballmer”. Replacing unwelcome truths with agreeable dishonesty wasn’t confined to the CEO’s direct reports, and certainly isn’t a disease confined to Microsoft. I came across ‘The Hierarchy of Power Semantics’ more than 30 years ago, when I didn’t understand what was meant by the title; it was written in the 1960s, and if you don’t recognise “In the beginning was the plan and the specification, and the plan was without form and the specification was void and there was darkness on the face of the implementation team”, see here – language warning for the easily offended.
Wikipedia says the original form of “communication occurs only between equals” is “Accurate communication is possible only in a non-punishing situation”. There are those who (consciously or not) use the impossibility of saying “No” to extract more from staff and suppliers; it can produce extraordinary results, but sooner or later it goes horribly wrong. For example, the Challenger disaster was caused by the failure of an ‘O’ ring in a solid rocket booster made by Morton Thiokol. The engineers responsible for the booster were quite clear that in cold weather the ‘O’ rings were likely to fail with catastrophic results. NASA asked if a launch was OK after a freezing night and, fearing the consequences of saying “No”, managers at Morton Thiokol over-ruled the engineers and allowed the disastrous launch to go ahead. Most people can think of some case where someone made an impossible promise to a customer because they were afraid to say no.

Several times recently I have heard people say something to the effect of “We’re so committed to doing this the wrong way that we can’t change to the right way.” Once the person saying it was me, which was the genesis of this post. Sometimes in a software project, saying to someone – even to ourselves – “We’re doing this wrong” is difficult, so we create work-arounds. The odd title of this post comes from a song which was played on the radio a lot when I was a kid.

There was an old lady, who swallowed a fly, I don’t know why she swallowed a fly. I guess she’ll die.
There was an old lady, who swallowed a spider that wriggled and jiggled and tickled inside her. She swallowed the spider to catch the fly … I guess she’ll die
There was an old lady, who swallowed a bird. How absurd to swallow a bird. She swallowed the bird to catch the spider … I guess she’ll die
There was an old lady, who swallowed a cat. Fancy that to swallow a cat. She swallowed the cat to catch the bird …  I guess she’ll die
There was an old lady, who swallowed a dog. What a hog to swallow a dog. She swallowed the dog to catch the cat … I guess she’ll die
There was an old lady, who swallowed a horse. She’s dead, of course

In other words each cure needs a further, more extreme cure.  In my case the “fly” was a simple problem I’d inherited. It would take a couple of pages to explain the context, so for simplicity it concerns database tables and the “spider” was to store data de-normalized. If you don’t spend your days working with databases, imagine you have a list of suppliers, and a list of invoices from those suppliers. Normally you would store an ID for the supplier in the invoice table, and look up the name from the supplier table using the ID. For what I was doing it was better to put the supplier name in the invoices table, and ignore the ID. All the invoices for the supplier can be looked up by querying for the name. The same technique applied to products supplied by that supplier: store the supplier name in the product table, look up products by supplier name. This is not because I didn’t know any better, I had database normal forms drummed into me two decades ago. To stick with the metaphor: I know that, under normal circumstances, swallowing spiders is bad, but faced with this specific fly it was demonstrably the best course of action.
At this point someone who could have saved me from my folly pointed out that supplier names had to be editable. I protested that the names don’t change, but Amalgamated Widgets did, in fact, become Universal Widgets. This is an issue because Amalgamated, not Universal, raised the invoices in the filing cabinet, so matching them to invoices in the system requires preserving the name as it was when the invoice was generated. “See, I was right – the name should be stored” – actually this exception doesn’t show I was right at all, but on I went. On the other hand, all of Amalgamated’s products belong to Universal now. Changing names means doing a cascaded update (update any product with the old company name to the new name when a name changes) – and the real case has more tables than just products. If you’re filling in the metaphor, you’ve guessed I’d reached the point of figuring out how to swallow a bird. Worse, I could see another problem looming (anyone for cat?): changes to products had to be exported to another system, and the list of changes had its own table, requiring cascaded updates from the cascaded updates.
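
The cascaded-update cost is easy to demonstrate. Here is a minimal sketch using Python’s sqlite3 with a hypothetical two-table layout (my tables, not the actual schema from the project): because products carry the supplier’s name rather than an ID, renaming the supplier means updating every table that stores the name.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Denormalized layout: products store the supplier *name*, not an ID
cur.execute("CREATE TABLE suppliers (name TEXT PRIMARY KEY)")
cur.execute("CREATE TABLE products  (product TEXT, supplier_name TEXT)")
cur.execute("INSERT INTO suppliers VALUES ('Amalgamated Widgets')")
cur.executemany("INSERT INTO products VALUES (?, ?)",
                [("Widget A", "Amalgamated Widgets"),
                 ("Widget B", "Amalgamated Widgets")])

# Renaming the supplier now needs a cascaded update in every table
# holding the name -- the 'bird' of the metaphor
old, new = "Amalgamated Widgets", "Universal Widgets"
cur.execute("UPDATE suppliers SET name = ? WHERE name = ?", (new, old))
cur.execute("UPDATE products  SET supplier_name = ? WHERE supplier_name = ?",
            (new, old))

rows = cur.execute("SELECT DISTINCT supplier_name FROM products").fetchall()
print(rows)   # [('Universal Widgets',)]
```

In a normalized layout the rename would be a single-row update to the suppliers table, at the cost of a join on every read – which is exactly the trade-off being weighed here.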

One of the great lines in Macbeth is “I am in blood stepped in so far that, should I wade no more, returning were as tedious as go o’er”: he knows what he’s doing is wrong, but it is as hard to go back (and do right) as it is to go on. Except it isn’t: the solution is not to swallow a bird, then a cat and so on – it is to go back and deal with the spider and the fly. The dilemma is that the effort for an additional work-round is smaller than the effort to go back, fix the initial problem, and unpick all the work-rounds to date – one or the other needs to be done now, and the easy choice is the one which needs the least effort now. The sum of effort required for future work-rounds is greater, but we discount that effort because it isn’t needed now. Only in a non-punishing situation can we tell people that progress must be halted for a time to fix a problem which has been mitigated up to now. Persuading people that such a problem needs to be fixed at all isn’t trivial; I heard this quote in a radio programme a while back:

“Each uneventful day that passes reinforces a steadily growing false sense of confidence that everything is alright:
that I, we, my group must be OK because the way we did things today resulted in no adverse consequences.”

In my case the problem is being fixed at the moment, but in how many organisations is it a career-limiting move to tell people that something which has had no adverse consequences to date must be fixed?

February 4, 2012

Customizing PowerShell, Proxy functions and a better Select-String

Filed under: Uncategorized — jamesone111 @ 9:24 pm

I suspect that even regular PowerShell users don’t customize their environment much. By coincidence, in the last few weeks I’ve made multiple customizations to my systems (my scripts are sync’d over 3 machines: customize one, customize all), which has given me multiple things to talk about. My last post was about adding persistent history; this time I want to look at proxy functions …

Select-String is, for my money, one of the best things in PowerShell. It looks through piped text or through files for anything which matches a regular expression (or simple text) and reports back what matched and where, with all the detail you could ever need. But it has a couple of things wrong with it: it won’t do a recursive search for files, and sometimes the information which comes back is too detailed. I solved both problems with a function I named “WhatHas”, which has been part of my profile for ages. I have been using this to search scripts, XML files and saved SQL whenever I need a snippet of code that I can’t remember, or because something needs to be changed and I can’t be sure I’ve remembered which files contain it. I use WhatHas dozens (possibly hundreds) of times a week. Because it was a quick hack I didn’t support every option that Select-String has, so if a code snippet spans lines I have to go back to the proper Select-String cmdlet and use its -Context option to get the lines either side of the match: more often than not I find myself typing dir -recurse {something} | select-string {options}

A while back I saw a couple of presentations on proxy functions (there’s a good post about them here by Jeffrey Snover): I thought when I saw them that I would need to implement one for real before I could claim to understand them, and after growing tired of jumping back and forth between Select-String and WhatHas, I decided it was time to do the job properly: create a proxy function for Select-String and keep WhatHas as an alias.

There are 3 bits of background knowledge you need for proxy functions.

  1. Precedence. Aliases beat Functions, Functions beat Cmdlets. Cmdlets beat external scripts and programs. A function named Select-String will be called instead of a cmdlet named Select-String – meaning a function can replace a cmdlet simply by giving it the same name. That is the starting point for a Proxy function.
  2. A command can be invoked as moduleName\CommandName. If I load a function named “Get-Stuff” from my profile.ps1 file for example, it won’t have an associated module name but if I load it as part of a module, or if “Get-Stuff” is a cmdlet it will have a module name.
    Get-Command get-stuff | format-list name, modulename
    will show this information. You can try
    > Microsoft.PowerShell.Management\get-childitem
    for yourself. It looks like an invalid file-system path, but remember PowerShell looks for a matching alias, then a matching function, and then a matching cmdlet before looking for a file.
  3. Functions can have a process block (which runs for each item passed via the pipeline), a begin block (which runs before the first pass through process), and an end block (which runs after the last item has passed through process). Cmdlets follow the same structure, although it’s harder to see.

Putting these together: a function named Select-String can call the Select-String cmdlet, but it must call it as Microsoft.PowerShell.Utility\Select-String or it will just go round in a loop. In some cases calling it isn’t quite enough, and PowerShell V2 delivered the steppable pipeline, which can take a PowerShell command (or set of commands piped together) and allow us to run its begin block, process block, and end block under the control of a function. So a proxy function looks like this:
Function Select-String {
  Param   ( # Same parameters as the real Select-String,
            # less any I want to prevent people using,
            # plus any I want to add
          )
  Begin   { # My preliminaries
          }
  Process { # My per-item code against the current item ($_)
          }
  End     { $steppablePipeline.End()
            # My clean-up code
          }
}
What would really help would be something to produce a function like this template – and fortunately it is built into PowerShell. It does the whole thing in 3 steps: get the command to be proxied, get the detailed metadata for the command, and build a proxy function from that metadata, like this:
  $cmd      = Get-Command Select-String -CommandType cmdlet
  $MetaData = New-Object System.Management.Automation.CommandMetaData ($cmd)
  [System.Management.Automation.ProxyCommand]::Create($MetaData)

The last command outputs the proxy function body to the console; I piped the result into clip.exe and pasted it into a new definition
Function Select-String { }
And I had a proxy function.

At this point it didn’t do anything that the original cmdlet doesn’t, but it was a starting point for customizing.
The auto-generated parameters are formatted like this:
  [Parameter(ParameterSetName='Object', Mandatory=$true, ValueFromPipeline=$true)]

And I removed some of the line breaks to reduce the screen space they use from 53 lines to about half that.
The ProxyCommand creator wraps parameter names in braces just in case something has a space or other breaking character in the name, and I took those out.
Then I added two new switch parameters -Recurse and -BareMatches.

Each of the Begin, Process and End blocks in the function contains a try...catch statement, and in the try part of the begin block the creator puts code to check if the -OutBuffer common parameter is set and, if it is, over-rides it (why, I’m not sure) – followed by code to create a steppable pipeline, like this:
  $wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand('Microsoft.PowerShell.Utility\Select-String',
                    [System.Management.Automation.CommandTypes]::Cmdlet)
  $scriptCmd = {& $wrappedCmd @PSBoundParameters }
  $steppablePipeline = $scriptCmd.GetSteppablePipeline($myInvocation.CommandOrigin)

I decided it would be easiest to build up a string and make that into the steppable pipeline. In simplified form:
  $wrappedCmd        = "Microsoft.PowerShell.Utility\Select-String "
  $scriptText        = "$wrappedCmd @PSBoundParameters"
  if ($Recurse)      { $scriptText = "Get-ChildItem @GCI_Params | " + $scriptText }
  if ($BareMatches)  { $scriptText += " | Select-Object -ExpandProperty 'matches' " +
                                      " | Select-Object -ExpandProperty 'value'   " }
  $scriptCmd         = [scriptblock]::Create($scriptText)
  $steppablePipeline = $scriptCmd.GetSteppablePipeline($myInvocation.CommandOrigin)

& commandObject works in a scriptblock: the “&” sign says “run this”, and if “this” is a command object that’s just fine; so the generated code has $scriptCmd = {& $wrappedCmd @PSBoundParameters } where $wrappedCmd is a command object.
But when I first changed the code from using a script block to using a string, I put the original object $wrappedCmd inside a string. When the object is inserted into a string, the conversion renders it as the unqualified name of the command – the information about the module is lost – so I produced a script block which would call the function, which would create a script block which would call the function, which… is an effective way to cause a crash.

The script above won’t quite work on its own because:
(a) I haven’t built up the parameters for Get-ChildItem. So if -Recurse or -BareMatches are specified, I build up a hash table to hold them, taking the necessary parameters from whatever was passed, and making sure they aren’t passed on to the Select-String cmdlet when it is called. I also make sure that if a file specification is passed for a recursive search, it is moved from the -Path parameter to the -Include parameter.
(b) If -Recurse or -BareMatches get passed to the “real” Select-String cmdlet it will throw a “parameter cannot be found” error, so they need to be removed from $PSBoundParameters.

This means the first part of the block above turns into:
  if ($Recurse -or $Include -or $Exclude) {
      $GCI_Params = @{}
      foreach ($key in @("Include","Exclude","Recurse","Path","LiteralPath")) {
          if ($psboundparameters[$key]) {
              $GCI_Params[$key] = $psboundparameters[$key]
          }
      }
      # If Path doesn't seem to be a folder it is probably a file spec,
      # so if Recurse is set, put the file spec in Include and the current directory in Path
      if ($Recurse -and -not $Include -and ((Test-Path -PathType Container $Path) -notcontains $true)) {
          $GCI_Params["Include"] = $Path
          $GCI_Params["Path"]    = $pwd
      }
      $scriptText = "Get-ChildItem @GCI_Params | "
  }
  else { $scriptText = "" }

And the last part is:
  if ($BareMatches) {
      $scriptText += " | Select-Object -ExpandProperty 'matches' | Select-Object -ExpandProperty 'value' "
  }
  $scriptCmd         = [scriptblock]::Create($scriptText)
  $steppablePipeline = $scriptCmd.GetSteppablePipeline($myInvocation.CommandOrigin)

There’s no need for me to add anything to the process or end blocks, so that’s it – everything Select-String originally did, plus recursion and returning bare matches.

I’ve put the whole file on SkyDrive here.
