James O'Neill's Blog

January 27, 2010

The GPS / Sat-Nav experience with the Touch-Pro 2 and CoPilot

Filed under: Mobility — jamesone111 @ 2:53 pm

One of the things that comes with a change of phone is sorting out sat-nav software. I've been using ALK's CoPilot as my sat-nav on my last two phones, with a Bluetooth GPS puck; I've been through three different versions of the software and I've grown used to its foibles. Since my phone arrived the day I was going to NEBytes in Newcastle I wanted to get CoPilot installed right away, and I decided to take the opportunity to upgrade to the latest version.

image Sadly ALK's e-commerce site wouldn't accept my old key to approve an upgrade, so I ended up paying full price – which was still a bargain at £27. Then the site wouldn't release the download. The download won't work without the key – not even as a trial – and the key only works on one device at a time, so the protection seems unnecessary. I've had problems getting to the point of installing CoPilot before, and this time, as then, their support line sorted everything out for me in a few minutes.

As it turned out, my phone hadn't got past the initial handshake with the network, so I knew I wasn't going to be able to activate CoPilot, and I headed for Newcastle with the software on my laptop but not yet on the phone. Installing the software is as simple as extracting a zip file to a memory card, powering up the phone with the card in it, and answering a couple of simple questions. Switching to a bigger touch screen changes the UI to something closer to a dedicated sat-nav unit, with pretty self-explanatory, touch-friendly controls.

Now, it wouldn't be CoPilot without some bit of user interface which doesn't follow conventions – this time it's the on-screen keyboard: Windows Mobile provides one and CoPilot doesn't use it, providing an ABC layout instead of QWERTY. (See the bottom two images on the left for a comparison of the experience in CoPilot and elsewhere – in this case IE.) I'm glad of the slide-out keyboard! Aside from that it's pretty easy going. One of CoPilot's strong points has always been up-to-date maps and Points of Interest, which still seems to be true; there is a nice feature which tells you which motorway lane to get into in advance, and the routing algorithm seems a little less inclined to go through town centres to save a small distance (although it still thinks it easier to get to the Microsoft office by driving through the centre of Reading). All very easy to switch to, basically.

image It has a number of different modes, including walking and cycling. I set it to walking mode to guide me the last little way to Monday's IT tweet-up in London. I was on the train at the time, doing 125MPH – hence the comical message "You are travelling faster than normal walking speed, would you like to change modes". It was interesting watching the software trying to match location to roads as we sped down the railway line – it's worth remembering that the device saves a lot of battery time by throttling back the CPU, so this can be expected to hammer the battery life. CoPilot has settings for managing the backlight – the other battery killer. "Always on" is going to hit the battery pretty hard, so it is best kept for when the device is on external power, and leaving Windows Mobile to manage things actually suspends CoPilot and turns the GPS receiver off until the device is switched back on. "Backlight: On Near turn" works pretty well, with one qualification: on the smartphone, jabbing any button would bring the display back up, so on a long trip down the motorway one could see the software's ETA or distance to the next turn. When the Touch-Pro 2 turns the display off it seems to turn the touch functions off too, so it is necessary to press the wake-up button – not such a good thing when driving.

There are several advantages to ditching the GPS puck – not least that its battery is at the end of its useful life, so it has to be powered to be usable, and one integrated unit is always a better proposition. Reception is no worse as far as I can tell, and there are two implementation advantages. First, Windows Mobile multiplexes the GPS between applications, so CoPilot can co-exist with the various twitter clients I tried (see this post) which support GPS, and with a second GPS program – I also use Efficasoft's GPS utilities when I don't need navigational guidance – it's great if what you want is a GPS compass, speedometer, or data logger. The second advantage is that the GPS has a "pre-warmer". When the receiver starts from cold it can get to a fix more quickly given an approximate position, and the phone can get a rough position from the 3G network. The information available is not GPS-accurate (otherwise we could dispense with the satellites) but it speeds up the first plot dramatically – as I found emerging from the underground in London.

CoPilot automatically saves the logs from the GPS receiver, which are in the standard NMEA 0183 format, and it's pretty easy to process these in anything which handles CSV files: Excel for example, or PowerShell. I thought it would be rather fun to plot speed against location, so I did a quick bit of data munging and pushed the information into MapPoint (MapPoint doesn't seem to like importing Excel files with the 64 bit beta of Office 2010 installed, so I ended up using PowerShell, but any normal person would have used Excel). For each minute of data I averaged the speed, latitude and longitude and plotted the point in a different colour – green for 55 MPH and up, blue for 40-55 MPH and red for under 40. It's pretty easy to spot the blocks of camera-enforced speed limits through roadworks on my return journey from Newcastle.
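For anyone who fancies doing the same, here is a minimal sketch of the sort of munging involved. Treat the details as assumptions: the file name is made up, it reads only the $GPRMC sentences (fields 1, 3, 5 and 7 carry time, latitude, longitude and speed-in-knots in NMEA 0183), and it assumes northern/eastern hemisphere positions.

# Sketch only: average speed and position for each minute of an NMEA 0183 log
Get-Content .\CoPilotLog.txt | Where-Object {$_ -match '^\$GPRMC'} |
  ForEach-Object { $f = $_ -split ","
    New-Object PSObject -Property @{
      Minute = $f[1].Substring(0,4)                                          # HHMM from HHMMSS
      Lat    = [double]$f[3].Substring(0,2) + [double]$f[3].Substring(2)/60  # ddmm.mmmm -> degrees
      Long   = [double]$f[5].Substring(0,3) + [double]$f[5].Substring(3)/60
      MPH    = [double]$f[7] * 1.15078                                       # knots -> MPH
    } } |
  Group-Object Minute | ForEach-Object {
    $avg = $_.Group | Measure-Object -Property MPH,Lat,Long -Average
    New-Object PSObject -Property @{ Minute = $_.Name
      MPH = $avg[0].Average; Lat = $avg[1].Average; Long = $avg[2].Average } }

From there it is just a question of bucketing each row by speed and handing the points to MapPoint (or Excel).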

Depending how well the battery manages I may end up geotagging a lot more of my photos – that will be PowerShell again. The tip I learned for this from my diving is to photograph the time shown on the data recorder (the Efficasoft tool in this case). That image is tagged with the camera's time but shows the logger's time, so you can work out the difference between the two when tagging the pictures. I'll come back to that another time.
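The arithmetic at the heart of it is simple enough to sketch now, with made-up times:

# The reference photo's EXIF time comes from the camera's clock; the picture
# shows the logger's clock, so the difference is the correction to apply
$cameraTime = [datetime]"2010-01-23 14:03:17"   # from the photo's EXIF data
$loggerTime = [datetime]"2010-01-23 14:02:45"   # read off the logger's screen in the photo
$offset     = $loggerTime - $cameraTime         # add to each photo's time before matching to the log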

This post originally appeared on my technet blog.

January 26, 2010

Finding a Twitter client for Windows Mobile.

Filed under: Mobility,Social Media — jamesone111 @ 11:13 am

I've tried several different twitter clients on my desktop, but I always end up gravitating back to the web interface – I think mostly because I follow a lot of links from twitter posts, and with IE7 Pro installed (on IE8, despite the name) I can use a "flick" gesture to open a link in a new tab and carry on reading down in twitter – then go to the pages I've opened. Some of the free-standing readers are very good, but opening a link takes you away from the reader.

On Saturday I was on my way to London and I knew people were tweeting about the photographers gathering in Trafalgar Square with the tag #PhNAT (photographer, not a terrorist) and I wanted to see what was happening. I tried the full and mobile web versions of twitter, and even with the bigger, better screen of the Touch-Pro 2 it wasn't a good experience. With 3G bandwidth at my disposal I decided to download a client. But which one? There are 10 or so for Windows Mobile. One wouldn't download, a couple wouldn't work once they were installed. I tested 4. Here are my totally unscientific, sample-of-one personal experiences. Don't take my conclusions as a Microsoft endorsement.

First up, and therefore the standard that the others had to beat, was Twikini.

image

Looks nice, and you can add pictures, co-ordinates from the phone's GPS, or a shortened URL when making a tweet. But where is the search? If it hasn't got search I need another client. I know some colleagues use TinyTwitter, so that was next.

image

As you can see, Tiny Twitter wants to tell me to the second when a tweet was made and from which client. I can live without that – it's not a killer – but Tiny Twitter doesn't seem to have any way of showing full names on tweets or adding GPS data to one of yours. It also takes more taps to start a new tweet. If you look at the text of the second post, Tiny Twitter gave me the whole thing but Twikini truncated it. One tap shows the whole message in Twikini and a second tap takes me to the linked page. In Tiny Twitter it's tap the tweet, then menu, status, links – and I found it corrupted some links (bit.ly ones). It still doesn't have search.

Next I tried Twibble, a Java client. I never got it to work, and what little I saw of the UI didn't encourage me to try very hard. Next came Twitula, which demanded a new version of the .net compact framework and didn't work even when I installed it (and a slapped wrist for us: I had to do that when I got back to my laptop, as it is not packaged for direct installation on mobile devices). Then came Twobile.

image

Twobile has the big advantage that it supports all the functions of Twitter including … drum roll … search. It also packs in the maximum number of tweets – although it does so by truncating them, so you need to open each one to read it. It can also invoke Google translation to translate a tweet (sadly Hebrew isn't one of the options, so I never will know what Shay was saying). Its looks are definitely against it, and some things, like following a link, require so many taps it borders on the perverse. I could use this alongside one of the others to get search, but that really is all.

[Update: somewhere about this point I tried Pocket Twit. Somehow I left it off the original list]

PocketTwit is the opposite of Twobile – Twobile's controls are geared for stylus input (which the Touch-Pro 2 supports) rather than finger input (which it also supports). PocketTwit is a true touch app – slide the column of tweets out of the way and you find menus which are located off to the sides. I found this baffling at first, and still not feeling entirely at home with touch I had a few misgivings. The major one being that when I slid the keyboard out and the device went into landscape mode, I found I wasn't sliding the column far enough to get the menu to snap into place. This may be worth re-visiting as and when I get on better with touch, but I discarded it for lack of search – although (thanks here to Scot Lovegrove) I later found it does have search, just not very accessible. I can understand people liking this app – I feel a bit of a luddite for not doing so myself.

image

Finally I arrived at mo-tweets, which comes as an “Ad supported” version or a $3.95 version. 

image

First of all it has the option to run full screen (as here), and the bar along the top gives a tap-to-tweet button (of the other 3 only Twikini lets you tweet in a single tap from the home screen). The "sections" item from the main menu also has a shortcut button on the top bar, so it can access the same wide range of choices as Twobile. There's a choice of truncating the messages which don't have the focus or showing them in full; tapping on a message gives the widest range of choices (search hash tags in it, send it by e-mail, translate again), and there is an option for shortcut buttons on the message with the focus – for reply, retweet and add to favourites. Finally, the tweet dialog has buttons for adding people, new or existing pictures, a shortened URL or your GPS location. I could suggest improvements for mo-tweets. It previews the Google maps page it will display for a URL link, but it jumps out to the default browser when a link is clicked – I'd like the option to preview the page inside mo-tweets. It gives a choice of URL shorteners, including bit.ly – I'd like to put my bit.ly account information in so I can see what traffic has gone to the link, and I'd like to be able to choose my own mapping provider. However, as it stands it gives me the key things I want in a form that I like, so that is the one I have settled on for now – just remember folks, you can't extrapolate from what one guy at Microsoft likes to anything about Microsoft as a whole.

This post originally appeared on my technet blog.

January 25, 2010

Early days with the HTC Touch Pro 2

Filed under: Mobility — jamesone111 @ 4:12 pm

Last week Orange delivered a consignment of HTC Touch Pro 2 phones and one of them had my name on it. Every phone I've ever had has been driven by buttons, and I wrote before Christmas that I was in two minds about "going touch". But my old E650 was falling apart and I decided the HTC was the best of my choices, so here are some first impressions.

This is the 6th device made by HTC I've owned (the original Compaq iPAQ 3650, the first O2 XDA – which I never used, the first Orange SPV phone, the SPV-C500 and the E650). It's the first to have HTC's name on it. In terms of build quality and design it is the best of them, and so it should be with a price tag which would let me buy a laptop or two netbooks. Adjectives which come to mind are "clean", "minimalist", "solid", "business-like".

The device is built around an 800×480 ("wide-screen VGA") display: my old desktop monitor is about 85 pixels per inch, my laptop is twice that, and the HTC is three times that – I suspect that is a large part of the cost of these devices. The screen is actually slightly smaller than the original iPAQ's (47×79mm – 3713mm² against 58×77mm – 4466mm²) but packs in 5 times as many pixels. Not to mention phone and GPS functions, a Micro-SD memory socket, a much more powerful processor, two cameras and a bucketful more memory. Each new device seems to have better brightness and saturation than its predecessor and this one is no exception – I thought this was one of the new OLED screens, but the specs say it is just LCD. That screen enables some new scenarios, especially as the graphics abilities of the device seem pretty good, and there is no getting away from the fact that applications I had on my E650 are just better on a big screen (that's not a surprise – I can't think of anything which gets better on a small screen).

One oddity of this phone is the extent to which choosing a home screen changes the whole user experience of the phone.

image

From left to right we have HTC's "Sense" UI, which has short-cuts to other HTC apps along the bottom. Then there is the Microsoft default, which puts people in mind of Zune's UI. Neither of these is greatly customizable so far as I can tell, so I can't get rid of the getting started or voice mail options (I only use my Exchange voice mail, and have turned off the one that Orange provide). Then there is the Orange home screen – the latest incarnation of something that first saw the light of day on the C500 phone, and which I've never warmed to. And finally there is the traditional Pocket-PC page – this certainly used to be highly customizable using XML files, though I haven't found out if that still applies, and it looks dated beside the first two. However it lets me see the time and upcoming appointments at the same time, and it doesn't provide a button for call history which looks like a missed call notification.

The HTC Sense "skin" gives access to the main functions via the bar along the bottom.

image

They're all pretty nice, but don't do everything, so I end up clicking All Programs, All Settings or Inbox to open up the built-in Windows applications.

I'm slowly warming to touch as a UI (though I keep growling "Why do I have to slide that, why can't I tap it?"), and with the slide-out keyboard I think I've got something where I can do mail, note taking, tweeting and so on quite easily. Since the E650 was not a 3G device, the thing I notice most is the speed of the new device – not only do its CPU and graphics make it feel nippy, it hops from my home network on WiFi to 3G, and then to the office network on WiFi pretty much seamlessly (something I gave up on with the E650, which never seemed happy about changing networks). The only speed test I've done to date showed a download speed of about 750Kbits/sec and upload of about 128Kbits/sec – plenty good enough for all but the largest downloads. One neat trick: when connected by USB the phone asks if you want to go into internet sharing mode, memory stick mode, or normal ActiveSync. I wish it would offer a "web cam mode" too, as the camera looks pretty good. There's a rear-facing autofocus camera of 2.4MP and a front-facing one for video calls (though Communicator Mobile can't use them). There's no flash, but the camera has a fast lens and high ISO rating so it can get pictures in poor light.

Music and video seem pretty good – there is a TV connection cable for the USB socket and, like previous HTC devices, an adapter is used to plug in conventional headphones. And of course the GPS is built in, and seems to work at least as well as the Bluetooth GPS puck I used before – the only Bluetooth configuration needed was to get my earpiece going, and that was pretty straightforward.

All in all, it's an easy device to like, and I'll explain a bit more in the next post or 2 … or 3.

This post originally appeared on my technet blog.

I’m a photographer, not a terrorist (or any other kind of bogeyman)

Filed under: Photography,Privacy — jamesone111 @ 11:39 am

Every now and then in photography forums someone will ask "Do I need a release to publish pictures of someone?" The law varies enormously around the world, but English law grants rights to the owner of copyright (the photographer or their employer), and not to people who appear in the pictures. The photographer can publish, exhibit or sell pictures provided nothing improper was done to get them in the first place: deceiving a model, trespassing to get a shot, taking a picture somewhere that conditions of entry meant not taking pictures or waiving the normal rights of a copyright holder, or using a long lens and a step ladder to see someone where they would have an expectation of privacy would all be examples of "improper". The rule of thumb for photography in a public place is sometimes summarized as "if it shows what anyone else could have seen had they been there, it's OK".

Except it is becoming less and less OK. It used to be possible to take photographs of children playing in public if it made a good photo. Photographers won't do that now for fear of being branded paedophiles. People seem to be unable to tell the difference between making a picture of a child having fun and a picture of a child being abused – which is far more likely to be at the hands of someone they know. If someone does not interact with a child in any way then logic says no protective action is needed; yet people have stopped taking pictures because of what others think might be in their head.
But photographers have a newer problem – people are losing the ability to distinguish tourists from terrorists. Again there seems to be a fear of what might be (but probably is not) in someone's head. The number of news stories concerning photographers being prevented from taking pictures has been rising, and it triggered a protest this weekend in London, which I went along to. It was organised via the internet, but only ITN made the pun of such a gathering of photographers being a "flash mob". I noticed the British Journal of Photography was supporting it – they've been around since the days of glass plates and have seen a lot of things come and go, so they don't tend to get worked up over nothing.
Usually these stories concern section 44 of the Terrorism Act 2000. Some people were protesting about the act itself, although I see it more as "section 44 is being used far too often on a random basis without any reasoning behind its use" – not my words but those of Lord Carlile, the Government's independent reviewer of anti-terrorist legislation, quoted by the BBC. If you look up section 44 it says

An authorisation under this subsection authorises any constable in uniform to stop a vehicle, or a pedestrian in an area or at a place specified in the authorisation and to search…
It says that the authorisation must be given by a police officer for the area who is of at least the rank of assistant chief constable (or Commander for the Metropolitan and City of London forces), and they must consider the authorisation expedient for the prevention of acts of terrorism. Section 46 says the authorisation must specify an end date which must not occur after the end of the period of 28 days beginning with the day on which the authorisation is given (although the authorisation can be renewed on a rolling basis).

A list of the authorisations issued would be a draft list of possible targets, so the police don't publish such a list. However a constable acting under S44 must be able to show they hold the office of constable (Police Community Support Officers, security guards and so on have no powers) and that proper authorisation has been given. It would be interesting to see what happened if an officer mentioned section 44 and got the response "You claim to have authorisation issued in the last 28 days by an officer of suitable rank, covering this place. Could you substantiate that claim please?" It's my belief that in a lot of cases where someone claims to be acting in the name of section 44 they either lack the proper authority or exceed the powers it gives them, which are set out in section 45, as follows:
The power conferred by an authorisation under section 44(1) or (2) may be exercised only for the purpose of searching for articles of a kind which could be used in connection with terrorism

and

Where a constable proposes to search a person or vehicle by virtue of section 44(1) or (2) he may detain the person or vehicle for such time as is reasonably required to permit the search to be carried out at or near the place where the person or vehicle is stopped.

There is no power to demand any personal details or the production of ID – indeed for the time being we are free to go about our business without carrying ID. The power is only to search for items which could be used for terrorism and not to detain a vehicle or person for any longer than reasonable to carry out the search. There is no power to seize photographs or to demand they be deleted.

What is interesting to a photographer is section 58.
It is a defence for a person charged with an offence [of collecting or possessing information likely to be useful to a terrorist] to prove that he had a reasonable excuse for his action or possession.

Train spotters have fallen foul of the act (seriously, what use would a terror cell have for rolling stock serial numbers – an on-line train timetable would give them all they need) and they have to use their hobby as "reasonable excuse", just as photographers have to when taking pictures of St Paul's Cathedral or the Houses of Parliament. (And if you photograph trains, well…) Of course there are sites with legitimate bans on photography – the Photographer Not a Terrorist website has a map of them, and you can see just how good a picture Google Maps gives of each of them. It does make you wonder why anyone planning an attack would go out with a camera.

None of this post has anything much to do with the normal content of this blog [I'll post separately on the social media aspect] except that, photography having gone mostly digital, it is bound up with IT, and anyone who works in technology should be concerned when that technology is used to erode freedoms we take for granted – whether it is governments targeting data held by Google, the planned requirement to provide a National Insurance number when registering to vote (using a single key in many databases makes it so much easier to go on a fishing trip for information), or the national DNA database with its pretext that everyone is a potential criminal. That mentality gave us the Kafkaesque-sounding "National Safeguarding Delivery Unit", which checks people against another database to make sure they can be trusted to work with children but whose boss admits it gives a false sense of security, and anecdotal evidence says that the need to be vetted puts people off volunteering. Even the people who will operate the new scanning machines at airports object to being vetted – oh the irony. And as Dara O'Briain put it on Mock the Week recently, "If the price of flying is you have to expose your genitals to a man in the box, then the terrorists have already won."

Ultimately the photographers' gathering this weekend was about that. We won't go to bed one night in a liberal democracy and wake the next morning in a "police state", but if little by little we lose our right to go about our lawful business unmolested, if checks and surveillance become intrusive, and if the only people allowed to point a camera at anyone else are unseen CCTV operators, then we've lost part of the way of life which we are supposed to be safeguarding. The Police seem to have made the decision that if photographers were demanding that the law shouldn't be misused, they'd just follow the advice given by Andy Trotter of British Transport Police, on behalf of ACPO, that "Unnecessarily restricting photography, whether from the casual tourist or professional, is unacceptable" and leave the photographers to it with minimal police presence. It wasn't a rally – no speeches were arranged – and so we had the fun of photographing each other, in the act of photographing each other. A couple of staff from the National Gallery got mildly annoyed with photographers obstructing the gallery entrance, but they kept their sense of proportion. I didn't take many pictures – the light was dreadful – but you can see a couple here. As I said above, the social media side has given me enough material for at least one more post.

This post originally appeared on my technet blog.

January 19, 2010

How to Pretty Print XML from PowerShell, and output UTF, ANSI and other non-unicode formats

Filed under: Powershell — jamesone111 @ 10:34 am

PowerShell has been taking more than its fair share of my time of late and I need to redress the balance a bit – just not quite yet.

PowerShell and redirection.

I've been working on my Hyper-V library for CodePlex: this has separate files for every command, and then, to keep the start-up process for the module manageable, these are merged together – I have a very short script to do this. All the constants from the top of the files get grouped together at the top of the final file, but basically it is getting the files and outputting them to a destination using the > and >> operators. Then I got a mail from Ben, who wanted to sign the scripts but found a problem if they were saved as "Unicode BigEndian Text". I hadn't selected this, but that's the default for text output from PowerShell. One can use Out-File -Encoding ASCII, but that has another undesirable behaviour – it pads (or truncates) text to fit a given width. It turns out that – although the help files for Add-Content and Set-Content don't mention it – both take -Encoding, so > can be replaced with | Set-Content -Encoding ASCII filename and >> can be replaced with | Add-Content -Encoding ASCII filename.
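In script form the change looks something like this – a sketch, with made-up file names (the real merge script does a little more):

# Old approach: > and >> produce BigEndian Unicode text, which broke signing
#   Get-Content .\Disk.ps1   >  .\HyperV.psm1
#   Get-Content .\Helper.ps1 >> .\HyperV.psm1
# New approach: the same merge, but with the encoding under our control
Get-Content .\Disk.ps1   | Set-Content -Encoding ASCII .\HyperV.psm1
Get-Content .\Helper.ps1 | Add-Content -Encoding ASCII .\HyperV.psm1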

Pretty printing XML

Ben's mail was rather timely because I had parked a problem with XML. I wrote about writing MAML help files some while back and I'm still using InfoPath to do the job: the formatting is wantonly nasty. So I wanted to reformat the files so they were vaguely readable, and went and found various articles about how to do it (I think I ended up adapting this code of James's, but I wish now I'd kept the link to be sure I'm assigning credit correctly).

function Format-XML {
  Param ([string]$xmlfile)
  $doc = New-Object System.Xml.XmlDataDocument
  $doc.Load((Resolve-Path $xmlfile))
  $sw = New-Object System.IO.StringWriter
  $writer = New-Object System.Xml.XmlTextWriter($sw)
  $writer.Formatting = [System.Xml.Formatting]::Indented
  $doc.WriteContentTo($writer)
  $sw.ToString()
}

Of course I used > to redirect this to a file, and it did not work; if I used | clip and pasted the result into notepad all was well. Eventually it dawned on me that the first line of the file was
<?xml version="1.0" encoding="UTF-8"?>

And of course I was creating Unicode files, so … | Set-Content -Encoding UTF8 and it all works. So I had my nicely formatted XML files providing help. And the next post will explain what it was all for.
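Putting the two fixes together, the whole job comes down to one line (the file names here are only for illustration):

# Pretty-print the InfoPath-mangled MAML, keeping the encoding the XML declaration promises
Format-XML .\HyperV-Help.xml | Set-Content -Encoding UTF8 .\HyperV-Help-Pretty.xml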

This post originally appeared on my technet blog.

January 18, 2010

“Vague is good” revisited: How to make usable PowerShell Functions

Filed under: Powershell — jamesone111 @ 8:41 pm

Before Christmas I wrote about the conclusion I was forming on PowerShell parameters: vague is good. The Christmas season is when my parents used to get various kinds of puzzles out, and most of my puzzles these days seem to be PowerShell-type things rather than jigsaws and crosswords. My equivalent of the 1000-piece jigsaw over the holidays was to try to get my Hyper-V library for PowerShell working with PowerShell remoting. There isn't an overwhelming need to do this because the library can manage a remote server using WMI, but I felt I should at least test it. It failed miserably. I showed the template for my functions back in that post; it goes like this

Function Stop-VM{
      Param( [parameter(Mandatory = $true, ValueFromPipeline = $true)]$VM,
             $Server = ".")
    Process{ if ($VM -is [String]) {$VM = Get-VM -VM $VM -Server $Server}
             if ($VM -is [Array])  {[Void]$PSBoundParameters.Remove("VM")
                                    $VM | ForEach-Object {Stop-VM -VM $_ @PSBoundParameters}}
             if ($VM -is [System.Management.ManagementObject]) {
                 #Do the work of the function
                 $VM.RequestStateChange(3) }
}}

My point when I wrote "vague is good" was: don't assume either that the user will pass a name, or that they will pass an object – specifying types enforces those assumptions on the user's behaviour. In the same way, don't assume that the server is a single string. Accept what the user wants to pass – it's your job to sort it out, not theirs. That's not like implementing code to be used by other programmers in other languages, because they expect the constraints of types.

Remoting doesn't like the constraints of types: the serialization process it uses to cope with different objects being available on different machines means that when a Virtual Machine object comes back from a remote server it is no longer of the type [System.Management.ManagementObject]. So if I send it back to a second command it just drops through without doing anything. If I send an array of such objects it morphs into an ArrayList. After chatting with some friends in the product team I understand why this is the case, but whilst I wasn't making assumptions about types in my parameters, I hadn't got rid of them completely, and I had to go back and change every instance where I'd used this template. The key bits I needed to change became

if ($VM.__Count -gt 1) {[Void]$PSBoundParameters.Remove("VM")
                        $VM | ForEach-Object {Stop-VM -VM $_ @PSBoundParameters}}

if ($vm.__CLASS -eq 'Msvm_ComputerSystem') {}

There were other side effects too. In some places I might have written $VM.getRelated("Msvm_ComputerSettings"), but the serialized object doesn't have a getRelated() method, so that fails. Instead I can use Get-WmiObject with a query in the form "associators of {$vm} where resultClass=Msvm_ComputerSettings". Even that needs to change, because $VM expands to its WMI path only when it is of the type [System.Management.ManagementObject]: when it's been serialized it doesn't expand in the same way, so it needs to be associators of {$($vm.__Path)}
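Side by side the two look something like this – a sketch, using the class name from the text above and the Hyper-V v1 WMI namespace, so treat the details as assumptions:

# Works only while $vm is a live [System.Management.ManagementObject]:
#   $vm.getRelated("Msvm_ComputerSettings")
# Works for a deserialized copy too, because the __Path property survives serialization:
Get-WmiObject -ComputerName $vm.__Server -Namespace "root\virtualization" `
    -Query "associators of {$($vm.__Path)} where resultClass=Msvm_ComputerSettings"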

Now you might think that this cements the view that leaving types unspecified is something I'd put forward as a best practice – but it's more complicated than that. Having read this post of Raymond's recently, the advice is more along the lines of "if you are interested in the tail, don't specify the animal, just check it has a valid tail" (Raymond was making a totally different point – he's very good on those). To see why it is more complicated, let's take another example. I have a function which does things to files – in this case expanding virtual hard disks. My code wants the path to the file it is supposed to work on and, if the file lives on a remote server, the server name as well. But as well as typing path1.vhd, path2.vhd, the user may well want to pipe virtual hard disks into the command. They might:

  • Have a list of names in a file and do type Names.txt | Expand-VHD -Size 30GB
  • Do DIR D:\VHDS | Expand-VHD
  • Pipe in the output of the Get-VHD function I provided, which uses WMI to get remote file objects for VHDs on a remote computer
  • Pipe in the output of the Get-VHDInfo command I provided, so they can filter the list of disks to only dynamic VHD files
  • Pipe in the output of the Get-VMDisk function I provided – which returns disks attached to a virtual machine.

That's 5 different kinds of object, and they use different property names for the path. Fortunately PowerShell has a warren of trained rabbits for me to pull out of a hat here. So first, here is the simplest way we can write the function to deal with paths, which forces the user to use strings.

param ( [parameter(ValueFromPipeline=$true)]
        [String[]]$VHDPaths,
        [String]$Server = "." )

process { Foreach ($VHDPath in $VHDPaths) { <# Do Stuff #> } }

But here is the first change, which enables the function to get the string from a property of the object. 

[parameter(Mandatory=$true, ValueFromPipelineByPropertyName =$true, ValueFromPipeline=$true)]
[String[]]$VHDPaths

That has turned the parameter declaration into "take a string from a command-line parameter, or a string from the pipeline, OR, if you have an object in the pipeline which has a property named VHDPaths, use that". Great … except nothing uses the property name VHDPaths, so the parameter can be told to use other property names by adding

[Alias("Fullname","Path","DiskPath")]                                

Now the function can take a string, a WMI object or a file object: when I provide functions which output disk objects I just have to make sure they include a property with one of those names. If we go back to the animals analogy – someone might tell you a number of feet, or might pass you an animal which has "hooves" or "paws"; what we're saying is "if a number, great, but if passed an animal, look at the count of feet/hooves/paws". Simples.

There is one last trick – more than one parameter can be set using a property of a piped object. If someone does Get-VHD -Server MyServer | Expand-VHD -Size 30GB it should just work: it shouldn't force them to put a -Server parameter into the Expand part of the command line – so how can that be done? Like this.

[parameter(ValueFromPipelineByPropertyName=$true)][Alias("__Server")]
[String]$Server = "."
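Putting all the pieces together, the head of such a function might look like this – a sketch built on the assumptions above, with the body elided:

Function Expand-VHD {
    Param(
        [parameter(Mandatory=$true, ValueFromPipeline=$true,
                   ValueFromPipelineByPropertyName=$true)]
        [Alias("Fullname","Path","DiskPath")]       # covers file objects, WMI objects and VM disks
        [String[]]$VHDPaths,

        [parameter(ValueFromPipelineByPropertyName=$true)]
        [Alias("__Server")]                         # picked up from piped WMI objects
        [String]$Server = ".",

        [Long]$Size                                 # e.g. 30GB
    )
    Process { foreach ($VHDPath in $VHDPaths) { <# do the work against $Server #> } }
}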


This post originally appeared on my technet blog.

January 16, 2010

The “Joy” of Reg-ex part 3: Select-String

Filed under: Powershell — jamesone111 @ 5:56 pm

One of the PowerShell tools I've been using a lot recently is Select-String. Going through lots of files trying to find a mistake I know is in several of them, I can just bang in

select-string -SimpleMatch "Split-path" -Path *.ps1 -List | edit

Finding text in files is not exactly radical. But Select-String has two things which allow it to do some really clever stuff. First of all – no surprise, since this is a series on regular expressions – it can use a reg-ex instead of a simple match. (And I can feel people with unix experience mumbling "hasn't he heard of grep?") Where things move up a gear, this being PowerShell, is that we get objects back. So, by way of example: I've wanted to show people how my PowerShell library for Hyper-V is built up – for example, what WMI objects I use – and Select-String's objects allow me to get to that in a handful of commands. First off, get the script and strip out lines which are blank or comments. I could just pipe this into the next command, but for now I'm going to store it in a variable.

$script = get-content .\disk.ps1, .\Helper.ps1, .\menu.ps1 |
            where {($_ -ne "") -and ($_ -notMatch "^\s*#")}

I actually had a longer list of files, but once everything is in $script I can then use Select-String. What I want to find is: the word "Function" followed by at least one space, followed by word characters, then a hyphen, then word characters which come to a word boundary. Again, for ease of reading, I'll put the lines into a variable rather than pipe them.

$Lines = $script | Select-String -Pattern "Function\s+(\w+-\w+)\b"

Select-String returns MatchInfo objects, which contain the file name (if it is reading files), the line number where the match was found, the pattern which triggered the match (we might have specified more than one), and a collection of matches. Each match in that collection has (among other useful things) a groups collection. Groups[0] holds the whole match, but using brackets allows us to isolate groups inside a bigger expression. The brackets in the regular expression I used – "Function\s+(\w+-\w+)\b" – said "and isolate the function name". So I can refer to each line's Matches[0].groups[1].value to get the function name. What I want is new objects with the function name and the line number where it was declared. You'll see in a moment why I also want them sorted.

$fnlines = $Lines | ForEach-Object {new-object psobject -property @{
                        Function = ($_.matches[0].groups[1].value)
                        lineNo   = ($_.lineNumber)}} |
                    sort-object -descending -property lineNo

Which gives me items that look like this.

Function             lineNo
--------             ------
Sync-VMClusterConfig    789
Set-VMSerialPort        456

Next I do the same thing for WMI classes – all the classes are strings of text which start with either MSVM, WIN32 or CIM, so I can specify those with a single regular expression, and I can pick up the match using matches[0].groups[0] (I'm looking for the whole match this time, hence groups[0]). This time I want new objects with the WMI class name and the function name where it was found. I know the line where each class was found, and that previous set of objects holds the line number where each function was declared, so I need the first of those objects with a line number less than the one where the class was used (which was why I sorted in reverse order before), and to get its Function property.

$Lines = $script | select-string -pattern "Msvm_\w+|win32_\w+|CIM_\w+"

ForEach ($line in $lines) {new-object psobject -property @{
            WmiClass     = ($line.matches[0].groups[0].value)
            FunctionName = ($fnlines | where {$_.lineNo -lt $line.LineNumber} |
                                select-object -first 1).Function }
}

So now I get items back which look like this

FunctionName        WmiClass
------------        --------
Add-VMNewHardDisk   Msvm_ComputerSystem
Add-VMDisk          Msvm_ComputerSystem

This example is a case of telling Select-String "look for one of many possibles, and then show me what matched". It's possible to take this further. A lot further. The following builds a giant reg-ex with every PowerShell cmdlet in it.

Get-command -CommandType cmdlet | foreach-object -Begin {$cmd  = ""} `
-process {$cmd += "$_|"}`
-end {$cmd = $cmd -replace "\|$",""}

I can use this to get back lines which contain a cmdlet, and since there might be more than one cmdlet on a line, I use the -allmatches switch to make sure I get all of them

$Lines = $script | select-string -pattern $cmd -AllMatches

So now I have a similar set of lines to what I got before – this time there is more than one match in each, but each match will only have one group – so I can unpack the objects to get cmdlet names, and group the results to see which ones get used most, like this

$lines | foreach-Object {$_.matches | foreach-object {$_.groups[0].value }} |            
   group-object -NoElement | sort-object -property count -desc            

Now this is something I just couldn't have done in any other language I have worked in. (We did SNOBOL at university, and if I had ever got to grips with it … well, maybe.) The combination of objects – which give back something way more empowering than plain text – and built-in cmdlets which do so much of the work for us with the power of regular expressions is amazing. And for those who like to play "PowerShell golf" – that is, doing the job with the fewest strokes – you can get it down to one (wrapped) line.

gc .\disk.ps1, .\Helper.ps1 | ?{($_ -ne "") -and ($_ -notMatch "^\s*#")} |
  select-string -pattern $cmd -All | %{$_.matches | %{$_.groups[0].value}} |
  group -No | sort count -Desc

This post originally appeared on my technet blog.

January 14, 2010

The “Joy” of Reg-ex part 2 – ways I use it

Filed under: Powershell — jamesone111 @ 11:26 pm

In the previous post I gave some of the background to regular expressions and how they might be used. I thought I'd give a few examples.

1. Checking paths.

Quite a few of my functions take paths to Hyper-V virtual hard disk files as parameters and I don’t want to force the user to type “.VHD”, so I also have a check

if ($Path -notmatch "VHD$") {$path += ".VHD"}            

If the path doesn't end with VHD, this adds VHD to the end of it. I also don't want to make the user type in the default path, so I had variations of this line in many places:

if ((Split-Path $Path) -eq "") {$Path = Join-Path $Default $Path }

It checks to see if a path parameter is actually just a file name; if it is, it adds a default path to the front of it. There's only one problem – the path might be on a remote server, and might not exist on the local server. That never affected me in testing, but one of my users discovered that PowerShell's Split-Path tries to be helpful but might come back with

Split-Path : Cannot find drive. A drive with the name 'E' does not exist.

Not what I want at all. What I'm actually testing for is "is there a letter followed by a colon, OR a character followed by a \ followed by another character"; in regex terms this is "(\w:|\w)\\\w" – it helps to read it aloud as "(word-character colon OR word-character) backslash word-character" – so my line morphs into:

if ($Path -notmatch "(\w:|\w)\\\w") {$Path = Join-Path $Default $Path }

There is a simpler case when people are attaching the host DVD to a virtual drive in Hyper-V. I don't want people to have to find the internal drive path Windows uses for the drive; I just want them to be able to say D:, so the test is for "start, word-character, colon, end", like this

if ($path -match "^\w:$") {$path = Get-WmiObject etc }

2. Building lists into separated strings

Quite often you have a need to loop through something, building up a list with commas or semi-colons or whatever between the items; and you need to deal with empty strings, making sure you don't have a dangling separator at the end.

$BootOrder | foreach-object -begin   {$bootDevices = ""} `
                            -process {$bootDevices += ([bootmedia]$_).tostring() + ","} `
                            -end     {$bootDevices -replace ",$",""}

I have a bootmedia enum type which converts 0,1,2,3 into CD, Network, Floppy and IDE disk. The loop first sets up an empty string, then converts each item using the enum type and adds a comma. Finally it returns the string with the trailing comma lopped off.
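In case it helps, here is a sketch of how such an enum can be declared from PowerShell – my library does something equivalent, though the details here are illustrative:

# Declare the enum once per session; [bootmedia]1 then displays as 'Network'
Add-Type -TypeDefinition @"
public enum bootmedia : int { CD = 0, Network = 1, Floppy = 2, IDE = 3 }
"@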

3. Simplifying an or

Sometimes you want to do something if a piece of text is one of multiple values – for example, not doing something on the core or Hyper-V installations which you would do on a full server installation. Some people would write this as

$regpath     = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion"
$regProps    = Get-Itemproperty -Path $regpath
$WinEdition  = $regprops.editionid
$Coreinstall = ($winEdition -like "*core*")
$HyperVSku   = ($winEdition -like "*hyper*")
if ( -not ($CoreInstall -or $HyperVSku) ) {"boo"}

But this can be compacted into a much easier to read version

$regProps = Get-Itemproperty -Path "HKLM:\Software\Microsoft\Windows NT\CurrentVersion"                        
if ( $regprops.editionid  -notmatch "core|hyper") {"boo"}            

or even, on a single line


if ((Get-Itemproperty -path "HKLM:\Software\Microsoft\Windows NT\CurrentVersion").editionID -notmatch "core|hyper") 
 

I said at the end of the previous post I was going to cover "one of the best tools in PowerShell – Select-String". That will actually be in part 3.

This post originally appeared on my technet blog.

The “joy” of Reg-ex part 1.

Filed under: Powershell — jamesone111 @ 4:17 pm

One of the good things in PowerShell is support for regular expressions – in fact I suspect some Unix sys-admins might laugh at their Windows counterparts for not getting to grips with regular expressions sooner.
The downside is that regular expressions are an area which gives a lot of people a serious headache.

So let's start from first principles.

1. Regular expressions are about looking for text that matches a pattern. When matching text is found various follow-up operations can be performed such as replacing it.

2. The patterns defined by regular expressions allow "classes" of characters to be specified – for example "word" characters, digits and spaces.

3. The patterns can specify groups of alternatives or repetitions.

Everything else stems from that: simple. The thing with simple ideas is we build complex practices on them.

In PowerShell we can use the -match operator to test an expression, so here are some examples:

"cat" -match "at"  returns true.  “cat” contains “at”, we have a match. That was easy. We can specify alternates
"dogma" -match "cat|dog"  returns true because the test pattern translates as  “cat or dog” (pipe sign is “or”) and dogma is a match for dog.
"AlleyCat" -match "cat|dog"  also returns true because it contains cat

If we want to specify whole words rather than substrings containing cat or dog, we can use the first of the special characters: \b means a word boundary.
So "dogma" -match "cat\b|dog\b" returns false, but "Alleycat" -match "cat\b|dog\b" returns true. We can specify a boundary before as well as after the text to get the exact word.

In regular expressions, the wildcards that most people are used to divide into two things: classes of characters – another place where we use special characters – and repetitions. \w is a word (alphanumeric) character, \s is a space character, \d is a digit. A dot stands for ANY character at all. Changing case reverses the meaning: \W is any non-word character, \S is any non-space.
If we want to specify alternates we can write them as [aeiou], or [a-z] for a range. We can reverse the selection of alternates with ^, so [^aeiou] is any non-vowel.
"oat" -match "\b[a-z]at" returns true, but replacing the letter o with a zero as in "0at" -match "\b[a-z]at" returns false.

"oat" -match "\b[a-z]at" will only return true of there is exactly 1 letter between the start of the word and “at”,  so chat won’t match. We can specify repetition: {2} means 2 exactly repetitions, {2,10} means at least between 2 and 10 repetitions, {2,} means at-least two. We have short-hands for these: * means any number including 0, + means any non zero number and ? means zero times or once but no more. Incidentally if we need to match a character which has a special use – the different kinds of brackets, . * ? and \ we “escape” them it prefixing with a \

"[a-z]at " will find a match with cat and hat but not  at or chat
"[a-z]+at" will find a match with cat, hat and Chat , but not at ,  
"[a-z]?at"
will find a match with cat, hat and at , but not chat ,
"[a-z]*at" will find a match with cat, hat at and chat

This requires unlearning some automatic behaviours we've learnt: at most command lines we can use * to mean "any combination of characters, including none", so A* means "A followed by anything". In regular expressions "A*" will always match, because it means "containing any number of instances of 'A', including none"; the regular expression syntax is A.* (A followed by any character, repeated any number of times). Similarly, where we use ? at the command line to stand for a single character, in regular expressions we use a dot.

There are a couple of other special characters worth noting: ^ means start of line and $ means end of line. These last two are very useful in scripts, where you often need to test for something which begins or ends with a given piece of text. For example, if in PowerShell you declare a variable to hold some text – say $myString = "The cat sat on the mat" – the .net string type has an EndsWith() method, so $myString.EndsWith("at") returns true. Great. Except we often want to do something with the text – like replace it – and PowerShell has a -replace operator too. If we want to say "replace the HTM at the end of a file name with HTML" we can do $mystring -replace "HTM$","HTML". Similarly, if we're looking at text and we want to cope with trailing spaces, strings have a Trim() method, but regular expressions can get rid of punctuation as well: $myString -match "at\W*$" will match even if there is punctuation and spaces between the final "at" and the end of the line.
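A quick demonstration of the difference (the strings are just for illustration):

$myString = "The cat sat on the mat."
$myString.EndsWith("at")            # false - the full stop gets in the way
$myString -match "at\W*$"           # true  - \W* soaks up trailing punctuation
"Index.HTM" -replace "HTM$","HTML"  # Index.HTML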

So far so good. We can also use a -split operator in PowerShell: again, .net strings have a Split() method, but if we try this

$string = "The cat sat,      on the dog’s mat" ; $string.split(" ")

it will return blank extra entries for the spaces between "sat" and "on", and the comma will be welded to sat for good. We could split the text at any non-word character by using -split "\W". Unfortunately regular expressions don't consider ’ to be a word character, so ’s will be split off from "dog". This is easily fixed by using $string -split "[^\w’]+", which says split where you find something which is neither a word character nor an apostrophe, treating multiple occurrences as one.

The last thing I wanted to mention is one I always have to double-check, and that is "greedy" versus "lazy" behaviour. Suppose I want to change something in the first tag of a piece of XML. I might look for a match on "^<.*>" – which says find the start of the text, then a <, then any other characters, and finally a >. This will match the whole document, because * finds as many characters as it can before the final >. If we want the fewest characters, the * must be followed by a ? sign.
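For example, watch the difference on a scrap of XML:

"<root><item>1</item></root>" -match "^<.*>"  ; $matches[0]   # greedy: matches the whole string
"<root><item>1</item></root>" -match "^<.*?>" ; $matches[0]   # lazy: matches just <root>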

In the next post I’ll look at a couple of ways we can put regular expressions to work including one of the best tools in PowerShell – select string.

This post originally appeared on my technet blog.

New year, New user group … Newcastle.

Filed under: Events — jamesone111 @ 3:08 pm

Thankfully the snow is beginning to recede, and looking at the last blog post I made makes me realise just how much it has disrupted my schedule.

Next week I'm off to Newcastle. Jonathan and Andrew got in touch to say they were launching a new user group, North East Bytes:

We are pleased to announce a new User Group in the North East of England, based around Microsoft Technologies, North East Bytes (NEBytes).  We have decided to start this group in order to help Developers and IT Pros in the community with the constant battle to learn, stay current and broaden their knowledge.

We run monthly meetings every third Wednesday of the month (except on the second Wednesday in December – to allow time for Christmas parties and shopping!) at Newcastle University.  Each meeting consists of two one hour presentations (one Developer topic and one IT Pro topic) and we have refreshments, food, giveaways and prizes.

Attendance at our meetings is completely FREE!! The venue is provided kindly by the University, our Speakers kindly provide their time for free, and we as organisers provide our time for free to organise the events. We will provide refreshments and we also provide hot food; all we ask is that if you would like to partake in the hot food, please make a small donation towards the cost via the open contribution box at each meeting.

Their launch event is Wednesday 20th January 2010. I'm going, to cover Hyper-V, and my developer colleague Mike Taulty is going too – he'll cover Silverlight.

We tend to skulk about in the South East of England and not get to other corners of the country, but when we have been to Newcastle the audiences have always been great – I heard from Jonathan that early sign-up for this is looking good too, but if you’re based in that part of the world and haven’t signed up you’ve got a few days to do so. I’m looking forward to it.

Update: Oops – linked to Andrew but not Jonathan. No favouritism intended – I've put it right.

This post originally appeared on my technet blog.
