James O'Neill's Blog

March 26, 2011

F1: The hidden effects of moving wings

Filed under: Uncategorized — jamesone111 @ 10:25 pm

There seem to be divided opinions about the effect of the “Drag Reduction System” introduced in F1 this season. The rules are:

  • Drivers can operate a device to lower the effective part of the rear wing, reducing both downforce and drag. The wing returns to its original position when the driver applies the brakes.
  • In wet conditions the system is disabled.
  • In qualifying the drivers can use it at will.
  • In the race it is armed remotely by a system in race control: if a car is close enough to the one in front (the margin will be 1 second to begin with – this may change over the season) at a specific point, the following driver can lower his wing for a specific section of the track – typically the longest straight.

“Push to pass” divides people. We had it in the days of turbo engines: in the 1980s we had qualifying engines which wouldn’t last a race distance, and the boost button in a race gave a burst of similar power – for a sustained period it was a case of “the engines cannae take it”, nor could the tyres, and fuel would run out. We had it when KERS first appeared; “Kinetic Energy Recovery Systems” are currently a gimmick: the energy stored, the rate at which it is returned (power) and the time over which the return can take place are all constrained. F1 talks about being greener; removing the limits on KERS would be an obvious way to do so, and I’d have it feeding extra power in when the driver applied full throttle. Now we have push-to-pass with wings.

Predicted effect 1. Last use wins. If it turns out that DRS makes passing easy, then when two cars are evenly matched drivers won’t want to be re-passed, so they will time their passing move to use the wing at the last possible moment.

Predicted effect 2. More tyre stops. There was always a decision to make: sacrifice position on track by making a stop for fresh tyres, or hold out? The harder it is to overtake, the bigger the advantage of fresh rubber needs to be before stopping becomes the preferred option – because, as Murray Walker always used to say, “Catching is one thing, passing is quite another”. So picture the scene: with a dozen or so laps to go, the first two cars have been on hard tyres for a good few laps and the leader is being caught; thanks to DRS the second-place car gets past. The former leader’s car is fractionally slower, but on fresh soft tyres it could go 2-3 seconds a lap quicker – enough to recover the 20-30 seconds a pit stop takes, with a couple of laps to spare. Most of the tyre advantage will have gone by the time he has caught up: previously it would have been easy for the new leader to defend for the last couple of laps, but now, if the chasing car can get within DRS distance, he should be able to make a last-gasp pass. In the wet, inspired changes of tyres win races – it didn’t really happen in the dry, until now.

Predicted effect 3. The return of slipstreaming. The FIA banned slipstreaming… OK, not as such: imposing an 18,000 rev limit banned it. How so? Without a limit on revs, in top gear, revs and speed increase until the acceleration force coming from the engine matches the retardation force from friction and aerodynamic drag. Reduce drag by slipstreaming and top speed and engine revs will increase. But what if gear ratios are optimised to get the best lap time with no slipstream (in qualifying) – hitting the maximum revs as the driver hits the brakes at the fastest point? If revs are limited, the car won’t go any faster with the aid of a slipstream.

With the ability to use DRS in qualifying, the optimum is to hit 18,000 revs in low-drag trim at the fastest point. The teams can’t change gear ratios after qualifying, and in the race the cars will be in high-drag trim most of the time – so they won’t be reaching 18,000 revs and will have a margin for slipstreaming.

Predicted effect 4. Race pace trumps grid place. Grid penalties become less effective. The advantage of starting ahead of a car which is faster than yours, or the disadvantage of starting behind a slower one, varies with the difficulty of passing. Since the car can’t be reconfigured after qualifying, making overtaking easier might mean car set-up is tilted more towards race configuration than qualifying configuration. It also means the cost of taking a penalty for a precautionary gearbox change (say) is smaller.

Whether or not any of these things happen remains to be seen. Still: fun season in prospect.

March 25, 2011

PowerShell parameters revisited.

Filed under: Powershell — jamesone111 @ 8:49 pm

A little while back, as a follow-up to a talk I’d given, I wrote a post entitled why it is better not to use PowerShell Parameter validation. I repeated the talk recently and met up with Thomas, who’d organized both. His initial instinct had been that my “best practice” of NOT declaring parameter types was just wrong – not a surprising view given what he has been exposed to…

Over a quarter of a century ago, when I was studying computer science at university, one of the lecturers wrote the following quote from Dijkstra on the board: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”
I might be pushing a heavy stone up a steep hill here, trying to overcome the “mental mutilation” of too much “good programming”. But to see why “good programmers” can get the declaration of parameter types wrong in PowerShell, consider this example:
Function Use-File {
    Param ([System.IO.FileInfo]$theFile)
    Format-List -InputObject $theFile -Property *
}

Use-File $f

Simple stuff: I declare a function with one parameter – a FileInfo object – and it outputs a list showing all the properties of that object.
Then I pass a string to the function, containing a relative path to a file held in the current folder. So… what comes out of the function?

  1. An error, or
  2. The properties of the file in the current folder, or
  3. Something else?

If you have a “good programming” background I’d expect you to say “an error”: experience tells you that passing a type other than the specified one does that. Not here.
In a shell you don’t want to worry about the type of object which comes out of one command or goes into the next, so PowerShell smooths out differences between types. Ask it to evaluate “Fire ” + 1 and it does an implicit conversion – usually called a type cast – of the second argument to the type of the first, and returns “Fire 1”.
(1 + “Fire” fails because, when PowerShell applies the rule of converting to match the first argument, “Fire” can’t be turned into a number.)
After spending a while doing only PowerShell, a compiled language like C# seems to need a huge number of “cast to string”, “isn’t null”, “isn’t empty”, “is non-zero” operations, because it requires casts to string or Boolean types to be explicit. It’s not that one is good and the other bad; the different environments impose different requirements.
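Those coercion rules are easy to check at the prompt:

```powershell
"Fire " + 1      # second argument is cast to match the first -> "Fire 1"
1 + "2"          # "2" CAN be converted to a number, so this is 3
# 1 + "Fire"    # would fail: "Fire" can't be converted to a number
```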

In PowerShell putting [Type] in front of something is how you explicitly cast it to a different type. [FileInfo] does not say “reject anything which isn’t a file” as it would in C#;
it says to PowerShell “convert anything which isn’t a file…”.
So, what conversion does it attempt? The underlying FileInfo .NET object has a constructor which accepts a string and returns a FileInfo object, so PowerShell creates the new object using that constructor, passing in the string. Unfortunately the constructor doesn’t handle the relative path, so the object which comes back represents a non-existent, read-only file in the windows\system32 folder. Yes, you did read that correctly: it doesn’t exist and you can’t write to it.
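The behaviour is easy to reproduce at the prompt (the file name here is illustrative). The cast “succeeds” even though no such file exists:

```powershell
$file = [System.IO.FileInfo]"no-such-file-12345.txt"
$file -is [System.IO.FileInfo]   # True: the cast worked
$file.FullName                   # resolved against the process's working directory,
                                 # which may not be PowerShell's current location
$file.Exists                     # False here: the object describes a file
                                 # that nobody ever checked for
```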

I was taught to be as clear as the tools allowed about data types. It’s my contention that “good programmers” learn that it is dangerous to assume parameters will be of an acceptable type, so when specifying types is optional they will still do so out of habit. In PowerShell that leads to a more dangerous (and hidden) assumption: that what is passed will be the correct type, or will fail if it can’t be cast correctly. But what if a casting operation yields a nonsense result, as it does here?

“Good programmers” accept writing something to handle type changes as a necessary safety mechanism; at a command line it’s unnecessary pedantry. But PowerShell is both command line AND programming environment, and when we program in PowerShell we want to do it well, don’t we?
In that previous post I argued functions should cope with being passed names (paths, in the example) and resolve them to the desired object (a file). If a user expects to supply a file to your function by typing part of its name and hitting the [TAB] key, then you have to write something after the parameter declaration (like a Resolve-Path statement) to cope with the change of type; it’s slightly different code to that which C# programmers need to write, but we don’t escape the task completely.
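A minimal sketch of that approach – Get-FileObject is an illustrative name, not something from the previous post: let Resolve-Path expand a relative, partial or wildcarded name into full paths, then let Get-Item turn each path into the real object.

```powershell
Function Get-FileObject {
    Param ([parameter(ValueFromPipeline=$true)]$theFile)
    Process {
        # Resolve relative or wildcarded names to full paths,
        # then fetch the real FileInfo/DirectoryInfo objects
        Resolve-Path $theFile | ForEach-Object { Get-Item $_.Path }
    }
}
```

A [TAB]-completed or relative name arrives as a string, but leaves Resolve-Path as a path that is known to exist.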

You might have thought ahead and asked “What would happen if I wrote Param ([String]$theFile) instead? If the function is passed a file, won’t casting it to string give me the full path?” That WILL work, but here’s an example to show why you should not use [String]:

Function Use-word {
    Param ([string]$theWord)
    Write-Host "**$theWord**"
}

$w = "Hello","world"
Use-word $w

This function takes a single string and writes it to the console: but I haven’t passed it a single string – I’ve passed an array containing “Hello” and “world”. I’ll pose the same question as before: what comes out?

  1. An error.
  2. **Hello** (the first item in the array “wins”)
  3. **World** (the last item in the array “wins”)
  4. **Hello world**
  5. Something else.

If your instincts said “an array of X into something which takes a single X won’t go, so this should give an error”, but you now doubt them, I’ve achieved something.
In fact, when PowerShell casts an array to a single string it converts the members to strings and concatenates them with spaces between each: [string]("Hello","world") returns “Hello world”. With paths, two valid paths will become a single string which is not a valid path.
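The cast is easy to demonstrate – note how two perfectly valid paths (illustrative ones here) merge into one invalid one:

```powershell
[string]("hello","world")     # -> "hello world": members joined with spaces
$paths = "C:\Temp","C:\Logs"  # two valid paths
[string]$paths                # -> "C:\Temp C:\Logs" - no longer a valid path
```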

A lot of PowerShell cmdlets will accept arrays in place of a single parameter. Operators also work with arrays: "hello","world" -split "l" gives the same result as
"hello" -split "l" ; "world" -split "l".
For a cmdlet example, you can get a list of QFEs (hotfixes) installed on a computer with Get-WmiObject -Class win32_QuickFixEngineering
You can get the results from multiple machines (if they are set up for remote WMI) with
Get-WmiObject -Class win32_QuickFixEngineering –ComputerName Server1,Server2,ServerN
because the Get-WmiObject cmdlet accepts an array in its –ComputerName parameter.

Any function which takes a –ComputerName parameter does not even need to look at it before passing it on to the Get-WmiObject cmdlet. But this seems wrong, or at least lazy: surely we should catch an error as soon as possible? Here’s one last example to show what difference it makes:

Function Use-integer {
    Param ($theNumber)
    Start-Sleep $theNumber
}

$n = "hello"
Use-integer $n

So here I DON’T validate that $theNumber is actually a number; I pass in “hello”, and Start-Sleep generates an error which looks like this:

Start-Sleep : Cannot bind parameter 'Seconds'.
Cannot convert value "hello" to type "System.Int32".
Error: "Input string was not in a correct format."

If I DO validate that $theNumber is a number (declaring it as [int]$theNumber), I get this:

Use-integer : Cannot process argument transformation on parameter 'theNumber'.
Cannot convert value "hello" to type "System.Int32".
Error: "Input string was not in a correct format."

Early validation didn’t gain anything in this case, though in general it might save using a lot of time or system resources before the error appears. And if $ErrorActionPreference is set to “Continue”, which it is by default, my function might otherwise carry on to do something stupid – early validation would stop that. So it is not universally wrong to use [Type] to validate parameters; you just need to ask two questions: “Does what I am doing catch what I need to catch?” and “Does what I am doing handle that in the best way?” The best way includes handling names and paths for objects, handling arrays, and even (if you’re really bold) handling script block parameters. This post has gone on quite long enough, so I’ll talk about those in another post.

March 21, 2011

What’s in my PowerShell profile (2) edit

Filed under: How to,Powershell — jamesone111 @ 11:59 am

Carrying on with the theme of Useful Stuff For A PowerShell Profile, which I started with WhatHas, I want to show the edit command I added.
To begin with – in the beta releases – I was a little bit annoyed and puzzled that there was no easy way to tell the PowerShell ISE to edit a file, but when I found there is a route to do so through the object model I added an edit function to my profile. A very similar function, PSEdit, is in the release version, but I’ve kept mine, which looks like this:

function edit {
    param (
        [parameter(ValueFromPipelineByPropertyName=$true)]
        [Alias("FullName","FileName")]
        $path
    )
    process {
        if (test-path $path) {
            Resolve-Path $path -ErrorAction "silentlycontinue" | ForEach-Object {
                if ($host.name -match "\sISE\s") {
                    $psise.CurrentPowerShellTab.Files.Add($_.path) | Out-Null
                }
                else { notepad.exe $_.path }
            }
        }
        else { Write-Warning "$path -- not found" }
    }
}
It only takes one parameter, which can come via the pipeline – enabling piping is a habit I’ve developed. I use a couple of PowerShell’s clever tricks with parameters: first, instead of using the whole piped object it can use a single named property of the object, and second, aliases let me invoke edit with –path or –fullname. That’s not a lot of use here – I can miss the name out completely and PowerShell knows which parameter I mean because there is only one – but the result of putting the two together is clever-squared: it says to PowerShell “if the piped object has a Path property, use it; if it doesn’t but has a FullName property, use that; if it has neither of those but has a FileName property, use that” and so on.
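That property-name binding is easy to see with a toy function (Show-Path and the path used are illustrative, not part of the profile):

```powershell
function Show-Path {
    param (
        # bind from a piped object's property, accepting FullName as an alias
        [parameter(ValueFromPipelineByPropertyName=$true)]
        [Alias("FullName")]
        $path
    )
    process { "editing $path" }
}

# Any object with a FullName property (like a FileInfo) binds via the alias:
[pscustomobject]@{ FullName = "C:\Scripts\demo.ps1" } | Show-Path   # -> editing C:\Scripts\demo.ps1
```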

A real-world use for this: I had some scripts which fetched a folder name from WMI and used PowerShell’s built-in Join-Path cmdlet to add a file name to it, creating a parameter for another call to WMI. This worked against remote servers, but I found that if the folder starts with a drive letter which exists on the remote server but not on the local machine, Join-Path produces an error, so I had to do the operation without Join-Path. I can use WhatHas Join-path to get MatchInfo objects for each occurrence of Join-Path. These objects have a Path property, so loading all the affected scripts into the ISE editor is as simple as:
whatHas join-path | edit

Piping is the main place where my version scores over PSEdit (the other being that it works outside the ISE). The body of the function just has to open the requested file(s). Test-Path and Resolve-Path accept arrays, so I don’t need to do any work to allow the function to be called as edit .\file1.ps1, .\file2.ps1.
If Test-Path says no valid name was passed, a warning is printed; otherwise Resolve-Path is used to turn a partial or wildcarded name into one or more full paths. If multiple names are passed and any IS valid, execution will reach Resolve-Path, which would produce an error for any which IS NOT valid – hence the use of –ErrorAction.
Armed with one or more valid, fully qualified paths, it’s a case of using the ISE’s object model to open each file: of course that only works in the ISE, so the command falls back to Notepad if the host name isn’t “Windows PowerShell ISE Host”.

What’s in my PowerShell profile (1) WhatHas

Filed under: How to,Powershell — jamesone111 @ 10:01 am

This shows simple use of Select-String – which is such a great tool it deserves a long piece of its own – but I’ve been saying for a while that I would write up the functions I have in my profile, so for now I’m just going to show how I use Select-String there.

I keep everything in the Profile.ps1 under Documents\WindowsPowerShell. PowerShell reads additional user and system-wide profiles depending on whether you start the console version or the ISE version, but I ignore these.

In modules, or anything that might get used in another script, I follow the proper verb-noun naming conventions; but for utility functions in the profile I just use short names. WhatHas is one of three short-named profile functions I have; its job is to find which of my scripts contain(s) something or some things. I use it when I need either to fix something found in multiple files, or to remind myself how I used something in the past in order to use it in something new. I can type
WhatHas reflection
and get a listing of all the places I’ve used [System.Reflection.Assembly]::LoadWithPartialName,
showing the file, line number and the line itself. Here’s the code:

Function whatHas {
    param ( [Parameter(ValueFromPipeLine=$true,Mandatory=$true)]$pattern,
            $fileSpec = "*.ps1",
            [switch]$recurse )
    Process {
        $( if ($recurse) { dir -Recurse -Include $fileSpec }
           else          { dir $fileSpec } ) |
        Select-String -Pattern $pattern
    }
}

Originally I only had two parameters. A –recurse switch chooses which form of DIR gets used (I really should use Get-ChildItem rather than DIR or LS, but somehow I’m happier with DIR), and the files it returns, together with whatever was passed in –pattern, are fed into Select-String – which accepts arrays for the pattern, so I don’t have to do anything to support searching for multiple patterns with
WhatHas reflection,assembly
The fileSpec was originally hard-coded to "*.ps1", but I realized I might want to search .TXT or .XML files, so I made that the default for a parameter instead.
I’m getting into the habit of allowing the “main” parameter of a function to be piped in – making the function body a Process {} block ensures each item piped in is processed.
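The point about Process {} blocks is easy to see with a toy function (Show-Item is an illustrative name): the block runs once for each item that arrives on the pipeline.

```powershell
function Show-Item {
    param ([Parameter(ValueFromPipeline=$true)]$item)
    Process { "got $item" }    # runs once per piped item
}

1,2,3 | Show-Item    # produces three outputs, one per item
```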

The output looks like this:

PS C:\Users\James\Documents\windowsPowershell> whathas test-path

hyperv.ps1:236:   If (test-path $VHDPath) {
hypervR2.ps1:236:   If (test-path $VHDPath) {
profile.ps1:6:        if (test-path $path) {

So for the “how to use” case I can often copy what I want from the output and paste it into what I’m working on. For the “need to fix” case I might want to pipe the output into something else – and I’ll look at what that might be in the next post.
