James O'Neill's Blog

June 22, 2019

Last time I saw this many shells, someone sold them by the sea shore.

Filed under: Uncategorized — jamesone111 @ 10:04 pm

I’ve been experimenting with lots of different combinations of shells on Windows 10.

BASH. I avoided the Subsystem for Linux on Windows 10 for a while. There are only two steps to set it up – adding the subsystem, and adding your chosen Linux to it. If the idea of installing Linux into Windows, but not as a virtual machine, and getting it from the Windows Store gives you a headache, you're not alone; this may help, or it may make things worse. I go back to the first versions of Windows NT, which had a Windows-16 on Windows-32 subsystem (WoW, which was linked to the Virtual DOS Machine – 32-bit Windows 10 can still install these), an OS/2 subsystem, and then a POSIX subsystem. Subsystems translated APIs so binaries intended for a different OS could run on NT, but kernel functions (drivers, file systems, memory management, networking, process scheduling) remained the responsibility of the underlying OS. 25 years on, the Subsystem for Linux arrives in two parts – the Windows bits to support all the different Linuxes, and then distributor-supplied bits to make it look like Ubuntu 18.04 (which is what I have), or SUSE, or whichever distribution you choose. wslconfig.exe will tell you which distro(s) you have and change the active one. There is a generic launcher, wsl.exe, which will launch any Linux binary in the subsystem, so you can run wsl bash, but a Windows executable, bash.exe, streamlines the process.

Linux has a view of Windows' files (C: is auto-mounted at /mnt/c, and the mount command will mount other Windows filesystems, including network and removable drives), but there is strongly worded advice not to touch Linux's files via their location on C: – see here for more details. Just before publishing this I updated to the 1903 release of Windows 10, which adds proper access to the Linux filesystem, as you can see in the screenshot.
Subsystem processes aren't isolated, although a Linux API call may get a restricted view of the system. For example, ps only sees processes in the subsystem, but if you start two instances of bash they are both in the subsystem, they can see each other, and running kill in one will terminate the other. The subsystem can run a Windows binary (like net.exe start, which will see Windows services) and pipe its output into a Linux one, like less; so those who prefer Linux tools get to use them in their management of Windows.
The subsystem isn't read-only – anything which changes in that filesystem stays changed. Since the subsystem starts configured for a US locale,
sudo locale-gen en_GB.UTF-8 and sudo update-locale LANG=en_GB.UTF-8 got me to a British locale. 

Being writable meant I could install PowerShell Core for Linux into the subsystem: I just followed the instructions (including running sudo apt-get update and sudo apt-get upgrade powershell to update from 6.1 to 6.2). Now I can test whether things which work in Windows PowerShell (V5) also work with PowerShell Core (V6) on different platforms. I can tell the Windows Subsystem for Linux to go straight into PowerShell with wsl pwsh (or wsl pwsh -nologo if I'm at a command line already). Like bash it can start Windows and Linux binaries, and the "in-the-Linux-subsystem" limitations still hold: Get-Process asks about processes in the subsystem, not the wider OS. Most PowerShell commands are there; some familiar aliases overlap with Linux commands and most of those have been removed (so | sort will send something to the Linux sort, not to Sort-Object, and ps is not the alias for Get-Process; kill and cd are exceptions to this rule). Some common environment variables (Temp, TMP, UserProfile, ComputerName) are not present on Linux, Windows-specific cmdlets like Get-Service don't exist in the Linux world, and tab expansion switches to Unix style by default, but you can set either environment to match the other. My PowerShell profile soon gained a Set-PSReadLineOption command to give me the tab expansion I expect, and it sets a couple of environment variables which I know some of my scripts use. It's possible (and tempting) to create some PSDrives which map single letters to places on /mnt, but things tend to revert to the Linux path. Beyond that, V6 Core is the same on both platforms.
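As a rough sketch, the profile additions look something like this (the exact options are a matter of taste, and the environment variable is only an illustration of the kind of thing my scripts expect):

if ($IsLinux) {
    Set-PSReadLineOption -EditMode Windows           # restore Windows-style line editing / tab expansion
    $env:COMPUTERNAME = [Environment]::MachineName   # recreate a variable some of my scripts rely on
}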

PowerShell on Linux has remoting over SSH; it connects to another instance of PowerShell 6 running in what SSH also terms a "subsystem". Windows PowerShell (up to 5.1) uses WinRM as its transport, and PowerShell Core (6) on Windows can use both. For now at least, options like constrained endpoints (and hence "Just Enough Admin", or JEA) are only available over WinRM.
The instructions for setting up OpenSSH are here; I spent a very frustrating time editing the wrong config file – there is one under Program Files, and my brain filtered out the instruction which said to edit the sshd_config file in C:\ProgramData\ssh. Having edited the one in the wrong directory I could make an SSH session into Windows (a useful thing to know to prove OpenSSH is accepting connections), but every attempt to create a PowerShell session gave the error
New-PSSession : [localhost] The background process reported an error with the following message: The SSH client session has ended with error message: subsystem request failed on channel 0.
When I (finally) edited the right file I could connect to it from both the Windows and Linux versions of PowerShell Core with New-PSSession -HostName localhost. (Using -HostName instead of -ComputerName tells the command "this is an SSH host, not a WinRM one".) It always amazes me how people, especially but not exclusively those who spend a lot of time with Linux, are willing to re-enter a password again and again and again. I've always thought it axiomatic that a well-designed security system grants or refuses access to many things without asking the user to re-authenticate for each (or "if I have to enter my password once more, I'll want the so-called 'security architect' fired"). So within five minutes I was itching to get SSH to sign in with a key and not demand my password.

I found some help here, but not all the steps are needed. Running the ssh-keygen.exe utility which comes with OpenSSH builds the necessary files – I let it save the files to the default location and left the passphrase blank, so it was just a case of hitting Enter at each prompt. For a trivial environment like this I was able to copy the id_rsa.pub file to a new file named authorized_keys in the same folder (in a more real-world case you'd paste each new public key into authorized_keys). Then I could test a Windows-to-Windows remoting session over SSH. When that worked I copied the .ssh directory to my home directory in the Subsystem for Linux, and the same command worked again.
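Sketched out, those steps amount to the following (assuming the default file locations and a single key):

ssh-keygen.exe                                         # accept the default path; blank passphrase
Copy-Item ~\.ssh\id_rsa.pub ~\.ssh\authorized_keys     # fine for one key; append further keys instead of copying
New-PSSession -HostName localhost                      # -HostName means "use SSH, not WinRM"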

PowerShell Core V6 is built on .NET Core, so some parts of PowerShell 5 have gone missing: there's no Out-GridView or Show-Command, no Out-Printer (I wrote a replacement), no WMI commands, no commands to work with the event log, no transactions, and no tools to manage the computer's relationship with AD. The Microsoft.* modules provide about 312 commands in V5.1 and about 244 of those are available in V6, but nearly 70 do things which don't make sense in the Linux world because they work with WinRM/WSMan, Windows security or Windows services. A few things, like renaming the computer, stopping and restarting it, or changing the time zone, need to be done with native Linux tools. But we have just over 194 core cmdlets on all platforms, and more in pre-installed modules.

There was a big step forward with compatibility in PowerShell 6.1 and another with 6.2 – a lot more of the Windows API is supported, so although some things don't work in Core, a lot more does than at first release. It may be necessary to specify the explicit path to a module (the different versions use either "…\WindowsPowerShell\…" or "…\PowerShell\…" in their paths, and Windows tools typically install their modules for Windows PowerShell) or to use Import-Module in V6 with the -SkipEditionCheck switch. Relatively few modules stubbornly refuse to work, and there is a solution for them: run the otherwise-unavailable commands remotely – this time over WinRM rather than SSH (V5 doesn't support SSH). When I started working with constrained endpoints I found I liked the idea of not needing to install modules everywhere and running their commands remotely instead: once you have a PSSession to the place where the commands exist, you can use Get-Module and Import-Module with a -PSSession switch to make them available. So we can bridge between versions – "the place where the commands exist" can be "another version of PowerShell on the same machine"; it's all the same to remoting.

The PowerShell team have announced that the next release uses .NET Core 3.0, which should mean the return of Out-GridView (eventually) and other home-brew tools that put GUI interfaces onto PowerShell; that's enough of a change to bump the major version number, and they will drop "Core" from the name to try to remove the impression that it is a poor relation on Windows. The PowerShell team have a script to do a side-by-side install of the preview – or even the daily builds – and Thomas Lee wrote it up here. Preview 1 seems to have done the important but invisible work of changing the .NET version; new commands will come later, but at the time of writing the PowerShell 7 preview has parity with PowerShell Core 6, and the goal is parity with Windows PowerShell 5.
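A minimal sketch of that remoting bridge – the module name here is purely illustrative; any module that only loads in Windows PowerShell 5.1 would do:

$s = New-PSSession -ComputerName localhost          # WinRM session, which lands in Windows PowerShell 5.1
Import-Module -Name ServerManager -PSSession $s     # generates local proxy functions for the module's commands
Get-WindowsFeature                                  # runs remotely in 5.1; the results come back to V6/V7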

There is no ISE in PowerShell 6/7. Visual Studio Code had some real annoyances, but pretty well all of them have been fixed for some months now, and somewhere along the way I joined the majority who see it as the future. Having a Git client built in has made collaborating on the ImportExcel module so much easier, and that got me to embrace it. Code wasn't built specifically for PowerShell, which means it will work with whichever version(s) it finds.
The right of the status bar looks like this, and clicking the green section pulls up a menu where you can swap between versions and test what you are writing in each one. These swaps close one instance of PowerShell and open another, so you know you're in a clean environment (not always true with the ISE); the flip side is that you only realise it is a clean environment when you want something which was loaded in the shell you've just swapped away from.
VS Code's architecture of extensions means it can pull all kinds of clever tricks – like remote editing – and the Azure plug-in allows an Azure Cloud Shell to be started inside the IDE. When you use Cloud Shell in a browser it has nice easy ways to transfer files, but you can also discover the UNC path to your cloud drive with Get-CloudDrive. The storage account name can be worked out from the UNC path and is used as the user name to log on; you also need to know the resource group the account is in, and Get-AzStorageAccount shows that. Armed with the name and resource group, Get-AzStorageAccountKey gives you one or more keys which can be used as a password, and you can map a drive letter to the cloud drive.
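A rough sketch of that mapping – the resource group, account and share names here are placeholders:

$rg   = 'cloud-shell-storage-westeurope'              # resource group, from Get-AzStorageAccount
$acct = 'csexample1234'                               # storage account name, worked out from the UNC path
$key  = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name $acct)[0].Value
$cred = New-Object System.Management.Automation.PSCredential ("Azure\$acct", (ConvertTo-SecureString $key -AsPlainText -Force))
New-PSDrive -Name Q -PSProvider FileSystem -Root "\\$acct.file.core.windows.net\exampleshare" -Credential $cred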

Surely that's enough shells for one post… well, not quite. People have been getting excited about the new Windows Terminal, which went into preview in the Windows Store a few hours before I posted this. Before that you needed to enable developer options on Windows and build it for yourself. It needs the 1903 Windows update, and with that freshly installed I thought "I've also got [full] Visual Studio on this machine, why not build and play with Terminal?". As it turns out I needed to add the latest Windows SDK and several gigabytes of options to Visual Studio (all described on the GitHub page), but with that done it was one git command to download the files, another to get submodules, then open Visual Studio, select the right section per the instructions and say "build me an x64 release", have a coffee… and the app appears. (In the limited time I've spent with the version in the store, it looks to be the same as the build-your-own version.)

It's nice, despite being early code (no settings menu, just a JSON file of settings to change). It's the first time Microsoft have put out a replacement for the host which Windows uses for command-line tools – shells or otherwise, so you could run ssh, ftp, or a tool like netsh in it. I've yet to find a way to have "as admin" and normal processes running in one instance. It didn't take long for me to add PowerShell on Linux and the PowerShell 7 preview to the default choices (it's easy to copy/paste/adjust the JSON – just remember to change the GUID when adding a new choice, and you can specify the path to a PNG file to use as an icon).
So, in a single window, I have all the shells, except for 32-bit PowerShell 5, as tabs: CMD, three different 64-bit versions of PowerShell on Windows, PowerShell on WSL, bash on WSL, and PowerShell on Linux in Azure.
I must give a shout-out to Scott Hanselman for the last one; I was thinking "there must be a way to do what VS Code does", and judging by his post Scott had been thinking along the same lines a little while before me. He hooked up with others working on it and shared the results. I use a two-line batch file with title and azshell.exe (I'm not sure when "title" crept into CMD, but I'm fairly sure it wasn't always there; I've used it to keep the tab name narrow for CMD. To set the tab names for each of the PowerShell versions I set $Host.UI.RawUI.WindowTitle, which even works from WSL.) So I get 7 shells, 8 if I add the 32-bit version of PowerShell. Running them in the traditional host would give me 16 possible shells. Add the 32- and 64-bit PowerShell ISEs and VS Code with Cloud Shell and 3 versions of local PowerShell, and we're up to 22. And finally there is Azure Cloud Shell in a browser, or, if you must, the Azure phone app, so I get to a nice round two dozen shells in all without ssh'ing into other machines (yes, Terminal can run ssh), using any of the alternate Linux shells with WSL, or loading all the options VS Code has. "Just type the following command line" is not as simple as it used to be.
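For reference, the title-setting line in each PowerShell profile is nothing more elaborate than a sketch along these lines:

$Host.UI.RawUI.WindowTitle = "PS $($PSVersionTable.PSVersion)"   # e.g. "PS 6.2.0"; works on Windows and in WSL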


April 6, 2019

PowerShell functions and when NOT to use them

Filed under: Powershell — jamesone111 @ 3:56 pm

When I was taught to program, I got the message that a function should be a “black box”: we know what goes in one side, what comes out on the other, we don’t care how inputs become outputs. We learn that these “boxes” can leak, a function can see things outside and, in some cases (like connecting to another system) its purpose is to change its environment. But a function shouldn’t manipulate the working variables used by the code which called it. Recently I’ve found myself dealing with PowerShell authors who write like this:

$var_x = 10
$var_y = [math]::pi
do_stuff
$var_i = $var_y * $var_a

We can't tell from reading this what do_stuff does; it seems to set $var_a, because that has magically appeared, but does it use $var_x and $var_y? Does it change them? Will things break if we rename them? The only way to find out is to prise open the box and look inside (read the function definition). If you're saying "that function can't change the value of $var_x, because it's not global", here's a fragment for you to copy and paste:

function do_stuff {
  Set-variable -Scope 1 -Name var_x -Value 30
}

$var_x = 10
do_stuff
$var_x

If the function just set $var_x = $var_x + 20 that would put 30 into a new variable, local to the function ($var_x += 20 would add 20 to a new local variable, so the result is different). But it didn't do that; it specifically said "set this variable in the scope 1 above this one". That's how things like -ErrorVariable and -WarningVariable work. Incidentally, if the command setting the variable is in a function in a module, it is a jump of TWO levels to set things in the scope which called it. Recently I saw a good post from Dave Carrol on using the script scope – which is a de-facto module scope, as this older post of Mike's explains – which can help to avoid this kind of thing.
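A minimal sketch of that two-level jump, assuming do_stuff has been moved into a module (shown only to illustrate the scope arithmetic – relying on it is exactly the leakiness this post complains about):

# In a hypothetical MyTools.psm1
function do_stuff {
    # scope 1 is the module's own scope; scope 2 is the caller's scope
    Set-Variable -Scope 2 -Name var_x -Value 30
}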

You might wonder “would someone who doesn’t know how to write a function with parameters really use this…?” I’ve encountered it.
Another case where someone should have been using parameters, or at least making their variables script-scoped or globally-scoped, was this:
Function Use-Data {
   $Y = [int]$data.xvalue * [int]$data.xvalue
   Add-Member -InputObject $data -MemberType NoteProperty -Name Yvalue -Value $y
}

$data = New-object pscustomobject
Add-Member -InputObject $data -MemberType NoteProperty -Name Xvalue -Value $x
Use-Data

Normally we can see the = sign and we know this named place now holds that value. But Set-Variable and Add-Member make that harder to see. We would have one problem fewer to unravel if the writer used $Global:X and $Global:Y.

An example like the last one can be given a meaningful name, modified to take input through parameters and made to return the result properly. But the function is only called from one place, and while one of the main points of a function is to reduce duplication, being single-use is not an automatic reason to bring a function's code into the main body of the script which calls it. For example, this:
if (Test-PostalCodeValid $P) {...}
saves me reading code which does the validation – there is no need to know how it does it (the sort of regex used in such cases is better hidden); it is enough that it does, and the function has a single purpose communicated by its name. The problematic functions look like the writer's first mental grouping of tasks (which leads to vague names), and the final product doesn't benefit from that grouping. The script can't be understood by reading from beginning to end – it requires the reader to jump back and forth – so flattening the script makes it easier to follow. Because the functions are sets of loosely connected tasks, they don't have a clear set of inputs or outputs, and they rely on leakiness.

Replacing a block of code with a black box whose purpose, inputs and outputs are all clear should make for a better script; and if those things are unclear, the script is probably worse for putting things in boxes. You need to call a function many times for the tiny overhead of each call to matter, but I hit such a case while working on this post. Some users of Export-Excel work on sheets with over a million cells (I use a 24,000-row by 23-column sheet for tests – roughly 550K cells), and the following code was called for each row of data:

$ColumnIndex = $StartColumn
foreach ($Name in $script:Header) {
    Add-CellValue -TargetCell $ws.Cells[$Row, $ColumnIndex] -CellValue $TargetData.$Name
    $ColumnIndex += 1
}   

So, for my big dataset the Add-CellValue function was called 550,000 times, which took about 80 seconds in total, or roughly 150 microseconds per cell, on my machine. I'd say this fragment is clear and easy to work with: for each name in $header, that property of $TargetData is added as a cell value at the current row and column, and we move to the next column. Add-CellValue handles many different kinds of data – how it does so doesn't matter. This meets all the rules for a good function. BUT… of that 150µs, more than 130 is spent going into and out of the function. That 80 seconds becomes about 8 seconds if I put the function's code in the loop instead of calling out to it. Changes that cut the time to run a script from 0.5 sec to 0.4999 sec don't matter – you can't use the saved time, and it is better to give up 100µs on each run for the time you save reading clearer code. Changing the time to run a script from minutes to seconds does matter. So even though using the function was more elegant, it wasn't the best way. As both a computer scientist and an IT practitioner I never forget Jeffrey Snover's saying: "Computer scientists want elegant code; IT pros just want to go home."
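If you want to see the call overhead for yourself, a rough comparison like this makes the point (the function is a deliberately trivial stand-in, and the numbers will vary from machine to machine):

function Add-One ($x) { $x + 1 }
(Measure-Command { foreach ($i in 1..100000) { Add-One $i } }).TotalMilliseconds   # one function call per item
(Measure-Command { foreach ($i in 1..100000) { $i + 1 } }).TotalMilliseconds       # the same work done inline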

March 20, 2019

PowerShell Text wrangling [not just] part 4 of the Graph API series

Filed under: Uncategorized — jamesone111 @ 10:00 am

Almost the first line of PowerShell I ever saw was combining two strings like this
"{0} {1}"  -f $x , $y
And my reaction was "What!! If the syntax to concatenate two strings is so opaque, this might not be for me."
[Edit: as if to prove the awkwardness of the syntax, the initial post had two errors in such a short fragment. Thanks, Doug.]
The -f operator is a wrapper for .NET's [String]::Format and it is useful, partly for inserting values into another string. For example I might define a SQL statement in one place like this:
$sqlInsert = "Insert Into Users ([GivenName], [Surname], [endDate]) Values ('{0}', '{1}', '{2}')"
and later I can get a ready-to-run query using $sqlInsert -f $first,$last,$end.
Doing this lets me arrange a script with long strings placed away from the logic; I'm less happy with this:

@"Update Users
Set[endDate] = '{0}'
where {1} = '{2}'
And   {3} = '{4}'
"@ -f $end,$fieldName1,$value1,$fieldName2,$value    
because the string is right there, my brain automatically goes back and forth filling in what should go in {0}, {1}, {2} and so on, so I'd prefer either to embed the variables (like $first, $last and $end) directly in one string, or to move the string out of sight. A format operator is really there to apply formatting, and while going over some downloaded code, -f let me change this:
$now = Get-Date
$d = $now.Day.ToString()
if ($d.Length -eq 1) {$d ="0$d"}
$m = $now.month
if ($m.Length -eq 1) {$m ="0$m"}
$y = $now.Year
$logMessage = "Run on $d/$m/$y"  

To this:  
$now = Get-Date
$logMessage = "Run on {0:d}" -f $now

For years my OneNote notebook has had a page, lifted from the now-defunct blog of Kathy Kam (which you may still find re-posts of), which explains what the formatting strings are. In this case :d is "local short date", which is better than hard-coding a date format; the formatting strings used in Excel generally work, but there are some extra single-character formats like :g for general date/time and :D for long date. If you live somewhere that puts the least significant part of the date in the middle, you might ask why not simply use "Run on $now".
The 10th day of March 2019 outputs "Run on 03/10/2019 13:48:32" – in most of the world that reads as "3rd of October". But we could use
"Run on $($now.ToString('d'))"
And most people who use PowerShell will have used the $() syntax to evaluate the property of a variable embedded in a string. But you can put a lot inside $(); this example will give you a list of days:
"Days are $(0..6 | foreach {"`r`n" + $now.AddDays($_).ToString('dddd')})"
The quote marks inside the $() don't end the string, and what is being evaluated can run over multiple lines, like this:
"Days are $(0..6 | foreach {
       "`r`n" +
       $now.AddDays($_).ToString('dddd')
} )"

Again, there are places where I have found this technique useful, but encountering it in an unfamiliar piece of script means it takes me a few seconds to see that "`r`n" is a string, inside a code block, inside a string, in my script. I might use @" … "@, which I think was once required for multi-line strings, instead of "…", which certainly works now but leaves me looking for the closing quote – which isn't the next quote. If the first part of the string was set and then a loop added days to the string, that would be easier to follow. Incidentally, when I talk of "an unfamiliar piece of script" I don't just mean other people's work; I include work I did long enough ago that I don't remember it.

Embedding in a string, concatenating multiple strings, or using -f might all work, so which one is best in a given situation varies (sometimes the shortest code is the easiest to understand; other things are clearer spread over a few lines), and the choice often comes down to personal coding style.
When working on my Graph API module I needed to send JSON like this (from the Microsoft documentation) to create a group:

{
  "description": "Group with designated owner and members",
  "displayName": "Operations group",
  "groupTypes": [
    "Unified"
  ],
  "mailEnabled": true,
  "mailNickname": "operations2019",
  "securityEnabled": false,
  "owners@odata.bind": [
    "https://graph.microsoft.com/v1.0/users/26be1845-4119-4801-a799-aea79d09f1a2"
  ],
  "members@odata.bind": [
    "https://graph.microsoft.com/v1.0/users/ff7cb387-6688-423c-8188-3da9532a73cc",
    "https://graph.microsoft.com/v1.0/users/69456242-0067-49d3-ba96-9de6f2728e14"
  ]
}

This might be done as a large string with embedded variables, and even a couple of embedded loops like the previous example, or I could build the text up a few lines at a time. Eventually I settled on doing it like this:
$settings = @{'displayName'     = $Name
              'mailNickname'    = $MailNickName
              'mailEnabled'     = $true
              'securityEnabled' = $false
              'visibility'      = $Visibility.ToLower()
              'groupTypes'      = @('Unified')
}
if ($Description) {$settings['description']        = $Description  }
if ($Members)     {$settings['members@odata.bind'] = @() + $Members}
if ($Owners)      {$settings['owners@odata.bind']  = @() + $Owners }

$json = ConvertTo-Json $settings
Write-Debug $json
$group = Invoke-RestMethod @webparams -body $json

ConvertTo-Json only processes two levels of hierarchy by default, so when the hash table has more layers it needs the -Depth parameter to translate properly. Why do it this way? JSON says 'here is something (or a collection of things) with a name', so why say that in PowerShell-speak only to translate it? Partly it's keeping to the philosophy of only translating into text at the last moment; partly it's getting rid of the mental context-switching – this is script, this is text with script-like bits. Partly it is to make getting things right easier than getting things wrong: if things are built up a few lines at a time, I need to remember that 'Unified' should be quoted but a Boolean value like false should not, and I need to track unclosed quotes and brackets and make sure commas are where they are needed and nowhere else – in short, every edit is a chance to turn valid JSON into something which generates a "Bad Request" message. That is why everywhere I generate JSON I have Write-Debug $Json; but any syntax errors in that hash table will be highlighted as I edit it.
And partly… when it comes to parsing text, I've been there and got the T-shirts; better code than mine is available, built in to PowerShell, and I'd like to apply the same logic to creating such text: I want to save as much effort as I can between "I have these parameters/variables" and "this data came back". That was the thinking behind writing my GetSQL module: I know how to connect to a database and can write fairly sophisticated queries, but why keep writing variations of the same few simple ones, and the connection to send them to the database? SQL statements have the same "context switch" – if I type "-eq" instead of "=" in a query it's not because I've forgotten the SQL I learned decades ago. Get-SQL lets me keep my brain in PowerShell mode and write:
Get-SQL –Update LogTable –set Progess –value 100 –where ID –eq $currentItem

My perspective – centred on the script that calls the API rather than on the API or its transport components – isn't the only way; some people prize the skill of hand-writing descriptions of things in JSON. A recent project had me using DSC for bare-metal builds (I needed to parse MOF files to construct test scripts, and I could hand-crank MOF files, but why go through that pain?); DSC configuration functions take a configuration-data parameter which is a hash holding all the details of all the machines. This was huge. When work started it was natural to create a PowerShell variable holding the data, and when it grew to hundreds of lines I moved the data into its own file, but it remained a series of declarations which could be executed – this is the code which did that:

Get-ChildItem -Path (Join-Path -Path $scriptPath -ChildPath "*.config.ps1") | ForEach-Object {
    Write-Verbose -Message "Adding config info from $($_.name)"
    $ConfigurationData.allNodes += (& $_.FullName )
}

– there was no decision to store data as PowerShell declarations, it just happened as an accident of how the development unfolded, and there were people working on that project who found JSON easier to read (we could have used any format which supports a hierarchy). So I added something to put files through ConvertFrom-Json and convert the result from a PSCustomObject to a hash table, so they could express the data in the way which seemed natural to them.
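That "something" was roughly along these lines (a simplified, single-level sketch with an illustrative file name; PowerShell 6 and later can do the same job with ConvertFrom-Json -AsHashtable):

function ConvertTo-Hashtable ($object) {
    # shallow conversion - nested objects would need a recursive version
    $hash = @{}
    foreach ($property in $object.PSObject.Properties) { $hash[$property.Name] = $property.Value }
    $hash
}
Get-Content -Raw .\node1.config.json | ConvertFrom-Json | ForEach-Object { ConvertTo-Hashtable $_ }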

Does this mean data should always be shifted out of the script? Even that answer is "it depends", and it is influenced by personal style. The examples which Doug Finke wrote for the ImportExcel module often start like this:

$data = ConvertFrom-Csv @'
Item,Quantity,Price,Total Cost
Footballs,9,21.95,197.55
...
Baseball Bats,38,159.00,6042.00
'@

Which is simultaneously good and bad. It is placed at the start of the file, not sandwiched between lumps of code; we can see that it is data for later and what the columns are; and it is only one line per row of data, where JSON would be six. But CSV gives errors a hiding place – a mistyped price or an extra comma is hard to see – though that doesn't matter in this case. But we wouldn't mash together the string being converted from other data… would we?

March 6, 2019

PowerShell formatting [not just] Part 3 of the Graph API series

Filed under: Microsoft Graph,Powershell — jamesone111 @ 8:12 am

Many of us learnt to program at school and lesson 1 was writing something like

CLEARSCREEN
PRINT “Enter a number”    
INPUT X
Xsqrd = X * X
PRINT “The Square of ” + STR(X) + “Is ” + STR(Xsqrd)

So I know I should not be surprised when I read scripts and see that someone has started with CLS (or Clear-Host) and then peppered the script with Read-Host and Write-Host, or perhaps echo – and what is echoed is a carefully built-up string. And I find myself saying "STOP"

  • CLS: I might have hundreds or thousands of lines in the scroll-back buffer of my shell. Who gave you permission to throw them away?
  • Let me run your script with parameters. Only use commands like Read-Host and Get-Credential if I didn't (or couldn't) provide the parameter when I started it.
  • Never print your output.

And quite quickly most of us learn about Write-Verbose and Write-Progress and the proper way to do "what's happening" messages; we also learn to output an object, not formatted text. However, this can have a sting in the tail: the previous post showed this little snippet of calling the Graph API.

Invoke-Restmethod -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

@odata.context    : https://graph.microsoft.com/v1.0/$metadata#users/$entity
businessPhones    : {}
displayName       : James O'Neill
givenName         : James
jobTitle          :
mail              : xxxxx@xxxxxx.com
mobilePhone       : +447890101010
officeLocation    :
preferredLanguage : en-GB
surname           : O'Neill
userPrincipalName : xxxxx@xxxxxx.com
id                : 12345678-abcd-6789-ab12-345678912345

Invoke-RestMethod automates the conversion of JSON into a PowerShell object, so I have something rich to output, but I don't want all of this information; I want a function which works like this:

> get-graphuser
Display Name  Job Title  Mail  Mobile Phones UPN
------------  ---------  ----  ------------- ---
James O'Neill Consultant jxxx  +447890101010 Jxxx

If no user is specified, my function selects the current user; if I want a different user I'll give it a -UserID parameter, and if I want something about a user I'll give it other parameters and switches. But if it just outputs a user, I want a few fields displayed as a table. (That's not a real phone number, by the way.) This is much more the PowerShell way: think about what the function does, what goes in and what comes out, but be vaguer about the visuals of that output.

A simple but effective way to get this style of output would be to give Get-GraphUser a -Raw switch and pipe the object through Format-Table unless raw output is needed; but I would need to repeat this everywhere that I get a user, and it only works for immediate output. If I do
$U = Get-GraphUser
<<some operation with $U>>

and later check what is in the variable, it will output in the original style. And if I forget -Raw, $U won't be valid input… There is a better way: tell PowerShell "when you see a Graph user, format it as a table like this". That's done with a format.ps1xml file – it's easiest to plagiarize the ones in the $PSHOME directory (don't modify them, they're digitally signed) – and you end up with an XML file which looks like this:

<Configuration>
    <ViewDefinitions>
        <View>
            <Name>Graph Users</Name>
            <ViewSelectedBy><TypeName>GraphUser</TypeName></ViewSelectedBy>
            <TableControl>

                ...

            </TableControl>
        </View>
    </ViewDefinitions>
</Configuration>

There is a <View> section for each type of object, and a <TableControl> or <ListControl> defines how it should be displayed. For OneDrive objects I copied the way headers work for files, but everything else just has a table or list. The XML says the view is selected by an object with a type name of GraphUser, and we can add any name to the list of types on an object. The core of the Get-GraphUser function looks like this:

$webparams = @{Method = "Get"
              Headers = $Script:DefaultHeader
}

if ($UserID) {$userID = "users/$userID"} else {$userid = "me"}

$uri = "https://graph.microsoft.com/v1.0/$userID&quot;
#Other URIs may be defined 

$results = Invoke-RestMethod -Uri $uri @webparams

foreach ($r in $results) {
   if ($r.'@odata.type' -match 'user$')  {
        $r.pstypenames.Add('GraphUser')
    }
    ...
}

$results

The "common" web parameters are defined, then the URI is determined, then there is a call to Invoke-RestMethod, which might get one item or an array of many (usually in a value property). Then the results have the name "GraphUser" added to their list of types, and the result(s) are returned.

This pattern repeats again and again, with a couple of common modifications. I can use Get-GraphUser <id> -Calendar to get a user's calendar, but the calendar that comes back doesn't contain the details needed to fetch its events. So, going through the foreach loop, when the result is a calendar it is better for the function to add a property that will help navigation later:

$uri = "https://graph.microsoft.com/v1.0/$userID/Calendars"

$r.pstypenames.Add('GraphCalendar')
Add-Member -InputObject $r -MemberType NoteProperty -Name CalendarPath -Value "$userID/Calendars/$($r.id)"
  

As well as navigation, I don't like functions which return things that need to be translated, so when an API returns dates as text strings I provide an extra property which presents them as a DateTime object. I also create some properties for display use only, which comes into its own for the second variation on the pattern. Sometimes it is simpler to just tell PowerShell "show these properties": when there is no formatting XML, PowerShell has one last check – does the object have a PSStandardMembers property with a DefaultDisplayPropertySet child property? For events in the calendar, the definition of "standard members" might look like this:

[string[]]$defaultProperties = @('Subject','When','Reminder')
$defaultDisplayPropertySet = New-Object System.Management.Automation.PSPropertySet`
             -ArgumentList 'DefaultDisplayPropertySet',$defaultProperties
$psStandardMembers = [System.Management.Automation.PSMemberInfo[]] @($defaultDisplayPropertySet)

Then, as the function loops through the returned events, instead of adding a type name it adds a property named PSStandardMembers:

Add-Member -InputObject $r -MemberType MemberSet  -Name PSStandardMembers -Value $PSStandardMembers

PowerShell has an automatic variable, $FormatEnumerationLimit, which says "up to some number of properties, display a table; for more than that, display a list" – the default is 4. So this method suits a list of reminders in the calendar, where the ideal output is a table with three columns and there is only one place which gets reminders. If the same type of data is fetched in multiple places it is easier to maintain a definition in an XML file.
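For completeness: the formatting XML is picked up either from the FormatsToProcess entry in the module manifest or loaded explicitly – a sketch, with an illustrative file name:

# In the module manifest (.psd1):   FormatsToProcess = @('MsftGraph.format.ps1xml')
# Or at run time:
Update-FormatData -PrependPath .\MsftGraph.format.ps1xml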

As I said before, working on the Graph module the same pattern is repeated a lot: discover a URI which can get the data, then write a PowerShell function which:

  • Builds the URI from the function’s parameters
  • Calls Invoke-RestMethod
  • Adds properties and/or a type name to the returned object(s)
  • Returns those objects

The first working version of a new function helps to decide how the objects will be formatted, which refines the function and adds to the formatting XML as required. Similarly, the need for extra properties might only become apparent when other functions are written; so development is an iterative process.

The next post will look at another area which the module uses but which applies more widely, and which I've taken to calling "text wrangling": how we build up JSON and other text that we need to send in a request.

March 3, 2019

PowerShell and the Microsoft Graph API : Part 2 – Starting to explore

Filed under: Azure / Cloud Services,Office 365,Powershell — jamesone111 @ 12:21 pm

In the previous post I looked at logging on to use Graph – my msftgraph module has a Connect-MsGraph function which contains all of that and saves refresh tokens, so it can get an access token without repeating the logon process; it also refreshes the token when its time is up. Once I have the token I can start calling the REST API. Everything in Graph has a URL which looks like

"https://graph.microsoft.com/version/type/id/subdivision"

Version is either "v1.0" or "beta"; the resource type might be "user", "group", "notebook" and so on, and a useful one is "me" – but you might call users/ID to get a different user. To get the data you make an HTTP GET request, which returns JSON; to add something it is usually a POST request with the body containing JSON which describes what you want to add; updates happen with a PATCH request (more JSON), and DELETE requests do what you'd expect. Not everything supports all four – there are a few things which allow creation but where modification or deletion are on someone's to-do list.
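As an illustrative sketch of those verbs – this uses the $DefaultHeader which Connect-MsGraph sets up (shown just below), and the property being changed is only an example:

Invoke-RestMethod -Method Patch -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader `
                  -ContentType 'application/json' -Body '{"officeLocation":"Building 1"}'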

The Connect-MsGraph function runs the following so the other functions can use the token in whichever way is easiest:

if ($Response.access_token) {
    $Script:AccessToken     = $Response.access_token
    $Script:AuthHeader      = 'Bearer ' + $Response.access_token
    $Script:DefaultHeader   = @{Authorization = $Script:AuthHeader}
}

– by using the script: scope they are available throughout the module, and I can run

$result = Invoke-WebRequest -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

Afterwards, $result.Content will contain this block of JSON
{ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity", "businessPhones": [], "displayName": "James O'Neill", "givenName": "James", "jobTitle": null, "mail": "xxxxx@xxxxxx.com", "mobilePhone": "+447890101010", "officeLocation": null, "preferredLanguage": "en-GB", "surname": "O'Neill", "userPrincipalName": "xxxxx@xxxxxx.com", "id": "12345678-abcd-6789-ab12-345678912345" }

It doesn’t space it out to make it easy to read. There’s a better way: Invoke-RestMethod creates a PowerShell object like this 

Invoke-Restmethod -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

@odata.context    : https://graph.microsoft.com/v1.0/$metadata#users/$entity
businessPhones    : {}
displayName       : James O'Neill
givenName         : James
jobTitle          :
mail              : xxxxx@xxxxxx.com
mobilePhone       : +447890101010
officeLocation    :
preferredLanguage : en-GB
surname           : O'Neill
userPrincipalName : xxxxx@xxxxxx.com
id                : 12345678-abcd-6789-ab12-345678912345

Invoke-RestMethod automates the conversion of JSON into a PowerShell object, so
$D = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/me/drive" -Headers $DefaultHeader
lets me refer to $D.webUrl to get the path to send a browser to in order to see my OneDrive. It is quite easy to work out what to do with the objects which come back from Invoke-RestMethod: arrays tend to come back in a .value property, some data is paged and gives a property named '@odata.nextLink', and other objects – like "me" – give everything directly. Writing the module I added some formatting XML so PowerShell would display things nicely. The work is discovering the URIs that are available to send a GET to, and what extra parameters can be used – this isn't 100% consistent, especially around adding query parameters to the end of a URL (some don't allow filtering, some do but it might be case sensitive or insensitive, it might not combine with other query parameters and so on), and although the Microsoft documentation is pretty good, in some places it does feel like a work in progress. I ended up drawing a map and labelling it with the functions I was building in the module – user-related stuff is on the left, Teams and groups on the right, and things which apply to both are in the middle. The Visio which this is based on and a PDF version of it are in the repo at https://github.com/jhoneill/MsftGraph
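For the paged case, following '@odata.nextLink' works along these lines (a sketch – the users URI is just an example):

$uri   = "https://graph.microsoft.com/v1.0/users"
$users = @()
do {
    $page   = Invoke-RestMethod -Uri $uri -Headers $DefaultHeader
    $users += $page.value                  # each page's results arrive in .value
    $uri    = $page.'@odata.nextLink'      # absent on the last page
} while ($uri)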

Relationships 

Once you can make your first call to the API, the same techniques come up again and again, and future posts will talk about how to get PowerShell formatting working nicely, and how to create JSON for POST requests without massive amounts of "text wrangling". But as you can see from the map there are many rabbit holes to go down. I started with a desire to post a message to a channel in Teams; then I saw there was support for OneDrive and OneNote, and work I had done on them in the past called out for a re-visit. Once I started working with OneDrive I wanted tab completion to expand files and folders, so I had to write an argument completer… and every time I looked at the documentation I saw "there is this bit you haven't done", so I added more (I don't have anywhere to experiment with Intune, so that is conspicuous by its absence, but I notice other people have worked on that) – and that's how we end up with big software projects. The patterns I used will come up in those future posts.

February 28, 2019

PowerShell and the Microsoft Graph API : Part 1, signing in

Filed under: Azure / Cloud Services,Microsoft Graph,Office,Office 365,Powershell — jamesone111 @ 6:13 pm

I recently wanted a script to be able to post results to Microsoft Teams, which led me to the Microsoft Graph API – the way to interact with all kinds of Microsoft cloud services – and the scope grew to take in OneNote, OneDrive, SharePoint, Mail, Contacts, Calendars and Planner as well. I have now put V1.0 onto the PowerShell Gallery, and this is the first post on stuff that has come out of it.

If you've looked at anything to do with the Microsoft Graph API, a lot of things say "it uses OAuth, and here's how to log on". Every example seems to log on in a different way (and the authors seem to think everyone knows all about OAuth). So I present… fanfare… my 'definitive' guide to logging on. Even if you just take the code I've shared, bookmark this, because at some point someone will ask "what's OAuth about?" The best way to answer that question is with another question: how can a user of a service allow something to interact with parts of that service on their behalf? For example, at the bottom of this page is a "Share" section; WordPress can tweet on my behalf, but I don't give WordPress my Twitter credentials – I tell Twitter "I want WordPress to tweet for me". There is a scope of things at Twitter which I delegate to WordPress. Some of the building blocks are:

  • Registering applications and services to which permission will be delegated, and giving them a unique ID; this allows users to say "this may do that" or "cancel access for that", and rogue apps can be de-registered.
  • Authenticating the user (once) and obtaining and storing their consent for delegation of some scope.
  • Sending tokens to delegates – WordPress sends me to Twitter with its ID; I have a conversation with Twitter, which ends with “give this to WordPress”.

Tokens help when a service uses a REST API with self-contained calls. WordPress tells Twitter "tweet this" with an access token which says who approved it to post. The access token is time-limited, and a refresh token can extend access without involving the user (if the user agrees that the delegate should be allowed to work like that).

Azure AD adds extra possibilities, and combined with "Microsoft Accounts", Microsoft Graph logons have a lot of permutations.

  1. The application directs users to a web login dialog and they log on with a "Microsoft Account" from any domain which is not managed by Office 365 (like Gmail or Outlook.com). The URI for the login page includes the app's ID and the scopes it needs; if the app does not have consent for those scopes and that user, a consent dialog is displayed for the user to agree or not. If the logon is completed, a code is sent back. The application presents the code to a server, identifies itself and gets the token(s). Sending codes means users don't hold their own tokens or pass them over insecure links.
  2. From the same URI as option 1, the user logs on with an Azure AD account a.k.a. an Office 365 “Work or school” account; Azure AD validates the user’s credentials, and checks if there is consent for that app to use those scopes.  Azure AD tracks applications (which we’ll come back to in a minute) and administrators may ‘pre-consent’ to an application’s use of particular scopes, so their users don’t need to complete the consent dialog. Some scopes in Microsoft Graph must be unlocked by an administrator before they can appear in a consent dialog

For options 1 & 2, where the same application can be used by users with either Microsoft or Azure AD accounts, applications are registered at https://apps.dev.microsoft.com/ (see left). The application ID here can be used in a PowerShell script.

Azure AD learns about these applications as they are used and shows them in the Enterprise Applications section of the Azure Active Directory Admin Center. The name and the GUID from the app registration site appear in Azure, and clicking through shows some information about the app and leads to its permissions. (See right.)

The Admin Consent / User consent tabs in the middle allow us to see where individual users have given access to scopes from a consent dialog, or see and change the administrative consent for all users in that Azure AD tenant.

The ability for the administrator to pre-consent is particularly useful with some of the later scenarios, which use a different kind of app – which leads to the next option…

  3. The app calls up the same web logon dialog as the first two options, except the logon web page is tied to a specific Azure AD tenant and doesn't allow Microsoft accounts to log on. The only thing which has changed between options 2 and 3 is the application ID in the URI.
    This kind of logon is associated with an app which was not registered at https://apps.dev.microsoft.com/ but in the App Registrations section of the Azure Active Directory Admin Center. An app registered there is only known to one AAD tenant, so when the general-purpose logon page is told it is using that app it adapts its behaviour.
    Registered apps have their own Permissions page, similar to the one for enterprise apps; you can see the scopes which need admin consent ("Yes" appears towards the right).
  4. When Azure AD stores the permitted scopes for an app, there is no need to interact with the user (unless we are using multi-factor authentication) and the user's credentials can go in a silent HTTPS request. This calls a different logon URI with the tenant identity embedded in it – the app ID is specific to the tenant, and if you have the app ID then you have the tenant ID or domain name to use in the login URI.
  5. All the cases up to now have delegated permissions on behalf of a user, but permissions can be granted to an Azure AD application itself (in the screenshot on the right, user.read.all is granted as a delegated permission and as an application permission). The app authenticates itself with a secret which is created for it in the App Registrations part of the Azure AD Admin Center. The combination of app ID and secret is effectively a login credential and needs to be treated like one.

Picking how an app logs on requires some thought.

  • Will it work with "Live" users' Calendars, OneDrive or OneNote? It must be a general app and use the web UI to log on. (Options 1 or 2)
  • Is all its functionality Azure AD/Office 365 only (like Teams), or is the audience Office 365 users only? It can be either a general or an Azure AD app (if a general app is used, the web UI must be used to log on). (Options 1-4)
  • Do we want users to give consent for the app to do its work? It must use the web UI. (Options 1-3)
  • Do we want to avoid the consent dialog? It must be an Azure AD app and use a 'silent' HTTP call to the tenant-specific logon URI. (Option 4)
  • Do we want to log on as the app rather than as a user? It must be an Azure AD app and use a 'silent' HTTP call to the tenant-specific logon URI. (Option 5)

Usually when you read about something which uses Graph, the author doesn't explain how they selected a logon method – or that other ways exist. For example, the Exchange Team Blog has a step-by-step example for an app which logs on as itself (option 5 above). The app is implemented in PowerShell, and the logon code boils down to this:

$tenant    = 'GUID OR Domain Name'
$appId     = 'APP GUID'
$appSecret = 'From Certificates and Secrets'
$URI       = 'https://login.microsoft.com/{0}/oauth2/token' -f $tenant

$oauthAPP  = Invoke-RestMethod -Method Post -Uri $URI -Body @{
        grant_type    = 'client_credentials';
        client_id     =  $appid ;
        client_secret =  $appSecret;
        resource      = 'https://graph.microsoft.com';
}

After this runs, $oauthApp has an access_token property which can be used in all the calls to the service.
For ease of reading, the URI here is stored in a variable and the Body parameter is split over multiple lines, but the Invoke-RestMethod command could be a single line containing the URI with the body on one line.

Logging on as the app is great for working with logs (which is what that article is about) but not for "tell me what's on my OneDrive"; that code can quickly be adapted for a user logon as described in option 4 above: we keep the same tenant, app ID and URI, change the grant type to password, and insert the user name and password in place of the app secret, like this:

$cred      = Get-Credential -Message "Please enter your Office 365 Credentials"
$oauthUser = Invoke-RestMethod -Method Post -Uri $uri -Body  @{
        grant_type = 'password';
        client_id  =  $clientID;
        username   =  $cred.username;
        password   =  $cred.GetNetworkCredential().Password;
        resource   = 'https://graph.microsoft.com';
}

Just as an aside: a lot of people "text-wrangle" the body of their HTTP requests, but I find it easier to see what is happening by writing a hash table with the fields and leaving it to the cmdlet to sort the rest out for me; the same bytes go on the wire if you write
$oauthUser = Invoke-RestMethod -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" `
    -Body "grant_type=password&client_id=$clientID&username=$($cred.username)&password=$($cred.GetNetworkCredential().Password)&resource=https://graph.microsoft.com"

As with the first example, the object returned by Invoke-RestMethod has the access token as a property, so we can do something like this:

$defaultheader = @{'Authorization' = "bearer $($oauthUser.access_token)"}
Invoke-RestMethod -Method Get -Uri https://graph.microsoft.com/v1.0/me -Headers $defaultheader

I like this method because it's simple, has no dependencies on other code, and runs in both Windows PowerShell and PowerShell Core (even on Linux).
But it won't work with consumer accounts. A while back I wrote something which built on this example from the Hey, Scripting Guy! blog, which displays a web logon dialog from PowerShell; the original connected to a login URI which was only good for Windows Live logins – different examples you find will use different endpoints – and this page gave me replacement ones which seem to work for everything.

With $ClientID defined as before and a list of scopes in $Scope the code looks like this

Add-Type -AssemblyName System.Windows.Forms
$CallBackUri = "https://login.microsoftonline.com/common/oauth2/nativeclient"
$tokenUri    = "https://login.microsoftonline.com/common/oauth2/v2.0/token"
$AuthUri     = 'https://login.microsoftonline.com/common/oauth2/v2.0/authorize' +
                '?client_id='    +  $ClientID           +
                '&scope='        + ($Scope -join '%20') +
                '&redirect_uri=' +  $CallBackUri        +
                '&response_type=code'


$form     = New-Object -TypeName System.Windows.Forms.Form       -Property @{
                Width=1000;Height=900}
$web      = New-Object -TypeName System.Windows.Forms.WebBrowser -Property @{
                Width=900;Height=800;Url=$AuthUri }
$DocComp  = { 
    $Script:uri = $web.Url.AbsoluteUri
    if ($Script:Uri -match "error=[^&]*|code=[^&]*") {$form.Close() }
}
$web.Add_DocumentCompleted($DocComp) #Add the event handler to the web control
$form.Controls.Add($web)             #Add the control to the form
$form.Add_Shown({$form.Activate()})
$form.ShowDialog() | Out-Null

if     ($uri -match "error=([^&]*)") {
    Write-Warning ("Logon returned an error of " + $Matches[1])
    Return
}
elseif ($uri -match "code=([^&]*)" ) { # If we got a code, swap it for a token
    $oauthUser = Invoke-RestMethod -Method Post -Uri $tokenUri -Body @{
                    'grant_type'   = 'authorization_code'
                    'code'         = $Matches[1]
                    'client_id'    = $Script:ClientID
                    'redirect_uri' = $CallBackUri
    }
}

This script uses Windows Forms, which means it doesn't have the same ability to run everywhere; it defines a 'callback' URI, a 'token' URI and an 'authorization' URI. The browser opens at the authorization URI; after logging on, the server sends the browser to the callback URI with code=xxxxx appended to the end. The 'NativeClient' page used here does nothing and displays nothing, but the script can see that the browser has navigated to somewhere which ends with code= or error=, pick out the code and send it to the token URI. I've built the authorization URI in a way which is a bit laborious but easier to read; you can see it contains a list of scopes separated by spaces (which have to be escaped to "%20" in a URI), as well as the client ID – which can be for either a generic app (registered at apps.dev.microsoft.com) or an Azure AD app.

The middle part of the script creates a Windows form with a web control which points at the authorization URI, and has a two-line script block which runs for the DocumentCompleted event; it knows the login process is complete when the browser's URI contains either a code or an error, and when it sees that, it makes the browser's final URI available and closes the form.
When control comes back from the form, the if … elseif checks whether the result was an error or a code. A code is posted to the token-granting URI to get the access token (and a refresh token if that is allowed). A different post to the token URI exchanges a refresh token for a new access token and a fresh refresh token.
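That refresh exchange is a near-identical POST – a sketch, reusing the same $tokenUri and $Script:ClientID as above:

$oauthUser = Invoke-RestMethod -Method Post -Uri $tokenUri -Body @{
                'grant_type'    = 'refresh_token'
                'refresh_token' = $oauthUser.refresh_token
                'client_id'     = $Script:ClientID
}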
To test that the token is working and that a minimum set of scopes has been authorized, we can run the same two lines as when the token was fetched silently.

$defaultheader = @{'Authorization' = "bearer $($oauthUser.access_token)"}
Invoke-RestMethod -Method Get -Uri https://graph.microsoft.com/v1.0/me -Headers $defaultheader

And that’s it.

In the next part I’ll start looking at calling the rest APIs, and what is available in Graph.

January 30, 2019

PowerShell. Don’t Just Throw

Filed under: Powershell — jamesone111 @ 3:17 pm

I write  “ ;return ” every time that I put throw in my PowerShell scripts. I didn’t always do it. Sooner or later I’ll need to explain why.

First off: when something throws an error it is kind of ugly, and it can stop things that we don't want to be stopped; sometimes Write-Warning is better than throw. But many (probably most) people don't realise the assumption they're making when they use throw.

Here’s a simple function to demonstrate the point
function test {
    [cmdletbinding()]
    Param([switch]$GoWrong)
    Write-verbose "Starting ..."
    if ($GoWrong) {
        write-host "Something bad happened"
        throw "Failure message"
    }
    else {
        Write-Host "All OK So Far"
    }
    if ($GoWrong) {
        write-host "Something worse happens. "
    }
    else {
        Write-Host "Still OK"
    }

}
So some input causes an issue, and to prevent things getting worse the function throws an error. I think almost everyone has written something like this (and yes, I’m using Write-Host – those messages are decoration for the user to look at, not output; I could use Write-Verbose with –Verbose but then I’d have to explain… )

I can call the function

>test
All OK So Far
Still OK
 
or like this

>test -GoWrong
Something bad happened
Failure message
At line:9 char:9
+         throw "Failure message"

Exactly what’s expected – where’s the problem? No need to put a return in, is there?
Someone else takes up the function and they write this.

Function test2 {
    Param([switch]$Something)
    $x = 2 + 2 #Really some difficult operation 
    test -GoWrong:$Something
    return $x
}

This function does some work, calls the other function and returns a result

>Test2
All OK So Far
Still OK
4

But some input results in a problem.

>test2 -Something
Something bad happened
Failure message
At line:9 char:9
+         throw "Failure message"
 
That throw in the first function was there for protection, but it has lost some work (the result in $x). And the author of Test2 doesn’t like big lumps of “blood” on the screen. What would you do here? I know what I did, and it wasn’t to say “Oh, somebody threw something, so I should try to catch it” and start wrapping things in Try {} Catch {}. I said “One quick change will fix that!”

    test -GoWrong:$Something -ErrorAction SilentlyContinue

Problem solved.

What do you think happens if I run that command again? I’m certain a lot of people will get the answer wrong, and I’m tempted to say copy the code into PowerShell and try it, so that you don’t read ahead and see what happens without thinking about it for a little bit. Maybe if I waffle for a bit… Have you thought about it? This is what happens.

>test2 -Something
Something bad happened
Something worse happens.
4

The change got rid of the ‘blood’, and the result came back. But… the second message got written – execution continued into exactly the bit of code which had to be prevented from running. Specifying the error action stopped the throw doing anything.

Discovering that made me put a return after every throw, even though it should be redundant more than 99% of the time. And I now think any test of error handling should include changing the value of $ErrorActionPreference.
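
As a minimal illustration of the habit (the function and its message are invented for the example):

function Remove-Widget {
    [cmdletbinding()]
    Param($Name)
    if (-not $Name) {
        throw "A name is required" ; return  # return is redundant unless a caller suppresses the throw
    }
    "Removing $Name"                         # must never run when the check above fails
}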

November 15, 2018

Putting Out-Printer back into PowerShell 6.1

Filed under: Powershell — jamesone111 @ 11:10 pm

One of the things long-term PowerShell folk have to get on top of is the move from Windows PowerShell (up to V5.1) to PowerShell Core (V6 and beyond). PowerShell Core uses .NET Core, which is a subset of .NET that is available cross-platform. Having a subset means we pay a price for getting PowerShell on Linux: things in Windows PowerShell which used parts outside the subset went missing from PowerShell 6 on Windows. When PowerShell 6.1 shipped the release notes said

On Windows, the .NET team shipped the Windows Compatibility Pack for .NET Core, a set of assemblies that add a number of removed APIs back to .NET Core on Windows.
We’ve added the Windows Compatibility Pack to PowerShell Core 6.1 release so that any modules or scripts that use these APIs can rely on them being available.

When they say “a number of”, I don’t know how big the number is, but I suspect it is a rather bigger and more exciting number than this quite modest statement suggests. The team blog says 6.1 gives Compatibility with 1900+ existing cmdlets in Windows 10 and Windows Server 2019, though they don’t give a breakdown of what didn’t work before, and what still doesn’t work.

But one command which is still listed as missing is Out-Printer. Sending output to paper might cause some people to think “How quaint”, but the command is still useful, not least because “Send to OneNote” and “Print to PDF” give a quick way of getting things into a file. In Windows PowerShell Out-Printer is in the Microsoft.PowerShell.Utility module, but it is gone from PowerShell Core. So I thought I would try to put it back. The result is named 6Print and you can install it from the PowerShell gallery (Install-Module 6print). It only works with the Windows version of PowerShell – .NET Core on Linux doesn’t seem to have printing support. I’ve added some extra things to the original; you can now specify:

  • -PaperSize and –Landscape, –TopMargin, –BottomMargin, –LeftMargin and –RightMargin to set up the page
  • -FontName and –FontSize, to get the print looking the way you want.
  • -PrintFileName  (e.g to specify the name of a PDF you are printing to)
  • -Path and -ImagePath: although you would normally pipe input into the command (or pass the input as –InputObject) you can also specify a text file with -Path or a BMP, GIF, JPEG, PNG or TIFF file with –ImagePath

As well as –Name, –Printer or –PrinterName to select the printer (a little argument completer will help you fill in the name).
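
Putting a few of those together, a call might look something like this – the values are purely illustrative and the exact behaviour of each option is as described above rather than guaranteed:

# Example only: print a script to a PDF with a chosen font and paper size
Get-Content .\MyScript.ps1 |
    Out-Printer -PrinterName 'Microsoft Print to PDF' -PrintFileName .\MyScript.pdf `
                -FontName 'Consolas' -FontSize 9 -PaperSize A4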

I may try to get this added to the main PowerShell project when it has had some testing. Because so many more things now work you can load the CIM cmdlets for print management with Import-Module -SkipEditionCheck PrintManagement.

It will also install on PowerShell 5.1 if you want the extra options.

July 30, 2018

On PowerShell Parameters

Filed under: Powershell — jamesone111 @ 2:53 pm

When I talk about rules for good, reusable PowerShell I often say “Parameters should be flexible … and so should constants.”

The second half of that is a reminder that the first step from something quickly hacked together, towards something sharable is moving some key assignment statements to the top of the script, then putting param( ) around them and commas between them. Doing that means the  = whatever  part is setting the default for a parameter which can be changed at runtime. 
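
As a tiny before-and-after sketch (the path is made up for the example):

# Before: a "constant" buried in a quickly hacked-together script
$LogPath = 'C:\Temp\report.log'

# After: the same assignment moved to the top and wrapped in param() - now it is a default that can be overridden
Param(
    $LogPath = 'C:\Temp\report.log'
)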

Good parameters allow the user to pipe input into commands, to provide an object or a name which allows the object to be fetched, they support multiple targets from one command (e.g. Get the contents of multiple files) and they help intellisense to suggest values to the user (validationSets, enum types and argument completers all help with that). I thought I did a good job with parameters most of the time – until someone commenting on work I’d contributed to Doug Finke’s ImportExcel  module showed me I wasn’t being as flexible as I should be, and how I had developed a bad habit.

The first thing to mention is that PowerShell is different to most other languages when it comes to labelling parameters with a type. In other places doing that means “this must be an X”; but if you write this in PowerShell:
Param (
   $p,
  [int]$h,
  [boolean]$b,
  [EnumType]$e
)

It means “Try to make h an int, try to make b a boolean… don’t bother trying to turn p into anything”, and passing –h “Hello” doesn’t cause the “Type Mismatch Error” which other languages would throw; instead PowerShell says ‘Cannot convert value "Hello" to type "System.Int32"’.

In that example, none of the parameters is mandatory, and if none is specified PowerShell tries to convert the empty values: the integer parameter becomes zero, the boolean becomes false, and the enum type fails silently. This means we can’t tell from the value of $h or $b whether the user wanted to change things to zero and false or wanted things left as they are. We can use [Nullable[Boolean]] and [Nullable[Int]], but then the code must allow for three states – the following will run code that we don’t want to be run when $b is null.
if ($b) {do something}
else    {do something different}

It needs to be something like
# $b can be true, false or null
if     ($b) {do something}
elseif ($null -ne $b) {do something different}

I don’t like using Boolean parameters: when something is a “Do or Do not” choice like “Append” or “Force”* we would never specify –Append $false or –Force $false – so typing “True” is redundant.
The function in question sets formatting so I have
Param (            
  [int]$height,
  [switch]$bold,
  [switch]$italic,
  [switch]$underline,
  [EnumType]$alignment 
)
if ($bold)      {$row.bold      = $true}
if ($italic)    {$row.italic    = $true}
if ($underline) {$row.underline = $true}

This is where my bad habit creeps in … at first sight there is nothing wrong with carrying on like this…
if ($alignment) {$row.alignment = $alignment}
if ($height)    {$row.height    = $height   }

I test this by setting alignment to bottom and height to 20: everything works, and the code sets off into the world.   
Then the person who was testing my code said “I can’t set the height to zero”. My test can’t differentiate between “blank” and “zero”. Not allowing height to be zero might be OK, but there was worse to come: alignment is an enum type
Top    = 0
Center = 1
Bottom = 2

etc.

Because “Top” is zero it is treated as false, so the code above works except when “Top” is chosen. I need to use better tests.
The new test solved the next problem: my tester said “I can’t remove bold”. Of course, I had seen bold as “Do, or Do not. There is no un-do.”; because the main task of the code is to create new Excel sheets it will be setting bold etc. almost exclusively. And “almost” is a nuisance.

I don’t want to change these parameters to Booleans because (a) it will break a lot of existing things and (b) it feels wrong to make everyone add “ $true” because a few sometimes use “ $false”. The parameter list is already overcrowded, so I don’t want to add –noBold, –noUnderline and so on; I’d also need to figure out what to do about –bold and –noBold being specified together. The least-inelegant solution I could come up with was based on a little-used feature of switches…

Switch parameters are used without a value; if specified they are treated as true. But very early in my PowerShell career I had a function which took a couple of switches which needed to be passed on to another command. I asked some kind soul (I forget who) how to do this and they said call the second command with –SecondSwitch:$FirstSwitch (in fact you can write any PowerShell parameter with a colon between the name and the value, instead of the conventional space). So –bold:$false is valid, and –bold still turns bold on. But checking the value of $bold will return false whether the parameter was omitted or set to false explicitly.
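
For example (the function names here are invented for illustration):

function Inner {
    Param([switch]$Bold)
    "Inner saw Bold = $Bold"
}
function Outer {
    Param([switch]$Bold)
    Inner -Bold:$Bold        # forwards whatever the caller gave Outer
}
Outer -Bold          # Inner saw Bold = True
Outer                # Inner saw Bold = False
Outer -Bold:$false   # an explicit false also flows through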

So now I have 3 cases where I need to ask “was this parameter specified, or has it defaulted to being…”; and that’s what $PSBoundParameters is for – it’s a dictionary with the names and values that were passed into the command. Not values set as a parameter default, not parameters changed as the function proceeds; bound parameters. So I changed my code to this

if ($PSBoundParameters.ContainsKey('Bold')     ) {$row.Bold      = [boolean]$bold}
if ($PSBoundParameters.ContainsKey('Height')   ) {$row.Height    = $Height       }
if ($PSBoundParameters.ContainsKey('Alignment')) {$row.Alignment = $Alignment    }

So now if the parameters are given a value – whether it is false, zero, or an empty string – the property will be set. There is one last thing to do, and this is why I said it was the least-inelegant solution: because –switch:$false is a rarely-used syntax, it’s reasonable to assume people won’t expect it to be the way to say “remove bold”, so the parameter help needs to be updated to read “Make text bold; use -Bold:$false to remove bold”.

* If “Do or Do not” sounds familiar, Yoda would tell you that using the –Force switch is something you can not do in a try{}/ catch{} construct.

July 29, 2018

Windows Indexing not indexing properly ? Try this.

Filed under: Desktop Productivity — jamesone111 @ 10:54 am

A few weeks ago now my Surface Book died. It powered off in the middle of something; reluctantly powered on again and eventually went off and no known trick would get it to come back on; back it went and after a short delay a replacement (MK I) surface book arrived. A battery report showed the batteries were on their first charge cycle and it looks brand new. Nice.
A lot of my work is sync’d to OneDrive and quietly made its own way back, but my music, nearly 200GB of files in my Pictures directory, and a few other files aren’t, so they had to be copied back from my external hard disk. Everything seemed good, but a few funnies started to appear: the Groove Music app thought most of my music was on an unknown album by an unknown artist. Grouping pictures by tag didn’t work. Searches for pictures and documents didn’t find them, even though when I picked through the directories they were there. It all pointed to something wrong with the index. So off to Indexing Options, click Advanced and then Reset, wait for the index to chomp through all those files and … no change.

image
Search on-line and everything says “Re-build the index”; yes, thanks, done that – in fact I’d done it more than once. I’ve enough experience of the index to know that resetting it is usually the answer (“Wait” is sometimes the answer too; reset is good for the impatient). Some things say check that the directory you want is on the list of directories to index. And yes, Users is on the list and all the files are under there.
I’d put the problem to one side when I happened to click the advanced button on the properties of a directory, and there is an option which I had long forgotten:
“Allow files in this folder to have contents indexed”

Ah … now … what if copying the files back from the hard disk had cleared that attribute? So uncheck it, click OK, then Apply, and choose “Apply changes to this folder only”. Then go back, check the box, click OK, and this time say “Apply changes to this folder, subfolders and files”. Now force a re-index and searches work. Reset Groove (from Windows Settings, Apps), let it rediscover the music, and artists and albums are back. So if the full-text metadata inside the file (as opposed to dates, size and file name) isn’t being indexed, this is worth a try.
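
If you would rather check and fix that attribute from PowerShell than from the properties dialog, something along these lines should do it – a sketch only, and the Pictures path is just an example:

# Find files whose "allow contents to be indexed" box is cleared and remove the blocking attribute
Get-ChildItem -Path "$env:USERPROFILE\Pictures" -Recurse -File |
    Where-Object  { $_.Attributes.HasFlag([System.IO.FileAttributes]::NotContentIndexed) } |
    ForEach-Object { $_.Attributes = $_.Attributes -bxor [System.IO.FileAttributes]::NotContentIndexed }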

May 31, 2018

More tricks with PowerShell and Excel

Filed under: Office,Powershell — jamesone111 @ 6:25 am

I’ve already written about Doug Finke’s ImportExcel module – for example, this post from last year covers

  • Basic exporting (use where-object to reduce the number of rows , select-object to remove columns that aren’t needed)
  • Using -ClearSheet to remove old data, –Autosize to get the column-widths right, setting titles, freezing panes and applying filters, creating tables
  • Setting formats and conditional formats

In this post I want to round up a few other things I commonly use.

    Any custom work after the export means asking Export-Excel to pass through the unsaved Excel Package object like this

    $xl = Get-WmiObject -Class win32_logicaldisk | select -Property DeviceId,VolumeName, Size,Freespace |
               Export-Excel -Path "$env:computerName.xlsx" -WorkSheetname Volumes -PassThru

    Then we can set about making modifications to the sheet. I can keep referring to it via the Excel package object, but it’s easier to use a variable. 
    $Sheet = $xl.Workbook.Worksheets["Volumes"]

    Then I can start applying formatting, or adding extra information to the file
    Set-Format -WorkSheet $sheet -Range "C:D" -NumberFormat "0,000"
    Set-Column -Worksheet $sheet -Column 5
    -Heading "PercentageFree" -Value {"=D$row/C$row"} -NumberFormat "0%" 

    I talked about Set-column in another post. Sometimes though, the data isn’t a natural row or column and the only way to do things is by “Poking” individual cells, like this

        
    $sheet.Cells["G2"].value = "Collected on"
    $sheet.Cells["G3"].value = [datetime]::Today
    $sheet.Cells["G3"].Style.Numberformat.Format =
     "mm-dd-yy"
    $sheet.Cells.AutoFitColumns()
    Close-ExcelPackage $xl –Show

    Sharp-eyed readers will see that the date format appears to be “least-significant-in-the-middle”, which is only used by one country – and not the one where I live. It turns out Excel tokenizes some formats – this MSDN page explains, and describes “number formats whose formatCode value is implied rather than explicitly saved in the file….. [some] can be interpreted differently, depending on the UI language”. In other words, if you write “mm-dd-yy” or “m/d/yy h:mm” it will be translated into the local date or date-time format. When Export-Excel encounters a date/time value it uses the second of these; and yes, the first one does use hyphens and the second does use slashes. My to-do list includes adding an argument completer for Set-Format so that it proposes these formats.
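
    As a rule of thumb, if the exact appearance matters under other regional settings, pick a pattern that isn’t on Excel’s translated list – for example (illustrative only):

    # "mm-dd-yy" gets translated to the local short-date format; an explicit pattern is stored and shown as-is
    Set-Format -WorkSheet $sheet -Range "G3" -NumberFormat "dd-mmm-yyyy"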

    Since the columns change their widths during these steps I only auto-size them when I’ve finished setting their data and formats. So now I have the first page in the audit workbook for my computer

    image

    Of course there are times when we don’t want a book per computer with each aspect on its own sheet, but a book for each aspect with a page per computer.
    If we want to copy a sheet from one workbook to another, we could read the data and write it back out like this

    Import-Excel -Path "$env:COMPUTERNAME.xlsx" -WorksheetName "volumes" | 
         Export-Excel
    -Path "volumes.xlsx" -WorkSheetname $env:COMPUTERNAME

    but this strips off all the formatting and loses the formulas  – however the Workbook object offers a better way, we can get the Excel package for an existing file with
    $xl1 = Open-ExcelPackage -path "$env:COMPUTERNAME.xlsx"

    and create a new file and get the Package object for it with 
    $xl2 = Export-Excel -Path "volumes.xlsx" -PassThru

    (if the file exists we can use Open-ExcelPackage). The worksheets collection has an add method which allows you to specify an existing sheet as the basis of the new one, so we can call that, remove the default sheet that export created, and close the files (saving and loading in Excel, or not, as required) 

    $newSheet = $xl2.Workbook.Worksheets.Add($env:COMPUTERNAME, ($xl1.Workbook.Worksheets["Volumes"]))
    $xl2.Workbook.Worksheets.Delete("Sheet1")
    Close-ExcelPackage $xl2 -show
    Close-ExcelPackage $xl1 -NoSave

    The new workbook looks the same (formatting has been preserved -  although I have found it doesn’t like conditional formatting) but the file name and sheet name have switched places.

    image

    Recently I’ve found that I want the equivalent of selecting “Transpose” in Excel’s paste-special dialog – taking an object with many properties and, instead of exporting it so it runs over many columns, making a two-column list of property name and value.
    For example
    $x = Get-WmiObject win32_computersystem  | Select-Object -Property Caption,Domain,Manufacturer,
                                Model, TotalPhysicalMemory, NumberOfProcessors, NumberOfLogicalProcessors

    $x.psobject.Properties | Select-Object -Property name,value |
        Export-Excel -Path "$env:COMPUTERNAME.xlsx" -WorkSheetname General -NoHeader -AutoSize –Show

    image

    When I do this in a real script I use the -PassThru switch and apply some formatting

    $ws    = $excel.Workbook.Worksheets["General"]
    $ws.Column(1).Width                     =  64
    $ws.Column(1).Style.VerticalAlignment   = "Center"
    $ws.Column(2).Width                     =  128
    $ws.Column(2).Style.HorizontalAlignment = "Left"
    $ws.Column(2).Style.WrapText            = $true

    Of course I could use Set-Format instead, but sometimes the natural way is to refer to .Cells[], .Row() or .Column().

    May 14, 2018

    A couple of easy boosts for PowerShell performance.

    Filed under: Powershell — jamesone111 @ 10:55 am

    At the recent PowerShell and DevOps summit I met Joshua King and went to his session – Whip Your Scripts into Shape: Optimizing PowerShell for Speed – (an area where I had overestimated my knowledge) and it’s made me think about some other issues. If you find this post interesting it’s a fair bet you’ll enjoy watching Joshua’s talk. There are a few things to say before looking at a performance optimization which I added to my knowledge this week.

  • Because scripts can take longer to write than to run, we need to know when it is worth optimizing for speed. After all, if we cut the time from pressing return to the reappearance of the prompt from 1/2 second to 1/4 or even to 1/1000th of a second, our reaction time is such that we don’t do the next thing any sooner. On the other hand if something takes 5 minutes to run (which might be the same command being called many times inside a script), giving minutes back is usable time.
  • Execution time varies with input – it often goes up with the square of the number of items being processed (typically when the operation is in the form “For every item, look at [some subset of] all items”). So you might process 1,000 rows of data in half a second … but then someone takes your code and complains that their data takes 5 minutes to process, because they’re working with many more rows. Knowing if you should optimize here isn’t straightforward – most of the time it doesn’t matter, but when it matters at all, it matters a lot. You can discover whether performance tails off badly at 10,000 or 1,000,000 rows, but it isn’t easy to predict how many data sets of each size there will be, and whether optimizing performance is time well spent. If the problem happens at scale, then you might run sub-tasks in parallel (especially if each runs on a different computer), or change the way of working – for example this piece on hash tables is about avoiding the “look at every item” problem.
  • No one writes code to be slow. But the fast way might require something which is longer and/or harder to understand. If we want to write scripts which are reusable we might prefer tidy-but-slower over fast-but-incomprehensible. (All other things being equal we’d love the elegance of something tidy and fast, but a lot of us aren’t going to let the pursuit of that prevent us going home). 
    Something like $SetA | where {$_ -notIn $setB} is easy to understand, but if the sets are big enough it might need billions of comparisons; the work which gave rise to the hash tables piece cut the number from billions to under a million (and meant that we could run the script multiple times per hour instead of once or twice a day, so we could test it properly for the first time) – there’s a minimal sketch of the idea at the end of this post. But it takes a lot more to understand how it works.
  • One area from Joshua’s talk where the performance could be improved without adding complexity was reducing or eliminating the hit from using Pipelines; usually this doesn’t matter – in fact the convenience of being able to construct a bespoke command by piping cmdlets together was compelling before it was named “PowerShell”.  Consider these two scripts which time how long it takes to increment a counter a million times.

    $i  = 0 ; $j = 1..1000000 ;
    $sw = [System.Diagnostics.Stopwatch]::StartNew() ;
    $J | foreach {$i++ }  ;
    $sw.Stop() ; $sw.Elapsed.TotalMilliseconds

    $i  = 0 ; $j = 1..1000000 ;
    $sw = [System.Diagnostics.Stopwatch]::StartNew() ;
    foreach ($a in $j) {$i++ }  ;
    $sw.Stop() ; $sw.Elapsed.TotalMilliseconds

     The only thing which is different is the foreach – is it the alias for ForEach-Object, or is it a foreach statement? The logic hasn’t changed, and readability is pretty much the same; you might expect them to take roughly the same time to run … but they don’t: on my machine, using the statement is about 6 times faster than piping to the cmdlet.
    This is doing unrealistically simple work; replacing the two “ForEach” lines with

    $j | where {$_ % 486331 -eq 0}
    and
    $j.where(  {$_ % 486331 -eq 0} )

    does something more significant for each item, and I find the pipeline version takes 3 times as long! The performance improvement remains if the output of the .where() goes into a pipeline. I’ve written in the past that sometimes very long pipelines can be made easier to read by breaking them up (even though I have a dislike of storing intermediate results), and it turns out we can also boost performance by doing that.

    Recently I found another change : if I define a function

    Function CanDivide {
    Param ($Dividend)
        $Dividend % 486331 -eq 0
    }
    and repeat the previous test with the command as
    $j.where( {CanDivide $_ } )

    People will separate roughly 50:50 into those who find the new version easier to understand, and those who say “I have to look somewhere else to see what ‘can divide’ does”. But is it faster or slower and by how much ? It’s worth verifying this for yourself, but my test said the function call makes the command slower by a factor of 6 or 7 times.  If a function is small, and/or is only called from one place, and/or is called many times to complete a piece of work then it may be better to ‘flatten’ the script. I’m in the “I don’t want to look somewhere else” camp so my bias is towards flattening code, but – like reducing the amount of piping – it might feel wrong for other people. It can make the difference between “fast enough”, and “not fast enough” without major changes to the logic.
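
    Finally, going back to the hash-table point above, here is a minimal sketch of the idea (the variable names are invented for illustration):

    # Build a lookup once; each membership test is then a hash lookup rather than a scan of $setB
    $inB = @{}
    foreach ($item in $setB) { $inB[$item] = $true }
    $onlyInA = foreach ($item in $setA) {
        if (-not $inB.ContainsKey($item)) { $item }
    }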

    January 8, 2018

    Using the Surface Dial with Adobe LightRoom.

    Filed under: Photography — jamesone111 @ 4:54 pm

    When the Surface Dial came out, Wired ran a story You Might not need Microsoft’s surface dial, but you’ll want it. That sums it up for me: I have the Surface Book and the “pen” is a great “brush” for editing photos. It also plays to the Surface Book’s range of form factors – in “folded over” tablet mode with an external keyboard and mouse it is similar to having a Wacom Cintiq tablet. The dial promises to be the ideal companion for the pen – except that Adobe didn’t seem to be in any hurry to add dial support to Photoshop and Lightroom. Then late in 2017 Adobe added dial support as a ‘technology preview’ in Photoshop… and Santa brought me a Surface Dial.

    The phrase “No one can be told what it is – you have to see it for yourself” from The Matrix feels like it is only one step away from the Wired article, which calls it “Microsoft’s coolest input device ever” and says “The gadget, which … twists like a doorknob, is a peripheral, like a mouse and keyboard. Except it’s not like those things at all.” It’s not very like a door knob; it’s more like the main knob on a hi-fi system, or the single do-everything controller you find in some cars to control the trip-computer, navigation and sound – after a few seconds’ use it becomes obvious. The dial has 4 actions – long press, short press, twist left, twist right – and it “tocks” to tell you it has done something. A long press pulls up a menu which depends on the active application – this menu lets you choose which function the twist and short press will perform. There are some generic Windows functions available – undo/redo, zoom, scroll up & down, and system volume control. Things that support the pinch-zoom or two-finger scroll gestures, or [ctrl]+[z] / [ctrl]+[y], should work with the out-of-the-box functions – if there is a music player running which supports “next track / previous track” those functions light up – and when controlling music a short press acts as play/pause.

    The main thing about the dial is that it doesn’t replace the mouse, and if you have both hands on the keyboard you probably wouldn’t take one off to use the dial – but when you have the pen in your right hand and you want to change what it is doing, having your left hand on the dial really works; so for Photoshop it controls brush size, opacity and so on.

    After a short time with it you start to think “It should do this”. In Lightroom you don’t get the brush controls that you have in Photoshop, the pen isn’t great for working slider controls, and pictures scroll left and right, not up and down: fortunately there is an answer – in Windows Settings, under Devices, the Wheel option lets you configure these things for yourself. The one restriction is that this sends key strokes to the application, so I still don’t have a way to do the things which need (for example) ALT + Click. But for the three examples above it’s pretty easy. Here’s how the finished result looks

    image

    So on the first page of Wheel settings you select your app, and then you configure the tools that are available from the dial; as you can see I’ve set up three. [1] is “Select”, which sends right or left (with no shift/ctrl/alt keys) for twist and D (Develop) for click. Sliders do [+] or [-] for twist, and “,” – the shortcut for “cycle round basic sliders” – for click; and finally there is brush size

    image

    You can see when I do a long click I get these options as 1, 2 and 3 with the name in the middle, and I get volume control, scroll, zoom, and undo/redo as well. It’s not as elegant as the way it works in Photoshop, but it works well enough. To control the brush size I set up the following:

    image

    It’s all fairly easy, provided that there is a key combination for what you want to do. I’d like to be able to say for the brush clicking the dial is Alt + Left-Mouse-Click (“clone from here” in Adobe) or assign the side button on the surface Pen to Alt + Left-Mouse-Click instead of Right-Mouse-Click. But for now this will do just fine.

    December 12, 2017

    Using the import Excel Module: Part 3, Pivots and charts, data and calculations

    Filed under: Uncategorized — jamesone111 @ 4:43 pm

    In the previous post I showed how you could export data to an XLSx file using the Export-Excel command in Doug Finke’s ImportExcel module (Install it from the PowerShell gallery!). The command supports the creation of Pivot tables and Pivot charts. Picking up from where part 2 left off, I can get data about running processes, export them to a worksheet and then set up a pivot table

    $mydata = Get-Process | Select-Object -Property Name, WS, CPU, Description, Company, StartTime
    $mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" -ClearSheet `
       -IncludePivotTable -PivotRows "Company" -PivotData @{"WS"="Sum"} -show

    clip_image002

    To add a pivot chart the command line becomes
    $mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" -ClearSheet `
        -IncludePivotTable -PivotRows "Company" -PivotData @{"WS"="Sum"} `
        -IncludePivotChart -ChartType Pie -ShowPercent -show

    clip_image004

    The chart types are also suggested by intellisense, note that some of them don’t support -ShowPercent or ‑ShowCategory options and “bad” combinations will result in an error on opening Excel. Re-creating existing Pivot charts can cause an error as well.

    There is an alternative way of creating Pivot tables and charts – which is particularly useful when we want more than one in the same workbook.

    del .\demo.xlsx

    $xl = $mydata | Export-Excel -Path .\demo.xlsx -WorkSheetname "Processes" -PassThru

    $Pt1 = New-PivotTableDefinition -PivotTableName "WS"  -PivotData @{"WS" ="Sum"} -SourceWorkSheet "Processes" `
              -PivotRows Company -IncludePivotChart -ChartType ColumnClustered -NoLegend
    $Pt2 = New-PivotTableDefinition -PivotTableName "CPU" -PivotData @{"CPU"="Sum"} -SourceWorkSheet "Processes" `
              -PivotRows Company -IncludePivotChart -ChartType ColumnClustered -NoLegend

    $xl = Export-Excel -ExcelPackage $xl -WorkSheetname "Processes" -PivotTableDefinition $Pt1 -PassThru
    Export-Excel -ExcelPackage $xl -WorkSheetname "Processes" -PivotTableDefinition $Pt2 -Show

    clip_image006

    New-PivotTableDefinition builds the table definition as a hash table – we could equally well write a large hash table with multiple pivots defined in it, like this

    del .\demo.xlsx

    $mydata | Export-Excel -Path .\demo.xlsx -WorkSheetname "Processes" -Show -PivotTableDefinition @{
        "WS" = @{"SourceWorkSheet"   = "Processes"      ;
                 "PivotRows"         = "Company"        ;
                 "PivotData"         = @{"WS"="Sum"}    ;
                 "IncludePivotChart" = $true            ;
                 "ChartType"         = "ColumnClustered";
                 "NoLegend"          = $true};
       "CPU" = @{"SourceWorkSheet"   = "Processes"      ;
                 "PivotRows"         = "Company"        ;
                 "PivotData"         = @{"CPU"="Sum"}   ;
                 "IncludePivotChart" = $true            ;
                 "ChartType"         = "ColumnClustered";
                 "NoLegend"          = $true }
    }

    Export-Excel allows [non-pivot] charts to be defined and passed as a parameter in a similar way – in the following example we’re going to query a database for a list of the most successful racing drivers; the SQL for the query looks like this:
    $Sql = "SELECT TOP 25 WinningDriver, Count(RaceDate) AS Wins
            FROM   races 
            GROUP  BY WinningDriver  
            ORDER  BY count(raceDate) DESC"

    Then we define the chart and feed the result of the query into Export-Excel (I’m using my GetSQL module from the PowerShell Gallery for this but there are multiple ways )
    $chartDef = New-ExcelChart -Title "Race Wins" -ChartType ColumnClustered
                   -XRange WinningDriver -YRange Wins -Width 1500 -NoLegend -Column 3

    Get-SQL $Sql | Select-Object -property winningDriver, Wins |
      Export-Excel -path .\demo2.xlsx -AutoSize -AutoNameRange -ExcelChartDefinition $chartDef -Show

    The important thing here is that the chart definition refers to named ranges in the spreadsheet – “WinningDriver” and “Wins” – and the Export-Excel command is run with –AutoNameRange, so the first column is a range named “WinningDriver” and the second “Wins” – you can see in the screen shot that “Wins” has been selected in the “Name” box (underneath the File menu) and the data in the Wins column is selected. The chart doesn’t need a legend and is positioned to the right of column 3.

    clip_image008

    I found that the EPPlus object which Doug uses can insert a data table object directly into a worksheet, which should be more efficient, and it also saves using a Select-Object command to remove the database housekeeping properties which are in every row of data, as I had to do in the example above. It didn’t take much to add a command to Doug’s module to put SQL data into a spreadsheet without having to pipe the data into Export-Excel from another command. And I cheated by passing the resulting worksheet object and the remaining parameters through to Export-Excel, so I could use its parameters and get it to finish the job; that means I can write something like this:
    Send-SQLDataToExcel -SQL $sql -Session $session -path .\demo2.xlsx -WorkSheetname "Winners" `
            -AutoSize  -AutoNameRange -ExcelChartDefinition $chartDef -Show
      

    In this example I use an existing session with a database – the online help shows you how to use different connection strings with ODBC or the SQL Server native client. 

    I also added commands to set values along a row or down a column – for an example we can expand the racing data to cover not just how many wins, but also how many fastest laps and how many pole positions, export this data and use the -PassThru switch to get an Excel package object back
    $SQL = "SELECT top 25 DriverName,         Count(RaceDate) as Races ,
                          Count(Win) as Wins, Count(Pole)     as Poles,
                          Count(FastestLap) as Fastlaps
            FROM  Results
            GROUP BY DriverName
            ORDER BY (count(win)) desc"

    $Excel = Send-SQLDataToExcel -SQL $sql -Session $session -path .\demo3.xlsx `
                -WorkSheetname "Winners" -AutoSize -AutoNameRange -Passthru

    Having done this, we can add columns to calculate the ratios of two pairs of existing columns

    $ws = $Excel.Workbook.Worksheets["Winners"]
    Set-Row    -Worksheet $ws -Heading "Average"     -Value {"=Average($columnName`2:$columnName$endrow)"}
    `
                  -NumberFormat "0.0" -Bold
    Set-Column -Worksheet $ws -Heading "WinsToPoles" -Value {"=D$row/C$row"} -Column 6 -AutoSize -AutoNameRange
    Set-Column -Worksheet $ws -Heading "WinsToFast"  -Value {"=E$row/C$row"} -Column 7 -AutoSize -AutoNameRange
    Set-Format -WorkSheet $ws -Range "F2:G50" -NumberFormat "0.0%"

    In the examples above the value parameter is a Script block, when this is evaluated $row and $column are available so if the value is being inserted in row 5, {"=E$row/C$row"} becomes =E5/C5
    The script block can use $row and $column (the current row and column numbers), $columnName (the current column letter), and $StartRow/$EndRow, $StartColumn/$EndColumn (the first and last row and column numbers of the data).

    If the value begins with “=” it is treated as a formula rather than a value; we don’t normally want to put in a fixed formula – without the “=” the value inserted down the column or across the row will be a constant

    The Set-Column command supports range naming, and both commands support formatting – or we can use the ‑PassThru switch and pipe the results of setting the column into Set-Format. There seems to be a bug in the underlying library where applying number formatting to a column after formatting a row applies the same formatting to both the column and the row from the previous operation. So the example above uses a third way to apply the format, which is to specify the range of cells in Set-Format.
    Finally we can output this data, and make use of the names given to the newly added columns in a new chart.

    $chart = New-ExcelChart -NoLegend -ChartType XYScatter -XRange WinsToFast -YRange WinsToPoles `
               -Column 7 -Width 2000 -Height 700

    Export-Excel -ExcelPackage $Excel -WorkSheetname "Winners"-Show -AutoSize -AutoNameRange `
             -ExcelChartDefinition $chart

    clip_image010

    So there you have it: PowerShell objects or SQL data go in – possibly over multiple sheets; headings and filters get added, panes arranged, extra calculated rows and columns inserted, formatting applied, and pivot tables and charts created – and if Excel itself is available you can even export the charts as images. No doubt someone will ask before too long if I can get the charts out of Excel and into PowerPoint slides ready for a management meeting … And since all of this only works with XLSX files, not legacy XLS ones, there might be another post soon about reading those older files.

    December 11, 2017

    Using the Import Excel module part 2: putting data into .XLSx files

    Filed under: Office,Powershell — jamesone111 @ 3:55 pm

    This is the third of a series of posts on Excel and PowerShell – the first, on getting parts of an Excel file out as images, wasn’t particularly tied to the ImportExcel module, but the last one, this one and the next one are. I started with the Import command – which seemed logical given the name of the module; the Export command is more complicated, because we may want to control the layout and formatting of the data, add titles, include pivot tables and draw charts, so I have split it into two posts. At its simplest the command looks like this:

    Get-Process | Export-Excel -Path .\demo.xlsx -Show

    This gets a list of processes and exports them to an Excel file; the -Show switch tells the command to try to open the file using Excel after saving it. I should be clear here that import and export don’t need Excel to be installed, and one of the main uses is to get things into Excel format – with all the extras like calculations, formatting and charts – on a computer where you don’t want to install desktop apps; so –Show won’t work in those environments. If no –WorksheetName parameter is given the command will use “Sheet1”.

    Each process object has 67 properties, and in the example above they would all become columns in the worksheet; we can make things more compact and efficient by using Select-Object in the command to filter down to just the things we need:

    Get-Process | Select-Object -Property Name,WS,CPU,Description,StartTime |
    Export-Excel -Path .\demo.xls -Show
     

    Failed exporting worksheet 'Sheet1' to 'demo.xls':
    Exception calling ".ctor" with "1" argument(s):
    "The process cannot access the file 'demo.xls' because it is being used by another process."

    This often happens when you look at the file and go back to change the command and forget to close it – we can either close the file from Excel, or use the -KillExcel switch in Export‑Excel – from now on I’ll use data from a variable

    $mydata = Get-Process | Select-Object -Property Name, WS, CPU, Description, Company, StartTime
    $mydata | Export-Excel -KillExcel -Path .\demo.xlsx -Show

    This works, but Export-Excel modifies the existing file and doesn’t remove the old data – it takes the properties of the first item that is piped into it and makes them column headings, and writes each item as a row in the spreadsheet with those properties. (If different items have different properties there is a function Update-FirstObjectProperties to ensure the first row has every property used in any row). If we are re-writing an existing sheet, and the new data doesn’t completely cover the old we may be left with “ghost” data. To ensure this doesn’t happen, we can use the ‑ClearSheet option

    $mydata | Export-Excel -KillExcel -Path .\demo.xlsx -ClearSheet -Show

    clip_image002

    Sometimes you don’t want to clear the sheet but to add to the end of it, and one of the first changes I gave Doug for the module was to support a –Append switch, swiftly followed by a change to make sure that the command wasn’t trying to clear and append to the same sheet.
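
    For example, a second batch of data of the same shape can be added to the bottom of the existing sheet like this (a sketch; $moreData is hypothetical):

    $moreData | Export-Excel -KillExcel -Path .\demo.xlsx -Append -Show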

    We could make this a nicer spreadsheet – we could make the column headings look like headings, and even make them filters; we can also size the columns to fit…

    $mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" -ClearSheet `
                 -BoldTopRow
    -AutoSize -Title "My Processes" -TitleBold -TitleSize 20 -FreezePane 3 -AutoFilter -Show

    clip_image004

    The screen shot above shows the headings are now in bold and the columns have been auto sized to fit. A title has been added in bold, 20-point type; and the panes have been frozen above row 3. (There are options for freezing the top row or the left column or both, as well as the option used here –FreezePane row [column]) and filtering has been turned on.

    Another way to present tabular data nicely is to use the -Table option

    $mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" -ClearSheet –BoldTopRow    -AutoSize `
           -TableName table -TableStyle Medium6 -FreezeTopRow -show

    clip_image006

    “Medium6” is the default table style but there are plenty of others to choose from, and intellisense will suggest them

    clip_image008

    Sometimes it is helpful NOT to show the sheet immediately, and one of the first things I wanted to add to the module was the ability to pass on an object representing the current state of the workbook to a further command, which makes the following possible:

    $xl = $mydata | Export-Excel -Path .\demo.xlsx -KillExcel -WorkSheetname "Processes" `
              -ClearSheet -AutoSize -AutoFilter -BoldTopRow -FreezeTopRow -PassThru

    $ws = $xl.Workbook.Worksheets["Processes"]

    Set-Format -WorkSheet $ws -Range "b:b" -NumberFormat "#,###"   -AutoFit
    Set-Format -WorkSheet $ws -Range "C:C" -NumberFormat "#,##0.00" -AutoFit
    Set-Format -WorkSheet $ws -Range "F:F" -NumberFormat "dd MMMM HH:mm:ss" -AutoFit

    The first line creates a spreadsheet much like the ones above, and passes on the Excel Package object which provides the reference to the workbook and in turn to the worksheets inside it.
    The example selected three columns from the worksheet and applied different formatting to each. The module even supports conditional formatting, for example we could add these lines into the sequence above

    Add-ConditionalFormatting -WorkSheet $ws -Range "c2:c1000" -DataBarColor Blue
    Add-ConditionalFormatting -WorkSheet $ws -Range "b2:B1000" -RuleType GreaterThan
    `
               
    -ConditionValue '104857600'  -ForeGroundColor "Red" -Bold

    The first draws data bars so we can see at a glance what is using CPU time, and the second makes anything using over 100MB of memory stand out.

    Finally, a call to Export-Excel will normally apply changes to the workbook and save the file, but there don’t need to be any changes – if you pass it a package object and don’t specify -PassThru it will just save your work. So “Save and open in Excel” is done like this, once we have put the data in and formatted it the way we want.

    Export-Excel -ExcelPackage $xl -WorkSheetname "Processes" -Show

    clip_image002[1]

    In the next post I’ll look at charts and Pivots, and the quick way to get SQL data into Excel

    December 5, 2017

    Using the Import-Excel module: Part 1 Importing

    Filed under: Office,Powershell — jamesone111 @ 9:15 am

    The “EPPlus” project provides .NET classes to read and write XLSx files without the need to use the Excel object model or even have Excel installed on the computer (XLSx files, like the other Office Open XML formats, are actually .ZIP format files containing XML files which describe different aspects of the document – they were designed to make that sort of thing easier than the “binary” formats which went before). Doug Finke, who is well known in the PowerShell community, used EPPlus to build a PowerShell module named ImportExcel which is on GitHub and can be downloaded from the PowerShell gallery (by running Install-Module ImportExcel on PowerShell 5, or on PS4 with the Package Management addition installed). As of version 4.0.4 his module contains some of my contributions. This post is to act as an introduction to the module (the export parts are the ones I contributed to); there are some additional scripts bundled into the module which do require Excel itself, but the core import / export functions do not. This gives a useful way to get data on a server into Excel format, or to provide users with a workbook to enter data in an easy-to-use way and process that data on the server – without needing to install Microsoft Office or translate to and from formats like .CSV.

    The Import-Excel command reads data from a worksheet in an XLSx file. By default, it assumes the data has headers and starts with the first header in cell A1 and the first row of data in row 2. It will skip columns which don’t have a header but will include empty rows. If no worksheet name is specified it will use the first one in the workbook, so at its simplest the command looks like:
    Import-Excel -Path .\demo.xlsx  

    It’s possible that the worksheet isn’t the first sheet in the workbook and/or has a title above the data, so we can specify the start point explicitly
    Import-Excel -Path .\demo.xlsx -WorkSheetname winners -StartRow 2  

    We can say the first row does not contain headers and either have each property (column) named P1, P2, P3 etc, by using the ‑NoHeader switch or specify header names with the -HeaderName parameter like this

    Import-Excel -Path .\demo.xlsx -StartRow 3 -HeaderName “Name”,"How Many"

    The module also provides a ConvertFrom-ExcelSheet command which takes -Encoding and -Delimiter parameters and sends the data to Export-CSV with those parameters, and a ConvertFrom-ExcelToSQLInsert command which turns each row into a SQL statement: this command in turn uses a command ConvertFrom-ExcelData, which calls Import-Excel and then runs a script block which takes two parameters PropertyNames and Record.

    Because this script block can do more than convert data, I added an alias “Use-ExcelData” which is now  part of the module and can be used like this
    Use-ExcelData -Path .\NewUsers.xlsx -HeaderRow 2 -scriptBlock $sb

    If I define the script block as below, each column becomes a parameter for the New-AdUser command which is run for each row

    $sb = {
      param($propertyNames, $record)
      $propertyNames | foreach-object -Begin {$h = @{} }  -Process {
          if ($null -ne $record.$_) {$h[$_] = $record.$_}
      } -end {New-AdUser @h -verbose}
    }

    The script block gets a list of property names and a row of data: it is called for each row, creates a hash table, adds an entry for each property and finally splats the parameters into a command. It can be any command in the end block, provided that the column names in Excel match its parameters – I’m sure you can come up with your own use cases.

    November 25, 2017

    Getting parts of Excel files as images.

    Filed under: Office,Powershell — jamesone111 @ 7:54 pm

    I feel old when I realise it’s more than two decades since I learnt about the object models in Word, Excel and even Microsoft Project, and how to control them from other applications. Although my preferred tool is now PowerShell rather than Access’s version of Visual Basic, the idea that “it’s all in there somewhere” means I’ll go and do stuff inside Excel from time to time…

    One of the things I needed to do recently was to get performance data into a spreadsheet with charts – which the export part of Doug Finke’s ImportExcel module handles very nicely. But we had a request to display the charts on a web page without the need to open an Excel file, so it was time to have a look around in Excel’s [very hierarchical] object model.

    An Excel.Application contains
    …. Workbooks which contain
    …. …. Worksheets which contain
    …. …. …. Chartobjects each of which contains
    …. …. …. …. A Chart which has
    …. …. …. …. …. An Export Method

    It seems I can get what I need if I get an Excel application object, load the workbook, work through the sheets, find each chart, decide a name to save it as and call its export method. The PowerShell to do that looks like this

    $OutputType    = "JPG"
    $excelApp      = New-Object -ComObject "Excel.Application"
    $excelWorkBook = $excelApp.Workbooks.Open($path)
    foreach ($excelWorkSheet in $excelWorkBook.Worksheets) {
      foreach ($excelchart in $excelWorkSheet.ChartObjects([System.Type]::Missing)) {
        $excelApp.Goto($excelchart.TopLeftCell,$true)
        $imagePath = Join-Path -Path $Destination -ChildPath ($excelWorkSheet.Name +
                            "_" + ($excelchart.Chart.ChartTitle.Text + ".$OutputType"))
        $excelchart.Chart.Export($imagePath, $OutputType, $false)    
      }
    }
    $excelApp.Quit()

    A couple of things to note – the export method can output a PNG, JPG or GIF file, and in the final version of this code $OutputType is passed as a parameter (like $Path and $Destination; I’ve got into the habit of capitalizing parameter names and starting normal variables with lowercase letters). There’s a slightly odd way of selecting ‘all charts’, and if a chart isn’t selected before exporting it doesn’t export properly.

    I sent Doug this, which he added to his module (along with some other additions I’d been meaning to send him for over a year!). Shortly afterwards he sent me a message
    Hello again. Someone asked me about png files from Excel. They generate a sheet, do conditional formatting and then they want to save is as a png and send that instead of the xlsx…

    Back at Excel’s object model… there isn’t an Export method which applies to a range of cells or a whole worksheet – the SaveAs method doesn’t have the option to save a sheet (or part of one) as an image. Which left me asking “how would I do this manually?” I’d copy what I needed and paste it into something which can save it. From version 5 PowerShell has a Get-Clipboard cmdlet which can handle image data. (Earlier versions let you access the clipboard via the .net objects but images were painful). The Excel object model will allow a selection to be copied, so a single script can load the workbook, make a selection, copy it, receive it from the clipboard as an image and save the image.

    $Format = [system.Drawing.Imaging.ImageFormat]::Jpeg
    $xlApp  = New-Object -ComObject "Excel.Application"
    $xlWbk  = $xlApp.Workbooks.Open($Path)
    $xlWbk.Worksheets($WorkSheetname).Select()
    $xlWbk.ActiveSheet.Range($Range).Select() | Out-Null
    $xlApp.Selection.Copy() | Out-Null
    $image = Get-Clipboard -Format Image
    $image.Save($Destination, $Format)

    In practice $Path, $Worksheetname, $Range, $Format and $Destination are all parameters, and the whole thing is wrapped in a function, Convert-XlRangeToImage.
    Excel puts up a warning that there is a lot of data in the clipboard on exit, and to stop that I copy a single cell before exiting.

    $xlWbk.ActiveSheet.Range("a1").Select() | Out-Null
    $xlApp.Selection.Copy() | Out-Null
    $xlApp.Quit()

    The Select and Copy methods return TRUE if they succeed so I send those to Null. The whole thing combines with Doug’s module like this

    $excelPackage = $myData | Export-Excel -Path $Path -WorkSheetname $workSheetname
    $workSheet    = $excelPackage.Workbook.Worksheets[$workSheetname]
    $range        = $workSheet.Dimension.Address
    #      << apply formatting >>
    Export-Excel -ExcelPackage $excelPackage -WorkSheetname $workSheetname
    Convert-XlRangeToImage -Path $Path -WorkSheetname $workSheetname -Range $range -Destination "$pwd\temp.png" -Show

    I sent the new function over to Doug and starting with version 4.0.8 it’s part of the downloadable module

    July 24, 2017

    An extra for my PowerShell profile–Elevate

    Filed under: Uncategorized — jamesone111 @ 7:15 pm

    More than usual, in the last few days I’ve found myself starting PowerShell or the ISE only to find I wanted a session as administrator: it’s a common enough thing, but eventually I said ENOUGH! I’d seen “-verb runas” used to start an executable as administrator, so I added this to my profile.

    Function Elevate        {
    <#
    .Synopsis
        Runs an instance of the current program As Administrator
    #>

        Start-Process (Get-Process -id $PID).path -verb runas
    }

    June 16, 2017

    More on writing clear scripts: Write-output and return … good or bad ?

    Filed under: Powershell — jamesone111 @ 11:26 am

    My last post talked about writing understandable scripts and I read a piece entitled Let’s kill Write-Output by Mark Krauss (actually I found it because Thomas Lee Tweeted it with “And sort out return too”).

    So let’s start with one practicality. You can’t remove a command which has been in a language for 10 years unless you are prepared for a lot of pain making people re-write scripts. Its alias “echo” was put there for people who come from other scripting languages and start by asking “How do I print to the console?”. But if removing it altogether is impractical, we can advise people to avoid it, write rules to catch it in the script analyser and so on. Should we? And when is it a good idea to use it?

    Mark points out he’s not talking about Write-Host, which should be kept for limited scenarios: if you want the user to see it by default but it isn’t part of the output, then that’s a job for Write-Host. For example, with my Get-SQL command, $result = Get-SQL $sqlQuery writes “42 rows returned” to the console but the output saved into $result is the 42 rows of data. Mark gives an example:
    Write-Output "PowerShell Processes:"
    Get-Process -Name PowerShell

    and says it is better written as  
    "PowerShell Processes:"
    Get-Process -Name PowerShell

    And this is actually a case where Write-host should be used … why ? Let’s turn that into a function.
    Function Get-psProc {
      "PowerShell Processes:"
      Get-Process -Name "*PowerShell*"
    }

    Looks fine, doesn’t it? But it outputs two different types of object into the pipeline. All is well if we run Get-psProc on its own, but if we run
     Get-psProc | ConvertTo-Csv 
    It returns
    #TYPE System.String
    "Length"
    "21" 

    The next command in the pipeline saw that the first object was a string and that determined its behaviour. “PowerShell processes:” is decoration you want the user to see but isn’t part of the output. That earlier post on understandable scripts came from a talk about writing good code, and one of the biggest problems I find in other people’s code is a fixation with printing to the screen. That leads to things like the next example – which is meant to read a file and say how many lines there are and their average length.

    $measurement = cat $path | measure -average Length
    echo ("Lines read    : {0}"    -f $measurement.Count  )
    echo ("Average length: {0:n0}" -f $measurement.Average)

    This runs and it does the job the author intended, but I’d suggest they might be new to PowerShell and haven’t yet learnt that output is not the same as “stuff for a user to read” (as in the previous example), so they feel their output must be printed for reading. Someone more experienced with PowerShell might just write:
    cat $path | measure -average Length
    if they aren’t bothered about the labels, or if the labels really matter:
    cat $path | measure -average Length | select @{n="Lines Read";e={$_.count}}, @{n="Average Length";e={[math]::Round($_.Average,2)}}

    If this is something we use a lot, we might change the aliases to cmdlet names, specify parameter names, and save it for later use (a sketch of that follows). And it is re-usable: for example, if we want to do something when there are more than x lines in the file, the previous version can only return text with the number of lines embedded in it. Resisting the urge to print everything is beneficial, and it gets rid of a lot of uses of Write-Output (or echo).
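
    Spelled out in full it might look something like this – Measure-TextFile is a name I’ve made up for illustration, and the property names are my own choice:

    Function Measure-TextFile {
        param (
            [Parameter(Mandatory=$true)]
            [string]$Path
        )
        # Full cmdlet and parameter names instead of cat / measure / select
        Get-Content -Path $Path |
            Measure-Object -Property Length -Average |
            Select-Object -Property @{n='LinesRead';     e={$_.Count}},
                                    @{n='AverageLength'; e={[math]::Round($_.Average,2)}}
    }

    Because it sends an object down the pipeline rather than text, Measure-TextFile $path | Where-Object {$_.LinesRead -gt 100} works without any parsing of strings.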

    Mark’s post has 3 beefs with Write-Output.

    1. Performance. It is slower, but rarely noticeably so, so I’d discount this.
    2. Security / Predictability – Write-Output can be redefined, and that allows for something malign or just buggy. True, but it also allows you to redefine it for logging, debugging and so on (see the sketch after this list), so you could use a proxy Write-Output for testing and the standard one in production. This is not exclusively bad.
    3. The false sense of security. He says that explicitly returning stuff is held to be better than implicit return, which implies
      Write-Output $result
      is better than just
      $result
      But no-one says you should write cat $path | Write-Output – it’s obviously redundant – yet when you don’t write it, isn’t that still implicit output?
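
    For what that proxy might look like, here’s a minimal sketch (the log path is purely for illustration): a function named Write-Output shadows the cmdlet, records what passes through, then hands off to the real one.

    Function Write-Output {
        [CmdletBinding()]
        param (
            [Parameter(ValueFromPipeline=$true, Position=0)]
            $InputObject
        )
        process {
            # Record everything that passes through, then call the genuine cmdlet
            Add-Content -Path "$env:TEMP\pipeline.log" -Value ($InputObject | Out-String)
            Microsoft.PowerShell.Utility\Write-Output $InputObject
        }
    }

    Remove (or simply don’t load) the function and every Write-Output in the script goes back to the standard cmdlet.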

    My take on the last point is that piping output into Write-Output (or Out-Default) is a tautology: “Here’s some output, take it and output it”. It makes things longer but not clearer. If using Write-Output does make things clearer, that is a sign the script is hard to read and at least needs some comments, and possibly some redesign. Joel Bennett sums up the false sense of security part in a sentence: “while some people like it because it highlights the spots where you intentionally output something — other people argue it’s presence distracts you from the fact that other lines could output.” [Thanks Joel, that would have taken me a paragraph!]

    This is where Thomas’ comment about return comes in. Return tells PowerShell to bail out of a function, and there are many good reasons for doing that; it also has a two-in-one syntax: return $result is the same as
    $result
    return

    In the piece by Joel that I linked to above, he also asks whether, as the last lines of a function, this
    $output = $temp + (Get-Thing $temp)
    return $output

    is better or worse than
    $output = $temp + (Get-Thing $temp)
    $output

    Not many people would add return to the second example – it’s redundant. But if you store the final output in a variable there is some logic to using return (or Write-Output) to send it back. Does storing the result in a variable make things any clearer, though, or is it just as easy to read the following?
    $temp + (Get-Thing $temp)

    As with Write-Output, sometimes using return $result makes things clearer, and sometimes it’s a habit from other programming languages where functions return results in a single place, so multiple parts must be gathered and then returned. Here’s something which combines the results of 3 queries and returns them:

    $result =  (Get-SQL $sqlQuery1)
    $result += (Get-SQL $sqlQuery2)
    $result +  (Get-SQL $sqlQuery3)

    The first line assigns an array of database rows to a variable, the second appends more rows, and the third sends these rows to the pipeline together with the results of a third query. You need to look at the operator in each line to figure out which one sends to the pipeline. Arguably it is clearer to replace the last line with this:

    $result += (Get-SQL $sqlQuery3)
    return $result

    When there are 3 or 4 lines between introducing $result and putting it into the pipeline this is OK. But let’s say there are 50 lines of script between storing the results of the first query and appending the results of the second. Has the script been made clearer by storing a partial result … or would you see something being appended to $result and have to look further up the script for where it was originally set and anywhere else it was changed? This example does nothing with the combined segments (like sorting them); we’re just following an old habit of only outputting in one place. Not outputting anything until we have everything can also mean the script takes a lot longer to run – we could have processed all the results from the first query while waiting for the second. I would dispense with the variable entirely and use:

    Get-SQL $sqlQuery1
    Get-SQL $sqlQuery2
    Get-SQL $sqlQuery3

    If there is a lot of script between each query I’d then put a #region around the lines which lead up to it being run:
    #region build query and return rows for x
    #etc etc
    Get-SQL $sqlQuery1
    #endregion 

    so when I collapse the outlining regions in my editor I see
    #region build query and return rows for x
    #region build query and return rows for y
    #region build query and return rows for z

    That gives me a very good sense of what the script is doing at a high level, and then I can drill into the regions if I need to. If I do need to do something to the combined set of rows (like sorting) then my collapsed code might become:
    #region build query for x and keep rows for sorting later
    #region build query for y and keep rows for sorting later
    #region build query for z and keep rows for sorting later
    #region return sorted and de-duplicated results of x, y and z

    Both outlines give a sense of where there should be output and where any output might be a bug.

    In conclusion:
    When you see lots of echo / Write-Output commands, that’s usually a bad sign – an indication of too many formatted strings going into the pipeline – but Write-Output is not automatically bad when used sparingly, and used properly return isn’t bad either. If you find yourself adding either for clarity, though, it should make you ask “Is there a better way?”.


    June 2, 2017

    On writing understandable scripts.

    Filed under: Uncategorized — jamesone111 @ 7:20 pm

     

    At two conferences recently I gave a talk on “What makes a good PowerShell module” (revisiting an earlier talk). The psconf.eu guys have posted a video of it and I’ve made the slides available (the version in the US used the same slide deck with a different template).

    One of my points was Prefer the familiar way to the clever way. A lot of us like the brilliant PowerShell one-liner (I belong to the “We don’t need no stinking variables” school and will happily pipe huge chains of commands together). But sometimes breaking it into multiple commands means that when you return to it later, or someone new picks up what you have done, it is easier to understand what is happening. There are plenty of other examples, but generally clever tends to be opaque; opaque needs comments, and somewhere I picked up that what applies to jokes applies to programming: if you have to explain it, it isn’t that good.

    Sometimes, someone doesn’t know the way which is familiar to everyone else, and they throw in something like this example which I used in the talk:
    Set-Variable -Scope 1 -Name "variableName" -Value $($variableName +1)
    I can’t recall ever using Set-Variable, and why would someone use it to set a variable to its current value + 1? The key must be the -Scope parameter: -Scope 1 means “the parent scope”, where most people would write $Script:VariableName++ or $Global:VariableName++ (a small demonstration follows). When we encounter something like this, unravelling what Set-Variable is doing interrupts the flow of understanding … we have to go back and say “so what was happening when that variable was set …”
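
    A minimal demonstration of what -Scope 1 is doing – Step-Counter is a made-up name, and this assumes the function is called from the script scope:

    $counter = 0
    Function Step-Counter {
        # Both of the following increment the variable in the calling (parent) scope
        Set-Variable -Scope 1 -Name counter -Value ($counter + 1)   # the unfamiliar way
        # $script:counter++                                         # what most people would write
    }
    Step-Counter
    $counter    # now 1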

    There are lots of cases where there are multiple ways to do something; some are easier to understand, but the easy-to-understand way isn’t automatically the one we pick. All the following appear to do the same job:

    "The value is " + $variable
    "The value is " + $variable.ToString()
    "The value is $variable"
    "The value is {0}" -f $variable
    "The value is " -replace "$",$variable

    You might see .ToString() and say “that’s thinking like a C# programmer” … but if $variable holds a date and the local culture isn’t US, the first two examples will produce different results (ToString will use local cultural settings, while string concatenation converts using the invariant, US-style format – the sketch below illustrates it).
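
    To make that concrete, on a machine set to a British locale you would see something like this (treat the output lines as illustrative – this is a sketch, not captured output):

    $variable = Get-Date -Date '2017-06-02'
    "The value is " + $variable              # concatenation converts using the invariant (US-style) culture
    # The value is 06/02/2017 00:00:00
    "The value is " + $variable.ToString()   # ToString() uses the current (en-GB) culture
    # The value is 02/06/2017 00:00:00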
    If you work a lot with the -f operator, you might use {0:d} to say “insert the first item in ‘short date’ format for the local culture” and naturally write
    "File {0} is {1} bytes in size and was changed on {2}" -f $variable.Name,$variable.Length,$variable.LastWriteTime
    Because the eye has to jump back and forth along the line to figure out what goes into {0} and then into {1} and so on, this loses on readability compared with concatenating the parts with + signs, and it also assumes the next person to look at the script has the same familiarity with the -f operator. I can hear old hands saying “Anyone competent with PowerShell should be familiar with -f”, but who said the person trying to understand your script meets your definition of competence?
    As someone who does a lot of stuff with regular expressions, I might be tempted by the last one … but replacing the “end of string” marker ($) as a way of appending excludes people who aren’t happy with regex. I’m working on something which auto-generates code at the moment, and it uses this because the source that it reads doesn’t provide a way of specifying “append”, but does have “replace”. I will let it slide in this case, but being bothered by it is a sign that I do ask myself “are you showing you’re clever, or writing something that can be worked on later?”. Sometimes the only practical way is hard, but if there is a way which takes an extra minute to write and pays back when looking at the code in a few months’ time, it is usually worth that minute.
