James O'Neill's Blog

March 20, 2019

PowerShell Text wrangling [not just] part 4 of the Graph API series

Filed under: Uncategorized — jamesone111 @ 10:00 am

Almost the first line of PowerShell I ever saw was combining two strings like this
"{0} {1}"  -f $x , $y
And my reaction was “What!! If the syntax to concatenate two strings is so opaque, this might not be for me.”
[Edit as if to prove the awkwardness of the syntax, the initial post had two errors in such a short fragment. Thanks Doug.]
The –f operator is a wrapper for .NET’s [String]::Format() and it is useful, partly for inserting strings into another string. For example I might define a SQL statement in one place like this:
"Insert Into Users ([GivenName], [Surname], [endDate]) Values ('{0}', '{1}', '{2}')"
and later I can get a ready-to-run query, using  $sqlInsert -f $first,$last,$end  
Doing this lets me arrange a script with long strings placed away from the logic; I’m less happy with this:  

@"
Update Users
Set [endDate] = '{0}'
where {1} = '{2}'
And   {3} = '{4}'
"@ -f $end,$fieldName1,$value1,$fieldName2,$value2
because the string is right there, my brain automatically goes back and forth filling in what should be in {0}, {1} and {2}, so I’d prefer to put $first, $last and $end inside one string, or move the string out of sight. A string format operator is, first and foremost, there to apply formatting, and going over some downloaded code –f let me change this:
$now = Get-Date
$d = $now.Day.ToString()
if ($d.Length -eq 1) {$d = "0$d"}
$m = $now.Month.ToString()
if ($m.Length -eq 1) {$m = "0$m"}
$y = $now.Year.ToString()
$logMessage = "Run on $d/$m/$y"

To this:  
$now = Get-Date
$logMessage = "Run on {0:d}" -f $now
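A few more of these format strings, for illustration (the exact output depends on the machine’s regional settings, so the comments show what en-GB settings would give):

```powershell
$now = Get-Date
"{0:d}" -f $now            # local short date,  e.g. 20/03/2019
"{0:D}" -f $now            # long date,         e.g. 20 March 2019
"{0:g}" -f $now            # general date/time, e.g. 20/03/2019 10:00
"{0:yyyy-MM-dd}" -f $now   # an explicit custom pattern, when a fixed format is wanted
```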

For years my OneNote notebook has had a page which I lifted from the now-defunct blog of Kathy Kam (you may still find re-posts of it) which explains what the formatting strings are. In this case :d is “local short date”, which is better than hard-coding a date format; formatting strings used in Excel generally work, but there are some extra single-character formats like :g for general date/time and :D for long date. If you live somewhere that puts the least significant part of the date in the middle, then you might ask “Why not use "Run on $now"?”
The 10th day of March 2019 outputs “Run on 03/10/2019 13:48:32” – in most of the world that means “3rd day of October”. But we could use
"Run on $($now.ToString('d'))"
And most people who use PowerShell will have used the $() syntax to evaluate the property of a variable embedded in a string. But you can put a lot inside $(); this example will give you a list of days:
"Days are $(0..6 | foreach {"`r`n" + $now.AddDays($_).ToString('dddd')})"
The quote marks inside the $() don’t end the string, and what is being evaluated can run over multiple lines, like this:
"Days are $(0..6 | foreach {
       "`r`n" + $now.AddDays($_).ToString('dddd')
} )"

Again, there are places where I have found this technique to be useful, but encountering it in an unfamiliar piece of script means it takes me a few seconds to see that "`r`n" is a string, inside a code block, inside a string, in my script. I might use @" … "@, which I think was once required for multi-line strings, instead of "…", which certainly works now but leaves me looking for the closing quote – which isn’t the next quote. If the first part of the string were set and then a loop added days to it, that would be easier to follow. Incidentally, when I talk of “an unfamiliar piece of script” I don’t just mean other people’s work; I include work I did long enough ago that I don’t remember it.
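That “set the first part, then append inside a loop” version might look like this – a sketch producing the same output as the $() example above:

```powershell
$now = Get-Date
$dayList = "Days are "
foreach ($i in 0..6) {
    # append each day name on its own line
    $dayList += "`r`n" + $now.AddDays($i).ToString('dddd')
}
$dayList
```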

Embedding in a string, concatenating multiple strings, or using –f might all work, so which one is best in a given situation varies (sometimes the shortest code is the easiest to understand; other things are clearer spread over a few lines) and the choice often comes down to personal coding style.
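For example, the same message can be produced all three ways; none is wrong, they just read differently:

```powershell
$user = "James"; $count = 3
# 1. embedding: shortest, but everything lives inside one string
"Processed $count items for $user"
# 2. concatenation: explicit, but noisy with quotes and plus signs
"Processed " + $count + " items for " + $user
# 3. -f: keeps the template clean, but you fill the placeholders mentally
"Processed {0} items for {1}" -f $count, $user
```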
When working on my Graph API module I needed to send JSON like this (from the Microsoft documentation) to create a group:

{
  "description": "Group with designated owner and members",
  "displayName": "Operations group",
  "groupTypes": [
    "Unified"
  ],
  "mailEnabled": true,
  "mailNickname": "operations2019",
  "securityEnabled": false,
  "owners@odata.bind": [
    "https://graph.microsoft.com/v1.0/users/{id}"
  ],
  "members@odata.bind": [
    "https://graph.microsoft.com/v1.0/users/{id}",
    "https://graph.microsoft.com/v1.0/users/{id}"
  ]
}

This might be done as one large string with embedded variables, and even a couple of embedded loops like the previous example, or I could build the text up a few lines at a time. Eventually I settled on doing it like this:
$settings = @{'displayName'     = $Name
              'mailNickname'    = $MailNickName
              'mailEnabled'     = $true
              'securityEnabled' = $false
              'visibility'      = $Visibility.ToLower()
              'groupTypes'      = @('Unified')
             }
if ($Description) {$settings['description']        = $Description  }
if ($Members)     {$settings['members@odata.bind'] = @() + $Members}
if ($Owners)      {$settings['owners@odata.bind']  = @() + $Owners }

$json = ConvertTo-Json $settings
Write-Debug $json
$group = Invoke-RestMethod @webparams -body $json

ConvertTo-Json only processes two levels of hierarchy by default, so when the hash table has more layers it needs the –Depth parameter to translate properly. Why do it this way? JSON says ‘here is something (or a collection of things) with a name’, so why say that in PowerShell-speak only to translate it? Partly it’s keeping to the philosophy of only translating into text at the last moment; partly it’s getting rid of the mental context-switching – this is script, this is text with script-like bits. Partly it is to make getting things right easier than getting things wrong: if things are built up a few lines at a time, I need to remember that ‘Unified’ should be quoted but, as a Boolean value, ‘false’ should not; I need to track unclosed quotes and brackets, and make sure commas are where they are needed and nowhere else: in short, every edit is a chance to turn valid JSON into something that generates a “Bad Request” message – so everywhere I generate JSON I have Write-Debug $Json. But any syntax errors in that hash table will be highlighted as I edit it.
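A quick illustration of that –Depth point:

```powershell
$deep = @{level1 = @{level2 = @{level3 = @{level4 = 'value'}}}}
ConvertTo-Json $deep             # layers past the default depth are flattened to strings
ConvertTo-Json $deep -Depth 10   # serialises the whole structure
```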
And partly… when it comes to parsing text, I’ve been there and got the T-shirts; better code than mine is available, built in with PowerShell; I’d like to apply the same logic to creating such text: I want to save as much effort as I can between “I have these parameters/variables” and “this data came back”. That was the thinking behind writing my GetSQL module: I know how to connect to a database and can write fairly sophisticated queries, but why keep writing variations of the same few simple ones, and the connection to send them to the database? SQL statements have the same “context switch” – if I type “–eq” instead of “=” in a query it’s not because I’ve forgotten the SQL I learned decades ago. Get-SQL lets me keep my brain in PowerShell mode and write:
Get-SQL -Update LogTable -Set Progress -Value 100 -Where ID -EQ $currentItem

My perspective – centred on the script that calls the API rather than the API or its transport components – isn’t the only way. Some people prize the skill of hand-writing descriptions of things in JSON. A recent project had me using DSC for bare-metal builds (I needed to parse MOF files to construct test scripts, and I could hand-crank MOF files, but why go through that pain?); DSC configuration functions take a configuration-data parameter which is a hash holding all the details of all the machines. This was huge. When work started it was natural to create a PowerShell variable holding the data, but when it became hundreds of lines I moved that data to its own file; it remained a series of declarations which could be executed – this is the code which did that:

Get-ChildItem -Path (Join-Path -Path $scriptPath -ChildPath "*.config.ps1") | ForEach-Object {
    Write-Verbose -Message "Adding config info from $($_.Name)"
    $ConfigurationData.AllNodes += (& $_.FullName)
}

– there was no decision to store data as PowerShell declarations, it just happened as an accident of how the development unfolded, and there were people working on that project who found JSON easier to read (we could have used any format which supports a hierarchy). So I added something to put files through ConvertFrom-Json and convert the result from a PSCustomObject to a hash table, so they could express the data in the way which seemed natural to them.
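That conversion can be sketched like this – the function name and file name are mine, for illustration, and a full version would recurse into nested objects (from PowerShell 6 onwards, ConvertFrom-Json has an -AsHashtable switch which does the job directly):

```powershell
function ConvertTo-Hashtable {
    param($InputObject)
    $hash = @{}
    # copy each property of the PSCustomObject into a hash-table entry
    foreach ($p in $InputObject.psobject.Properties) {
        $hash[$p.Name] = $p.Value
    }
    $hash
}

$nodeData = ConvertTo-Hashtable (Get-Content -Raw '.\nodes.config.json' | ConvertFrom-Json)
```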

Does this mean data should always be shifted out of the script ? Even that answer is “it depends” and is influenced by personal style. The examples which Doug Finke wrote for the ImportExcel module often start like this:

$data = ConvertFrom-Csv @'
Item,Quantity,Price,Total Cost
Baseball Bats,38,159.00,6042.00
'@

Which is simultaneously good and bad. It is placed at the start of the file, not sandwiched between lumps of code; we can see that it is data for later and what the columns are; and it is only one line per row of data, where JSON would be six. But CSV gives errors a hiding place – a mistyped price or an extra comma is hard to see – though that doesn’t matter in this case. But we wouldn’t mash together the string being converted from other data… would we?
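ConvertFrom-Csv turns each data row into an object whose properties are named from the header row, so the result pipes straight into other commands – a small self-contained example:

```powershell
$data = ConvertFrom-Csv @'
Item,Quantity,Price
Baseball Bats,38,159.00
Hockey Sticks,24,54.00
'@
# each row is now an object; properties come from the header line
$data | ForEach-Object { [double]$_.Price * [int]$_.Quantity }   # 6042 and 1296
```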


March 6, 2019

PowerShell formatting [not just] Part 3 of the Graph API series

Filed under: Microsoft Graph,Powershell — jamesone111 @ 8:12 am

Many of us learnt to program at school and lesson 1 was writing something like

PRINT “Enter a number”
INPUT X
Xsqrd = X * X
PRINT “The Square of ” + STR(X) + “ is ” + STR(Xsqrd)

So I know I should not be surprised when I read scripts and see someone has started with CLS (or Clear-Host) and then has a script peppered with Read-Host and Write-Host, or perhaps echo – and what is echoed is a carefully built-up string. And I find myself saying “STOP”

  • CLS: I might have hundreds or thousands of lines in the scroll-back buffer of my shell. Who gave you permission to throw them away?
  • Let me run your script with parameters. Only use commands like Read-Host and Get-Credential if I didn’t (or couldn’t) provide the parameter when I started it
  • Never print your output
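Those rules turn into a familiar function shape – a hypothetical sketch, not from any particular module:

```powershell
function Get-Something {
    [CmdletBinding()]
    param(
        $Name     # let the caller supply this on the command line
    )
    # prompt only as a fallback, never unconditionally
    if (-not $Name) { $Name = Read-Host 'Name' }
    Write-Verbose "Looking up $Name"     # status goes to the verbose stream, not the console
    [pscustomobject]@{Name = $Name}      # output an object, not formatted text
}
```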

And quite quickly most of us learn about Write-Verbose and Write-Progress and the proper way to do “what’s happening” messages; we also learn to output an object, not formatted text. However, this can have a sting in the tail: the previous post showed this little snippet calling the Graph API.

Invoke-Restmethod -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

@odata.context    : https://graph.microsoft.com/v1.0/$metadata#users/$entity
businessPhones    : {}
displayName       : James O'Neill
givenName         : James
jobTitle          :
mail              : xxxxx@xxxxxx.com
mobilePhone       : +447890101010
officeLocation    :
preferredLanguage : en-GB
surname           : O'Neill
userPrincipalName : xxxxx@xxxxxx.com
id                : 12345678-abcd-6789-ab12-345678912345

Invoke-RestMethod automates the conversion of JSON into a PowerShell object, so I have something rich to output, but I don’t want all of this information; I want a function which works like this:

> get-graphuser
Display Name  Job Title  Mail  Mobile Phones UPN
------------  ---------  ----  ------------- ---
James O'Neill Consultant jxxx  +447890101010 Jxxx

If no user is specified my function selects the current user. If I want a different user I’ll give it a –UserID parameter; if I want something about a user I’ll give it other parameters and switches; but if it just outputs a user I want a few fields displayed as a table. (That’s not a real phone number, by the way.) This is much more the PowerShell way: think about what the function does, what goes in and what comes out, but be vaguer about the visuals of the output.

A simple but effective way to get this style of output would be to give Get-GraphUser a –Raw switch and pipe the object through Format-Table unless raw output is needed; but I would need to repeat this anywhere that I get a user, and it only works for immediate output. If I do
$U = Get-GraphUser
<<some operation with $U>>

and later check what is in the variable, it will output in the original style. If I forget –Raw, $U won’t be valid input… There is a better way: tell PowerShell “When you see a Graph user, format it as a table like this”; that’s done with a format.ps1xml file – it’s easiest to plagiarize the ones in the $PSHOME directory (don’t modify them, they’re digitally signed) – and you get an XML file which looks like this:

<Configuration>
    <ViewDefinitions>
        <View>
            <Name>Graph Users</Name>
            <ViewSelectedBy>
                <TypeName>GraphUser</TypeName>
            </ViewSelectedBy>
            <TableControl>
                ...
            </TableControl>
        </View>
    </ViewDefinitions>
</Configuration>

There is a <View> section for each type of object, and a <TableControl> or <ListControl> defines how it should be displayed. For OneDrive objects I copied the way headers work for files, but everything else just has a table or list. The XML says the view is selected by an object with a type name of GraphUser, and we can add any name to the list of types on an object. The core of the Get-GraphUser function looks like this:

$webparams = @{Method  = "Get"
               Headers = $Script:DefaultHeader
              }

if ($UserID) {$userID = "users/$userID"} else {$userID = "me"}

$uri = "https://graph.microsoft.com/v1.0/$userID"
#Other URIs may be defined 

$results = Invoke-RestMethod -Uri $uri @webparams

foreach ($r in $results) {
   if ($r.'@odata.type' -match 'user$')  {
       $r.psobject.TypeNames.Insert(0,'GraphUser')
   }
   $r
}

The “common” web parameters are defined, then the URI is determined, then a call to Invoke-RestMethod is made, which might get one item or an array of many (usually in a .value property). Then the results have the name “GraphUser” added to their list of types, and the result(s) are returned.
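Adding the type name is a single call on the object’s hidden psobject member – for example:

```powershell
$user = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader
# put 'GraphUser' at the front of the type list so the formatting view is selected
$user.psobject.TypeNames.Insert(0, 'GraphUser')
$user    # now renders with the "Graph Users" view
```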

This pattern repeats again and again, with a couple of common modifications. I can use Get-GraphUser <id> –Calendar to get a user’s calendar, but the calendar that comes back doesn’t contain the details needed to fetch its events. So, going through the foreach loop, when the result is a calendar it is better for the function to add a property that will help navigation later:

$uri = "https://graph.microsoft.com/v1.0/$userID/Calendars"

Add-Member -InputObject $r -MemberType NoteProperty -Name CalendarPath -Value "$userID/Calendars/$($r.id)"

As well as navigation, I don’t like functions which return things that need to be translated, so when an API returns dates as text strings I’ll provide an extra property which presents them as a DateTime object. I also create some properties for display use only, which comes into its own for the second variation on the pattern. Sometimes it is simpler to just tell PowerShell “show these properties”: when there is no formatting XML, PowerShell has one last check – does the object have a PSStandardMembers property with a DefaultDisplayPropertySet child property? For events in the calendar, the definition of “standard members” might look like this:

[string[]]$defaultProperties = @('Subject','When','Reminder')
$defaultDisplayPropertySet = New-Object System.Management.Automation.PSPropertySet`
             -ArgumentList 'DefaultDisplayPropertySet',$defaultProperties
$psStandardMembers = [System.Management.Automation.PSMemberInfo[]] @($defaultDisplayPropertySet)

Then, as the function loops through the returned events, instead of adding a type name it adds a property named PSStandardMembers:

Add-Member -InputObject $r -MemberType MemberSet  -Name PSStandardMembers -Value $PSStandardMembers

When there is no other formatting information, PowerShell displays objects with up to four properties as a table, and objects with more than four as a list. So this method suits a list of reminders in the calendar, where the ideal output is a table with three columns, and there is only one place which gets reminders. If the same type of data is fetched in multiple places it is easier to maintain a definition in an XML file.
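You can see that table-or-list default with a throw-away object:

```powershell
[pscustomobject]@{a=1; b=2; c=3; d=4}        # four properties: formats as a table
[pscustomobject]@{a=1; b=2; c=3; d=4; e=5}   # five properties: formats as a list
```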

As I said before, working on the Graph module the same pattern repeated a lot: discover a URI which can get the data, then write a PowerShell function which:

  • Builds the URI from the function’s parameters
  • Calls Invoke-RestMethod
  • Adds properties and/or a type name to the returned object(s)
  • Returns those objects
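Put together, the skeleton of such a function looks like this (the names are illustrative, not the module’s actual code, and it assumes $webparams holds the authorization header as shown earlier):

```powershell
function Get-GraphThing {
    [CmdletBinding()]
    param($ID = 'me')
    # 1. build the URI from the function's parameters
    $uri     = "https://graph.microsoft.com/v1.0/$ID"
    # 2. call Invoke-RestMethod
    $results = Invoke-RestMethod -Uri $uri @webparams
    # 3. & 4. tag each result with a type name and return it
    foreach ($r in $results) {
        $r.psobject.TypeNames.Insert(0, 'GraphThing')
        $r
    }
}
```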

The first working version of a new function helps to decide how the objects should be formatted, which refines the function and adds to the formatting XML as required. Similarly, the need for extra properties might only become apparent when other functions are written; so development is an iterative process.

The next post will look at another area which the module uses, but which applies more widely and which I’ve taken to calling “text wrangling”: how we build up JSON and other text that we need to send in a request.

March 3, 2019

PowerShell and the Microsoft Graph API : Part 2 – Starting to explore

Filed under: Azure / Cloud Services,Office 365,Powershell — jamesone111 @ 12:21 pm

In the previous post I looked at logging on to use Graph – my msftgraph module has a Connect-MsGraph function which contains all of that and saves refresh tokens, so it can get an access token without repeating the logon process; it also refreshes the token when its time is up. Once I have the token I can start calling the REST API. Everything in Graph has a URL which looks like:

https://graph.microsoft.com/{version}/{resource}
The version is either “v1.0” or “beta”; the resource type might be “user” or “group” or “notebook” and so on – a useful one is “me” – but you might call user/{id} to get a different user. To get the data you make an HTTP GET request, which returns JSON; to add something it is usually a POST request with the body containing JSON which describes what you want to add; updates happen with a PATCH request (more JSON), and DELETE requests do what you’d expect. Not everything supports all four – there are a few things which allow creation but where modification or deletion are on someone’s to-do list.
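In Invoke-RestMethod terms the four verbs look like this (the URIs, $json body and $id here are illustrative, not specific endpoints from the module):

```powershell
$graph = 'https://graph.microsoft.com/v1.0'
# read
Invoke-RestMethod -Method Get    -Uri "$graph/me"              -Headers $DefaultHeader
# create
Invoke-RestMethod -Method Post   -Uri "$graph/me/messages"     -Headers $DefaultHeader -ContentType 'application/json' -Body $json
# update
Invoke-RestMethod -Method Patch  -Uri "$graph/me/messages/$id" -Headers $DefaultHeader -ContentType 'application/json' -Body $json
# delete
Invoke-RestMethod -Method Delete -Uri "$graph/me/messages/$id" -Headers $DefaultHeader
```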

The Connect-MsGraph function runs the following so the other functions can use the token in whichever way is easiest:

if ($Response.access_token) {
    $Script:AccessToken     = $Response.access_token
    $Script:AuthHeader      = 'Bearer ' + $Response.access_token
    $Script:DefaultHeader   = @{Authorization = $Script:AuthHeader}
}

– by using the script: scope they are available throughout the module, and I can run

$result = Invoke-WebRequest -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

Afterwards, $result.Content will contain this block of JSON
{ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity", "businessPhones": [], "displayName": "James O'Neill", "givenName": "James", "jobTitle": null, "mail": "xxxxx@xxxxxx.com", "mobilePhone": "+447890101010", "officeLocation": null, "preferredLanguage": "en-GB", "surname": "O'Neill", "userPrincipalName": "xxxxx@xxxxxx.com", "id": "12345678-abcd-6789-ab12-345678912345" }

It doesn’t space it out to make it easy to read. There’s a better way: Invoke-RestMethod creates a PowerShell object, like this:

Invoke-Restmethod -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

@odata.context    : https://graph.microsoft.com/v1.0/$metadata#users/$entity
businessPhones    : {}
displayName       : James O'Neill
givenName         : James
jobTitle          :
mail              : xxxxx@xxxxxx.com
mobilePhone       : +447890101010
officeLocation    :
preferredLanguage : en-GB
surname           : O'Neill
userPrincipalName : xxxxx@xxxxxx.com
id                : 12345678-abcd-6789-ab12-345678912345

Invoke-RestMethod automates the conversion of JSON into a PowerShell object, so
$D = Invoke-Restmethod -Uri "https://graph.microsoft.com/v1.0/me/drive" -Headers $DefaultHeader
lets me refer to $D.webUrl to get the path to send a browser to, to see my OneDrive. It is quite easy to work out what to do with the objects which come back from Invoke-RestMethod; arrays tend to come back in a .value property, some data is paged and gives a property named ‘@odata.nextLink’, and other objects – like “me” – give everything on the object. Writing the module I added some formatting XML so PowerShell would display things nicely. The real work is discovering which URIs you can send a GET to, and what extra parameters can be used – this isn’t 100% consistent, especially around adding query parameters to the end of a URL (some don’t allow filtering; some do, but it might be case sensitive or insensitive; it might not combine with other query parameters; and so on) and although the Microsoft documentation is pretty good, in some places it does feel like a work in progress. I ended up drawing a map and labelling it with the functions I was building in the module – user-related stuff is on the left, teams and groups on the right, and things which apply to both are in the middle. The Visio this is based on, and a PDF version of it, are in the repo at https://github.com/jhoneill/MsftGraph
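Paged data can be walked by following ‘@odata.nextLink’ until it disappears – a sketch, using the messages endpoint as an example:

```powershell
$uri = "https://graph.microsoft.com/v1.0/me/messages"
do {
    $page = Invoke-RestMethod -Uri $uri -Headers $DefaultHeader
    $page.value                      # emit this page's items
    $uri  = $page.'@odata.nextLink'  # null on the last page
} while ($uri)
```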


Once you can make your first call to the API the same techniques come up again and again, and future posts will talk about how to get PowerShell formatting working nicely, and how to create JSON for POST requests without massive amounts of “text wrangling”. But as you can see from the map there are many rabbit holes to go down. I started with a desire to post a message to a channel in Teams. Then I saw there was support for OneDrive and OneNote, and work I had done on them in the past called out for a re-visit. Once I started working with OneDrive I wanted tab completion to expand files and folders, so I had to write an argument completer… and every time I looked at the documentation I saw “there is this bit you haven’t done”, so I added more (I don’t have anywhere to experiment with Intune, so that is conspicuous by its absence, but I notice other people have worked on that) – and that’s how we end up with big software projects… and the patterns I used will come up in those future posts.

Blog at WordPress.com.