James O'Neill's Blog

June 22, 2019

Last time I saw this many shells, someone sold them by the sea shore.

Filed under: Azure / Cloud Services,Linux / Open Source,Powershell,Uncategorized — jamesone111 @ 10:04 pm

I’ve been experimenting with lots of different combinations of shells on Windows 10.

BASH. I avoided the Subsystem for Linux on Windows 10 for a while. There are only two steps to set it up – adding the subsystem, and adding your chosen Linux to it. If the idea of installing Linux into Windows, but not as a virtual machine, and getting it from the Windows Store gives you a headache, you’re not alone; this may help, or it may make things worse. I go back to the first versions of Windows NT, which had a Windows-16 on Windows-32 subsystem (WoW, which was linked to the Virtual DOS Machine – 32-bit Windows 10 can still install these), an OS/2 subsystem, and then a POSIX subsystem. Subsystems translated APIs so binaries intended for a different OS could run on NT, but kernel functions (drivers, file systems, memory management, networking, process scheduling) remained the responsibility of the underlying OS. 25 years on, the Subsystem for Linux arrives in two parts – the Windows bits to support all the different Linuxes, and then distributor-supplied bits to make it look like Ubuntu 18.04 (which is what I have) or SUSE or whichever distribution you chose. wslconfig.exe will tell you which distro(s) you have and change the active one. There is a generic launcher, wsl.exe, which will launch any Linux binary in the subsystem, so you can run wsl bash, but a Windows executable, bash.exe, streamlines the process.

Linux has a view of Windows’ files (C: is auto-mounted at /mnt/c, and the mount command will mount other Windows filesystems including network and removable drives), but there is strongly worded advice not to touch Linux’s files via their location on C: – see here for more details. Just before publishing this I updated to the 1903 release of Windows 10, which adds proper access – you can see it in the screenshot.
Subsystem processes aren’t isolated, although a Linux API call might have a restricted view of the system. For example, ps only sees processes in the subsystem, but if you start two instances of bash they can both see each other, and running kill in one will terminate the other. The subsystem can run a Windows binary (like net.exe start, which will see Windows services) and pipe its output into a Linux one, like less; so those who prefer Linux tools get to use them in their management of Windows.
The subsystem isn’t read-only – anything which changes in that filesystem stays changed. Since the subsystem starts configured for a US locale,
sudo locale-gen en_GB.UTF-8 and sudo update-locale LANG=en_GB.UTF-8 got me to a British locale.

Being writable meant I could install PowerShell Core for Linux into the subsystem: I just followed the instructions (including running sudo apt-get update and sudo apt-get upgrade powershell to update from 6.1 to 6.2). Now I can test whether things which work in Windows PowerShell (V5) also work with PowerShell Core (V6) on different platforms. I can tell the Windows Subsystem for Linux to go straight into PowerShell with wsl pwsh (or wsl pwsh -nologo if I’m at a command line already). Like bash it can start Windows and Linux binaries, and the “in-the-Linux-subsystem” limitations still hold: Get-Process asks about processes in the subsystem, not the wider OS. Most PowerShell commands are there; some familiar aliases overlap with Linux commands and most of those have been removed (so | sort will send something to the Linux sort, not to Sort-Object, and ps is not the alias for Get-Process; kill and cd are exceptions to this rule). Some common environment variables (Temp, TMP, UserProfile, ComputerName) are not present on Linux; Windows-specific cmdlets, like Get-Service, don’t exist in the Linux world; and tab expansion switches to Unix style by default, though you can set either environment to match the other. My PowerShell profile soon gained a Set-PSReadlineOption command to give me the tab expansion I expect, and it sets a couple of environment variables which I know some of my scripts use. It’s possible (and tempting) to create some PSDrives which map single letters to places on /mnt, but things like to revert back to the Linux path. After that V6 Core is the same on both platforms.
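
The sort of thing the profile picked up looks like this – a minimal sketch, with the values as illustrations rather than my real settings, and the drive mapping as the tempting-but-imperfect part just mentioned:

Set-PSReadLineOption -EditMode Windows                         # Windows-style tab expansion instead of the Unix default
if (-not $env:TEMP)         { $env:TEMP         = '/tmp' }     # recreate variables some of my scripts expect
if (-not $env:COMPUTERNAME) { $env:COMPUTERNAME = (hostname) }
New-PSDrive -Name C -PSProvider FileSystem -Root /mnt/c | Out-Null   # tempting, but paths tend to revert to the /mnt/c form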

PowerShell on Linux has remoting over SSH; it connects to another instance of PowerShell 6 running in what SSH also terms a “subsystem”. Windows PowerShell (up to 5.1) uses WinRM as its transport, and PowerShell Core (6) on Windows can use both. For now at least, options like constrained endpoints (and hence “Just Enough Admin”, or JEA) are only available over WinRM.
The instructions for setting up OpenSSH are here; I spent a very frustrating time editing the wrong config file – there is one in with the program files, and my brain filtered out the instruction which said to edit the sshd_config file in C:\ProgramData\ssh. I edited the one in the wrong directory and could make an SSH session into Windows (a useful thing to know to prove OpenSSH is accepting connections), but every attempt to create a PowerShell session gave the error
New-PSSession : [localhost] The background process reported an error with the following message: The SSH client session has ended with error message: subsystem request failed on channel 0.
When I (finally) edited the right file I could connect to it from both Windows and Linux versions of PowerShell Core with New-PSSession -HostName localhost. (Using -HostName instead of -ComputerName tells the command “this is an SSH host, not a WinRM one”.) It always amazes me how people, especially but not exclusively those who spend a lot of time with Linux, are willing to re-enter a password again and again and again. I’ve always thought it was axiomatic that a well designed security system granted or refused access to many things without asking the user to re-authenticate for each (or “if I have to enter my password once more, I’ll want the so-called ‘security architect’ fired”). So within 5 minutes I was itching to get SSH to sign in with a key and not demand my password.
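
For reference, this is roughly what the fix and the test look like – a sketch which assumes a default PowerShell 6 install path and uses my user name, so adjust to taste:

# In C:\ProgramData\ssh\sshd_config (the file I should have edited first time round):
#     Subsystem  powershell  c:/progra~1/powershell/6/pwsh.exe -sshs -NoLogo -NoProfile
Restart-Service sshd                                 # pick up the config change
New-PSSession -HostName localhost -UserName james    # -HostName means SSH transport, -ComputerName means WinRM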

I found some help here, but not all the steps are needed. Running the ssh-keygen.exe utility which comes with OpenSSH builds the necessary files – I let it save the files to the default location and left the passphrase blank, so it was just a case of hitting Enter at each prompt. For a trivial environment like this I was able to copy the id_rsa.pub file to a new file named authorized_keys in the same folder, but in a more real-world case you’d paste each new public key into authorized_keys. Then I could test a Windows-to-Windows remoting session over SSH. When that worked I copied the .ssh directory to my home directory in the Subsystem for Linux, and the same command worked again.
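
Condensed into a few lines it looks like this – a sketch of the steps above, assuming the default OpenSSH file layout:

ssh-keygen.exe -t rsa                                  # accept the default file name, leave the passphrase blank
Copy-Item ~\.ssh\id_rsa.pub ~\.ssh\authorized_keys     # fine for a single key; append to the file for more
ssh localhost                                          # proves OpenSSH accepts the key ...
New-PSSession -HostName localhost                      # ... and remoting now connects without a password prompt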

PowerShell Core V6 is built on .NET Core, so some parts of PowerShell 5 have gone missing: there’s no Out-GridView or Show-Command, no Out-Printer (I wrote a replacement), no WMI commands, no commands to work with the event log, no transactions, and no tools to manage the computer’s relationship with AD. The Microsoft.* modules provide about 312 commands in V5.1 and about 244 of those are available in V6; but nearly 70 do things which don’t make sense in the Linux world because they work with WinRM/WSMan, Windows security or Windows services. A few things, like renaming the computer, stopping and restarting it, or changing the time zone, need to be done with native Linux tools. But we have about 194 core cmdlets on all platforms, and more in pre-installed modules. There was also a big step forward with compatibility in PowerShell 6.1 and another with 6.2 – there is support for a lot more of the Windows API, so although some things don’t work in Core, a lot more does than at first release. It may be necessary to specify the explicit path to a module (the different versions use either “…\WindowsPowerShell\…” or “…\PowerShell\…” in their paths, and Windows tools typically install their modules for Windows PowerShell) or to use Import-Module in V6 with the -SkipEditionCheck switch. Relatively few stubbornly refuse to work, and there is a solution for them: remotely run the commands that otherwise are unavailable – instead of going over SSH, this time you use WinRM (V5 doesn’t support SSH). When I started working with constrained endpoints I found I liked the idea of not needing to install modules everywhere and running their commands remotely instead; once you have a PSSession to the place where the commands exist, you can use Get-Module and Import-Module with a -PSSession switch to make them available. So we can bridge between versions – if “the place where the commands exist” is “another version of PowerShell on the same machine”, it’s all the same to remoting. The PowerShell team have announced that the next release uses .NET Core 3.0, which should mean the return of Out-GridView (eventually), and of other home-brew tools to put GUI interfaces onto PowerShell; that’s enough of a change to bump the major version number, and they will drop “Core” from the name to try to remove the impression that it is a poor relation on Windows. The PowerShell team have a script to do a side-by-side install of the preview – or even the daily builds – and Thomas Lee wrote it up here. Preview 1 seems to have done the important but invisible work of changing .NET version; new commands will come later. At the time of writing, PowerShell 7 preview has parity with PowerShell Core 6, and the goal is parity with Windows PowerShell 5.
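
Bridging editions looks something like this – a sketch which assumes WinRM remoting is enabled on the local machine, with PSScheduledJob chosen purely as an example of a module that only loads in Windows PowerShell 5.1:

$winPS = New-PSSession -ComputerName localhost        # WinRM session lands in Windows PowerShell 5.1
Import-Module -PSSession $winPS -Name PSScheduledJob  # proxy its commands into this V6/V7 session
Get-ScheduledJob                                      # runs in 5.1 under the covers; results come back here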

There is no ISE in PowerShell 6/7. Visual Studio Code had some real annoyances, but pretty well all of them have been fixed for some months now, and somewhere along the way I joined the majority who see it as the future. Having a Git client built in has made collaborating on the ImportExcel module so much easier, and that got me to embrace it. Code wasn’t built specifically for PowerShell, which means it will work with whichever version(s) it finds.
The right of the status bar looks like this, and clicking the green bit pulls up a menu where you can swap between versions and test what you are writing in each one. These swaps close one instance of PowerShell and open another, so you know you’re in a clean environment (not always true with the ISE); the flip side is that you realise it is a clean environment when you want something which was loaded in the shell you’ve just swapped away from.
VS Code’s architecture of extensions means it can pull all kinds of clever tricks – like remote editing – and the Azure plug-in allows an Azure Cloud Shell to be started inside the IDE. When you use Cloud Shell in a browser it has nice, easy ways to transfer files; but you can discover the UNC path to your cloud drive with Get-CloudDrive, and Get-AzStorageAccount will show you a list of accounts. You can work out the name of the account from the UNC path and use this as the user name to log on, but you also need to know the resource group it is in, and Get-AzStorageAccount shows that too. Armed with the name and resource group, Get-AzStorageAccountKey gives you one or more keys which can be used as a password, and you can map a drive letter to the cloud drive.
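
Put together it goes something like this – a sketch with placeholder account, group and share names, and it assumes outbound SMB (port 445) is allowed from where you map the drive:

Get-CloudDrive     # run inside Cloud Shell – shows the file share backing the cloud drive; the rest runs locally with the Az module signed in
$acct = Get-AzStorageAccount -ResourceGroupName 'cloud-shell-storage-westeurope' -Name 'csexample123'
$key  = (Get-AzStorageAccountKey -ResourceGroupName $acct.ResourceGroupName -Name $acct.StorageAccountName)[0].Value
$cred = [pscredential]::new("Azure\$($acct.StorageAccountName)", (ConvertTo-SecureString $key -AsPlainText -Force))
New-PSDrive -Name Q -PSProvider FileSystem -Credential $cred -Root "\\csexample123.file.core.windows.net\cs-share"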

Surely that’s enough shells for one post… well, not quite. People have been getting excited about the new Windows Terminal, which went into preview in the Windows Store a few hours before I posted this. Before that you needed to enable developer options on Windows and build it for yourself. It needs the 1903 Windows update, and with that freshly installed I thought “I’ve also got [full] Visual Studio on this machine, why not build and play with Terminal?”. As it turns out I needed to add the latest Windows SDK and several gigabytes of options to Visual Studio (all described on the GitHub page), but with that done it was one git command to download the files, another to get the submodules, then open Visual Studio, select the right section per the instructions and say build me an x64 release, have a coffee … and the app appears. (In the limited time I’ve spent with the version in the store it looks to be the same as the build-your-own version.)

It’s nice, despite being early code (no settings menu, just a JSON file of settings to change). It’s the first time Microsoft have put out a replacement for the host which Windows uses for command-line tools – shells or otherwise – so you could run ssh, ftp, or a tool like netsh in it. I’ve yet to find a way to have “as admin” and normal processes running in one instance. It didn’t take long for me to add PowerShell on Linux and PowerShell 7 preview to the default choices (it’s easy to copy/paste/adjust the JSON – just remember to change the GUID when adding a new choice, and you can specify the path to a PNG file to use as an icon).
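
You can do the copy/paste/adjust in an editor, or script it – a sketch of adding a choice, bearing in mind that the settings file’s path and schema have been changing between Terminal builds, and the paths below are just examples:

$file     = "$env:LOCALAPPDATA\Packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\LocalState\profiles.json"
$settings = Get-Content $file -Raw | ConvertFrom-Json
$settings.profiles += [pscustomobject]@{
    guid        = "{$([guid]::NewGuid())}"               # every profile needs its own GUID
    name        = 'PowerShell 7-preview'
    commandline = '"C:\Program Files\PowerShell\7-preview\pwsh.exe" -nologo'
    icon        = 'C:\Tools\ps7.png'                     # a PNG to use as the tab icon
}
$settings | ConvertTo-Json -Depth 10 | Set-Content $file
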
So, in a single window, I have all the shells, except for 32-bit PowerShell 5, as tabs: CMD, three different 64-bit versions of PowerShell on Windows, PowerShell on WSL, BASH on WSL, and PowerShell on Linux in Azure.
I must give a shout out to Scott Hanselman for the last one; I was thinking “there must be a way to do what VS Code does”, and from his post Scott thought down the same lines a little while before me. He hooked up with others working on it and shared the results. I use a 2-line batch file with title and azshell.exe (I’m not sure when “title” crept into CMD, but I’m fairly sure it wasn’t always there; I’ve used it to keep the tab narrow for CMD. To set the tab names for each of the PowerShell versions I set $Host.UI.RawUI.WindowTitle, which even works from WSL). [UPDATED 3 Aug: Terminal 0.3 has just been released with an Azure option which starts the Cloud Shell, but only in its bash incarnation. AzShell.exe can support a choice of shell by specifying -shell pwsh or -shell bash.]
So I get 7 shells, or 8 if I add the 32-bit version of PowerShell. Running them in the traditional host would give me 16 possible shells. Add the 32- and 64-bit PowerShell ISEs, and VS Code with Cloud Shell and 3 versions of local PowerShell, and we’re up to 22. And finally there is Azure Cloud Shell in a browser, or, if you must, the Azure phone app, so I get to a nice round two dozen shells in all – without ssh’ing into other machines (yes, Terminal can run ssh), using any of the alternate Linux shells with WSL, or loading all the options VS Code has. “Just type the following command line” is not as simple as it used to be.


March 3, 2019

PowerShell and the Microsoft Graph API : Part 2 – Starting to explore

Filed under: Azure / Cloud Services,Office 365,Powershell — jamesone111 @ 12:21 pm

In the previous post I looked at logging on to use Graph – my msftgraph module has a Connect-MsGraph function which contains all of that and saves refresh tokens, so it can get an access token without repeating the logon process; it also refreshes the token when its time is up. Once I have the token I can start calling the REST API. Everything in Graph has a URL which looks like

"https://graph.microsoft.com/version/type/id/subdivision"

Version is either “v1.0” or “beta”; the resource type might be “user”, “group”, “notebook” and so on, and a useful one is “me” – but you might call user/ID to get a different user. To get data you make an HTTP GET request, which returns JSON; to add something it is usually a POST request with the body containing JSON which describes what you want to add; updates happen with a PATCH request (more JSON), and DELETE requests do what you’d expect. Not everything supports all four – there are a few things which allow creation but where modification or deletion are on someone’s to-do list.
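
The pattern looks like this for each verb – purely illustrative calls, where {id} stands for a real object ID, the contact fields are examples, and $DefaultHeader is the authorization header set up in the next snippet:

$graph = 'https://graph.microsoft.com/v1.0'
Invoke-RestMethod -Method Get    -Uri "$graph/me/contacts"      -Headers $DefaultHeader
Invoke-RestMethod -Method Post   -Uri "$graph/me/contacts"      -Headers $DefaultHeader -ContentType 'application/json' -Body (@{givenName='Example'; surname='Contact'} | ConvertTo-Json)
Invoke-RestMethod -Method Patch  -Uri "$graph/me/contacts/{id}" -Headers $DefaultHeader -ContentType 'application/json' -Body (@{jobTitle='Illustration'} | ConvertTo-Json)
Invoke-RestMethod -Method Delete -Uri "$graph/me/contacts/{id}" -Headers $DefaultHeader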

The Connect-MsGraph function runs the following so the other functions can use the token in whichever way is easiest:

if ($Response.access_token) {
    $Script:AccessToken     = $Response.access_token
    $Script:AuthHeader      = 'Bearer ' + $Response.access_token
    $Script:DefaultHeader   = @{Authorization = $Script:AuthHeader}
}

– by using the script: scope they are available throughout the module, and I can run

$result = Invoke-WebRequest -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

Afterwards, $result.Content will contain this block of JSON
{ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity", "businessPhones": [], "displayName": "James O'Neill", "givenName": "James", "jobTitle": null, "mail": "xxxxx@xxxxxx.com", "mobilePhone": "+447890101010", "officeLocation": null, "preferredLanguage": "en-GB", "surname": "O'Neill", "userPrincipalName": "xxxxx@xxxxxx.com", "id": "12345678-abcd-6789-ab12-345678912345" }

It doesn’t space it out to make it easy to read. There’s a better way: Invoke-RestMethod creates a PowerShell object like this 

Invoke-Restmethod -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

@odata.context    : https://graph.microsoft.com/v1.0/$metadata#users/$entity
businessPhones    : {}
displayName       : James O'Neill
givenName         : James
jobTitle          :
mail              : xxxxx@xxxxxx.com
mobilePhone       : +447890101010
officeLocation    :
preferredLanguage : en-GB
surname           : O'Neill
userPrincipalName : xxxxx@xxxxxx.com
id                : 12345678-abcd-6789-ab12-345678912345

Invoke-RestMethod  automates the conversion of JSON into a PowerShell object; so
$D = Invoke-Restmethod -Uri "https://graph.microsoft.com/v1.0/me/drive" -Headers $DefaultHeader    
lets me refer to $D.webUrl to get the path to send a browser to in order to see my OneDrive. It is quite easy to work out what to do with the objects which come back from Invoke-RestMethod: arrays tend to come back in a .value property, some data is paged and gives a property named ‘@odata.nextLink’, and other objects – like “me” – give everything on the object. Writing the module I added some formatting XML so PowerShell would display things nicely. The work is discovering which URIs are available to send a GET to, and what extra parameters can be used – this isn’t 100% consistent, especially around adding query parameters to the end of a URL (some don’t allow filtering, some do but it might be case sensitive or insensitive, it might not combine with other query parameters, and so on), and although the Microsoft documentation is pretty good, in some places it does feel like a work in progress. I ended up drawing a map and labelling it with the functions I was building in the module – user-related stuff is on the left, teams and groups on the right, and things which apply to both are in the middle. The Visio this is based on, and a PDF version of it, are in the repo at https://github.com/jhoneill/MsftGraph
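
Paging is worth a quick sketch – the query itself is only an illustration; the loop around @odata.nextLink is the part that comes up again and again:

$uri   = 'https://graph.microsoft.com/v1.0/me/messages?$top=50'
$items = @()
do {
    $page   = Invoke-RestMethod -Uri $uri -Headers $DefaultHeader
    $items += $page.value                      # collections come back in .value
    $uri    = $page.'@odata.nextLink'          # null / absent on the last page
} while ($uri)
$items.Count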

Relationships 

Once you can make your first call to the API the same techniques come up again and again, and future posts will talk about how to get PowerShell formatting working nicely, and how to create JSON for POST requests without massive amounts of “text wrangling”. But as you can see from the map there are many rabbit holes to go down. I started with a desire to post a message to a channel in Teams. Then I saw there was support for OneDrive and OneNote, and work I had done on them in the past called out for a re-visit. Once I started working with OneDrive I wanted tab completion to expand files and folders, so I had to write an argument completer … and every time I looked at the documentation I saw “there is this bit you haven’t done”, so I added more (I don’t have anywhere to experiment with Intune so that is conspicuous by its absence, but I notice other people have worked on that), and that’s how we end up with big software projects … and the patterns I used will come up in those future posts.

February 28, 2019

PowerShell and the Microsoft Graph API : Part 1, signing in

Filed under: Azure / Cloud Services,Microsoft Graph,Office,Office 365,Powershell — jamesone111 @ 6:13 pm

I recently wanted a script to be able to post results to Microsoft Teams, which led me to the Microsoft Graph API – the way to interact with all kinds of Microsoft cloud services – and the scope grew to take in OneNote, OneDrive, SharePoint, Mail, Contacts, Calendars and Planner as well. I have now put V1.0 onto the PowerShell Gallery, and this is the first post on stuff that has come out of it.

If you’ve looked at anything to do with the Microsoft Graph API, a lot of things say “it uses OAuth, and here’s how to logon”. Every example seems to log on in a different way (and the authors seem to think everyone knows all about OAuth). So I present… fanfare … my ‘definitive’ guide to logging on. Even if you just take the code I’ve shared, bookmark this, because at some point someone will ask “What’s OAuth about?” The best way to answer that question is with another question: how can a user of a service allow something to interact with parts of that service on their behalf? For example, at the bottom of this page is a “Share” section: WordPress can tweet on my behalf; I don’t give WordPress my Twitter credentials, but I tell Twitter “I want WordPress to tweet for me”. There is a scope of things at Twitter which I delegate to WordPress. Some of the building blocks are:

  • Registering the applications and services to which permission will be delegated, and giving them a unique ID; this allows users to say “this may do that” or “cancel access for that”, and rogue apps can be de-registered.
  • Authenticating the user (once) and obtaining and storing their consent for delegation of some scope.
  • Sending tokens to delegates – WordPress sends me to Twitter with its ID; I have a conversation with Twitter, which ends with “give this to WordPress”.

Tokens help when a service uses a REST API, with self-contained calls. WordPress tells Twitter “tweet this” with an access token which says who approved it to post. The access token is time-limited, and a refresh token can extend access without involving the user (if the user agrees that the delegate should be allowed to work like that).

Azure AD adds extra possibilities and, combined with “Microsoft Accounts”, Microsoft Graph logons have a lot of permutations.

  1. The application directs users to a web login dialog and they log on with a “Microsoft Account” from any domain which is not managed by Office 365 (like Gmail or Outlook.com). The URI for the login page includes the app’s ID and the scopes it needs; and if the app does not have consent for those scopes and that user, a consent dialog is displayed for the user to agree or not. If the logon is completed, a code is sent back. The application presents the code to a server, identifies itself, and gets the token(s). Sending codes means users don’t hold their own tokens or pass them over insecure links.
  2. From the same URI as option 1, the user logs on with an Azure AD account, a.k.a. an Office 365 “work or school” account; Azure AD validates the user’s credentials and checks if there is consent for that app to use those scopes. Azure AD tracks applications (which we’ll come back to in a minute), and administrators may ‘pre-consent’ to an application’s use of particular scopes, so their users don’t need to complete the consent dialog. Some scopes in Microsoft Graph must be unlocked by an administrator before they can appear in a consent dialog.

For options 1 & 2, where the same application can be used by users with either Microsoft or Azure AD accounts, applications are registered at https://apps.dev.microsoft.com/ (see left). The application ID here can be used in a PowerShell script.

Azure AD learns about these as they are used and shows them in the enterprise applications section of the Azure Active Directory Admin Center. The name and the GUID from the app registration site appear in Azure, and clicking through shows some information about the app and leads to its permissions. (See right.)

The Admin Consent / User consent tabs in the middle allow us to see where individual users have given access to scopes from a consent dialog, or see and change the administrative consent for all users in that Azure AD tenant.

The ability for the administrator to pre-consent is particularly useful with some of the later scenarios, which use a different kind of app – which leads to the next option…

  3. The app calls up the same web logon dialog as the first two options, except the logon web page is tied to a specific Azure AD tenant and doesn’t allow Microsoft accounts to log on. The only thing which has changed between options 2 and 3 is the application ID in the URI.
    This kind of logon is associated with an app which was not registered at https://apps.dev.microsoft.com/ but in the App Registrations section of the Azure Active Directory Admin Center. An app registered there is only known to one AAD tenant, so when the general-purpose logon page is told it is using that app it adapts its behaviour.
    Registered apps have their own Permissions page, similar to the one for enterprise apps; you can see the scopes which need admin consent (“Yes” appears towards the right).
  4. When Azure AD stores the permitted scopes for an app, there is no need to interact with the user (unless we are using multi-factor authentication) and the user’s credentials can go in a silent HTTPS request. This calls a different logon URI with the tenant identity embedded in it – the app ID is specific to the tenant, and if you have the app ID then you have the tenant ID or domain name to use in the login URI.
  5. All the cases up to now have delegated permissions on behalf of a user, but permissions can be granted to an Azure AD application itself (in the screenshot on the right, user.read.all is granted both as a delegated permission and as an application permission). The app authenticates itself with a secret which is created for it in the App Registrations part of the Azure AD Admin Center. The combination of app ID and secret is effectively a login credential and needs to be treated like one.

Picking how an app logs on requires some thought.

Decision | Result | Options
Will it work with “Live” users’ Calendars, OneDrive, OneNote? | It must be a general app and use the web UI to log on. | 1 or 2
Is all its functionality Azure AD/Office 365 only (like Teams), or is the audience Office 365 users only? | It can be either a general or an Azure AD app (if general, the web UI must be used to log on). | 1-4
Do we want users to give consent for the app to do its work? | It must use the web UI. | 1-3
Do we want to avoid the consent dialog? | It must be an Azure AD app and use a ‘silent’ HTTP call to the tenant-specific logon URI. | 4
Do we want to log on as the app rather than as a user? | It must be an Azure AD app and use a ‘silent’ HTTP call to the tenant-specific logon URI. | 5

Usually when you read about something which uses Graph, the author doesn’t explain how they selected a logon method – or that other ways exist. For example, the Exchange Team Blog has a step-by-step example for an app which logs on as itself (option 5 above). The app is implemented in PowerShell and the logon code boils down to this:

$tenant    = 'GUID OR Domain Name'
$appId     = 'APP GUID'
$appSecret = 'From Certificates and Secrets'
$URI       = 'https://login.microsoft.com/{0}/oauth2/token' -f $tenant

$oauthAPP  = Invoke-RestMethod -Method Post -Uri $URI -Body @{
        grant_type    = 'client_credentials';
        client_id     =  $appid ;
        client_secret =  $appSecret;
        resource      = 'https://graph.microsoft.com';
}

After this runs, $oauthApp has an access_token property which can be used in all the calls to the service.
For ease of reading, the URI is stored in a variable here and the Body parameter is split over multiple lines, but the Invoke-RestMethod command could be a single line with the URI and body written inline.

Logging on as the app is great for logs (which is what that article is about), but not for “tell me what’s on my OneDrive”. That code can quickly be adapted for a user logon as described in option 4 above: we keep the same tenant, app ID and URI, change the grant type to password, and insert the user name and password in place of the app secret, like this:

$cred      = Get-Credential -Message "Please enter your Office 365 Credentials"
$oauthUser = Invoke-RestMethod -Method Post -Uri $uri -Body  @{
        grant_type = 'password';
        client_id  =  $clientID;
        username   =  $cred.username;
        password   =  $cred.GetNetworkCredential().Password;
        resource   = 'https://graph.microsoft.com';
}

Just as an aside, a lot of people “text-wrangle” the body of their HTTP requests, but I find it easier to see what is happening by writing a hash table with the fields and leaving it to the cmdlet to sort the rest out for me; the same bytes go on the wire if you write
$oauthUser = Invoke-RestMethod -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body "grant_type=password&client_id=$clientID&username=$($cred.username)&password=$($cred.GetNetworkCredential().Password)&resource=https://graph.microsoft.com"

As with the first example, the object returned by Invoke-RestMethod has the access token as a property, so we can do something like this

$defaultheader = @{'Authorization' = "bearer $($oauthUser.access_token)"}
Invoke-RestMethod -Method Get -Uri https://graph.microsoft.com/v1.0/me

I like this method, because it’s simple, has no dependencies on other code, and runs in both Windows-PowerShell and PowerShell-core (even on Linux).
But it won’t work with consumer accounts. A while back I wrote something which built on this example from the Hey, Scripting Guy! blog, which displays a web logon dialog from PowerShell; the original connected to a login URI which was only good for Windows Live logins – different examples you find will use different endpoints – and this page gave me replacement ones which seem to work for everything.

With $ClientID defined as before and a list of scopes in $Scope the code looks like this

Add-Type -AssemblyName System.Windows.Forms
$CallBackUri = "https://login.microsoftonline.com/common/oauth2/nativeclient"
$tokenUri    = "https://login.microsoftonline.com/common/oauth2/v2.0/token"
$AuthUri     = 'https://login.microsoftonline.com/common/oauth2/v2.0/authorize' +
                '?client_id='    +  $ClientID           +
                '&scope='        + ($Scope -join '%20') +
                '&redirect_uri=' +  $CallBackUri        +
                '&response_type=code'


$form     = New-Object -TypeName System.Windows.Forms.Form       -Property @{
                Width=1000;Height=900}
$web      = New-Object -TypeName System.Windows.Forms.WebBrowser -Property @{
                Width=900;Height=800;Url=$AuthUri }
$DocComp  = { 
    $Script:uri = $web.Url.AbsoluteUri
    if ($Script:Uri -match "error=[^&]*|code=[^&]*") {$form.Close() }
}
$web.Add_DocumentCompleted($DocComp) #Add the event handler to the web control
$form.Controls.Add($web)             #Add the control to the form
$form.Add_Shown({$form.Activate()})
$form.ShowDialog() | Out-Null

if     ($uri -match "error=([^&]*)") {
    Write-Warning ("Logon returned an error of " + $Matches[1])
    Return
}
elseif ($Uri -match "code=([^&]*)" ) { # If we got a code, swap it for a token
    $oauthUser = Invoke-RestMethod -Method Post -Uri $tokenUri -Body @{
                    'grant_type'   = 'authorization_code'
                    'code'         = $Matches[1]
                    'client_id'    = $Script:ClientID
                    'redirect_uri' = $CallBackUri
                 }
}

This script uses Windows Forms, which means it doesn’t have the same ability to run everywhere; it defines a ‘callback’ URI, a ‘token’ URI and an ‘authorization’ URI. The browser opens at the authorization URI; after logging on, the server sends the browser to the callback URI with code=xxxxx appended to the end. The ‘NativeClient’ page used here does nothing and displays nothing, but the script can see that the browser has navigated to somewhere which ends with code= or error=, so it can pick out the code and send it to the token URI. I’ve built the authorization URI in a way which is a bit laborious but easier to read; you can see it contains a list of scopes separated by spaces, which have to be escaped to “%20” in a URI, as well as the client ID – which can be for either a generic app (registered at apps.dev.microsoft.com) or an Azure AD app.

The middle part of the script creates the Windows form with a web control which points at the authorization URI, and adds a two-line script block which runs for the “on_DocumentCompleted” event; it knows the login process is complete when the browser’s URI contains either a code or an error, and when it sees that it makes the browser’s final URI available and closes the form.
When control comes back from the form, the If … ElseIf checks whether the result was an error or a code. A code will be posted to the token-granting URI to get the access token (and a refresh token, if that is allowed). A different post to the token URI exchanges a refresh token for a new access token and a fresh refresh token, as sketched below.
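
The refresh looks much the same – a sketch using the same token URI and client ID as above (you only get a refresh token back if the offline_access scope was requested at logon):

$oauthUser = Invoke-RestMethod -Method Post -Uri $tokenUri -Body @{
                'grant_type'    = 'refresh_token'
                'refresh_token' = $oauthUser.refresh_token
                'client_id'     = $Script:ClientID
                'scope'         = ($Scope -join ' ')   # the same scopes requested at logon
             }
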
To test if the token is working and that a minimum set of scopes have been authorized we can run the same script as when the token was fetched silently.

$defaultheader = @{'Authorization' = "bearer $($oauthUser.access_token)"}
Invoke-RestMethod -Method Get -Uri https://graph.microsoft.com/v1.0/me

And that’s it.

In the next part I’ll start looking at calling the rest APIs, and what is available in Graph.

November 1, 2010

Thinking about the cloud – part 2, Office 365

Filed under: Azure / Cloud Services,Exchange,Office,Real Time Collaboration — jamesone111 @ 3:03 pm

In my previous post I was talking in general terms about why BPOS was a sound idea. The recent announcement of Ray Ozzie’s retirement set people quoting his mantra “Three screens and a cloud” – the three screens being computer, mobile device, and TV. The unwritten part of “three screens” is recognising their diversity: people should interact with the best possible client – which means adapting to the specifics of each “screen”; it’s not “any browser and a cloud”. Many phone apps do something which PCs do in the browser – they only exist because of the need to give a different experience on a different kind of screen. Instead of seeing a monolithic website (which in reality probably wasn’t monolithic) we see an app which consumes a service (probably the same service which was behind the web site).

But there was more than publishing stuff using services instead of HTML pages; more even than the likes of Groove or Live Meeting, which used the cloud to enable new things. From Ozzie’s vision, famously expressed in 2005, came a realization that services already used by business PCs and devices would increasingly be in the cloud, instead of on an organization’s own servers. That was the cue to provide Exchange as a service, SharePoint as a service and so on. We’ve tried to make a distinction between “Software as a Service” – which in some people’s minds is “any browser and a cloud” – and “Software plus Services” – which covers a plethora of client software: from multi-player games on Xbox, to iTunes, to Outlook talking to an Exchange server. But when Outlook on a PC accesses Exchange Online, Exchange is software and it is provided as a service – it just isn’t accessed using a browser. I haven’t yet seen a successful way to make the distinction between the two kinds of “Software as a Service”; just understand that it has different meanings depending on who is speaking.

I don’t know if it was planned, but it seemed fitting that we should announce the next generation of BPOS the day after Ray’s announcement. I prefer the new name, Office 365. Mary Jo Foley posted something headed “This is not Office in the cloud”, in which she says “this was not some out-of-the-blue change in Microsoft’s business model. Microsoft is still pushing Office first and foremost as a PC-based software package.” Which is spot on: if you need Office in a browser, Office Web Apps are there, but they are not a replacement. I wrote in the previous post about the challenges of providing SharePoint, Exchange and so on – it is not Office but the services behind Office which are in the cloud. The key points of Office 365 are these:

  • At its core are the latest versions of the server software (Lync replaces Office Communications Server and provides Live Meeting functionality, and both Exchange and SharePoint are updated). The FAQ page has a link to explain what happens to existing BPOS customers (and there are plenty of them – sending 167 million e-mails a day).
  • The ability to create a Public website (previously part of Office Live Small Business) has moved into Office 365 (Again the FAQ page explains what will happen to Office Live Small Business)
  • The update to SharePoint 2010 enables us to offer Office Web Apps – so documents can be viewed in high fidelity and edited from the browser.
  • Despite the presence of Office Web Apps, the main client will be Office on desktop computers: Office Professional Plus for the desktop is now available as a part of the package on the same monthly subscription basis.
  • There is a-la-carte pricing for individual parts of the suite and bundles known as plans targeted at different market segments.

I think the a-la-carte pricing option is a good thing – though some are bound to say “Microsoft are offering too many options”. The plans are just the combinations of cloud services we think will be popular; services can be added to a plan or bought standalone – for example, “kiosk” workers can get on the company e-mail system with Outlook Web Access from $2. We’ve announced that the plans will cost between $4 and $27 per month, that one of the enterprise plans closely mirrors the current BPOS at the same $10/user/month, and that there will be a $6 plan with the features we think small businesses will need. In the run-up to the launch I did see some details of different plans and options, but I haven’t seen all of these in the announcements and it is not impossible that they will be fine-tuned before the system goes fully live. When will that be? The launch has a beta programme (sign-up is at http://office365.microsoft.com); Mary-Jo said back in July that the plan was for a full launch in early 2011, which sounds about right – it’s also necessarily vague, because a beta might reveal a lot of unexpected work to be done. If you want a more precise date, I always say in these cases that those who know won’t talk, and those who talk don’t know.

We’ve positioned Office 365 as helping small businesses to think big and big business to act fast – the link gives examples which range from the Starwood hotel chain to a single independent restaurant – and it’s worth taking time to work out what it might mean to the organization(s) you work in or with: the cloud might be right for you, it might not – but if it isn’t, I’d want to be able to explain why not, and not have people think an opportunity was being missed through inertia.

This post originally appeared on my technet blog.

October 19, 2010

Thinking about the cloud (part 1).

Filed under: Azure / Cloud Services,Exchange,Office,Real Time Collaboration — jamesone111 @ 5:49 pm

I was telling someone recently that before I joined Microsoft I spent the late 1990s running a small training company. The number of employees varied, averaging out at a dozen or so. I delivered training, did the business management, helped win over customers and looked after the IT. It was like doing two or three jobs.

I’ve been quite reticent about our “Business Productivity Online Service”, partly because it takes a long and closely argued post to cover why, from an IT professional’s point of view, getting rid of your servers isn’t abdicating. (This is not going to be that post.) But as chance would have it I was looking at BPOS again with my old job in my thoughts. B-POS sounds like it should be something… “point of sale”, but it is Exchange, Communications Server and SharePoint provided as pay-monthly “cloud services”.

In the training company we ran all our own IT services, but there’s no way I’d host my own web server today: the sense of using a hosting company was clear before I left for Microsoft. The launch of BPOS gave businesses a way to get hosted mail (Exchange), presence & IM (OCS) and collaboration & document management (SharePoint) for US $10 per month – or in round numbers £80 annually – per user. Comparing that with the cost of server hardware and software, and especially the time that in-house systems took up, if I were running that business today my head would say get rid of the servers. You can mix in-house and in-cloud servers, and users keep the same desktop software, which is crucial: you don’t give up Outlook to move your mailboxes to the cloud.

It needs a change of attitude to give up the server. If my head argued costs and figures, my heart might have come back with benefits like “you are master of your own destiny with the servers in-house”. But are you? Back then we couldn’t justify clustering our servers, so if hardware failed, work would stop until it was repaired. Paying for a service in a Microsoft datacentre means it runs on clustered hardware which someone else maintains. Microsoft’s datacentre is a bigger target for attack, but the sheer scale of the operation allows investment in tiers of defence. Small businesses tend not to worry about these things until something goes wrong, and you can always tell yourself that the risk is OK if you’re getting a better service in-house. But the truth is you’re probably not getting a better service. As a Microsoft employee I’m used to having access to my mail and calendar from anything that connects to the internet – a laptop at home or on the move, any PC with web access, or synced to a phone. I doubt I would have set that up for the training company, but it’s part of BPOS – even to the extent of supporting iPhones and BlackBerries. Getting rid of servers could not only save money but give users a better set of tools to use in their jobs – an easier thing to accept now that I don’t run servers for a business.

Now, if you’ve come across the idea of the hype cycle (see Wikipedia if not), I agree with Gartner that cloud technologies are somewhere near the “peak of inflated expectations” – in other words, people are talking up “the cloud” beyond its true capabilities, and if things follow a normal course there will be a “trough of disillusionment” before things find their true level. I don’t buy into the idea that in the future scarcely any business will bother with keeping their own server, any more than they would generate their own electricity. Nor do I buy into the polar opposite – that very few organisations, and none with any sense, will keep critical services in the cloud – that idea seems just as implausible to me. So the truth must lie in between: the method of delivering services to users won’t change from one foregone conclusion (the in-house server) to another foregone conclusion (the service in the cloud); like so many things, it will be a question of businesses asking “does it make sense to do this in-house?”, and I think IT professionals will want to avoid depending on that question being answered one way.

This post originally appeared on my technet blog.

March 5, 2010

Steve Ballmer talks about our cloud strategy

Filed under: Azure / Cloud Services — jamesone111 @ 3:19 pm

Yesterday Steve gave a talk at the University of Washington to discuss how cloud computing will change the way people and businesses use technology.

You can watch the speech on demand here; the video is about 85 minutes long, and if you want to get a snapshot of how we see things developing you can’t do much better.

This post originally appeared on my technet blog.

February 23, 2010

What is Windows Azure ?

Filed under: Azure / Cloud Services — jamesone111 @ 2:37 pm

"What is Windows Azure?" click for Video RNLI - Click for a larger version with readable text 

One of the things that seemed odd when I first came to Microsoft was the way we put up posters for our own internal consumption. I’ve long since grown used to it: inevitably some of these are interesting and some are not; some are eye-catching and some are not. In the atrium of my building this week is a picture of a lifeboat, and as a scuba diver I have a bit of an interest in lifeboats – they belong to an organization I hope never to see on duty for me, but really want to be there. So a picture of one is bound to get my attention.

It turns out that the RNLI is one of the early customers using real-world – sorry for being melodramatic – life-and-death applications on Windows Azure (the system handles man-overboard alerts: email might seem like life and death at times, but getting this right is the difference between lives saved and lives lost). This is one of a set of posters around the place advertising the LiveOnAzure website using different case studies (you can go straight to the UK-focused case studies themselves).

“Stuff in the cloud” is unknown territory for many people. There are those who run away with the idea and start talking as if it means getting rid of all the IT in a business, or the talk degenerates into something like “Buzzword, Buzzword, cloud, Buzzword, utility, Buzzword, Buzzword, services, Buzzword, cloud, cloud, Buzzword, platform, Buzzword, pay as you go, Buzzword, Buzzword”.

The LiveOnAzure site links to some great resources, including the 4-minute video I’ve linked to here. If you want to cut through to a quick understanding of what Azure is about, it’s 255 seconds well spent; afterwards, if you’re interested, there is plenty more to study on the LiveOnAzure site.

This post originally appeared on my technet blog.
