James O'Neill's Blog

March 3, 2019

PowerShell and the Microsoft Graph API : Part 2 – Starting to explore

Filed under: Azure / Cloud Services,Office 365,Powershell — jamesone111 @ 12:21 pm

In the previous post I looked at logging on to use Graph. My msftgraph module has a Connect-MsGraph function which wraps all of that up: it saves refresh tokens so it can get an access token without repeating the logon process, and it refreshes the access token when its time is up. Once I have the token I can start calling the REST API. Everything in Graph has a URL which looks like

"https://graph.microsoft.com/version/type/id/subdivision"

Version is either "v1.0" or "beta"; the resource type might be "user", "group", "notebook" and so on, and a useful one is "me" (you can call user/ID to get a different user). To read data you make an HTTP GET request, which returns JSON; to add something it is usually a POST request with a body containing JSON which describes what you want to add; updates happen with a PATCH request (more JSON), and DELETE requests do what you'd expect. Not everything supports all four – there are a few things which allow creation but whose modification or deletion is still on someone's to-do list.
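Something like this works against the real /groups endpoints – a sketch only: $DefaultHeader is the authorization header which Connect-MsGraph sets up (next snippet), and the token would need the Group.ReadWrite.All scope for the write operations.

# GET reads an object (here, the signed-in user):
Invoke-RestMethod -Method Get -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

# POST creates one - the body is JSON describing what to add
# (assumes the token was granted Group.ReadWrite.All):
$body     = @{displayName = 'Test group'; mailNickname = 'testgroup';
              mailEnabled = $true; securityEnabled = $false; groupTypes = @('Unified')} | ConvertTo-Json
$newGroup = Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/v1.0/groups" -Headers $DefaultHeader -Body $body -ContentType 'application/json'

# PATCH updates an existing object; DELETE removes it:
Invoke-RestMethod -Method Patch  -Uri "https://graph.microsoft.com/v1.0/groups/$($newGroup.id)" -Headers $DefaultHeader -Body (@{description = 'A test'} | ConvertTo-Json) -ContentType 'application/json'
Invoke-RestMethod -Method Delete -Uri "https://graph.microsoft.com/v1.0/groups/$($newGroup.id)" -Headers $DefaultHeader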

The Connect-MsGraph function runs the following so the other functions can use the token in whichever way is easiest:

if ($Response.access_token) {
    # Stash the token and a ready-made authorization header at module (script:) scope
    $Script:AccessToken     = $Response.access_token
    $Script:AuthHeader      = 'Bearer ' + $Response.access_token
    $Script:DefaultHeader   = @{Authorization = $Script:AuthHeader}
}

– by using the script: scope they are available throughout the module, and I can run

$result = Invoke-WebRequest -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

Afterwards, $result.Content will contain this block of JSON
{ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity", "businessPhones": [], "displayName": "James O'Neill", "givenName": "James", "jobTitle": null, "mail": "xxxxx@xxxxxx.com", "mobilePhone": "+447890101010", "officeLocation": null, "preferredLanguage": "en-GB", "surname": "O'Neill", "userPrincipalName": "xxxxx@xxxxxx.com", "id": "12345678-abcd-6789-ab12-345678912345" }

It doesn’t space it out to make it easy to read. There’s a better way: Invoke-RestMethod creates a PowerShell object like this 

Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/me" -Headers $DefaultHeader

@odata.context    : https://graph.microsoft.com/v1.0/$metadata#users/$entity
businessPhones    : {}
displayName       : James O'Neill
givenName         : James
jobTitle          :
mail              : xxxxx@xxxxxx.com
mobilePhone       : +447890101010
officeLocation    :
preferredLanguage : en-GB
surname           : O'Neill
userPrincipalName : xxxxx@xxxxxx.com
id                : 12345678-abcd-6789-ab12-345678912345

Invoke-RestMethod automates the conversion of JSON into a PowerShell object, so
$D = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/me/drive" -Headers $DefaultHeader
lets me refer to $D.webUrl to get the path to send a browser to in order to see my OneDrive. It is quite easy to work out what to do with the objects which come back from Invoke-RestMethod: arrays tend to come back in a .value property, paged data adds a property named '@odata.nextLink' (a sketch of the paging pattern follows below), and other objects – like "me" – give everything directly. Writing the module I added some formatting XML so PowerShell would display things nicely. The real work is discovering which URIs are available to send a GET to, and what extra parameters can be used. This isn't 100% consistent – especially around adding query parameters to the end of a URL (some URIs don't allow filtering; some do, but it might be case sensitive or insensitive, and it might not combine with other query parameters, and so on) – and although the Microsoft documentation is pretty good, in some places it does feel like a work in progress. I ended up drawing a map and labelling it with the functions I was building in the module – user-related stuff is on the left, teams and groups on the right, and things which apply to both are in the middle. The Visio file this is based on and a PDF version of it are in the repo at https://github.com/jhoneill/MsftGraph
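Here is that paging sketch – it walks '@odata.nextLink' until it runs out, using the real /me/messages endpoint and the $DefaultHeader from earlier:

$uri   = 'https://graph.microsoft.com/v1.0/me/messages'
$items = @()
while ($uri) {
    $response = Invoke-RestMethod -Method Get -Uri $uri -Headers $DefaultHeader
    $items   += $response.value              # each page's results arrive in .value
    $uri      = $response.'@odata.nextLink'  # absent ($null) on the last page, which ends the loop
}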

Relationships 

Once you can make your first call to the API, the same techniques come up again and again, and future posts will talk about how to get PowerShell formatting working nicely, and how to create JSON for POST requests without massive amounts of "text wrangling". But as you can see from the map there are many rabbit holes to go down. I started with a desire to post a message to a channel in Teams; then I saw there was support for OneDrive and OneNote, and work I had done on them in the past called out for a revisit. Once I started working with OneDrive I wanted tab completion to expand files and folders, so I had to write an argument completer (roughly the shape shown below) … and every time I looked at the documentation I saw "there is this bit you haven't done", so I added more (I don't have anywhere to experiment with Intune, so that is conspicuous by its absence, but I notice other people have worked on that) – and that's how we end up with big software projects. The patterns I used will come up in those future posts.
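For what it's worth, an argument completer's shape is roughly this – a hypothetical sketch, where Get-GraphDriveItem and its Path parameter are stand-ins rather than the module's real names:

Register-ArgumentCompleter -CommandName Get-GraphDriveItem -ParameterName Path -ScriptBlock {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameters)
    # Hypothetical names, for illustration only: list the children of the OneDrive
    # root and offer the names which match what has been typed so far
    $uri = 'https://graph.microsoft.com/v1.0/me/drive/root/children'
    (Invoke-RestMethod -Method Get -Uri $uri -Headers $Script:DefaultHeader).value |
        Where-Object {$_.name -like "$wordToComplete*"} |
        ForEach-Object {[System.Management.Automation.CompletionResult]::new($_.name)}
}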


February 28, 2019

PowerShell and the Microsoft Graph API : Part 1, signing in

Filed under: Azure / Cloud Services,Microsoft Graph,Office,Office 365,Powershell — jamesone111 @ 6:13 pm

I recently wanted a script to be able to post results to Microsoft Teams, which led me to the Microsoft Graph API – the way to interact with all kinds of Microsoft cloud services – and the scope grew to take in OneNote, OneDrive, SharePoint, Mail, Contacts, Calendars and Planner as well. I have now put V1.0 onto the PowerShell Gallery, and this is the first post on stuff that has come out of it.

If you've looked at anything to do with the Microsoft Graph API, a lot of things say "It uses OAuth, and here's how to log on". Every example seems to log on in a different way (and the authors seem to think everyone knows all about OAuth). So I present… fanfare… my 'definitive' guide to logging on. Even if you just take the code I've shared, bookmark this, because at some point someone will ask "What's OAuth about?" The best way to answer that question is with another question: how can a user of a service allow something to interact with parts of that service on their behalf? For example, at the bottom of this page is a "Share" section: WordPress can tweet on my behalf; I don't give WordPress my Twitter credentials, but I tell Twitter "I want WordPress to tweet for me". There is a scope of things at Twitter which I delegate to WordPress. Some of the building blocks are

  • Registering applications and services to which permission will be delegated, and giving them a unique ID; this allows users to say "this may do that" or "cancel access for that" – rogue apps can be de-registered.
  • Authenticating the user (once) and obtaining and storing their consent for delegation of some scope.
  • Sending tokens to delegates – WordPress sends me to Twitter with its ID; I have a conversation with Twitter, which ends with “give this to WordPress”.

Tokens help when a service uses a REST API with self-contained calls. WordPress tells Twitter "tweet this" with an access token which says who approved it to post. The access token is time limited, and a refresh token can extend access without involving the user (if the user agrees that the delegate should be allowed to work like that).

Azure AD adds extra possibilities, and combined with "Microsoft accounts", Microsoft Graph logons have a lot of permutations.

  1. The application directs users to a web login dialog and they log on with a "Microsoft account" from any domain which is not managed by Office 365 (like Gmail or Outlook.com). The URI for the login page includes the app's ID and the scopes it needs; if the app does not have consent for those scopes and that user, a consent dialog is displayed for the user to agree or not. If the logon is completed, a code is sent back. The application presents the code to a server, identifies itself, and gets the token(s). Sending codes means users don't hold their own tokens or pass them over insecure links.
  2. From the same URI as option 1, the user logs on with an Azure AD account, a.k.a. an Office 365 "work or school" account; Azure AD validates the user's credentials and checks whether there is consent for that app to use those scopes. Azure AD tracks applications (which we'll come back to in a minute) and administrators may 'pre-consent' to an application's use of particular scopes, so their users don't need to complete the consent dialog. Some scopes in Microsoft Graph must be unlocked by an administrator before they can appear in a consent dialog.

For options 1 & 2, where the same application can be used by users with either Microsoft or Azure AD accounts, applications are registered at https://apps.dev.microsoft.com/. The application ID from there can be used in a PowerShell script.

Azure AD learns about these applications as they are used and shows them in the Enterprise Applications section of the Azure Active Directory Admin Center. The name and the GUID from the app registration site appear in Azure, and clicking through shows some information about the app and leads to its permissions.

The Admin Consent / User consent tabs in the middle allow us to see where individual users have given access to scopes from a consent dialog, or see and change the administrative consent for all users in that Azure AD tenant.

The ability for the administrator to pre-consent is particularly useful with some of the later scenarios, which use a different kind of app – which leads to the next option…

  3. The app calls up the same web logon dialog as the first two options, except the logon web page is tied to a specific Azure AD tenant and doesn't allow Microsoft accounts to log on. The only thing which has changed between options 2 and 3 is the application ID in the URI.
    This kind of logon is associated with an app which was registered not at https://apps.dev.microsoft.com/ but in the App Registrations section of the Azure Active Directory Admin Center. An app registered there is only known to one AAD tenant, so when the general-purpose logon page is told it is using that app it adapts its behaviour.
    Registered apps have their own permissions page, similar to the one for enterprise apps; you can see the scopes which need admin consent ("Yes" appears towards the right).
  4. When Azure AD stores the permitted scopes for an app, there is no need to interact with the user (unless we are using multi-factor authentication) and the user's credentials can go in a silent HTTPS request. This calls a different logon URI with the tenant identity embedded in it – the app ID is specific to the tenant, so if you have the app ID then you have the tenant ID or domain name to use in the login URI.
  5. All the cases up to now have delegated permissions on behalf of a user, but permissions can be granted to an Azure AD application itself (in the permissions screenshot, user.read.all is granted both as a delegated permission and as an application permission). The app authenticates itself with a secret which is created for it in the App Registrations part of the Azure AD Admin Center. The combination of app ID and secret is effectively a login credential and needs to be treated like one.

Picking how an app logs on requires some thought.

Decision | Result | Options
Will it work with "Live" users' calendars, OneDrive, OneNote? | It must be a general app and use the web UI to log on. | 1 or 2
Is all its functionality Azure AD/Office 365 only (like Teams), or is the audience Office 365 users only? | It can be either a general or an Azure AD app (if general, the web UI must be used to log on). | 1-4
Do we want users to give consent for the app to do its work? | It must use the web UI. | 1-3
Do we want to avoid the consent dialog? | It must be an Azure AD app and use a 'silent' HTTP call to the tenant-specific logon URI. | 4
Do we want to log on as the app rather than a user? | It must be an Azure AD app and use a 'silent' HTTP call to the tenant-specific logon URI. | 5

Usually when you read about something which uses Graph, the author doesn't explain how they selected a logon method – or that other ways exist. For example, the Exchange Team Blog has a step-by-step example for an app which logs on as itself (option 5 above). The app is implemented in PowerShell, and the logon code boils down to this:

$tenant    = 'GUID OR Domain Name'
$appId     = 'APP GUID'
$appSecret = 'From Certificates and Secrets'
$URI       = 'https://login.microsoftonline.com/{0}/oauth2/token' -f $tenant

$oauthAPP  = Invoke-RestMethod -Method Post -Uri $URI -Body @{
        grant_type    = 'client_credentials';
        client_id     =  $appid ;
        client_secret =  $appSecret;
        resource      = 'https://graph.microsoft.com';
}

After this runs, $oauthAPP has an access_token property which can be used in all the calls to the service.
For ease of reading, the URI here is stored in a variable and the Body parameter is split over multiple lines, but the whole Invoke-RestMethod command could be written as a single line.

Logging on as the app is great for logs (which is what that article is about) but not for "tell me what's on my OneDrive". But that code can quickly be adapted for a user logon as described in option 4 above: we keep the same tenant, app ID and URI, change the grant type to password, and insert the user name and password in place of the app secret, like this:

$cred      = Get-Credential -Message "Please enter your Office 365 Credentials"
$oauthUser = Invoke-RestMethod -Method Post -Uri $uri -Body  @{
        grant_type = 'password';
        client_id  =  $clientID;
        username   =  $cred.username;
        password   =  $cred.GetNetworkCredential().Password;
        resource   = 'https://graph.microsoft.com';
}

Just as an aside, a lot of people "text-wrangle" the body of their HTTP requests, but I find it easier to see what is happening by writing a hash table with the fields and leaving it to the cmdlet to sort the rest out for me; the same bytes go on the wire if you write
$oauthUser = Invoke-RestMethod -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body "grant_type=password&client_id=$clientID&username=$($cred.username)&password=$($cred.GetNetworkCredential().Password)&resource=https://graph.microsoft.com"

As with the first example, the object returned by Invoke-RestMethod has the access token as a property, so we can do something like this

$defaultheader = @{'Authorization' = "bearer $($oauthUser.access_token)"}
Invoke-RestMethod -Method Get -Uri https://graph.microsoft.com/v1.0/me

I like this method because it's simple, has no dependencies on other code, and runs in both Windows PowerShell and PowerShell Core (even on Linux).
But it won't work with consumer accounts. A while back I wrote something which built on this example from the Hey, Scripting Guy! blog, which displays a web logon dialog from PowerShell. The original connected to a login URI which was only good for Windows Live logins – different examples you find will use different endpoints – and this page gave me replacement ones which seem to work for everything.

With $ClientID defined as before and a list of scopes in $Scope, the code looks like this

Add-Type -AssemblyName System.Windows.Forms
$CallBackUri = "https://login.microsoftonline.com/common/oauth2/nativeclient"
$tokenUri    = "https://login.microsoftonline.com/common/oauth2/v2.0/token"
$AuthUri     = 'https://login.microsoftonline.com/common/oauth2/v2.0/authorize' +
                '?client_id='    +  $ClientID           +
                '&scope='        + ($Scope -join '%20') +
                '&redirect_uri=' +  $CallBackUri        +
                '&response_type=code'


$form     = New-Object -TypeName System.Windows.Forms.Form       -Property @{
                Width=1000;Height=900}
$web      = New-Object -TypeName System.Windows.Forms.WebBrowser -Property @{
                Width=900;Height=800;Url=$AuthUri }
$DocComp  = { 
    $Script:uri = $web.Url.AbsoluteUri
    if ($Script:Uri -match "error=[^&]*|code=[^&]*") {$form.Close() }
}
$web.Add_DocumentCompleted($DocComp) #Add the event handler to the web control
$form.Controls.Add($web)             #Add the control to the form
$form.Add_Shown({$form.Activate()})
$form.ShowDialog() | Out-Null

if     ($uri -match "error=([^&]*)") {
    Write-Warning ("Logon returned an error of " + $Matches[1])
    Return
}
elseif ($uri -match "code=([^&]*)" ) { # If we got a code, swap it for a token
    $oauthUser = Invoke-RestMethod -Method Post -Uri $tokenUri -Body @{
                   'grant_type'   = 'authorization_code'
                   'code'         = $Matches[1]
                   'client_id'    = $Script:ClientID
                   'redirect_uri' = $CallBackUri
    }
}

This script uses Windows Forms, which means it doesn't have the same ability to run everywhere; it defines a 'callback' URI, a 'token' URI and an 'authorization' URI. The browser opens at the authorization URI; after logging on, the server sends the browser to the callback URI with code=xxxxx appended to the end. The 'NativeClient' page used here does nothing and displays nothing, but the script can see the browser has navigated to somewhere which ends with code= or error=, so it can pick out the code and send it to the token URI. I've built the authorization URI in a way which is a bit laborious but easier to read; you can see it contains a list of scopes separated by spaces (which have to be escaped to "%20" in a URI) as well as the client ID – which can be for either a generic app (registered at apps.dev.microsoft.com) or an Azure AD app.

The middle part of the script creates the Windows form with a web control which points at the authorization URI, and a two-line script block which runs for the DocumentCompleted event: it knows the login process is complete when the browser's URI contains either a code or an error, and when it sees that it makes the browser's final URI available and closes the form.
When control comes back from the form, the if … elseif checks whether the result was an error or a code. A code is posted to the token-granting URI to get the access token (and a refresh token if that is allowed). A different POST to the token URI exchanges a refresh token for a new access token and a fresh refresh token.
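That exchange is just another hash-table POST – a minimal sketch, assuming the original response included a refresh_token:

$oauthUser = Invoke-RestMethod -Method Post -Uri $tokenUri -Body @{
               'grant_type'    = 'refresh_token'
               'refresh_token' = $oauthUser.refresh_token   # returned alongside the access token
               'client_id'     = $Script:ClientID
}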
To test if the token is working and that a minimum set of scopes have been authorized we can run the same script as when the token was fetched silently.

$defaultheader = @{'Authorization' = "bearer $($oauthUser.access_token)"}
Invoke-RestMethod -Method Get -Uri https://graph.microsoft.com/v1.0/me

And that’s it.

In the next part I’ll start looking at calling the rest APIs, and what is available in Graph.

November 1, 2010

Thinking about the cloud – part 2, Office 365

Filed under: Azure / Cloud Services,Exchange,Office,Real Time Collaboration — jamesone111 @ 3:03 pm

In my previous post I was talking in general terms about why BPOS was a sound idea. The recent announcement of Ray Ozzie's retirement set people quoting his mantra "three screens and a cloud" – the three screens being computer, mobile device, and TV. The unwritten part of "three screens" is recognising their diversity: people should interact with the best possible client, which means adapting to the specifics of each "screen"; it's not "any browser and a cloud". Many phone apps do something which PCs do in the browser; they exist only because of the need to give a different experience on a different kind of screen. Instead of seeing a monolithic website (which in reality probably wasn't monolithic) we see an app which consumes a service (probably the same service which was behind the web site).

But there was more to it than publishing stuff using services instead of HTML pages; more even than the likes of Groove or Live Meeting, which used the cloud to enable new things. From Ozzie's vision, famously expressed in 2005, came a realization that services already used by business PCs and devices would increasingly be in the cloud, instead of on an organization's own servers. That was the cue to provide Exchange as a service, SharePoint as a service and so on. We've tried to make a distinction between "Software as a Service" – which in some people's minds is "any browser and a cloud" – and "Software PLUS Services", which covers a plethora of client software: from multi-player games on Xbox to iTunes to Outlook talking to an Exchange server. But when Office Outlook on a PC accesses Exchange Online, Exchange is software and it is provided as a service – it just isn't accessed using a browser. I haven't yet seen a successful way to make the distinction between the two kinds of "software as a service", so just understand that the term has different meanings depending on who is speaking.

I don't know if it was planned, but it seemed fitting that we should announce the next generation of BPOS on the day after Ray's announcement. I prefer the new name, Office 365. Mary Jo Foley posted something headed "This is not Office in the cloud", in which she says "this was not some out-of-the-blue change in Microsoft's business model. Microsoft is still pushing Office first and foremost as a PC-based software package." Which is spot on: if you need Office in a browser, Office Web Apps is there, but it is not a replacement. I wrote in the previous post about the challenges of providing SharePoint, Exchange and so on; it is not Office but the services behind Office which are in the cloud. The key points of Office 365 are these:

  • At its core are the latest versions of the server software (Lync replaces Office Communications Server and provides Live Meeting functionality, and both Exchange and SharePoint are updated). The FAQ page has a link to explain what happens to existing BPOS customers (and there are plenty of them – sending 167 million e-mails a day).
  • The ability to create a public website (previously part of Office Live Small Business) has moved into Office 365 (again, the FAQ page explains what will happen to Office Live Small Business).
  • The update to SharePoint 2010 enables us to offer Office Web Apps – so documents can be viewed in high fidelity and edited from the browser.
  • Despite the presence of Office Web Apps, the main client will be Office on desktop computers: Office Professional Plus for the desktop is now available as part of the package on the same monthly subscription basis.
  • There is a-la-carte pricing for individual parts of the suite, and bundles known as plans targeted at different market segments.

I think the a-la-carte pricing option is a good thing – though some are bound to say "Microsoft are offering too many options". The plans are just the combinations of cloud services we think will be popular; services can be added to a plan or bought standalone – for example, "kiosk" workers can get on the company e-mail system with Outlook Web Access from $2. We've announced that the plans will cost between $4 and $27 per month, that one of the enterprise plans closely mirrors the current BPOS at the same $10/user/month, and that there will be a $6 plan with the features we think small businesses will need. In the run-up to the launch I did see some details of different plans and options, but I haven't seen all of these in the announcements, and it is not impossible that they will be fine-tuned before the system goes fully live. When will that be? The launch has a beta programme (sign-up is at http://office365.microsoft.com); Mary-Jo said back in July that the plan was for a full launch in early 2011, which sounds about right – it's also necessarily vague, because a beta might reveal a lot of unexpected work to be done. If you want a more precise date, I always say in these cases that those who know won't talk, and those who talk don't know.

We've positioned Office 365 as helping small businesses to think big and big businesses to act fast – the link gives examples which range from the Starwood hotel chain to a single independent restaurant. It's worth taking time to work out what it might mean to the organization(s) you work in/with: the cloud might be right for you, it might not – but if it isn't, I'd want to be able to explain why not, and not have people think an opportunity was being missed through inertia.

This post originally appeared on my technet blog.

October 19, 2010

Thinking about the cloud (part 1).

Filed under: Azure / Cloud Services,Exchange,Office,Real Time Collaboration — jamesone111 @ 5:49 pm

I was telling someone recently that before I joined Microsoft I spent the late 1990s running a small training company. The number of employees varied, averaging out at a dozen or so. I delivered training, did the business management, helped win over customers and looked after the IT. It was like doing two or three jobs.

I've been quite reticent about our "Business Productivity Online Service", partly because it takes a long and closely argued post to cover why, from an IT professional's point of view, getting rid of your servers isn't abdicating (this is not going to be that post). But as chance would have it, I was looking at BPOS again with my old job in my thoughts. B-POS sounds like it should be something… "point of sale"… but it is Exchange, Communications Server and SharePoint provided as pay-monthly "cloud services".

In the training company we ran all our own IT services, but there's no way I'd host my own web server today: the sense of using a hosting company was clear before I left for Microsoft. The launch of BPOS gave businesses a way to get hosted mail (Exchange), presence & IM (OCS) and collaboration & document management (SharePoint) for $10 US per month – or in round numbers £80 annually – per user. Comparing that with the cost of server hardware and software, and especially the time that in-house systems took up, if I were running that business today my head would say get rid of the servers. You can mix in-house and in-cloud servers, and users keep the same desktop software, which is crucial: you don't give up Outlook to move your mailboxes to the cloud.

It needs a change of attitude to give up the server. If my head argued costs and figures, my heart might have come back with benefits like "you are master of your own destiny with the servers in-house". But are you? Back then we couldn't justify clustering our servers, so if hardware failed, work would stop until it was repaired. Paying for a service in a Microsoft datacentre means it runs on clustered hardware which someone else maintains. Microsoft's datacentre is a bigger target for attack, but the sheer scale of the operation allows investment in tiers of defence. Small businesses tend not to worry about these things until something goes wrong, and you can always tell yourself that the risk is OK if you're getting a better service in-house. But the truth is you're probably not getting a better service. As a Microsoft employee I'm used to having access to my mail and calendar from anything that connects to the internet – a laptop at home or on the move, any PC with web access, or synced to a phone. I doubt I would have set that up for the training company, but it's part of BPOS – even to the extent of supporting iPhones and BlackBerries. Getting rid of servers could not only save money but give users a better set of tools to use in their jobs – an easier thing to accept now that I don't run servers for a business.

Now, if you've come across the idea of the hype cycle (see Wikipedia if not), I agree with Gartner that cloud technologies are somewhere near the "peak of inflated expectations" – in other words, people are talking up "the cloud" beyond its true capabilities, and if things follow a normal course there will be a "trough of disillusionment" before things find their true level. I don't buy into the idea that in the future scarcely any business will bother with keeping their own server, any more than they would generate their own electricity. Nor do I buy into the polar opposite – that very few organisations, and none with any sense, will keep critical services in the cloud – that idea seems just as implausible to me. So the truth must lie in between: the method of delivering services to users won't change from one foregone conclusion (the in-house server) to another (the service in the cloud); like so many things, it will be a question of businesses asking "does it make sense to do this in-house?", and I think IT professionals will want to avoid depending on that question being answered one way.

This post originally appeared on my technet blog.

March 5, 2010

Steve Ballmer talks about our cloud strategy

Filed under: Azure / Cloud Services — jamesone111 @ 3:19 pm

Yesterday Steve gave a talk at the University of Washington to discuss how cloud computing will change the way people and businesses use technology.

You can watch the speech on demand here; the video is about 85 minutes long, and if you want a snapshot of how we see things developing, you can't do much better.

This post originally appeared on my technet blog.

February 23, 2010

What is Windows Azure?

Filed under: Azure / Cloud Services — jamesone111 @ 2:37 pm

"What is Windows Azure?" click for Video RNLI - Click for a larger version with readable text 

One of the things that seemed odd when I first came to Microsoft was the way we put up posters for our own internal consumption. I've long since grown used to it: inevitably some of these are interesting, some are not; some are eye-catching, some are not. In the atrium of my building this week is a picture of a lifeboat, and as a scuba diver I have a bit of an interest in lifeboats – an organization I hope not to see when they are on duty, but really want to be there. So a picture of one is bound to get my attention.

It turns out that the RNLI is one of the early customers using real-world – sorry for being melodramatic – life-and-death applications on Windows Azure (the system handles man-overboard alerts: email might seem like life and death at times, but getting this right is the difference between lives saved and lives lost). This is one of a set of posters around the place advertising the LiveOnAzure website using different case studies (you can go straight to the UK-focused case studies themselves).

"Stuff in the cloud" is unknown territory for many people. There are those who run away with the idea and start talking as if it means getting rid of all the IT in a business, or the talk degenerates into something like "buzzword, buzzword, cloud, buzzword, utility, buzzword, buzzword, services, buzzword, cloud, cloud, buzzword, platform, buzzword, pay as you go, buzzword, buzzword".

The LiveOnAzure site links to some great resources, including the four-minute video I've linked to here. If you want to cut through to a quick understanding of what Azure is about, it's 255 seconds well spent; afterwards, if you're interested, there is plenty more to study on the LiveOnAzure site.

This post originally appeared on my technet blog.
