James O'Neill's Blog

December 24, 2009

Fighting talk from VMware.

Filed under: Virtualization — jamesone111 @ 4:29 pm

There’s a running joke on my team: if I want to drive my blog statistics up, all I need to do is talk tough about VMware. A few days ago I posted a response to some VMware FUD, and it’s had three times the readers of a typical post and more than the usual number of re-tweets, inbound links and comments, including one from Scott Drummonds. He said:

I work for VMware and am one of the people responsible for our performance white papers.

I know who Scott is, and that’s not all he’s responsible for: I got a huge spike earlier in the year when I went after him for posting a dishonest video on YouTube. He goes on:

You incorrectly state that VMware recommends against memory over-commit.  It is foolish for you to make this statement, supported by unknown text in our performance white paper, If you think any of the language in this document supports your position, please quote the specific text.  I urge you to put that comment in a blog entry of its own.

Fighting talk. I gave the link to http://www.vmware.com/pdf/vi_performance_tuning.pdf , where both of the following quotes – apparently “unknown” to Scott – appear on page 6.

Avoid frequent memory reclamation. Make sure the host has more physical memory than the total amount of memory that will be used by ESX plus the sum of the working set sizes that will be used by all the virtual machines running at any one time.

Due to page sharing and other techniques, ESX Server can be overcommitted on memory and still not swap. However, if the over-commitment is large and ESX is swapping, performance in the virtual machines is significantly reduced.

Scott says that he is “sure that your [i.e. my] interpretation of the text will receive comments from many of your customers.”  I don’t think I’m doing any special interpretation here (comment away if you think I am): the basic definition of “fully committed” is when the host has an amount of physical memory equal to the total amount of memory that will be used by the virtualization layer plus the sum of the working set sizes that will be used by all the virtual machines running at any one time. Saying you should make sure the host has more memory than that translates as DO NOT OVER-COMMIT.

The second point qualifies the first: there’s a nerdy but important point that working set is not the same as memory allocated. The critical thing is to avoid the virtualization layer swapping memory. If you had – say – two hosts running 8GB VMs with Exchange in, and you tried to fit both VMs into one host with 8GB of available RAM, both VMs would try to cache 6-7GB of mail-store data; but without physical memory behind it, what got pulled in from disk would get swapped out to disk again. In this case you would be better off telling the VMs they had 4GB to use: that way they can keep 2GB of critical data (indexes and so on) in memory, and not take pot luck with what the virtualization layer swaps to disk. “Balloon” drivers make memory allocated to a VM unavailable, reducing an individual working set; page sharing reduces the sum of working sets. It might sound pedantic to say ‘you can over-allocate without over-committing’, but that’s what this comes down to: the question is “by how much”, as I said in the original post:

VMware will cite their ability to share memory pages, but this doesn’t scale well to very large memory systems (more pages to compare), and to work you must not have [1] large amounts of data in memory in the VMs (the data will be different in each), or [2]  OSes which support entry point randomization (Vista, Win7, Server 2008/2008-R2) or [3] heterogeneous operating systems.

We had this argument with VMware before, when they claimed you could over-commit by 2:1 in every situation. Eventually, as “proof”, they found a customer (they didn’t name) who was evaluating (not in production with) a VDI solution based on Windows XP, with 256MB in each VM. The goal was to run a single (unnamed) application, so this looked like a much better fit for Terminal Services (one OS instance / many users & apps) than for VDI (many OS instances); but if the app won’t run in TS, then this is a situation where a VMware-based solution has the advantage over a Microsoft-based one. [Yes, such situations do exist! I just maintain they are less common than many people would have you believe.]

Typing this up, I wondered if Scott thought I was saying that VMware’s advice was that customers should never, ever, under any circumstances whatsoever use the ability to over-commit – which VMware have taken the time and trouble to put into their products. You can see that they recommend against memory over-commit, as he puts it, in a far more qualified way than that. The article from Elias which I was responding to (and I’ve no idea if Scott read it or not) talked about using oversubscription “to power-on VMs when a host experiences hardware failure”. This sounds like a design with cluster nodes having redundant capacity for CPU, network and disk I/O (not sweating the hardware, as in the VDI case) but with none for memory: after a failure, moving to fully used CPU, network and disk capacity, but accepting a large over-commit on memory, with poor performance as a result. In my former role in Microsoft Consulting Services I used to joke in design reviews that I could only say “we cannot foresee any problems”, and if problems came up we’d say “that was unforeseen”. I wouldn’t sign off Elias’ design: if I did, and over-committing memory meant service levels weren’t met, any lawyer would say that was a foreseeable problem, and produce that paper to show it is exactly the kind of set-up VMware tell people to avoid.

There is one last point: Scott says everyone loves over-commit (within these limitations, presumably), and we will too “once you have provided similar functionality in Hyper-V”. Before the beta of Server 2008-R2 was publicly available we showed dynamic memory. This allowed a VM to say it needed more memory (if added, it showed up using the hot-add ability of newer OSes). The host could ask for some memory back, in which case a balloon driver reduced the working set of the VM. There was no sharing of pages between VMs and no paging of VMs’ memory by the virtualization layer – the total in use by the VMs could not exceed physical memory. It was gone from the public beta, and I have heard just about every rumour on what will happen to it: “it will come out as a patch”, “it will come out in a service pack”, “it will be out in the next release of Windows”, “the feature’s gone for good”, “it will return much as you saw it”, “it will return with extra abilities”. As I often say in these situations, those who know the truth aren’t talking, and those who talk don’t know.

Oh yes, Merry Christmas and Goodwill to all.

This post originally appeared on my technet blog.


December 22, 2009

Powershell Parameters 2: common parameters.

Filed under: Powershell — jamesone111 @ 12:45 pm

In the previous post I talked about the template I have for many of my PowerShell functions. And I’ve talked before about adding support for ShouldProcess to functions which change the state of the system (that is what enables the -Confirm and -WhatIf switches, alongside the common -Verbose switch). Since then I’ve learnt that functions can identify the level of impact they have – this ties into the $ConfirmPreference variable to turn confirmation prompts on without needing to specify them explicitly. Adding ShouldProcess support turns my function template into this:
Function Stop-VM{
    [CmdletBinding(SupportsShouldProcess=$True, ConfirmImpact='High')]
    Param( [parameter(Mandatory = $true, ValueFromPipeline = $true)]$VM ,
           $Server = "." )
    Process{ if ($VM -is [String]) {$VM = GetVM -VM $vm -Server $server}
             if ($VM -is [Array])  {$VM | ForEach-Object {Stop-VM -VM $_ -Server $server}}
             if ($VM -is [System.Management.ManagementObject] `
                 -and $pscmdlet.ShouldProcess($vm.ElementName, "Power-Off VM without Saving")) {
                 # ... do the work of the function
             }
    }
}
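With ShouldProcess support wired up, the risk-mitigation switches come for free; a quick sketch of usage (the VM name here is illustrative):

```powershell
# Show what would happen, without actually powering anything off
Stop-VM -VM "London-DC" -WhatIf

# Override the prompt that ConfirmImpact='High' would otherwise raise
Stop-VM -VM "London-DC" -Confirm:$false
```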

There is one nuisance when dealing with more than one VM: we will get the following message for every VM


Are you sure you want to perform this action?

Performing operation “Power-Off VM without Saving” on Target “London-DC”.

[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is “Y”): n

even if we select “Yes to all” or “No to all”.
Each recursive call to Stop-VM has its own instance of $psCmdlet, and if ConfirmImpact is set to High, that will cause a script to stop and prompt. Jason Shirk, one of the active guys on our internal PowerShell alias, pointed out that, first, you can have a -Force switch to prevent the prompt appearing, and secondly, you don’t need to use each function’s OWN instance of $psCmdlet: why not pass one instance around? So my function template morphed into the following:

Function Stop-VM{
    [CmdletBinding(SupportsShouldProcess=$True, ConfirmImpact='High')]
    Param( [parameter(Mandatory = $true, ValueFromPipeline = $true)]$VM ,
           $Server = "." ,
           $psc ,
           [Switch]$Force )
    Process{ if ($psc -eq $null) {$psc = $pscmdlet}
             if (-not $PSBoundParameters.psc) {$PSBoundParameters.Add("psc",$psc)}
             if ($VM -is [String]) {$VM = GetVM -VM $vm -Server $server}
             if ($VM -is [Array])  {[Void]$PSBoundParameters.Remove("VM")
                                    $VM | ForEach-Object {Stop-VM -VM $_ @PSBoundParameters}}
             if ($VM -is [System.Management.ManagementObject] `
                 -and ($Force -or $psc.ShouldProcess($vm.ElementName, "Power-Off VM without Saving"))) {
                 # ... do the work of the function
             }
    }
}

So now the first function called sets $psc, which is passed to all other functions – in this case recursive calls to this function – which use it in place of their own instance of $psCmdlet. And -Force is there to trump everything.
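With that template, a whole pipeline of VMs produces a single prompt, and -Force skips prompting entirely; for example (using the library’s cmdlets from the examples in these posts):

```powershell
# One "Yes to All" / "No to All" answer now covers every VM in the pipeline,
# because each recursive call shares the first caller's $pscmdlet via -psc
Get-VM -Running | Stop-VM

# No prompt at all
Stop-VM -VM "London-DC" -Force
```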

This post originally appeared on my technet blog.

Parameters in PowerShell functions: vague is good.

Filed under: Powershell — jamesone111 @ 12:19 pm

There is a theme which I’ve found has come up in several places with PowerShell. Giving flexibility to the user can mean being vague about parameters – or at least not being excessively precise.
Consider these examples from my Hyper-V library for PowerShell

1. The user can specify one or more virtual machine(s), using one or more string(s) which contain the name(s):
   Save-VM London-DC or Save-VM *DC or Save-VM London*,Paris*
2. The user can get virtual machine objects with one command and pipe these into another command:
   Get-VM -Running | Stop-VM

3. The user can mix objects and strings:
   $MyVMs = Get-VM -Server Wallace,Gromit | where { (Get-VMSettings $_).note -match "LAB1"}
   Start-VM -Wait "London-DC", $MyVMs

The last one searches servers “Wallace” and “Gromit” for VMs, narrows the list to those used in Lab1, and starts London-DC on the local server, followed by the VMs in Lab1.

In an earlier post I showed a couple of things about how parameters changed for V2 of PowerShell: instead of writing Param($VM),
I can now write Param( [parameter(Mandatory = $true, ValueFromPipeline = $true)]$VM ), which ensures the parameter is present and saves me the work that was needed in V1 to pick up a piped object.
I’m now realizing there is a risk of being too prescriptive with the parameter: V2 allows 8 different validation attributes, in addition to the data type which was there in V1. In my Hyper-V library I ended up with a lot of functions designed like this one.

Function Stop-VM{
    Param( [parameter(Mandatory = $true, ValueFromPipeline = $true)]$VM,
           $Server = "." )
    Process{ if ($VM -is [String]) {$VM = GetVM -VM $vm -Server $server}
             if ($VM -is [Array])  {[Void]$PSBoundParameters.Remove("VM")
                                    $VM | ForEach-Object {Stop-VM -VM $_ @PSBoundParameters}}
             if ($VM -is [System.Management.ManagementObject]) {
                 # Do the work of the function
                 $vm.RequestStateChange(3)
             }
    }
}
The three if statements give me the ability to work with all the permutations.

  • First, if the function was passed a single string (via the pipe or as a parameter), it gets the matching VM(s), if any.
  • If passed an array, or a string which resolved to an array, it calls itself recursively with each member of that array. Mike Kolitz (who’s helped out on the Hyper-V library) showed me a trick to simplify the process of a function calling another function using the same parameters (or calling itself recursively). This uses the “splatting” operator: @PSBoundParameters puts the contents of that variable into the command (James Brundage wrote it up, here). Mike’s trick was to manipulate $PSBoundParameters in the same way for all recursive calls, so they didn’t have to worry about the details of parameters (previously it was necessary to call functions differently depending on the parameters passed); so two lines go into the template, and only the function name changes each time it is used.
  • Finally, if passed a single WMI object, or a string which resolved to a single WMI object, it does whatever work is required. Having started to look at PowerShell remoting, I’m starting to think I should check the .__CLASS property rather than the type of the object, but the principle is the same either way.
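Mike’s @PSBoundParameters trick isn’t specific to Hyper-V; a minimal, self-contained sketch (with made-up function names) shows the idea:

```powershell
Function Invoke-Inner {
    Param ($Name, $Server)
    "Inner saw Name=$Name, Server=$Server"
}

Function Invoke-Outer {
    Param ($Name, $Server)
    # Splatting re-passes exactly the parameters the caller supplied,
    # so Invoke-Outer needs no knowledge of which ones were used
    Invoke-Inner @PSBoundParameters
}

Invoke-Outer -Name "London-DC" -Server "."   # Inner saw Name=London-DC, Server=.
```

Because $PSBoundParameters only contains the parameters that were actually bound, the call works the same however many of the optional parameters the user chose to supply.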

It’s pretty clear that specifying a type for $VM would be a hindrance; I know some people would want to use the [ValidateNotNullOrEmpty] attribute for it, and I think that’s a mistake too: if I do
$MyVMs = Get-VM -Running ; Stop-VM -Wait $MyVMs
that shouldn’t error if no VMs are running and Stop-VM is handed an empty list. Similarly, I specified in the Hyper-V library that $Server must be a single string: actually it will work perfectly well with an array of strings, so I decided to go back and remove any validation from the parameter.

The logic I am now working to says: if a parameter is taken by one function with the sole intent of passing it to another, leave the validation to the next function. Philosophically, “proper” programmers tend to think that every function should validate its parameters; those who write quick scripts tend to be happier about being loose with validation and error trapping.

This post originally appeared on my technet blog.

December 21, 2009

Drilling into ‘reasons for not switching to Hyper-V’

Filed under: Powershell,Virtualization,Windows Server 2008-R2 — jamesone111 @ 11:30 am

InformationWeek published an article last week, “9 Reasons Why Enterprises Shouldn’t Switch To Hyper-V”. The author is Elias Khnaser; this is his website and this is the company he works for. A few people have taken him to task over it, including Aidan. I’ve covered all the points he made, most of which seem to have come from VMware’s bumper book of FUD, but I wanted to start with one point which I hadn’t seen before.

Live migration. Elias talked of “an infrastructure that would cause me to spend more time in front of my management console waiting for live migration to migrate 40 VMs from one host to another, ONE AT A TIME” and claimed it “would take an administrator double or triple the time it would an ESX admin just to move VMs from host to host”. Posting a comment to the original piece, he went off the deep end replying to Justin’s comments, saying “Live Migration you can migrate 40 VMs if nothing is happening? Listen, I really have no time to sit here trying to educate you as a reply like this on the live migration is just a mockery. Son, Hyper-v supports 1 live VM migration at a time.” Now, this does at least start with a fact: Hyper-V only allows one VM to be in flight on a given node at any moment, but you can issue one command and it moves all the Hyper-V VMs between nodes. Here’s the PowerShell command that does it.
Get-ClusterNode -Name grommit-r2 | Get-ClusterGroup |
  Where-Object { Get-ClusterResource -InputObject $_ | Where {$_.ResourceType -like "Virtual Machine*"}} |
    Move-ClusterVirtualMachineRole -Node wallace-r2
The video shows it in action with 2 VMs, but it could just as easily be 200. The only people who would “spend more time in front of [a] management console” are those who are not up to speed with Windows clustering. System Center will sequence moves for you as well.

But does it matter if the VMs are migrated in series or in parallel? If you have a mesh of network connections between cluster nodes you could be copying to 2 nodes over two networks with the parallel method, but if you don’t (and most clusters don’t) then n copies will go at 1/n the speed of a single copy. Surely if you have 40 VMs and they take a minute each to move, it takes 40 minutes either way… right? Well, no. Let’s use some rounded numbers, for illustration only: say 55 seconds of the minute is spent doing the initial copy of memory, 4 seconds doing the second-pass copy of memory pages which changed during those 55 seconds, and 1 second doing the third-pass copy and handshaking. Then Hyper-V moves onto the next VM, and the process repeats 40 times. What happens with 40 copies in parallel? Somewhere in the 37th minute the first-pass copies complete – and none of the VMs have moved to their new node yet. Now: if 4 seconds’ worth of pages changed in 55 seconds – that’s about 7% of all the pages – what percentage will have changed in 36 minutes? Some won’t change from hour to hour and others change from second to second; how many actually change in 55 seconds, or 36 minutes, or any other length of time depends on the work being done at that point and the memory size, and will be enormously variable. However, the extreme points are clear: (a) in the very best case no memory changes, and the parallel copy takes as long as the sequential – in all other cases it takes longer; (b) in the worst case the second pass has to copy everything, and when that happens the migration will never complete.
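The serial-versus-parallel arithmetic above is easy to sanity-check with the same illustrative numbers:

```powershell
$vmCount       = 40
$firstPassSecs = 55   # initial copy of a VM's memory
$laterPasses   = 5    # second-pass copy (4s) plus final pass and handshake (1s)

# Sequential: each migration finishes before the next starts
$sequentialMins = $vmCount * ($firstPassSecs + $laterPasses) / 60   # 40 minutes

# Parallel: 40 copies share the links, so every first pass runs at 1/40 speed;
# no VM has moved until all of these complete
$parallelFirstPassMins = $vmCount * $firstPassSecs / 60             # ~36.7 minutes
```

And the parallel figure covers only the first pass: the second-pass copies then have 36 minutes of changed pages to move, rather than 55 seconds’ worth.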

Breadth of OS support. In Microsoft-speak, “supported” means a support incident can go to the point of issuing a hot-fix if need be. Not supported doesn’t mean non-cooperation if you need help, but the support people can’t make the same guarantee of a resolution. By that definition we don’t “support” any other company’s software – they provide hot-fixes, not us – but we do have arrangements with some vendors so a customer can open a support case and have it handed on to Microsoft, or handed on by Microsoft, as a single incident. We have those arrangements with Novell for SUSE Linux and Red Hat for RHEL, and it’s reasonable to think we are negotiating arrangements for more platforms: those who know what is likely to be announced won’t identify which platforms, to avoid prejudicing the process. In VMware-speak, “supported” has a different meaning: in their terms NT4 is “supported”. NT4 works on Hyper-V, but without hot-fixes for NT4 it’s not “supported”. If NT4 is supported on VMware and not on Hyper-V, exactly how is a customer better off? Comparisons using different definitions of “support” are meaningless. “Such-and-such an OS works on ESX/vSphere but fails on Hyper-V” or “Vendor X works with VMware but not with Microsoft” allows the customer to say “so what” or “that’s a deal-breaker”.

Security. Was it Hyper-V that had the vulnerability which let VMs break out into the host partition? No, that was VMware. Elias commented that “You had some time to patch before the exploit hit all your servers”, which makes me worry about his understanding of network worms. He also brings up the discredited disk-footprint argument, which is based on the fallacy that every megabyte of code is equally prone to vulnerabilities; Jeff sank that one months ago, and pretty comprehensively – the patch record shows a little code from VMware has had more flaws than a lot of code of Microsoft’s.

Memory over-commit. VMware’s advice is: don’t do it. Deceiving a virtualized OS about the amount of memory at its disposal means it makes bad decisions about what to bring into memory, with the virtualization layer paging blindly – not knowing what needs to be in memory and what doesn’t. That means you must size your hardware for more disk operations, and still accept worse performance. Elias writes about using oversubscription “to power-on VMs when a host experiences hardware failure”. In other words, the VMs fail over to another host which is already at capacity, and oversubscription magically makes the extra capacity you need. We’d design things with a node’s worth of unused memory (and CPU, network, and disk IOps) in the other node[s] of the cluster. VMware will cite their ability to share memory pages, but this doesn’t scale well to very large memory systems (more pages to compare), and to work you must not have [1] large amounts of data in memory in the VMs (the data will be different in each), or [2] OSes which support entry-point randomization (Vista, Win7, Server 2008/2008-R2), or [3] heterogeneous operating systems. Back in March 2008 I showed how a Hyper-V solution was more cost-effective if you spent some of the extra cost of buying VMware on memory – in fact I showed the maths underneath it, and how under limited circumstances VMware could come out better. Advocates for VMware [Elias included] say buying VMware buys greater VM density: the same amount spent on RAM buys even greater density. The VMware case is always based on a fixed amount of memory in the server; as I said back then, either you want to run [a number of] VMs on the box, or the budget per box is [a number]. Who ever yelled “Screw the budget, screw the workload – keep the memory constant!”? The flaw in that argument is more pronounced now than it was when I first pointed it out, as the amount of RAM you get for the price of VMware has increased.

Hot-add memory. Hyper-V only does hot-add of disk, not memory. Some guest OSes won’t support it at all. Is it an operation which justifies the extra cost of VMware?

Priority restart. Elias describes a situation where all the domain controllers / DNS servers are on one host. In my days in Microsoft Consulting Services, reviewing designs customers had in front of them, I would have condemned a design which did that, and asked some tough questions of whoever proposed it. It takes scripting (or very conservative start-up timeouts) in Hyper-V to manage this. I don’t know enough of the feature in VMware to know how it sequences things – not just on the OS running, but on all the services being ready to respond.

Fault tolerance. VMware can offer parallel running, with serious restrictions. Hyper-V needs third-party products (Marathon) to match that. What this saves is the downtime to restart the VM after an unforeseen hardware failure. It’s no help with software failures: if the app crashes, or the OS in the VM crashes, then both instances crash identically. Clustering at the application level is the only way to guarantee high levels of service: how else do you cope with patching the OS in the VM, or the application itself?

Maturity. If you have a new competitor show up in your market, you tell people how long you have been around. But what is the advantage in VMware’s case? Shouldn’t age give rise to wisdom – the kind of wisdom which stops you shipping updates which cause High Availability VMs to unexpectedly reboot, or shipping beta time-bomb code in a release product? It’s an interesting debating point whether VMware had that wisdom and lost it – if so, they have passed through maturity and reached senility.

Third-party vendor support. Here’s a photo, from a meet-the-suppliers event one of our customers put on, where they had us next to VMware. Notice we’ve got System Center Virtual Machine Manager on our stand, running in a VM, managing two other Hyper-V hosts which happen to be clustered; the lack of traffic at the VMware stand allows us to see they weren’t showing any software. A full demo of our latest and greatest needs 3 laptops – and theirs? Well, the choice of hardware is a bit limiting. There is a huge range of management products to augment Windows – indeed the whole reason for bringing System Center in is that it manages hardware, virtualization (including VMware) and virtualized workloads. When Elias talks of third-party vendors I think he means people like him – and that would mean he’s saying you should buy VMware because that’s what he sells.

This post originally appeared on my technet blog.

December 15, 2009

Server 2008 R2 feature map.

Filed under: Photography,Windows Server 2008-R2 — jamesone111 @ 7:33 pm

One of the popular giveaways at our events this year has been the feature poster for Server 2008-R2, which is now available for download. I think the prints were A2 size, although at 300 DPI it is closer to A1 dimensions. The paper copies have all gone, although I’m told more are being printed if you want a paper copy.

One of my fellow evangelists thought it was a great application for Silverlight Deep Zoom (the technology formerly known as Seadragon), and I have to say I agree. What better way is there to look at 93 megapixels’ worth of image?
The buttons in the lower-right corner include maximize, so you can use your full screen to view it. I haven’t got a wheel mouse plugged in at the moment, but that is the best way to zoom in and out.

This post originally appeared on my technet blog.

Add-ons and plug-ins – do you have a favourite ?

Filed under: Internet Explorer,Powershell,Social Media — jamesone111 @ 2:13 pm

I was chatting with a couple of colleagues yesterday about Internet Explorer. Someone grumbled, “When people think of browser add-ons they automatically think of Firefox, but there are some really good ones for IE”. I have talked in the past about IE7 Pro (which, despite the name, works with IE8). I use it for four things. First, it has a Flash blocker: this gets round my issue of not being able to use a page when there is some animation going on, without needing to turn the Flash add-on off [yes, Flash is an IE add-on; so is Silverlight]. Secondly, it will block content being pulled in from selected external sites – this helps defeat Flash, but also stops those firms which want to track my visits to sites. Thirdly, for things which get through the first two, it has the ability to cut out some of the really annoying bits of a page – those which would circumvent the Flash blocker. Although this is mostly targeted at adverts, I used it to make the Independent newspaper’s web site usable. The last trick it has contributes to my habit of having dozens of tabs open: click/drag opens a link in a new tab. It has other mouse gestures, but that’s the only one I use. [Some people like the download manager, but I’m not one of them.]

I added an accelerator for Twitter, but after that I’m not really using any add-ons. In yesterday’s conversation my other colleague said, roughly, “I have no plug-ins at all and I don’t think many people use them; a lot are buggy and they all slow the browser down to different degrees”. So… if you have added anything to IE7 or 8 that you really wouldn’t be without (or, for that matter, if you use Firefox because of an add-on), please post a comment.

I write my blog posts using Windows Live Writer, and I’ve talked before about one plug-in which adds tweetmeme support to each post. This morning I’ve been trying to help someone get the one for Bit.ly working – I had to add &history=1 onto the end of the password to get it to log my links to a history on bit.ly, which is much more useful as I can see which links people are following. It won’t work for him. I guess the same criticisms can be made of Live Writer plug-ins as of IE ones, but the same question interests me: if you blog with Live Writer and have a favourite plug-in, please post a comment.

Finally, I was talking to James Brundage – the guy behind the PowerShellPack, which is in the Windows 7 Resource Kit and also available for download. Part of that is an add-in for the PowerShell Integrated Scripting Environment (ISE). It adds a menu and a set of short-cut keys – among other things, to copy text with syntax colouring. If you ever need the code to “un-bitly-fy” a URL, here it is, in colour. Of course it works for more than bit.ly – it gives back the canonical URL for anything which is redirected from another URL.

Function Get-TrueURL {
    Param ([parameter(ValueFromPipeline= $true, Mandatory=$true)]$URL )
    $req  = [System.Net.WebRequest]::Create($url)
    $resp = $req.GetResponse()
    # If the redirect wasn't followed, return its target; otherwise the URI we ended up at
    If ($resp.StatusCode -eq 301) {$resp.GetResponseHeader("Location")}
    else                          {$resp.ResponseUri}
}
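Usage is then straightforward (the short URL below is a placeholder, and the function needs network access to resolve anything):

```powershell
# Pass a URL as a parameter...
Get-TrueURL "http://bit.ly/SomeShortLink"

# ...or pipe one in, thanks to ValueFromPipeline
"http://bit.ly/SomeShortLink" | Get-TrueURL
```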

This post originally appeared on my technet blog.

December 10, 2009

Hyper-V resource round-up.

Filed under: Virtualization — jamesone111 @ 6:43 pm

I spent this morning with a group of our partners, and one of the comments which came back was “we know there is a lot of material out there for Hyper-V… can you point us to some of the highlights?”. So it was a nice surprise to find a mail from the product team telling me that they’ve now got a “best of the Hyper-V resources” page – doubly pleasant was that they had linked to several of my past posts on the subject. Among the other links is the answer to a question I got asked a couple of days ago – “how do I allow Hyper-V to get access to shared files on other computers?” (the point being that a logged-on user could see these files but VMs could not) – that’s there as well, in the two posts about file shares.

This post originally appeared on my technet blog.

Offline Virtual Machine Servicing Tool update

Filed under: Virtualization — jamesone111 @ 4:59 pm

I had a mail about an update to the Offline Virtual Machine Servicing Tool, which I thought was worth passing on in case you haven’t come across it before:

The Offline Virtual Machine Servicing Tool 2.1 manages the workflow of updating large numbers of offline virtual machines according to their individual needs. To do this, the tool works with Microsoft® System Center Virtual Machine Manager (VMM) 2008 or 2008 R2 and with either Windows Server Update Services (3.0 SP1 or SP2) or System Center Configuration Manager (2007 SP1 or SP2, or 2007-R2).

It uses “servicing jobs” to manage the update operations based on lists of existing virtual machines stored in VMM. For each virtual machine, the servicing job:

  • “Wakes” the virtual machine (deploys it to a host and starts it).
  • Triggers the appropriate software update cycle (Configuration Manager or WSUS).
  • Shuts down the updated virtual machine and returns it to the library.

Using a maintenance host with carefully controlled connectivity, you can reduce the risk to a VM while it is in the process of being patched.

We’ve just released a new version of the tool to fully support the latest releases of WSUS, SCCM, SCVMM, Hyper-V and Windows, which you can download – complete with additional information – here.

If all your VMs are running all the time, this will not be of much interest to you – but if you have a complex environment with a library of VMs which see occasional use, then this is something you really should take a look at.

This post originally appeared on my technet blog.

December 8, 2009

Search stories … or “how do people manage on XP”

Filed under: Windows 7,Windows Vista,Windows XP — jamesone111 @ 9:56 pm

I know from experience that the people I meet in this job, and those who read this blog, are more likely to be early adopters than the population at large – so you, as a reader, may well be on Windows 7 by now, and had a better chance than most of running Vista. But we know there is a lot of Windows XP still out there. So here is something I’m genuinely curious about: of those still on XP, how many have added Microsoft’s (or a third party’s) search solution?

This being Christmas time, people are thinking about sending cards, and in recent days two people have – unknowingly – each asked me for the other’s address. Now, I have some addresses and phone numbers in my contacts but, as it turns out, neither of these two. Both addresses were buried in attachments in my e-mail, and in both cases I had a fragment of the address. Tap that fragment into the search bar in Outlook (which uses Windows Search) and in less time than it took to type it I have the answer. I’ve had a chapter of problems with my car of late. We lease cars through different companies and we have a firm who coordinate everything – normally the extra layer would get in the way, but throw this lot a problem and they make it a personal mission to get to a solution. So have I put their number in my contacts? Er, no. Lease company? Yes. Garage? Sure. People who actually sort things out? No. And the reason: it takes about 2 seconds to type their name into search and get an email with the number in the signature. (If I can persuade them to make that a clickable link, things would be perfect.)

If this saves me an hour a week [and that’s a low estimate] it would mean Microsoft gets a week of extra work out of me per year (actually it’s about 6 days). If your organization is still on search-less XP, think of that next time you can’t find something you know is on your PC or in your mailbox. And when you hear an excuse for staying on old software, try asking “What percentage of the salary bill are we prepared to forego for this reason?”. When you take public holidays, vacation allowance, sickness and training off the total, there are a little less than 200 days to actually work in a year. So it’s easy: think of features in “days saved per year”, halve the number, and that’s the percentage of the salary bill.
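That rule of thumb can be checked with a couple of lines (the hour-a-week figure is the low estimate above; the rest are round numbers):

```powershell
$hoursSavedPerWeek  = 1
$workingWeeks       = 48     # rough figure, after holidays and so on
$workingDaysPerYear = 200    # "a little less than 200 days" to work in a year

$daysSavedPerYear = $hoursSavedPerWeek * $workingWeeks / 8          # 6 days, at 8 hours a day
$percentOfSalary  = $daysSavedPerYear / $workingDaysPerYear * 100   # 3% - i.e. days saved, halved
```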

This post originally appeared on my technet blog.

Get-Scripting podcast

Filed under: Powershell — jamesone111 @ 8:46 pm

I was pleased to be asked to record a podcast with Jonathan and Alan, to become episode 14 of their Get-Scripting series. To be honest, being asked to sit and chat about PowerShell with these guys – especially when they invite someone like Thomas Lee to join us – well, it’s more fun than work, and then it dawns on me that technically it is work. We tried to keep the conversation to PowerShell V2, and for a little fun Alan set us the challenge of naming V2 cmdlets – whoever named the most was the winner, and it was… well, you’ll just have to listen to the podcast to find out.

This post originally appeared on my technet blog.

December 2, 2009

Security updates.

Filed under: Security and Malware — jamesone111 @ 11:21 am

There are some rumours circulating about problems with the latest round of security updates. The Security Response Center has posted about it; so has Roger Halbheer, our Chief Security Advisor for Europe. Now, you can say “They would say that; I’m going to take my chances with whatever security loopholes were being closed”, or you can say “The reports of problems are of dubious provenance; Microsoft probably wouldn’t post outright lies”, etc. Given how quickly our patches get reverse-engineered into exploits, I’d see a lot more risk in not having them than in something which screws up coming out of one of the product groups. As the MSRC post says, “it appears they’re saying that our security updates are making permission changes in the registry” and “We’ve conducted a comprehensive review [which] has shown that none of these updates make any changes to the permissions in the registry”.

And if you do think an update has broken your system, don’t suffer in silence: I’m told Microsoft support will always take the call from someone in that position, even if they don’t have a support contract.


This post originally appeared on my technet blog.

Blog at WordPress.com.