James O'Neill's Blog

March 18, 2010

Virtualization announcements today.

Filed under: Virtualization — jamesone111 @ 5:23 pm

In another window I am listening to the Desktop Virtualization Hour which I blogged about yesterday. A couple of hours ahead of the broadcast we posted the press release on PressPass, which contained the following details of what we are announcing today.

• New VDI promotions available for qualified customers to choose from today. Microsoft and Citrix Systems are offering the “Rescue for VMware VDI” promotion, which allows VMware View customers to trade in up to 500 licenses at no additional cost, and the “VDI Kick Start” promotion, which offers new customers a more than 50 percent discount off the estimated retail price. Eligibility and other details on the two promotions can be found at http://www.citrixandmicrosoft.com.

• Improved licensing model for virtual Windows desktop. Beginning July 1, 2010, Windows Client Software Assurance customers will no longer have to buy a separate license to access their Windows operating system in a VDI environment, as virtual desktop access rights now will be a Software Assurance benefit. [Note: the new name, VDA, is what we used to call VECD.]

• New roaming use rights improve flexibility. Beginning July 1, 2010, Windows Client Software Assurance and new Virtual Desktop Access license customers will have the right to access their virtual Windows desktop and their Microsoft Office applications hosted on VDI technology on secondary, non-corporate network devices, such as home PCs and kiosks.

• Windows XP Mode no longer requires hardware virtualization technology. This change simplifies the experience by making virtualization more accessible to many more PCs for small and midsize businesses wanting to migrate to Windows 7 Professional or higher editions, while still running Windows XP-based productivity applications.

• Two new features coming in Windows Server 2008 R2 Service Pack 1. Microsoft Dynamic Memory will allow customers to adjust the memory of a guest virtual machine on demand to maximize server hardware use. Microsoft RemoteFX will enable users of virtual desktops and applications to receive a rich 3-D, multimedia experience while accessing information remotely. [Note: the new name RemoteFX is the technology we acquired with the purchase of Calista.]

• New technology agreement with Citrix Systems. The companies will work together to enable the high-definition HDX technology in Citrix XenDesktop to enhance and extend the capabilities of the Microsoft RemoteFX platform.

 

Good stuff all round, but from a technical viewpoint it’s the new bits in SP1 which will get the attention. I’ll post a little more on what Dynamic Memory is and is not in the next day or two.

This post originally appeared on my technet blog.

March 17, 2010

Re-post: Desktop Virtualization Hour

Filed under: Events,Virtualization — jamesone111 @ 3:44 pm

About a month ago I mentioned that we have a “Desktop Virtualization Hour” planned for 4PM (GMT) tomorrow, March 18th. (That’s 9AM Seattle time, 5PM CET … you can work out the others I’m sure). More information and a downloadable meeting request are Here.

I said then that I thought it might be “more than the average web cast”, and over the last couple of days I’ve had some information about announcements which are planned for the session. Obviously I am not going to say what they are until tomorrow, but if you want the news as it breaks – click the link above.

This post originally appeared on my technet blog.

March 15, 2010

IE 8 is safest. Fact.

Filed under: Internet Explorer,Security and Malware,Virtualization — jamesone111 @ 1:11 pm

Every now and then a news story comes up which reminds us that with people of bad intentions about, even sensible people can fall into traps on-line. There was one such story last week where friends of the victim said she was “the sensible one” – if she wasn’t unusually gullible it could happen to anyone. I wrote about Safer Internet Day recently, and it’s worth making another call to readers who are tech savvy to explain to others who are less so just how careful we need to be trusting people on-line. I got a well-constructed phishing mail last week claiming to have come from Amazon, which I would have fallen for if it had been sent to my home rather than my work account – it’s as well to be reminded sometimes that we’re not as smart as we like to think.

I’ve also been reading about a libel case. I avoid making legal commentary and won’t risk repeating a libel: the contested statement said that something had been advocated for which there was no evidence. I read a commentary which said something to the effect that in scientific disciplines, if your advocacy is not in dispute and someone says you have no evidence for it, you produce the evidence. Without evidence you have a belief, not a scientific fact. This idea came up again later in the week when I was talking to someone about VMware: you might have noticed there is a lack of virtualization benchmarks out in the world, and the reason is in VMware’s licence agreement (under 3.3):

You may use the Software to conduct internal performance testing and benchmarking studies, the results of which you (and not unauthorized third parties) may publish or publicly disseminate; provided that VMware has reviewed and approved of the methodology, assumptions and other parameters of the study

Testing, when done scientifically, involves publishing the methodology, assumptions and other parameters along with the test outcomes and the conclusions drawn. That way others can review the work to see if it is rigorous and reproducible. If someone else’s conclusions go against what you believe to be the case, you look to see if they are justified from the outcomes: then you move to the assumptions and parameters of the test and its methodology. You might even repeat the test to see if the outcomes are reproducible. If a test shows your product in a bad light then you might bring something else to the debate: “Sure, the competing product is slightly better at that measure, but ours is better at this measure”. What is one to think of a company which uses legal terms to stop people conducting their own tests and putting the results in the public domain for others to review?

After that conversation I saw a link to an article, IE 8 Leads in Malware Protection. NSS Labs have come out with their third test of web browser protection against socially engineered malware*. The first one appeared in March of last year, and it looks set to be a regular twice-yearly thing. The first one pointed out that there was a big improvement between IE7 and IE8 (IE6 has no protection at all – if you are still working for one of the organizations that has it, I’d question what you’re doing there).
IE8 does much better than its rivals: the top 4 have all improved since the last run of the tests. IE was up from 81% to 85%, Firefox from 27% to 29%, Safari from 21% to 29%, and Chrome from 7% to 17%.

Being pessimistically inclined, I look at the numbers the other way round: in the previous test we were letting 19 out of every 100 through, now it’s 15 – down by 21%; in the first test we were letting 31 of every 100 through, so 52% of what got through a year ago gets blocked today. Letting that many through means we can’t sit back and say the battle is won, but IE8 is the only browser which is winning against the criminals. Google, for example, have improved Chrome since last time, so it only lets through 83 out of every 100 malware URLs – that’s blocking 11% of the 93 it let through before from each 100. With every other browser the crooks are winning, which is nothing to gloat over – I hope to see a day when we’re all scoring well into the 90s.
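If you want to check that arithmetic, here’s a quick back-of-the-envelope helper in PowerShell – my own convenience function, not anything from the NSS Labs report; the inputs are the published block rates:

# Given two block rates (percentages), work out how much of the malware
# that previously got through is now being caught.
function Get-MissRateChange ([double]$OldBlock, [double]$NewBlock) {
    $oldMiss = 100 - $OldBlock    # malware URLs let through per 100, before
    $newMiss = 100 - $NewBlock    # malware URLs let through per 100, now
    $cut     = [math]::Round(100 * ($oldMiss - $newMiss) / $oldMiss, 1)
    "Missed {0} per 100 before, {1} now - miss rate down {2}%" -f $oldMiss, $newMiss, $cut
}

Get-MissRateChange 81 85    # IE8, previous test to this one: 19 -> 15, down ~21%
Get-MissRateChange 69 85    # IE8, first test to this one:    31 -> 15, down ~52%
Get-MissRateChange  7 17    # Chrome:                         93 -> 83, down ~11%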

I haven’t mentioned Opera – which has been consistently last, and by some margin, slipping from 5% in the first test to 1% in the second to less than 1% in the most recent. In a spirit of full scientific disclosure I’ll say I think the famous description of Real Networks fits Opera. Unable to succeed against Safari or Chrome, and blown into the weeds by Firefox, Opera said its emaciated market share was because IE was supplied by default with Windows. Instead of producing a browser people might want, Opera followed the path trodden by Real Networks – complaining to the European Commissioner for the protection of lame ducks competition. The result was the browser election screen.

I’m not a fan of the browser election screen – not least because it is easily mistaken for malware. To see the fault, let me ask you, as a reader of an IT blog, which of the following would you choose?

  1. The powerful and easy-to-use Web browser. Try the only browser with Browser-A Turbo technology, and speed up your Internet connection.
  2. Browser-B. A fast new browser. Made for everyone.
  3. Browser-C is the world’s most widely used browser, designed by Company-C with you in mind.
  4. Browser-D from Company-D, the world’s most innovative browser.
  5. Your online security is Browser E’s top priority. Browser-E is free, and made to help you get the most out of the web.

You might say (for example) “I want Firefox”, but which is Firefox in that list? You are probably more IT-savvy than the people the election screen is aimed at, and if you can’t choose from that information, how are they supposed to? You see, if you have done your testing and know a particular browser will meet your needs best, you’d go to it by name – you don’t need the screen. People who don’t know the pros and cons of the options before seeing the screen might just as well pick at random – which favours whoever has the least market share – which would be Opera.

The IE 8 Leads in Malware Protection article linked to a post of Opera’s complaining that the results of the first test were fixed: “Microsoft sponsored the report, so it must be fixed!” If we’d got NSS Labs to fix the results a year ago, would we have stipulated that Opera should be so far behind everyone else? Did we have a strategy to show Opera going from “dire failure” to “not even trying”? Or that IE8 should start at a satisfactory score and improve over several surveys with the others static? But to return to my original point: the only evidence which I’m aware of shows every other browser lets at least 4 times as much malware through as IE. The only response to anyone who disputes it is: let’s see your evidence to counter what NSS Labs found. Google have spent a fortune advertising Chrome: if Chrome really did let fewer than 5 out of 6 malware sites through, they’d get someone else to do a [reviewable] study which showed that.

And since we’re back at the question of evidence: if you are asked for advice on the election screen and you want to advocate the one which will help people to stay safe from phishing attacks, I don’t think you have any evidence to recommend anything other than IE. But remember it’s not a problem which can be solved by technology alone. Always question the motives of something which wants to change the configuration of your computer.


This post originally appeared on my technet blog.

March 10, 2010

UK TechDays free events in London – including After Hours.


You may have seen that registration for UK TechDays events from 12th to 16th April is already open – but you probably won’t have seen this newly announced session, even if you are following @uktechdays on twitter

After Hours @ UK Tech Days 2010 – Wednesday 14th April, 7pm – 9pm. Vue Cinema, Fulham Broadway.

Reviving the critically acclaimed series of madcap hobbyist technology demonstrations – After Hours reappears at Tech Days 2010. After Hours is all about the fun stuff people are building at home with Microsoft technology, ranging from the useful ‘must haves’ no modern home should be without, to the bleeding edge of science fiction made real! Featuring in this fun-filled two-hour installment of entertaining projects are: home entertainment systems, XNA augmented reality, natural user interfaces, robotics and virtual adventures in the real world with a home-brew holodeck!

Session 1: Home entertainment.

In this session we demonstrate the integration of e-home technologies to produce the ultimate in media entertainment systems and cyber home services.  We show you how to inspire your children to follow the ‘way of the coder’ by tapping into their Xbox 360 gaming time.

Session 2: Augmented reality.

2010 promises to be the year of the Natural User Interface. In this session we demonstrate and discuss the innovations under development at Microsoft, and take an adventure in the ultimate of geek fantasies – the XNA Holodeck.

Like all other TechDays sessions this one is FREE to attend – if you hadn’t heard: UK Tech Days 2010 is a week-long series of events run by Microsoft and technical communities to celebrate and inspire developers, IT professionals and IT managers to get more from Microsoft technology. Our day events in London will cover the latest technology releases including Microsoft Visual Studio 2010, Microsoft Office 2010, virtualisation, Silverlight, Microsoft Windows 7 and Microsoft SQL Server 2008 R2, plus events focusing on deployment and an IT Manager day. Oh, and did I say they were FREE?

IT Professional Week – Shepherds Bush

Monday, 12 April 2010   – Virtualization Summit – From the Desktop to the Datacentre

Designed to provide you with an understanding of the key products & technologies enabling seamless physical and virtual management, interoperable tools, and cost-savings & value.

Tuesday, 13 April 2010  – Office 2010 – Experience the Next Wave in Business Productivity

The event will cover how the improvements to Office, SharePoint, Exchange, Project and Visio will provide a practical platform that will allow IT professionals to not only solve problems and deliver business value, but also demonstrate this value to IT’s stakeholders. 

Wednesday, 14 April 2010 – Windows 7 and Windows Server 2008 R2 – Deployment made easy

This event will provide you with an understanding of the deployment tools, including the new Microsoft Deployment Toolkit 2010, Windows Deployment Services and the Application Compatibility Toolkit. We will also take you through the considerations for deploying Windows Server 2008 R2 and migrating your server roles.

Thursday, 15 April 2010 – SQL Server 2008 R2 – The Information Platform
Highlighting the new capabilities of the platform, as well as diving into specific topics, such as consolidating SQL Server databases, and tips and techniques for Performance Monitoring and Tuning as well as looking at our newly released Cloud platform SQL Azure.

Friday, 16 April 2010 (IT Managers) – Looking ahead, keeping the boss happy and raising the profile of IT
IT Managers have more and more responsibilities to drive and support the direction of the business. We’ll explore the various trends and technologies that can bring IT to the top table, from score-carding to data governance and cloud computing.

Developer Week – Fulham Broadway

Monday, 12 April 2010 (For Heads of Development and Software Architects) – Microsoft Visual Studio 2010 Launch – A Path to Big Ideas

This launch event is aimed at development managers, heads of development and software architects who want to hear how Visual Studio 2010 can help build better applications whilst taking advantage of great integration with other key technologies.
NB – Day 2 will cover the technical in-depth sessions aimed at developers

Tuesday, 13 April 2010 – Getting started with Microsoft .NET Framework 4 and Microsoft Visual Studio 2010 WAITLIST ONLY
Microsoft and industry experts will share their perspectives on the top new and useful features with core programming languages and in the framework and tooling, such as — ASP.NET MVC, Parallel Programming, Entity Framework 4, and the offerings around rich client and web development experiences.

Wednesday, 14 April 2010 – The Essential MIX
Join us for the Essential MIX as we continue exploring the art and science of creating great user experiences. Learn about the next generation ASP.NET & Silverlight platforms that make it a rich and reach world.

Thursday, 15 April 2010 – Best of Breed Client Applications on Microsoft Windows 7
Windows 7 adoption is happening at a startling pace. In this demo-driven day, we’ll look at the developer landscape around Windows 7 to get you up to speed on the operating system that your applications will run on through the new decade.

Friday, 16 April 2010 – Registration opening soon! Windows Phone Day
Join us for a practical day of detailed Windows Phone development sessions covering the new Windows Phone specification, application standards and services.

There will also be some “fringe” events – these won’t all be in London and I’ll post about them separately (James in the Midlands, I’ve heard you :-)  )

 

This post originally appeared on my technet blog.

March 9, 2010

A FAT (32) lot of good that did me …

Filed under: General musings,Virtualization — jamesone111 @ 11:35 am

First rule of blogging. Don’t blog when angry.

I’ve been through a time-consuming process which could be called educational – in the sense of “Well! That taught me a lesson”. My drug regime has been mentioned before in my posts, and this is one of those times when the drugs don’t seem to be working – so let’s just say I was a shade cranky before I started, and now…

 

Up on YouTube I have a video showing Hyper-V Server R2 booting from a USB flash drive (which I described in this post – please note the recommendation to check supportability before going down this path yourself).

And I have a second video showing how I made my phone into a bootable USB device from which I could install Windows.

Why not, I thought, boot Hyper-V Server R2 from a phone – in fact why stop at phones? I’ve had a good laugh at Will it Blend?, so I was thinking of doing a “Will it boot?” series. Can I boot HVS from my camera? etc.

 

Let’s stop for a second and think. What file systems do cameras, phones and MP3 players support? NTFS – er, no. They use FAT in one of its forms; new memory cards show up formatted as FAT32.
And what is the limitation of FAT32? A maximum file size of 4GB: not a problem for installing Windows, because WIM files are sized at less than 4GB to fit on DVD disks. A bit of a challenge for VHD files, as 4GB is a shade small by today’s standards. In fact when I ran the setup for Hyper-V Server against my sub-4GB VHD it wouldn’t install. Undeterred – I have a customized Hyper-V Server R2 VHD which I use as a testing VM on a Server 2008 box; I’d pared this down before, so it uses comfortably less than 3.5GB on a 6GB VHD. I attached that VHD as a second drive on another VM which has the Windows Automated Installation Kit installed, created a 3.5GB VHD and added that as a third drive, and fired up the VM. I used ImageX to make a WIM image of this disk, and then it was a question of partitioning my new VHD, activating the partition, formatting it, applying the image and making sure the VHD was bootable, and testing it in its own VM on Server 2008. It worked like a charm. Next I copied it to a “4GB” SD card – the card is 4,000,000,000 bytes, which is only about 3.7 true gigabytes (taking 1GB as 2^30 bytes). I switched my test VM on Server 2008 to use the VHD on the SD card and all was well. I went through the steps to make the card bootable. Abject failure. I tried lots of things, without success – to retain one’s optimism and avoid anger, these are classified as things eliminated rather than failures.
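For anyone who wants to repeat the part that worked – building the sub-4GB bootable VHD – the outline was roughly as follows. This is a sketch rather than a supported recipe: the file names, drive letters and label are placeholders from my setup, so adjust to taste. The diskpart commands go in a script file run with diskpart /s; imagex comes from the Windows Automated Installation Kit, and bcdboot ships with Windows 7 / Server 2008 R2.

# prepare-vhd.txt - run with: diskpart /s prepare-vhd.txt
select vdisk file=D:\VHDs\HVS-R2-Small.vhd
attach vdisk
create partition primary
active
format fs=ntfs label="HVS" quick
assign letter=V
exit

# Then capture the pared-down installation (mounted here as E:) and lay it onto the new disk:
imagex /capture E: D:\HVS-R2.wim "Pared-down Hyper-V Server R2"
imagex /apply D:\HVS-R2.wim 1 V:\
bcdboot V:\Windows /s V: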

Slowly, a picture began to emerge. I tried testing the VM from the SD card on Server 2008 R2: first I attached the VHD to a VM…

Click for full size version

A file system limitation? Hmm. OK, let’s see if we can attach VHD files on the SD card using Windows 7’s Computer Management, or Server Manager on Server 2008 R2: go to Storage, then to Disk Management, right-click, choose “Attach VHD”, browse to the disk and…


I know that R2 removed the ability to use VHDs which had been compressed, and I think I probably did know that R2 also introduced a requirement to keep the VHD on NTFS.

There’s no reason why Windows can’t format an SD card as NTFS, and I can probably use my camera as a card reader for an NTFS formatted card; but the camera can’t save pictures to it. I’m sure I could partition the 16GB MicroSD card which I’m using in the phone so that there was a roughly 4GB active partition which could boot and 12GB left for camera / phone / whatever but I want to be able to reclaim the space at a moment’s notice if I need to put pictures on it – and such a scheme rules that out.

 

Angry at the time I’ve wasted? No, no, I’m calm, composed and working on other ideas for what I can do booting from off-the-wall devices.

Creating an image of me, in a pram, throwing toys from it is left as an exercise for the reader.

This post originally appeared on my technet blog.

March 7, 2010

How to use old drivers with a new OS – more on XP mode

Filed under: Virtualization,Windows 7 — jamesone111 @ 5:45 pm

In a post a while back about Windows Image Acquisition (WIA) I wrote “I’ve got a bad track record choosing scanners” and described the most recent one I’ve bought as a “piece of junk”. Because my experience has been bad I don’t scan much, and because I don’t scan much I won’t spend the money to get a better experience. The scanner I have at home is an HP one, and after HP failed to produce Vista drivers for it I said I’d never spend my own money on HP kit again. Eventually they DID release Vista drivers (including 64-bit) and these support Windows 7. The trouble is that although they support WIA – rather than requiring HP’s rather crummy XP software – they are what HP calls “Basic Feature” drivers. The video below shows what this means, and how I was able to get access to the other features using that crummy software in XP Mode.

[For some reason the embedded video doesn’t play in all browsers – here is the link to the video on YouTube]

This makes quite a good follow-up to a video I did for Edge when XP Mode was still in beta, which showed how some 32-bit-only camera software (which would work with Vista or Windows 7 – but not in the 64-bit version I’m running) could be used in (32-bit) XP Mode.


This post originally appeared on my technet blog.

February 23, 2010

Desktop virtualization update.

Filed under: Virtualization,Windows 7 — jamesone111 @ 4:54 pm

On the MDOP blog there is an announcement  of new releases of both APP-V (which runs applications in a Virtualized “bubble” so they don’t clash with each other) and MED-V (which runs a centrally managed virtualized OS)

The major points: App-V 4.6 is now compatible with 64-bit Windows client and server platforms, enabling IT to take advantage of x64 for client hardware refresh AND to deploy App-V to Windows Server 2008 R2 using Remote Desktop, with the scale advantages that come from 64-bit. The Springboard site has a Q&A on App-V.

MED-V adds support for Windows 7 (32bit and 64bit) – this is what large organizations should be using to deliver similar functionality to XP-Mode but with central management.

Bonus link:

On the Windows Team blog, Gavriella explains why this is important to improving the Total Economic Impact – and Forrester have already published some very positive numbers on TEI for Windows 7.

This post originally appeared on my technet blog.

February 17, 2010

Free ebook: Understanding Microsoft Virtualization R2 Solutions

Filed under: Virtualization,Windows Server 2008-R2 — jamesone111 @ 9:28 am

Over on the MSPress blog they have an announcement

Mitch Tulloch has updated his free ebook of last year. You can now download Understanding Microsoft Virtualization R2 Solutions in XPS format here and in PDF format here.

I’ve worked with Mitch on a couple of books, including the first release of this one, and seen a couple of others – they’ve all been good (poor books from MS Press are very few and far between). If this was a paper book which you had to pay for I’d suggest it is well worth looking at – but it’s a free download (print it or view it on screen: your choice) so, seriously, if you expect to be working with Hyper-V in the foreseeable future you’d have to be daft not to download it.

This post originally appeared on my technet blog.

February 16, 2010

Desktop Virtualization Hour

I had a mail earlier telling me about a Desktop Virtualization Hour, planned for 4PM (GMT) on March 18th. (That’s 9AM Seattle time, 5PM CET … you can work out the others I’m sure). More information and a downloadable meeting request are here.

Some effort seems to be going into this one, which makes me think it is more than the average web cast.

This post originally appeared on my technet blog.

December 24, 2009

Fighting talk from VMware.

Filed under: Virtualization — jamesone111 @ 4:29 pm

There’s a running joke on my team – if I want to drive my blog statistics up, all I need to do is talk tough about VMware. A few days ago I posted a response to some VMware FUD, and it’s had 3 times the readers of a typical post and more than the usual number of re-tweets, inbound links and comments, including one from Scott Drummonds. He said:



I work for VMware and am one of the people responsible for our performance white papers.


I know who Scott is, and that’s not all he’s responsible for: I got a huge spike earlier in the year when I went after him for posting a dishonest video on YouTube. He goes on:



You incorrectly state that VMware recommends against memory over-commit.  It is foolish for you to make this statement, supported by unknown text in our performance white paper, If you think any of the language in this document supports your position, please quote the specific text.  I urge you to put that comment in a blog entry of its own.


Fighting talk. I gave the link to http://www.vmware.com/pdf/vi_performance_tuning.pdf , where both the following quotes are “unknown” to Scott – they are on page 6.



Avoid frequent memory reclamation. Make sure the host has more physical memory than the total amount of memory that will be used by ESX plus the sum of the working set sizes that will be used by all the virtual machines running at any one time.



Due to page sharing and other techniques, ESX Server can be overcommitted on memory and still not swap. However, if the over-commitment is large and ESX is swapping, performance in the virtual machines is significantly reduced.


Scott says that he is “sure that your [i.e. my] interpretation of the text will receive comments from many of your customers.” I don’t think I’m doing any special interpretation here (comment away if you think I am): the basic definition of “fully committed” is when the host has an amount of physical memory equal to the total amount of memory that will be used by the virtualization layer plus the sum of the working set sizes that will be used by all the virtual machines running at any one time. Saying you should make sure the host has more memory than that translates as DO NOT OVER-COMMIT.
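To put that “fully committed” arithmetic in concrete terms, here is a trivial PowerShell illustration – the numbers are invented for a single 16GB host, not taken from any benchmark or from VMware’s paper:

# Illustrative only: "fully committed" means physical RAM covers the
# virtualization layer plus the sum of the VMs' working sets.
$hostRAM       = 16GB
$hypervisorUse = 1GB                  # memory used by the virtualization layer itself
$workingSets   = 7GB, 7GB             # working sets (not allocations) of the running VMs
$committed     = $hypervisorUse + ($workingSets | Measure-Object -Sum).Sum

if ($committed -gt $hostRAM) {"Over-committed: expect the virtualization layer to swap"}
else                         {"Fits in physical memory: no host-level swapping needed"}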


The second point qualifies the first: there’s a nerdy but important point that working set is not the same as memory allocated. The critical thing is to avoid the virtualization layer swapping memory. If you had – say – two hosts running 8GB VMs with Exchange in, and you tried to fit both VMs into one host with 8GB of available RAM, both VMs would try to cache 6-7GB of mail-store data; but without physical memory behind it, what got pulled in from disk gets swapped out to disk again. In this case you would be better off telling the VMs they had 4GB to use: that way they can keep 2GB of critical data (indexes and so on) in memory, and not take pot luck with what the virtualization layer swaps to disk. “Balloon” drivers make memory allocated to a VM unavailable, reducing an individual working set; page sharing reduces the sum of working sets. It might sound pedantic to say ‘you can over-allocate without over-committing’ but that’s what this comes down to: the question is “by how much”, as I said in the original post:


VMware will cite their ability to share memory pages, but this doesn’t scale well to very large memory systems (more pages to compare), and to work you must not have [1] large amounts of data in memory in the VMs (the data will be different in each), or [2]  OSes which support entry point randomization (Vista, Win7, Server 2008/2008-R2) or [3] heterogeneous operating systems.


We had this argument with VMware before, when they claimed you could over-commit by 2:1 in every situation – eventually as “proof” they found a customer (they didn’t name) who was evaluating (not in production with) a VDI solution based on Windows XP, with 256MB in each VM. The goal was to run a single (unnamed) application, so this looked like a much better fit for Terminal Services (one OS instance / many users & apps) than for VDI (many OS instances), but if the app won’t run in TS then this is a situation where a VMware-based solution has the advantage over a Microsoft-based one. [Yes, such situations do exist! I just maintain they are less common than many people would have you believe].


Typing this up, I wondered if Scott thought I was saying that VMware’s advice was that never, ever, under any circumstances whatsoever should customers use the ability to over-commit – which they have taken the time and trouble to put into their products. You can see that they recommend against memory over-commit, as he puts it, in a far more qualified way than that. The article from Elias which I was responding to (and I’ve no idea if Scott read it or not) talked about using oversubscription “to power-on VMs when a host experiences hardware failure”. This sounds like a design with cluster nodes having redundant capacity for CPU, network and disk I/O (not sweating the hardware, as in the VDI case) but with none for memory; after a failure, moving to fully used CPU, network and disk capacity, but accepting a large over-commit on memory, with poor performance as a result. In my former role in Microsoft Consulting Services I used to joke in design reviews that I could only say “we can not foresee any problems”, and if problems came up we’d say “that was unforeseen”. I wouldn’t sign off Elias’ design: if I did, and over-committing memory meant service levels weren’t met, any lawyer would say that was a foreseeable problem, and produce that paper to show it is exactly the kind of set-up VMware tell people to avoid.


There is one last point: Scott says everyone loves over-commit (within these limitations presumably), and we will too “once you have provided similar functionality in Hyper-V”. Before the beta of Server 2008 R2 was publicly available we showed dynamic memory. This allowed a VM to say it needed more memory (if added, it showed up using the hot-add ability of newer OSes). The host could ask for some memory back – in which case a balloon driver reduced the working set of the VM. There was no sharing of pages between VMs and no paging of VM memory by the virtualization layer – the total in use by the VMs could not exceed physical memory. It was gone from the public beta; and I have heard just about every rumour on what will happen to it: “it will come out as a patch”, “it will come out in a service pack”, “it will be out in the next release of Windows”, “the feature’s gone for good”, “it will return much as you saw it”, “it will return with extra abilities”. As I often say in these situations, those who know the truth aren’t talking, and those who talk don’t know.


Oh yes, Merry Christmas and Goodwill to all.


This post originally appeared on my technet blog.

December 21, 2009

Drilling into ‘reasons for not switching to Hyper-V’

Filed under: Powershell,Virtualization,Windows Server 2008-R2 — jamesone111 @ 11:30 am

Information Week published an article last week, “9 Reasons why enterprises shouldn’t switch to hyper-v”. The author is Elias Khnaser; this is his website and this is the company he works for. A few people have taken him to task over it, including Aidan. I’ve covered all the points he made, most of which seem to have come from VMware’s bumper book of FUD, but I wanted to start with one point which I hadn’t seen before.

Live migration. Elias talked of “an infrastructure that would cause me to spend more time in front of my management console waiting for live migration to migrate 40 VMs from one host to another, ONE AT A TIME.” and claimed it “would take an administrator double or triple the time it would an ESX admin just to move VMs from host to host”. Posting a comment to the original piece he went off the deep end replying to Justin’s comments, saying “Live Migration you can migrate 40 VMs if nothing is happening? Listen, I really have no time to sit here trying to educate you as a reply like this on the live migration is just a mockery. Son, Hyper-v supports 1 live VM migration at a time.” Now this does at least start with a fact: Hyper-V only allows one VM to be in flight on a given node at any moment, but you can issue one command and it moves all the Hyper-V VMs between nodes. Here’s the PowerShell command that does it.
# Take every cluster group on node grommit-r2 which contains a
# "Virtual Machine" resource, and live-migrate it to node wallace-r2
Get-ClusterNode -Name grommit-r2 | Get-ClusterGroup |
  Where-Object { Get-ClusterResource -Input $_ | Where {$_.ResourceType -like "Virtual Machine*"}} |
     Move-ClusterVirtualMachineRole -Node wallace-r2
The video shows it in action with 2 VMs, but it could just as easily be 200. The only people who would “spend more time in front of [a] management console” are those who are not up to speed with Windows clustering. System Center will sequence moves for you as well. But… does it matter if the VMs are migrated in series or in parallel? If you have a mesh of network connections between cluster nodes you could be copying to two nodes over two networks with the parallel method, but if you don’t (and most clusters don’t) then n copies will go at 1/n the speed of a single copy. Surely if you have 40 VMs and they take a minute each to move, it takes 40 minutes either way… right? Well, no… Let’s use some rounded numbers for illustration only: say 55 seconds of the minute is doing the initial copy of memory, 4 seconds doing the second-pass copy of memory pages which changed in that 55 seconds, and 1 second doing the third-pass copy and handshaking. Then Hyper-V moves onto the next VM and the process repeats 40 times. What happens with 40 copies in parallel? Somewhere in the 37th minute the first-pass copies complete – none of the VMs has moved to its new node yet. Now: if 4 seconds’ worth changed in 55 seconds – that’s about 7% of all the pages – what percentage will have changed in 36 minutes? Some won’t change from hour to hour and others change from second to second – how many actually change in 55 seconds, or 36 minutes, or any other length of time depends on the work being done at that point and the memory size, and will be enormously variable. However the extreme points are clear: (a) in the very best case no memory changes and the parallel copy takes as long as the sequential – in all other cases it takes longer; (b) in the worst case scenario the second pass has to copy everything – when that happens the migration will never complete.
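If you want to play with that argument yourself, here is a toy model – entirely my own illustration with made-up numbers, not a benchmark of either product. Each migration is a geometric series of passes: the first pass copies all the memory, and each later pass re-copies the fraction that changed during the previous one:

# Total copy time for one VM, given the first-pass time and the fraction of
# pages re-dirtied during each pass. If pages change as fast as they are
# copied (fraction >= 1), the migration never converges.
function Get-MigrationSeconds ([double]$FirstPass, [double]$DirtyFraction) {
    if ($DirtyFraction -ge 1) {return $null}     # never completes
    $FirstPass / (1 - $DirtyFraction)            # sum of the geometric series
}

# Serial: 55s first pass, ~7% re-dirtied => ~59s per VM, ~39 minutes for all 40
40 * (Get-MigrationSeconds -FirstPass 55 -DirtyFraction 0.07)

# Parallel: each first pass runs at 1/40 speed (~37 minutes), during which far
# more memory changes - guess half the pages - so every VM needs ~73 minutes
Get-MigrationSeconds -FirstPass (55 * 40) -DirtyFraction 0.5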

Breadth of OS support. In Microsoft-speak “supported” means a support incident can go to the point of issuing a hot-fix if need be. Not supported doesn’t mean non-cooperation if you need help – but the support people can’t make the same guarantee of a resolution. By that definition, we don’t “support” any other companies’ software – they provide hot-fixes, not us – but we do have arrangements with some vendors so a customer can open a support case and have it handed on to Microsoft, or handed on by Microsoft, as a single incident. We have those arrangements with Novell for SUSE Linux and Red Hat for RHEL, and it’s reasonable to think we are negotiating arrangements for more platforms: those who know what is likely to be announced in future won’t identify which platforms, to avoid prejudicing the process. In VMware-speak “supported” has a different meaning. In their terms NT4 is “supported”: NT4 works on Hyper-V, but without hot-fixes for NT4 it’s not “supported”. If NT4 is supported on VMware and not on Hyper-V, exactly how is a customer better off? Comparisons using different definitions of “support” are meaningless. “Such and such an OS works on ESX/vSphere but fails on Hyper-V” or “Vendor X works with VMware but not with Microsoft” allows the customer to say “so what” or “that’s a deal-breaker”.

Security. Was it Hyper-V that had the vulnerability which let VMs break out into the host partition? No, that was VMware. Elias commented that “You had some time to patch before the exploit hit all your servers”, which makes me worry about his understanding of network worms. He also brings up the discredited disk footprint argument; that is based on the fallacy that every megabyte of code is equally prone to vulnerabilities. Jeff sank that one months ago, and pretty comprehensively – the patch record shows a little code from VMware has more flaws than a lot of code of Microsoft’s.

Memory over-commit. VMware’s advice is: don’t do it. Deceiving a virtualized OS about the amount of memory at its disposal means it makes bad decisions about what to bring into memory – with the virtualization layer paging blindly, not knowing what needs to be in memory and what doesn’t. That means you must size your hardware for more disk operations, and still accept worse performance. Elias writes about using oversubscription “to power-on VMs when a host experiences hardware failure”. In other words the VMs fail over to another host which is already at capacity, and oversubscription magically makes the extra capacity you need. We’d design things with a node’s worth of unused memory (and CPU, network and disk IOps) in the other node[s] of the cluster. VMware will cite their ability to share memory pages, but this doesn’t scale well to very large memory systems (more pages to compare), and to work you must not have [1] large amounts of data in memory in the VMs (the data will be different in each), or [2] OSes which support entry point randomization (Vista, Win7, Server 2008/2008-R2), or [3] heterogeneous operating systems. Back in March 2008 I showed how a Hyper-V solution was more cost effective if you spent some of the extra cost of buying VMware on memory – in fact I showed the maths underneath it, and how under limited circumstances VMware could come out better. Advocates for VMware [Elias included] say buying VMware buys greater VM density: the same amount spent on RAM buys even greater density. The VMware case is always based on a fixed amount of memory in the server: as I said back then, either you want to run [a number of] VMs on the box, or the budget per box is [a number]. Who ever yelled “Screw the budget, screw the workload, keep the memory constant!”? The flaw in that argument is more pronounced now than it was when I first pointed it out, as the amount of RAM you get for the price of VMware has increased.

Hot add memory. Hyper-V only does hot-add of disk, not memory. Some guest OSes won’t support it at all. Is it an operation which justifies the extra cost of VMware?

Priority restart. Elias describes a situation where all the domain controllers / DNS servers are on one host. In my days in Microsoft Consulting Services reviewing designs customers had in front of them, I would have condemned a design which did that, and asked some tough questions of whoever proposed it. It takes scripting (or very conservative start-up timeouts) in Hyper-V to manage this. I don’t know enough of the feature in VMware to know how it sequences things based not on the OS running but on all the services being ready to respond.

Fault tolerance. VMware can offer parallel running – with serious restrictions. Hyper-V needs 3rd-party products (Marathon) to match that. What this saves is the downtime to restart the VM after an unforeseen hardware failure. It’s no help with software failures: if the app crashes, or the OS in the VM crashes, then both instances crash identically. Clustering at the application level is the only way to guarantee high levels of service: how else do you cope with patching the OS in the VM or the application itself?

Maturity. If you have a new competitor show up in your market, you tell people how long you have been around. But what is the advantage in VMware’s case? Shouldn’t age give rise to wisdom – the kind of wisdom which stops you shipping updates which cause High Availability VMs to unexpectedly reboot, or shipping beta time-bomb code in a release product? It’s an interesting debating point whether VMware had that wisdom and lost it – if so they have passed through maturity and reached senility.

Third-party vendor support. Here’s a photo. At a meet-the-suppliers event one of our customers put on, they had us next to VMware. Notice we’ve got System Center Virtual Machine Manager on our stand, running in a VM, managing two other Hyper-V hosts which happen to be clustered; the lack of traffic at the VMware stand allows us to see they weren’t showing any software – a full demo of our latest and greatest needs 3 laptops, and theirs? Well, the choice of hardware is a bit limiting. There is a huge range of management products to augment Windows – indeed the whole reason for bringing System Center in is that it manages hardware, virtualization (including VMware) and virtualized workloads. When Elias talks of 3rd-party vendors I think he means people like him – and that would mean he’s saying you should buy VMware because that’s what he sells.

This post originally appeared on my technet blog.

December 10, 2009

Hyper-V resource round-up.

Filed under: Virtualization — jamesone111 @ 6:43 pm

I spent this morning with a group of our partners, and one of the comments which came back was “we know there is a lot of material out there for Hyper-V … can you point us to some of the highlights”. So it was a nice surprise to find a mail from the product team telling me that they’ve now got a “best of the Hyper-V resources” page – doubly pleasant was that they had linked to several of my past posts on the subject. Among the other links is the answer to a question I got asked a couple of days ago, “how do I allow Hyper-V to get access to shared files on other computers” (the point being that a logged-on user could see these files but VMs could not) – that’s there as well, in the two posts about file shares.

This post originally appeared on my technet blog.

Offline Virtual Machine Servicing Tool update

Filed under: Virtualization — jamesone111 @ 4:59 pm

I had a mail about an update to the Offline Virtual Machine Servicing Tool, which I thought was worth passing on in case you haven’t come across it before:

The Offline Virtual Machine Servicing Tool 2.1 manages the workflow of updating large numbers of offline virtual machines according to their individual needs. To do this, the tool works with Microsoft® System Center Virtual Machine Manager (VMM) 2008 or 2008 R2, and with either Windows Server Update Services (3.0 SP1 or SP2) or System Center Configuration Manager (2007 SP1 or SP2, or 2007 R2).

It uses “servicing jobs” to manage the update operations based on lists of existing virtual machines stored in VMM. For each virtual machine, the servicing job:

• “Wakes” the virtual machine (deploys it to a host and starts it).
• Triggers the appropriate software update cycle (Configuration Manager or WSUS).
• Shuts down the updated virtual machine and returns it to the library.

Using a maintenance host with carefully controlled connectivity, you can reduce the risk to a VM while it is being patched.

We’ve just released a new version of the tool to fully support the latest releases of WSUS, SCCM, SCVMM, Hyper-V and Windows, which you can download – complete with additional information – here.

If all your VMs are running all the time this will not be of much interest to you – but if you have a complex environment with a library of VMs which see occasional use, then this is something which you really should take a look at.

This post originally appeared on my technet blog.

November 28, 2009

How easy is it to deploy a Virtualized server ?

Filed under: Virtualization — jamesone111 @ 10:50 am

A couple of posts back I showed the video of deploying an operating system with Windows Deployment Services. But if you have an image in a virtual hard disk, it is quicker to copy the VHD file than it is to make an image of its contents and then expand the image back onto a new system. That’s one of the options in System Center Virtual Machine Manager, so at Tech-Ed one of the demos I did was to show how that worked – Andrew and I were showing how the new version of SQL Server can be pre-installed with the operating system. I captured the rehearsal version to show just how quickly this can be done, and the ease with which we can set the virtual hardware settings and configure the things which are set at first boot (like the administrator password and domain membership).

Bonus Links.

Over the coming weeks I plan to put up some more short screen casts focusing on specific tasks in SCVMM, but in the meantime I’d like to share a longer piece of work from my US-based colleague Kevin Remde. He broke a session he ran, entitled Overview of System Center Virtual Machine Manager 2008, into 6 parts of roughly 30 minutes each. Here are the links.
Part 1 SCVMM Administration

Part 2 Common management of Hyper-V and VMWare

Part 3 The SCVMM Library

Part 4 User Roles and the Self Service Portal

Part 5 Integration with Operations Manager

Part 6 SCVMM wrap-up

This post originally appeared on my technet blog.

November 25, 2009

Announcing the PowerShell Configurator.

Filed under: Powershell,Virtualization,Windows Server 2008-R2 — jamesone111 @ 11:17 pm

For a little while I have had a beta version of a project I call PSCONFIG on CodePlex. I’ve changed a couple of things, and from the people who have given it a try, it seems that it is working pretty well. It’s aimed at servers running either Hyper-V Server R2 or Core installations of Windows Server 2008 R2, although it can be useful on just about any version of Windows with PowerShell V2 installed. Here is a breakdown of what is included.

Installed software, product features, drivers and updates
* Add-Driver, Get-Driver
* Add-InstalledProduct, Get-InstalledProduct, Remove-InstalledProduct
* Add-WindowsFeature, Get-WindowsFeature, Select-WindowsFeature, Remove-WindowsFeature
* Add-HotFix, Add-WindowsUpdate, Get-WindowsUpdateConfig, Set-WindowsUpdateConfig

Networking and Firewall
* Get-FirewallConfig, Set-FirewallConfig, Get-FirewallProfile, Get-FireWallRule, New-FirewallRule
* Get-NetworkAdapter, Select-NetworkAdapter, Get-IpConfig, New-IpConfig, Remove-IpConfig, Set-IpConfig

Licensing
* Get-Registration, Register-Computer

Page file
* Get-PageFile, Set-AutoPageFile

Shut-down event tracker
* Get-ShutDownTracker, Set-ShutDownTracker

Windows Remote Management
* Get-WinRMConfig, Disable-WinRm

Remote Desktop
* Get-RemoteDesktopConfig, Set-RemoteDesktop

Misc
* Rename-Computer
* Set-DateConfig, Set-iSCSIConfig, Set-RegionalConfig

It has a menu so it can replace the SCONFIG VB utility; to show how it works I’ve put a video on YouTube (see below). It includes on-line help, and there is a user guide available for download from CodePlex. The documentation is one step behind the code, although the only place where I think this matters is that the New-FirewallRule command doesn’t have any explanation – hopefully if you use tab to work through the parameters it is obvious. Obviously, as a release candidate, I’m looking for feedback before declaring it final. CodePlex discussions are the best place for that.


This post originally appeared on my technet blog.

November 19, 2009

ThriveLive! Online IT Professional Virtualization Tour

Filed under: Events,Virtualization — jamesone111 @ 6:14 pm

My colleagues from the other side of the Atlantic – Dan Stolts, Blain Barton, Yung Chou and John Baker – have lined up a set of on-line sessions on virtualization aimed at IT professionals.
They will go from the desktop to the enterprise, covering VHD native boot, Windows XP Mode, Windows Server 2008 R2 Hyper-V™, and System Center Virtual Machine Manager (SCVMM).

When: Thursday, December 10, 2009, 8:00 AM – 12:00 PM Pacific Time (US & Canada) / 4PM–8PM (UK). Register here. I tend to drop into these things from home, and listen to the sessions which interest me as the clock shifts from working day to early evening.

Here’s the full Agenda.


Dan Stolts on VHD Native Boot: We’ll kick off the afternoon by exploring VHD Native Boot, which is a new feature for Windows 7 and Windows Server 2008 R2. VHD Native Boot can be used as the running operating system on designated hardware – without a parent operating system, virtual machine, or hypervisor. This is one of the best virtualization features to date for technology professionals of every kind – from enterprise to small and medium-size business pros and consultants.


Blain Barton on Windows XP Mode

With Windows XP Mode, it’s easy to install and run multiple Windows XP productivity applications directly from your Windows 7-based PC. Do you have application compatibility issues? Windows XP Mode can ease those compatibility headaches, because it gives you the best of both worlds. You can easily run older Windows XP business software – including web applications that require an old version of Internet Explorer® – while taking advantage of the many benefits of your Windows 7 desktop. This is a can’t-miss session for IT pros who juggle both new and established software and web applications.


Yung Chou on Windows Server 2008 R2 Hyper-V
It’s time to focus on the enterprise with an overview of Windows Server 2008 R2 Hyper-V. In this session, we’ll look at how to create virtual machines in Hyper-V and demonstrate how the snapshot feature can easily revert a virtual machine to a previous state. You’ll come away from this session with a solid understanding of all the capabilities and new features in Windows Server 2008 R2 Hyper-V.


John Baker on System Center Virtual Machine Manager
Finally, no virtualization discussion is complete without a conversation about management. When it comes to managing virtual infrastructures, System Center Virtual Machine Manager 2008 (SCVMM) is the best of the best. This member of the System Center family of system management products provides a straightforward, cost-effective solution for unified management of physical and virtual machines.

This post originally appeared on my technet blog.

October 17, 2009

More on VHD files

Filed under: Virtualization,Windows 7,Windows Server 2008-R2 — jamesone111 @ 2:42 pm

I’ve had plenty to say about the uses of VHD files on different occasions. They get used anywhere we need to have a file which contains an image of a disk. So from Vista onwards we have had complete image backup to VHD; we use VHD for holding the virtual disk to be used by a virtual machine (be it Hyper-V, Virtual PC or Virtual Server – the disks are portable, although the OS on them might be configured to be bootable on one form of virtualization and not another), and so on.

Most interesting of all, with Windows 7 and Server 2008 R2 the OS can boot from a VHD file – if you try to do this with an older OS it will begin to boot, and then when it discovers it is not on a native hard disk it all goes the way of the pear. However an older OS can be installed on the “native” disk with a newer OS in a VHD, provided that the boot loader is new enough to understand boot from VHD. I’ve done this with my two cluster-node laptops – I can boot into Server 2008 “classic” or into 2008 R2: the latter is contained in a VHD, so I don’t have to worry about re-partitioning the disk or having different OSes in different folders. The principles are the same, but the process is a bit more complicated for XP and for Server 2003 – Mark has a guest post on his blog which gives a step-by-step guide. In theory it should work on any OS which uses NTLDR and Boot.ini, all the way back to NT 3.1 – though I will admit I’ve only run XP and Server 2003 in virtual machines since Hyper-V went into beta.
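For the Windows 7 / Server 2008 R2 case, the boot entry is created with bcdedit; the sketch below shows the shape of it. The description text and VHD path are placeholders, and {guid} stands for the identifier bcdedit prints when you copy the entry – check the documentation before relying on this:

# Copy the current boot entry; bcdedit prints the new entry's GUID
bcdedit /copy {current} /d "Server 2008 R2 (boot from VHD)"

# Point the new entry at the VHD file on the native disk - replace {guid}
bcdedit /set {guid} device vhd=[C:]\VHDs\R2.vhd
bcdedit /set {guid} osdevice vhd=[C:]\VHDs\R2.vhd

# Let the loader re-detect the HAL when booting into the VHD image
bcdedit /set {guid} detecthal on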

Of course being able to mount VHDs inside Windows 7 and Server 2008 R2 gives you an alternative way of getting files back from a backup, and I’ve got a video on TechNet Edge showing that and some of the other uses. My attempts to modify a backup VHD into a virtual machine VHD have failed – I can access the disk in a VM, but my attempts to find the right set of incantations to make it bootable have left me feeling like one of the less able students at Hogwarts. Into this mix comes the new Disk2VHD tool from Mark Russinovich and Bryce Cogswell – Mark is the more famous member of the team, but if you do a search on Bryce’s name you’ll see his background with Sysinternals, so Disk2VHD comes with an instant provenance. There are multiple places where this tool has a use: lifting an existing machine to make a boot-from-VHD image or a virtual machine, or as a way of doing an image backup which can be used in a VM.


This post originally appeared on my technet blog.

September 14, 2009

On PowerShell function design: vague can be good.

Filed under: How to,Powershell,Virtualization — jamesone111 @ 5:22 pm

There is a problem which comes up in several places in PowerShell – that is helping the user by being vague about parameter types. Consider these examples from my Hyper-V library for PowerShell

1. The user can specify a machine using a string which contains its name
Save-VM London-DC or Save-VM *DC, or Save-VM London*,Paris*

2. The user can get virtual machine objects with one command and pipe these into another command
Get-VM -Running | Stop-VM

3. The user can mix objects and strings
$MyVMs = Get-VM -Server wallace,Grommit | where { (Get-VMSettings $_).Note -match "LAB1"}
Start-VM -Wait "London-DC", $MyVMs

The last one searches servers “wallace” and “Grommit” for VMs, narrows the list to those used in lab1 and starts London-DC on the local server followed by the VMs in Lab1.

In a post I made a few days back about adding Edit to your profile I showed a couple of aspects of piping objects that became easier in V2 of PowerShell.
Instead of writing Param($VM), I can now write

Param(
       [parameter(Mandatory = $true, ValueFromPipeline = $true)] 
       $VM
     )

Mandatory=$true makes sure I have a parameter from somewhere, and ValueFromPipeline is all I need to get it from the pipeline. PowerShell offers a ValueFromPipelineByPropertyName option which looks at the piped object for a property which matches the parameter name or a declared [alias] for it. I could use that to reduce a VM object to its name, but doing so would lose the server information (which I need in example 3 above), and it gets in the way of piping strings into functions, so this is not the place to use it.
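For anyone who hasn’t met it, here is a minimal stand-alone illustration of ValueFromPipelineByPropertyName – nothing to do with the Hyper-V library, just a toy function:

function Get-NameLength {
    param(
        [parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
        [string]$Name
    )
    # Each piped object's Name property binds to -Name; the object itself is discarded
    Process { "{0} is {1} characters long" -f $Name, $Name.Length }
}

Get-ChildItem $env:windir | Get-NameLength

You can see why it doesn’t fit here: binding by property name reduces each piped object to a single property, which is exactly the information loss described above.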
Allowing an array gives me problems when the array members expand to more than one VM (in the case of wildcards). The code for my “Edit” function won’t cope with being handed an array of file objects or an array of arrays, but it doesn’t need to, because I wouldn’t work like that. But things I’m putting out for others need to work the way different users might expect: this one needs to handle arrays in arrays (like “london-DC”,$myVMs) and arrays of VM objects ($myVMs). So, time for my old friend recursion, and a function like this.

Function Stop-VM
{ Param(
        [parameter(Mandatory = $true, ValueFromPipeline = $true)]
        $VM,
        [String]
        $Server = "."
       )
  Process{
           if ($VM -is [String]) {$VM = Get-VM -VM $VM -Server $Server}
           if ($VM -is [array])  {$VM | ForEach-Object {Stop-VM -VM $_ -Server $Server}}
           if ($VM -is [System.Management.ManagementObject]) {
               $VM.RequestStateChange(3)
           }
         }
}

This says: if we got passed a single string (via the pipe or as a parameter), we get the matching VM(s), if any. If we were passed an array, or a string which resolved to an array, we call the function again with each member of that array. If we were passed a single WMI object, or a string which resolved to a single WMI object, then we do the work required.

There’s one thing wrong with this: it stops the VM without any warning – I covered this back here. It is easy to support ShouldProcess; there is a level at which Confirm prompts get turned on automatically (controlled by $ConfirmPreference), and we can say that the impact is high – at the default setting the confirm prompt will then appear even if the user doesn’t ask for it.

Function Stop-VM
{ [CmdletBinding(SupportsShouldProcess=$True, ConfirmImpact='High')]
  Param(
          [parameter(Mandatory = $true, ValueFromPipeline = $true)]
          $VM,
          [String]
          $Server = "."
       )
  Process{
           if ($VM -is [String]) {$VM = Get-VM -VM $VM -Server $Server}
           if ($VM -is [Array])  {$VM | ForEach-Object {Stop-VM -VM $_ -Server $Server}}
           # Only power off when the user confirms (or $ConfirmPreference allows it)
           if (($VM -is [System.Management.ManagementObject]) -and
                   $pscmdlet.ShouldProcess($VM.ElementName, "Power-Off VM without Saving")) {
               $VM.RequestStateChange(3)
           }
        }
}

Nearly there, but we have two problems still to solve, and a simplification to make. First the simplification: Mike Kolitz (who’s going through the same bafflement as I did with Twitter, but more importantly has helped out on the Hyper-V library) introduced me to this trick. When a function calls another function using the same parameters – or calls itself recursively – it can be a pain if there are many parameters. But PowerShell V2 has a “splatting” operator: @PSBoundParameters puts the contents of the $PSBoundParameters variable into the command. (James Brundage, who I’ve mentioned before, wrote it up.) And you can manipulate $PSBoundParameters, so Mike had a clever generic way of recursively calling functions.

if ($VM -is [Array])  { [Void]$PSBoundParameters.Remove("VM") ; $VM | ForEach-Object {Stop-VM -VM $_ @PSBoundParameters} }

In other words: remove the parameter that is being expanded, and re-call the function with the remaining parameters, specifying only the one being expanded. As James’ post shows, it makes life a lot easier when you have a bunch of switches.
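If splatting is new to you, here is a minimal sketch – Outer-Demo and Inner-Demo are invented names, not part of the library:

Function Inner-Demo
{ Param($Name, [Switch]$Force, [Switch]$Wait)
  "Inner-Demo saw Name=$Name Force=$Force Wait=$Wait"
}
Function Outer-Demo
{ Param($Name, [Switch]$Force, [Switch]$Wait)
  # @PSBoundParameters re-supplies exactly the parameters the caller gave us
  Inner-Demo @PSBoundParameters
}
Outer-Demo -Name "London-DC" -Force   # Inner-Demo receives -Name and -Force; -Wait stays unset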
OK, now for the problem(s): the message

Confirm
Are you sure you want to perform this action?
Performing operation "Power-Off VM without Saving" on Target "London-DC".
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): n

will appear for every VM, even if we select Yes to all or No to all: each time Stop-VM is called it gets a new instance of $pscmdlet. And what if we don’t want the message at all – for example in a script which kills the VMs and rolls back to an earlier snapshot? Jason Shirk, one of the active guys on our internal PowerShell alias, pointed out first that you can have a –Force switch, and secondly that you don’t need to use the function’s OWN instance of $pscmdlet – why not pass one instance around? So the function morphed into this:

Function Stop-VM
{ [CmdletBinding(SupportsShouldProcess=$True, ConfirmImpact='High')]
  Param(
          [parameter(Mandatory = $true, ValueFromPipeline = $true)]
          $VM,
          [String]
          $Server = ".",
          $PSC,
          [Switch]
          $Force
       )
  Process{
           # First call: adopt this function's own $pscmdlet and pass it on from here
           if ($PSC -eq $null)  {$PSC = $pscmdlet}
           if (-not $PSBoundParameters.psc) {$PSBoundParameters.Add("psc",$PSC)}
           if ($VM -is [String]) {$VM = Get-VM -VM $VM -Server $Server}
           # Recurse with Mike's splatting trick, so $PSC (and -Force) travel too
           if ($VM -is [Array])  {[Void]$PSBoundParameters.Remove("VM")
                                  $VM | ForEach-Object {Stop-VM -VM $_ @PSBoundParameters}}
           if (($VM -is [System.Management.ManagementObject]) -and
                   ($Force -or $PSC.ShouldProcess($VM.ElementName, "Power-Off VM without Saving"))) {
               $VM.RequestStateChange(3)
           }
        }
}

So now $PSC either gets passed in, or it picks up $pscmdlet, and then gets passed to anything else we call – in this case, recursive calls to this function. And –Force is there to trump everything. And that’s what I have implemented in dozens of places in my library.
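By way of illustration, here are the kinds of calls this supports (the VM names are just examples): the pipeline shares one confirmation conversation, and –Force skips it altogether.

# Prompt per VM, but "Yes to All" / "No to All" now work across the whole set
Get-VM -Running | Stop-VM

# In a tear-down script where no prompt is wanted at all
Stop-VM -VM "London-DC","Paris-DC" -Force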

This post originally appeared on my technet blog.

August 18, 2009

Sophos error: facts not found

Filed under: Security and Malware,Virtualization,Windows 7 — jamesone111 @ 4:08 pm

Having begun my previous post with an explanation of “I have a professional disregard for…”, it bubbles up again. Quite near where I live is the headquarters of Sophos; as a local company I should be well disposed towards them, but I’ve had occasion before now to roll my eyes at what their spokespeople have said – the pronouncements being of the “let’s make the news, and never mind the facts” variety. One security blogger I talked to after some of these episodes could fairly be labelled “lacking professional regard for them”. Well, Graham Cluley of Sophos has a prize example of this as a guest post on his blog, written by Sophos’s Chief Technology Officer Richard Jacobs.

“Windows 7’s planned XP compatibility mode risks undoing much of the progress that Microsoft has made on the security front in the last few years and reveals the true colours of the OS giant”. Says Jacobs. “XP mode reminds us all that security will never be Microsoft’s first priority. They’ll do enough security to ensure that security concerns aren’t a barrier to sales… …when there’s a trade off to be made, security is going to lose.”

That second half makes me pretty cross: I talked yesterday about the business of meeting customers’ needs and you don’t do that if security is lacking, but it’s not the only priority.

I’ve got a post – Windows 7 XP mode: helpful? Sure. Panacea? No – where I point out that the virtual machine in XP mode is not managed, and I quote what Scott Woodgate said in the first sentence we published anywhere about XP mode: “Windows XP Mode is specifically designed to help small businesses move to Windows 7.” As Jacobs puts it, “The problem is that Microsoft are not providing management around the XP mode virtual machine (VM).” It’s an odd statement because XP mode is just standard virtualization software and a pre-configured VM. You can treat the VM as something to be patched via Windows Update or WSUS just like a physical PC. You install anti-virus software on it like a physical PC. To manage the VM you use the big brother of XP mode: MED-V, which is part of MDOP. But from the existence of an unmanaged VM, and missing other key facts, Jacobs feels able to extrapolate an entire security philosophy: he could do worse than look up the Microsoft Security Development Lifecycle to learn how we avoid making security trade-offs the way we once did (and others still do).

Now I’m always loath to tell people how to do their jobs, but in most companies someone who carries the title of “Chief Technology Officer” would have a better grasp of the key facts before reaching for the insulting rhetoric. And having looked after another blog where we used many guest posts, I know it’s important to check the facts of your contributors: Cluley either didn’t check or didn’t know better, and let Jacobs end by outlining his idea of customers’ options.

  1. Stick with Windows XP.
  2. Migrate to Windows 7 and block use of XP mode – if you have no legacy applications.
  3. Migrate to Windows 7 and adopt XP mode.
  4. Migrate to Windows 7 and implement full VDI – there are various options for this, but don’t imagine it’s free.
  5. Demand that Microsoft do better.

Let’s review these:

Option 2 – get rid of the legacy applications – is plainly the best choice. There are now very few apps which don’t run on Windows Vista / 7, but if you’re lumbered with one of those, this choice isn’t for you.

Option 1. Bad choice: (a) because if you are even thinking about the issue you know you want to get onto a better OS, and (b) because those legacy apps are probably driving you to running everything as administrator. Given a choice of “use legacy apps”, “run XP”, and “be secure”, you can choose any two. I hope Jacobs has the nous not to put this forward as a serious suggestion.

Option 3. Small business with unmanaged desktops? XP mode is for you. Got management? Get MED-V!

Option 4. Full VDI: bad choice. Put the legacy app on a terminal server if you can – but remember it is badly written: if it doesn’t run on an up-to-date OS, will it run on Terminal Services? VDI in the sense of running instances of full XP desktops in VMs (just at the datacenter, not the desktop) has all the same problems of managing what is in those VMs – except they aren’t behind NAT, and they probably run more apps, so they are more at risk.

Option 5. Hmmm. He doesn’t make any proposals, but he seems to demand that Microsoft produce something like MED-V. We’ve done that.

And while I’m taking Cluley and Jacobs to task I should give a mention to Adrian Kingsley-Hughes on ZDNet – it was one of my Twitter correspondents who pointed me to Adrian and on to Sophos. He quotes Jacobs saying “We all need to tell Microsoft that the current choices of no management, or major investment in VDI are not acceptable”. The response is that if we thought those choices were acceptable we wouldn’t have MED-V. And Adrian should have known that too.

If people like these don’t get it, then some blame has to be laid at our door for not getting the message across, so for clarity I’ll restate what I said in the other post:

  • Desktop virtualization is not a free excuse to avoid updating applications. It is a work around if you can’t update.
  • Desktop virtualization needs work, both in deployment and maintenance – to restate point 1, if you have the option to update, expect that to be less work.
  • “Windows XP Mode is specifically designed to help small businesses move to Windows 7.”  As I pointed out in an earlier post still.  MED-V is designed for larger organizations with a proper management infrastructure, and a need to deploy a centrally-managed virtual Windows XP environment  on either Windows Vista or Windows 7 desktops. Make sure you use the appropriate one.

Update: Adrian has updated his post with quotes from the above. He has this choice quote: “XP Mode is a screaming siege to manage. Basically, you’re stuck doing everything on each and every machine that XP Mode is installed on.” Yes Adrian, you’re right. No customer who needs to manage desktop virtualization in an enterprise should even think of doing it without Microsoft Enterprise Desktop Virtualization. Adrian calls the above the “MED-V defense” but asks “OK, fine, but what about XP Mode? That’s what we are talking about here”. What about XP mode? It’s the wrong product if you have lots of machines (with 5 you can get Software Assurance and MDOP). We’re talking about customers who install the wrong product for their needs. My job as an evangelist is to try to get them to use the one that meets their needs. But I think it would help customers if, instead of saying “XP mode is the wrong product” and stopping, commentators (Adrian, Richard Jacobs, Uncle Tom Cobley) also mentioned the right product.

This post originally appeared on my technet blog.

August 14, 2009

VMware – the economics of falling skies … and disk footprints.

Filed under: Virtualization,Windows Server 2008,Windows Server 2008-R2 — jamesone111 @ 4:36 pm

There’s a phrase which has been going through my head recently: before coming to Microsoft I ran a small business; I thought our bank manager was OK, but one of my fellow directors – someone with greater experience in finance than I’ll ever have – sank the guy with seven words: “I have a professional disregard for him.” I think of “professional disregard” when hearing people talk about VMware. It’s not that the people I’m meeting simply want to see another product – Hyper-V – displace VMware (well, those people would, wouldn’t they?), but that nothing they see from VMware triggers those feelings of “professional regard” which you have for some companies – often your toughest competitor.

When you’ve had a sector to yourself for a while, having Microsoft show up is scary. Maybe that’s why Paul Maritz was appointed to the top job at VMware. His rather sparse entry on Wikipedia says that Maritz was born in Zimbabwe in 1955 (same year as Bill Gates and Ray Ozzie, not to mention Apple’s Steve Jobs, and Eric Schmidt – the man who made Novell the company it is today) and that in the 1990s he was often said to be the third-ranking executive at Microsoft (behind Gates and Steve Ballmer, born in early 1956). The late 90s was when people came to see us as “the nasty company”. It’s a role that VMware seems to be sliding into: even people I thought of as being aligned with VMware now seem inclined to kick them.

Since the beta of Hyper-V last year, I’ve been saying that the position is very like that with Novell in the mid 1990s. The first point of similarity is on economics. Novell NetWare was an expensive product, with the kind of market share where a certain kind of person talks of “monopoly”. That’s a pejorative word, as well as one with special meanings to economists and lawyers. It isn’t automatically illegal, or even bad, to have a very large share (just as very large proportions in parliament can make you either Nelson Mandela or Robert Mugabe). Market mechanisms which act to ensure “fair” outcomes rely on buyers being able to change to another seller (and vice versa – some say farmers are forced to sell to supermarkets on unfair terms); if one party is locked in, then terms can be dictated. Microsoft usually gets accused of giving too much to customers for too little money. Economists would say that if a product is overpriced, other players will step in – regulators wanting to reduce the amount customers get from Microsoft argue they are preserving such players. Economists don’t worry so much about that side, but say new suppliers need more people to buy the product, which means lower prices, so a new entrant must expect to make money at a lower price: they would say that if Microsoft makes a serious entry into an existing market dominated by one product, that product is overpriced. Interestingly, I’ve seen the VMware side claim that Hyper-V, Xen and other competitors are not taking market share and VMware’s position is as dominant as ever.

The second point of similarity is that when Windows NT went up against entrenched NetWare it was not our first entry into networking – I worked for RM, where we OEM’d MS-NET (a.k.a. 3COM 3+ Open, IBM PC-LAN Program) and OS/2 LAN Manager (a.k.a. 3+Open). Though not bad products for their time – like Virtual Server – they did little to shift things away from the incumbent. The sky did not fall in on Novell when we launched NT, but that was when people stopped seeing NetWare as the only game in town [a third point of similarity]. Worse, new customers began to dismiss its differentiators as irrelevant, and that marks the beginning of the end.
Having been using that analogy for a while, it’s nice to see no less a person than a Gartner Vice President, David Cappuccio, envisaging a Novell-like future for VMware. In a piece entitled “Is the sky falling on VMware?”, SearchServerVirtualization.com also quotes him as saying that “‘good enough’ always wins out in the long run”. I hate “good enough” because so often it is used to mean “lowest common denominator”; I’ve kept the words of a Honda TV ad with me for several years.

Ever wondered what the most commonly used word in the world is?
”OK”
Man’s favourite word is one which means all-right, satisfactory, not bad
So why invent the light bulb, when candles are OK ?
Why make lifts, if stairs are OK ?
Earth’s OK, Why go to the moon ?
Clearly, not everybody believes OK is OK.
We don’t.

Some people advance the idea that we don’t need desktop apps because web apps are “good enough”. Actually, for a great many purposes, they aren’t. Why have a bulky laptop when a netbook is “good enough”? Actually, for many purposes it is not. Why pay for Windows if Linux is ‘free’… I think you get the pattern here. But it is our constant challenge to explain why one should have a new version of Windows or Office when the old version was “good enough”. The answer – as any economist will tell you – is that when people choose to spend extra money, whatever differentiates one product from the other is relevant to them and outweighs the cost (monetary or otherwise, real or perceived): then you re-define “good enough” – the old version is not good enough any more. If we don’t persuade customers of that, we can’t make them change. [Ditto people who opt for Apple: they’d be spectacularly ignorant not to know a Mac costs more, so unless they are acting perversely they must see differentiators, relevant to them, which justify both the financial cost and the cost of forgoing Windows’ differentiators. Most people, of course, see no such thing.] One of the earliest business slogans to get imprinted on me was “quality is meeting the customer’s needs”: pointless gold-plating is not “quality”. In that sense “good enough” wins out: not everything that one product offers over and above another is a meaningful improvement. The car that leaves you stranded at the roadside isn’t meeting your needs, however sophisticated its air conditioning; the camera you don’t carry with you isn’t meeting your needs, even if it could shoot 6 frames a second; the computer system which is down when you need it is (by definition) not meeting your needs. A product which meets more of your needs is worth more.

A supplier can charge more in a market with choices (VMware, Novell, Apple) only if they persuade enough people that the differentiators in their products meet real needs and are worth a premium. In the end Novell didn’t persuade enough; Apple have not persuaded a majority, but enough for a healthy business; and VMware? Who knows what enough is yet, never mind whether they will get that many. If people don’t see the price as a premium but as a legacy of being able to overcharge when there was no choice, then it becomes the “VMware tax”, as Zane Adam calls it in our video interview. He talked about mortgaging everything to pay for VMware: the product which costs more than you can afford doesn’t meet your needs either, whatever features it may have.

I’ll come back to cost another time – there’s some great work which Matt has done which I want to borrow rather than plagiarize. It needs a long post, and I can already see lots of words scrolling up my screen, so I want to give the rest of this post to one of VMware’s irrelevant feature claims: disk footprint. Disk space is laughably cheap these days, and in case you missed the announcement, Hyper-V Server now boots from flash – hence the video above. Before you run off to do this for yourself, check what set-ups are supported in production, and note it is only Hyper-V Server, not Windows Server or client versions of Windows. The steps are all on this blog already: see How to install an image onto a VHD file (I used a fixed size of 4GB), then just boot from the VHD stored on a bootable USB stick. Simples.
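For the curious, the heart of the trick is a handful of diskpart and bcdboot commands, roughly as below – the drive letters and paths are illustrative, and you should check the earlier posts and the supported-configuration list before relying on any of this:

# A rough sketch, not a supported recipe: U: is the USB stick and V: is the
# Windows volume inside the attached VHD - both letters are illustrative.
@"
select vdisk file=U:\HyperV.vhd
attach vdisk
"@ | Set-Content "$env:TEMP\mount.txt"
diskpart /s "$env:TEMP\mount.txt"

# With the VHD's Windows volume visible as V:, write boot files to the stick:
bcdboot V:\Windows /s U: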

I’ve never met a customer who cares about a small footprint: VMware want you to believe a tiny little piece of code must need less patching, give better uptime, and be more trustworthy than a whole OS – even a pared-down one like Windows Server Core or Hyper-V Server. Now Jeff, who writes on the virtualization team blog, had finally heard enough of this and decided it was time to sink it once and for all. It’s a great post (with follow-up). If you want to talk about patching and byte counts, argues Jeff, let’s count bytes in patches over a representative period: Microsoft Hyper-V Server 2008 had 26 patches, not all of which required reboots, and many were delivered as combined updates; they totalled 82 MB. VMware ESXi 3.5 had 13 patches, totalling over 2.7 GB. That’s not a misprint – 2700 MB against 82 (see, VMware sometimes does give you more) – and that’s because VMware releases a whole new ESXi image every time they release a patch, so every ESXi patch requires a reboot. Could that be why VMotion (Live Migration, as now found in R2 of Hyper-V) seemed vital to them and merely important to us? When we didn’t have it, it was the most relevant feature. Jeff goes to town on VMware software quality – including the “Update 2” debacle; that wasn’t the worst thing, though. The very worst thing that can happen on a virtualized platform is VMs breaking out of containment and running code on the host: since the host needs to access the VMs’ memory for snapshots, saving and migration, a VM that can run code on the host can impact all the other VMs. So CVE-2009-1244 – “A critical vulnerability in the virtual machine display function allows guest operating system users to execute arbitrary code on the host OS” – is very alarming reading.

And that’s the thing – how can you have regard for a competitor who doesn’t meet customers’ needs on security or reliability, and who cites things like disk space to justify costing customers far, far more money?

This post originally appeared on my technet blog.

