James O'Neill's Blog

August 18, 2009

Sophos error: facts not found

Filed under: Security and Malware,Virtualization,Windows 7 — jamesone111 @ 4:08 pm

Having begun my previous post with an explanation of “I have a professional disregard for …” it bubbles up again… Quite near where I live is the headquarters of Sophos; as a local company I should be well disposed towards them, but I’ve had occasion before now to roll my eyes at what their spokespeople have said – the pronouncements being of the “let’s make the news, and never mind the facts” variety. One security blogger I talked to after some of these pronouncements could fairly be labelled “lacking professional regard for them”. Well, Graham Cluley of Sophos has a prize example of this as a guest post on his blog, written by Sophos’s Chief Technology Officer Richard Jacobs.

“Windows 7’s planned XP compatibility mode risks undoing much of the progress that Microsoft has made on the security front in the last few years and reveals the true colours of the OS giant”. Says Jacobs. “XP mode reminds us all that security will never be Microsoft’s first priority. They’ll do enough security to ensure that security concerns aren’t a barrier to sales… …when there’s a trade off to be made, security is going to lose.”

That second half makes me pretty cross: I talked yesterday about the business of meeting customers’ needs, and you don’t meet those needs if security is lacking – but that doesn’t make it the only priority.

I’ve got a post – Windows 7 XP mode: helpful ? Sure. Panacea ? No – where I point out that the virtual machine in XP mode is not managed, and I quote what Scott Woodgate said in the first sentence we published anywhere about XP mode: “Windows XP Mode is specifically designed to help small businesses move to Windows 7.”  As Jacobs puts it, “The problem is that Microsoft are not providing management around the XP mode virtual machine (VM)”. It’s an odd statement, because XP mode is just standard virtualization software plus a pre-configured VM. You can treat the VM as something to be patched via Windows Update or WSUS, just like a physical PC. You install anti-virus software on it, like a physical PC. To manage the VM you use the big brother of XP mode: MED-V, which is part of MDOP. But from the existence of an unmanaged VM, and missing other key facts, Jacobs feels able to extrapolate an entire security philosophy: he could do worse than to look up the Microsoft Security Development Lifecycle to learn how we avoid making security trade-offs the way we once did (and others still do).

Now I’m always loath to tell people how to do their jobs, but in most companies someone who carries the title of “Chief Technology Officer” would have a better grasp of the key facts before reaching for the insulting rhetoric. And having looked after another blog where we used many guest posts, I know it’s important to check your contributors’ facts. Cluley either didn’t check or didn’t know better, and let Jacobs end by outlining his idea of customers’ options.

  1. Stick with Windows XP.
  2. Migrate to Windows 7 and block use of XP mode – if you have no legacy applications.
  3. Migrate to Windows 7 and adopt XP mode.
  4. Migrate to Windows 7 and implement full VDI – there are various options for this, but don’t imagine it’s free.
  5. Demand that Microsoft do better.

Let’s review these:

Option 2, getting rid of legacy applications, is plainly the best choice. There are now very few apps which don’t run on Windows Vista / 7, but if you’re lumbered with one of those this choice isn’t for you.

Option 1. Bad choice. (A) because if you are even thinking about the issue you know you want to get onto a better OS, and (B) because those legacy apps are probably driving you to running everything as administrator. Given a choice of “use legacy apps”, “run XP”, and “be secure”, you can choose any two. I hope Jacobs has the nous not to put this forward as a serious suggestion.

Option 3. Small business with unmanaged desktops ? XP mode is for you. Got management ? Get MED-V!

Option 4. Full VDI: Bad choice: put the legacy app on a terminal server if you can – but remember it is badly written; if it doesn’t run on an up-to-date OS, will it run on Terminal Services ? VDI in the sense of running instances of full XP desktops in VMs (just in the datacenter, not on the desktop) has all the same problems of managing what is in those VMs – except they aren’t behind NAT, and they probably run more apps, so they are more at risk.

Option 5. Hmmm. He doesn’t make any proposals, but he seems to demand that Microsoft produce something like MEDV. We’ve done that.

And while I’m taking Cluley and Jacobs to task I should give a mention to Adrian Kingsley-Hughes on ZDNet. It was one of my twitter correspondents who pointed me to Adrian and on to Sophos. He quotes Jacobs saying “We all need to tell Microsoft that the current choices of no management, or major investment in VDI are not acceptable”. The response is that if we thought those choices were acceptable we wouldn’t have MED-V. And Adrian should have known that too.

If people like these don’t get it, then some blame has to be laid at our door for not getting the message across, so for clarity I’ll restate what I said in the other post:

  • Desktop virtualization is not a free excuse to avoid updating applications. It is a workaround if you can’t update.
  • Desktop virtualization needs work, both in deployment and maintenance – to restate point 1 – if you have the option to update, expect that to be less work.
  • “Windows XP Mode is specifically designed to help small businesses move to Windows 7,” as I pointed out in an earlier post. MED-V is designed for larger organizations with a proper management infrastructure, and a need to deploy a centrally-managed virtual Windows XP environment on either Windows Vista or Windows 7 desktops. Make sure you use the appropriate one.

Update Adrian has updated his post with quotes from the above. He has this choice quote: “XP Mode is a screaming siege to manage. Basically, you’re stuck doing everything on each and every machine that XP Mode is installed on.” Yes Adrian, you’re right. No customer who needs to manage desktop virtualization in an enterprise should even think of doing it without Microsoft Enterprise Desktop Virtualization. Adrian calls the above the “MEDV defense” but asks “OK, fine, but what about XP Mode? That’s what we are talking about here”. What about XP mode ? It’s the wrong product if you have lots of machines (with 5 you can get Software Assurance and MDOP). We’re talking about customers who install the wrong product for their needs. My job as an evangelist is to try to get them to use the one that meets their needs. But I think it would help customers if, instead of saying “XP mode is the wrong product” and stopping, commentators (Adrian, Richard Jacobs, Uncle Tom Cobley) also mentioned the right product.

This post originally appeared on my technet blog.

August 17, 2009

How to Boot from VHD (VHD booting re-visited.)

Filed under: Windows 7,Windows Server 2008-R2 — jamesone111 @ 6:17 pm

Some while back I wrote about boot from VHD. To re-cap: in Windows 7 and Server 2008 R2 (including core, and Hyper-V Server R2) the boot loader is capable of mounting a VHD file and booting from it as though it were a physical disk. There is no virtualization going on, just the necessary smarts to use the same format. If you try to use this to boot older operating systems the boot process will start, but the machine will crash quite early on when it finds the system/boot device(s) aren’t really disks (virtualization hides this fact).

So for it all to work, first you need the BootMgr from Windows 7 / Server 2008-R2 (it lurks in the hidden System partition). Obviously you have this if the main OS on the machine is Windows 7 or Server 2008-R2, but if you want to add a VHD as a second OS on a system running Vista / Server 2008 you can just update BootMgr (the easiest place to get it is probably the install DVD). It supports some new features, but the Boot Configuration Database file (BCD) your system already has remains valid – it just doesn’t contain any of the new features, so this should have no unwanted side effects.

Second, you need a VHD image into which you have installed an OS which understands boot from VHD. This is easy enough to build in Hyper-V, but there are other ways. There are TWO preparation steps which I forgot: the first is that the VHD needs to be sysprep’d (%windir%\system32\sysprep\sysprep.exe), otherwise you’ll create conflicting clones. This can be fixed once you have got the OS booting from VHD, but you won’t get it to boot unless you create a boot configuration database on the partition where the OS resides in the VHD. You should be able to do this when you’re building the VHD, or you can mount the VHD and do it after the fact; either way you are looking for %windir%\system32\bcdboot.exe, which you’ll find on Windows 7 and Server 2008-R2, but NOT on Hyper-V Server and not on older OSes. Run it as bcdboot V:\Windows /s V:  where V: is the volume letter for a mounted VHD, or the drive letter of the boot drive if the OS is running in Hyper-V. I omitted this step when updating my release candidate set-ups to final code last night and it throws an ugly error. That led to my second mistake: I thought the problem was with the BCD.

Since I wasn’t thinking clearly (I thought ‘I’ll just do this before I go to bed’ !) I went to edit the BCD. Now… after the explanation above you’d be able to work out that you need to edit the BCD with the latest tools (BCDEdit). If you’re working on Windows 7 / Server 2008-R2 you don’t need to worry. But if you’re adding a VHD-booting OS to a machine running Windows Vista/Server 2008 (which I was) and try to manage boot from VHD with the tools those OSes provided, you’ll be on a path to frustration, and (in my case) insomnia. After much puzzlement, I ended up back at a BCD which would only boot Windows Server 2008, this time with the right version of BCDEdit.
So I was ready for the third step. I had my files in C:\VHD\Win7.VHD, so the 3 commands I needed were to clone the existing entry and modify it for boot from VHD, thus:

bcdedit /copy {default} /d “Windows Server 2008-R2 From VHD”

/d specifies the description you’ll see in the boot menu; the command will copy the default entry and return a GUID, e.g. {cbd971bf-b7b8-4885-951a-fa03044f5d71}. Copy the GUID – you’ll need it in the next 2 commands. If there is no OS on the drive you can copy the boot folder from the Windows setup disk and modify the {default} entry in the same way as you’d modify the copy – just use {default} in place of the GUID.

bcdedit /set {cbd971bf-b7b8-4885-951a-fa03044f5d71} device “vhd=[locate]\vhd\win7.vhd”

bcdedit /set {cbd971bf-b7b8-4885-951a-fa03044f5d71} Osdevice “vhd=[locate]\vhd\win7.vhd”

(use your own guid, obviously)
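Pulled together, the whole sequence looks like this – a sketch only, using my C:\VHD\Win7.VHD path and the example GUID from above (substitute the GUID your own /copy command returns, and your own drive letters):

```
rem Sketch only – run from an elevated prompt on Windows 7 / Server 2008 R2.
rem 1. Put a boot configuration database inside the VHD (V: = the mounted VHD volume).
bcdboot V:\Windows /s V:

rem 2. Clone the default boot entry and note the GUID it prints.
bcdedit /copy {default} /d "Windows Server 2008-R2 From VHD"

rem 3. Point the clone at the VHD file (replace the GUID with the one from step 2).
bcdedit /set {cbd971bf-b7b8-4885-951a-fa03044f5d71} device "vhd=[locate]\vhd\win7.vhd"
bcdedit /set {cbd971bf-b7b8-4885-951a-fa03044f5d71} osdevice "vhd=[locate]\vhd\win7.vhd"

rem 4. Review the result before rebooting.
bcdedit /enum
```

The [locate] token tells the boot loader to find the drive holding the VHD at boot time, which is why no drive letter appears in the vhd= path.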

You can check any of the other settings for booting the OS with bcdedit /enum. If they need changing it’s back to bcdedit /set, but if it all looks right, it’s time to reboot and see if it works. I put this last part into a video I have on TechNet Edge which shows some of the other uses of VHD files, but I glossed over making the VHD and getting the right versions of the tools.

Incidentally, in my last post I mentioned the announcement that Hyper-V Server now boots from flash; provided you have made a bootable flash device, the steps above will let you set this up. Be warned though, that to be properly supported the setup will need to be “blessed” by the hardware vendor.


Update. Mark left a comment with a link to a more thorough post than this. Just a quick side note though: boot from USB should work in theory for any Windows 7 family OS, including Server 2008 R2. However, in practice there is some work done in Hyper-V Server so that the USB device doesn’t get lost for a short time during the boot process. This isn’t a problem booting from a hard disk, but will crash the OS booting from USB. So boot from USB is Hyper-V Server and Windows PE only.

This post originally appeared on my technet blog.

August 14, 2009

VMware – the economics of falling skies … and disk footprints.

Filed under: Virtualization,Windows Server 2008,Windows Server 2008-R2 — jamesone111 @ 4:36 pm

There’s a phrase which has been going through my head recently: before coming to Microsoft I ran a small business; I thought our bank manager was OK, but one of my fellow directors – someone with greater experience in finance than I’ll ever have – sank the guy with 7 words: “I have a professional disregard for him.” I think of “professional disregard” when hearing people talk about VMware. It’s not that the people I’m meeting simply want to see another product – Hyper-V – displace VMware (well, those people would, wouldn’t they ?), but that nothing they see from VMware triggers those feelings of “professional regard” which you have for some companies – often your toughest competitor.

When you’ve had a sector to yourself for a while, having Microsoft show up is scary. Maybe that’s why Paul Maritz was appointed to the top job at VMware. His rather sparse entry on Wikipedia says that Maritz was born in Zimbabwe in 1955 (same year as Bill Gates and Ray Ozzie, not to mention Apple’s Steve Jobs, and Eric Schmidt – the man who made Novell the company it is today) and that in the 1990’s he was often said to be the third-ranking executive in Microsoft (behind Gates and Steve Ballmer, born in early 1956). The late 90s was when people came to see us as “the nasty company”. It’s a role that VMware seem to be sliding into: even people I thought of as being aligned with VMware now seem inclined to kick them.

Since the beta of Hyper-V last year, I’ve been saying that the position is very like that with Novell in the mid 1990s. The first point of similarity is on economics. Novell NetWare was an expensive product, with the kind of market share where a certain kind of person talks of “monopoly”. That’s a pejorative word, as well as one with special meanings to economists and lawyers. It isn’t automatically illegal or even bad to have a very large share (just as very large proportions in parliament can make you either Nelson Mandela or Robert Mugabe). Market mechanisms which act to ensure “fair” outcomes rely on buyers being able to change to another seller (and vice versa – some say farmers are forced to sell to supermarkets on unfair terms); if one party is locked in, then terms can be dictated. Microsoft usually gets accused of giving too much to customers for too little money. Economists would say that if a product is overpriced, other players will step in – regulators wanting to reduce the amount customers get from Microsoft argue they are preserving such players. Economists don’t worry so much about that side, but say new suppliers need more people to buy the product, which means lower prices, so a new entrant must expect to make money at a lower price: they would say that if Microsoft makes a serious entry into an existing market dominated by one product, that product is overpriced. Interestingly, I’ve seen the VMware side claim that Hyper-V, Xen and other competitors are not taking market share and VMware’s position is as dominant as ever.

The second point of similarity is that when Windows NT went up against entrenched NetWare it was not our first entry into networking – I worked for RM, where we OEM’d MS-NET (a.k.a. the IBM PC LAN Program) and OS/2 LAN Manager (a.k.a. 3Com 3+Open). Though not bad products for their time – like Virtual Server – they did little to shift things away from the incumbent. The sky did not fall in on Novell when we launched NT, but that was when people stopped seeing NetWare as the only game in town. Worse – a third point of similarity – new customers began to dismiss its differentiators as irrelevant, and that marks the beginning of the end.
Having been using that analogy for a while, it’s nice to see no less a person than a Gartner Vice President, David Cappuccio, envisaging a Novell-like future for VMware. In a piece entitled “Is the sky falling on VMware?” SearchServerVirtualization.com also quotes him as saying that “‘good enough’ always wins out in the long run”. I hate “good enough” because so often it is used to mean “lowest common denominator”; I’ve kept the words of a Honda TV ad with me for several years.

Ever wondered what the most commonly used word in the world is ?
Man’s favourite word is one which means all-right, satisfactory, not bad
So why invent the light bulb, when candles are OK ?
Why make lifts, if stairs are OK ?
Earth’s OK, Why go to the moon ?
Clearly, not everybody believes OK is OK.
We don’t.

Some people advance the idea that we don’t need desktop apps because web apps are “good enough”. Actually, for a great many purposes, they aren’t. Why have a bulky laptop when a netbook is “good enough” ? Actually, for many purposes it is not. Why pay for Windows if Linux is ‘free’ … I think you get the pattern here. But it is our constant challenge to explain why one should have a new version of Windows or Office when the old version was “good enough”. The answer – as any economist will tell you – is that when people choose to spend extra money, whatever differentiates one product from the other is relevant to them and outweighs the cost (monetary or otherwise, real or perceived): you have then re-defined “good enough” – the old version is not good enough any more. If we don’t persuade customers of that, we can’t make them change. [Ditto people who opt for Apple: they’d be spectacularly ignorant not to know a Mac costs more, so unless they are acting perversely they must see differentiators, relevant to them, which justify both the financial cost and the cost of forgoing Windows’ differentiators. Most people, of course, see no such thing.] One of the earliest business slogans to get imprinted on me was “quality is meeting the customer’s needs”: pointless gold-plating is not “quality”. In that sense “good enough” wins out: not everything that one product offers over and above another is a meaningful improvement. The car that leaves you stranded at the roadside isn’t meeting your needs, however sophisticated its air conditioning; the camera you don’t carry with you isn’t meeting your needs even if it can shoot 6 frames a second; the computer system which is down when you need it is (by definition) not meeting your needs. A product which meets more of your needs is worth more.

A supplier can charge more in a market with choices (VMware, Novell, Apple) only if they persuade enough people to accept that the differentiators in their products meet real needs and are worth a premium. In the end Novell didn’t persuade enough; Apple have not persuaded a majority, but enough for a healthy business; and VMware ? Who knows what enough is yet, never mind whether they will get that many. If people don’t see the price as a premium but as a legacy of being able to overcharge when there was no choice, then it becomes the “VMware tax”, as Zane Adam calls it in our video interview. He talked about mortgaging everything to pay for VMware: the product which costs more than you can afford doesn’t meet your needs either, whatever features it may have.

I’ll come back to cost another time – there’s some great work which Matt has done which I want to borrow rather than plagiarize. It needs a long post, and I can already see lots of words scrolling up my screen, so I want to give the rest of this post to one of VMware’s irrelevant feature claims: disk footprint. Disk space is laughably cheap these days, and in case you missed the announcement, Hyper-V Server now boots from flash – hence the video above: before you run off to do this for yourself, check what set-ups are supported in production. And note it is only Hyper-V Server, not Windows Server or client versions of Windows. The steps are all on this blog already. See How to install an image onto a VHD file (I used a fixed size of 4GB). Just boot from a VHD stored on a bootable USB stick. Simples.

I’ve never met a customer who cares about a small footprint: VMware want you to believe a tiny little piece of code must need less patching, give better uptime, and be more trustworthy than a whole OS – even a pared-down one like Windows Server Core or Hyper-V Server. Now Jeff, who writes on the virtualization team blog, finally decided he’d heard enough of this and that it was time to sink it once and for all. It’s a great post (with follow-up). If you want to talk about patching and byte counts, argues Jeff, let’s count bytes in patches over a representative period: Microsoft Hyper-V Server 2008 had 26 patches, not all of which required re-boots, and many were delivered as combined updates. They totalled 82 MB. VMware ESXi 3.5 had 13 patches, totalling over 2.7 GB. That’s not a misprint – 2,700 MB against 82 (see, VMware sometimes does give you more) – and that’s because VMware releases a whole new ESXi image every time they release a patch, so every ESXi patch requires a reboot. Could that be why VMotion (Live Migration, as now found in R2 of Hyper-V) seemed vital to them and merely important to us ? When we didn’t have it, it was the most relevant feature. Jeff goes to town on VMware software quality – including the “Update 2” debacle. That wasn’t the worst thing though. The very worst thing that can happen on a virtualized platform is VMs breaking out of containment and running code on the host: since the host needs to access the VMs’ memory for snapshots, saving and migration, a VM that can run code on the host can impact all the other VMs. So CVE-2009-1244 – “A critical vulnerability in the virtual machine display function allows guest operating system users to execute arbitrary code on the host OS” – is very alarming reading.
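Taking Jeff’s figures at face value, the disparity is easy to put in perspective with a back-of-the-envelope check (the sizes are the ones quoted above; the script itself is just illustrative):

```shell
# Patch payloads over the period Jeff surveyed, in MB (figures quoted above).
hyperv_mb=82
esxi_mb=2700
# Integer ratio of the two payloads.
ratio=$(( esxi_mb / hyperv_mb ))
echo "ESXi's patch payload is roughly ${ratio}x Hyper-V Server's"
```

Roughly a thirty-fold difference in bytes shipped – before you even count the reboots each ESXi image replacement forces.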

And that’s the thing – how can you have regard for a competitor who doesn’t meet customers’ needs on security or reliability, and who cites things like disk space to justify costing customers far, far more money ?

This post originally appeared on my technet blog.

August 13, 2009

Core parking in Server 2008 R2 – why it’s like airport X-ray machines.

Filed under: Windows Server 2008-R2 — jamesone111 @ 10:40 pm

It’s third time lucky for this post… a couple of weeks ago I was at our big training event “Tech-Ready”, and my laptop blue-screened on me, citing memory as the cause. On my return I started to write this post and came back to find the laptop had rebooted after a crash. Within 24 hours it had done it a second time, and I saw it do it a third time, with memory getting the blame at a blue screen. I ran the on-board diagnostics, which told me there was nothing wrong, but the Windows 7 memory diagnostic came back with the message you can see. So it looks like the laptop needs a bit more TLC from Dell. Fortunately I have a spare carcass I can put my hard disk into, so I’m not totally stranded. It’s also reminded me that the auto-save option in Windows Live Writer is not on by default.

I don’t much like spending time away from home these days; queuing for airport security is a chore (in fact there is very little fun in flying) and the flight is a huge part of my annual carbon footprint, so going to Seattle for Tech-Ready is something which makes me stop and think: and each time I decide that it is still worth going. Sometimes it is the chance to network with colleagues from round the world I just wouldn’t get to hang out with, sometimes it is the deep technical content, sometimes it is the ability of people like Ray Ozzie to just engage me with ideas. Often it’s a combination. I was mighty impressed with Ray’s first appearance last year, and this year we got a taste of some ideas which will end up in products soon, some that are on a slower burn, and some we’ll look back on as an interesting flight of fancy. Of the “wow” demos we saw it’s not always possible to predict which will end up in which group. Ray quoted Steve Ballmer saying something like “We’re not going to go home, we’re going to keep coming and coming and coming” – which has echoes of the Blue Monster about it. Ray put it a different way: yes, the economy is bad and we’re not immune to it, but if we cut back on R&D then … well, he didn’t actually use the words “you’ll regret it. Maybe not today. Maybe not tomorrow, but soon and for the rest of your life.”* but that was what it came down to.

If Ray was the big vision then Mark Russinovich had some of the best detail, and he talked about core parking in Server 2008 R2. It dawned on me that Research isn’t just about some of the blue sky stuff that we saw in Ray’s session, it is sometimes about going back to problems you thought you’d solved: like the process scheduler.

When I studied Computer Science at university in the mid 1980s we covered the theory of operating system design, and I still have the text book. It describes the job of the process scheduler like this:

  1. [Decide if] the current process on this processor is still the most suitable to run. If it is return control to it, otherwise …
  2. Save its volatile environment (registers, instruction pointer etc)
  3. Restore the volatile environment of the most suitable process
  4. Return control to that process

As a model it’s something we hardly need to think about: it has coped with the arrival of multi-processor environments – notice it said “this processor”: the scheduler just looks at each processor of a multi-core/multi-chip environment and repeats the task. “Most suitable” covers a variety of possibilities, and the introduction of multi-threading meant nothing more than saying you could have more than one instruction pointer and register state for a single process, each one representing a different thread of execution. We know there is often not a thread ready to go – if there were, we’d see utilization running at 100%. This doesn’t break the model, and as soon as a thread becomes ready it gets scheduled on an idle processor. That means the load on the processors is roughly equal, which seems “fair” and so “good”. We’d assume that also gives better performance, and here we need to go and look at queuing definitions of performance – something I studied before university, and get reminded of every time I fly.

People I’ve asked think we have longer queues for airport security because “the more complex checks of today take longer”. But that can’t be the whole story, because the number of people flying has remained the same: the total number of passengers per hour who need to be processed hasn’t changed. If time-to-process were the only thing to change, the queue would get longer and longer all day, or we’d need to lengthen the gaps between flights and have fewer flights or a longer day. That hasn’t happened. As passengers we notice queue-length or time-in-queue, but the airport measures “units processed per hour”. If the queue looks like overflowing, another X-ray machine is opened, and when it gets shorter staff can take a break or attend to other duties. So the minimum amount of processor time is used to process all the passengers going through the airport. If every processor was running all day there would be minimal queues – and a lot of wasted capacity.
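The X-ray arithmetic can be sketched with made-up but plausible numbers (every figure below is an illustrative assumption, not airport data):

```shell
# Toy model of the security queue: passengers arrive at a steady rate,
# and each open X-ray machine processes a fixed number per hour.
arrivals_per_hour=600
per_machine_per_hour=180
machines=3
served=$(( machines * per_machine_per_hour ))    # 540 per hour
growth=$(( arrivals_per_hour - served ))         # queue grows by 60 per hour
echo "With ${machines} machines the queue grows by ${growth} passengers per hour"

# Open one more machine and the queue shrinks instead.
machines=4
served=$(( machines * per_machine_per_hour ))    # 720 per hour
echo "With ${machines} machines there is spare capacity of $(( served - arrivals_per_hour )) per hour"
```

Throughput per hour, not time-in-queue, is what the airport optimizes: machines are opened only when the queue threatens to overflow, which minimizes machine-hours – exactly the trade core parking makes with processor cores.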

This model turns out to be very good for other kinds of processors. Modern CPUs can close down a core or socket, which can save a lot of power. The amount is variable, but it can be hundreds of watts for a large server, and then the same again in data-centre air-con to take the heat away. If you want to save 1 ton of CO2 in a year (on National Energy Foundation numbers), you need to save 45 kilowatt-hours per week, and it’s easy to see servers which save enough watts for enough hours to make that number. But if every processor processes the first thread waiting for processing, the savings won’t amount to a hill of beans.** That’s what most schedulers do, because they assumed that if there is at least one runnable thread, the best thing to do must be to get a thread onto an idle processor. Some smart person went back to scheduler design with the new idea that the “most suitable thread” might be no thread at all: let the processing unit (core) go idle and into a low power state, or power it down completely and park it. This is a complex decision for a scheduler because it involves questions like “how long must the processor be off to save more than the energy it takes to bring it back?” and “if this processor stops processing, what will happen to time-in-queue and queue-length?”. Intuitively we’d say that queuing threads must give lower performance, but the processor will still complete the tasks faster than they go into the queue. Change your measure to work done per unit time and hey presto…
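The National Energy Foundation figure quoted above is easy to turn into a break-even wattage – a quick sketch, using only the 45 kWh-per-week number from the paragraph above:

```shell
# How much continuous power saving equates to 1 ton of CO2 per year,
# using the 45 kWh-per-week figure quoted above.
kwh_per_week=45
hours_per_week=168                                 # 24 * 7
watts=$(( kwh_per_week * 1000 / hours_per_week ))  # watt-hours / hours
echo "A round-the-clock saving of about ${watts} W saves a ton of CO2 a year"
```

So parking a socket that draws a couple of hundred watts (plus the matching air-con load) for most of the day is already in the right ballpark.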

Core parking doesn’t need any particularly fancy hardware – when I checked on this 2-year-old laptop (before I had to put in the drive from my regular laptop) you could see from Perfmon that it was working (it’s on by default). Better yet, although older OSes have no concept of core parking, if you are running Hyper-V then the way processors are scheduled for the VMs provides core parking for them. There are significant savings to be made here, over and above what virtualization was already giving.



 * That was Humphrey Bogart in Casablanca

** OK, that’s a different greenhouse gas.

This post originally appeared on my technet blog.
