James O'Neill's Blog

March 10, 2010

UK techdays Free events in London – including after hours.


You may have seen that registration for UK TechDays events from 12th to 16th April is already open – but you probably won’t have seen this newly announced session, even if you are following @uktechdays on twitter

After Hours @ UK Tech Days 2010 – Wednesday 14th April, 7pm – 9pm. Vue Cinema, Fulham Broadway.

Reviving the critically acclaimed series of madcap hobbyist technology demonstrations – After Hours reappears at Tech Days 2010. After Hours is all about the fun stuff people are building at home with Microsoft technology, ranging from the useful ‘must haves’ no modern home should be without, to the bleeding edge of science fiction made real! Featuring in this fun-filled two-hour installment of entertaining projects are: Home Entertainment systems, XNA Augmented Reality, Natural User Interfaces, Robotics and virtual adventures in the real world with a home-brew holodeck!

Session 1: Home entertainment.

In this session we demonstrate the integration of e-home technologies to produce the ultimate in media entertainment systems and cyber home services.  We show you how to inspire your children to follow the ‘way of the coder’ by tapping into their Xbox 360 gaming time.

Session 2: Augmented reality.

2010 promises to be the year of the Natural User Interface. In this session we demonstrate and discuss the innovations under development at Microsoft, and take an adventure in the ultimate of geek fantasies – the XNA Holodeck.

Like all other TechDays sessions this one is FREE to attend – if you hadn’t heard: UK Tech Days 2010 is a week-long series of events run by Microsoft and technical communities to celebrate and inspire developers, IT professionals and IT Managers to get more from Microsoft technology. Our day events in London will cover the latest technology releases including Microsoft Visual Studio 2010, Microsoft Office 2010, Virtualisation, Silverlight, Microsoft Windows 7 and Microsoft SQL Server 2008 R2, plus events focusing on deployment and an IT Manager day. Oh, and did I say they were FREE?

IT Professional Week – Shepherds Bush

Monday, 12 April 2010   – Virtualization Summit – From the Desktop to the Datacentre

Designed to provide you with an understanding of the key products & technologies enabling seamless physical and virtual management, interoperable tools, and cost-savings & value.

Tuesday, 13 April 2010  – Office 2010 – Experience the Next Wave in Business Productivity

The event will cover how the improvements to Office, SharePoint, Exchange, Project and Visio will provide a practical platform that will allow IT professionals to not only solve problems and deliver business value, but also demonstrate this value to IT’s stakeholders. 

Wednesday, 14 April 2010 – Windows 7 and Windows Server 2008 R2 – Deployment made easy

This event will provide you with an understanding of these tools, including the new Microsoft Deployment Toolkit 2010, Windows Deployment Services and the Application Compatibility Toolkit. We will also take you through the considerations for deploying Windows Server 2008 R2 and migrating your server roles.

Thursday, 15 April 2010 SQL Server 2008 R2 – The Information Platform
Highlighting the new capabilities of the platform, as well as diving into specific topics, such as consolidating SQL Server databases, and tips and techniques for Performance Monitoring and Tuning as well as looking at our newly released Cloud platform SQL Azure.

Friday, 16 April 2010 (IT Managers) – Looking ahead, keeping the boss happy and raising the profile of IT
IT Managers have more and more responsibilities to drive and support the direction of the business. We’ll explore the various trends and technologies that can bring IT to the top table, from score-carding to data governance and cloud computing.

Developer Week – Fulham Broadway

Monday, 12 April 2010 (For Heads of Development and Software Architects) Microsoft Visual Studio 2010 Launch – A Path to Big Ideas

This launch event is aimed at development managers, heads of development and software architects who want to hear how Visual Studio 2010 can help build better applications whilst taking advantage of great integration with other key technologies.
NB – Day 2 will cover the technical in-depth sessions aimed at developers

Tuesday, 13 April 2010 Getting started with Microsoft .NET Framework 4 and Microsoft Visual Studio 2010 WAITLIST ONLY
Microsoft and industry experts will share their perspectives on the top new and useful features in the core programming languages, the framework and the tooling, such as ASP.NET MVC, Parallel Programming, Entity Framework 4, and the offerings around rich client and web development experiences.

Wednesday, 14 April 2010 The Essential MIX
Join us for the Essential MIX as we continue exploring the art and science of creating great user experiences. Learn about the next generation ASP.NET & Silverlight platforms that make it a rich and reach world.

Thursday, 15 April 2010 Best of Breed Client Applications on Microsoft Windows 7
Windows 7 adoption is happening at a startling pace. In this demo-driven day, we’ll look at the developer landscape around Windows 7 to get you up to speed on the operating system that your applications will run on through the new decade.

Friday, 16 April 2010 – Registration opening soon! Windows phone Day
Join us for a practical day of detailed Windows Phone development sessions covering the new Windows Phone specification, application standards and services

There will also be some “fringe” events; these won’t all be in London and I’ll post about them separately (James in the Midlands, I’ve heard you :-) )

 

This post originally appeared on my technet blog.

February 25, 2010

Retirement Planning (for service packs)

Yesterday I wrote about end-of-life planning for OSes and so it makes sense to talk about the end of a service pack as retirement – it is, after all, the word that is used on the product lifecycle pages. Of course we don’t mean retirement in the go-and-live-by-the-seaside sense…



Special police squads — BLADE RUNNER UNITS — had orders to shoot to kill, upon detection, any trespassing Replicants.


This was not called execution. It was called retirement


that sense. Service packs, like OSes (and replicants), get an end date set well in advance. Having explained OSes I want to move on to service packs (and if you want to know about Replicants you’ll have to look elsewhere).


The rule for service packs is simple. Two years after the release of a service pack we stop supporting the previous version. So although Windows Vista will be in mainstream support until 2012, and extended support until 2017, that doesn’t mean you can run the initial release, or Service Pack 1, and be supported until then. Let’s use Vista as a worked example – I explained yesterday:


Windows Vista [had] a General Availability date [of] Jan 2007. For Vista, five years after GA will be later than 2 years after Windows 7, so Vista goes from mainstream to extended support in or shortly after January 2012. We’ve set the date: April 10th 2012. The end of extended support will depend on when the next version of Windows ships, but it won’t be before April 11th 2017.


Service Pack 1 for Vista became available in April 2008, and Service Pack 2 became available in April 2009.
So the life of the original Release to Manufacturing (RTM) version of Windows Vista ends on April 14 2010.
In the same way the life of SP1 of Vista should end in April 2011; in fact, because we don’t retire things on the exact anniversary, SP1 gets an extension until July 12 2011.
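If you want to see that arithmetic written down, here is a rough sketch in PowerShell using the dates above – illustrative only, since the real retirement dates get nudged onto a patch Tuesday (which is where July 12 2011 comes from):

# Rule of thumb from this post: a release drops out of support two years after the
# service pack that supersedes it ships. Dates are the approximate GA months quoted above.
$vistaSp1 = Get-Date '2008-04-01'
$vistaSp2 = Get-Date '2009-04-01'
"RTM support ends around {0:MMMM yyyy}" -f $vistaSp1.AddYears(2)   # April 2010 (actual date: April 14 2010)
"SP1 support ends around {0:MMMM yyyy}" -f $vistaSp2.AddYears(2)   # April 2011 (extended to July 12 2011)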


If you are on Vista you must have upgraded to SP1 or SP2 (or Windows 7) by April 14 if you want to continue being supported.


So here’s the summary for what is supported with Vista, and when:

Jan ‘07 – April ‘08   Only RTM release available
April ‘08 – April ‘09   RTM and Service Pack 1 supported
April ‘09 – April ‘10   RTM, Service Pack 1 and Service Pack 2 supported
April ‘10 – July ‘11   Service Pack 1 and Service Pack 2 supported
July ‘11 – April ‘12   Service Pack 2 only supported
April ‘12 – April ‘17   Extended support phase on SP2 only


To simplify things, that assumes there is no Service pack 3 for Windows Vista, and that the successor to Windows 7 ships before April 11 2015.



Vista SP1 coincided with the release of Server 2008, and Windows XP Service Pack 3 came very shortly afterwards. The extra few days mean the anniversary for XP SP2 falls after the cut-off date for April retirement, so the end of life for XP SP2 is July 13th 2010 (the same day as Windows 2000 Professional and Server editions). Mainstream support for Windows XP (all service packs) has ended; after July 13 XP is on extended support ONLY, on SP3 ONLY.


I should have included in yesterday’s post that July 13th 2010 also marks the end of mainstream support for Server 2003 (and Server 2003 R2); the RTM and SP1 versions are already retired. It would be very unusual to see a new service pack for something in extended support. If you still have 2003 servers, you need to decide what you will do about support / upgrades before July 13th.


Server 2008 shipped as SP1 to sync up with Windows Vista, and SP2 for both came out on the same date, so there are no server service pack actions required until July 12 2011. I explained yesterday why I have sympathy with people who don’t plan, but if you are on Server 2008 SP1 don’t leave it till the last minute to choose between SP2 and upgrading to R2, and then implementing your choice.


Update – Fixed a few typos. 

This post originally appeared on my technet blog.

February 17, 2010

Free ebook: Understanding Microsoft Virtualization R2 Solutions

Filed under: Virtualization,Windows Server 2008-R2 — jamesone111 @ 9:28 am

Over on the MSPress blog they have an announcement

Mitch Tulloch has updated his free ebook of last year. You can now download Understanding Microsoft Virtualization R2 Solutions in XPS format here and in PDF format here.

I’ve worked with Mitch on a couple of books, including the first release of this one, and seen a couple of others; they’ve all been good (poor books from MS Press are very few and far between). If this was a paper book which you had to pay for I’d suggest it is well worth looking at – but it’s a free download (print it or view it on screen: your choice) so, seriously, if you expect to be working with Hyper-V in the foreseeable future you’d have to be daft not to download it.

This post originally appeared on my technet blog.

February 16, 2010

Desktop Virtualization Hour

I had a mail earlier telling me about desktop virtualization hour, planned for 4PM (GMT) on March 18th. (That’s 9AM Seattle time, 5PM CET … you can work out the others I’m sure). More information and a downloadable meeting request are here.

Some effort seems to be going into this one, which makes me think it is more than the average web cast.

This post originally appeared on my technet blog.

February 11, 2010

How to deploy Windows 7 – 3 useful posts

Filed under: Deployment,Windows 7,Windows Server 2008-R2 — jamesone111 @ 8:31 pm

I mentioned a few days back that I was going to write some posts about deploying Windows 7, but there is some good material out there already and there is no sense re-inventing the wheel.

So I’d like to recommend 3 posts from fellow evangelist and all-round good chap Alan Le Marquand:

Choosing the path to Windows 7

Building Windows 7 images

Deploying Windows 7

These are as useful as collections of links to other subject matter as they are for the writing which Alan has done. I expect to be referring people to these via this post (if you see what I mean) for some time to come.

This post originally appeared on my technet blog.

February 8, 2010

Installing Windows from a phone

Arthur : “You mean you can see into my mind ?”
Marvin: “Yes.”
Arthur: “And … ?”
Marvin: “It amazes me how you manage to live in anything that small”

Looking back down the recent posts you might notice that this is the 8th in a row about my new phone (so it’s obviously made something of an impression), this one brings the series to a close.

I’ve said already that I bought a 16GB memory card for the new phone, which is a lot – I had 1GB before, so… what will I do with all that space? I’m not going to use it for video, and 16GB is room for something like 250 hours of MP3s or 500 hours of WMAs: I own roughly 200 albums, so it’s a fair bet they’d fit. Photos – well, maybe I’d keep a few hundred MB on the phone. In any event, I don’t want to fill the card completely. After a trip out with no card in my camera I keep an SD-USB card adapter on my key-ring so I always have both a USB stick and a memory card: currently this uses my old micro-SD card in a full-size SD adapter. If I need more than 1GB I can whip the card out of the phone, pop it in the adapter and keep shooting.

However the phone has a mass storage device mode, so I thought to myself: why not copy the Windows installation files to it, and see if I can boot a machine off it and install Windows from the phone? That way one could avoid carrying a lot of setup disks.
Here’s how I got on.
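For reference, the generic recipe of the time for making a USB-visible device boot Windows setup went something like the sketch below – illustrative rather than the exact steps I used: the disk number, drive letters and paths are assumptions, and diskpart's clean wipes whatever disk is selected, so check with list disk first.

# Illustrative only: E: = the phone's storage in mass-storage mode, D: = the Windows 7 install media.
@"
select disk 2
clean
create partition primary
select partition 1
active
format fs=fat32 quick
assign
exit
"@ | Set-Content phone.txt
diskpart /s phone.txt               # re-partition the card and mark the partition active

xcopy D:\*.* E:\ /e /h /f           # copy the whole installation DVD onto the phone's storage
D:\boot\bootsect.exe /nt60 E:       # write a bootmgr-compatible boot sector so setup will boot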

This post originally appeared on my technet blog.

December 21, 2009

Drilling into ‘reasons for not switching to Hyper-V’

Filed under: Powershell,Virtualization,Windows Server 2008-R2 — jamesone111 @ 11:30 am

Information Week published an article last week, “9 Reasons why enterprises shouldn’t switch to hyper-v”. The author is Elias Khnaser; this is his website and this is the company he works for. A few people have taken him to task over it, including Aidan. I’ve covered all the points he made, most of which seem to have come from VMware’s bumper book of FUD, but I wanted to start with one point which I hadn’t seen before.

Live migration. Elias talked of “an infrastructure that would cause me to spend more time in front of my management console waiting for live migration to migrate 40 VMs from one host to another, ONE AT A TIME.” and claimed it “would take an administrator double or triple the time it would an ESX admin just to move VMs from host to host”. Posting a comment to the original piece he went off the deep end replying to Justin’s comments, saying “Live Migration you can migrate 40 VMs if nothing is happening? Listen, I really have no time to sit here trying to educate you as a reply like this on the live migration is just a mockery. Son, Hyper-v supports 1 live VM migration at a time.” Now this does at least start with a fact: Hyper-V only allows one VM to be in flight on a given node at any moment – but you can issue one command and it moves all the Hyper-V VMs between nodes. Here’s the PowerShell command that does it.
Get-ClusterNode -Name grommit-r2 | Get-ClusterGroup |
  where-object { Get-ClusterResource -input $_ | where {$_.resourcetype -like "Virtual Machine*"}} |
     Move-ClusterVirtualMachineRole -Node wallace-r2             
The video shows it in action with 2 VMs but it could just as easily be 200. The only people who would “spend more time in front of [a] management console” are those who are not up to speed with Windows Clustering. System Center will sequence moves for you as well. But… does it matter if the VMs are migrated in series or in parallel? If you have a mesh of network connections between cluster nodes you could be copying to 2 nodes over two networks with the parallel method, but if you don’t (and most clusters don’t) then n copies will go at 1/n the speed of a single copy. Surely if you have 40 VMs and they take a minute to move it takes 40 minutes either way… right? Well, no… Let’s use some rounded numbers for illustration only: say 55 seconds of the minute is doing the initial copy of memory, 4 seconds doing the second-pass copy of memory pages which changed in that 55 seconds, and 1 second doing the 3rd-pass copy and handshaking. Then Hyper-V moves onto the next VM and the process repeats 40 times. What happens with 40 copies in parallel? Somewhere in the 37th minute the first-pass copies complete – none of the VMs have moved to their new node yet. Now: if 4 seconds’ worth changed in 55 seconds – that’s about 7% of all the pages – what percentage will have changed in 36 minutes? Some won’t change from hour to hour and others change from second to second – how many actually change in 55 seconds or 36 minutes or any other length of time depends on the work being done at that point and the memory size, and will be enormously variable. However the extreme points are clear: (a) In the very best case no memory changes and the parallel copy takes as long as the sequential. In all other cases it takes longer. (b) In the worst case scenario the second pass has to copy everything – when that happens the migration will never complete.
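For anyone who wants to check that arithmetic, here it is as a few lines of PowerShell – the pass times are the rounded numbers above, not measurements:

$vmCount = 40 ; $pass1 = 55 ; $pass2 = 4 ; $pass3 = 1            # seconds, per VM
# One at a time: each VM finishes all three passes before the next one starts
$serial = $vmCount * ($pass1 + $pass2 + $pass3) / 60              # = 40 minutes
# All at once: 40 first-pass copies share the same links, so each runs at ~1/40 speed,
# and no VM has actually moved until that shared first pass completes
$parallelFirstPass = $vmCount * $pass1 / 60                       # ~36.7 minutes
"Serial: $serial min. Parallel: the first pass alone takes $([math]::Round($parallelFirstPass,1)) min, and every page dirtied in that time still has to be re-copied."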

Breadth of OS support. In Microsoft-speak “supported” means a support incident can go to the point of issuing a hot-fix if need be. Not supported doesn’t mean non-cooperation if you need help – but the support people can’t make the same guarantee of a resolution. By that definition, we don’t “support” any other companies’ software – they provide hot-fixes, not us – but we do have arrangements with some vendors so a customer can open a support case and have it handed on to Microsoft or handed on by Microsoft as a single incident. We have those arrangements with Novell for Suse Linux and Red Hat for RHEL, and it’s reasonable to think we are negotiating arrangements for more platforms: those who know what is likely to be announced in future won’t identify which platforms, to avoid prejudicing the process. In VMware-speak “supported” has a different meaning. In their terms NT4 is “Supported”. NT4 works on Hyper-V but without hot-fixes for NT4 it’s not “Supported”. If NT4 is supported on VMware and not on Hyper-V, exactly how is a customer better off? Comparisons using different definitions of “support” are meaningless. “Such and such an OS works on ESX / Vsphere but fails on Hyper-V” or “Vendor X works with VMware but not with Microsoft” allows the customer to say “so what” or “That’s a deal-breaker”.

Security. Was it Hyper-V that had the vulnerability which let VMs break out into the host partition? No, that was VMware. Elias commented that "You had some time to patch before the exploit hit all your servers", which makes me worry about his understanding of network worms. He also brings up the discredited disk footprint argument; that is based on the fallacy that every megabyte of code is equally prone to vulnerabilities. Jeff sank that one months ago and pretty comprehensively – the patch record shows a little code from VMware has more flaws than a lot of code of Microsoft’s.

Memory over-commit. VMware’s advice is: don’t do it. Deceiving a virtualized OS about the amount of memory at its disposal means it makes bad decisions about what to bring into memory – with the virtualization layer paging blindly, not knowing what needs to be in memory and what doesn’t. That means you must size your hardware for more disk operations, and still accept worse performance. Elias writes about using oversubscription “to power-on VMs when a host experiences hardware failure”. In other words the VMs fail over to another host which is already at capacity and oversubscription magically makes the extra capacity you need. We’d design things with a node’s worth of unused memory (and CPU, network, and disk IOps) in the other node[s] of the cluster. VMware will cite their ability to share memory pages, but this doesn’t scale well to very large memory systems (more pages to compare), and to work you must not have [1] large amounts of data in memory in the VMs (the data will be different in each), or [2] OSes which support entry point randomization (Vista, Win7, Server 2008/2008-R2), or [3] heterogeneous operating systems. Back in March 2008 I showed how a Hyper-V solution was more cost effective if you spent some of the extra cost of buying VMware on memory – in fact I showed the maths underneath it and how under limited circumstances VMware could come out better. Advocates for VMware [Elias included] say buying VMware buys greater VM density: the same amount spent on RAM buys even greater density. The VMware case is always based on a fixed amount of memory in the server: as I said back then, either you want to run [a number of] VMs on the box, or the budget per box is [a number]. Who ever yelled "Screw the budget, screw the workload. Keep the memory constant!"? The flaw in that argument is more pronounced now than it was when I first pointed it out, as the amount of RAM you get for the price of VMware has increased.

Hot add memory. Hyper-V only does hot-add of disk, not memory. Some guest OSes won’t support it at all. Is it an operation which justifies the extra cost of VMware?

Priority restart – Elias describes a situation where all the domain controllers / DNS servers are on one host. In my days in Microsoft Consulting Services reviewing designs customers had in front of them, I would have condemned a design which did that, and asked some tough questions of whoever proposed it. It takes scripting (or very conservative start-up timeouts) in Hyper-V to manage this. I don’t know enough of the feature in VMware to know how it sequences things based not on the OS running but on all the services being ready to respond.

Fault tolerance. VMware can offer parallel running – with serious restrictions. Hyper-V needs 3rd party products (Marathon) to match that. What this saves is the downtime to restart the VM after an unforeseen hardware failure. It’s no help with software failures: if the app crashes, or the OS in the VM crashes, then both instances crash identically. Clustering at the application level is the only way to guarantee high levels of service: how else do you cope with patching the OS in the VM or the application itself?

Maturity: If you have a new competitor show up in your market, you tell people how long you have been around. But what is the advantage in VMware’s case? Shouldn’t age give rise to wisdom – the kind of wisdom which stops you shipping updates which cause High Availability VMs to unexpectedly reboot, or shipping beta time-bomb code in a release product? It’s an interesting debating point whether VMware had that wisdom and lost it – if so they have passed through maturity and reached senility.

Third Party vendor support. Here’s a photo. At a meet-the-suppliers event one of our customers put on, they had us next to VMware. Notice we’ve got System Center Virtual Machine Manager on our stand, running in a VM, managing two other Hyper-V hosts which happen to be clustered, but the lack of traffic at the VMware stand allows us to see they weren’t showing any software – a full demo of our latest and greatest needs 3 laptops, and theirs? Well, the choice of hardware is a bit limiting. There is a huge range of management products to augment Windows – indeed the whole reason for bringing System Center in is that it manages hardware, virtualization (including VMware) and virtualized workloads. When Elias talks of 3rd party vendors I think he means people like him – and that would mean he’s saying you should buy VMware because that’s what he sells.

This post originally appeared on my technet blog.

December 15, 2009

Server 2008 R2 feature map.

Filed under: Photography,Windows Server 2008-R2 — jamesone111 @ 7:33 pm

One of the popular giveaways at our events this year has been the feature poster for Server 2008-R2 – which is now available for download. I think the prints were A2 size, although at 300 DPI it is closer to A1 dimensions – the paper copies have all gone, although I’m told more are being printed if you want a paper copy.

One of my fellow evangelists thought it was a great application for Silverlight Deep Zoom (the technology formerly known as Seadragon), and I have to say I agree. What better way is there to look at 93 megapixels’ worth of image?
The buttons in the lower right corner include maximize so you can use your full screen to view it. I haven’t got a wheel mouse plugged in at the moment but that is the best way to zoom in and out.

This post originally appeared on my technet blog.

November 26, 2009

How to deploy Windows: Windows deployment services.

Filed under: Windows 7,Windows Server 2008,Windows Server 2008-R2,Windows Vista — jamesone111 @ 12:23 pm

I saw something recently – it must have been in the discussion about Google’s new bootable-browser “operating system” – which talked about it taking hours to install Windows. I didn’t know whether to get cross or to laugh. Kicking around on YouTube is a video I made of putting Windows 7 on a netbook from a USB key (the technical quality of the video is very poor for the first couple of minutes; the installation starts in the third minute). It took me 25 minutes from powering on for the first time to getting my first web page up. It was quick because I installed from a USB flash device. It would be quicker still on a higher spec machine, especially one with a fast hard disk.

Installing from USB is all very well if you can go to the machine(s) to do the installation(s). But if you have many machines to install, or you want to have users or other team members install at will, then Windows Deployment Services is a tool you really should get to know. WDS was originally a separate download for Server 2003, then it got rolled into the product so it is just an installable component in Server 2008 and 2008-R2. There are other add-ons which round out deployment capabilities but there are 3 scenarios where WDS alone is all you need.

  1. Deploying the “vanilla” Windows image to machines. This can be Windows Vista, Windows Server 2008, Server 2008-R2 or Windows 7. I haven’t checked on deploying Hyper-V Server; it may be a special case because the generic setup process may not create a line that’s needed in the boot configuration database.
  2. Deploying a Windows image, customized with an unattend.xml file – again the same version choices are available, but now if you want to install with particular options turned on or off you can do so (the Windows Automated Installation Kit helps with the creation of this file, among other things).
  3. Creating a “Gold Image” machine with applications pre-installed, and capturing that image and pushing it out to many different machines [there are a few applications which don’t like this, so sometimes it is better to run something to install them separately].

One thing which many people don’t seem to realise is that since Vista arrived one 32-bit image can cover all machines, and one 64-bit image can be used on all 64-bit machines. Those images not only handle differences in hardware but can also be multi-lingual.

By itself WDS doesn’t address installing combinations of applications and images, nor does it automate the process of moving users’ data off an old machine and onto a new machine. I’ll talk about some of these things in future posts: but if you are thinking about the skills you’ll need to do a deployment of Windows 7 (for example), understanding WDS is a key first step; the next step is answering the question “What do I need that WDS doesn’t give me?”
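As a rough sketch of what getting started looks like on Server 2008 R2 – installing the role and adding the default boot and install images from the Windows 7 media – something like the lines below does it. The paths and the D: media drive are assumptions; run it elevated on a domain-joined server.

Import-Module ServerManager
Add-WindowsFeature WDS                                             # the Windows Deployment Services role
wdsutil /Initialize-Server /RemInst:"C:\RemoteInstall"             # create the remote installation folder and share
wdsutil /Add-Image /ImageFile:"D:\sources\boot.wim" /ImageType:Boot
wdsutil /Add-Image /ImageFile:"D:\sources\install.wim" /ImageType:Install
wdsutil /Set-Server /AnswerClients:All                             # answer any PXE client (tighten this up later)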

Because I have to deploy a lot of servers, I put together a video showing WDS being used to deploy Windows Server (Server Core is also the smallest OS and so the quickest to install as a demo). Because my servers are mostly virtualized I have another video in the pipeline showing System Center Virtual Machine Manager doing deployments of built VMs.
You get an idea of the power of WDS, but the fact the video is only 6 minutes long also gives you an idea of its simplicity.

This post originally appeared on my technet blog.

November 25, 2009

Announcing the PowerShell Configurator.

Filed under: Powershell,Virtualization,Windows Server 2008-R2 — jamesone111 @ 11:17 pm

For a little while I have had a beta version of a project I call PSCONFIG on CodePlex. I’ve changed a couple of things but from the people who have given it a try, it seems that it is working pretty well. It’s aimed at servers running either Hyper-V Server R2 or Core installations of Windows Server 2008 R2, although it can be useful on just about any version of Windows with PowerShell V2 installed. Here is a breakdown of what is included (there’s a short usage sketch after the list).

Installed software, product features, drivers and updates
* Add-Driver, Get-Driver
* Add-InstalledProduct, Get-InstalledProduct, Remove-InstalledProduct
* Add-WindowsFeature, Get-WindowsFeature, Select-WindowsFeature, Remove-WindowsFeature
* Add-HotFix, Add-WindowsUpdate, Get-WindowsUpdateConfig, Set-WindowsUpdateConfig

Networking and Firewall
* Get-FirewallConfig, Set-FirewallConfig, Get-FirewallProfile, Get-FireWallRule, New-FirewallRule
* Get-NetworkAdapter, Select-NetworkAdapter, Get-IpConfig, New-IpConfig, Remove-IpConfig, Set-IpConfig

Licensing
* Get-Registration, Register-Computer

Page file
* Get-PageFile, Set-AutoPageFile

Shut-down event tracker
* Get-ShutDownTracker, Set-ShutDownTracker

Windows Remote Management
* Get-WinRMConfig, Disable-WinRm

Remote Desktop
* Get-RemoteDesktopConfig, Set-RemoteDesktop

Misc
* Rename-Computer
* Set-DateConfig, Set-iSCSIConfig, Set-RegionalConfig
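To give a flavour of how you would poke at it once installed, something like this is the place to start – note that the module name and the idea of loading it this way are my assumptions rather than anything from the project documentation:

Import-Module PSConfig            # load the configurator (module name assumed)
Get-Command -Module PSConfig      # list every cmdlet it adds, i.e. the breakdown above
Get-Help New-FirewallRule -Full   # each cmdlet carries the usual on-line help
Get-WindowsFeature                # e.g. see what is installed or installable on the core box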

It has a menu so it can replace the SCONFIG VB utility; to show how it works I’ve put a video on YouTube (see below). It includes on-line help and there is a user guide available for download from CodePlex. The documentation is one step behind the code, although the only place where I think this matters is that the New-FirewallRule command doesn’t have any explanation – hopefully if you use tab to work through the parameters it is obvious. Obviously, as a release candidate I’m looking for feedback before declaring it final. CodePlex discussions are the best place for that.


This post originally appeared on my technet blog.

November 2, 2009

You can’t be a 21st century admin without PowerShell

Filed under: General musings,Windows Server 2008-R2 — jamesone111 @ 6:11 pm

When I was at school my father gave me a copy of an article he’d seen at work. I remember nothing of the article itself, but the title has stayed with me: “You can’t be a 20th century man without maths”. I think even then “You can’t be a [time] [person] without [skill]” was a snowclone – I’ve adapted it from time to time – hence the title. In a recent conversation someone asked me if I knew “that sunscreen thing” which was turned into a song by Baz Luhrmann (on YouTube) and, having been reminded of it, I found it wanted to morph into the opening of a session I was doing on managing Server 2008-R2. The original began:

Ladies and gentlemen of the class of ’97: Wear sunscreen.
If I could offer you only one tip for the future, sunscreen would be it. The long-term benefits of sunscreen have been proved by scientists, whereas the rest of my advice has no basis more reliable than my own meandering experience. I will dispense this advice now.

It was all I could do to avoid opening with “Ladies and gentlemen: Learn PowerShell. If I could offer you only one tip for the future, PowerShell would be it. The long-term benefits of PowerShell…”

I’ve been saying the same thing in different ways a lot recently. The slide on the left was in the session I delivered at the big Wembley event in October. A few people picked up that I’d said “Everyone should learn PowerShell”, and I’ve since had to explain that this requires a suitable definition of “Everyone”. But it is my firm belief that IT professionals working with Microsoft technology are at an advantage if they know at least the basics of PowerShell. Being able to automate complex processes, and show that the steps have been followed, isn’t a new idea; it always was important in the mainframe and mini computer world. There are plenty of situations where using a graphical interface is easier than using an obtuse command line tool, yet the focus on GUI tools in the Microsoft world means that command line and scripting skills are less prevalent among system admins than is the case in the Unix / Linux world. Those skills can mean better efficiency, or allow tasks to be carried out which would otherwise be impractical. If the setup is simple or IT management is not a person’s main job, doing the work optimally matters less because there isn’t much of it. If there is little repetition, writing a script takes more time than it saves. When IT is your main role and includes repetition of complex tasks, then scripting puts you ahead. Of course I equate “scripting” with “PowerShell”, which simplifies things too much: the tools will vary between environments – I took the following list from one of the slides in the Wembley deck: it is not designed to be complete but to show the pre-eminence of PowerShell in the Microsoft world.

In Server-R2 there is:
  • PowerShell for Active Directory
  • PowerShell for Applocker
  • PowerShell for Best Practices
  • PowerShell for BITS transfer
  • PowerShell for Clustering
  • PowerShell for Group Policy
  • PowerShell for Installing components
  • PowerShell for Migration
  • PowerShell for Remote-Desktop
  • PowerShell for Server Backup
  • PowerShell for Web admin

Not forgetting that we also have:
  • PowerShell for Exchange 2007
  • PowerShell for HPC
  • Powershell for HyperV @ codeplex.com
  • PowerShell for OCS in the OCS Res-kit
  • PowerShell for SQL 2008 R2
  • PowerShell for System Center

You can see anyone who says “I don’t do PowerShell” is at a disadvantage, and the first thing to explain to them is that opening up a PowerShell window and running the cmdlets which are provided by any of the above is no different from starting a CMD.EXE window and entering commands there – in fact it’s easier because the way parameters and help are handled is consistent. The idea of an environment extended with task-related snap-ins, which we saw with the GUI management console, is the same in PowerShell – we load something which understands the task into an environment which provides the UI. The cmdlets are just a foundation: building things up from them takes things to another level. But you can build things up around free-standing programs too – by allowing them to be scripted, PowerShell makes it possible to deliver things which otherwise would be too time consuming. The example I’ve been using to show this is the following: in Server 2008 R2 we have a new feature called off-line domain join; ODJ allows you to create a domain account for a computer, and a file containing the information needed for that computer to be added to the domain. This file can be applied to the OS offline, without needing to boot it, logon as an administrator, connect and change the computer name and member-of setting from the default workgroup to the chosen domain. The command to do this is a traditional .EXE and it looks like this.

djoin /provision /domain MyDomain /machine MachineName /savefile filename

Great … but what if you have 1000 machines ? Are you really going to sit there all day typing the names in, and checking you didn’t mistype any or miss any out ? If you have a list of machine names in a text file, you could do this with PowerShell

Get-content Machines.Txt | forEach-object {djoin /provision /domain MyDomain /machine $_ /savefile $_ }

For each machine name (line)  in the file machines.txt the command will run djoin with that name as both the machine parameter and the filename parameter.
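That’s the provisioning half; for completeness, the matching step on the target machine (or an offline image) looks roughly like the lines below – the blob file name is whatever /savefile produced above, and the paths are purely illustrative:

djoin /requestODJ /loadfile .\Machine001 /windowspath C:\Windows /localos
(against the running OS – the join takes effect at the next boot)

djoin /requestODJ /loadfile .\Machine001 /windowspath D:\Mount\Windows
(against an offline image mounted under D:\Mount)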

The successful admin is not automatically the one who knows every possible way to use every possible command in PowerShell, nor the one who turns their back on the GUI to do everything from the command line, but the one who understands the tools available for the task at hand, can select the right one, and can put it to use competently. PowerShell is one of the tools available in so many cases in the Microsoft world that you can’t meet that definition without it.

This post originally appeared on my technet blog.

October 17, 2009

More on VHD files

Filed under: Virtualization,Windows 7,Windows Server 2008-R2 — jamesone111 @ 2:42 pm

I’ve had plenty to say about the uses of VHD files on different occasions. They get used anywhere we need to have a file which contains an image of a disk. So from Vista onwards we have had complete image backup to VHD, we use VHD for holding the virtual disk to be used by a Virtual Machine (be it hyper-V , Virtual PC or Virtual Server – the disks are portable although the OS on them might be configured to be bootable on one form of virtualization and not another), and so on.

Most interesting of all, with Windows 7 and Server 2008 R2 the OS can boot from a VHD file – if you try to do this with an older OS it will begin to boot and then, when it discovers it is not on a native hard disk, it all goes the way of the pear. However an older OS can be installed on the “native” disk with a newer OS in a VHD, provided that the boot loader is new enough to understand boot from VHD. I’ve done this with my two cluster node laptops – I can boot into Server 2008 “classic” or into 2008 R2: the latter is contained in a VHD and so I don’t have to worry about re-partitioning the disk or having different OSes in different folders. The principles are the same but the process is a bit complicated for XP and for Server 2003 – but Mark has a guest post on his blog which gives a step by step guide. In theory it should work on any OS which uses NTLDR and Boot.ini all the way back to NT 3.1 – though I will admit I’ve only run XP and Server 2003 in virtual machines since Hyper-V went into beta.

Of course being able to mount VHDs inside Windows 7 and Server 2008 R2 gives you an alternative way of getting files back from a backup, and I’ve got a video on TechNet Edge showing that and some of the other uses. My attempts to modify a backup VHD into a Virtual Machine VHD have failed – I can access the disk in a VM, but my attempts to find the right set of incantations to make it bootable have left me feeling like one of the less able students at Hogwarts. Into this mix comes a new Disk2VHD tool from Mark Russinovich and Bryce Cogswell – Mark is the more famous member of the team, but if you do a search on Bryce’s name you’ll see his background with Sysinternals, so Disk2VHD comes with an instant provenance. There are multiple places where this tool has a use: lifting an existing machine to make a boot-from-VHD image or a virtual machine, or as a way of doing an image backup which can be used in a VM.
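If you would rather script it than click, Disk2VHD also has a command-line form – roughly as below, with the drive letter and output path being examples rather than anything prescriptive:

.\disk2vhd.exe C: D:\Backups\my-machine.vhd     # capture the C: volume into a VHD (run from the folder holding the Sysinternals tool)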


This post originally appeared on my technet blog.

September 12, 2009

For your viewing pleasure …

Filed under: Blogcasts,Windows 7,Windows Server 2008-R2 — jamesone111 @ 9:51 pm

For the last few weeks, on and off, Andrew and I have been working on a set of videos on Windows 7 and Server 2008-R2, which are now available on YouTube. When we were kicking ideas around we came up with the idea of filming Andrew drawing cartoons of what the screencasts will cover. Like most good collaborations, once we got the idea neither of us is terribly certain who thought of what, and when we were just being a listening post for the other. Andrew understood we could have click-through navigation long before I did, so you click on something he has drawn and go to one of the screencasts which shows the detail. I must admit I never thought I’d end up spending an afternoon filming the linking up of cartoons with coloured string to illustrate remote desktop and VDI, and the thought is always “Is this a great idea, or a waste of time?”. So far we’ve had some very pleasing comments and if people have got suggestions for more stuff please let me (or Andrew) know about them. Other feedback is always welcome too.

This post originally appeared on my technet blog.

August 17, 2009

How to Boot from VHD (VHD booting re-visited.)

Filed under: Windows 7,Windows Server 2008-R2 — jamesone111 @ 6:17 pm

Some while back I wrote about boot from VHD. To re-cap, in Windows 7 and Server 2008 R2 (including Core, and Hyper-V Server R2) the boot loader is capable of mounting a VHD file and booting from it as though it were a physical disk. There is no virtualization going on, just the necessary smarts to use the same format. If you try to use this to boot older operating systems the boot process will start, but the machine will crash quite early on when it finds the system/boot device(s) aren’t really disks (virtualization hides this fact).


So for it all to work, first you need the BootMgr from Windows 7 / Server 2008-R2 (it lurks in the hidden System partition). Obviously you have this if the main OS on the machine is Windows 7 or Server 2008-R2, but if you want to add a VHD as a second OS on a system running Vista / Server 2008 you can just update BootMgr (the easiest place to get it is probably the install DVD). It supports some new features but the Boot Configuration Database file (BCD) your system already has remains valid – it just doesn’t contain any of the new features, so this should have no unwanted side effects.


Second you need a VHD image, into which you have installed an OS which understands boot from VHD. This is easy enough to build in Hyper-V but there are other ways. There are TWO preparation steps which I forgot: the first is that the VHD needs to be sysprep’d (%windir%\system32\sysprep\sysprep.exe) otherwise you’ll create conflicting clones. This can be fixed once you have got the OS booting from VHD, but you won’t get it to boot unless you create a boot configuration database on the partition where the OS resides in the VHD. You should be able to do this when you’re building the VHD, or you can mount the VHD and do it after the fact; either way you are looking for %windir%\system32\bcdboot.exe – you’ll find it on Windows 7 and Server 2008-R2, but NOT on Hyper-V Server and not on older OSes. Run it as bcdboot V:\Windows /s V: where V: is the volume letter for a mounted VHD, or the drive letter of the boot drive if the OS is running in Hyper-V. I omitted this step when updating my release candidate set-ups to final code last night and it throws an ugly error. That led to my second mistake: I thought the problem was with the BCD.


Since I wasn’t thinking clearly (I thought ‘I’ll just do this before I go to bed’!) I went to edit the BCD. Now… after the explanation above you’d be able to work out that you need to edit the BCD with the latest tools (BCDEdit). If you’re working on Windows 7 / Server 2008-R2 you don’t need to worry. But if you’re adding a VHD-booting OS to a machine running Windows Vista / Server 2008 (which I was) and try to manage boot from VHD with the tools those OSes provided, you’ll be on a path to frustration and (in my case) insomnia. After much puzzlement, I ended up back at a BCD which would only boot Windows Server 2008, with the right version of BCDEdit.
So I was ready for the third step. I had my files in C:\VHD\Win7.VHD so the 3 commands I needed were to clone the existing entry and modify it for boot from VHD, thus


bcdedit /copy {default} /d "Windows Server 2008-R2 From VHD"


/d specifies the description you’ll see in the boot menu; the command will copy the default entry and return a GUID, e.g. {cbd971bf-b7b8-4885-951a-fa03044f5d71} – copy the GUID, you’ll need it in the next 2 commands. If there is no OS on the drive you can copy the boot folder from the Windows setup disk and modify the {default} entry in the same way as you’d modify the copy, just use {default} in place of the GUID.


bcdedit /set {cbd971bf-b7b8-4885-951a-fa03044f5d71} device "vhd=[locate]\vhd\win7.vhd"


bcdedit /set {cbd971bf-b7b8-4885-951a-fa03044f5d71} osdevice "vhd=[locate]\vhd\win7.vhd"


(use your own guid, obviously)


You can check any of the other settings for booting the OS with bcdedit /enum. If they need changing it’s back to bcdedit /set but if it looks right, it’s time to reboot and see if it works. I put this last part into a video I have on TechNet edge which shows some of the other uses of VHD files, but I glossed over making the VHD and getting the right versions of the tools. 
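If you want to script the part I glossed over – making and mounting the VHD so that bcdboot has something to point at – a diskpart sketch looks roughly like this. The size, paths and the V: letter are all illustrative, and you still have to get Windows onto the volume (from Hyper-V, ImageX or similar) before running bcdboot:

@"
create vdisk file="C:\VHD\Win7.vhd" maximum=25000 type=expandable
select vdisk file="C:\VHD\Win7.vhd"
attach vdisk
create partition primary
format fs=ntfs quick
assign letter=V
exit
"@ | Set-Content vhd.txt
diskpart /s vhd.txt                 # create the VHD, attach it and give it the V: drive letter
# ... install or apply Windows 7 / Server 2008-R2 into V:\ ...
bcdboot V:\Windows /s V:            # the step from the post: put a BCD on the VHD's own partition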


Incidentally, in my last post I mentioned the announcement that Hyper-V Server now boots from flash; provided you have made a bootable flash device, the steps above will let you set this up. Be warned though, that to be properly supported the setup will need to be “blessed” by the hardware vendor.





Update. Mark left a comment with a link to a more thorough post than this. Just a quick side note though: boot from USB should work in theory for any Windows 7 family OS, including Server 2008 R2. However in practice there is some work done in Hyper-V Server so that the USB device doesn’t get lost for a short time during the boot process. This isn’t a problem booting from a hard disk, but will crash the OS booting from USB. So boot from USB is Hyper-V Server and Windows PE only.

This post originally appeared on my technet blog.

August 14, 2009

VMware – the economics of falling skies … and disk footprints.

Filed under: Virtualization,Windows Server 2008,Windows Server 2008-R2 — jamesone111 @ 4:36 pm

There’s a phrase which has been going through my head recently: before coming to Microsoft I ran a small business; I thought our bank manager was OK, but one of my fellow directors – someone with greater experience in finance than I’ll ever have – sank the guy with 7 words: “I have a professional disregard for him.” I think of “professional disregard” when hearing people talk about VMware. It’s not that people I’m meeting simply want to see another product – Hyper-V – displace VMware (well, those people would, wouldn’t they?), but that nothing they see from VMware triggers those feelings of “professional regard” which you have for some companies – often your toughest competitor.

When you’ve had a sector to yourself for a while, having Microsoft show up is scary. Maybe that’s why Paul Maritz was appointed to the top job at VMware. His rather sparse entry on Wikipedia says that Maritz was born in Zimbabwe in 1955 (same year as Bill Gates and Ray Ozzie, not to mention Apple’s Steve Jobs, and Eric Schmidt – the man who made Novell the company it is today) and that in the 1990’s he was often said to be the third-ranking executive in Microsoft (behind Gates and Steve Ballmer, born in early 1956). The late 90s was when people came to see us as “the nasty company”. It’s a role that VMware seem to be sliding into: even people I thought of as being aligned with VMware now seem inclined to kick them.

Since the beta of Hyper-V last year, I’ve been saying that the position was very like that with Novell in the mid 1990s. The first point of similarity is on economics. Novell NetWare was an expensive product, with the kind of market share where a certain kind of person talks of “monopoly”. That’s a pejorative word, as well as one with special meanings to economists and lawyers. It isn’t automatically illegal or even bad to have a very large share (just as very large proportions in parliament can make you either Nelson Mandela or Robert Mugabe). Market mechanisms which act to ensure “fair” outcomes rely on buyers being able to change to another seller (and vice versa – some say farmers are forced to sell to supermarkets on unfair terms); if one party is locked in then terms can be dictated. Microsoft usually gets accused of giving too much to customers for too little money. Economists would say that if a product is over-priced, other players will step in – regulators wanting to reduce the amount customers get from Microsoft argue they are preserving such players. Economists don’t worry so much about that side, but say new suppliers need more people to buy the product, which means lower prices, so a new entrant must expect to make money at a lower price: they would say if Microsoft makes a serious entry into an existing market dominated by one product, that product is overpriced. Interestingly I’ve seen the VMware side claim that Hyper-V, Xen and other competitors are not taking market share and VMware’s position is as dominant as ever.

The second point of similarity is that when Windows NT went up against entrenched NetWare it was not our first entry into networking – I worked for RM where we OEM’d MS-NET (a.k.a. 3COM 3+ Open, IBM PC-LAN program) and OS/2 LAN Manager (a.k.a. 3+Open). Though not bad products for their time – like Virtual Server – they did little to shift things away from the incumbent. The sky did not fall in on Novell when we launched NT, but that was when people stopped seeing NetWare as the only game in town [a third point of similarity]. Worse, new customers began to dismiss its differentiators as irrelevant, and that marks the beginning of the end.
Having been using that analogy for a while it’s nice to see no less a person than a Gartner Vice President, David Cappuccio, envisaging a Novell-like future for VMware. In a piece entitled “Is the sky falling on VMware” SearchServerVirtualization.com also quotes him as saying that “‘good enough’ always wins out in the long run”. I hate “good enough” because so often it is used for “lowest common denominator”. I’ve kept the words of a Honda TV ad with me for several years.

Ever wondered what the most commonly used word in the world is?
”OK”
Man’s favourite word is one which means all-right, satisfactory, not bad
So why invent the light bulb, when candles are OK ?
Why make lifts, if stairs are OK ?
Earth’s OK, Why go to the moon ?
Clearly, not everybody believes OK is OK.
We don’t.

Some people advance the idea that we don’t need desktop apps because web apps are “good enough”. Actually, for a great many purposes, they aren’t. Why have a bulky laptop when a netbook is “good enough”? Actually for many purposes it is not. Why pay for Windows if Linux is ‘free’… I think you get the pattern here. But it is our constant challenge to explain why one should have a new version of Windows or Office when the old version was “good enough”. The answer – any economist will give you – is that when people choose to spend extra money, whatever differentiates one product from the other is relevant to them and outweighs the cost (monetary or otherwise, real or perceived): then you re-define “good enough” – the old version is not good enough any more. If we don’t persuade customers of that, we can’t make them change. [Ditto people who opt for Apple: they’d be spectacularly ignorant not to know a Mac costs more, so unless they are acting perversely they must see differentiators, relevant to them, which justify both the financial cost and the cost of forgoing Windows’ differentiators. Most people, of course, see no such thing.] One of the earliest business slogans to get imprinted on me was “quality is meeting the customer’s needs”: pointless gold-plating is not “quality”. In that sense “good enough” wins out: not everything that one product offers over and above another is a meaningful improvement. The car that leaves you stranded at the roadside isn’t meeting your needs however sophisticated its air conditioning, the camera you don’t carry with you to shoot photos isn’t meeting your needs even if it could shoot 6 frames a second, the computer system which is down when you need it is (by definition) not meeting your needs. A product which meets more of your needs is worth more.

A supplier can charge more in the market with choices (VMware, Novell, Apple) only if they persuade enough people accept the differentiators in their products meet real needs and are worth a premium. In the end Novell didn’t persuade enough, Apple have not persuaded a majority but enough for a healthy business, and VMware ? Who knows what enough is yet, never mind if they will get that many. If people don’t see the price as a premium but as a  legacy of being able to overcharge when there was no choice then it becomes the “VMware tax” as  Zane Adam calls it in our video interview. He talked about mortgaging everything to pay for VMware: the product which costs more than you can afford doesn’t meet your needs either, whatever features it may have.

I’ll come back to cost another time – there’s some great work which Matt has done which I want to borrow rather than plagiarize. It needs a long post and I can already see lots of words scrolling up my screen, so I want to give the rest of this post to one of VMware’s irrelevant feature claims: disk footprint. Disk space is laughably cheap these days, and in case you missed the announcement, Hyper-V Server now boots from flash – hence the video above: before you run off to do this for yourself, check what set-ups are supported in production. And note it is only Hyper-V Server, not Windows Server or client versions of Windows. The steps are all on this blog already. See How to install an image onto a VHD file (I used a fixed size of 4GB). Just boot from VHD stored on a bootable USB stick. Simples.

I’ve never met a customer who cares about a small footprint: VMware want you to believe a tiny little piece of code must need less patching, give better uptime, and be more trustworthy than a whole OS – even a pared-down one like Windows Server Core or Hyper-V Server. Now Jeff, who writes on the virtualization team blog, finally decided he’d heard enough of this and that it was time to sink it once and for all. It’s a great post (with follow-up). If you want to talk about patching and byte counts, argues Jeff, let’s count bytes in patches over a representative period: Microsoft Hyper-V Server 2008 had 26 patches, not all of which required re-boots, and many were delivered as combined updates. They totalled 82 MB. VMware ESXi 3.5 had 13 patches, totalling over 2.7 GB. That’s not a misprint – 2700 MB against 82 (see, VMware sometimes does give you more); that’s because VMware releases a whole new ESXi image every time they release a patch, so every ESXi patch requires a reboot. Could that be why VMotion (Live Migration, as now found in R2 of Hyper-V) seemed vital to them and merely important to us? When we didn’t have it, it was the most relevant feature. Jeff goes to town on VMware software quality – including the “Update 2” debacle; that wasn’t the worst thing though. The very worst thing that can happen on a virtualized platform is VMs breaking out of containment and running code on the host: since the host needs to access the VMs’ memory for snapshots, saving and migration, a VM that can run code on the host can impact all the other VMs. So CVE-2009-1244 – “A critical vulnerability in the virtual machine display function allows a guest operating system users to execute arbitrary code on the host OS” – is very alarming reading.

And that’s the thing – how can you have a regard for a competitor who doesn’t meet the customer’s needs on security or reliability, and who calls on things like disk space to justify costing customers far, far more money?

This post originally appeared on my technet blog.

August 13, 2009

Core parking in Server 2008 R2 – why it’s like airport X-ray machines.

Filed under: Windows Server 2008-R2 — jamesone111 @ 10:40 pm

It’s third time lucky for this post… a couple of weeks ago I was at our big training event “Tech-Ready”, and my laptop blue-screened on me, citing memory as the cause. On my return I started to write this post and came back to find the laptop had rebooted after a crash. Within 24 hours it had done it a second time, and I saw it do it a third time with memory getting the blame at a blue screen. I ran the on-board diagnostics, which told me there was nothing wrong, but the Windows 7 memory diagnostic came back with the message you can see. So it looks like the laptop needs a bit more TLC from Dell. Fortunately I have a spare carcass I can put my hard disk into so I’m not totally stranded. It’s also reminded me that the auto-save option in Windows Live Writer is not on by default.

I don’t much like spending time away from home these days, queuing for airport security is a chore (in fact there is very little fun in flying) and the flight is a huge part of my annual carbon footprint, so going to Seattle for Tech-Ready is something which makes me stop and think: and each time I decide that it is still worth going. Sometimes it is the chance to network with colleagues from round the world I just wouldn’t get to hang out with, sometimes it is the deep technical content, sometimes it is the ability of people like Ray Ozzie to just engage me with ideas. Often it’s a combination. I was mighty impressed with Ray’s first appearance last year, and this year we got a taste of some ideas which will end up in products soon, some that are on a slower burn, and some we’ll look back on as an interesting flight of fancy. Of the “wow” demos we saw it’s not always possible to predict which will end up in which group. Ray quoted Steve Ballmer saying something like “We’re not going to go home, we’re going to keep coming and coming and coming” – which has echoes of the Blue Monster about it. Ray put it a different way: yes the economy is bad and we’re not immune to it, but if we cut back on R&D then… well, he didn’t actually use the words “you’ll regret it. Maybe not today. Maybe not tomorrow, but soon and for the rest of your life.”* but that was what it came down to.

If Ray was the big vision then Mark Russinovich had some of the best detail, and he talked about core parking in Server 2008 R2. It dawned on me that Research isn’t just about some of the blue sky stuff that we saw in Ray’s session, it is sometimes about going back to problems you thought you’d solved: like the process scheduler.

When I studied Computer Science at university in the mid 1980s we covered the theory of operating system design and I still have the text book. It describes the job of the process scheduler like this

  1. [Decide if] the current process on this processor is still the most suitable to run. If it is return control to it, otherwise …
  2. Save its volatile environment (registers, instruction pointer etc)
  3. Restore the volatile environment of the most suitable process
  4. Return control to that process

As a model it’s something we hardly need to think about: it has coped with the arrival of multi-processor environments – notice it said “this processor”; the scheduler just looks at each processor of a multi-core/multi-chip environment and repeats the task. “Most suitable” covers a variety of possibilities, and the introduction of multi-threading meant nothing more than saying you could have more than one instruction pointer and register state for a single process, each one representing a different thread of execution. We know there is often not a thread ready to go – if there were, we’d be seeing utilization running at 100%. This doesn’t break the model, and as soon as a thread becomes ready it gets scheduled on an idle processor. That means the load on the processors is roughly equal, which seems “fair” and so “good”. We’d assume that also gives better performance, and here we need to go and look at how performance is defined for queues – something I studied before university, and I get reminded of every time I fly.

People I’ve asked think we have longer queues for airport security because “the more complex checks of today take longer”. But that can’t be the whole story, because the number of people flying has stayed about the same: the total number of passengers per hour who need to be processed hasn’t changed. If time-to-process were the only thing to change, the queue would get longer and longer all day, or we’d need to lengthen the gaps between flights and have fewer flights or a longer day. That hasn’t happened. As passengers we notice queue-length or time-in-queue, but the airport measures “units processed per hour”. If the queue looks like overflowing, another X-ray machine is opened, and when it gets shorter, staff can take a break or attend to other duties. So the minimum amount of “processor” time is used to process all the passengers going through the airport. If every processor were running all day there would be minimal queues – but a great deal of wasted capacity.

core-parking

This model turns out to be very good for other kinds of processors. Modern CPUs can close down a core or a whole socket, which can save a lot of power. The amount varies, but it can be hundreds of watts for a large server, and then the same again in data-centre air-con to take away the heat. If you want to save 1 ton of CO2 in a year (on National Energy Foundation numbers), you need to save 45 kilowatt-hours per week, and it’s easy to see servers which save enough watts for enough hours to make that number. But if every processor picks up the first thread waiting for processing, the savings won’t amount to a hill of beans.** That’s what most schedulers do, because they assume that if there is at least one runnable thread, the best thing to do must be to get a thread onto an idle processor. Some smart person went back to scheduler design with the new idea that the “most suitable thread” might be no thread at all: let the processing unit (core) go idle and into a low-power state, or power it down completely and park it. This is a complex decision for a scheduler because it involves questions like “how long must the processor be off to save more than the energy it takes to bring it back?” and “if this processor stops processing, what will happen to time-in-queue and queue-length?”. Intuitively we’d say that queuing threads must give lower performance, but the processor will still complete the tasks faster than they go into the queue. Change your measure to work done per unit time and hey presto…
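Going back to the energy arithmetic for a moment, here is a rough back-of-the-envelope calculation in PowerShell. Only the 45 kWh/week ≈ 1 tonne figure comes from the NEF numbers above; the wattage and hours are purely illustrative guesses.

# Back-of-the-envelope CO2 saving from parking cores (illustrative numbers only)
$wattsSaved  = 300      # guess: power saved while cores/sockets are parked
$hoursPerDay = 12       # guess: hours per day the server is lightly loaded
$kWhPerWeek  = ($wattsSaved * 2) * $hoursPerDay * 7 / 1000    # *2 to allow for the air-con
$tonnesCO2PerYear = $kWhPerWeek / 45                          # NEF: 45 kWh/week ~ 1 tonne/year
"{0:N1} kWh per week, roughly {1:N1} tonnes of CO2 per year" -f $kWhPerWeek, $tonnesCO2PerYear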

Core parking doesn’t need any particularly fancy hardware – I checked on this 2-year-old laptop (before I had to put the drive from my regular laptop into it) and you can see from Perf-Mon that it was working (it’s on by default). Better yet, although older OSes have no concept of core parking, if you are running Hyper-V then the hypervisor schedules the processors the VMs see, so they get the benefit of core parking anyway. There are significant savings to be made here over and above what virtualization was already giving.
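If you want to check your own machine, one way – a sketch, assuming your build exposes the “Parking Status” counter under the “Processor Information” object that Perf-Mon shows – is to read the counters from PowerShell:

# Sketch: list which logical processors are currently parked (1 = parked, 0 = not)
Get-Counter -Counter '\Processor Information(*)\Parking Status' |
    Select-Object -ExpandProperty CounterSamples |
    Select-Object InstanceName, CookedValue |
    Format-Table -AutoSize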

 

 

 * That was Humphrey Bogart in Casablanca

** OK, that’s a different greenhouse gas.

This post originally appeared on my technet blog.

July 23, 2009

Oink flap –– Microsoft releases software under GPL — oink Flap

Mary-Jo has a post about the release of our Hyper-V drivers for Linux entitled Pigs are flying low: Why Microsoft open-sourced its Linux drivers. It’s one of many out there, but the title caught my eye, so I thought I’d give a little of my perspective on this unusual release. News of it reached me through one of those “go ahead and share this” mails earlier this week, which began

NEWSFLASH: Microsoft contributes code to the Linux kernel under GPL v2.
Read it again – Microsoft contributes code to the Linux kernel under GPL v2!
Today is a day that will forever be remembered at Microsoft.

Well indeed… but hang on just a moment: we’re supposed to “hate” the GPL aren’t we ? And we’re not exactly big supporters of Linux … are we ? So what gives ? Let’s get the GPL thing out of the way first:

For as long as I can remember I’ve thought (and so has Microsoft) that whoever writes a book or a piece of software, paints a picture or takes a photo should have the absolute right to decide its fate. [Though the responsibilities that come with a large share of an important market apply various IFs and BUTs to this principle]. Here in the UK that’s what comes through in the Copyright, Designs and Patents Act, and I frequently find myself explaining to photographers that the act tilts things in their favour far more than they expect. Having created a work, you get the choice whether to sell it, give it away, publish the source code, whatever. The GPL breaks that principle by saying, in effect, “if you take what I have given away, and build something around it, you must give your work away too and force others to give their work away, ad infinitum”; it requires an author of a derivative work to surrender rights they would normally have. The GPL community would typically say don’t create derivative works based on theirs if you want those rights. Some in that community – it’s hard to know how many, because they are its noisiest members – argue for a world where there is no concept of intellectual property (would they argue you could come into my garden and take the vegetables that stem from my physical work? Because they do argue that you can just take the product of my mental work). Others argue for very short protection under copyright and patent laws: ironically a licence (including the GPL) only applies for the term of copyright; after that others can incorporate a work into something which is treated as wholly their own. However we should be clear that GPL and Open Source are not synonyms (and Mary-Jo wasn’t treating them as such in her title). Open source is one perfectly valid way for people to distribute their works – we want Open Source developers to write for Windows, and as I like to point out to people, this little project here means I am an Open Source Developer and proud of it. However I don’t interfere with the rights of others who re-use my code, because it goes out under the Microsoft Public License: some may think it ironic that it is the Microsoft licence which gives people freedom, while those who make most noise about “free software” push a licence that constrains people.

What are we doing? We have released the Linux Integration Components for Hyper-V under a GPL v2 license, and the synthetic drivers have been submitted to the Linux kernel community for inclusion in upcoming versions of the Linux kernel. The code is being integrated into the Linux kernel tree via the Linux Driver Project, which is a team of Linux developers that develops and maintains drivers in the Linux kernel. We worked very closely with Greg Kroah-Hartman to integrate our Linux ICs into the Linux kernel. We will continue to develop the Integration Components, and as we do we will contribute the code to the drivers that are part of the kernel.
What is the result ? The drivers will be available to anyone running an appropriate Linux kernel. And we hope that various Linux distributions will make them available to their customers through their releases. 
WHY? It’s very simple. Every vendor would like their share of the market to come from customers who used only their technology; no interoperability would be needed: but in the real world, real customers run a mixture. Making the Linux side of those customers’ lives unnecessarily awkward just makes them miserable without getting more sales for Microsoft. Regulators will say that if you make life tough enough it will get you more sales, but interoperability is not driven by some high-minded ideal – unless you count customer satisfaction, which to my way of thinking is just good business sense. Accepting that customers aren’t exclusive makes it easier for them to put a greater proportion of their business your way. So: we are committed to making Hyper-V the virtualization platform of choice, and that means working to give a good experience with Linux workloads. We’d prefer that to happen all by itself, but it won’t: we need to do work to ensure it happens. We haven’t become fans of the GPL: everything I wrote above about the GPL still holds. Using it for one piece of software is the price of admission to the distributions we need to be in, in order to deliver that good experience. Well… so be it. Or put another way, the principle of helping customers to do more business with you trumps other principles.
Does this mean we are supporting all Linux distributions? Today we distribute Integration Components for SLES 10 SP2. Our next release will add support for SLES 11 and Red Hat Enterprise Linux (5.2 and 5.3). If you want to split hairs we don’t “support” SLES or RHEL – but we have support arrangements with Red Hat and Novell to allow customers to be supported seamlessly. The reason for being pedantic about that point is that a customer’s ability to open a support case with Microsoft over something written by someone else depends on those arrangements being in place. It’s impossible to say which vendors we’ll have agreements with in future (if we said who we were negotiating with it would have all kinds of knock-on effects, so those discussions aren’t even disclosed inside the companies involved). Where we haven’t arranged support with a vendor we can only give limited advice from first principles about their product, so outside of generic problems which would apply to any OS, customers will still need to work with the vendors of those distributions for support.

You can read the press release or watch the Channel 9 Video for more information.

This post originally appeared on my technet blog.

July 22, 2009

Release the Windows 7 !

Filed under: Beta Products,Windows 7,Windows Server,Windows Server 2008-R2 — jamesone111 @ 10:03 pm

It’s official. Windows 7 has released to manufacturing. http://www.microsoft.com/Presspass/press/2009/jul09/07-22Windows7RTMPR.mspx 

It’s official. Windows Server 2008 R2 has released to Manufacturing http://blogs.technet.com/windowsserver/archive/2009/07/22/windows-server-2008-r2-rtm.aspx

It’s official. Hyper-V server R2 has released to manufacturing http://blogs.technet.com/virtualization/archive/2009/07/22/windows-server-2008-r2-hyper-v-server-2008-r2-rtm.aspx 

When will you be able to get your hands on it? http://windowsteamblog.com/blogs/windows7/archive/2009/07/21/when-will-you-get-windows-7-rtm.aspx 

 

Woo hoo !

This post originally appeared on my technet blog.

How to activate Windows from a script (even remotely).

I have been working on some PowerShell recently to handle the initial setup of a new machine, and I wanted to add activation. If you do this from a command line it usually means using the Software Licensing Manager script (slmgr.vbs), but that is just a wrapper around a couple of WMI objects which are documented on MSDN, so I thought I would have a try at calling them from PowerShell. Before you make use of the code below, please understand it has had only token testing and comes with absolutely no warranty whatsoever; you may find it a useful worked example but you assume all responsibility for any damage that results to your system. If you’re happy with that, read on.
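If you want to poke around first, a harmless, read-only way to see what those WMI classes expose – just a sketch – is:

# Read-only look at the licensing classes that slmgr.vbs wraps
Get-WmiObject -Class SoftwareLicensingService | Get-Member -MemberType Method

Get-WmiObject -Class SoftwareLicensingProduct |
    Where-Object { $_.PartialProductKey } |
    Select-Object Name, Description, LicenseStatus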


So first, here is a function – which could be written as one line – to get the status of Windows licensing. It relies on the SoftwareLicensingProduct WMI object: the Windows OS will have something set in the PartialProductKey field, and the ApplicationID is a known GUID. Having fetched the right object(s) it outputs the name and the status for each, translating the status ID to text using a hash table.

$licenseStatus = @{0="Unlicensed"; 1="Licensed"; 2="OOBGrace"; 3="OOTGrace";
                   4="NonGenuineGrace"; 5="Notification"; 6="ExtendedGrace"}

Function Get-Registration
{ Param ($Server=".")
  Get-WmiObject -Query "SELECT * FROM SoftwareLicensingProduct WHERE PartialProductKey <> null
                        AND ApplicationId='55c92734-d682-4d71-983e-d6ec3f16059f'
                        AND LicenseIsAddon=False" -ComputerName $Server |
    ForEach-Object {"Product: {0} — Licence status: {1}" -f $_.Name, $licenseStatus[[int]$_.LicenseStatus]}
}

 


On my Windows 7 machine this comes back with Product: Windows(R) 7, Ultimate edition — Licence status: Licensed


On one of my server machines the OS was in the “Notification” state, meaning it keeps popping up the notice that I might be the victim of counterfeiting (all Microsoft shareholders are… but that’s not what it means: we found a large proportion of counterfeit Windows had been sold to people as genuine). So the next step was to write something to register the computer. To add a licence key it takes 3 lines: get a WMI object, call its InstallProductKey method, and then call its RefreshLicenseStatus method. (Note for speakers of British English: it is License with an S, even though we keep that for the verb and Licence with a C for the noun). To activate, we get a different object (technically there might be multiple objects) and call its Activate method. Refreshing the licensing status system-wide and then checking the LicenseStatus property of the object indicates what has happened. Easy stuff.
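Stripped of the parameter validation and ShouldProcess plumbing, those three lines are essentially this (a sketch only, the key is just a placeholder, and the same no-warranty caveat applies):

# Minimal version: install a key and refresh the licensing state on the local machine
$svc = Get-WmiObject -Class SoftwareLicensingService
$svc.InstallProductKey("XXXXX-XXXXX-XXXXX-XXXXX-XXXXX") | Out-Null    # placeholder key
$svc.RefreshLicenseStatus() | Out-Null

The full function, with validation and -WhatIf support, follows.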

Function Register-Computer
{   [CmdletBinding(SupportsShouldProcess=$True)]
    Param ( [parameter()]
            [ValidateScript({$_ -match "^\S{5}-\S{5}-\S{5}-\S{5}-\S{5}$"})]
            [String]$ProductKey,
            [String]$Server="." )

    # Note: the original used localized-string variables ($lStr_...) from the author's
    # larger script for the messages; literal strings are substituted here so it runs stand-alone.
    $objService = Get-WmiObject -Query "SELECT * FROM SoftwareLicensingService" -ComputerName $Server

    # If a key was supplied, install it and refresh the licensing state
    if ($ProductKey) {
        if ($psCmdlet.ShouldProcess($Server, "Install product key")) {
            $objService.InstallProductKey($ProductKey) | Out-Null
            $objService.RefreshLicenseStatus()         | Out-Null
        }
    }
    Get-WmiObject -Query "SELECT * FROM SoftwareLicensingProduct WHERE PartialProductKey <> null
                          AND ApplicationId='55c92734-d682-4d71-983e-d6ec3f16059f'
                          AND LicenseIsAddon=False" -ComputerName $Server |
      ForEach-Object {
        if ($psCmdlet.ShouldProcess($_.Name, "Activate product")) {
            $_.Activate()                      | Out-Null
            $objService.RefreshLicenseStatus() | Out-Null
            $_.Get()    # re-read the instance to pick up the new LicenseStatus
            if ($_.LicenseStatus -eq 1) {Write-Verbose "Product activated successfully."}
            else {Write-Error ("Activation failed, and the license state is '{0}'" -f
                               $licenseStatus[[int]$_.LicenseStatus])}
            if (-not $_.LicenseIsAddon) {return}
        }
        else {
            Write-Host ("Licence status: {0}" -f $licenseStatus[[int]$_.LicenseStatus])
        }
      }
}


Things to note



  • I’ve taken advantage of PowerShell V2’s ability to include validation code as a part of the declaration of a parameter.
  • As I mentioned before, it’s really good to use the ShouldProcess feature of V2, so I’ve done that too.
  • Finally, since this is WMI it can be remoted to any computer. So the function takes a Server parameter to allow machines to be remotely activated.
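To show how it hangs together, calling the two functions might look like this (the product key below is only a placeholder that satisfies the validation pattern, and the server name is made up):

# Check the licensing state of the local machine
Get-Registration

# Dry run: see what Register-Computer would do, without doing it
Register-Computer -ProductKey "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX" -WhatIf

# Install a key and activate a remote server, with progress messages
Register-Computer -ProductKey "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX" -Server "server01" -Verbose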

A few minutes later Windows detected the change, and here is the result.


image


 


This post originally appeared on my technet blog.

June 9, 2009

Parsing lists to objects in PowerShell – Tzutil

Filed under: Powershell,Virtualization,Windows Server 2008-R2 — jamesone111 @ 5:28 pm

Last week I taught a PowerShell class – the first time in ages I’d gone back to my old role as a trainer – and one of the first things we do when explaining PowerShell is explain that:

(a) When PowerShell’s own commands are piped together they pass objects with properties – not the text representation of the objects which we’ve been used to in other shells. So when we get the contents of a directory, instead of a line of text for each item we get a file object with a name, a size and so on.

(b) PowerShell can still run command line executables intended for the CMD shell. Output from these will be formatted text.

So… this throws up another point, the need to parse text and turn it back into an object…
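A quick illustration of the difference (not from the original post, just the standard behaviour): piping from a cmdlet gives you objects, while piping from an old executable gives you strings.

# A cmdlet emits objects: each item has typed properties we can use directly
(Get-ChildItem $env:windir)[0].GetType().Name     # DirectoryInfo or FileInfo

# A legacy executable emits plain text: each line comes back as a System.String
(ipconfig.exe)[0].GetType().Name                  # String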

 

Over on the Server Core blog there is a post about the command-line Time Zone Utility (TzUtil) – which is also present in Windows 7 (so I’m guessing the whole product family) – and this links to another article about it, with some PowerShell script.

I thought I’d try another approach. TzUtil /L gives a list of the time zone names and codes like this

(UTC-12:00) International Date Line West
Dateline Standard Time

(UTC-11:00) Midway Island, Samoa
Samoa Standard Time

(UTC+12:00) Fiji, Kamchatka, Marshall Is.
Fiji Standard Time

(UTC+13:00) Nuku'alofa
Tonga Standard Time

So I wanted to convert it to PowerShell objects, and the code I came up with was this:

$tzlist=(tzutil.exe /l) ; 0..[int]($tzlist.count/3) | select  -property @{name="TzID";  expression={$tzList[$_ * 3]}} ,
                                                                      @{name="TzName";expression={$tzList[$_ * 3 + 1]}}

The first part is simple enough – run TzUtil /l and store the result. Since the lines are grouped in threes, the next bit just counts from zero to a third of the number of lines and pipes those numbers into Select-Object. This is a corruption of what I think of as the Noble method: Jonathan Noble was the first person I saw create a custom object by adding properties to an empty string, using select. I did the same thing and then thought “does it have to be an empty string?” No, it can be an integer, and it doesn’t matter whether it is empty or not – the original value is discarded. So what comes out is:

TzID                                                                                         TzName 
----                                                                                         ------
(UTC-12:00) International Date Line West                                                     Dateline Standard Time
(UTC-11:00) Midway Island, Samoa                                                             Samoa Standard Time

(UTC+12:00) Fiji, Kamchatka, Marshall Is.                                                    Fiji Standard Time
(UTC+13:00) Nuku'alofa                                                                       Tonga Standard Time

Which can be used anywhere else I need it – for example in making a selection to set the time zone with Tzutil /s.
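For example – a sketch reusing the pipeline above, with the filter string purely illustrative – you could pick one of the parsed objects and feed its TzName back to tzutil /s:

# Sketch: use the parsed objects to set the time zone with tzutil /s
$zones  = 0..[int]($tzlist.count/3) |
            Select-Object -Property @{name="TzID";  expression={$tzList[$_ * 3]}},
                                    @{name="TzName";expression={$tzList[$_ * 3 + 1]}}
$choice = $zones | Where-Object { $_.TzID -match "Fiji" } | Select-Object -First 1
if ($choice) { tzutil.exe /s $choice.TzName }     # e.g. sets "Fiji Standard Time"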

The configuration tool which I have nearly finished for Server 2008 R2 Core and Hyper-V Server R2 will probably get the time zone to display on its menu using a one-line PowerShell function:

Function Get-TimeZone { Tzutil.exe /g}

This post originally appeared on my technet blog.
