James O'Neill's Blog

June 16, 2008

How to get PowerShell snapins to work on 64 bit

Filed under: How to,Powershell — jamesone111 @ 12:05 am

I mentioned a problem with 64-bit PowerShell in my previous post: three snap-ins I’ve wanted to use have been packaged for 32-bit and didn’t work "out of the box" on 64-bit, so I thought I’d give a generic guide to making snap-ins work on 64-bit.

  • Install the Snapin using whatever tools the writer gave you. Under the hood this will invoke
    C:\Windows\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe "<PathToDll>"
  • Start Powershell AS ADMINISTRATOR and use the command
    Get-PSSnapin -Registered | fl name, modulename
  • If you don’t see your new snap-in listed on 64-bit, run the command
    C:\Windows\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe "<PathToDll>"
    Things to note: first, this will fail if you don’t have admin rights; second, if you don’t know the path to the DLL, use the same Get-PSSnapin command in the 32-bit version.
    After you have got success messages from InstallUtil, run this a second time:
    Get-PSSnapin -Registered
  • Be aware that the name you get when you run the command in 32-bit may not be the same in 64-bit. I found this trying the Windows Mobile provider from CodePlex: it’s called "PSMobile" in 32-bit and "Nivot.PowerShell.WindowsMobile" in 64-bit. The PowerShell Community Extensions use the same name – PSCX – on both.
  • Load the Snapin with 
    Add-PsSnapin "<SnapInName>"
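Put together, the steps above can be sketched as a short script. The DLL path and snap-in name here are placeholders, not real products – substitute the ModuleName and Name that the 32-bit `Get-PSSnapin -Registered` reports:

```powershell
# Run from an elevated 64-bit PowerShell prompt.
# $dll is an example path - use the ModuleName reported by the 32-bit Get-PSSnapin -Registered.
$dll = "C:\Program Files (x86)\SomeVendor\SomeSnapin.dll"

# Register the snap-in with the 64-bit framework
& C:\Windows\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe "$dll"

# Confirm it is now visible to 64-bit PowerShell, then load it
Get-PSSnapin -Registered | Format-List Name, ModuleName
Add-PSSnapin "SomeSnapinName"   # remember: the 64-bit name may differ from the 32-bit one
```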

I got an error that the WindowsMobile provider could not find WindowsMobile.formats.ps1xml in the FormatData folder, so I made the folder and copied the file into it. I think the set-up configures one folder and the command line assumes another.
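A sketch of that workaround – the paths here are hypothetical (I haven’t recorded the exact install folder), so adjust them to wherever the snap-in put its files:

```powershell
# Hypothetical paths: the .ps1xml file sits alongside the snap-in DLL,
# but the provider looks for it in a FormatData subfolder.
$src = "C:\Program Files\SomeSnapinFolder\WindowsMobile.formats.ps1xml"
$dst = "C:\Program Files\SomeSnapinFolder\FormatData"

if (-not (Test-Path $dst)) { New-Item -Path $dst -ItemType Directory | Out-Null }
Copy-Item -Path $src -Destination $dst
```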

This post originally appeared on my technet blog.


June 15, 2008

Open-XML. This is what it’s all about.

Filed under: How to,Office,Powershell — jamesone111 @ 11:46 pm

I have been saying for ages that most IT professionals really don’t give two hoots about Open-XML. It’s a file format. Who cares?

The old file format had been around for ages, and was pretty opaque, so hardly anyone dug into it. The new format is XML (which is good), and rich (also good), but complicated (not so good). It has joined the ranks of formats approved by ISO, which might matter to those governments trying to follow de jure standards (and ignoring de facto ones… X.400 vs SMTP, anyone?). But to IT professionals in the commercial sector using and supporting Office, does any of this matter? Don’t they see file formats as "black boxes"? They might say that in theory it’s lovely that instead of something proprietary, file internals are now XML (and standards-body controlled XML), but in practice they still need to deal with a change in file formats. Will they take advantage of the changes? And some will wonder whether manipulation through the Office object model, which we’ve been doing since Word 6, wasn’t enough.

Time to Reconsider. 

This week we released the Open-XML SDK. It allows developers to work much more quickly with Open-XML. That’s obviously a Good Thing, because it will bring more things to market which can work with the format. Still, to the kind of IT pro I’m thinking about – the kind who I think reads this blog – a new SDK isn’t exactly a reason to crack open the champagne. OK, we can manipulate the files without having the applications present, and developers who slave away in C# might produce stuff which IT pros want… but what can I do with this RIGHT NOW?

How about charting your data centre activity in Excel? I can almost feel the interest, but it’s tempered with a "but that needs a bucketful of code… doesn’t it?". At the risk of repeating myself:

Time to Reconsider. 

Eric White – a fellow evangelist, although I’ll confess not one I could pick out in an ID parade – has posted the C# code for a PowerShell snap-in to CodePlex. This shows there might be a bucketful of code, but you don’t have to be the one who writes it. You can view what’s on CodePlex as two things: one is a demo of what can be done with the Open-XML SDK; the other is a bunch of PowerShell cmdlets which are useful in their own right. Now, those readers who have seen some of my PowerShell might still feel cautious – there may still be some nasties here.

Time to Reconsider. 

How’s this for a command line to get running processes into a spreadsheet:

   Get-Process | Export-OpenXMLSpreadSheet -OutputPath Process.xlsx  

That’s it. Not nasty, is it? Cynics might think "I could use Export-CSV and open that in Excel". But we’re just getting started. How about a graph? These two lines of PowerShell get the total CPU time used by running processes, then take the 10 heaviest processes and spit out their ProcessName and %CPU – the proportion of CPU they’ve used (the method is a shade simplistic, but bear with me):

   Get-Process | ForEach-Object -begin {$TotalCPU = 0} -process {$TotalCPU += $_.CPU}
   Get-Process | sort -descending CPU |
       select -first 10 -property ProcessName, @{name="%CPU"; expression={100 * $_.CPU / $TotalCPU}}

You’ve already seen that we could pipe that into Export-OpenXMLSpreadSheet. But I want a bar chart of CPU used for each process – what would that need? The video Eric has on his blog gives the answer: the extra switches needed by Export-OpenXMLSpreadSheet would be

   -chart -ChartType Bar -ColumnsToChart %CPU -HeaderColumn ProcessName
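Putting those pieces together, the whole chart-producing pipeline might look like the sketch below. Export-OpenXMLSpreadSheet and its chart switches come from Eric’s CodePlex snap-in (so this only runs with that snap-in loaded), and the output file name is my own choice:

```powershell
# First pass: total up CPU time across all processes
Get-Process | ForEach-Object -begin {$TotalCPU = 0} -process {$TotalCPU += $_.CPU}

# Second pass: top 10 processes by CPU, charted as a bar chart in a spreadsheet
Get-Process | sort -descending CPU |
    select -first 10 -property ProcessName, @{name="%CPU"; expression={100 * $_.CPU / $TotalCPU}} |
    Export-OpenXMLSpreadSheet -OutputPath TopProcesses.xlsx `
        -Chart -ChartType Bar -ColumnsToChart %CPU -HeaderColumn ProcessName
```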

I thought I’d have a go. I had the Express version of C# on my demo server but no Office. I downloaded the SDK and the stuff from CodePlex, copied what I’d seen Eric do in the video and hey presto, I had a compiled version set up for 32-bit PowerShell only. This is not the first, and won’t be the last, snap-in to register as 32-bit only. Fortunately I know how to fix that, and the next post will explain it. But within 30 minutes I’d produced the chart you see here, and opened it using Office in a VM on the same box.

In my previous role as a consultant I would have loved tools like this: giving the client reports that were easy to understand and looked great, from Exchange, System Center Virtual Machine Manager, HPC, you name it… The PowerShell Community Extensions could even mail them as an attachment.

Eric’s video shows how multiple documents can be given a common style, and gives a bunch more detail. Seriously, if you didn’t think you could learn anything from an 8-minute video, today it might be time to reconsider.

 

This post originally appeared on my technet blog.

June 14, 2008

Doubts and Powershell, Hyper-V KeyValue pairs and Hash tables.

I’ve said a number of times that I think technical people are rarely secure in their own abilities; that they have a demon on their shoulder who whispers "You’re not really that good" … "They’ll find you out one day". I was talking to a colleague this week who seems to be racked by such doubts, and saying that it’s better to be the one who doubts yourself than to have everyone else doubt you (echoes of Kipling there).

I do get the odd moment of doubt about what I’m doing with PowerShell and Hyper-V. So it was just fantastic to get a mail this morning saying that some work I’d done was used in one of the top-scoring sessions at Tech-Ed in the US this week. Hyper-V and Virtual Machine Manager were generating a lot of excitement among customers throughout the event – you can get a flavour of why in this 5-minute video from the keynote. I’m getting ready to release the library of Hyper-V bits on CodePlex. I’ve found one thing which has been broken by changes made since the beta, but with the next build to come out being the release, I’m hoping that there are no new changes required.

One area I have been looking at recently is "key value pair exchange". One of the integration components passes these pairs between the parent partition and the child partitions. The data lives in the registry under HKLM\Software\Microsoft\VirtualMachine – data coming from the child partition is under Auto, and data going to it is under Guest\Parameters. It allows a guest OS to know the name of the computer hosting it, and what that computer calls the VM it is running in; and it allows the host OS to know the OS and fully qualified domain name being used in the guest. The data is only available when the VM is running. The "KVP Component" WMI object has a "GuestIntrinsicExchangeItems" property. Does it return something simple like "ProcessorArchitecture=9"? No… it returns a rather nasty block of XML like this for each one:
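Inside a running guest you can see the host-supplied half of this exchange directly in the registry. A quick sketch – the key path follows the description above, and the property names (HostName, VirtualMachineName) are my assumption about what the integration component writes, so check what’s actually present on your guest:

```powershell
# Read the data the host pushes down to this guest (run inside the child partition)
$key = "HKLM:\Software\Microsoft\VirtualMachine\Guest\Parameters"

# List everything the host has exchanged, then pick out two likely values
Get-ItemProperty -Path $key
(Get-ItemProperty -Path $key).HostName            # name of the physical host
(Get-ItemProperty -Path $key).VirtualMachineName  # what the host calls this VM
```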

<INSTANCE CLASSNAME="Msvm_KvpExchangeDataItem">   
  <PROPERTY NAME="Caption" PROPAGATED="true" TYPE="string"></PROPERTY>
  <PROPERTY NAME="Data" TYPE="string"><VALUE>9</VALUE></PROPERTY>
   <PROPERTY NAME="Description" PROPAGATED="true" TYPE="string"></PROPERTY>
  <PROPERTY NAME="ElementName" PROPAGATED="true" TYPE="string"></PROPERTY>
  <PROPERTY NAME="Name" TYPE="string"><VALUE>ProcessorArchitecture</VALUE></PROPERTY>
  <PROPERTY NAME="Source" TYPE="uint16"><VALUE>2</VALUE></PROPERTY>
</INSTANCE>

Fortunately I can get my data with just two long lines of PowerShell – for readability they are split over several lines here. One gets the KVP component object, and one processes the XML:

$KVPComponent = (Get-WmiObject -Namespace root\virtualization `
    -Query "Select * From Msvm_KvpExchangeComponent Where SystemName = '$($vm.name)'")

$KVPComponent.GuestIntrinsicExchangeItems |
  ForEach-Object -begin   {$KvpObj = New-Object -TypeName System.Object} `
                 -process {([xml]$_).SelectNodes("/INSTANCE/PROPERTY") |
                             ForEach-Object -process {if ($_.name -eq "Name") {$PropName = $_.value}
                                                      if ($_.name -eq "Data") {$PropData = $_.value}} `
                                            -end     {Add-Member -InputObject $KvpObj -MemberType NoteProperty `
                                                                 -Name $PropName -Value $PropData}} `
                 -end     {$KvpObj}

 

This is the basis of a Get-VMKVP function, which I can use like this:

PS C:\Users\administrator\Documents\WindowsPowerShell> choose-vm | get-vmkvp 

ID VM Name             State
-- -------             -----
0 SEA-DC-01           Stopped
1 HPC DC,DNS and DHCP Running
2 JON WDS             Suspended
3 HPC Compute Node 2  Stopped
4 Core                Stopped
5 HPC Head Node       Stopped

Which one ?: 1

FullyQualifiedDomainName : CCS-DC.CCSTEST.COM
OSName                   : Windows Server (R) 2008 Standard
OSVersion                : 6.0.6001
CSDVersion               : Service Pack 1
OSMajorVersion           : 6
OSMinorVersion           : 0
OSBuildNumber            : 6001
OSPlatformId             : 2
ServicePackMajor         : 1
ServicePackMinor         : 0
SuiteMask                : 272
ProductType              : 2
ProcessorArchitecture    : 9

With the aid of another WMI call (described on MSDN) I can ping the VM. If I store its FQDN in $VmFQDN, the line is

Get-WmiObject -query "Select * from  Win32_PingStatus where Address='$VmFQDN' and ResolveAddressNames = True and recordRoute=1"

What I really wanted to do was to check the status code returned by the ping, and the MSDN page gives me a list of codes. Now, one could use a switch statement to output the right text for the code – in fact Jeffrey Snover’s most recent blog post does exactly that. But I showed before that I can do the same thing with a hash table, so I have

$PingStatusCode=@{0="Success" ; 11001="Buffer Too Small" ; 11002="Destination Net Unreachable" ;
                  [quite a few more and finally]   11050="General Failure" }

and when the time comes to return the status information I can use

$PingStatusCode[[int]$_.statusCode]
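Combining the ping query with the hash-table lookup gives a readable status report. A sketch – the table here carries only the codes quoted above (the full list is on the MSDN page), and $VmFQDN is assumed to hold the guest’s FQDN as before:

```powershell
# Abridged status table - paste the full code list from MSDN in real use
$PingStatusCode = @{0="Success"; 11001="Buffer Too Small";
                    11002="Destination Net Unreachable"; 11050="General Failure"}

# Ping the VM and translate the numeric status code into text
Get-WmiObject -Query "Select * From Win32_PingStatus Where Address='$VmFQDN' and ResolveAddressNames = True and RecordRoute=1" |
    ForEach-Object { "$($_.Address) : $($PingStatusCode[[int]$_.StatusCode])" }
```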

I mentioned Jeffrey’s post; I’d be tempted to turn his code from

$sku = $((gwmi win32_operatingsystem).OperatingSystemSKU) 
switch ($sku) 
{ etc } 

to  @{hash-table}[Value]  format.

Now, the hash table is long, but I’m probably going to paste it into my code from MSDN, and it is easier to change it to the format below than to the format needed in a switch statement. The [index] construction isn’t the prettiest ever, either. And remember: although this is typeset as 26 lines, it is actually a single line to PowerShell 🙂

@{ 0 ="Undefined";
1 ="Ultimate Edition";
2 ="Home Basic Edition";
3 ="Home Basic Premium Edition";
4 ="Enterprise Edition";
5 ="Home Basic N Edition";
6 ="Business Edition";
7 ="Standard Server Edition";
8 ="Datacenter Server Edition";
9 ="Small Business Server Edition";
10 ="Enterprise Server Edition";
11 ="Starter Edition";
12 ="Datacenter Server Core Edition";
13 ="Standard Server Core Edition";
14 ="Enterprise Server Core Edition";
15 ="Enterprise Server Edition for Itanium-Based Systems";
16 ="Business N Edition";
17 ="Web Server Edition";
18 ="Cluster Server Edition";
19 ="Home Server Edition";
20 ="Storage Express Server Edition";
21 ="Storage Standard Server Edition";
22 ="Storage Workgroup Server Edition";
23 ="Storage Enterprise Server Edition";
24 ="Server For Small Business Edition";
25 ="Small Business Server Premium Edition"}[[int]((gwmi win32_operatingsystem).OperatingSystemSKU)]

Hmm. From self-doubt to thinking I can do PowerShell better than Jeffrey in one post… life’s been a bit like that recently.


This post originally appeared on my technet blog.

May 28, 2008

The Hyper-v API Network interfaces

Filed under: How to,Powershell,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 11:58 am

If you’ve read my post on adding disks to a virtual machine, the techniques here should already feel familiar: we create a NIC, we create a switch port, and then we tell the NIC it is connected to the switch port. Hyper-V creates VM switches which are either bound to a NIC, internal (visible to the parent partition) or private (visible only to the child VMs). So one of the first things to do when setting up a NIC is to choose the switch, and the first function I’m going to create is Choose-VMSwitch, using the choose-list function I’ve already shown. Getting the switches to pass to choose-list is easy enough: just query WMI for MsVM_VirtualSwitch objects.

    Function Choose-VMSwitch
    {choose-list (Get-WmiObject -Namespace "root\virtualization" -Class "MsVM_VirtualSwitch") `
                 @(@{Label="Switch Name"; Expression={$_.ElementName}})
    }

So now I can have a command Add-VMNic $VM (Choose-VMSwitch). Since Hyper-V supports legacy and VMBus NICs, I have given the option of a -Legacy switch, and to support giving the NIC a fixed MAC address I’ve added a -MAC parameter too. The PowerShell filter is much the same as I’ve shown previously:

  1. If not passed a VM parameter pick up what is in the pipe
  2. If presented  a string as a VM parameter replace it with a VM WMI Object (or array of VMs)
  3. If presented with an array of strings or VMs, call the function recursively, passing it each member of the array [go back to step 2]. N.B. multiple NICs can’t have the same MAC address, so ignore the -MAC parameter.
  4. Assuming we’ve got a WMI object … get the appropriate Resource Allocation Settings Data object
  5. Set the properties of the RASD, including the MAC address if one was provided. If no switch parameter was passed, set the connection property to an empty string; if one was passed, call New-VMSwitchPort and set the connection property to point to that. The VMBus NIC needs a GUID as an identifier; the emulated one does not.
  6. Set up an arguments array, and call the "AddVirtualSystemResources" method of the VirtualSystemManagementService WMI object (which I get with Get-WmiObject -NameSpace  "root\virtualization" -Class "MsVM_virtualSystemManagementService").

So here’s the code in full.

   Filter Add-VMNIC
   {Param ($VM, $Virtualswitch, $mac, [switch]$legacy)
    if ($VM -eq $null)  {$VM = $_}
    if ($VM -is [Array]) {if ($legacy) {$VM | ForEach-Object {Add-VMNic -VM $_ -Virtualswitch $Virtualswitch -legacy}}
                          else         {$VM | ForEach-Object {Add-VMNic -VM $_ -Virtualswitch $Virtualswitch}}}
    if ($VM -is [String]) {$VM = (Get-VM -Machinename $VM)}
    if ($VM -is [System.Management.ManagementObject]) {
        if ($legacy) {$NicRASD = Get-VMRASD -resType 10 -resSubType 'Microsoft Emulated Ethernet Port'
                      $NicRASD.ElementName = "Legacy Network Adapter"}
        else         {$NicRASD = Get-VMRASD -resType 10 -resSubType 'Microsoft Synthetic Ethernet Port'
                      $NicRASD.VirtualSystemIdentifiers = @("{" + [System.GUID]::NewGUID().ToString() + "}")
                      $NicRASD.ElementName = "VMBus Network Adapter"}
        if ($Virtualswitch -ne $null) {$Newport = New-VMSwitchPort $Virtualswitch
                                       if ($Newport -eq $null) {$Newport = ""}
                                       $NicRASD.Connection = $Newport}
        if ($mac -ne $null) {$NicRASD.Address = $mac
                             $NicRASD.StaticMacAddress = $true}
        $arguments = @($VM.__Path, @($NicRASD.psbase.GetText([System.Management.TextFormat]::WmiDtd20)), $null, $null)
        $result = $VSMgtSvc.psbase.InvokeMethod("AddVirtualSystemResources", $arguments)
        if ($result -eq 0) {"Added NIC to '$($VM.elementname)'."}
        else               {"Failed to add NIC to '$($VM.elementname)', return code: $result"}}
    $VM = $null}

In this function I call New-VMSwitchPort, which is a wrapper for a method provided by the Virtual Switch Management Service. Like the image management service and the Virtual System Management Service, this is just a WMI object which we query for. The process goes:

  1. If presented with a string as the VirtualSwitch Parameter, replace it with a VirtualSwitch WMI object
  2. Assuming we now have a WMI object, get the SwitchManagementService WMI object.
  3. Get a GUID to use as the port’s name and friendly name, and pass it, the switch object and two nulls in one array to the CreateSwitchPort method.
  4. The Path to the new port is returned in one of these nulls, so the function picks this up and returns it.
   Function New-VMSwitchPort
   {Param ($virtualSwitch, $Server = ".")
    if ($virtualSwitch -is [String]) {$virtualSwitch = (Get-WmiObject -ComputerName $Server -Namespace "root\virtualization" `
        -Query "Select * From MsVM_VirtualSwitch Where ElementName = '$virtualSwitch'")}
    if ($virtualSwitch -is [System.Management.ManagementObject]) {
        $SwitchMgtSvc = (Get-WmiObject -ComputerName $virtualSwitch.__server -Namespace "root\virtualization" `
            -Query "Select * From MsVM_VirtualSwitchManagementService")
        [String]$GUID = [System.GUID]::NewGUID().ToString()
        $arguments = @($virtualSwitch.__Path, $GUID, $GUID, $null, $null)
        $result = $SwitchMgtSvc.psbase.InvokeMethod("CreateSwitchPort", $arguments)
        if ($result -eq 0) {"Created VirtualSwitchPort on '$($virtualSwitch.elementName)'" | Out-Host
                            @($arguments[4])}
        else               {"Failed to create VirtualSwitchPort on '$($virtualSwitch.elementName)': return code: $result" | Out-Host}}
   }

There are some extra functions that I won’t show here. I’ve got a Remove-Port function and a Set-VMNICPort function – which removes an existing port and attaches a newly created one. I’ve got a Set-VMNICMacAddress function which changes the MAC address after the NIC is created, and Get-VMNIC and Get-VMNICSwitch, which build up to give a Get-VMNICList function along the same lines as the Get-VMDiskList I showed before.
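To give a flavour of those, here is a minimal sketch of what a Get-VMNIC might look like, following the same filter pattern as Add-VMNIC. This is my guess at the shape, not the actual library code: it leans on the fact that both NIC types are ResourceType 10 RASDs, and that a WQL query on the base RASD class also returns derived instances.

```powershell
Filter Get-VMNIC
{Param ($VM)
 if ($VM -eq $null)    {$VM = $_}                       # pick up piped input
 if ($VM -is [String]) {$VM = (Get-VM -Machinename $VM)}
 if ($VM -is [System.Management.ManagementObject]) {
     # Both synthetic (VMBus) and emulated (legacy) NICs are ResourceType 10
     Get-WmiObject -Namespace "root\virtualization" `
         -Query "Select * From MsVM_ResourceAllocationSettingData
                 Where ResourceType = 10 And InstanceID Like 'Microsoft:$($VM.name)%'"}
 $VM = $null}
```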

 

Bonus link: over on the Virtualization Team blog, Taylor has posted the code to connect the host machine’s network card to a VM switch. I’m going to rework that code slightly for the library I’m building – I’ll have a Choose-ExternalEthernetPort and so on.

This post originally appeared on my technet blog.

May 16, 2008

iSCSI, clustering and Hyper-V

Filed under: How to,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 8:22 pm

One of the things I’ve been saying I’d blog for a while is how to set up a cluster on Hyper-V. Since Hyper-V does not support sharing SCSI disks between machines, you need to use iSCSI – which is all fine and good if you’re doing it in production with real workloads and a proper budget, but a bit of a pain if you’re doing it in a lab or training environment. Inside Microsoft I can get what I need courtesy of Storage Server, but what about the rest of the world? There are several iSCSI products with free evaluation versions, so I was going to try a couple out, document them and post the results. Well, my colleague in Ireland, Gavin McShera, has saved me the trouble with a blog post which explains it all. Once you’ve set up iSCSI, Gavin links to the article on setting up a 2-node file-serving cluster, and you can see how easy clustering really is in 2008.

This post originally appeared on my technet blog.

May 11, 2008

More on the Hyper-V API

Filed under: How to,Powershell,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 2:02 pm

In which we see how to set the number of CPUs

I started with getting MSVM Computer System objects – which I showed back in February. With these objects I can ask for the state of the VM to be changed to Running, Stopped or Saved.

To do things in a proper PowerShell style I re-wrote and re-wrote my functions, so I have Get-VM (which returns one or more VMs by name) and Choose-VM (which puts up a list and returns one or more VMs), plus Start-VM, Stop-VM and Suspend-VM. Over various iterations these have moved from demanding a single MsVM_ComputerSystem object, to accepting an object or a display name, then to accepting an array of either, and finally to allowing input to be piped in. Since Stop-VM is a bit brutal, that same February post showed using the Shutdown integration component.

Next I moved on to the Msvm_ImageManagementService, and a few weeks back I looked at how virtual hard disks can be created, mounted and compacted.

From there it was on to the related idea of snapshots, which I covered here and here; snapshots are handled through the Msvm_VirtualSystemManagementService. This is actually a very important WMI class. I mentioned Taylor’s post which shows how to manipulate the exchange of key/value pairs (the host’s KVPs are managed through this object). But there are six other methods I want to introduce here: Create-, Modify- and Destroy-VirtualSystem, and Add-, Modify- and Remove-VirtualSystemResources.

Creating and modifying work in the same way: identify the machine (unless it is being created) and pass a block of XML which describes how you want the machine, or the resource attached to it, to be. If you’re thinking ahead and saying "Can I dump that XML out to a file?", you can: both the Virtual System Management Service WMI object and the MMC console provide interfaces to export or import the machine. There are quite a lot of things which we want to be able to manipulate:

  • Legacy Network Card
  • VM-Bus Network Card
  • VM-Bus SCSI Controller
  • IDE DVD Drive
  • Virtual DVD Disk (which is inserted into the drive)
  • IDE Hard drive
  • SCSI hard drive
  • Virtual Hard disk-image (inserted into the drive)
  • Memory size
  • CPU cores and reservation
  • The VM itself

And for each of these we can get the XML by

  • Building it up from Scratch
  • Reading it from a file
  • Getting the existing value from WMI (for modification)
  • Getting a default from WMI (for creating)

The first two are usually a pain, so typically the process goes:

  1. Get a ResourceAllocationSettingData (RASD) object
  2. Modify one or more of its properties.
  3. Convert it to XML-formatted text.
  4. Pass the XML as one of an array of arguments to one of the methods of the Msvm_VirtualSystemManagementService.

For example, here’s how we set the number of CPUs. We get a variation on the generic RASD object – the Msvm_ProcessorSettingData object – for the VM in question:

    Filter Set-VMCPUCount
    {Param ($VM, $CPUCount)
     $procsSettingData = Get-WmiObject -Namespace "root\virtualization" `
         -Query "Select * From MsVM_ProcessorSettingData
                 Where InstanceID Like 'Microsoft:$($vm.name)%'"
     $procsSettingData.VirtualQuantity = $CPUCount
     $SettingXML = $procsSettingData.GetText([System.Management.TextFormat]::WmiDtd20)
     $arguments  = @($VM.__Path, @($SettingXML), $null)
     $Result = $VSMgtSvc.psbase.InvokeMethod("ModifyVirtualSystemResources", $arguments)
     if ($Result -eq 0) {"Success"} else {"Failure, return code: $Result"}}

[Update: there were a couple of bits of PowerShell 2.0 in the above. In 1.0 you can’t call the .InvokeMethod method of a WMI object directly, you have to call it via .psbase; and the .Path property doesn’t exist, you have to get to the path with __Path, not .Path.Path.]

The process is almost identical for memory, except that you get the Msvm_MemorySettingData object and set three properties – .Limit, .Reservation and .VirtualQuantity – which are all set to the desired memory size in megabytes.
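Following that description, a Set-VMMemory would differ from Set-VMCPUCount only in the object queried and the properties set. A sketch under those assumptions (the function name is mine; $VSMgtSvc is the management-service object obtained earlier):

```powershell
Filter Set-VMMemory
{Param ($VM, $MemoryInMB)
 $memSettingData = Get-WmiObject -Namespace "root\virtualization" `
     -Query "Select * From Msvm_MemorySettingData
             Where InstanceID Like 'Microsoft:$($VM.name)%'"
 # All three properties are set to the desired size in MB
 $memSettingData.Limit           = $MemoryInMB
 $memSettingData.Reservation     = $MemoryInMB
 $memSettingData.VirtualQuantity = $MemoryInMB
 $SettingXML = $memSettingData.GetText([System.Management.TextFormat]::WmiDtd20)
 $arguments  = @($VM.__Path, @($SettingXML), $null)
 $Result = $VSMgtSvc.psbase.InvokeMethod("ModifyVirtualSystemResources", $arguments)
 if ($Result -eq 0) {"Success"} else {"Failure, return code: $Result"}}
```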

In the next few posts I’ll look at using the RASD objects to add disks and Network cards, plus how we can create and configure the VM itself.

This post originally appeared on my technet blog.

May 7, 2008

Ways to tidy up my PowerShell – including making a hash of stuff

Filed under: How to,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 4:49 pm

Please excuse the bad pun… When I first wrote the function I posted to display the state of virtual machines, I used a construction which has been familiar to programmers since time immemorial:

  If X=1 output this
  If X=2 output that
  etc.

Most modern programming languages, including PowerShell, have some kind of switch construction which is a little tidier but they’re still bulky…

I had put constants for each of the states in the .PS1 file which holds all my PowerShell VM functions, but this was more a way of having a note of them than something I was going to use in my code. I could have written the Start-VM function (in the same post) like this:

   $VM.RequestStateChange($Running)

and re-coded my display function as

   switch ($_.EnabledState) { $Running {"Running"}
                              $Stopped {"Stopped"}
                              $Paused  {"Paused"}
                              etc.

but it still needs a line for each state. For completely separate reasons I was looking at hash tables. It takes one line to create a hash-table of return codes:

    $VMState=@{"Running"=2 ; "Stopped"=3 ; "Paused"=32768 ; "Suspended"=32769 ; 
"Starting"=32770 ; "Snapshotting"=32771 ; "Saving"=32773  ; "Stopping"=32774 }

So I changed the way I start and stop machines: one function does the work – expanding arrays, converting strings to ComputerSystem objects and actually changing the state – like this:

   Function Set-VMState 
   {Param ($VM , $state)
    if ($VM -is [Array]) {$VM | ForEach-Object {Set-VMState -VM $_ -State $state} }
    if ($VM -is [String]) {$VM=(Get-VM -Machinename $VM) }
    if ($VM -is [System.Management.ManagementObject]) {$VM.RequestStateChange($State) } 
$VM = $null }

Using the hash table, I then have Start, Stop and Pause functions like this:

   Filter Start-VM
   {Param ($VM)
    if ($VM -eq $null) {$VM = $_}
    Set-VMState -VM $VM -State $VMState.Running
    $VM = $null }

I also made a change to accept input from the pipe, e.g. Get-VM "James%" | Start-VM. There are two changes: (a) use a FILTER instead of a FUNCTION, and (b) pick up the piped input in $_. So I’ve got quite a few functions where I should make this change.

[Update: I’m not sure if this is the approved way of piping, but I quickly learned that I should add the $VM=$null at the end; otherwise when 5 items are piped in, the function is run 5 times using the first one each time.]
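The pattern is easier to see with something that doesn’t need Hyper-V at all. Here is a toy filter (Show-Item is made up purely for illustration) showing both halves: pick up $_ when no parameter is given, and reset the variable so the next pipeline element isn’t ignored:

```powershell
Filter Show-Item
{Param ($item)
 if ($item -eq $null) {$item = $_}   # pick up piped input when no parameter was given
 "Item: $item"
 $item = $null}                      # reset, or every later pipe element reuses the first

"one","two" | Show-Item   # the filter body runs once per piped element
Show-Item "three"         # or call it with a parameter instead
```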

Hash tables are a one-way lookup: $VMState.Running returns the value with a key of "Running" – 2 in the Start-VM filter. If I have 2 and want to get back to "Running", there isn’t a built-in way (that I know of). However, a hash table has a GetEnumerator() method which dumps out the whole table as key/value pairs, and that makes it easy to get the name we want:

   function Convert-VMStateID
   {Param ($ID)
    ($VMState.GetEnumerator() | where {$_.value -eq $ID}).name }

and using the choose-list function I showed before, Choose-VM becomes a one-liner:

Function Choose-VM
{choose-list -data (Get-VM) -fieldList @(@{Label="VM Name"; Expression={$_.ElementName}},
                                         @{Label="State";   Expression={Convert-VMStateID -ID $_.EnabledState}}) }

(In principle it is a one-liner… in practice I’m going to have a -multi switch to allow single or multiple selections.)

One other thing I’ve done in this tidying-up exercise is to make sure I name my parameters in scripts. This means I really should go back to my choose-list function and rename the "fieldList" parameter to "Property" to match PowerShell’s built-in cmdlets (just as I have been trying to use existing verbs and write my nouns in the singular!). Identifying parameters by position doesn’t make for readable code: the following two lines are equivalent, but which would you rather see in a script (not the one you’d rather type at the command line!)?

Set-VMState -VM $VM -State $VMState.Running
Set-VMState $VM $VMState.Running

 

This post originally appeared on my technet blog.

May 3, 2008

Hyper-v Snapshots part 2.

Filed under: How to,Powershell,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 12:44 pm

In my last post I explained how snapshots work and gave a little bit of PowerShell for creating one. In the post before that I talked about creating a generic choose-tree function. What I wanted was to be able to call Choose-Tree List_Of_Items First_Item PathPropertyName ParentPropertyName PropertyToDisplay and get a tree view I can choose from, like this:

    $folders=dir -recurse | where-object {$_ -is [system.io.directoryInfo]}
    choose-tree $folders (get-Item (get-location) ) "PSPath" "PsParentPath" "Name"

0    +windowsPowershell
1    | |--Nivot
2    | |--Pics

Which one ?: 2

The key thing in this is that PowerShell lets us use variables/parameters to hold field names. The logic is pretty simple: take an array of items, tell the function which one to start at, output that item, and if it has any children call the function recursively for each of them. Checking for children is where the ability to pass property names is important, because I can use where-object {$_.$parent -eq $startAt.$path.ToString()} to say "where the field I said holds the parent matches the field I said holds the path". I put a ToString() on the end because I found it doesn’t like being passed "path.path", and that’s needed for some WMI items; ToString() returns the path in this case, but it’s safe for strings too. When the function calls itself it specifies how many levels deep in the tree the current item is – each level of recursion adds 1 to $indent in the function.

I wrote an "Out tree" before doing "choose-tree", most of the code below is associated with making choices When processing the topmost item it sets up a counter to allow the user to choose the items, and because the output order might not be the same as the input order it also sets up an array to hold ordered items. Once all the child items have been processed it prompts the user for a selection  and returns the item at that position in the array. 

The only thing that I’ve done here that is out of the ordinary for me is use the -f operator on strings – I usually avoid it because it makes for unreadable code. "{0,-4}" -f $counter says "put the argument at position 0 into this string, left-justified in a 4-character column", which is just what I need, and I’ve put the rest of the output line into the same construction.
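A quick illustration of that format operator, using values like the ones the tree display produces:

```powershell
# {0,-4} left-justifies argument 0 in a 4-character column;
# the leader and label slot into positions 1 and 2
"{0,-4} {1}+{2}" -f 2, "| ", "Pics"   # -> "2    | +Pics"
```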

   Function Choose-Tree
   {Param ($items, $startAt, $path=("Path"), $parent=("Parent"),
    $label=("Label"), $indent=0, [Switch]$multiple)
    if ($indent -eq 0)  {$Global:treeCounter = -1 ;  $Global:treeList=@() }
    $Global:treeCounter++
    $Global:treeList = $Global:treeList + @($startAt)
    $children = $items | where-object {$_.$parent -eq $startAt.$path.ToString()}
    if ($children -ne $null)
       {$leader = "| " * ($indent)
        "{0,-4} {1}+{2} " -f $Global:treeCounter, $leader, $startAt.$label | Out-Host
        $children | sort-object $label |
           ForEach-Object {Choose-Tree -Items $items -StartAt $_ -Path $path `
                                       -parent $parent -label $label -indent ($indent+1)}
       }
    else
       {$leader = "| " * ($indent-1)
        "{0,-4} {1}|--{2} " -f $Global:treeCounter, $leader, $startAt.$label | Out-Host
       }
    if ($indent -eq 0)
       {if ($multiple) {$Global:treeList[ [int[]](Read-Host "Which one(s) ?").Split(",") ]}
        else           {$Global:treeList[ (Read-Host "Which one ?") ]}
       }
   }
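To see Choose-Tree working without any WMI plumbing, here is a hypothetical flat list (the data is invented; the property names match the function’s defaults, and New-Object -Property needs PowerShell 2.0):

```powershell
# Three made-up items: Parent holds the Path of each item's parent,
# which is the shape Choose-Tree's -path/-parent defaults expect.
$items = @(
    (New-Object PSObject -Property @{Path="root"; Parent=$null;  Label="All VMs"}),
    (New-Object PSObject -Property @{Path="vm1";  Parent="root"; Label="Server Core"}),
    (New-Object PSObject -Property @{Path="vm2";  Parent="root"; Label="Vista Client"})
)
# Displays the numbered tree, prompts "Which one ?" and returns the chosen object
Choose-Tree -items $items -startAt $items[0]
```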

And so to snapshots. 

The parent partition and every child VM is represented by an MsVM_ComputerSystem WMI object. The "Name" field in this object is actually a GUID. There is a second object, MsVM_VirtualSystemSettingData: each VM, and each of its snapshots, has one of these objects. The settings data object for the VM itself has its GUID in both the InstanceID and SystemName fields, but the snapshots have their own InstanceID with the VM’s GUID in the SystemName field. As with most of my functions I set things up so I can pass the MsVM_ComputerSystem object, or pass a string and use Get-VM to convert it. Then it’s one Get-WmiObject operation to get the snapshots.

   Function Get-VMSnapshot
   {Param( $VM=$(Throw "You must specify a VM") )

if ($VM -is [String]) {$VM=(Get-VM -machineName $VM) }
Get-WmiObject -NameSpace root\virtualization -Query "Select * From MsVM_VirtualSystemSettingData Where systemName='$($VM.name)' and instanceID <> 'Microsoft:$($VM.name)' " }

Time to combine Choose-Tree and Get-VMSnapshot to choose my snapshots from a tree. As I said above, the Name field is actually a GUID, and the display name for the snapshot is in ElementName. So I display the tree of choices, starting with the "root" snapshot. [Note: I’m aware that I don’t cope with the situation where you delete a root snapshot with two children and get two roots]

   Function Choose-VMSnapshot
   {Param ($VM=$(Throw "You must specify a VM"))
    $snapshots=(Get-VMSnapshot $VM )
    Choose-Tree -items $snapshots -startAt ($snapshots | where{$_.parent -eq $null}) `
        -path "Path" -Parent "Parent" -label "elementname" }

Now I can choose my snapshots, it’s easy to tell a function like Remove-VMSnapshot or Apply-VMSnapshot what I want. Here’s Remove-VMSnapshot, which I can call with something like

Remove-VMSnapshot -snapshot (Choose-VMSnapshot Core). Pretty simple stuff: I use the variable pointing to the Virtual System Management Service, as I did when creating a new snapshot; this time I just need to invoke the RemoveVirtualSystemSnapshot method. As with the new snapshot it should return 4096 for "started processing in the background", and I return the job ID.

   Function Remove-VMSnapshot 
   {Param( $snapshot=$(Throw "You must specify a snapshot") ) 
    $arguments=($snapshot,$Null) 
    $result=$VSMgtSvc.psbase.InvokeMethod("RemoveVirtualSystemSnapshot",$arguments) 
    if ($result -eq 4096){ $arguments[1] } 
    else                  {"Error, code:" + $result} 
   }

Finally I might want to apply a snapshot, and this needs us to specify both the VM and the snapshot. I’ve written this so that if the snapshot is omitted the user is prompted to select it. It’s the same process again, except this time we use the ApplyVirtualSystemSnapshot method.

   Function Apply-VMSnapshot
   {Param( $VM=$(Throw "You must specify a VM"), $SnapShot=(choose-VMsnapshot $VM))
    if ($VM -is [String]) {$VM=(Get-VM -machineName $VM) }
    $arguments=@($VM,$snapshot,$null)
    $result=$VSMgtSvc.psbase.InvokeMethod("ApplyVirtualSystemSnapshot", $arguments)   
    if     ($result -eq 0)    {"Success"}
    elseif ($result -eq 4096) {"Job Started" | Out-Host
                               $arguments[2]}
    else                      {"Failed"}
   }

[Update. There were a couple of bits of PowerShell 2.0 in the above. In 1.0 you can’t call the .InvokeMethod  method of a WMI object directly, you have to call it via .psbase]

If you’re wondering what to do with the virtual hard disks I showed, I’ll get round to that soon.

This post originally appeared on my technet blog.

April 25, 2008

Accessing the Hyper-V API: disks.

Filed under: How to,Powershell,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 1:39 am

… In which we create compact, mount, unmount vhds

In my last post I said "There are two WMI objects which do most of the work", and mentioned the one named "Msvm_ImageManagementService". I spent last week with poor Internet connectivity and so I had to discover some of the following by using WbemTest; it’s not a widely used tool but it’s a kind of Swiss army knife for WMI. That’s given me the subject matter for another post, but it’s much easier to get the information from the MSDN page for Msvm_ImageManagementService. This gives you a list of methods you can call via WMI. There are 13: CreateDynamicVirtualHardDisk, CreateFixedVirtualHardDisk, CreateDifferencingVirtualHardDisk, ReconnectParentVirtualHardDisk, CreateVirtualFloppyDisk, MergeVirtualHardDisk, CompactVirtualHardDisk, ExpandVirtualHardDisk, ConvertVirtualHardDisk, GetVirtualHardDiskInfo, Mount, Unmount, ValidateVirtualHardDisk

For now I’m only looking at 6 of them: the 3 CreateXXXVirtualHardDisk functions, Mount, Unmount, and Compact.

The script which contains my functions has a line which sets a variable to point to the image management service WMI object. With that in place I created a New-VHD function. Initially I created "New-DynamicVHD" and "New-FixedVHD" functions; I then merged them and added a -Fixed switch. In addition I pass the function a path and a size (I love the fact that PowerShell understands what 20GB means here!)
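That setup line isn’t shown in this post; a sketch of what it might look like (the variable name is just my reading of the code below, the class name is from MSDN):

```powershell
# There is a single Msvm_ImageManagementService instance per host,
# so Get-WmiObject -Class is enough to capture it in a variable.
$IMGMgtSvc = Get-WmiObject -Namespace "root\virtualization" `
                           -Class "Msvm_ImageManagementService"
```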

function New-VHD
{param ([string]$vhdPath , [int64]$size , [Switch]$Fixed)
   if ($fixed) { $IMGMgtSvc.psbase.invokemethod("CreateFixedVirtualHardDisk",@($vhdPath,$Size,$null) ) }
   else        { $IMGMgtSvc.psbase.invokemethod("CreateDynamicVirtualHardDisk",@($vhdPath,$Size,$null) ) }    }

[Update. A bit of PowerShell 2.0 crept into the above. In 1.0 you can’t call the .InvokeMethod  method of a WMI object directly, you have to call it via .psbase]

I changed this later to have a -Parent parameter. If it is present I invoke the CreateDifferencingVirtualHardDisk method of the WMI object: instead of passing it an array with path, size and a null, I pass it path, parent and a null. The null is for data being returned and points to a "job" – Hyper-V creates the hard disk in the background and we can check on progress by examining the WMI object representing the job, so the final version of the function returns the job ID to make that easier. The version above is easier to read, but I’ll make the full version available with the rest of the functions at a later date.
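The full version isn’t shown here, but based on the description above a sketch might look like this (my reconstruction, not the author’s final code; the job-ID handling is left out):

```powershell
# -Parent switches to a differencing disk: the argument array carries
# (path, parent, $null) instead of (path, size, $null).
function New-VHD
{param ([string]$vhdPath, [int64]$size, [string]$parent, [Switch]$Fixed)
 if     ($parent) { $IMGMgtSvc.psbase.InvokeMethod("CreateDifferencingVirtualHardDisk", @($vhdPath, $parent, $null)) }
 elseif ($Fixed)  { $IMGMgtSvc.psbase.InvokeMethod("CreateFixedVirtualHardDisk",        @($vhdPath, $size,   $null)) }
 else             { $IMGMgtSvc.psbase.InvokeMethod("CreateDynamicVirtualHardDisk",      @($vhdPath, $size,   $null)) }
}
```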

You can see how the job ID can be used in the next function. Mounting a disk via WMI is easy. Just for illustration I’ve used two different syntaxes: $IMGMgtSvc.invokemethod("MethodName",arguments) and $IMGMgtSvc.methodName(arguments).

All that’s needed to mount the disk is to call the Mount method with the path to the VHD. The functions return 4096 if they start a job, so I check for that and get the storage job; I could poll the job until it completes but I just wait 5 seconds instead. Buried in the storage job is the disk index. Because disks are mounted offline, I string together commands for bringing the disk online and pipe them into DiskPart. If there are any partitions on the disk they’ll have drive letters, so after letting the mount process settle I check what they are. I make sure the drive letters are echoed to the screen, but I return the index of the disk as the result of the function.

function Mount-VHD
{param ([string]$vhdPath=$(throw("You must specify a Path for the VHD")) , [Switch]$Offline)
 $result=$IMGMgtSvc.mount($vhdPath)
 if ($result.returnValue -eq 4096)
    {start-sleep 5
     $StorageJob=(Get-WmiObject -Namespace root\virtualization -Query "select * from msvm_storageJob
                  where instanceID=$($result.job.split("=")[1])")
     $diskIndex=(Get-WmiObject -query "Select * from win32_diskdrive
                  where Model='Msft Virtual Disk SCSI Disk Device'
                  and ScsiTargetID=$($storageJob.TargetId)
                  and ScsiLogicalUnit=$($StorageJob.Lun)
                  and scsiPort=$($storageJob.PortNumber)").index
     if ($diskIndex -eq $null) {"Mount failed"}
     elseif (-not $offline)
        {@("select disk $diskIndex",
           "online disk",
           "attributes disk clear readonly",
           "exit") | diskpart | Out-Null
         start-sleep 5
         get-wmiobject -query "select * from Win32_logicaldisktoPartition
                  where __PATH like '%disk #$diskIndex%' " |
            foreach-object {$_.dependent.split("=")[1].replace('"','') | out-host }
         $diskIndex
        }
    }
 else {"Mount Failed"}
}
 

Unmounting the disk is so simple

function UnMount-VHD
{param ([string]$vhdPath )

$IMGMgtSvc.Unmount($vhdPath) }

and compacting is hardly complicated – just be aware that it takes an ARRAY of paths, not a single variable.

Function Compact-VHD
{param ([string]$vhdPath)
$IMGMgtSvc.invokemethod("CompactVirtualHardDisk",@($vhdpath)) }

This post originally appeared on my technet blog.

April 19, 2008

More on accessing the Hyper-V API from PowerShell

Filed under: How to,Powershell,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 5:51 pm

… In which we find VMs, then choose one, start them, stop them, and connect to them.

I spent more of the last week than I planned looking at Hyper-V and Powershell, and I’m getting dangerously close to calling myself an expert.

There are two WMI objects which do most of the work “Msvm_ImageManagementService” and “Msvm_virtualSystemManagementService” and I’ve got posts to write about things you can do with both of them, and some of the tricks of Wbemtest as well.

I also found that in several places I needed the WMI object representing the VM, or the object representing its status. Cue two functions

function Get-VM
{Param ($machineName="%")
 Get-WmiObject -Namespace "root\virtualization" `
    -Query "SELECT * FROM Msvm_ComputerSystem
            WHERE ElementName like '$machineName' AND caption like 'Virtual%' "
}
#Example 1: Get-VM
#           Returns WMI Msvm_ComputerSystem objects for all Virtual Machines
#           (n.b. Parent Partition is filtered out)
#Example 2: Get-VM "%2008%"
#           Returns WMI Msvm_ComputerSystem objects for machines containing 2008 in their name
#           (n.b. Wild card is % sign not *)

I re-wrote my function for displaying VM status as a format function, with this little bit of code:

function Format-VMStatus
{param ($VM)
 if ($VM -eq $null) {$VM=$input}
 $global:Counter=-1
 $VM | Format-Table -autoSize -properties as in the old post
}

Function Get-VMStatus
{Param ($machineName="%")
Get-VM $MachineName | Format-VmStatus
}

and Choosing a VM turned into

function Choose-VM  
{$VMs = Get-VM 
$VMs| Format-VmStatus | out-host
$VMs[ [int[]](Read-Host "Which one(s) ?").Split(",")] }

I wanted to be able to start a VM (or multiple VMs), stop it, pause it or connect to it using either its name or the Msvm_ComputerSystem object, so I wrote 4 little functions which do just that. (Yes, I should have 3 of them calling a single "change state" function!)

function Start-VM 
{Param ($vm)
if ($vm -is [array]) {$vm | forEach-object {Start-VM $_ } }
if ($vm -is [string]) {$vm=(Get-VM $vm) }
if ($vm -is [System.Management.ManagementObject]) {$vm.requestStateChange(2) }
}
#Example Start-vm (choose-vm) - prompts the user to select one or more VMs and starts them

function Suspend-VM
{Param ($vm)
 if ($vm -is [array])  {$vm | forEach-object {Suspend-VM $_ } }
 if ($vm -is [string]) {$vm=(Get-VM $vm) }
 if ($vm -is [System.Management.ManagementObject]) {$vm.requestStateChange(32769) }
}

function Stop-VM
{Param ($vm)
 if ($vm -is [array])  {$vm | forEach-object {Stop-VM $_ } }
 if ($vm -is [string]) {$vm=(Get-VM $vm) }
 if ($vm -is [System.Management.ManagementObject]) {$vm.requestStateChange(3) }
}
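As the aside above admits, Start-VM, Stop-VM and Suspend-VM differ only in the state code they pass; a single helper could replace them. A sketch (Set-VMState is a hypothetical name; the state codes are the ones used in this post):

```powershell
# 2 = start, 3 = stop, 32769 = suspend - the codes used by the
# individual functions in this post.
function Set-VMState
{Param ($vm, [int]$state)
 if ($vm -is [array])  {$vm | ForEach-Object {Set-VMState $_ $state} }
 if ($vm -is [string]) {$vm = Get-VM $vm }
 if ($vm -is [System.Management.ManagementObject]) {$vm.RequestStateChange($state) }
}
function Start-VM   {Param ($vm) Set-VMState $vm 2 }
function Stop-VM    {Param ($vm) Set-VMState $vm 3 }
function Suspend-VM {Param ($vm) Set-VMState $vm 32769 }
```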

function Get-VMConnectSession
{Param ($vm)
 if ($vm -is [string]) {& 'C:\Program Files\Hyper-V\vmconnect.exe' $VSMgtSvc.SystemName $vm }
 if ($vm -is [System.Management.ManagementObject]) {& 'C:\Program Files\Hyper-V\vmconnect.exe' $VSMgtSvc.SystemName $vm.elementName }
}
#Example: Get-VMConnectSession $tenby
#         Launches a Terminal connection to the server pointed to by $tenby.

In the next post I’ll look at things we can do with virtual disks using the "Msvm_ImageManagementService" object.


This post originally appeared on my technet blog.

April 16, 2008

Core! that firewall management has some tricks.

Filed under: How to,Security and Malware,Windows Server,Windows Server 2008 — jamesone111 @ 5:23 pm

Quite a lot of the last few days has gone into preparation for the road-show and making sure I had everything right for showing Windows Server Core.

Core, as you probably know by now, is Server 2008 with support for only a subset of features, and most of the GUI bits removed. The idea is that you manage Core remotely, but some things need to be done at the command line. I’ve got all my notes on Core on my PC, but when I checked out the Core document in the step-by-step guides, I found it had all the bits I’d pulled together over recent months in one place, and a few more. I recommend it.

Server 2008 starts "shields-up", that is with the firewall blocking just about everything (even to the point of blocking inbound PINGs, which might be going a bit far). To manage Core remotely from the management console, you need to set some firewall rules. In an ideal world my demo Core machine would be in a domain – and group policy would set the firewall rules. But it isn’t: the step-by-step document kindly tells me that to allow all MMC snap-ins to connect, at a command prompt I need to type

   Netsh advfirewall firewall set rule group="remote administration" new enable=yes

and to enable remote management of the firewall

   Netsh advfirewall set currentprofile settings remotemanagement enable

There’s one more section that jumps out of the document: "To manage a server that is running a Server Core installation and is not a domain member using an MMC snap-in … establish alternate credentials … on your client computer" using

   cmdkey /add:<servername> /user:<username> /pass:<password>

This works like a charm for everything … except the firewall MMC. The fact that it governs its own management traffic separately should have been a clue here. I haven’t found any way to get it to accept alternate credentials. This normally wouldn’t be an issue, because I use a standard password on all my demo machines. Steve does the same; they’re different passwords (of course), and in this case Steve set up the Hyper-V host computer and I set up the Core machine as a virtual machine guest on it. One had his password and one had mine. Much gnashing of teeth followed.

This post originally appeared on my technet blog.

March 26, 2008

Remote Server Admin tools for Vista

Filed under: How to,Windows Server,Windows Server 2008,Windows Vista — jamesone111 @ 4:46 pm

Hard on the heels of the news of management tools for Hyper-V, I find that the folks in Redmond chose a UK public holiday to release all the admin tools for managing servers from Windows Vista.

The package is described in KB941314 – which doesn’t seem to be live yet, but you can go straight to the 64bit or 32 bit downloads

This post originally appeared on my technet blog.

Manage Hyper-V from Vista

Filed under: How to,Virtualization,Windows Server,Windows Server 2008,Windows Vista — jamesone111 @ 12:28 pm

I nearly leaked this ahead of time; as part of the Release Candidate for Hyper-V we have released a version of the management console to run on Windows Vista SP1. (No, to the best of my knowledge we don’t have plans for a version which runs on XP or Server 2003 – this is Windows "6" only.)


Jeff has posted some screen shots to the Virtualization team blog


Vista x64 Edition: http://www.microsoft.com/downloads/details.aspx?FamilyId=450931F5-EBEC-4C0B-95BD-E3BA19D296B1&displaylang=en


Vista x86 Edition: http://www.microsoft.com/downloads/details.aspx?FamilyId=BC3D09CC-3752-4934-B84C-905E78BE50A1&displaylang=en


If you want to run Hyper-V on Server Core, this means you can have the tools to manage it on a workstation, without needing to install one full server machine just to get them. To avoid confusion: the 64 bit version of Vista and the x86 versions of Vista and Server CAN’T run Hyper-V but CAN run the management tools for it.


Update: A new version is now out for RC-1. See http://support.microsoft.com/?id=949587


This post originally appeared on my technet blog.

February 28, 2008

An interesting journey with PowerShell, GPS data and SVG. (Part 2)

Filed under: How to,Powershell — jamesone111 @ 9:19 am

This is, I’m afraid, another of those “Wow! what can you do with a couple of long lines of PowerShell” posts.


I wanted to create a Scalable Vector Graphics (SVG) file for PowerGadgets’ Out-Map cmdlet to do UK county maps. PowerGadgets, as I discovered, is fussy about the SVG data it is passed, and I still have a bit more exploring to do on this: but an outline SVG file looks like this

<?xml version="1.0"?>
<svg width="1200" height="1200" viewBox="0 0 1200 1200">
  <g>
    <g>
      <shape />
      <text>Label1</text>
    </g>
    <g>
      <g>
        <shape1 />
        <shape2 />
      </g>
      <text>Label2</text>
    </g>
  </g>
</svg>

The G tags are groups. The whole document is a group, and all its subgroups are treated as map objects by PowerGadgets. These either contain one shape and a text element – which PowerGadgets uses as the object’s name – or a group of shapes and a text element. The SVG spec defines rectangles, circles, ellipses, polygons and "paths", and I’ve only used paths with PowerGadgets so far. A path looks like this.


<path d="M580,595 L574,590 L572,586 L576,590 L582,594 L580,595 Z" />


Everything is in the d parameter. Incidentally, be warned: a lot of this stuff is case sensitive. The data inside says Move to 580,595, Line to 574,590, Line to 572,586 and so on, and the Z at the end says close the path. Capital letters designate absolute co-ordinates and lower case ones are relative; all my data is in absolute form. PowerGadgets doesn’t like very long paths, so I made a decision to round the data I got from Nearby.Org.UK. Fitting about 10.5 degrees of latitude into 1200 pixels of screen says there’s little point in dealing with anything after the second decimal place. I also made a decision to ditch points which were very close together. That way I could keep the path in the SVG file inside PowerGadgets’ limit.
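The arithmetic behind that rounding decision, as a quick check:

```powershell
# ~10.5 degrees of latitude mapped onto 1200 pixels of screen:
1200 / 10.5            # ~114 pixels per degree
(1200 / 10.5) * 0.01   # ~1.1 pixels - so digits beyond the second
                       # decimal place can't be seen on screen anyway
```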


Now I’m pretty impressed with whoever did the work to produce the data on that site (the blog there says it’s a chap called Barry Hunter). Each county in Britain and Ireland has its own file, and if a county includes islands, each island gets a file – this is perfect for creating those paths: each file becomes one path statement in the SVG file. There are 300 files. There is no way on earth I’m going to hand-process that lot. I mean… I could load them into Excel, do the rounding, calculate the distance between each point and its predecessor, isolate the points with a suitable gap and then – somehow – get that data into the SVG format, but doing it 300 times must be automated. Guess which tool I chose? Yep, PowerShell.


I created a script called do-map and called it for each file . Do-map takes a filename as a parameter and is, essentially 2 (long) lines of PS. I added 3 little filters as much as anything to make the rest easier to read. So here is the beginning of do-map

param($filename)
filter round2 {Param ($Number) [System.math]::round($number,2)}
filter sqr    {Param ($Number) [System.math]::pow($number,2)}
filter cos    {Param ($Number) [System.math]::cos($number * [system.math]::pi /180)}
(@("Lat,long") + (get-content $filename | where {$_ -notMatch '^#'})) > temp.csv

My 3 filters all call the .NET math library: round2 rounds to 2 decimal places, sqr squares a number, and cos takes a number in degrees and returns the cosine (which I need to get the projection right). So the first line of proper code takes the text file from Nearby.Org.UK, strips its initial comments and adds a header to make it a valid CSV file. This goes into temp.csv. Next comes the longest line of PowerShell I have yet written – and bear in mind I’ve shortened it with those filters. To ease reading I’ve split it over 12 lines, but it is really one line in the form Import-CSV | Select-Object | Where | ForEach-Object.

(import-csv temp.csv |
 select-object @{Name="Lat";   expression={round2 $_.lat}},
               @{Name="Long";  expression={round2 $_.long}},
               @{Name="Delta"; expression={(sqr((round2 $_.lat)-$global:lastLat)) + (sqr((round2 $_.long)-$global:Lastlong));
                                           $global:LastLat=(round2 $_.lat);
                                           $global:Lastlong=(round2 $_.long)}}
) | where {$_.delta -gt .0004 } |
foreach-object -Begin   {[String] $MyXml='<g><path d="M'} `
               -Process {$myXml += 'L'+[string](885+100*$_.long) + ',' + [string][int](3500-30*$_.lat/(cos $_.lat)) + ' '} `
               -End     {$myXml.Replace('ML','M') + 'Z" /><text transform="matrix(1 0 0 1 ' +
                         [string] (885+100*$global:LastLong) + ' ' + [string][int](3500-30*$global:Lastlat/(cos $global:Lastlat)) + ')">' +
                         ($filename.replace("C:\Users\jamesone\Counties\GBCountyBoundaryWGS84-","")).replace(".txt","") + '</text></g>'}

The Import-CSV is obvious, so let’s look at the other bits. The Select-Object section has 3 calculated fields: Lat and Long are the result of rounding columns in the CSV file; Delta is doing a bit of Pythagoras on the latitude and longitude. I’m using a technique I’ve shown before – setting variables in the script block of a calculated field for use outside the context of the current calculation; each row leaves its lat/long data for the next (the first row will get a big delta, which is fine). There’s no need to work out the distance itself: testing the square of the distance in the where clause filters out any points with very small deltas. I’m aware that if there are many points all very close together they will all be lost (instead of taking some out to leave a more widely spaced set). In practice this hasn’t been a problem.
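A worked example of the delta test, with made-up points:

```powershell
# Two successive (hypothetical) points 0.01 degrees apart on both axes:
# delta = 0.01^2 + 0.01^2 = 0.0002, below the 0.0004 threshold,
# so the second point would be filtered out by the where clause.
$lastLat = 51.50 ; $lastLong = -0.10
$lat     = 51.51 ; $long     = -0.11
$delta = [math]::pow($lat - $lastLat, 2) + [math]::pow($long - $lastLong, 2)
$delta -gt 0.0004      # False - the point is dropped
```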


Next we have a ForEach-Object in 3 parts.
Begin – create a string $myXML with the start of the path statement.
Process – for each data point, add a LineTo with its X,Y co-ordinates. These are scaled and, to make this a Mercator projection, the Y co-ordinate is based on latitude/cos(latitude).
[At latitude Θ, 1 minute of latitude is 1 nautical mile, but 1 minute of longitude is cos Θ nautical miles. If lines of longitude are drawn as parallel, 1 unit apart, then at latitude Θ the lines of latitude need to be drawn 1/cos Θ units apart.]
End – return the XML: first change the first point in the path to M, because it will have been built as ML; add the close-path "Z", and close the XML tag. Then add the label as a text tag, positioned using the data left in LastLat and LastLong by the calculation of deltas. The actual text is extracted from the file name – I’ve been very lazy with this.


So I invoke this script for each of the files –

dir C:\Users\jamesone\Counties\GB*.txt | %{C:\Users\jamesone\Do-Map.ps1 $_.fullName} > temp.xml 

it took 7 minutes 20 seconds to do all 303 files. You can find this out with

Get-History | format table  -auto CommandLine , StartExecutionTime, EndExecutionTime 

(I’ll leave a calculated field for run time as an exercise for the reader). Temp.xml needs a small amount of cleaning up to become a usable SVG file – topping and tailing, then merging counties and their islands into a group and deleting islands with just one or two points; that’s another 5-10 minutes. I test the file by opening it in Visio and, assuming I haven’t made any errors, it looks fine, so I can test it with Out-Map – first to get a list of the object names, and then to use them to display a map

out-map -mapsource "custom maps\uk-counties" -ListObjects

Import-Csv countyData.csv | out-map -values value -Label county -mapsource "custom maps\uk-counties" -legendbox_visible false

You can see the result – click it to get a comparison with the non-Mercator projection. One final thing: the data this was based on was shared under Creative Commons by-sa 2.5 and I’ve attached the map on the same basis. You can use it as you see fit, just acknowledge the work of Nearby.Org.UK in getting the data and mine/Microsoft’s in formatting it for this use. If you create another data-set using this, the CC licence as placed on the data I used says you have to share the data, but not any app which uses the data.



Update, fixed a bunch of typos, bad edits etc.

This post originally appeared on my technet blog.

February 27, 2008

An interesting journey with PowerShell GPS data and SVG. (Part 1)

Filed under: General musings,How to,Powershell — jamesone111 @ 11:10 am

Eileen phoned me from a traffic jam yesterday. "I’ve got a demon in the car," she said. A little late for a technical person, Eileen has joined the world of Sat Nav owners, and her characterization of it put me in mind of the personal Dis-organizer device which shows up in some of the Terry Pratchett books.



An extremely annoying personal organiser, it is powered by a (usually incompetent) imp, which can perform various tasks. There have been 3 models encountered so far: the Mark 1, mark 2 and Mark 5. All of these start up with an unusually happy tune such as “bingly-bingly beep!”, “bingle bingle bingle” or (when wet) “ob oggle sobble obble”.


The [Mark 1] claimed to have 10 functions, although it appeared that five were apologising for the useless manner in which it performed the others.
… the Mark 5, the Gooseberry, [is] much more useful than its predecessors. It has games, such as “Splong”, and “Guess my weight in pigs”. It has an “iHum” feature, and can [send messages using ‘BlueNose’]


Eileen’s late to GPS, but since she can find her position on the globe with a wrist watch, a protractor and a couple of match sticks (another of those Obsolete Skills that I referred to), I can’t say I blame her. I’m into old-style navigation myself, having got an ‘O’ level in Air Navigation. Actually I’m even more retro than that: I have prints of 17th century maps hanging on the walls at home – many by a Dutchman called Blaeu. Wikimedia have some of his maps – this one gives an idea of the style of the ones I have – and Taschen recently published some of his atlases; I had the "Anglia" and "Scotia and Hibernia" volumes as a Christmas present. They have an introduction explaining Blaeu’s role in making the maps (as publisher rather than surveyor), his business and the role of other cartographers, notably Mercator.


Mercator is famous for two things: he was the first person to call a collection of maps an atlas, and he gave his name to a way of mapping the surface of the roughly spherical earth onto a flat sheet of paper. "Mercator’s projection" is still widely used even though it is "wrong". (West Wing addicts may remember "Cartographers for Social Justice" – the idea behind it is here.) Looking at a Mercator map you’d think that if you were going to fly from Munich (latitude 48 degrees) to Seattle (also latitude 48 degrees) the shortest way would be to follow the line of latitude all the way, right? The shortest distance between two points is a straight line, right? But (unless you’re building a tunnel) you can’t go in a straight line along the earth’s surface; you have to go in an arc, and the shortest arc doesn’t follow a line of latitude, but follows the edge of a slice which passes through the centre of the earth. Which is why flights from Europe to the US go north.


The problem Mercator solved is simple. As you move away from the equator, lines of latitude are equally spaced – 1 arc minute of latitude is 1 nautical mile*. However, lines of longitude are not: just south of the North Pole you can jog round 360 degrees of longitude in a few paces; at the equator it’s over 20,000 miles. How do you draw that? Mercator’s solution: draw lines of longitude evenly spaced and parallel (in real life they converge at the poles), but to allow for the fact that they are actually closer together as you move away from the equator, increase the spaces between the parallel lines of latitude (in real life they are a constant distance apart). So grabbing a handy Mercator projection map (my local Ordnance Survey map), 5′ of latitude takes 18cm of map, and 5′ of longitude takes only 11cm.
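Those map measurements make a quick sanity check on the projection (assuming the map sits somewhere around latitude 52 North – my guess, not a figure from the map):

```powershell
# Ratio of longitude spacing to latitude spacing measured on the map...
11 / 18                              # ~0.61
# ...should match cos(latitude):
[math]::cos(52 * [math]::PI / 180)   # ~0.62
```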


Why does this matter? Well, if you have GPS co-ordinates for mapping the boundary of something and you simply plot them on a square grid, the map looks wrong. I’ve found data for building the SVG file I need for PowerGadgets to do a British county map. I processed it into the SVG file using PowerShell (of course – and that will be where part 2 comes in). But although the map is recognisable, it looks wrong because I haven’t made allowance for the projection. I’m going to do that later, and then explain the PowerShell which got me there.



*Footnote. A nautical mile is 6076 feet, so one arc second of latitude is about 100 feet. I’ve seen GPS locations quoted with fractions finer than 0.1 of a second – more precision than the "10 foot" accuracy quoted for consumer GPS devices justifies.

This post originally appeared on my technet blog.

February 26, 2008

PowerShell Community Extensions

Filed under: How to,Powershell — jamesone111 @ 10:40 pm

I referred to the PowerShell Community Extensions in the last post. It’s interesting, but not especially surprising, to find that

  • The Community can do some pretty cool stuff.
  • A Community effort can encompass a lot of things, at the price of lacking clear direction
  • Writing code is given a higher priority than writing documentation.

Let’s start with the first of these. There’s certainly some cool stuff in the Community Extensions. It’s a mixture of scripts, filters and functions (which you can look at to get your own ideas) and compiled cmdlets, plus three new providers. It deals with Active Directory, handling files and the file system (everything from calculating a hash to modifying the disk volume label), basic bitmap handling, clipboard handling, XML processing, Terminal Services management, an RSS provider, a speech generator and a lot of general stuff to help you get around, including querying information about DLLs and a new tab-expansion function.

The problem is that this can give the impression of "a very unevenly-edited [work which] contains many [things] which simply seemed to its editors like a good idea at the time."* Of course the question comes up: "would you like to be the editor and have to tell people their code can’t go in until their documentation is up to scratch?" I wouldn’t. How about "if it should be segmented, would you like to be the one deciding what is spun off and what stays in the main part?" Probably not. The profile script that comes with the extensions gives you the chance to see what’s being loaded apart from the basic snap-in, and to leave out any bits that don’t appeal to you.

The discussion area on CodePlex for the extensions has a handful of active threads, which suggests that work is going on, but some aspects haven’t been touched for a long time. For example, everyone seems to expect to be able to pass an object to Get-Member to find out what its properties and methods are; but this doesn’t work with posts returned by the RSS provider. It was suggested that this be corrected some while back but to date nothing has happened (you need to know the properties for the IFEED object). It was also suggested that some documentation be written for the feed provider and (in common with other places) this has not happened.

I don’t know who is going to need, for example, speech output and AD support at the same time. Still, I have already found some of the bits pretty useful. Take this bit of code, for example, which I used to create a post at the end of last week.

dir E:\DCIM\102PENTX\Develops\*small* | ForEach-Object -Begin {[String]$myHtml=""} `
  -Process {$myHtml += '<p><img src="' + (upload-blogfile $_.fullname) + '" /></p><p><b>' + (get-image $_.fullname).title + '</b></p>'} `
  -End {New-BlogPost -body $myHtml -Title "Testing Pictures"}

It gets all the pictures named SMALL in a given folder, uploads them to my blog and builds the body of a blog post as it does so; at the end of the process it uploads the post.

Now the MetaWeblog API – which I've already written about – needs the picture to be encoded in Base64 format, and the Community Extensions have ConvertTo-Base64. So the code for Upload-BlogFile is:

function upload-BlogFile
{param($filename, [string] $postUrl=$postUrl, [string] $blogid=$blogID,
       [string] $username=$username, [string] $password=$password)
 #Requires -pssnapin PSCX
 $base64 = convertto-base64 $fileName
 $contentType = (get-itemproperty "hklm:\SOFTWARE\Classes\$($filename.substring($filename.lastIndexof('.')))" "Content Type")."Content Type"
 $postTemplate = "<methodCall><methodName>metaWeblog.newMediaObject</methodName><params>
    <param><value>$blogid</value></param>
    <param><value>$username</value></param>
    <param><value>$password</value></param>
    <param><value><struct>
      <member><name>name</name><value><string>$($filename.substring($filename.lastIndexof('\')+1))</string></value></member>
      <member><name>type</name><value><string>$contentType</string></value></member>
      <member><name>bits</name><value><base64>$base64</base64></value></member></struct></value></param></params></methodCall>"
 write-progress "Uploading to blog" "Sending" -cu $filename
 ([xml]((new-object System.Net.WebClient).UploadString($postUrl, $postTemplate))).methodresponse.params.param.value.struct.member.value.string
}
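ConvertTo-Base64 there is, presumably, a thin wrapper over the .NET Convert class; if you don't have PSCX loaded, the same conversion can be done directly. A quick sketch, round-tripping a few bytes through a temp file to show the encoding is lossless:

```powershell
# read a file's bytes and Base64-encode them, much as ConvertTo-Base64 does
$path = Join-Path ([System.IO.Path]::GetTempPath()) "b64demo.bin"
[System.IO.File]::WriteAllBytes($path, [byte[]](1, 2, 3, 250))
$base64 = [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes($path))
# decoding gets the original bytes back
$back = [System.Convert]::FromBase64String($base64)
```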

This breaks down as:

1. Do the transformation to Base64.
2. Get the MIME type for that kind of file.
3. Build the XML to upload.
4. Put up a progress screen, as the upload can be slow.
5. Perform the upload and extract the URL assigned to the image from the XML returned.
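The MIME-type lookup works because Windows keeps a "Content Type" value under each file extension's key in HKLM\SOFTWARE\Classes. In isolation (the path is illustrative, and the registry branch only exists on Windows):

```powershell
# get the extension, then read its registered MIME type from the registry
$ext = [System.IO.Path]::GetExtension("C:\temp\picture.jpg")        # ".jpg"
$mime = $null
if (Test-Path "hklm:\SOFTWARE\Classes\$ext") {
    $mime = (Get-ItemProperty "hklm:\SOFTWARE\Classes\$ext")."Content Type"
}
```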

So that command line was building up HTML which repeated the following for each image: <p><img src="url"/></p><p><b>Image title</b></p>

Get-Image in that code is a filter for New-Object using my EXIF library to get the title. As I said at the time, I like this form of ForEach-Object loop – the -Begin and -End blocks let you do stuff outside the actual loop while keeping it all to one command: the -Begin {block} sets up an empty string, and the -End {block} posts the completed string to my blog.
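The pattern is easy to see on trivial data – -Begin initialises an accumulator, -Process updates it for each pipeline item, and -End emits the result exactly once:

```powershell
# sum 1..5 in one pipeline: the loop body never has to leave the command
$total = 1..5 | ForEach-Object -Begin {$sum = 0} -Process {$sum += $_} -End {$sum}
$total    # 15
```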

You may be wondering how I got pictures with "SMALL" in their names in the first place. That was done using 3 more cmdlets from the extensions:

dir "E:\DCIM\102PENTX\Develops\*1*" | ForEach-Object {Import-Bitmap -Path $_.fullname | Resize-Bitmap -percent 20 |
  Export-Bitmap -path ($_.fullname.replace("IMGP","SMALL")) -Format JPEG -Quality 70}

Import-Bitmap and Resize-Bitmap are pretty self-explanatory. Export-Bitmap will change the format and JPEG quality; since these pictures are being resized for the web, I'm dropping them from 3900×2600/4MB to 780×520/32KB.
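The PSCX bitmap cmdlets wrap GDI+, so for anyone without the extensions loaded, roughly the same resize can be sketched with System.Drawing directly (Windows-only; the in-memory bitmap below stands in for a photo, and the save line is left as a comment so nothing touches the disk):

```powershell
Add-Type -AssemblyName System.Drawing
# a 100x50 bitmap stands in for an image loaded with Import-Bitmap
$src = New-Object System.Drawing.Bitmap 100, 50
# Resize-Bitmap -percent 20 amounts to drawing into a bitmap 20% of the size
$dst = New-Object System.Drawing.Bitmap $src, ([int]($src.Width * 0.2)), ([int]($src.Height * 0.2))
# Export-Bitmap -Format JPEG amounts to something like:
#   $dst.Save("C:\temp\small.jpg", [System.Drawing.Imaging.ImageFormat]::Jpeg)
```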

Playing with the -Output option on PowerGadgets I realized I can do Import-Bitmap temp.bmp | Set-Clipboard to make it easy to paste a chart straight into another application.

Despite the poor documentation, I think most people who use PowerShell will find something useful in the Community Extensions. They are so diverse that everyone will find different bits useful. But if you haven't yet found which bits are useful to you, go and download them from CodePlex and have a look.

 

* Footnote: Last week I got a copy of Bruce Payette's book "PowerShell in Action", which seems to be accepted as the standard repository of all knowledge and wisdom when it comes to PowerShell. I was delighted to find chapter one starts with a quote from The Hitch Hiker's Guide to the Galaxy. Bruce is a guy who clearly knows where his towel is.

This post originally appeared on my technet blog.

February 20, 2008

A presentation tip (especially for PowerShell)

Filed under: How to,Powershell — jamesone111 @ 11:38 am

I try not to stand still when I'm presenting; stand behind a lectern for too long and you end up holding onto it, staring down and droning on to your PC. Radio mics and the wireless presenter mouse give me the freedom to stroll around the stage to my heart's content. The problem with demos is that you end up back behind your PC, and if I have to show PowerShell I'm behind the PC typing long command lines. Mistyping them as often as not (and wondering why they didn't work). Getting tempted to go off the script too. Jeffrey Snover came up against this problem and created a fix, as he explained here. There's a revised version here. Frankly, once you've seen it, doing a demo without it would be daft.

Still, this leaves the problem of having to press keys. I don't want to be stuck behind my PC; I want to click through the demo like I click through my slides.

I thought about this for a while, and remembered there was the option to do some clever button assignment in the IntelliMouse software. Sure enough, you can assign different actions to different programs, so I've assigned the wheel button to [ENTER] (you can choose any keystroke) and now I can just click through the automated script.

[screenshot: IntelliPoint button-assignment settings]

 


This post originally appeared on my technet blog.

February 12, 2008

Deploying Vista SP1.

Filed under: How to,Windows Vista — jamesone111 @ 1:43 pm

For the last year I have been saying that waiting for a service pack is a way of thinking which belongs to the 1990s (waiting 6 months might make sense, but a service pack is not the milestone it was). Still, SP1 has released, and whilst the majority of its changes were already pushed out through Windows Update there are some additional bits in SP1, so deploying it is, basically, a Good Thing.

My laptop has been messed about with dreadfully over the last year and a bit. I blogged that sleep wasn't working properly, and the battery life was terrible. This turned out to be a faulty battery – I swapped it with one from an identical machine we have for roadshow demos and battery life went from less than an hour to somewhere between 3 and 4 hours. Fantastic. I also noticed that my BIOS was out of date, so I hatched a plan:

  • Use the demo laptop to build a complete installation of everything I wanted
  • Image it (the image won't fit on one DVD, so…)
  • Split the image
  • Make ISO images for installation disks
  • Burn DVDs
  • Reformat my laptop, reinstall from the DVDs and restore my data from Vista's regular backup

I have a secondary hard disk which replaces the DVD drive in the laptop, and because Vista likes to run backups as a scheduled task this is reasonably up to date, so I could do a clean installation. I made myself a Vista+SP1 install disk from the files on an internal server and got to work.

  1. Build Vista Ultimate 64 bit
  2. Plug in all my different USB devices, and configure drivers for them (that means my hand made storage driver for my old compact camera and downloading the driver for my TV stick from Hauppauge). Configure drivers needed for smartcard
  3. Configure Microsoft IT’s VPN software
  4. Configure Media Centre TV settings.
  5. Add rights management Add in for Internet explorer
  6. Install Microsoft Office 2007 SP1 Enterprise + Visio + Communicator + Live Meeting
  7. Install my photography tools: Digital Image Suite + GroupShot + Paint Shop Pro + Capture One + Advanced Batch Converter + ExifUtils and some internal-only ones.
  8. Install PowerShell + PowerGadgets + PowerShell extensions for OneNote
  9. Install PDF tools from Foxit Software, Flash , Quicktime and Silverlight and IE7 Pro.
  10. Install Virtual Earth and Google Earth (yes I have both ! Which is better depends on where you’re looking)
  11. Install Suunto Dive manager
  12. Install Mind-Genius
  13. Install Windows Live writer 
  14. Apply everything offered by Windows Update
  15. Remove redundant fonts
  16. Re-arrange start menu

I didn't install any anti-virus software as that gets pushed down to me anyway; to keep the image size small I chose to leave MapPoint off, and I forgot the driver for my presenter mouse. With the software more or less as I wanted it, the next step was to run SYSPREP. I told it to generalize the machine, as this should help the install process on different hardware. Then it was time to boot into Windows PE and image the machine. Out came my trusty bootable USB key; from the prompt I had to run:

imagex.exe /flags "Ultimate" /capture c: C:\gold.wim "ULTIMATE Golden Image" /compress maximum /verify

I've explained this command line before. The crucial bit is that without the /FLAGS switch you can't use the image on an install DVD. Since Gold.wim is way too big for a DVD, the next step was to run:

imagex /split C:\Gold.wim C:\install.swm 3000

This gives me Install.swm and Install2.swm (an SWM – or split WIM – file is just like a WIM; experience tells me that setup will fail if the split files have .WIM extensions).

Next step: make a folder named VistaDVD, in that make a folder named SOURCES, and create a dummy INSTALL.WIM in it. Then XCOPY the Vista install DVD into VistaDVD. When it asks to overwrite INSTALL.WIM say no (that saves copying a huge file only to delete it later), then remove the dummy INSTALL.WIM and copy in INSTALL.SWM. Finally make a second folder named Vista2 with a SOURCES folder and copy in INSTALL2.SWM.

Now I can make two DVD ISO images.

oscdimg -n -m -lGoldDisc1 -h -b"D:\etfsboot.com" c:\VistaDVD c:\Vista1.iso
oscdimg -n -m -lGoldDisc2 -h c:\Vista2 c:\Vista2.iso

The extra bits in the first one make it bootable. Before burning the ISOs I copied them to my secondary disk and booted up Hyper-V. I created a new virtual machine and installed from the ISOs. Happy that worked, I burnt the disks and reinstalled the demo machine using them. All seemed well.

I took one last backup of my laptop to the secondary disk, and at about 11:40 yesterday I booted from my new DVDs and set the installation in motion. It asked to swap to DVD 2 during the copy phase and back to DVD 1 at the end of it, and at about 12:00 I went to lunch. I returned at 12:30 to find the machine had finished installing and was waiting for me to set up an initial user account. Setup had a few final tasks to do, and then I was able to join the domain and log on with my usual account – creating an empty profile.

I took my machine to a 1:00 presentation and set the restore going – it failed because it needed to be able to check accounts against the domain. This was a bit unexpected, as I'd done a partial restore on an unjoined machine without problems. I plugged a network cable in, and 50GB or so restored at about 1GB per minute. I had to recreate my Outlook offline store, set up links with my phone and change a few other settings, but start to finish the process was done inside 3 hours – and for most of that time I wasn't touching the machine. One pleasant surprise was that Media Center was ready to go when I plugged in my TV stick at home. The only unpleasant surprise was that MindGenius doesn't like being imaged after it has had a serial number entered: it wouldn't start. Since I still have the image I built on the demo machine I might go back, add the IntelliPoint drivers for the mouse, remove and reinstall MindGenius, and make some new disks. It only takes 7 steps (sysprep, imagex /capture, imagex /split, 2 copies, 2 oscdimg runs, 2 burn operations).

My machine is definitely running better – how much of that is SP1 and how much just cleaning it out I don't know. But the ease with which you can make build disks and give them to every laptop user in the company is just one more reason why I say Vista is superior to anything we've had before.

This post originally appeared on my technet blog.

February 1, 2008

Getting to grips with Hyper-V’s API

Filed under: How to,Powershell,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 9:30 pm

It's only 7 months since I first installed PowerShell. Hard to believe that last week I saw a copy of the OCS Resource Kit with my name on the cover as a result of the PowerShell scripts I wrote – something worth remembering before criticizing other people's scripts.


Over on Ben Armstrong's blog he has applied his virtualization expertise to showing how to use the newly published WMI management interface, posting samples using VB and "VB in PowerShell syntax". For example:

$VMs = gwmi -class "MSVM_ComputerSystem" -namespace "root\virtualization" -computername "."
foreach ($VM in $VMs){
    if ($VM.Caption -match "Microsoft Virtual Computer System"){
        write-host "=================================="
        write-host "VM Name:  " $VM.ElementName
        write-host "VM GUID:  " $VM.Name
        write-host "VM State: " $VM.EnabledState
    }
}
Ben says he finds it easier to read that way, but I thought I'd show how we could do it in PowerShell style. An if inside a foreach loop? Filter before looping:
$VMs = get-wmiobject -class "MSVM_ComputerSystem" -namespace "root\virtualization" |
         where {$_.caption -match "Virtual"}
Or better, use the WMI Query Language (WQL) to return only the objects you want:
$VMs = get-wmiobject -namespace "root\virtualization" -query `
         "SELECT * FROM Msvm_ComputerSystem WHERE Caption Like '%virtual%' "
The bit of Ben's code which says he's not been immersed in PowerShell is the for loop with write statements inside. It shouts "Format-Table":
$VMs | Format-Table -autosize ElementName, Name, EnabledState
If you prefer a list format you can always use Format-List, and if you want more helpful heading names you can use @{Label="VM GUID"; expression={$_.Name}} as a field:
$VMs | Format-Table -autosize @{Label="VM Name"; expression={$_.ElementName}} ,
                              @{Label="VM GUID"; expression={$_.Name}} , EnabledState
In fact you could go a step further and expand the value in EnabledState to text:
@{Label="State"; expression={switch ($_.EnabledState) { 2 {"Running"} 3 {"Stopped"}
    32768 {"Paused"} 32769 {"Suspended"} 32770 {"Starting"}
    32771 {"Snapshotting"} 32773 {"Saving"} 32774 {"Stopping"} } }}
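If the switch feels unwieldy, the same decoding can live in a hashtable, which keeps the calculated property down to one short expression (the codes are the ones in the switch above):

```powershell
# EnabledState codes mapped to friendly names
$stateNames = @{2 = "Running"; 3 = "Stopped"; 32768 = "Paused"; 32769 = "Suspended";
                32770 = "Starting"; 32771 = "Snapshotting"; 32773 = "Saving"; 32774 = "Stopping"}
# the field definition then becomes:
#   @{Label="State"; expression={$stateNames[[int]$_.EnabledState]}}
$stateNames[2]    # Running
```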
Of course I'll often take the approach "Variables? We don't need no stinking variables" and pipe Get-WMIObject straight into Format-Table. But a variable is useful sometimes – like in the choose functions I tend to write for many kinds of object…
Function Choose-VM
{$global:counter = 0
 $VMs = get-wmiobject -namespace "root\virtualization" -query `
          "SELECT * FROM Msvm_ComputerSystem WHERE Caption Like '%virtual%' "
 Format-Table -inputobject $VMs @{Label="ID";      expression={($global:counter++)}} ,
                                @{Label="VM Name"; expression={$_.ElementName}} ,
                                @{Label="VM GUID"; expression={$_.Name}} ,
                                @{Label="State";   expression={switch ($_.EnabledState) { 2 {"Running"} 3 {"Stopped"}
                                                     32768 {"Paused"} 32769 {"Suspended"} 32770 {"Starting"}
                                                     32771 {"Snapshotting"} 32773 {"Saving"} 32774 {"Stopping"} } }} | out-host
 $VMs[ [int[]](Read-Host "Which ones? ").Split(",") ]}

Ben did a second post showing how to start a virtual machine – to do this I can just use:

(choose-vm ) | foreach-object {$_.requestStateChange(2)} 

You could use anything which returns Msvm_ComputerSystem WMI objects, and you can stop or pause a machine just by requesting a different state to change to. Shutting down a machine by setting its state to 3 (stopped) is the virtual equivalent of hitting the power switch. The Hyper-V integration components include one to trigger a clean shutdown, and Ben shows how that can be used; here's my version:

(choose-vm) | foreach-object { (get-wmiobject -namespace "root\virtualization" -query `
               "SELECT * FROM Msvm_ShutdownComponent WHERE SystemName='$($_.name)' ").InitiateShutdown($true,"Because I said so") }

 


This post originally appeared on my technet blog.

January 24, 2008

Hyper-V API information on MSDN

Filed under: How to,Powershell,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 10:37 am

I've been waiting for this for a little while; in fact I had hoped to see a draft before it went public (although I wonder if that would actually fall foul of the rules on not having "secret" APIs). I could see the WMI providers from PowerShell, but working out what to do with them wasn't trivial.

The information is now published on MSDN at http://msdn2.microsoft.com/en-us/library/cc136992(VS.85).aspx

Calling WMI APIs from PowerShell is something I know a bit about, having done a lot of that for the OCS Resource Kit, so it looks like I'm going to be having some fun with this… yes, I do regard it as fun.

I've mentioned Jeff Woolsey before; he's a contributor on the virtualization team blog and keeps us informed internally, with clear headings about what's confidential, what to share, and what can be public with a "please don't paste to your blog". This one came tagged: This is important customer information. Please provide this information to customers.

Here's a bit more information from Jeff. Please note the final paragraph – the APIs are settled enough to share, but they are not guaranteed to be final.

The virtualization team is pleased to announce the public beta release of the Hyper-V WMI interfaces.

Hyper-V WMI APIs. Hyper-V uses WMI APIs (similar to the Virtual Server COM API) to create, manage, monitor and configure virtual resources. We expect the Hyper-V WMI APIs to be used widely in a variety of ways, such as:

· By third-party management vendors who want to write tools to manage WSV (for example, HP OpenView and IBM Director)

· By enterprises who want to integrate with an existing management solution

· By developers who want to automate virtualization in test/dev environments through scripts

The Hyper-V WMI APIs are publicly available here:

http://msdn2.microsoft.com/en-us/library/cc136992(VS.85).aspx

Important: This documentation is preliminary and is subject to change. This same warning is provided online (see the screenshot below). While we’re trying to avoid any changes, modifications are still possible up to the final release. We encourage user feedback by clicking on the link below to “Send comments about this topic to Microsoft.”

 

This post originally appeared on my technet blog.
