James O'Neill's Blog

March 21, 2011

What’s in my PowerShell profile (2) edit

Filed under: How to,Powershell — jamesone111 @ 11:59 am

Carrying on with the theme of Useful Stuff For A PowerShell Profile, which I started with WhatHas, I want to show the edit command I added.
To begin with – in the beta releases – I was a little annoyed and puzzled that there was no easy way to tell the PowerShell ISE to edit a file, but when I found there is a route to do so through the object model I added an edit function to my profile. The release version provides a very similar function in PSEdit, but I’ve kept mine, which looks like this:

function edit {
   param ([parameter(ValueFromPipelineByPropertyName=$true)]
          [Alias("FullName","FileName")]$Path
   )
   process {
     if (Test-Path $Path) {
        Resolve-Path $Path -ErrorAction SilentlyContinue | ForEach-Object {
            if ($host.Name -match "\sISE\s") {
                $psise.CurrentPowerShellTab.Files.Add($_.Path) | Out-Null
            }
            else { notepad.exe $_.Path }
        }
     }
     else { Write-Warning "$Path -- not found" }
   }
}

It only takes one parameter, which can come via the pipeline – enabling piping is a habit I’ve developed. I use a couple of PowerShell’s clever tricks with parameters – first, instead of using the whole piped object it can use a single named property of the object, and second, aliases let me invoke edit with –path or –fullname; that’s not a lot of use here – I can miss the name out completely and PowerShell knows which parameter I mean because there is only one. But the result of putting the two together is clever-squared: it says to PowerShell “if you find the object has a Path property, use it, if it doesn’t but has a FullName property use that, if it doesn’t have one of those but has FileName, use that” and so on.
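For instance, FileInfo objects from dir expose a FullName property, and Select-String’s MatchInfo objects expose a Path property, so both bind straight to -Path:

```powershell
dir *.ps1 | edit                                          # binds FileInfo.FullName
Select-String -Pattern "reflection" -Path *.ps1 | edit    # binds MatchInfo.Path
```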

A real-world use for this: I had some scripts which fetched a folder name from WMI and used PowerShell’s built-in Join-Path cmdlet to add a file name to it, creating a parameter for another call to WMI. This worked against remote servers, but I found that if the folder starts with a drive letter which exists on the remote server but not on the local machine, Join-Path produces an error, so I had to do the operation without Join-Path. I can use WhatHas Join-Path to get MatchInfo objects for each occurrence of Join-Path. These objects have a Path property, so loading all the affected scripts into the ISE editor is as simple as:
whatHas join-path | edit

Piping is the main place where my version scores over PSEdit (the other being that it works outside the ISE). The body of the function just has to open the requested file(s). Test-Path and Resolve-Path accept arrays, so I don’t need to do any work to allow the function to be called as edit .\file1.ps1, .\file2.ps1.
If Test-Path says no valid name was passed, a warning is printed; if a name is valid, Resolve-Path is used to turn a partial or wildcarded name into one or more paths. If multiple names are passed and any IS valid, execution will reach Resolve-Path, which will produce an error for any that IS NOT valid – hence the use of –ErrorAction.
Armed with one or more valid, fully qualified paths, it’s a case of using the ISE’s object model to open the file: of course that only works in the ISE, so the command falls back to Notepad if the host name isn’t “Windows PowerShell ISE Host”.
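The host check works because the host name differs between the two environments; in the regular console it reports ConsoleHost:

```powershell
PS> $host.Name                # in the console
ConsoleHost
PS> $host.Name                # in the ISE
Windows PowerShell ISE Host
```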

What’s in my PowerShell profile (1) WhatHas

Filed under: How to,Powershell — jamesone111 @ 10:01 am

This shows a simple use of Select-String – which is such a great tool it deserves a long piece of its own – but I’ve been saying for a while that I would write up the functions I have in my profile, so for now I’m just going to show how I use Select-String there.

I keep everything in the Profile.PS1 under Documents\WindowsPowerShell. PowerShell reads additional user and system-wide profiles depending on whether you start the console version or the ISE version, but I ignore these.

In modules, or anything that might get used in another script, I follow the proper verb-noun naming conventions; but for utility functions in the profile I just use short names. WhatHas is one of three short-name profile functions I have; its job is to find which of my scripts contain something (or some things). I use it when I need either to fix something found in multiple files, or to remind myself how I used something in the past in order to use it in something new: I can type
WhatHas reflection
and get a listing of all the places I’ve used [System.Reflection.Assembly]::LoadWithPartialName,
showing the file, line number and the line itself. Here’s the code:

Function whatHas {
    param ( [Parameter(ValueFromPipeLine=$true,Mandatory=$true)]$pattern,
            $fileSpec = "*.ps1",
            [Switch]$recurse
    )
    Process { $( if ($recurse) { dir -Recurse -Include $fileSpec }
                 else          { dir $fileSpec } ) |
              Select-String -Pattern $pattern
    }
}

Originally I only had two parameters: a –recurse switch chooses which form of DIR gets used (I really should use Get-ChildItem rather than DIR or LS, but somehow I’m happier with DIR). The files it returns, and whatever was passed in –pattern, are fed into Select-String – which accepts arrays for the pattern, so I don’t have to do anything to support searching for multiple patterns with
WhatHas reflection,assembly
The fileSpec was originally hard coded to "*.ps1" but I realized I might want to search .TXT or .XML files so I made that the default for a parameter instead.
I’m getting in the habit of allowing the “main” parameter of a function to be piped in – making the function body a Process {} block ensures each piped item is processed.
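Putting the pieces together, these calls all work (the -fileSpec and -recurse values here are just illustrations):

```powershell
WhatHas reflection                               # search *.ps1 in the current folder
WhatHas reflection,assembly                      # Select-String accepts an array of patterns
WhatHas stop-vm -fileSpec *.txt,*.xml -recurse   # other file types, subfolders too
"reflection" | WhatHas                           # piped input is handled by the Process block
```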

The output looks like this:

PS C:\Users\James\Documents\windowsPowershell> whathas test-path

hyperv.ps1:236:   If (test-path $VHDPath) {
hypervR2.ps1:236:   If (test-path $VHDPath) {
profile.ps1:6:        if (test-path $path) {

So for the “how to use” case I can often copy what I want from the output and paste it into what I’m working on. For the “need to fix” case I might want to pipe the output into something else, and I’ll look at what that might be in the next post.

July 5, 2010

Exploring the IMAGE PowerShell Module

Filed under: How to,Photography,Powershell — jamesone111 @ 12:57 pm

In part one of this series I showed the finished version of the photo-tagging script I’ve been using. I based my work (which is available for download) on James Brundage’s PSImageTools module for PowerShell, which is part of the PowerPack included with the Windows 7 Resource Kit (and downloadable independently). In this post I want to show the building blocks the original library provided and the ones I added.
Producing a modified image using this module usually means working to the following pattern:

  • Read an image
  • Create a set of filters
  • Apply the filters to the image
  • Save the modified image

If you are wondering what a filter is, that will become clear in a moment. James B’s original module had these commands.

  • Get-Image – Loads an image from a file
  • Add-CropFilter – Creates a filter to crop the image to a given size
  • Add-OverlayFilter – Creates a filter to add an overlay such as a watermark or copyright notice
  • Add-RotateFlipFilter – Creates a filter to rotate the image in multiples of 90 degrees or to mirror it vertically or horizontally
  • Add-ScaleFilter – Creates a filter to resize the image
  • Set-ImageFilter – Applies a set of filters to one or more images
  • Get-ImageProperty – Gets items of EXIF data from an image
  • ConvertTo-Bitmap – Loads a file, applies a conversion filter to it, and saves it as a BMP
  • ConvertTo-Jpeg – Loads a file, applies a conversion filter to it, and saves it as a JPG
  • Copy-ImageIntoOrganizedFolder – Organizes pictures into folders based on EXIF data

You can see there are four kinds of filter with their own commands in the list, each making some modification to the image: cropping, scaling, rotating, or adding an overlay. Inside the two ConvertTo commands a fifth kind of filter, conversion, is used, and I added a function to create those filters directly. I made some changes to the existing functions to give better flexibility in how they can be called, and added some further functions, mostly to work with the EXIF data embedded in the image file. The full list of functions I added is as follows:

  • Save-Image – Not strictly required, but it is a logical command to have at the end of a pipeline, instead of calling a method of the image object
  • New-ImageFilter – Not strictly required either, but it makes the syntax of adding filters more logical
  • New-Overlay – Takes text and font information and creates a bitmap with the text in that font
  • Add-ConversionFilter – Creates a conversion filter for JPG, GIF, TIF, BMP or PNG format (as used in ConvertTo-Jpeg / ConvertTo-Bitmap) without applying it to an image or saving it
  • Add-ExifFilter – Adds a filter to set EXIF data
  • Copy-Image – Copies one or more images, renaming, rotating and setting title and keyword tags in the process
  • Get-EXIF – Returns an object representing the EXIF data of the image
  • Get-EXIFItem – Returns a single item of EXIF data using its EXIF ID (the common IDs are defined as constants)
  • Get-PentaxMakerNoteProperty – Decodes information from the Maker-Note EXIF field; I have only implemented this for Pentax data
  • Get-PentaxExif – Similar to Get-EXIF but with Maker-Note fields for Pentax

The image below was resized and labelled using these commands. The first step is to create an image to act as an overlay: I’m going to add a copyright notice in red text, in 32-point Arial.

PS> $Overlay = New-overlay -text "© James O'Neill 2008" -size 32 -TypeFace "Arial"  `
                           -color "red" -filename "$Pwd\overLay.jpg" 

I’m using a picture I took in 2008, and I could have used a more complex command to build the text from the date-taken field in the EXIF data. Next I’m going to create a chain of filters to:

  • Resize my image to be 800 pixels high (the aspect ratio is preserved by default),
  • Add my overlay
  • Set the EXIF fields for the keyword-tags, title and Copyright information
  • Save the image as a JPEG with a 70/100 quality rating

Despite the multi-line formatting here, this is a single PowerShell command: $filter = New-ImageFilter | add | add | add...

PS> $filter = new-Imagefilter |
     Add-ScaleFilter      -passThru -height 800 -width 65535 |
     Add-OverlayFilter    -passThru -top    750 -left  0     -image    $Overlay |
     Add-ExifFilter       -passThru -ExifID $ExifIDKeywords  -typeName "vectorofbyte" -string "Ocean" |
     Add-ExifFilter       -passThru -ExifID $ExifIDTitle     -typeName "vectorofbyte" -string "StingRay" |
     Add-ExifFilter       -passThru -ExifID $ExifIDCopyright -typeName "String"       -value "© James O'Neill 2008" |
     Add-ConversionFilter -passThru -typeName jpg -quality 70

Given a set of filters, a script can get an image, apply the filters to it and save it. Originally these three steps needed three commands piped together, like this:
PS> Get-Image   C:\Users\Jamesone\Pictures\IMG_3333.JPG  |
      Set-ImageFilter -filter $filter |
         Save-image -fileName {$_.FullName -replace ".jpg$","-small.jpg"}

I streamlined this, first by changing James B’s Set-ImageFilter so that if it is given something other than an image object it hands it to Get-Image. In other words, Get-Image X | Set-ImageFilter is reduced to Set-ImageFilter X (and I made sure X could be a path – including one with wildcards – or one or more file objects). Then I added a –SavePath parameter, so that Set-ImageFilter –SavePath P is the same as Set-ImageFilter | Save-Image P. P can be a path, or a script block which evaluates to a path, or empty to overwrite the image. “Get an image, apply the filters to it and save it” becomes a single command.
PS> Set-ImageFilter -Image ".\IMG_3333.JPG" -filter $filter `
                    -SavePath {$_.FullName -replace ".jpg$","-small.jpg"}

The workflow for my photos typically begins with copying files from a memory card, replacing the start of the filename – like the “IMG_” in the example above – with text like “DIVE” (I try to keep the sequential numbers the camera stamps on the pictures as the basis for a unique ID). Next, I rotate any which were shot in portrait format so they display correctly, and finally I add descriptive information to the EXIF data: keyword tags like “Ocean” and titles like “Stingray”. So it made sense to create a Copy-Image function which would handle all of that in one command. The only part of this which hasn’t already appeared is rotation. The Orientation EXIF field contains 8 to show the image has been rotated 90 degrees, 6 to indicate 270 degrees of rotation, and 1 to show the image is correctly rotated, so it is a question of reading the data and, depending on what we find, adding filters to rotate the image and reset the orientation data.

$orient = Get-ExifItem -image $image -ExifID $ExifIDOrientation
if ($orient -eq 8) {Add-RotateFlipFilter -filter $filter -angle 270
                    Add-ExifFilter       -filter $filter -ExifID $ExifIDOrientation `
                                         -value  1       -typeid $ExifUnsignedInteger }
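The mirror case would presumably look like this (a sketch based on the snippet above – the post doesn’t show it, and the angle value is my assumption):

```powershell
# Hypothetical mirror of the code above for Orientation = 6
# (angle 90 is an assumption; the original post only shows the 8 case)
if ($orient -eq 6) {Add-RotateFlipFilter -filter $filter -angle 90
                    Add-ExifFilter       -filter $filter -ExifID $ExifIDOrientation `
                                         -value  1       -typeid $ExifUnsignedInteger }
```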

There is similar code to deal with rotation in the opposite direction, and rotation is just another filter, like adding the EXIF data for keywords or title. All Copy-Image does is build a chain of filters to add title and keyword tags and rotate the image, determine the full path the new copy should be saved to, and invoke Set-ImageFilter. To make it more flexible, I gave Copy-Image the ability to add to an existing filter chain: in part one you could see Copy-GPSImage, which finds the GPS data to apply to a picture and produces a series of filters from it; these filters are passed on to Copy-Image, which does the rest.

The last aspect of Copy-Image to look at is renaming: -replace has become one of my favourite PowerShell operators. It takes a regular expression and a block of text, and replaces all instances of the expression found in a string with the text. Regular expressions can be complex, but “IMG” is perfectly valid, so if I have a lot of pictures to name as “OX-” (for “Oxford”) I can call the function with a replace parameter of "IMG","OX-". Inside Copy-Image, the parameter $replace is used with the -replace operator (using PowerShell’s ability to treat "IMG","OX-" as one parameter in two parts). $savePath is worked out as follows:

if ($replace)   {$SavePath= join-path -Path $Destination `
                     -ChildPath ((Split-Path $image.FullName -Leaf) -Replace $replace)}
else            {$SavePath= join-path -Path $Destination `
                     -ChildPath  (Split-Path $image.FullName -Leaf)  }
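Taken on its own, the rename amounts to this (a quick illustration; note that -replace matches case-insensitively):

```powershell
PS> "IMG4422.JPG" -replace "IMG","OX-"
OX-4422.JPG
```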

As mentioned above, I went to some trouble to make sure the functions can accept image objects, names of image files, or file objects – because at different times different ones will suit me. So all of the following are valid ways to copy multiple files from my memory card to the current directory ($pwd), renaming, rotating and applying the keyword tag “oxfordshire”:

PS[1]> Copy-Image E:\DCIM\100PENTX\img4422*.jpg -Destination $pwd `
           -Rotate -keywords "oxfordshire" -replace "IMG","OX-"
PS[2]> dir  E:\DCIM\100PENTX\img4422*.jpg | Copy-Image -Destination $pwd `
            -Rotate -keywords "oxfordshire" -replace "IMG","OX-"
PS[3]> get-image  E:\DCIM\100PENTX\img4422*.jpg | Copy-Image -Destination $pwd `
            -Rotate -keywords "oxfordshire"   -replace "IMG","OX-"
PS[4]> $i = get-image  E:\DCIM\100PENTX\img4422*.jpg; Copy-Image $i -Destination $pwd `
            -Rotate -keywords "oxfordshire" -replace "IMG","OX-"
PS[5]> dir  E:\DCIM\100PENTX\img4422*.jpg | get-image |  Copy-Image -Destination $pwd `
            -Rotate -keywords "oxfordshire"  -replace "IMG","OX-"

Of course if I have the GPS data from taking the logger with me on a walk I can use Copy-GPSImage to geotag the files as they are copied, and in the next part I’ll look at how the GPS data is processed.

This post originally appeared on my technet blog.

February 8, 2010

Installing Windows from a phone

Arthur : “You mean you can see into my mind ?”
Marvin: “Yes.”
Arthur: “And … ?”
Marvin: “It amazes me how you manage to live in anything that small”

Looking back down the recent posts you might notice that this is the 8th in a row about my new phone (so it’s obviously made something of an impression), this one brings the series to a close.

I’ve said already that I bought a 16GB memory card for the new phone, which is a lot – I had 1GB before – so… what will I do with all that space? I’m not going to use it for video, and 16GB is room for something like 250 hours of MP3s or 500 hours of WMAs: I own roughly 200 albums, so it’s a fair bet they’d fit. Photos – well, maybe I’d keep a few hundred MB on the phone. In any event, I don’t want to fill the card completely. After a trip out with no card in my camera, I keep an SD-USB card adapter on my key-ring so I always have both a USB stick and a memory card: currently this uses my old micro-SD card in a full-size SD adapter. If I need more than 1GB I can whip the card out of the phone, pop it in the adapter and keep shooting.

However, the phone has a mass storage device mode, so I thought to myself: why not copy the Windows installation files to it, and see if I can boot a machine off it and install Windows from the phone? That way one could avoid carrying a lot of setup disks.
Here’s how I got on.

This post originally appeared on my technet blog.

November 6, 2009

The point of Windows 7 libraries and search

Filed under: How to,Windows 7,Windows Vista — jamesone111 @ 10:06 am

In my previous post I mentioned a correspondent – his name is Andy – who’d written asking the question “What the hell is the point of libraries and if you have the name of the person whose idea they were please post it for summary flaming” He made another comment which I think goes to the heart of it.

…  as with the advice to people to avoid Vista unless buying with a new machine and then only a powerful one which one then customises to remove things like pointless indexing, I am now launching the ‘destroy or develop libraries’ campaign!

I’d like to drill into that. I just checked my home machine’s asset tag with Dell and it will be 6 years old next week. I wanted to replace it, but I spent £30 on upgrading the memory to 2GB and, although the graphics card can’t do glass effects, it runs Vista well enough on its 2.2GHz Celeron (single core) processor that the replacement has been postponed indefinitely. It works as a media center and streams stuff to the TV via the Xbox. Memory is critical though: I’ve been saying since Windows 3.0 “don’t worry about CPU, throw memory at systems.” A 256MB XP system isn’t going to make a happy upgrade; on that we can certainly agree.

But “pointless indexing”? Indexing is a low-priority task and only consumes resources when files change, so removing it saves very little and costs a lot. The big thing – the HUGE thing – for me as a user in Vista is search, and clearly no index means no search. Anyone who has got into the Vista or Windows 7 way of working will understand that, just as internet search engines mean we don’t try to remember many complex URLs any more, on Vista and 7 we don’t remember complex paths to find files.
When I first worked on SharePoint (it was still called Tahoe at the time!) it became clear to me that file hierarchies work poorly. Do you organize files by date, by subject, by type? If you write thousands of letters, how do you name them so you can find all the letters for a given customer? Or all the letters for customers interested in the WidgetMaster 2000? Bluntly, if you can’t find the stuff, is there any benefit in keeping it? And it’s not just in office-automation settings that this matters. I had over 30,000 photos on my PC at the last count. How do I quickly get to the Vulcan collage I used in this post – did I put it in a folder of “pictures for blogging”, or did I make a folder for the Vulcan shots and put the collage with the source pictures, or did I save the collage with other collages? To find it I just press the Windows key and start typing “vulcan” in the box on the start menu. Starting programs which are not on the quick launch bar? Life is too short to remember folder hierarchies on the start menu: I hit the same key and start typing the program name. Want to remove a program? Why bother to remember where that is in Control Panel? I hit the same key and start typing “remove”, and the correct link to Control Panel appears. And Windows search is the search for Outlook. With my recent car problems I found I hadn’t got the number for the fleet management people in my contacts, so I typed “Fleet” in the search box and a second later there was a mail with the number I needed. I am totally dependent on search now.

Indexing has a beneficial side-effect: you can create virtual folders based on metadata. I know a couple of people who flinch when I use the term metadata, but it is simply data about the data: its author, creation date, subject, tags and so on. Office documents have “document properties”; MP3 and Windows Media files embed information about the song title, artist, composer, length and so on; JPEG and TIFF images contain embedded EXIF data which holds camera information as well as artist, tags, title etc. On the left you can see this being put to use in Windows 7. I’ve ringed the “arrange by” option, and here “tag” has been selected. In some places tags are known as “keywords”, but as you can see in the screenshot (click for a full-size version), a tag can contain multiple words. “Arrange by tag” tells Windows “select all the files in this folder and its subfolders, grouped by their tags” (a file can appear in multiple places if it carries more than one tag). Since each group is treated as a “search folder” I can arrange search results by metadata, so I can have “Infra-Red-tagged pictures also tagged Oxfordshire”, or “pictures of aircraft taken in July 2009”, and so on. I can drag the search folder to Favorites, or the desktop, or my task/quick-launch bar to call it up again.


But wait – there’s more! In the second picture you can see I’ve typed something in the search box (ringed). Normally this would be a free-text search over all the metadata fields, but I’ve typed FocalLength: so this will search a specific metadata field. I haven’t specified an exact match but typed >280, so it only returns pictures where I was using my longest lens zoomed to maximum length. Also notice on the menu bar that the search can be saved: that keeps a search folder to apply the same criteria to my files in the future.

If you’ve opened up the picture on the left you’ll have seen it contains some shots of the wild rabbits which come into my garden – and I seem to have gone down a bit of a rabbit hole here, because the question was about libraries and I’m talking about search folders. You can’t save files to a search folder – it isn’t a “place” – and a search narrows the selection to just some of the items in a branch of the file system…


Libraries use the index beneath the surface, but work in the opposite way to search folders: they bring together multiple file-system branches. That’s it. I think Andy thought they were more sophisticated, but there’s only filtering if you do a search. The four default libraries link together four of the “My” folders with their “Public” counterparts, so now it doesn’t matter if something is in “My Music” or “Public Music”: I can find it in the Music library. And this isn’t limited to folders on my computer – you can see on the left that my computer belongs to a Windows 7 HomeGroup and I’ve added the music folder from another member to my Music library. This wasn’t the best-staged demo, because the netbook I’m connecting to only has the Windows sample music on it – which I’ve removed from my laptop; that’s the one non-blurred item.

Adding a folder to a library is a simple matter of going to the library’s properties and clicking “Include”. Any of the folders which comprise the library can be set as the default location for saving. In effect, the Documents library is “My Documents” with the extra ability to find public documents. You can change the name of a library, so you could call it “All Documents” or even “My Documents”. If you never use the public folders, there would be no harm in deleting the default libraries. Conversely, if you’ve built up a complicated hierarchy of folders – so you might have “Letters 2008”, “Letters 2007”, “Letters 2006”, “Invoices 2008”, “Invoices 2007” and so on – you could create new libraries for letters and invoices.

Now, Andy’s complaint was essentially that he knew users for whom any change is bad, and in my previous post I owned up to the fact that my first reaction to any change is “what did they do that for?”. He says:

unless totally new to computers, the addition and forcing users to default to libraries adds another level of confusion to non-tech savvy people. My mother… has been using PC’s since the 8086 days and had got to grips with DOS/File Manager/Explorer/My Computer/Computer for years before you introduced ‘My’ documents, pictures etc. I then had to spend time explaining the concept of a virtual pointer to a set of folders held elsewhere. We got there in the end although the desire to navigate to them via the C: drive remained for a while.

When we do make these changes we spend thousands of hours in usability labs to make sure different categories of users can pick them up easily, and if we made everything exactly the same as it always has been it would be a brake on progress. Although Andy emphasised “forcing users to default to”, when I did a quick check everything I tried remembered the last folder things were opened from or saved to. I also have a vague memory that the pre-release versions of Windows 7 opened Explorer at the Libraries folder, but the release version on my machine opens at Computer – and of course you can have shortcuts to open any folder you want. If the “My” in front of Documents isn’t wanted on someone’s machine, you can rename the “My” away. All the “My” folders are actually pointers, so if you have always used C:\Documents you can navigate via the hierarchical path in the file system as before; and if you re-point the default location, any program which calls the Windows API to ask “where is the default location for documents?” will go there, and not keep trying to take you back to somewhere under \Users.

Now, following that, some bright spark decided to demonstrate to logical human beings that had spent years learning how a hard disk could be navigated that logic and common sense is not required for using computers and in fact is detrimental to their use. I write of making the hard disk subordinate to the desktop in explorer when Vista was launched.

Andy’s point actually applied prior to Vista. On the desktop you have a Computer icon; if you open Computer it contains drives; if you open a drive and navigate to your user folder, it contains the desktop. Where once we had a tree structure with a single root which contained all the drives, now we have a loop. I understand what he means, though: having spent years learning ways to impose a logic to cope with strict hierarchies, we’ve now said “you don’t need to force yourself into thinking that way any more”. No one is forced to change how they organize their files: that’s important. Personally I navigate to my documents via Libraries, via the “My Documents” link in my home folder (which I have as a favorite), via the C: drive, and from cmd and PowerShell prompts.

This post originally appeared on my technet blog.

September 14, 2009

On PowerShell function design: vague can be good.

Filed under: How to,Powershell,Virtualization — jamesone111 @ 5:22 pm

There is a problem which comes up in several places in PowerShell – that is helping the user by being vague about parameter types. Consider these examples from my Hyper-V library for PowerShell

1. The user can specify a machine using a string which contains its name
Save-VM London-DC or Save-VM *DC, or  Save-VM London*,Paris*

2. The user can get virtual machine objects with one command and pipe these into another command
Get-VM -running | Stop-VM

3. The user can mix objects and strings
$MyVms = Get-VM -server wallace,Grommit | where { (Get-VMSettings $_).note -match "LAB1" }
Start-VM -wait "London-DC", $MyVMs

The last one searches servers “wallace” and “Grommit” for VMs, narrows the list to those used in lab1 and starts London-DC on the local server followed by the VMs in Lab1.

In a post I made a few days back, about adding Edit to your profile, I showed a couple of aspects of piping objects that became easier in V2 of PowerShell. Instead of writing Param($VM), I can now write:

Param(
       [parameter(Mandatory = $true, ValueFromPipeline = $true)] 
       $VM
     )

Mandatory=$true makes sure I have a parameter from somewhere, and ValueFromPipeline is all I need to get it from the pipeline. PowerShell offers a ValueFromPipelineByPropertyName option which looks at the piped object for a property which matches the parameter name, or a declared [Alias] for it. I could use that to reduce a VM object to its name, but doing so would lose the server information (which I need in example 3 above) and it gets in the way of piping strings into functions, so this is not the place to use it.
Allowing an array gives me problems when the array members expand to more than one VM (in the case of wildcards). The code for my Edit function won’t cope with being handed an array of file objects or an array of arrays, but it doesn’t need to, because I wouldn’t work like that. Things I’m putting out for others, though, need to work the way different users might expect: this needs to handle arrays in arrays (like "London-DC",$myVMs) and arrays of VM objects ($myVMs). So, time for my old friend recursion, and a function like this:

Function Stop-VM
{ Param(
        [parameter(Mandatory = $true, ValueFromPipeline = $true)]
        $VM,
        [String]
        $Server = "."
       )
  Process{
           if ($VM -is [String]) {$VM = Get-VM -vm $VM -server $Server}
           if ($VM -is [array])  {$VM | ForEach-Object {Stop-VM -vm $_ -server $Server}}
           if ($VM -is [System.Management.ManagementObject])  {
               $VM.RequestStateChange(3)
           }
        }
}

This says, if we got passed a single string (via the pipe or as a parameter), we get the matching VM(s), if any. If we were passed an array , or a string which resolved to an array, we call the function again with each member of that array. If we were passed a single WMI object or a string which resolved to a single WMI object then we do the work required.

There’s one thing wrong with this: it stops the VM without any warning – something I covered back here. It is easy to support ShouldProcess; there is a level at which Confirm prompts get turned on automatically (controlled by $ConfirmPreference), and we can say that the impact is high – at the default setting the confirm prompt then appears even if the user doesn’t ask for it.

Function Stop-VM
{ [CmdletBinding(SupportsShouldProcess=$True, ConfirmImpact='High')]
  Param(
          [parameter(Mandatory = $true, ValueFromPipeline = $true)]
          $VM,
          [String]
          $Server = "."
       )
  Process{
           if ($VM -is [String]) {$VM = Get-VM -vm $VM -server $Server}
           if ($VM -is [array])  {$VM | ForEach-Object {Stop-VM -vm $_ -server $Server}}
           if ($VM -is [System.Management.ManagementObject] `
                   -and $pscmdlet.ShouldProcess($VM.ElementName, "Power-Off VM without Saving")) {
               $VM.RequestStateChange(3)
           }
        }
}

Nearly there, but we have two problems still to solve, and a simplification to make. First the simplification: Mike Kolitz (who’s going through the same bafflement as I did with Twitter, but more importantly has helped out on the Hyper-V library) introduced me to this trick. When a function calls another function using the same parameters – or calls itself recursively – passing them all along can be a pain if there are many of them. But PowerShell has a “splatting” operator: @PSBoundParameters puts the contents of the $PSBoundParameters variable into the command. (James Brundage, who I’ve mentioned before, wrote it up.) And you can manipulate $PSBoundParameters, so Mike had a clever generic way of recursively calling functions.

if ($VM -is [Array]) { [Void]$PSBoundParameters.Remove("VM") ; $VM | ForEach-Object {Stop-VM -VM $_ @PSBoundParameters} }

In other words remove the parameter that is being expanded, and re-call the function with the remaining parameters, specifying only the one being expanded. As James’ post shows it makes life a lot easier when you have a bunch of switches.
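To see the pattern in isolation, here is a toy sketch – the Set-Widget function and its parameters are invented purely for illustration:

```
Function Set-Widget
{ Param(
        [parameter(Mandatory = $true, ValueFromPipeline = $true)]
        $Widget,
        [Switch]$Force
       )
  Process{
           # If handed an array, remove the expanding parameter and re-call
           # ourselves once per member, splatting everything else (-Force
           # and any other bound parameters) unchanged.
           if ($Widget -is [Array]) {
               [Void]$PSBoundParameters.Remove("Widget")
               $Widget | ForEach-Object { Set-Widget -Widget $_ @PSBoundParameters }
           }
           else { "Processing $Widget (Force=$Force)" }
        }
}
```

Calling Set-Widget -Widget "a","b" -Force processes each member with -Force still applied, without naming every parameter again in the recursive call.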
OK, now the problem(s): the message

Confirm
Are you sure you want to perform this action?
Performing operation "Power-Off VM without Saving" on Target "London-DC".
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): n

Will appear for every VM, even if we select “Yes to All” or “No to All” – each time Stop-VM is called it gets a new instance of $pscmdlet. And what if we don’t want the message at all – for example in a script which kills the VMs and rolls back to an earlier snapshot? Jason Shirk, one of the active guys on our internal PowerShell alias, pointed out that, first, you can have a -Force switch, and secondly you don’t need to use the function’s OWN instance of $pscmdlet – why not pass one instance around? So the function morphed into this:

Function Stop-VM
{ [CmdletBinding(SupportsShouldProcess=$True, ConfirmImpact='High')]
  Param(
          [parameter(Mandatory = $true, ValueFromPipeline = $true)]
          $VM,
          [String]
          $Server = ".",
          $psc,
          [Switch]
          $Force
       )
  Process{
           if ($psc -eq $null)  {$psc = $pscmdlet}
           if (-not $PSBoundParameters.psc) {$PSBoundParameters.Add("psc",$psc)}
           if ($VM -is [String]) {$VM = Get-VM -VM $VM -Server $Server}
           if ($VM -is [Array])  {[Void]$PSBoundParameters.Remove("VM")
                                  $VM | ForEach-Object {Stop-VM -VM $_ @PSBoundParameters}}
           if ($VM -is [System.Management.ManagementObject] -and
                  ($Force -or $psc.ShouldProcess($VM.ElementName, "Power-Off VM without Saving"))) {
               $VM.RequestStateChange(3)
           }
        }
}

So now $psc either gets passed in, or it picks up $pscmdlet, and then gets passed to anything else we call – in this case recursive calls to this function. And -Force is there to trump everything. That’s what I have implemented in dozens of places in my library.

This post originally appeared on my technet blog.

July 24, 2009

PowerShell on-line help: A change you should make for V2 (#3) (and How to Author MAML help files for PowerShell)

Filed under: How to,Powershell — jamesone111 @ 4:39 pm

In the last couple of “change you should make” posts I’ve talked about a couple of things which turn functions from being the poor relation of compiled cmdlets (as they were in PowerShell V1) into first-class citizens under V2. Originally the term “script cmdlets” was used but now we call them “advanced functions”. This is quite an important psychological difference, because writing a cmdlet – even a “script cmdlet” – sounds like difficult, real programming. On the other hand an advanced function sounds like a function with some extra bits bolted on, and nowhere near as daunting.


In V2 there is a new version of the Tab-Expansion function which completes names of functions and fills in parameters, and that doesn’t need any change to your code. In part 2 of this “change you should make” series I talked about the better discoverability which comes from putting your functions into modules – simply type get-command -module <module-name> and you get the list. And in part 1 I talked about the fact that we get support for the important common parameters, -WhatIf and -Confirm for example. So we can find advanced functions, and we can test to see if they are going to trash the system before we run them for real. What’s the one great thing about PowerShell that’s missing? Consistent help everywhere!


In V2 you can put in a comment which actually defines help information, like this:

Filter Get-VMThumbnail
{
<#
    .SYNOPSIS
        Creates a JPG image of a running VM
    .PARAMETER VM
        The Virtual Machine
    .PARAMETER Width
        The width of the image in pixels (default 800)
    .PARAMETER Height
        The height of the image in pixels (default 600)
    .PARAMETER Path
        The path to save the image to; if no name is specified the VM name is used.
        If no directory is included the current one is used
    .EXAMPLE
        Get-VMJPEG core
        Gets an 800x600 JPEG for the machine named core,
        and writes it as core.jpg in the current folder.
    .EXAMPLE
        While ($true) { Get-VMJPEG -vm "core" -w 640 -h 480 `
          -path ((get-date).toLongTimeString().replace(":","-") + ".JPG")
          Sleep -Seconds 10 }
        Creates a loop which continues until interrupted; in the loop it creates an image of the
        VM "core" with a name based on the current time, then waits 10 seconds and repeats.
#>


 


This is all very fine and good – and there are other fields which you can read about in the on-line help. As far as I can tell there are just two things wrong with this:



  1. It needs a fair amount of processing to extract the help into a document as the basis of a manual.

  2. It adds a lot of bulk to a script – in fact most of my functions will be shorter than their help. I don’t think this is truly harmful, but I like my scripts compact.

Fortunately both of these are solved by using external help files – which are in XML format. The help declaration is simple enough.

Function Get-VMThumbnail
{# .ExternalHelp  MAML-VM.XML

The XML schema is known as MAML and isn’t the nicest thing I’ve ever worked with….


A little while ago I was filling in my year-end review; inside Microsoft we do this using an InfoPath form. I could write all I knew about InfoPath on the back of a postage stamp. You fill in forms which come out as XML. Could InfoPath give me an easier way to get MAML files? As it turns out the answer was “yes”, and I’ve learnt enough InfoPath to make half a blog post. It turns out that InfoPath can read in an existing data file to deduce the XML structure you want to have as the result of filling in the form: once it has the structure you can drag the document elements around to make a form.


It’s not the most beautiful bit of forms design ever – and the XML does need a change to move between editing in InfoPath and use as PowerShell help. But compared with putting the XML together in Notepad … well, there is no comparison. I’ve attached the InfoPath form for those who want it. Filling it in is pretty much self-explanatory, with the exception of parameter sets. Each parameter is entered in its own right and at least one (default) parameter set is defined. The set names are actually displayed as the command when the help is shown, so the set name needs to be the command name for every set.


I could script the change required, but I don’t seem to be able to manage it in InfoPath. At the top of the file there is an opening <HelpItems> XML tag; to work as PowerShell help this needs to contain XMLNS=”http://msh” Schema=”MAML”. InfoPath considers the file to be invalid if the first of these is present, so you need to take the file out of InfoPath to edit it.


One final warning: when you’re developing the XML help, once PowerShell has loaded the XML file it doesn’t seem to want to unload it, so you have to start another instance of the shell. Using the graphical ISE version of PowerShell I found you can just press Ctrl+T to get a new instance of the shell in its own tab.


This post originally appeared on my technet blog.

July 22, 2009

PowerShell Modules: A change you should make for V2. (#2)

Filed under: How to,Powershell — jamesone111 @ 4:11 pm

A few days back I wrote about PowerShell version 2’s ability to confirm whether it should be changing something. Since I was writing something which would make some pretty drastic changes, supporting -WhatIf and -Confirm for almost no effort seemed like a huge win.

The next thing I wanted to cover was modules. I’ve written some quite large function libraries in PowerShell V1 and I met a few problems which V2 solves by use of modules.

  1. Collaboration. One script with 100 functions isn’t easy to collaborate on. A module lets you load multiple script files as one, but different people can work on each. 
  2. Loading of formatting and type extensions – these XML files can be loaded from scripts, but when they are loaded more than once things get untidy. Modules can load them along with the code.
  3. While we’re on code, modules allow the loading of a mixture of script files and DLLs in one go. Module DLLs don’t need to be registered as snap-ins do, so deployment is easier.
  4. Loading at all: an environment variable defines module paths, and you just use import-module NAME. Educating people on dot-sourcing was a pain (and part of the reason for having a single monolithic file).
  5. Discovery: finding all the functions in a script needed some creativity. Now you can just do get-command -module Name.

You can turn a script into a module simply by creating a manifest file, which is a text file with a .psd1 extension. At its simplest it looks like this:

@{ ModuleVersion     = "1.0.0"
   NestedModules     = "Helper.ps1" }

But there is no reason why there should only be one nested module. So to collaborate with different people owning different functions, you just have a long list in the manifest. Here’s the (only slightly edited) manifest for a project which I’m just about to publish.

@{ ModuleVersion     = "1.0.0"
    NestedModules     = "Firewall.ps1" , "Helper.ps1" , "Licensing.ps1",
                        "network.ps1" , "menu.ps1" , "Remote.ps1",
                        "WinConfig.ps1", "windowsUpdate.ps1", "WinFeatures.ps1"

    GUID              = "{75c6f959-23a1-4673-8ee9-e61e21ff8381}"
    Author            = "James O'Neill"
    CompanyName       = "Microsoft Corporation"
    Copyright         = "© Microsoft Corporation 2009. All rights reserved."
    PowerShellVersion = "2.0"
    CLRVersion        = "2.0"
    FormatsToProcess  = "Config.format.ps1xml"

    }

If my manifest file is named Configurator.psd1, all I need to do is create a Configurator sub-folder in one of the folders pointed to by the environment variable PSModulePath, and then I can load it with import-module configurator. Of course different people can be working on Firewall.ps1 and Licensing.ps1 – collaboration problem solved. I can get rid of the functions if I want with remove-module configurator. And if I reload them, the fact that I am loading a .format.ps1xml file for the second time isn’t a problem, as it would be when loading it from a script. No need to dot-source, and the functions are discoverable with get-command -module configurator.

As you can see there are quite a few things you can add to the manifest file over and above the basics – things like FormatsToProcess and TypesToProcess – so to make it easier to build the file there is a New-ModuleManifest cmdlet. There is plenty more to read about modules, but for starters look at this post of Oisin’s on Module Manifests.
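Rather than typing the hash table by hand, New-ModuleManifest can write the file for you (in V2 it prompts for any of the standard fields you don’t supply on the command line). A sketch – the path and file names here are placeholders:

```
New-ModuleManifest -Path "$home\Documents\WindowsPowerShell\Modules\Configurator\Configurator.psd1" `
                   -NestedModules "Firewall.ps1","Helper.ps1" `
                   -Author "James O'Neill" `
                   -FormatsToProcess "Config.format.ps1xml"
```

The cmdlet fills in the remaining keys with defaults, giving you a valid manifest to edit later.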

This post originally appeared on my technet blog.

How to activate Windows from a script (even remotely).

I have been working on some PowerShell recently to handle the initial setup of a new machine, and I wanted to add activation. If you do this from a command line it usually means using the Software Licensing Management script (slmgr.vbs), but this is just a wrapper around a couple of WMI objects, which are documented on MSDN, so I thought I would have a try at calling them from PowerShell. Before you make use of the code below, please understand it has had only token testing and comes with absolutely no warranty whatsoever; you may find it a useful worked example, but you assume all responsibility for any damage that results to your system. If you’re happy with that, read on.


So first, here is a function – which could be written as one line – to get the status of Windows licensing. It relies on the SoftwareLicensingProduct WMI object: the Windows OS will have something set in the PartialProductKey field, and the ApplicationID is a known GUID. Having fetched the right object(s), it outputs the name and the status for each, translating the status ID to text using a hash table.

$licenseStatus = @{0="Unlicensed"; 1="Licensed"; 2="OOBGrace"; 3="OOTGrace";
                   4="NonGenuineGrace"; 5="Notification"; 6="ExtendedGrace"}
Function Get-Registration
{ Param ($server=".")
  get-wmiObject -query "SELECT * FROM SoftwareLicensingProduct
                         WHERE PartialProductKey <> null
                           AND ApplicationId='55c92734-d682-4d71-983e-d6ec3f16059f'
                           AND LicenseIsAddon=False" -Computername $server |
  foreach {"Product: {0} -- Licence status: {1}" -f $_.name, $licenseStatus[[int]$_.LicenseStatus]}
}

 


On my Windows 7 machine this comes back with Product: Windows(R) 7, Ultimate edition — Licence status: Licensed


On one of my server machines the OS was in the “Notification” state, meaning it keeps popping up the notice that I might be the victim of counterfeiting (all Microsoft shareholders are … but that’s not what it means; we found a large proportion of counterfeit Windows had been sold to people as genuine). So the next step was to write something to register the computer. To add a licence key takes 3 lines: get a WMI object, call its InstallProductKey method, and then call its RefreshLicenseStatus method. (Note for speakers of British English: it is License with an S, even though we keep that for the verb and Licence with a C for the noun.) To activate, we get a different object (technically there might be multiple objects) and call its Activate method. Refreshing the licensing status system-wide and then checking the LicenseStatus property of the object indicates what has happened. Easy stuff, so here’s the function.

Function Register-Computer
{  [CmdletBinding(SupportsShouldProcess=$True)]
   param ([parameter()][ValidateScript({$_ -match "^\S{5}-\S{5}-\S{5}-\S{5}-\S{5}$"})][String]$ProductKey,
          [String]$Server=".")

   $objService = get-wmiObject -query "select * from SoftwareLicensingService" -computername $Server
   if ($ProductKey) {
       If ($psCmdlet.shouldProcess($Server, "Install product key")) {
           $objService.InstallProductKey($ProductKey) | out-null
           $objService.RefreshLicenseStatus() | out-null
       }
   }
   get-wmiObject -query "SELECT * FROM SoftwareLicensingProduct WHERE PartialProductKey <> null
                          AND ApplicationId='55c92734-d682-4d71-983e-d6ec3f16059f'
                          AND LicenseIsAddon=False" -Computername $Server |
     foreach-object {
       If ($psCmdlet.shouldProcess($_.name, "Activate product")) {
           $_.Activate() | out-null
           $objService.RefreshLicenseStatus() | out-null
           $_.get()
           If ($_.LicenseStatus -eq 1) {write-verbose "Product activated successfully."}
           Else {write-error ("Activation failed, and the license state is '{0}'" -f
                              $licenseStatus[[int]$_.LicenseStatus])}
           If (-not $_.LicenseIsAddon) {return}
       }
       else {write-host ("The license state is '{0}'" -f $licenseStatus[[int]$_.LicenseStatus])}
     }
}


Things to note



  • I’ve taken advantage of PowerShell V2’s ability to include validation code as part of the declaration of a parameter.
  • As I mentioned before, it’s really good to use the ShouldProcess feature of V2, so I’ve done that too.
  • Finally, since this is WMI it can be remoted to any computer, so the function takes a -Server parameter to allow machines to be activated remotely.
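A hypothetical session using the two functions above – the server name and product key here are placeholders:

```
# Check the current licensing state of a remote machine
Get-Registration -Server "Server1"

# Install a key and activate, with the ShouldProcess confirmation prompt
Register-Computer -ProductKey "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX" -Server "Server1" -Confirm -Verbose
```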

A few minutes later Windows detected the change, and here is the result.


image


 


This post originally appeared on my technet blog.

June 24, 2009

How to get user input more nicely in PowerShell

Filed under: How to,Powershell — jamesone111 @ 3:48 pm

Long, long ago when I was using my first Microsoft product, I knew one way to get input from the user. The product was Commodore BASIC (in those days we wrote it in uppercase and knew it stood for Beginner’s All-purpose Symbolic Instruction Code) and the method was INPUT. This was back in early 1979: the user typed something, pressed Enter, and you could then process it. Later I learnt about GET, so you could take input a single keystroke at a time (and process it to see if the user had given you the right input). After all the myriad ways of getting input in the GUI, PowerShell seems a bit retro (on the surface at least). But recently I was poking around looking for something else and found that under the surface PowerShell has some useful tricks, which I’ve started to use and will share here.

PowerShell has a variable $host which contains information about the program where it is running (the console host or the ISE environment). $host.ui contains a “User Interface” object which has some interesting methods

  • Prompt / PromptForChoice / PromptForCredential
  • ReadLine / ReadLineAsSecureString
  • Write  /  WriteDebugLine  /  WriteErrorLine  / WriteLine  / WriteProgress  / WriteVerboseLine / WriteWarningLine

The write ones have associated write- cmdlets, and the read ones are the basis of the read-host cmdlet. What about the prompt ones? PowerShell has get-credential, but $host.ui.PromptForCredential has a couple of extra options: for example you can change the caption on the title bar and the message above the text boxes in the prompt. Like this:

$Host.ui.PromptForCredential("","Enter an account to add the machine to the domain","$env:userdomain\$env:username","")

The two empty strings are the caption (which defaults to “Windows PowerShell credential Request”) and the target name; so the result looks like this.

image

What about PromptForChoice? For some time I have been using a function first named “Choose-List” and now named “Select-List” to fall in line with the “approved verbs” from the PowerShell team. Dan, who wrote the “soliciting new verbs” post on the team blog, got in touch with me to say I ought to do this; I didn’t think what “choose” did matched the description of “select”, and put forward “choose” as a new verb. It didn’t get approved, but at some point the definition of Select will be broadened. Select-List works well when you want to choose from something which is naturally a table, but not when you want something like PowerShell’s own prompts.

image

That is where PromptForChoice comes into play. The choices which are passed to the method are in a slightly awkward format – you need to use New-Object System.Collections.ObjectModel.Collection[System.Management.Automation.Host.ChoiceDescription] and, for each item on the list, use its Add method – so the natural thing was to wrap it in a function.

Function Select-Item
{
<#  .Synopsis
        Allows the user to select simple items; returns a number to indicate the selected item.
    .Description
        Produces a list on the screen with a caption followed by a message; the options are then
        displayed one after the other, and the user can choose one.
        Note that help text is not supported in this version.
    .Example
        PS> select-item -Caption "Configuring RemoteDesktop" -Message "Do you want to: " -choice "&Disable Remote Desktop",
        "&Enable Remote Desktop","&Cancel" -default 1
        Will display the following
        Configuring RemoteDesktop
        Do you want to:
        [D] Disable Remote Desktop  [E] Enable Remote Desktop  [C] Cancel  [?] Help (default is "E"):
    .Parameter Choicelist
        An array of strings, each one a possible choice. The hot key in each choice must be prefixed with an & sign
    .Parameter Default
        The zero-based item in the array which will be the default choice if the user hits Enter.
    .Parameter Caption
        The first line of text displayed
    .Parameter Message
        The second line of text displayed
#>
Param( [String[]]$choiceList,
       [String]$Caption="Please make a selection",
       [String]$Message="Choices are presented below",
       [int]$default=0
     )
   $choicedesc = New-Object System.Collections.ObjectModel.Collection[System.Management.Automation.Host.ChoiceDescription]
   $choiceList | foreach { $choicedesc.Add((New-Object "System.Management.Automation.Host.ChoiceDescription" -ArgumentList $_)) }
   $Host.ui.PromptForChoice($caption, $message, $choicedesc, $default)
}
One of the things on my blogging backlog is some of the V2 features, like the help text which I’ve used here. As you can see there is a lot more help than anything else, and the parameters take up more space than the business end of the code. The function returns a number, starting from zero, which indicates which option was chosen. There are a couple of ways I have been using this: because two options return 0 or 1, I can easily convert the result to true or false – the first option will always be FALSE. Also notice the selection character for an option is prefixed with &.

PS> $enabled = [boolean](select-item -Caption "Configuring RemoteDesktop" -Message "Do you want to: " `
                         -choice "&Disable Remote Desktop","&Enable Remote Desktop" -default 1)

Configuring RemoteDesktop
Do you want to:
[D] Disable Remote Desktop  [E] Enable Remote Desktop  [?] Help (default is "E"): d

PS> $enabled
False

This throws up an issue though: what if the user doesn’t want to change the state? I can have Disable, Enable and Cancel, but Cancel will return 2, and any non-zero value evaluates to true – so Cancel would enable Remote Desktop in that case! The obvious thing to do is to use a switch statement which does nothing if the Cancel option is chosen.

Switch (select-item -Caption "Configuring RemoteDesktop" -Message "Do you want to: " `
                    -choice "&Disable Remote Desktop","&Enable Remote Desktop","&Cancel" -default 1)
       {
            1 {Enable-RemoteDesktop -confirm}
            0 {Disable-RemoteDesktop -confirm}
       }

Enable-RemoteDesktop and Disable-RemoteDesktop are functions I’m working on, not standard PowerShell cmdlets – more on those another time. This is part of a menu for configuring systems, and I’m making use of something I got from James Brundage (who posts more on the PowerShell blog than his own these days). I don’t want to steal his thunder, because it’s another technique which I’m using in lots of places and I think should go on the list of good practices.

This post originally appeared on my technet blog.

May 27, 2009

How to Install an Image onto a VHD file.

Filed under: Beta Products,How to,Virtualization,Windows 7,Windows Server 2008-R2 — jamesone111 @ 11:43 am

The last post I made talked about customizing Windows image (.WIM) files, and the post before that talked about creating virtual hard disk (.VHD) files. So the last step is to look at putting an image onto a VHD and making it bootable.

So the steps are

  1. Identify your WIM file and if it has multiple images in, which image you are going to install. This might be (a) from the INSTALL.WIM on the windows setup disk (b) a customized version of INSTALL.WIM (see yesterday’s post), (c) an image which you have captured using the IMAGEX tool from the Windows Automated Installation Kit
  2. Create your VHD file. (See this post)
  3. Apply the image to the VHD, and make any additional customizations: enabling or disabling components, applying patches, or adding drivers (all of which can be done with DISM – see yesterday’s post), adding files, changing registry entries.
  4. If the VHD is to be used to boot a physical machine, add an entry to the machines boot partition to point to the VHD (which I covered here) If the VHD is to be used for a virtual machine make the VHD itself bootable by creating a Boot Configuration database inside it.

Step 3, applying the image, can be done using ImageX if you have installed the Windows Automated Installation Kit, using the command:

"<path to AIK>\tools\<architecture>\Imagex.exe" /apply <path to wim> <image number> V:  

V: is the drive letter assigned to the mounted VHD; use 1 as the <image number> for the first or only image.

I mentioned a post by Mike Kolitz earlier; Mike has also got a script on the MSDN code site which goes by the name of WIM2VHD. This will create the VHD, apply the WIM file and patches you provide, and copy files into the VHD. Unless you want to customize the registry or turn components on or off this is the ideal tool*, but it depends on having the tools from the Automated Installation Kit. Mike, to prove what an all-round good chap he is, has a PowerShell script on the same site named Install-WindowsImage (this script also shows how other languages can be embedded in PowerShell scripts) which removes that dependency, so the alternative is to download this and run:

<path>\Install-WindowsImage.ps1 -WIM <path to wim> -Apply -Index <image number> -Destination V: 

As before, V: is the drive letter assigned to the mounted VHD, and you use the number of the image, starting at 1. Both ImageX and Install-WindowsImage.ps1 can provide a list of the images in a WIM file if required.

This process takes a few minutes but at the end you have an image which is ready to boot for the first time. Then you’re ready for step 4: making sure the windows boot loader will work.

If the VHD is going to appear as the system disk in a virtual machine, the VM will use the boot loader and BCD on that disk – i.e. we need a boot configuration database inside the VHD. Windows 7 and Server 2008 R2 have a tool in the \windows\system32 directory named BCDBOOT which recreates the BCD. If you run:

<Path\>bcdboot  V:\windows V:

It will create a BCD inside the VHD file and when a VM comes to boot from it all will be well.

I’ve discussed adding an entry to the BCD on a machine which is already running Windows Vista / 7 / Server 2008 / Server 2008 R2 – it needs an entry in the BCD on the physical hard disk which points to the VHD. An alternative way to create that entry uses BCDBOOT: run it with the /M switch and it merges the boot information into an existing BCD, so if you are adding a VHD to boot an alternate OS from, you can use that command. If you’re doing something odd, like running Windows PE to set up a new machine to boot from a VHD on a newly formatted drive, you can use the same bcdboot V:\windows C: to create the store. If you are adding a Windows 7 / Server 2008 R2 VHD to a machine with a Vista / Server 2008 installation on it, don’t forget to update the machine to support the new features with BOOTSECT from the Windows install disk.

 


* I’m assuming you want to know what the steps are and how you could take them manually, rather than just going straight to Wim2VHD.

This post originally appeared on my technet blog.

May 26, 2009

How to: customize Windows images with DISM

Filed under: Beta Products,How to,Virtualization,Windows 7,Windows Server 2008-R2 — jamesone111 @ 6:27 pm

In the initial release of Windows Server 2008 one of the questions which always came up was “how do I add X?” – the answer was that we had tools named OCSETUP and OCLIST. These have been superseded in Windows 7 and Server 2008 R2 by the new Deployment Image Servicing and Management tool (DISM.EXE). The major thing of note about DISM is that it works both with the currently running Windows image and with offline images. So

DISM.exe /online /enable-feature /featurename=FailoverCluster-Core 

adds failover clustering to a running installation of Server Core, but you can also add it to a mounted VHD file (see previous post) on drive V:

DISM.exe /image:V:\ /enable-feature /featurename=FailoverCluster-Core 

DISM has some of the functions of ImageX (which is in the Windows Automated Installation Kit): the ability to list the images in a WIM (Windows Image) file, to mount an image into the file system, and to commit the changes made to it.

DISM.exe /mount-wim /wimfile:C:\dump\install.wim /index:1 /mountdir:c:\dump\mount

DISM.exe /image:c:\dump\mount /enable-feature /featurename:FailoverCluster-Core

DISM.exe /unmount-wim /mountdir:c:\dump\mount /commit

I thought I would have a go at reducing the footprint of Hyper-V Server R2 (note that I’m working with the Release Candidate, and what is included may change before release), so I used DISM to remove the language packs and configure the exact features I want. The following command gets a list of packages:

dism /image:c:\dump\mount /get-packages
Package Identity : Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~de-DE~6.1.7100.0
Package Identity : Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~en-US~6.1.7100.0
Package Identity : Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~es-ES~6.1.7100.0
Package Identity : Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~fr-FR~6.1.7100.0
Package Identity : Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~ja-JP~6.1.7100.0
Package Identity : Microsoft-Windows-ServerCore-Package~31bf3856ad364e35~amd64~~6.1.7100.0

I want to keep the English pack and base features, and remove all the others with the remove-package option

dism /image:c:\dump\mount /remove-Package /packageName:Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~ja-JP~6.1.7100.0 

dism /image:c:\dump\mount /remove-Package /packageName:Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~fr-FR~6.1.7100.0
dism /image:c:\dump\mount /remove-Package /packageName:Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~es-ES~6.1.7100.0
dism /image:c:\dump\mount /remove-Package /packageName:Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~de-DE~6.1.7100.0

Next I took a look at the installed features. This list has been trimmed down to save space; note that when used in the later commands the names are case-sensitive.

dism /image:c:\dump\mount /get-features 

  Feature Name : Microsoft-Hyper-V
  State : Enabled
  Feature Name : Microsoft-Hyper-V-Configuration
  State : Enabled
  Feature Name : ServerCore-WOW64
  State : Enabled
  Feature Name : ServerCore-EA-IME
  State : Enabled
  Feature Name : NetFx2-ServerCore
  State : Disabled
  Feature Name : MicrosoftWindowsPowerShell
  State : Disabled
  Feature Name : ServerManager-PSH-Cmdlets
  State : Disabled
  Feature Name : WindowsServerBackup
  State : Disabled

You can use the /Get-PackageInfo and /Get-FeatureInfo options to get more information about packages and features. I decided to remove the 32-bit support (WOW64) and the East Asian language support (EA-IME), and then put in the PowerShell support.
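For example, a sketch of pulling the detail for one feature (assuming the image is still mounted at c:\dump\mount as above):

```
dism /image:c:\dump\mount /get-featureinfo /featurename:MicrosoftWindowsPowerShell
```

This reports the feature’s display name, description, state, and any features it depends on, which is worth checking before disabling anything.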

dism /image:c:\dump\mount /disable-feature /featureName:ServerCore-EA-IME 
dism /image:c:\dump\mount /disable-feature /featureName:ServerCore-WOW64   
dism /image:c:\dump\mount /enable-feature /featureName:NetFx2-ServerCore
dism /image:c:\dump\mount /enable-feature /featureName:MicrosoftWindowsPowerShell
dism /image:c:\dump\mount /enable-feature /featureName:ServerManager-PSH-Cmdlets
dism /image:c:\dump\mount /enable-feature /featureName:BestPractices-PSH-Cmdlets

With the WIM file mounted it is also possible to copy files into it or to mount the registry hives and tweak registry settings such as allowing PowerShell scripts to run.

reg load HKLM\MyTemp C:\Dump\mount\windows\system32\config\SOFTWARE

reg add "HKLM\MyTemp\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell" /V ExecutionPolicy /t REG_SZ /d "RemoteSigned"

reg Unload HKLM\MyTemp

And finally, unmount the WIM file

DISM.exe /unmount-wim /mountdir:c:\dump\mount /commit

That’s done little more than scratch the surface. DISM can add drivers or patches (.MSU files) to an image, bring it into line with an unattend file, set input locales and time zones, even change the machine name. How much you customize an existing WIM depends on whether you expect to install it enough times to make the effort worthwhile. It can also be useful for updating VHD files, and of course it is the way to add features to Server Core or Hyper-V Server installations.

Since the last post was about creating VHD files, you can guess that the next one will be about applying the images in WIM files to a VHD.

This post originally appeared on my technet blog.

How to: work with VHD files at the command line.

Filed under: Beta Products,How to,Virtualization,Windows 7,Windows Server 2008-R2 — jamesone111 @ 12:22 pm

Virtual Hard Disk (VHD) files have been given greater importance in Windows 7 and Server 2008 R2. They’ve always been used for hosting virtual machines (from the earliest Virtual PC through to Hyper-V); in Vista the complete-image backup began to use the VHD format, as does the iSCSI target software in Storage Server – which is now available to TechNet subscribers. But in the new OSes we can boot from a VHD, mount VHDs and create VHDs in the standard OS.

In the library I created for Hyper-V, one of the first things I wanted to do was create VHDs and mount and unmount them. When Hyper-V mounts a disk it doesn’t bring it online, and it flags it read-only, so the PowerShell code I wrote not only had to call the WMI functions provided by Hyper-V’s Image Management Service, it also needed to invoke DiskPart.exe with a script to get the disk to a useful state. If the index of the mounted VHD is stored in $diskIndex, this line of PowerShell does that for me:

@("select disk $diskIndex", "online disk" , "attributes disk clear readonly", "exit")  | Diskpart | Out-Null }

In Windows 7 and Server 2008 R2 you don’t need a script to call WMI objects to create and attach a VHD (note that Hyper-V’s Image Management Service calls it mounting/unmounting, while the newer Windows calls it attaching/detaching: it’s the same thing): it can be done from DiskPart.exe – perhaps automated as above – or from the storage part of the management console. Of course the MMC isn’t available if you are running on Server Core, and the command-line versions are also available when booting into Windows PE to set up a machine, so it is useful to know what they are. DiskPart needs to be run elevated (unless you are signed in as the account named Administrator, or running Windows PE); it’s just as happy running from PowerShell as from CMD, provided you run as administrator. The following DiskPart commands will set up a VHD.

create vdisk file=<path>.vhd maximum=<size in MB> type=fixed 

select vdisk file=<path>.vhd

attach vdisk

create partition primary

active

assign letter=V

format quick fs=ntfs label=<OS Name>

exit
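Rather than typing those commands interactively, they can be saved to a file and run with DiskPart’s script switch. A sketch, with an illustrative file path:

```
rem Save the commands above to a file (here C:\dump\newvhd.txt) and run:
diskpart /s C:\dump\newvhd.txt
```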

By default Create VDisk makes a fixed VHD; you can also use type=expandable. A fixed VHD takes quite a bit longer to create because the file system creates a file of the size specified by maximum and writes zero-filled blocks to it (HELP CREATE VDISK will show you more options).

To mount a VHD, you select the virtual disk and then attach it. Initially the disk won’t have any partitions on it (just like a freshly unwrapped hard disk), so the next step is to create a primary partition, make it active, give it a drive letter (I like to use V for VHD) and put a file system on it. Now you have a working drive V: of the specified size with an NTFS file system on it. You can unmount it by going back into DiskPart and entering

select vdisk file=<path>.vhd 

detach vdisk

exit

and re-attach it with

select vdisk file=<path>.vhd 

attach vdisk

exit

In a later post I’ll cover how we can put an OS onto the VHD and how that OS can be customized, and also put up a link to a video of VHDs in action.

This post originally appeared on my technet blog.

February 23, 2009

How to use Advanced Queries in Windows search.

Filed under: Beta Products,Desktop Productivity,How to,Windows 7 — jamesone111 @ 4:57 pm

If there was one single feature about Windows Vista which made me say “I’m never ever going back to Windows XP” it was search, and the way search was integrated everywhere. True, you can download Microsoft Search for Windows XP (and, as they say, other kinds of desktop search are available) but it doesn’t permeate everywhere the way it does in Vista. In Windows 7 search has got better still, with one important exception which I will come to in a moment.


On the left you can see the result of typing in the search box, and as you can see the search results are grouped by type. If you click on one of the titles it shows you just the matches of that type. However if you click “See more results” you get everything.

It so happens I was looking for copies of my invoices from Virgin Media, which I know are in my inbox. The problem I have is I automatically go to “See more results”, and in any event you can see that there are a lot of other things in Outlook – mostly from my news feed – about what the Virgin group are doing. Click through to more results and, if you’re used to Vista’s search, you’ll see we’ve lost something. In Vista this box had buttons to select different kinds of content. In Windows 7 it has gone…

 

However, you can use the Advanced Query Syntax (AQS) – and boy is there a lot of it. Type kind: and you get a list to choose from. Type size: and you get some classifications; type date: and you get a calendar and bands of dates; isAttachment: and hasAttachment: let you pick yes or no. And a quick read of the AQS page shows there is a whole lot more you can enter. Helpfully, when you enter a valid field name with the colon (:) after it, it turns blue, and an invalid one stays black.
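To give a flavour of the syntax, the field names can be combined freely in the search box. A few illustrative queries — the field names come from the AQS documentation, the values are just examples:

```
kind:email from:"Virgin Media" hasattachment:yes
kind:document modified:last week size:>1MB
author:james date:this month invoice
```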

Now I doubt anyone is going to remember every single AQS option – and since it narrows the search down, it is sometimes going to be quicker to scroll through the results than to work out how to narrow them. Still, I’m a great believer that we all use our own subsets of the available functionality, so have a look at what you can do, make use of the bits that help you and forget the rest.

This post originally appeared on my technet blog.

February 18, 2009

How to manage the Windows firewall settings with PowerShell

I mentioned recently that I’m writing a PowerShell configuration tool for the R2 edition of Hyper-V Server and Windows Server Core. One of the key parts of that is managing the firewall settings… Now, I don’t want to plug my book too much (especially as I only wrote the PowerShell part), but I had a mail from the publisher today saying copies ship from the warehouse this week, and this code appears in the book (ISBN 9780470386804, orderable through any good bookseller).

The process is pretty simple. Everything firewall-related in Server 2008/Vista and Server 2008 R2/Windows 7 is managed through the HNetCfg.FwPolicy2 COM object. First I define some hash tables to convert codes to meaningful text, and a function to translate network profiles to names. So on my home network

$fw = New-Object -comObject HNetCfg.FwPolicy2  ;  Convert-FWProfileType $fw.CurrentProfileTypes

returns “Private”


$FWprofileTypes= @{1GB="All";1="Domain"; 2="Private"; 4="Public"}  # 1GB is a bit that only matches the "All profiles" mask
$FwAction      = @{1="Allow"; 0="Block"}
$FwProtocols   = @{1="ICMPv4";2="IGMP";6="TCP";17="UDP";41="IPv6";43="IPv6Route"; 44="IPv6Frag";
                  47="GRE"; 58="ICMPv6";59="IPv6NoNxt";60="IPv6Opts";112="VRRP"; 113="PGM";115="L2TP";
                  "ICMPv4"=1;"IGMP"=2;"TCP"=6;"UDP"=17;"IPv6"=41;"IPv6Route"=43;"IPv6Frag"=44;"GRE"=47;
                  "ICMPv6"=58;"IPv6NoNxt"=59;"IPv6Opts"=60;"VRRP"=112; "PGM"=113;"L2TP"=115}
$FWDirection   = @{1="Inbound"; 2="Outbound"; "Inbound"=1; "Outbound"=2}

 

Function Convert-FWProfileType
{Param ($ProfileCode)
$FWprofileTypes.keys | foreach -begin {[String[]]$descriptions = @()} `
                               -process {if ($profileCode -bAND $_) {$descriptions += $FWProfileTypes[$_]} } `
                               -end {$descriptions}
}


The next step is to get the general configuration of the firewall; I think my Windows 7 machine is still on the defaults, and the result looks like this

Active Profiles(s) :Private 

Network Type Firewall Enabled Block All Inbound Default In Default Out
------------ ---------------- ----------------- ---------- -----------
Domain                   True             False Block      Allow     
Private                  True             False Block      Allow     
Public                   True             False Block      Allow     

The Code looks like this 


Function Get-FirewallConfig {
$fw = New-Object -comObject HNetCfg.FwPolicy2
"Active Profiles(s) :" + (Convert-FWProfileType $fw.CurrentProfileTypes)
@(1,2,4) | select @{Name="Network Type"     ;expression={$fwProfileTypes[$_]}},
                  @{Name="Firewall Enabled" ;expression={$fw.FireWallEnabled($_)}},
                  @{Name="Block All Inbound";expression={$fw.BlockAllInboundTraffic($_)}},
                  @{Name="Default In"       ;expression={$FwAction[$fw.DefaultInboundAction($_)]}},
                  @{Name="Default Out"      ;expression={$FwAction[$fw.DefaultOutboundAction($_)]}}|
            Format-Table -auto
}

Finally comes the code to get the firewall rules. One slight pain here is that the text is often returned as a pointer to a resource in a DLL, so it takes a little trial and error to find grouping information.
The other thing to note is that a change to a rule takes effect immediately, so you can enable a group of rules as easily as :

Get-FireWallRule -grouping "@FirewallAPI.dll,-29752" | foreach-object {$_.enabled = $true}

 

Function Get-FireWallRule
{Param ($Name, $Direction, $Enabled, $Protocol, $profile, $action, $grouping)
$Rules = (New-Object -comObject HNetCfg.FwPolicy2).rules
If ($name)      {$rules = $rules | where-object {$_.name      -like $name}}
If ($direction) {$rules = $rules | where-object {$_.direction   -eq $direction}}
If ($Enabled)   {$rules = $rules | where-object {$_.Enabled     -eq $Enabled}}
If ($protocol)  {$rules = $rules | where-object {$_.protocol    -eq $protocol}}
If ($profile)   {$rules = $rules | where-object {$_.Profiles  -bAND $profile}}
If ($Action)    {$rules = $rules | where-object {$_.Action      -eq $Action}}
If ($Grouping)  {$rules = $rules | where-object {$_.Grouping  -Like $Grouping}}
$rules}

Since the rules aren’t the easiest thing to read, I usually pipe the output into Format-Table, for example:

Get-FireWallRule -enabled $true | sort direction,applicationName,name |
            format-table -wrap -autosize -property Name, @{Label="Action"; expression={$Fwaction[$_.action]}},
            @{Label="Direction";expression={$fwdirection[$_.direction]}},
            @{Label="Protocol"; expression={$FwProtocols[$_.protocol]}}, localPorts, applicationname

 

Last but not least, if you want to create a rule from scratch you create a rule object with New-Object -comObject HNetCfg.FwRule, and then pass it to the Add method of the policy object’s Rules collection. If I ever find time to finish the script it will probably have New-FirewallRule, but for now you need to write your own.
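As a starting point, here is a hedged sketch of that create-and-add sequence. The rule name, port, group and profile values are purely illustrative, not from the original script:

```powershell
# A minimal sketch of creating a rule with HNetCfg.FWRule
# (the name, port and grouping below are examples, not real rules)
$rule = New-Object -comObject HNetCfg.FWRule
$rule.Name       = "Allow My Service (TCP 8080 In)"
$rule.Protocol   = 6        # TCP, per the $FwProtocols table above
$rule.LocalPorts = "8080"
$rule.Direction  = 1        # Inbound
$rule.Action     = 1        # Allow
$rule.Grouping   = "My Rules"
$rule.Profiles   = 4        # Public; combine bits for multiple profiles
$rule.Enabled    = $true

# Adding the rule to the policy's Rules collection makes it live immediately
(New-Object -comObject HNetCfg.FwPolicy2).Rules.Add($rule)
```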

This post originally appeared on my technet blog.

February 13, 2009

A Job or two saved for my “PowerShell configurator”

Somewhere in the queue of things to post is the remainder of my PowerShell configurator for Windows Server 2008 R2 Core and Hyper-V Server R2. If you’re building a cluster, the PowerShell cmdlets for clustering make that a breeze. Of course a cluster often calls for iSCSI, and setting that up from the command line is tough, so I was going to look at doing it in PowerShell. A quick tip of the hat to Ben, who’s blogged that the iSCSI configuration UI is included in Hyper-V Server 2008 R2 – just run iSCSIcpl.exe. And there is an MPIOCPL.exe for setting up Multipath I/O (when it is enabled).

You can also run control.exe DateTime.cpl and control.exe intl.cpl to set the time and international settings respectively. PowerShell V2 already has Stop-Computer and Restart-Computer cmdlets, plus Add-Computer (to a domain) and Rename-Computer, plus Test-WSMan and Set-WSManQuickConfig, so the number of things I need to implement is getting smaller…

This post originally appeared on my technet blog.

June 25, 2008

Setting up Domain controllers on Server Core.

Filed under: How to,Windows Server,Windows Server 2008 — jamesone111 @ 9:55 pm

One of the things I have pointed people to a few times recently is the Windows Server Core document in the step-by-step guides for Server 2008. Want to know how to install a role? It’s in there. Configure TCP/IP from the command line? That’s there too. Put in the key that you skipped during installation? You got it.

Fantastic. It’s the repository of all knowledge and wisdom… except that, to set up a domain controller, it tells you to run DCPROMO with an unattend file. What goes in the file? That, I’ve carefully avoided…

Right at this moment I’m trying to prepare a presentation for the morning. I found a great way of explaining objects over on the PowerShell blog: I was always told that plagiarism is the sincerest form of flattery, and I was planning to flatter Ben, who produced it. Sadly the slide deck isn’t linked from his blog (and, proving that I’m a complete numpty, I missed the link at the bottom of the PowerShell blog post). What I did find on Ben’s blog was How to Configure a Server Core Domain Controller: a-ha! Better still, it gave me something I could throw into a search, and that turned up KB 947034: How to use unattended mode to install and remove Active Directory Domain Services on Windows Server 2008-based domain controllers. Problem solved.
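For the curious, the KB article describes an answer file with a [DCInstall] section. A hedged sketch for creating a new forest — the domain names and password are examples, and the exact field set should be checked against KB 947034 before use:

```
; Illustrative dcpromo answer file, run with: dcpromo /unattend:C:\unattend.txt
[DCInstall]
ReplicaOrNewDomain=Domain
NewDomain=Forest
NewDomainDNSName=contoso.local
DomainNetBiosName=CONTOSO
InstallDNS=Yes
RebootOnCompletion=Yes
SafeModeAdminPassword=Pa$$w0rd
```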

This post originally appeared on my technet blog.

June 16, 2008

How I get the server 2008 I want: #3: Vista look and feel

Filed under: How to,Windows Server,Windows Server 2008,Windows Vista — jamesone111 @ 2:19 pm

I’ll be the first to admit that servers don’t need the nice look and feel we get with desktop operating systems. But since the core of the OS is common to both, it is possible. In Windows Server 2003 we had the Themes service, and if you switched it on you got something similar to the XP look.

In Server 2008 there are a couple more things to do.

1. Wireless. If you are using Server 2008 on a laptop for demo or development purposes you probably have wireless, but it doesn’t work… until you add the Wireless LAN Service feature and turn on the wireless service.

2. Sound. The sound services are set to start manually. I set the Windows Audio service to start automatically; when it starts, it also starts the Windows Audio Endpoint Builder.

3. Vista applications (Media Player, Mail, Calendar, etc.), glass, Explorer styles and so on come in the Desktop Experience feature. This seems to need a reboot to be fully functional, but when it is you can go to Services and turn on the Themes service, then from Personalization select the Vista theme.
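Steps 2 and 3 can be scripted with the standard service cmdlets; a minimal sketch, to be run from an elevated PowerShell prompt:

```powershell
# Set the audio services to start automatically and start them now
"Audiosrv", "AudioEndpointBuilder" | ForEach-Object {
    Set-Service   -Name $_ -StartupType Automatic
    Start-Service -Name $_
}

# The Themes service can be handled the same way once Desktop Experience is installed
Set-Service   -Name Themes -StartupType Automatic
Start-Service -Name Themes
```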

I’ve found that Server 2008 matches Vista for performance, but it lacks a few features – so I’m not going to be switching to it full time. A quick run-down of what I’d lose (whether these matter to you is another matter):

  • Bluetooth support (I’m told the Microsoft Presenter mouse works with its Bluetooth dongle, but I can’t connect my phone or GPS puck)
  • No Sidebar. Whether you like the Sidebar seems to depend on whether you’ve found gadgets you like. I have a couple but I could live without it.
  • ReadyBoost. Server does have SuperFetch but won’t use memory sticks to enhance performance.
  • Sleep and hibernate – actually server has these, but they go away when Hyper-V is running.

This post originally appeared on my technet blog.

How I get the server I want: #2 Getting sound in Hyper-V

Filed under: How to,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 12:13 pm

Before I start – let’s be clear: THERE IS NO SOUND CARD IN A HYPER-V VM.

Good. Now that’s out of the way, let’s talk about how we get sound in a machine without a sound card – and this applies to a physical server too.

Sitting in its rack in the data centre there is no reason why a server should have a sound card, and not much need for one. When you run applications via Terminal Services you might very well want sound, and so the Remote Desktop client (MSTSC.exe) has an option "Bring Sound to this machine".

Although Hyper-V connections use the same RDP protocol under the surface, a connection behaves as though it is plugged into the VM’s graphics card – it’s not a "remote desktop" session – so to get this to work you need to have Remote Desktop (or full-on Terminal Services) installed.

You also need to start the Windows Audio service on the remote server. If it’s not running when you log in you won’t get any sound in that session, so if you start it from a Remote Desktop connection, save yourself some grief and log out and back in.

Once you’ve done that you can turn on the volume control and you should see that a "pseudo" sound card is installed.

So now my demonstrations of Hyper-V include using Terminal Services RemoteApp in the parent partition to run PowerPoint, with sound, from a child VM.

This post originally appeared on my technet blog.

How I get the server I want: #1 Disabling the shutdown event tracker

Filed under: How to,Windows 2003 Server,Windows Server,Windows Server 2008 — jamesone111 @ 8:21 am

I think the shutdown event tracker came in with Server 2003, and I’m sure that in some data centres it is a very useful tool for logging why servers were manually shut down.

On a demo system it tends to be a nuisance. Hyper-V, for example, disables sleep and hibernate, so if you have it on a laptop you have to shut down if you’re going to be on the move for any length of time. Being asked "Why?" every time grates pretty quickly.

Some time ago I found out how to disable it, and I was setting up a new build in the office a few days ago when a passing colleague said "I never knew how to do that… you should blog it".

It’s simple enough: you can control it through Group Policy if the machine is in a domain, or via the local Group Policy object. In the latter case, start the MMC, load the Group Policy snap-in and point it at the Local Computer.

Once you can see the network or local policy, navigate to Computer Configuration, then Administrative Templates, then System. In the System container there are a number of sub-containers; scroll down past those and you’ll find some settings – you’re looking for "Display Shutdown Event Tracker". You can set it to Enabled and display always, on workstations only, or on servers only. Or you can set it to Disabled. If the setting is not present it seems to default to enabled on servers only.
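If you’d rather script it, the policy can also be set directly in the registry. A hedged sketch – the path and value name below are my reading of what the Group Policy setting writes, so verify on a test machine before relying on it:

```
rem Setting the Reliability policy value to 0 should disable the tracker
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Reliability" /v ShutdownReasonOn /t REG_DWORD /d 0 /f
```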


Once it is disabled, the dialog disappears. Job done.

This post originally appeared on my technet blog.
