James O'Neill's Blog

March 19, 2014

Exploring the Transcend Wifi-SD card

Filed under: Linux / Open Source,Photography — jamesone111 @ 1:37 pm

There are a number of variations on the saying: “Never let a programmer have a soldering iron; and never let an engineer have a compiler.”

It’s been my experience over many years that hardware people are responsible for rubbish software. Years upon years of shoddy hardware drivers and dreadful software bundled with cameras (Canon, Pentax, I’m looking at both of you), printers (HP, Epson) and scanners (HP – one day I might forgive you) have provided the evidence. Since leaving Microsoft I’ve spent more time working with Linux, and every so often I get into a rant about the lack of quality control: not going back and fixing bugs, not writing proper documentation (the “who needs documentation when you’ve got Google” attitude meant that, when working on one client problem, all we could find told us it could not be solved; only a lucky accident found the solution). Anyone can program: my frustrations arise when they do it without a proper specification, testing regime, documentation and after-care. The question is: what happens when engineers botch together an embedded Linux system?

Let me introduce you to what I believe to be the smallest commercially available Linux computer and web server.

I’ve bought this in its Transcend form, which is available for about £25. It’s a 16GB memory card, an ARM processor and a WiFi chip, all in an SD card package. Of course chip designers will be able to make it smaller, but since it’s already too easy to lose a Micro-SD card, I’m not sure there would be any point in squeezing it into a smaller form factor. Transcend aren’t the only firm to use this hardware: there is a page on OpenWrt.org which shows that Trek’s Flu-Card and PQI’s Aircard use the same hardware and core software. The Flu card is of particular interest to me, as Pentax have just released the O-FC1: a custom version of the Flu card with additional functions, including the ability to remotely control their new K3 DSLR. Since I don’t have the K3 (yet) and the Pentax card is fairly expensive, I went for the cheap generic option.

The way these cards work is different from the better-known Eye-Fi card. They are SERVERS: they don’t upload pictures to a service by themselves; instead they expect a client to come to them, discover the files it wants and download them. We’re expected to do this using HTTP, either from a web browser or from an app on a mobile device which acts as a wrapper for the same HTTP requests. If you want your pictures uploaded to photo-sharing sites like Flickr, Photobucket or SmugMug, online storage like Dropbox or OneDrive (née SkyDrive), or social media sites (take your pick), these cards – as shipped – won’t do that. Personally I don’t want that, so that limitation’s fine. The cards default to being an access point on their own network – which is known as “Direct Share mode” – it feels odd but can be changed.

Various people have reported that the WiFi functionality doesn’t start if you plug the card into the SD slot of a laptop; it’s suggested this is a function of the power supplied. Transcend supply a USB card reader in the package, and plugged into that my brand-new card soon popped up as a new wireless network. It’s not instant – there’s an OS to load – but it takes less than a minute. This has another implication for use in a camera: if the camera powers down, the network goes with it, so the camera has to stay on long enough for you to complete whatever operations are needed.

The new card names its network WIFISD, and switching to that – the default key is 12345678 – gave me a wireless connection with a nominal speed of 72Mbit/s and a new IP configuration: a connection-specific DNS suffix of WifiCard, an IP address of 192.168.11.11, and a DNS server, default gateway and DHCP server of 192.168.11.254 – that’s the server. The first thing I did was point my browser at 192.168.11.254, enter the login details (user admin, password admin) and hey presto, up came the home page. This looks like it was designed by someone with the graphic design skills of a hardware engineer, or possibly a blind person. I mean, I know the card is cheap, but effort seems to have gone into making it look cheap AND nasty.

However, with the [F12] developer tools toggled on in Internet Explorer I get to my favourite tool: the network monitor. First of all I get a list of what has been fetched, and if I look at the details for one of the requests, the response headers tell me the clock was set to 01 Jan 2012 when the system started, and that the server is Boa/0.94.14rc21.

The main page has four other pages arranged as a set of frames: frame1 is the menu on the left, frame2 is the banner (it only contains Banner.jpg) and frame3 initially holds page.html, which just contains page.jpg; there is also a blank.html to help the layout. Everything of interest is in frame1, and what is interesting is that you can navigate to frame1.html without entering a user name and password, and from there you can click Settings and reset the admin password.
The settings page is built by a Perl script (/cgi-bin/kcard_edit_config_insup.pl) and if you view the page source, the administrator password is there in the HTML form, so you don’t even need to reset it. Secure? Not a bit of it. Within 5 minutes of plugging the card in I’d found a security loophole (I was aware of others before I started, thanks to the OpenWrt page and Pablo’s investigation). I love the way that Linux fans tell me you can build secure systems with Linux (true) and that it can be used on tiny bits of embedded hardware where Windows just isn’t an option (obviously true here): but you don’t automatically get both at the same time. A system is only as good as the people who specified, tested, documented and patched it.

While I had the settings page open I set the card to work in “internet mode” by default and gave it the details of my access point. You can specify three access points; it seems that if the card can’t find a known access point it drops back to Direct Share mode so you can get in and change the settings (I haven’t tried this for myself). So now the card is on my home WiFi network with an address from that network (the card does nothing to tell you the address, so you have to discover it for yourself). Since there is just a process of trying to connect to an access point with a pre-shared key, any hotspot which needs a browser-based sign-on won’t work.

The next step was to start exploring the File / Photo / Video pages. Using the same IE network monitor as before it’s quite easy to see how they work – although Files is a Perl script and Pictures & Videos are .cgi files, the result is the same: a page which calls /cgi-bin/tslist?PATH=%2Fwww%2Fsd%2FDCIM and processes the results. What’s interesting is that path, /www/sd/DCIM. It looks like an absolute path… so what is returned by changing the path to, for example, / ? A quick test showed that /cgi-bin/tslist?PATH=%2F does return the contents of the root directory. So /cgi-bin/tslist?PATH={whatever} requires no security and shows the contents of any directory.
The Pictures page shows additional calls to /cgi-bin/tscmd?CMD=GET_EXIF&PIC={full path} and /cgi-bin/thumbNail?fn={full path}. The Files page makes calls to /cgi-bin/tscmd?CMD=GET_FILE_INFO&FILE={full path}. (The picture EXIF is a bit disappointing: it doesn’t show the lens, shutter settings, camera model or exposure time, just the file size – at least with files we see the modified date. The thumbnail is also a disappointment: there is a copy of DCRAW included on the system which is quite capable of extracting the thumbnail stored in raw files, but it’s not used.)
And there is a link to download the files: /cgi-bin/wifi_download?fn={name}&fd={directory}. By the way, notice the inconsistency of parameter naming: the same role is filled by PATH=, PIC=, fn= and fn=&fd=. Was there an organised specification for this?
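Pulling those observations together, the unauthenticated directory listing can be requested from PowerShell. This is a minimal sketch, not anything shipped with the card: the helper name is mine, and it assumes the card’s default Direct Share address of 192.168.11.254.

```powershell
# Hypothetical helper: compose the tslist URL described above.
function Get-TsListUrl {
    param(
        [string]$Path        = '/www/sd/DCIM',
        [string]$CardAddress = '192.168.11.254'
    )
    # The CGI expects the path URL-encoded; EscapeDataString gives %2F for "/"
    # and %20 for spaces (unlike UrlEncode, which would give "+").
    "http://$CardAddress/cgi-bin/tslist?PATH=" + [uri]::EscapeDataString($Path)
}

# Listing the root directory - no credentials required:
# Invoke-WebRequest (Get-TsListUrl -Path '/')
```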

Of course I wanted to use PowerShell to parse some of the data that came back from the server, and I hit a snag early on:
Invoke-WebRequest "http://192.168.1.110/cgi-bin/tscmd?CMD=GET_EXIF&PIC=%2Fwww%2Fsd%2FHello%20James%2FWP_20131026_007.jpg"
This throws an error: The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF

Shock, horror! More sloppiness in the CGI scripts: the last response header is followed not by [CR][LF] but by [LF][LF]. Fortunately Lee Holmes has already got an answer for this one. I also found that the space in my folder path /www/sd/Hello James caused a problem: when it ran through [System.Web.HttpUtility]::UrlEncode the space became a + sign, not the %20 in the line above, and the CGI only accepts %20, so that needs to be fixed up. Grrr.
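The difference between the two encodings is easy to demonstrate. As one way round the fix-up (an alternative I use here, not necessarily what the original script did), .NET’s [uri]::EscapeDataString emits %20 rather than +:

```powershell
# V2-era PowerShell may need System.Web loading first; later versions resolve it anyway.
Add-Type -AssemblyName System.Web -ErrorAction SilentlyContinue

# UrlEncode follows the form-post convention: spaces become "+"
[System.Web.HttpUtility]::UrlEncode('Hello James')   # Hello+James

# EscapeDataString percent-encodes instead: spaces become "%20",
# which is the only form this card's CGI accepts
[uri]::EscapeDataString('Hello James')               # Hello%20James
```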

Since we can get access to any of the files on the server we can examine all the configuration files, and those which control the start-up are of particular interest. Pablo’s post was the first I saw where someone had spotted that init looks for an autorun.sh script in the root of the SD file system, which can start services which aren’t normally launched. There seems to be only one method quoted for starting an FTP service:
tcpsvd -E 0.0.0.0 21 ftpd -w / &
There are more ways given for starting the telnet service, and it looks for all the world as if this revision of the Transcend card has a non-working version of telnetd (a lot of the utilities are in a single busybox executable), so Pablo resorted to getting a complete busybox, quickly installing it and using:
cp /mnt/sd/busybox-armv5l /sbin/busybox
chmod a+x /sbin/busybox
/sbin/busybox telnetd -l /bin/bash &

This was the only one which worked for me. Neither FTP nor telnet needs any credentials; with telnet access it doesn’t take long to find that the Linux kernel is 2.6.32.28, the WiFi is an Atheros AR6003 11n and the package is a KeyASIC WIFI-SD (searching for this turns up pages by people who have already been down this track), or more specifically a KeyASIC Ka2000 EVM with an ARM926EJ-S CPU, which seems to be used in tablets as well.

Poking around inside the system there are various references to “Instant upload” and to G-PLUS, but there doesn’t seem to be anything present to upload to any of the services I talked about before; when shooting gigabytes of photos it doesn’t really make sense to send them up to the cloud before reviewing and editing them. In fact even my one-generation-behind camera creates problems of data volume. File transfer with FTP is faster than HTTP, but it is still slow: HTTP manages about 500KBytes/sec and FTP between 750 and 900KBytes/sec. That’s just too slow, much too slow. Looking at some recent studio shoots I’ve used 8GB of storage in 2 hours: averaging a bit more than 1MB/sec. With my K5, RAW files are roughly 22MB, so each takes about 45 seconds to transfer using HTTP – but the camera can shoot 7 frames in a second and then spend five minutes transferring the files: it’s quicker to take the memory card out of the camera, plug it into the computer, copy the files and return the card to the camera. The card might get away with light use, shooting JPGs, but in those situations – which usually mean wandering round snapping a picture here and a picture there – would your WiFi-connected machine be set up and in range?

The sweet spot seems to be running something on a laptop, tablet or phone to transfer preview JPGs – using lower than maximum resolution, and some compression rather than best quality (the worry here is forgetting to go back to best-possible JPEG and turning RAW support off). In this situation it really is a moot point which end is the client and which end is the server. Having the card upload every file to the cloud is going to run into problems with the volume of data, connecting to access points and so on; so is pulling any great number of RAW files off the card. Writing apps to do this might be fun, and of course there’s a world of possible hacks for the card itself.


August 9, 2012

Getting to the data in Adobe Lightroom–with or without PowerShell

Filed under: Databases / SQL,Photography,Powershell — jamesone111 @ 7:01 am

Some Adobe software infuriates me (Flash), I don’t like their PDF reader and use Foxit instead, and apps which use Adobe AIR always seem to leak memory. But I love Lightroom. It does things right – like installations – which other Adobe products get wrong. It maintains a “library” of pictures and creates virtual folders of images (“collections”), but it keeps metadata in the image files, so the data stays with the pictures when they are copied somewhere else – something some other programs still get badly wrong. My workflow with Lightroom goes something like this:

  1. If I expect to manipulate the image at all I set the cameras to save in RAW (DNG) format, not JPG (with my scuba-diving camera I use CHDK to get the ability to save in DNG)
  2. Shoot pictures – delete any where the camera was pointing at the floor, the lens cap was on, the studio flash didn’t fire etc., but otherwise don’t edit in the camera.
  3. Copy everything to the computer – usually I create a folder for a set of pictures and put the DNG files into a “RAW” subfolder. I keep full memory cards in filing sleeves meant for 35mm slides.
  4. Using PowerShell, I replace the IMG prefix with something which tells me what the pictures are but keeps the camera-assigned image number.
  5. Import the pictures into Lightroom – manipulate them and export to the parent folder of the “RAW” one. Make any prints from inside Lightroom. Delete “dud” images from the Lightroom catalog.
  6. Move dud images out of the RAW folder to their own folder. Back up everything. Twice. [I’ve only recently learnt to export the Lightroom catalog information to keep the manipulations with the files]
  7. Remove RAW images from my hard disk.
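Step 4 – the rename – can be sketched with Rename-Item and a -replace on the camera’s prefix. The “Malta” prefix is just an example of a descriptive name, and -WhatIf previews the renames before committing to them:

```powershell
# Swap the camera's IMG prefix for a descriptive one, keeping the
# camera-assigned image number (IMG_1234.DNG -> Malta_1234.DNG).
dir IMG*.DNG | Rename-Item -NewName { $_.Name -replace '^IMG', 'Malta' } -WhatIf
```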

There is one major pain: how do I know which files I have deleted in Lightroom? I don’t want to delete them from the hard disk, I want to move them later. It turns out Lightroom uses a SQLite database, and there is a free Windows ODBC driver for SQLite available for download. With this in place one can create an ODBC data source, point it at a Lightroom catalog and poke about with the data. Want a complete listing of your Lightroom data in Excel? ODBC is the answer. But let me issue these warnings:

  • Lightroom locks the database files exclusively – you can’t use the ODBC driver and Lightroom at the same time. If something else is holding the files open, Lightroom won’t start.
  • The ODBC driver can run UPDATE queries to change the data: do I need to say that is dangerous? Good.
  • There’s no support for this. If it goes wrong, expect Adobe support to say “You did WHAT?” and start asking about your backups. Don’t come to me either. You can work from a copy of the data if you don’t want to risk having to fall back to one of the backups Lightroom makes automatically.

I was interested in four sets of data, shown in the following diagrams. Below is the image information with its associated metadata and file information. Lightroom stores images in the Adobe_images table; IPTC and EXIF metadata link to images – their “image” field joins to the “id_local” primary key in Adobe_images. Images have a “root file” (in the AgLibraryFile table) which links to a library folder (AgLibraryFolder), which is expressed as a path from a root folder (AgLibraryRootFolder table). The link always goes to the “id_local” field. I could get information about the folders imported into the catalog just by querying these last two tables (outlined in red).

[diagram of the image, file and folder tables]

The SQL to fetch this data looks like this for just the folders:
SELECT   RootFolder.absolutePath || Folder.pathFromRoot AS FullName
FROM     AgLibraryFolder     Folder
JOIN     AgLibraryRootFolder RootFolder ON RootFolder.id_local = Folder.rootFolder
ORDER BY FullName

SQLite is one of the dialects of SQL which doesn’t accept AS when aliasing a table in the FROM part of a SELECT statement. Since I run this in PowerShell I also put in a WHERE clause which inserts a parameter. To get all the metadata, the query looks like this:
SELECT    rootFolder.absolutePath || folder.pathFromRoot || rootfile.baseName || '.' || rootfile.extension AS fullName,
          LensRef.value AS Lens,     image.id_global,       colorLabels,                Camera.Value       AS cameraModel,
          fileFormat,                fileHeight,            fileWidth,                  orientation,
          captureTime,               dateDay,               dateMonth,                  dateYear,
          hasGPS,                    gpsLatitude,           gpsLongitude,               flashFired,
          focalLength,               isoSpeedRating,        caption,                    copyright
FROM      AgLibraryIPTC              IPTC
JOIN      Adobe_images               image      ON      image.id_local = IPTC.image
JOIN      AgLibraryFile              rootFile   ON   rootfile.id_local = image.rootFile
JOIN      AgLibraryFolder            folder     ON     folder.id_local = rootfile.folder
JOIN      AgLibraryRootFolder        rootFolder ON rootFolder.id_local = folder.rootFolder
JOIN      AgharvestedExifMetadata    metadata   ON      image.id_local = metadata.image
LEFT JOIN AgInternedExifLens         LensRef    ON    LensRef.id_Local = metadata.lensRef
LEFT JOIN AgInternedExifCameraModel  Camera     ON     Camera.id_local = metadata.cameraModelRef
ORDER BY FullName

Note that since some images don’t have a camera or lens logged, the joins to those tables need to be LEFT joins, not inner joins. Again, the version I use in PowerShell has a WHERE clause which inserts a parameter.
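That parameterised WHERE clause can be sketched like this, using the shorter folders query. The function name is my own, and note one wrinkle: SQLite won’t let a WHERE clause use the FullName alias, so the concatenation expression is repeated.

```powershell
# Build the folder query, optionally splicing in a WHERE clause.
# Feed the resulting SQL through your ODBC connection (e.g. the
# System.Data.Odbc classes) pointed at a *copy* of the catalog.
function Get-LightroomFolderSql {
    param([string]$Include)
    $path = 'RootFolder.absolutePath || Folder.pathFromRoot'
    $sql  = "SELECT $path AS FullName " +
            'FROM AgLibraryFolder Folder ' +
            'JOIN AgLibraryRootFolder RootFolder ON RootFolder.id_local = Folder.rootFolder '
    if ($Include) { $sql += "WHERE $path LIKE '%$Include%' " }
    $sql + 'ORDER BY FullName'
}

Get-LightroomFolderSql -Include 'dive'   # query restricted to paths containing "dive"
```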

OK, so much for file data – the other data I wanted was about collections. The list of collections is in just one table (AgLibraryCollection), so it is very easy to query, but I also wanted to know the images in each collection.

[diagram of the collection tables]

Since one image can be in many collections, and each collection holds many images, AgLibraryCollectionImage is a table to provide a many-to-many relationship. Different tables might be joined to Adobe_images depending on what information one wants about the images in a collection; I’m interested only in mapping files on disk to collections in Lightroom, so I have linked to the file information, giving a query like this:

SELECT   Collection.name AS CollectionName,
         RootFolder.absolutePath || Folder.pathFromRoot || RootFile.baseName || '.' || RootFile.extension AS FullName
FROM     AgLibraryCollection      Collection
JOIN     AgLibraryCollectionimage cimage     ON collection.id_local = cimage.Collection
JOIN     Adobe_images             Image      ON      Image.id_local = cimage.image
JOIN     AgLibraryFile            RootFile   ON   Rootfile.id_local = image.rootFile
JOIN     AgLibraryFolder          Folder     ON     folder.id_local = RootFile.folder
JOIN     AgLibraryRootFolder      RootFolder ON RootFolder.id_local = Folder.rootFolder
ORDER BY CollectionName, FullName

Once I have an ODBC driver (or an OLE DB driver) I have a ready-made PowerShell template for getting data from the data source. So I wrote functions which let me do:
Get-LightRoomItem -ListFolders -include $pwd
lists folders, below the current one, which are in the Lightroom library.
Get-LightRoomItem -include "dive"
lists files in the Lightroom library where the path contains "dive" in the folder or file name.
Get-LightRoomItem | Group-Object -NoElement -Property "Lens" | sort count | ft -a count,name
produces a summary of Lightroom items by lens used. And
$paths = (Get-LightRoomItem -include "$pwd%dng" | select -ExpandProperty path) ; dir *.dng |
           where {$paths -notcontains $_.FullName} | move -Destination scrap -whatif

stores the paths of Lightroom items in the current folder ending in .DNG in $paths, then gets the files in the current folder and moves those which are not in $paths (i.e. not in Lightroom). Specifying -WhatIf allows the moves to be confirmed before they are made.
Get-LightRoomCollection
lists all collections.
Get-LightRoomCollectionItem -include musicians | copy -Destination e:\raw\musicians
copies the original files in the “musicians” collection to another disk.

I’ve shared the PowerShell code on SkyDrive.

July 31, 2012

Rotating pictures from my portfolio on the Windows 7 Logon screen

Filed under: Photography,Powershell — jamesone111 @ 12:15 pm

In the last three posts I outlined my Get-IndexedItem function for accessing Windows Search. The more stuff I have on my computers, the harder it is to find a way of classifying it so it fits into hierarchical folders: the internet would be unusable without search, and above a certain number of items local stuff is too. Once I got search I started leaving most mail in my Inbox, and Outlook uses search to find what I want; I have one “book” in OneNote with a handful of sections, and if I can’t remember where I put something, search comes to the rescue. I take the time to tag photos so that I don’t have to worry too much about finding a folder structure to put them in. So I’ll tag geographically (I only have a few pictures from India – one three-week trip – so India gets one tag, but UK pictures get divided by county, and in counties with many pictures I put something like Berkshire/Reading; various tools will make a hierarchy with Berkshire, then Newbury, Reading etc.). People get tagged by name – Friends and Family being prefixes to group those – and so on. I use Windows’ star ratings to tag pictures I like – whether I took them or not – and Windows’ “use top rated pictures” option for the desktop background picks those up. I also have a tag of “Portfolio”.

Ages ago I wrote about customizing the Windows 7 logon screen. So I had the idea: “Why not find pictures with the Portfolio tag and make them logon backgrounds?” Another old post covers PowerShell tools for manipulating images, so I could write a script to do it, and use Windows scheduled tasks to run that script each time I unlocked the computer, so that the next time I went to the logon screen I would have a different picture. That was the genesis of Get-IndexedItem, and I’ve added it, together with New-LogonBackground, to the image module download on the TechNet Script Center.

If you read that old post you’ll see one of the things we depend on is setting a registry key, so the function checks that the registry key is set and writes a warning if it isn’t:

if ((Get-ItemProperty HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background
    ).oembackground -ne 1) {
    Write-Warning ("Registry Key OEMBackground under " +
        "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background needs to be set to 1")
    Write-Warning "Run AS ADMINISTRATOR with -SetRegistry to set the key and try again."
}

So if the registry value isn’t set to 1, the function prints a warning which tells the user to run with –SetRegistry. After testing this multiple times – I found changing the Windows theme resets the value – and forgetting to run PowerShell with elevated permissions, I put in a try/catch to pick this up and say “run elevated”. Just as a side note: I always find that when I write try/catch it doesn’t work at first, and it takes me a moment to remember that catch only works on terminating errors, so the command you want to catch usually needs -ErrorAction Stop.
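A tiny demonstration of that gotcha (the path is just a file that doesn’t exist):

```powershell
# Without -ErrorAction Stop, Get-Item raises a non-terminating error and
# the catch block never runs; with it, the error becomes terminating.
$caught = $false
try   { Get-Item 'C:\no\such\file.txt' -ErrorAction Stop }
catch { $caught = $true }
$caught   # True
```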

if ($SetRegistry) {
  try {
    Set-ItemProperty -Name oembackground -Value 1 -ErrorAction Stop `
        -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background"
  }
  catch [System.Security.SecurityException] {
    Write-Warning "Permission Denied - you need to run as administrator"
  }
  return
}

The function also tests that it can write to the directory where the images are stored, since this doesn’t normally allow user access: if it can’t write a file, it tells the user to set the permissions. Instead of using try/catch here, I use $? to see if the previous command was successful:
Set-Content -ErrorAction SilentlyContinue -Path "$env:windir\System32\oobe\Info\Backgrounds\testFile.txt" `
            -Value "This file was created to test for write access. It is safe to remove"
if (-not $?) {Write-Warning "Can't create files in $env:windir\System32\oobe\Info\Backgrounds - please set permissions and try again"
              return
}
else         {Remove-Item -Path "$env:windir\System32\oobe\Info\Backgrounds\testFile.txt"}
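The behaviour of $? can be seen in isolation – it reports failure even when the error message itself has been suppressed:

```powershell
# Try to read a file which doesn't exist, silencing the error message;
# $? still records that the last command failed.
Get-Item 'C:\no\such\file.txt' -ErrorAction SilentlyContinue
$lastOk = $?     # read it immediately - the next command resets it
$lastOk   # False
```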

The next step is to find the size of the monitor. Fortunately there is a WMI object for that, but not all monitor sizes are supported as bitmap sizes, so the function takes –Width and –Height parameters. If these aren’t specified it gets the values from WMI and allows for a couple of special cases – my testing has not been exhaustive, so other resolutions may need special handling. The width and height determine the file name for the bitmap, and later the function checks the aspect ratio – so it doesn’t try to crop a portrait image to fit a landscape monitor.

if (-not($width -and $height)) {
    $mymonitor = Get-WmiObject Win32_DesktopMonitor -Filter "availability = '3'" | select -First 1
    $width, $height = $mymonitor.ScreenWidth, $mymonitor.ScreenHeight
    if ($width -eq 1366) {$width = 1360}
    if (($width -eq 1920) -and ($height -eq 1080)) {$width,$height = 1360,768}
}
if (@("768x1280" ,"900x1440" ,"960x1280" ,"1024x1280" ,"1280x1024" ,"1024x768" , "1280x960" ,"1600x1200",
      "1440x900" ,"1920x1200" ,"1280x768" ,"1360x768") -notcontains "$($width)x$($height)" )
{
    write-warning "Screen resolution is not one of the defaults. You may need to specify width and height"
}
$MonitorAspect = $Width / $height
$SaveName = "$env:windir\System32\oobe\Info\Backgrounds\Background$($width)x$height.jpg"

The next step is to get the image – Get-Image is part of the PowerShell tools for manipulating images.

$myimage = Get-IndexedItem -path $path -recurse -Filter "Kind=Picture","keywords='$keyword'",
                           "store=File","width >= $width","height >= $height" |
           Where-Object {($_.width -gt $_.height) -eq ($width -gt $height)} |
           Get-Random | Get-Image

Get-IndexedItem looks for files in the folder specified by the –Path parameter – which defaults to [system.environment]::GetFolderPath([system.environment+specialFolder]::MyPictures), the approved way to find the “My Pictures” folder. –Recurse tells it to look in sub-folders, and it looks for files whose keywords match the –Keyword parameter (which defaults to “Portfolio”). It filters out pictures which are smaller than the screen, and then Where-Object filters the list down to those which have the right aspect ratio. Finally one image is selected at random and piped into Get-Image.
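The orientation filter is worth a second look: comparing the two booleans keeps an image only when image and screen agree about landscape versus portrait. A toy run with made-up sizes:

```powershell
$width, $height = 1360, 768          # a landscape screen
$images = @(
    [pscustomobject]@{ Name = 'landscape.jpg'; Width = 4928; Height = 3264 },
    [pscustomobject]@{ Name = 'portrait.jpg' ; Width = 3264; Height = 4928 }
)
# Keep images whose orientation matches the screen's
$images | Where-Object { ($_.Width -gt $_.Height) -eq ($width -gt $height) } |
          Select-Object -ExpandProperty Name    # landscape.jpg
```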

If this is successful, the function logs what it is doing to the event log. I set up a new log source “PSLogonBackground” in the Application log by running PowerShell as administrator and using the command:
New-EventLog -Source PSLogonBackground -LogName application
Then my script can use that as a source – and since I don’t want to bother the user if the log isn’t configured, I use -ErrorAction SilentlyContinue here:
write-eventlog -logname Application -source PSLogonBackground -eventID 31365 -ErrorAction silentlycontinue `
                -message "Loaded $($myImage.FullName) [ $($myImage.Width) x $($myImage.Height) ]"


The next thing the function does is apply cropping and scaling image filters from the original image module, as required, to get the image to the right size. When it has done that it tries to save the file, by applying a conversion filter and saving the result. The initial JPEG quality is passed as a parameter; if the file is too big, the function loops round reducing the JPEG quality until the file fits into the 250KB limit, and logs the result to the event log.

Set-ImageFilter -filter (Add-ConversionFilter -typeName "JPG" -quality $JPEGQuality -passThru) -image $myimage -Save $saveName
$item = Get-Item $saveName
while ($item.length -ge 250KB -and ($JPEGQuality -ge 15)) {
      $JPEGQuality -= 5
      Write-Warning "File too big - Setting Quality to $JPEGQuality and trying again"
      Set-ImageFilter -filter (Add-ConversionFilter -typeName "JPG" -quality $JPEGQuality -passThru) -image $myimage -Save $saveName
      $item = Get-Item $saveName
}
if ($item.length -le 250KB) {
    Write-EventLog -LogName Application -Source PSLogonBackground -ErrorAction SilentlyContinue `
        -EventID 31366 -Message "Saved $($Item.FullName) : size $($Item.length)"
}


That’s it. If you download the module, remove the “internet block” on the zip file, expand the files into \users\YourUserName\WindowsPowerShell\Modules, and try running New-LogonBackground (with –Verbose to get extra information if you wish).
If the permissions on the folder have been set and the registry key is set, pressing [Ctrl]+[Alt]+[Del] should reveal a new image. You might want to use a different keyword or a different path, or start by trying a higher JPEG quality, in which case you can run it with parameters as needed.

Then it is a matter of setting up the scheduled task: here are the settings from my scheduler

[screenshots of the scheduled task settings]

The program here is the full path to PowerShell.exe, and the parameters box contains:
-noprofile -windowstyle hidden -command "Import-Module Image; New-logonBackground"

Lock, unlock, and my background changes. Perfect. It’s a nice talking point and a puzzle – sometimes people like the pictures (although someone said one of a graveyard was morbid) – and sometimes they wonder how the background they can see is neither the standard one nor the one they saw previously.

July 7, 2010

Working with the image module for PowerShell; part 3, GPS and other data

Filed under: Photography,Powershell — jamesone111 @ 7:59 am

In part one I showed how my downloadable PowerShell module can tag photos using related data – like GPS position – which was logged as they were being taken, and in part two I showed how I’d extended the module in James Brundage’s PowerPack for Windows 7. Now I want to explain the extensions which automate the processes of:

  • Getting the data logged by GPS units and similar devices
  • Reading each image file from the memory card and matching it to an entry in the log made at around the same time
  • Building up the set of EXIF filters based on the log entry.

The data and pictures are connected by the time stamp on each, but to connect them properly the scripts must cope with any time difference between the camera’s clock and the time on the logging device – whether that’s a GPS unit or my wrist-mounted scuba computer. A few seconds won’t introduce much error, but the devices might be in different time zones – for example GPS works on universal time (GMT) – so the offset is often hours, not seconds. My quick and dirty way of making a note of the difference is to photograph whatever is doing the logging (assuming it can display its time). The camera will record the time its own clock was set to in the EXIF “date and time taken” field, and subtracting that from the time displayed on the logger in the picture gives an offset to apply to all data points. The following is the core of a function named Set-Offset which could be seen in part one:

$RefDate = ([datetime]( Read-Host ("Please enter the Date & time " +
                                   "in the reference picture, formatted as" + [char]13 +
                        [Char]10 + "Either MM/DD/yyyy HH:MM:SS ±Z or " +
                                   "dd MMMM yyyy HH:mm:ss ±Z"))  
            ).touniversalTime()

$ReferenceImagePath = Read-Host "Please enter the path to the picture"
if ($ReferenceImagePath -and (test-path $ReferenceImagePath) -and $RefDate) {
     $picTime = (get-exif -image $ReferenceImagePath).dateTaken 
     $Global:offset = ($picTime - $refdate).totalSeconds
}

The real Set-Offset can take -RefDate and -ReferenceImagePath parameters so the user doesn’t need to be prompted for them.  Most of the code you can see is concerned with getting the user to enter the time (in a format that PowerShell can use) and the path to the file. The only part which uses the image module is
(get-exif -image $ReferenceImagePath).dateTaken
Get-Exif is a command I added, and it returns an object which contains all the interesting EXIF data from the image file. Only the value in the DateTaken property is of interest here; it is used to calculate the number of seconds between the camera time and logger time and the result is stored in a global variable named $offset.

The next step is to read the data and apply the offset to it; depending on how it was logged, this will be one of:
  $Points = Get-NMEAData   -Path $Logpath -offset $offset
Or
  $Points = Get-GPXData    -Path $Logpath -offset $offset
Or 
  $Points = Get-CSVGPSData -Path $Logpath -offset $offset
Or
  $Points = Get-SuuntoData -Path $Logpath -offset $offset

The last one handles the comma-separated data exported from the Suunto Dive Manager program, which downloads the data from my dive watch. The other 3 deal with different formats of GPS data: it may be in the form of NMEA sentences (comma separated again), the CSV format used by Efficasoft GPS utilities on my phone, or the XML-based GPX format. (GPS data formats are worth another post of their own.) You may need to make slight alterations to these functions to work with your own logger, but they are easy to change.  All of them except Get-GPXData import from a CSV file – and use a feature which is new in PowerShell V2 to specify the CSV column headings when using the Import-Csv command.  Get-GPXData uses XML documents, looking for a hierarchy which goes <gpx> <trk> <trkseg> <trkpt> <trkpt> <trkpt>… All the functions use Select-Object to remove fields which aren’t needed and insert calculated data (for example converting the native speed in knots from GPS to MPH and KM/H).
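Both techniques are worth a line of illustration. The sketch below is mine rather than lifted from the module – the column names are assumptions, and $Logpath / $GpxPath stand in for real file paths:

```powershell
# PowerShell V2 lets Import-Csv supply the headings for a headerless log file.
# The header names here are illustrative, not necessarily the module's own.
$raw = Import-Csv -Path $Logpath -Header "Date","Time","Lat","Lon","AltM","Knots"

# Get-GPXData's approach: cast the file content to XML, then walk the
# gpx/trk/trkseg/trkpt hierarchy with PowerShell's dotted XML navigation.
[xml]$gpx    = Get-Content -Path $GpxPath
$trackPoints = $gpx.gpx.trk.trkseg.trkpt
```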

After running one of these commands there will be a collection of data points stored in the variable $points. Each data point has a time – adjusted by the offset value, so it is the time as the camera would have seen it. The Suunto dive computer points have a Description (the name of the dive site and water temperature) and depth, while the GPS points have Speed (GPS works in knots and the script calculates miles per hour and kilometres per hour), bearing, Latitude as Degrees, Minutes, Seconds, North or South, Longitude as Degrees, Minutes, Seconds, East or West, Latitude & Longitude in their original form from the logger, and Altitude in both meters and feet. (NMEA data needs extra processing to get the altitude data, and Get-NMEAData has a -NoAltitude switch to speed up processing if only Latitude and Longitude are needed.)
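The speed conversions are simple multipliers (1 knot = 1.852 km/h exactly, or about 1.15078 mph), grafted on with Select-Object’s calculated properties. A minimal sketch, assuming the raw data has DateTime, Lat, Lon and Knots columns (those names are my assumptions, not necessarily the module’s):

```powershell
# Calculated-property pattern: keep some fields as-is, compute others.
$points = $raw | Select-Object DateTime, Lat, Lon,
    @{Name = "MPH"; Expression = { [double]$_.Knots * 1.15078 }},
    @{Name = "KMH"; Expression = { [double]$_.Knots * 1.852   }}
```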

Armed with a collection of points, the next step is to find the one nearest to the time the picture was taken; a function named Get-NearestPoint does this. Given the time stamped on the photo, the function returns the data point logged closest to that time. It isn’t very sophisticated: it takes 3 parameters – a time, a time-sorted array of points and the name of the field on the data points to check for the time – and works through the points until the point being looked at is further away from the target time than the previous point. The core of the function looks like this:

   # Distance in seconds between the target time and the first point
   $variance = [math]::Abs(($dataPoints[0].$columnName - $MatchingTime).totalseconds)
   $i = 1
   do {
        $v = [math]::Abs(($dataPoints[$i].$columnName - $MatchingTime).totalseconds)
        if ($v -le $variance) {$i ++ ; $variance = $v }
      } while (($v -eq $variance) -and ($i -lt $datapoints.count))
   # The loop stops at the first point further from the target than its predecessor
   $datapoints[($i -1)]

In use it looks something like this.

$image = Get-Image        -Path "MyPicture.Jpg"
$dt    = Get-ExifItem     -image $image  -ExifID $ExifIDDateTimeTaken
$point = Get-NearestPoint -Data  $points -Column "DateTime" -MatchingTime $dt

$point contains the data used to set the EXIF properties of the picture, a process which requires a series of EXIF filters to be created – I explained EXIF filters in part 2.  As well as data retrieved from a log, there are times when I want to tag a picture manually. For example, I took some photos in London’s Trafalgar Square without a GPS logger that I want to tag with 51°30’30” N, 0°7’40” W. To make this easier I created a function named Convert-GPStoEXIFFilter which can be invoked like this:

$filter = Convert-GPStoEXIFFilter 51,30,30 "N" 0,7,40 "W"

If you’re not used to PowerShell I should say that in some languages 51,30,30 would be the way to write 3 parameters.  In PowerShell it is one array parameter with 3 members. (Even old hands at PowerShell occasionally get confused and put in a comma which turns two parameters into a single array parameter.)  I could have explicitly named the parameters and made it clear that these 3 were an array by writing
 -LatDMS @(51,30,30) -NS "N"
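A throwaway function (Show-Binding is hypothetical, not part of the module) makes the binding visible:

```powershell
function Show-Binding { param($LatDMS, $NS)
    # @() wraps the argument so .Count works on scalars and arrays alike
    "{0} value(s) in LatDMS; NS is '{1}'" -f @($LatDMS).Count, $NS
}

Show-Binding 51,30,30 "N"   # one 3-member array plus a string -> 3 value(s) in LatDMS; NS is 'N'
Show-Binding 51 30          # two separate parameters          -> 1 value(s) in LatDMS; NS is '30'
```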

Convert-GPStoEXIFFilter returns a chain of up to 7 EXIF filters for GPS version, Latitude reference (North or South), Longitude reference (East or West), Altitude reference (above or below sea level), the Latitude & Longitude (as Degrees, Minutes, Seconds and decimals) and Altitude in meters (altitude is optional). If $point holds the data logged at the time the picture was taken, Convert-GPStoEXIFFilter can be invoked like this:

$filter = Convert-GPStoEXIFFilter -LatDMS $point.Latdms -NS $point.NS `
                   -LONDMS $point.londms -EW $point.ew -AltM $point.altM

At the end of part 2 I showed the Copy-Image command that handles renaming, rotating, and setting the keywords & title EXIF fields, and mentioned it could be handed a set of filters. All the parameters that Copy-Image uses are available to Copy-GPSImage, which takes the set of points as well. Internally it performs the $image= , $dt= , $point= and $filter= commands seen above before calling Copy-Image with the image, the filter chain and the other parameters it was passed. The full set of parameters for Copy-GPSImage is as follows:

Image The image to work on – this can be an image object, a file object or a file name, and can come from the pipeline
Points The array of GPS data points from Get-NMEAData, Get-GPXData or Get-CSVGPSData
Keywords Keywords to go into the EXIF Keyword Tags field
Title Text to go into the EXIF Title field
Rotate If specified, adds whatever rotate filter is indicated by the EXIF Orientation field
NoClobber The conventional PowerShell switch to say “Don’t overwrite the file if it already exists”
Destination The FOLDER to which the file should be saved
Replace Two values separated by a comma specifying a replacement in the file NAME
ReturnInfo If specified, returns the point(s) matched with the pictures

So now it is possible to use three commands to geotag the images: the first two get the time offset and the data points, applying that offset in the process.

set-offset "D:\dcim \100Pentx\IMG43272.JPG" -Verbose
$points= Get-CSVGPSData 'F:\My Documents\My GPS\Track Log\20100425115503.log' -offset $offset

and the third gets the files on a memory card and pushes them into Copy-GPSImage

$photoPoints = Dir E:\dcim -include *.jpg -recurse |  Copy-GpsImage -Points $Points `
          -verbose  -DestPath "C:\users\jamesone\pictures\oxford"   `
          -Keywords "Oxfordshire"  -replace "IMG","OX-"  -returnInfo

This is much as it appeared in part 1, although the third command has changed slightly. Copy-GPSImage now has a -returnInfo switch which returns the points where a photo was taken; to link the point to the image file(s) which matched it, an extra property Paths is added to the points.

I mention this because I wanted to show the functions I added almost for fun at the end. Out-MapPoint and ConvertTo-GPX got brief mentions in part 1: with the data in $photopoints I can push camera symbols through to a map like this (note the sort -Unique to remove duplicate points; 79 is the camera symbol):
  $photopoints | sort dateTime -Unique | Out-MapPoint -symbol {79}

Alternatively I can create a GPX file which can be imported into MapPoint, Google Earth and lots of other tools. GPX files need to be UTF8 text, PowerShell wants to write output files as Unicode – thwarting it isn’t hard but is ugly. 
  $photopoints | sort dateTime -Unique | convertto-gpx | out-file photoPoints.gpx -Encoding utf8

With the photo points logged it would be nice to show the path I walked, but that will have too many points, so I wrote Merge-GPSPoints which combines all the points for each minute so I can do
  Merge-GPSPoints $points | Out-MapPoint
or 
  Merge-GPSPoints $points | convertto-gpx | out-file WalkPoints.gpx -Encoding utf8

One thing I should point out here is that the GPX format which I convert to is a series of waypoints (i.e. places that will be navigated to in future), not track points (places which have been visited in the past). The import routine processes the latter.
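For reference, the two shapes look roughly like this in a GPX file (a minimal sketch with made-up coordinates – real files carry more attributes and a namespace declaration on the root element):

```xml
<!-- A waypoint, the kind ConvertTo-GPX writes -->
<wpt lat="51.5083" lon="-0.1278"><time>2010-04-25T12:36:41Z</time></wpt>

<!-- A track point, the kind Get-GPXData reads, nested in gpx/trk/trkseg -->
<trk><trkseg>
  <trkpt lat="51.5083" lon="-0.1278"><time>2010-04-25T12:36:41Z</time></trkpt>
</trkseg></trk>
```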

The last detail of the module for now is that I also gave it a function to find out where an image was taken, like this

PS  > resolve-imageplace 'C:\users\Jamesone\Pictures\Oxford\OX-43624.JPG'
Summertown, Oxford, Oxford, Oxfordshire, England, United Kingdom, Europe, Earth

That’s not a data error when it says Oxford, Oxford. The Geoplaces web service I use returns

ToponymName                                           Name            fcode  Description for fcode
Earth                                                 Earth           AREA   A tract of land without homogeneous character or boundaries
Europe                                                Europe          CONT   Continent
United Kingdom of Great Britain and Northern Ireland  United Kingdom  PCLI   Independent political entity
England                                               England         ADM1   First-order administrative division (US States, England, Scotland etc.)
County of Oxfordshire                                 Oxfordshire     ADM2   A sub-division of an ADM1 (counties in the UK)
Oxford District                                       Oxford          ADM3   A sub-division of an ADM2 (district-level councils in the UK)
Oxford                                                Oxford          PPL    Populated place (cities, towns, villages)
Summertown                                            Summertown      PPLX   Section of populated place

I haven’t done much to introduce intelligence into processing this. I used Trafalgar Square in part 2, and this returns Charing Cross, London, City of Westminster, Greater London, England, United Kingdom, Europe, Earth – which is correct but difficult to allow for. To make matters worse, all sorts of strange geo-political questions come up as well if you say the UK is the country and England is the topmost administrative division: English people might well think counties are the tier below parliament administratively, but since the Scottish parliament and Welsh assembly opened, you might find a different view if you step over the border. Software which works to the American model of displaying the populated place and first admin division – for example Seattle, Washington – is easily thrown: given Reading, Berkshire it gives Reading, England. Those are questions to look at another time.

This post originally appeared on my technet blog.

July 5, 2010

Exploring the IMAGE PowerShell Module

Filed under: How to,Photography,Powershell — jamesone111 @ 12:57 pm

In part one of this series I showed the finished version of the photo-tagging script I’ve been using. I based my work (which is available for download) on James Brundage’s PSImageTools module for PowerShell, which is part of the PowerPack included with the Windows 7 Resource Kit (and downloadable independently). In this post I want to show the building blocks the original library provides and the ones I added.
Producing a modified image using this module usually means working to the following pattern:

  • Read an image
  • Create a set of filters
  • Apply the filters to the image
  • Save the modified image

If you are wondering what a filter is, that will become clear in a moment. James B’s original module had these commands.

Get-Image Loads an image from a file
Add-CropFilter Creates a filter to crop the image to a given size
Add-OverlayFilter Creates a filter to add an overlay such as a watermark or copyright notice
Add-RotateFlipFilter Creates a filter to rotate the image in multiples of 90 degrees or to mirror it vertically or horizontally
Add-ScaleFilter Creates a filter to resize the image
Set-ImageFilter Applies a set of filters to one or more images
Get-ImageProperty Gets Items of EXIF data from an image
ConvertTo-Bitmap Loads a file, applies a conversion filter to it, and saves it as a BMP
ConvertTo-Jpeg Loads a file, applies a conversion filter to it, and saves it as a JPG
Copy-ImageIntoOrganizedFolder Organizes pictures into folders based on EXIF data

You can see there are 4 kinds of filter with their own commands in the list and each one makes some modification to the image: cropping, scaling, rotating, or adding an overlay. Inside the two ConvertTo commands, a 5th kind of filter, conversion, is used and I added a function to create filters to do that. I made some changes to the existing functions to give better flexibility with how they can be called, and added some further functions, mostly to work with EXIF data embedded in the image file. The full list of functions I added is as follows:

Save-Image Not strictly required but it is a logical command to have at the end of a pipe line, instead of calling a method of the image object
New-ImageFilter Not strictly required either but it makes the syntax of adding filters more logical
New-Overlay Takes text and font information and creates a bitmap with the text in that font
Add-ConversionFilter Creates a conversion filter for JPG, GIF, TIF, BMP or PNG format (as used in ConvertTo-Jpeg / Bitmap without applying it to an image or saving it)
Add-ExifFilter Adds a filter to set EXIF data
Copy-Image Copies one or more images, renaming, rotating and setting the title and keyword tags in the process.
Get-EXIF Returns an object representing the EXIF data of the image
Get-EXIFItem Returns a single item of EXIF data using its EXIF ID (the common IDs are defined as constants)
Get-PentaxMakerNoteProperty Decodes information from the Maker-Note EXIF field; I have only implemented this for Pentax data
Get-PentaxExif Similar to Get-Exif but with Maker-Note fields for Pentax

The image below was resized and labelled using these commands.  The first step is to create an image to act as an overlay: I’m going to add a copyright notice in red text, in 32 point Arial.

PS> $Overlay = New-overlay -text "© James O'Neill 2008" -size 32 -TypeFace "Arial"  `
                           -color "red" -filename "$Pwd\overLay.jpg" 

I’m using a picture I took in 2008, and I could have used a more complex command to build the text from the date-taken field in the EXIF data.  Next I’m going to create a chain of filters to:

  • Resize my image to be 800 pixels high (the aspect ratio is preserved by default),
  • Add my overlay
  • Set the EXIF fields for the keyword-tags, title and Copyright information
  • Save the image as a JPEG with a 70/100 quality rating

Despite the multi-line formatting here, this is a single PowerShell command:  $filter = new-Filter | add | add | add...

PS> $filter = new-Imagefilter |  
     Add-ScaleFilter      -passThru -height 800 -width 65535  |
     Add-OverlayFilter    -passThru -top    750 -left  0      -image    $Overlay |
     Add-ExifFilter       -passThru -ExifID $ExifIDKeywords  -typeName "vectorofbyte" -string "Ocean" |
     Add-ExifFilter       -passThru -ExifID $ExifIDTitle     -typeName "vectorofbyte" -string "StingRay"  |
     Add-ExifFilter       -passThru -ExifID $ExifidCopyright -typeName "String" -value "© James O'Neill 2008" |
     Add-ConversionFilter -passThru -typeName jpg -quality 70

Given a set of filters, a script can get an image, apply the filters to it and save it. Originally these 3 steps needed 3 commands to be piped together like this:
PS> Get-Image   C:\Users\Jamesone\Pictures\IMG_3333.JPG  |
      Set-ImageFilter -filter $filter |
         Save-image -fileName {$_.FullName -replace ".jpg$","-small.jpg"}

I streamlined this first by changing James B’s Set-ImageFilter so that if it is given something other than an image object, it hands it to Get-Image.  In other words Get-Image X | Set-ImageFilter is reduced to Set-ImageFilter X (and I made sure X could be a path, including one with wild cards, or one or more file objects). After processing I added a -SavePath parameter so that Set-ImageFilter -SavePath P is the same as Set-ImageFilter | Save-Image P. P can be a path, or a script block which becomes a path, or empty to over-write the image. Get an image, apply the filters to it and save it becomes a single command.
PS> Set-ImageFilter -Image ".\IMG_3333.JPG" -filter $filter `
                    -SaveName {$_.FullName -replace ".jpg$","-small.jpg"}

The workflow for my photos typically begins with copying files from a memory card, replacing the start of the filename – like the “IMG_” in the example above – with text like “DIVE” (I try to keep the sequential numbers the camera stamps on the pictures as a basis for a unique ID). Next, I rotate any which were shot in portrait format so they display correctly, and finally I add descriptive information to the EXIF data: keyword tags like “Ocean” and titles like “Stingray”. So it made sense to create a Copy-Image function which would handle all of that in one command. The only part of this which hasn’t already appeared is rotation. The Orientation EXIF field contains 8 to show the image has been rotated 90 degrees, 6 to indicate 270 degrees of rotation, and 1 to show the image is correctly rotated, so it is a question of reading the data and, depending on what we find, adding filters to rotate the image and reset the orientation data.

$orient = Get-ExifItem -image $image -ExifID $ExifIDOrientation   
if ($orient -eq 8) {Add-RotateFlipFilter -filter $filter -angle 270
                    Add-exifFilter       -filter $filter -ExifID $ExifIDOrientation`
                                         -value  1       -typeid $ExifUnsignedInteger }   

There is similar code to deal with rotation in the opposite direction, and rotation is just another filter like adding the EXIF data for keywords or title, so all Copy-Image does is build a chain of filters to add Title and Keyword tags and rotate the image, determine the full path the new copy should be saved to, and invoke Set-ImageFilter. To make it more flexible, I gave Copy-Image the ability to add filters to an existing filter chain: in part one you could see Copy-GPSImage, which finds the GPS data to apply to a picture and produces a series of filters from it; these filters are passed on to Copy-Image which does the rest.

The last aspect of Copy-Image to look at is renaming: -Replace has become one of my favourite PowerShell operators. It takes a regular expression and a block of text, and replaces all instances of the expression found in a string with the text. Regular expressions can be complex, but “IMG” is perfectly valid, so if I have a lot of pictures to name as “OX-” for “Oxford” I can call the function with a replace parameter of "IMG","OX-". Inside Copy-Image, the parameter $replace is used with the -replace operator (using PowerShell’s ability to treat “IMG”,”OX-” as one parameter in two parts).  $savePath is worked out as follows:

if ($replace)   {$SavePath= join-path -Path $Destination `
                     -ChildPath ((Split-Path $image.FullName -Leaf) -Replace $replace)}
else            {$SavePath= join-path -Path $Destination `
                     -ChildPath  (Split-Path $image.FullName -Leaf)  }

As mentioned above I went to some trouble to make sure the functions can accept image objects or names of image files or file objects – because at different times, different ones will suit me. So all of the following are valid ways to copy multiple files from my memory card to the current directory ($pwd), renaming, rotating and applying the keyword tag “oxfordshire”

PS[1]> Copy-Image E:\DCIM\100PENTX\img4422*.jpg -Destination $pwd `
           -Rotate -keywords "oxfordshire" -replace "IMG","OX-"
PS[2]> dir  E:\DCIM\100PENTX\img4422*.jpg | Copy-Image -Destination $pwd `
            -Rotate -keywords "oxfordshire" -replace "IMG","OX-"
PS[3]> get-image  E:\DCIM\100PENTX\img4422*.jpg | Copy-Image -Destination $pwd `
            -Rotate -keywords "oxfordshire"   -replace "IMG","OX-"
PS[4]> $i = get-image  E:\DCIM\100PENTX\img4422*.jpg; Copy-Image $i -Destination $pwd `
            -Rotate -keywords "oxfordshire" -replace "IMG","OX-"
PS[5]> dir  E:\DCIM\100PENTX\img4422*.jpg | get-image |  Copy-Image -Destination $pwd `
            -Rotate -keywords "oxfordshire"  -replace "IMG","OX-"

Of course if I have the GPS data from taking the logger with me on a walk I can use Copy-GPSImage to geotag the files as they are copied, and in the next part I’ll look at how the GPS data is processed.

This post originally appeared on my technet blog.

July 1, 2010

GPS, and other kinds of Picture tagging with PowerShell

Filed under: Photography,Powershell — jamesone111 @ 4:38 pm

Well… I have been off for a bit, and you can have a read of the previous post for some background on that. During that time I’ve done a lot of walking, taking photos as I go.  Having raved about my HTC Touch Pro 2 and its GPS, I’ve been using it to GeoTag those photos. Naturally (for me) PowerShell figures in here somewhere. I’ve added to the image module in the Windows PowerPack that James Brundage wrote. It now takes data from a log and applies it to pictures. It supports several log formats and incidental things I found I wanted to do with GPS data. And it is available for download.

I log my walks using Efficasoft’s GPS utilities (on the right). I have a picture of the logger to help with the first of the three PowerShell commands I need to use:

  1. Set-Offset works out the time difference between the time on the logger and time on the camera and stores it in $offset
  2. Get-CSVGPSData reads the GPS log from a CSV file, using $offset to adjust the time so it matches the camera. I store the result in $points and a successful import can be verified by looking at $points.count.
  3. Copy-GpsImage copies pictures from a memory card to my computer, renaming them, rotating them if need be, and tagging them using the GPS data I stored in $points (if copying many files, it is useful to use the -verbose switch to see progress)

Steps 2 and 3 might be repeated to tag more than one set of photos provided the camera clock is a consistent interval away from “GPS time”. Here’s what a session looks like in PowerShell


PS > set-offset "D:\dcim \100Pentx\IMG43272.JPG" -Verbose

Please enter the Date & time in the reference picture, formatted as
Either MM/DD/yyyy HH:MM:SS ±Z or dd MMMM yyyy HH:mm:ss ±Z: 04/04/2010 16:02:17 +1
VERBOSE: Loading file D:\dcim \100Pentx\IMG43272.JPG
VERBOSE: OffSet = 3607

PS> $points= Get-CSVGPSData 'F:\My Documents\My GPS\Track Log\20100425115503.log' -offset $offset
PS> $points.count
593

PS> Dir E:\dcim -include *.jpg -recurse |
      Copy-GpsImage -Points $Points `
          -Keywords "Oxfordshire" `
          -DestPath "C:\users\jamesone\pictures\oxford" `
          -replace "IMG","OX-" `
          -verbose

VERBOSE: Loading file E:\dcim\100PENTX\IMG43624.JPG
VERBOSE: Checking 593 points for one where DateTime is closest to 04/25/2010 12:36:41
VERBOSE: Point 229 matched with variance of 2 seconds
VERBOSE: Performing operation "Write image" on Target " C:\users\jamesone\pictures\oxford\OX-43624".

In my case this will go through a stack of files, ending
VERBOSE: Performing operation "Write image" on Target " C:\users\jamesone\pictures\oxford\OX-43757"


The picture shows a detail from the Radcliffe Observatory building in Oxford (featured in a recent episode of Lewis) with its GPS co-ordinates visible through File/Properties. I also wrote a little bit of code to push the data from the log through to MapPoint – this ended up as a function Out-MapPoint, although I later added a function named ConvertTo-GPX which does the same job more quickly and also works with programs like Google Earth.

$MPApp = New-Object -ComObject "Mappoint.Application"
$MPApp.Visible = $true
$map = $mpapp.ActiveMap
ForEach ($point in $points) {
   $location=$map.GetLocation($point.Lat, $point.lon)
   $Pin =$map.AddPushpin( $location, $point.datetime)
   if ( $point.datetime -lt [datetime]"04/25/2010 12:12:00"){$pin.symbol= 6 }
   else                                                     {$pin.symbol = 7}
}

The details behind this take some explaining, so this will be the first of several parts: if you would like to know more, have a look at the next few posts where I will drill into how it all works, but if you want to dive in and play, the code is available for download now.

This post originally appeared on my technet blog.

March 21, 2010

The 50th birthday that would have been

Filed under: Photography — jamesone111 @ 2:42 pm

If you believe in parallel universes then there are ones where Ayrton Senna is celebrating his 50th birthday today, having won 6, 7 or 8 world championships.

In this one, the 50th anniversary of his birth is marked more sombrely. It’s not quite 16 years since he died with 3 championships to his name, and last week, when his nephew flipped open the visor on a very similarly patterned helmet to reveal very similar looking eyes, I felt like I had seen a ghost – and heard Martin Brundle articulate the same thought in his TV commentary. I said pretty much all I wanted to about Ayrton on the fifteenth anniversary of his death. But since I’ve posted recently about scanning pictures and I have another post in draft about that, I thought I’d share a couple of successful scans.

In 1991 Senna looked like he was going to successfully defend the title he had won in 1990 – he won the first 3 races before Nigel Mansell had got a finish, and he wasn’t even the leading Williams driver until the 7th race. The 8th was the British Grand Prix, and I was there. To cap a perfect day for Mansell and a partisan crowd, Senna ran out of fuel on the last lap. Mansell, on his victory lap, stopped and gave his adversary a lift back to the pits. In this picture you can see how much more exposed the drivers’ heads were in those days – which was to be the death of Senna in another Williams, 3 years later. Riding back like that is forbidden now, and that too speaks of the attitudes to safety. But it says something to me about the nature of sport that drivers could fight for everything on the track, yet offer help and not be humiliated by accepting it.

The cheap film, second-rate lenses and my own technique limit how good the scan of the photo can be – but I hope the story explains why I treasure it. Far better from a technical point of view is this second picture – but sometimes the technical quality isn’t what matters.

 

To mark the anniversary, Autosport have a page Ayrton Senna: A life in pictures – they are arranged with the oldest at the bottom – you can see his first F1 test (in a Williams), another shot of the ride home with Mansell in the middle, and on the left of the top row is quite a poor shot – you can’t tell what it is from the thumbnail. It’s the back of the car, and I generally don’t keep those. The caption reads “Ayrton Senna, Williams FW16 Renault; 1994 San Marino Grand Prix at Imola.”  And then slowly it dawns that the wall and trees in the background mean the car is going into the corner named Tamburello, and there’s a big gap to the car behind, so this must be the sixth lap – a few seconds after this shot was taken the right front wheel of the FW16 hit the wall a little further down than we can see in the shot, and parted company from the car. On another day or in another universe it would have passed harmlessly by, but it didn’t, and that picture – like the lift-home one – captures what we now know to be a decisive moment.

This post originally appeared on my technet blog.

March 8, 2010

Photographic resolution and scans.

Filed under: Photography — jamesone111 @ 10:22 am

I’ve heard it said that every time you use an equation you lose half the audience. I’m going to take that risk: in photography there are a lot of equations which come up in the form 1/x + 1/y = 1/z, and one of those is for recorded resolution: 1/Lens-Resolution + 1/Recording-Resolution = 1/Image-Resolution. It’s also a manifestation of the law of diminishing returns. But why do I think it is worth a post?


First: there is a limit on the smallest detail that a lens can resolve in an image. One test to get an indication of this is to look at patterns of parallel black and white lines and see how fine the lines can be before they blend into a grey mush.


Second: however many pixels you have, the digitization process can’t put in detail which wasn’t captured by the lens. There will always be some loss in the process (or, if you prefer, to record as much detail as the lens could resolve the sensor would need to have infinite resolution). Increasing the sensor resolution will reduce the loss, but each successive increase produces smaller and smaller benefits (and remember that if the number of lines the sensor can resolve doubles, the number of pixels quadruples).


That equation says that if a lens can resolve x pairs of lines per unit of distance (it doesn’t matter if it’s lines per mm or lines per image width) and the sensor records ½x, the net resolution is 1/3x; if the sensor records x, net resolution is 1/2x; 2x line pairs at the sensor gives a net resolution of 2/3x; go up to 4x and the result is 4/5x; 8 times lens resolution at the sensor gives 8/9 of the lens detail in the output. You can see the progression – but I’ve just described a 16-fold increase in linear resolution, or a 256-fold increase in pixels – like going from a basic 240×320 pixel QVGA webcam to 20 megapixels – (in 2010) that’s the realm of professional equipment – but improving recorded detail by a factor of less than 3. Of course that would only be true if the image being digitized were the same – the pro camera will have a lens which resolves more detail (thousands of lines over the image width, against hundreds for a webcam lens). Changing whichever component has the lower resolution will have a bigger impact than changing the higher-resolution one. There’s no point in making a webcam where the lens has many times the resolving power of the sensor, or mounting a lens on a pro camera with much less resolution than its sensor.
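The progression is easy to tabulate. A quick sketch of the arithmetic: with the sensor resolving k times what the lens does, 1/net = 1/lens + 1/sensor gives net = k/(k+1) of the lens figure:

```powershell
# 1/net = 1/lens + 1/sensor; with sensor = k * lens this gives net = lens * k/(k+1)
foreach ($k in 0.5, 1, 2, 4, 8, 16) {
    "sensor = {0,4}x lens  ->  net = {1:P0} of lens resolution" -f $k, ($k / ($k + 1))
}
```

The first line reproduces the 1/3 figure from the text, the last the 16/17 at the end of the 16-fold progression.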


image


I’ve known this for years: the upgrades to my digital SLR cameras have increased the pixels but the images only show a fraction more detail – although it is easier to see with my best lenses. But I’ve recently been going over the problem with scanned images.
I made the picture above in 2003, printed it on roll paper using my A4 printer, and it has been on my wall ever since. But it was shot on film and transferred to CD at the lab – the JPG files are quite low resolution. The border makes up about 1/3 of the height and the actual picture is roughly 14cm / 5½” tall – and covered by 1100 pixels. (The border is a useful trick for making the aspect ratio of the picture a bit squarer so the print isn’t so long. Instead of being 7700 × 1100 – a 7:1 aspect ratio – it is 8000 × 1400, a 5.7:1 ratio.)  I’ve been thinking about doing a new version: the picture can be cropped less at the top & bottom, and I can re-visit the ideal size of border; I can also fix a couple of other things – the sepia toning is excessive and there is a stitching error (look at the legs at the landward end of the pier).


image


 


The final result would go into Silverlight Deep Zoom, and I’m already looking at 13” / 330mm rolls of paper for my new super-A3 size printer. (The current print is 4 feet / 1.2M long. The exact size of an A3 print would depend on the cropping and the border – with a border like the original the aspect ratio would be 3.8:1 – so the print wouldn’t be longer, just taller.) But I want to wring the maximum possible detail from the negatives. It’s not simply a question of smearing the same detail over more pixels – printing software can do that, so even the current image on bigger paper won’t look pixelated. I want something which rewards looking closer.  I can scan the prints which came back with the film, I have two negative scanners (one of which was the subject of the video I just made), and I can set my 14 megapixel camera up as a slide copier. Which will give the best results – is it the one with the most pixels? No. Using the camera gave the most pixels, but the results weren’t great. The question turns out to be much more complex than I expected, because no two digitisations produce the same range of tones, and they all have different levels of noise – noise can be processed out in software, but at the price of some detail. Subtle details can be lost through a lack of contrast rather than a lack of resolution, or swamped in the noise.


I spent some time trying to get examples of how each looked and gave it up as impractical – different details rendered better in different scans, and trying to find a single piece of the picture which shows both the good and bad from the different scans proved to be impossible – especially since the panorama software handles the overlapping sections differently in different sets of scans, so one might be comparing the fuzzy edge of a frame in one result and the sharp part of an overlapping frame in another (there was a flock of birds flying round the collapsed central ballroom and they appear – or don’t – depending on the whim of the software; the original used some 3rd party software and I’ve had 3 versions of Microsoft software since). In short – the more time I spent trying to be objective the less conclusive the results became, and the print which looks best isn’t necessarily the one with the most detail. As I said before: “Because my experience has been bad I don’t scan much, and because I don’t scan much I won’t spend the money to get a better experience.” So the key might be to stop wasting time scanning my own negatives and send them to a professional scanning service.



update: Fixed a bunch of typos.


This post originally appeared on my technet blog.

February 16, 2010

Giant Deepzoom mosaic, in a good cause.

Filed under: Photography — jamesone111 @ 2:32 pm

I’ve been sorting out photos recently, and came across a ton of miniature Formula One images which I turned into a mosaic which ran to about 60 megapixels. That was in 2002; I haven’t done much with mosaics for several years and keep meaning to try out the mosaic software that is available today.

One of our partners is making a name for themselves with mosaics and Silverlight Deep Zoom. The latest is for Fauna & Flora International and their campaign for the Sumatran tiger. The mosaic is made up of pictures of endangered species and habitats – according to Steve, 180,000 of them – which makes this the biggest Deep Zoom to date. Fascinating stuff, and since we are now in the Chinese year of the tiger it seems a suitable cause to be promoting.

This post originally appeared on my technet blog.

January 25, 2010

I’m a photographer, not a terrorist (or any other kind of bogeyman)

Filed under: Photography,Privacy — jamesone111 @ 11:39 am

Every now and then in photography forums someone will ask “Do I need a release to publish pictures of someone?” The law varies enormously around the world, but English law grants rights to the owner of copyright (the photographer or their employer), and not to the people who appear in the pictures. The photographer can publish, exhibit or sell pictures provided nothing improper was done to get them in the first place: deceiving a model, trespassing to get a shot, taking a picture somewhere where the conditions of entry meant agreeing not to take pictures or to waive the normal rights of a copyright holder, or using a long lens and a step ladder to see someone where they would have an expectation of privacy would all be examples of “improper”. The rule of thumb for photography in a public place is sometimes summarized as “if it shows what anyone else could have seen had they been there, it’s OK”.

Except it is becoming less and less OK. It used to be possible to take photographs of children playing in public if it made a good photo. Photographers won’t do that now for fear of being branded paedophiles. People seem to be unable to tell the difference between a picture of a child having fun and a picture of a child being abused – which is far more likely to be at the hands of someone they know. If someone does not interact with a child in any way then logic says no protective action is needed; yet people have stopped taking pictures because of what others think might be in their head.
But photographers have a newer problem – people are losing the ability to distinguish Tourists from Terrorists. Again there seems to be a fear of what might be (but probably is not) in someone’s head. The number of news stories concerning photographers being prevented from taking pictures has been rising, and it triggered a protest this weekend in London, which I went along to. It was organised via the internet, but only ITN made the pun of such a gathering of photographers being a “flash mob”. I noticed the British Journal of Photography was supporting it – they’ve been around since the days of glass plates and have seen a lot of things come and go, so don’t tend to get worked up over nothing. 
Usually these stories concern section 44 of the Terrorism Act 2000. Some people were protesting about the act itself, although I see it more as “section 44 is being used far too often on a random basis without any reasoning behind its use” – not my words but those of Lord Carlile, the Government’s independent reviewer of anti-terrorist legislation, quoted by the BBC. If you look up section 44 it says

An authorisation under this subsection authorises any constable in uniform to stop a vehicle, or a pedestrian in an area or at a place specified in the authorisation and to search…
It says that the authorisation must be given by a police officer for the area who is of at least the rank of assistant chief constable (or Commander for the Metropolitan and City of London forces) and they must consider the authorisation expedient for the prevention of acts of terrorism. Section 46 says the authorisation must specify an end date which must not occur after the end of the period of 28 days beginning with the day on which the authorisation is given (although the authorisation can be renewed on a rolling basis).

A list of the authorisations issued would be a draft list of possible targets, so the police don’t publish such a list: however a constable acting under S44 must be able to show they hold the office of constable (Police Community Support Officers, security guards and so on have no powers) and that proper authorisation has been given. It would be interesting to see what happened if an officer mentioned section 44 and got the response “You claim to have authorisation issued in the last 28 days by an officer of suitable rank, covering this place. Could you substantiate that claim please?”  It’s my belief that in a lot of cases where someone claims to be acting in the name of section 44 they either lack the proper authority or exceed the powers it gives them, which are set out in section 45, as follows:
“The power conferred by an authorisation under section 44(1) or (2) may be exercised only for the purpose of searching for articles of a kind which could be used in connection with terrorism” and “Where a constable proposes to search a person or vehicle by virtue of section 44(1) or (2) he may detain the person or vehicle for such time as is reasonably required to permit the search to be carried out at or near the place where the person or vehicle is stopped.”

There is no power to demand any personal details or the production of ID – indeed for the time being we are free to go about our business without carrying ID. The power is only to search for items which could be used for terrorism and not to detain a vehicle or person for any longer than reasonable to carry out the search. There is no power to seize photographs or to demand they be deleted.

What is interesting to a photographer is section 58.
It is a defence for a person charged with an offence [of collecting or possessing information likely to be useful to a terrorist] to prove that he had a reasonable excuse for his action or possession.

Train spotters have fallen foul of the act  (seriously, what use would a terror cell have for rolling stock serial numbers – an on-line train timetable would give them all they need) and they have to use their hobby as “reasonable excuse” , just as photographers have to when taking pictures of St Paul’s Cathedral or the Houses of Parliament. (And if you photograph trains well…). Of course there are sites with legitimate bans on photography – the photographer not a terrorist website has a map of them, and you can see just how good a picture Google maps gives of each of them. It does make you wonder why anyone planning an attack would go out with a camera.

None of this post has anything much to do with the normal content of this blog [I’ll post separately on the social media aspect] except that, photography having gone mostly digital, it is bound up with IT, and anyone who works in technology should be concerned when that technology is used to erode freedoms we take for granted – whether it is governments targeting data held by Google, the planned requirement to provide a National Insurance number when registering to vote (using a single key in many databases makes it so much easier to go on a fishing trip for information), or the national DNA database with its pretext that everyone is a potential criminal.  That mentality gave us the Kafkaesque sounding “National safeguarding Delivery unit” which checks people against another database to make sure they can be trusted to work with children but whose boss admits they give a false sense of security, and anecdotal evidence says that the need to be vetted puts people off volunteering. Even the people who will operate the new scanning machines at airports object to being vetted – oh the irony. And as Dara O’Briain put it on Mock the Week recently “If the price of flying is you have to expose your genitals to a man in the box, then the terrorists have already won.”

Ultimately the photographers’ gathering this weekend was about that. We won’t go to bed one night in a liberal democracy and wake the next morning in a “Police state”, but if little by little we lose our right to go about our lawful business unmolested, if checks and surveillance become intrusive and if the only people allowed to point a camera at anyone else are unseen CCTV operators then we’ve lost part of the way of life which we are supposed to be safeguarding. The police seemed to have made the decision that if photographers were demanding that the law shouldn’t be misused they’d just follow the advice given by Andy Trotter of British Transport Police, on behalf of ACPO, that “Unnecessarily restricting photography, whether from the casual tourist or professional, is unacceptable” and leave the photographers to it with minimal police presence. It wasn’t a rally, no speeches were arranged and so we had the fun of photographing each other, in the act of photographing each other. A couple of staff from the National Gallery got mildly annoyed with photographers obstructing the gallery entrance but they kept their sense of proportion.  I didn’t take many pictures – the light was dreadful – but you can see a couple here.  As I said above, the social media side has given me enough material for at least one more post.

This post originally appeared on my technet blog.

December 15, 2009

Server 2008 R2 feature map.

Filed under: Photography,Windows Server 2008-R2 — jamesone111 @ 7:33 pm

One of the popular giveaways at our events this year has been the feature poster for Server 2008 R2 – which is now available for download. I think the prints were A2 size, although at 300 DPI it is closer to A1 dimensions. The paper copies have all gone, although I’m told more are being printed if you want a paper copy.

One of my fellow evangelists thought it was a great application for Silverlight Deep Zoom (the technology formerly known as Seadragon), and I have to say I agree. What better way is there to look at 93 megapixels’ worth of image?
The buttons in the lower right corner include maximize so you can use your full screen to view it. I haven’t got a wheel mouse plugged in at the moment but that is the best way to zoom in and out.

This post originally appeared on my technet blog.

September 23, 2009

On Scanners, Cameras and their USB modes, and lifting the lid on how they can be scripted.

Filed under: Photography,Windows 7,Windows Vista — jamesone111 @ 11:46 am

Long title, and I’m afraid I’ve been on a bit of a voyage of discovery about some of the things Windows 7 (and Vista) can do with photos. The first thing I wanted to cover here is something I’ve been trying to ignore: cameras have two USB modes.


In “Mass Storage Class” (MSC) mode, the computer sees the storage card with its blocks and filesystem and so forth like any other disk. Since the computer can write to the disk, all kinds of problems could break out if the camera tried to access the disk at the same time, so when connected the camera functions need to turn themselves off. In MSC mode the camera becomes a USB card reader and acts like any other USB disk. (That’s the point of MSC devices.)


In “Picture Transfer Protocol” (PTP) mode – and its superset, the Media Transfer Protocol (MTP) – the camera acts as a server: the computer requests a list of files, properties of files, or the contents of files, but it has no access to the underlying file system, so the camera can continue to take pictures and write to the disk. This offers the chance to shoot and have the PC interact with the camera at the same time, provided that the camera maker doesn’t shut all the functions down when connected in PTP mode. Sadly Pentax do; I put my wife’s Panasonic compact in PTP mode and it was the same. On the little Canon I take on diving trips there is no “PTP mode”, but it does have PictBridge support. PTP is the transport protocol for PictBridge, and enabling PictBridge got it to work like the Panasonic and Pentax – i.e. all the controls are locked out. From what I’ve read Olympus are the same. Of course I haven’t got the information for every camera made by every manufacturer! I’ll come back to this towards the end of the post, but it changes the way your camera appears…
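A quick way to see which mode your camera is in: a device connected in PTP/MTP mode shows up to Windows Image Acquisition, while one in MSC mode appears only as a disk. This sketch (assuming Windows with the WIA service available) lists what WIA can see from PowerShell:

```powershell
# Enumerate WIA devices; a camera appears here only in PTP/MTP mode
$manager = New-Object -ComObject "WIA.DeviceManager"
foreach ($info in $manager.DeviceInfos) {
    # Type: 1 = scanner, 2 = camera, 3 = video (WiaDeviceType)
    "{0} (type {1})" -f $info.Properties.Item("Name").Value, $info.Type
}
```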




From left to right: with my Pentax K7 in PTP mode the camera doesn’t show up as a drive, but as a portable device in Explorer. (I could have used the Canon or Panasonic here.)  When you look in the Devices and Printers part of Control Panel in Windows 7 you see the camera. Clicking through on the K7 gives options to browse, import or configure options. Something which seems different from the other cameras is the option to automatically import photos when the camera is plugged in (the K7’s entry does not disappear when it is unplugged, which all the other cameras’ did). Not every imaging device which shows up in Control Panel is a WIA device. In the screen shot below you can see I’ve unplugged my K7 – the icon is greyed out – and plugged in my web cam, which doesn’t show up in WIA.  The reverse is also true – there is a WIA driver for Windows Mobile devices, but my phone doesn’t show up in Devices and Printers (at least not as a phone or a camera, only as a potential networking device); it does show up, with a phone icon, under Portable Devices in Explorer, where it has access to the same photo import wizard that the cameras have.




Linked in with this there is a Windows Image Acquisition (WIA) driver for PTP-enabled cameras – so you can fetch pictures from the camera in a program which understands scanners. Generally, programs that were written for WIA will talk about “Scanner or Camera” – as in the screenshot from Windows 7’s version of Paint below – although WIA allows a program to restrict its choice to scanners only or cameras only. (Windows Fax and Scan won’t accept camera input, for example.)  WIA also provides a translation layer to support programs which were written to the older TWAIN interface: these usually talk about acquiring an image from a scanner. When a device appears through the translation layer its name in the TWAIN world is prefixed with “WIA”. Some scanners include both WIA and TWAIN drivers – though the TWAIN ones are redundant on Windows Vista and 7 – in which case the scanner gets two entries in the TWAIN dialogs (one with WIA in front of the name and one without).  I’ve got a bad track record choosing scanners and the latest piece of junk I’ve bought has a WIA driver which does not work and a TWAIN driver which does. Hunting down the 64-bit drivers was an undertaking in itself, and for reasons known only to the scanner driver’s writer it appears in some dialogs when it is not plugged in. [I could go off on a huge rant here: at least my ancient HP scanner has a driver on Windows Update, although it doesn’t support the “Transparent Materials Adapter”, so I bought this one to scan film. How hard is it to produce a driver which works properly and supports the full functionality of the scanner? Why are scanners and cameras bundled with so much useless application software to provide things like “browsing pictures” less well than the OS does it, when the vendor can’t get the basics right? OK, enough ranting…] So here in Paint my new film scanner appears alongside the K7.
Any attempt to use that driver will fail…grrr… but my old scanner (in page scanning mode only) or the cameras or smartphone will transfer images straight into the application.  






The oldest piece of software I still use is Paint Shop Pro 5 (dated 1999) and it uses TWAIN. In the left picture you can see that it sees the translated K7 WIA driver and the TWAIN driver for the scanner (which isn’t plugged in). Unplugging the K7 and plugging in the scanner, the dialog presents the options on the right – with WIA-translated and native TWAIN drivers – only the latter works.






It’s possible to access scanners and cameras from a scripting environment. I’m not going to advocate that everyone transfers pictures via PowerShell, but it can be useful for diagnostic purposes. You can pop up a PowerShell prompt and enter the following:


  PS > $WIAdialog = New-Object -ComObject "WIA.CommonDialog"
  PS > $Device    = $WIAdialog.ShowSelectDevice()


If I do this with no camera or scanner connected I get this error :


  Exception calling “ShowSelectDevice” with “0” argument(s): “No WIA device of the selected type is available.”


But if I do it with the rotten scanner connected I get this:


  Exception calling “ShowSelectDevice” with “0” argument(s): “The WIA device is not online.”


Assuming the command is successful one can dig a bit deeper into the properties of a scanner or camera – I’ve cut the list down a little to save space.


PS > $device.Properties | sort name | format-table -autosize propertyID,name,value,type,isreadonly

PropertyID Name                   Value                Type IsReadOnly
———- —-                   —–                —- ———- 
         4 Description            K-7                    16       True
      1028 Device Time            System.__ComObject    104       True
        15 Driver Version         6.1.7600.16385         16       True 
      1026 Firmware Version       1.01                   16       True 
         3 Manufacturer           PENTAX                 16       True
         7 Name                   K-7                    16       True
      2050 Pictures Taken         419                     5       True 
 


As well as the properties collection, the device has an Items collection, which contains the pictures currently in the camera. Here’s the view of one item.


PS > $device.items.item(1).properties | sort name | format-table propertyID,name,value,type,isreadonly -autosize

PropertyID Name                             Value Type IsReadOnly
———- —-                             —– —- ———- 
      5125 Audio Available                      0    5       True
      5127 Audio Data          System.__ComObject  102       True 
      4110 Bits Per Channel                     8    5       True
      4104 Bits Per Pixel                      24    5      False 
      4109 Channels Per Pixel                   3    5       True 
      4123 Filename extension                 JPG   16       True 
      4099 Full Item Name               o506400A5   16       True 
      4098 Item Name                     IMG40165   16       True
      4116 Item Size                      6663701    5       True 
      4114 Number of Lines                   3104    5       True
      4112 Pixels Per Line                   4672    5       True


As well as having methods to work with the items, there are two useful wizards. The first one pops up a scanning wizard – if I plug in my other scanner it will automatically save pictures in a folder under My Pictures; the folder is created with the current date.


$WIAdialog.ShowAcquisitionWizard($device)  


And the second will work with scanners or cameras and returns the image as an object which can be manipulated before being saved


$i=$WIAdialog.ShowAcquireImage()
$i.SaveFile("$pwd\test.$($i.fileExtension)")


The last things about the device object which I wanted to mention were the Events and Commands properties. The Pentax and Canon both have events which a script can watch for to respond to changes in the files stored on the camera. This would be useful on cameras which don’t lock out all the controls while connected – with the controls locked out, the files can only be changed from the computer end. On all three of my cameras the list of commands is disappointingly small.


PS > $device.commands


CommandID                                Name            Description       
———                                —-            ———– 
{9B26B7B2-ACAD-11D2-A093-00C04F72DC3C}   Synchronize     Synchronize


 


But on some cameras there are more commands, including one named Take Picture, which has an ID of {AF933CAC-ACAD-11D2-A093-00C04F72DC3C}.
I can’t test this myself (one blog I found seems to be looking for cameras which do support it, among other things); it seems NOT having the controls locked out is a pre-requisite. If it shows up on your camera (and it seems to be mostly Nikons which support it) you should be able to take a picture and acquire it with


$I = $device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}")


and save it as in the previous example.  [Anyone who wants to post a comment about cameras where this works (or not) would be most welcome]


I’ll come back to WIA and some of the related technology in a future post, but that’s quite enough for now.


This post originally appeared on my technet blog.

September 20, 2009

Story of a photo.

Filed under: Photography — jamesone111 @ 11:46 am

My last two working days were spent at a team “off-site” – and these things tend to bring out the curmudgeon in me (as I’ve said before). Past experience sends me in braced for a combination of people mumbling their way through 10,000-word PowerPoint decks on their subjects, prepared without a moment’s thought about what the audience might be interested to hear, and the kind of “organized fun” which makes one wish one had booked a visit to the dentist.

This week’s event started with Giles Long, a swimmer who won gold medals at two Paralympics. I don’t think it’s overstating things to call him brilliant. He is working as an ambassador for the 2012 games – and if he weren’t, I’d say they should sign him up for the job.  Come the evening, we had a second guest speaker before dinner: Terry Waite. I realize that if you’re reading this from outside Britain you might need to look him up on Wikipedia. A remarkable man, with a remarkable story to tell, and a style of delivery to do it justice. I can’t remember seeing an audience so completely spellbound. I’ve seen people hang on a speaker’s every word, but it was always punctuated with laughter or applause; saying a speaker was “heard in silence” normally implies a background of shuffling and fidgeting, but not here. For over an hour you could have heard a pin drop. It’s rare to feel that something work sets up is a privilege, but I wasn’t the only one who felt that way.

This post is not meant to be a critique of the idea of off-sites, or even the story of how my expectations got turned on their head at this one, but the story of this picture. I was asked to take my camera and had the new Pentax K7 with my 18-250 zoom mounted on it: this lens is a bit of a Swiss Army knife – very versatile, but there are some jobs which require the right tool. I snapped a few pictures at the start of Terry’s speech, but wasn’t happy with them. My other lenses were in the car: leaving the room to fetch one might have seemed rude, and anyway I didn’t want to miss what was being said. When he sat down I dashed out and fetched my Pentax 77mm f/1.8 Limited lens. Those who know reckon the Pentax Limiteds are the best autofocus lenses ever made (owners just smile when people mention Canon’s “L” glass). I needed the extra speed because the venue turned the lights down, and using flash would just look horrible, as well as being intrusive. Very low levels of artificial light tax an autofocus system, and the K7 doesn’t have much in the way of manual focus aids, yet with very little depth of field to work with the focus has to be pretty much perfect. With the slower zoom lens the amount of light getting in would be less than the autofocus system needs to work – the K7 does have a focus-assist lamp, but that’s intrusive too.

As well as letting more light in, with the crop factor on a digital SLR the 77mm is ideal for the semi-candid photos I wanted: when someone is your guest it seems wrong to be shoving a camera right in his face. In fact I cropped the picture, giving something more like a 180mm lens’ view on 35mm film. Even at an ISO rating of 1600 and an aperture of f/1.8 it needed a 1/25th second exposure – which would be hopeless without the K7’s in-body shake reduction. (I may be wrong on this, but I don’t think anyone makes a stabilized, fast prime at this focal length: putting stabilization in the body – as Pentax, Sony and Olympus do – means every lens is stabilized, which is a real winner at times like this.) Even so it’s not perfectly sharp and the high ISO is noisy – more so because of the cropping. But it’s a picture I’m mighty pleased with.

In converting to monochrome I’ve tweaked brightness and contrast slightly and just given a hint of warm toning (my technique is to set a colour which is noticeable, and then halve the amount). I haven’t applied any noise reduction or sharpening – though I might when I come to print it (I might split-tone it with warmer whites and cooler blacks then too).  Is it just down to equipment? Certainly the combination of the lens, shake reduction and modern cameras’ capabilities at high ISO meant I could shoot by available light – without them, using flash would have given no chance to get something as natural as this.  What does the photographer contribute? Deciding how they envisage the result, choosing where to stand and picking what Henri Cartier-Bresson dubbed the decisive moment; if there is a learnable skill in picture making (as distinct from camera operation), it is being ready for those moments when they present themselves.


This post originally appeared on my technet blog.

September 14, 2009

How to view RAW image files on Windows 7 (and Windows Vista).

Filed under: Photography,Windows Server,Windows Vista — jamesone111 @ 4:09 pm

My photography posts appear to be a bit like buses: I don’t write one for a while, then two come along together…


Some while back I wrote a tale of two codecs, bemoaning the patchy support for RAW files.  Basically we (Microsoft) don’t provide codecs for anything other than JPG, TIF, PNG and our Windows Media formats. Everything else is down to whoever is responsible for the format showing a bit of leadership. Pentax fell a bit short with the codec for their PEF format – no 64-bit support. Still, a 32-bit codec works in 32-bit apps – like Windows Live Photo Gallery – and if one of those previews the image and creates the thumbnail it then shows up in Explorer. At least Pentax’s codec will install: they support Adobe’s DNG format as an alternative, and Adobe’s rather old beta codec won’t install on 64-bit Windows 7. I discovered Ardfry’s codec for DNG, which is pretty good, though not free.


Putting QuickTime Player onto my rebuilt PC I find that it has partial codec support for Windows – i.e. some MOV files can be played in Windows Media Player and show a thumbnail in Explorer, and some can’t (it appears the “cans” use H.264 video and the “can’ts” are CinePak or Sorenson). Before I had a chance to get the latest build from Ardfry, someone sent me a link to this page of codecs from Axel Rietschin Software Developments.  I’ve only installed and tested the 64-bit PEF and DNG ones, but the initial impression is very good indeed. The only gripe is that there doesn’t seem to be a way for a codec to return the metadata from the picture but tell Windows “for this format the metadata is read-only” – with both Axel’s and Ardfry’s codecs you can enter new data, only to get an error when Windows tries to save it.


The full list of supported formats is as follows.


Adobe Digital Negative (*.dng  )
Canon Raw Image  (*.cr2, *.crw )
Fuji Raw Image (*.raf)
Hasselblad Raw Image (*.3pr, *.fff)
Kodak Raw Image (*.dcr, *.kdc )
Leica Raw Image (*.raw, *.rwl)
Minolta Raw Image (*.mrw)
Nikon Raw Image (*.nef, *.nrw )
Olympus Raw Image (*.orf)
Panasonic Raw Image (*.rw2)
Pentax Raw Image (*.pef)
Sony Raw Image (*.arw, *.sr2, *.srf)


A nice bonus is that these were created to support Fast Image Viewer, which I hadn’t come across before: it supports tethered shooting on cameras with PTP support (like my new Pentax K7). I’m going to give this a try and I’ll hand over the small pile of pennies required if it works. Update: there are different levels of PTP support, and the K7 doesn’t do what I need it to. Sigh.


This post originally appeared on my technet blog.

July 25, 2009

Vulcan hunting: a mini case study in social media

Filed under: General musings,Photography — jamesone111 @ 8:00 am

I’ve described some of my activities over recent weekends as the biggest hunt for a Vulcan since Star Trek III – The Search For Spock. The Vulcan I’m after isn’t the pointy-eared kind but XH558, the only flying example left of the V bombers. It’s very easy to talk a lot of tosh about beautiful machines of various kinds, but big delta-winged aircraft do have a certain something… Concorde always made people stop and look, and the Vulcan has been doing the same since it first flew in the 1950s. The Vulcan set a record for the longest bombing mission in history when one bombed the airfield at Port Stanley during the Falklands war in 1982. The task sounds crazy: “Chaps, we’ve got an aircraft which entered service in the 1950s and is due to be decommissioned. We’d like you to use it to bomb a runway, which is defended but we don’t know what with exactly. The good news is we’ve got you a base within 4000 miles of the target. The rest of the news isn’t so good: the aircraft’s navigation system is based on terrain-mapping radar, and those 4000 miles are over featureless ocean, but before you worry about finding the target you’ll need to figure out how to get this aircraft to do air-to-air refuelling. We need this … well, as soon as possible really”. They did it of course, and it’s been chronicled in at least one book.


The RAF kept a Vulcan flying for display purposes up till 1992 – ten years after it was meant to have come out of service.  Enthusiasts wanted to keep it flying, and when it was retired it was flown to Bruntingthorpe in Leicestershire. Getting it back in the sky made getting to the Falklands look like a walk in the park: it took 14 years. Even then it won’t fly forever: the engines have a very short life – they are rated in cycles, so the translation to hours is approximate. There are two sets of engines available and they will last about 200 hours each – I don’t reboot my PC as frequently as once every 200 hours. The Vulcan to the Sky trust wants to spin out the 400 hours they have over 10 years, so they have 40 hours a year to maximize the chances people have to see the aircraft, and that will be it.


The Trust has two linked problems. Like any charity, their main one is raising money: people only give in a crisis, and when the crisis is averted the money dries up till the next crisis comes. The other is making sure people who want to see the aircraft get the chance to do so. Of course the more people see it, the more donors they get – and if it doesn’t show up when expected, that alienates those donors. So XH558 is now registered on Twitter. This has been a help for me, since it has been spending the air-show season 20 miles or so from where I live. The first time Twitter worked for me, I was standing in a field hoping to catch the take-off. Late in the morning I checked Twitter on my phone: “Planned take off 14:00, return 16:00”. I had to be home for 14:00, so that cut out a lot of wasted time, and I came back to see it land. Next morning: “Taking off 14:30…” Back I went – knowing I had to collect my children later. 14:30 came and went. At 14:45 I checked Twitter on the phone. “Working on a problem”, it said, so I went to pick up the children, and found later that the problem wasn’t fixed in time for the day’s display, so the flight was cancelled and again I’d been saved a pointless wait. When XH558 finally did take off and fly a practice display a few days later, I was there thanks to another tweet.


Think about this – if you’re trying to build a community around what you are doing, a few moments here and there to connect with your customers can produce some real results. Does it work for generating donations? I can’t extrapolate to everyone, but they’ve got some money out of me. And here are the pictures I got – not just of the Vulcan but of the other planes which came and went while I was waiting – produced with Microsoft Research AutoCollage.


[Image: Vulcan AutoCollage]


This post originally appeared on my technet blog.

July 24, 2009

Seventh Heaven

Filed under: General musings,Photography — jamesone111 @ 8:40 pm

As I mentioned recently I have bought a new camera – the Pentax K7. As a proper photographer I’m bothered more by lenses than camera bodies, and last year I acquired Pentax’s beautiful 77mm Limited series lens. All those 7s and a new version of Windows… So I thought I’d grab a photo: I picked up my compact (a 7MP one) and took a shot to have in the Windows 7 screenshot below. Quite by coincidence it is one of 77 pictures in the folder, and the lens is focused at about 7m. Instead of doing this post on a Friday I should have waited till the weekend – not for the 7th day of the week, but because I will be flying to the US – you guessed it, on a Boeing 777 with Air Canada. The great circle route doesn’t quite take me past seventy degrees north, sadly. Or perhaps I should have shot it at 7 minutes past 7 o’clock…

Editing a 7MP image of a Pentax K7 with 77 mm lens, one of 77 pictures in a Windows 7 folder.


This post originally appeared on my technet blog.

A tale of two codecs. Or how not to be a standard.

Filed under: General musings,Photography,Windows 7,Windows Vista — jamesone111 @ 12:16 pm

I’ve just bought a new digital SLR camera. Being a dyed-in-the-wool Pentax person, I’ve upgraded to their new K7.

Being fairly serious about (some of) my photography, I shoot quite a lot in RAW format. (In case you didn’t know, higher-end digital cameras can save the data as it comes off the sensor without converting it to JPEG format.) There are only a small number of ways of expressing RAW data, but every camera maker embeds one of those methods into their own file format; then each new camera introduces a new sub-version of the format. This is, frankly, a right pain.

Adobe came up with an answer to this: Digital Negative format, DNG. It has been adopted, but not widely. Pentax were first to support it, in parallel with their own PEF format; heavyweights like Hasselblad and Leica support it, as do some models from Casio, Ricoh and Samsung. But Canon and Nikon, who account for somewhere round 3/4 of all DSLR sales, have stuck with their own formats. Adobe maintain a converter which takes proprietary files and converts them to DNG, so if you have an application which supports DNG but not your specific camera, Adobe’s tool will bridge the gap. So the take-up in photo-processing software has been quite good. My chosen RAW software, Capture One, needs an update to work with the latest PEF, but will take DNG files straight from the camera. And I’d switch the camera over from PEF to DNG format if it weren’t for the vexing matter of codecs.

Before Windows Vista shipped we introduced the “Windows Imaging Components”, WIC, which provide RAW file support using imaging codecs (COmpressor/DECompressor). Windows 7 and Vista include WIC, and it’s WIC which provides image preview in Explorer: the net effect is that if you have a suitable codec you get image preview. But only a very basic set of codecs ships with the OS, partly because of the maintenance headache and partly because some RAW processing requires a bit of reverse engineering, and we try to avoid doing that. Camera vendors provide codecs, and Pentax had a new PEF codec on-line by the time I got my K7 home. But this is 32-bit only – other camera makers also lack 64-bit support. I could take this as inspiration for a huge rant, but let’s just say I’d make it a requirement for 32 AND 64 bit Windows to be able to preview a camera’s files before it was granted the “Certified for Vista” logo – which the K7 sports on its packaging. Perhaps it’s good for our partnerships that I don’t decide such things.

I was on 64-bit Vista and I’m now on 64-bit Windows 7, so you might think the 32-bit codec would be totally useless… but no. A 32-bit codec won’t work with 64-bit software, like Windows Explorer. But it will work with a 32-bit program like Windows Live Photo Gallery (Photo Gallery from Vista has been moved over to Windows Live). Since WLPG shares a thumbnail cache with Explorer, anything you have seen in the Gallery will get a thumbnail in Explorer. Now, granted, this is a kludge, but there are worse ones out in the world – so I can see my PEFs. But using PEF format means I need to use the (less than great) bundled RAW software until Capture One supports the revised PEF. If I want to use Capture One today, I need to use DNG. So do Adobe have a DNG codec? They do, but their web site has (unanswered) complaints about the lack of 64-bit support going back to May of last year. Unlike the Pentax codec, the Adobe one catches that I am on 64-bit Windows 7 and tells me it only installs on 32-bit Vista. [Users with the Windows Imaging Components installed on XP are out of luck too.]

It’s a pretty poor show on Adobe’s part, but it’s easy to see how this comes about. None of the camera vendors see it as their job to write a codec for DNG – especially as Adobe have started the process. Microsoft don’t write codecs except for major standards like JPG, PNG and TIFF and our own formats like Windows Media Photo: DNG doesn’t have enough of a foothold to be classed as a major standard. Adobe – I suspect – must feel that too many people are not pulling their weight and are expecting them to do all the work. It’s perhaps unfair to draw a parallel with our support for Linux in the virtualization world (which I have only just written about) – after all, it is in our interest to get our virtualization platform adopted, and Adobe aren’t disadvantaged if people don’t choose to adopt DNG. But it needs a bit more commitment than Adobe are showing to get something adopted. If you were a product planner at Canon or Nikon, would you write DNG support into the spec for future models? Or would you decide that the support for DNG was half-baked and leave it as “something to keep an eye on” for now?

In researching this I had a look at Microsoft’s Pro Photo web site, which is worth a visit just for the “Icons of imaging” page if you haven’t been there before. The downloads page does feature a 3rd-party codec for DNG, which I must investigate. Sadly it’s not free: it’s not that I begrudge the money, but if I have to pay even a token amount to get something which is bundled with something I have bought, and is supposed to be a standard, working in all the places I’d expect it to work, then how much of a standard is it? I could level the same charge at Adobe over PDF iFilters and preview – but as I’ve written before, Foxit Software plugs the gaps, and is free – reinforcing the idea that PDF is a standard which is bigger than the company which devised it. I’d love to think DNG would do for RAW formats what PDF has done for documents, but sadly it doesn’t look like it will go that way.

This post originally appeared on my technet blog.

May 3, 2009

Virtual Windows XP … picking myself up off the floor.

Filed under: Beta Products,Photography,Virtualization,Windows 7,Windows XP — jamesone111 @ 3:16 pm

Someone gave me a definition of insanity as “trying the same thing over and over again expecting different results”.  I guess trying something you expect to fail is somewhere between insanity and scientific thoroughness. Anyhow, that’s how I came to be trying the test you see below. I didn’t expect it to work, but it did.

As I mentioned yesterday, I wanted to try out the tethered-shooting ability of my digital SLR. In fact I have two Pentax digital SLRs: a 2003-vintage *ist-D and a 2006 K10D. Pentax have only ever done 32-bit versions of the Remote Assistant software; the *ist-D works with V1 and the K10D needs V3, which demands the CD which came with the camera (even if the old software is installed or the camera is plugged in). The cable to connect the *ist-D was in the loft – along with the K10D’s disk. So I couldn’t try either last night; this morning I got out the ladder and retrieved both.

I had installed Remote Assistant 1.0 into the VM, mainly to see if version 3 would upgrade in place without the CD, and it showed up on my Windows 7 Start menu, so I figured I’d plug in the *ist-D. Windows 7 installed the drivers for it. I fired up the VM and pulled down the USB menu; the camera showed as shared. I clicked it, and after a warning that it would no longer be usable in the host OS it became “Attached”, and the option changed to “Release”. Attaching the device to the VM is just like plugging a USB device into a physical machine, so the virtualized instance of XP installed the drivers for the camera. It’s a standard device and doesn’t need anything downloaded or provided from a disk, so it was all done in 3 clicks.

I fired up Remote Assistant. It gives a representation of what you can see through the viewfinder (not a live preview, but the camera settings – under the picture on the left you can see it is telling me a shutter speed of 1/80th of a second and an aperture of f/2.4). It was getting data from the camera, so there was nothing for it at this stage but to press the shutter button, so I aimed the camera at my son and…

Click for full size version

 

…it worked! It only went and worked!! The picture on the left is the assistant running in the VM, and on the right it’s working as a remote application without the whole desktop. The old camera is a USB 1.1 device, so the transfer speed is pretty poor – which is why I never got into tethered shooting with it; there’s a motivation to get the newer software working to use the other camera. I’d never used that software because by the time Pentax had it out I was running 64-bit Vista and wasn’t going to change for one program. [Update: done that – identical process, much faster transfer.]

I found the whole VM bogged down terribly if I asked it to save the file it was acquiring from the camera to the host computer. So I decided to cheat and add a shortcut on the Start menu to link to the folder in the VM where it stores the files. (This also turns out to be a useful backdoor to launch anything which isn’t set up on the host’s Start menu.)

The only other fault I can find with the whole process is that you have to reconnect the USB device by starting the VM, and only then can you launch the virtual application. I don’t know if the Virtual PC team plan to do anything about this by release.

As a Hyper-V person through and through, I tend to think of Virtual PC as a bit of an old dog – in the best of all worlds this would be underpinned by Hyper-V technology – but here I am applauding VPC’s new trick. There could be a whole new lease of life in this old dog yet.

This post originally appeared on my technet blog.

March 16, 2009

A day out with Divers

Filed under: Events,Photography,Windows 7,Windows Vista — jamesone111 @ 11:38 am

I spent yesterday at DIVERSE – the south-eastern area conference for BSAC (the British Sub-Aqua Club). Officially I was there to speak about why Windows Vista (and 7) are so good at handling pictures; unofficially, as a diver (though not a BSAC member), any valid excuse to hang out with diving folk is welcome. “Club” suggests “amateur”, but I’ve been to plenty of IT industry events which were less professionally run. It was well attended too, and it’s quite some time since I’ve seen an audience so obviously enjoying what was being presented to them (even a potentially dull session on accident statistics was lightened with some of the funny things people write when reporting accidents).


I’ve uploaded the slides I used. The main thrust of my session was that



  1. Putting data about your pictures in the pictures themselves (using EXIF data) is much more valuable than putting it in a separate database.
    (One gentleman asked me if there was anything he could do about editors which drop all the data – I use EXIFCOPY from EXIFUTILS to copy it back from the original photo.)

  2. Once tagged, Search means you can find, sort, search and group the pictures. Search is available for XP, but because it is integrated everywhere in Windows Vista and 7 the experience is better: their version of Windows Explorer also makes it easy to tag pictures.

  3. Some free software – Windows Live Photo Gallery (which also works on XP) – makes it easier to work with photos and do basic corrections (although it lacks a clone brush and the levels adjustment isn’t that sophisticated).

  4. There are interesting things you can do with the photos afterwards (e.g. building a collage with AutoCollage – which is running a 20% off promotion until the end of March; at £14.40 it’s a real bargain).

It seems the graphics card in my laptop has been on the blink for a couple of weeks – I’ve been getting loads of errors from my screen driver and some other odd behaviour. It finally gave up the ghost while I was on stage, and when I try to boot the machine now I get some very interesting screen corruption. So I didn’t get to show the final collage. Here is what AutoCollage built with the pictures I showed (click for a bigger version):


 


Click for a bigger version


 


I showed my PowerShell scripts for tagging photos from the Suunto dive-management software I use. (Suunto were sponsoring the event, which was nice – I’ve got a very high opinion of Suunto, as much for their customer service as for their products, and it would have been awkward if another make of dive computer had been sponsoring things.) I talked about how this worked in an earlier post, and I’ve added the latest versions of the files to this post for anyone who wants to try it. Here are some quick instructions:



  1. Try to remember to take a photo of the display on your dive computer at some point. This will allow you to calculate the error between the time on the camera and the time on the computer.

  2. Unzip the attached files

  3. Export the CSV files from the Suunto software (in the 1.6 version I use* the command is File/Export/ASCII in CSV format)

  4. Assuming you already have PowerShell installed, start PowerShell and run the following command
    [reflection.assembly]::loadfile(“C:\<path To Where You Unzipped it >\OneImage.dll”)

  5. Then enter the following PowerShell command
    filter get-picture {param ($Path) new-object oneImage.exifimage $path}

  6. If you have a photo of your computer, use this PowerShell command to find when it was taken
    (Get-picture “fullPathToYourPicture“).dateTimeTaken
    You can work out the number of seconds difference between the time on the computer when the picture was taken and the time on the camera. If the computer is ahead of the camera you want a positive number, and if it is behind you want a negative number.

  7. Next, process the Suunto data with this command in PowerShell
    <path to where you unzipped it>\prep-Divedata
    Note that if you are in the folder where you unzipped it, you need to enter the path as .\prep-divedata. This command takes about a minute to run on my system; if it looks like it has hung, give it plenty of time.

  8. Read the warning in the right hand column of this blog – “This stuff is provided as is, with no warranty and confers no rights.” My code isn’t very extensively tested and there is a small chance it could screw up your photos. Make sure you have a backup before proceeding. Seriously.

  9. Now run the following command in PowerShell
    DIR <your file Selection> | foreach-object { <path to where you unzipped it>\tag-photo $_.fullName timeOffset}
    Your file selection might be *.jpg, or c:\photo-Dump\dive40*.jpg, or whatever. As before, if you are in the folder that holds the script you need to run it as .\tag-photo.
    The timeOffset is the one you worked out in step 6. If you don’t enter one, everything will work on the assumption that computer and camera are in sync.

 


Officially there is no support for this (but officially I’m on leave and not posting here this week)


 


* Yes, there is a newer version of Dive Manager, but I like to point out that 1.6 was designed to interface with the serial port, pre-dates Vista (never mind 7), and wasn’t intended for a 64-bit platform; yet here it is on 64-bit Windows 7, downloading my data very nicely thank you. Which shows what happens if you write the software properly in the first place.

This post originally appeared on my technet blog.

February 17, 2009

Windows 7 : Photos, Gallery and AutoCollage.

Filed under: Beta Products,Photography,RSS,Windows 7 — jamesone111 @ 1:25 pm

When I first started using Windows Vista it was the better experience for photographers which really hooked me in. Most common image formats use the EXIF standard for embedding data about the picture (everything from the camera model and settings to the title, keywords and so on). XP lets you look at this information in the file-properties dialog, but Vista introduced the ability to set EXIF data from the main Windows Explorer window, to search for picture titles from the Start menu, and to sort and build search folders based on EXIF data (and it gives thumbnail previews – so you get the effect of a contact sheet). Storing data in EXIF is really important; photos get shared. If the data about a picture stays on the computer where it was edited and doesn’t follow the picture, then someone who looks at it in years to come won’t get the “where and when” information. And if the data is stored in a database by a gallery package, you’re locked into that package.
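To make that point concrete: because EXIF lives inside the image file itself, the metadata survives any copy, upload or share, with no database in sight. Here is a minimal round-trip sketch using the third-party Pillow library (not something from this post – tag number 306 is the standard EXIF “DateTime” field, and the date shown is made up):

```python
import io
from PIL import Image  # third-party: pip install Pillow

DATETIME_TAG = 306  # standard EXIF "DateTime" tag id

# Write: store the date inside the image data itself, not a side database.
exif = Image.Exif()
exif[DATETIME_TAG] = "2009:03:14 10:15:18"
buf = io.BytesIO()
Image.new("RGB", (8, 8), "navy").save(buf, format="JPEG", exif=exif)

# Read back: any EXIF-aware tool sees the same data, wherever the file goes.
buf.seek(0)
taken = Image.open(buf).getexif().get(DATETIME_TAG)
```

The same mechanism carries titles, keywords and camera settings, which is exactly what Explorer and the Gallery read and write.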

Vista also introduced “Windows Photo Gallery”, which added a little to what you could do from Explorer. The Windows Live team have adopted Photo Gallery, and it’s not pre-installed with Windows 7 (we have a link on the Getting Started menu for Windows Live Essentials). As a 32-bit app it actually works with the 32-bit-only RAW codec from Pentax, so I can see what’s in those files. Photo Gallery does more than organize your photos: each version has introduced new bits under the heading of “fixing” photos. Photoshop it ain’t, but it will crop pictures, straighten crooked ones, reduce noise or sharpen soft images, and fix red-eye; it’s got decent exposure-correction features and will fix colour balance (if you haven’t seen the super-cute demo* by 4½-year-old Kylie, the autofix combines these – as she says, “I click – it’s better”), and it has even got the ability to do some black-and-white effects. It could do with a clone/heal brush, but otherwise it’s not bad. One annoyance is that it has facial recognition – great – but it doesn’t seem to store the names of the people found in the EXIF data.
The other ambition for Photo Gallery seems to be to act as the central point for “OK, I’ve got my pictures… now what?” As Kylie shows, it has a hook into mail, and there is the ability to upload pictures to web services – critically, the newest version supports plug-ins for non-Microsoft sites (Facebook, Flickr, SmugMug and others). You can start a new blog post in Live Writer with pictures in it too.
Then there’s also the ability to make a panorama – which has another cute-kid demo, this time with 7-year-old Alex. The panorama bits came from MS Research, and they have a more sophisticated panorama tool, “ICE” – the Image Composite Editor. You can send images from Photo Gallery to ICE. And this is the last of the extensions in the new version of Photo Gallery – the ability to send pictures to another program – so you can send them to Movie Maker as well.

Now, in the 1.0 release of AutoCollage there didn’t seem to be a way to select photos other than giving it a whole folder to work with. This wasn’t too bad – I added a working folder to my Send To menu and sent pictures to that before making my collage – but it was an extra step I could do without. The new 1.1 release (which doesn’t need a new key if you have bought 1.0) hooks into Photo Gallery, so now I can select photos from wherever and chuck them into a collage. If you take photos and haven’t tried AutoCollage yet you should get the trial version (and there is a Flickr group to show what people are doing with it).

Does Windows 7 do much more than Vista for photographers? Not really – in fact, since Gallery has moved into Windows Live you could say it does less. But there is one feature which I’m almost ashamed to admit I love. It’s the menu you can see at the top – you can have multiple background pictures… as a slide show. Click on the thumbnail on the left to see how this is set up.

There is one other thing which can make your machine nicer, and that is the ability to get pictures for the slide show from an RSS feed. There is a good post which describes this here. I must try creating my own feed for this.
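If I do get round to rolling my own feed, it should only take a few lines of scripting: the slide show reads an ordinary RSS 2.0 document and picks up the images from its enclosures. A hedged Python sketch – the URLs are placeholders, and I’m assuming the standard <enclosure> element is what the feature reads:

```python
import xml.etree.ElementTree as ET

# Hypothetical image URLs to offer the desktop slide show
items = ["http://example.com/vulcan1.jpg", "http://example.com/vulcan2.jpg"]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Desktop slide show"
for url in items:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = url.rsplit("/", 1)[-1]
    # The image itself travels as an RSS enclosure
    ET.SubElement(item, "enclosure", url=url, type="image/jpeg")

feed = ET.tostring(rss, encoding="unicode")
```

Serve the resulting XML from any web server and point the theme settings at its URL.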

 

* Footnote: the Kylie demo is on YouTube, and there’s a great comment: “Phrases you never thought you’d hear: (1) oh, that’s the trombone player’s Porsche and (2) that new Microsoft TV spot actually kicks ass.”

This post originally appeared on my technet blog.

