James O'Neill's Blog

June 22, 2019

Last time I saw this many shells, someone sold them by the sea shore.

Filed under: Azure / Cloud Services,Linux / Open Source,Powershell,Uncategorized — jamesone111 @ 10:04 pm

I’ve been experimenting with lots of different combinations of shells on Windows 10.

BASH. I avoided the Subsystem for Linux on Windows 10 for a while. There are only two steps to set it up: adding the subsystem, and adding your chosen Linux to it. If the idea of installing Linux into Windows, but not as a virtual machine, and getting it from the Windows store gives you a headache, you're not alone; this may help, or it may make things worse. I go back to the first versions of Windows NT, which had a Windows-16 on Windows-32 subsystem (WoW, which was linked to the Virtual DOS Machine – 32-bit Windows 10 can still install these), an OS/2 subsystem, and then a POSIX subsystem. Subsystems translated APIs so binaries intended for a different OS could run on NT, but kernel functions (drivers, file-systems, memory management, networking, process scheduling) remained the responsibility of the underlying OS. 25 years on, the Subsystem for Linux arrives in two parts: the Windows bits to support all the different Linuxes, and then distributor-supplied bits to make it look like Ubuntu 18.04 (which is what I have) or SUSE or whichever distribution you choose. wslconfig.exe will tell you which distro(s) you have and change the active one. There is a generic launcher, wsl.exe, which will launch any Linux binary in the subsystem – so you can run wsl bash – but a Windows executable, bash.exe, streamlines the process.

Linux has a view of Windows' files (C: is auto-mounted at /mnt/c, and the mount command will mount other Windows filesystems, including network and removable drives), but there is strongly worded advice not to touch Linux's files via their location on C: – see here for more details. Just before publishing this I installed the 1903 release of Windows 10, which adds proper access to the Linux filesystem from Windows, as you can see in the screenshot.
Subsystem processes aren't isolated, although a Linux API call might have a restricted view of the system. For example, ps only sees processes in the subsystem; but if you start two instances of bash, they are both in the subsystem, they can both see each other, and running kill in one will terminate the other. The subsystem can run a Windows binary (like net.exe start, which will see Windows services) and pipe its output into a Linux one, like less; so those who prefer Linux tools get to use them in their management of Windows.
The subsystem isn't read-only – anything which changes in that filesystem stays changed. Since the subsystem starts configured for the US locale,
sudo locale-gen en_GB.UTF-8 and sudo update-locale LANG=en_GB.UTF-8 got me to a British locale.

Being writable meant I could install PowerShell Core for Linux into the subsystem: I just followed the instructions (including running sudo apt-get update and sudo apt-get upgrade powershell to update from 6.1 to 6.2). Now I can test whether things which work in Windows PowerShell (V5) also work with PowerShell Core (V6) on different platforms. I can tell the Windows Subsystem for Linux to go straight into PowerShell with wsl pwsh (or wsl pwsh -nologo if I'm at a command line already). Like bash it can start Windows and Linux binaries, and the in-the-Linux-subsystem limitations still hold: Get-Process asks about processes in the subsystem, not the wider OS. Most PowerShell commands are there; some familiar aliases overlap with Linux commands and most of those have been removed (so | sort will send output to the Linux sort, not to Sort-Object, and ps is not the alias for Get-Process; kill and cd are exceptions to this rule). Some common environment variables (Temp, TMP, UserProfile, ComputerName) are not present on Linux; Windows-specific cmdlets, like Get-Service, don't exist in the Linux world; and tab expansion switches to Unix style by default, but you can set either environment to match the other. My PowerShell profile soon gained a Set-PSReadlineOption command to give me the tab expansion I expect, and it sets a couple of environment variables which I know some of my scripts use. It's possible (and tempting) to create some PSDrives which map single letters to places on /mnt, but things tend to revert back to the Linux path. After that, V6 Core is the same on both platforms.

PowerShell on Linux has remoting over SSH; it connects to another instance of PowerShell 6 running in what SSH also terms a "subsystem". Windows PowerShell (up to 5.1) uses WinRM as its transport, and PowerShell Core (6) on Windows can use both. For now at least, options like constrained endpoints (and hence "Just Enough Admin", or JEA) are only in WinRM.
The instructions for setting up OpenSSH are here; I spent a very frustrating time editing the wrong config file – there is one in with the program files, and my brain filtered out the instruction which said to edit the sshd_config file in C:\ProgramData\ssh. Having edited the one in the wrong directory, I could make an SSH session into Windows (a useful way to prove OpenSSH is accepting connections), but every attempt to create a PowerShell session gave the error
New-PSSession : [localhost] The background process reported an error with the following message: The SSH client session has ended with error message: subsystem request failed on channel 0.
When I (finally) edited the right file I could connect to it from both the Windows and Linux versions of PowerShell Core with New-PSSession -HostName localhost. (Using -HostName instead of -ComputerName tells the command "this is an SSH host, not a WinRM one".) It always amazes me how people, especially but not exclusively those who spend a lot of time with Linux, are willing to re-enter a password again and again and again. I've always thought it axiomatic that a well designed security system grants or refuses access to many things without asking the user to re-authenticate for each (or "if I have to enter my password once more, I'll want the so-called 'security architect' fired"). So within 5 minutes I was itching to get SSH to sign in with a certificate and not demand my password.
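For reference, the registration that has to be in the right sshd_config is a single line; this is the shape given in Microsoft's instructions (the 8.3 path and the version folder are examples – adjust them to wherever your pwsh.exe lives):

```
Subsystem powershell c:/progra~1/powershell/6/pwsh.exe -sshs -NoLogo -NoProfile
```

Put it in the file under C:\ProgramData\ssh, not the copy alongside the binaries, and restart the sshd service.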

I found some help here, but not all the steps are needed. Running the ssh-keygen.exe utility which comes with OpenSSH builds the necessary files – I let it save the files to the default location and left the passphrase blank, so it was just a case of hitting Enter at each prompt. For a trivial environment like this I was able to copy the id_rsa.pub file to a new file named authorized_keys in the same folder (in a more real-world case you'd paste each new public key into authorized_keys). Then I could test a Windows-to-Windows remoting session over SSH. When that worked I copied the .ssh directory to my home directory in the Subsystem for Linux, and the same command worked again.
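For a throwaway test the whole key setup boils down to a couple of commands. This sketch uses a scratch directory rather than the real ~/.ssh, so the paths are illustrative:

```shell
# Generate a key pair with no passphrase (the -N '' is the scripted
# equivalent of hitting Enter at each prompt), then make the public key
# the single entry in authorized_keys.
dir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$dir/id_rsa"
cp "$dir/id_rsa.pub" "$dir/authorized_keys"   # real-world: append, don't overwrite
cmp -s "$dir/id_rsa.pub" "$dir/authorized_keys" && echo OK
```

In a live setup you would append each new public key (cat new_key.pub >> ~/.ssh/authorized_keys) instead of overwriting the file.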

PowerShell Core V6 is built on .NET Core, so some parts of PowerShell 5 have gone missing: there's no Out-GridView or Show-Command, no Out-Printer (I wrote a replacement), no WMI commands, no commands to work with the event log, no transactions, and no tools to manage the computer's relationship with AD. The Microsoft.* modules provide about 312 commands in V5.1 and about 244 of those are available in V6; but nearly 70 do things which don't make sense in the Linux world because they work with WinRM/WSMan, Windows security or Windows services. A few things, like renaming the computer, stopping and restarting it, or changing the time zone, need to be done with native Linux tools. But we have just over 194 core cmdlets on all platforms, and more in pre-installed modules. There was also a big step forward with compatibility in PowerShell 6.1 and another with 6.2 – there is support for a lot more of the Windows API, so although some things don't work in Core, a lot more does than at first release. It may be necessary to specify the explicit path to a module (the different versions use either "…\WindowsPowerShell\…" or "…\PowerShell\…" in their paths, and Windows tools typically install their modules for Windows PowerShell) or to use Import-Module in V6 with the -SkipEditionCheck switch. Relatively few modules stubbornly refuse to work, and there is a solution for them: run the otherwise-unavailable commands remotely – this time over WinRM instead of SSH (V5 doesn't support SSH). When I started working with constrained endpoints I found I liked the idea of not installing modules everywhere and running their commands remotely instead: once you have a PSSession to the place where the commands exist, you can use Get-Module and Import-Module with a -PSSession switch to make them available. So we can bridge between versions – when "the place where the commands exist" is "another version of PowerShell on the same machine", it's all the same to remoting.
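As a sketch (the module name is just an example, not a recommendation), bridging from PowerShell 6 back to Windows PowerShell 5 on the same machine looks like this:

```
# WinRM session to the local Windows PowerShell 5 endpoint (V5 doesn't speak SSH)
$s = New-PSSession -ComputerName localhost
# See what modules the remote (V5) side has
Get-Module -PSSession $s -ListAvailable
# Implicit remoting: commands from a V5-only module become callable in V6
Import-Module -PSSession $s -Name ServerManager
```

The imported commands are really proxy functions which run in the V5 session, so anything that only works in Windows PowerShell keeps working.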
The PowerShell team have announced that the next release uses .NET Core 3.0, which should mean the return of Out-GridView (eventually) and other home-brew tools to put GUI interfaces onto PowerShell; that's enough of a change to bump the major version number, and they will drop "Core" from the name to try to remove the impression that it is a poor relation on Windows. The PowerShell team have a script to do a side-by-side install of the preview – or even the daily builds – which Thomas Lee wrote up here. Preview 1 seems to have done the important but invisible work of changing .NET version; new commands will come later. At the time of writing, PowerShell 7 preview has parity with PowerShell Core 6, and the goal is parity with Windows PowerShell 5.

There is no ISE in PowerShell 6/7. Visual Studio Code had some real annoyances, but pretty well all of them have been fixed for some months now, and somewhere along the way I joined the majority who see it as the future. Having a Git client built in has made collaborating on the ImportExcel module so much easier, and that got me to embrace it. Code wasn't built specifically for PowerShell, which means it will work with whichever version(s) it finds.
The right of the status bar looks like this, and clicking the green bit pulls up a menu where you can swap between versions and test what you are writing in each one. These swaps close one instance of PowerShell and open another, so you know you're in a clean environment (not always true with the ISE); the flip side is you realise it is a clean environment when you want something which was loaded in the shell you've just swapped away from.
VS Code's architecture of extensions means it can pull all kinds of clever tricks – like remote editing – and the Azure plug-in allows an Azure Cloud Shell to be started inside the IDE. When you use Cloud Shell in a browser it has nice easy ways to transfer files; but you can discover the UNC path to your cloud drive with Get-CloudDrive, and work out the name of the storage account from that path. That name is the user name you log on with, but you also need to know the resource group the account is in, and Get-AzStorageAccount shows that. Armed with the name and resource group, Get-AzStorageAccountKey gives you one or more keys which can be used as a password, and you can map a drive letter to the cloud drive.

Surely that's enough shells for one post… well, not quite. People have been getting excited about the new Windows Terminal, which went into preview in the Windows store a few hours before I posted this. Before that you needed to enable developer options on Windows and build it for yourself. It needs the 1903 Windows update, and with that freshly installed I thought "I've also got [full] Visual Studio on this machine, why not build and play with Terminal?". As it turns out I needed to add the latest Windows SDK and several gigabytes of options to Visual Studio (all described on the GitHub page), but with that done it was one git command to download the files, another to get submodules, then open Visual Studio, select the right section per the instructions, say "build me an x64 release", have a coffee… and the app appears. (In the limited time I've spent with the version in the store, it looks to be the same as the build-your-own version.)

It's nice, despite being early code (no settings menu, just a json file of settings to change). It's the first time Microsoft have put out a replacement for the host which Windows uses for command-line tools – shells or otherwise, so you could run ssh, ftp, or a tool like netsh in it. I've yet to find a way to have "as admin" and normal processes running in one instance. It didn't take long for me to add PowerShell on Linux and PowerShell 7 preview to the default choices (it's easy to copy/paste/adjust the json – just remember to change the guid when adding a new choice, and you can specify the path to a PNG file to use as an icon).
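Adding a shell is just a matter of copying an existing profile block in the json and adjusting it; the guid and paths below are placeholders, not values from my setup:

```
{
    "guid": "{11111111-2222-3333-4444-555555555555}",
    "name": "PowerShell on WSL",
    "commandline": "wsl.exe pwsh -NoLogo",
    "icon": "C:\\Users\\James\\Pictures\\pwsh.png"
}
```

Each profile needs its own unique guid; duplicating one makes Terminal behave oddly.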
So, in a single window, I have all the shells except 32-bit PowerShell 5 as tabs: CMD, three different 64-bit versions of PowerShell on Windows, PowerShell on WSL, BASH on WSL, and PowerShell on Linux in Azure.
I must give a shout out to Scott Hanselman for the last one; I was thinking "there must be a way to do what VS Code does", and from his post Scott had thought down the same lines a little while before me, hooked up with others working on it, and shared the results. I use a 2-line batch file with title and azshell.exe (I'm not sure when "title" crept into CMD, but I'm fairly sure it wasn't always there. I've used it to keep the tab narrow for CMD; to set the tab names for each of the PowerShell versions I set $Host.UI.RawUI.WindowTitle, which even works from WSL). [UPDATED 3 Aug: Terminal 0.3 has just been released with an Azure option which starts the cloud shell, but only in its bash incarnation. AzShell.exe can support a choice of shell by specifying -shell pwsh or -shell bash.]
So I get 7 shells, 8 if I add the 32-bit version of PowerShell. Running them in the traditional host as well would give me 16 possible shells. Add the 32- and 64-bit PowerShell ISEs and VS Code with Cloud Shell and 3 versions of local PowerShell, and we're up to 22. And finally there is Azure Cloud Shell in a browser or, if you must, the Azure phone app, so I get to a nice round two dozen shells in all – without ssh'ing into other machines (yes, Terminal can run ssh), using any of the alternate Linux shells with WSL, or loading all the options VS Code has. "Just type the following command line" is not as simple as it used to be.

March 19, 2014

Exploring the Transcend Wifi-SD card

Filed under: Linux / Open Source,Photography — jamesone111 @ 1:37 pm

There are a number of variations on a saying: "Never let a programmer have a soldering iron, and never let an engineer have a compiler".

It's been my experience over many years that hardware people are responsible for rubbish software. Years upon years of shoddy hardware drivers, dreadful software bundled with cameras (Canon, Pentax, I'm looking at both of you), printers (HP, Epson) and scanners (HP – one day I might forgive you) have provided the evidence. Since leaving Microsoft I've spent more time working with Linux, and every so often I get into a rant about the lack of quality control: not going back and fixing bugs, not writing proper documentation (the "who needs documentation when you've got Google" attitude meant that when working on one client problem, all we could find told us it could not be solved; only a lucky accident found the solution). Anyone can program: my frustrations arise when they do it without a proper specification, testing regime, documentation and after-care. The question is… what happens when engineers botch together an embedded Linux system?

Let me introduce you to what I believe to be the smallest commercially available Linux computer and web server.

I've bought this in its Transcend form, which is available for about £25. It's a 16GB memory card, an ARM processor and a WiFi chip, all in an SD card package. Of course chip designers will be able to make it smaller, but since it's already too easy to lose a Micro-SD card, I'm not sure there would be any point in squeezing it into a smaller form factor. Transcend aren't the only firm to use this hardware: there is a page on OpenWrt.org which shows that Trek's Flu-Card and PQI's Aircard use the same hardware and core software. The Flu-Card is of particular interest to me, as Pentax have just released the O-FC1: a custom version of the Flu-Card with additional functions, including the ability to remotely control their new K3 DSLR. Since I don't have the K3 (yet) and the Pentax card is fairly expensive, I went for the cheap generic option.

The way these cards work is different from the better-known Eye-Fi card. They are SERVERS: they don't upload pictures to a service by themselves; instead they expect a client to come to them, discover the files it wants and download them. The way we're expected to do this is using HTTP, either from a web browser or from an app on a mobile device which acts as a wrapper for the same HTTP requests. If you want your pictures to be uploaded to photo-sharing sites like Flickr, Photobucket or SmugMug, online storage like Dropbox or OneDrive (née SkyDrive), or social media sites (take your pick), these cards – as shipped – won't do that. Personally I don't want that, so the limitation's fine. The cards default to being an access point on their own network – which is known as "Direct Share mode" – it feels odd, but it can be changed.

Various people have reported that WiFi functionality doesn't start if you plug the card into the SD slot of a laptop, and it's suggested this is a function of the power supplied. Transcend supply a USB card reader in the package, and plugged into it my brand-new card soon popped up as a new wireless network. It's not instant – there's an OS to load – but it's less than a minute. This has another implication for use in a camera: if the camera powers down, the network goes with it, so the camera has to stay on long enough for you to complete whatever operations are needed.

The new card names its network WIFISD, and switching to that – which has a default key of 12345678 – gave me a wireless connection with a nominal speed of 72Mbits/sec and a new IP configuration: a connection-specific DNS suffix of WifiCard, with the DNS server, default gateway and DHCP server all being the same address – that's the server. The first thing I did was point my browser at that address, enter the login details (user admin, password admin) and hey presto, up came the home page. This looks like it was designed by someone with the graphic design skills of a hardware engineer. I mean, I know the card is cheap, but effort seemed to have gone in to making it look cheap AND nasty.

However, with the [F12] developer tools toggled on in Internet Explorer, I get to my favourite tool: the network monitor. First of all I get a list of what has been fetched, and if I look at the details for one of the requests, the response headers tell me the clock was set to 01 Jan 2012 when the system started and the server is Boa/0.94.14rc21.

The main page has 4 other pages which are arranged as a set of frames: frame1 is the menu on the left, frame2 is the banner (it only contains Banner.jpg) and frame3 initially holds page.html, which just contains page.jpg; there is a blank.html to help the layout. Everything of interest is in frame1. What is interesting is that you can navigate to frame1.html without entering a user name and password, and from there you can click settings and reset the admin password.
The settings page is built by a Perl script (/cgi-bin/kcard_edit_config_insup.pl), and if you view the page source, the administrator password is there in the HTML form, so you don't even need to reset it. Secure? Not a bit of it. Within 5 minutes of plugging the card in I'd found a security loophole (I was aware of others before I started, thanks to the OpenWrt page and Pablo's investigation). I love the way that Linux fans tell me you can build secure systems with Linux (true) and it can be used on tiny bits of embedded hardware where Windows just isn't an option (obviously true here): but you don't automatically get both at the same time. A system is only as good as the people who specified, tested, documented and patched it.
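To illustrate how little protection there is (the field name and value here are stand-ins; inspect the real page source for the actual ones), a one-liner can lift the password straight out of a saved copy of the settings page:

```shell
# A stand-in for the settings page the Perl script emits: the current admin
# password is sitting in the value attribute of a form field.
cat > settings.html <<'EOF'
<input type="password" name="ADMIN_PW" value="admin">
EOF
# Pull the value out - remember, no authentication was needed to fetch the page.
sed -n 's/.*name="ADMIN_PW" value="\([^"]*\)".*/\1/p' settings.html
```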

While I had the settings page open I set the card to work in "internet mode" by default and gave it the details of my access point. You can specify 3 access points; it seems that if the card can't find a known access point it drops back to Direct Share mode so you can get in and change the settings (I haven't tried this for myself). So now the card is on my home WiFi network with an address from that network. (The card does nothing to tell you the address, so you have to discover it for yourself.) Since there is just a process of trying to connect to an access point with a pre-shared key, any hotspots which need a browser-based sign-on won't work.

The next step was to start exploring the File / Photo / Video pages. Using the same IE monitor as before it's quite easy to see how they work – although Files is a Perl script and pictures & videos are .cgi files, the result is the same: a page which calls /cgi-bin/tslist?PATH=%2Fwww%2Fsd%2FDCIM and processes the results. What's interesting is that path, /www/sd/DCIM. It looks like an absolute path… what is returned by changing the path to, for example, / ? A quick test showed that /cgi-bin/tslist?PATH=%2F does return the contents of the root directory. So /cgi-bin/tslist?PATH={whatever} requires no security and shows the contents of any directory.
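A sketch of building those requests (the host name wifisd here is a stand-in for whatever address the card actually gets):

```shell
# Percent-encode a path the way the card's CGI expects: %2F for "/" and
# %20 for spaces - it understands nothing else.
enc() { printf '%s' "$1" | sed -e 's|/|%2F|g' -e 's| |%20|g'; }
echo "http://wifisd/cgi-bin/tslist?PATH=$(enc /www/sd/DCIM)"
echo "http://wifisd/cgi-bin/tslist?PATH=$(enc /)"   # lists the root directory
```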
The pictures page shows additional calls to /cgi-bin/tscmd?CMD=GET_EXIF&PIC={full path} and /cgi-bin/thumbNail?fn={full path}. The files page makes calls to /cgi-bin/tscmd?CMD=GET_FILE_INFO&FILE={full path}. (The picture EXIF is a bit disappointing: it doesn't show lens, shutter settings, camera model or exposure time, just file size – at least with files we see the modified date. The thumbnail is also a disappointment: there is a copy of DCRAW included on the system which is quite capable of extracting the thumbnail stored in raw files, but it's not used.)
And there is a link to download the files: /cgi-bin/wifi_download?fn={name}&fd={directory}. By the way, notice the inconsistency of parameter naming: the same role is filled by PATH=, PIC=, fn=, and fn=&fd=. Was there an organised specification for this?

Of course I wanted to use PowerShell to parse some of the data that came back from the server, and I hit a snag early on – requesting a page throws an error:
The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF

Shock horror! More sloppiness in the CGI scripts: the last response header is followed not by [CR][LF] but by [LF][LF]. Fortunately Lee Holmes already has an answer for this one. I also found that the space in my folder path /www/sd/hello James caused a problem: when it ran through [System.Web.HttpUtility]::UrlEncode, the space became a + sign, not %20; the CGI only accepts %20, so that needs to be fixed up. Grrr.
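The fix-up itself is trivial; in shell terms it is one substitution after encoding (PowerShell's -replace operator does the same job):

```shell
# Form-style encoding turns a space into "+", but the card's CGI only
# understands %20, so swap them after encoding.
path="hello James"
encoded=$(printf '%s' "$path" | sed 's| |+|g')     # what UrlEncode-style escaping gives
fixed=$(printf '%s' "$encoded" | sed 's|+|%20|g')  # what the card actually accepts
echo "$fixed"
```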

Since we can get access to any of the files on the server, we can examine all the configuration files, and those which control the start-up are of particular interest. Pablo's post was the first that I saw where someone had spotted that init looks for an autorun.sh script in the root of the SD file system, which can start services which aren't normally launched. There seems to be only one method quoted for starting an FTP service:
tcpsvd -E 21 ftpd -w / &
There are more ways given for starting the telnet service, and it looks for all the world as if this revision of the Transcend card has a non-working version of telnetd (a lot of the utilities are in a single busybox executable), so Pablo resorted to getting a complete busybox, installing it and using
cp /mnt/sd/busybox-armv5l /sbin/busybox
chmod a+x /sbin/busybox
/sbin/busybox telnetd -l /bin/bash &

This was the only one which worked for me. Neither FTP nor telnet needs any credentials: with telnet access it doesn't take long to find the Linux kernel version, that the Wi-Fi is an Atheros AR6003 11n, and that the package is a KeyASIC WIFI-SD (searching for this turns up pages by people who have already been down this track) – or more specifically a KeyASIC Ka2000 EVM with an ARM926EJ-S CPU, which seems to be used in tablets as well.

Poking around inside the system there are various references to "Instant upload" and to G-PLUS, but there doesn't seem to be anything present to upload to any of the services I talked about before. When shooting gigabytes of photos it doesn't really make sense to send them up to the cloud before reviewing and editing them; in fact even my one-generation-behind camera creates problems of data volume. File transfer with FTP is faster than HTTP, but it is still slow: HTTP manages about 500KBytes/sec and FTP between 750 and 900KBytes/sec. That's just too slow, much too slow. Looking at some recent studio shoots, I've used 8GB of storage in 2 hours: averaging a bit more than 1MB/second. With my K5, RAW files are roughly 22MB, so each takes about 45 seconds to transfer using HTTP; but the camera can shoot 7 frames in a second – and then spend five minutes transferring the files. It's quicker to take the memory card out of the camera, plug it into the computer, copy the files and return the card to the camera. The card might get away with light use, shooting JPGs, but in those situations – which usually mean wandering round snapping a picture here and a picture there – would your WiFi-connected machine be set up and in range?
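The arithmetic behind that 45-second figure is simple enough to check in one line:

```shell
# ~22MB RAW file over HTTP at ~500KBytes/sec: time per frame in seconds.
raw_kb=$((22 * 1024))      # file size in KBytes
echo $((raw_kb / 500))     # prints 45
```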

The sweet spot seems to be running something on a laptop, tablet or phone to transfer preview JPGs – using lower-than-maximum resolution and some compression rather than best quality (the worry here is forgetting to go back to best-possible JPEG and turning RAW support off). In this situation it really is a moot point which end is the client and which end is the server. Having the card upload every file to the cloud is going to run into problems with the volume of data, connecting to access points and so on; so is pulling any great number of RAW files off the card. Writing apps to do this might be fun, and of course there's a world of possible hacks for the card itself.

July 23, 2009

Oink, flap – Microsoft releases software under GPL – oink, flap

Mary-Jo has a post about the release of our Hyper-V drivers for Linux entitled Pigs are flying low: Why Microsoft open-sourced its Linux drivers. It's one of many out there, but the title caught my eye, so I thought I'd give a little of my perspective on this unusual release. News of it reached me through one of those "go ahead and share this" mails earlier this week, which began

NEWSFLASH: Microsoft contributes code to the Linux kernel under GPL v2.
Read it again – Microsoft contributes code to the Linux kernel under GPL v2!
Today is a day that will forever be remembered at Microsoft.

Well, indeed… but hang on just a moment: we're supposed to "hate" the GPL, aren't we? And we're not exactly big supporters of Linux… are we? So what gives? Let's get the GPL thing out of the way first:

For as long as I can remember I've thought (and so has Microsoft) that whoever writes a book or a piece of software, paints a picture, or takes a photo should have the absolute right to decide its fate. [Though the responsibilities that come with a large share of an important market apply various IFs and BUTs to this principle.] Here in the UK that's what comes through in the Copyright, Designs and Patents Act, and I frequently find myself explaining to photographers that the act tilts things in their favour far more than they expect. Having created a work, you get the choice whether to sell it, give it away, publish the source code, whatever. The GPL breaks that principle by saying, in effect, "if you take what I have given away and build something around it, you must give your work away too, and force others to give their work away, ad infinitum"; it requires an author of a derivative work to surrender rights they would normally have. The GPL community would typically say: don't create derivative works based on theirs if you want those rights. Some in that community – it's hard to know how many, because they are its noisiest members – argue for a world where there is no concept of intellectual property (would they argue you could come into my garden and take the vegetables that stem from my physical work? Because they do argue that you can just take the product of my mental work). Others argue for very short protection under copyright and patent laws: ironically, a licence (including the GPL) only applies for the term of copyright; after that, others can incorporate a work into something which is treated as wholly their own. However, we should be clear that GPL and open source are not synonyms (and Mary Jo wasn't treating them as synonyms in her title).
Open source is one perfectly valid way for people to distribute their works – we want open source developers to write for Windows, and as I like to point out to people, this little project here means I am an open source developer and proud of it. However, I don't interfere with the rights of others who re-use my code, because it goes out under the Microsoft Public Licence: some may think it ironic that it is the Microsoft licence which gives people freedom, while those who make most noise about "free software" push a licence that constrains people.

What are we doing? We have released the Linux Integration Components for Hyper-V under a GPL v2 licence, and the synthetic drivers have been submitted to the Linux kernel community for inclusion in upcoming versions of the Linux kernel. The code is being integrated into the Linux kernel tree via the Linux Driver Project, which is a team of Linux developers that develops and maintains drivers in the Linux kernel. We worked very closely with Greg Kroah-Hartman to integrate our Linux ICs into the Linux kernel. We will continue to develop the Integration Components, and as we do we will contribute the code to the drivers that are part of the kernel.
What is the result ? The drivers will be available to anyone running an appropriate Linux kernel. And we hope that various Linux distributions will make them available to their customers through their releases. 
WHY? It's very simple. Every vendor would like their share of the market to come from customers who used only their technology; no interoperability would be needed. But in the real world, real customers run a mixture. Making the Linux side of those customers' lives unnecessarily awkward just makes them miserable without getting more sales for Microsoft. Regulators will say that if you make life tough enough it will get you more sales, but interoperability is not driven by some high-minded ideal – unless you count customer satisfaction, which to my way of thinking is just good business sense. Accepting that customers aren't exclusive makes it easier for them to put a greater proportion of their business your way. So: we are committed to making Hyper-V the virtualization platform of choice, and that means working to give a good experience with Linux workloads. We'd prefer that to happen all by itself, but it won't: we need to do work to ensure it happens. We haven't become fans of the GPL: everything I wrote above about the GPL still holds. Using it for one piece of software is the price of admission to the distributions we need to be in to deliver that good experience. Well… so be it. Or, put another way, the principle of helping customers to do more business with you trumps other principles.
Does this mean we are supporting all Linux distributions? Today we distribute integration components for SLES 10 SP2. Our next release will add support for SLES 11 and Red Hat Enterprise Linux (5.2 and 5.3). If you want to split hairs, we don't "support" SLES or RHEL – but we have support arrangements with Red Hat and Novell to allow customers to be supported seamlessly. The reason for being pedantic about that point is that a customer's ability to open a support case with Microsoft over something which involves something written by someone else depends on those arrangements being in place. It's impossible to say which vendors we'll have agreements with in future (if we said who we were negotiating with it would have all kinds of knock-on effects, so those discussions aren't even disclosed inside the companies involved). Where we haven't arranged support with a vendor, we can only give limited advice from first principles about their product; so outside of generic problems which would apply to any OS, customers will still need to work with the vendors of those distributions for support.

You can read the press release or watch the Channel 9 Video for more information.

This post originally appeared on my technet blog.

February 17, 2009

Support for Red Hat OSes on Microsoft Virtualization (and Vice Versa)

One of the questions which comes up on our internal distribution lists for Hyper-V is “when will such and such an OS be supported on Hyper-V ?” and the somewhat frustrating response is usually in the form “We’re talking to OS vendors, but we can’t talk about contract negotiations while they are going on. As soon as we can say something we’ll do it in public”. We have to negotiate certification, support and so on, and even saying we’re talking (or not talking) to vendor X may impact what we’re doing with vendor Y. The OS which comes up most often in this context is Red Hat Enterprise Linux, and we’ve made some public announcements which are a step in this direction.

Here are the key points from Red Hat’s press release:

  • Red Hat will validate Windows Server guests to be supported on Red Hat Enterprise virtualization technologies.
  • Microsoft will validate Red Hat Enterprise Linux server guests to be supported on Windows Server Hyper-V and Microsoft Hyper-V Server.
  • Once each company completes testing, customers with valid support agreements will receive coordinated technical support for running Windows Server operating systems virtualized on Red Hat Enterprise virtualization, and for running Red Hat Enterprise Linux virtualized on Windows Server Hyper-V and Microsoft Hyper-V Server.

The last one is important because it means a customer with an issue can call one vendor, and if the problem appears to lie with the other vendor’s product it’s managed as one streamlined incident. Note that the work hasn’t been completed – the above is written in the future tense. According to Mike Neil’s blog post “Microsoft and Red Hat Cooperative Technical Support”, we will provide integration components for Red Hat on Hyper-V, and Red Hat will provide properly certified drivers for Windows on their virtualization stack.

Microsoft people would prefer customers only used Microsoft products, and Red Hat people would prefer customers only used Red Hat products – we sure aren’t going to stop competing. But the reality is customers use both: and both companies want their customers to have an excellent experience of their respective technologies, which means we have to cooperate as well. This is coopetition in action.

This post originally appeared on my technet blog.

September 10, 2008

Hyper-V and competitors / collaborators

When Ray Noorda ran Novell he coined the term “coopetition” to describe their relationship with us. Microsoft’s Kevin Turner described this as “Shake hands but keep the other hand on your wallet”.

We would love customers to buy ONLY Microsoft stuff (support would be SO much simpler), and competitors would love customers only to buy their stuff. A world where we got 100% of the spending of x% of the customers would be so much neater than the real world, where we get x% of the spending (on average) of 100% of customers. Both we and our competitors want customers to have a great experience of our respective technologies, and that doesn’t happen if we don’t cooperate on some things.

In the virtualization world it means two things: being able to run competing OSes on our virtualization, and being able to run our OSes on competing virtualization – and, in both cases, giving the customer clarity about support.

So first, if you go to http://windowsservercatalog.com/ and click on the ‘certified servers’ link on the right side of the page under the Windows Server 2008 logo, you can check which servers have been validated in the lab – there’s a sub-section for servers validated for Hyper-V.

Second, if you go to the Server Virtualization Validation Program page and click on the ‘products’ link on the left side of the page you can find out which products we support. As you can see, VMware is on the list; their entry says which version of Windows is supported on which version of VMware. Today it’s 32-bit Windows 2008 only, on ESX 3.5 update 2 only. That would tend to make people nervous about older versions of Windows, until you read the section which appears next to each catalogue entry: “Products that have passed the SVVP requirements for Windows Server 2008 are considered supported on Windows 2000 Server SP4 and Windows Server 2003 SP2 and later.” It would be reasonable to expect more products from more vendors to appear on the list, but it’s good to see that VMware was one of the first to pass the tests.

Third: Linux support. Mike Sterling has posted that the Linux Integration Components are now available; the actual link he provides to the Connect web-site seems to be broken, but you can find the components in the Connections Directory.

Steve and I are off to Edinburgh to do the 5th run of our virtualization tech-ed event. Seats are still available for tomorrow (Thursday 11th).

This post originally appeared on my technet blog.

June 18, 2008

Hyper-V PowerShell library – now on Codeplex

I’ve decided to go ahead and post the PowerShell library I have been working on to Codeplex. I wanted to explain various bits of it here before pulling it all together, but that is taking more time than I wanted. I’ve provided early copies to a few people – John Kelbley used them to good effect at the US tech-ed recently – and I haven’t had too many bug reports, so I’ve decided to broaden the audience.

There’s lots of good stuff happening in Hyper-V and scripting right now. Taylor Brown has a blog with more examples in PowerShell; Ben Armstrong has some good stuff too. Ben’s more of a VB guy than a PowerShell one and I’ll probably continue to tease him gently about the fact … he has a very long script to compact a VHD; mine is a four-line function. Interestingly, Taylor’s post contains a “Wait for WMI Job” function and Ben has one built in. I might write a similar function and give my functions which return job IDs a “-WAIT” option in the next major revision. I picked out Taylor and Ben because of their blogs, but there have been other people inside Microsoft who have been a great help pulling this together; I wish I’d kept a list of who had pointed me to this or that so I could thank them.
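For illustration, the shape such a wait option might take is something like this – a sketch under assumptions, not the library’s actual code: the function name Compact-VHD and the -Wait switch are mine, and it assumes the Hyper-V WMI provider’s Msvm_ImageManagementService class in the root\virtualization namespace is present.

```powershell
# Sketch only: compact a VHD via the Hyper-V WMI provider, optionally waiting
# for the asynchronous WMI job to finish. Compact-VHD and -Wait are
# illustrative names, not the library's own.
function Compact-VHD {
    param([string]$VHDPath, [switch]$Wait)
    $svc    = Get-WmiObject -Namespace "root\virtualization" -Class "Msvm_ImageManagementService"
    $result = $svc.CompactVirtualHardDisk($VHDPath)
    if ($Wait -and $result.ReturnValue -eq 4096) {             # 4096 = job started asynchronously
        $job = [wmi]$result.Job
        while ($job.JobState -eq 3 -or $job.JobState -eq 4) {  # 3 = starting, 4 = running
            Start-Sleep -Seconds 1
            $job = [wmi]$result.Job                            # re-fetch to refresh JobState
        }
    }
}
```

Called without -Wait it would return immediately, which is how the job-returning functions behave today; with -Wait it polls the returned CIM_ConcreteJob until it leaves the running states.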

The source code is open (as PowerShell tends to be) and distributed under the Microsoft Public License (Ms-PL).

This post originally appeared on my technet blog.

January 24, 2008

Vista vulnerabilities – a comparison.

Filed under: Apple,Linux / Open Source,Security and Malware,Windows Vista,Windows XP — jamesone111 @ 10:32 pm

Perhaps it’s a bit strong to say “if complete and utter chaos was lightning, Jeff Jones would be the sort to stand on a hilltop in a thunderstorm wearing wet copper armour and shouting ‘All gods are bastards’ ” (as a favourite quote has it), but you must admit it’s a better opening than “Blimey, XP was better than we thought”, or “See, there was no need to wait for Vista SP1”.

Jeff, you see, has posted on his blog an analysis of vulnerabilities in the first year of life of Windows Vista, Windows XP, two popular Linux distros and Apple’s Mac OS X 10.4. Here are the bare numbers (though you should read the whole thing):

Metric                             Windows Vista   Windows XP   Red Hat rhel4ws (reduced)   Ubuntu 6.06 LTS (reduced)   Mac OS X 10.4
Release date                       30-Nov-06       25-Oct-01    15-Feb-05                   01-Jun-06                   29-Apr-05
Vulnerabilities fixed              36              65           360                         224                         116
Security updates                   17              30           125                         80                          17
Patch events                       9               26           64                          65                          17
Weeks with at least 1 patch event  9               25           44                          39                          15

To explain the numbers a little: an update might fix more than one vulnerability, and more than one update might go out in a patch event. Apple seem to roll all their fixes for a given event into a single update.

Vista is the newest of these operating systems, and you could argue that the art of software engineering has advanced. But then why did a 2001 Microsoft OS fare so much better than 2005/6 products?

With all the claims of the Linux community like “with many eyes, all bugs are shallow” – how did Red Hat have 360 vulnerabilities ? They released patches in 44 weeks out of 52, and 20 of their patches came in weeks when there had already been a patch. Ubuntu didn’t fare much better on that score.

If security vulnerability counts are indicative of bugs in general then Vista shipped in a better state than XP. Vista will go longer to SP1 than XP did, and it seems that they’ll have roughly the same number of vulnerabilities fixed at SP1.

So that’s all good – why then the “wet copper armour” quote (and Gizmodo agrees with me) ? Well, to bend another favourite quote, “The Internet is more full of exciting trolls and excruciating fan boys and girls than a pomegranate is of pips”. Most times I mention Apple I get visited by one set or the other. Jeff just called their babies ugly. He’s happy to discuss it: his document explains how he got to the numbers and he encourages people to do their own analysis. And he faces down the point that “Of course you think the Microsoft products are good because you work for Microsoft” by pointing out it’s the other way around: he works for Microsoft because he thinks the products are good. Like me. Like most of us.

This post originally appeared on my technet blog.

December 2, 2007

It must be the truth #2. Astroturf and a rather less green Apple.

Filed under: Apple,Linux / Open Source,Mobility — jamesone111 @ 12:10 am

Silly me, believing stuff I read on the Internet. First there was The Register’s story about only 26,000 iPhones being activated in the UK.

Next came Electronics Weekly’s usual “Made by Monkeys” e-mail – I’m not sure how I ended up on the list for it, but I haven’t unsubscribed because once in a while there’s a gem in there, like Gender specific user interfaces, which people seem to find funny on multiple levels. Directly below that there is something about An Apple Power Mac G5 Oozing Coolant.

Then Russ came along with a comment to my iPhone post At least the iPhone isn’t packed with ‘toxic chemicals’! That smartphone is making me feel ill! and a link to a Greenpeace report where Microsoft got a pretty bad review. We’ve made some environmental progress (especially in software packaging) but Microsoft can’t claim to be up with the leaders in the hardware industry. But (Russ) Microsoft doesn’t make smartphones and PDAs – we only supply the OS to companies like Dell, HP, HTC, Motorola, Palm, Samsung and Toshiba (HTC and Palm aren’t on the Greenpeace survey; of the others, only Motorola ranks below Apple). Having seen that corrosive coolant story I wanted to find the link to it. I was sure I’d seen it on The Register but couldn’t find it; a quick search turned it up, but not before I’d got side-tracked into two other items…

One was from Greenpeace (again). Titled “Missed call: the iPhone’s hazardous chemicals”, it says that in May “Steve Jobs, the boss of Apple, claimed: ‘Apple is ahead of, or will soon be ahead of, most of its competitors’ ” on environmental issues. Yet when the iPhone launched in June there was no mention of any green features of the phone from Apple. So they tested one and criticized its use of PVC and brominated flame retardants. [Their criticism of Microsoft’s hardware centred on the presence of these two, and the slow schedule we have for phasing them out.] They also comment: “The disassembling also revealed the iPhone’s battery was, unusually, glued and soldered in to the handset. This hinders battery replacement and makes separation for recycling, or appropriate disposal, more difficult, and therefore adds to the burden of electronic waste.” I thought a non-user-changeable battery was bad, but soldered and glued ? That’s just perverse.

The other was back at The Register, this time about Dell shipments of Ubuntu Linux. The Linux community bombarded Dell with 130,000 requests on their Idea-storm web site, but as The Register put it, only a fraction of these zealots were willing to back their votes with cash. Dell have sold something like 20 million PCs in the last six months, of which – if The Register is to be believed – 40 thousand are running Ubuntu. That’s 2 PCs in every thousand: 0.2%. Now I don’t want to big up Linux’s market share, and I don’t know what proportion of that is accounted for by Ubuntu; but I would have bet that it had more than 0.2% of the market. Do people who want Linux build their own PC (or have it built to their own spec) rather than go to the likes of Dell ? I don’t know. But what about those 130,000 requests? Were they all distinct individuals ? Were they potential customers ? Or did Dell fall victim to an astroturf campaign ?

This post originally appeared on my technet blog.

November 1, 2007

Linux Virtual Machine Additions 2.0

Filed under: Linux / Open Source,Virtualization — jamesone111 @ 12:43 pm

I had to check the URL for Port 25 in the last post (is it technet… or MSDN ? It’s http://port25.TECHNET.com), and in doing so I spotted a post I’d missed from a few days ago.

The Virtual Machine Additions for Linux 2.0 download is now available. This is to provide better support for qualified distributions of Linux running on Virtual Server 2005. I understand that work is going on for the equivalent software for Windows Server Virtualization, but no-one’s ready to share a date for when these will be available even as a beta.


This post originally appeared on my technet blog.

Security and blogging.

Filed under: Linux / Open Source,Security and Malware — jamesone111 @ 12:19 pm

This would normally be one for Steve, but he’s got a few days away…

Kim Cameron’s blog got hacked; normally I’d just say “Blog hacked: film at eleven”. Except Kim is a big noise in the Microsoft security world. ZDNet broke the story, and the comments to it show anti-Microsoft folks out there laughing themselves silly. It’s not such a silly assumption that the blog is on Microsoft technology and this is the result of a security hole in that Microsoft technology. But it’s wrong. As Kim points out, the blog “is run by commercial hosters (TextDrive) using Unix BSD, MySQL, PHP and WordPress – all OSS products. There is no Microsoft software involved at the server end – just open source.” (IE7 Pro let me check that from the status bar – calling up this page at Netcraft.) Ha ha ha. It’s a security hole in a competing technology… Actually even that’s wrong. It was a vulnerability in the application (WordPress), now fixed. Application vulnerabilities happen; I don’t think WordPress is any more or any less prone to them than anything else.

But what’s this ? A Microsoft person who keeps a blog on a FreeBSD system. Don’t we all swear never to use open source, before we even get the implants ? As Cameron says, “I like WordPress, even if it has had some security problems, and I don’t want to give it up”. It astonishes people that Microsofties are free to use something they like. That’s what customers do; a lot of the time that’s why they choose Microsoft, but not always – and that’s why we have sites like Port 25.

And a metaphorical tip of the hat to Kim; that post handles some pretty troll-like comments about the breach in a very deft way.

This post originally appeared on my technet blog.

October 24, 2007

Post removed

Filed under: General musings,Linux / Open Source,Working at Microsoft — jamesone111 @ 8:24 am

This article has been removed.

The article implied a number of traits about Mr Richard Morrell which are entirely without basis.

I apologise for both the factual inaccuracy and the offence caused to Mr Morrell through this article.



This post originally appeared on my technet blog.

April 6, 2007

If freedom of choice doesn’t do it for you how about lower taxes ?

Filed under: Linux / Open Source,Office — jamesone111 @ 12:06 pm

A couple of days ago I wrote about our online petition to the BSI concerning Open XML. It’s always nice to be noticed by Mary Jo Foley, but she asks:

Am I the only one, in reading Microsoft’s rationale for ISO standardization, who finds it ironic that Microsoft is citing “customer choice” and “interoperability” as the motivators for its moves?

Wouldn’t it be more genuine (to use another Microsoft buzzword) to admit that Microsoft is seeking standardization for Open XML because there is a growing number of customers — especially government customers — whose purchasing contracts require approved-standards-based technologies?

Mary Jo also wondered how long we Microsoft bloggers can be transparent. Which is what I’m trying to be here. Microsoft have done well out of customer choice. People chose Word for Windows over WordPerfect, Excel over Lotus 1-2-3, NT and Windows Server 200x over Novell NetWare, Windows over OS/2 Warp. Companies like RealNetworks complained – with some success – that when customers had a Microsoft product put in front of them, they chose to use it rather than get something like RealPlayer. When our products weren’t good enough, customers chose something else (remember Multiplan ?) and we went back and improved the product (most of the time). In a successful company the best incentive to improve your product is having your peers know yours is the one taking a kicking.

What are the people who want to compete in that area doing ? Are they bringing forth evidence that users are more productive with their products ? No. Evidence that software which is sometimes ‘free’ costs less over its life ? No. That they have more and better features ? No. That users like their software better ? No…

When the law is against you argue the facts. When the facts are against you, argue the law
Legal proverb

The facts favour us. So what would you expect a competitor like IBM to do ? Argue the law… I’m a veteran of these arguments. In the early 1990s I was involved in selling into UK local government against ICL – who argued, often successfully, that their networks were the ONLY ones you could buy which implemented Open Systems Interconnection standards. You might think that a system from only one vendor might not be open… But then you might think the kind of market share Office has makes it some kind of standard. Governments don’t do de-facto standards. You or I might think that SMTP is a standard, but it’s not defined by a true standards body; X.400, on the other hand, was defined by the CCITT (now called the ITU – their Wikipedia page describes the CCITT as “a rather slow and deliberate organization”). Where was X.400 mostly adopted – and where are its last pockets still found ? Government – the only place where deadlines are infinitely flexible (in 1949 the usage scenarios for a UK government IT system were published, with an expectation it would be complete in 35 years; it is only nearing completion today), and where the budget for consultants knows no limits either – useful if you’re in the consulting business like, say, IBM. So can IBM and its allies somehow use standards to do what they couldn’t do with development – get governments to buy their software ? Talking to Mary Jo, our GM for Interoperability, Tom Robertson, put it like this: “The discussions around Open XML and ODF are a proxy for product competition in the marketplace… …In general, we are not hearing about this issue from our enterprise or consumer customers – it is localized to governments today.”

So here’s the cunning plan.

  1. Get a basic office automation suite.

  2. Get its file format approved on a fast track by a standards body

  3. Set to work convincing governments that they must follow standards. Productivity and cost don’t matter (tax payers don’t notice)

  4. Make sure that your file format is the only one approved.

This last step is important. If you have a good product, or a good format, why does it matter if the competitor is approved too ? But if you have an inferior product, it would take away your main selling point. IBM tried to stop Office Open XML becoming an ECMA standard. When that failed, their argument became that only ISO standards are proper standards, and ECMA standards aren’t (in which case why do they participate in ECMA ?).

A huge proportion of tax payers think the software which makes us productive is Office. Businesses find that Office gives them a good return on investment. We are free to choose Office; and given the freedom to do so, most governments will do the same. As a Microsoft shareholder and employee I’d only be very slightly troubled if people chose a Microsoft product without even considering the alternatives. But as a tax payer, I think government has a duty to spend my money in the most effective way, and that means checking all the options. When they’ve done that they tend to choose the industry-leading software for the same reasons that everyone else does.

Oh, one last thing: Mary Jo finds our citing of interoperability as a motivator ironic (and we Brits say the Americans don’t understand irony). The old formats for Word, Excel and PowerPoint have their roots in the 1980s. Customers, partners and (yes) competitors wanted to be able to manipulate the files outside the applications. But the file formats were badly adapted for that: they had little in common with each other, we didn’t publish them, and there was always the spectre that we’d claim some infringement of intellectual property rights. What we have done for Office 2007 is provide a format that is common to all the applications, that can be manipulated, and – via the Open Specification Promise which I’ve talked about before – carries no threat of intellectual property lawsuits. We don’t need a third party to be stewards of the standard, but which attitude is preferable: “we’re the biggest player, so what we do is a de-facto standard; we’ve put what you need on our website” or “we’ve fully documented it and handed it over to a third party” ?


Technorati tags: Microsoft, office, OOXML, ODF, ECMA, IBM, FUD, ISO

This post originally appeared on my technet blog.

April 3, 2007

Sign-up for freedom of choice

Filed under: Linux / Open Source,Office — jamesone111 @ 11:53 pm

Back in February I wrote about IBM and their attempts to throw a spanner in the works for the Open XML format used in Office 2007.

The key bits of the story are

  • IBM has long campaigned for Microsoft to open up its file formats – which we have done in Office 2007 with the Open XML format.
  • IBM has thrown its weight behind the Open Document Format, and against Open XML.
  • ODF has been ratified by ISO.
  • IBM lobbies governments to use de-jure standards (e.g. from ISO) for document formats.
  • Despite opposition from IBM, the other 20 members of the ECMA standards body ratified Open XML and passed it to ISO for adoption.
  • IBM is actively trying to persuade governments that only ISO standards count and ECMA standards are somehow not truly standards.
    Anyone who accepted this would – at the moment – be locked out of buying Microsoft Office, or the products from Novell and Corel.
  • IBM is trying to prevent ISO from adopting Open XML as a parallel standard to ODF, by lobbying ISO voting members like the British Standards Institution.

We have launched an on-line petition which we will present to the BSI to show there is support for Open XML. If you think it would be better for Open XML to be approved by ISO, please consider signing it.


Technorati tags: Microsoft, office, OOXML, ODF, ECMA, IBM, FUD, ISO

This post originally appeared on my technet blog.

February 14, 2007

Not exactly a valentine for IBM

Filed under: Linux / Open Source,Office — jamesone111 @ 7:56 pm

One of those odd coincidences. Back on January 26, when I was blogging about IBM FUD, I mentioned an Information Week article where IBM made the assertion that “a so-called ‘open XML’ platform file format, known as OOXML, is designed to run seamlessly only on the Microsoft Office platform.” This is odd because companies like Corel and Novell are baking it into their products. Odd too that 20 out of 21 voting at ECMA voted for it, if it was not designed to be a standard.

On the same day Jerry Fishenden was writing about the same area on his blog. It’s a good read: here’s a snippet

“Watching the daily pantomime of what has happened with these Ecma Open XML file formats in the ISO process has made me realise that for some people this has never been about “open” and about interoperability and doing the right thing for the user. Instead, it seems to be about trying to build a prescriptive mandate for a single file format: ODF, the file format used by Open Office and Star Office. And blocking anything that might threaten that prescriptive model.”

Jerry can spot FUD when he sees it. As I said before, the first step when looking to spread FUD is establishing credibility, and choosing the right name is critical: if you are arguing in the domain of competing IT systems, it’s vital to work under a name which makes you sound open and interoperable. The second step is to make assertions which go unchallenged, from which you can extrapolate. Jerry spotted a couple of “great examples of the type of blatant, er, ‘terminological inexactitudes’ (or ‘open double-standards’ if you prefer)” doing the rounds from an organisation known as “Open Forum Europe” (OFE). Great name for an IBM lobby group, that. And they say “There are also serious doubts that the standard could be implemented outside the Microsoft environment, due to license requirements that are not made explicit.”

So: we’ve handed the format over to ECMA – who have passed it on to ISO – and various people are implementing to it, but “licence requirements that have not been made explicit” mean some people have doubts. Who has doubts ? We don’t know. What licence requirements ? No-one knows – they’re not explicit and only IBM seems able to see them. It helps IBM if people believe that Open XML is “Microsoft only” – even if a Microsoft-backed site is plugging GPL code for working with Open XML in PHP. Of course Jerry knows what this is; he calls it “just plain old fashioned FUD”.

So much for a couple of weeks ago. Today we have published an OPEN LETTER which “shines a bright light on IBM’s activities”. Here are my two main take-aways:

  • Microsoft does not oppose IBM’s chosen standard, ODF. We have ensured there is 3rd-party support for ODF in Office 2007, which gives customers choice. By contrast, IBM is working against customer choice. They’re lobbying to make ODF a requirement in public procurements. The members of the ECMA committee had some serious clout, and who was the only one to vote against Open XML ? IBM, of course. They are now trying to stop ISO even considering it – on the grounds that ODF is already a standard. Should the first technology to reach the standards body, regardless of technical merit, get to preclude other related ones from being considered ? I don’t think so – that would make “quick” better than “good”. In any event there are multiple ISO standards for documents (e.g. PDF/A and HTML) and pictures (JPEG and PNG).
  • IBM is hypocritical on two counts. They claim to be for customer choice, but these actions show otherwise. And they pushed for Microsoft to standardize document formats and to make the related intellectual property available for free – which we did with Open XML. Yet they try to block it at every turn.


Read the open letter for yourself, and keep it in mind when you hear IBM talking about this subject.



This post originally appeared on my technet blog.

January 26, 2007

Another lesson in FUD, from a past master.

Filed under: General musings,Linux / Open Source,Office,Windows Vista — jamesone111 @ 6:48 pm

A little culture to lead into the weekend. A quote from Voltaire no less.

I have never made but one prayer to God, a very short one: “O Lord make my enemies ridiculous.” And God granted it.

Someone sent me a link to the story “Rivals Attack Vista As Illegal Under EU Rules”.

We’ve heard of the self-styled “European Committee for Interoperable Systems” before. It includes IBM, Nokia, Sun Microsystems, Adobe, Corel, Oracle, RealNetworks, Red Hat, Linspire and Opera (note the European nature of its members. The interoperability track record of IBM, Oracle, and RealNetworks is less than great; even Adobe blocked the inclusion of PDF support in Office 2007). My guide to FUD pointed out the first step was, broadly, establishing credibility. If you are arguing in the domain of science or engineering, position yourself as wise and expert; if it is the solution of social problems, position yourself as “new”; and if it is one of competition, position yourself as open and interoperable. “American IT lobbying against Microsoft” would be truthful, but would not help when the goal is to lobby the European Commission. I’ve written before that they equate “pro-competition” with “pro-Microsoft-competitors”, so it’s pretty fertile ground for this group.

Any statement from that group should be examined for signs of FUD, and they’ll usually be found.

Step 2 in the guide to FUD: make assertions which will go unchallenged, e.g. “Microsoft is dominant in the market. Anything Microsoft does is intended to increase or cement that dominance”. So the article tells us:
“Vista is the first step of Microsoft’s strategy to extend its market dominance to the Internet,” the ECIS statement said.
To borrow the famous Mandy Rice-Davies quote, “They would say that, wouldn’t they?” But “the first step of Microsoft’s strategy” ? Gee, the Internet’s been around for a while, and we – this powerful player committed to expanding our dominance – are only just getting round to the first step of a strategy…

On to Step 3: extrapolate from your assertions. So what would you expect ? IBM can see a commercial opportunity in the “Open Document Format” standard, while Microsoft is on the side of the Office Open XML standard which is backed by ECMA. According to this news story, “Bob Sutor, who is vice president of open source and standards at IBM, confirmed that IBM voted against adoption of OOXML at the Ecma general assembly”. IBM have been arguing for ODF and against OOXML in any way they can. There have even been accusations that they stooped to putting false information into Wikipedia*. So what did their mouthpiece tell the EC, according to the article ?
“They said a so-called “open XML” platform file format, known as OOXML, is designed to run seamlessly only on the Microsoft Office platform.”

So the real story is: IBM having lost at ECMA is trying its luck at the EC, through a Front Organization.

Have they finished ? No. The article also tells us:
Microsoft’s XAML markup language was “positioned to replace HTML”, the industry standard for publishing documents on the Internet. Microsoft’s own language would be dependent on Windows, and discriminatory against rival systems such as Linux, the group says.

It’s the most ludicrous kind of scaremongering. The idea of Microsoft somehow getting everyone on the Internet to abandon HTML at all, never mind in favour of something closed, really deserves to be laughed at. The scary thing is that the European Commission seems to be full of people who fall for this stuff.

* Before I leave this: there’s been a bit of a storm about what’s on Wikipedia about Office Open XML. Pro-OOXML people have accused pro-ODF people at IBM of using Wikipedia to spread disinformation. It got to the point where someone from Microsoft asked an independent expert in XML to have a look at it. This story has been turned into “Sneaky Microsoft spin machine pays people to falsify Wikipedia”. When I first heard the story I thought the person behind it should be fired (especially when I saw luminaries like Dave Winer saying we were absolutely wrong). Then I read the page where the story was broken by the guy who did/would have done it, and the mail asking him to do it was posted here. I changed my view; I’d encourage anyone else to read those two pages and make up their own mind.


This post originally appeared on my technet blog.

November 8, 2006

Is Novell our Eurasia ?

Filed under: General musings,Linux / Open Source,Office,Virtualization — jamesone111 @ 2:57 pm

Most people, I think, know that Hamlet does not say “Alas, poor Yorick, I knew him well”, and Mae West never said “Come up and see me some time”. When I was active in Microsoft’s private trainer newsgroups, a misquote from George Orwell would come up frequently, in the form “But, Thomas, we have always been at war with Eurasia” or “But we have always been at peace with Eurasia”. The nearest 1984 gets to this is in Chapter Five:
“She did not remember that Oceania, four years ago, had been at war with Eastasia and at peace with Eurasia. It was true that she regarded the whole war as a sham: but apparently she had not even noticed that the name of the enemy had changed. ‘I thought we’d always been at war with Eurasia’ ”

And in Chapter Nine we get this:
“On the sixth day of Hate Week… [when] the general hatred of Eurasia had boiled up into such delirium … at just this moment it had been announced that Oceania was not after all at war with Eurasia. Oceania was at war with Eastasia. Eurasia was an ally.”

{Side notes:
1. What did we do before search engines, on-line copies of books and so on?
2. How did I manage before IE7 made it so easy to try more than one search engine for this stuff?}

Fun though it might be to paint a picture of Microsoft as something from Orwell (it makes a change from Star Trek), give or take a little bit of Newspeak, it’s not really like that. However, I’ve been having the “We’re not after all at war with Eurasia” feeling following our announcement with Novell last week. I can’t quite get my head around it; yes, I get what’s been announced:

  1. Technical cooperation: the two companies will work together to deliver new solutions to customers in the areas of virtualization, Web services management and document format compatibility.
  2. Patent cooperation: both will provide patent coverage for each other’s customers, giving customers peace of mind regarding patent issues.
  3. Business cooperation: a commitment to dedicate marketing and sales resources to promote joint solutions.

In one place I read that the technical cooperation was supposed to include a joint facility mid-way between Provo and Redmond. Any residents of Horseshoe Bend, Idaho will be overjoyed. We announced support for SuSE Linux on Virtual Server in April, and in July we announced support for ODF. Last month ECMA TC-45 announced the final draft of its definition of Open XML (the native format for Office 2007) – and Novell have been on that committee. So only Web services management is news.

The patent issue is interesting because Microsoft customers should have that peace of mind already, through Microsoft Intellectual Property Indemnification; it’s great news for SuSE customers, though. The conspiracy theorists have been having fun debating what drove it – did Novell have some patent(s) which could hurt Microsoft, and vice versa? Are we getting behind Novell purely to be well placed to stab them in the back? And so on. I’ve no idea on either.

And then there is the issue of joint solutions. When Ray Noorda ran Novell he coined the term “coopetition” to describe their relationship with us (until checking this I hadn’t heard that Ray died last month; sad news). Microsoft’s Kevin Turner described this as “Shake hands but keep the other hand on your wallet”. Talking of shaking hands, look at the photo of Steve B and Ron Hovsepian (Novell’s CEO) – who looks cheerful, and who looks like they’ve stepped in something sticky?
Novell has been trying to re-invent itself since Ray’s time (its attempts to do so under Eric Schmidt were disastrous). Is this another reinvention? Or a realization that Ray’s idea of coopetition was smart after all? Both companies would like 100% of a customer’s business, but it doesn’t always work out that way; if that’s what the customer wants, let’s give them the best experience of each of the technologies, and that means working together. Obviously Novell want their Linux solution to be used rather than another flavour, and they now have the odd position of being “Microsoft’s preferred Linux”. Again, the conspiracy theorists would like to think this is somehow designed to isolate Novell from the rest of the open source players, but as a “mixed source” company maybe they’re closer to us anyhow.

I’m going to be fascinated to see how this one unfolds.



This post originally appeared on my technet blog.

August 22, 2006

Microsoft and open source. IE and Firefox

Filed under: Internet Explorer,Linux / Open Source — jamesone111 @ 10:26 pm

Here’s something I bet most people thought they’d never see (thanks to InformationWeek). We’re making sure we help Firefox development.

We have an Open Source Software Lab in Redmond – they have an interesting web site at http://port25.technet.com. It makes a good story to portray Microsoft and the open source world as engaged in a full-blooded fight to the death, but life is rarely that simple. A lot of Microsoft customers have some open source software (and vice versa); as well as wanting to understand what we’re competing with, we want things to work well for our customers.

We’d rather people made IE their browser of choice. But if they’re going to run Firefox, we don’t want them to have a rotten experience of their Windows system as a result. And we’d certainly prefer them to use Firefox on Vista than on XP or something even older. There’s no sense in deterring people from upgrading because the browser they happen to prefer doesn’t work so well on the new OS.

I also read a great post on the IE team blog this evening; they’ve set out a list of things that are fixed in CSS support, but the key stuff is at the end – there’s tons of information around IE 7, and you can start at the Information Index for IE 7.

A couple of things I’d pick out are the IE 7 Readiness Toolkit and the Checklists for Developers, IT Professionals and Consumers; if you support web clients or servers, these tell you where you should concentrate your efforts.

Tagged as: Microsoft, Windows Vista, IE, Internet Explorer, Firefox, Open Source

This post originally appeared on my technet blog.

April 4, 2006

Virtual Server Big News, Linux shock and clarity.

Filed under: Linux / Open Source,Virtualization — jamesone111 @ 1:41 pm

Last week I posted about an advance notice of an announcement that was due this week – it was far from clear. A few hours later I got advance notice of the Virtual Server news, which came out yesterday and was a model of clarity.

Item 1: On Monday April 3rd, Virtual Server 2005 R2 Enterprise Edition will be available at no charge, free, gratis, complimentary, on the house. This is the full Virtual Server 2005 R2 Enterprise edition available in both 32-bit and 64-bit versions that we shipped in Q4 CY2005. This is not a trial or limited version in any way. This is a fully-supported product, not some unsupported Trojan horse designed to get you to update a multi-thousand dollar per-processor, per-server product. Virtual machines created today with Virtual Server will be able to migrate into our hypervisor based Windows virtualization. This is a risk free proposition and it’s the real deal.

That’s pretty clear I’d say.

Item 2: On Monday April 3rd, we are also announcing our Linux support (see below) and availability of our VM Additions for Linux. In case you’re wondering how these distributions were picked, it’s simple: we asked and listened to our valued customers.

We’re supporting 4 enterprise and 5 standard distributions: Red Hat Enterprise Linux 2.1 (update 6), Red Hat Enterprise Linux 3 (update 6), Red Hat Enterprise Linux 4, SuSE Linux Enterprise Server 9; Red Hat Linux 7.3, Red Hat Linux 9.0, SuSE Linux 9.2, SuSE Linux 9.3 and SuSE Linux 10.

To get Virtual Server, go to the download or order page and click the link under “Register” on the right-hand side. The Linux components are at Microsoft Connect (click “Available programs” on the left). Both require a Passport to register.

The mention of a hypervisor in that mail is quite important – Service Pack 1 for Virtual Server 2005 R2 will support the virtualization capabilities in new chips from Intel and AMD, and that’s a stepping stone to the new architecture that will be in Longhorn Server.

This is good on its own, but combined with

it looks like the virtualization group (now including my predecessor in this job) are really on a roll.

This post originally appeared on my technet blog.

Blog at WordPress.com.