James O'Neill's Blog

December 10, 2008

Virtualization: user group and good stories

Filed under: Events,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 11:05 am

Details of the next Microsoft Virtualisation* User Group meeting are now up on www.mvug.co.uk!

Where: Microsoft London (Cardinal Place)
When: 29th January 2009, 18:00 – 21:30
Who & What:
Simon Cleland (Unisys) & Colin Power (Slough Borough Council)
  Hyper-V RDP deployment at Slough Borough Council
Aaron Parker (TfL)
  Application virtualisation – what is App-V?
  Benefits of App-V & a look inside an enterprise implementation.
Justin Zarb (Microsoft)
  Application virtualisation – an in-depth look at App-V architecture

I presented at an event for Public Sector customers recently and the folks from Slough Borough Council were there. I thought they were a great example because so many of the things we talk about when we’re presenting about Hyper-V actually cropped up with their deployment.

We’ve got another great story from the Public sector at the other end of the British Isles – Perth and Kinross council reckon Hyper-V will save £100,000 in its first year.  

However, the best virtualization story was one told by one of our partners on the recent Unplugged tour. Virtualization reduces server count, and that’s great for electricity (cost and CO2), maintenance, capital expenditure and so on. But they had a customer who didn’t care about any of that. The walls were cracking in their office, and the reason was the weight of servers in the room above. According to the structural engineer, they had overloaded the floor of the server room by a factor of 4 and there was a risk that it would collapse onto the office staff below. That’s the first story I’ve heard of virtualization being used to reduce weight.

 

* Being British, the MVUG like using the Victorian affectation of spelling it with an S.

This post originally appeared on my technet blog.


November 20, 2008

Road-show materials.

Filed under: Events,Windows Server,Windows Server 2008 — jamesone111 @ 11:46 am

I’ve had a couple of requests for the slides and related information from the “Unplugged” tour.

My Keynote slides

My Hyper-V cluster setup and Jose Barreto’s blog post on getting Storage Server for iSCSI 

Matt’s slides on Management and SCVMM (these are in a password-protected zip; the password is Password 123)

Clive’s Slides on Management and SCVMM

My slides on Terminal Services

 

I’ve given the link to Jose’s post for iSCSI in the list above, but there are other iSCSI solutions – I normally mention Nimbus but I forgot to mention it in Monday’s session. Thanks to Dave for the reminder.

This post originally appeared on my technet blog.

November 14, 2008

Server 2008 R2 – Server Core changes.

Filed under: Beta Products,Powershell,Windows Server 2008 — jamesone111 @ 2:43 pm

As I said in my last post I have a whole stack of things to talk about in the aftermath of Tech.ed in Barcelona. One of those is the changes made in Server Core.

Fortunately Andrew over at the Windows Server Core blog has saved me a job; his post has more detail, but the major change is a subset of the .NET Framework for Core. This allows two really important things: PowerShell 2 and ASP.NET. Today Server Core has no .NET Framework support, so that means ASP or plain HTML pages, but no ASP.NET. ASP.NET is obviously a step forward because you shouldn’t really have to care whether a web server is on Core or on full Windows when you decide what it will serve up. But PowerShell? I keep telling people that you manage Windows Server Core remotely. You don’t log on to the console and fire up PowerShell… but with PowerShell 2 we have much richer remoting, and remotely managing a machine with PowerShell means you run the shell on Machine A and specify that you want to run a command on Machine B.
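To make that concrete, here’s a minimal sketch of what PowerShell 2 remoting looks like; the server name “Core01” is just a placeholder, and remoting has to be enabled on the target first:

    # One-off, on the Core box: enable PowerShell 2 remoting
    Enable-PSRemoting

    # From Machine A, run a command on Machine B
    Invoke-Command -ComputerName Core01 -ScriptBlock { Get-Service | Where-Object { $_.Status -eq "Running" } }

    # Or work interactively on the remote machine
    Enter-PSSession -ComputerName Core01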

We also have File Server Resource Manager support, and the ability to add Certificate Services. Finally, Server 2008 R2 is 64-bit only; support for 32-bit applications will be available, but the plan is that you will have to choose to install it, to keep the footprint as small as possible.

 

This post originally appeared on my technet blog.

November 2, 2008

Virtualization videos

Filed under: Powershell,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 2:52 pm

I mentioned I was presenting on Virtualization a week or so ago, and Matt was presenting another of the sessions. Without wanting to make a big deal of it: since I’ve got to know Matt I’ve come to value his opinion, and he used my library from CodePlex to demo building VMs from a script. He had some really nice things to say about it, which did my ego a power of good.

John Kelbley is a chap I know less well, but it’s always nice to have people who work in the product team telling you that you’ve done a great job. John pinged me a mail to say that his discussion from the US Tech-Ed with Alexander Lash about script-based management for Hyper-V, including a discussion of PowerShell and WMI where they use my library, was posted. It’s available for download:
WMV High res version
WMV Low Res version
MP4 Version
MP3 Version

I’m going to be recording at least one video session on PowerShell at Tech-ed Barcelona this week, and with a bit of luck we’ll get one together on Virtualization as well.

Bonus link: before my ego gets out of control, thanks to Matt for the link to a great Silverlight demo of System Center Virtual Machine Manager.

This post originally appeared on my technet blog.

October 31, 2008

High Performance Computing webcasts over the next two weeks

I haven’t really said much about Windows HPC Server 2008, which we launched recently as the successor to Compute Cluster Server. But there are some useful webcasts lined up; they’re early in the morning Redmond time, which puts them at the end of the working day for those of us in Europe.

HPC Webcast Series: High Performance Computing for the Masses with Intel

November 4, 2008, 9:00am PDT

High Performance Computing has its roots in non-commercial, academic, national lab and governmental environments where very large, compute-intensive workloads and applications are executed.  According to recent data from IDC HPC, economically-attractive clusters have become the common denominator across most HPC deployments.  In parallel, the continuing maturation of HPC ISV commercial business software, the refinement of operating system & cluster software, and the consistent availability of more computing power at lower cost have opened up the inherent performance advantages of HPC clusters to a much wider community of users.  Please join Intel for an overview of the current HPC marketplace and the potential alternatives of utilizing High Performance Computing clusters for both increased productivity and competitive advantage. Register today!


Dramatic Acceleration of Excel-based Trading Simulations with Platform and Microsoft

November 5, 2008, 8:00am PDT

Tired of waiting hours for simulation results? In this solution overview, Platform will review the challenges of running Excel models in a distributed environment, and describe the benefits and details of how to deploy models without wholesale code/macro changes. Learn how to enhance application performance and empower quants and developers to deploy distributed applications quickly without modification, using Platform Symphony’s Excel Connector with Windows HPC Server 2008 and Microsoft® Office 2007 Excel. You’ll discover how the world’s top financial firms can realize new revenue opportunities ahead of the competition by leveraging this joint solution. Register today!


High Performance Computing (HPC) Clusters: Reduce Complexity with IBM and Microsoft

November 6, 2008, 8:00am PDT

No matter where you are located in the world, IBM has an HPC cluster solution that is easy to deploy.  Learn how IBM can help you reduce the risk and manage growth more easily with the pre-tested, easy-to-deploy, easy-to-manage IBM Cluster 1350 solution, and how when combined with Windows HPC Server 2008 you can leverage your current Windows server expertise to accelerate your time to insight on computational analysis.  Whether you are a financial analyst or engineer, our Windows-based HPC cluster will meet the demand of your computational needs. Don’t miss this exciting and informative webcast. Register today


Improving Electronic System Design Productivity using Synopsys System Studio and Saber on Windows HPC Server 2008

November 12, 2008, 8:00am PDT

Synopsys System Studio and Saber products are now available on Windows HPC Server 2008.   Join us for this informative webcast and get the latest information on how System Studio further improves performance and signal processing design productivity by taking full advantage of Windows HPC Server 2008 as a stable, extensible environment on high-performance CPUs. Learn how Saber helps automotive and aerospace supply chains meet stringent reliability and safety requirements for mechatronic systems in harsh environments.

System Studio is Synopsys’ model-based algorithm design and analysis offering providing a unique dataflow simulation engine with the highest performance for exploring, verifying and optimizing digital signal processing algorithms. Saber is Synopsys’ technology-leading mechatronic design and analysis software, advancing Robust Design and Design for Six Sigma (DFSS) methodologies into today’s automotive, aerospace, and commercial design. Register today!

This post originally appeared on my technet blog.

October 27, 2008

Scheduling a task on Server 2008 – a Hyper-V snapshot

Filed under: Powershell,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 3:42 pm

I had a mail from Alessandro, asking about scheduling operations on Hyper-V; specifically he wanted to schedule the snapshotting of a virtual machine. The logic was already in my CodePlex library – it only needs a couple of the functions (New-VMSnapshot and Get-VM) – but the easiest thing seemed to be to give it its own PS1 file and then create a scheduled job to run PowerShell with that command.

So here is the script which I saved in the Hyper-V program files folder as "New-VMSnapshot.ps1". It takes the name of a VM and optionally the name of a server and kicks off the snapshot.

Param( $VMName , $Server="." )

# Find the VM by name; the Caption filter excludes the host's own MsVM_ComputerSystem object
$VM = Get-WmiObject -ComputerName $Server -Namespace "root\virtualization" `
      -Query "Select * From MsVM_ComputerSystem Where ElementName Like '$VMName' AND Caption Like 'Virtual%' "

if ($VM -is [System.Management.ManagementObject]) {
    # The method takes the VM, plus two placeholders WMI fills in on return
    $arguments = ($VM, $Null, $null)
    $VSMgtSvc = Get-WmiObject -ComputerName $VM.__server -Namespace "root\virtualization" `
                -Class "MsVM_virtualSystemManagementService"
    $result = $VSMgtSvc.psbase.invokeMethod("CreateVirtualSystemSnapshot", $arguments)
}

Before explaining how to schedule the script, I should give a couple of warnings about snapshots: first, reverting a machine to a snapshot can break the connection to its domain because the password for the secure channel has changed since the snapshot was taken. In training, demo, or test environments you can shut everything down, make a consistent set of snapshots and apply them all. That’s not an option for production environments.

Second, remember that a snapshot turns a VM’s current disk into the parent half of a differencing disk, and if the VM is running saves its memory as well. You can use a lot of disk space by being over-eager with snapshots, and you will blunt system performance. Deleting snapshots merges the differencing disks (creating a temporary disk which uses even more space) but only when the machine is shut down. Restarting the VM before the merge is complete will abandon the operation. So if you schedule snapshots you should have a plan to remove them regularly.
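A quick way to keep an eye on that is to total up the differencing disks (.avhd files). A sketch, assuming the default snapshot folder on Server 2008 – adjust the path if you have moved it:

    Get-ChildItem "C:\ProgramData\Microsoft\Windows\Hyper-V\Snapshots" -Recurse -Include *.avhd |
        Measure-Object -Property Length -Sum |
        ForEach-Object { "{0:N0} MB in snapshot disks" -f ($_.Sum / 1MB) }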

Creating the scheduled task is easy on Server 2008 – it uses the same scheduling tools as Vista. Go to the Task Scheduler (either as a free-standing tool or in Server Manager), right-click the scheduler, choose to create a task, and click through the wizard. On the General page you need to make sure the task runs if no-one is logged on; the Triggers page specifies when the task runs, and the Actions page specifies the command

%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe 

and arguments to be passed

"& 'C:\Program Files\Hyper-V\new-Vmsnapshot.ps1' 'core' "

So much easier than the old AT commands.
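Though if you miss the command line, schtasks can create the same job without the wizard at all. A sketch, assuming a daily snapshot at 23:00 running as SYSTEM (the ^ is just cmd.exe line continuation):

    schtasks /create /tn "Snapshot core VM" /sc daily /st 23:00 /ru SYSTEM ^
        /tr "powershell.exe -command \"& 'C:\Program Files\Hyper-V\New-VMSnapshot.ps1' 'core'\""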

This post originally appeared on my technet blog.

October 21, 2008

SCVMM 2008 released.

Filed under: Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 5:10 pm

There’s a Billy Connolly sample which I’ve had on my PC for ages. To use it recently at Virtualization events Steve Lamb had to edit 3 rude words out of 12 seconds.

We want this, and that,
We demand a share in
that and most of that.
Some of
this and **** all of that
Less of that and more of this and **** plenty of this.

And another thing. I want it now.
I want it yesterday, and I want more tomorrow.
And the demands will all be changed then so ***** stay awake

Now…  if I worked on the Virtual Machine Manager team, that might be how I viewed the demands of the rest of the company…. I’d love to have been a fly on the wall when they were setting out their plans in 2006/7

  • We need you to write something to manage VMs. And we need it … soon as you can really.
  • You need to support Virtual Server because that’s shipping today.
    The API for Hyper-V is totally different and you’ll need to support that when it ships…. so better plan for a V2.
    Oh … could you make it extensible and think about adding VMware support while you’re at it.
  • Physical to Virtual Migration is a nasty problem: go back and look at that one would you ?
    While you’re looking at that, see if you can translate a VMware disk to a Microsoft one.
  • Setting up VMs from a library on the host best suited to them isn’t straightforward either, so see what you can do.
  • Letting users create and access VMs via a web portal would be ideal.
    Hyper-V doesn’t expose the access control for VMs, so you’ll need to sort that out first.
  • You’d better make it scriptable. No messing about, it needs to be done with PowerShell.
  • We think High Availability will be big, so make sure you can hook into cluster technologies.
  • These VM files are kinda huge, so see if you can find a way of copying them inside a SAN.
  • You need to fit it into the System Center family;
    in fact it would be a great idea if you could make suggestions based on what the rest of System Center sees.
  • Oh, and Hyper-V is going to ship about 6 months after Longhorn Server, and there will be a Longhorn Server R2 about 18 months after that.
    Go see the Hyper-V guys and see what they have up their sleeves for that one and start planning a V3. 

This kind of thing would merit a response with more swear words than a box-set of Billy Connolly. The SCVMM team said “OK” (was there a lot of swearing first?), and 13 months after shipping their 2007 version, the 2008 version Released To Manufacturing today (see Zane’s post here). It now manages Hyper-V and VMware, and has added key features like cluster support, delegated administration and resource optimization with “PRO”. I love the joined-up-ness of PRO: “Operations Manager says it looks like this machine needs to go off line; do you want to start moving VMs off it?”

I don’t want to give the impression of belittling the work of the Hyper-V team; but in a sense the job of their product is to just blend into the infrastructure, to become invisible, to become a given like file sharing. What’s going to matter to customers in 2 or 3 or 5 years is not what they use to do virtualization (Microsoft/VMware/Citrix/whoever) but how well they manage the whole environment where virtualization is in use, from the hardware to the application in the guest OS. In that sense SCVMM is the more important product. Virtualization is moving from the early adopters to mainstream use, but today the entrenched VMware customers I meet are – almost by definition – early adopters of virtualization. They’ve listened to stuff about Hyper-V and said yes, it’s all very nice, but we’ve got VMware, we know where we are with it, and we don’t feel like making a strategic change just yet. Then they see SCVMM and, before I can get into the business of “Now, you need this because …”, their reaction is “We know why we need it. When can we get it and how much does it cost?” Today (eval here), and less than you might expect.

This post originally appeared on my technet blog.

PowerShell, WMI and formatting

Filed under: Powershell,Windows Server,Windows Server 2008 — jamesone111 @ 1:25 pm

I’ve been doing a ton of PowerShell recently, which is part of the reason why the number of blog posts I’ve made has been down. One of the jobs has been to go back to some of the code I wrote a while back and examine how I handle WMI objects. When I wrote my contribution to the OCS resource kit last year I was pretty new to PowerShell, and I wish I’d known then what I know now.


Actually, where I started with OCS and PowerShell was looking at someone else’s code, where they had a for loop with a bunch of write-host statements to output WMI objects to the screen. I said “don’t be daft, use | format-table” and the next thing I knew I was being asked to bring my obvious expertise to bear on the project (a polite form of “if you’re so clever you do it then”). So I started with a bunch of get-thing functions. These would be in the form Get-WMIObject -Class Thing | Format-Table. I quickly realised that wasn’t right, because once it is formatted as a table you can’t use the object. So for every class of “thing” I made the get-thing function return the WMI object(s) and had a list-thing function to format them. One of the testers didn’t like the unformatted WMI objects if he used get- instead of list-, and I told him that was just how things had to be.
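For anyone who hasn’t hit this, the pattern (and the trap) looks something like this; Win32_Share is just standing in for the real OCS classes:

    # get- returns live WMI objects, safe to pass down the pipeline
    function Get-Thing  { Get-WmiObject -Class Win32_Share }

    # list- only formats them: fine on screen, useless to the next command
    function List-Thing { Get-Thing | Format-Table Name, Path }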


Let’s move forward to recent history. I was looking at the Get-Process cmdlet in PowerShell; when you run it, it looks like this.

PS C:\Users\jamesone\Documents\windowsPowershell> get-process -Name winword 

 


Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    931     123    85380     149360   506 ...86.16    264 WINWORD


Only one thing wrong with that – if you send it to Get-Member you see it is a “System.Diagnostics.Process” object, but those properties, “NPM(K)” and the rest, don’t exist. Curious. I already knew PowerShell has some XML files which let me spot-weld properties onto objects, so I went for a little search using my favourite cmdlet, Select-String.

PS C:\Users\jamesone\Documents\windowsPowershell> cd $pshome

PS C:\Windows\System32\WindowsPowerShell\v1.0> select-string -path *.ps1xml -SimpleMatch "System.Diagnostics.Process"

DotNetTypes.format.ps1xml:370:                <TypeName>System.Diagnostics.ProcessModule</TypeName>
DotNetTypes.format.ps1xml:404:                <TypeName>System.Diagnostics.Process</TypeName>
DotNetTypes.format.ps1xml:405:                <TypeName>Deserialized.System.Diagnostics.Process</TypeName>
DotNetTypes.format.ps1xml:551:                <TypeName>System.Diagnostics.Process</TypeName>
DotNetTypes.format.ps1xml:598:                <TypeName>System.Diagnostics.Process</TypeName>
DotNetTypes.format.ps1xml:1090:                <TypeName>System.Diagnostics.Process</TypeName>
types.ps1xml:133:        <Name>System.Diagnostics.ProcessModule</Name>
types.ps1xml:327:        <Name>System.Diagnostics.Process</Name>
types.ps1xml:430:        <Name>Deserialized.System.Diagnostics.Process</Name>


The files themselves are signed so they shouldn’t be changed, but you can look at them.


In Types.ps1xml there is a section starting at line 327 which defines the “spot-welded” properties. Have a look at a process and you’ll see alias properties like “NPM” which aren’t part of the definition of the .NET object; that’s where they come from.
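You can see those alias properties without opening the XML at all; Get-Member will list them:

    # Lists the spot-welded alias properties (NPM, PM, WS, VM and friends)
    Get-Process | Get-Member -MemberType AliasProperty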


In DotNetTypes.format.ps1xml there is a section which describes how the object should be formatted: table or list, column widths and headings, and properties or code blocks. It’s all written up on MSDN. So I could use this as the basis for my own XML file, load that using Update-FormatData and… job done. 


The gist of the XML file is that it looks like this:

<Configuration>
  <ViewDefinitions>
  </ViewDefinitions>
</Configuration>

Inside ViewDefinitions there are one or more <View> elements, which look like this. The key piece is the name of the type; as you can see this one is for a virtual machine in Hyper-V, and it is formatted using a table.

<View>
  <Name>Msvm_ComputerSystem</Name>
  <ViewSelectedBy>
    <TypeName>System.Management.ManagementObject#root\Virtualization\Msvm_ComputerSystem</TypeName>
  </ViewSelectedBy>
  <TableControl>
    <TableHeaders>
    </TableHeaders>
    <TableRowEntries>
      <TableRowEntry>
      </TableRowEntry>
    </TableRowEntries>
  </TableControl>
</View>

The TableHeaders section contains one entry for each column:
<TableColumnHeader>
  <Label>Up-Time (mS)</Label>
  <Width>12</Width>
  <Alignment>left</Alignment>
</TableColumnHeader>

And the TableRowEntry section contains a corresponding entry:

<TableColumnItem>
  <PropertyName>OnTimeInMilliseconds</PropertyName>
</TableColumnItem>

If I wanted to have the time in seconds here I could use a script block instead of a property name:

<TableColumnItem>
  <ScriptBlock>$_.OnTimeInMilliseconds / 1000</ScriptBlock>
</TableColumnItem>
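Once the file is assembled, loading it is a one-liner. A sketch, assuming you’ve saved it as Msvm.format.ps1xml:

    # Load the custom view for the current session
    Update-FormatData -AppendPath "C:\Scripts\Msvm.format.ps1xml"

    # From now on, Msvm_ComputerSystem objects use the table view
    Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem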

I’m not going to pretend that putting the XML together is as quick as writing a list function – but it definitely seems worth it. 



This post originally appeared on my technet blog.

October 20, 2008

Hyper-V Server, what is it exactly?

Filed under: Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 12:07 pm

I’ve got multiple blog posts on the go in Windows Live Writer at the moment, one talking of the relative dearth of posts recently. Last week the folks at VMware put up another post about Hyper-V Server, and in it actually found something nice to say about me (even if they mis-spelled my name: double L, guys, like the wet-suit and surf gear maker). I’ve got to say something nice about them and getting products certified, but that’s another post.

Before this becomes a love-in, let’s get back to Hyper-V Server… what is it then? From the point it was first mooted, we’ve been saying that the easiest way to picture Hyper-V Server is as Windows Server 2008 Standard Edition, Core installation, with only the Hyper-V role available. That is an over-simplification, but as a first approximation it will do. In that post VMware’s Tim Stephan says “I am actually thinking that, at the time of Hyper-V server’s announcement, Microsoft itself didn’t know what the Hyper-V Server 2008 architecture would look like…”. Not so: the idea of Server Core (Standard) with all the bits that weren’t needed removed has been pretty constant, even if the odd wrinkle needed ironing out between announcing and shipping.

The VMware post says “Hyper-V Server is supposedly Microsoft’s ‘thin’ hypervisor that doesn’t require Windows Server OS in the parent partition – as reported by Microsoft here.” 

“Here” is a post of Patrick’s which doesn’t say those things. He does say:

  • Hyper-V Server 2008 was built using the Windows hypervisor and other components, including base kernel and driver technologies. Microsoft Hyper-V Server 2008 shares kernel components with Windows Server 2008.
  • Microsoft Hyper-V Server 2008 contains a sub-set of components that make up Server Core deployment option of Windows Server 2008, and has a similar interface and look and feel. But as you know, Server Core has roles like DNS, DHCP, file. Hyper-V Server 2008 is just virtualization.

  • Because Hyper-V Server 2008 shares kernel components with Windows Server 2008, we don’t expect special hardware drivers to be required to run Microsoft Hyper-V Server

So, strictly speaking, it isn’t the Windows Server OS in the parent partition, but everything which IS in the parent is from Windows. If it wasn’t, you wouldn’t be able to manage Hyper-V Server like Windows Hyper-V, you wouldn’t be able to use the same drivers, use the same patch process and so on. And since I mention patching, do you need to download the same patches as for Windows Server 2008 Standard, Core, with only the Hyper-V role? Yes. The guys at VMware found that out. Incidentally, the Windows Update client uses some parts of Internet Explorer to get updates over HTTP, and to find and connect through proxy servers. It might feel wrong to be applying an IE patch to Hyper-V Server or Server Core, but that’s the reason, and not every IE patch will be needed. Running a Microsoft OS in the VMs and as the host, everything gets patched together. Customers can judge VMware’s patch record and Server 2008’s for themselves; I’m happy to be on the Microsoft side of that one.

One might quibble with their sensationalist tone, “Microsoft OS based on Windows shock”… what did they expect? VHDs stored on a whole new file system? Windows PE as the management partition? A whole new driver model? They come to the conclusion that the easiest way to picture Hyper-V Server is as Windows Server 2008 Standard Edition, Core installation, with only the Hyper-V role available – as if we hadn’t been saying that since before it was announced.

So does the rest of Tim’s piece introduce any new FUD or distortions? Not really. Their comparison chart is the same old spin.

  • Having laboured the point that Hyper-V Server is based on Windows, they claim it isn’t production proven, and that it is hard to move VMs to other products in the Microsoft family. I don’t think the facts justify those opinions (no surprise in that). But what constitutes an easy upgrade? What constitutes production proven… do you need more proof or less than the next customer? I don’t trust things which present opinions as facts (people point out when I do it, and when I’m not clear I get annoyed with myself). 
  • Memory support in Hyper-V Server is capped at 32GB like Windows Server standard. There’s no clustering support (like Windows Server Standard).
  • VMware takes up less disk space than its competitors (though with disk space these days costing less than £0.10 per GB I’m never sure why they bring that up). VMware has a bigger memory footprint (see this post from Roger Klorese); depending on configuration, Hyper-V uses about 200MB less. So no-one should be surprised that they gloss over that one.
  • I’m suspicious of their supported guest OSes figure. Our 11 is “OSes which you can call Microsoft support about and get a fix for if they don’t behave properly on Hyper-V”. We don’t support NT4 on anything any more. There’s actually a check box in Hyper-V to make it run better, but running well is not the same, in Microsoft product-support parlance, as supported. My understanding is that their 30 is “OSes which are known to work”, so that would include NT4. I’m not going to argue for any particular definition of “supported”, but you can’t make a valid comparison unless the definitions are the same.

This post originally appeared on my technet blog.

October 8, 2008

Server 2008 AD Free Exams …

Filed under: Windows Server,Windows Server 2008 — jamesone111 @ 10:44 pm

I just had this in my mail with a request to share. Most requests to “blog this” are on a fast track to the great electron recycler, but this is worth sharing. Here’s what it says:

Registration is open till October 25th for testing New Virtual Lab based Exam 70-113: TS: Windows® Server 2008 Active Directory, Configuring

The new pilot exam “70-113: TS: Windows® Server 2008 Active Directory, Configuring” tests candidates’ abilities to actually perform tasks and solve problems in a virtual lab environment, as they normally would in the real world. We are pleased to offer you the opportunity to experience this pilot exam at no charge …

This pilot exam will not provide you with a score as with normal beta exams. This pilot is a test of the exam experience, so only a portion of the final exam will be presented to you during this pilot.

This pilot exam will not be added to your transcript and you will need to take the exam in its released form in order to be awarded the credential. Find exam preparation information: http://www.microsoft.com/learning/exams/70-640.mspx

An exam that doesn’t count. Apart from masochism, why would you want to do it? Here’s why:

Upon completion of this pilot exam, the first 3000 candidates will receive 3 (!) free exam vouchers that can be used to register for any Microsoft Certification exam delivered at a Prometric testing center. 
The voucher will be distributed electronically 4 weeks after end of Pilot.

Apart from needing multiple visits to the test centre, that’s a much better deal than the usual free beta (spend 2 or 3 times as long on the exam, but get it for free). Here’s how to do it.

You must register at least 7 days prior to taking the exam. Register before October 25th to take the exam before October 31st.

Send your opinion about exam experience to: http://blogs.technet.com/betaexams/ and to: pbexam@microsoft.com

I didn’t even know there was a beta exams blog – I’ve done enough of them in my time. Sadly I won’t be able to fit this one in before the time runs out.

This post originally appeared on my technet blog.

September 28, 2008

Server Core — Too Dry and Crunchy ?

Filed under: Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 10:52 am

I’ve got a kind of love/hate relationship with the Core installation of Server 2008.


I love: that it has the smallest footprint (I remember the days of NetWare 2.0a – 3.x: why does a file server need a GUI?), the smallest attack surface, and the lowest patch requirements (even if some – like an IE update which is needed because web proxy access for activation is provided via an IE component – seem like they shouldn’t be there).


I hate: the obscure command lines needed to set it up from scratch. Feeling brave on our roadshow, I configured Core in front of the audience to show the steps. I was very clear that they should have the Step-by-Step guide by their side if they were going to do a fully manual configuration of Core. I tried to be clear about the question of to Core or not to Core:




    1. Don’t bother with core on a small scale. One server ? Forget it. In fact if you’re doing a fully manual installation (instead of  using centralized deployment tools like WDS) you should be asking if core is for you.

    2. Core is for lights out operations. Once configured you remove the monitor and keyboard, and don’t ever connect to the console again. Not even via terminal services. Use remote management tools exclusively

    3. Where “management tool X” can only run locally and can’t run on core – and this applies to some tools for setting up storage – then you can’t use the product on core.

    4. Don’t use core for just one function. If your file servers run in a lights-out data centre and your DCs run in a lights-out data centre, they can both be on Core.

Core is not for everyone. If you have a battery of file servers, then the most optimized way to run them is on Core, with file serving as the only role. Serving static web pages? The best optimization is to run a farm of web servers on Core. Consolidating 10 racks of servers down to one rack with Hyper-V? Core could be for you. But be clear: just as the optimization of a sports car for racing means saving weight by stripping out the comforts that would make it a Grand Tourer, so Server Core is pared down to optimize its footprint – which means giving up some of the things which make Windows Servers easy. Just as a racing car isn’t optimized for comfort, Server Core is not optimized for ease of setup. And if anyone tells you that every file server, or every DC, or every web server or every Hyper-V server MUST be on Core, or even should be on Core, then they are assuming that every use needs the same optimization. I’d question any advice I got from someone who made assumptions like that.


Unfortunately a few Microsoft folks are putting that message out into the world. So I guess we only have ourselves to blame when VMware post a blog item entitled “Hyper-V with Server Core — Too Dry and Crunchy for our Taste”, and they make the points that I’ve just made. They say Microsoft execs are keen to say Hyper-V is just part of Windows. That’s certainly true: it’s covered by Windows support contracts, and support doesn’t increase in cost when you deploy on 4-processor boxes as it does with VMware. It doesn’t force any hardware choices on you. And it’s managed like any other part of Windows – including using the umbrella technologies of the System Center family. It does cut training costs – if you know how to set up a high-availability file server or Exchange server, you know how to set up a high-availability Hyper-V server. Next week at VM Expo I have a session where I’m going to install Hyper-V, install failover clustering, set up a VM and show it failing over, all in 30 minutes. It is easy. But I never run it on Core.


The VMware post talks about the Microsoft execs using the phrase “it’s the Windows you know” and then goes on to say that actually, for many people, Core isn’t the Windows they know: true. “It’s the Windows you know, or an optimized Windows you might not know yet” isn’t a good sound bite. Of course those customers who will find Core a good fit for Hyper-V probably do know it already; but there is no getting away from the fact that when you make your installation decision, Core needs more expertise than full Windows. Of course that is a choice you get to make, and one I don’t think VMware gives customers; so you’d expect ESX to be easier to set up than Core: something would be very wrong if it were harder.


Of course VMware ducks the issue of what happens after you’ve set up the server. Which is easier to manage once it is running? Customers who are exclusively on VMware have been showing a lot of interest in System Center Virtual Machine Manager (the most expensive piece of our virtualization family). These are people who don’t see their strategy changing to Hyper-V in the short term, and who have spent a lot buying everything VMware has to offer (see Brett’s story about our VMwareCostsWayTooMuch site for VMworld). Yet they still feel that VMware’s management doesn’t cut it, and they like the idea that with SCVMM they can raise the management of VMware to the level we offer with Hyper-V. Ironically, by protesting that it takes longer to set up Hyper-V in the worst-case scenario, the VMware post emphasizes that the up-front investment needed if Hyper-V is to run in the optimized environment of Core isn’t especially great when set against the savings Hyper-V offers.


This post originally appeared on my technet blog.

September 24, 2008

Active Directory User Group, first meeting with John Craddock and Sally Storey

Filed under: Events,Windows 2003 Server,Windows Server,Windows Server 2008 — jamesone111 @ 8:35 am

We now have a UK AD user group (ADUG), and they have their first meeting set for October 23rd at Microsoft’s offices in London, from 6 till 9 in the evening. They’ve managed to secure John and Sally – probably the best established AD double-act in Europe (their Tech-Ed sessions are always among the top-scoring ones). When I first came to work at Microsoft, Sally was in Microsoft Consulting Services with me and I’ve got a lot of time for her and for John. They’re going to focus on the new features of Server 2008 for this one. Their sessions tend to get full, so although it’s still a month away I wouldn’t waste any time booking if you want to go. I’m hoping the event will also be relayed by Live Meeting, but that has yet to be confirmed.

This post originally appeared on my technet blog.

September 10, 2008

Hyper-V and competitors / collaborators

When Ray Noorda ran Novell he coined the term “co-opetition” to describe their relationship with us. Microsoft’s Kevin Turner described this as "Shake hands but keep the other hand on your wallet".

We would love customers to buy ONLY Microsoft stuff (support would be SO much simpler), and competitors would love customers to buy only their stuff. A world where we got 100% of the spending of x% of the customers would be so much neater than the real world, where we get x% of the spending (on average) of 100% of customers. Both we and our competitors want customers to have a great experience of our respective technologies, and that doesn’t happen if we don’t cooperate on some things.

In the virtualization world it means two things: being able to run competing OSes on our virtualization, and being able to run our OSes on competing virtualization and give the customer clarity about support.

So first, if you go to http://windowsservercatalog.com/ and click on the ‘certified servers’ link on the right side of the page under the Windows Server 2008 logo, you can check which servers have been validated in the lab; there’s a sub-section for servers validated for Hyper-V.

Second, if you go to the Server Virtualization Validation Program page and click on the ‘products’ link on the left side of the page, you can find out which products we support. As you can see, VMware is on the list; their entry says which version of Windows is supported on which version of VMware. Today it’s 32-bit Windows Server 2008 only, on ESX 3.5 update 2 only. That would tend to make people nervous about older versions of Windows until you read the section which appears next to each catalogue entry: "Products that have passed the SVVP requirements for Windows Server 2008 are considered supported on Windows 2000 Server SP4 and Windows Server 2003 SP2 and later." It would be reasonable to expect more products from more vendors to appear on the list, but it’s good to see that VMware was one of the first to pass the tests.

Third, Linux support. Mike Sterling has posted that the Linux Integration Components are now available. The actual link he provides to the Connect web-site seems to be broken, but you can find the components in the Connections Directory.

Steve and I are off to Edinburgh to do the 5th run of our Virtualization Tech-Ed event. Seats are still available for tomorrow (Thursday 11th).

This post originally appeared on my technet blog.

September 8, 2008

Today’s "Get virtual now event" – official information published

Filed under: Events,Virtualization,Webcasts,Windows Server,Windows Server 2008 — jamesone111 @ 11:38 am

The link to this story is now on the press pass home page 

Today more than 20 partners announce plans for Microsoft virtualization solutions; Microsoft Hyper-V Server, Microsoft Application Virtualization 4.5 and System Center Virtual Machine Manager 2008 available within 30 days.

BELLEVUE, Wash. — Sept. 8, 2008 — Kicking off a global series of “Get Virtual Now” events that will reach more than 250,000 customers and partners, Microsoft Corp. today announced strong customer and partner adoption of Microsoft virtualization software, new tools and programs to drive partner success.

Microsoft also announced the upcoming availability of System Center Virtual Machine Manager 2008, Microsoft Application Virtualization 4.5 and the new Microsoft Hyper-V Server 2008 as a no-cost download during today’s keynote addresses streamed live at http://www.microsoft.com/presspass/events/virtualization/default.mspx.

The Hyper-V Server website is also live. And, as I forecast over the weekend, Patrick’s posted a summary.

This post originally appeared on my technet blog.

September 7, 2008

Monday’s "Get Virtual Now" event

Filed under: Events,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 5:31 pm

You may have seen that there is a big Microsoft Virtualization event happening in the US on Monday. I’m expecting it to be reported on the Virtualization team blog, on the Server and Tools Business "news bytes" blog, and possibly on the "Software Enabled Earth" blog as well. 

I’ve had a short briefing ahead of the event; you can see from the published agenda that some of the corporate big guns will be there, and I’m not going to pass any comment until after they have spoken. Our Press Pass site has a page for the event, including a live stream of it, so if like me you can’t make it to the event in person and want to keep up with what they say – well, that seems like the place to be.

This post originally appeared on my technet blog.

September 6, 2008

Lies, Damn lies, and Statistics – a lesson in Benchmarking.

Filed under: Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 6:34 pm

Many years ago – before on-line meant "the internet" – I annoyed a journalist in an on-line discussion. I criticized the methodology used by his magazine to test file servers: machines copied a large file, making it a test of the cache effectiveness of the server. As more machines, and hence more files, were added, performance rose to a peak; then the total of files being copied exceeded the cache size, and it plummeted. This they explained as "ethernet collisions".

I mention this because there’s always a temptation to try to rip up a benchmark someone else has done (I certainly didn’t use very diplomatic language back then). Single-task tests can give you an idea of how well a machine will carry out a similar task. But what do file server tasks look like? Realistic tests for file servers are hard; for virtualization it is close to impossible. Take a real-world question like "I have 100 machines; when I multiply their CPU speed by their average CPU loading they average out at 200MHz. How many servers do I need?" Obviously it’s more than 100 × 200MHz = 20GHz divided by (cores × clock speed)… but how much more? You need to answer questions like "What’s the overhead of running VMs?" Would 5 servers running 20 VMs have a bigger or smaller percentage overhead than 2 servers running 50? Assuming you could work this out and come up with an "available CPU" figure, it still doesn’t answer questions about peaks of load, i.e. "at any point in the last month, would the instantaneous sum of CPU load totalled over a set of machines exceed the available CPU on the virtualization server?" And of course we haven’t mentioned disk and network I/O questions.
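To put rough numbers on that first step (all hypothetical figures, just to show the arithmetic):

    $demandGHz = 100 * 0.2        # 100 machines averaging 200MHz of load = 20GHz of demand
    $hostGHz   = 2 * 4 * 2.66     # say, two quad-core 2.66GHz CPUs = 21.28GHz per host
    $usableGHz = $hostGHz * 0.75  # hold back 25% for overhead and peaks - itself a guess
    [Math]::Ceiling($demandGHz / $usableGHz)   # minimum host count: 2

The interesting argument is all in that 25% holdback, and that’s exactly the number these benchmarks struggle to give you.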

If that wasn’t enough to make people want to give up on benchmarking: virtualization is a technology to put a lot of small loads onto a single server, while benchmarks tend to load a system to the maximum and measure the throughput. Running benchmarks on virtualization is a bit like putting trucks on a race track. It might be fun and you will get a winner… but it doesn’t tell us anything much about the real world. Still, that doesn’t stop people doing it. And just as with motor racing, even when the winner is declared, people will argue afterwards.

Below are some numbers from Network World, who benchmarked VMware and Hyper-V. They did the test on the two virtualization platforms using Windows and SUSE Linux as the guest OS. Using 4 processors and one processor per machine, they tested 1, 3 and 6 instances. I’m only showing the Windows-as-guest results, and I’ve multiplied the score per VM by the number of VMs (they just showed the SPECjbb2005 bop/sec scores per VM, but you can get the raw numbers from pages 3 and 4 of their results – link below).

                                        One Instance   Three Instances   Six Instances
OS running natively on one CPU                18,153               n/a             n/a
OS running natively across four CPUs          32,525               n/a             n/a
Uni-proc VM on Hyper-V                        17,403            49,089          87,186
Uni-proc VM on VMware ESX                     17,963            53,205          83,784
4-proc VM on Hyper-V                          31,037           101,022          87,528
4-proc VM on VMware ESX                       31,155            81,429          96,816

Let’s do some quick analysis of these numbers.

18,153 vs 32,525. On the bare OS, 1 proc gives 18K bop/s and 4 procs give 32K. So the application is not CPU bound – presumably I/O bound, and sensitive to disk configuration; however, the "How we did it" page doesn’t say how the disks on each system were configured – for example, did they use the OS defaults (slow, expanding dynamic disks in Hyper-V) or do any optimization (faster, but more space-consuming, fixed disks in Hyper-V)? Given that other benchmarks show Hyper-V can get better than 90% of the disk I/Os of the native OS even under huge loading, the separate disk benchmark they produced showing a very low number makes me suspect they followed the defaults. But the basic rule of lab work is to record what it was you measured (a fixed, expanding or pass-through disk), otherwise your results lose their meaning. 

101,022. Another odd thing on disks is that in the 3x quad-proc VM test, Hyper-V’s scores show each of the VMs getting 3.5% more throughput than the bare OS. Hyper-V does not cache VHD files, although it can coalesce disk operations, which can occasionally throw tests; personally I don’t trust tests where this affects the result (which it plainly does in this case). 

87,186 vs 83,784. The system had 16 cores, and with uni-proc tests they stopped adding VMs at 6. With a single uni-proc VM, VMware is a little faster; with 3 such VMs it’s faster still. Get to 6 and Hyper-V is winning. Does efficiency favour Hyper-V as load increases? Who can tell? Without tests at 12 or more VMs there’s no way of knowing whether this is an aberration or a trend.

Six instances. On uni-proc, VMware and Hyper-V both get nearly 200% more throughput going from 1 to 3 VMs – that’s almost linear scalability – but it tails off to 57% and 78% going from 3 to 6. On 4-proc, Hyper-V’s performance actually goes down at 6 instances, and VMware’s only goes up 20% – this makes me think the system is I/O bound.

So who is the winner ?

In the uni-proc test Hyper-V’s best score was 87,186 and VMware’s was 83,784. So Hyper-V is the winner!

In the quad-proc test Hyper-V’s best score was 101,022 and VMware’s was 96,816. Hyper-V is the Winner again !

Now, I’ve already suggested this is a field where there are more questions than answers. I’m left with just one: Since Microsoft clearly won, why is their article called VMware edges out Microsoft in virtualization performance test ?

Bonus link. This is NOT Simpson’s paradox, but if you’re ever stuck for something to talk to statisticians about, it’s worth knowing.

This post originally appeared on my technet blog.

August 19, 2008

Virtualization: Licensing and support changes

Filed under: Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 3:33 pm

I was briefed a few days ago about changes to licensing to make life easier for people doing virtualization. I think our server applications have done a good job of adapting their licences for a world where they run in virtual machines. That’s all fine and good, but the situation wasn’t so clever when those virtual machines weren’t tied to a single physical server. The announcement is now up on Presspass. Previously, if Exchange, SQL or the others could run on a physical computer, then that computer had to have a licence: unless a physical server was being replaced, you could not move a licence between two servers more than once every 90 days. Building a 16-node cluster with Hyper-V with one SQL VM meant buying 16 licences; as yet relatively few customers have built systems like that, but they have with VMotion, and I’ve faced hostile questioning about this point in the past. I suspect a lot of customers thought they only needed as many licences as running VMs: that’s basically the position from today. You should read the Application Server License Mobility brief for yourself, because (a) it gives some good worked examples, (b) it makes it clear what kind of moves are still excluded, and (c) it makes it clear which products are eligible and which are not. I’ll steer questions to Emma.

But there’s more. We’ve updated our application support policy for a whole crop of applications to cover support on virtualization. The note on Presspass simply links to http://support.microsoft.com and says this covers 31 applications. The full list is in KB article 957006. I’ll try to find a definitive list (I have a list, I just can’t call it definitive. Exchange is in, though).

But that’s not all. We have a new version of KB article 897615, which outlines how we support people running on non-Microsoft virtualization:

for vendors who have Server Virtualization Validation Program (SVVP) validated solutions, Microsoft will support server operating systems subject to the Microsoft Support Lifecycle policy for its customers who have support agreements when the operating system runs virtualized on non-Microsoft hardware virtualization software

I’ve mentioned the SVVP before. There are 5 companies signed up so far with certified products

  • Cisco Systems, Inc.
  • Citrix Systems, Inc.
  • Novell, Inc.
  • Sun Microsystems
  • Virtual Iron Software

The internal briefing said we’d reached out to other vendors.  If they’re not on the list, it’s because they don’t want their customers to be properly supported. Enough said.

Update 1: The Exchange team blog has a post on this.

Update 2: To be clear, the list on the SVVP page is for those who have signed up to get their solution validated, while the support statement in KB 897615 talks about having validated solutions; obviously one comes before the other. I made the mistake of saying the 5 signed up as of August 19th had all completed validation.

 

This post originally appeared on my technet blog.

August 16, 2008

Terminal Services Easy Print

Filed under: Events,Windows Server,Windows Server 2008 — jamesone111 @ 10:33 am

At the end of Wednesday’s session in London, we got round to Terminal Services Easy Print.

For those who weren’t there, Easy Print is a mechanism for redirecting printers through the Terminal Services client (MSTSC.EXE, aka Remote Desktop Connection). Trying to match printers up between different OSes – say 32-bit XP and 64-bit Server 2008 – sounds like a driver nightmare. So with the 6.1 version of the client (in Vista SP1 and XP SP3) and the Server 2008 implementation of Terminal Services, the client can send printer details to the server. The server then prepares the print job as an XML Paper Specification (XPS) document, guided by the printer capabilities reported by the client; the client then renders the XPS for its printer. Hence the server never needs a driver for the client’s printer. (There’s a good TechNet article which explains how it is configured for Windows XP with SP3, and the group policy settings that are available.)

When the client connects through the TS RemoteApp web portal you see 5 check boxes for what will be redirected to the server: Drives, Clipboard, Printers, Serial Ports, and supported Plug and Play devices. The drives can be blocked to prevent files being copied to or from the server; my own main use of Terminal Services is to make my local drives available on the server when I’m connected to a virtual machine.

The question came up: what happens when you have more than one printer? Does the client just redirect the default one, or all of them? The check box says Printers, not printer, but for some reason I had it fixed in my head that the client hooks up with the default printer only: wrong.

Here’s what you see if you look in the printers folder with a full remote desktop

[Screenshot: the printers folder in a full remote desktop session]

and here’s what you see in the print dialog for a remote-app program.

[Screenshot: the print dialog in a RemoteApp program]

If you click the Properties button, a little shim dialog box appears and the local printer properties dialog box appears over the top of it. Changes to things like the paper size are sent back to the server by the Terminal Services client.

One other thing where I had a mental block was where you find the option to configure a server as a member of a Terminal Services farm. Under Terminal Services / Terminal Services configuration in Server Manager, it is at the bottom of the middle pane. I think with the reduced resolution we were using for the display I needed to scroll down to find it. At least that’s the story I’m sticking to.

This post originally appeared on my technet blog.

August 15, 2008

Virtual Machine clustering in London, how we did it.

Filed under: Events,Windows Server,Windows Server 2008 — jamesone111 @ 2:18 pm

Wednesday’s event in London was fun, and we managed not to talk too much about VMware’s mishap: I joked at one point that the VMware folks would be making T-shirts in a few months with "I Survived August 12th, 2008". (Buried in the back of my mind was an old post of KC’s about a "Bedlam event".) The thought came up afterwards that maybe we should get some done with "I was running Hyper-V" on the back.

We took the same "Scrapheap Challenge" approach to building a virtualization cluster that we used for Virtual Server a year ago: scavenge what you need from round the office and build a cluster, with the message "You can do this at home folks – just don’t do it in production". Seriously. We had two cluster nodes, and their management network, heartbeat network, shared storage network, and network for serving clients were all the same 100Mbit desktop hub. It works; use it to learn the product, but don’t even think about using it in production. By the way, these days we have "Compute Clusters" and "Network Load-Balanced Clusters", so we try to make sure we refer to traditional clustering as failover clustering.

One of our computers was a Dell Inspiron 5160, which is 4 or 5 years old. It ran as a domain controller – both cluster nodes were joined to the domain – it hosted DNS to support this, and it gave us our shared storage, using a "hybrid" form of Windows Storage Server – basically the internal iSCSI target bits on Server 2008. I think machines of that age have 4200 RPM disks in them; Steve thinks it’s 5400. Either way, with the network we had for iSCSI it was no speed demon (again this was intentional – we didn’t want to show things on hardware so exotic that no-one could replicate it when they got home).

We set up two iSCSI targets on the 5160, each with a single LUN attached to it: one small one to be our cluster quorum, one big one to hold a VM. In rehearsal we’d connected to these from one of our cluster nodes, brought the disks on-line (from the disk management part of Server Manager), formatted them and copied a VHD for a virtual machine to the large one. I’ve found that once the iSCSI initiator (client) is turned on on the cluster nodes, the iSCSI target (server) detects its presence and the initiators can be given permissions to access the target.

Our two cluster nodes were called Wallace and Gromit. They’re both Dell Latitude D820s, although Wallace is 6 months older with a slightly slower CPU and a slightly different NIC. Try to avoid clusters with different CPU steppings; mixing Intel and AMD processors in the same cluster can be expected to fail. Both were joined to the domain, both had static IP addresses. Both had the standard patches from Windows Update, including – crucially – KB950050, which is the update to the release version of Hyper-V. We didn’t install the optional enhancements for clustering Hyper-V. On each one, in the iSCSI Initiator control panel applet, we added the 5160 as a "Target Portal" (i.e. a server with multiple targets), then on the Targets page we added the two targets and checked the box saying automatically restore connections. The plan was to disconnect the iSCSI disks on Wallace, but they were left connected at the end of rehearsal.
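Those initiator steps can also be scripted with the iscsicli command-line tool – a sketch, assuming the 5160’s address on the storage network is 10.10.10.1:

    iscsicli QAddTargetPortal 10.10.10.1
    iscsicli ListTargets
    iscsicli QLoginTarget <IQN-of-the-target-from-the-list-above>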

Gromit had Hyper-V and failover clustering installed, but we wanted to show installing Hyper-V and failover clustering on Wallace, so we installed Hyper-V: in Server Manager, add a role, select Hyper-V and keep clicking Next. On these machines it takes about 7 minutes, with 2 reboots, to complete the process. One important thing if you are clustering Hyper-V: the network names must be the same on all the nodes of the cluster. It is usually best NOT to assign networks to Hyper-V in the install wizard, and to do it in Hyper-V’s network manager (or from a script) to make sure the names match.

Then we installed failover clustering from the Features part of Server Manager; no reboot required. We went straight into the Failover Clustering MMC (in the admin tools part of the Start menu), chose Create a Cluster, and it only needed 3 pieces of information.

  • We added servers by giving the names for nodes 1 and 2 [that is "Wallace.ite.contoso.com" and "grommit.ite.contoso.com"]
  • We named the cluster and gave it an IP address [e.g. VMCluster.ite.contoso.com, 10.10.10.100] 
  • We then hit next a couple of times and the cluster was created.

At the end of the process we had a report to review – you can also validate a cluster configuration and check the report without actually creating the cluster. In the disk manager part of Server Manager, the state of the iSCSI disks had changed to Reserved on both nodes, and one node saw the disks as available – in our case this was Wallace. We found that the cluster set-up wizard had made the big disk the cluster quorum and left the small one for applications; to fix this we right-clicked the cluster in the Failover Clustering MMC and, from the "More actions" menu, went through Cluster Settings / Quorum Settings and changed it.

The next step was to build a VM, and we just went through the New Virtual Machine wizard in the Hyper-V MMC on Wallace. The important part was to say that the configuration was not in the default location but on the shared clustered drive. We didn’t connect the demo machine to a network (we hadn’t configured matching external networks on the two nodes), and we picked a pre-existing virtual hard disk (VHD) file on the same disk. We left the VM un-started. We should also have set the machine shutdown settings for the VM – by default, if the server is shut down, the VM will go into a saved state, which is not what you want on a cluster (if you follow the link to the clustering information from the Hyper-V MMC it explains this).

Finally, back in the Failover Clustering MMC, we chose Add Clustered Application/Service and selected Virtual Machine from the list of possible services; the clustering management tools discover which VMs exist on the nodes and are candidates for clustering. We selected our VM and clicked through the wizard. In clustering parlance we brought the service on-line – or, as most people would say, we started the VM. Steve showed the VM – which was running Windows Server Core – we don’t bother to activate demo VMs and this one had passed the end of its grace period, but it was still running [Server 2008 doesn’t shut you out when it decides it’s expired]. I killed the power on Wallace, switched the screen to Gromit, and saw the virtual machine in the middle of booting back into life. From starting the presentation to this point had taken 35 minutes.

We showed "quick migration" – which is simply moving the cluster service from one node to another. With a quick migration we put the VM into a suspended state on one node, switch the disks over to the other node, and restore the VM. How quick this is depends on how much memory the VM has and how fast the disk is. We were using the slowest disk we could, and it took around 30 seconds. If total availability is critical then the service in the VM should be clustered; but if it isn’t, there’s a short period where the service is off-line. Matt chipped in and showed his monster server back in Reading doing a failover, and it was very quick – around the one-second mark – but each of his disk controllers cost more than our entire setup.

I’m going to try to capture a video of what we did and post it next week. Watch this space as they say.

 

 

This post originally appeared on my technet blog.

August 8, 2008

Interesting Citrix event, London, September

Filed under: Events,Virtualization,Windows Server,Windows Server 2008 — jamesone111 @ 11:35 pm

Chatting to my opposite number at Citrix today, I found that they have a virtualization event coming up. Entitled Unlock the Power of Virtualisation, it’s taking place at Old County Hall in London on 17th September, and the details and registration link are available via http://www.citrix.com/virtualisation

The site says the day will include the following 3 presentations:

  • Citrix XenDesktop™ – Virtualising Windows-based desktops.
  • Citrix XenApp™ – Delivering Windows-based applications.
  • Citrix XenServer™ – Virtualising Windows-based servers.

Interestingly, the notes they sent me describe a fourth session:

Virtualisation with Citrix and Microsoft:
For nearly two decades, Microsoft and Citrix have delivered significant value to customers, and now both companies are working together on desktop and server virtualisation technologies. Hear how Microsoft and Citrix are working together on product integration so that customers have access to comprehensive and flexible virtualisation solutions, all controlled by an integrated management platform.

Why am I recommending this event?

I often find myself explaining to people that this or that feature added to Terminal Services isn’t a full-on attack on Citrix, and that the fact they have a hypervisor and we have one too doesn’t mean we want to kill each other. When I do, I’m always aware that "that’s what the Microsoft guy would say". I’m also aware that I’m so immersed in Microsoft products that I can’t always describe how partners like Citrix deliver value. This is something that you can best get from the source, not second hand. I make no secret that I think VDI solutions are over-hyped (they have their uses, but I’m a "PC on every desk" kind of person, and if you don’t need a PC on every desk then a presentation virtualization solution will usually be more efficient). Since we don’t have a broker to distribute users among pools of VMs, we look to Citrix for that. Again, I’m aware that since we’re missing a piece, people might think that colours my view. Citrix have the products to deliver a service either way, and no axe to grind, so I’d recommend them as a vendor to talk to when weighing up different central models for application delivery.

 


This post originally appeared on my technet blog.

