James O'Neill's Blog

June 29, 2016

Just enough admin and constrained endpoints. Part 1: Understanding endpoints.

Filed under: DevOps,Powershell — jamesone111 @ 1:42 pm

Before we can dive into Just Enough Admin and constrained endpoints, I think we need to fill in some of the background on endpoints and where they fit in PowerShell remoting.

When you use PowerShell remoting, the local computer sees a session, which is connected to an endpoint on a remote computer. Originally, PowerShell installations did not enable inbound sessions, but this has changed with newer versions. If the Windows Remote Management service (also known as WSMAN) is enabled, it will listen on port 5985; you can check with
NetStat -a | where {$_ -Match 5985}
If WSMAN is not listening you can use the Enable-PSRemoting cmdlet to enable it.

With PS remoting enabled you can try to connect. If you run
$s = New-PSSession -ComputerName localhost
from a non-elevated PowerShell session, you will get an access denied error, but from an elevated session it should run successfully. The reason for this is explained later. When the command is successful, $s will look something like this:
Id Name ComputerName State ConfigurationName Availability
-- ---- ------------ ----- ----------------- ------------
2 Session2 localhost Opened Microsoft.PowerShell Available

We will see the importance of ConfigurationName later as well. The Enter-PSSession cmdlet switches the shell from talking to the local session to talking to a remote one. Running
Enter-PSSession $s
will change the prompt to something like this
[localhost]: PS C:\Users\James\Documents>
showing where the remote session is connected. Exit-PSSession returns to the original (local) session; you can enter and exit the session at will, or create a temporary session on demand by running
Enter-PSSession -ComputerName LocalHost

The Get-PSSession cmdlet shows a list of sessions and will show that there is no session left open after exiting an “on-demand” session. As well as interacting with a session, you can use Invoke-Command to run commands in the session, for example
Invoke-Command -Session $s -ScriptBlock {Get-Process -id $pid}
Handles NPM(K) PM(K) WS(K) VM(M) CPU(s)   Id SI ProcessName PSComputerName
------- ------ ----- ----- ----- ------   -- -- ----------- -------------- 
    547     26 61116 77632 ...45   0.86 5788 0  wsmprovhost      localhost

At first sight this looks like a normal process object, but it has an additional property, "PSComputerName". In fact, a remote process is represented by a different type of object. Commands in remote sessions might return objects which are not recognised on the local computer, so the object is serialized – converted to a textual representation – sent between sessions, and de-serialized back into a custom object. There are two important things to note about this.

  1. De-serialized objects don’t have Methods or Script Properties. Script properties often will need access to something on the originating machine – so PowerShell tries to convert them to Note Properties. A method can only be invoked in the session where the object was created – not one which was sent a copy of the object’s data.
  2. The object type changes. The .GetType() method will return PSObject, and the PSTypeNames property says the object is a Deserialized.System.Diagnostics.Process; PowerShell uses PSTypeNames to decide how to format an object and will use rules defined for type X to format a Deserialized.X object.
    However, testing the object type with -is [x] will return false, and a function which requires a parameter to be of type X will not accept a Deserialized.X. In practice this works as a safety-net: if you want a function to be able to accept remote objects, you should detect that they are remote objects and direct commands to the correct machine.
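
You can see the difference for yourself – a minimal sketch, assuming the session $s created earlier is still open:

$local  = Get-Process -Id $pid
$remote = Invoke-Command -Session $s -ScriptBlock {Get-Process -Id $pid}
$local.GetType().Name                      # Process
$remote.GetType().Name                     # PSObject
$remote.PSTypeNames[0]                     # Deserialized.System.Diagnostics.Process
$remote -is [System.Diagnostics.Process]   # False - the deserialized copy fails the type test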

Invoke-Command allows commands which don’t support a -ComputerName parameter (or equivalent) to be targeted at a different machine, and also allows commands which aren’t available on the local computer to be used remotely. PowerShell provides two additional commands to make the process of using remote modules easier. Import-PSSession creates a temporary module which contains proxies for all the cmdlets and functions in the remote session that don’t already exist locally; this means that instead of having to write
Invoke-Command -Session $s -ScriptBlock {Get-ADUser}
the Get-ADUser command works much as it would with a locally installed Active Directory module. Using Invoke-Command will return a Deserialized AD user object and the local copy of PowerShell will fall back to default formatting rules to display it; but when the module is created it includes a format XML file describing how to format additional objects.
Import-PSSession adds commands to a single session using a temporary module: its partner Export-PSSession saves a module that can be imported as required – running commands from such a module sets up the remote session and gives the impression that the commands are running locally.
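
Putting that together – a minimal sketch, assuming a server named DC01 with the ActiveDirectory module installed (both names are illustrative):

$s = New-PSSession -ComputerName DC01
Import-PSSession -Session $s -Module ActiveDirectory   # proxy the remote cmdlets into this session
Get-ADUser -Filter "Name -like 'James*'"               # reads like a local command, runs remotely
Export-PSSession -Session $s -Module ActiveDirectory -OutputModule RemoteAD   # or save the proxies as a module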

What about the configuration name and the need to log on as Admin?

WSMAN has multiple endpoints which sessions can connect to; the command Get-PSSessionConfiguration lists them. By default the commands which work with PS Sessions connect to the endpoint named "microsoft.powershell", but the session can connect to other endpoints depending on the tasks to be carried out.
Get-PSSessionConfiguration shows that by default the "microsoft.powershell" endpoint has StartUpScript and RunAsUser properties which are blank, and a Permission property of
NT AUTHORITY\INTERACTIVE        AccessAllowed,
BUILTIN\Administrators          AccessAllowed,
BUILTIN\Remote Management Users AccessAllowed

This explains why we need to be an administrator (or in the group “Remote Management Users”) to connect. It is possible to modify the permissions with
Set-PSSessionConfiguration -Name "microsoft.powershell" -ShowSecurityDescriptorUI

When Start-up script and Run-As User are not set, the session looks like any other PowerShell session and runs as the user who connected – you can see the user name by running whoami or checking the $PSSenderInfo automatic variable in the remote session.
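
You can check both from the local machine – a quick sketch, using the session created earlier:

Invoke-Command -Session $s -ScriptBlock {whoami; $PSSenderInfo.UserInfo.Identity.Name}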

Setting the Run-As User allows the session to run with more privileges than are granted to the connecting user; to prevent this user running amok, the endpoint is constrained – in simpler terms, we put limits on what can be done in that session. Typically, we don’t want the user to have access to every command available on the remote computer, and we may want to limit the parameters which can be used with those that are allowed. The start-up script does the following to set up the constrained environment:

  • Loads modules
  • Defines proxy functions to wrap commands and modify their functionality
  • Hides cmdlets, aliases and functions from the user.
  • Defines which scripts and external executables may be run
  • Sets the PowerShell language mode, to further limit the commands which can be run in a session, and prevent new ones being defined.

If the endpoint is to work with Active Directory, for example, it might hide Remove-ADGroupMember (or import only selected commands from the AD module); it might use a proxy function for Add-ADGroupMember so that only certain groups can be manipulated. The DNS Server module might be present on the remote computer but the Import-Module cmdlet is hidden so there is no way to load it.
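
The start-up script for such an endpoint might contain fragments like these – a minimal sketch rather than a complete script, and the module and command choices are purely illustrative:

# Load the module the endpoint needs
Import-Module ActiveDirectory
# Hide commands the connecting user should not run
(Get-Command Remove-ADGroupMember).Visibility = 'Private'
(Get-Command Import-Module).Visibility        = 'Private'
# Set the language mode to limit what else can be done in the session
$ExecutionContext.SessionState.LanguageMode = 'NoLanguage'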

Hiding or restricting commands doesn’t stop people doing the things that their access rights allow. An administrator can use the default endpoint (or start a remote desktop session) and use the unconstrained set of tools. The goal is to give out fewer admin rights and give people Just Enough Admin to carry out a clearly defined set of tasks: so the endpoint runs as a privileged account (even a full administrator account) but other, less privileged accounts are allowed to connect and run the constrained commands that it provides.
Register-PSSessionConfiguration sets up a new endpoint and Set-PSSessionConfiguration modifies an existing one; the same parameters work with both – for example

$cred = Get-Credential
Register-PSSessionConfiguration -Name "RemoteAdmin" `
                                -RunAsCredential $cred `
                                -ShowSecurityDescriptorUI `
                                -StartupScript 'C:\Program Files\WindowsPowerShell\EndPoint.ps1'
The -ShowSecurityDescriptorUI switch pops up a permissions dialog box – to set permissions non-interactively it is possible to use -SecurityDescriptorSddl and specify the information using SDDL but writing SDDL is a skill in itself.
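
Once the endpoint is registered, a permitted user connects to it by name – which is where ConfigurationName comes back in (the server name is illustrative):

$s = New-PSSession -ComputerName Server1 -ConfigurationName "RemoteAdmin"
Enter-PSSession $s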

With the endpoint defined, the next part is to create the endpoint script, and I’ll cover that in part 2.


June 27, 2016

Technical Debt and the four most dangerous words for any project.

Filed under: Uncategorized — jamesone111 @ 9:15 am

I’ve been thinking about technical debt. I might have been trying to avoid the term when I wrote Don’t swallow the cat, or more likely I hadn’t heard it, but I was certainly describing it – to adapt Wikipedia’s definition, it is “the future work that arises when something that is easy to implement in the short run is used in preference to the best overall solution”. However it is not confined to software development as Wikipedia suggests.
“Future work” can come from bugs (either known, or yet to be uncovered because of inadequate testing), design kludges which are carried forward, dependencies on out of date software, documentation that was left unwritten… and much more besides.

The cause of technical debt is simple: People won’t say “I (or we) cannot deliver what you want, properly, when you expect it”.
“When you expect it” might be the end of a Scrum Sprint, a promised date or “right now”. We might be dealing with someone who asks so nicely that you can’t say “No” or the powerful ogre to whom you dare not say “No”. Or perhaps admitting “I thought I could deliver, but I was wrong” is too great a loss of face. There are many variations.

I’ve written before about “What you measure is what you get” (WYMIWIG); it’s also a factor. In IT we measure success by what we can see working. Before you ask “How else do you judge success?”, technical debt is a way to cheat the measurement – things are seen to be working before all the work is done. To stretch the financial parallel, if we collect full payment without delivering in full, our accounts must cover the undelivered part – it is a liability like borrowing or unpaid invoices.

Imagine you have a deadline to deliver a feature (a feature could be a piece of code or an infrastructure service, however small). Unforeseeable things have got in the way. You know the kind of things: the fires which apparently only you know how to extinguish, people who ask “Can I Borrow You” but should know they are jeopardizing your ability to meet this deadline, and so on.
Then you find that doing your piece properly means fixing something that’s already in production. But doing that would make you miss the deadline (as it is you’re doing less testing than you’d like and documentation will have to be done after delivery). So you work around the unfixed problem and make the deadline. Well done!
Experience teaches us that making the deadline is rewarded, even if you leave a nasty surprise for whoever comes next – they must make the fix AND unpick your workaround. If they are up against a deadline they will be pushed to increase the debt. You can see how this ends up in a spiral: like all debt, unless it is paid down, it increases in future cycles.

The Quiet Crisis unfolding in Software Development has a warning to beware of high performers: they may excel at the measured things by cutting corners elsewhere. It also says watch out for misleading metrics – only counting “features delivered” means the highest performers may be leaving the most problems in their wake. Not a good trait to favour when identifying prospective managers.

Sometimes we can say “We MUST fix this before doing anything else”, but if that means the whole team (or, worse, its manager) can’t do the thing that gets rewarded, then we learn that trying to complete the task properly can be unpopular, even career limiting. This isn’t a call to do the wrong thing: some things can be delayed without a bigger cost in the future, and borrowing can open opportunities that refusing ever to take on any debt (technical or otherwise) would deny us. But when a culture doesn’t allow delivery plans to change, even in the face of excessive debt, it is living beyond its means and debt will become a problem.

We praise delivering on-time and on-budget, but if capacity, deadline and deliverables are all fixed, only quality is variable. Project management methodologies are designed to make sure that all these factors can be varied and give project teams a route to follow if they need to vary by too great a margin. But a lot of work is undertaken without this kind of governance. Capacity is what can be delivered properly in a given time by the combination of people, skills, equipment and so on, each of which has a cost. Increasing headcount is only one way to add capacity, but if you accept that adding people to a late project makes it later, then it needs to be done early. When we must demonstrate delivery beyond our capacity, it is technical debt that covers the gap.

Forecasting is imprecise, but it is rare to start with a plan we don’t have the capacity to deliver. I think another factor causes deadlines which were reasonable to end up creating technical debt.

The book The Phoenix Project has gathered a lot of fans in the last couple of years, and one of its messages is that unplanned work is the enemy of planned work. This time management piece separates deep work (which gives satisfaction and takes thought, energy, time and concentration) from shallow work (the little stuff). We can do more of value by eliminating shallow work, and the Quiet Crisis article urges managers to limit interruptions and give people private workspaces, but some of each day will always be lost to email, helping colleagues and so on.

But Unplanned work is more than workplace noise. Some comes from Scope Creep, which I usually associate with poor specification, but unearthing technical debt expands the scope, forcing us to choose between more debt and late delivery. But if debt is out in the open then the effort to clear it – even partially – can be in-scope from the start.
Major incidents can’t be planned and leave no choice but to stop work and attend to them. But some diversions are neither noise nor emergency. “Can I Borrow You?” came top in a list of most annoying office phrases, and “CIBY” serves as an acronym for a class of diversions which start innocuously. These are the four dangerous words in the title.

The Phoenix Project begins with the protagonist being made CIO and briefed “Anything which takes focus away from Phoenix is unacceptable – that applies to the whole company”. For most of the rest of the book things are taking that focus. He gets to contrast IT with manufacturing, where a coordinator accepts or declines new work depending on whether it would jeopardize any existing commitments. Near the end he says to the CEO “Are we even allowed to say no? Every time I’ve asked you to prioritize or defer work on a project, you’ve bitten my head off. …[we have become] compliant order takers, blindly marching down a doomed path”. And that resonates. Project steering boards (or similarly named committees) exist to assign capacity to some projects and disappoint others. Without one – or if it is easy to circumvent – we end up trying to deliver everything and please everyone; “No” and “What should I drop?” are answers we don’t want to give, especially to those who’ve achieved their positions by appearing to deliver everything, thanks to technical debt.

Generally, strategic tasks don’t compete to consume all available resources. People recognise these should have documents covering

  • What is it meant to do, and for whom? (the specification / high level design)
  • How does it do it? (Low level design, implementation plan, user and admin guides)
  • How do we know it does what it is meant to? (test plan)

But “CIBY” tasks are smaller, tactical things; they often lack specifications: we steal time for them from planned work assuming we’ll get them right first time, but change requests are inevitable. Without a spec, there can be no test plan: yet we make no allowance for fixing bugs. And the work “isn’t worth documenting”, so questions have to come back to the person who worked on it. These tasks are bound to create technical debt of their own, and they jeopardize existing commitments, pushing us into more debt.

Optimistic assumptions aren’t confined to CIBY tasks. We assume strategic tasks will stay within their scope: we set completion dates using assumptions about capacity (the progress for each hour worked) and about the number of hours focused on the project each day. Optimism about capacity isn’t a new idea, but I think planning doesn’t allow for shallow / unplanned work – we work to a formula like this:
TIME = SCOPE / CAPACITY
In project outcomes, debt is a fourth variable and time lost to distracting tasks a fifth. A better formula would look like this
DELIVERABLES = (TIME * CAPACITY) – DISTRACTIONS + DEBT  

Usually it is the successful projects which get a scope that properly reflects the work needed, stick to it, and allocate enough time and capacity to deliver it. It’s simple in theory; projects which go off the rails don’t do it in practice, and fail to adjust. The Phoenix Project told how failing to deliver “Phoenix” put the company at risk. After the outburst I quoted above, the CIO proposes putting everything else on hold, and the CEO, who had demanded 100% focus on Phoenix, initially responds “You must be out of your right mind”. Eventually he agrees, Phoenix is saved and the company with it. The book is trying to illustrate many ideas, but one of them boils down to “the best way to get people to deliver what you want is to stop asking them to deliver other things”.

Businesses seem to struggle to set priorities for IT: I can’t claim to be an expert in solving this problem, but the following may be helpful

Understanding the nature of the work. Jeffrey Snover likes to say “To ship is to choose”. A late project must find an acceptable combination of additional cost, overall delay, feature cuts, and technical debt. If you build websites, technical debt is more acceptable than if you build aircraft. If your project is a New Year’s Eve firework display, delivering without some features is an option, delay is not. Some feature delays incur cost, but others don’t.

Tracking all work: Have a view of what is completed, what is in progress, what is “up next”, and what is waiting to be assigned time. The next few points all relate to tracking.
Work in progress has already consumed effort but we only get credit when it is complete. An increasing number of tasks in progress may mean people are passing work to other team members faster than their capacity to complete it, or new tasks are interrupting existing ones.
All work should have a specification before it starts. Writing specifications takes time, and “Create specification for X” may be a task in itself.
And yes, I do know that technical people generally hate tracking work and writing specifications. 
Make technical debt visible. It’s OK to split an item and categorize part as completed and the rest as something else. Adding the undelivered part to the backlog keeps it as planned work, and also gives partial credit for partial delivery – rather than credit being all or nothing. It means some credit goes to the work of clearing debt.
And I also know technical folk see “fixing old stuff” as a chore, but not counting it just makes matters worse.
Don’t just track planned work. Treat jobs which jumped the queue, that didn’t have a spec, or that displaced others like defects in a manufacturing process – keep the score, and try to drive it down to zero. Incidents and “CIBY” jobs might only be recorded as an afterthought, but you want to see where they are coming from and try to eliminate them at source.

Look for process improvements. If a business is used to lax project management, it will resist attempts to channel all work through a project steering board. Getting stakeholders together in a regular “IT projects meeting” might be easier, but still gets the key result (managing the flow of work).

And finally, having grown-up conversations with customers.
Businesses should understand the consequences of pushing for delivery to exceed capacity; which means IT (especially those in management) must be able to deliver messages like these.
“For this work to jump the queue, we must justify delaying something else”
“We are not going to be able to deliver [everything] on time”, perhaps with a follow-up of “We could call it delivered when there is work remaining but … have you heard of technical debt?”

June 1, 2016

A different pitch for Pester

Filed under: DevOps,Powershell,Testing — jamesone111 @ 2:10 pm

If you work with PowerShell but don’t consider yourself to be a developer, then when people get excited by the new(ish) testing framework named Pester you might think “what has that got to do with me?” …
Pester is included with PowerShell 5 and downloadable for older versions, but most things you find written about it are by software testers for software testers – though that is starting to change. This post is for anyone who thinks programs are like sausages: you don’t want to know how either is made.

Let’s consider how we’d give someone some rules to check something physical:
“Here is a description of an elephant
It is a mammal
It is at least 1.5 M tall
It has grey wrinkly skin
It has a long flexible nose”

Simple stuff. Tell someone what you are describing, and make simple statements about it (that’s important: we don’t say “It is a large grey-skinned mammal with a long nose”). Check those statements and if they are all true you can say “We have one of those”. So let’s do the same, in PowerShell, for something we can test programmatically – this example has been collapsed down in the ISE, which shows a couple of “special” commands from Pester

$Connections = Get-NetIPConfiguration | Where-Object {$_.netadapter.status -eq "up" }
Describe "A computer with an working internet connection on my home network" {
    It "Has a connected network interface"  {...}
    foreach ($c in $Connections)            {  
        It "Has the expected Default Gateway on the interface named  '$($C.NetAdapter.InterfaceDescription)' "   {...}
        It "Gets a 'ping' response from the default gateway for      '$($C.NetAdapter.InterfaceDescription)' "   {...} 
        It "Has an IPV4 DNS Server configured on the interface named '$($C.NetAdapter.InterfaceDescription)' "   {...}
    }
    It "Can resolve the DNS Name 'www.msftncsi.com' " {...}
    It "Fetches the expected data from the URI 'http://www.msftncsi.com/ncsi.txt' " {...}
}

So Pester can help with testing ANYTHING; it isn’t just for checking that Program X gives output Y with input Z. The Describe command, which says what is being tested,
Describe "A computer with an working internet connection on my home network" {}
has the steps needed to perform the test inside the braces. Normally PowerShell is easier to read with parameter names included but writing this out in full as
Describe -Name "A computer with an working internet connection on my home network" -Fixture  {}
would make it harder to read, so the norm with Pester is to omit the switches.  
We describe a working connection by saying we know that it has a connected network, it has the right default gateway and so on. The It statements read just like that, with a name and a test inside the braces (again switches are omitted for readability). When expanded, the first one in the example looks like this.

     It "Has a connected network interface"  {
        $Connections.count | Should not beNullOrEmpty
    }

Should is also defined in Pester. It is actually a PowerShell function which goes to some trouble to circumvent normal PowerShell syntax (the PowerShell purist in me doesn’t like that, but I have to remember the famous quote about “A foolish consistency is the hobgoblin of little minds”); the idea is to make the test read more like natural language than programming.
This example has a test that says there should be some connections, and then three tests inside a loop use other variations on the Should syntax.

$c.DNSServer.ServerAddresses -join "," | Should match "\d+\.\d+\.\d+\.\d+"
$c.IPv4DefaultGateway.NextHop          | Should be "192.168.0.1"
{Test-Connection -ComputerName $c.IPv4DefaultGateway.NextHop -Count 1} | Should not throw

You can see Should allows us to check for errors being thrown (or not), empty values (or not), regular expression matches (or not), and values; depending on what happens in the Should, the It command can decide if that test succeeded. When one Should test fails, the script block being run by the It statement stops, so in my example it would be better to combine “has a default gateway” and “Gets a ping response” into a single It, but as it stands the script generates output like this:

Describing A computer with a working internet connection on my home network
[+] Has a connected network interface 315ms
[+] Has the expected Default Gateway on the interface named  'Qualcomm Atheros AR956x Wireless Network Adapter'  56ms
[+] Gets a 'ping' response from the default gateway for      'Qualcomm Atheros AR956x Wireless Network Adapter'  524ms
[+] Has an IPV4 DNS Server configured on the interface named 'Qualcomm Atheros AR956x Wireless Network Adapter'  25ms
[+] Can resolve the DNS Name 'www.msftncsi.com'  196ms
[+] Fetches the expected data from the URI 'http://www.msftncsi.com/ncsi.txt'  603ms

Pester gives this nicely formatted output without having to do any extra work – it can also output the results as XML so we can gather up the results for automated processing. It doesn’t allow us to test anything that couldn’t be tested before – the benefit is that it simplifies turning a description of the test into a script that will perform it and give results which mirror the description.
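
For example – a minimal sketch, assuming Pester 3.4 or later and that the tests above are saved as network.tests.ps1 (the file name is illustrative):

Invoke-Pester -Script .\network.tests.ps1 -OutputFile .\results.xml -OutputFormat NUnitXml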
The first example showed how a folding editor (the PowerShell ISE or any of the third party ones) can present the script so it looks like the basic specification.
Here’s an outline of a test to confirm that a machine has been built correctly – I haven’t filled in the code to test each part.
Describe "Server 432" {
   It "Is Registered in Active Directory"                 {}
   It "Is has an A record in DNS"                         {}
   It "Responds to Ping at the address in DNS"            {}
   It "Accepts WMI Connections and has the right name"    {}
   It "Has a drive D: with at least 100 GB of free space" {}
   It "Has Dot Net Framework installed"                   {}
}
 
This doesn’t need any PowerShell knowledge: it’s easy to take a plain text file with suitable indents and add the Describes, Its, braces and quote marks – and hand the result to someone who knows how to check DNS from PowerShell and so on; they can fill in the gaps. Even before that is done, the test script still executes.

Describing Server 432
[?] Is Registered in Active Directory 32ms
[?] Has an A record in DNS 13ms
[?] Responds to Ping at the address in DNS 4ms
[?] Accepts WMI Connections and has the right name 9ms
[?] Has a drive D: with at least 100 GB of free space 7ms
[?] Has Dot Net Framework installed 5ms

The test output uses [+] for a successful test, [-] for a failure, [!] for one it was told to skip, and [?] for one which is “pending”, i.e. we haven’t started writing it. 
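
To show how a gap might be filled, here is a minimal sketch of one pending test – the server name is illustrative, and there are other ways to write each check:

It "Responds to Ping at the address in DNS" {
    # Resolve the A record, then ping the address once (Resolve-DnsName needs Windows 8 / Server 2012 or later)
    $address = (Resolve-DnsName -Name "Server432.mydomain.local" -Type A).IPAddress
    Test-Connection -ComputerName $address -Count 1 -Quiet | Should be $true
}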
I think it is good to start with a relatively simple set of tests and add to them – so for checking the state of a machine: is such-and-such a service present and running, are connections accepted on a particular port, is data returned, and so on. In fact, whenever we find something wrong which can be tested, it’s often a good idea to add a test for that to the script.

So if you were in any doubt at the start, hopefully you can see now that Pester is just as valuable as a tool for Operational Validation as it is for software testing.
