DevOps in the enterprise (continued)

I have already posited that one of the biggest challenges for the enterprise is sheer scale: scale in terms of tin, services, processes and people, and this has a serious impact on transformation.

With hundreds or thousands of servers I can assure you that no-one has a complete map of the assets, let alone the configuration, of a global infrastructure. Sure, configuration management databases (CMDBs) exist, but they are no solution.
Centralised IT teams, distributed according to an ITIL model, have been supplemented by shadow IT operating out of business units, diluting knowledge and completeness.

I know that many industries are forced to comply with regulation. Many have defined standards to try to lower costs and achieve ‘compliance’. But I fear that many realise that their previous efforts have only gone so far. I have long held the opinion that if a privileged user is able to access a system, it can no longer be guaranteed to comply with standards; this, in and of itself, is one of the reasons why Infrastructure as Code and immutable systems are so compelling.
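To illustrate the point, a minimal drift check might compare a live configuration file against the copy held in source control; any privileged edit on the box then shows up as drift from the declared state. This is a sketch only, and the file paths are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical paths: the live file and the version checked into source control.
live = Path("/etc/myapp/app.conf")
desired = Path("repo/etc/myapp/app.conf")

if sha256(live) != sha256(desired):
    # A privileged edit on the box now shows up as drift from the declared state.
    print(f"DRIFT: {live} no longer matches source control")
```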

With this in mind, the starting point for introducing DevOps to a legacy estate has to be a Discovery process.

Discovery is the first phase, or, in more traditional IT automation circles, a passive deployment: an approach I first used in 2005.

HP, IBM, CA and the like will sell you Discovery tools, but can you also use the emerging DevOps options?

Tools based on Promise Theory do not do passive discovery, although this may change and CFEngine already comes close; their goal is to converge a system to a representation of its desired state from a policy. You guessed it: Chef and Puppet are based on this principle, so you have a challenge.
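To make the distinction concrete, here is a minimal sketch (not any vendor's actual API) of the difference between a passive audit, which only reports deviation, and convergence, which is what a configuration management agent does by default:

```python
from pathlib import Path

# Toy desired state: file path -> expected content.
POLICY = {"/etc/motd": "Authorised users only\n"}

def audit(policy: dict[str, str]) -> list[str]:
    """Passive: report files that deviate from policy, change nothing."""
    drift = []
    for path, content in policy.items():
        p = Path(path)
        if not p.exists() or p.read_text() != content:
            drift.append(path)
    return drift

def converge(policy: dict[str, str]) -> None:
    """Active: rewrite anything that deviates, as a CM agent does by default."""
    for path, content in policy.items():
        p = Path(path)
        if not p.exists() or p.read_text() != content:
            p.write_text(content)

print("out of policy:", audit(POLICY))  # safe on a legacy estate
# converge(POLICY)  # this would mutate the box: exactly the risk during Discovery
```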

Discovery using traditional tools relies on approaches that are not popular in the enterprise: nmap, fingerprinting and the like, sometimes with agents. They may work, but you have no certainty.
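A typical starting point is an nmap ping sweep. The sketch below assumes nmap is installed and that you are authorised to scan the range; the subnet is hypothetical:

```python
import subprocess

def ping_sweep(cidr: str) -> list[str]:
    """Run an nmap ping sweep (-sn) and return the hosts that respond."""
    out = subprocess.run(
        ["nmap", "-sn", "-oG", "-", cidr],  # -oG -: greppable output to stdout
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        line.split()[1]
        for line in out.splitlines()
        if line.startswith("Host:") and "Status: Up" in line
    ]

# Hypothetical subnet; only scan ranges you are authorised to probe.
print(ping_sweep("10.0.0.0/24"))
```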

You tackle the problem by breaking down the scale: select a small footprint of Unix or Windows boxes, a specific application or some other logical divide.
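As a sketch of what that slicing might look like in practice, here is a grouping of a hypothetical inventory (the field names and rows are invented, as they might come out of a CMDB export):

```python
from collections import defaultdict

# Hypothetical inventory rows, e.g. from a CMDB export.
inventory = [
    {"host": "lb01", "os": "Linux", "app": "payments"},
    {"host": "web01", "os": "Linux", "app": "payments"},
    {"host": "dc01", "os": "Windows", "app": "directory"},
]

def slice_by(rows: list[dict], key: str) -> dict[str, list[str]]:
    """Group hosts by a logical divide (OS, application, ...)."""
    groups: dict[str, list[str]] = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row["host"])
    return dict(groups)

print(slice_by(inventory, "app"))  # pick the smallest slice as the first target
```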

You must first look for, and find, patterns and analyse them so you can pick the most suitable target and start from there.
The second phase is to identify the relationships which link those patterns; in a computing world this means the servers behind a load balancer, or the connection between a web server and its application server.
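One hedged way to surface those relationships is to build a simple graph from observed TCP connections; the flows below are invented, but in practice could be harvested from `ss -tn` or netstat output on each host:

```python
from collections import defaultdict

# Hypothetical observed TCP flows: (source host, destination host, dest port).
flows = [
    ("lb01", "web01", 443),
    ("lb01", "web02", 443),
    ("web01", "app01", 8080),
    ("web02", "app01", 8080),
]

graph: dict[str, set[str]] = defaultdict(set)
for src, dst, _port in flows:
    graph[src].add(dst)

# Walking from the load balancer reveals the tiers behind it.
for upstream, downstreams in graph.items():
    print(f"{upstream} -> {sorted(downstreams)}")
```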

Visual / mapping tools are a great way to start this discovery, if they are available, but they need to be able to exploit the discovery techniques described above. A good engineer can assimilate this information using scripts, spreadsheets and the like.
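The spreadsheet route can be as simple as dumping the discovery output to CSV; the rows here are the same hypothetical inventory shape as above:

```python
import csv

# Hypothetical discovery output; in practice this comes from the scans above.
rows = [
    {"host": "lb01", "os": "Linux", "app": "payments"},
    {"host": "web01", "os": "Linux", "app": "payments"},
]

# A CSV file drops straight into a spreadsheet for pattern analysis.
with open("discovery.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["host", "os", "app"])
    writer.writeheader()
    writer.writerows(rows)
```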

Note that Discovery takes time and will delay the implementation and adoption of your DevOps tools. This time will, however, be a very good investment!

If you have no tested content ready to deploy immediately, leave deploying the agent until later, unless you can benefit from low-level infrastructure data such as CPU count, RAM and so on. If an agent is running and someone inadvertently attaches a policy to it, you will have issues.
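If all you want is that low-level data, you do not need a policy-bearing agent at all. A minimal, read-only fact gatherer might look like this (Linux-specific, since it reads /proc/meminfo):

```python
import os

def basic_facts() -> dict[str, str]:
    """Collect the low-level data an agent would report, with no policy attached."""
    facts = {"cpu_count": str(os.cpu_count())}
    # /proc/meminfo is Linux-specific; adjust for other platforms.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                facts["mem_total"] = line.split(":", 1)[1].strip()
                break
    return facts

print(basic_facts())
```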
