DevOps in the enterprise (continued)

I have already posited that one of the biggest challenges for the enterprise is sheer scale: scale in terms of tin, services, process and people, all of which has a serious impact on transformation.

With hundreds or thousands of servers, I can assure you that no-one has a complete map of the assets, let alone the configuration, of a global infrastructure. Sure, configuration management databases (CMDBs) exist, but they are no solution.
Centralised IT teams organised along ITIL lines have been supplemented by shadow IT operating out of business units, diluting knowledge and completeness.

I know that many industries are forced to comply with regulation. Many have defined standards to try to lower cost and achieve ‘compliance’. But I fear that many realise that their previous efforts have only gone so far. I have long held the opinion that once a privileged user is able to access a system, it can no longer be guaranteed to comply with standards; this, in and of itself, is one of the reasons why Infrastructure as Code and immutable systems are so compelling.

And with this in mind the starting point for introducing DevOps to a legacy estate has to be a Discovery process.

Discovery is the first phase – or, in more traditional IT automation circles, a ‘passive deployment’, an approach I first used in 2005.

HP, IBM, CA and the rest will sell you Discovery tools, but can you also use the emerging DevOps options?

Tools that are based on Promise Theory do not do passive (although this may change, and CFEngine already comes close); their goal is to converge a system to the desired state expressed in a policy. You guessed it: Chef and Puppet are really based on this principle, so you have a challenge.
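To illustrate what ‘converging to a desired state’ means – and only as a toy; this is not Chef or Puppet code, and the resource names and policy structure below are invented – here is a minimal sketch of a convergence loop:

```python
# A toy illustration of desired-state convergence, in the spirit of
# Promise Theory. The policy declares desired state; it says nothing
# about current state.
import os

policy = {
    "/etc/motd": {"mode": 0o644, "content": "Managed host - do not edit.\n"},
}

def converge(policy):
    """Inspect each resource and change it only if it deviates from policy."""
    for path, desired in policy.items():
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current != desired["content"]:
            with open(path, "w") as f:
                f.write(desired["content"])   # active change: the agent writes
            os.chmod(path, desired["mode"])
            print(f"converged {path}")
        else:
            print(f"{path} already compliant")

converge(policy)
```

The point is that there is no ‘look but don’t touch’ step here: the agent exists to make changes, which is exactly why such tools are a poor fit for passive discovery.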

Discovery using traditional tools relies on approaches that are not popular in the enterprise: nmap, fingerprinting and the like, sometimes with agents. They may work, but you have no certainty.
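If you want a feel for what fingerprinting involves, here is a minimal sketch; real tools such as nmap do far more, and the address and port list below are examples only:

```python
# A minimal fingerprinting sketch: probe a handful of well-known ports
# and grab whatever banner the service volunteers.
import socket

PORTS = {22: "ssh", 80: "http", 443: "https", 1521: "oracle"}

def probe(host):
    found = {}
    for port, name in PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=1) as s:
                s.settimeout(1)
                try:
                    banner = s.recv(128).decode(errors="replace").strip()
                except socket.timeout:
                    banner = ""            # port open, but service said nothing
                found[port] = (name, banner)
        except OSError:
            pass                           # closed or filtered: no certainty
    return found

print(probe("192.0.2.10"))                 # example address (RFC 5737)
```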

You tackle the problem by breaking down the scale: select a small footprint of Unix|Windows boxes, a specific application or some other logical divide.

You must first look for, and find, patterns, and analyse these so you can pick the most suitable target and start from there.
The second phase is to identify the relationships which link patterns; in a computing world this means relationships between load balancers, or connecting a web server to its application server.
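As a sketch of how that relationship data might be assimilated – the hosts and tiers below are invented – observed connections can be folded into a simple graph and walked from an entry point:

```python
# Fold observed connections (e.g. harvested from netstat output or
# firewall logs) into a graph of service relationships.
from collections import defaultdict

observed = [                    # (source, destination) pairs from discovery
    ("lb01", "web01"), ("lb01", "web02"),
    ("web01", "app01"), ("web02", "app01"),
    ("app01", "db01"),
]

graph = defaultdict(set)
for src, dst in observed:
    graph[src].add(dst)

def walk(node, depth=0):
    """Print the service chain hanging off one entry point."""
    print("  " * depth + node)
    for child in sorted(graph[node]):
        walk(child, depth + 1)

walk("lb01")   # lb01 -> web tier -> app tier -> database
```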

Visual / mapping tools are a great way to start this discovery, if they are available, but they need to be able to exploit the discovery techniques described above. A good engineer can assimilate this information using scripts, spreadsheets and the like.

Note that Discovery takes time and will delay the implementation and adoption of your DevOps tools. This time will, however, be a very good investment!

If you have no tested content ready to deploy immediately, then leave deploying the agent for later – unless you can benefit from low-level infrastructure data such as CPU count, RAM and so on. If you have an agent running and someone inadvertently attaches a policy, you will have issues.


It Started Off As A Post On DevOps In The Enterprise

As the title says, I started writing this post to explore the suitability of DevOps practices for the Enterprise, but then realised that the question that needed addressing first was a little different.

What should the Enterprise expect to gain from adopting DevOps and how should they go about it?

IT departments in the Enterprise are increasingly adopting tooling that is associated with DevOps but are they adopting the practices that will yield future benefits?

Let’s not forget that DevOps is an idea that extends on the themes of source management, automation and collaboration across a number of teams involved in architecture, development and operations.

So DevOps is quite complicated and, like ITIL, 6Sigma and the rest, it is not immediate. You do not become an organisation capable of functioning in a DevOps manner overnight; it takes work to adapt to and adopt. Adopting these practices will, when done properly, improve future performance – but only future performance. The controls, tools, policies and learnings do not affect what has gone before and may have been considered ‘finished’.

For future services the benefits of a DevOps approach should be consistent with those reported by the early proponents, and therefore easy to categorise and quantify. But in the enterprise, how much of the budget, effort and intellect goes into maintaining the status quo, and how much into the future?

If 80% of the IT budget goes into maintaining services, then the benefits will not be seen until technology refresh occurs. This does not necessarily mean replacing the service, or even the hardware, but it certainly does mean change.

You have the (technical) debt from previous activities to deal with, and technical debt in the enterprise is significant by any measure!

So, in the enterprise, I conclude that the benefits of a DevOps approach lie in the long term, and that the key to gaining early value is closely linked to your approach to maintenance and how well you have delivered your platforms historically. I will explore this in future posts.

In the short term the costs will exceed the benefits. Clearly you will be investing in tools and training, and if you are wise you will also perform some [process] integration / automation on top of the relatively simple concepts of configuration management.

DevOps Value Map

Once you have a ‘top team’ of proficient specialists, you should look to prove your process and capabilities by taking on a project or product, or delivering a component, using a DevOps approach. Of course, once you have one project in production that has been delivered using a DevOps approach, you need to ensure that you have the organisational construct and management vision to support the service; building this competency takes time, great leadership and understanding, and we will visit this in another post.

As your competency grows, look at the standard MTTF, MTBF and MTTR metrics, determine where to attack the technical debt, and be prepared for a journey.
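To make that concrete, here is a minimal sketch of how two of those figures might be derived from incident records; the timestamps are invented, and in practice the data would come from your ticketing or monitoring system:

```python
# Derive MTTR and MTBF from a list of (failure, restore) timestamps.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M"
incidents = [   # (failure time, restore time) - invented examples
    (datetime.strptime("2014-01-03 09:00", fmt), datetime.strptime("2014-01-03 11:30", fmt)),
    (datetime.strptime("2014-02-10 14:00", fmt), datetime.strptime("2014-02-10 14:45", fmt)),
    (datetime.strptime("2014-03-22 02:15", fmt), datetime.strptime("2014-03-22 06:15", fmt)),
]

# MTTR: average hours from failure to restore
repair_hours = [(up - down).total_seconds() / 3600 for down, up in incidents]
mttr = sum(repair_hours) / len(repair_hours)

# MTBF: average hours of uptime between one restore and the next failure
gaps = [(incidents[i + 1][0] - incidents[i][1]).total_seconds() / 3600
        for i in range(len(incidents) - 1)]
mtbf = sum(gaps) / len(gaps)

print(f"MTTR: {mttr:.1f} h, MTBF: {mtbf:.0f} h")
```

Services with a poor MTBF-to-MTTR ratio are obvious candidates for attacking the debt first.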

The series continues.

We’re Automating! What to do?

I am delighted: you have decided to automate your “applications|process|infrastructure”. I’ve been automating since the mid-90s.

Your cloud is already automated, of course. (It is, isn’t it?)

So where do you start?

Do you choose Chef, Puppet or even HP CSA and start writing some code? Or shall we pause for breath and consider carefully what to do?

Don’t stop, but I do recommend pausing; pausing is a good option.

There are many ways to tackle the task at hand so I will share some of the options.

Note: automation and standardisation go hand in hand. Work out the common patterns, and the valid and illegal configurations, and address those. It is not necessary, or desirable, to solve every problem when automating; focus on your needs and your application landscape.
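One way to encode those decisions up front – the field names and rules below are invented examples – is a simple validation gate that rejects combinations you have chosen not to support, before any automation runs:

```python
# Validate a provisioning request against the patterns you support.
ALLOWED_OS = {"rhel6", "rhel7", "win2012"}
ALLOWED_ROLES = {"web", "app", "db"}

def validate(request):
    errors = []
    if request.get("os") not in ALLOWED_OS:
        errors.append(f"unsupported OS: {request.get('os')}")
    if request.get("role") not in ALLOWED_ROLES:
        errors.append(f"unsupported role: {request.get('role')}")
    # an 'illegal' combination you have decided not to offer
    if request.get("role") == "db" and request.get("os") == "win2012":
        errors.append("db role is not offered on win2012")
    return errors

print(validate({"os": "rhel7", "role": "db"}))      # [] -> automate it
print(validate({"os": "win2012", "role": "db"}))    # rejected by policy
```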

And it really doesn’t matter which tool you use – make a selection based on your use case and your organisation’s ability to adopt. A domain-specific language (DSL), a script, PowerShell or a packaged solution – it doesn’t matter, provided it fits your organisation. You will ultimately end up with policy setters who are highly skilled in the tooling, and users who consume the policies.

Anyway – onto approaches.

1) Take a vertical approach and iterate to ensure you deliver 100% of one product. Bear in mind that with most applications you will need to choreograph actions, i.e. build one server followed by another, or start process B after A. So automating 100% of an n-tier application will need orchestration – and to do this you will need more than Chef.
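To show what that choreography amounts to – the step names are invented, and a real orchestrator adds error handling, retries and parallelism – the dependencies can be written down and a safe order derived from them:

```python
# Express "B after A" as dependencies and derive a safe build order.
from graphlib import TopologicalSorter   # Python 3.9+

deps = {
    "db-server":  set(),
    "app-server": {"db-server"},         # app starts after the database
    "web-server": {"app-server"},
    "lb-config":  {"web-server"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)   # ['db-server', 'app-server', 'web-server', 'lb-config']

for step in order:
    print(f"provisioning {step} ...")    # hand each step to your CM tool here
```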

2) Take a horizontal approach and cover one aspect of your operation – historically this would have been something you did to address a patch management problem, or perhaps to deploy an agent technology such as backup. Again, consider carefully whether patching (OS) is core, or whether containers or images can help you solve this problem in another manner.

3) Go passive. This will very much depend on the technology you have chosen to automate with. You may have the option to deploy an agent in a read-only mode. This can be invaluable, but emerging ‘convergence-centric’ tools like Chef are not appropriate for this approach. You need something that does discovery AND, probably, compliance.

Passive allows you to gather data on your estate in order to build metrics or identify problems (think compliance), so that you can tackle them systematically. Metrics are crucial, so this has real merit.
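A minimal sketch of what a passive, read-only audit might look like – the checks below are examples; substitute your own standards:

```python
# A read-only compliance sketch: report deviations, never change anything.
import os, stat

CHECKS = [
    ("/etc/shadow permissions", lambda: stat.S_IMODE(os.stat("/etc/shadow").st_mode) <= 0o640),
    ("sshd config present",     lambda: os.path.exists("/etc/ssh/sshd_config")),
]

def audit():
    results = {}
    for name, check in CHECKS:
        try:
            results[name] = "PASS" if check() else "FAIL"
        except OSError as e:
            results[name] = f"UNKNOWN ({e})"   # passive: record, don't fix
    return results

for name, outcome in audit().items():
    print(f"{outcome:7} {name}")
```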

4) Look at operational metrics and….
– automate the task that you do most often and which already has a ‘run-book’
– automate the task that costs the most time|effort
– automate the task that only Joe knows how to do
– take on the issue that costs the most downtime or has the highest revenue impact

(separate blog on this, but a crude scoring sketch follows)
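For illustration, here is a crude prioritisation sketch along those lines; the tasks and weights are invented:

```python
# Score candidate tasks by frequency, effort and key-person risk,
# and automate the highest scorers first.
tasks = [
    # (name, runs per month, hours per run, only-Joe-knows?)
    ("patch web tier",   12, 2.0, False),
    ("restore test db",   4, 3.5, True),
    ("rotate app certs",  1, 1.0, True),
]

def score(runs, hours, key_person_risk):
    s = runs * hours                  # monthly effort saved
    if key_person_risk:
        s *= 2                        # weight bus-factor risk heavily
    return s

for name, runs, hours, risk in sorted(tasks, key=lambda t: -score(*t[1:])):
    print(f"{score(runs, hours, risk):6.1f}  {name}")
```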

5) Tackle one task and automate every single aspect of it: the ports, virtual hosts, primary and secondary controllers for a specific agent. Or perhaps automate until you have covered 80% of the use cases.

Consider automating Oracle deployments as an example. Should you extend this to include RAC, all of the RAC prerequisites and every conceivable RAC configuration option?

6) Bite it all off and get everyone automating!
Please don’t do this, at least not until you have thought long and hard.

So now you have some ideas, you really need to start thinking about metrics; they are core to your success. In fact, the only way you will know you have been successful is by gathering and using metrics.

Postscript:

Other thoughts you must consider:

– how to upgrade a service
– how to upgrade a service while keeping within your SLA
– incremental change (does my tool replace or update my configs?)
– hard cases (schemas, brokers, XML)
– giving users access to your tooling, as a user, to set policy

– can you treat infrastructure and apps the same?

– do you create a team specifically focused on automation, configuration management or DevOps?