It’s cloud, nothing more and nothing less

Public cloud, private cloud, hybrid cloud – let’s stop the nonsense; it is all cloud.

The delta comes in the form of access.

Many enterprises have, in fact, not built a cloud but done some infrastructure automation: automation that serves the purposes of the infrastructure and operations folks, but is not a cloud.

So what’s the distinction?

A cloud is all about the API and after the API it is all about the content.

If a service cannot be requested, acquired and operated via an API, you do not have a cloud. If the underlying infrastructure is not terminated automatically based on demand (or lack of it) and returned to a pool of available capacity… you guessed it: it is not a cloud.
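The acquire-and-return lifecycle can be sketched with a toy in-memory pool; everything here is illustrative, not any real provider’s API:

```python
class CapacityPool:
    """Toy model of cloud behaviour: capacity is acquired and released via
    API calls, and released capacity returns to the shared pool."""

    def __init__(self, total):
        self.free = total       # unallocated capacity units
        self.allocated = {}     # server id -> size
        self._next_id = 0

    def request(self, size):
        """Acquire capacity; the caller never cares where it runs."""
        if size > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= size
        self._next_id += 1
        self.allocated[self._next_id] = size
        return self._next_id

    def release(self, server_id):
        """Terminate: capacity goes straight back to the pool."""
        self.free += self.allocated.pop(server_id)

pool = CapacityPool(total=10)
sid = pool.request(4)   # requested and acquired via the API
pool.release(sid)       # terminated on lack of demand, returned to the pool
```

If every service in your estate can be driven this way, end to end, you have a cloud; if a human has to raise a ticket, you do not.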

Forget where the cloud is; you should not care.


Andy Hawkins

+44 7795 464517



Buzzwords demystified (for the cynic)

Serverless: it’s what the guys who want to sell you cycles are peddling vs. what the guys who want to sell you the tin to run those cycles on

Microservices: how we used to build distributed systems before Moore’s law made it possible to colocate everything in one image

DevOps: it’s what the new kids on the block do… until they get to twenty-five employees, and then they don’t quite know what to do except NOT to do ITIL

Agile: it’s the methodology for guys who don’t want to commit anything to writing, in either a document or calendar form

API: intelligent glue unlike SOA which was academically superior but incomprehensible to anyone but an academic

Stateless: something that is technically useless in any meaningful application

App: code that is too trivial to be used in any backend

Experiment: in applied computing this is often used to disguise something that is not well thought out or suitable for the intended purpose



The Wrong Patterns

Almost every organisation I visit in this DevOps world is obsessed by ‘technical debt’ – and consumed with guilt, hatred and loathing. Well, perhaps that’s a bit harsh, but…

I find the obsession frustrating because it is all too often nebulous and therefore can’t be fixed or leads to an incorrect hypothesis about where problems lie.

These are some of the warning signs I see people focus on, often incorrectly, and which lead to frustration and an inability to move forward.

I am keen to help anyone avoid these technical debt mistakes…

1. Code Quality
Trying to ‘fix’ code quality across an entire legacy code base is not sensible.

* Languages, skills and coding techniques have changed – live with it.
* Define standards, write style and convention guides, and test code going forward; maintain this approach and apply it sensitively to legacy code bases
* Don’t point a static code analysis tool at your entire code base and expect anything but pain
* Don’t try to refactor all your code to modern standards
* Fixing legacy issues should be driven by metrics such as defect rate, bug counts and the need to add new functionality
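The ‘test code going forward’ idea can be sketched as linting only what has changed since a baseline, rather than pointing static analysis at the whole legacy code base. This is a sketch assuming a git checkout; `pylint` stands in for whichever linter you use:

```python
import subprocess

def python_files(diff_output):
    """Filter a `git diff --name-only` listing down to Python sources."""
    return [f for f in diff_output.splitlines() if f.endswith(".py")]

def changed_files(baseline="main"):
    """Files touched since the baseline branch -- the code going forward."""
    out = subprocess.run(["git", "diff", "--name-only", baseline],
                         capture_output=True, text=True, check=True).stdout
    return python_files(out)

def lint_changed(baseline="main"):
    """Hold only new and modified code to the new standard."""
    files = changed_files(baseline)
    if not files:
        return 0  # nothing new, nothing to fail
    return subprocess.run(["pylint", *files]).returncode
```

The legacy code base stays untouched until defect metrics, or the need for new functionality, give you a reason to go in.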

2. Run-book
Paper run-books should be burned and organisations with no run book deserve to suffer pain.

Failing to use a build pipeline|toolchain or run-book automation is a cardinal sin; don’t settle for people running complex IT release processes interactively.
* Tools to automate these processes have existed for more than a decade; many are very good
* root|Administrator logging on to a system after development should be a reason for dismissal – perhaps a bit harsh, but only perhaps
* Trust the efficacy of anything done by humans with the same reliability as a 1,000-person Chinese whisper
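A run-book codified as ordered, checked steps might look like the sketch below. The step names and commands are invented for illustration; a real pipeline would use your toolchain of choice:

```python
import subprocess  # used by the real runner shown in the comment below

# A paper run-book turned into data: ordered steps, each a command.
RUNBOOK = [
    ("stop service",   ["systemctl", "stop", "myapp"]),
    ("deploy release", ["rsync", "-a", "build/", "/opt/myapp/"]),
    ("start service",  ["systemctl", "start", "myapp"]),
]

def execute(runbook, runner):
    """Run each step in order; halt on the first non-zero exit so no one
    has to guess how far the release got -- the antidote to the whisper."""
    completed = []
    for name, cmd in runbook:
        if runner(cmd) != 0:
            raise RuntimeError(f"run-book halted at step: {name}")
        completed.append(name)
    return completed

# Real use: execute(RUNBOOK, lambda cmd: subprocess.run(cmd).returncode)
# Dry run for illustration -- every step "succeeds":
done = execute(RUNBOOK, runner=lambda cmd: 0)
```

The point is not the ten lines of Python; it is that the release order, commands and failure behaviour now live in version control rather than in someone’s head.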

3. Staff turnover
You can’t stop staff turnover and you don’t want to.
Many organisations I work with have ‘someone’ who has to be retained – scary.

* It is inevitable that staff will come and go and also that skills and experience will change
* Train your staff well and measure performance and conformity.
* Avoid the hero culture. Ensure that you apply the same principles to knowledge as you do to compute resilience i.e. N+1
* If you have critical systems with critical staffing requirements make plans to mitigate your risk. Automate, train, replace, apply staffing RAID1, RAID10

4. Corporate memory/inertia
‘We can’t do that because we are too large|have compliance|…’
Being unprepared to make improvements because of corporate memory is just plain lazy

How often do I hear ‘You can’t do that’, ‘that won’t work here’ or similar?
* Don’t settle for laziness; create a culture to challenge convention
* Become the skeptical optimist; advocate the ‘we can do better’ approach and aim to prove it
* But don’t try to change the whole environment in one fell swoop
* Make changes, strategically, but don’t do so without sponsorship

5. A relatively young industry
Whichever way you look at it, IT/IS is a young industry; look for lessons from other sectors and specialisms. Take some of our own medicine! How many other industries has IT changed? Yet IT refuses to automate itself.

* It is surprising how little we have experimented
* It is surprising how few IT people are prepared to take risk
* Doing the same thing in the same way many times will yield the same results; if you want to change things you have to change the way you do things
* Accept lessons from others; go out and find experts and borrow their ideas

6. Old technology
The rate of change in our industry is significant and it is increasing.
Don’t assume that any new technology will solve the problems of mankind and bring peace and harmony. Containers are great but they are not heaven-sent.

* With economic constraints we must accept that technology will often be old, but if it meets the needs of the business it has purpose
* Accept old technology; it can still be automated – you don’t have to Dockerize everything

7. The cloak of invisibility
The number of times I hear the words ‘we don’t have any data’.
Organisations which try to make improvements without evidence should fail.
Apply the scientific principle for crying out loud.

How often do we make changes without any data?
* We tune a server, or focus on performance coding when we don’t have any indication that there is a problem or where it lies
* We state that we can reduce cost when we don’t understand what something costs today
* We over-engineer because someone decides a service must meet certain unspecified requirements
* The business does not have any defined KPIs and therefore doesn’t know what is important to it

8. The fragile artifact
Ignore the fragile artifact at your peril.
* I spent time in the IT department of an airport once and the ground handling system was the fragile artifact.
* Go near it and it might fail – so, people didn’t go near it
* When it failed everyone kept their head down
* Stop! Focus on the fragile artifact: fix it, fix the process around it, fix the team supporting it, or turn it off!

9. A plethora of standards
We have standardised our Windows|RDBMS|release pipeline|…, but we are a large organisation and we have 20 standards for each…

* So you haven’t standardised at all
* You can set a baseline / single standard – standards can be extended
* Perhaps you are in IT and your standards don’t meet the needs of the business. Collaborate – fix the problem, don’t settle on a second-rate fudge
* Don’t tell me you can’t standardise – you simply haven’t considered the problem properly, understood the patterns and worked out how to abstract them

10. Enterprise Architects
Enterprise architects who attend seminars, produce Visio diagrams and talk in platitudes, but can’t build systems and fix technical problems themselves, do not add value to the business

* I spent some time with an airline where the EA talked about his errors in selecting a certain provider, only to find that the solution he had bought did not do what it claimed. If he had spent an hour trying to use the managed service before signing the approval, he would have worked this out and found a better solution
* EAs need to be helping the organisation build and manage solutions
* EAs need to define, measure and manage standards
* EAs need to be driving, owning and living change

Work out what technical debt is and what it means to you – then, and only then, can we fix the problems.

The Right Patterns

I am committed to the ideas behind Continuous Delivery and the approach that has become known as DevOps and I want to help organisations adopt and start delivering solutions using these approaches.

I try to use common sense approaches to guide enterprises in their adoption and present the following four patterns (in order of preference) to help guide selection of the right project to kick-off their journey:

1. A Greenfield project; you may argue that you don’t have a suitable target but I challenge this as most IT departments have something, even a component, in motion at any time

2. A strategic application (service) that is based on technologies / components that are core architectural blueprints for the long-term

3. The fragile artifact; the fragile artifact is a great pattern but requires very careful consideration before you commit. What makes it fragile, and is that something you can address?

4. A third-party / COTS product

Why look for these?

1. Building blocks that can be re-used; early projects need to be used as an incubator to accelerate what follows

2. They are likely to provide good ROI and reduce TCO – and that can be measured and used to improve what follows

3. The size of the change can be made to be manageable; the emphasis is ‘be realistic’

4. In conjunction with solving the core problem it is likely one will deploy supporting capabilities, and this must be achievable; success will include tackling CI, approvals, orchestration, code review, peer programming and instrumentation – solely for the selected target but with the potential to reuse

5. We will learn but also we will be less likely to be inhibited by corporate memory or corporate inertia

6. Speed. We can deliver something tangible in a reasonable time

7. The COTS project often surprises people. It is in your best interest to ensure that your suppliers deliver product in a manner that you understand and can consume; it is necessary to ensure you understand and are able to manage it in the longer-term. As a supplier I have always been receptive when a client tells me a product needs to be delivered in a certain manner and keen to help.

DevOps: Coming to an enterprise near you.

In an industry where hype is the norm the DevOps movement has been fairly low-key until quite recently. Well, low-key in the enterprise perhaps, but there is a massive and very passionate community that has been doing great things for some time. Things like deploying code to production, at scale, every 15 minutes or creating super computer scale systems for pharmaceutical research in the cloud and paid for with a personal credit card.

DevOps is now coming to an enterprise near you and will have a huge impact in 2015 and beyond, so get ready.

To understand the relevance of DevOps, compare it with the production line: DevOps is the IT equivalent.

The production line revolutionised industry and underpins our consumer world delivering a dazzling array of innovative products that are accessible to anyone.

While web and gaming companies pioneered this space, its merits have been identified by global software companies, retailers, banks and even heavy industry.

Over five years we have observed many engineers content to remain blissfully ignorant of DevOps. Others have dismissed it, but take note: this is changing, and changing fast, and many enterprises have now nailed their colours to the DevOps mast.

DevOps is as essential as it is inevitable; it will become ubiquitous in the enterprise and is fundamental to redress the impact that ITIL and out-sourcing have had on innovation and expertise.

How can we be so confident? DevOps shortens the software development lifecycle, reduces waste (time, process, repetition) and improves quality, enabling you to focus on what is important to the business and innovate. Most fundamentally it is about automating all things, including process, infrastructure, deployment, test, build and change. This is something that many organisations have tried to tackle alone, without success.

So ask yourself:

  1. Do you have the culture, capability and confidence to commence your DevOps journey?
  2. Do you know where to start and where to avoid?
  3. Do you understand what DevOps best practices look like and can you avoid anti-patterns?

DevOps is a C-cubed world. Culture, Capability and Confidence. You need all three to succeed.

Don’t fall into the common trap and assume this is simply about adopting the cloud or implementing tools – talk to someone who knows this space and has the hands-on experience to advise.

DevOps & Visible Patterns

Dealing with C-C² in a DevOps culture

If DevOps is about Culture, it is equally about Capability and Confidence.

Perhaps I am alone but I am increasingly seeing the following mathematical equations in organisations:

a) C³
b) C²-C
c) C-C²

While many early advocates had great cultural awareness and technical expertise, they often lacked ‘enterprise’. Recently I see too many C-C²s.

My experience is that anything other than C³ is incredibly harmful. Choose your team wisely and coach them well.

DevOps in the enterprise (continued)

I have already posited that one of the biggest challenges for the enterprise is sheer scale. Scale in terms of tin, services, process and people, and this has a serious impact on transformation.

With hundreds or thousands of servers I can assure you that no-one has a complete map of the assets let alone the configuration of a global infrastructure. Sure, configuration management databases (CMDB) exist but they are no solution.
Centralised IT teams distributed according to an ITIL model have been bolstered by shadow IT, which operates out of business units, diluting knowledge and completeness.

I know that many industries are forced to comply with regulation. Many have defined standards to try and lower cost and achieve ‘compliance’. But I fear that many realise that their previous efforts have only gone so far. I have long held the opinion that if a privileged user is able to access a system it can no longer be guaranteed to comply with standards; this, in and of itself, is one of the reasons why Infrastructure as Code and immutable systems are so compelling.

And with this in mind the starting point for introducing DevOps to a legacy estate has to be a Discovery process.

Discovery is the first phase – or, in more traditional IT automation circles, an approach I first used in 2005: a passive deployment.

HP, IBM, CA etc. will sell you Discovery tools, but can you also use the emerging DevOps options?

Tools that are based on Promise Theory do not do passive discovery, although this may change and CFEngine comes close already; their goal is to implement a representation of desired state from a policy. You guessed it: Chef and Puppet are really based on this principle, so you have a challenge.

Discovery using traditional tools relies on approaches that are not popular in the enterprise – nmap, fingerprinting and the like, with agents – so they may work, but you have no certainty.

You tackle the problem by breaking down the scale: selecting a small footprint of Unix|Windows boxes, a specific application or some other logical divide.

You must first look for, and find, patterns and analyse these so you can find the most suitable target and start from there.
The second phase is to identify relationships which link patterns; in a computing world this becomes relationships between load balancers, or connecting a web server to its application server.

Visual / mapping tools are a great way to start this discovery if they are available, but they need to be able to exploit the discovery techniques described above. A good engineer can assimilate this information using scripts, spreadsheets and the like.
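The pattern-finding step can be sketched as grouping hosts by an identical fingerprint – here open ports, as nmap or an agent might report them. The host names and port sets are invented for illustration:

```python
from collections import defaultdict

def group_by_pattern(hosts):
    """Group hosts with identical fingerprints.

    hosts: mapping of hostname -> iterable of open ports.
    Returns a mapping of (sorted port tuple) -> list of matching hosts.
    """
    patterns = defaultdict(list)
    for host, ports in hosts.items():
        patterns[tuple(sorted(ports))].append(host)
    return dict(patterns)

# What a first discovery pass might have found:
discovered = {
    "web01": {80, 443, 22},
    "web02": {80, 443, 22},   # same pattern as web01
    "db01":  {5432, 22},
}
patterns = group_by_pattern(discovered)
# The largest group is usually the most suitable first automation target.
```

Even a spreadsheet-grade analysis like this is enough to pick the pattern to start from; relationships between groups come in the second phase.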

Note that Discovery takes time and will delay the implementation and adoption of your DevOps tools. This time will, however, be a very good investment!

If you have no tested content ready to deploy immediately, leave deploying the agent for later, unless you can benefit from low-level infrastructure data like CPU count, RAM etc. If you have an agent running and someone inadvertently attaches a policy, you will have issues.

Testing your automation

I was prompted to write this after enquiring why the team was applauding in the background while I was on a conference call the other day; I realised that a deploy is still fraught and laden with unnecessary angst.

It’s time for an admission!

A long time ago I was part of a team that spent six months automating the delivery of a major platform for a client.

We automated everything, tested it and waited for deployment day.

So far, so good…

We tested every unit, tested everything in isolation, then integration tested in a range of pre-production environments.

But it went horribly wrong on release day!


Storage, security, authentication, routing, name services, timing – and perhaps just poor planning or bad luck. Actually, it was a combination of these things, all of which I should have foreseen.

To name just a few root causes:
1) firewalls
2) routing and name service deltas
3) latency
4) user permissions.

All sounds simple and straightforward but all caused problems plus perhaps 20 other things. Our unit and integration tests did not protect us.

The solution: we developed a framework called probes – non-destructive tests, some as simple as a traceroute or ping, to be run repeatedly in the weeks, days and hours before deployment, and immediately before and after it, to help determine readiness and identify failure. We choreographed the probes using iConclude, now HP Operations Orchestrator, in a matter of days and it was one of the most pleasing aspects of our project.

Any time we needed to check status we ran the probe suite and validated our tests. Prior to the deploy we ran the probe suite and looked for red alarms; no alarms meant all pre-conditions were met. Run at deployment time, the suite also raised and closed tickets in the release system and updated the monitoring tools. Deployment came down from 48 hours of overtime to minutes on a normal day.
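A minimal sketch of the probe idea, with illustrative stand-ins for real checks such as ping, traceroute or a DNS lookup:

```python
import socket

def probe_dns(name):
    """Non-destructive check: can we resolve a name?"""
    try:
        socket.getaddrinfo(name, None)
        return True
    except OSError:
        return False

def probe_port(host, port, timeout=2.0):
    """Non-destructive check: is a TCP port reachable?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_suite(probes):
    """Run every probe; return the red alarms. Empty list == ready."""
    return [name for name, fn in probes if not fn()]

# An illustrative suite -- real ones would cover firewalls, routing,
# latency, permissions and the rest of the release pre-conditions.
suite = [
    ("dns: localhost", lambda: probe_dns("localhost")),
    ("port: app tier", lambda: probe_port("127.0.0.1", 22)),
]
alarms = run_suite(suite)  # non-empty => pre-conditions not met
```

Run repeatedly on a schedule, the same suite doubles as a readiness gate before the deploy and a health check after it.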

Sounds simple. It was simple!

As I look at the current state of DevOps I cannot see anything as elegant as this being developed.

I think there is huge value in this approach and perhaps people are doing it so I will continue to look and hope to see it soon.

Oh, and by the way, probes were our friend in finding problems with the solution once it had gone live.

DevOps and what you won’t hear

DevOps is about Automation – the ‘A’ in the CAMS acronym – so you really need to understand the following to make a success of it.

Data, Data, Data – metrics and instrumentation are key to planning, execution and understanding progress

Necessity is the mother of invention; if it ain’t broke don’t fix it (PowerShell or bash might be the right answer)

Legacy businesses may be able to use existing automation tools as effectively as emerging tools for automation

Choreographing a service is more important than automating a server

Redressing the failures of historical behaviour is hard, nay impossible; tread carefully with legacy

Open source changes rapidly; keeping current needs a process

Workflow ideas are emerging but confused in the Configuration Management community

You write positive actions when building something. Removing something needs a positive action too. Is an immutable infrastructure practical for you?

Physical infrastructure is harder than virtualisation which is harder than cloud

Community doesn’t replace your own expertise

It is all about infrastructure as code – consider your need for a UI and how your user experience will benefit | suffer

You can test from the inside out or the outside in; which is better is yet to be decided

It Started Off As A Post On DevOps In The Enterprise

As the title says, I started writing this post to explore the suitability of DevOps practices in the Enterprise, but then realised that the question that needed addressing first was a little different.

What should the Enterprise expect to gain from adopting DevOps and how should they go about it?

IT departments in the Enterprise are increasingly adopting tooling that is associated with DevOps but are they adopting the practices that will yield future benefits?

Let’s not forget that DevOps is an idea that extends on the themes of source management, automation and collaboration across a number of teams involved in architecture, development and operations.

So DevOps is quite complicated and, like ITIL, Six Sigma etc., it is not immediate. You do not become an organisation capable of functioning in a DevOps manner overnight; it takes work to adapt to and adopt. Adopting these practices will improve future performance when done properly, but it is about future performance: the controls, tools, policies and learnings do not affect what has gone before and may have been considered ‘finished’.

For future services the benefits of a DevOps approach should be consistent with the early proponents and therefore easy to categorise and quantify. But in the enterprise how much of the budget, effort and intellect goes into maintaining the status quo and how much into future?

If 80% of the IT budget goes into maintaining services, then the benefits will not be seen until a technology refresh occurs; this does not necessarily mean replacing the service or even the hardware, but it certainly does mean change.

You have the (technical) debt from previous activities to deal with and technical debt in the enterprise is significant by any measure!

So in the enterprise I surmise that the benefits of a DevOps approach are there in the long term, and the key to gaining early value is closely linked to your approach to maintenance and how well you have delivered your platforms historically. I will explore this in future posts.

In the short-term the costs will exceed the benefits. Clearly you may be investing in tools and training and if you are wise you will also perform some [process] integration / automation on top of the relatively simple concepts of configuration management.

(Figure: DevOps Value Map)

Once you have a ‘top team’ of proficient specialists you should look to prove your process and capabilities by taking on a project, product or component delivered using a DevOps approach. Of course, once you have one project in production which has been delivered using a DevOps approach you need to ensure that you have the organisational construct and management vision to support the service; building this competency takes time, great leadership and understanding, and we will visit this in another post.

As your competency grows, look at the standard MTTF, MTBF and MTTR metrics, determine where to attack the technical debt, and be prepared for a journey.
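As a sketch of putting data behind that decision, MTBF and MTTR can be computed from a list of outage windows; the figures below are purely illustrative:

```python
def mtbf_mttr(incidents, period_end):
    """Mean time between failures and mean time to repair.

    incidents: list of (start, end) outage windows, sorted by start,
    measured in hours from an arbitrary epoch.
    period_end: total length of the observation period in hours.
    """
    repair = sum(end - start for start, end in incidents)
    uptime = period_end - repair
    n = len(incidents)
    return uptime / n, repair / n  # (MTBF, MTTR)

# Three outages across a 720-hour month:
incidents = [(100, 102), (300, 301), (650, 653)]
mtbf, mttr = mtbf_mttr(incidents, period_end=720)
```

A service with a shrinking MTBF, or an MTTR dominated by one fragile component, tells you exactly where the technical-debt attack should start.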

The series continues.