Okay, we are not superheroes. But a group of people hoping to do some good with our clients in Morocco is gathering, and I am struck by the excitement, energy and optimism as we start to link up. We are the IBM Corporate Service Corps for Morocco (CSCMOR13).
It is easy as a long-time consultant to be blasé about business travel. Airports all look the same, hotels all look the same, and ‘road warriors’ really just want to get home at the end of the week. I’m lucky that, living in Toronto, there is enough client work to keep me busy locally – and I’ve still spent long stretches of my career traveling every week: a year in Europe, four more years commuting weekly to Montreal, and shorter stretches in Vancouver BC and North Carolina.
This time is different. It started on Wednesday when our WhatsApp chat showed a picture of two of our crew linking up to start their journey from Australia. Soon the chat was buzzing with selfies as groups gathered in twos and threes and started to connect. ‘Can’t wait’ and ‘Looking forward to it’ were common, or ‘See you in Dubai’ or some other place where they would change planes.
Jinzi and I connected briefly at the Air France desk in Toronto-Pearson airport before we took separate flights to Paris. Colleagues were texting each other gate information, tips for how to be prepared as we arrive in Morocco, and just ‘safe flight’. It’s great to see a group of people who’ve only met over webex finally connecting in person. This team is definitely moving from ‘forming’ to ‘norming’ quickly.
Almost half of us were on the same flight from Paris to Rabat, and as we waited at the gate for the final leg to our destination, some things became obvious very quickly.
This group of near strangers who are spread across 5 continents share a lot of common concerns, beliefs and interests. It felt like old friends chatting from the moment we met.
There is a lot of passion for our role – working together with our clients, government agencies and NGOs in Morocco, co-creating a strategy that can improve an economic or social challenge.
Unlike many project teams from IT consulting firms, this team is mostly women: of 15 total, 11 are women and 4 are men. I’m very happy to get some experience being in the minority!
By end of day Friday, we should all be together. This weekend we get acclimatized to the city and each other, and Monday we start working with our clients!
One of the most over-used and misunderstood concepts in IT today is DevOps. It’s right up there with Multicloud and Agile as ideas that get more lip service than actual comprehension. Most companies today have started to implement DevOps, and DevOps engineer jobs are constantly posted. Yet most companies are unhappy with the return relative to the investment.
In a 2020 IDC survey of U.S. enterprise organizations, more than 77% of respondents indicated that they have already adopted DevOps in some capacity. But most of these enterprise organizations are using DevOps for less than 20% of their application estate. Those same companies plan to double DevOps usage in 2021.
So why is DevOps so hard, and why is it so important to get it right? And why is it important for your company’s journey to cloud?
Probably the biggest impediment for organizations implementing DevOps (or DevSecOps, as it’s increasingly known) is a lack of understanding or agreement on just what it is.
What exactly is DevOps?
The term DevOps was first coined in 2008 at an Agile conference in Toronto. The definition proposed was:
DevOps is a set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality. (source Wikipedia)
Note that in that 2008 definition there is no mention of tooling. The concept is about new processes and practices to move faster. Implementing DevOps helps accelerate the software development cycle while minimizing costs and driving efficiency. Most organizations seem to think that implementing a ‘DevOps toolchain’ is all that’s required – and of course tooling is essential to automate, simplify, and accelerate the process.
But without changing the relationship between the IT Development team and the IT Operations team, and changing the underlying philosophy of both those teams, DevOps is just another way to waste IT funds by purchasing under-utilized tools. Like the people who think that having a ‘daily standup meeting’ means they’ve implemented agile, buying the latest toolset (or downloading the open-source versions) does not equal ‘implementing DevOps’.
As its name implies, DevOps is all about bringing together the two main historical camps of IT: the development team and the operations team. Although they have many skills in common, these two groups have always approached IT with very different perspectives:
Developers want unlimited test environments to rapidly develop and test different code streams in parallel. They want to be able to develop, test, and iterate without losing any changes. They want the latest cool feature they’ve created implemented immediately and are happy to ‘throw it over the wall’ into production.
The Operations team wants stability. Change is risk. The most likely time for an outage is right after something new has gone in. The cost and time taken to set up and manage all these overlapping environments distract from the main goal – keeping the production environments safe.
There are other stakeholders with additional views:
Business users want their changes immediately, not on quarterly cycles. But like the electrical grid, they expect the systems to be always available and only notice when they are not.
Senior IT executives are primarily concerned with improving overall operational metrics around cost, risk, quality, productivity and speed in the development cycle. More bang for the buck.
DevSecOps means taking the DevOps approach and embedding a security perspective into it right from the start. Security is not a standalone practice but a shared responsibility, designed and built in rather than imposed by a separate ‘compliance’ function.
Successful implementations of DevOps change the Dev and Ops cultures so that teams stop thinking either/or, delivering both rapid change and production stability. They highly automate the steps of software creation and use, from requirements definition through development, integration, testing, deployment and operation. They use combined processes and tools to shorten development cycles and allow for more frequent deployment to production.
Well implemented DevOps processes give developers and testers the ability to create, customize and delete their own environments; it is about keeping track of all the components necessary for a successful implementation of a new function into an environment, and it’s about ensuring that change is promoted to production smoothly, efficiently and often while retaining the necessary approvals and controls.
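To make that last idea concrete, here is a minimal sketch of a promotion pipeline with an approval gate. The stage names and the `deploy` callable are illustrative placeholders, not any particular vendor’s tooling:

```python
# A toy sketch of promoting a build artifact through pipeline stages,
# retaining a control: production requires an explicit approval.
STAGES = ["build", "integration-test", "staging", "production"]

def promote(artifact, approvals, deploy=print):
    """Walk an artifact through each stage in order.

    `approvals` is the set of sign-offs collected so far; the
    'release-manager' approval gates the final production step.
    `deploy` stands in for whatever tooling actually ships the code.
    """
    for stage in STAGES:
        if stage == "production" and "release-manager" not in approvals:
            return f"{artifact} held at staging: approval missing"
        deploy(f"{artifact} -> {stage}")
    return f"{artifact} deployed to production"
```

The point of the sketch is that the approvals and controls live *inside* the automated flow rather than in a separate manual process.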
To track progress in implementing DevOps and to continuously improve, it is important to identify and track a set of measurements. In true DevOps fashion, these metrics should be simple, highly automated and accessible to all involved. While each implementation should focus on the pain points of your own organization, some of the key items to consider include:
Frequency of code changes, and the number of stories, function points, or features being incorporated
Defect rates at each step in the process, including the numbers caught at each stage and in production
Track how these measures interact. As DevOps capabilities mature, you should be seeing significant gains in quality and functionality per unit time. As you deploy to production more frequently, are defects in production increasing? As you automate creation of test environments, is code integration becoming more challenging? How have these numbers changed since before you started your DevOps journey? Calculating the number of releases per period over time will give you an indication of improved velocity. Tracking the number of defects per release in production over time will indicate how quality is improving.
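As a concrete sketch, here is how those last two measures – releases per period and defects per release – might be computed from a simple release log. The record format here is an assumption; in practice the data would come, highly automated, from your CI/CD and ticketing tools:

```python
from collections import Counter

# Each record is (release_id, period, production_defects) -- an assumed
# shape for illustration, normally extracted from pipeline and ticket data.
releases = [
    ("r1", "2020-Q1", 9), ("r2", "2020-Q1", 7),
    ("r3", "2020-Q2", 5), ("r4", "2020-Q2", 4), ("r5", "2020-Q2", 3),
]

def releases_per_period(log):
    """Velocity: how many releases reached production in each period."""
    return Counter(period for _, period, _ in log)

def defects_per_release(log):
    """Quality: average production defects per release, by period."""
    counts, totals = Counter(), Counter()
    for _, period, defects in log:
        counts[period] += 1
        totals[period] += defects
    return {p: totals[p] / counts[p] for p in counts}
```

In this invented data, velocity rises (2 releases in Q1, 3 in Q2) while average defects per release falls (8.0 to 4.0) – exactly the interaction of measures a maturing DevOps practice should show.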
(You can find lots more on DevOps measurements here and here. )
Why is it so important today? And how does it impact our Cloud work?
Multiple trends have increased the importance of properly implementing DevOps.
End-user expectations, conditioned by the ‘it just works’ deployment of the latest features on the apps on their Android or Apple phones, have raised expectations for the rate of change, for availability, and for successful outcomes.
Agile development and continuous integration/continuous deployment (CI/CD) create the need for automated deployment and the expectation of easily measuring outcomes.
The ability to rapidly create and delete environments in the cloud (particularly development and test environments) means it is even more important to automate code deployment so that you know exactly what is where, and how to recreate it.
Increasing use of microservices architecture (or APIs) for software development means there are more “things” to manage in development and production.
As the use of microservices increases, it in turn drives the increasing use of containerization as the deployment vehicle for cloud workloads. Containers are foundational for the API economy, providing efficient scalability simply and automatically. Deploying and managing those APIs in containers requires enterprise-grade DevOps.
Other trends like infrastructure as code, network as code, SRE (Site Reliability Engineering) rely on DevOps or DevOps concepts to be successful (see here.)
Migrating to the cloud – like any new technology – brings its own new operational procedures. It is all too easy as we build in new environments to recreate the mistakes of the past, or to reuse something that was successful before but doesn’t take advantage of the new approach. Fundamentally, the cloud is about creating a virtualized environment. As companies implement more solutions on more clouds, the proliferation of workloads will increase the need for consistent management tooling. DevOps processes are the approach to managing that virtualization. The right culture, process and tools are critical to IT success in the cloud world. Containers, service mesh, data mesh… these can package up tools, data and technologies to make development faster, quality higher, operations smoother, and our business users happier. And that’s what DevOps is all about.
What’s Next in DevOps?
Already we see people taking the next step by bringing Artificial Intelligence concepts into the DevOps process (also known as AIOps): moving beyond automation to self-healing, or cognitively-assisted touchpoints to improve service resiliency and the user experience. Automatic suppression of false alerts, predictive insights to prevent failures, and log-file analytics for real-time problem determination are all possible.
Want to learn more? You can find some other insights on DevOps here. (And thanks to my colleague Abdel Koumare for all the research – he sifted through these to find the best ones.)
In my most recent blog post, I highlighted several lessons about leadership in a crisis as demonstrated by the story of Apollo 13. This failed mission to the moon is most famously recounted in the Ron Howard-directed film starring Tom Hanks and Ed Harris. Here I continue the lessons learned and how they are shown by this story – either in the movie or in real life.
Contingency plans are a great starting place – not the final recipe. You need to plan for emergencies. Even when the emergency that happens isn’t the one envisioned, those plans can be a valuable reference for parts of your situation, because you can’t prepare for everything. NASA’s focus on meticulous planning and testing scenarios pays off. Again, this is not shown in the movie, but NASA had already tested the contingency of navigating the combined Apollo spacecraft from the LM during Apollo 9. And on Apollo 8, Lovell himself had tested navigating by visual reference to the stars and to the Earth’s terminator. Both these backup procedures were available and used in unexpected ways during the Apollo 13 flight.
Sometimes you have to find a way to make do. In a situation that seems unbelievable but actually happened, the filters that scrubbed carbon dioxide out of the atmosphere were different shapes on the two spacecraft. To keep the three astronauts breathing, they literally need to find a way to fit a square peg into a round hole, with only the items already on the spacecraft. Kranz assigns a team early to work on this problem and they are given all the items the astronauts might use. Duct tape and the now-useless mission flight plan document are major pieces of the solution.*
It’s not about what things are designed to do, it’s about what they can do. The lunar module wasn’t designed as a lifeboat. The Command Module simulator wasn’t designed as a test bed. The mission flight plan wasn’t designed to be part of the air filtration system. In a crisis it is about evaluating the capability of what you’ve got, and how it can fit what you need. The same is true whether it is tools or people. Sometimes the best person is not the one with the most technical qualifications, but someone with the determination, commitment or potential. Or all three.
Don’t sweat the small stuff. We all have policies and procedures that are mandatory, usually for good reason. But in a crisis it’s much more important to focus on the big picture. In the movie, the flight surgeons on the ground are continually questioning the astronauts’ health and need for sleep. In fact, LM pilot Haise is quite ill with an infection and fever by this point. But Lovell is fed up with the constant questioning on minor matters. The crew are working round the clock to find ways to survive and get back to Earth. Frustrated, he pulls all the monitors off his body. The other two astronauts quickly join the mutiny. On the ground, it looks like all three astronauts have flatlined. Once mission control has confirmed that everyone is fine, Kranz decides to just let it be. They have more important problems to deal with.
Never show you don’t believe. Your team takes their lead from you. If you hint that something can’t be done, that reinforces their fears. In the movie, Kranz is repeatedly being asked for the odds of safe return. These requests are coming from the White House. Even privately, Kranz refuses to allow that there is any possibility of failure. The real Gene Kranz even titled his own autobiography after the other famous quote from this movie: “Failure is not an option.” There is no evidence he actually said that during the Apollo 13 mission, but the statement reflects his determination to find solutions. In real-life, Glynn Lunney, one of the other flight directors, speaking about the mission later said: “If you spend your time thinking about the crew dying, you’re only going to make that eventuality more likely.” Instead you think about how to keep them alive.
If people are inspired, they will work to solve any problem. The movie is filled with inspiring speeches from Kranz and Lovell, delivered as only award-winning actors Harris and Hanks could present them. Would that we were all as eloquent in a moment of crisis. But it’s not necessarily the speech that brings the inspiration. In his autobiography Lovell recounts how a senior Grumman executive was recalled from an MIT sabbatical to help once the disaster occurred. Grumman Aerospace Corporation were the designers and builders of the LM. The executive arrived at the office in the middle of the night to find the Grumman parking lot full. Far more people came in than were called – the Grumman factory and office workers wanted to help any way they could. They believed in what they were doing and would do anything to help return the astronauts safely. That shared sense of purpose and commitment made them pitch in, without being asked and without any inspiring speech.
It’s better to set a direction and make adjustments along the way. In a crisis, you don’t always know the exact path to success. You need to get going in the right general direction and fine-tune as you go rather than try to plan for the perfect outcome before moving. In the movie this happens quite literally. First the astronauts onboard stabilize the spacecraft which has been pushed off course by the explosion and the escaping gases. Getting any control at all is the only objective. Then they decide to go for a ‘free return’ trajectory by continuing to the moon and using the slingshot effect around it. Later they speed up the flight to reduce the total time given the limited air and power in Aquarius. They make further minor course corrections navigating by eye with the LM engines. As they prepare for final re-entry into the Earth’s atmosphere, they are still slightly off-course. But progressively they’ve got closer and closer to their final destination.
Simple solutions often work best. Near the end of the flight, CSM pilot Swigert is in the command module alone executing the power-up sequence with Mattingly on the ground walking him through the procedure step by step. None of the astronauts has slept in days and Swigert is worried about flipping the wrong switch. He places a piece of tape over the one that would allow him to jettison the LM – which still contains Haise and Lovell. Simple and effective.
It’s about getting the job done, not about the most glamorous job. In another scene that would be unbelievable in fiction but is true, Marilyn Lovell needed someone to sit with her aging mother-in-law during the final splashdown. Marilyn wanted to distract her from any upsetting comments that might be made by the many people in the room. Neil Armstrong and Buzz Aldrin – the first two people to set foot on the moon and arguably in 1970 among the most famous people in the world – volunteer. No one would’ve denied them a spot in Mission Control or elsewhere, but this was another important job to do. (In the movie the mother asks “Oh, are you boys astronauts too?”)
Ask for help and take the help offered. The Apollo 13 drama caught the world’s attention. Every country offered to help, even when there was little they could do. But with the state of technology at the time, NASA had more math and physics problems than they could handle in the time available. In a seldom-told facet of this crisis, Grumman, designers of the LM, reached out to the University of Toronto’s Institute for Aerospace Studies. U of T was asked to help with the plan to jettison the LM from the CSM before re-entry. They needed to calculate the right air pressure in the connecting tunnel between the Aquarius and the Odyssey to get a clean separation – enough to make sure that the LM did not interfere with the Odyssey during re-entry, but not so much that the pressure would change the command module’s trajectory.
Sometimes it’s better not to know the stakes. I believe that generally people work better when they know the big picture. But sometimes when the stakes are high enough that knowledge can be more pressure than help. The U of T aerospace engineers assumed that they were one of several groups calculating the answer to the LM separation problem. No doubt that took pressure off in having to calculate the perfect answer. It can be paralyzing to know lives are riding on how well and how rapidly you complete your work. Years later the engineers were told by LM pilot Haise that no one else was asked to solve this particular challenge. The U of T answer was literally life-critical, not some kind of ‘double-check’.
Sometimes you do have to guess. With the damage to the oxygen tanks and fuel cells, there is a possibility the heat shield was damaged too. If it is cracked, the Odyssey will burn up and the astronauts will perish on re-entry. There is no way to test this, and nothing can be done in any case if there is a problem. So they make the assumption that the heat shield is fine. There is no point in worrying when nothing more can be done.
In the end, with determination, ingenuity and luck, the Odyssey returns the three astronauts safely to Earth. Fred Haise recovers from his illness. Ken Mattingly never gets the measles. Kranz’ white vest ends up in the US National Air and Space Museum. May all your crises end as successfully.
The flight of Apollo 13 occurred from April 11-17, 1970. It was the fifth manned flight to circle the moon.
If you watch the movie, you may be interested to know that the reason the scenes set in space look so realistic is that they were actually filmed while weightless. The Command Module and Lunar Module sets were built in NASA’s ‘vomit comet’, and cast and crew were actually weightless for 20 seconds at a time to get each take.
* The command module and lunar module were made by different companies with different design teams. The command module was designed by North American Aviation / North American Rockwell and the lunar module by Grumman. Like most large scale government procurement programs, the US government spread NASA procurement around the country.
A crisis that snuck up on us. A situation the public is ignoring until disaster strikes. People isolated away from others, at risk of their lives. It sounds like a story from today’s pandemic headlines, but I’m referring to something that happened 50 years ago this month.
Apollo 13 was intended to be the third landing on the moon in less than a year. Interest was waning. The ‘space race’ was over, the US had won, and the USSR denied there ever was a race. The astronauts’ news conferences weren’t even carried live on network television anymore. Just a routine mission. But an accident on the way to the moon created a set of powerful lessons in leadership that resonate today.
I often tell people that the fastest course you can get in crisis leadership is to watch the movie made from this true story. Ron Howard directed Apollo 13, starring Tom Hanks as Jim Lovell – the spacecraft commander – and Ed Harris as Lead Flight Director Gene Kranz. Both in real life and as portrayed, they demonstrate the creativity, compassion, and determination that bring out the best in their team and their situation.
For a quick recap of the challenge that they faced: Mid-way to the moon, while executing a routine maintenance operation, a wiring flaw causes the oxygen tanks in the Command Service Module (CSM) to explode, crippling the Odyssey. The oxygen tanks are needed for the fuel cells as well as breathing. Power levels are falling to nothing. Lovell reports to Mission Control the second-most famous line uttered from space “Houston, we’ve had a problem.”* In the moment it’s unclear exactly what has happened – in all their careful planning and redundancies, NASA had never envisioned a failure like this. But it is quickly obvious that the Odyssey is failing.
In that instant, the mission changes. With no chance to land on the moon, the three astronauts must turn the Lunar Module – Aquarius – into a lifeboat. They need Aquarius to live in, and to execute the course corrections needed to get back to Earth. The astronauts return to the CSM Odyssey only for final re-entry and landing back on Earth. From the moment of the explosion, every obstacle experienced and every plan needed was created just-in-time by the people on the ground, or by the three astronauts. Rather than being ignored, the entire world is gripped in watching the real-life drama unfold.
Just what are the lessons? As I go through them I will refer to scenes in the movie, which is a fairly accurate condensation of the mission. But it is important to remember: this all really happened. I will share some additional lessons from this moment in history that didn’t make the film, and below I will point to where you can find more detail.
The lessons start even before the crisis hits.
Own the situation. Days before the flight is due to launch, NASA flight surgeons discover that the command module pilot (Ken Mattingly – played by Gary Sinise) has been exposed to the measles. They decide he must be replaced with the backup pilot, Jack Swigert (Kevin Bacon). Jim Lovell fights for his original team, but eventually he realizes he can’t win. Once that happens, Lovell not only tells Mattingly himself, he takes the blame. “This was my decision.” The lesson: Own the situation. There are no excuses. Then with his new pilot he spends as much time training together in the limited window before launch as possible.
Get the data and make informed decisions. When the accident first occurs, Mission Control can’t believe their instruments. It must be an instrument failure. People are recommending different courses of action based on incomplete information. From the Odyssey, Lovell reports seeing a gas venting in addition to the same data on power loss they are seeing on the ground. Only taking all the information together is there a credible theory: The spacecraft is seriously damaged.
Give your team hope. Napoleon said “The role of a leader is to define reality and offer hope.” Flight Director Gene Kranz – basically the person in charge of the mission on the ground during one of three shifts – continually inspires his team. When told that this is NASA’s biggest failure, he replies “I believe this will be our finest hour,” echoing Churchill’s similar words during the Battle of Britain.
Work the problem(s). There is a tendency in a crisis for everyone to pile onto the biggest issue. With the Odyssey crippled they have many problems. Do they turn around mid-way, or continue around the moon and return? How will they navigate? How will they stretch the LM Aquarius’ oxygen and batteries from two days for two people to four days for three people? How will they restart the Odyssey? Kranz assigns teams to the various challenges and focuses on the most immediate and most critical: shut the command module down; don’t turn the ship around immediately, continue around the moon as the least risky option. On the spacecraft Lovell, Swigert and Fred Haise (played by Bill Paxton) operate the same way. Before they are told to power up the LM, they’ve already started turning on the Aquarius.
Know your limits. This isn’t shown in the movie but one of the smartest things Kranz does is turn over responsibility to Glynn Lunney at the end of his shift.** He knows that fresh eyes and fresh minds will better deal with the next set of problems. For the rest of the flight the three Flight Directors on the ground share responsibility round the clock. But Kranz signals his commitment to success in a different way. For each mission, his wife has provided him a new vest to wear with the mission patch. Kranz commits that he will wear the vest continuously until the three astronauts are safely returned. Many of the ground crew sleep at Mission Control throughout the rest of the flight so that they are available if needed.
Stay calm and project confidence. It’s okay to show your emotions – in a crisis it’s almost impossible not to – but wherever possible stay calm and focused on what most needs to be managed. There is a great scene in the movie – which the real Lovell says is fiction – where the three astronauts are arguing with voices raised – Did Swigert make a mistake that caused the accident? Lovell (Tom Hanks) raises his voice to get the other two to calm down. It’s pointless to argue, because it won’t change where they are or what they need to do next. Then Mission Control calls on the radio. With voice still raised Lovell asks “Are we on Vox” (ie, on an open mike). When he realizes they haven’t heard the argument he responds in perfect ‘pilot announcing something to passengers’ voice: “Houston say again that last transmission”. When you project that sense of calm authority, you help your team believe that they can achieve the task.
On the ground this same confidence is beautifully shown by Marilyn Lovell (Kathleen Quinlan). After the accident, reporters start gathering in front of her home so that they can track the family’s every move. She tells the NASA press agent to have them stay off her property. In both the spirit of the time and in her commitment to success, she tells them that if they are unhappy, they can take it up with her husband. “He’ll be home on Friday,” she says.
Assign people to the problem they are best able to fix. Ken Mattingly, the command module pilot who got bumped from the flight last-minute, is pulled in to figure out how to restart the command module in order to get it ready for reentry. Mattingly knows the spacecraft and the crew better than anyone. No one has ever cold-started an Apollo capsule in space before. And they have limited batteries and time to get the system restarted. He works in a simulator with other ground crew through various sequences to come up with a way to coax the systems online without using all the remaining power.
In part 2 I will continue the lessons that you can learn from this crisis, and I will explain how the mission of Apollo 13 completes.
*Often misquoted as “Houston we have a problem”. Swigert says “Hey we’ve got a problem here.” Lovell says “Houston, we’ve had a problem.” The exact wording of the most famous line said in space is also controversial. On first stepping on the moon, Neil Armstrong claims that he said “That’s one small step for a man, one giant leap for mankind.” In the audio recording you do not hear the word ‘a’. https://en.wikiquote.org/wiki/Neil_Armstrong
**In the movie, they generally show the same team in Mission Control under Gene Kranz in all Earth-bound scenes. Kranz was the lead Flight director and responsible for the White team. But Kranz, Lunney (Black team) and Milt Windler (Maroon team) split responsibility and with their respective teams handed off to each other at the end of each shift. That way they were able to work the problems continuously and collaboratively.
There is so much jargon in the cloud world, sometimes it’s hard to follow the conversation! Different vendors use the same words to describe different things, and different words to describe similar things. And the meanings are evolving over time. One of the most-used and abused phrases today is some variation of ‘Hybrid Cloud’ or ‘Hybrid Multi-cloud’. Is this just hype or some vendor FUD (fear, uncertainty and doubt)? Or is it a real thing?
My goal in this blog is to explain the term, and highlight the criticality of Hybrid Cloud to your enterprise computing strategy. Not only is Hybrid Cloud a critical interim step in the goal of getting to the cloud, I believe it is going to be part of the long-term strategy for most large enterprises.
What is Hybrid Cloud Anyways?
NIST – the US National Institute of Standards and Technology defines the Hybrid Cloud as “a composition of two or more clouds (private, community or public) that remain unique entities, but are bound together, offering the benefits of multiple deployment models.”
While NIST has the benefit of being a standards organization, I think people most commonly use the term ‘Hybrid Cloud’ to describe systems spanning public cloud and private or dedicated cloud, or even public cloud and traditional on-premise solutions, rather than any two clouds as NIST defines. That’s how I’ll use it here, but I will come back to the NIST definition at the end. To highlight the point that this term is still evolving, I’ve collected some other example definitions at the end of the blog.
To understand where you might want to use Hybrid Cloud, think about the difference between your on-premise environment and Public Cloud. You have more control over the on-prem environment since it runs in your data center, and you can run transactional workloads fast, but it is not elastic like Public Cloud. If you need to scale on premise, you need to buy new hardware, whereas on Public Cloud (and dedicated), if you need to scale you just add more resources and the provisioning is handled separately. Access to the on-premise environment is at the speed of your local network; access to Public Cloud is at the speed of your internet connection.
One of the simplest and most common examples of hybrid cloud is where your production environment remains on premise in your data centre but you’ve moved your development and test environments out to the cloud. That gives you the flexibility to create and destroy test environments as needed. As someone who’s led a lot of large complex systems integration projects, we used to beg for ‘one more test environment’, and as soon as we got it, we’d want another one!
This ‘prod on-premise, test in the cloud’ is a Hybrid Cloud example because to work effectively, the environments need to be connected with a DevOps toolchain that spans the on-prem and public cloud environment. And the big benefit – besides keeping your test team happy – is that you only pay for those added environments while they are in use, and can eliminate the cost when you are in a cycle of low or no testing.
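The cost argument behind 'prod on-premise, test in the cloud' can be sketched with some simple arithmetic. This is a rough illustration only – all rates, amortization periods and hours are invented assumptions, not vendor pricing:

```python
# Rough sketch of the 'test in the cloud' cost trade-off.
# All rates and hours below are illustrative assumptions, not vendor pricing.

def on_prem_monthly_cost(capex, amortization_months, ops_per_month):
    """Fixed cost: hardware amortization plus operations, paid whether
    or not the test environment is actually in use."""
    return capex / amortization_months + ops_per_month

def cloud_monthly_cost(hourly_rate, hours_in_use):
    """Elastic cost: pay only for the hours the environment runs."""
    return hourly_rate * hours_in_use

# A test environment amortized over 36 months vs. one spun up in the
# cloud for two 80-hour test cycles in the same month.
fixed = on_prem_monthly_cost(capex=54_000, amortization_months=36, ops_per_month=500)
elastic = cloud_monthly_cost(hourly_rate=6.0, hours_in_use=160)

print(f"on-prem: ${fixed:,.0f}/month, cloud: ${elastic:,.0f}/month")
```

In a month with little or no testing, the elastic figure drops toward zero while the fixed on-prem cost does not – which is exactly the benefit described above.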
Simple example of Hybrid
Hybrid access to Microservice
A second Hybrid Cloud example and one step up on the complexity curve would be an on-prem environment that is using a cloud-based API. Perhaps your application is calling out to Watson APIs, or needs access to Google Maps. In that scenario, Watson or the Map APIs are a small Public Cloud-based component of your overall solution. Data is mostly flowing out to the cloud, with a small answer set coming back.
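The shape of that second example – lots of data out, a small answer back – can be sketched in a few lines. The request and response formats here are hypothetical stand-ins, not the real Watson or Maps API:

```python
# Sketch of the on-prem -> cloud API pattern described above.
# The payload and response shapes are hypothetical, not a real API contract.
import json

def build_request(document_text):
    """On-prem side: most of the data flows OUT to the cloud API."""
    return json.dumps({"text": document_text, "features": ["sentiment"]})

def parse_response(raw):
    """The cloud returns a small answer set relative to the input."""
    return json.loads(raw)["sentiment"]

payload = build_request("a long customer review " * 100)
answer = parse_response('{"sentiment": "positive"}')

# The payload sent out is far larger than the answer coming back -
# typical of this style of hybrid integration.
print(len(payload), answer)
```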
Getting to the highest end of a complex Hybrid Cloud application, you have applications that share large amounts of data. Perhaps an ERP system that is on prem is updating a data lake in the cloud. And that ERP is also communicating with a CRM solution like Salesforce on the cloud for quote or order management, so that manufacturing and distribution understand backlog requirements.
This third example starts to show why hybrid will be around for a long time. As more applications move to the cloud, more and more of them will need to communicate with those 'left behind'.
For lots of reasons it will take years to get all large enterprise applications to the cloud, if ever. Some applications will be considered ‘crown jewels’ or contain data too sensitive to move out, some applications that have evolved over years may be too complex to move all at once, and for some applications that have been stable for years, there may be no value in migration.
So what’s not Hybrid? You’re not on Hybrid Cloud, for example, “if a company is using a SaaS application for a project but there is no movement of data from that application into the company’s data center.” (Dummies.com) That’s just a single cloud, working standalone from the rest of your application portfolio.
When is Hybrid Cloud a Problem?
Hybrid Cloud is a computing pattern and like other patterns, it isn’t right for every situation.
Of course, if your organization started as 'born on the cloud', it is unlikely that you will ever implement this pattern. There isn't likely a use case that will force you to build your own data centre if you don't already have one.
Particularly as the solution gets more complex, you need to be really concerned about the amount and timeliness of data movement between the various environments. If there is a requirement for real-time or near-real-time data movement, bandwidth and latency can become significant issues. "Managing hybrid cloud is a complex task because each cloud solution has its own API, storage management protocols, networking capabilities, etc." (Citrix)
How to Manage Hybrid Cloud
When you have a simple hybrid environment, you probably don't need any new tools or techniques to manage your Hybrid Cloud; your existing IT management processes will probably be fine. As more and more on-premise solutions work in tandem with public cloud solutions, it gets more complicated, and you are more likely to need tools to manage the hybrid and/or multi-cloud environment that results.
The tooling to manage Hybrid Cloud and multi-cloud is still new and evolving. Clients are investing in their own custom solutions and vendors are building out capabilities. As your hybrid environment grows more complex you will need tools that provide:
Visibility – With workloads running in multiple clouds and on premise, you need a “single pane of glass” that tells you exactly what is running, and where it is running.
Management – Tooling that gives you the ability to set security policies and track spend; and manage workloads based on those needs. Orchestrate how applications start, connect to each other, and scale.
Automation – Automation will allow you to deploy applications across environments, help manage backup and disaster recovery, and provide the ability to move workloads from on-premise to cloud or vice-versa.
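The visibility point is essentially an inventory-merging problem. A minimal sketch of the 'single pane of glass' idea – with entirely made-up environment and workload names – might look like this:

```python
# Minimal sketch of a 'single pane of glass': merge the workload
# inventories reported by several environments into one combined view.
# Environment and workload names are illustrative.

def merge_inventories(*inventories):
    """Combine (environment, workloads) pairs into {workload: environment}."""
    view = {}
    for env_name, workloads in inventories:
        for workload in workloads:
            view[workload] = env_name
    return view

pane = merge_inventories(
    ("on-prem", ["erp", "billing"]),
    ("public-cloud-a", ["web-frontend", "dev-test"]),
    ("public-cloud-b", ["data-lake"]),
)

# One lookup now answers 'what is running, and where is it running?'
print(pane["erp"], pane["data-lake"])
```

Real tools add security policy, spend tracking and orchestration on top of this basic inventory, but the unified view is the foundation.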
Focusing on these will help manage systems that have already been built. As you're building new applications or modernizing existing ones, you will also need ways to develop integration methods and data movement across cloud environments rapidly and consistently.
To make it easier to manage workloads with consistent tooling across private cloud and public cloud, platforms such as OpenShift from Red Hat / IBM or Anthos from Google can help. The promise of these tools is that they will allow you to orchestrate all of your cloud environments from a single interface. Built on open source technologies such as Kubernetes and Linux, they offer built-in governance and help manage orchestration across the Hybrid environment. Increasingly, Kubernetes and containers are the de facto standard for new application and microservices deployments, so these tools become increasingly critical.
In addition to OpenShift, companies such as Google (Anthos), Microsoft (Azure Arc), Citrix and VMware all offer competing management frameworks, and one of the questions an IT department will need to answer is which of these platforms to choose.
Future of Hybrid Cloud
I think it is clear from the examples above that hybrid cloud will continue to evolve over time. Eventually it is likely to be indistinguishable from 'multi-cloud' – two or more different clouds collaborating in an application or system – which is basically what the NIST definition states.
For now, Hybrid Cloud offers benefits that can’t be gained with just on-premise or just ‘on-cloud’ solutions:
It gives the business comfort around security and compliance issues – high-security production applications stay on-premise, while development, test and lower-security systems can run on public cloud
Integration with the remaining legacy on-premise environments is easier (and higher-performance) from private cloud, which facilitates migration. Until a critical mass of the applications that communicate together are ready for cloud, you can keep the portfolio together on private cloud, then gradually move applications to public cloud when it's practical
Hybrid Cloud offers scalability / workload management (e.g., test environments, development environments, temporary performance-testing or training environments) that your on-premise data centre can't match
Hybrid Cloud gives the potential for cost-optimization between existing assets and public cloud environments
Embracing Hybrid cloud gives your on-premise environments access to public APIs and microservices such as AI technologies
Hybrid Cloud gives your IT department the opportunity to balance among the competing goals of control, portability, scalability and cost
Hybrid Cloud allows applications to move across boundaries over time and to operate across boundaries
New use cases are coming along that will bring new forms of Hybrid Cloud. One example is IoT (the Internet of Things). As IoT becomes more mainstream we will need solutions that collect and sort all the data generated by IoT devices, and send back only summarized or critical data for analysis. While there are multiple options for this, one of the most likely is a 'hub and spoke' model where IoT devices communicate with a local, private server, and that server then decides what information needs to be shared with the cloud. This hub and spoke is an instance of Edge Computing, where selected processing is moved closer to the data being collected. TechBeacon has a great example of how this will work, using IoT data collected from a motorcycle and its rider.
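The hub's job – keep the raw stream local, forward only summaries and critical events – can be sketched in a few lines. The readings and the threshold below are invented for illustration:

```python
# Sketch of the hub-and-spoke edge pattern: the local hub retains raw IoT
# readings and forwards only a summary plus critical events to the cloud.
# The threshold and sample readings are illustrative assumptions.

def hub_filter(readings, critical_threshold=90.0):
    """Return (summary, critical_events) - the only data sent to the cloud."""
    summary = {
        "count": len(readings),
        "avg": sum(readings) / len(readings),
        "max": max(readings),
    }
    critical = [r for r in readings if r >= critical_threshold]
    return summary, critical

# Four sensor readings arrive at the hub; only one is critical.
summary, critical = hub_filter([61.0, 58.5, 95.2, 60.1])
print(summary["count"], critical)
```

Instead of four raw readings crossing the network, the cloud receives a three-field summary and a single alert – the bandwidth saving grows with the volume of device data.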
Some would say that the coming rollout of 5G networks will resolve this capacity problem, but particularly in a large country like Canada the rollout of 5G will likely be focused on the largest cities and some narrow corridors between them. So the hub and spoke model will continue to be in use for a long time.
Hybrid Cloud is clearly a flexible and evolving term. The concept of private or dedicated technology working together with systems built or migrated to the public cloud will persist for a long time. Indeed, Hybrid Cloud usage is growing as companies continue their migration to the cloud. “The global hybrid cloud market is expected to grow from USD 44.6 billion in 2018 to USD 97.6 billion by 2023, at a CAGR of 17.0% during the forecast period.” (Markets and Markets ).
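As a quick sanity check on the quoted figures, compounding USD 44.6 billion at 17.0% for the five years from 2018 to 2023 lands close to the quoted USD 97.6 billion:

```python
# Sanity check of the market projection quoted above:
# USD 44.6B in 2018, growing at a 17.0% CAGR, over 5 years to 2023.

def project(start, cagr, years):
    """Compound a starting value at a constant annual growth rate."""
    return start * (1 + cagr) ** years

projected = project(44.6, 0.17, 5)
print(round(projected, 1))  # about 97.8 - close to the quoted 97.6
```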
Emerging technologies like IoT and Edge computing will bring new Hybrid Cloud use cases over time. The term Hybrid Cloud will continue to evolve but the concept – your own technology interacting with public cloud solutions – will continue to be a valuable part of your IT environment for years to come.
Addendum – Other Hybrid Cloud definitions
As I did some research for this blog, I saw that there is no consistent definition for Hybrid Cloud. And it's not just the words that vary – the scope described varies too. See the examples below, all of which differ in meaning from the NIST definition shared at the start of this article. Some would argue this comes from the fact that 'Hybrid Cloud' is a term that comes from the vendor community, not from clients or academia. But I think people who say that are trying to imply that Hybrid Cloud isn't valuable or useful, or that it 'isn't really cloud'. I hope I've shown that there is more to hybrid than a short-term or interim step on the way to public cloud. The multiple definitions point more to the fact that hybrid cloud is evolving, like most cloud technologies today.
IBM definition “Hybrid cloud is a computing environment that connects a company’s on-premises private cloud services and third-party public cloud into a single, flexible infrastructure for running the organization’s applications and workloads.”
Forrester Research's definition of Hybrid Cloud: "One or more public clouds connected to something in my data center. That thing could be a private cloud; that thing could just be traditional data center infrastructure." (As found on this BackBlaze blog post.)
TechTarget definition: “Hybrid Cloud is a cloud computing environment that uses a mix of on-premises, private cloud and third-party, public cloud services with orchestration between the two platforms. By allowing workloads to move between private and public clouds as computing needs and costs change, hybrid cloud gives businesses greater flexibility and more data deployment options.”
If you want a broader discussion of the variation in the meaning of the term Hybrid Cloud, there is a good summary in this IDC Report.
In my last blog, I talked about why realizing the full benefits of cloud requires a fundamental change in how organizations run IT – starting by shifting the focus from technology to business outcomes.
To do this effectively, business and IT must work together as partners, sharing a strategic vision for the organization. Often, a Cloud Target Operating Model (CTOM) is used to map the expected business benefits of cloud technology, resulting in a transformative business model across the organization – people, processes and technology.
No matter what size your company is and where you are in the journey to cloud, your organization can achieve all the benefits cloud has to offer – provided business and IT are willing to work in sync. It’s all about adopting a cloud mindset. Let’s dig a little deeper into how that plays out.
A common perception in the IT industry is that Canada lags 12-18 months behind the U.S. in the adoption of most new technology. Whether that is because we are more cautious about new technology, our big businesses are relatively smaller, or it's just an industry myth, in my experience it does seem to be true in the journey to cloud. Often, clients have cloud programs with set goals, but the level of transformation activity isn't at the same level as we see with our U.S. clients.
Whether small or large, Canadian firms often say that they are still in the early stages of building their cloud capability and that they expect to mature capability, processes and skill levels significantly over the next three years.
Adopting a cloud mindset starts with thinking “cloud first” rather than simply “lift and shift.” McKinsey recently noted that the majority of cloud migration spending today is “lift and shift.” This is certainly true in Canada. While this approach has its merits, in many cases, lift and shift can lead to a more complex IT environment. As the McKinsey study explains, “Just taking legacy applications and moving them to the cloud will not automatically yield the benefits that cloud infrastructure and systems can provide. In fact, in some cases, that approach can result in IT architectures that are more complex, cumbersome, and costly than before.”
Not surprisingly, migration is easier for some than others. For “born on the cloud” organizations, digital transformation is as natural as breathing. DevOps, agile, containers – it’s just business as usual. On the other hand, cloud adoption is going to take more time, and change will be more difficult, for older, large organizations with mission-critical legacy applications.
For these older companies, the success metric will depend on how they handle two major shifts ahead:
changing the technology itself and
changing the culture of the organization as they find partners and gain the skills needed to orchestrate and integrate the total service to the organization.
Developing an agile ethos
Agile is much more than just a daily stand-up! Many organizations haven’t really adopted an agile ethos, nor have they fundamentally changed their processes. Are you part of a large enterprise with a legacy environment? Don’t assume that if “you’re big, you’re slow.” If there is a culture of experimentation within the organization – a willingness to fail fast, learn and adapt – it won’t necessarily take longer to start to achieve the benefits associated with cloud. Similarly, just because you’re small doesn’t mean you are nimble. Agility is key.
Some may have entered the cloud world without a strategic plan. They may have started acquiring cloud without realizing the challenges of multiple clouds, or purchased tools that promise to streamline development without addressing the end-to-end production cycle.
Make sure you answer the important questions around how you are getting to cloud just as much as why. How are you going to govern your cloud environment? Scale it? Architect and manage it? Starting with the CTOM helps frame where you want to go so everyone involved has bought into the vision – while ensuring minimal losses because risk is understood and accounted for.
How long does this transformation take?
Rewriting mission-critical applications to optimize them for the cloud can certainly be very costly. Many organizations need to see the business value of migrating to cloud to justify the spending. If yours is an organization where change is challenging and cost constraints are an issue, an iterative approach makes the most sense.
And it’s important to recognize not every application belongs on the cloud. Some workloads are better suited to a traditional data centre environment for security reasons or because they are infrequently updated. These workloads will always exist alongside multi cloud and hybrid cloud environments.
The point here is to be sure you are using the right technique for the right services, based on your target operating model. Either way, a technology refresh can benefit from DevOps or agile techniques – whether it's on cloud or on prem.
How to realize business benefits from cloud
The biggest benefit of cloud is that it enables your organization’s digital transformation. This is most effective when there is awareness and integration between the strategic plans for the infrastructure, the applications, the IT processes, and the business processes. The speed of digital transformation is driven by the availability of your business application components – whether APIs or microservices are ‘cloud ready’. This will drive the agility that digital transformation requires.
When organizations hit roadblocks in achieving benefits from cloud, a deeper look often reveals a series of separate initiatives: cloud migration in one area, digital transformation somewhere else, and DevOps being run without actually merging the functions of the development team and the operations team. Without an integrated vision, you may be migrating the wrong applications in terms of maximizing the value of your digital initiatives.
With the vast amounts of data most businesses generate, equally vast computing resources are required to analyze the data and turn it into actionable information. The ability to scale resources up and down to respond on demand is another cloud benefit, allowing organizations to scale and grow workloads as needed.
Finally, while cloud can save your organization money, there is no guarantee that cloud will always – or even usually – cost less than traditional on-prem. The key cost benefit of cloud is being able to spend at the point of usage rather than upfront; and being able to target spending where it is most needed at any point in time.
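That 'spend at the point of usage' distinction is easy to show with toy numbers. The figures below are invented for illustration, not a pricing model:

```python
# Toy comparison of cumulative spend: an upfront capital purchase vs.
# paying at the point of usage. All figures are illustrative assumptions.

def capex_spend(month, upfront=120_000):
    """Traditional model: the full spend lands up front, in month 0."""
    return upfront

def opex_spend(month, monthly_usage_cost=4_000):
    """Cloud model: spend accumulates with actual usage."""
    return monthly_usage_cost * month

# After 12 months the cloud route has committed far less cash, even though
# over a long enough horizon it may cost more. The benefit is *when* you
# spend, and being able to target it, not a guarantee of spending less.
print(capex_spend(12), opex_spend(12))
```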
Think big, start small
No matter what size your company is and where you are in your cloud journey, adopting a cloud mindset lets you realize business and IT benefits along the way.
We advise clients to start small, learn quickly, then adapt and expand. Pick one area. For example, you can start with a small HR project, and leverage that experience and knowledge across other areas of the organization. Although your ability to reap benefits quickly depends on your CTOM, cloud technologies operating at peak efficiency will:
Provide flexibility and scalability. The faster you release services into production, the faster you can respond to business needs in response to market demands.
Remove project irritants. Organizations are facing inevitable refresh and replacement demand. By moving that stream onto the cloud, IT can more easily maintain day-to-day services to the business.
Provide access management and cost control. Like a utility, cloud lets you pay only for the services you need, when you need them – with the bonus of backup and recovery that might not have been affordable in an on-prem environment.
A successful move to cloud will align business and IT in a way that supports your organization’s drive to become a customer-centric enterprise. Done right, a cloud for the real world gives you the flexibility, speed and control to capitalize on tomorrow’s big opportunity.
Thanks to colleagues Kristen Leroux and Brian Franks for sharing their thoughts in shaping this post.
Cloud by itself is not a silver bullet that will drive savings and productivity, make users happier, or make your company more agile. If you are migrating to cloud without a strategic plan for digital transformation, you are not likely to experience much success – and costs could very well spiral out of control.
In this blog, I’d like to explore how organizations can maximize the value of cloud by changing how they run IT.
Successful early cloud adopters have figured out they must manage IT differently to achieve the full promise of what cloud offers. Organizations that have yet to reap the benefits of cloud are still using traditional IT management frameworks that were biased towards large capital spends or building for longevity – thus lacking speed, agility and an organizational self-sufficiency philosophy up and down the technology services stack.
A joint effort between business + technology
It’s important to recognize that “digital transformation” using cloud is not just a shift in technology, but a move towards business outcome-based transformation.
Consider what today’s businesses are dealing with:
Customers and suppliers who want to interact with an organization digitally, on any device, at any time, with an expectation for speedy fulfilment.
Processes that require standardization and connectivity to remove the paper buffer and enhance quality.
A growing volume of data available for capture and subsequent analysis that requires new skills and tools to exploit it.
The processes and technologies supporting these trends require:
Speed to market and automatic scalability in response to demand.
Agility and flexibility to interact with, and deliver, new capabilities regardless of where they are developed (APIs, microservices using containers, and open source development).
Technologies and skills that vary widely and may not be readily available, resulting in most companies being unable to attain self-sufficiency.
In other words, what is needed are Agile development techniques, DevOps, a focus on a collaboration process that consists of “try, learn, improve, repeat” rather than endless analysis trying to build the perfect end solution – all combined with an improved level of partnership between business and IT.
Moving to the cloud is hard – and ignoring the soft stuff makes it harder
Given these trends, most organizations are changing how they run IT, supported by new operating models and the right skills within the organization to embrace business integration, supplier management, operations management, business outcome focus and elasticity. They need engagements that will help transform how IT is managed – as much as what IT is managing. Roles will change, skill demands will increase, and many traditional activities will transition.
Here are two key business challenges to keep in mind when moving to cloud:
IT is now delivered “as a service.” Success is measured in outcomes and business results, which requires skills for managing and integrating services from multiple vendors into your existing operations. These are often new competencies within IT or present at a new scale.
Business and IT must work in sync. Business areas may acquire cloud services directly, yet lack the overall skill to address the security, compliance, recoverability and repatriation issues that can result. Collaboration and co-creation between business and IT, together with the discipline necessary for production, is necessary to realize the speed and agility of cloud.
Why “cloud first” requires a new operating model
A cloud-first strategy is designed to remove limitations that have crept into the organization’s business solutions by fundamentally changing the way IT works, in cooperation with the rest of the business. The Cloud Target Operating Model (CTOM) brings business and IT trends together in the context of management, providing an assessment of where an organization is and where they need to go, in order to effectively manage the transition and operations of new services, cloud amongst them. (See 7 key considerations for your CTOM, below.)
Based on a shared vision and strategic direction, the CTOM maps to the expected business benefits of cloud technology and results in a transformative business model across the organization — people, processes and technology. The CTOM combines experience-based design with cloud-based capabilities, and services and capabilities delivered via a cloud infrastructure offer the advantages of:
Industry standards from cloud providers that enable stable and predictable application development.
Containers that break applications into flexible components and microservices.
Continuous integration / continuous delivery.
Compute and storage consumed by IT as a service, with security and standards built right in.
Building a cloud-first strategy: 5 steps to success
How can you ensure that your organization experiences all the benefits that cloud has to offer? I’d like to suggest five key steps.
IT and business areas must plan and work together.
With 80% of workloads still to be migrated to cloud – through renewal, rebuilding, retirement and redundancy – close collaboration and cooperation with the business is required to understand what's possible and to build a plan. Establishing a shared vision between IT and the business is essential to get those two gears in sync. A multi-year roadmap of business capabilities will influence what happens to the underlying applications to meet growing business needs. IT must be able to undertake the technical work of application modernization while planning for the functionality the business unit needs in the meantime.
Application ownership must move from project-based perspective to product-based perspective.
Traditionally, IT managed features and budget. Spending on new features and applications was justified on a project basis using a cost-benefit analysis. In a digital business, there is a budget for continuous innovation, and applications continually evolve. This is similar to the way products are developed, where ideas come up outside the original budget cycle and the application owner has the authority and incentive to change priorities. The more agility your business can have, the faster the business can move.
Changes to IT methodology and metrics will create new value.
With cloud provisioning delivered by service providers, IT management metrics become centred around agility and rate of change, supporting a 'fail fast, fix and do it better' mentality. Agility and speed are the top priorities in a cloud-first model. Techniques like DevOps and Site Reliability Engineering – where the focus is on creating new value and going live quickly – are changing the way we manage and deploy to production, encouraging tighter integration between the historically separate camps of Development and Production Support.
Agile and DevOps are at the heart of the IBM Garage Method for Cloud. Garage is a way to start the culture change at a departmental or team level and to experience and experiment with the process. It is about continuous testing, continuous integration and continuous delivery, to be sure that whatever you've got at any moment is ready to be deployed.
Internal governance must change.
Governance is about how we make decisions, and in the cloud space, governance applies from top to bottom in concert with architectural standards and the roadmap discussion of capabilities from the organization. Governance asks: Is the solution consistent with the architecture and a business need? How will this piece of the puzzle fit together? How does decision making change when you are operating at a faster speed with more implementations going into production, each of them incremental? What would the governance processes be over that?
Different skills and new financial management are needed.
With the shift to cloud, IT is no longer in the server management / operations business. Instead, they are managing, monitoring, and governing service providers according to agreements, SLAs, and contracts. This affects the skills and financial management required.
Financial management: Financial management changes as you move from capital expenditure to operating expenditure with the additional complexity of consumption variability. An organization can become more agile with a cloud subscription model, but it also increases the need for proactive management. New monitoring skills are needed to prevent cloud spending from spiralling out of control.
Change management for skills and processes. Change management is required, not only for the business processes inside the IT department but also for the impact on people throughout the organization. With significant impact on roles that performed traditional Build and Run tasks, staffing roles will change. Planning and commercial skills increase in importance. The role of the operations department also changes.
Tomorrow’s big opportunity awaits
The traditional approach to IT simply can’t deliver the speed and flexibility required for today’s speed of business. The best way to stay flexible for tomorrow’s big opportunity is to embrace open solutions. All these steps – and more – can support your IT department’s drive to digital transformation.
Cloud is more than a technology shift; instead, it’s a move towards a business outcome-based transformation. To enjoy the benefits of cloud, today’s IT must change its focus from maintaining systems to delivering services; from technology to outcomes.
Thanks to colleagues Kristen Leroux and Brian Franks for sharing their thoughts in shaping this blog.
Stay tuned for Part 2 of this blog, where I will share how long a legacy transformation typically takes, where most Canadian companies are along the path to cloud transformation, and the types of business benefits that are being realized.
Migration to cloud is a continuously unfolding journey. I compare it to a marathon, not a sprint. By now, most enterprises have at least one public and one private cloud and are beginning to realize the many benefits of cloud computing: operational efficiencies, cost savings, agility and improved productivity. Not surprisingly, they’re hungry for more.
We are now at a crucial turning point. A decade after cloud adoption began, 80% of enterprise workloads are yet to be migrated – many mission-critical, security-dependent workloads are still to come. The real enterprise value of cloud has yet to be realized.
To accelerate and simplify application modernization, companies are evolving toward a hybrid, multicloud approach that uses a mix of public and private across a multitude of cloud platforms and vendors. It’s definitely complex, but also full of promise.
It could take another decade or more for enterprise workloads to be fully migrated to cloud. That’s the reality. As you continue the journey, how can you ensure effective, efficient communication across a complex multicloud environment – and what are the best practices and strategies that can take you from complexity to simplicity?
First, let’s define hybrid cloud, multicloud and hybrid multicloud
Hybrid cloud: Hybrid cloud is a mix of public and on-prem private cloud that is orchestrated to run a single task. An "encrypted highway" enables sharing of mission-critical data between on-prem and public cloud environments using standardized interfaces. This allows organizations to deploy secure and mission-critical workloads on-prem, while using public cloud to run new workloads and to build and test new applications.
Multicloud: Wikipedia defines multicloud as "the use of multiple cloud computing and storage services in a single heterogeneous architecture." Unlike hybrid cloud, a multicloud environment always uses more than one public cloud. I like to differentiate between multiple clouds and multicloud. The first is what too many have – islands of cloud technology, with no clear strategy or governance that determines which cloud, for which workload, or why.
True multicloud is the thoughtful use of cloud resources so that systems work together in a way that maximizes value, minimizes cost, and is managed proactively. 70% of enterprises claim they will be implementing a formal multicloud strategy by the end of 2019. Why? Organizations choose multiple public clouds to avoid vendor lock-in while gaining access to the best features and latest functionality offered by public cloud providers such as Amazon Web Services (AWS), IBM, Microsoft Azure, etc.
Hybrid Multicloud: Fuelled by open source technology with its built-in data portability and interoperability, hybrid multicloud lets customers fully leverage the data services in the cloud: containers, VMware, public cloud, private cloud, services, etc. The ability to connect public cloud to on prem or move workloads to a different public cloud lets you quickly realize the benefits of “build once, deploy anywhere.”
The situation today is a little like the islands of automation that were created in the early 1980s when minicomputers were introduced. One department would buy Digital, another HP and a third one IBM. Each had its own separate network protocols and ways to communicate. When personal computers (PCs) were added, they didn’t communicate with any of them at first. While it was possible to build connections between them, it was always easier if the connections were considered upfront.
So, how do you go about realizing ‘build once, deploy anywhere’ simplicity – and avoid the complexity and islands of disparate systems?
Solving for complexity and cost in a multicloud world
Regardless of which cloud environment you adopt, success lies in being able to holistically manage your cloud resources as if they were in one location. That’s one of the main challenges that IT organizations are facing today. The ongoing maturation of cloud computing platforms — as well as the constant movement of both workloads and data — means complexity is virtually guaranteed.
I believe this is the reality you must manage and plan for. Trying to manage resources in one cloud without considering the rest of the environment will result in wasted capacity and budget. It is estimated that businesses are wasting millions on unused cloud services.
What’s needed to achieve business goals is deep visibility and insight into your hybrid, multicloud environment. We need to solve the complexity problem now, before it morphs into operational failures.
Tools can help (but there is no magic solution)
Before cloud, if a problem developed, standardized escalation procedures were followed, either on site or at an outsourced data centre. Businesses had full control of the service levels they set. It is more complex in the world of multicloud. Each of the cloud providers – whether IaaS, PaaS or SaaS – has different benefits, service levels, reporting processes and escalation responses. The benefit of a standardized service is lower cost for you, but the flip side is that you must fit their model, not the other way around. So, your team needs to know: which cloud, and which container? Is the restart automated (yet)? Is it a router issue in your network, with the cloud provider, or with an application inside the cloud?
You can see how difficult it is to stay on top of an effective management function internally. As our clients go deeper into this multicloud world, they see the need for a management layer – “cloud ops” – that provides consistency and cost control in operations. Tools like Multicloud Manager and services like IBM “AppOps as a Service” tie everything together into an integrated operations centre and help clients use the various cloud providers to maximum benefit. It acts as a level 1.5 help desk and manages the operations and escalation process so that IT – and the business – have a common view of all clouds.
Although the strategic use of tools and engaging a trusted third-party services provider can help you manage this environment, there is no magic tool; indeed, too many tools can make a complex cloud environment even more difficult to manage.
To optimize cloud use, start with planning
As you migrate more workloads to the cloud, you must decide which workload should go on which cloud and how to track usage. Here are the first few steps that will provide you with the key metrics needed to make informed decisions.
Develop a clear strategy for your workloads. Think through which of your workloads fit best in which cloud: public, private, hybrid or multicloud. Which workloads are more data intensive, which more compute intensive? Which are more static, which will scale more? Which cloud providers offer the right mix of cloud services? Are you using the specific features of your IaaS / PaaS providers, or are you keeping your applications open? That’s the first step.
Know your costs. Build an economic model and business case to track the ROI on cloud adoption and workload move. Track what you are spending – is it what you expected, or more? Are you managing costs associated with moving workloads to the cloud?
Define your key governance metrics. What do you want to track? Availability, response time, and cost are obvious. What about time to create new environments, respond to new requests, or the ease of managing incidents? Are you tracking against the service levels the vendor committed to, or against your own corporate targets?
Refine regularly. The cloud is constantly evolving. You will need to think about how to respond to new capabilities and monitor which suppliers are exceeding your expectations. That can help you choose where new workloads go and whether it’s time to move existing ones. When you are confident that you can move a workload from one cloud to another seamlessly, then you’ve achieved true multicloud management.
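The steps above can be sketched as a simple placement-and-cost model. This is purely an illustrative sketch: the workload attributes, placement rules and dollar figures are my own assumptions, not a real planning tool.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_intensive: bool      # heavy data gravity favours keeping data close
    scales_elastically: bool  # bursty demand favours public cloud
    regulated: bool           # compliance constraints favour private cloud
    budgeted_cost: float      # from the business case / economic model
    actual_cost: float = 0.0  # tracked spend

def place(w: Workload) -> str:
    """Naive placement rule: regulated workloads stay private,
    elastic non-data-heavy workloads go public, the rest stay hybrid."""
    if w.regulated:
        return "private"
    if w.scales_elastically and not w.data_intensive:
        return "public"
    return "hybrid"

def over_budget(workloads):
    """Flag workloads whose actual spend exceeds the business case."""
    return [w.name for w in workloads if w.actual_cost > w.budgeted_cost]

payroll = Workload("payroll", data_intensive=True, scales_elastically=False,
                   regulated=True, budgeted_cost=10_000, actual_cost=9_500)
web = Workload("web-storefront", data_intensive=False, scales_elastically=True,
               regulated=False, budgeted_cost=4_000, actual_cost=5_200)

print(place(payroll))               # private
print(place(web))                   # public
print(over_budget([payroll, web]))  # ['web-storefront']
```

In practice the placement rules and governance metrics would be far richer, but even a toy model like this forces the questions above to be answered explicitly for every workload.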
Unlock new value – without lock-in
In the minicomputer era, the islands of technology gradually became better integrated as standards emerged: TCP/IP as the network protocol replaced proprietary solutions; Unix variants replaced different server operating systems, and so on. In the new cloud era, while open standards like Linux and Kubernetes exist, there are also customizations that can turn into the kind of lock-in that dominated the previous era. The question always needs to be, is the lock-in worth the value?
We can all agree that moving mission-critical enterprise workloads onto the cloud will unlock new value for your business. But how? By developing a strategy and an economic model, you can make the informed decisions that will optimize your workloads in a multicloud architecture. From there, decide on the tools (or services) that will help you achieve your objectives.
Only then will you know the changes you’ll need in how you run your IT shop. But that’s a topic for my next blog. Stay tuned!
When cloud computing went mainstream a decade ago, early adopters quickly realized business value by migrating “easy” workloads to cloud and adopting a cloud-first strategy for new systems. Fast forward to today, when the top IT priority is to modernize existing, mission-critical applications on hybrid cloud for better business agility and faster access to data insights. To fuel that effort, cloud spending is growing at six times the rate of overall IT spending.
Yet, surprisingly, only about 20% of enterprise workloads have migrated to the cloud so far. What is slowing down the remaining 80% of workload migration? What are the obstacles on the migration path? And what steps can organizations take to realize the agility, innovation and digital transformation that is only possible on the cloud?
Why migration to cloud is a multi-year journey
Let’s look at some of the major obstacles to enterprise cloud migration.
Organizations often encounter unexpected and discouraging costs and complexity, even during routine lift and shift migrations.
Developing new apps across multiple vendors and clouds, each with their own proprietary tools and management systems, has led to challenges in security, compliance and governance.
The larger the organization, the older the application suite. According to IDC, 47% of existing applications in Canada are more than five years old, which means they were probably not built with containers or microservices in mind. Migrating these legacy applications can be painstakingly slow because each must first be evaluated for suitability, cost and feasibility.
All of this is going on while enterprises continue to make significant investments in traditional IT.
The reality is, for all but the newest companies, migration will be an incremental, multi-year journey. You won’t get there in one step. Modernized cloud native and legacy apps will coexist for the next 10-plus years, with business and IT value realized as each step is taken as part of a comprehensive digital transformation strategy.
Migration to cloud is a marathon, not a sprint. The question is, how can you accelerate and simplify that journey?
First, why migrate (and why not)?
Why migrate? Don’t move to the cloud because everyone else is, or because you believe it could “automagically” save on infrastructure costs. If, however, cloud is part of an overall change in how IT operates – creating more flexibility, agility and the ability to respond quickly to changing business circumstances – you are on the right track.
Many organizations begin their migration efforts by rehosting an easy workload, but the highest-value applications are in the functional areas where you want the most innovation and that are most amenable to change – client-facing systems, internal business systems, or both.
Remember, not every application is a candidate for migration. Consider:
Large, stable, existing applications are probably a lower priority, until or unless you need to refresh hardware or the underlying infrastructure.
Security issues. Today’s hybrid, multi-cloud reality has IT leaders wrestling with how to move and manage workloads and apps between clouds efficiently without creating security risks. Sometimes this is more fear than reality, but in some cases, private cloud is the answer.
The impact of network performance on the migrated application. If the application is not designed as cloud native, consider network latency (lag) when you’re designing the system. If you are not located in a large metropolitan area, complex tasks may take longer to be received or processed. Applications that use overlapping data should probably be moved together.
Know your technical debt
One of the major challenges in moving to cloud is dealing with your organization’s ‘technical debt.’ If you’ve ever put off a project with the intent to finish it later, you have incurred technical debt. If you are behind on software versions, you have technical debt. And if you took shortcuts like hard-coding variables to get code to work ‘for now,’ you definitely have technical debt!
Today, many enterprise workloads are drowning in technical debt. Migrating or refactoring workloads and applications to the cloud is probably the best way to pay off that technical debt – while improving flexibility and agility. Understanding software currency, release levels and the requirements of the underlying middleware, database and operating system is crucial before you begin.
Lift and shift, refactor or modernize?
Business growth in a digital world requires innovation and the ability to leverage data for competitive advantage. Cloud makes that possible by removing infrastructure costs and transforming enterprise processes. There are several approaches to take:
Replace: When considering the “build vs. buy” question, decide whether you really need to build a proprietary, cloud-based solution – for things like CRM or ERP – or whether you can leverage a “born on the cloud” packaged solution. (If a package does most of what you want, why reinvent the wheel?)
Rehost – Lift and Shift: This is the easiest way to migrate to the cloud. Recently developed applications are often “cloud ready” and can be easily moved to the cloud using a container or virtual machine with no modifications required.
Refactor: Refactoring (or redeveloping) an existing application to take advantage of cloud-native functionality can make it more efficient, scalable, maintainable or reusable. Alternatively, you can make minimal changes by removing some dependencies on the mainframe while avoiding the cost of completely rewriting the program.
Rebuild – Application modernization using microservices: Leverage microservices and containers to build a cloud solution based on an existing application.
The American Airlines modernization story
The reality is, you will use all of these techniques, depending on your application mix and how you intend to use them going forward.
Take American Airlines, which used this combined approach to save money and improve customer service. Most of us have endured a flight cancellation due to weather. IBM helped American Airlines build a mobile Dynamic Rebooking app on the IBM Cloud to vastly improve the customer experience. Behind the scenes, in parallel, using the agile, product-focused Garage method, American Airlines migrated core VMware applications from its legacy environment to IBM IaaS in just four months. The result? Major capital cost savings while ensuring secure integration with back-end systems.
This is a great example of a large enterprise leveraging cloud to digitally transform their business quickly, through a mix of cloud-native innovation development, lift-and-shift migration and modernization of important business functions with microservices.
How to evaluate application readiness
To determine the complexity of moving legacy applications to cloud, ask questions such as:
Is it mission critical?
How complex and time-consuming will the move to cloud be?
What is the value generated to the business by being able to scale the application? The greater the need to scale, the higher the migration priority.
How frequently are you changing or innovating in this business area? The greater the innovation required, the more value there is in moving to cloud.
How secure is the data and where should that data reside? If your application is linked to other high priority applications, it is advisable to move it as a bundle.
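The readiness questions above lend themselves to a simple scoring sketch. The criteria names and weights below are illustrative assumptions on my part, not IBM’s actual evaluation matrix:

```python
# Hypothetical readiness scoring: criteria and weights are illustrative only.
CRITERIA = {
    "mission_critical": -2,   # critical systems move later, more carefully
    "complex_migration": -3,  # high effort and cost lower the priority
    "needs_to_scale": 3,      # a strong need to scale raises the priority
    "high_innovation": 3,     # frequent change raises the value of cloud
    "sensitive_data": -1,     # data residency/security needs review first
}

def migration_score(app: dict) -> int:
    """Sum the weights for every criterion the application exhibits."""
    return sum(w for k, w in CRITERIA.items() if app.get(k))

def prioritize(apps):
    """Order applications from best to worst migration candidate."""
    return sorted(apps, key=migration_score, reverse=True)

apps = [
    {"name": "booking-ui", "needs_to_scale": True, "high_innovation": True},
    {"name": "ledger", "mission_critical": True, "complex_migration": True,
     "sensitive_data": True},
]
for a in prioritize(apps):
    print(a["name"], migration_score(a))  # booking-ui 6, then ledger -6
```

A real assessment would weigh many more factors (dependencies, data gravity, compliance), but the mechanics – score, rank, revisit – stay the same.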
Determining migration priorities
IBM doesn’t use a one-size-fits-all approach to cloud migration. With a focus on enterprise innovation, data privacy, security, and your specific business priorities, IBM uses a matrix that evaluates complexity, time to value and frequency of updates to create a cost-benefit analysis showing which applications should move to cloud first, be retired, or kept on traditional IT (for now).
The evaluation of migration priorities is stored in a database that can be queried, revisited and revised on a regular basis. As the centre of “data gravity” shifts in your environment, priorities will also shift, so data and database dependencies are identified to ensure that all complementary tools/datasets are migrated to the same cloud to optimize integration.
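The idea of a queryable, revisitable priority store can be sketched in a few lines. The table layout, application names and decisions below are hypothetical examples, not the actual evaluation database:

```python
import sqlite3

# A tiny in-memory store of migration decisions that can be re-queried
# as priorities and "data gravity" shift. All rows are illustrative.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE app_priority (
    app TEXT, complexity INTEGER, time_to_value INTEGER,
    update_frequency INTEGER, decision TEXT)""")
rows = [
    ("booking-ui", 2, 5, 5, "migrate-first"),
    ("ledger", 5, 2, 1, "keep-on-prem"),
    ("old-reporting", 4, 1, 0, "retire"),
]
con.executemany("INSERT INTO app_priority VALUES (?,?,?,?,?)", rows)

# Revisit at any time: which apps are currently slated to migrate first?
first = [r[0] for r in con.execute(
    "SELECT app FROM app_priority WHERE decision='migrate-first'")]
print(first)  # ['booking-ui']
```

When circumstances change, an UPDATE to the decision column is all it takes to reflect a new priority – which is exactly why a database beats a one-time spreadsheet.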
Take the next step in the journey
One of the main reasons that companies accrue technical debt is that their IT departments are already overwhelmed with everyday tasks. An experienced cloud migration partner can accelerate your switch to cloud and help you avoid costly mistakes along the way.
If your company’s application portfolio is more than five years old, not built for the cloud, nor optimized for containers, yet digital transformation is your top priority, it’s time to think about taking the next step in the multi-year journey of migration to the cloud – and the path from complexity to simplicity.
No doubt you’ve heard the buzz about microservices, and you are probably wondering what all the excitement is about. Far from just being another IT buzzword, microservices are absolutely crucial for application development on the cloud. I’d like to explain how microservices architecture can create new value for business – often in ways you had never previously imagined.
I’ve been in IT a long time. The ability to reuse code has been the holy grail of application development my entire career. Subroutines, object-oriented programming, object libraries, SOA – these were all supposed to be the magic bullet for reuse. We may not have reached the final answer, but microservices is one of the biggest steps forward in years.
Think of microservices as the logical extension of long-term trends in the evolution of software development. It’s a simple, yet powerful concept that is making all the difference for cloud-native application development.
Microservices architecture is a new approach to software development that creates very small modules containing data and functionality that can be easily combined, reused and scaled in various ways. Application services are broken up into small, independent functions, allowing developers to rapidly build applications, add new features, or launch new services. When combined with containerization, microservices can scale up and down depending on the need of the moment.
An analogy for non-techies
I like to explain microservices architecture using the analogy of a house. Think of your house as an ‘application’ or business system. In traditional, monolithic application development, the front door and back door of the house are the only ways to get in or out. Accessing ‘functionality’ in the house, such as the kitchen, meant following the floor plan – through the front door, down the hall, and so on.
With microservices, the house itself is radically redesigned, so that individual functions such as the dishwasher or a bedroom are available in any quantity you want, in one simple step, from wherever you are. Having a weekend party? Get instant access to five dishwashers and three extra bedrooms, then scale back to normal operations when you’re finished. Reuse or recombine functionality in any way you want!
That’s the power of microservice architecture.
Why microservices are here to stay
Complex applications built from a set of modular components are faster and easier to develop, adapt, and scale to meet demand, and take up fewer resources. Developers have the freedom to focus only on their piece of the project, which means continuous integration and development are built into microservices architecture, while virtually eliminating infrastructure risk. If one component fails, it’s easy to isolate and fix it while the rest of the application keeps on ticking.
Microservices architecture splits large applications into (much) smaller pieces that exist independently of each other.
Each microservice does one thing and does it very well.
Microservices offer a fast, flexible and efficient approach to building, deploying, and updating individual parts of an enterprise application, streamlining application development and updates.
Security based on industry standards is built into each microservice, so it’s already there when you recombine or reuse the component.
A word about APIs and containers
Microservices are most powerful when they are containerized. Think of a shipping container, which is used to quickly and efficiently load and transport cargo around the world. Putting microservices into a container lets you build a powerful system that gives you all the functionality you need, since the container gives you both scalability and replicability. The most common way to implement this is using Docker containers and using Kubernetes to orchestrate, monitor, manage, and scale microservices.
Microservices use application program interfaces (APIs) as gateways to access an application’s data or use an application’s functionality. APIs are how the world’s electronics, applications, and web pages are linked up to work together. It is the combination of microservices (public and private) plus containers that gives developers the scalability needed to rapidly add new value, features and functionality.
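To make the microservice-plus-API idea concrete, here is a toy sketch using only the Python standard library: one tiny “pricing” service that does one thing, exposed behind an HTTP API that any other service can consume. The service name, SKUs and prices are invented for illustration; a production microservice would run in its own container, not a thread.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy 'pricing' microservice: one small function behind an HTTP API.
class PricingService(BaseHTTPRequestHandler):
    PRICES = {"widget": 9.99, "gadget": 24.50}  # illustrative data

    def do_GET(self):
        sku = self.path.strip("/")
        if sku in self.PRICES:
            body = json.dumps({"sku": sku, "price": self.PRICES[sku]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown sku"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PricingService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any other service (or web page) consumes it through the API alone --
# it never needs to know how prices are stored or computed.
url = f"http://127.0.0.1:{server.server_port}/widget"
with urllib.request.urlopen(url) as resp:
    print(json.load(resp))  # {'sku': 'widget', 'price': 9.99}
server.shutdown()
```

The consumer depends only on the API contract, so the pricing logic can be rewritten, scaled out or redeployed without touching anything that calls it – which is the whole point of the architecture.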
Why microservices are revolutionary
The 2019 IDC FutureScape Report says that the digital economy is driving the shift to modular, distributed, cloud-native technologies. It predicts that by 2022, 90% of all applications will be using microservices architectures.
That’s not surprising, when you consider what microservices can do for businesses.
Consider a very simple example. Not too long ago, company websites contained static maps that showed their location. To get there, you needed a paper map. Today, a single mapping app such as Google Maps (a third-party, microservice-based API) shows the company location along with dynamic, real-time suggested routes and predicted travel times for walking, driving, or transit. Not only is this revolutionary, it’s a win-win-win: it saves developers the effort of reinventing a mapping app every time; it gives consumers fast, customized information; and it is a huge win for the company that developed the product.
The opportunity is enormous
I believe that many organizations have not yet grasped the breadth of opportunity awaiting them with microservices architecture. They hear the buzzwords, but they aren’t clear on the value it brings to the business. Let me elaborate.
Not only are microservices at the frontier of cloud application development, they are the most powerful way to move the valuable functionality in your legacy applications onto the cloud. No need to move your 50-year-old legacy payroll solution to the cloud all at once to experience the benefits. Take out the piece that is scalable and reusable for other business needs, turn it into a microservice, then build or refactor the rest of the system over time.
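Carving one reusable piece out of a legacy system – rather than moving the whole thing at once – might look like the sketch below. The payroll domain, function names and tax rate are all hypothetical; the point is the shape of the refactoring, not the numbers.

```python
# Before: tax calculation was buried inside a monolithic payroll module.
# After: the same logic is extracted as a small, independently deployable
# service that both the legacy system and new cloud apps can call.
# (Names and rates are illustrative, not a real payroll system.)

def calculate_tax(gross: float, rate: float = 0.25) -> float:
    """Extracted microservice logic: one job, done well."""
    return round(gross * rate, 2)

# The legacy monolith now delegates to the extracted function.
# In production this would be an API call to the service, not an import.
def legacy_payroll_run(employees):
    return {name: gross - calculate_tax(gross)
            for name, gross in employees.items()}

print(calculate_tax(4000.0))                # 1000.0
print(legacy_payroll_run({"ana": 4000.0}))  # {'ana': 3000.0}
```

Each extraction like this shrinks the monolith a little and gives the business a reusable, scalable building block in the cloud, while the rest of the legacy system keeps running untouched.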
Imagine taking a great idea from your finance application, combining it with another idea from your HR application, and creating a new, powerful solution that nobody had thought of before. It’s possible with microservices architecture.
Business leaders ask us how they can get the same kind of competitive edge. We have been thrilled to help large enterprise clients rapidly gain new functionality and create new business models using containerized microservices architecture on the cloud.
In one sense, microservices is just a new word for reuse and scalability in IT, but these days, multi-million-dollar businesses are stitching together publicly available microservices with their own solutions and ideas to disrupt and topple entrenched industries. Who knows what the next industry disruptor will be?
Next up: Migrating applications to the cloud
Most companies are starting to build cloud-native apps using microservices architecture, but it’s also a key component of a good strategy to migrate legacy applications to the cloud. Everyone’s journey has its own twists and turns, but to reap the greatest benefit, you need to refactor and redesign your application as you migrate it and modernize it for the cloud.
I’ll be talking about cloud migration in my next blog. Stay tuned!
I got two pieces of advice when I first started canoeing with friends. The first was “expect wind in your face and rain every day,” which was the starting point of this blog series. The second, which has stayed with me all these years and is relevant to projects, was to “stay calm on the last day.”
When I asked my friend Graham what he meant by that, he highlighted a few things:
The last day is often the toughest day of the trip. You are tired, you have a full day of paddling, repacking the cars, plus a long drive to get home.
You will have been together for a long time with a group of people with whom you have different relationships – some you don’t know very well.
You will be tired, dirty, want some clean clothes, and a chair with a back and legs, something better than a log to sit on. Maybe a craving for something as simple as cold milk in a glass.
It’s easy in that scenario to let tensions or frustration get the better of you and say or do something you might regret in the long-term (even if it feels good in the here and now).
At the end of a trip it is particularly easy to let something slip if you’ve been holding it in for a long time.
In the same way, we all need to keep our emotions in check when we get tired or the going gets tough on a project. I confess this is much easier for me to say than to do, particularly on the big troubled projects I’ve been brought in to help get back on track. But the same process is at work: you get tired and easily triggered. It goes back to taking frequent micro breaks, taking a deep breath… and staying calm on the last day.
When I started this sequence of blog posts on ‘paddling and projects’ it was the middle of winter. Now my canoe is back in the water, and it’s time to do some real paddling!
Here’s the complete list of ‘Paddling and Projects – Lessons Learned’ in summary form that we’ve explored in these blog posts:
1. Expect wind in your face and rain every day
Plan for problems; have contingency
2. Make sure you have the right experts on your project
It’s not about ‘all senior’; it’s about balance in the team. Do you have ‘just enough’ expertise?
3. Everyone has a role. Know your role, do more, carry your fair share
Understand your role; take on more when you can
4. Do you trust the person planning the route?
– If I had seen the video, I wouldn’t have come
– The group moves at the speed of the slowest person
Understand the plan & whether the resources are capable of achieving it; provide constructive criticism to fill the gaps
5. Innovators are often mocked before they are copied. So are the people who resist change.
Embrace change; look for the benefit of change
6. Take planned and unplanned breaks
Take opportunities to recharge
7. Be the early warning system
Don’t assume everyone else sees the risks you see; don’t be afraid to highlight risks; bring a solution
8. If you want to see the beautiful sights few have seen, you have to carry a heavy load and go off the beaten path
No client pays us $20m to go for a walk in the park; if it wasn’t hard it wouldn’t be transformative
9. Don’t watch the end of the lake; you see progress by what you pass
Make sure that there are frequent meaningful milestones to measure progress
10. Any fool can be uncomfortable in the woods
Methodology is important; follow a method to achieve maximum benefit
11. A Leatherman is the wrong tool for every job
Get and use the right tools to execute your project
12. Don’t take cheap chocolate or mediocre wine into the woods
Know your trade-offs – understand what’s most important when you’re out of time or money; what’s the best thing for the client and for the business
13. Maybe someone moved the hydro wires
Usually the simplest answer is the correct one; test your assumptions regularly
14. Stay calm on the last day
Don’t let stress get you down; find ways to reduce stress (like canoeing!)