Channel: Morpheus Blog

Turducken, DevOps, and Unified Orchestration


A portmanteau is a blend of words in which parts of several words are combined into something new. This time of year, as we approach Thanksgiving here in the States, we get to see my favorite example: the Turducken. Variations on this engastration have been around for over a century. In fact, an old Indian story from the 1800s described what might be the ultimate in the genre – "Prepare a whole camel, skinned and cleaned, put a goat inside it, and inside the goat a turkey and inside the turkey a chicken. Stuff the chicken with a grouse and inside that put a quail and finally inside that a sparrow. Then season it all well, place the camel in a hole in the ground and roast it." – thus was born the CamGoTurChikGroQualRow.


In the spirit of verbal mashups, I spent last week in San Francisco at the DevOps Enterprise Summit. It was a great event and a good chance to meet and hear from @RealGeneKim, author and one of the patriarchs of the movement. DevSecOps, BizDevOps, DevCloudOps, and several other variations of silo elimination were on display at the event. While I was joining the other nerds in the search for DevOps enlightenment (and for the best new sticker for my Mac), I ran across Torsten Volk, Senior Analyst at Enterprise Management Associates. I attended a panel session with Torsten and afterward managed to steal a few minutes for a discussion.

Even though DevOps has been an active space for a while, many of the enterprise customers at the show and the experts giving presentations echoed a common observation: most mainstream enterprises are only just scratching the surface. That was my first question for Torsten:


At Morpheus, we've been dealing first-hand with DevOps as a multi-cloud management tool and unified orchestration engine. AstraZeneca, as an example, was able to speed deploys from 80 hours in their vRA implementation down to just 10 minutes with Morpheus, including full ServiceNow integration. Not everybody is that far along; many are just trying to get their cloud house in order so developers get what they need from IT without feeling the need to bypass legacy processes. On that note, in addition to the DOES2017 event, I was working with the team last week to launch our 3.1 software update… you can check out the news here. Much like the Turducken, our new platform is the ultimate deployment mashup... combining cloud cost analytics, governance, app release automation, and multi-platform scaling. It's why many large enterprises are making the switch to a unified approach.

I asked Torsten for his thoughts on scaling DevOps practices in the enterprise.

As mentioned at the start, it is Thanksgiving week. I'm thankful for the opportunity to jump into a new industry and make new contacts with customers, partners, and influencers. If you've got comments or would like to connect directly, you can find me @MorpheusDude on Twitter.


What got you here, won't get you there


It’s been used as a hook a lot, I know… but it’s one of those pithy truisms that is sometimes too hard for us marketing types to resist. In this case, I thought it applicable to a number of items in the news recently. 

First, on the executive front, last week Meg Whitman stepped down from the CEO spot at Hewlett Packard Enterprise (HPE). Not really a surprise for anyone paying attention this past year… in fact, quite a bit later than most speculated. To the title of this post, she came in with a turnaround plan that ran its course and she acknowledged the need for a change after exiting the software and managed services business. In my former life as a storage guy, I saw how a hardware-focused sales channel could significantly grow a business when presented with the right offer at the right time. The acquisition of 3PAR and more recently Nimble Storage were moves that I hope (as a shareholder) continue to pay off in hardware margins as Antonio takes the top spot.

That said, the appetite of enterprise customers to base their go-forward strategic initiatives on infrastructure-centric vendor engagements is not what it once was. Most have realized that the platform for digital disruption is not going to be grounded in hardware choice points or in traditional packaged applications. Converged, hyper-converged, composable, and other integrated stack conversations are all better ways of delivering hardware for sure, but they are incremental improvements in a world where step-function change is turning entire industries upside down.

Banks, transportation companies, retailers… they are all software companies now, simply with banking licenses, distribution networks, and point-of-sale locations. The developers striving to move from monthly release cycles to daily, hourly, and beyond are not interested in hardware platforms. The move to multi-cloud deployments and agile development frameworks is not about what got us here; in fact, as I look around at AWS re:Invent this week, it reinforces that the world has changed and there is no turning back. The event has taken over 5 hotels and most of the Vegas strip, and registering for sessions reminded me of going to college… hundreds of sessions, dozens of tracks, and a flurry of high-impact product announcements.

This is not to say that on-premises deployments are going away, just that the world is much more complicated now for IT leaders. Much of the growth of public cloud has come from developers going around slow-moving IT processes within enterprises, and more than one CIO has been shocked by six-figure expenses due to shadow IT. DevOps initiatives embrace this reality by having infrastructure teams shift left and integrate into development pipelines, but as that occurs, those traditional IT teams are realizing that the need for extreme agility cannot be met with legacy thinking or legacy vendors.

Everyone is a software company, everybody is heterogeneous, and everything is hybrid… meaning having your software deployment engine dictated by a single hardware vendor, hypervisor company, container orchestrator, or cloud vendor is untenable and unrealistic. The open source world is a juggernaut for software developers and in response, multi-cloud management and DevOps automation must be just as open and ubiquitous.

It starts and ends with the applications that are transforming engagement with customers and markets. These apps may run on-prem, in hosted data centers, or in public clouds. They could be monolithic or based on micro-services architectures. They could be bare metal, virtualized, containerized, or even serverless workload elements. The truth is for most enterprises it’s all of the above meaning toolchains must take into account all eventualities and be instrumented not for what got you here but rather for whatever comes next.

Taxicab Confessions from AWS re:Invent 2017


What’s better than a week in Vegas?  Two weeks… maybe that’s 12 days too many but hard to say when you are surrounded by so much great content and so many fantastic people.

I’m halfway through my Vegas marathon with AWS behind me and the Gartner Datacenter show coming up. It was my first #reinvent and WOW… more people, vendors, sessions, and energy than any tech conference I’ve been to in recent memory.

While the logistics of handling a show spread out across the Vegas strip may have tested the limits of what’s possible, it was still a very positive experience. I met some new members of the Clouderati and found some others who should be great contacts as I continue my new adventure. Thanks to @Stu for introducing me to good times at #CloudBeers with @Julian_Wood, @jpwarren, @tcrawford, @amber_rowland and others. I also did a great man-on-the-street video and had conversations with @evankirstel, plus caught up on all things storage with old friend @ChrisMEvans.

Some good recap blogs have already come out. Check out the re:Invent posts from SiliconAngle with videos from Furrier and team, as well as daily details from Julian Wood (http://twitter.com/@julian_wood/) on WoodITWork.com.

For my part, I was able to catch a ride to the airport with my new Morpheus BFAM, Adam Hicks, and captured some recap thoughts while on the road. Take a look and send me your thoughts @MorpheusDude (Twitter will have to do while I keep working on getting comments enabled on this site).


Bi-Directional Middle-Out for CloudOps and DevOps


Let’s agree on a few things right out of the gate…  

  1. First, with respect to CloudOps, the major driver of public cloud usage is new app development. Hybrid IT and multi-cloud brokering is the new service provided by IT; those orgs who don’t do it well are stuck with shadow IT issues from rogue developer teams.

  2. Second, let’s turn attention to DevOps, where developers care about apps… not infrastructure. These teams live in the world of Agile and Open Source, where faster releases and better tooling rule the day.

  3. Lastly, the whiteboard scene from the Silicon Valley season one finale ranks among the funniest 5 minutes of TV ever.

 

Everybody with me so far?

So, it stands to reason that the only thing everybody has in common is the application.  The application is the face of today’s business… in virtually every industry and geography. 

For CloudOps it’s about where the app lives, for DevOps it’s about how it’s deployed, and for Pied Piper it was a compression platform.  The application is the center of the universe… yes, you need cost control and governance as well as automation and deployment flexibility but without an app-centric point of view, the rest of it is noise.

That’s why I’m here to advocate Middle-Out Cloud Management… and not just Richard Hendricks-style middle-out. I’ll do you one better with Bi-Directional Middle-Out. Now, while I’d love to think that the 2x2 wizards over at Gartner will come out with a middle-out MQ, I’m not holding my breath. However, if you look at this webinar on How to Choose a Cloud Management Platform (CMP), you can see where I’m headed.

There are a set of governance and provisioning table stakes for CMPs, as well as emerging demand for cost optimization; however, it’s critical to take a step back and ask why cloud in the first place. The objective of the organizations looking at all of these tools is to meet digital disruption head on through the development of applications and changes in process that fundamentally transform engagement with customers and markets.

 

Morpheus was designed middle-out

So yes, of course, I’ve ended up here but I want to highlight why an application-centric design center can help reduce tool sprawl and may be better aligned with where your enterprise is looking to go.

First, let’s look at the North-South or on-prem and off-prem capabilities.  You don’t want a server, VM, or container… your ultimate goal is a functional config for an application.  That’s why we go beyond and actually provision the OS, database, web server, and other app components on the right tier, in the right boot order, and provide tools to help companies evolve over time as they modernize both traditional and micro-services applications.  

As a multi-cloud orchestration platform, we have the governance tools as well as the analytics to help optimize costs, but the point is that Morpheus starts with the application in a 100% infrastructure-agnostic way... deploying, migrating, and elastically scaling the application on any OS, any platform, any database, and any cloud.

Next, we can dig into the East-West development and deployment pipeline. With our history as a DevOps-oriented deployment tool for IT practitioners, we eat our own dog food: our engineers actually use Morpheus to develop and deploy Morpheus. It’s our application, and as we think middle-out we look to the left, integrating with code repositories and the binary artifacts that come from builds, and to the right, to the last-mile and ‘day 2’ operations required to run an application… load balancers, backups, monitoring, and logging are all native to Morpheus. If you are looking at cloud management tools to serve your app dev teams, you should ask your tool provider how much they practice what they preach.

 
I’m still surprised at how some tools are narrowly focused on one slice of the cloud and DevOps world given how tightly coupled these two inexorable forces are. So I give you middle-out as a construct to re-think your tool choices and/or possibly your TV binge-watching. Now if only I could proxy a Weissman score for lossless cloud management tools…

If you want to set up a demo to discuss your application needs, let us know, or if you want to comment, shoot me a DM on Twitter @MorpheusDude.

The Unexpected Business Benefits of Hybrid Clouds


IT pros of every stripe are in the process of rewriting their job descriptions. In many cases, the reality of 21st century data management leads to a version of this concept:

 

“I’m a strategy consultant helping the business from the inside.”

 

IT managers are transforming themselves into in-house business consultants charged with providing decision makers in the organization with the tools and business insights they need to achieve the company's goals. Such transformations always present challenges -- some obvious, and some unexpected.

 

One of the unforeseen obstacles facing IT departments as they shift from service provider to business consultant is called the "endowment effect." In a November 27, 2017, article on ITProPortal, Ian Furness defines the endowment effect as the tendency of people to value something more highly solely because they own it. This leads to them undervaluing alternatives that might actually be a better deal.

 

The result of the endowment effect on IT is the lost opportunity to capitalize on new approaches and technologies. According to Furness, the emotional connection to "owning" data and systems explains the slow pace of cloud migration in many organizations. Not only does overvaluing in-house systems cost companies more money, it leaves data less secure: Professional cloud services now deliver a higher level of security expertise than any company can provide economically on its own.

 

Spend less time managing machines, more time consulting with business managers

 

IT workers continue to spend most of their time managing the information infrastructure that their business relies on. However, the clear trend is to let cloud services do the heavy lifting: they provide the CPUs, servers, storage, security, and network plumbing that supports your company's apps and data. IT's attention turns away from infrastructure concerns and toward discovering and distributing the business insights sought by customers throughout the organization.

 

Companies that are just now ramping up their migration to cloud services have one built-in advantage: As David Linthicum writes in a December 4, 2017, article on Datamation, "we're just getting better at migration and refactoring." Linthicum points out that some workloads will never be a good fit for the cloud -- legacy systems and proprietary databases in particular. Still, the two factors most likely to stall a cloud migration plan's momentum are fear of vendor lock-in and skepticism about the cloud's much-touted ability to save money.

 

 

More potential cloud customers cite concerns about vendor lock-in and compliance/governance, while fewer worry about security, cost, and loss of control. Source: Kleiner Perkins, via Datamation

 

When enthusiasm for the cloud projects in your organization begins to sag, Linthicum recommends reminding the stakeholders that the only way to keep pace with the competition is to be as agile as they are in terms of provisioning compute and storage resources. If you can't get your products to market as quickly as your competitors do, you'll always be playing catch-up. This is one of the strategic advantages that are easy to miss when you focus exclusively on the cloud's tactical advantages.

 

Another way to keep people optimistic about the company's cloud efforts is to generate early successes with small projects. This serves as both a proof of concept and a way to fail early, fail often, and fail small so you can learn quickly from initial mistakes. Thinking small also helps you collect, organize, and analyze metrics in a way that can be applied quickly and simply to future projects, many of which will have a larger scale.

 

Minimize hybrid cloud management overhead to maximize cost savings

 

A potential sinkhole for businesses planning their cloud strategy is overspending on management. 451 Research's Cloud Price Index found that taking a hybrid/multi-cloud approach rather than relying on a single cloud service can save companies 74 percent of direct expenditures, as reported by TechTarget's Alan R. Earls in a November 2017 article. The caveat is that such savings are possible only when cost controls are applied to monitor and optimize all cloud spending.

 

 

The top three drivers of hybrid/multi-cloud adoption are flexibility and choice in deciding where workloads run (64 percent), extending IT resource capacity (56 percent), and maximizing return on existing IT investments (56 percent). Source: 451 Research, via Forbes

 

A comprehensive cost-control system ranges from basics such as powering off idle servers and other unused equipment, to logging the many different cloud accounts in use, including which applications and teams are using them. To automate the operating model, apply a tagging taxonomy at the build and design phases. Use tagging to identify all resources consumed in the multi-cloud infrastructure; this lets you tie each resource to a cost center. The tagging can be done via third-party tools or expressed in open formats such as JSON or YAML.
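As a sketch of how a tagging taxonomy ties resources to cost centers, the snippet below rolls up monthly spend by a "cost-center" tag. The resource records, tag keys, and cost figures are purely illustrative, not drawn from any particular cloud provider's API:

```python
# Illustrative only: resource records and tag keys are made up for this sketch.
from collections import defaultdict

resources = [
    {"id": "vm-101", "cloud": "aws", "monthly_cost": 212.40,
     "tags": {"cost-center": "marketing", "app": "web-frontend"}},
    {"id": "vm-102", "cloud": "azure", "monthly_cost": 318.75,
     "tags": {"cost-center": "engineering", "app": "api"}},
    {"id": "db-007", "cloud": "aws", "monthly_cost": 540.00,
     "tags": {"cost-center": "engineering", "app": "api"}},
    # Untagged resources surface as unallocated spend instead of disappearing.
    {"id": "vm-999", "cloud": "gcp", "monthly_cost": 95.10, "tags": {}},
]

def spend_by_cost_center(resources):
    """Aggregate monthly cost per cost-center tag across all clouds."""
    totals = defaultdict(float)
    for r in resources:
        center = r["tags"].get("cost-center", "UNALLOCATED")
        totals[center] += r["monthly_cost"]
    return dict(totals)

print(spend_by_cost_center(resources))
```

The same roll-up works whether the tag data comes from a provider's billing export or a third-party tool; the key design choice is a single, mandatory tag key applied at build time so nothing lands in the "UNALLOCATED" bucket.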

 

Tagging also lets you set budget alerts as part of a DevOps methodology, which highlights the need to centrally monitor all bills received from all accounts and subscriptions.
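A budget-alert check on top of tag-based totals might look like the following hypothetical sketch (the cost-center names and budget figures are invented for the example):

```python
# Hypothetical sketch: flag cost centers whose tagged spend exceeds budget.
budgets = {"engineering": 800.00, "marketing": 250.00}

def over_budget(spend, budgets):
    """Return {cost_center: overage} for every center past its budget limit."""
    return {center: spend[center] - limit
            for center, limit in budgets.items()
            if spend.get(center, 0.0) > limit}

# engineering is over its limit and gets flagged; marketing is under and is not.
alerts = over_budget({"engineering": 858.75, "marketing": 212.40}, budgets)
print(alerts)
```

In practice such a check would run against a centrally aggregated bill feed on a schedule, with the alert routed into the same pipeline tooling the DevOps team already watches.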

 

Benefits of a deliberate, cost-focused approach to cloud migration

 

You can't blame people for believing that cloud computing is the answer to all their problems. The tremendous level of hype surrounding cloud services causes many business people to think all their apps and data need to be moved to the cloud, the sooner the better. It is becoming increasingly common for CIOs and IT managers to find themselves in the role of bringing these cloud dreamers back down to earth.

 

Market research firm IDC forecasts that 58.7 percent of worldwide IT infrastructure spending in 2017 will go toward in-house systems, as Jessica Lyons Hardcastle reports in a July 6, 2017, article on SDX Central. This represents a decline of 4.6 percent from 62.6 percent in 2016, and the trend is clearly toward increases in cloud IT spending: According to IDC, cloud infrastructure spending will have a compound annual growth rate of 11 percent from 2017 to 2021, to a projected $45.7 billion that year.

 

 

While public and private cloud will account for an increasing share of total worldwide IT infrastructure spending through 2021, traditional in-house data centers will continue to represent the largest single category. Source: IDC, via SDX Central

 

In a December 9, 2017, article in the Economic Times, Deepak Misra and Rajeev Mittal list some common reasons for keeping applications in-house: compliance and regulatory issues, the need for low latency, and the inability of custom legacy apps to run in cloud environments. Misra and Mittal present five typical use cases justifying a cloud-migration plan:

 

1. Start by bringing the cloud in-house: Rather than maintaining a legacy in-house infrastructure while migrating apps to the cloud one by one, transform your data center to an architecture that is compatible with public and private clouds. The result is greater efficiency, improved performance, enhanced security, and lower operating costs through replacement of outdated multi-vendor servers, storage, and backup.

 

2. Realize savings quicker by adopting appliances: Appliances that are pre-configured for a specific operation are simple to deploy and run, and they require fewer specialized cloud skills.

 

3. Extend your private cloud to a hybrid setup: A path many companies take is to begin their cloud strategy by migrating a handful of non-critical workloads to a private cloud as a proof of concept and testbed for tools and techniques. The downside of this approach is the time it takes to "build out" private clouds, which cancels out the speed, cost, and agility benefits of the cloud.

 

4. Focus on your most-critical applications: It's counter-intuitive that an organization's most important systems are also the ones most likely to be running on outdated infrastructure, and those most likely to benefit from an upgrade. For these apps, migrating to high-end servers -- whether in public or private clouds -- on a single platform and using a unified management system will deliver the most bang for your IT bucks.

 

5. Replace outdated storage systems with their fast, efficient cloud-ready counterparts: The typical scenario as IT departments rushed to accommodate the tidal wave of data swamping their systems is simply to keep adding to the legacy storage infrastructure. The result is overspending on outdated technology, and ineffective data management. Cloud-ready storage allows you to consolidate existing data resources while improving both security and performance.

 

The IT departments that will come out ahead in the "race" to the cloud are those that get a jump on the future while ensuring that their apps and data will continue to run smoothly and safely in the present -- whether they reside in-house or in the cloud. The IT professionals most likely to succeed in the future are those who think like business consultants rather than like data managers.

What do experts say about the cloud for 2018?


As the cloud continues to grow and expand, more and more businesses are embracing it and making use of all the advantages on offer. Recent years have been filled with innovation in storage, scaling, and security, along with the addition of container tools such as Docker and Kubernetes.

In 2018, pure technological wizardry will no longer be the driving force behind the cloud because business customers, first intrigued and then enchanted by what the cloud can do, have made the intellectual and psychological leap to fully accepting the cloud as their primary IT strategy and foundation for the future.

Source: Bob Evans for Forbes

As noted in the above quote, organizations are now engaged in the cloud and are less likely to need convincing to make the switch. Instead, they will be more interested in what is offered among the cloud vendors that most suits their needs. With this in mind, what are some things we may expect to see in the world of the cloud for 2018?

Competition on the rise

Right now, public cloud operations are a black box. But the European GDPR requirements will hit in 2018, and advanced enterprise users are becoming more sophisticated in their cloud strategies and need to be able to differentiate their offerings based on the customer experience they provide. As a result, public cloud vendors will be forced to reveal more about how and where specific data packets are hopping through their cloud networks.

Source: Steven Mih, CEO of Aviatrix Systems for DZone

Given that cloud adoption is happening and will continue to grow, competition will become even more heated for all of the new customers looking to make the transition. Since much has been done with technological innovation, winning those new customers may come down to familiar things such as customer service, good product offerings, and pricing. 

With many cloud vendors to choose from, these new customers can certainly do their research, so vendors will need to be sure they are on top of things that might make the difference in which service a customer chooses. Customer service is definitely one of those, so it will likely be a good idea for vendors to ensure they are offering the best service possible to their customers.

Hybrid cloud on the rise

 Although hybrid clouds have been talked about for a very long time, they will become a “thing” in 2018. As consolidations and partnerships accelerate, and as workload portability becomes an imperative for Azure and GCP, we will witness many new services where customers will be able to run their own private clouds and seamlessly connect with Azure or GCP for additional capacity on demand. Partners like Cisco, HPE, Dell, and VMware will participate in this thing wholeheartedly to ensure a prolonged revenue stream from their existing products.

Source: Dimitri Stiliadis, CEO of Aporeto via eWeek

With more and more organizations making the move, many will want to have some portion of their cloud environment be their own private cloud in order to further protect particular resources and information. 

For some companies, the ability to have a hybrid cloud setup could be the difference between making the transition or not. Cloud vendors that can integrate well with private clouds to make having a hybrid cloud easy for customers should stand to benefit from this.

Security will still dominate

We expect 2018 will see more individual and state-sponsored attacks aimed at undermining the security of cloud infrastructures. As cyber attackers become more sophisticated, security analysts in government, public, and private sectors will also have to become more sophisticated and timely in their methods for detecting and preventing attacks.

Source: NetworkWorld

As always, security will be a key concern and yet another thing that could be the deciding factor in a potential customer making the transition to the cloud. With all the attacks that could arise, it is imperative that cloud vendors make security a top priority again - for 2018 and into the future. Since security will never be something that can be ignored, it certainly is a good idea for cloud vendors to put great efforts into making sure their cloud infrastructures are as safe as possible from any potential breaches.

While cloud adoption will continue to rise, it looks like 2018 could be a year of competition among cloud vendors. The difference may come down to which vendors can offer the best combination of price, security, and customer service, which in the end is good for those organizations that are seeking to make the cloud transition in the new year!

How Cloud Management Lets You Re-imagine the Possible


It seems like a contradiction: In the burgeoning cloud era of speed and agility, the biggest mistake you can make in formulating a winning cloud-management strategy is not that you think too big, but rather that you fail to think big enough.

 

Movie superheroes rarely sport pocket protectors (except maybe for their alter egos). Nor are you likely to find "The DevOps Handbook" or "The Phoenix Project" on their nightstands or e-readers.

 

That doesn't mean IT managers should give up their dreams of being recognized as the people who saved the day for their companies, battling the competition to its knees with a combination of tech-savvy, automation tools, and superior digital transformation weaponry.

 

What's keeping IT pros from enjoying the adulation of their coworkers -- from the executive suite to the production floor -- may be a failure of imagination. Infrastructure and Ops managers are understandably focused on remaining relevant as change swirls around their profession. Rather than taking a defensive posture by fighting to preserve the status quo -- a losing battle if there ever was one -- the IT champions of the future are thinking big and taking calculated risks.

 

The cloud is 'how we do IT going forward'

 

"Cloud first" is now the rule rather than the exception for IT strategies. As the Stack's Sam Clark writes in a December 14, 2017, post, all computing is now cloud-based, in full or in part. In Clark's survey of industry executives about what IT departments can expect in 2018, a common theme is a recognition that cloud is mainstream.  In fact, at last year's Gartner Datacenter event over 80% of enterprise IT leaders stated that they had a cloud-first strategy for new applications.

 

Cloud Foundry Foundation CTO Chip Childers believes the cloud's signature attributes -- self-service and API addressability -- have usurped other compute models and are now "how we do IT going forward." SysAid Technologies CEO Sarah Lahav says companies now realize their cloud focus has to expand beyond reducing effort and cost. The best cloud strategies emphasize increasing the value of data assets to the organization and delivering a great user experience.

 

 

The transition to multi-cloud infrastructure relies on the management and orchestration layer, while the "cloud exchange" of the future will commoditize cloud resources and make app deployment infrastructure-agnostic. Source: IHS Markit

 

Velostrata CEO Issy Ben-Shaul recommends that enterprises preparing to move their workloads to the cloud adopt a multi-cloud strategy that relies on services available on multiple platforms, and that makes the migration process quick, simple, and safe. Puppet Chief Technology Strategist Nigel Kersten reminds companies making the move to the cloud to remain "adaptable and flexible" to ensure smooth integration of the public and private cloud components.

 

Dipping back to Gartner research, over 90% of organizations reported that by 2020 they will be making use of multiple clouds to satisfy CEO expectations around digital engagement.

 

A cloud infusion makes businesses smarter, faster, focused on growth

 

After they have committed to a cloud foundation, companies are able to re-think their entire approach to business transformation - from systems of engagement and innovation to analytics and business intelligence. In a December 4, 2017, article on Forbes, Bob Evans writes that 2018 will be the year business customers will be "making high-stakes decisions about which cloud vendors demonstrate the greatest understanding of their businesses, their industries, their needs, their opportunities."

 

The goal is to become "highly capable digital enterprises" using cloud services to make themselves smarter, faster, aware, and ready to capitalize on growth opportunities. The winners of the "Cloud Wars" that Evans envisions in 2018 are the services that assist their customers as they complete their digital transformation. 

 

In particular, the cloud services comprising today's multi-cloud environments will need to demonstrate expertise in vertical industries. That's the only way they will win the support of enterprises that are entrusting their businesses' future to them.

 

 

A recent IDG Market Pulse Study found diverse views among IT managers about the ultimate goal of their digital transformation: process automation, meeting customer expectations, enhancing worker productivity, and secure anywhere/anytime access were the most frequent responses. Source: IDG, via CloudExpo

 

Echoing the "cloud as default" theme are the predictions for 2018 made by research firm Gartner. Scot Petersen writes in a December 5, 2017, article on eWeek that companies need to be past the stage of thinking about recreating their infrastructure in the cloud, "they must embrace [digital transformation] now." According to one Gartner analyst, tech executives "should not be known by infrastructure [they] keep, but by outcomes [they] provide."

 

Successful use of technology is now measured in terms of business outcomes: new customers, revenue growth, greater market share. The challenge is to innovate while "keeping the lights on" by ensuring that existing systems continue to run smoothly. Gartner analysts insist that new services be given priority and that business outcomes be emphasized.

 

Fluid data and app mobility requires a security re-think

 

The digital transformation now underway is one of the seven technology flashpoints for 2018 that Information Age's Nick Ismail identifies in a December 13, 2017, article. Your business's data is being "re-platformed," according to Ismail, to take advantage of the cloud's fluid data movement and edge-to-cloud security. An indication of the focus on business needs rather than infrastructure management is the integration of security at all levels of the DevOps virtuous circle of plan-build-test-deploy-monitor.

 

Ismail uses the term "DevSecOps" to define the merger of security and DevOps so that safeguards are implemented starting at the initial development stages. Two trends, in particular, are driving the integration of security and DevOps: the need to comply with Europe's General Data Protection Regulation (GDPR) and other data regulations; and the growing popularity of NoSQL for enterprise databases.

 

 

Increasing regulatory pressure and the growth of NoSQL in enterprises require that security is integrated with DevOps from initial planning, which creates "DevSecOps." Source: Larry Maccherone

 

The continuing emphasis on unlimited agility and scalability will drive the adoption of serverless infrastructure in 2018, according to Ismail. The goal of ever-greater efficiencies at enterprise scale will remain a challenge as companies adopt new use cases that assemble and disassemble the stack in novel, unique ways.

 

One of the biggest questions that comes up in Morpheus Data discussions with end-users is security.  Many organizations adopting multi-cloud strategies for business agility are caught in a catch-22: with agility can come added risk.  However, standardizing multi-cloud access and provisioning on a common set of processes and controls that also integrates with third-party identity management closes those security holes and enables DevSecCloudOps.

 

Vertical clouds signal the rise of best-of-breed specialization

 

Multi-cloud architectures will be the norm in 2018: A survey by IBM found that 85 percent of enterprises will commit to the multi-cloud approach in the coming year. As Paul Gillen writes in a December 18, 2017, article on SiliconAngle, multi-cloud will lead to increased use of specialty services. Established cloud vendors and so-called second-tier providers are expected to pitch products for high-performance computing, genomics, fluid dynamics, and other resource-intensive areas.

 

The growth in vertical clouds is expected to outpace the overall cloud market, according to International Data Corp. forecasts cited by CloudTech's David H. Deans in a December 15, 2017, article. For example, worldwide spending on cloud services by the financial industry is expected to increase from $3.2 billion in 2017 to $7.2 billion in 2021, while cloud spending by manufacturing industries will jump 23 percent in 2018, from $4.2 billion in 2017, and will more than double to $9.2 billion in 2021.

 

 

IDC projects that growth in such vertical clouds as finance, manufacturing, and healthcare will outpace overall cloud revenue increases through 2021. Source: IDC, via CloudTech

 

According to IDC, the vertical cloud expected to experience the greatest increase through 2021 is healthcare, which could record total spending of $17.6 billion in 2021, up from $8.9 billion in total spending in 2017. An advantage for vendors offering vertical cloud services is the ability to leverage their customers' "digital ecosystem" to establish a long-lasting presence in the industry.

 

Long-term benefits of managed cloud services now coming into focus

 

It's understandable that the short-term goals IT departments set for cloud services emphasized speed, agility, security, and cost-savings. TechTarget's Jason Sparapani writes in a November 30, 2017, article that CIOs are only now coming to grips with the potential to recreate their data operations to take full advantage of the cloud as infrastructure.

 

Sparapani quotes Gartner analyst Peter Sondergaard at a recent IT symposium as distinguishing "digital projects" from "digital business." The projects require only investments in technology. Becoming a truly digital business entails investing in people. CIOs are no longer providing services to their customers, they're delivering value by helping their customers achieve the business's goals more effectively and efficiently.

 

To date, IT departments have been fixating on answering the question, "How do we apply these new technologies?" The question IT should be asking customers is "What is the problem you're trying to solve?" A strategic mistake some companies make is to think they can build out from one successful cloud project. As beneficial as it can be to prove cloud concepts via small victories, such an approach can narrow your long-term vision.

 

Cloud technology is opportunity knocking on the doors of IT departments around the world. Capitalizing on that opportunity requires calculated boldness, and a willingness to shoot for the moon.

Tips for making your cloud environment last


If you are looking to convince your company to make the transition to the cloud (or already have), you will want to ensure that both the cloud culture and infrastructure are as future-proof and long-lasting as possible. 

You certainly don’t want to make the transition only to have to transition back to an older infrastructure due to lack of support or because things stopped working as expected.  With that in mind, here are some tips that may help make your cloud environment last not only for 2018 but into the future as well.

Celebrate and grow from early successes

 A.K.A "Land and Expand" - If you can start with some smaller pilot projects, you can demonstrate to the right people in your organization how a cloud integration can be successful. If people see the value added when the cloud is put in place for the smaller projects, then it will be easier to see how that value would be brought into the larger infrastructure.  These controlled pilots also give you a chance to iron out the people handoffs and process issues that inevitably accompany any major shift in culture.

For example, if you are being pushed to move over quite a few apps into the cloud, including a very important app that will be a larger project, you can start with one or several of the smaller apps. Once you have made the transition, you and your team can display how the transition went and how things have improved with the cloud. You might be able to demonstrate how easy it can be to spin up another server when needed to deal with a heavy load, or to provide additional memory or disk space for easy scaling of these apps.

Being able to display apps working in the cloud, along with the agility provided to quickly balance load or scale as needed may help to show how much help a cloud infrastructure can be for many years to come.
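The scale-up story above can be sketched as a simple threshold rule. This is an illustrative sketch only; the function name and thresholds are hypothetical, not part of any particular cloud or Morpheus API.

```python
# Hypothetical sketch of threshold-based scaling: average recent CPU
# utilization decides whether to add a node, remove one, or hold steady.

def scale_decision(cpu_samples, scale_out_at=0.80, scale_in_at=0.30):
    """Return +1 (spin up another server), -1 (retire one), or 0 (hold)."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg >= scale_out_at:
        return 1
    if avg <= scale_in_at:
        return -1
    return 0

print(scale_decision([0.91, 0.88, 0.95]))  # sustained heavy load -> 1
print(scale_decision([0.42, 0.55, 0.47]))  # normal load -> 0
```

In practice the same rule would be applied per application tier, with cooldown periods between actions to avoid flapping.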

As an example, AstraZeneca used the Morpheus unified orchestration platform to streamline their provisioning of virtual machines early in 2017 after having struggled with a costly and complicated implementation of VMware vRealize.  With over $1M spent on professional services, it was still taking a week to provision new infrastructure.  After hitting the reset button, the team was given 90 days to either get the VMware solution working or come up with a new plan.  The team kicked out VRA and turned to Morpheus for help.  After a short roll-out (and without the need for an army of services engineers) they were able to demonstrate that provisioning time had been reduced to minutes.  At the end of the year, when it came time to deploy infrastructure for one of the company's major 2018 transformation projects, the team was able to do in just 3 weeks what the project manager had assumed would be a multi-month roll-out.

Design for agility via hybrid and multi-cloud

Protected, confidential, and sensitive information can be stored on a private cloud while still leveraging resources of the public cloud to run apps that rely on that data. This is especially important for businesses that store sensitive data for their customers. (Think health care providers and payroll processors, for example.)

Source: Tyler Keenan for Upwork

It is almost certain that your company will have some information that needs to remain private, while other things can be deployed into the public sphere. If you plan for a hybrid cloud early, it can save quite a few headaches later.

For example, deploying everything to a public cloud could potentially leave private information in greater peril of being obtained by someone who should not have it, and could cause a company to leave the cloud entirely, not wanting to risk such an incident happening again. On the other hand, having a hybrid cloud in place early can help people see that both the public and private parts of the transition have been thought through and can be applied as needed for your organization.

In addition to recognizing the need to holistically manage on-prem and off-prem workloads in a hybrid manner, you should avoid painting yourself into a corner when it comes to public cloud options. According to recent Gartner research, over 90% of companies have stated they will be using multiple clouds by 2020. While there is certainly some elegance in using the native tools of a given public cloud provider, best-in-class IT organizations are establishing a multi-cloud strategy that does not require bouncing between multiple cloud tools. 

Avoid lock-in with a single vendor

Suppliers are nervous about customers changing providers and are therefore not making the whole task of migration easy. Suppliers will try to lock in customers to avoid them migrating to a new provider. For the customers, they often don't know how the lock-in will impact them and this can be devastating when it happens.

Source: ComputerWeekly.com

There are a number of cloud vendors, and migrating from one to another may not be the easiest thing to do. Also, if one or more of these vendors should merge or leave the field, you would need to migrate rather quickly to something you haven’t used yet. This could be particularly difficult, with the potential for downtime and other issues. 

It is a good idea to work with several vendors, at least for the ability to know how each works and be able to have a migration plan in place should the need ever arise. Being able to see how your infrastructure fits in with a number of cloud vendors can prove to be very helpful in creating a good migration plan and being able to implement it quickly when needed.  

The lock-in discussion doesn't just apply to cloud choice.  Decisions around hypervisor stacks, container technologies, PaaS offerings, configuration management tools, and more are all rife with potential pitfalls.  When you look at the average enterprise IT landscape it's no wonder that automation and agility are such elusive goals.  Homegrown approaches are impossible to maintain, but getting into bed with a single vendor can be limiting.  This is particularly true when you look at the half-life and hype cycle of technology nowadays.

Another option that allows you to work with multiple cloud and technology stack elements in a standard way is a unified orchestration layer such as Morpheus.  The right orchestration can provide a standard set of APIs and processes which will, in turn, insulate organizations from the underlying complexity and make enterprise agility possible.  With such a service, you can use almost any cloud vendor or platform to create public, private, and hybrid cloud infrastructures. 

Following these tips can certainly help you make your cloud environment a happy and lasting one. Keep in mind that every company’s situation is different, so be sure to move forward in a way that will best suit the needs of your organization.  Let us know if you'd like to set up a demo and learn how Morpheus can provide you with a timeless cloud platform.

 

 


Benefits and Challenges of Moving Existing Apps to the Cloud


When you need to move an existing app to the cloud, it can be quite a challenge. To begin, you will need to decide what method would be best to use in order to move a particular app over to the cloud. The method you use may need to be different for each app that needs to be moved, so it is a good idea to see what benefits and challenges there are to each one so that you can make the best decision for all of the apps you need to transfer.

Why put apps in the cloud? First and foremost, this should be driven by fundamental business considerations, not simply the desire to offload overburdened IT professionals.

Source: ZDNet

Secondly, you need to be sure that moving the app is the best course of action for your business at the time. Would the costs to move it outweigh the benefits? If so, it may be best to wait until the timing is better, unless of course, the move is urgent for other reasons (e.g., the app’s old environment will no longer exist or the app is failing in the old environment).

The three most common ways to move an app to the cloud are to replace it with a SaaS (software as a service) option that does the same thing, relocate the existing app to the cloud, or rewrite the existing app so that it works (or works better) in the cloud environment. Each option is examined below for the advantages and disadvantages that come with it.  The experts at Gartner visualize it similarly, as shown below.

 

Gartner's cloud migration options. Source: Gartner

Replacing the app with SaaS

We typically recommend SaaS to small to medium businesses with fairly straightforward business processes that are looking to reduce upfront expenses. 

Source: Derek Singleton, Software Advice

If the app you are moving does something relatively common, there may already be a SaaS solution available in the cloud that you could use in place of the old app. If so, this can be the simplest solution, since you could simply take a little time to configure the SaaS app for your needs and begin using it. 

This has a great upside when such an app is available, but the downside is that you may not be able to find such an app already available. In such cases, you would still need to move your app using one of the other solutions. 

Relocating the app to the cloud

If you have a smaller budget and/or few spare resources, the next best option would be to relocate the existing app to the cloud. This might be as simple as spinning up a Windows or Unix instance and installing the app there so that it can be run, but it might also be more complex. 

You will likely need to make sure that your custom operating system, database, and other settings can be set up on the new system. This could be as simple as moving over some configuration files but could be more difficult if some of the specifics of the new system are different.

In cases where the operating system the app uses is unavailable, you may have to do some extensive work in order to move the app into that particular cloud structure. Depending on just how extensive this work is, you may need to move to the option that involves rewriting the app for the cloud since it may actually end up being less time-consuming in some cases.

Rewriting the app for the cloud

This is a wonderful option if you have the time, budget, and resources to rewrite the app entirely in order to take advantage of the options available in the cloud. In this case, you get to choose the operating system, database, and programming languages optimal for your team, the cloud service you have, and the goals of the specific app. 

If you have a cloud service that offers a good variety of operating systems, you will have quite a bit of freedom to choose the type of setup you would like. Suppose, for example, that you determine the database you prefer would work best on Unix: you can simply spin up a Unix instance in the cloud, put the database in place, and get to development!

No matter which option you decide to use for a particular app, the cloud offers quite a bit more flexibility for your budget and your development team. With a unified multi-cloud orchestration tool like Morpheus, you will have the ability to spin up many different operating systems, monitor your apps, and scale your apps as you grow.   More importantly, you can do it at your own pace, from brand-new cloud-native microservices to legacy bare-metal apps to mixed migrations that may place different tiers on different platforms located in different clouds.

This level of agility and flexibility can help you find the right path to cloud migration that works for your unique needs!

 

 

 

A valentine's day special… I DevOps, take you Cloud, to be my happily ever after


A ‘meet cute’, for you non-Rom-Com fans, is the scene where the couple first meets.  Just think of Harry and Sally in that first car-share sequence.  If that means nothing to you then I can’t help you…. Google it.

What does any of that have to do with DevOps and Cloud?  DevOps is the key to a successful cloud-first strategy, and in fact these two are as inextricably linked as Tom Cruise and Renee Zellweger in Jerry Maguire.  The DevOps shift is driving the demand for rapid provisioning of elastic resources, and CloudOps is on the hook for delivering the infrastructure required to meet the business goals.  In this plotline, our meet-cute started with digital transformation.

A recent survey cited by Chris Pope in a December 11, 2017, article on ITProPortal found that in 76 percent of enterprises, DevOps is the primary driver of the companies' cloud-first initiatives. Even more telling, nearly all respondents -- including IT and line-of-business managers -- reported being involved in their firm's DevOps program.

Any IT managers who feared the rise of cloud services would render them obsolete can rest assured, there's great demand for their tech and business savvy. In 75 percent of the enterprises surveyed, a cloud-first approach made IT more relevant to the business and a DevOps mindset made them invaluable.

The challenge for CIOs is to bring their staff up to speed with the new skills required by the digital transformation: 90 percent of companies that have adopted a cloud-first strategy report their IT workers lacked skills needed to implement their plans. In addition to helping IT staff gain the skills required to keep a cloud-first infrastructure running smoothly, CIOs need to change the culture of the department to be more entrepreneurial and less risk-averse.  This is the same fail-fast mantra of DevOps evangelists.

 

Communication is key to any relationship... Managing bottom-up adoption of cloud services 

When the center of your IT universe moves from your own premises to the cloud, it becomes more difficult to get an end-to-end view of your operations and costs. Because cloud projects frequently begin from the bottom up, the IT department has less control over the process, yet IT is still responsible for security, compliance, performance, and reliability.

An application-centric organization benefits from the efficiencies of a bottom-up approach to infrastructure combined with the mature management capabilities of the top-down approach. Source: Deloitte Insights

The only way to track cloud efforts that originate in business units is to work directly with the people on the business side -- not just once a month or once a week, but every day. The fundamental role of IT professionals changes, which means relationships between IT and business departments change as well. It can be challenging for IT pros to manage the transition from infrastructure builders to service brokers.

One of the primary groups they are brokering on behalf of is the development team.  In a recent Gartner survey, 88% of respondents cited application developers as the largest consumer of infrastructure and operations resources.  When IT shifts left and becomes an integrated part of those development projects, cloud-first can start to be a reality and ops teams can help align the right resources both on-prem and off.  

At Morpheus, we often find ourselves in discussions where both of these groups are at the table.  I'm happy to play matchmaker and give both developers and operations the feeling that they are getting the best out of the relationship.  It's all about providing operations the control they need while allowing developers to maintain their own identity in a BYOT (bring your own toolchain) scenario.

 

Flexibility is important, and in this case the application makes the rules

The inexorable march from Bare Metal to VM to Containers and now toward serverless infrastructure requires new ways of thinking about planning, deploying, and managing applications. In a December 8, 2017, article on Data Center Dynamics, Mark Baker describes serverless as "the ultimate layer of abstraction -- write code, define a function, execute and get a return." Among the challenges to be overcome are application portability, service predictability, and liability for failures.

Perhaps the greatest impediment to a successful cloud-first strategy is old-fashioned inertia. Much time and effort have been invested in the traditional data-center model, development pipeline, and legacy platform modalities.   In this case, it's time to toss aside the baggage of your ex and embrace what's new.  The advantage of daily deployments over weekly or monthly ones is profound.  To put it another way, cloud-first done right, with end-to-end application automation and the ability to span multiple platforms, can literally be the linchpin in making digital transformation a reality.   Doing it in such a way that helps you evolve current application libraries to a containerized and FaaS future is what long-term relationships are made of.

Baker points out that a common mistake of IT decision-makers is conflating what is possible with what is optimal. He states that "[b]uilding infrastructure that works and building infrastructure that really makes a difference to the business are two entirely different things."  

 

Good relationships start with knowing yourself:  Accurate analysis of workloads

A cloud-first approach requires that CIOs demonstrate more than mere cloud competence. Jyoti Lalchandani writes in a December 14, 2017, article on Gulf News Technology that CIOs must become masters of the cloud services model. The failure to do so will lead to more line-of-business managers "taking control of their own computing futures," according to Lalchandani.

Digital transformation business initiatives have become CEO-level priorities as more development and production workloads migrate to cloud platforms. A primary role for IT departments is matching various types of workloads to the cloud services that will optimize performance while minimizing costs. Evaluating the needs of individual workloads entails comparing the cost, performance, security, and contract terms of different cloud services.

Mapping workloads in multi-cloud settings presents three scenarios: presentation and business logic (public cloud) are separated from data (private cloud), the entire app runs in the public cloud using data filtered and replicated in the private cloud, and cloud-native apps aggregate data from back-end systems hosted in the private cloud and on-premises data centers. Source: IBM
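One lightweight way to ground the workload-matching exercise above is a weighted scorecard across the criteria just mentioned (cost, performance, security, contract terms). The weights and ratings below are made-up assumptions for illustration, not any analyst firm's methodology.

```python
# Hypothetical weighted scorecard for placing one workload.
# Weights sum to 1.0; each cloud option is rated 0-10 per criterion.

WEIGHTS = {"cost": 0.35, "performance": 0.30, "security": 0.25, "contract": 0.10}

def placement_score(ratings):
    """Weighted sum of a cloud option's ratings across all criteria."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

options = {
    "public_cloud":  {"cost": 8, "performance": 7, "security": 6, "contract": 7},
    "private_cloud": {"cost": 5, "performance": 8, "security": 9, "contract": 8},
}

best = max(options, key=lambda o: placement_score(options[o]))
print(best)  # the security-heavy ratings tip this workload to private_cloud
```

The value of even a toy model like this is that it forces the placement debate onto explicit, comparable numbers rather than gut feel.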

Kevin Casey claims in a December 28, 2017, article on the Enterprisers Project that 2018 will be the year enterprises shift their focus from cloud adoption to cloud optimization. Currently, managing multi-cloud environments entails "chair swiveling" from console to console, but Casey quotes cloud consultant Jeff Budge as saying that through 2018 more IT departments will turn to "single pane of glass" cloud-management tools such as the Morpheus unified ops orchestration tool.  While I cringe at the SPOG reference I do think that minimizing pain by minimizing panes is a goal of every customer we talk to.

Morpheus started as a DevOps-centric deployment engine across any cloud infrastructure.  That infrastructure often included brownfield environments, so being able to manage application requests and provisioning actions was a natural underpinning to what came next: the analysis of utilization related to those applications.   Running in either an agentless or agent-assisted model, we can accurately gather and aggregate workload data to help inform cloud optimization.  When we turn that machine learning to DevOps, we can help extend insight to the full development pipeline.

Make the cloud-first culture change one of growing together... not apart

The maxim "no pain, no gain" has been applied to all fundamental technology changes in the business world, and the cloud-first transition is no exception. When the U.S. Department of Homeland Security began its Cloud Factory project, its goal was to develop a cloud-services platform that would meet the needs of the entire organization. DHS CTO Michael Hermus writes in a December 13, 2017, article on the Enterprisers Project that the department's cloud transformation entailed "a fair amount of cultural pain."

The characteristics of a digitally maturing organization include a shift from no user focus to a central user focus, and from a risk-averse culture to "risk receptive" to encourage innovation and collaboration. Source: Deloitte Insights

The DHS's Cloud Factory had four goals:

1. Create a single platform able to host all of the department's applications and systems.

2. Make the platform flexible enough to meet the needs of individual missions.

3. Implement a tech stack that complies with federal mandates.

4. Use a common DevSecOps toolset.

Cloud Factory's benefits include the use of best-of-breed tools, an agnostic integration framework, and access to the full gamut of cloud services and software components. However, the ultimate success of the project depends on DHS's ability to transform the department's culture in five ways:

1. Instill a mindset that change is more than "OK," it's necessary. Rather than waiting for change to happen, the agency will "continually drive change."

2. Invite people to take a chance on occasion and "try something new." Failure in an attempt to improve something should be recognized and even rewarded as a "good risk."

3. Hire people who have a passion for what they do. Even those who have the skills you need today will need to learn new skills in the future, and a talented worker with passion can be taught the skills required to thrive.

4. Modernize your workplace to make it more aesthetically pleasing. Standing workstations and other amenities can improve the morale of workers at relatively little expense.

5. Learn from and contribute to your community. The best way to find out about innovations and new opportunities is by rubbing elbows with others in your industry, as well as with cloud services and other tech providers.

The digital transformation now underway is often seen as having two facets: one virtual (infrastructure) and one human (IT and business staff). The application layer is where cloud platforms meet knowledge workers. The focus in enterprises is shifting to multi-cloud optimization and cultural reinvention. Transparency about how well your workloads are running and how effective your multi-cloud approach is in achieving your business goals becomes imperative.

 

Relationships, both personal and professional, are always evolving.  In the case of DevOps and CloudOps it's clear that one completes the other, and the folks at Morpheus are ready to help you make the love connection required to fulfill your digital transformation dreams.  Let us know if you'd like to have a discussion or schedule a demo to learn more.

5 Ways to Ensure Cloud Cost Transparency and Accurate Analytics


"You can't manage what you can't see."

It can also be said that nothing of value is gained without a price. There are cost savings to be realized by migrating workloads from legacy systems to cloud services, but the only way to maximize the cloud's financial benefits is to do the hard work of analyzing and comparing costs per workload when running on-premises and when relocated to a public or managed private cloud.  
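A minimal sketch of that per-workload comparison, with placeholder figures: the point is that the on-premises side must include amortized hardware, power and cooling, and shares of admin and facility cost, not just the server price. Every number below is a made-up assumption to illustrate the accounting, not a benchmark.

```python
# Hypothetical monthly break-even comparison for a single workload.

def monthly_onprem(hw_amortized, power_cooling, admin_share, facility_share):
    """All-in monthly cost of hosting the workload in-house."""
    return hw_amortized + power_cooling + admin_share + facility_share

def monthly_cloud(instance_rate_hr, hours, storage, egress):
    """Monthly cost of the same workload on a public cloud."""
    return instance_rate_hr * hours + storage + egress

onprem = monthly_onprem(hw_amortized=600, power_cooling=150,
                        admin_share=400, facility_share=120)
cloud = monthly_cloud(instance_rate_hr=0.40, hours=730, storage=80, egress=45)
print(f"on-prem ${onprem}/mo vs cloud ${cloud:.0f}/mo")
```

Leaving out any of the in-house line items makes the legacy environment look artificially cheap, which is exactly the comparison error this section warns about.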

It's also critical that I&O prepare for inevitable cloud sprawl by designing for a unified, end-to-end view of cloud operations.   Cost optimization is available from companies like CloudHealth and Cloudability, but it's only a small piece of the puzzle and is quickly becoming a feature of broader CloudOps and DevOps conversations.  An end-to-end view of hybrid cloud management should extend to processes such as container and config management, as well as the full range of policies, governance, security, and Day 2 operations required to keep the CEO, CIO, and cloud users on the same page.

These tips will help you collect and accurately analyze the cost-accounting information you need to ensure you reap the most savings possible from your multi-cloud strategy.

 

1. Make sure your cloud service offers easy-to-use tools that let you track costs in real time. Sometimes, the ability to spin up multiple test environments simultaneously can go to a developer's head. Before you know it, the CIO is left with a monthly bill for cloud services that far exceeds the budgeted amount.

One of the cost-control tips Venkat Etikyala offers to CIOs in his November 30, 2017, article on Data Center Knowledge is to insist on first-rate usage-tracking tools from cloud vendors. These tools give customers real-time, contextual views of the cloud's unpredictable operational elasticity.

This can be particularly challenging, yet even more important, in multi-cloud environments, which are most susceptible to out-of-policy spending and to unused or under-used virtual machines.  Analytics and guidance capabilities such as those from Morpheus give leaders a standardized way to right-size legacy on-prem environments and multiple external clouds.
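As a rough illustration of the tag-based usage tracking this tip calls for, the snippet below aggregates billing line items per team tag and flags budget overruns. The record shapes, tags, and budgets are all assumptions for the sketch, not any vendor's billing API.

```python
# Hypothetical tag-based spend rollup with simple budget alerting.

from collections import defaultdict

budgets = {"dev": 500.0, "qa": 300.0, "prod": 2000.0}

line_items = [          # (team tag, cost of one billing line item)
    ("dev", 180.0),
    ("dev", 410.0),     # a forgotten test environment, perhaps
    ("qa", 95.0),
    ("prod", 1250.0),
]

spend = defaultdict(float)
for team, cost in line_items:
    spend[team] += cost

over_budget = {t: s for t, s in spend.items() if s > budgets.get(t, 0.0)}
print(over_budget)  # dev is over: 590.0 spent against a 500.0 budget
```

Run continuously against a real billing feed, the same rollup is what turns a surprise month-end invoice into a same-day alert.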

 

The financial management component of a comprehensive cloud management platform requires automated consumption tracking, real-time analysis and reporting, and predictive analytics. Source: Cloud Standards Customer Council

 

2. Adopt multi-cloud to prevent unexpected price increases. In a December 5, 2017, follow-up on Data Center Knowledge, Etikyala highlights a primary benefit of a multi-cloud architecture: it serves as insurance against unanticipated vendor price hikes. Differences in the methods used by cloud services to build their platforms and devise their policies and services make migrating between cloud providers costly and time-consuming.

Etikyala recommends signing on with a second cloud service to provide redundancy for both your data center and primary cloud service. The second provider also serves as a platform for any apps and systems that are not well suited to your main cloud service's infrastructure. The key for CIOs is to take advantage of multi-cloud's flexibility and security, but in a way that doesn't blow up their cloud budgets.

It’s critical, however, when deploying a multi-cloud strategy that you standardize as many processes as possible to avoid disruption when it comes time to migrate.  This points to the adoption of agnostic third-party tools such as Morpheus rather than the native tools of a single public cloud provider.

  

3. Don't spend money on cloud capacity you don't need. Everybody knows managing cloud services is different from running an in-house data center. An all-too-common trap organizations fall into is duplicating the excess capacity of their on-premises servers when they migrate their workloads to the cloud. Business Insider's Becky Peterson reports in a December 1, 2017, article on work conducted by Stanford researcher Dr. Jonathan Koomey relating to cloud cost efficiencies.

According to Koomey's research, four out of five in-house data centers have "way more server capacity than is necessary," and the overspending is repeated when the companies migrate their workloads to cloud services. Koomey found that firms are paying an average of 36 percent more for cloud resources than they need to. Koomey uses Gartner's projection that $173 billion will be spent in 2018 on data storage to conclude potential savings globally from optimizing workloads could total $62 billion for the year.

Rightsizing of application instances has become a critical part of any cloud management tool and should go beyond basic VM management.  For example, the ability to see up the stack and control things like power schedules can ensure you are only paying for compute capacity when it's being used.
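As a rough illustration of why power schedules matter, the policy itself is trivial to express. Here is a minimal Python sketch; the business-hours window and the savings math are hypothetical examples, not Morpheus features:

```python
# Hypothetical power-schedule policy: run dev/test instances only
# during business hours on weekdays, then estimate the savings.

BUSINESS_HOURS = range(8, 18)   # 8am to 6pm
WEEKDAYS = range(0, 5)          # Monday through Friday

def should_be_running(weekday, hour):
    """Return True if the power schedule keeps the instance on."""
    return weekday in WEEKDAYS and hour in BUSINESS_HOURS

# Fraction of the week an always-on instance is actually needed:
hours_needed = sum(
    should_be_running(d, h) for d in range(7) for h in range(24)
)
utilization = hours_needed / (7 * 24)   # 50 / 168, about 30%
savings = 1 - utilization               # about 70% of compute hours
```

For a weekday 8am-6pm schedule, an always-on instance is only needed about 30% of the week, so powering it down the rest of the time cuts roughly 70% of its compute hours.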

Efficient management of cloud capacity requires a new approach to reserve capacity made possible by real-time monitoring of resource demand. Source: Eric Bauer, via Nokia

 

 

4. Qualify your applications to balance workloads before you quantify cloud benefits.

Financial data analysis firm FICO relies on sophisticated cost modeling to determine which workloads are best suited to public and private clouds. Jeffrey Burt writes in a December 5, 2017, article on the Next Platform that the modeling encompasses the resources required to send a workload out, test it, and run it. Also factored into the model are how frequently the resource is accessed, its availability requirements and redundancy needs, and disaster recovery.

FICO CIO Claus Moldt says that when conducting public and private cloud cost comparisons, the biggest mistake companies make is failing to represent all in-house costs in the calculation. Burt quotes Moldt as saying "You cannot use a lift-and-shift... you're going to get your math wrong." The result is that it appears to be less expensive than it actually is to run workloads in-house -- on legacy systems or private clouds -- compared to the cost of migrating them to the public cloud.

How do you know when you've reached "cloud nirvana"? According to Jeff Fraleigh in a November 2017 article on ITProPortal, the state of cloud maturity is achieved when four conditions are met:

1. You have analyzed your complete software portfolio to identify those apps that matter the most.

2. You have quantified your applications based on how important they are to achieve business goals.

3. You have devised a roadmap that encompasses all your objectives.

4. You have monitored cloud performance over time and confirmed that your app-migration program has resulted in maximum benefit to the organization.

 

5. Save time and money by relocating your test environment to the cloud. It's no mystery why the first foray into cloud services for many companies is their application-testing environment. Dalibor Siroky writes in a December 17, 2017, article on WebSphere Journal that app testing is by nature fast and temporary. Cloud-based testing can accomplish in minutes what would take hours or days to complete in-house.

The two requirements for a successful cloud test environment are automated infrastructure and automated acceptance testing. When automation is implemented correctly, the resulting cost savings and agility "will more than pay for the cost of both automation initiatives," according to Siroky. When you remove manual testing, the test environment exists only as long as it takes to run the full test suite.

In the continuous deployment pipeline, testing is treated as "just another step in the build pipeline." Test environments can spin up and spin down on demand, and the process of setting up the test data is fully automated. QA environments are created only when they are needed and are part of the continuous deployment pipeline rather than standalone entities. Instead of spending $1000 or more per month for a separate QA environment, you pay about $12 per test suite execution during "busy development weeks," and run a regression test twice a day.
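Siroky's numbers lend themselves to a quick back-of-the-envelope comparison. This Python sketch uses his published figures; the run counts are my own hypothetical inputs:

```python
# Compare a standing QA environment to on-demand test environments,
# using Siroky's figures: $1000/month standing vs. ~$12 per suite run.

STANDING_MONTHLY_COST = 1000   # dollars for an always-on QA environment
COST_PER_SUITE_RUN = 12        # dollars per ephemeral test execution

def on_demand_monthly_cost(runs_per_day, days=30):
    """Monthly spend when environments exist only for the test run."""
    return runs_per_day * days * COST_PER_SUITE_RUN

busy_week_cost = on_demand_monthly_cost(2)   # 2 regression runs/day
break_even_runs_per_day = STANDING_MONTHLY_COST / (COST_PER_SUITE_RUN * 30)
```

Even at a "busy development week" pace of two regression runs per day, the on-demand model comes in around $720 per month versus $1,000 for a standing environment, and it only breaks even at roughly 2.8 suite executions per day.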

 

From cost optimization to governance to automation and app modernization, the team at Morpheus is working hard to help enterprise customers reduce tool sprawl and accelerate their digital transformation in the most efficient way possible.  Let us know if you'd like to have a discussion or schedule a demo to learn more.

 

 

Calling all partners: Join the CloudOps and DevOps revolution to tap into ‘the Big Spiff’


I’ve spent the last few months since I joined Morpheus talking to the channel community, and some patterns are emerging that I wanted to share.  For customers, this blog is not for you… no offense, but I’m speaking to the sales guys and business principals who are the engine of the tech industry.  The ones enterprises around the world lean on to help knit together the often-brittle world of technology components which in turn power today’s digital economy.

 

Here is the movie I’ve now seen a dozen times:

  • Bob’s house of IT has built a solid business over the last decade selling and deploying Servers + VMware along with SAN Storage and Networking to go along with it.
  • Unfortunately, what was once a very profitable and high growth business is slowing down big time.  Hardware margins have been shrinking as those platforms become commoditized and specialist level administrators and buyers inside enterprises are on the decline.   This is great news for customers but requires a pivot for partners.
  • Options to expand revenue include embracing HCI and Converged Systems, getting more creative on financing, increasing services attach, and otherwise getting more value out of those infrastructure relationships.  All good plays that smart partners are already pursuing, but not long-term game changers.

 

Growing with multi-cloud management and unified orchestration

Best-in-class partners are actively pivoting hard to adjacent technology stacks and project centers to continue the trusted advisor relationship they have built over the last decade. Two of the fastest growing and most profitable adjacencies are clearly Cloud and DevOps transformations.  These two are inextricably linked, of critical importance to most enterprises, and incredibly difficult for those enterprises to deliver on.   In fact, as the image illustrates, there is a systemic gap between business expectations and IT's ability to deliver.

 

The complexity of optimizing multi-cloud environments, automating self-service workloads across clouds, and accelerating the velocity of application deployments across legacy or micro-services architectures is a multi-billion-dollar market growing at double digits.  The feedback we routinely hear from partners about Morpheus is not only how it helps simplify and automate those environments but also how it can be the sales 'glue' that holds the environment together.  Our industry-leading set of integrations enables us to play well with others while rounding out the functionality customers need to go the last mile in their transformation projects.

 

Go Big or Go Home

We were excited to host our first global sales kickoff and MVP (Morpheus Verified Professionals) boot camp this week.  We had dozens of solutions architects representing our gold-tier partners from as far as Germany, China, and across the US.  We’ve also had more seven-figure deals this past quarter than ever before. 

 

 

As part of the event, we’re announcing “the Big Spiff” for any partner that can help close $2.5M in business by the end of August.  The objective of the Big Spiff is to drive the organizational transition and incremental focus on Cloud/Software sales as the traditional HW business continues to erode.  While we’re not sharing all of the specifics in this public forum, we’re sure that the magnitude of this rebate on top of existing best-in-class margins will more than double what you’re getting in traditional businesses.  Please contact us if you’d be interested in becoming an MVP and learning more about the 2018 incentive program.

 

What do the Olympics and Zambonis have to do with Cloud and DevOps?


It might be a stretch but stick with me.  First, as a winter sports nut and father of two daughters, it’s been fun to watch the women bring home the medals – including a fantastic win by the women’s hockey team.  Being the Cloud and DevOps obsessed guy that I am, however, I couldn’t watch the games without also thinking about enterprise IT.  I know… I need to get a life.  The analogy that came to me was the linkage between an overused Olympic skating rink and the often fragmented, chipped, and torn-up landscape of enterprise IT systems.

Every platform choice, control plane entry, automation project, failed deployment, manual handoff, script, API, and cloud option adds up to a massively brittle infrastructure laden with technical debt and cruft… leaving something completely at odds with high-velocity deployment.  The need to focus on the software delivery pipeline rather than on specific tools and scripts for each step in deployment was echoed recently by Neil Weinberg in a January 18, 2018 article in InformationWeek.  After all, some of those tools, clouds, and APIs won't be around in the future, and new options are always around the corner.   There must be a way to transcend tool limitations that force round pegs into square holes.

 

Winning digital gold can start with the right mental attitude

Delivering enterprise agility and DevOps at scale means focusing as much on 'mindsets' as on 'toolsets'.  The development workflow encompasses build, test, deploy, monitor, and improve -- all done continuously. Your pipeline has to be flexible enough to allow new tools, platforms, and application components to be swapped in easily and just as continuously. This means architecting pipelines as software deployment factories that are as standardized and agnostic as possible.  Equally important is that the team of developers and operations staff who make continuous delivery/continuous integration possible adopt the DevOps mindset rather than simply choosing a tool and assuming the rest will follow.
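The "deployment factory" idea can be made concrete: model the pipeline as an ordered list of stages whose implementations are swappable, so replacing a tool never changes the flow itself. A minimal Python sketch, with hypothetical stage names and toy step functions standing in for real tools:

```python
# Hypothetical tool-agnostic pipeline: each stage is a named slot whose
# implementation can be swapped without changing the pipeline itself.

def run_pipeline(artifact, stages):
    """Pass the artifact through each (name, step) stage in order."""
    for _name, step in stages:
        artifact = step(artifact)
    return artifact

# Swappable stage implementations -- replace any one of these
# (e.g., a different test tool) and the flow is untouched.
stages = [
    ("build",  lambda a: a + ["compiled"]),
    ("test",   lambda a: a + ["tested"]),
    ("deploy", lambda a: a + ["deployed"]),
]

result = run_pipeline(["source"], stages)
```

Swapping the "test" slot for a different tool changes one entry in the list; the pipeline definition, and everything downstream of it, is untouched.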

Weinberg predicts that 2018 will be the year stability arrives in the DevOps area. As more enterprises adopt container-centric and serverless architectures, customers will have clearer end-to-end choices, reducing the pressure to devise their own mix-and-match toolchain.  A common trap that companies fall into when implementing DevOps is trying to choose the individual tools they'll use before they've determined how they will orchestrate the pipeline. The failure is that your process is then defined by the nature of the tools rather than the nature of your business's unique needs.

I was at the DevOps Enterprise Summit (#DOES17) event last year and heard an SVP from a large bank describe how they had successfully moved to daily releases by becoming deployment-centric.  All of the other steps in the chain were tied to individual tools, but for him, using deploy as the center of the universe and anchoring the east-west toolchain and north-south infrastructure provided the right level of abstraction.

 

Just like the US Women’s Hockey team, DevOps is a team sport

Once you’ve got the mental commitment, you’ve got to rally the organization.  In this case, the Ops team needs to provide the right self-service deployment foundation for developers to push code to.  SD Times' Lisa Morgan writes in a January 3, 2018, article that one reason only 29 percent of companies surveyed by Forrester Research have implemented end-to-end deployment automation is that operations hasn't yet offered "a consistent pipeline that developers can just push their code to without worrying about operational requirements." The goal of the fully automated software pipeline is to let the Dev side of DevOps push code and see it in production environments immediately.

Cloud automation combines workload deployment (define and deploy common configuration items to create a complete operational environment) and management (performance monitoring, alerts, and resource conservation) to encompass the entire workload lifecycle. Source: TechTarget

The benefit of pipeline automation for the Ops side is that they can focus on managing the pipeline rather than always working to fix things when they break. The increased complexity that will result from large-scale software-defined infrastructure projects presents a challenge for the operations side in particular. Tomorrow's intelligent, automated, self-organized systems will integrate policy, inference, and orchestration in what some have called a "cognitive infrastructure." The goals are to accelerate workload mobility at scale, reduce both capital expenses and operational expenses, optimize performance, and enhance management and control -- all without requiring specialized skills.

BeyondTrust is a Morpheus customer that is a great example of teaming.  Created out of a private equity combination of multiple software companies, the company's ops team needed a way to enable different autonomous development orgs to provision consistently without forcing them to conform to a single toolchain.  With Morpheus they were able to standardize deployment and provide a common self-service interface that was 100% tool and infrastructure agnostic.

 

The Zamboni – orchestrating frictionless Cloud and DevOps

The twin themes of automation and self-service are echoed by Stuart Burns in TechTarget's December 2017 guide to IT orchestration. Business users are able to follow the workflows that make a service or compute item available to them, while automated tasks can be repeated "flawlessly," on demand or on a set schedule.

 

A typical orchestration scenario entails a user request for duplicate apps to scale capacity: instances are spun up and configured to suit the app and network, and the app is deployed to an on-prem or off-prem infrastructure cloud, where it is monitored, logged, and protected. Source: TechTarget

Hiding the technical components of instance provisioning lets users request that a VM or an entire multi-component application stack be provisioned via a self-service console.   Burns lists the components of an IT orchestration environment, including:

  • IT resources
  • Service catalogs
  • Workflow and business logic
  • Isolation mechanisms for groups or businesses
  • Back-end automation
  • Self-service portal

The last two categories are "where the magic happens," according to Burns. By automating business workflows, a VM can be created via a simple scripted build, for example, that populates the user account during the creation process. Users come to view the self-service portal as a "cloud shopping basket" they use to buy VMs and services, configure them to the users' needs, and then get them approved or built directly, depending on the workflow.
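Burns's "simple scripted build that populates the user account" amounts to a template plus a small workflow. The following Python sketch is a generic illustration with hypothetical catalog items and field names, not any particular portal's API:

```python
# Hypothetical self-service build: a catalog item is a template, and
# the scripted build injects the requesting user's account and an
# approval state before handing off to back-end automation.

CATALOG = {
    "small-linux-vm": {"cpu": 2, "ram_gb": 4, "image": "ubuntu-lts"},
    "app-stack": {"cpu": 8, "ram_gb": 32, "image": "app-base"},
}

def build_request(item, user, needs_approval=False):
    """Turn a catalog selection into a provisioning spec."""
    spec = dict(CATALOG[item])   # copy so the template is untouched
    spec["owner"] = user         # populate the user account
    spec["state"] = "pending-approval" if needs_approval else "build"
    return spec

req = build_request("small-linux-vm", user="alice")
```

The user shops the catalog; the workflow decides whether the request routes to approval or straight to the build, which is exactly the branching Burns describes.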

Now, I’ve never spoken with Mr. Burns but I feel like he’s my new BFF as he has very articulately echoed the exact point of view that I’ve been building out at Morpheus over the last few months. By empowering business users, appealing to developers' needs for autonomy, and improving the efficiency and performance of cloud workloads, orchestration becomes the glue that keeps the disparate components unified.

Just like the Zamboni smooths out the rough surface of the rink, unified multi-cloud orchestration enables both sides of the DevOps equation to become more closely integrated with the workflows of the business units they serve without giving up what made them special as individuals. What had been two separate and distinct workgroups begin to function as one, with singular practices, processes, and goals.

The large enterprise companies that have successfully deployed Morpheus for CI/CD and DevOps are using us as a standard deployment and delivery interface for workflows in order to insulate themselves from the constant care and feeding of scripts spread across fragmented tools and clouds.  Many of these customers have tried a variety of methods to transform over the last couple of years - PaaS, CaaS, IaaS, Config Management, ITSM, etc.  Inevitably, those projects failed to fully deliver because they still required handoffs and left the surface of the infrastructure inconsistent.

With the Morpheus Zamboni, you too can achieve frictionless operations and provisioning time measured in minutes. To learn more, I’d encourage you to set up time for a demo with one of our solution architects and see if your organization can go for gold.

EMA Top3 report on containers and DevOps at scale: 10 priorities for 2018


When I first met Torsten Volk from Enterprise Management Associates (EMA) I was attending the DevOps Enterprise Summit (#DOES17) in San Francisco.  We did a fun video and blog post based on some of my initial observations from being a month on the job.  Fast forward 4 months and Morpheus has had a tremendous Q4, plus we're taking off like a rocket in the first part of 2018.  Why?  I’d like to think it was the new marketing guy, but the fact is that Morpheus is aligned with where the puck is going.

I’ve written before that the founding engineers at Morpheus were designing for their own internal use, which led to a design center that I’ve described recently as ‘self-leveling’.  No matter what automation tools, platform choices, application frameworks, or day 2 ops activities an enterprise has in place, Morpheus will layer on top to help enable frictionless service delivery.  We don’t force IT Ops to put round pegs in square holes… instead, we’ll conform to your choices to help you master your cloud destiny in your way.

Drum roll please… “The top 10 priorities for containers and DevOps are…”

Torsten and the research team at EMA recently completed some in-depth customer-focused analysis on the priorities around containers and DevOps which included multi-cloud automation and the need to span bare metal, VM, container, and serverless options.  Here were the high-level priorities that emerged.

 

You can download the full report here and dig into each of these 10 priorities as well as a description of what it takes to become a ‘digital attacker’.  According to Torsten, “Enterprise Management Associates (EMA) named Morpheus an EMA Top 3 vendor due to the product’s ability to tie together the DevOps toolchain, IT operations solutions, and private and public cloud resources.” 

 

What’s the Morpheus angle in container discussions?

At inception, Morpheus was architected to leverage Docker as the core provisioning element for cloud services.  The design goal was to simplify orchestration for applications, which resulted in innovative cross-host linking and persistent storage capability on top of a very user-friendly interface and full-fidelity API.

As Morpheus evolved, we found most enterprises were not ready for containerized architecture, so we built VM and bare metal orchestration on top of our already container-optimized base.  This differs from legacy tools, which have limited container features ‘bolted on’ to VM-centric architectures.  The result? All workloads get access to the same environments, networks, security groups, etc., enabling seamless orchestration of apps, infrastructure, tools, and teams.   This ensures a future-proof and unified approach to deployment across bare metal, VMs, containers, PaaS, and even the serverless application stacks which are starting to emerge.   It also gives VMware customers a very clean path to running VMware automation in a consistent way in any cloud, along with a risk-free path to containerization.

Most enterprises are heterogeneous and want to avoid tool silos or architectural lock-in while at the same time enabling developer teams to ‘bring your own tools’ (BYOT) and take advantage of the latest and greatest options.  By using Morpheus as a standardized provisioning engine, they can focus on DevOps build velocity, not tool management.  

Don’t all roads lead to Kubernetes?

The end goal is to streamline deployment of applications…  it’s about the app, not the platform.  If a unified orchestration tool like Morpheus with its built-in container scheduler can provide the features you need as well as best-in-class multi-cloud management, cost optimization, and DevOps automation at a lower cost with less tool sprawl then you should consider it.  That said, Morpheus also integrates with Docker Swarm and Kubernetes container schedulers and picks up where these tools leave off (self-leveling).  Features include:

  • Integration with Docker repositories for easier management of the container build pipeline
  • More in-depth monitoring than native K8s, plus unique auto-scaling features
  • Day 2 integration with load balancers, IPAM, identity management, and ITSM tools, enabling a complete policy-driven tool for provisioning and presenting applications end-to-end from Dev to Production
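To make the auto-scaling point concrete, a threshold-based scaler is just a policy check against monitored metrics. This deliberately simplified Python sketch uses hypothetical thresholds and is not Morpheus's actual algorithm:

```python
# Hypothetical threshold-based auto-scaler: compare average CPU across
# instances to scale-up/scale-down bounds and clamp to a replica range.

def desired_replicas(current, cpu_samples, high=0.75, low=0.25,
                     min_n=1, max_n=10):
    """Return the replica count the policy asks for next."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return min(current + 1, max_n)   # scale up, capped
    if avg < low:
        return max(current - 1, min_n)   # scale down, floored
    return current                       # within band: hold steady
```

For example, three replicas averaging 85% CPU would scale to four, while two replicas averaging 15% would drop to one; anything in between holds steady.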

To learn more I’d encourage you to set up time for a demo with one of our solution architects and see how we can help you become a digital attacker.

 

 

Defining (and Leading) a Winning Digital Transformation Strategy


New industries and technologies spin off new terms like sparks off a metal grinder. Finding two people who agree on the precise definition of these neologisms is not easy. That's why any venture should begin with a shared understanding and lexicon for both the business objectives and the underlying technology. 

For example, you would never base your company's critical strategic plans on a term that has hundreds of different definitions and interpretations. Yet the business term of the moment, "digital transformation" (DX), is defined in hundreds of ways, and the source of all knowledge (Wikipedia) doesn’t help, warning that its definition page might be incomprehensible.

Technology has been transforming business since the invention of the wheel and with each new step-function improvement, traditional industries are forced to reinvent themselves. The digital transformation now underway is perhaps unique in the breadth of scope.  From banking to retail, to entertainment, to food services… virtually every company in every industry is challenged to reinvent how they engage with customers and markets.  Also unprecedented is the pace of change digital technology instigates. This creates the twin challenges of gaining expertise as the technology develops and doing so faster and more effectively than your competition.  That competitive velocity is what has spun off the sister term “digital disruption.”

 

Thinking strategically about your digital business's goals

It isn't unusual for a word's meaning to evolve over time, but rarely has a definition been stretched so far over such a short period. Researcher and business-transformation pioneer Simon Chan notes in a January 25, 2018, article on LinkedIn that the term "digital transformation" has "morphed into a bit of a beast" in just a handful of years, which is an understatement.

Chan writes that a transformation is "a much more profound and radical process" than a simple change, resulting in a new direction and heightened effectiveness for the entire organization: people, processes, and technology. Such a fundamental business-process shift is doomed unless you begin with all stakeholders agreeing to a shared vision.  He provides a layman's definition of digital transformation which I think is pretty easy to align around:  "Using technology to create differentiating ways of doing business with the aim of driving growth in new and existing markets." 

That describes most of the large enterprises that Morpheus works with every day.   Leaders in their respective industries, the centralized architecture and engineering teams have a mandate to ‘get out of the way’ so that application developers, researchers, and other users can accelerate transformation efforts.

 

The digital transformation is "updating the business models," according to Chan. That sounds simple, but doing so "permeates every living cell of the company, resulting in changes to structure, capabilities, policies, processes, people, technologies and culture." Yet the CEO perspective of the business model digital transformation is only one dimension:

  1. Customer Experience led (CMO perspective):  Encompasses how you connect with customers, and how you maximize the value of your organization's data and package intelligence to decision makers.
  2. Operational Transformation (COO/CIO perspective):  Focuses on operational efficiency by blasting through departmental silos to promote cross-functional processes, communications, and tools. Here is where DevOps and service management reside.
  3. Cost-Centric Transformation (CFO perspective):  DX as a driver for efficiency via lower capital expense, facility consolidation, and staff reductions.   The driving force behind many XaaS pivots.
  4. Business Model Transformation (CEO perspective): The mother of all DX, the CEO pulls from all other perspectives to champion a single vision and business strategy with the support of the board.

 

Making the case for a C-Level 'Transformation Officer'

The Enterprises that Morpheus works with on CloudOps and DevOps are all using technology to transform how they engage with customers in order to disrupt markets and drive new growth. But this is as much about business process and leadership as it is about tools and technologies. In many industries, the disruptors are "cloud-native" companies that are pushing more traditional enterprises to "transform" to avoid getting left behind.

When organizations undergo a major transformation, it must be driven top-down from the board and CEO, but also from a proven change agent. This is the role played by the Digital or Chief Transformation Officer (DTO/CTO), who helps give the right level of importance to what can be a difficult shift.  However, a single individual alone cannot drive the agenda. It must be clearly communicated from the top that these types of major changes require the complete buy-in of the entire organization.

The DTO/CTO's key role is to "encourage and embed change." They drive change by holding responsible the parties managing the hundreds or thousands of components that comprise a typical program. They are masters at balancing: short-term successes vs. long-term value, delegating responsibility to line managers vs. personally ensuring results, and committing limited resources to specific transformation projects vs. shifting resources as priorities change.

These leaders must have cross-functional expertise in the technologies, processes, and people skills to evangelize change and drive results. Ideally, they should not be seen as part of the legacy they are trying to displace, but they can benefit from having proven success in other turbulent projects within the organization.

 

A survey of successful Digital Transformation Officers identified key attributes of executives in the role. Source: Russell Reynolds Associates

 

While DTOs can own the mandate and provide the urgency required to personally orchestrate a great number of disparate initiatives, they must be seen as an integral part of the executive team and an extension of the CEO. The CEO can help support the initiative by providing access to the facts and resources required, and by assuring the transformation project has clear measures of success.

Many of these requirements are echoed in projects involving cloud and DevOps. Simply creating a "DevOps team" or a "Cloud Czar" does not assure success. In fact, the very thought that DevOps could be driven by an individual or group is a symbol that the organization has not truly embraced the need for a complete shift at a broad level to ensure its success.

 

Like a good poker player, you’ve got to be all-in when it comes to digital transformation.  To learn about how Morpheus could fast-track your transformation project, set up time for a demo with one of our solution architects.

 


An interview with Tim Cook - From Google to NASA to Morpheus


How’s that for a click-bait title?  Now, to set the record straight, I didn’t get air time with Apple’s CEO, but I did get something even better… a few minutes with a Tim Cook whose resume is equally impressive when it comes to application deployment automation.  We’re talking time spent at some shops with serious tech… Google (home of K8s) and NASA (OpenStack, anyone?), just to name a couple.

We’ve built out several new sales territories in the last few months as customer demand continues to increase.  Tim is one of our newest solution architects and is working with some major accounts on the west coast.  I went on a couple of sales calls with Tim and after seeing customers relate to his story knew I had to share it here.  

One of the things customers appreciate most about Tim is that, much like the famous line from Sy Sperling, he’s also a client.  After spending 20+ years knee-deep in automation and related scripting technologies, he found himself on the hook to automate multi-cloud app deployments for a large sync-share operation. The unicorn he had been chasing this whole time was to automate himself out of a job and get to true push-button deploys in complex multi-cloud implementations. Once he found his unicorn, what better next step than to join them.

Take a look at the interview and I think you’ll get a sense of the passion that a skeptic turned believer can bring to the cause.

 

 

The team continues to expand and we're always on the lookout for Cloud and DevOps talent for sales, solution architecture, and professional service delivery. If you've got the right stuff take a look at the careers page or just drop us a note and resume via the contact us form.

Disneyland, Industry Changes, and Cloud Field Day


Years ago, when I was at HPE, the best Social Media guy I know (@CalvinZito) hosted a bunch of technical bloggers for a day to deep dive on all things storage.   Fast forward and this week I found myself at the GestaltIT Cloud Field Day (#CFD3) with Stephen Foskett (@SFoskett), a concept which was born out of that humble beginning and which brings to mind images of 3-legged races or tug-of-wars (probably a vendor joke there somewhere).  

This CFD was a reunion with storage bloggers like @chrismevans. I was co-presenting with other vendors including Oracle, where I bumped into Leo Leung, another storage alumnus.  I even ran into Stu Miniman of @Wikibon and @theCube fame, who just happened to be in the neighborhood.  It really is a small world… and one full of ‘@’ symbols.

It’s also a good picture of what’s happening in the industry at large.  Legacy hardware businesses are declining as more enterprises shift from thinking about IT as how they managed ‘things’ to thinking about delivering new customer experiences and re-shaping markets.   The ‘shift-left’ we talk about in DevOps circles is alive and well in the vendor and social media landscape. 

This was our first Tech Field Day event and all-in-all I’d count it a success... the first couple of videos are a good product overview and demo of the core self-service functionality.

Companies like Nutanix and Nimble Storage kicked off their existence in social circles at past TFD events so not bad company to be in.   Most of the audience was made up of infrastructure SMEs which biased the discussion a bit but that’s not unlike what we see with large customers.  Developers pushing the infrastructure teams to change and act more like cloud providers.  

These next videos get into more detail on governance and automation.

Many thanks to the #CFD3 delegates for the great interaction.  Take a look at the sessions and drop me a note @MorpheusDude to let me know your thoughts or setup some time for your own demo and discussion.   CFD3 delegates included @ChrisMEvans, @UprightVinyl, @FollowEstelle, @JeffWilsonTech, @JPWarren, @CTOAdvisor, @GreenReedTech, @M_Laverick, @Ned1313, @NickJanetakis, @OtherScottLowe, and @TCrawford

 

Is it better to be 'Cloud-first', 'Cloud-only', or 'Cloud-ready'?


Is "private cloud" an endangered species? After all, the conventional way of running a private cloud is to buy and manage the servers and other hardware infrastructure used to store and operate your apps and data. A principal benefit of cloud computing is not having to deal with hardware (much) because so much of your IT infrastructure is virtualized. While private cloud services will fill an important niche in the future, hybrid clouds will continue to be the cornerstone of companies' cloud strategies for many years to come.

Why private clouds? In a word, security. But there are two other good reasons enterprises, in particular, retain in-house data operations. First, they haven't yet squeezed every penny of amortization out of the hardware and software they own outright -- or lease via a long-term agreement. CIOs may talk about security and governance concerns related to the public cloud, but what's holding back much of their cloud adoption is legacy equipment and processes.

The second reason some IT functionality remains on premises is prudence: Growing familiarity with managing cloud infrastructure shows the wisdom of not putting all your eggs in one basket. Organizations want to take advantage of the benefits of new technologies at a pace that ensures their valuable data assets will not be put at risk by relying too much on a single third-party cloud service. This explains the growing popularity of multi-cloud strategies, as Computer Business Review's April Slattery notes in a September 19, 2017, article.

 

How Hybrid Clouds Smooth the Journey

The consensus among industry experts is that organizations of all types and sizes will eventually rely on cloud services for a majority of operations, but the nature and mix of those services will vary. The "cloud only" approach can be seen as a logical end game, but as InfoWorld's David Linthicum writes, the first cloud-only companies are usually those that are very small, or very new, and often both.

At the same time, the clear trend is toward increased reliance on hybrid clouds by companies of all sizes. Research conducted by MarketsandMarkets concludes that the global hybrid-cloud market will grow to $91.74 billion by 2021, representing a compound annual growth rate of 22.5 percent.   The popularity of hybrid clouds shows no signs of waning: A survey conducted by McAfee found that the percentage of companies adopting a hybrid-cloud strategy increased from 19 percent in 2015 to 57 percent in 2016, as reported by Associations Now's Ernie Smith in a September 19, 2017, article. Numbers compiled by Statista forecast growth in the global hybrid-cloud market from $40.8 billion in 2017 to $91.74 billion by 2021.
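Those two endpoints imply the stated growth rate; as a quick arithmetic check using only the figures quoted above:

```python
# Sanity check on the figures above: the compound annual growth rate
# implied by a market growing from $40.8B (2017) to $91.74B (2021).
start, end, years = 40.8, 91.74, 4

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 22.5%, matching the reported CAGR
```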

 

 Source: Statista

 

InfoWorld's Linthicum cites a recent survey of "IT leaders" conducted by Commvault that found two-thirds of the executives fear missing out on the latest innovations being offered by cloud services. While only 24 percent of the survey respondents report being "cloud only," another 32 percent describe themselves as "cloud first" with plans to become cloud only.  Linthicum likens today's cloud-adoption trend to IT's reaction to the rise of the web two decades ago: first, a "go away" mentality, followed by a slow and reluctant (and piecemeal) adoption, and finally comes a rush to seize an opportunity before the competition does. That rush is where critical mistakes can be made, which is one of the many good reasons for adopting a hybrid-cloud approach that maximizes internal resources and gives organizations a range of cloud options to choose from.

 

Private cloud finds its place and pace

What prevents companies from committing to cloud only is the need to safeguard sensitive data, trade secrets, and intellectual property. In particular, highly regulated industries such as government, financial services, and healthcare must ensure compliance and proper governance of sensitive information.

Research conducted by Gartner forecasts growth in private cloud services as an important component of the multi-cloud approaches that are becoming the norm in organizations of all types. In a September 8, 2017, article on Silicon Angle, Michael Wheatley points to a Gartner analysis that found "rapid growth" in the use of third-party private cloud services. Still, the reticence CIOs harbor about trusting their data to a cloud-only approach applies equally to the firms offering to host their companies' private clouds.

According to Gartner's most recent Hype Cycle for Cloud Security Products, private clouds have reached the "trough of disillusionment," which means that in terms of value to users, the technology has failed to live up to its hype.

 

Source: Gartner, via Silicon Angle

 

The long-term outlook for private clouds shows promise, however: Technologies that are able to survive the disillusionment period enter the "slope of enlightenment," which leads to the "plateau of productivity" -- if they are able to earn their customers' trust, that is.   Low storage costs, instant scale, high availability, and the elimination of in-house infrastructure top the list of public cloud benefits.  However, any company that has adopted a cloud-first or cloud-only strategy knows well the other side of the coin: spotty or nonexistent internet connections, the added risk to data as a result of multi-tenant cloud services, bandwidth limitations, and untrustworthy cloud service providers.  

Your cloud plans don't have to be binary or one size fits all and they shouldn't be dictated by a single hypervisor, platform choice, or cloud provider.  

 

Getting from now to next and always being ready for anything 

There are a number of issues that we see customers encounter when engaging in cloud transformation projects.  While there is a desire to stand up internal private clouds for all of the reasons called out above, the truth is that these deployments are often more complex than expected, hardware costs are displaced by people and tool costs, and it can be difficult for those internal clouds to keep up with the pace of innovation found in the public domain.

Many of our most successful clients have had false starts along the way, have tried their own automation projects, have worked with a number of IaaS and PaaS platforms, and are constantly re-evaluating their tool choices.  There is nothing wrong with that description; in fact, failing fast and reducing the mean time between experiments is part of both cloud and DevOps maturity.  There is no way your internal IT teams can keep up with the pace of change across multiple hybrid IT stacks, developer toolchains, and deployment platforms.

 

The trick is to create an environment that provides control without chaos and agility without anarchy.  You should be able to:

  • Span platforms:  bare metal, hypervisors, native containers, PaaS, serverless, etc.

  • Span destinations:  OpenStack, VMware, PCF, K8s, AzureStack, AWS, Azure, Alibaba, etc.

  • Provide I&O teams the guardrails and role-based access to meet the desire for control

  • Provide Dev teams the ability to bring their own tools and treat infrastructure as code

  • Cover end-to-end deployment needs from build servers to day-2 monitoring, logging, and scaling
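As a sketch of how the guardrail and infrastructure-as-code bullets above fit together, here is a minimal, hypothetical example of a declarative instance spec checked against I&O policy before deployment. The destination list, environment tiers, and limits are illustrative placeholders, not any vendor's actual API:

```python
# Hypothetical sketch: a single instance spec ("infrastructure as code")
# validated against I&O guardrails before being routed to any destination.
# Destination names and policy values are illustrative, not a real API.

ALLOWED_DESTINATIONS = {"vmware", "openstack", "aws", "azure"}

GUARDRAILS = {
    "dev":  {"max_cpu": 4,  "max_memory_gb": 16},
    "prod": {"max_cpu": 16, "max_memory_gb": 64},
}

def validate_spec(spec: dict) -> list:
    """Return a list of policy violations (an empty list means deployable)."""
    errors = []
    if spec["destination"] not in ALLOWED_DESTINATIONS:
        errors.append(f"destination {spec['destination']!r} not permitted")
    limits = GUARDRAILS.get(spec["environment"])
    if limits is None:
        errors.append(f"unknown environment {spec['environment']!r}")
    else:
        if spec["cpu"] > limits["max_cpu"]:
            errors.append("cpu exceeds guardrail")
        if spec["memory_gb"] > limits["max_memory_gb"]:
            errors.append("memory exceeds guardrail")
    return errors

spec = {"name": "web-01", "destination": "aws",
        "environment": "dev", "cpu": 8, "memory_gb": 8}
print(validate_spec(spec))  # → ['cpu exceeds guardrail']
```

A real platform would layer role-based access, quotas, and cost policy on top of a gate like this, but the deny-on-violation shape is the core idea.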

 

To learn how Morpheus can help your teams deploy applications in less time, independent of cloud implementation strategy, set up time for a demo with one of our solution architects.  Our next-generation cloud management platform can unify orchestration across the tools you already have and the ones you've yet to discover.

 

 

 

 

Cloud Governance: The Key to Effectively Scaling Your Cloud


“Cloud speed” has taken the IT world by storm. When it comes to getting applications and data systems relocated to cloud services, the quicker the better, right? In most cases, such hasty implementations do indeed make waste – of time, resources, and ultimately efficiency.

In a classic example of “ready, fire, aim,” companies find themselves attempting to retrofit governance into the systems they have migrated to the cloud in a rush. Part of the problem is that IT often finds itself reacting to cloud migration projects that are driven by business departments.

A typical example is the “accidental hybrid,” which arises when one of the organization’s apps, such as for HR or CRM, has been moved to the cloud, and subsequently, a cloud database is glommed onto the app, followed by a data warehouse, and then some analytical tools for good measure. The job of integrating and securing the complex system that results falls to IT.

Here’s the challenge: Get a comprehensive, cohesive, and singular infrastructure in place for data security and data governance without slowing the pace of cloud adoption in the company. Step one is to create a central point of operational management for all data systems.

Like Big Brother, but in a good way

Start by giving IT a central point that provides a view into everything that’s happening related to your applications:

  • The who: Which users have access?
  • The what: How large is the app’s footprint?
  • The when: What are the app’s usage patterns?
  • The how: Is the app meeting users’ needs? Is it efficient? Is it scaling properly?

Cloud identity and access management (IAM) is projected to boom in coming years. According to a Gartner report cited by Allied Market Research, the cloud security market will grow at a compound annual rate of 23.5 percent through 2020, to a total value of $8.9 billion by that year. Driving the market are increased use of handheld devices (BYOD and CYOD, or “choose your own device”), growing reliance on cloud services by midsize and large enterprises, and the surge in demand for managed security services.

As companies come to rely on the cloud more heavily, the emphasis shifts from awareness to managed security and governance compliance. Source: Allied Market Research

CloudTech’s Nitin Chitmalwar lays out the basic framework for cloud IAM: automate the initiation, capture, recording, and management of user identities for each process in the business network. Grant access privileges only after timely and comprehensive application of organizational policies. The analysis covers all individuals and services comprising the cloud IAM system and includes authorization, authentication, and auditing.

The role of cloud IAM software

Your cloud IAM system models the entire organization:

  • IT creates user IDs and organizes them into groups based on privileges
  • The IAM software assigns the IDs and permissions for accessing services and resources
  • The IAM software itself is managed through a web portal

Regulatory and compliance policies are applied automatically via IAM processes. Careful assignment of access permissions combined with multi-factor authentication identify and block malware, malicious acts of insiders, hackers, spoofing, and fake ID holders. Security is enforced via two parallel streams: one passive stream monitors all traffic into, out of, and within the organization’s data systems; and the other active stream enforces the company’s comprehensive security policy.
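The ID-and-group model above can be sketched in a few lines. The users, groups, and resource names here are invented for illustration; the key property is that access is denied unless some group membership explicitly grants it:

```python
# Minimal sketch of group-based access resolution (deny by default).
# User, group, and resource names are illustrative only.

GROUP_PERMISSIONS = {
    "db-admins": {"orders-db": {"read", "write"}},
    "analysts":  {"orders-db": {"read"}},
}

USER_GROUPS = {
    "alice": ["db-admins"],
    "bob":   ["analysts"],
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Grant access only if one of the user's groups carries the permission."""
    for group in USER_GROUPS.get(user, []):
        if action in GROUP_PERMISSIONS.get(group, {}).get(resource, set()):
            return True
    return False  # deny by default, including unknown users

print(is_allowed("alice", "orders-db", "write"))  # True
print(is_allowed("bob", "orders-db", "write"))    # False
```

Real IAM systems add multi-factor authentication, auditing, and policy automation on top, but this deny-by-default resolution is the kernel they share.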

A three-step approach to implementing identity and access management begins by assessing your current IAM systems, identifying the IAM approach that best fits the organization, and then defining your implementation strategy. Source: Chip Epps, via Sand Hill

Cloud IAM software helps reduce overall IT spending by consolidating all data security – keeping data safe and controlling access to data and network resources – in a centralized security hub that operates on multiple levels. The success of any cloud IAM project depends on ensuring visibility into your applications. Not just “Do you know where your app is?” visibility. We’re talking deep-dive visibility like IT has never had before.

Governance requires an ‘x-ray vision’ view of your apps

Dean Wiech of security vendor Tools4Ever explains in a CSO article why cloud governance requires greater insight into data operations than is available from traditional auditing products. There are many more potential access points in modern networks, and many more potential users of the company’s data. More “layers” of operational data are needed to model the “what ifs”: every possible outcome, and its impact on the organization’s information infrastructure.

System administrators need to have a complete history of every employee’s activity that they can organize and manage from a single vantage point. In an instant, admins need to determine who has accounts on what systems, when were those accounts used, what permissions are associated with the accounts, and who has authority to grant the account permissions.

The permissions auditing must apply to all of the organization’s data assets: databases, shared file systems, data centers, access control, backups, passwords, network devices, and printers. Privilege creep is addressed by ensuring permissions are revoked when an employee no longer needs them following a change in responsibilities. Centralized control simplifies elimination of stale and orphan accounts, and identifies shared accounts for which no single individual is responsible.
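As a toy version of that audit, the sketch below flags stale accounts (unused past a cutoff) and orphan accounts (no responsible owner) from a consolidated inventory. The field names and 90-day threshold are assumptions for the example, not a prescribed policy:

```python
from datetime import date, timedelta

# Sketch: flag stale and orphan accounts from a consolidated inventory.
# Field names and the 90-day cutoff are illustrative assumptions.

def audit_accounts(accounts, today, stale_after_days=90):
    """Return (stale account ids, orphan account ids)."""
    cutoff = today - timedelta(days=stale_after_days)
    stale   = [a["id"] for a in accounts if a["last_used"] < cutoff]
    orphans = [a["id"] for a in accounts if a["owner"] is None]
    return stale, orphans

accounts = [
    {"id": "svc-backup", "owner": None,   "last_used": date(2017, 1, 5)},
    {"id": "jdoe",       "owner": "jdoe", "last_used": date(2017, 9, 1)},
]
print(audit_accounts(accounts, today=date(2017, 9, 20)))
# → (['svc-backup'], ['svc-backup'])
```

In practice the inventory would be assembled from every system listed above (databases, file shares, network devices, and so on), which is exactly why the central vantage point matters.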

A to Z governance in a single cloud portal

The deep-dive visibility and wide-angle view of all your apps and databases that are the core of cloud governance are delivered in the intuitive control dashboard that is the heart of the Morpheus cloud application management service. Morpheus ensures that all of your organization’s most important data can be monitored and controlled from a single intuitive interface that provides at a glance an overall indication of system health, as well as the status of individual apps, databases, and components.

In addition to the web UI, Morpheus lets you manage apps, databases, and app stack components via a command-line interface or through API calls. There is no faster way to provision these resources and to add new nodes; databases and app clusters are reconfigured automatically to accommodate the added nodes. Morpheus’s automatic logging supports immediate introspection, troubleshooting, and uptime monitoring.

That immediacy extends to defining roles and access for individuals and groups using Morpheus’s clear dashboard to set responsibilities and access limits for teams and individuals based on geographic zone, server groups, application, or database. It’s time for your governance operations to benefit from the same efficiency and scalability of your other cloud apps.

Multi-cloud and Hybrid Cloud: A World of Difference


"Multi-cloud" and "hybrid cloud" are often used interchangeably. While the two terms are related, they describe two distinct (but both important) concepts. Understanding the difference can help ensure the success of your overall cloud strategy, or at a minimum keep you out of a semantic rat’s nest.

In a January 4, 2018, article on Forbes, Kelly Ahuja defines "multi-cloud" broadly, presenting it as an infrastructure encompassing "private clouds, SaaS-based applications, ERP, Salesforce, Office 365 and public cloud storage apps such as DropBox, as well as large-scale consumer cloud applications such as Google, AWS and others." 

By contrast, in a September 5, 2017, post on the Enterprisers Project, Kevin Casey distinguishes the terms, claiming "multi-cloud" describes a strategy, while "hybrid cloud" is a new type of infrastructure. Specifically, a hybrid cloud combines aspects of private and public clouds using orchestration tools to ensure the components operate as a single unit. The hybrid-cloud concept encompasses the entire DevOps cycle and supports continuous integration/continuous development (CI/CD).

That definition fits well with what Morpheus finds in discussions with large enterprise customers. Multi-cloud initiatives tend to address the fear of lock-in and enable a business to prioritize “horses for courses” over “single throat to choke” whereby different applications or business outcomes may be derived from the best supplier at that point in time. Customers use hybrid-cloud to describe application architectures and scaling rules which could mix and match locality for a variety of business reasons.

Of course, if you fixate too much on buzzwords, you lose sight of your goal: Finding the best cloud services for your organization's unique data and application needs.

Reasons to avoid the one-cloud-fits-all approach

Multi-cloud comes with a built-in contradiction: One of the cloud's principal benefits is efficiency, yet there's no way managing two or more separate cloud services can be more efficient than managing a single service. So why complicate cloud management? The first reason you hear from many IT managers is their need to avoid being locked into a single vendor's offerings.

In an October 26, 2017, article on Data Center Knowledge, Ruslan Synytsky explains that using only one cloud service seriously constrains your ability to adapt to changing conditions. As anyone in IT knows, change is the only constant. If you don't build flexibility into your cloud plans, you could end up facing a monumental data-migration project simply to switch vendors.

A survey on the impact of vendor lock-in on business IT published in the Journal of Cloud Computing found that more than 90 percent of respondents identify being locked into a single cloud service as a "critical" or "moderate" risk. Source: Journal of Cloud Computing, via Springer Open

Justice Opara-Martins, Reza Sahandi, and Feng Tian examine the problem of vendor lock-in from a business perspective in an April 2016 paper in the Journal of Cloud Computing. The results of a survey conducted by the researchers indicate that the primary causes of lock-in are the lack of integration points between various cloud management tools (cited by 47.7 percent of survey respondents), incompatibility with on-premises software (41.1 percent), and the inability to move their cloud data to in-house systems or to an alternative cloud service (31.8 percent).

Sometimes, the decision to use multiple cloud services is taken out of IT managers' hands. A study by research firm Studio 61 published in an October 17, 2017, article on ZDNet reports that employees are now the single greatest security risk in private cloud and hybrid cloud environments, cited by 50 percent of IT managers as a "top security concern," followed by coarse-grained user access controls (41 percent), and lateral, or east-west, movement of advanced threats (also 41 percent).

In most cases, IT departments have given up their attempts to prevent all use of unsanctioned cloud apps in their organizations. Employees are going to find their own ways to "get the job done," as the researchers state. The solution many companies have adopted is to "embrace and extend" their sanctioned cloud services to these employee-adopted offerings, despite the potential security risks.

The Studio 61 researchers point out that this may indicate an overly optimistic approach by IT departments because they underestimate the scope of the shadow-IT problem in their organizations. After all, you can't track what you can't see. The only way for IT to have end-to-end visibility into their entire cloud universe is by scanning and identifying all traffic and apps crossing their networks. Then you can filter out unauthorized and suspicious sources.
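A toy version of that "scan, identify, filter" step might look like the following; the sanctioned list and traffic-log format are invented for the example:

```python
# Sketch: compare cloud apps observed in network traffic against the
# sanctioned list to surface shadow IT. App names are illustrative.

SANCTIONED = {"office365", "salesforce", "box"}

def find_shadow_it(traffic_events):
    """Return unsanctioned apps with request counts, busiest first."""
    counts = {}
    for event in traffic_events:
        app = event["app"]
        if app not in SANCTIONED:
            counts[app] = counts.get(app, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

events = [{"app": "dropbox"}, {"app": "salesforce"},
          {"app": "dropbox"}, {"app": "pastebin"}]
print(find_shadow_it(events))  # → [('dropbox', 2), ('pastebin', 1)]
```

Production tooling would classify apps from DNS, proxy, and firewall telemetry rather than a clean event list, but the sanctioned-versus-observed diff is the same.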

What percentage of cloud apps in use by your organization are unauthorized? While many IT departments believe the unauthorized use of cloud services by their employees is a small percentage of overall cloud use, they are likely underestimating the risk to their data posed by shadow IT. Source: Studio 61

One word distinguishes multi-cloud from hybrid cloud: Orchestration

Anyone who adopts more than one cloud service can claim to be "multi-cloud." However, unless those separate clouds function as a single unit, all you've done is moved from one set of data silos to another set. The reason hybrid clouds continue to be the go-to cloud deployment architecture for companies of all sizes is the benefit of connecting multiple cloud services into a seamless, interoperating whole. The key to achieving this interoperability is the proper application of orchestration tools.

Radhesh Balakrishnan, general manager of OpenStack at Red Hat, defines hybrid cloud as “[a] mix of on-premises private cloud and third-party public cloud with orchestration between these two.” The Enterprisers Project's Guide to Hybrid Cloud adds that the mix may include on-premises infrastructure, virtualization, bare-metal servers, and/or containers.

Kurt Marko writes in an October 2017 article on TechTarget that IT operations staff are less concerned about internal VM configurations (apart from the underlying OS) than they are about such instance details as the number of virtual CPUs, memory, network interfaces, and storage volumes attached. Most infrastructure orchestration software focuses on deploying new cloud resources rather than on managing the configuration of your existing VMs.

The functionality expected in a product labeled as a container orchestration tool varies based on IT role: operations, app development, or DevOps. Source: The New Stack

It is important to apply orchestration tools in a way that accommodates the different drivers of the ops functions: Business, developers, and infrastructure. TechTarget's Alan R. Earls writes in an October 2017 article that the choice of orchestration tools usually comes down to provider-native offerings from AWS, Microsoft, Google, or IBM; or to third-party products. The mistake many companies make is in comparing the two options feature by feature. Success lies in identifying the specific cloud-management functions your hybrid setup requires.

The consensus of experts such as Forrester principal analyst Dave Bartoletti is that multi-cloud management is served best by console-based third-party tools, although infrastructure vendors' own offerings continue to improve. Still, IT shops supporting hybrid clouds specifically and multi-clouds generally are served better by a single tool that integrates deployment and configuration tasks rather than by each cloud offering's native tools.

Managing virtual IT environments requires transparency, usability

If you thought monitoring and updating apps and data residing on a single public cloud was a challenge, wait until your data assets are distributed across three, four, or more distinct clouds. The scenario presented by Pete Johnson in an October 12, 2017, article in NetworkWorld will be typical: You write a text file on AWS S3 that triggers Microsoft Azure's text-to-speech service to generate an MP3 that is written to the IBM Bluemix object storage hosting your website.

Reaching this stage of cloud interoperability will require that enterprises rethink both application architecture and automation tools. This sort of infrastructure agnostic deployment was at the heart of the original design center for Morpheus. As enterprise apps evolve they don’t do it all at once. Rather, deconstructing apps into their component services and being able to incrementally modernize makes the process more achievable. This bite-size approach requires deployment tools that can span bare metal, VMs, containers, and eventually serverless functions while at the same time spanning multi-cloud strategies and hybrid-cloud deployments.

With this many moving parts and this much unpredictability, companies had better invest in the right tooling. In this case, a Swiss army knife merged with a transformer may be the only answer. That… or what we refer to as Unified Ops Orchestration.

Let us know if you’d like to set up a demo and discover how we can help you better orchestrate your own cloud… multi, hybrid, or whatever else you might want.
