Channel: Morpheus Blog

For Cloud and DevOps, Look to People and Process as much as Technology


No industry hypes like the tech industry. The Internet of Things, Big Data, AI, Cloud, DevOps… All are world-changing, trillion-dollar opportunities that your company can't afford to miss out on, according to the pundits and vendors alike. 

 

Well, maybe it's time to take a tip from those famous technology naysayers, the Amish. That's the suggestion of Michael J. Coren in a May 18, 2018, long read on Quartz. A common misconception is that the Amish see technology as inherently evil. As Coren points out, there are no rules in the religious community that prohibit the use of new inventions.

 

What the Amish possess is a strong understanding of, and unshakable confidence in, their way of life: For them, there is no greater calling than to be a farmer. Coren highlights the deliberate approach the Amish take to new technology. "They watch what happens when we adopt new technology," says Jameson Wetmore, a technology researcher Coren quotes, "and then they decide whether that's something they want to adopt for themselves."

 

Know your people and don’t underestimate tool impact

 

Now, we're not suggesting you turn your back on a fully automated multi-cloud deployment toolchain. However, the lesson organizations can learn from the incredibly selective stance the Amish take toward innovation is to know who you are, stay focused on your goals, and always be true to your community. The decision to adopt a new technology depends on whether it's a good fit for your culture and a boon to your company's long-term success.

 

 

While 73 percent of the companies participating in IDC's 2017 Cloudview survey claim to have a hybrid cloud strategy, 61 percent claim they lack IT staff skilled in the use of cloud automation tools. Source: IDC, via Digitalist

 

It seems obvious that companies would begin their planning for DevOps by shopping for tools. After all, vendors announce shiny new-and-improved offerings almost daily. But you can't expect to realize tangible benefits until your management framework runs at cloud speed, stakeholders have bought in, and IT workers have acquired the right skills.

 

As development cycles become shorter, teams have to innovate faster to keep pace. This translates into near-constant collaboration, on-demand resource allocation, and the "redistribution of accountability" within and outside of IT. The required level of management integration isn't possible without unprecedented visibility into how, where, and when software is being used.

 

Understanding the link between tools and process

 

It's not uncommon for companies to be dazzled by the breadth and depth of the hybrid cloud management tools now available. They may feel the need to adopt the latest-greatest favorites of the tech media hype machine rather than select only those tools that meet their hybrid-cloud management needs. In a recent TechBeacon article, David Linthicum includes "understanding the tools" as one of the five essentials for managing hybrid clouds.

 

Linthicum recommends that organizations "start with the essentials": Understand your unique needs in terms of security, data, governance, and end-user dynamics. Step one is listing exactly what is being managed. Profile the workloads that will run on public and private clouds, but just as importantly, address the teams and processes that surround them. Among the questions to answer are who "owns" the workloads, who needs to be alerted when problems arise, and how critical specific workloads are to the organization.
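Those profiling questions lend themselves to a simple inventory record. Below is a minimal Python sketch of one way to capture who owns a workload, who gets alerted, and how critical it is. The field names and sample workloads are invented for illustration; they aren't from Linthicum's article or any product.

```python
from dataclasses import dataclass, field

@dataclass
class WorkloadProfile:
    """Hypothetical inventory record: owner, alert contacts, criticality."""
    name: str
    cloud: str                  # "public" or "private"
    owner: str                  # who "owns" the workload
    alert_contacts: list[str] = field(default_factory=list)
    criticality: int = 3        # 1 = mission-critical ... 5 = disposable

inventory = [
    WorkloadProfile("payments-api", "private", "finance-it",
                    ["oncall-payments@example.com"], criticality=1),
    WorkloadProfile("marketing-site", "public", "web-team",
                    ["web-ops@example.com"], criticality=4),
]

# Surface the workloads that need the tightest management attention first.
critical = [w.name
            for w in sorted(inventory, key=lambda w: w.criticality)
            if w.criticality <= 2]
```

Even a list this small forces the team to answer the ownership and escalation questions before migration, which is the point of the exercise.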

 

Linthicum also emphasizes the need for a single pane of glass to address the complexity of hybrid cloud management. Learning and accommodating all the native APIs and resources used by cloud providers is impractical in terms of cost and efficiency. The best solution is something like Morpheus, which abstracts the hybrid-cloud interface to a layer that lets you decide which hybrid-cloud components are most important for your operation.
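To make the single-pane-of-glass idea concrete, here is a minimal Python sketch of an adapter layer. The class names are invented for illustration and do not reflect Morpheus's actual API; real adapters would call each provider's SDK instead of returning stub records.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """One adapter per cloud hides that cloud's native API."""
    @abstractmethod
    def provision(self, name: str, size: str) -> dict:
        ...

class AwsProvider(CloudProvider):
    def provision(self, name: str, size: str) -> dict:
        # A real adapter would call the EC2 API here (e.g., via boto3).
        return {"provider": "aws", "name": name, "size": size}

class AzureProvider(CloudProvider):
    def provision(self, name: str, size: str) -> dict:
        # A real adapter would call the Azure Compute API here.
        return {"provider": "azure", "name": name, "size": size}

class HybridCloudManager:
    """The 'single pane of glass': callers never touch native APIs."""
    def __init__(self) -> None:
        self._providers = {}

    def register(self, key: str, provider: CloudProvider) -> None:
        self._providers[key] = provider

    def provision(self, target: str, name: str, size: str) -> dict:
        return self._providers[target].provision(name, size)

mgr = HybridCloudManager()
mgr.register("aws", AwsProvider())
mgr.register("azure", AzureProvider())
vm = mgr.provision("aws", "web-01", "small")  # same call shape for any cloud
```

Because the decision of which hybrid-cloud components matter lives in the manager layer, swapping or adding a provider never changes the caller's workflow.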

 

IT focus shifts to the 'employee experience'

 

For many veteran IT managers, hardware and software are their comfort zone. In the cloud era, much of the work done by the IT department takes place outside the data center, wherever the organization's employees operate. This has led many companies to extend their "customer experience" efforts to internal staff and service-delivery processes.

 

That's the goal Adobe CIO Cynthia Stoddard set for her department via a program entitled "advancing the inside." CMO's Nadia Cameron writes that Adobe's goal is to improve the employee experience just as much as the company works to enhance its customers' experiences.

 

 

The digital era requires that leaders adopt new ways of thinking, acting, and reacting, particularly when meeting the workplace needs of their employees. Source: Deloitte University Press, via Edublogs

 

Rather than attempt to "shut down" shadow IT in the company, Stoddard embraces it as a way for employees to "control their destiny." The goal is to create security "guardrails" that employees must operate within, and any technology that operates inside those guardrails is sanctioned. A result of Adobe's employee-experience emphasis has been workers from business units spending more time "inside IT". The silos that once predominated in enterprises are not being blown up so much as they're being transformed into conduits that connect every part of the organization.

 

At Morpheus, we’ve often been pulled into the stormy sea of organizational dysfunction as part of major cloud management and DevOps automation projects. One major lesson learned is that each major stakeholder group must get its base-level needs met in order for higher-level communication to succeed.

 

As an example, Infrastructure and Ops teams need to feel they are meeting the service-level and security needs of the business. Developers need to see rapid deployment times so their code pushes to production quickly. Management teams are looking for consolidated reporting, cost controls, and return on investment. If you can meet all three of these needs from a single tool, you may be on the right track.

 

To learn how Morpheus could help these three groups all feel like they’re winning, please set up time for a demo with one of our solution architects.


FinTech Meets DevOps, and the Rest Is History


These days, the place to see state-of-the-art technology in action is your local financial services company. If software is indeed eating the world, as Marc Andreessen famously stated back in 2011, the first course on the menu appears to be the banking industry.

Based on the early fintech returns, being served up on a platter has never been more rewarding. The shining star of the financial industry's tech rebirth is DevOps, which, like a good wine, is the ideal pairing for the "fast, faster, fastest" mindset that dominates the industry.

It hasn't always been this way. In fact, until recently, established companies couldn't approach the agile, lightning-fast software development practices of cloud-native titans such as Amazon and Netflix. A confluence of events has shifted the landscape to the point that today’s financial giants are essentially software companies that happen to hold a banking license. The first was the arrival of DevOps as a cultural shift and the adoption of processes for continuous integration and continuous deployment. The second was the rapid mainstreaming of public cloud technologies and the pressure on internal IT to leverage automation as part of refactoring itself in that image.

The nexus of these forces means if you walk around the corporate halls of finance firms you may see as many hipster T-shirts as 3-piece suits. 

 

DevOps as the Great Liberator of enterprise IT

In addition to their traditional rivals, financial services firms face new challenges from innovative banking and fintech startups. Bogged down by legacy systems, reams of regulation, and thorny InfoSec concerns, they once seemed unable to catch up with Silicon Valley. Yet today, financial giants have emerged as leaders in modern software-delivery pipelines. DevOps has become the pathway to greater market value along with lower risk.

Improving end-user experience is in large part a math equation. The more iterations of a given project you go through, the better the final result. This is why deploying software multiple times a day versus a few times a year was such a game changer. According to the 2017 State of DevOps Report, high-performing organizations deployed code 46 times more frequently than their lower-performing peers. Productivity benefits result from applying lean production-line concepts borrowed from manufacturing: Development and IT operations are integrated with each other and in turn embedded into the business units, so there is no room for time-wasting silos.

Nothing eats up an IT budget like a failed application deployment and the associated rollback. DevOps' continuous integration/continuous delivery reduces errors by doing away with manual interventions by operations. With a ‘shift-left’ mentality, the focus is on replicating production earlier in the lifecycle so small configuration changes don’t blow up down the road. The result of shared involvement is fewer unpleasant surprises and higher-quality releases. When issues do come up, the mean time to recovery is substantially lower. In fact, that same 2017 DevOps report found a 96x faster time to recovery in high-performing DevOps shops.

 

Get out of the way: End-to-end pipeline automation

I recently spoke with the architecture and engineering lead at one of the world’s largest banks, who also happens to be one of our customers. In his own words, the corporate mandate was for IT to ‘get out of the way’ so business units and their associated development teams could be more innovative.

This shift over the last few years is a far cry from the days of the all-powerful infrastructure administrators who held the keys to the monolithic kingdom. It is in large part due to the customer experience driven by public cloud providers like AWS, Microsoft Azure, and Google. In a matter of minutes, a user can swipe a credit card and have a fully functional server with an OS and an application stack, which begs the question: why can’t I get that from my friendly neighborhood IT guy?

The DevOps mantra is: Anything that can be automated must be automated. Process automation is the foundation for consistent high quality, compliance under changing circumstances, and scalability. Most important to the finance sector, when applied with the right tooling, DevOps delivers apps at unmatchable speed. In fact, the challenge can become keeping up with such an unpredictable and rapidly changing environment.

All the automation in the world gets you nothing if you don't also provide easy access to a best-of-breed toolset that's tied together in a seamless way. The "unified orchestration" layer not only knits all the pieces together, it should insulate developers from changes to the underlying application programming interfaces (APIs) of infrastructure elements, application platforms, and public clouds. At the same time, it should give the Operations and Security teams confidence that compliance is all in place. The right approach serves to reinforce the DevOps mindset and remind everyone that you're not just swapping in a new system, you're changing the company culture.

 

Victims of their own success:  Skills gap driving up demand for DevOps talent

Today you'll find more internal software developers working at financial services heavyweights than you'll find at most software companies. A few years ago, one JPMorgan executive claimed they employed “more software developers than Google, and more technologists than Microsoft."

Similarly, asset-management firm BlackRock has a reputation for being one of the best places for developers to work because of a transparent management style and a culture that rewards risk-taking. For example, the company recently created and released a cloud-native investor research application in just 100 days. That's about how long it would've taken to order computer equipment under its previous development regime.

The digital transformation underway at companies large and small is about people as much as it's about automated processes. At the Swiss financial company SIX, implementation of DevOps entailed employees building on and adapting their skills: along with their one area of expertise, each person became familiar with other related areas of DevOps, and they are always in the process of learning a new skill. Cross-functionality is promoted via gamification, such as the company's Haka awards, and through frequent presentations by teams to the entire group to acknowledge major achievements.

These firms and others see DevOps people and processes as critical tools in their IT arsenal as they fend off new, well-armed competitors. Throw in the bonus of enhanced security and compliance, and DevOps becomes something the financial industry can't do without.

 

DevOps and CloudOps: The connection behind digital transformation acceleration


DevOps transforms internal release processes to in turn transform systems of innovation, and CloudOps transforms IT’s relationship with infrastructure and architecture.  These twin trends of DevOps and CloudOps are spawning new business models and disrupting old ones while up-ending most of what people thought they knew about IT service delivery. Both topics are also as much about processes and people as tools and technologies.

In many industries, the disruptors are ‘cloud-native’ companies that are pushing more traditional enterprises to transform or risk being left behind. The same can be said for IT management practices… transform or be left behind. Welcome to digital transformation.

Sometimes when you shift left, you might run into a wall

Breaking through traditional IT silos is at the heart of DevOps and is key for enterprise agility. The hierarchy of handoffs is replaced by a cycle of continuous cooperation (not to be confused with the conjoined triangles of success for you ‘Silicon Valley’ fans).

In DevOps, a flattened set of tools and processes works in parallel across a pipeline. The same holds true for people, but reaching organizational nirvana can be a challenge as everybody is saddled with hold-over role definitions.

Tackling organizational challenges of wide-scale transition begins by conducting a full inventory of all current processes. Identify those that need to stay in place, those that should be retired, and those that need a healthy dose of automation.

It’s also important to realize that you can’t modernize an entire application library in one swoop. Look for unified orchestration platforms that conform to your methodologies and enable refactoring, re-architecting, or re-building apps without throwing out your existing tools or the skill sets of your existing staff.

To be successful in using DevOps as the springboard to digital transformation, follow the advice of the agile-development pioneers (and go pick up a copy of ‘The Phoenix Project’):

• Align DevOps with your business strategy.
• Attach DevOps pilots to existing systems (no ‘science projects’).
• Be inclusive to win hearts and minds (no DevOps ‘cool kids’).
• Start with projects that will add new value and differentiation.
• Cultivate DevOps talent from across the organization.
• Embrace small mistakes as much as small victories. Failing fast is a virtue.

Embracing changing premises…on the premise that everything changes

Any IT manager who feared the rise of cloud would render them obsolete can rest easy knowing there’s demand for operations teams to reinvent themselves in this new world. In a recent ServiceNow survey, 75% of enterprises reported that a cloud-first approach made IT more relevant to the business. Unfortunately, 90% reported that IT workers lacked the skills to implement those plans.

Traditional descriptions of the various X-as-a-Service models focused on shifting responsibility for the care and feeding of IT to somebody else. However, several years into the multi-cloud future, the reality is that enterprises using heterogeneous clouds to find the right mix of application optimization are left with a patchwork of tools and overhead to rival what they had pre-cloud. The answer for cloud operators could be to learn from DevOps automation and apply those same principles to the operation of hybrid IT, i.e., CloudOps.

Just as virtualization abstracted OS images from the underlying hardware, the right unified cloud management can abstract operators from the specialized nuances of native tools from cloud or application platform providers. This can be described as a ‘self-levelling’ approach to orchestration and is something to think about as you face tool and cloud sprawl.

Making the case for a ‘Digital Transformation Officer’

Change is always hard, but knitting together dozens of IT systems and doing the organizational mash-up discussed here can be downright impossible without the right leadership.

The Digital Transformation Officer’s role is to drive change by holding responsible the parties who manage the hundreds or thousands of components that comprise a typical application. They must be masters at balancing: short-term successes vs. long-term value; delegating vs. personally ensuring results; and committing resources vs. staying agile.

A report from McKinsey & Company identifies two factors that can undermine the effort. The first is a failure to act with authority straight from the CEO. The DTO must have the power to act on the CEO’s behalf and to influence important business decisions.

The second great obstacle is an environment in which managers and employees don’t buy into the “urgent need for change.” Any bureaucratic dragging of feet or hesitancy is a sign of the wrong attitude and the failure of the message to get through.

The end may be the start

Earlier this year I moderated a panel on disruptive transformation in London with a mix of cloud architects and vendors, including Rackspace and Salesforce. I asked each of the panelists to give their best piece of advice for the audience, and it might be useful to share the speed-dating summary of their wisdom.

1) Focus on the people, not the tech.
2) Be ready to break all the rules.
3) Design for unpredictability.
4) Start with the end in mind.

As technologists, it can be easy to pick a tool or a technology stack and go down a path without realizing the impact those choices have on a transformation journey. The common thread in all of the advice was that you must have a clear idea of where you want to go and what business outcomes you want to prioritize. Only then can you reverse engineer the DevOps and CloudOps tooling required to get you there.

Every CTO needs to know: Do I have the full backing of the CEO? Have I amped up the business’s internal clock? Am I in tune with the frontline workers and understand what they’re going through? Source: McKinsey & Company

How to conquer cloud complexity by solving for the right problem


We're a couple of months past Dell EMC World, on the heels of Cisco Live, and in advance of HPE Discover and VMworld. Given that timing, I feel compelled to put pen to paper to discuss the macro-level chessboard and the relative value (or not) for customers coming out of these mega-vendors.

Industry changes and growth rates in cloud computing have been written about ad nauseam, so this isn’t meant to go deep on financials. However, here’s the short version:


source:  https://www.idc.com/getdoc.jsp?containerId=prUS43508918

 

In this corner, weighing in at 57% of total spend but declining steadily, is the traditional IT market. While no vendor likes being called ‘traditional,’ we mean servers, storage, networking, and anything else that isn’t directly related to the cloud tsunami. And in this corner, rapidly picking up speed with $50B in spend and a whopping 20% CAGR, is cloud-related IT spend.

Dell (including EMC and VMware), HPE, and Cisco are all heavyweights in that traditional category, while wunderkinds AWS, Azure, and Google dominate the hipster neighborhood. The old guard is doing whatever it takes to stay relevant in a world coming apart like a deconstructed dish by Ferran Adria.

This includes partnerships and promises of collaboration, such as the one Cisco has ‘penciled in’ for Q3 with Google, the work VMware is doing with AWS, and HPE’s long-standing Microsoft alliance. It’s a bit like me standing next to all the XXL dads at kiddo pick-up time to feel slightly better about my waistline.

 

Think about what you are doing to your kids!

I saw lots of articles this past week reflecting on news from the legacy vendors, including good ones by Torsten Volk of EMA and Keith Townsend of The CTO Advisor, who said, “I trust Cisco when it comes to servers, switching, and routing hardware…. However, when it comes to cloud strategy, Cisco isn't near to gaining my trust for vision.”

One reason these IT giants might be struggling with vision is that they are caught between a rock and a hard place. They need to insert themselves into the migration off-premises, but they are also inherently disincentivized when it comes to decoupling customers from their hardware.

 


 

The question I wanted to ask all of those entering into these ‘marriages of convenience’ is: what about the customers? Has anyone stopped to think about what’s best for them? We talk to large enterprises every day, and it’s clear they are fed up with being at the mercy of the large-vendor community.

Lock-in is inevitable (and not inherently evil), so the question is not how to avoid lock-in but rather how to lock yourself into a scenario that provides maximum agility with a minimum amount of pain. We’re bringing more and more customers on board with Morpheus as a neutral layer to insulate them from complexity as well as from vendor madness. Time to take your power back.

In one of the recent Cisco announcements, a senior exec said, "Workload placement should be a business decision, not a result of technology limitations." We couldn’t agree more, and while abstraction itself is not an abstract concept (see what I did there), customers are looking to move beyond where most of these vendors are focused.

 

Time to give complexity the bird

Cloud rockstar David Linthicum wrote a great article in InfoWorld recently titled “How to avoid the coming cloud complexity crisis”. In it, he observes that with hundreds of workloads being added to the cloud without a corresponding decrease in on-premises resources, there is a train wreck coming. His sage advice is to create a complexity management plan, select the right tools to manage complexity, and set up new processes. The warning he ended with was: “If you do this right, you’ll have a very productive next ten years. If you do this wrong, chances are that you’ll drown in your own work. Take your pick.”

It’s that advice that leads me to the Albatross. More specifically, the Gossamer Albatross, a human-powered aircraft developed by Paul MacCready. Mr. MacCready is considered one of the best mechanical engineers of the 20th century and famously said: “The problem is we don’t understand the problem.”

Backstory: early in his career, MacCready tackled a decades-old challenge, the question of whether a human-powered aircraft could cross the English Channel. Dozens of teams had tried and failed to build such an aircraft, and it was widely thought to be impossible. When our hero faced the challenge, he realized that in those failures, teams would spend a year building their plane based on theory, then test it, fail, and go back to the drawing board for another year.

 


 

His reframed problem was not about human-powered flight at all.  Rather, it was to design a plane that could be modified and rebuilt in hours.  He answered that problem and ultimately won the contest.  It’s one of the first and best examples of ‘failing fast’ and designing for maximum agility.  

This same mantra is at the heart of the Cloud and DevOps equation and the conundrum faced by traditional vendors. They are designed for the wrong problem; in their case, it’s how to protect their cash-cow estates. Morpheus, on the other hand, was designed by IT Ops and Developers to enable rapid application deployment and lifecycle management in a 100% infrastructure-agnostic world. Our founders built Morpheus for themselves, and it was only several years later that they decided to open it up to the masses.

Now, with over 200,000 instances deployed under our belt, we've learned a bit about hybrid IT. If you abstract high enough in the stack, you can deploy, rebuild, and move your applications across vendors, development platforms, and clouds in a matter of minutes. It’s not cheating on your favorite vendor; it’s simply giving you leverage to pick the best path at any point in time, with unlimited freedom of choice down the road.

Our customers can blueprint an application and develop, deploy, and scale across a mix of bare metal, virtualized, and containerized components spanning Dell, HPE, Cisco, IBM, etc. as well as any public cloud provider.  We give them a full fidelity API for developers, a simple GUI and role-based access for Ops teams, and hooks to orchestrate and automate over 75 different technologies in the stack to address the complexity crisis.  

 

If you’d like to learn more, reach out and set up time for a demo with one of our solution architects. You’ll be glad you did.

How to Avoid the Most Common Cloud-Migration Mistakes


There's no teacher like experience. There's also nothing like learning from the experience of others.

The cloud pioneers are worth heeding when you're embarking on an application migration project. One such early cloud adopter goes by the name of "Joe the IT Guy." Joe identifies five things people misunderstand about cloud migrations:

  • "Any app is a candidate for lift-and-shift." If your apps are temperamental (laden with technical debt), they will be downright incorrigible when they run on whichever cloud platform you choose. Maintenance and repair are relatively easy when the server is located in the next room. Maintaining cloud apps requires "shared responsibilities" with your service provider.
  • "There's nothing complicated about lift-and-shift." You can't just "zip up" your VMs and apps and then copy them to the cloud. First, you're dealing with an entirely new security model. Second, the very nature of the underlying network is different and is dictated primarily by the cloud service's operational model.
  • "A VM is a VM, regardless of whose server it's running on." Cost comparisons between an on-premises server and a cloud instance aren't simple. Calculating the total cost of ownership (TCO) has to account for the fact that a cloud instance includes all the services the provider offers at a single cost, rather than just the VM hardware and software. Those may include directories, DBaaS, serverless computing, and other new models.
  • "There's no real difference between cloud providers." Start with billing models: Each major cloud service uses a different one. Some are per minute, others are per hour. Instance sizes and prices vary widely. Services offer unique sets of features, and sometimes the same features but with different names. Last but not least, each service has its own set of APIs and consoles that are generally incompatible with each other.
  • "If we take a multi-cloud approach, we won't be locked in." Choosing one cloud provider entails a great deal of study and preparation. Now consider how much more entailed the process will be when you're trying to choose multiple cloud partners for various operations, all of which need to be included under a single management umbrella. For a growing number of businesses, the Morpheus Intelligent Analytics makes it easy to match workloads to the optimal infrastructure at the right price.
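The billing-model differences in the fourth point are easy to underestimate. The Python comparison below (all rates are made-up numbers, not real provider prices) normalizes per-minute and per-hour quotes to a monthly cost so they can be compared apples to apples.

```python
def monthly_cost(rate: float, unit: str, hours_used: float) -> float:
    """Normalize a quoted rate to a cost over the given hours of use."""
    if unit == "per_minute":
        return rate * hours_used * 60
    if unit == "per_hour":
        return rate * hours_used
    raise ValueError(f"unknown billing unit: {unit}")

HOURS_PER_MONTH = 730  # roughly one month of continuous use

# Two hypothetical instances whose quotes look similar at first glance:
cost_a = monthly_cost(0.0017, "per_minute", HOURS_PER_MONTH)  # ≈ $74.46
cost_b = monthly_cost(0.095, "per_hour", HOURS_PER_MONTH)     # ≈ $69.35
```

$0.0017 per minute reads like the cheaper quote, but it works out to $0.102 per hour, more than the $0.095-per-hour alternative.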

 


Before deciding which applications to migrate, and how extensively to adapt them to cloud environments, consider the many inherent differences between on-premises and cloud architectures. Source: David S. Linthicum, via SlideShare

Failures can't be avoided, but they can be planned for

What you can't prevent you can at least prepare for. Before you move any application to the cloud, consider what will happen when the app becomes unavailable, for whatever reason. Fortune's Barb Darrow quotes one migration vet who recommends building a "reliability bubble" around critical apps you host in the cloud.

For each application, create a checklist of all possible failure points, considering every potential scenario. Then determine the best way to mitigate each glitch the app could encounter. Cloud veterans recommend adding "retry logic" to the app so it will attempt to auto-correct small errors to prevent them from becoming major outages. This is analogous to restarting a stalled PC before you call the help desk. When it hits a snag, the app is programmed to wait a preset period and then retry rather than stop immediately.
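Here is a generic sketch of that retry logic in Python. It's illustrative, not vendor code: `TransientError` stands in for whatever transient exception your cloud SDK raises, and the random jitter spreads retries out so a crowd of clients doesn't hammer a recovering service in lockstep.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a provider's throttling or transient-failure error."""

def call_with_retries(op, max_attempts=5, base_delay=0.5):
    """Run op(), retrying transient failures with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except TransientError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            delay = base_delay * (2 ** (attempt - 1))   # 0.5s, 1s, 2s, ...
            time.sleep(delay * random.uniform(0.5, 1.5))  # jittered wait

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"count": 0}
def flaky_fetch():
    calls["count"] += 1
    if calls["count"] < 3:
        raise TransientError("service briefly unavailable")
    return "ok"

result = call_with_retries(flaky_fetch, base_delay=0.01)  # succeeds on the third try
```

The jitter factor is the part that keeps a small error from becoming a major outage: without it, every client that hit the same glitch would retry at the same instant.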

 


To overcome common transient failures, set the period between retries as evenly as possible among applications to avoid perpetuating a service overload. Source: Microsoft

A good way to prepare for any app migration project is to have an experienced partner by your side. Morpheus's unified multi-cloud orchestration applies intelligent analytics to lower costs by delivering end-to-end diagnostics on VMs, containers, and public clouds. The service's machine-learning-powered remediation lets you resize app components and set power schedules. There's no better way to manage the complete application lifecycle via self-service infrastructure.

Government Organizations Driving Further Cloud Computing Growth


While businesses are often the catalysts for cloud growth, government organizations are picking up steam in this area and are helping to drive the growth of the cloud even further. A recent market research report by Technavio predicted a compound annual growth rate (CAGR) of over 13% over the next four years for the government portion of the global cloud computing market.

The report found that the biggest reasons for this growth were that government organizations need more cross-functional organization and higher internet speeds to meet the increasingly large and complex requirements of the IT services they provide.

Cross-functional needs 

Breaking down the silos is critical in an enterprise IT strategy, and there are ways to get different IT and business groups on the same team. 

Source: John Connolly for InformationWeek 

As government organizations and services have grown, so has the organizational complexity for their IT departments and services. One way of substantially streamlining processes and making it easier for different IT groups to work together is to make use of the advantages the cloud has to offer. Moving much of the complexity to the cloud allows government organizations to improve efficiency in both the organizational and technical aspects of their IT.  

On the organizational side, teams can more quickly and easily work together, and duplication of responsibilities can be reduced or eliminated. Since it is easier for everyone involved to be talking about and using the same system as the basis for their work, teams can more easily see the overall picture and how they can more effectively tackle the IT needs of their organization. 

On the technical side, cloud management platforms (CMPs) can help streamline things such as security and the deployment of servers and other resources. They can also bring much more consistency to processes, since standards can be set and enforced more easily. For example, the Morpheus CMP is a unified orchestration layer that delivers consistency across multi-cloud and hybrid IT through a single interface for provisioning resources, even when an organization uses several different cloud providers. Instead of worrying about the differences in native toolsets between spinning up services on AWS vs. Azure, for example, teams can simply focus on their application requirements and use the same workflow to spin up the resources they need from any of their cloud providers!
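The "single interface over many providers" idea boils down to the adapter pattern. Here's a minimal sketch of how such a layer could be structured (a hypothetical interface for illustration only, not the Morpheus API or any provider SDK):

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """One adapter per provider; callers never touch native tooling."""

    @abstractmethod
    def provision(self, cpu: int, memory_gb: int) -> str:
        ...

class AwsAdapter(CloudAdapter):
    def provision(self, cpu, memory_gb):
        # a real adapter would call the EC2 API here
        return f"aws-instance-{cpu}cpu-{memory_gb}gb"

class AzureAdapter(CloudAdapter):
    def provision(self, cpu, memory_gb):
        # a real adapter would call Azure Resource Manager here
        return f"azure-vm-{cpu}cpu-{memory_gb}gb"

def provision_anywhere(adapter: CloudAdapter, cpu=2, memory_gb=8):
    """Same workflow regardless of which provider sits behind it."""
    return adapter.provision(cpu, memory_gb)
```

Teams learn `provision_anywhere` once; swapping `AwsAdapter` for `AzureAdapter` changes nothing about their workflow, which is exactly the consistency a CMP aims for.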

The need for higher speeds 

Governments deal not only with general issues of big-data integration from multiple sources and in different formats and cost but also with some special challenges. The biggest is collecting data; governments have difficulty, as the data not only comes from multiple channels (such as social networks, the Web, and crowdsourcing) but from different sources (such as countries, institutions, agencies, and departments). 

Source: ResearchGate 

As mentioned, gathering data can be a big undertaking for government organizations. They often have to obtain information from many sources and perhaps do quite a bit of research to find what is needed. If the internet speeds available to government employees are slow, this work can be difficult if not impossible to complete in time for an effective turnaround on the services they provide. Government organizations could therefore definitely use faster connectivity to get things done more efficiently on the data collection side. 

Not only are there issues with gathering data; government organizations also need to meet the demand for more of their services to be available online. As more government services have migrated to web and mobile apps, the need for faster, higher-quality internet service has risen. With all of this movement, there is also a need for more storage to house the additional data, which of course must be quickly accessible to an organization's apps – whether they are serving customers or providing internal support. 

In both the private and public sector, the concept of data gravity has taken hold and is impacting how these entities architect their cloud applications. Morpheus is currently engaged with one large multi-country customer deployment where satellite-based data sets have gotten so large that the organization can no longer efficiently transport information to the various agencies and researchers who need it. Instead, they are using Morpheus to create a centralized portal for provisioning application workspaces and linking file and object datasets… essentially bringing the application users closer to the data. 

With all of these factors in place, it looks like government organizations indeed are poised to give rise to additional cloud growth in the coming years, which will benefit both governments and people using their services as these services become more streamlined and easier to access! 

Six ways to ensure you're conserving your cloud resources


"I didn’t realize we were wasting so much time and money."

 

That is the most likely reaction to your next cloud audit, particularly if it has been more than a few months since your last cloud-spending review. If your management checks aren't happening at cloud speed, it's likely you can squeeze more cycles out of whatever amount you've budgeted for cloud services.

 

Keep these six tips in mind when prepping for your next cloud cost accounting to maximize your benefits and minimize waste without increasing the risks to your valuable data resources.

 

1. Don't let the cloud's simplicity become a governance trap. As Computerworld's John Edwards writes, "It's dead simple to provision infrastructure resources in the cloud, and just as easy to lose sight of... policy, security, and cost."

 

Edwards cites cloud infrastructure consultant Chris Hansen's advice to apply governance from the get-go by relying on small iterations that are focused on automation. Doing so allows problems related to monitoring/management, security, and finance to surface and be remedied quickly. Hansen states that an important component of cost control is being prepared: you have to make it crystal clear who in the organization is responsible for cloud security, backups, and business continuity.

 

2. Update your TCO analysis for cloud-based management. A mistake that's easy to make when switching to cloud infrastructure is applying the same total cost of ownership metrics for cloud spends that you used when planning the budget for your in-house data center. For example, a single server running 24/7 in a data center won't affect the facility's utility bill much, but paying for a virtual cloud server's idle time can triple your cloud bill, according to Edwards.
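To see how idle time can roughly triple a cloud bill, here's a quick sanity check with hypothetical numbers (the hourly rate and business-hours figure are assumptions for illustration, not Edwards's data):

```python
HOURS_PER_MONTH = 730          # average calendar month
BUSINESS_HOURS = 10 * 22       # ~220 h/month of actual use (assumption)
hourly_rate = 0.20             # $/h, hypothetical on-demand price

always_on = hourly_rate * HOURS_PER_MONTH   # billed 24/7
scheduled = hourly_rate * BUSINESS_HOURS    # billed only when in use
ratio = always_on / scheduled               # ~3.3x: idle time dominates
```

Under these assumptions, an always-on instance costs about 3.3 times a power-scheduled one, which is why shutdown and power-scheduling policies figure so prominently in cloud cost control.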

 

 

Subscription fees dominate cloud total-cost-of-ownership calculations, while TCO for on-premises IT is based almost entirely on such ongoing costs as maintenance, downtime, performance tuning, and hardware/software upgrades. Source: 451 Research, via Digitalist

 

A related problem is the belief that a "lift and shift" migration is the least expensive approach to cloud adoption. In fact, the resulting cloud infrastructure wastes so many resources you end up losing the efficiency and scalability benefits that motivated the transition in the first place. The pennywise approach is to invest a little time and money up front to redesign your apps to take advantage of the cloud's cost-saving potential.

 

3. Monitor cloud utilization to right-size your instances. Determining the optimal size of the instances you create when you port in-house servers to the cloud doesn't have to be a guessing game. Robert Green explains in an article on TechTarget how to use steady state average utilization to capture server usage over a set period. Doing so lets you track the current use of server CPU, memory, disk, and network.

 

Size instances based on average use over 30 to 90 days, correlated to user sessions or another key metric. Any spikes in utilization can be accommodated via autoscaling, which is key to realizing cloud efficiencies. Once you've found the appropriate instance sizes, classify your instances as either dedicated (running 720 to 750 hours each month) or spot (not time-sensitive and activated based on demand).

 

The former, also called reserved instances, may qualify for steep discounts from the cloud provider if you can commit to running them for at least one year. The latter, which are appropriate only for specific use cases, can be purchased by bidding on unused AWS instances, for example. If your bid is highest, your workload will run until the spot price exceeds your bid.
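The sizing-and-classification logic above can be sketched in a few lines. This is an illustrative model, not a provider tool; the 720-hour threshold mirrors the dedicated/spot split described above, and the function name is hypothetical:

```python
def classify_instance(hourly_cpu_pct, monthly_hours):
    """Size from steady-state average utilization (autoscaling absorbs
    spikes above it), then pick a purchase plan by monthly runtime."""
    avg = sum(hourly_cpu_pct) / len(hourly_cpu_pct)
    plan = "reserved" if monthly_hours >= 720 else "spot"
    return {"avg_cpu": avg, "peak_cpu": max(hourly_cpu_pct), "plan": plan}
```

Feeding in 30 to 90 days of utilization samples gives you the steady-state average to size against, while the peak tells you how much headroom autoscaling must cover.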

 

You can take all the guesswork out of instance sizing by using the Morpheus unified orchestration platform, which automates instance optimization via real-time cloud brokerage. The service's clear, comprehensive management console lets you set custom tiers and pricing for the instances you provision. All costs from public cloud providers are visible instantly, allowing your users to balance cost, capacity, and performance with ease.

 

4. Be realistic about cloud infrastructure cost savings. Making the business case for migrating to the cloud requires collecting and analyzing a great deal of information about your existing IT setup. In addition to auditing servers, components, and applications, you must also closely monitor your peak and average demand for CPU and memory resources.

 

Because in-house systems are designed to accommodate peak loads, data-center utilization can be as low as 5 to 15 percent at some organizations, according to AWS senior consultant Mario Thomas, who is quoted by ZDNet's Steve Ranger. Even if they're operating at the industry average of 45 percent utilization, companies see the switch to cloud services as an opportunity to reduce their infrastructure costs.

 

Nearly all organizations choose a hybrid cloud approach that keeps some key applications and systems in-house. However, even a down-sized data center will continue to incur such expenses as leased lines, physical and virtual servers, CPUs, RAM, and storage, whether SAN, NAS, or direct-attached. Without an accurate analysis of ongoing in-house IT costs, you may overestimate the savings you'll realize from adopting cloud infrastructure.

 

 

Intel's financial model for assessing the relative variable costs of public, private, and hybrid cloud setups found that hybrid clouds not only save businesses money, they let companies deliver new products faster and reallocate resources more quickly to meet changing demand. Source: Intel

 

5. Confirm the accuracy of your cloud cost accounting. Making the best decisions about how your cloud budget is spent requires the highest-quality usage data you can get your hands on. Network World contributor Chris Churchey points out the importance of basing the profiles of your performance, capacity, and availability requirements on hard historical data. One year's worth of records on your actual resource consumption captures sufficient fluctuations in demand, according to Churchey.

 

Comparing the costs of various cloud services is complicated in large part because each vendor uses a unique pricing structure. Among the options they may present are paying one fixed price, paying by the gigabyte, and paying for CPU and network bandwidth "as you go." Prices also vary based on the use of multi-tenant or dedicated services, performance requirements, and security needs.

 

Keep in mind that services may not mention latency in their quote. If your high-storage, high-transaction apps require 2 milliseconds or less of latency, make sure the service's agreement doesn't allow latency as high as 5 milliseconds, for example. Such resource-intensive applications may require more-expensive dedicated services rather than hosting in a multi-tenant environment.

 

6. Run your numbers through the 'cloudops' calculator. The obvious shortcoming of basing your future cost estimates on historical data is the failure to account for changes. Anyone who hasn't slept through the past decade knows the ability to anticipate changes has never been more important. To address this conundrum, InfoWorld's David Linthicum has devised a "back-of-the-napkin" cloudops calculator that factors in future changes in technology costs, and the cost of adding and deleting workloads on the public cloud.

 

Start with the number of workloads (NW); then rate their complexity (CW) on a scale from 1.01 to 2.0. Next, rate your security requirements (SR) from 100 to 500, then your monitoring requirements (MR) from 100 to 500, and finally apply your cloudops multiplier (CM), from 1,000 to 10,000, based on people costs, service costs, and other required resources.

 

Here is a typical calculation and use case:

 

 

Using the cloudops calculator, you can create an accurate forecast of overall cloud costs based on workload number and complexity, security, monitoring, and overall scope. Source: InfoWorld

 

In the above example, the use case totals $9.8 million: $8.75 million for workload number/complexity using a median multiplier of 5,000; $612,500 for security using a multiplier of 350; and $437,500 for monitoring using a multiplier of 250. Because you're starting with speculative data, the original calculation will be a rough estimate that you can refine over time as more accurate cloud-usage data becomes available.
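The arithmetic above is easy to reproduce. In this sketch, the 1,000-workload count and 1.75 complexity rating are assumptions reverse-engineered from the published figures ($8.75M / 5,000 / 1,000 = 1.75), not numbers stated in the source:

```python
def cloudops_cost(nw, cw, cm, sr, mr):
    """Linthicum's back-of-the-napkin cloudops estimate.
    nw: number of workloads; cw: complexity rating (1.01-2.0);
    cm: cloudops multiplier (1,000-10,000);
    sr/mr: security and monitoring ratings (100-500)."""
    base = nw * cw
    return {
        "workloads": base * cm,
        "security": base * sr,
        "monitoring": base * mr,
        "total": base * (cm + sr + mr),
    }

# Assuming 1,000 workloads at complexity 1.75, with the median
# multipliers from the use case (cm=5000, sr=350, mr=250):
est = cloudops_cost(1000, 1.75, 5000, 350, 250)
```

Plugging in those assumptions yields the $8.75M workload, $612,500 security, and $437,500 monitoring figures cited above, for a $9.8M total.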

 

The greatest benefit the cloudops calculator provides is in opening a window into the actual costs associated with ongoing cloud operations rather than merely the startup costs. The "reality check" offered by the calculator goes a long way toward ensuring you won't make the critical mistake of underestimating the cost of your cloud operations.

 

Get a jump on cloud optimization via Morpheus's end-to-end lifecycle management, which lets you initiate requests in ServiceNow and automate your shutdown, power scheduling, and service expiration configurations. Morpheus Intelligent Analytics make it simple to track services from creation to deletion by setting approval and expiration policies, and pausing services in off hours.

 

Sign up for a Morpheus Demo to learn how companies are saving money by keeping their sprawling multi-cloud infrastructures under control.

Three questions to ask before you start a cloud migration


While cloud migrations can often feel like they're at a standstill or moving far too slowly, there are also times when a migration is actually moving too fast. To determine whether your migration is moving at the right pace, there are several things to consider, such as your budget, your business requirements, and your personnel. 

What are your business requirements? 

There is an old saying: If you don’t know where you’re going, then any road will get you there. This applies as much to cloud migration as to any other technology transformation. It is important to know your strategic goals and success criteria at the enterprise level, not just for the project itself. 

Source: IT World Canada 

While cloud migration is typically a good thing, migrating without a fully developed plan can make it very difficult or even impossible to determine whether the migration was a success in the end. As mentioned, your business requirements should include criteria not just for IT but also for the business side. Otherwise, it is entirely possible for the migration to appear successful to one group while the other thinks it did not go so well. 

For example, the migration could meet the business criterion of saving money (a success for the business team) while features IT needed, such as autoscaling or one-click server provisioning, were not implemented. The business team might feel the migration was a success, while IT would feel it fell short of the goals they had in place. 

To help with this, it is important not only to have success criteria for both teams but also to set goals that fit the context of the type of migration you are doing. Using the previous example, perhaps IT wouldn't have felt the need for one-click server provisioning if the goal was simply to move a single app to the cloud to save on server costs. If the context and goals are both clear, it becomes much easier for the various teams to evaluate the success or failure of the migration, and they are more likely to come to similar conclusions. 

Do you have the right skills? 

The demand for advanced skills in cloud and software development is already several times larger than the current pool of talent and resources can support. There’s no indication that this gap will shrink in 2018. 

Source: Cloud Academy 

While this is tightly tied to budget requirements, it is a good idea to get the right personnel in place to keep things moving at the pace you need. Given the quote above, this may mean training members of your current IT staff so that they have the necessary cloud skills. Since these skills are in short supply, it is certainly valuable to have employees who already possess them, or who can learn them and excel in a new or updated role. 

In any case, making sure your personnel have the skills they need will help meet your business requirements as well as ensure that important issues like security do not go unnoticed! 

Just a few years ago the conversation around cloud was focused on moving virtualized on-prem infrastructure to either private or public cloud environments. Today these multi-cloud platform options are being used to build new applications, and the worlds of Dev and Ops are moving ever closer together. Skills in infrastructure management and virtualization are giving way to declarative scripting and knowledge of open source projects from the likes of HashiCorp. One benefit Morpheus provides is that both Dev and Ops get what they need without requiring a major skills upgrade. 

Do you have the right tools? 

Large numbers of organizations have adopted cloud services to achieve cost savings, flexibility, and scalability of IT infrastructure. However, managing these services is often easier said than done, involving complexities like management and cost evaluation for multiple services running across multiple cloud platforms, resource consumption details, integration with other enterprise tools, and other factors. 

Source: NETWORKComputing 

One of the challenges of cloud migration is finding the services you need for your situation. In many cases, it can make sense to use more than one cloud provider. Perhaps what you need is a particular setup from AWS along with a particular setup through Azure. Later, maybe even a Google Cloud server may be needed for another set of tasks. This can mean that the knowledge needed and/or the learning curve for IT staff to handle these various tools becomes more and more expansive as your cloud implementation grows.  

If you add hybrid clouds, provisioning, reporting, and the numerous other needs of the IT and business teams, this can begin to look overwhelming pretty quickly.  Thankfully, there are cloud management platform tools such as Morpheus that allow you to unify orchestration of these tasks from a single control plane while still letting you utilize best-of-breed elements through API integration. 

What do these types of tools do? Consider the example above, where an organization needs to work with two, three, or more cloud vendors. Rather than requiring your IT staff to learn each vendor's administration individually, you can have them master a single point of management that makes provisioning resources from AWS the same as it is for Google Cloud or Azure!  

With a tool like this, hybrid clouds and reporting are also made a breeze and can all be managed within that single tool. This improves efficiency and also reduces stress levels of IT staff that need to manage all of these resources. This is a win for both business and IT, and can certainly be a great asset to your cloud migration! 


Efficient multi-cloud management and DevOps requires transparency


As multi-clouds become the norm, finding and addressing wasteful cloud resources jumps to the top of the list of IT concerns. Keeping cloud management simple, timely, and accurate requires a clear, comprehensive view into your application usage.

 

Hybrid clouds give organizations the ability to get the best of both worlds: on-premises for traditional apps and resources they want to keep close at hand, and in the public cloud to realize the speed, agility, and efficiency of cloud-native applications. The challenge is to maintain the optimal balance between public and private clouds to achieve your business objectives. Doing so requires a 360-degree view of the full application lifecycle.

 

Companies need to partner with vendors who take the mystery out of hybrid cloud management by giving managers in IT and business units an up-to-date view of what resources are being used without slowing down DevOps activities. EnterpriseTech's George Leopold reports on a recent survey of CIOs and IT managers by cloud vendor SoftwareOne that found 37 percent of respondents identified unpredictable costs as their greatest cloud concern, topped only by security.

 

Sharpening your view into critical operations

 

According to studies conducted by ISACA Research, one out of three organizations doesn't calculate cloud computing ROI. Network World contributor Bhanu Singh identifies three "core IT activities" that must be monitored regularly and accurately:

 

  1. Spinning up new cloud environments or adapting old ones as business needs change
  2. Providing the right services to the right people at the lowest cost possible
  3. Keeping those user services and app stacks reliable, secure, and stable
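Even a crude ROI figure, the kind one in three organizations reportedly never calculates, beats having none. A minimal sketch, with purely illustrative dollar amounts:

```python
def cloud_roi(annual_benefit, annual_cost):
    """Simple ROI ratio: (benefit - cost) / cost.
    Inputs are annual dollar figures; the example values below
    are hypothetical."""
    return (annual_benefit - annual_cost) / annual_cost

# e.g. $390k of realized savings/revenue against $300k of cloud spend
roi = cloud_roi(390_000, 300_000)
```

Here a $390k benefit on a $300k spend works out to 30% ROI. The hard part isn't the arithmetic but monitoring the three core activities above closely enough to put honest numbers into it.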

 

 

Gaining visibility into application health is one of the top four challenges of multi-cloud management, topped only by security and performance concerns. Source: eWeek

The benefit of visibility into all your workloads -- in the cloud and on-premises -- is demonstrated by an accounting application, which has peaks and valleys of activity in standard business cycles. Hosting the application in the cloud allows resources to be freed up easily when not needed, but only if you monitor workload demands as close to real time as possible.

One of the greatest impacts of enhanced visibility into application performance and health is the ability of CIOs to partner with the business units that rely on the apps. Knowing how cloud and non-cloud resources are being used in the organization allows CIOs to recommend specific platforms and services, keep tabs on the inevitable shadow IT projects, and have a more thorough knowledge of what the business units need. 

 

Bringing monitoring and logging into a consolidated view across clouds with an orchestration platform like Morpheus unlocks the ability to detect app stack outages, scale across platforms and clouds, and otherwise assure day-2 production application tasks are first class citizens within the deployment phase of the app lifecycle. 

 

As Singh puts it, having an untrammeled view of your multi-cloud operations can "elevate IT to a position of driving business value rather than restricting and frustrating the business."

 

An opportunity for DevOps to drive the business to new heights

 

The continuous integration/continuous delivery (CI/CD) nature of DevOps is taking organizations by storm. Irshad Raihad writes on Computer Business Review that DevOps teams "think in terms of application portability," which means applications are built "independent of where they will live [and] move across the continuum of on-prem, private and public clouds with complete transparency to end users."

 

There's only one way to achieve such a level of end-user transparency in multi-cloud and hybrid cloud environments: via a single unified interface that is shared by people in IT and in business units. The growing popularity of multi-cloud management platforms such as Morpheus is due in large part to the increasing demand for a single, comprehensive view of diverse public cloud and private cloud services.

 

It must go beyond a unified interface though. Organizations using config management tools as part of their orchestration flow can track configuration state changes in development and then enforce an identical state of dependencies through test and production. When coupled with self-service provisioning via Morpheus, organizations are able to quickly tear down and refresh the entire pipeline at any time because everything is stored and managed as code within the CM tool.   

 

TechTarget's Kerry Doyle states that unified cloud management services let teams execute workloads more simply and efficiently by identifying the optimal platform based on cost, reliability, and service portfolio. The single point of control the services provide means users have "new levels of order and visibility into multi-cloud environments and governance," according to Doyle.

An Architect's View on DevOps in the Multi-Cloud Enterprise


Guest Post by Morpheus Senior Solution Architect Adam Hicks

 

Earlier this summer, I had the opportunity to speak at a small Enterprise IT event in Chicago. To give you some context, this was a conference dedicated to Enterprise Architecture. I chose to speak about two hot-button concepts at once: DevOps and hybrid cloud. I came away feeling as though my discussion subject warranted further reflection.

My day-to-day role is to evangelize and deploy orchestration and automation solutions in hybrid-cloud environments, and DevOps is the solution du jour for accomplishing this. However, as I prepared my talk, during my talk, and after my talk, I came to realize that there is a critical need to level set on the subject of DevOps.

 

So what is DevOps anyway?  

At one point in my career, I could keep track of the number of customers who were actively engaged in DevOps challenges, but now I see the same thing everywhere.  Here’s the typical scenario:

Me: "Mr. Customer, I have a great solution that will finally allow you to deliver self-service compute instances in your private and public clouds."

Customer: "Can you do it in code?  Because our DevOps team wants to deploy infrastructure as code."

The attuned reader may have noticed it, but most will have missed the issue in that exchange: The customer pointed to a specific team as though this ‘DevOps team’, and ONLY this team, are the doers of the DevOps.

(The customer also implied -- not so subtly -- that there are some ways of doing tasks that are "DevOps" and some that are not.)

 

 

Don't lose sight of DevOps' fundamental goals

In my event presentation, I asked how many in the audience were attacking DevOps at their company.  I'd put the hands raised at about 20%.  Next, I asked how many had DevOps teams or, at the very least, talent with "DevOps" in their title.  About 10% of hands went up.

This means that about half of the Enterprise representatives whose companies are engaged in DevOps also have designated specific doers of these deeds. 

What, exactly, am I getting at?  It's clear to me that we have radically strayed from the original intent and meaning of DevOps.  In a world of "DevOps Engineers" and "DevOps Toolchains," I get the sense that as devout IT practitioners, we have missed the mark on what this movement is really supposed to be accomplishing.

My goal here is to spotlight the need to see DevOps in its original light, reorient our understanding of what DevOps really is, and ultimately set some goals we can all accomplish.

 

DevOps creates one team where there were many

Let’s start with a definition:  DevOps (a clipped compound of "development" and "operations") is a software engineering culture and practice that aims at unifying software development (Dev) and software operation (Ops).  The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction -- from integration, testing, releasing to deployment and infrastructure management.  DevOps aims at shorter development cycles, increased deployment frequency, and more dependable releases, in close alignment with business objectives. --Wikipedia

At its core, DevOps is an organizational movement to increase the value of IT organizations. It began like any other culture shift: in response to different viewpoints seeking to understand a problem. IT is unique in that it is not always the core competency of a company, but in the digital age DevOps has become an absolutely indispensable way to drive business value.

The DevOps pipeline begins with agile methodology, expands to CI/CD, and finally to a DevOps culture. Source: The Modern Developer

Ten years ago, Agile and Lean methodologies yielded incredible windfalls for application teams the world over. You'd be hard-pressed to find developers today who are not working in an Agile or Lean manner to some degree.

Once the fruit of those trees was exhausted, however, the next pressing bottleneck was in Operations, whose job centers around availability - of services, of data, of infrastructure. These responsibilities stand in stark opposition to development, whose job centers around delivering features.

The answer is fundamentally, strikingly simple: Bring these disparate teams to each other’s respective tables to see how their domain of expertise can help the other to move more quickly while maintaining reliability.  This philosophy of working together is the basic underpinning of what DevOps really is.

Why so simple a goal is so difficult to achieve

No matter how easy it is to grasp the concept that we should all get along, the pressing day-to-day needs of our organizations make achieving this goal anything but a breeze.  That's why it's imperative that the decision to put specific practices in place consider both sides of the DevOps coin.  In fact, these practices were once at the forefront of the DevOps movement under the guiding light of the renowned Gene Kim, who heralded DevOps's Three Ways:

  1. Systems Thinking - understand the organization as an organism

  2. Amplify Feedback Loops - consistent, 360 degree awareness of the what, where, when, and why of problems

  3. Culture of Continual Experimentation and Learning - engineers are problem solvers and are best put to use engaging in solving problems

 

The three general practice focuses can be boiled down to more specific practice areas including CI/CD, Configuration Management, Automated Testing, and the holy grail, Automated Self-Service.  Each of these areas represents a microcosm of the Three Ways in practice, with the common goal of creating a finely tuned concert that automates the delivery of business-critical features in an incredibly reliable manner.  (They are also at the heart of the Morpheus orchestration platform, but this isn't a product pitch.)

At the end of the day, DevOps has to be organization-wide.  Frequently, it doesn't work unless it starts at the top of the food chain.  I don't mean to discourage any eager-beaver individual contributors from proselytizing the heathens they work with, and by all means they should continue to do so, but buy-in for a better way of working together needs to happen in those crucial corner offices of the IT organization, or it won't get the complete lift.

Most importantly, an organization doesn't need to go out and hire people or buy tools to start thinking in a DevOps way, even though many of the people and tools with "DevOps" in their tagline bring features that can enable all sides of an organization to play well together.  One thing I encourage everyone to consider is that these days it is rare for an organization to simply buy off the shelf software.  Our IT vendors look more and more like integral and valuable members of the business.  I encourage you to lean on them to help you "do the DevOps."

 

If you want to hop on a call to see how Morpheus can bridge the Dev-Ops gap we’d love to chat.  

How scaling and bursting help futureproof your clouds


Someday, something will cause the traffic on your network to spike unexpectedly. Maybe the jump will be the result of a royal engagement, as Canadian fashion company Line the Label experienced late last year. When Prince Harry announced his engagement to Meghan Markle, now the Duchess of Sussex, she was interviewed wearing one of Line the Label's coats. The ensuing crush of traffic brought down the clothier's site.

 

More likely your demand spike will be caused by something mundane, such as a bot attack or run-of-the-mill hardware failure. The best way to prepare for the worst-case traffic scenario is by taking advantage of the scalability and connectability of the cloud.

 

Information Age's Nick Ismail believes what he calls the "Meghan Effect" can be traced to IT departments relying too much on in-house data centers. Ismail writes that any on-premises infrastructure will reach its limit -- perhaps sooner than later.

 

IT managers often cite concerns about cost control and compliance as reasons to keep operations inside their own walls. However, they can't deny that data resources have never resided solely on-premises: every disaster-contingency plan worth a lick includes complete, up-to-date backups stored far offsite and ready to be restored at a moment's notice. Today's hybrid clouds combine the protection of real-time offsite backup with the price efficiency of an on-demand utility.

 

Make best use of the cloud's ability to burst

 

Ismail highlights another capability of cloud computing that IT departments need: the ability to scale out, or "burst" available capacity on demand. If your in-house setup is capable of handling an unanticipated leap in traffic that's a factor of 10 greater than a typical load, you're probably spending way too much on hardware.
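The burst decision described above can be sketched in a few lines. All of the capacity numbers and thresholds below are hypothetical, not any vendor's API: the idea is simply to size the on-premises tier for typical load and spill overflow to the cloud only when a threshold is crossed.

```python
# Illustrative sketch of a cloud-bursting decision (hypothetical thresholds,
# no real cloud SDK): keep baseline load on-premises and spill overflow
# capacity to the cloud only when utilization crosses a threshold.

ON_PREM_CAPACITY = 1000   # requests/sec the in-house cluster can absorb
BURST_THRESHOLD = 0.8     # start bursting at 80% of local capacity

def placement(current_rps: int) -> dict:
    """Split incoming load between on-prem and cloud capacity."""
    local = min(current_rps, int(ON_PREM_CAPACITY * BURST_THRESHOLD))
    overflow = max(0, current_rps - local)
    return {"on_prem_rps": local, "cloud_rps": overflow}

print(placement(500))    # a normal day: everything stays on-prem
print(placement(8000))   # a 10x spike: the overflow bursts to the cloud
```

With this shape, the in-house hardware only needs to cover the baseline, and the 10x spike becomes a cloud bill instead of a capital expense.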

 


 

A typical cloud-scaling scenario uses service agents to detect simultaneous accesses (1), create redundant instances (2), and send an alert when the workload limit has been exceeded (3 and 4). Source: CSDN
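The agent logic in the caption above can be sketched as a toy autoscaler. The per-instance capacity and workload limit are made-up numbers for illustration only:

```python
# A minimal sketch of the scaling-agent steps above (hypothetical figures):
# detect concurrent accesses, create redundant instances as needed, and
# raise an alert once the workload limit is exceeded.

PER_INSTANCE_CAPACITY = 100   # concurrent requests one instance can serve
MAX_INSTANCES = 5             # hard workload limit

def scale(concurrent_requests: int) -> tuple:
    """Return (instances to run, alert flag) for the current load."""
    needed = -(-concurrent_requests // PER_INSTANCE_CAPACITY)  # ceiling division
    alert = needed > MAX_INSTANCES
    return min(needed, MAX_INSTANCES), alert

print(scale(180))  # (2, False): a redundant instance is created
print(scale(620))  # (5, True): limit exceeded, alert raised
```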

 

The traditional way to accommodate wide swings in network traffic volume is via load balancing done by physical appliances in the data center. Of course, these devices are subject to the same capacity limitations and extended idle times as in-house servers. 

 

Some cloud approaches to load balancing use a virtual version of the physical appliance that's based on the same architecture as the hardware it replaces. Software load balancers offer near-real-time elasticity that extends from the data center to the cloud. They go one step further by adding application analytics to support predictive auto-scaling. 

 

Morpheus's physical stack integration combines built-in load balancing with native or third-party monitoring, logging, and incident handling. New load balancers can be brought online automatically to respond almost instantly to sudden, dramatic, and unexpected swings in traffic volumes. In addition, Morpheus's Intelligent Analytics let you tap into hidden details about how you're using VMs, containers, and public clouds. 

 

Gauge your network's burstability via load testing

 

Any application that is likely to experience wide variations in use patterns needs to be performance-tested under a range of scenarios before it's deployed. DZone MVB Noga Cohen offers five tips for scaling a site or app to one million users. As you might expect, Cohen's first suggestion is to give yourself time to work up to load testing your peak anticipated load. In other words, don't start by testing a million-user load.

 

The best approach is to incorporate load testing in your continuous integration process. This lowers the chances that a traffic spike will catch you by surprise. In addition to recording all load-test scenarios, you need to remove all resources unrelated to the test so the app doesn't crash due to a bottleneck that exists in the test environment itself, not in the real world.
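The ramp-up advice above can be expressed as a simple schedule generator. This is a sketch with synthetic numbers; a real CI job would drive actual traffic with a load-testing tool at each recorded step:

```python
# Hedged sketch of working up to the peak load rather than starting at it:
# generate a geometric ramp of simulated user counts, each step doubling
# toward the anticipated peak, so every round is recorded before the next.

def ramp_schedule(peak_users: int, steps: int) -> list:
    """Geometric ramp from a small load up to the anticipated peak."""
    schedule = []
    load = max(1, peak_users // (2 ** (steps - 1)))
    for _ in range(steps):
        schedule.append(min(load, peak_users))
        load *= 2
    return schedule

# Work up to a 1,000,000-user test over six recorded rounds.
print(ramp_schedule(1_000_000, 6))
```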

 


 

By incorporating load testing in the continuous integration/continuous delivery process, the development and operations teams are linked via self-monitoring and analytics features built directly into applications. Source: Sanjeev Sharma

 

Finally, analyze each round of load tests in real time, and all test results in the series. Not only will a time-series analysis reveal weaknesses and bottlenecks, the results paint a picture of the system in action as conditions, components, and features change. Of course, the analysis will also confirm that your fixes are effective and complete.

 

How hybrid cloud's scalability maximizes the value of your data

 

In some IT departments, the cloud is treated as nothing more than an extension of the company's in-house operations. Forbes' Adrian Bridgewater writes that "the real power and opportunity lies in the aggregation and analysis of all the information being processed through the cloud." According to an IT executive quoted by Bridgewater, the primary reason companies are unable to realize the full potential of cloud infrastructure is a shortage of technical expertise.

 

The scarcity of cloud talent causes many organizations to filter their data analytics through the prism of their in-house tools and processes. They end up with a "hybrid cloud" that is actually two separate and distinct operations: one on-premises and the other in the cloud. John Zanni writes on IT Pro Portal that an efficient hybrid architecture can't "discriminate between on-premises and cloud environments."

 

Tomorrow's applications will rely more than ever on scalability. The only way to ensure apps running on hybrid clouds can scale on demand is by using cloud-native tools. Machine learning is a prime example of the need to accommodate massive amounts of data dispersed across public and private clouds. EnterpriseTech's Kurt Kuckein describes a machine-learning project at the University of Miami's Center for Computational Sciences that pushed hybrid-cloud scaling to the limit.

 

The school worked with the city of Miami to implement a hybrid cloud for the collection and real-time analysis of data received at a rate of 10 times per second from more than 100 sensors spread across a 60-square-block area. The goal was to improve public safety for drivers and pedestrians by optimizing the city's service and maintenance operations. Artificial intelligence projects such as this quickly run into problems as they attempt to scale from prototypes to production.

 

Typical scaling problems include the inability to provide data access at the speed required for real-time analysis, the failure to maintain a reasonable and cost-efficient data-storage footprint, and outputs that can't scale because inputs and the deep-learning networks themselves are unable to grow quickly enough.

 

 

 

A taxonomy of cloud auto-scaling identifies attributes of auto-scaling for SaaS, PaaS, and IaaS, as well as areas of modeling and prediction (simulation-based, analytical, migration/configuration, and price model). Source: Hanieh Alipour, Xiaoyan Liu, and Abdelwahab Hamou-Lhadj, via ResearchGate

 

Scaling becomes the foundation of application resiliency

 

No area benefits more from cloud scaling than disaster recovery. Not long ago, static application deployments in large, centralized data centers required always-on scaling able to accommodate max workloads, as NetworkWorld contributor Kris Beevers writes. Component-level redundancy for specific app and database servers was layered atop the system-wide DR setup encompassing the full application infrastructure. You ended up with a lot of infrastructure sitting idle most of the time.

 

With cloud infrastructure, DR takes on an entirely new, streamlined complexion. Now thin provisioning and auto-scaling allow resources to be deployed in an instant based on current workloads and conditions. Multi-master database replication systems and global load balancing support active/active setups, the equivalent of switching between two continuously replicated DR configurations.

 

One danger of auto-scaling is the potential for overspending on cloud services. InfoWorld's David Linthicum identifies over-reliance on auto-scaling and auto-provisioning services as a prime reason why companies burn through their cloud budget quicker than they planned. When you allow a public-cloud provider to determine the resources your apps need for optimal performance, it's like handing the service a blank check.
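One way to avoid handing the provider that blank check is to put a budget guardrail in front of the autoscaler. The rates and limits below are hypothetical, purely to show the shape of the check:

```python
# Illustrative guardrail against auto-scaling overspend (made-up rates and
# limits): cap the instances the autoscaler may add once projected monthly
# spend would exceed the agreed budget.

HOURLY_RATE = 0.10        # $/instance-hour (hypothetical)
MONTHLY_BUDGET = 5000.0   # $ cap agreed with finance
HOURS_PER_MONTH = 730

def max_affordable_instances() -> int:
    return int(MONTHLY_BUDGET / (HOURLY_RATE * HOURS_PER_MONTH))

def approve_scale_up(current: int, requested: int) -> int:
    """Grant only as much of the autoscaler's request as the budget allows."""
    return max(0, min(requested, max_affordable_instances() - current))

print(max_affordable_instances())   # 68 instances fit the budget
print(approve_scale_up(60, 20))     # only 8 of the 20 requested are approved
```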

 

To ensure your multi-cloud setup is delivering the perfect mix of performance and resource consumption, take advantage of the dashboard interface and easy-to-configure alerts offered by the Morpheus unified cloud management service. For example, Morpheus's Intelligent Analytics let you view monthly costs by cloud, and instance types by cloud and by group (for memory, storage, and CPUs/cores). You can view and compare public-cloud costs to ensure your workloads are mapped to the best infrastructure.

 

That's hybrid-cloud scaling that both the CIO and the CFO will appreciate.

Top 5 Reasons VMware Users Love Morpheus

Countdown to #VMworld2018!

It’s going to be my first VMware event with Morpheus after having spent many a VMworld attached to the storage industry. My own journey from storage specialist to the world of multi-cloud and DevOps isn’t that far removed from changes in the event itself.  For many years, the platinum sponsor pedestal was dominated by EMC, NetApp, and other storage stalwarts. 

However, as traditional architectures have made way for hyper-converged and integrated systems so has the tone and tenor of the event changed with the times.  Recently it evolved again, moving beyond the integrated systems hype to increasing focus on cloud, containerization, and automation.  

I’m excited to see that momentum continue this year and proud that Morpheus is at the center of many large-scale customer transformations with our next generation cloud management platform.  We just closed our biggest customer deal yet and already have dozens of household names under our belt.  

In advance of the event, I spoke with many of our customers, partners, and solution architects to tease out exactly why our unified approach to multi-cloud self-service and orchestration has been so transformational. I took the data points and put together an infographic that may whet your appetite enough to swing by the booth.

 

 

 

Come by and see us at Booth #2136, or just skip straight to the finish and set up time for a demo with one of our specialists. We've orchestrated and automated the provisioning of over 225,000 instances for some of the most complex enterprises on the planet and would love an opportunity to help you get to what's next.

Your plan for sourcing and supporting I&O automation


Your company's early DevOps projects went off without a hitch, resulting in automated deployment, continuous integration/continuous delivery, and other promised benefits. So why did your DevOps plans go off the rails when you tried to implement them throughout the organization? 

You may have succeeded in changing the processes employed in your pilot projects, but you failed to adapt your IT department's culture to the DevOps way of doing things. In an October 17, 2017, article on DataQuest, Siba Prasad Pulugurty cites a Gartner report forecasting that by 2018, 90 percent of I&O organizations attempting to implement DevOps without altering their culture will fail.

Here is Pulugurty's seven-step plan for establishing a DevOps culture:

  • Break down silos by ensuring that all teams collaborate seamlessly with one another -- they all must function as "one team."
  • Early in the process, make sure all stakeholders understand and agree on their shared objective.
  • Train from the bottom up (line workers first) to prepare everyone for the change in work processes.
  • Welcome failures as a sign of progress, but only if they are detected early and corrected quickly.
  • Implement DevOps bit-by-bit, choosing key projects, and continue to run existing systems in parallel with new ones (bimodal IT).
  • Reinforce the "one team" concept by emphasizing the importance of continual communications and transparency among all parties for all projects.
  • Remember that your project's success will rely not on the automation of systems, but rather on the collaboration of humans.

 

When expanding the use of DevOps in an organization, information issues and technology issues are dwarfed by process issues and people issues. Source: Gartner

 

I&O configuration/change management proliferates

The emphasis on cultural change for successful implementation of I&O automation is echoed by Forrester principal analyst Robert Stroud in an October 31, 2017, article on Information Management. Stroud identifies two important benefits of continuous delivery and release automation: the ability to deploy automatically in multiple formats, and to restore the environment instantly should a particular deployment fail.

The onus is on I&O professionals to shorten their existing change-management practices by automating manual procedures. As Stroud puts it, I&O must "speed up deployment cadence." The trick is doing so while also validating the quality of the deployed applications just as quickly. This requires the choice of the most effective metrics for the particular system, ensuring that the metrics "align to velocity throughput and success."

No single DevOps tool is capable of collecting, analyzing, and reporting crucial performance metrics on hybrid cloud systems. According to Gartner, organizations will mix and match I&O automation technologies from 20 or more services for many years to come.

 

Research firm Gartner expects the number of separate I&O automation technologies to proliferate, requiring a unified method for planning, implementing, and managing automation components. Source: Gartner, via Advanced Systems Concepts

Before selecting their custom automation toolset, companies need a big-picture view that identifies the needs of all areas. As Jim Manias writes in a September 20, 2017, article on ITProPortal, the goal of automation is to streamline workflows and processes. These include workload automation, IT process automation (ITPA), and application release automation (ARA).

 

A clear-cut plan keeps you ahead of the I&O automation jigsaw puzzle

Most companies find themselves with a patchwork of automation tools in place. It's natural that staff picked up a tool on this DevOps project and another tool on that project. When you transition to an enterprise-wide approach to automation, you have to account for the many disparate tools already in use.

 

The resulting mish-mash lacks any central control. Even worse, you've set off on a journey without knowing your destination. As Manias points out, this is a recipe for wasted resources and underutilized IT skills. The solution is to hit the pause button and devise a comprehensive automation infrastructure, even if this means pulling back from some existing projects.

The benefits of a unified I&O automation infrastructure are the provision of a single point of control, the ability to incorporate new technologies seamlessly, and "strategic resource utilization." In addition, the improved transparency of a unified automation platform simplifies governance. However, enhanced manageability is only one of several factors that need to be addressed. The long-term success of your I&O automation strategy hinges on the three IT standbys: Security, efficiency, and usability.

 

Make the break from 'deterministic' automation

The pattern for most organizations is to implement automation one manual process at a time. While the piecemeal approach leads to lower costs and improved quality of service in discrete areas, it doesn't scale, so the benefits can't be extended throughout the enterprise. Vijaya Shanker writes in an April 27, 2017, article on CXOtoday that the current "deterministic" approach to I&O automation must give way to the creation of a comprehensive service automation management strategy.

 

Shanker highlights three keys to service automation:

  • Reducing the need for human intervention eliminates the number-one source of system errors.
  • Auto-remediation, automatic resource fulfillment, and auto-provisioning deliver process efficiencies that translate directly into improved productivity.
  • All the efficiencies in the world are worthless if they don't make life easier for your customers in business units, so keep their needs in mind first, last, and always.

 

Key technologies enabling I&O automation are infrastructure as code (IaC), which adds a software layer atop the physical infrastructure; and containers, which make it possible to define and manage complete application stacks as standardized text files and images. Source: Gartner

 

Now that hybrid clouds are the mainstay of IT operations, your workloads must migrate seamlessly between the public cloud and local hosts. Creating the unified platform hybrid clouds require begins and ends with the physical infrastructure. Gartner's 2018 Planning Guide for Infrastructure and Operations (Gartner account required) emphasizes the critical role automation plays in providing the scale and agility today's cloud operations depend on.

 

Gartner executive Tony Iams writes in an October 12, 2017, post that automation frees I&O staff to work on activities that provide "real business value" rather than on "low-impact management tasks." In addition, you're better able to change your infrastructure quickly to keep pace with "evolving business requirements." Another benefit of infrastructure as code (IaC) is immutability, which ensures that the individual components comprising your hybrid network are always in a known-good-configuration state.
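The immutability idea can be illustrated with a toy desired-state diff. This is a hypothetical resource model, not how any particular IaC tool is implemented: the point is that drifted components are replaced to restore the declared state, never patched in place.

```python
# Toy sketch of infrastructure-as-code immutability (hypothetical resource
# model): the desired state is declared as data, actual state is diffed
# against it, and any drifted component is scheduled for replacement.

DESIRED = {"web": {"image": "app:1.4", "replicas": 3}}

def plan(actual: dict) -> list:
    """Diff actual state against the declared state, IaC-style."""
    actions = []
    for name, spec in DESIRED.items():
        if actual.get(name) != spec:
            actions.append(f"replace {name} -> {spec}")
    return actions

print(plan({"web": {"image": "app:1.3", "replicas": 3}}))  # drift detected
print(plan(DESIRED))                                       # known-good state
```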

 

The elimination of siloed "snowflake" applications and subsystems means that I&O staff must broaden their view of business processes. The new collaborative environment means any new architecture component can impact all existing and future systems. No longer can a single team or team member claim total control over any slice of the operating environment. Once again, a culture of sharing, openness, and transparency is paramount as a prerequisite for modern I&O management.

 

Hybrid cloud automation relies on unified policy enforcement

I&O leaders have been deterred from jumping on the hybrid cloud bandwagon by the inability to automate enforcement of a single set of policies uniformly across the public and private components of their hybrid setup. As Rhett Dillingham explains in a June 1, 2017, article on Forbes, without automated policy enforcement, application owners are free to pick and choose which cloud services meet their specific needs.

The result is a disjointed collection of "bespoke application implementations" that vary in their approach to security, capacity management, and tool selection. Despite the recent flood of cloud management services from top vendors, there is still no comprehensive hybrid cloud solution that enterprises can implement right out of the box. 

The need for seamless, transparent service automation for hybrid clouds is heightened for I&O leaders because CEOs and CFOs increasingly look to cloud solutions as a way to improve productivity and reduce expenses. To address these expectations, I&O managers will consolidate to a handful of cloud vendors to "maximize discounts and reduce training and expense costs," according to Dillingham.

 

Now that enterprises have acknowledged the central role of hybrid clouds in their business strategies, it falls to I&O leaders in the organization to deliver the consistency, reliability, and security their customers have come to expect from their experiences with on-premises systems. Your plan for automating service management for hybrid clouds starts with a single control point from which a uniform set of policies can be enforced on all the various public and private components.

Morpheus CMP awarded a Best of VMworld for Agile Operations and Automation


On the plane back from VMworld US 2018 I was exhausted after 6 days in Las Vegas but also so excited I couldn’t sleep.  VMworld has definitively shifted towards Multi-Cloud Orchestration and DevOps Automation over the last few years and while we may not yet be at the point of renting out the ultra-lounge and hosting thousands of attendees, Morpheus did have a breakthrough event in only our second year.

Prior to the event, we announced major updates to our next-generation cloud management platform (CMP).  With v3.5 in July, we added SDN, Storage, and Automation features to help customers: 

  • Simplify networking with VMware NSX and phpIPAM
  • Assure service levels with the Zerto IT Resilience Platform
  • Scale IT automation with Ansible Tower and Ansible Vault

Taking home the Gold

Morpheus Data is honored to have our multi-cloud management platform receive a Best of VMworld 2018 U.S. award for Agile Operations and Automation.  These awards recognize the most innovative new products in the virtualization and cloud market.  75 products were judged by vExperts and editors at TechTarget on innovation, ease of integration into existing environments, and ability to fill a market gap.  We want to give a big “THANK YOU!” to TechTarget and the entire VMware community for this recognition.

To win the Gold Award, Morpheus was compared against many disruptive products “that help operations teams deploy and support apps, as well as those products that monitor, track and manage workloads either in on-premises servers or in cloud environments, or that enable the migration of workloads across cloud platforms.”

 

Why did we win?  Businesses are bringing more software development in-house, yet traditional IT organizations are overwhelmed by modern application lifecycles and hybrid cloud management.  It often takes 40 hours or more for IT to fulfill service requests due to complex handoffs between teams, tools, and processes.   

As a 100% infrastructure- and platform-agnostic orchestrator, Morpheus provides IT Ops and Developers with fully self-service, automated provisioning of bare metal, VM, and containerized instances (IaaS/CaaS), as well as full app stacks (PaaS), deployed into virtually any cloud (on-prem and off).  Morpheus doesn’t replace the best-in-class products customers already have for various functions, but it does provide a unified control plane to bring those tools under management and enables IT to deliver resources to developers in minutes instead of days.

 Plus, with over 75 third-party integrations shipped ‘out-of-the-box’, Morpheus can be up and running in under an hour with near-instant connections into virtually every cloud, platform, and app lifecycle tool found in enterprise IT shops.  Full list available at www.morpheusdata.com/integrations.  The service catalog ships with 50 of the most popular app components to get customers started as well as a built-in image creation/conversion tool and sophisticated blueprinting engine for custom catalog items.

 

It's about people and process as much as tools and tech

For your multi-cloud orchestration and DevOps automation projects to be successful, you've got to address the basic needs of all stakeholders.  Many tools in this space appeal to just one or two and as a result, they are never truly embraced and deployed widely.  

  • For Infrastructure teams, we standardize management across what would otherwise be a heterogeneous mix of disconnected cloud platforms and tools, plus we expose native cloud services, so you are not stuck with ‘least common denominator’ features.  Morpheus also provides a simple GUI on top of tools such as Ansible, Chef, Salt, Puppet, HashiCorp Terraform, Packer, etc.
  • For Developers, we provide a robust API/CLI with over 450 commands, so teams can ‘bring their own tools’ and accelerate app modernization.   Teams can tie into Jenkins and Git for code deploys as part of CI/CD pipelines.  Our multi-tier and multi-cloud blueprints provide our native JSON/YAML DSL or support Terraform and Microsoft ARM templates for Infrastructure as Code. 
  • For Business teams, we provide optimization via machine-learning-based analytics and cloud brokering.  The solution provides a detailed audit log and centralized multi-tenant role-based access for security and compliance.  Morpheus also supports native approvals, expirations, and workflows as well as deep integration into ServiceNow, so teams don’t need to learn anything new.

 

Don't worry if you missed us in Vegas.  

We'd love to hop on the phone and learn more about your multi-cloud management challenges and set up a tailored demo for your use case.  From there a PoC deployment at your site is an easy next step... we've had some of the largest enterprises in the world up and running in production in under 60 days.  Just check out this case study on AstraZeneca.

 

Can’t wait to see you in Barcelona!

As for VMworld itself, hundreds of #cloud and #DevOps heroes are sporting a new outfit.  We even spotted a number of them walking around the show.  You know you hit a home run when folks wear your swag instead of what they packed from home… forget Maximum Effort and embrace Maximum Agility!

 

 

Match multi-cloud workloads to specific cloud types to maximize your apps' value


Not many companies are able to rely exclusively on cloud-native apps. Most are balancing cloud-hosted workloads with their counterparts running on premises and will be doing so for some time to come. Getting the balance right is a big reason why multi-cloud is being adopted so widely.

In multi-cloud settings, workloads need to be agile to take advantage of the range of cloud services available. The challenge is figuring out how to parse the convoluted pricing schemes and feature sets of the public cloud providers. As ITProPortal's Gaurav Yadav points out, cloud service brokerages deliver value by aggregating public cloud services via a single point of access.

Not only does multi-cloud brokerage allow you to run your service catalog less expensively, but you also avoid being locked into the services of a single provider. Companies are also finding that unified orchestration and cloud management platforms such as Morpheus allow them to save money by matching the right workload to the right cloud without jumping between provisioning tools.

Finding and maintaining the perfect mix of public/managed/private cloud infrastructure requires consolidated visibility: metrics, logs, cost, monitoring, and other app indicators in real time. 

The primary deployment method by workload/business function through 2019 for private, public, and hybrid clouds. Source: 451 Research

Intelligent Analytics provides fine-grained usage data on VMs, containers, and public clouds, allowing you to resize app components, create power schedules, and find the optimum balance of cost, capacity, and performance.  Being able to do all of this across clouds and platforms is a primary reason for the growing popularity of the Morpheus unified orchestration platform.

Avoid hidden lock-in traps

Adopting a multi-cloud strategy is no guarantee you're protected against vendor lock-in, as NetworkWorld contributor Kyle York explains. You're still left to deal with multiple proprietary APIs, unless you partner with a cloud management provider that supports open APIs and such open-source tools as Kubernetes, Kafka, and Terraform. 

Multi-cloud management goes beyond basic interoperability to include supporting second- and third-tier services such as DNS, load balancing, content delivery networks, and web application firewalls (WAF). These and other services must be integrated with the infrastructure endpoints of competing cloud platforms.

Likewise, avoid the major platforms' proprietary development tools when creating cloud-native apps. These services often rely on non-integrated, discrete, and proprietary features. Look for a cloud provider that offers a stack that is open, standards-based, and composable.

Lastly, don't get stuck with a cloud partner who doesn't support non-cloud native workloads. For better or worse, enterprises continue to depend on monolithic "mode 1" applications and are not going to modernize in one fell swoop. Your cloud management platform should provide a comprehensive set of tools for working with the full gamut of workloads, whether they run in-house or in the cloud.

 

By 2020, 41 percent of enterprise workloads will run on the public cloud, 20 percent will run on private clouds, and 22 percent will run on hybrid clouds. Source: LogicMonitor, via Enterprise Irregulars

Take full advantage of the full range of cloud services

You can never have too many options. The trick is knowing how your many different cloud alternatives correspond to the unique characteristics of your workloads. Datamation's Christine Taylor compares multi-cloud management to shopping: For everyday items, the big-box store is the place to go, but for specialty purchases, a trip to a boutique is more appropriate.

Taylor breaks the multi-cloud mix into four categories: private clouds, hyperscaled public clouds, cloud service providers, and hybrid clouds. She points out a major weakness of hyperscaled public clouds: they're basically DIY affairs whose profit model assumes very little will be spent on customer support. Yet the typical multi-cloud infrastructure is getting more and more complicated.

A growing number of companies are finding that the best way to maximize their multi-cloud investment is by partnering with Morpheus to provide the unified orchestration that knits all the multi-cloud pieces together. Taylor writes that CSPs such as Morpheus offer "the best of public and private clouds": the dynamic scalability of the former and the high performance and fast recovery of the latter. In addition to workload optimization, CSPs provide responsive customer service, customized SLAs, and flexibility.

After all, wouldn't you rather have a cloud "partner" that actually acts like one?


Right-size your multi-cloud workloads using AI and machine learning


The only thing better than knowing what your customers are doing right now is knowing what they're going to do before they do it.

Much like the adoption of autonomous vehicles promises to transform how we deal with traffic, the use of predictive analytics in IT is fundamentally changing the way businesses operate. That's the conclusion of Tim Sandle on Digital Journal.

Widespread use of predictive analytics requires overcoming two obstacles: First, companies are struggling to find IT staff with the right set of skills; and second, analytics are resource-intensive, which drives up costs. Things are getting easier as vendors start to embed AI into software and hardware layers in the IT stack. Here are some other perspectives on how to ensure efficient provisioning of cloud workloads.

The difficulty of calculating cloud total cost of ownership

The bottom-line calculation for any application is total cost of ownership (TCO) when running on-premises vs. TCO on multi-cloud infrastructure. You end up with an apples-to-oranges comparison of capital expenses (hardware/software/maintenance) vs. operational expenses (cloud services for compute, storage, and other resources, plus app rebuild and migration costs).

Being able to assign and track costs to specific users, applications, and clouds is one piece of the puzzle that the multi-cloud orchestration offered by Morpheus can help resolve. Gathering this data in a centralized way is a foundational element for rightsizing.

Tim Lebel writes on InfoWorld that miscalculating TCO causes companies to spend more than they expect on cloud migrations, services, and maintenance. The first step in determining cloud TCO is assessing the workload requirements of your applications in three categories:

  1. Predictability of the workload's demands for cloud resources, including migration costs and recurring costs
  2. Flexibility of a particular service to accommodate the application's unique dependencies, or whether several services should be combined to meet the application's resource needs
  3. Control to ensure you have a clear and complete view of the application's use and operation, including being able to adjust quickly to changing conditions and user requirements
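A back-of-the-envelope version of that on-prem vs. cloud comparison might look like this. All figures below are hypothetical, chosen only to show the shape of the calculation:

```python
# Simplified TCO sketch (all numbers hypothetical): compare up-front capital
# expense plus maintenance against one-time migration cost plus recurring
# cloud service fees over the same planning horizon.

def on_prem_tco(capex: float, annual_opex: float, years: int) -> float:
    """Hardware/software purchase plus maintenance over the horizon."""
    return capex + annual_opex * years

def cloud_tco(monthly_spend: float, migration_cost: float, years: int) -> float:
    """One-time migration/rebuild cost plus recurring service fees."""
    return migration_cost + monthly_spend * 12 * years

# Five-year horizon, echoing the CapEx/OpEx forecast in the example below.
print(on_prem_tco(capex=400_000, annual_opex=60_000, years=5))          # 700000
print(cloud_tco(monthly_spend=9_000, migration_cost=80_000, years=5))   # 620000
```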

 

An example cloud TCO calculation forecasts CapEx and OpEx costs over a five-year period for an organization migrating to web-scale infrastructure. Source: Steve Kaplan, via Wikibon

Signs of wasted workload resources

The only way to keep your applications running at peak performance is to determine the optimum instance types and sizes for your workloads. TechTarget's Kathleen Casey identifies three measures that indicate a problem related to instances:

  • Long runtimes
  • Failure to accommodate usage spikes
  • Having to add new instances to a workload

Cloud usage reports help you address workload inefficiencies by determining more precisely the memory, virtual CPU cores, or other resources your workloads need.

The general rule is that selecting larger instances will reduce your total instance count, which translates into lower costs. As with most general rules, there are plenty of exceptions. For bursty workloads, long-term commitments such as Amazon EC2 Reserved Instances may be best for the baseline load, while larger instance types are used for bursts and spot-market instances are applied to other peak loads.

AWS claims its Spot Instances, which draw on spare EC2 capacity, can save customers as much as 90 percent compared to the cost of On-Demand Instances. Similarly, Google Preemptible VMs cost up to 70 percent less than the company's standard instances, although these instances terminate after 24 hours or whenever the resources are required for other tasks.
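Using the spot discount cited above, plus an assumed on-demand rate and reserved-commitment discount, a toy blended-cost calculation shows why splitting baseline from burst pays off:

```python
# Blended-cost sketch: reserved capacity covers the steady baseline while
# discounted spot capacity absorbs bursts. All rates are assumptions.

ON_DEMAND_RATE = 0.10     # $/instance-hour, invented for the example
RESERVED_DISCOUNT = 0.40  # vs. on-demand, an assumed commitment discount
SPOT_DISCOUNT = 0.90      # the "up to 90 percent" figure AWS claims

def blended_cost(baseline_hours, burst_hours):
    """Cost of serving baseline hours on reserved and burst hours on spot."""
    reserved = baseline_hours * ON_DEMAND_RATE * (1 - RESERVED_DISCOUNT)
    spot = burst_hours * ON_DEMAND_RATE * (1 - SPOT_DISCOUNT)
    return reserved + spot

on_demand_only = (700 + 300) * ON_DEMAND_RATE  # everything at the list rate
mixed = blended_cost(baseline_hours=700, burst_hours=300)
print(f"on-demand only: ${on_demand_only:.2f}, blended: ${mixed:.2f}")
```

With these made-up numbers the blended mix costs less than half of running everything on demand; the real savings depend entirely on how accurately you can separate baseline from burst hours.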

Instance performance for hypervisors in EC2 is shown by comparing old virtualization types (top rows) to new types (bottom rows) along with expected performance by resource from most important (CPU, memory) to least important (motherboard, boot). Source: Brendan Gregg

One challenge faced in a multi-cloud world is the fact that every cloud vendor has its own approach to reporting. On-premises infrastructure is often poorly understood. Consolidating cloud management in a unified view can help normalize reporting, as demonstrated by the Morpheus cloud management service's Intelligent Analytics feature. When coupled with guided recommendations and migration tools, consolidation can cut costs by tens of thousands of dollars per month.

Avoid the most common cloud-overprovisioning mistakes

The number-one cause of companies spending too much on their cloud services is failing to understand how their applications actually operate. On ITProPortal, Yama Habibzai writes that the workload type determines the preferred instance qualities. For example, savings should be greatest for a batch processing job with infrequent high utilization, which is suited to instances that turn off when inactive so you aren't paying for CPU cycles you don't need.

An accurate workload assessment requires applying a statistical model of workload patterns that represent hourly, daily, monthly, and quarterly activity. Keep in mind that many applications are designed to take as many resources as the system makes available. Your reporting tool may recommend boosting CPU resources if utilization stays near 100 percent, but after you provision new resources, you're back to 100 percent utilization.
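One way to express that statistical model is to size against a high percentile of hourly samples rather than the raw peak or average, so a handful of spikes doesn't force a permanently larger instance. The sample data and thresholds below are invented:

```python
# Rightsizing sketch: size vCPUs to the 95th percentile of hourly CPU
# samples instead of the raw peak. Data and thresholds are illustrative.

def percentile(samples, p):
    """Nearest-rank percentile; avoids external dependencies."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[rank]

def recommend_vcpus(current_vcpus, hourly_cpu_pct, target_pct=70, sizing_pct=95):
    """Scale the vCPU count so the 95th-percentile load sits near the target."""
    p95 = percentile(hourly_cpu_pct, sizing_pct)
    needed = current_vcpus * p95 / target_pct
    return max(1, round(needed))

# A bursty month: mostly idle with a few spikes (CPU %, invented values).
samples = [20] * 160 + [35] * 50 + [90] * 10
print(recommend_vcpus(current_vcpus=8, hourly_cpu_pct=samples))  # 4
```

Sizing to the peak would keep all eight vCPUs for the sake of ten spiky hours; sizing to the 95th percentile halves the instance, with bursts handled by scheduling or burst capacity instead.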

This underscores the need to know how your individual workloads function at a granular level. Morpheus Data not only provides this visibility into statistical usage and instance right-sizing, it also recommends and applies smart power scheduling.

Habibzai points out that knowing how much memory your workloads are consuming won't prevent overspending. The memory an instance actually requires includes both consumed memory and active memory, as well as the memory set aside for the operating system. The goal is to accommodate a "reasonable amount of caching" while avoiding bloat and ensuring the proper balance between cost efficiency and app performance.
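That sizing rule can be sketched as active memory plus consumed cache plus the OS reserve, with bounded headroom; the GB figures and headroom factor below are assumptions for illustration:

```python
# Memory-sizing sketch: required memory is the active working set plus
# consumed cache plus the OS reserve, with modest headroom. All values
# and the headroom factor are illustrative assumptions.

def required_memory_gb(active, consumed_cache, os_reserve=1.0, headroom=0.15):
    """Accommodate reasonable caching without bloating the instance."""
    base = active + consumed_cache + os_reserve
    return round(base * (1 + headroom), 1)

print(required_memory_gb(active=6.0, consumed_cache=2.5))  # 10.9, not just the 6.0 "active" figure
```

The point of the fixed headroom factor is the balance Habibzai describes: enough slack for caching, but a ceiling that keeps cost efficiency from drifting into bloat.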

Your cloud budget can be stretched further by taking advantage of new instance types, which are likely to be hosted on newer, higher-performance hardware. Staying current with providers' catalogs also helps you cut through the complexity of their ever-expanding services and instance types, and makes it easier to identify and eliminate idle instances without deleting critical batch-processing instances that are only active infrequently.

A well-thought-out plan minimizes cloud waste

If you're still on the fence about using multiple cloud services, you're missing out. Owen Jenkins of research firm Kadence International says the benefits of multi-cloud in terms of efficiency and agility are so significant, the message from users is "just go for it." A recent study conducted by MIT Technology Review and VMware points out that companies adopting multi-cloud can expect some "growing pains," as Ty Trumbull writes on Channele2e.

The researchers asked 1,300 IT managers at enterprises around the world about their approach to cloud adoption. The results indicate that organizations can minimize migration glitches by having a comprehensive multi-cloud roadmap. The three greatest challenges to successful implementation of multi-cloud are integrating legacy systems (cited by 61 percent of respondents); the lack of skilled staff (more than 50 percent); and understanding the new technology (61 percent).

Gartner's multi-cloud management framework is designed to improve the accuracy of cloud spending estimates based on cloud providers' price lists, pricing models, discounts, and billing mechanisms. Source: Gartner

Another survey cited by ComputerWeekly's Cliff Saran found that 25 percent of businesses aren't sure what they are spending for public cloud services. In addition, 35 percent of the companies surveyed report that their spending for cloud services exceeded their budgets, and only 20 percent of organizations use automation to optimize their use of cloud infrastructure.

Addressing the two great multi-cloud challenges: Complexity and a lack of skills

The main reason for the uncertainty about multi-cloud is the complexity of the tools AWS, Microsoft Azure, and other large public cloud services offer their customers to keep track of their cloud instances. Using native tools, it's not easy to determine when instances are used and how much memory, CPU cycles, network bandwidth, and storage they consume.

I mentioned up front that many of these questions are starting to be answered in more holistic fashion as vendors build analytics into their stacks. Examples include how Nimble Storage embedded analytics in its InfoSight platform and how Morpheus Data has embedded it in its multi-cloud orchestration tool.

In both cases, turning analytics into action and leveraging a large historical data set are critical to informing the future. Morpheus provides a unified approach to orchestration for both infrastructure and development teams. Today we expose cloud usage analytics via a single dashboard interface to provide IT managers transparency into how efficiently and effectively their cloud instances are performing.

In the future, we can apply that same analytics engine to the development pipeline efforts of app-centric organizations to help them better understand how different teams are working together and how DevOps toolchains are functioning. We’re in the early stages of the autonomous infrastructure movement. It's sure to be exciting this year, next year, and into the new decade as this becomes table stakes for management tools.

Global multi-cloud management from a single interface


Since the dawn of information technology, practitioners have worn their cynicism about new technologies like a badge of honor. For the most part, the cynics are spot-on: No tech innovation ever lives up to the hype spread by its earliest proponents. Yet there comes a time when the promises of a new approach actually bear fruit that even diehard cynics can't deny.

Case in point: Volumes have been written on cloud computing and recently much of that has focused on management in a world adopting multiple clouds. One recent article compared the single-pane management approach to universal remote controls for televisions… making the point that no universal remote can include the "intricacies" unique to each separate control. So instead of one remote, you merely end up with one more added to your existing pile.

It’s a clever analogy and one I’d like to riff on if you’ll indulge me. I’ve got a ‘smart’ TV, video player, cable box, and receiver at my home. Like many of you, I probably binge-watch more than I should, but in the last 12 months I’ve used a single remote for 90% of my needs. I can rapidly access Netflix, Prime, Pandora, Comcast, and the Blu-ray from one remote while occasionally dipping into specialty remotes when I run into a corner case.

That’s true of multi-cloud management platforms like Morpheus. We aim to aggregate the vast majority of day-to-day IT operations tasks related to provisioning and to expose most cloud-native API functionality through a single orchestrated control plane. For most large multi-cloud and multi-platform enterprises, it’s a significant step forward compared to bespoke tools for every step of the application deployment lifecycle.

The proof is in happy, productive workers

Torsten Volk, a researcher for Enterprise Management Associates, believes past attempts to offer a unified interface for multi-cloud management failed to address the needs of the DevOps team. Volk cites the Morpheus unified multi-cloud orchestration platform as an example of "building a different mousetrap" rather than simply attempting to improve on existing cloud-management approaches. You can download the EMA Top3 report here.

The problems DevOps teams face include compliance, security, cost, speed, and quality. These needs are addressed by applying robust automation, according to Volk. Automation and the single-pane interface go hand-in-hand. As the focus of multi-cloud management shifts from implementation to optimization, two things become paramount: resource management and balancing workloads across pricing models. Visibility and adaptability make this level of multi-cloud optimization possible.

Enterprise IT departments are realizing millions of dollars annually in added revenues and cost savings by adopting cloud-based applications. Source: IDC, via Salesforce

Visibility and adaptability are two of the five multi-cloud "lessons learned" by Bikash Koley of the Forbes Technology Council. The other three are simplicity, openness, and security. In particular, Koley believes CIOs and CTOs need a comprehensive view of their multi-cloud infrastructure, which is only possible via a single pane of glass that puts "all functionality at your fingertips." Without this level of visibility, blind spots are created and you lose your ability to detect and respond to breaches.

Echoing the benefits of multi-cloud management via a single pane is Gaurav Yadav on Datacenter Dynamics. Creating highly available cross-cloud applications and standardized workflows requires being able to see all pertinent performance information in one window. According to Yadav, creating fault-tolerant applications that are highly available across regions and clouds is possible only if diverse cloud platforms operate and appear to users as a single, seamless solution.

Yadav concludes that today's global IT environment demands that companies take advantage of all their options for combatting threats, complying with ever-changing regulations, and ensuring agile DevOps. Tying all these disparate public/private/hybrid cloud systems together simply, seamlessly, and on the fly is the job of the Morpheus single-pane orchestration solution. End-to-end visibility into your cloud infrastructure is complemented by predictive analytics and remediation to manage costs and enhance control over apps running on bare metal, VMs, or containers.


Multi-cloud flexibility and choice require the highest level of abstraction

You owe it to your customers and your organization to take advantage of all the IT tools at your disposal. As Dan Lahl writes on JAXenter, companies should be able to build an app on Microsoft Azure, run it seamlessly on AWS, and integrate with data or processes running on Google Cloud Platform. The only way to do this is by abstraction predicated on three concepts.

  1. Choice and flexibility to avoid vendor lock-in
  2. The smooth migration of legacy apps
  3. The development and operation of their cloud-native counterparts

Standardizing on a single interface is cited as the key for reliable and efficient multi-cloud implementation. By abstracting the app interface, enterprises are able to select the underlying cloud infrastructure based on the unique characteristics of each public cloud provider. Companies have the option of co-locating new cloud apps with their legacy counterparts to ensure governance and regulatory compliance.

Topping the list of features offered by next-gen cloud management services is a unified, comprehensive view of all cloud operations that shows at a glance the status of apps throughout the DevOps cycle, regardless of their location in the public/private/hybrid mix. In addition to simplifying the orchestration of diverse multi-cloud elements, today's best-of-breed cloud managers provide intelligent deep-dive analytics based on predictive AI that is platform-agnostic. 

For a growing number of companies, the most effective approach to overcoming wasteful, unpredictable multi-cloud complexity is by applying a flexible, frictionless, and future-proof approach which unifies orchestration. Request a demo to find out what Morpheus can offer your company.

A perspective on Q3 CMP acquisitions and market requirements


I just officially hit the 1-year mark here at Morpheus after spending a decade in storage and another managing IT systems and software as a customer. Given that timing and some recent news in the CMP space, I thought a follow-up to my original #startuplife blog might be in order.

One of the biggest changes in the past year has been some serious heating up of the cloud management landscape. We saw that last month at VMworld with an increasing focus on automation and orchestration. At that event, Morpheus was proud to take home the Best of VMworld award for Agile Operations and Automation.

Analysts like Gartner have also come out this year with robust evaluation frameworks and I’m sure a 2x2 matrix will soon follow. Gartner calls out a number of critical functions that in an ideal world would span multiple platforms both on-prem and off. Like many other technologies in the formative stage, there has been some consolidation as customers seek to reduce tool sprawl and systematically address the cloud complexity crisis while also advancing DevOps initiatives.

Flexera and RightScale
News this morning was that software asset management and cost optimization company Flexera had acquired early CMP entrant RightScale. From the press release and coverage so far, it’s clear that controlling IT spend is top of mind. RightScale is a SaaS-based offering for departments that need multi-cloud optimization, and as part of Flexera it now sits within a full suite of offerings focused on cost control. Congrats to the RightScale team!

VMware and CloudHealth
Last month we saw VMware pick up cost optimization startup CloudHealth to help round out its CMP mashup, which already includes vRealize Automation, vRealize Business, Wavefront, and SaaS-based services such as Cloud Assembly, Code Stream, and Service Broker. It’s a collection of numerous products for VMware customers looking to go narrow and deep, investing in a single platform and its associated professional services. Congrats to the CloudHealth team!

The case for systematic, agnostic, and agile cloud management
Coming off our biggest single month of revenue to date and on track to grow 3x this year, I’m more excited than ever to have joined this space, and more specifically the team at Morpheus. The two acquisitions from this past quarter are a great backdrop for articulating some of the reasons we’re seeing such growth. I thought I’d provide a perspective across a few key areas that enterprise clients should consider when looking at this market.

  1. Systematic: Cost optimization is becoming a feature of full stack orchestrators as by itself it doesn’t address the root cause. In addition to cost optimization, you’ve got to check off the rest of the core cloud management functions such as governance, automation, monitoring, scaling, etc. Without this systematic approach, you end up with a fragmented set of disconnected tools.

  2. Agnostic: Point CMP products from traditional vendors such as VMware, Red Hat, and Cisco are inherently grounded in those vendors’ technology stacks and a poor fit for heterogeneous enterprises. Similarly, cloud-specific tools from AWS, Azure, and others are equally narrow. With most enterprises running three or more clouds and a mix of bare metal, VM, and containerized apps, it’s become critical to put in place abstractions that hide the underlying complexity.

  3. Agile: The cloud and DevOps space is changing incredibly fast. If it takes you a year or more to get your orchestration and automation in place, then you’ve lost before you’ve started. Unfortunately, that’s what many enterprises face today with plug-in and webhook architectures that require a ton of scripting and post-deployment integration. It also means your tool provider has to embrace agile methods and release new features constantly, not just a couple of times per year.

 

Morpheus is in a great position headed into Q4 2018, and the added visibility of recent acquisitions is only going to fuel the fire next year. As a multi-function and multi-platform CMP with more out-of-the-box integrations than any other tool in this space, we’ve automated over 225,000 application deployments spanning 20+ clouds running bare metal, virtualized, and containerized instances. We can be up and running in less than an hour and have helped some of the largest enterprises and MSPs in the world unify their tools, people, and processes.

We'd love to help you, so reach out if you’d like to hop on a call with a solution architect to discuss your multi-cloud journey and get a demo.

A long and winding road… One SA's path to Morpheus Data


Gul Paryani

I’ve been around the IT industry as a customer and on the vendor side for most of my life, including time spent working in storage, hyper-converged infrastructure, and monitoring. In that time I was exposed to most of the good, bad, and ugly that comes with infrastructure and operations in enterprise environments. In one of those previous gigs, I helped customers with visibility into infrastructure costs, including public/private cloud.

 

At the time, the level of full stack visibility was awesome! I was able to show customers that by right-sizing their cloud infrastructure they could save millions of dollars. One customer in particular was an epiphany for me. The company had lifted-and-shifted approximately ten percent of its infrastructure to the public cloud, and it intended to move most, and maybe all, of the rest later on.

 

After working with the firm for about three months, we were able to show that by right-sizing just that ten percent of migrated infrastructure, it could save about $800,000 per month… Amazing! We could save the company $6 million in one year! That’s when one senior manager spoke up (slightly paraphrased):

 

“Great job! The fact that we can realize that savings is incredible! The challenge is, we only have a team of three to implement those suggestions and we just can’t add this to the current workload. It’s great to have that data, but almost impossible to execute without adding headcount.”

 

Fast forward to my landing here at Morpheus. My epiphany came during a conversation with our CTO while I was interviewing here. The right-sizing effort in and of itself is cost-prohibitive for organizations, and more importantly, it addresses only a symptom of a much larger problem.

 

You’ve got to be systematic… not symptomatic

The main reason that right-sizing effort needs to occur is a lack of controls. That’s right… simple governance (or maybe not so simple, given how many organizations lack consistent role-based access). I’ve worked in IT for about half my career, and every company had a common theme: elevated risk and compliance issues due to poorly implemented governance.

In many cases, companies relied on the “honor system.” End users and operations staff were given a set of unwritten rules about how to provision infrastructure resources (think VMs) for business users, without real standards in place or concern for “IT hygiene,” i.e., how and why things get commissioned and decommissioned. It often took developers and R&D teams so long to get the VMs they requested that, once a project was done, they would wipe and reuse those resources rather than release and properly decommission them. The result is a huge waste of capital and manpower, or worst of all, unsuccessful outcomes for the business and massive compliance concerns.

Morpheus Data is an amazing platform for user self-service, but it’s not the speed of deployment or provisioning that amazes me. The best part of the platform is automated governance, which in turn improves the quality of IT operations and the consistency of service delivery. Setting up policies about what end users (yes, “the business!”) can provision, how much they can provision, where they can provision, and even when resources expire and get reclaimed lets the infrastructure and operations team implement the kinds of guardrails that prevent massive cloud overspend in the first place. You could even consider it compliance-as-code as a front end to infrastructure-as-code... you’ve got to have both to manage cloud complexity.
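Conceptually, compliance-as-code boils down to evaluating every provisioning request against group policy before anything gets created. Here is a minimal sketch; the policy schema and request shape are hypothetical, not Morpheus’s actual configuration:

```python
# Guardrail sketch: check a provisioning request against group policy
# before anything is created. The policy schema here is hypothetical.

POLICY = {
    "dev-team": {"max_vcpus": 8, "allowed_clouds": {"aws", "vmware"}, "max_lease_days": 30},
}

def check_request(group, request):
    """Return a list of violations; an empty list means the request passes."""
    rules = POLICY.get(group)
    if rules is None:
        return ["no policy defined for group"]
    violations = []
    if request["vcpus"] > rules["max_vcpus"]:
        violations.append("vCPU quota exceeded")
    if request["cloud"] not in rules["allowed_clouds"]:
        violations.append("cloud not permitted for this group")
    if request["lease_days"] > rules["max_lease_days"]:
        violations.append("lease exceeds maximum; resources must expire")
    return violations

print(check_request("dev-team", {"vcpus": 16, "cloud": "azure", "lease_days": 90}))
```

The key design point is that the check runs up front and every resource carries a lease: expiry and reclamation are part of the request’s contract, not an afterthought left to the honor system.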


Source: Sandeep Cashyap, Business Development @ Amazon, via SlideShare

 

By staying inside the guardrails, we are assured of three things. First, the business has the agility that cloud promises across every public or private cloud deployed. Second, the right-sizing effort gets much less labor intensive. And third, by using Morpheus, the human time required is minimized – fewer people involved in the process means fewer mistakes. Fewer mistakes mean a more repeatable process for the business. This accelerates IT staff's ability to work on those higher-level business functions, such as modernizing application stacks and getting to DevOps nirvana.

To top it all off, this highly automated and well-governed cloud management platform includes a host of brownfield discovery tools and built-in right-sizing. So not only can you get your house in order, you can keep it that way. After my first month in, I’m still overwhelmed by how much this systematic approach to CMP has changed my life in IT. 

I’d love to connect with some of you to share my new perspective and run through a quick demo of Morpheus.


The antidote for multi-cloud complexity is unified management


Start counting off multi-cloud benefits and you soon run out of fingers:

  • Avoid vendor lock-in
  • Match the right tool to the job
  • Democratize access for stakeholders
  • Balance performance and cost

Of course, there's always a catch. For multi-clouds, one of the big gotchas is complexity. A recent MIT Technology Review/VMware study (pdf) found that 57 percent of the senior IT managers surveyed report technical and skills challenges were "critical learnings" from their multi-cloud implementations. This was topped only by security (62 percent) and integrating legacy systems (59 percent). Mark Baker reports on the study on Computer Business Review.

Source: MIT Technology Review Custom

What to do when you reach the multi-cloud 'tipping point'

Sooner or later, your multi-cloud setup will reach what David Linthicum refers to as the "tipping point" at which "the number of services you use exceeds your ability to properly manage them." In a TechTarget article, Linthicum writes that the exact tipping point varies based on your company's size, the complexity of the services you use, security/governance matters, and your staff's skill set.

Linthicum lists four factors that indicate your multi-cloud will benefit from a third-party cloud management platform such as Morpheus:

  1. Are your developers unhappy about how long it takes for them to allocate resources to their applications?
  2. Are your managers uncertain about who is responsible for the security of specific cloud resources?
  3. Are your users griping about performance glitches, many of which are caused by applications not getting the cloud resources they need?
  4. Are you unable to charge back cloud costs to the appropriate departments and users?

If the answer to any of these questions is "yes," you should consider using a cloud management platform (CMP) that spans multiple clouds. Your developers benefit by being able to allocate various cloud resources to their apps directly and on-demand via GUI or API/CLI. A CMP also makes it easy to track who is provisioning specific resources and confirm that they are properly securing the workloads.

The smart folks over at Gartner have spent hundreds of hours talking to customers and vendors to come up with what is a pretty slick framework to think about the CMP space. In their "wheel" you can see the core categories of capability. There are tools that provide one of these capabilities across multiple cloud platforms. There are also tools that provide a range of these features within a narrow set of platforms. And then there are the unicorns… truly multi-function and multi-platform CMPs such as Morpheus.

Source: Gartner Evaluation Criteria for Cloud Management Platforms
 

Your multi-cloud strategy must meet the needs of multiple stakeholders

In the modern multi-cloud world, companies need a way to move between public and private clouds quickly, simply, and reliably. The only way to accomplish this is by cutting through the inherent complexity of multiple individual services, as BusinessWorld's David Webster explains. The key is to shift your focus to collaboration: place the customer experience at the center by creating "new customer engagement models."

Improving the customer experience, managing costs, and enhancing DevOps velocity are all possible with the right multi-cloud orchestration approach, one that treats Infrastructure teams, Developers, and Business users as equal citizens. Collaboration and partnerships are easier to establish when all parties share the platform that delivers the apps and underlying analytics that drive the business forward.

These personas have different needs, however, so it's key to strike a balance that delivers on each group's core need without compromising the others'. For example, IT operations teams have KPIs around security and service levels, which tends to lead to more conservative approaches to technology adoption. Developer teams, on the other hand, are all about velocity and continuous innovation. Business teams care about differentiation and innovation, but not at the expense of reputation or cost.

Business and IT Operations: Security, cost, and cross-cloud management

TechRepublic's Alison DeNisco Rayome reports that 86 percent of cloud technology decision makers at large enterprises have a multi-cloud strategy. The benefits cited by the executives include improved IT infrastructure management and flexibility (33 percent), improved cost management (33 percent), and enhanced security and compliance (30 percent).

Transitioning to a cloud-first IT operation is bound to entail overcoming inertia, adjusting to changing roles, and learning new skills. Realizing multi-cloud benefits requires overcoming challenges in three areas in particular, according to CloudTech's Gaurav Yadav:

  1. Public cloud security. While the security of the public cloud is considered robust, the transit of data from on-premises infrastructure to the public cloud needs to be carefully planned and implemented.
  2. Cost accounting. Multi-cloud commoditizes cloud resources by letting users choose the services that best meet their specific needs. To accomplish this, enterprise IT must transition from vendor-enforced workflows to a vendor-agnostic infrastructure.
  3. Unified cross-cloud view. The goal is to give users a single management platform that lets them visualize and implement workloads using multiple cloud services that are viewed as a single resource rather than as "isolated entities."

Developers: New kids with new demands

What do developers need out of the multi-cloud management equation? They are interested in full API/CLI access, infrastructure as code, and speed of deployment. As David Feuer writes on Medium, the proliferation of developer products and services is matched by increases in use cases and backend technical complexity. Feuer recommends building your multi-cloud strategy from the ground up, putting APIs and developers first.

Developers want to use cutting-edge tools to create modern apps. The results of the 2018 Stackoverflow Developer Survey show that when choosing an employer, developers' second-highest priority -- after salary and benefits -- is the languages, frameworks, and other technologies they will be working with. Considering that more than half of the developers surveyed have had their current job for less than two years, it pays for companies to give talented developers access to the tools they need to excel.
