
Managed Service Providers Find New, Profitable Roles in the Cloud


All signs point to a rocky future for managed service providers. As Zachary Purl writes in an August 1, 2016, article on Business2Community, MSP margins are shrinking, and the traditional managed services model is being commoditized.

So why are so many MSP executives smiling? What do they know that the industry pundits appear to be missing? Maybe it’s that old adage that times of turmoil are also times of opportunity. The opportunity that’s knocking loudly and clearly for MSPs is the cloud. A number of forward-thinking MSPs have put the reins in their teeth and are charging full-steam ahead into a cloudy future.

Two factors in particular work in the favor of a rosy outlook for MSPs. First, their customers see cloud computing as their future, but they are uncertain about how to realize cloud benefits while avoiding the technology’s pitfalls. Second, MSPs have already gained the trust of their customers, so they’re in a great position to guide them as they transition to the cloud model.

The only prerequisite for MSPs to capitalize on the changing marketplace is that they embrace cloud services and abandon the revenue models that proved so profitable in the past. Unfortunately, MSPs aren’t necessarily noted for their agility, and for many, the old ways die hard. What’s likely to shake the laggards out of their lethargy is the realization that their customers are going to the cloud, with them or without them.

Nothing new about giving your customers what they want

For decades, IT has operated as a closed shop, generally speaking. Most of the hardware and software that keeps the organization running smoothly toward its goals is on the premises and managed by in-house staff. Slowly at first, and then in a rush, the in-house model evaporated. Companies no longer own the servers their data is stored on and their applications run on. Often they don’t even develop or own the apps themselves.

IT’s relationships with its customers are evolving from direct provider of services to a broker for myriad services offered by third parties. In effect, IT becomes a kind of MSP for the organization. Rather than squeezing out existing MSPs, the change opens new areas for MSPs to serve businesses of all sizes. As Business2Community’s Purl puts it, MSPs have to start “selling sticky.”

Purl claims that cloud services make it easier than ever to expand beyond the “break-fix” approach to service provision to establish a relationship that is both long-lasting and mutually beneficial.

  • First off, predictable cloud pricing lets you build a cloud portfolio almost instantly because your rate scales along with your customers’ cloud use.
  • Secondly, the cloud solutions MSPs sell their customers make the companies more competitive in their own markets, which leads to increased use of and reliance on those solutions.
  • Lastly, the sales cycle itself is shortened, which allows MSPs to apply discounts and other sales incentives outside the standard long-term, fixed-duration contract.

Survey: MSPs optimistic despite cloud challenges

Maybe it’s their generally positive outlook on life, but MSPs see a bright future for their industry despite a growing number of their customers choosing to buy services directly from cloud vendors. 451 Research’s report entitled “2016 Trends in Service Providers” (pdf) found that most money spent on cloud services this year will be paid to channel partners rather than directly to service providers. The survey also identified a promising opportunity for MSPs to help their corporate customers transition their networks to hybrid clouds.


The characteristics that will separate the successful MSPs from the also-rans, according to 451 Research’s 2016 survey.

According to 451 Research, a principal challenge for MSPs is to sort out a marketplace in which “the traditional, clearly divisible market segments that make up the hosting world are rapidly collapsing into each other and boundaries are being erased between operationally distinct types of hosting businesses.” Customer workloads are shifting from traditional shared, virtualized, and dedicated infrastructure to specialized environments. New service providers appear to have a leg up on their longer-tenured counterparts in offering the specialty products that are increasingly in demand.

MSPs’ most important skill is now managing change

Considering how dependent so many MSPs are on providing hardware to their customers, the switch to pure software and service offerings will leave some providers in the dust. Not many MSPs look forward to competing directly with the likes of Amazon, Google, and Microsoft, as the Wall Street Journal’s Angus Loten writes in a June 30, 2016, article. At the same time, MSPs’ customers are now more comfortable with relying on cloud infrastructure and applications.

According to figures compiled by research firm MarketsandMarkets, the worldwide MSP market is forecast to grow by 12.5 percent annually, from $107 billion in 2014 to $193 billion in 2019. Yet gross profit margins for MSPs are expected to decline from the 60 percent level of recent years to only 30 percent, according to research by 2112 Group. There’s no way MSPs can sustain such profit levels simply by reselling existing cloud services.
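The MarketsandMarkets forecast holds together arithmetically; here is a quick back-of-the-envelope check (a rough sketch for illustration, not part of the research itself) showing how 12.5 percent annual growth takes the market from roughly $107 billion in 2014 to about $193 billion in 2019:

# Rough check of the MarketsandMarkets forecast cited above:
# $107B in 2014 growing ~12.5% per year through 2019.
base_2014 = 107.0   # USD billions
cagr = 0.125        # 12.5% compound annual growth
years = 5           # 2014 -> 2019

projected_2019 = base_2014 * (1 + cagr) ** years
print(f"Projected 2019 market: ${projected_2019:.0f}B")  # ~ $193B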


The public cloud has expanded and diversified to accommodate every size and type of business. Source: Ephor Group

Instead, MSPs must embrace new roles, particularly by offering to serve as an “orchestrator” of the various cloud services their customers are using, according to CompTIA senior director of industry analysis Carolyn April. MSPs’ search for new revenue models will likely lead them to advise their customers via a “selection and vetting process” for cloud services, according to April.

Ensuring the security of cloud apps and data

Before contracting for any cloud service, whether from a third party or directly from the provider, you have to verify the security of your organization’s data assets. In an August 4, 2016, article in Computerworld, Paul Desmond lists the four keys to security in managed-services environments. The first is to gather as much information about the service’s security measures as possible. This means asking a lot of questions:

• Where are your facilities located?

• If they’re offshore, what security guarantees do you offer?

• How do you manage change control and documentation?

• How frequently do you update your security?

The second of the four keys is to verify compliance with applicable security standards. The best way to do that is to conduct an onsite audit and discussions with the administrators who secure the data systems. Thirdly, if the data is stored offshore, ensure the privacy regulations for that region are followed. The last of the four keys is to confirm identity and access management controls are in place.
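One way to keep that vetting process consistent from vendor to vendor is to track the questions and the four keys as structured data rather than ad hoc notes. The sketch below is a minimal, hypothetical example; the class and field names are illustrative, not something prescribed by Desmond's article:

# Minimal, hypothetical vendor-vetting checklist based on the
# four keys discussed above. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VendorAssessment:
    name: str
    answers: dict = field(default_factory=dict)  # question -> vendor response
    onsite_audit_done: bool = False              # key 2: compliance verified on site
    offshore_privacy_ok: bool = False            # key 3: regional privacy rules followed
    iam_controls_in_place: bool = False          # key 4: identity/access management

QUESTIONS = [
    "Where are your facilities located?",
    "If offshore, what security guarantees do you offer?",
    "How do you manage change control and documentation?",
    "How frequently do you update your security?",
]

def outstanding_items(vendor: VendorAssessment) -> list[str]:
    """Return the questions and checks still open for this vendor."""
    gaps = [q for q in QUESTIONS if q not in vendor.answers]
    if not vendor.onsite_audit_done:
        gaps.append("Conduct onsite compliance audit")
    if not vendor.offshore_privacy_ok:
        gaps.append("Confirm offshore privacy regulations are met")
    if not vendor.iam_controls_in_place:
        gaps.append("Verify identity and access management controls")
    return gaps

vendor = VendorAssessment(name="Example Cloud MSP")
vendor.answers[QUESTIONS[0]] = "US-East and EU-West data centers"
print(outstanding_items(vendor))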

One service that meets all the security requirements of cloud customers is the Morpheus cloud application management platform. Morpheus lets you provision databases, apps, and app stack components in just seconds. For all provisioned IT systems, Morpheus automatically creates system, database, and application logs. It also monitors uptime and creates backups automatically. Morpheus’s “any app, any database, anywhere” support for all cloud platforms ensures that your data is never locked in.


VMworld 2016 - The Ultimate Guide

Welcome to VMworld!

Thousands of attendees will be at VMworld 2016, and we know that it can be tricky to find your way around, so we put together this handy guide to help.

Be sure to stop by our booth in the VMware New Innovators Area (booth #841) to meet the team, get a demo, and pick up your schwag.

Registration

Getting Around At VMworld

Events At VMworld

Community

FAQ

Demo Morpheus at Booth #841

The only cloud application management and orchestration platform that's truly infrastructure agnostic.

Making the Business Case for Cloud-based Disaster Recovery



Business managers and IT managers generally agree on the importance of achieving their organization’s goals, and on the strategies and methods to be used to achieve those goals – with one glaring and potentially calamitous exception: backups.

Yes, the “b” word once again arises as a point of contention between business units and IT departments. A recent survey by security firm Bluelock found that one out of three companies has experienced a “technology-related disruption” in the past two years, yet 11.5 percent of the firms surveyed have no disaster-recovery plan at all. Zip. Nada. Goose egg.

This despite the fact that 73 percent of business executives have “high confidence” in their company’s ability to recover lost systems in a timely manner. Only 45 percent of IT professionals share the assurance of the business managers, according to the survey, which eWeek’s Nathan Eddy reports on in an August 10, 2016, article.

Translating data-loss risk into disaster-recovery planning… and spending

The Bluelock survey highlights the continuing disconnect between the expectations of business units and the reality of data risks that IT managers face every day. While 80 percent of the IT managers surveyed said they place a “high value” on protecting their organizations from disruptions, only half of the business managers indicated the same.

Perhaps the most telling finding of the survey is that 60 percent of IT managers said it is extremely important to protect against technology-related disruptions, yet none of the VP and C-level executives participating in the survey placed disaster recovery in the “extremely important” category. Tell that to the top brass of companies now attempting to bounce back after the recent 100-year flood that struck Louisiana.

While most system outages have only a negligible impact on the organization’s long-term success, those lasting several days can push a company to the brink of bankruptcy. Source: Evolving Solutions

Considering the budget-stretching features of cloud services, it’s no surprise that IT departments would expect similar savings by moving their backup and disaster-recovery operations to the cloud. After all, offsite backup has been a key component of disaster planning for about as long as there have been IT departments.

Still, a little hesitancy on the part of IT managers about relying on third parties as their last line of defense is understandable. When it comes to backup, no single characteristic is more important than dependability – not cost, not timeliness, not simplicity. Whatever your disaster-recovery strategy, it has to be rock-solid. Slowly, cloud-based disaster recovery is amassing a track record for reliability.

Consider disaster recovery an investment, not insurance

You would think that a Fortune 100 company such as Delta Airlines would have a disaster-recovery plan in place that could weather any misfortune. Yet all it took to knock the airline’s data systems offline for more than a day was a single cascading power outage, as The Next Platform’s Nicole Hemsoth explains in an August 9, 2016, article.

Delta is far from the only large organization to be shut down by system-wide outage, nor is it the only company to find itself locked out of its data following a failure, with no real-time disaster-recovery or failover plan ready to be activated. Hemsoth states the matter plainly: Any company that can’t countenance even a moment of downtime has no choice but to implement geo-replication across multiple zones with high availability and resiliency built in.

Disaster-recovery expert Joe Rodden blames the widespread lack of disaster preparation on companies’ failure to conduct adequate testing of their restoration process on “real machines.” The testing isn’t being done, according to Rodden, because the process is so difficult. For example, a company may believe its systems are protected because it uses two separate network providers in different locations. But when it traces the links back far enough, it finds that both networks originate from the same source, which creates an invisible single point of failure.

Another common mistake is to correlate “high availability” and “resiliency” with disaster recovery. A company with multiple redundant databases may believe it is protected against failure when in fact all of the databases are housed in a single data center.

Justifying the cost of real-time DR remains an uphill battle

The problem is, few companies can afford the most-reliable backup approach: duplicate clusters running the same workload in different locations, plus replication in a cloud-based data center for an added layer of protection. Also, convincing the people who control the purse strings to spend even a fraction of the potential loss from a system outage on disaster recovery is all but impossible, according to Rodden. Spend $2 million to prevent a network failure resulting in $20 million in losses? No, thanks. Most companies would rather accept the risk of an outage that may never happen than invest in disaster recovery.

Perhaps the best way to make the case for disaster-recovery spending is to point out that all companies are tech companies these days. No matter your industry, your organization runs on data, and when that data becomes inaccessible, your operation grinds to a halt. At today’s rapid pace of business, even a brief outage can have long-lasting negative repercussions, and an extended failure can truly be disastrous to the company’s bottom line.

A 2015 survey of IT and business professionals found that improving the efficiency and recovery time of their DR projects were the most important goals, while cost was, by far, the most important consideration. Source: TechTarget

Taking the risk rather than taking precautions becomes more difficult to justify considering the availability of such services as the Morpheus cloud application management platform, which automatically backs up every new database or app stack component. You decide the time, day, and frequency of the backups, as well as the destination targets for the backups (locally or in the cloud), without requiring that any custom cron jobs be written.
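For contrast, here is roughly what the hand-rolled alternative looks like: a custom scheduled backup job of the sort teams typically write and maintain per database. This is a generic sketch of the kind of script a managed backup feature removes the need for; the pg_dump command, database name, paths, and schedule are all illustrative, not Morpheus code:

# Generic sketch of a hand-maintained scheduled backup job, the kind of
# custom script a managed backup feature is meant to replace.
# The pg_dump command, paths, and schedule below are illustrative only.
import subprocess
import time
from datetime import datetime

BACKUP_INTERVAL_SECONDS = 24 * 60 * 60   # once a day
BACKUP_DIR = "/var/backups/appdb"        # local target; could be cloud storage

def run_backup() -> None:
    stamp = datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    outfile = f"{BACKUP_DIR}/appdb-{stamp}.dump"
    # Dump a hypothetical Postgres database named "appdb".
    subprocess.run(["pg_dump", "--format=custom", "--file", outfile, "appdb"],
                   check=True)
    print(f"backup written to {outfile}")

if __name__ == "__main__":
    while True:
        run_backup()
        time.sleep(BACKUP_INTERVAL_SECONDS)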

Morpheus makes it easy to define roles and access privileges for teams or individuals based on geographic location, server groups, individual apps, or databases. The service’s automatic logging and monitoring lets you identify and address trouble spots quickly and simply. Every time you provision a system, uptime monitoring is configured automatically, including proactive targeted alerts when performance or uptime issues arise.

Picture a data backup you can ‘snap’ like a photograph

Snapshot backups are gaining in popularity, but as an August 19, 2016, article on CIOReview explains, snapshots were not initially considered a viable backup option for VMware because of incompatibilities with application servers. Also, snapshots were perceived as lacking application-level support. These days, snapshots are much more application-aware and are able to retain such app information as state, required resources, and use patterns. In particular, redirect-on-write (ROW) snapshots are much more efficient than copy-on-write techniques because ROW minimizes the impact of the process on app performance.
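A toy model helps show why redirect-on-write is lighter-weight: copy-on-write has to copy the original block aside before every overwrite, while redirect-on-write simply writes the new data to a fresh location and updates a pointer. The sketch below is a simplified illustration of the two approaches, not a real storage implementation:

# Simplified illustration of copy-on-write (COW) vs. redirect-on-write (ROW)
# snapshots. Real storage systems operate on disk blocks; dicts stand in here.

class CowVolume:
    """Copy-on-write: the original block is copied into the snapshot
    area before it is overwritten in place (two writes per update)."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.snapshot = {}

    def take_snapshot(self):
        self.snapshot = {}

    def write(self, block_id, data):
        if block_id not in self.snapshot:                    # first touch since snapshot
            self.snapshot[block_id] = self.blocks[block_id]  # extra copy
        self.blocks[block_id] = data                         # overwrite in place


class RowVolume:
    """Redirect-on-write: new data goes to a fresh location and the live
    view is redirected to it; the original block is left untouched."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # originals are frozen by a snapshot
        self.redirects = {}          # block_id -> new data (one write per update)

    def take_snapshot(self):
        self.redirects = {}

    def write(self, block_id, data):
        self.redirects[block_id] = data   # single write, no extra copy

    def read(self, block_id):
        return self.redirects.get(block_id, self.blocks[block_id])


vol = RowVolume({0: "a", 1: "b"})
vol.take_snapshot()
vol.write(1, "B")
print(vol.read(0), vol.read(1))   # a B  (the snapshot still holds "b" for block 1)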

Flat backups replicate backups to other secure locations to protect against losing a snapshot due to media corruption. For disaster recovery, the standard approach is to use three storage locations: two onsite and one offsite specifically for disaster recovery. The offsite system is configured to accept input from and to output to both onsite systems, thus it will likely be a higher-performance system than either of the onsite systems.

Since a snapshot can be taken at any time, administrators can determine the frequency of snapshots based on available storage, bandwidth, and processing power, as well as on the nature of the workload itself in terms of sensitivity, timeliness, and value. Snapshot metadata serves as a virtual database detailing your backup history.

The size of a snapshot will be affected by the update pattern: the greater the number of pages updated during the life of the snapshot, the larger the sparse file used to store the original pages. Source: Microsoft Developer Network

In some instances, companies may prefer to stick with traditional backup systems, particularly if they are concerned about security and longevity of backup media. They may also lack the bandwidth required for cloud-based backup and disaster recovery, or they may encounter memory mismatches that prevent reliable storage and recovery of their data and apps. However, even in these settings, the most effective approach is likely to combine the best features of onsite and cloud backup.

The unique backup and recovery needs of big-data operations

The increasing reliance of organizations on data analytics makes it imperative to have a plan in place to ensure that customers’ access to these critical analysis tools isn’t interrupted. In an August 26, 2016, vendor-sponsored article in NetworkWorld, Talena executive Jay Desai presents seven myths relating to big-data backup and recovery.

The first of the misconceptions described by Desai is that having multiple copies of data stored on widely distributed servers and racks is tantamount to a backup/recovery strategy. While this approach may protect against hardware failure, it leaves the organization vulnerable to user errors, accidental deletions, and data corruption, among other problems.

While few companies can afford a comprehensive backup and recovery system for a petabyte of data, nearly all can afford to protect a subset of such a humongous data store representing the firm’s most critical data resources.

Similarly, a script approach to backup and recovery may be suitable when you’re dealing with relatively small amounts of data, but scripts are impractical for systems comprising many terabytes of data. The scripts would have to be written for each platform separately (Hadoop, Cassandra, Couchbase, etc.), and they would have to be tested at scale and retested each time the platform was updated.
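Desai's point about scripts is easier to see in code: every platform needs its own backup command, its own error handling, and its own retesting whenever the platform version changes. The dispatcher below is a minimal, hypothetical sketch; the commands are illustrative and real clusters would need far more options and validation:

# Hypothetical sketch of per-platform backup dispatch. Each entry is its own
# script to write, test at scale, and retest after every platform upgrade.
import subprocess

BACKUP_COMMANDS = {
    # Commands below are illustrative; production jobs need many more options.
    "hadoop":    ["hadoop", "distcp", "/data", "hdfs://backup-cluster/data"],
    "cassandra": ["nodetool", "snapshot", "my_keyspace"],
    "couchbase": ["cbbackupmgr", "backup", "--archive", "/backups", "--repo", "prod"],
}

def backup(platform: str) -> None:
    try:
        cmd = BACKUP_COMMANDS[platform]
    except KeyError:
        raise ValueError(f"No backup script written for platform: {platform}")
    subprocess.run(cmd, check=True)   # failure modes differ on every platform

if __name__ == "__main__":
    backup("cassandra")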

Someday perhaps restoring a company’s data systems following a disaster will be as easy as flipping a switch. Clearly, that day is a long way off. In the interim, the keepers of the company’s valuable data assets will be left to minimize the impact of outages by adapting traditional backup approaches to today’s virtual, infinitely scalable data environments.

Cloud Adoption: How to Overcome Concerns with Collaboration


When you are in DevOps and IT management, achieving cloud adoption can be a daunting task. Even though the cost savings for adopting cloud solutions are often excellent, getting everyone on board can still be difficult.

One way to get things going in the right direction is to encourage collaboration between your security team and your development team. A study by IDC found that respondents to the study generally agreed that IT security risk was the biggest inhibitor to business innovation, with a large majority of executives letting it be known that their organizations had shied away from one or more business opportunities due to IT security concerns.

While these concerns are certainly important, the study also noted that collaboration and a balance between security concerns and innovation could bring about the opportunities that could help propel a business beyond its competition.

Barriers to Innovation

The study noted that there were perceived barriers to innovation, and these depended on who was thought to be in charge of making it happen. For example, it was noted that study respondents who felt that CEOs were responsible for driving innovation indicated there were five things keeping innovation from moving forward.

Information security is not aligned with business goals

If the security team does not know or is not concerned about the business goals of the company, then it is very difficult to effectively cooperate to help foster innovation.

Information security turnaround time on business needs takes too long

When a bottleneck occurs every time ideas or apps are reviewed by the security team, it can be discouraging for management, developers, and others who have worked to bring ideas forward.

Executive leadership is too conservative on information risk

If leadership feels that everything is too risky and is unwilling to budge, it can hinder the presentation of new ideas, as managers or other employees may feel their ideas will simply not be approved due to risk concerns.

Limited budget/resources for innovation investments

A limited budget can always be a hindrance and can definitely keep a project from moving forward.

The information security approach is too much “lock down” and not enabling enough

A security environment perceived as “lock down” rather than cooperative and helpful can certainly be discouraging to those wanting to push innovative ideas.

Excluding IT Security Can Be Detrimental

While many of the previous barriers were felt to be issues with the security team, there may be reasons for this. If IT security is not included and informed when it comes to business goals, those very concerns can end up being a reality in an organization. If the security team is not included in the discussion of business innovation, there are certain consequences to this that were noted by the study.

An innovative project fails because of poor information access

Not having the security team fully informed on what is being done can easily cause a project not to succeed, since finding a problem later in the process can be far more detrimental than working together to solve it from the beginning.

Information security risks associated with innovative initiatives are too high because security was not brought into the process

If IT security is not part of the process, the security risks are not known until they finally get to review the project. In the end, security concerns could end up being too costly once the project information is finally given to IT security.

Slower time to market and higher costs when security needs to be put in as an afterthought

If the security team does find a way to make things work when not informed until the very end of a project, then the project can be heavily delayed while security concerns are addressed. If tacked on at the end, implementing security measures can end up being quite costly, as more development time or other expenditures may be necessary to mitigate any issues.

How can IT Security Be Included?

The study found that respondents believed there were particular strategies that could be employed to help the security team be included in the development and business process.

Ensure the security team understands the industry and the business goals of the organization

When IT security is well informed, it is easier for them to address any concerns early on, and it allows them more time during a project to find cooperative and meaningful ways to make the project work.

Ensure enabling business innovation is part of the charter or on the scorecard for measuring the information security function

Having a measure of accountability for helping to enable innovation rather than simply dismissing business initiatives due to concerns can help a security team better find creative ways to implement the goals of projects as they come along.

Communicate a well-defined roadmap for security that ties to corporate strategy and share it with other business functions

Sharing the security objectives of your organization with all parties that will be involved can help members of each team better understand what the security team will be doing and how they can work with IT security to proactively address or fix potential security concerns.

Ensure the security team has connections with key business leadership

Relationships with the right leaders on the business side of things will help pave the way toward mutual collaboration when developing a project, allowing security concerns to be addressed in a way that is beneficial to both parties.

Demonstrate how security technology investments have direct links to business priorities

When the business understands the need to invest in security technology, those costs can be addressed early in project development, and measures can even be put in place to address particular security concerns before a project is started. This can make both the security team and the business team feel much more comfortable when moving forward with a particular project.

Security and Development

While business and IT security can deliberate on how to innovate and maintain security, something that could springboard this process is to have your software developers and IT security work in collaboration when planning the innovative apps you want to deploy.

When asked about what their IT security teams were doing to enable innovation, one of the noted responses from respondents was that IT security teams would allow developers to “become embedded in business lines”. This allows security to work with development in collaboration to determine how to best innovate for business needs while also addressing security concerns. Another noted response included “breaking down barriers”, another good reason for the different departments to work together.

Breaking down barriers can be an essential part of fostering collaboration. If developers and security professionals understand where the other is coming from and what they hope to accomplish, innovation can be achieved, as they can use their knowledge to work together to meet the needs of business as well as security.

With this type of collaboration, IT managers and DevOps professionals will be able to look at cloud adoption in a new way, one that may include adoption with the full support of the IT security team!

Get on the Cloud with Morpheus

If you are looking to begin cloud adoption, Morpheus makes provisioning, scaling, and maintenance of your apps and servers a breeze. You can provision databases and servers quickly, and have your app up and running in no time! Using the available tools, you can also monitor the various parts of your system to keep track of uptime and response time, and to be alerted if an issue does arise.


The Morpheus interface is clean and easy to use. Source: Morpheus.

Morpheus allows you to provision apps in a single click, and provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. To learn more, sign up for a Morpheus demo today. 

 

Click here => I want a Morpheus demo!

Tips for Ensuring You Get a Sweet Deal on Your Cloud Contract


Sometimes, the transformation of IT to a cloud infrastructure can seem like the Oklahoma land rush: Get there fast, get there first, stake your claim or miss out.

“Not so fast,” say the people holding the purse strings – and rightly so. As Michelle Tyde writes in a September 6, 2016, article on the Daily Report, cloud service providers often lack the adaptability of traditional outsourcing services in the types and prices of packages they offer their customers. In their haste to stake a claim to the cost and efficiency benefits of cloud services, many companies fail to consider the unique legal, security, and service-level requirements of their cloud data strategies.

These and other pertinent matters need to be addressed during the provider selection and contract negotiation process, according to Tyde. That starts with an understanding of the differences between the three predominant cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Next comes the decision about which deployment model best serves the organization: public cloud, private cloud, or hybrid cloud.

Once you’ve settled on a service and deployment model, the nitty-gritty of the selection and negotiation process can begin. The two most important considerations for any data service are security and reliability; the vendor’s assurances in these areas are the cornerstones of your cloud operations. Nearly as important but easy to overlook are the potential to be locked into a service, ensuring accessibility to your data resources under any circumstances, and interoperability of your internal operations with the vendor’s platform.

The components of a standard cloud services contract

It isn’t unusual for a cloud service to offer only a standard set of terms and conditions, often a click-wrap agreement that puts the service provider in the driver’s seat. A multi-layered cloud contract will include the terms and conditions as well as an acceptable-use policy, privacy policy, and service-level agreement. Providers selling SaaS on a public cloud are the least likely to negotiate terms because of their use of shared infrastructure, but those offering other types of services are more amenable to negotiating the contract terms.

What terms should you bargain for? Price, obviously, but also the term of the contract, service-level guarantees, limitation of liabilities, termination rights, and the right of the provider to alter service features and policies unilaterally. A common mistake of legal counsel involved in cloud negotiations is to treat the process as they would a software license or technology acquisition. In particular, the negotiations should consider such matters as intellectual property rights, insurance, and force majeure.

‘Standard’ service-level agreements always favor the vendor: Negotiate!

One area where customers have a negotiating edge over cloud services is the service-level agreement. The first of the five negotiating tips offered by David Chou in a July 18, 2016, article in CIO is to demand a high level of specificity in the SLA details covering uptime, backup frequency, recovery time, and other important matters. While some services may want to be measured quarterly, businesses in the healthcare, finance, and similar industries will require measures reported monthly or more frequently.
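Specificity matters because small differences in an uptime percentage translate into very different amounts of allowable downtime over a reporting period. The quick calculation below is a generic sketch (not from Chou's article) that makes the point:

# How much downtime does a given uptime SLA actually allow per period?
def allowed_downtime_minutes(uptime_pct: float, period_hours: float = 730.0) -> float:
    """period_hours defaults to roughly one month (730 hours)."""
    return period_hours * 60 * (1 - uptime_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} minutes of downtime per month")
# 99.0%  -> ~438 minutes (over 7 hours)
# 99.9%  -> ~43.8 minutes
# 99.99% -> ~4.4 minutes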

When conducting due diligence of a potential cloud-services partner, verify the company’s financial health and the data-security measures it implements. Source: Paul Armitage, Gowlings, via SlideShare

It’s important to ask the cloud vendor where your data will be stored; if overseas, ask which country’s regulations will be applied to protect the data. Also, specify the data file formats so you’re able to migrate the data easily when the time comes. Many cloud services require a notice of non-renewal within a particular time, so ask whether you can opt out of or otherwise negotiate the termination notice requirement.

Data retention regulations require that many industries retain data for a specified time, but your cloud service’s policy may be to delete your data upon termination of the contract, or within 30 days of the contract’s end. If your organization is subject to data-retention rules, negotiate an extended time for the service to retain and provide access to that data – usually 60 or 90 days. This also gives you sufficient time to ensure your data migrated smoothly to your new service.

Finally, an incentive for both you and the service provider is to add a low renewal rate guarantee into this contract. Even if you can’t get an assurance of a discount on your next agreement, you may be able to avoid a rate increase being applied when you renew.

There’s no ‘outsourcing’ of responsibility for security breaches

While service levels are likely to be the primary negotiation point in most contracts for cloud services, it’s nearly impossible to spend too much time going over the details for data security protections, as well as the steps that will be taken in the event of a breach of security. Mike Chapple explains in an August 15, 2016, article on EdTech that the contract proposed by the vendor is likely to protect the vendor’s interests rather than yours. The contract must state clearly the level of security to be maintained, as well as the consequences of the vendor’s failure to meet the contract’s security requirements.

Before you can set security requirements, you have to know the level of risk your organization is facing in terms of the sensitivity of the data and applications to be maintained by the cloud service. The knee-jerk reaction may be to avoid placing any sensitive data in the cloud, but consider that cloud services often provide a higher degree of security than is being used to safeguard your in-house data. The cloud may offer security equivalent to or greater than you’re paying for now, and at a much lower cost.

For example, the Morpheus cloud application management platform sets and performs backups automatically for each database and app stack component you provision with the service. You determine the time and frequency of the backups, and where the backups are stored (in the cloud or on the premises). Access controls can be determined with a high degree of granularity, so just the right people have just the right privileges.

The negotiating points most important to C-level execs

There is nothing simple about IT contract negotiations. Executives do themselves and their organization a disservice by trying to simplify what is an increasingly complicated process. Much of the complexity is the result of rules and regulations specific to various industries, such as finance, healthcare, and telecommunications. Other aspects apply to all companies, such as human resources, data retention, customer relationship management. Finally, there’s the added complication of dealing with new companies in an entirely new industry, where the “rules” are in constant flux.

In a May 10, 2016, article, CIO’s David Adler lists the five key areas C-level executives must focus on when negotiating a contract for cloud services. The first step is to determine who owns the proprietary and confidential information in the organization. Adler recommends creating an intellectual property checklist (pdf) for managing and safeguarding your company’s IP assets.

When negotiating price and payment, bargain for a reduced price if you’ll be paying in full at the commencement of the contract term, as is typical for software licenses, or when payment for services will be “Net 30.” It’s almost certain that some of the contract terms will be modified during the term as your needs and other unforeseen circumstances dictate. Be sure the contract includes a mechanism for accommodating the inevitable, unpredictable changes that will occur.

Equally important, according to Adler, is to consider how you’ll be able to get out of the contract if necessary. For example, even if you negotiate immediate unilateral termination in the event of a breach of a material obligation, you’ll still be left holding the bag if you’ve prepaid on the contract. State laws vary widely in whether and how they allow consequential damages resulting from a material breach, as well as in how they provide equitable relief via contract reformation.

Be particularly cautious about the contract’s disclaimers and other attempts by the vendor to limit their liability and to cap any potential liability awards. These clauses are intended to shift the risk entailed in the contract, and courts generally rule that the disclaimers are enforceable. The caps may be a fixed sum, such as the amount paid for the service, or they may apply to the types of damages for which compensation will be available, including personal injury, property damage, or liability for confidentiality violations.

The more “what if” scenarios you consider before you sign on the dotted line, the less likely you’ll be caught unawares when a contract-related problem arises.

Taking a Top-Level View of Hybrid Cloud Management


Hybrid cloud adoption is at a tipping point. See the surprising list of today's top hybrid cloud management providers, software, and tools.

Okay, let’s get down to it: There’s nothing magical or unusual about protecting your organization’s most sensitive information by storing it on secure servers maintained in-house while placing its less-sensitive data on secure servers maintained by a third party. This may sound simple, but managing such a hybrid setup is anything but. In fact, complexity increases exponentially when adopting a hybrid cloud strategy, as Arthur Cole explains in a June 10, 2016, article on IT Business Edge.

All that flexibility and control made possible by hybrid-cloud systems comes at a steep price, according to Cole. While advanced operating systems and middleware can potentially remove some of the intricacies of designing, implementing, maintaining, and upgrading hybrid networks, companies are still left with the thorny choice of which management tools to use, whether to rely on proprietary solutions or go with open source, and last but not least, how to integrate the new system with your legacy data.


IT’s transition continues from a dedicated, physical environment, to multiple clouds linked to in-house resources, and finally to a single hybrid cloud environment. 

Hybrid cloud management tools continue to proliferate

Matching a cloud management system to your organization’s unique data requirements gets no easier as the vendor options proliferate. Traditional providers, such as NetApp (ONTAP 9) and HPE (Cloud Suite on the OpenStack-based Helion platform), have been joined by the open-source Apache Mesos project and its commercial sponsor Mesosphere, backed by HPE and Microsoft, as well as NephoScale, offered by the folks behind the NephOS cloud operating system based on OpenStack Liberty.

The cloud-management platform choice gets even muddier as more cloud providers release their own management solutions tailored to the unique cloud infrastructures provisioned by their enterprise customers. What such single-service offerings lack, obviously, is support for multi-vendor cloud operations. Before you know it, your data could be back in the silo you’ve been working so hard to liberate it from.

Outsource the infrastructure to focus on IT’s human capital.

The overarching benefit of software-defined architectures is that they free IT from the hardware limitations of the past. Now the only limit to what your organization can accomplish is the talent and imagination of your team. This refocuses IT on the human side of management, as Kristen Knapp writes in a June 2016 article on TechTarget. The flexibility that is the hybrid cloud’s claim to fame also presents one of the technology’s thorniest management issues: Which workloads go on the public cloud, and which belong on the private cloud?

For financial and healthcare industries, the choice is often determined by compliance requirements. Yet public-cloud services now support the governance and compliance standards these industries must meet, according to a cloud consultant quoted by Knapp. Still, to facilitate auditing, the trend of separating workloads based on compliance needs continues.

The public cloud/private cloud choice is often determined by the age of the application. Legacy apps may simply be too expensive to adapt to the public cloud, particularly because they tend to be bloated resource hogs compared to their leaner, more-efficient cloud-native counterparts.

Begin with your hybrid cloud management plan.

Once you’ve selected your platform and management tools, and determined which workloads go public and which stay private, it’s time to apply the glue that will keep all the components operating as a single well-oiled machine. The key to putting the pieces together is careful implementation of RESTful APIs and similar application integration methods.

Your data traffic patterns are going to affect your apps’ performance and resiliency. This is where multipathing comes into play. By creating multiple paths for data to travel between hybrid network environments, you can improve load balancing and reduce latency. In many cases, it makes financial sense to establish a direct connection between the public and private sides of your hybrid cloud, such as AWS’s Direct Connect and Microsoft Azure’s ExpressRoute (many third-party connectors are available as well).

With so many hybrid cloud providers, how do you pick one?

As companies come to rely on cloud services for more of their IT operations, the nature of the provider side of the equation is going through big transitions of its own. A recent Forrester report entitled Hybrid Cloud Management Solutions of Major Service Providers found that solutions for managing assets in hybrid-cloud environments are offered by independent software vendors, traditional service companies, and open-source communities. IT Wire’s Peter Dinham writes about the Forrester study in a June 1, 2016, article.

That makes finding the best provider for your organization’s needs more challenging. As service providers develop their own IP assets, comparing various offerings becomes more complex. In addition, the technology is changing so quickly that the relationships between the components of any hybrid cloud integration are not well established. You won’t find many standards for managing hybrid clouds, either.


Microsoft Azure, Amazon Web Services, Google Cloud, and IBM Cloud combine to represent about 80 percent of the enterprise cloud market, but the cloud battle has just begun. Source: Clutch

As cloud service providers focus increasingly on their own IP, the distinctions between providers and software vendors disappear. According to the Forrester findings, clients want to be free of having to manage the increasing complexity of software and expect providers to bring their own IP assets to do much of the managing for them.

Finding the hybrid-cloud management tools you need isn’t easy.

The Forrester report concludes that software vendors have been slow to offer the features their hybrid cloud customers need, yet brokering services haven’t shown they can deliver the performance and functionality required to keep hybrid cloud setups running smoothly and efficiently.

One service that offers built-in scalability, performance and dashboard-based management for a range of applications is the Morpheus cloud application and orchestration platform. Morpheus makes it easy to provision databases, apps, and app-stack components on any server or cloud, whether on-premises, private, public, or hybrid. Morpheus simplifies integration with heterogeneous systems via RESTful APIs. Add more nodes anytime via the web interface, a command-line interface, or an API call – the database or app cluster is configured automatically to accommodate the new nodes. All this with zero cloud lock-in.

Now Morpheus even supports Hyper-V. For all types of hybrid-cloud management, Morpheus is well worth a spot on your short list.

Cloud Management from an Enterprise Perspective


It stands to reason: The larger the organization, the more complicated the task of managing its diverse cloud operations. It’s a sure bet that whatever cloud strategy your company began with bears little resemblance to the cloud infrastructure currently in place. And that goes double for the cloud setup you’ll be using months and years from now.

That’s why one characteristic dominates all others when implementing any cloud program: Openness. Whatever services and processes you’re using today have to be designed to accommodate tomorrow’s innovations – whatever form they may take. The only way to achieve this goal is to rely heavily if not exclusively on cloud services that connect seamlessly to the outside world.

While it’s unlikely enterprises will be able to make their hybrid cloud configurations less complex anytime soon, there are ways for companies to make the networks easier to manage and use. In a September 15, 2016, post on Light Reading’s Heavy Reading blog, Sandra O’Boyle explains that what’s needed is “open cloud management platforms” that offer “an IT cloud control dashboard.” The two most important roles of such a platform, according to O’Boyle, are to ensure reliability and to reduce latency.

Complex cloud migrations require deliberation

One trait more than any other characterizes enterprise cloud transitions that have gone awry: haste. That’s the conclusion of a recent Gartner survey that found a wide gap between enterprise IT executives’ expectations about how quickly they will realize the benefits of cloud technology, and the actual time required to plan and implement a comprehensive hybrid cloud strategy. ComputerWeekly’s Carolyn Donnelly reports on the survey in a September 15, 2016, article.

In a typical enterprise DevOps blueprint, the overarching goal is market responsiveness rather than timely delivery. Source: ITaaS Now LLC, via CIO

The dichotomy in the Gartner study is that enterprises realize an average 14 percent savings in IT spending by migrating to cloud services, yet “a large number of organizations still have no current plans to use cloud services,” according to Gartner research director Sid Nag. The primary reason enterprises shun cloud technology is their concern about security and privacy, which Gartner perceives as a failure of communication rather than of any inherent attributes of the cloud. The security vulnerability is not in the cloud services themselves, Gartner concludes, but rather in the way the services are being used.

For example, the Morpheus cloud application management platform ensures that an enterprise’s cloud resources are both safe and accessible, whether they reside on public or private clouds. The intuitive Morpheus dashboard puts in a single window all the controls you need to provision a diverse range of databases, apps, and app stack components on any server – on premises, or in private, public, or hybrid clouds – in just seconds. In addition, each new database or app stack component is backed up automatically and recoverable with only a few clicks.

Making the business case for the cloud is the easy part

Conventional wisdom identifies the principal benefits of adopting cloud services as flexibility and disaster recovery, but according to the results of the fifth annual State of the Cloud survey (sponsored by RightScale), enterprises choose cloud services primarily to reduce their IT budgets. Markets Media’s Rob Daly examines the study results in an October 6, 2016, article.

The savings result from the shift of computing costs from a capital expense that in traditional IT is fixed and immutable, to an operational expense that does away with most of the upfront costs that companies were previously forced to depreciate over the long term. That’s the conclusion of BNY Mellon managing director of network architecture Neal Secher, who participated in a panel at a recent IT conference in New York City.

Survey: Use of DevOps leads to faster production, higher-quality software

Any doubts enterprises have about the benefits of adopting DevOps teams to break out of IT silos are dispelled by the results of a June 2016 survey by Puppet that quantified the advantages of the DevOps approach. IT departments with a “robust” DevOps process in place realized a 200 percent increase in software development speed over “low-performing” IT operations. They also reported recovery times that are 24 times faster, 50 percent less time spent addressing security concerns, and 22 percent less time spent on unplanned work or rework. TechRepublic’s Alison DeNisco reports on the survey in an October 6, 2016, article.

Among the characteristics of a successful DevOps program are integration of all IT operations; a continuous feedback loop connecting development, design, production, and product management; building in transparency to allow business managers to participate; baking security into the product’s DNA; and keeping third-party links in mind from day one.

In place of the top-down flowchart of traditional IT processes, DevOps entails continuous processes best represented by circles. Source: CSC

One conclusion of a survey of DevOps teams by Intel in September 2016 is that enterprises may underestimate the time and effort required by team members to monitor their cloud environments. The result is that other important tasks may be neglected, according to Intel executive Jeff Klaus. A comprehensive cloud management service such as Morpheus allows DevOps to spend less time monitoring by showing at a glance the status of all your cloud-based databases and applications.

Challenges remain for enterprise cloud management

The tentative nature of enterprise cloud programs to date is highlighted by John Webster’s “survey of surveys” on the state of corporate cloud strategies in a May 3, 2016, article on Forbes. On average, large enterprises have implemented six different public or private clouds due to concerns about in-house expertise, security, interoperability, and the fear of having their critical data locked into a single service.

An unfortunate result of IT’s hesitancy in adopting cloud services is the rise of shadow IT as business managers take to the cloud on their own, with little or no input from or consultation with the IT department. Slowly, IT managers are working with their counterparts in lines of business to develop and implement guidelines on cloud usage that IT is able to enforce.

Enterprise IT’s conservative approach to cloud services was justifiable when there were more questions about the technology than answers, but today, any company that isn’t primed to realize the benefits of the cloud in terms of speed, efficiency, and cost savings is at risk of being left in the dust.

DNS Load Balancing: How to Improve Web Site Performance


If you are looking to speed up your website or web app, DNS load balancing may be something that can help you achieve your goal. By helping to disperse the traffic you receive, you can help keep things running at optimal speed.

What is DNS Load Balancing?

DNS load balancing is a technique used to help distribute the requests for a domain across different machines so that no single machine is bearing the entire load. This can be used to help improve website and/or web application performance since the traffic load can be shared among numerous server machines rather than a single one.

DNS stands for Domain Name System. It is used to associate a domain name, such as google.com, to the IP address of a particular server machine, such as 101.22.83.144. This mapping allows visitors to a website to easily remember its address using the domain name and allows routing to take place to identify the correct machine IP address for the connection.
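You can see this mapping directly from any client; the snippet below asks the local resolver for all IPv4 addresses currently published for a domain (the domain shown is just an example, and results depend on your resolver):

# List the IPv4 addresses a resolver returns for a domain name.
import socket

def resolve_all(domain: str) -> list[str]:
    infos = socket.getaddrinfo(domain, 80, family=socket.AF_INET, type=socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, (ip, port)); keep unique IPs in order.
    seen, ips = set(), []
    for *_, sockaddr in infos:
        ip = sockaddr[0]
        if ip not in seen:
            seen.add(ip)
            ips.append(ip)
    return ips

print(resolve_all("google.com"))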

How load balancing works

While one machine IP per domain name is the simplest version of DNS routing, many companies point a single domain name at multiple IP addresses, allowing more than one server machine to handle requests.

Most clients simply use the first IP address that is received for a domain name, and DNS load balancing takes advantage of this to distribute the load across all of the available machines. DNS can send the list of available IP addresses for a domain name in a different order each time a new request is received.

In what is known as the round-robin method, this rotation of the listed IP addresses, combined with clients using the first IP address in the list, sends different clients to different servers to handle their requests. As a result, the request load is effectively distributed across multiple server machines rather than relying on a single machine to handle all incoming requests.
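A tiny simulation shows the effect: if the authoritative server rotates the address list on each query and each client takes the first entry, successive clients land on different servers. This is a conceptual sketch, not a DNS implementation, and the domain and addresses are placeholders:

# Conceptual sketch of round-robin DNS: rotate the IP list per query and let
# each client use the first address it receives.
from collections import Counter, deque

class RoundRobinDNS:
    def __init__(self, domain: str, ips: list[str]):
        self.domain = domain
        self._ips = deque(ips)

    def query(self, domain: str) -> list[str]:
        assert domain == self.domain
        answer = list(self._ips)      # current ordering
        self._ips.rotate(-1)          # next query starts with the next server
        return answer

dns = RoundRobinDNS("www.example.com", ["10.0.0.1", "10.0.0.2", "10.0.0.3"])
chosen = Counter(dns.query("www.example.com")[0] for _ in range(9))
print(chosen)   # each server ends up handling roughly a third of the requests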

Why every enterprise needs DNS load balancing

If an e-commerce site is making $100,000 per day, a 1-second page delay could potentially cost you $2.5 million in lost sales every year

Source: Kissmetrics 

Lost revenue due to a site being too slow can have quite an impact. For a small startup, even a single lost sale could be devastating. Every second you save in loading time can help increase sales, so it is certainly a good idea to make sure your site or app loads as quickly as possible.

DNS load balancing can help with this by distributing the request load across numerous servers, thus helping to speed up the all-important response times that can get a sale completed or lose the sale entirely. With more and more IT resources being moved to the cloud today, it is easier than ever to commission additional servers, and you can take advantage of this to provision additional web servers, implement DNS load balancing, and enjoy the increased performance of your website or app.
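The widely cited Kissmetrics figure quoted above rests on a simple assumption, roughly a 7 percent drop in conversions for every extra second of page load time; applied to $100,000 a day in sales, the arithmetic works out as shown in this back-of-the-envelope sketch:

# Back-of-the-envelope version of the Kissmetrics example quoted above:
# assume ~7% fewer conversions per extra second of page delay.
daily_sales = 100_000            # dollars per day
conversion_drop_per_second = 0.07
delay_seconds = 1

daily_loss = daily_sales * conversion_drop_per_second * delay_seconds
annual_loss = daily_loss * 365
print(f"~${annual_loss:,.0f} in lost sales per year")   # ~ $2.5 million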

Notable examples

Imperva Incapsula


Imperva Incapsula noted in a study that improved performance was a great deal of help to a company's bottom line. At the end of the study, they noted the top three things that helped them improve their own website's performance. Included among these was the use of a load balancing solution, showing that this can indeed be an important and helpful technique in the search for better performance.

ThoughtFarmer

ThoughtFarmer found that there were potential users of their app who just needed it sped up a bit so that it would have faster response times. In 2013, they sent out a mandate to significantly improve the speed of the app. Between caching, load balancing, and some other improvements, they were able to make their page load speeds up to 35 times faster.

ThoughtFarmer was able to use load balancing in conjunction with other techniques to significantly improve the speed of their web app. Source: ThoughtFarmer 

The Morpheus Difference

If you are looking to add load balancing to the mix to help improve the speed of your website or app, then Morpheus offers an incredible service for this. First, Morpheus allows you to provision additional servers quickly so that scaling up is a breeze. When you have the number of machines you need, the process of adding load balancing is extremely simple.

For example, to add a load balancer from Morpheus, you simply need to click Infrastructure in the main navigation panel, go to Load Balancers, and click the Morpheus Load Balancer button. This will open up an easy-to-use wizard that will help you select the appropriate cloud, input some setup information, and save the new load balancer. Once done, load balancing is in place and you can simply enjoy the improved performance!

The Morpheus Load Balancers interface is clean and easy to use. Source: Morpheus.

If you have an existing external load balancer, you can also use that with Morpheus. The process to add one is very similar: simply select External Load Balancer instead. The ensuing wizard will ask a few additional questions for the setup, which will allow the external load balancer to communicate with your Morpheus setup.

In addition to all of this, Morpheus automatically collects system, database, and application logs for all provisioned IT systems, and each newly provisioned system is automatically set up for uptime monitoring. With this, you can be sent alerts proactively when it is determined that an issue may occur. You can also customize how alerts are distributed so that you receive only the alerts you need when you need them.

Morpheus also provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. So, with all of these advantages plus the ability to add a load balancer of your choice, why not watch a demo and take advantage of all that Morpheus has to offer?


If Cloud Bursting Is So Great, Why Aren’t More Companies Doing It?


It seems so straightforward: Combine the security and reliability of private clouds with the scalability and efficiency of public clouds to create an infinitely elastic infrastructure for your apps and data. Like so many other seemingly simple concepts, actually putting a cloud-bursting plan in place is thornier than a rose garden.

In hindsight, the cloud’s initial vision of the pay-as-you-go model was overly simplistic: dynamically source and broker cloud services based on real-time changes in cost and performance. As Forrester principal analyst Lauren Nelson writes in a September 2016 article on ComputerWeekly, such an ideal system “remains a vision.”

Nelson states that the primary limitation of the first generation of cloud-bursting tools is that public cloud costs don’t vary sufficiently to generate much demand for cloud brokering. The real-time information the tools provide is useful only for “right-sourcing” initial deployments to ensure they run in an optimal cloud environment. However, they don’t help in porting workloads that have already been provisioned, according to Nelson.

Addressing the cloud ‘interoperability challenge’

It turns out that running your average workload on in-house servers and bursting to on-demand public cloud capacity when usage spikes puts a tremendous strain on your internal network. It also incurs high data-out charges, introduces latency in applications, and requires that you operate two identical clouds with matching templates. An alternative is to host private clouds in the same data center as a public cloud that uses the same templates and platforms. However, few enterprises to date have implemented such a multi-data center bursting architecture.

An overly simplistic design of a cloud-bursting configuration links app servers on a private cloud to a mirror environment in the public cloud as demand for resources spikes. Source: RightScale, via Slideshare

There’s nothing simple about connecting cloud and non-cloud environments for such functions as authentication, usage tracking, performance monitoring, process mapping, and cost optimization. Suppliers may pitch their products’ interoperability features, but in practice, few of the products are capable of cutting across infrastructure, hypervisor, and cloud platforms.

A service that makes application mapping a breeze

Hybrid cloud management requires an understanding of the economics of various cloud services, and a comprehensive map of all your applications’ dependencies. You also need to know exactly what data the cloud service collects, and how to maximize your providers’ integration options. For example, the Happy Apps uptime monitoring service provides dependency maps that show at a glance the relationships between individual IT systems as they interact with your apps.

With Happy Apps, you can group and monitor databases, web servers, app servers, and message queues as a single application. In a single clear, intuitive dashboard you see the overall status of your systems as well as the status of each group member. The range of databases, servers, message queues, and apps supported by Happy Apps is unmatched in the industry: MongoDB, Riak, MySQL, MS SQL Server, Redis, Elasticsearch, RabbitMQ, and in the near future, Oracle. Last but not least, Happy Apps’ reporting functions facilitate analysis on stored data to identify patterns that affect performance, outages and other parameters.

API management is the key to easy connectivity

The future of IT belongs to application-to-application messaging, usually based on RESTful APIs. Managing APIs becomes the key to quick, universal access to all available cloud resources. Unfortunately, the APIs of many cloud services are problematic: customers report receiving inconsistent results from a single API call, daily changes made without notifying customers, and libraries of exposed APIs that lack core functions.

Conversely, customers often exacerbate the problem by treating each API as a one-off rather than applying consistent policies to all APIs. Having a well-documented API on the customer side makes it more likely the cloud service provider will connect to your database, applications, and other systems with no latency or other performance issues.
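
As a purely illustrative sketch of what applying consistent policies to every API might look like, the Python snippet below routes each outbound call through one wrapper that enforces the same timeout, retry, and authentication behavior. The endpoint, bearer-token scheme, and thresholds are placeholder assumptions for the example, not any particular provider's API.

```python
import time
import requests  # assumes the requests library is installed

DEFAULT_TIMEOUT = 10               # seconds; one timeout policy for every call
MAX_RETRIES = 3
RETRYABLE_STATUS = {429, 500, 502, 503, 504}

def call_api(method, url, token, **kwargs):
    """Issue an API call with uniform timeout, retry, and auth policies."""
    headers = kwargs.pop("headers", {})
    headers["Authorization"] = f"Bearer {token}"   # placeholder auth scheme
    for attempt in range(1, MAX_RETRIES + 1):
        resp = requests.request(method, url, headers=headers,
                                timeout=DEFAULT_TIMEOUT, **kwargs)
        if resp.status_code not in RETRYABLE_STATUS:
            return resp
        time.sleep(2 ** attempt)                   # simple exponential backoff
    resp.raise_for_status()
    return resp

# Every cloud API call in the codebase goes through the same policy, e.g.:
# resp = call_api("GET", "https://cloud.example.com/v1/instances", token="...")
```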

Even hyperconverged boxes need TLC to work with hybrid clouds

Some companies view hyperconverged storage as a private cloud in a box, as the Register’s Danny Bradbury writes in a September 22, 2016, article. When cloud bursting involves hyperconverged boxes, your local resources are likely to become overburdened as compute and storage are offloaded from the on-premise kit to the public cloud. The only way to coordinate security, charging and budget control is via orchestration.

In a July 29, 2016, article on Business2Community, Tyler Keenan identifies the technical challenges facing IT in implementing cloud bursting in a hybrid setup. The most common problem area is limited bandwidth: the burst of data you need to move between the datacenter and the public cloud overwhelms your network connection at just the time you need the bandwidth the most. Even if your storage and compute capacities are scalable, your data-transfer pipes may not be so flexible.

The importance of infrastructure and resource orchestration is highlighted in this diagram of a typical cloud-bursting scenario. Source: Inside Big Data

Cloud bursting requires that your software is configured to run multiple instances simultaneously. This is particularly troublesome when you have to retrofit existing apps to accommodate multiple instances. In many organizations, compliance with HIPAA or PCI DSS may be a factor when shifting data between your in-house private cloud and a public cloud service.

What channel partners can do to help make cloud bursting work

Despite the difficulties cloud bursting presents to organizations, the technology still offers the promise of unmatched efficiency when accommodating peaks in demand, whether they’re anticipated (such as a retailer’s traffic bumps at holiday season) or a surprise. In a June 16, 2016, post on the Channel Partners blog, Bernard Sanders presents a five-step plan for cloud service providers who want to assist enterprises in implementing their cloud-bursting strategy.

1. Automate the server build process to the custom needs of the company, including IP and hostname allocation, VM creation, and installation of the base OS and various agents.

2. Automate the entire app stack to ensure builds for common apps are standardized across the company, and best practices are implemented at the automation layer.

3. Set up and test auto-scaling, starting with a single app and a trigger action for scaling resources up and down, such as high or low CPU capacity (see the sketch after this list).

4. Enable the complete stack deployment process inside the public cloud provider’s infrastructure, preferably using a configuration management system such as Morpheus to automate the deployment process.

5. Implement your cloud-bursting plan, starting with a stateless app, such as a web app with no dynamic content or database connection as a proof of concept (and confidence booster).
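
As a rough illustration of step 3, the boto3 sketch below attaches a simple scale-out policy to an existing Auto Scaling group and triggers it from a CloudWatch CPU alarm. The group name, threshold, and adjustment size are assumptions made for the example, not values taken from Sanders' post.

```python
import boto3  # assumes AWS credentials and an existing Auto Scaling group

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

GROUP = "web-app-asg"  # placeholder Auto Scaling group name

# Scaling policy: add one instance each time the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=GROUP,
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Trigger: average CPU above 70% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-app-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

A matching scale-in policy and low-CPU alarm would complete the loop, so resources are released when demand drops.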

Cloud bursting is more than just some data-management ideal that will never be attained. But when it comes to actually implementing the technology, IT departments may need to adopt the old motto of the U.S. Marines: “The difficult we do right away. The impossible takes a little longer.”

Autoscaling: How the Cloud Provides a Tremendous Boost


If you are looking to transition some or all of your IT infrastructure to the cloud, one advantage that may help persuade people to get on board is the ability to use autoscaling. With autoscaling, you can give your website or web app a real boost by matching the resources it consumes to actual usage at any given time.

An Introduction to Autoscaling

Autoscaling is a powerful feature found in many cloud computing services that allows resources to be added or removed based on the load at a particular time. Resources such as additional servers can be set up, much like in load balancing, but the number in use at any given moment is determined by the number of requests being received at that time.

The term autoscaling was originated by Amazon Web Services (AWS); however, many other cloud services now offer this feature as well, so you have quite a few options to choose from in order to best fit the needs of your company.

How Autoscaling Works

Autoscaling can be beneficial both for handling traffic spikes and for the bottom line. With autoscaling, additional resources are only put in place when needed, so that you do not have to, for instance, pay to have a number of extra servers running all the time in order to handle the possibility of a heavy load at some point. Instead, you simply pay for them when they are needed, which can potentially save you quite a bit of money!

Many services set up autoscaling so that you can have a minimum and a maximum number of servers that are allowed to be running to handle requests. For example, you could set a minimum of one and a maximum of eight. If the number of requests is minimal, then only one server will be running. If load becomes very heavy, then all eight servers can be running in order to handle it. Once traffic decreases again, some of the servers can be shut down.

A fixed cycle is a related technique that uses a static schedule to turn resources on or off for expected traffic patterns. The downside to this technique, though, is that a static schedule cannot account for the unexpected.

For instance, an unexpected amount of traffic could arrive in the middle of the night simply because more people happened to be up late surfing the web that night. This unexpected traffic pattern could cause downtime, as the fixed cycle had only a minimum number of servers running and wasn't able to handle the load. With autoscaling, the surprise usage would be handled automatically, thus helping to avoid the downtime.
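
To make the one-to-eight server example concrete, here is a minimal sketch of the reactive calculation an autoscaler performs, assuming a hypothetical capacity of 500 requests per minute per server:

```python
MIN_SERVERS = 1
MAX_SERVERS = 8
REQUESTS_PER_SERVER = 500   # assumed capacity of a single server, per minute

def desired_server_count(current_request_rate):
    """Return how many servers should be running for the current load."""
    needed = -(-current_request_rate // REQUESTS_PER_SERVER)  # ceiling division
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

print(desired_server_count(120))    # quiet night: 1 server
print(desired_server_count(3400))   # unexpected spike: 7 servers
```

The cloud provider re-evaluates this kind of calculation continuously, so the surprise late-night spike described above is absorbed without anyone being paged.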

Examples of Enterprise Autoscaling

Netflix

In 2013, Netflix published a report showing how they were able to make use of two forms of autoscaling: one provided by AWS, and one they customized to work with AWS for a few of their specific use cases. The main goal for Netflix has been to always have a scalable system with minimal outages, and to be able to respond quickly should any outages occur.

The first bit of technology they used to meet their needs was Amazon Auto Scaling (AAS) provided by AWS. Regarding AAS, Netflix had great praise for the effectiveness of the autoscaling feature.

Source: Netflix

Netflix also went a step further by adding a customized autoscaling engine to handle some of their specific use cases, such as a rapid spike in demand, outages (which they noted were often quickly followed by a “retry storm”), and variable traffic patterns. They noted that neither scaling up aggressively nor always running more than the required number of servers was a cost-optimal solution.

As a result, they built what became known as Scryer, a predictive autoscaling engine that predicts resource needs based on daily traffic patterns. By adding Scryer to the mix, they noted additional benefits, including better service availability and a reduction in EC2 costs.

Scryer in action. Source: Netflix 

In the end, Netflix was able to create a hybrid of predictive autoscaling through Scryer and reactive autoscaling with AAS, and felt that the combination really helped provide a robust solution for them.
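
To make the predictive side of that hybrid concrete, here is a toy sketch of forecast-driven capacity planning. It is not Scryer's actual algorithm, just an hour-of-day average with a safety margin, and the per-server capacity figure is invented for the example.

```python
import math
from collections import defaultdict

def hourly_forecast(history):
    """history: iterable of (hour_of_day, request_count) samples from past weeks.
    Returns the average observed load for each hour of the day."""
    totals, counts = defaultdict(float), defaultdict(int)
    for hour, load in history:
        totals[hour] += load
        counts[hour] += 1
    return {hour: totals[hour] / counts[hour] for hour in totals}

def predicted_capacity(forecast, hour, requests_per_server=500, headroom=1.2):
    """Pre-provision servers for the coming hour with a safety margin, so the
    reactive autoscaler only has to absorb the forecast error."""
    expected = forecast.get(hour, 0) * headroom
    return max(1, math.ceil(expected / requests_per_server))
```

The reactive policy (AAS, in Netflix's case) then handles whatever the forecast misses.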

Facebook

In August 2014, Facebook published a post from one of their engineers that described how they use autoscaling to significantly lower their energy costs.

Autoscale led to a 27% power savings around midnight (and, as expected, the power saving was 0% around peak hours). The average power saving over a 24-hour cycle is about 10-15% for different web clusters.

Source: Facebook 

Facebook had a goal of remaining energy-efficient and keeping a minimal environmental impact as they continued to scale. Since their servers were handling billions of requests, they were already using a modified round-robin system for load balancing. While this was helpful, they felt they could save more energy by adding some autoscaling features into the mix.

To accomplish their goal, they implemented an autoscaling solution that pushed workload to a server until it was taking on a medium workload, and used a minimal number of servers when the overall workload was low (in their case, near midnight). This resulted in a great deal of savings when compared to a typical cluster.

The results of autoscaling on power consumption for Facebook. Source: Facebook

As you can see, their power consumption for their autoscaled cluster was quite a bit less than for their base cluster, especially at non-peak hours. As a result, autoscaling helped them to both save money and to minimize their environmental impact when scaling.
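
As a loose illustration of the packing idea (not Facebook's actual Autoscale implementation), the sketch below concentrates traffic onto as few servers as will carry a "medium" load and reports how many machines could be idled. The capacity and utilization figures are arbitrary.

```python
import math

def pack_workload(request_rate, capacity_per_server, total_servers,
                  target_utilization=0.6):
    """Route traffic to as few servers as possible, filling each to a
    'medium' workload; the remainder can be idled to save power."""
    per_server_target = capacity_per_server * target_utilization
    active = min(total_servers,
                 max(1, math.ceil(request_rate / per_server_target)))
    return active, total_servers - active

# Near midnight: 2,000 req/s against 40 servers that each handle 200 req/s.
active, idle = pack_workload(2000, 200, 40)
print(active, idle)   # 17 active, 23 idled
```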

How Morpheus Handles Autoscaling

Setting up autoscaling in Morpheus takes just a few clicks. Morpheus also allows you to easily scale instances based on CPU, RAM, I/O, or custom schedules. Check out how easy it is to set up autoscaling in this Morpheus Minute video.

Start Autoscaling with Morpheus

Nutanix .NEXT Europe


Welcome to Nutanix .NEXT Europe!

Thousands of attendees will be at Nutanix .NEXT 2016, and we know that it can be tricky to find your way around, so we put together this handy guide to help.

Be sure to stop by the BigTec stand to meet the team, get a demo, and pick up your Morpheus swag. We're located in the back left of the exhibitor hall, near the big-screen TV.

Morpheus is a Nutanix Ready Technology Alliance Partner and a BigTec Partner. Come see us to learn how your organization can get more out of Nutanix.  

Getting Around At .NEXT

Highlighted Events At .NEXT

Wednesday

  • Registration Open, Exhibit Area, 3:00PM - 9:00PM
  • Welcome Reception, Exhibit Area, 7:00PM - 9:00PM
    • Sponsored by Dell, the opening .NEXT reception promises to bring all the fun of the "X" games to Vienna! Come enjoy Vienna's finest beer, wine and food, while competing with your peers at a wide variety of X games! Plus, rock out to DJ DUO!

  • Breakfast, Solutions Expo, 7:30AM - 8:30AM
  • Day 1 Keynote, 8:30AM-10:00AM
    • Join Nutanix CEO Dheeraj Pandey, along with Nutanix Chief Product & Development Officer Sunil Potti and special guests from Citrix, Puppet and more, to learn about the new enterprise cloud platform that is radically simplifying virtualization, containers, compute, storage and more...

Thursday

    • Breakfast, Solutions Expo, 8:00AM - 9:00AM
    • Day 2 Keynote, 9:00AM - 10:00AM
    • Closing Keynote, 1:45PM - 3:00PM

Community

  • Don't miss a minute of the action. Catch the Twitter live stream here:

3 Keys to a Well-defined Cloud Strategy


Cloud computing is no longer just a buzzword; it's the norm in the world of enterprise IT. Recent surveys indicate that over ninety percent of responding IT pros are cloud users. This statistic highlights the prevalence of cloud-based implementations and the importance of a well-defined cloud strategy.

While developing a plan from scratch can seem like a daunting task, taking the time to define a cloud strategy upfront will help you leverage the economies of scale found in the myriad cloud solutions available today and ensure your business doesn't fall behind the curve in an ever-evolving global market. Following a few rules of thumb goes a long way toward assuring your strategy is one that allows you to benefit.

Getting Started with Cloud Strategy

As with most projects, defining the reasons why your business needs the cloud is a key component to understanding how you should craft a cloud strategy. Almost without exception, every organization can benefit from cloud computing, but the needs of a tech startup with its eyes on exponential growth in the next year are different from those of a multi-national conglomerate. The former will need to minimize deployment times and employee-hours dedicated to non-core services, whereas the latter may need to support custom legacy apps and assure they meet PCI and HIPAA requirements.

In any circumstance, when implemented properly, using the cloud can increase the flexibility of your IT infrastructure, allow you to reap the benefits of economies of scale, reduce TCO, and increase ROI. The takeaway here is to make sure your goals with the cloud are in line with your goals as an organization. Determine what you’re looking to achieve at a high level, and build your cloud strategy around these objectives.

Identify Opportunities and Build Your Plan Around Your Needs

With your business needs clearly defined, it's now time to get to work defining what tangible actions can be taken. Inventorying your IT landscape and quantifying the security and compliance requirements and growth projections of each workload, as suggested in this Rackspace whitepaper, is a great place to start. Generally, apps with low security and compliance requirements and high growth and usage-variability projections are great candidates for cloud migration, whereas legacy apps and apps with stringent security requirements require a bit more consideration.

As mentioned in this TechTarget article, the realities of your current IT infrastructure and the costs associated with resource consumption fees when using IaaS solutions such as AWS or Microsoft Azure cannot be ignored. For example, moving an onsite Oracle database to a public cloud in an environment where LAN speed is significantly greater than WAN speed could negatively impact performance. This isn't to say the migration of said database should be nixed, rather that it's important to consider all facets of any solution.

Always Have a Backup

Have a backup and disaster recovery plan. Period. The cloud is amazing: there is an “aaS” for pretty much anything you can imagine, and the competitive conditions cloud vendors operate in have made things significantly easier for IT groups the world over. You've outsourced your security and maintenance and don't have a care in the world, right? Wrong. Despite the diligent efforts of cloud vendors, hacks happen, catastrophic failures occur, and service interruptions are real. While a cloud vendor probably outperforms most onsite alternatives in terms of uptime and security, disaster recovery still needs to be considered. Morpheus can help here by empowering administrators to automate backups and define destination targets, mitigating the need for custom cron jobs and other “duct-taped” solutions.
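
For contrast, here is a minimal sketch of the kind of hand-rolled backup job teams typically wire into cron, and which policy-driven, automated backups are meant to replace. The database name, bucket, and schedule are placeholders.

```python
import datetime
import subprocess
import boto3   # assumes AWS credentials are configured

DB_NAME = "production_db"          # placeholder database name
BUCKET = "example-backup-bucket"   # placeholder S3 bucket

def nightly_backup():
    """Dump the database and ship the file to object storage.
    Typically scheduled via cron, e.g.: 0 2 * * * /usr/bin/python3 backup.py"""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    dump_file = f"/tmp/{DB_NAME}-{stamp}.sql"
    subprocess.run(["pg_dump", DB_NAME, "-f", dump_file], check=True)
    key = f"backups/{DB_NAME}-{stamp}.sql"
    boto3.client("s3").upload_file(dump_file, BUCKET, key)

if __name__ == "__main__":
    nightly_backup()
```

Scripts like this work until the schema, the credentials, or the person who wrote them changes; moving the policy into the management platform removes that fragility.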

Get Started, Learn, and Optimize

Start now and gain experience with what the cloud can do for you. According to a 2016 State of the Cloud Report, lack of resources and expertise is now the number one cloud challenge. This can create a catch-22 of sorts for many IT teams: they don't leverage the cloud because they lack the experience, and they can't gain the experience without leveraging the cloud. Don't let that lack of familiarity stop your organization from modernizing. Take a lesson from Arthur Ashe: start where you are and do what you can. Begin trials with some lower-risk applications and services so your team can learn through experience.

As your organization gains experience, your comfort level with cloud computing, virtualization, and the alphabet soup of “aaSes” will increase, allowing you to bridge skills gaps and further leverage the benefits of the cloud. Invariably, there will be some level of trial and error in this process, and the need to migrate apps and services to and from public clouds, private clouds, hybrid clouds, and on-premise servers will arise. The Morpheus Cloud Application Management Platform can help you seamlessly provision apps, databases, and app stack components on physical hardware or in the cloud, making the once daunting task of migration much less intimidating. Additionally, the advanced logging and monitoring features offered by Morpheus will allow you to get a head start on one of the more overlooked components of a cloud strategy: optimization. As your organization matures in its use of cloud solutions, you will be able to parse data and identify opportunities to operate in a leaner, more cost-effective manner.

To summarize, the key components to an effective cloud strategy are: understanding what you need the cloud to do for you, identifying pieces of your infrastructure that can be effectively migrated to the cloud, assuring backup and disaster recovery plans are in place, and learning from experience and optimizing once you get started.

Why Big Brands Are Abandoning Data Centers for Better Cloud Infrastructure


According to reports, the use of cloud infrastructure is growing and will continue to grow in the years ahead. A roundup from Forbes cites a number of sources that show the recent growth as well as the predicted growth for the coming years, and the forecast is excellent for cloud infrastructure.

Recent Trend

In 2015, Amazon Web Services (AWS) generated $7.88B in revenue, with Q4 2015 revenue up 69% over the prior year.

Source: Forbes

The Forbes roundup cited a report showing AWS revenue up 69% over the previous year. In addition, it cited a report showing that the cloud computing market grew to $110 billion in revenue for 2015, an increase of 28%. This report also noted that IaaS/PaaS services grew at a rate of 51% and that private and hybrid cloud infrastructure services grew at a rate of 45% during that same period.

As these reports show, there has been substantial growth in the cloud market recently. With more and more companies finding that the cloud can make such things as scaling (a big bonus with autoscaling), load balancing, alerts, reports, and a number of other important tasks quicker and easier to accomplish, industry experts predict that major growth in cloud infrastructure will continue to be the trend for the coming years.

Predictions

In 2016, spending on public cloud Infrastructure as a Service hardware and software is forecast to reach $38B, growing to $173B in 2026.

Source: Forbes

With a forecast like this, there is no better time to be in the industry and to make use of the services provided by companies offering cloud infrastructure, as the competition in the growing market will likely fuel even more innovation and additional bonuses for new and existing customers.

Other predictions included:

Worldwide IT cloud service revenue is projected to increase to $127 billion by 2018, with managed services being projected to reach $256 billion in the same year.


2018 forecasts for the cloud. Source: Microsoft Cloud Landscape 2015 

Microsoft cloud products are projected to reach 30% of the company's revenue by 2018, up from 11%. This includes the Office 365, Azure, and CRM cloud products.

Public cloud revenue is projected to increase to $167 billion in 2020 (up from $80 billion in 2015). Interestingly, the report cited also noted that 49% of the market believes the public cloud is just as secure, or even more secure, than the private cloud. This shows a great deal of confidence in cloud service providers, which is likely another reason for a large amount of projected growth.

When it comes to public cloud services, worldwide spending is predicted to grow to $141 billion in 2019 (up from close to $70 billion in 2015). Again, this indicates substantial confidence in the service providers by those projected to use the services.

Notable Examples

In an article about data center consolidation due to the cloud, Data Center Knowledge’s Yevgeniy Sverdlik cites some interesting examples of companies that are transitioning from larger data centers to smaller ones, using cloud services to accomplish the task. Among the companies mentioned, Capital One and GE were notable in how they planned to employ the use of cloud infrastructure in addition to what they currently had at the time.

Capital One

We have to be great at building software and data products if we’re going to win at where banking is going.

Source: Rob Alexander, Capital One CIO

Capital One has plans to lower the number of data centers it uses from eight down to five, and then to lower it all the way down to three in 2018. The idea is to be able to deploy new software much faster and to be able to quickly expand products that are in high demand.

Notably, based on recent trends, they have responded to the fact that mobile is the preferred method of access for the majority of their customers. To make their newest mobile banking application, they made use of AWS to handle the load.

GE

GE’s ongoing data center consolidation program is expected to take the company from 34 data centers to four, which will hold only the most secret and valuable data it has.

Source: Jim Fowler, GE CIO

GE had an interesting task to accomplish: communicating with devices that aren't typical, such as aircraft engines, manufacturing plant machines, wind turbines, and more. The biggest part of the workload in going from 34 data centers down to four will be transferred to AWS.

These examples again show the confidence of companies in cloud infrastructure services going forward.

Morpheus Difference

If you are looking to begin cloud adoption, why not try out Morpheus, which makes provisioning, scaling, and maintenance of your apps and servers a breeze? You can provision databases and servers quickly, and have your app up and running in no time! If you are looking to add load balancing to the mix to help improve the speed of your website or app, Morpheus offers an incredible service for this, allowing you to provision additional servers quickly so that scaling up is quick and easy.

Using the available tools, you can also monitor the various parts of your system to keep track of uptime, response time, and to be alerted if an issue does arise. Morpheus has an excellent alerting system that helps to minimize false alarms in order to help you stay more productive in your day to day activities.

In addition, Morpheus has a clear, concise, easy-to-use interface that makes accomplishing your tasks a breeze.

The Morpheus interface is clean and easy to use. Source: Morpheus.

With Morpheus, you can provision apps in a single click, and the service provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. So, with all of these advantages, why not register an account today and take advantage of all that the cloud has to offer?

3 Reasons You Need an IP Address Management (IPAM) Strategy in 2017


IP addresses are an essential part of any network, and managing them can become quite difficult as the number of IP addresses used on your network increases. With more and more of these being used all the time for applications, databases, microservices, and more, managing all of the necessary IP addresses becomes incredibly important, as the ability of machines to communicate with one another can make the difference between working services and dreaded downtime.

What is IPAM?

We currently live in an IP dependent world; the explosion of IP enabled devices such as smartphones, laptops, tablets and other random devices in the workplace has made today’s enterprise networks more dynamic and complex.

Source: Gabriel Lando

IP Address Management (IPAM) is a way in which you can track and manage the IP address space on your network. Typically, IPAM integrates DHCP and DNS, which allows for changes in one to be seen by the other. This allows updating to occur automatically when a change is detected in one or the other. 

 

IPAM Security

When it comes to network and machine security, access to IPAM data can make it far easier to detect potential breaches or abuse within your infrastructure. IPAM data include information such as the IP addresses in use, what devices each IP is assigned to, the time each was assigned and to whom each IP was assigned. 

This information can be helpful in identifying patterns that indicate security breaches or other abuses of your network. Of course, preventing or bringing to an end such security issues is of extreme importance in order to maintain data integrity and the health of your network and systems.
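
As a simple illustration, the sketch below scans a simplified set of IPAM records and flags any IP address that has been claimed by more than one MAC address, a common signal of an address conflict or a spoofing attempt. The record shape and sample data are invented for the example.

```python
from collections import defaultdict

# Simplified IPAM records: (ip_address, mac_address, assigned_to, assigned_at)
records = [
    ("10.0.4.17", "aa:bb:cc:00:11:22", "web-01", "2016-12-01T09:14:00"),
    ("10.0.4.17", "de:ad:be:ef:00:01", "unknown", "2016-12-01T22:40:00"),
    ("10.0.4.18", "aa:bb:cc:00:11:23", "web-02", "2016-12-01T09:15:00"),
]

def flag_suspect_ips(records):
    """Flag IPs claimed by more than one MAC address: a classic sign of a
    duplicate-address conflict or possible spoofing."""
    macs_per_ip = defaultdict(set)
    for ip, mac, _owner, _when in records:
        macs_per_ip[ip].add(mac)
    return [ip for ip, macs in macs_per_ip.items() if len(macs) > 1]

print(flag_suspect_ips(records))   # ['10.0.4.17']
```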

 

IPAM Compliance

IPAM can also be helpful when it comes to compliance. Certain internal policies can be enforced using IPAM data and a network access control (NAC) system. For example, before access is granted to your network, the NAC, with help from IPAM information, can determine if your antivirus software is up to date, thus potentially preventing an intruder from succeeding in an attack due to your antivirus software being behind on updates.

In addition, if you are subject to any regulatory compliance, IPAM can help identify information that can help you comply with the regulation. For example, if a regulation requires that a log is produced from your systems with the network IP assignments for a given time, the IPAM data can quickly be used to generate such a log on a regular basis in order to get into and remain in compliance with the regulation.
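
A minimal sketch of that kind of report, reusing the simplified record shape from the security example above: it filters IP assignments to an audit window and writes them to a CSV file that can be regenerated on whatever schedule the regulation requires.

```python
import csv

# Same simplified record shape as above: (ip, mac, assigned_to, assigned_at)
records = [
    ("10.0.4.17", "aa:bb:cc:00:11:22", "web-01", "2016-12-01T09:14:00"),
    ("10.0.4.18", "aa:bb:cc:00:11:23", "web-02", "2016-12-05T10:02:00"),
]

def export_assignment_log(records, path, start, end):
    """Write the IP assignments that fall within an audit window to a CSV
    report that can be handed to an auditor."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ip_address", "mac_address", "assigned_to", "assigned_at"])
        for ip, mac, owner, when in records:
            if start <= when <= end:
                writer.writerow([ip, mac, owner, when])

export_assignment_log(records, "ip-assignments-2016-12.csv",
                      "2016-12-01T00:00:00", "2016-12-31T23:59:59")
```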

 

IPAM Network Health

IP address conflicts (aka duplicate IP addresses in use on the network) are one of the most devastating issues that can happen on an enterprise network.

Source: David Davis

While security and compliance are both very important, IPAM is also quite helpful in order to provide you with general information on the state of your network and all of the IP addresses being used at any given time. For example, information can be collected on whether an IP is static, dynamic, reserved, or in another status. In addition, data such as MAC addresses, DHCP leases, hostnames, and more can be collected and viewed in order to help you get a solid overview or detailed report on what is happening on your network.

 

What sort of IPAM solutions are available?

If your company does not have an IPAM strategy, it may be time to put one in place. With all of the new devices and services in need of IP addresses today, keeping track of all of them on your network without an organized plan could become quite a mess. Depending on how large your company is, you might be able to do something simple in-house, or you may need to invest in a solution that can provide the features and customization your company requires.

 

How IPv4 and IPv6 addresses are formed. Source: Wikipedia

If your company and/or network is small enough, it may be possible to use an in-house solution such as tracking IP address usage in one or more spreadsheets. If this method is not diligently maintained by all parties involved, however, one or more pieces of information can become outdated, and thus next to useless to someone trying to generate a report or analyze a potential security or compliance issue. A more robust in-house solution, such as developing your own reporting system, would be a bit better.

As technology (and employees) change over time, however, in-house solutions can become increasingly difficult to use and maintain. Thus, while the financial investment in an in-house solution may not be large, it can be beneficial at some point to move to a system that you can use without the added worry of maintenance.

A solution maintained by an outside source can help you keep an up-to-date and easy to use system that you can use for the long-term. This, of course, assumes that you have the funding to do so and are able to take the time to carefully choose a vendor that will also be around for the long term. As you can see, each approach has its own strengths, depending on the size and financial state of your company.

 

Morpheus Difference

If you are looking to begin using IPAM, as well as other cloud services, why not try out Morpheus, which makes provisioning, scaling, and maintenance of your apps and servers a breeze? In addition to having an excellent IPAM solution, Morpheus allows you to provision databases and servers quickly, and have your app up and running in no time! If you are looking to speed up website or app performance with load balancing, Morpheus offers an incredible service for this, allowing you to provision additional servers quickly so that scaling up is quick and easy.

Using the available tools, you can also monitor the various parts of your system to keep track of uptime, response time, and to be alerted if an issue does arise. Morpheus has an excellent alerting system that helps to minimize false alarms in order to help you stay more productive in your day to day activities.

In addition, Morpheus has a clear, concise, easy-to-use interface that makes accomplishing your tasks a breeze.

The Morpheus interface is clean and easy to use. Source: Morpheus.

With Morpheus, you can provision apps in a single click, and the service provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. So, with all of these advantages, why not register an account today and take advantage of all that the cloud has to offer?

Making the Case for and Against Lift-and-Shift Cloud Migration


There’s a trend being played out in companies of all types and sizes: They’ve dipped their toes in the cloud-computing waters with secondary, peripheral IT projects, learned some valuable lessons, met with more successes than failures, and are now ready to migrate their core apps and processes to the cloud. For several good reasons, recreating every internal system to its cloud-native counterpart is impractical: too costly, too disruptive, too slow, and generally too risky.

What works for many organizations facing this situation is adopting the lift-and-shift approach to cloud migration: you port your apps and data resources to a cloud infrastructure with minimal changes to the existing code. As you can imagine, lifting and shifting modern, multifaceted applications from an in-house data center to a cloud infrastructure has both advantages and disadvantages. For many internal systems, lifting and shifting to the cloud is worthwhile. For others, reworking the apps to capitalize on specific cloud features is a better option.

When containerizing works, and when it doesn’t

Conventional wisdom says placing legacy apps and workloads in containers to port them to the cloud is a bad idea. Gartner analyst Traverse Clayton states that containerizing existing systems adds the container’s “technical debt” to the equation, which is another word for “overhead”: Now you’ve got two platforms to manage instead of just one. Clayton is quoted in a November 28, 2016, article by InfoWorld’s Bob Violino.

The exception, according to Clayton, is when containerizing lets a company extend the lifecycle of its legacy apps. For one thing, containers allow faster development iterations. More importantly, placing existing systems inside containers conforms to the overall trend in organizations away from reliance on virtualization and toward exclusive use of containers for all applications.

Clayton states that a common misconception is that containerization optimizes an enterprise’s server use by increasing server “density.” However, only “mega-web scale companies and cloud providers” are large enough to realize significant server-optimization benefits from containerization, according to the analyst.

How containerization stacks up against lift-and-shift and ‘refactoring’

In a December 4, 2015, article on Cloud Technology Partners, David Linthicum describes an intermediate step between the lift-and-shift approach and what he calls “refactoring,” which customizes the application specifically for the cloud. “Partial refactoring” modifies only portions of the application code, so the conversion is quicker than complete refactoring, but the app will miss out on some cloud benefits, and it will be more expensive to operate.

An in-between approach to moving applications from the data center to the cloud is partial refactoring, which gets you to the cloud faster but skips some cloud features and increases costs. Source: Cloud Technology Partners

When compared to containerization, refactoring and lift-and-shift migration come up short in terms of features, performance, portability of code and data, agility, and governance/security, according to Linthicum’s analysis. He notes, however, that containers are still a relatively new technology, and the value proposition for competing migration approaches is likely to improve in the future.

Containers currently offer advantages over refactoring and lift-and-shift, particularly in terms of data and code portability, but the situation is likely to change in the future as other cloud-migration approaches are enhanced. Source: Cloud Technology Partners

Greatest benefit of lift and shift: Speed

It’s one thing to envision 100-percent cloud-native applications in your company, and it’s a very different thing to do the step-by-step planning of such a massive transition. When speed is of the essence, taking the lift-and-shift road can minimize your downtime. Situations best suited to transfer as-is from the data center to the cloud are when consolidating or shutting down data centers, preparing for or reacting to a merger or acquisition, and relocating your disaster-recovery or high-availability operations.

The other side of the equation is a “greenfield” migration that recreates the application in a form optimized for the cloud. These applications may serve as the templates for future automated development and deployment, allowing you to design from scratch policies for security, performance, and cost. An obstacle for many companies is the competition for people with the skills required for developing, deploying, maintaining, and enhancing cloud applications.

To address the talent gap, Stephen Orban, head of the enterprise strategy for Amazon Web Services, recommends that companies implement a cloud training and certification program for their IT workers. Orban was interviewed by Computerworld’s Sharon Gaudin at the recent AWS re:Invent conference in Las Vegas. According to Orban, the cloud represents a golden opportunity for anyone working in IT, yet many IT staffers hesitate to expand their cloud skills due to fear of the unknown. The more fluent IT managers are in cloud technology, the faster the migration to cloud services.

Cloud migration is now seen as ‘table stakes,’ not business differentiator

It has reached the point where any business without a cloud operation is playing catch-up with the competition. That’s the conclusion of VMware chief technology officer Chris Wolf in a December 5, 2016, post entitled “The Third Industrial Revolution: 2017 and Beyond.” The IT executives Wolf meets with agree that adopting an agile, programmatic infrastructure is now one of the costs of doing business, or “table stakes,” rather than a way to distinguish your firm from the competition.

Wolf believes we are on the verge of a new era in networking and security that shifts workload security from an arbitrary IP address, which is a nightmare to manage when an app is moved or redeployed, to a unique global identifier, such as the app name, so the “security context” follows the application automatically. A big part of making programmatic infrastructures practical in the short run is accommodation for legacy systems, which will be able to share and benefit from the “globally consistent” operational layer. The same management and processes are applied for security, auditing, and performance regardless of where the application runs.

Cloud-migration plans: Shorter, heavier on details

How the “cloud conversation” has evolved in recent years was one of the trends noted by Computerweekly’s Carolyn Donnelly following Amazon’s recent re:Invent show. In a December 9, 2016, article, Donnelly states that enterprises no longer devise cloud-migration strategies extending two or more years into the future. Today, the migration plans span no more than 18 months, and often as few as 11 months. More importantly, the plans are much more specific these days, which indicates the person doing the planning intends to do the plan implementation as well rather than leaving the hands-on work to their successor.

Another recent trend sees companies making the cloud transition in two parallel tracks: Some apps are still being lifted and shifted to a bare-metal infrastructure as a service (IaaS) so they can realize cloud resource efficiencies in the short term; while other apps are being rebuilt as cloud-native to make them as agile as possible to develop, deploy, and maintain.

It’s as if the cloud seeds that were planted years ago in the form of one-off applications intended as a proof of concept are now blooming into full gardens that will be the foundation of the organization’s digital transformation. Lift and shift still plays a role in the migration process, but that role is likely to wane as application “lifecycles” are replaced by the cloud’s continuous-deployment model.

For more on planning your application and database migration strategy, see this post on the Morpheus blog, which takes a deeper dive into the role of containers vs. virtualization.


Morpheus 2016 Year in Review


The past 12 months have flown by at Morpheus Data. We had the chance to bring better cloud management to customers all over the globe. We even opened up a new sales office in the UK. Here's a look at some of our highlights and a sneak peek at what's in store for 2017.

Thank You for helping make 2016

our best year ever!

10 Advantages of Cloud Migration


When you're migrating applications and data to the cloud, you want to use the cloud services that best meet your needs so your migration goes as smoothly as possible. To get the most out of the cloud once migrated, you need to know up front that you will be getting as many of the advantages of the cloud as possible.

With that in mind, here are 10 things your cloud service or services should provide to ensure that your cloud migration is quick, smooth, and gives you the major advantages of moving into the cloud.

1. Faster Deployment Times

Migrating to the cloud means you should be able to deploy your apps and services more quickly. Many services provide the ability for you to quickly provision servers and other resources within a few steps, which tends to be a much simpler process than buying servers, installing the needed operating system, and placing it into a network or data center.

Source: Edwin Schouten, Cloud Services Leader for IBM Global Technology Services, Benelux region 

2. Enhanced Security Features

Most cloud providers take care of some of the tougher security issues, such as keeping unwanted traffic outside a specific scope from accessing the machines on which your data and apps reside and ensuring automatic security updates are applied to their systems to keep from being vulnerable to the latest known security threats. 

Keep in mind, though, that you will still need to have security policies in place on your end, such as keeping mobile devices secure, making sure employees don’t divulge passwords or other sensitive information to unauthorized parties, and so on.

3. Less Infrastructure Complexity

Cloud systems tend to peel away the complexity of the infrastructure that underlies the architecture being used to provision new machines and make them all work together to provide the needed services. Instead, you are able to fill out some information on what is needed and launch the necessary services. This can save quite a bit of time, as those particular complexities are no longer a part of your process.

4. Built-in Status Monitoring

A number of cloud services are able to provide monitoring so that you can be notified when an app or machine has potential issues or is actually experiencing an outage. This, of course, can save you quite a bit of time as opposed to keeping track of the state of your services on your own.

5. Automatic Backup and Logging of Key Metrics

Related to monitoring, backup and logging services are extremely important, especially if you need to perform disaster recovery from an outage and see where things went wrong. The backups will allow you to get things up and running again, and the logs may provide some critical information to help you find out what caused the issue in the first place.

6. A “Single Pane of Glass” for Services

A good cloud service can make all of these things appear as though they are shown through a “single pane of glass.” Thus, workload deployment, monitoring, and mobility can all be handled in a single location, as opposed to going through several different services that don't share a common user interface.

7. Greater Flexibility and Collaboration for Staff

With cloud services, your team won’t need to be at a specific location to deploy, update, or fix issues with any of the various machines being used. This makes it a more flexible solution when compared to the necessity of being on-site. Also, the consistency of the provisioning and deployment processes the cloud provides can make it much easier to collaborate, as everyone can be on the same page, without shadow IT.

8. Reduced Footprint (Fewer Data Centers)

By making use of the cloud, you can potentially reduce the number of data centers needed in your organization. Instead, you may be able to get by with one data center for particularly sensitive information, or even zero if that is not needed in your case. This, of course, can help save on the costs of operating multiple data centers.

9. Improved Cost Management

For Netflix, this has proven to be quite effective at improving system availability, optimizing costs, and in some cases reducing latencies. 

Source: Netflix 

Some cloud providers also are able to provide autoscaling, which allows you to provision more services when needed while turning them off when they are not needed. This highly responsive technique can help even more with cost savings, as you only need to be charged for the time those additional systems are on instead of having to keep additional machines up and running all the time to deal with peak loads. In this way, your services can automatically respond with the number of resources needed at any given time, preventing both downtime and unnecessary expense!

10. Morpheus does all of these things on a single platform

If you are looking to get all of these cloud advantages and make your cloud migration better, why not try Morpheus, which provides all of these services in a single platform? Morpheus allows you to provision databases and servers quickly, and have your app up and running in no time!

Morpheus allows you to place workload deployment, monitoring, and mobility behind a single pane of glass. Morpheus provides automatic monitoring, backups, and logging, so that you can focus on your app rather than all of the other complexities involved in the architecture, and also provides autoscaling services so that you can optimize the resources needed for your apps at any given time.

In addition, Morpheus has a clear, concise, easy-to-use interface that makes accomplishing your tasks a breeze.

The Morpheus interface is clean and easy to use. Source: Morpheus.

So, with all of these advantages, why not register an account today and take advantage of all that the cloud has to offer?

 

Hybrid Cloud Migration: Phase One of a Long-term Multi-Cloud Strategy


Somewhere there’s a business that’s still using dial telephones and paper rolodexes. That’s not your company. In fact, it is all but certain that your operation is using more than one cloud service, and perhaps a dozen or more distinct services for particular purposes. Multi-cloud is today’s business reality.

According to a recent report on cloud technology, companies have an average of six separate clouds. In a November 30, 2016, article, Information Age’s Neil Ismail defines multi-cloud as any situation where apps are deployed across two or more cloud platforms. The goal is to improve performance and cost efficiency by matching app components to the tools and technologies best suited to them.

One of the misconceptions about multi-clouds is that they are synonymous with hybrid clouds. As Ismail explains, hybrid clouds are a specific type of multi-cloud distinguished by “traditional” application deployment via on-premises computing or managed hosting on a combination of public clouds and private clouds. Orchestration tools are used to manage the various hybrid platforms.

According to RightScale’s 2016 State of the Cloud report, three out of four enterprises and nearly two out of three small businesses use a mix of public and private clouds.

Long-term Cloud Management

The cloud is not some place you're going to visit for a short while before returning home. It's the place your company's systems are going to live for the long run. Of course, you don't plan for a vacation the same way you plan for a permanent move. Even if you've got a good start on your cloud operations – initial apps in place and running smoothly, core competencies well developed in your staff – your plan isn't complete until you've laid a foundation for the long term.

In a December 19, 2016, article on EnterpriseTech, Logicworks’ Jason McKay cites a recent Gartner study that found for each dollar a company invests in innovation through 2019, it will spend $7 for core execution. 

According to Gartner, “[d]esigning, implementing, integrating, operationalizing, and managing the ideated solution can be significantly more than the initial innovation costs [because] the deployment costs of the Mode 2 ‘ideated solution’ are not necessarily considered during ideation.”

McKay offers three suggestions for building an at-scale cloud-computing practice:

  • Focus on automated, cloud-native tools that streamline ongoing management rather than on an “easier” lift-and-shift migration that relies on existing toolsets.
  • Write resource templates with tools such as AWS CloudFormation or Terraform rather than asking in-house system engineers to devise one-off “snowflake” cloud environments (a minimal sketch follows this list).
  • Partner with a company that offers a mature automation framework, or invest in your own configuration management system.
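
As a rough sketch of the template-driven approach McKay recommends, the snippet below launches a stack from a small inline CloudFormation template via boto3, so every environment comes from the same declarative definition instead of being built by hand. The stack name, AMI ID, and instance type are placeholders.

```python
import json
import boto3   # assumes AWS credentials are configured

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",   # placeholder AMI
                "InstanceType": "t2.micro",
            },
        }
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="web-tier-example",        # every environment is created from
    TemplateBody=json.dumps(template),   # the same template, never by hand
)
```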

Debunking the Myth of Multi-cloud Management 

The success of any project depends on having a good understanding of where you're starting from. For a multi-cloud management plan, that means knowing the characteristics and status of all existing cloud operations in the organization. As Information Age's Ismail writes, this initial step can be the trickiest of the entire process because IT managers may be in denial about how people in the company are currently using the cloud.

The LANDESK survey of shadow IT found that 92 percent of IT departments report having shadow IT projects turned over to them to manage, averaging 7.1 shadow-IT projects per company.

The result, according to Rackspace CTO John Engates, is that “businesses often end[] up employing multi-cloud by accident due [to] other departments employing cloud services without their knowledge.” After investigating cloud use in the company, IT may discover that the marketing department is using one cloud, for example, while the human resources department relies on an entirely different cloud service. The solution, Ismail says, is to offer “a menu of cloud providers which have been researched and pre-approved by the IT department.”

Two other common misconceptions about multi-cloud are that it is less secure than in-house systems and that it requires technical expertise that is in short supply. To debunk the first, Ismail points out that a carefully planned and implemented multi-cloud strategy improves overall security by giving companies more control over the physical location of their most sensitive data for compliance purposes. Firms also exercise more precise control over the methods used to protect their information.

As for the shortage of cloud management expertise, Ismail recommends that companies survey their IT staff and business departments to “assess the level of expertise that already exists in the business and find out where the gaps are.” Once those gaps are identified, you can shop for just the skills you need from cloud brokers or managed cloud providers. Doing so frees up in-house staff to spend their time on activities that move the business closer to its goals.

Multi-cloud and Hybrid Cloud: The foundation for Virtualization Management

The multi-cloud and hybrid-cloud setups that have evolved from initial pilot projects and shadow IT form the basis of an integrated, virtualized approach to application management. Embotics President Jay Litkey calls this type of automated cloud management “IT as a service” because it places resource optimization, lifecycle management, workflow and automation, IT costing and chargeback, and self-service and service catalogs in a single, separate layer that accommodates cloud services of all types.

The software-defined data center (SDDC) shifts management of applications and workloads to a single layer where they are abstracted and aggregated apart from any particular resource, physical or virtual. Only then are the instances comprising the workloads able to be distributed most efficiently to the multi-cloud and hybrid cloud resources optimized for them.

From Cloud Migration to Cloud Management

It’s understandable that companies introduce cloud computing to their operations through small, targeted projects that serve as proofs of concept. Over time the migrations become smoother, and the cloud projects grow in number and complexity. The problem is, many organizations remain stuck in “migration” mode and struggle to get to “management” mode, which entails the gamut of IT duties.

The Morpheus cloud application management platform bridges the migration and management phases for your company’s apps. Morpheus delivers all provisioning tools for databases, applications, and app stack components in a single pane of glass for all servers and clouds: on-premise, public, private, or hybrid. The service supports asynchronous provisioning, so you can work on multiple IT systems at the same time.

With Morpheus, cloud lock-in is a thing of the past. In addition to one-click provisioning, you can migrate clouds and hypervisors quickly and simply between AWS, Google Cloud Platform, VMware, Microsoft Azure, Xen, Nutanix, and other platforms. Automated monitoring lets you group apps and systems for an immediate view of all your instances. Calculating an app’s overall uptime is more accurate via Morpheus’s built-in redundancy layers.

New Views Into Cloud Application Performance


You can’t monitor what you can’t see. That seems overly simplistic, but it is the heart of the problem facing IT departments as their apps and other data resources are more widely dispersed among on-premises systems and across public and hybrid clouds. 

Universal visibility will become more important in the future: according to the Cisco Global Cloud Index, 92% of all workloads will be processed in public and private clouds by 2020. To address this “visibility gap,” companies have to find a way to identify and extract only the most relevant monitoring data to avoid swamping their monitoring, analytics, and security tools.

One problem is that in virtual data centers, 80% of all traffic is internal and horizontal, or “east-west,” according to Cisco. By contrast, most cloud traffic travels vertically, or “north-south.” Traditional monitoring tools aren't designed to scale up and down fluidly as virtual machines are created, run their course, and disappear.

Adopting a virtualized infrastructure flips network traffic monitoring from predominantly east-west to predominantly north-south. Source: NeuVector

4 Components of Monitoring Modern Virtual Environments 

Realizing all the efficiency, scalability, and agility benefits of today’s virtualized infrastructures requires monitoring four distinct aspects of application traffic:

  • Horizontal scaling: as an app scales up to accommodate exponentially more users, the tool you use to monitor the app has to be capable of scaling right along with it.
  • Viewing virtual network segments: What may appear as a single virtualized environment is in fact partitioned to secure sensitive applications and data. Monitoring traffic between the virtual segments requires peering through the virtual firewalls without compromising the security of the protected assets.
  • Accommodating containers: Analysts anticipate a ten-fold increase in the use of containers as more companies convert their apps into multiple virtualized containers to improve app performance. Your monitoring system has to access these containers as quickly as they are generated and redeployed (see the sketch after this list).
  • Running DevOps at light speed: The lifespans of virtual machines, containers, and the apps they comprise are shorter than ever, which means the DevOps team has to keep pace as it deploys new builds and updates existing ones. This includes the ability to archive and retrieve monitored data from a container that no longer exists.
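
As a loose illustration of the container and archival points above, the sketch below uses the Docker SDK for Python to capture a stats sample and recent logs from every running container and append them to a local archive, so the data outlives containers that disappear minutes later. The archive path and retention approach are assumptions for the example.

```python
import datetime
import json
import docker   # assumes the Docker SDK for Python and access to the Docker daemon

client = docker.from_env()

def snapshot_containers(archive_path="container-archive.jsonl"):
    """Capture a point-in-time stats sample and recent logs for every running
    container, so monitoring data survives the container itself."""
    with open(archive_path, "a") as archive:
        for container in client.containers.list():
            record = {
                "captured_at": datetime.datetime.utcnow().isoformat(),
                "name": container.name,
                "image": container.image.tags,
                "stats": container.stats(stream=False),
                "logs": container.logs(tail=100).decode(errors="replace"),
            }
            archive.write(json.dumps(record) + "\n")

snapshot_containers()
```

A production monitoring pipeline would stream these samples continuously rather than polling, but the archival principle is the same.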

Integrate network security with app monitoring

It’s not just the people charged with app performance monitoring who are racing to keep pace with the speed of virtualization. Security pros are struggling to find tools capable of identifying and reporting potential breaches of widely dispersed virtual networks. Network security managers often don’t have access to monitoring dashboards and are unaware of on-the-fly network reconfigurations.

This has led to a new approach to network modeling that extends virtual networks into the realms of security and compliance in addition to performance monitoring. The key is to normalize data from diverse sources – whether physical, virtual or purely cloud-based – and then apply a single set of network policies that encompasses security and performance monitoring.
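
Here is a minimal sketch of that normalize-then-apply-one-policy idea, assuming made-up field names for physical, virtual, and cloud flow records and a toy policy set.

```python
# A minimal sketch of normalizing traffic records from physical, virtual,
# and cloud sources into one schema, then applying a single policy set.
def normalize(record, source_type):
    """Map source-specific field names onto one common schema."""
    if source_type == "physical":
        return {"src": record["src_ip"], "dst": record["dst_ip"],
                "bytes": record["octets"], "direction": record["dir"]}
    if source_type == "virtual":
        return {"src": record["vm_src"], "dst": record["vm_dst"],
                "bytes": record["bytes"], "direction": "east-west"}
    # cloud flow logs
    return {"src": record["srcaddr"], "dst": record["dstaddr"],
            "bytes": record["bytes"], "direction": record.get("direction", "north-south")}

def apply_policies(flow):
    """One policy set for every flow, regardless of where it originated."""
    violations = []
    if flow["bytes"] > 10_000_000:
        violations.append("excessive-transfer")
    if flow["direction"] == "east-west" and flow["dst"].startswith("10.0.5."):
        violations.append("restricted-segment")
    return violations

flow = normalize({"vm_src": "10.0.1.4", "vm_dst": "10.0.5.9", "bytes": 2_048}, "virtual")
print(apply_policies(flow))   # ['restricted-segment']
```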

Perhaps the biggest change in mindset is that all data traffic has to be treated the same. The traditional north-south pathway into and out of the network has always posed security and monitoring challenges; now the east-west transmissions inside the network, which in the past were considered trusted and 100% contained, demand the same scrutiny. There is no longer a “trusted zone” within the organization’s secured perimeter.

New Views of End-to-end Network Modeling

The approach that many companies are adopting successfully is to model north-south and east-west network traffic using a single set of policies. Doing so offers what Skybox Security VP of Products Ravid Circus calls “a more realistic view of applied policy at the host level rather than verifying access only at ‘chokepoints’ or gateways to the virtual network.” Relying on one unified policy also breaks down the traditional barriers separating physical, virtual, and cloud networks, according to Circus.

How important is it to keep virtualized environments running at peak performance? In the words of OpsDataStore founder Bernd Harzog, “slow is the new down.” Harzog cites an Amazon study that found an added 100 milliseconds of latency translated to a 1% decrease in online sales revenue for the year. The monitoring challenge is compounded by the confluence of four trends:

  • More software being deployed and updated more frequently
  • A more diverse software stack composed of new languages such as NodeJS and new runtime environments such as Pivotal Cloud Foundry
  • Applications increasingly abstracted from hardware by network, compute, and storage virtualization, as well as JVM and Docker
  • The rise of the virtualized, dynamic, and automated infrastructure

The monitoring challenge: faster, more frequent app deployments on increasingly abstracted, dynamic, and automated virtual infrastructures. Source: Network World
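
Harzog’s Amazon figure is easy to turn into a back-of-the-envelope estimate of what “slow is the new down” costs. The annual revenue below is a made-up example.

```python
# Roughly 1% of annual online revenue lost per added 100 ms of latency,
# per the Amazon study cited above. The revenue figure is hypothetical.
annual_revenue = 50_000_000          # example online sales per year, in dollars
added_latency_ms = 250               # extra latency introduced by a slow dependency

revenue_lost = annual_revenue * 0.01 * (added_latency_ms / 100)
print(f"Estimated revenue at risk: ${revenue_lost:,.0f} per year")   # $1,250,000
```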

Constructing Your Best-of-breed Monitoring Toolkit

If you’re relying on a single vendor for your network monitoring operations, you’re missing out. The new reality is multiple tools from multiple vendors, which makes it critical to select the optimum set of tools for your environment. The trap many companies fall into is what Harzog calls the “Franken-Monitor”: each team relies on its own favorite tool to the exclusion of all others, so you spend all your time trying to move performance data between these virtual silos.

To avoid creating a monitoring monster, place the metric streams from all the tools in a common, low-latency, high-performance back-end that lets the performance data drive IT the same way business data drives business decision making. That’s where a service such as Morpheus proves its true value.
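
As a rough sketch of that common back-end idea, the snippet below has two different teams’ tools writing their metric streams into one shared, queryable store. The tool names and the in-memory store are illustrative assumptions standing in for whatever low-latency back-end you actually deploy.

```python
# A minimal sketch of funnelling metric streams from several monitoring tools
# into one shared back-end instead of per-team silos.
from collections import deque

class MetricBackend:
    """One store that every team's tool writes to and reads from."""
    def __init__(self, maxlen=100_000):
        self.stream = deque(maxlen=maxlen)

    def ingest(self, tool, metric, value, tags=None):
        self.stream.append({"tool": tool, "metric": metric,
                            "value": value, "tags": tags or {}})

    def query(self, metric):
        return [s for s in self.stream if s["metric"] == metric]

backend = MetricBackend()
backend.ingest("apm-tool", "checkout.latency_ms", 118, {"app": "storefront"})
backend.ingest("net-tool", "checkout.latency_ms", 121, {"segment": "east-west"})
# Both teams' data answers the same question from one place:
print(backend.query("checkout.latency_ms"))
```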

Morpheus is a complete cloud application management platform that combines all your vital data assets in a single place so you can quickly find, use, and share custom reports highlighting resource activity and app and server capacity. All system, database, and application logs are collected automatically for near-instant analysis and troubleshooting. The moment you provision a new system, it is set up for uptime monitoring automatically, complete with proactive, customizable alerts. Best of all, Morpheus keeps IT moving at cloud speed without launching your IT budget into the stratosphere.

Why Multi-Tenant Application Architecture Matters in 2017

According to this ZDNet article, the global public cloud market will reach $146 billion in 2017, a $59 billion increase over 2015. A big chunk of this market comes from enterprises whose core product is built on top of Multi-Tenant Application Architecture. For example, you may have heard of a little company by the name of SalesForce.com. Yet despite clear indications that multi-tenancy has been a game changer in the tech industry, many are uncertain of exactly what makes an application “multi-tenant” or why it matters.

Multi-Tenancy Defined

To quote TechTarget, “multi-tenancy is an architecture in which a single instance of a software application serves multiple customers.” Consistent with many other ideas that have led to breakthroughs and exponential growth, at the core of multi-tenancy is the idea of resource maximization. That being said, the idea of resource maximization is not new or unique to multi-tenancy. It’s a rather common objective of most business endeavors to maximize available resources. So what makes multi-tenancy special?
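
Here is a minimal sketch of that definition in code, assuming a toy SQLite schema: one running application instance serves every customer, and isolation comes from scoping every query by a tenant ID.

```python
# A minimal sketch of multi-tenancy: one app instance, one shared schema,
# with tenant isolation enforced in the data-access layer. The table layout
# and tenant names are illustrative assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [("acme", 100.0), ("acme", 250.0), ("globex", 75.0)])

def invoices_for(tenant_id):
    """Every tenant shares one app and one database; isolation comes from
    always filtering on tenant_id."""
    rows = db.execute("SELECT amount FROM invoices WHERE tenant_id = ?",
                      (tenant_id,)).fetchall()
    return [amount for (amount,) in rows]

print(invoices_for("acme"))    # [100.0, 250.0] -- never sees globex data
print(invoices_for("globex"))  # [75.0]
```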

The Problems Multi-Tenant Application Architectures Solve

As discussed in this University of Würzburg whitepaper, colocation data centers, virtualization, and middleware sharing are some examples of resource sharing with similar ambitions of reducing cost while maximizing efficiency. What differentiates multi-tenant application architecture is its effectiveness in achieving the same goal in a scalable and sustainable fashion. In a nutshell, to quote CNCCookbook CEO Bob Warfield, the benefit of multi-tenancy is that “instead of 10 copies of the OS, 10 copies of the DB, and 10 copies of the app, it has 1 OS, 1 DB and 1 app on the server.”

For most organizations, 10 is quite a conservative estimate; nonetheless, the takeaway is clear: Multi-Tenant Application Architecture helps optimize the use of hardware, software, and human capital. Larry Aiken makes some astute observations on this topic in this Cloudbook article.

As an alternative to a multi-tenant application, many technology vendors are tempted to enter the market with a solution that simply creates a virtual appliance from existing code, sells a software license, and repeats. The entry costs are lower this way, and it seems like a reasonable option for organizations looking to create a cloud offering of existing software. However, as the project scales, so do the flaws in this approach.

Each upgrade of the application requires each customer to upgrade, and the ability to implement tenant management tools and tenant-specific customizations is significantly limited. With multi-tenant architecture, centralized updates and maintenance are possible, and the level of granularity achievable with tenant management tools is significantly higher. In fact, in the aforementioned Cloudbook article, OpSource CEO Treb Ryan is cited as indicating that a true multi-tenant application can reduce a SaaS provider’s cost of goods sold from 40% to 10%.
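
To illustrate the contrast, here is a minimal sketch of tenant-specific customization layered on a single shared codebase, so one central upgrade reaches every tenant while each keeps its own overrides. The settings shown are illustrative assumptions.

```python
# A minimal sketch of per-tenant customization on top of shared defaults:
# one upgrade path, many tenant-specific configurations.
DEFAULTS = {"theme": "standard", "max_users": 50, "feature_beta_reports": False}

TENANT_OVERRIDES = {
    "acme":   {"theme": "dark", "feature_beta_reports": True},
    "globex": {"max_users": 500},
}

def settings_for(tenant_id):
    """Merge shared defaults with the tenant's overrides. Upgrading DEFAULTS
    upgrades every tenant at once; overrides preserve their customizations."""
    return {**DEFAULTS, **TENANT_OVERRIDES.get(tenant_id, {})}

print(settings_for("acme"))    # dark theme, beta reports on, default user cap
print(settings_for("initech")) # a new tenant simply inherits the defaults
```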

The Challenges and Drawbacks of Implementing A Multi-Tenant Application Architecture

While Multi-Tenant Application Architecture is and will continue to be a staple of the industry for quite some time, there are alternative architectures that work better or are easier to implement for a given project. One of the more common reasons is simply that building a multi-tenant application from scratch presents a substantial barrier to entry.

There is a knowledge gap for those without the experience, and sometimes it makes more sense to get a minimum viable product out there and learn from your users. While that approach may cost more down the road, there are also more deliberate reasons to consider a multi-instance architecture instead.

This ServiceNow blog post details some of the reasons they find multi-instance to be the superior architecture. Some of the common counter-arguments raised against multi-tenancy come down to the simple fact that customer data resides in the same application and/or database.

These arguments center on the idea that, despite the vast array of fail-safes, security measures, and encryption techniques available to mitigate risk, a shared resource does mean there will be some (at least theoretical) security and performance tradeoffs in certain circumstances. Additionally, one customer may simply become big enough that its data warrants its own instance. Even SalesForce.com has somewhat acknowledged this reality by introducing Superpod.

Multi-tenant vs Multi-instance

Making a choice between multi-tenant and multi-instance application architectures will depend almost entirely on your position and the business problems you are trying to solve. As a user, it’s probably best to focus on the end product: evaluate SLAs, functionality, and any relevant requirements for data integrity, security, and uptime rather than basing a decision on the underlying architecture.

As a solution provider, your focus should be on which architecture allows your product to add the most value to the marketplace. Will there be more benefit in your team being able to leverage the extensibility of multi-tenancy or the portability of multi-instance? Taking a step back, why not both? A fairly popular approach is to implement groups of tenants across a number of instances. The focus should always be on delivery of the best product possible. 
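
Here is a minimal sketch of that hybrid approach, assuming made-up pod names: tenants are hashed into groups (“pods”), each served by its own instance, and an especially large tenant can be pinned to a dedicated pod.

```python
# A minimal sketch of grouping tenants across a number of instances ("pods").
# Pod names and the hashing scheme are illustrative assumptions.
import hashlib

PODS = ["pod-a.example.com", "pod-b.example.com", "pod-c.example.com"]
PINNED = {"megacorp": "pod-c.example.com"}   # a large tenant given its own pod

def pod_for(tenant_id):
    """Deterministically map a tenant to a pod, unless it has been pinned."""
    if tenant_id in PINNED:
        return PINNED[tenant_id]
    digest = int(hashlib.sha256(tenant_id.encode()).hexdigest(), 16)
    return PODS[digest % len(PODS)]

for tenant in ("acme", "globex", "megacorp"):
    print(tenant, "->", pod_for(tenant))
```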

Whatever the final decision on architecture is, a key component of ensuring quality and optimized delivery is following a DevOps philosophy that emphasizes continuous improvement, automation, and monitoring. Tools like Morpheus that allow for customizable reporting and elastic scaling let you ensure that your software solution is optimized for today and scalable for tomorrow, whether that means database configuration to accommodate growth, provisioning of a virtual appliance, or anything in between.

In conclusion, Multi-Tenant Application Architecture centralizes resources and delivers benefits in the form of various technological economies of scale. Multi-tenancy has contributed to a disruptive change in the market over the last 10 years and continues to be at the core of many applications today. While there are alternatives and, in practice, applications may be a hybrid of multiple architectures, multi-tenancy is a core concept of cloud computing and seems likely to remain so for the foreseeable future.
