
New Views Into Cloud Application Performance


You can’t monitor what you can’t see. That seems overly simplistic, but it is the heart of the problem facing IT departments as their apps and other data resources are more widely dispersed among on-premises systems and across public and hybrid clouds. 

Universal visibility will become more important in the future: according to the Cisco Global Cloud Index, 92% of all workloads will be processed in public and private clouds by 2020. To address this “visibility gap,” companies have to find a way to identify and extract only the most relevant monitoring data to avoid swamping their monitoring, analytics, and security tools. 

One problem is that in virtual data centers, 80% of all traffic is internal and horizontal, or “east-west,” according to Cisco. By contrast, most cloud traffic travels vertically, or “north-south.” Traditional monitoring tools aren’t designed to scale up and down fluidly as virtual machines are created, run their course, and disappear.

Adopting a virtualized infrastructure flips network traffic from predominantly north-south to predominantly east-west. Source: NeuVector

4 Components of Monitoring Modern Virtual Environments 

Realizing all the efficiency, scalability, and agility benefits of today’s virtualized infrastructures requires monitoring four distinct aspects of application traffic:

  • Horizontal scaling: As an app scales up to accommodate exponentially more users, the tool you use to monitor the app has to be capable of scaling right along with it.
  • Viewing virtual network segments: What may appear as a single virtualized environment is in fact partitioned to secure sensitive applications and data. Monitoring traffic between the virtual segments requires peering through the virtual firewalls without compromising the security of the protected assets.
  • Accommodating containers: Analysts anticipate a ten-fold increase in the use of containers as more companies convert their apps into multiple virtualized containers to improve app performance. Your monitoring system has to access these containers as quickly as they are generated and redeployed.
  • Running DevOps at light speed: The lifespans of virtual machines, containers, and the apps built from them are shorter than ever, which means the DevOps team has to keep pace as it deploys new builds and updates existing ones. This includes the ability to archive and retrieve monitored data from a container that no longer exists.
Integrate network security with app monitoring

It’s not just the people charged with app performance monitoring who are racing to keep pace with the speed of virtualization. Security pros are struggling to find tools capable of identifying and reporting potential breaches of widely dispersed virtual networks. Network security managers often don’t have access to monitoring dashboards and are unaware of on-the-fly network reconfigurations.

This has led to a new approach to network modeling that extends virtual networks into the realms of security and compliance in addition to performance monitoring. The key is to normalize data from diverse sources – whether physical, virtual or purely cloud-based – and then apply a single set of network policies that encompasses security and performance monitoring.

Perhaps the biggest change in mindset is that all data traffic has to be treated the same: the traditional north-south pathway into and out of the network, which has always posed security and monitoring challenges, as well as the east-west transmissions inside the network that in the past were considered trusted and fully contained. There is no longer a “trusted zone” within the organization’s secured perimeter.

New Views of End-to-end Network Modeling

The approach that many companies are adopting successfully is to model north-south and east-west network traffic using a single set of policies. Doing so offers what Skybox Security VP of Products Ravid Circus calls “a more realistic view of applied policy at the host level rather than verifying access only at ‘chokepoints’ or gateways to the virtual network.” Relying on one unified policy also breaks down the traditional barriers separating physical, virtual, and cloud networks, according to Circus.

How important is it to keep virtualized environments running at peak performance? In the words of OpsDataStore founder Bernd Harzog, “slow is the new down.” Harzog cites an Amazon study that found an added 100 milliseconds of latency translated to a 1% decrease in online sales revenue for the year. The monitoring challenge is compounded by the confluence of four trends:

  • More software being deployed and updated more frequently
  • A more diverse software stack comprising new languages such as Node.js and new runtime environments such as Pivotal Cloud Foundry
  • Applications increasingly abstracted from hardware by network, compute, and storage virtualization, as well as by the JVM and Docker
  • The rise of the virtualized, dynamic, and automated infrastructure

The monitoring challenge: faster, more frequent app deployments on increasingly abstracted, dynamic, and automated virtual infrastructures. Source: Network World

Constructing Your Best-of-breed Monitoring Toolkit

If you’re relying on a single vendor for your network monitoring operations, you’re missing out. The new reality is multiple tools from multiple vendors, which makes it critical to select the optimum set of tools for your environment. The pit many companies fall into is what Harzog calls the “Franken-Monitor”: each team relies on its own favorite tool to the exclusion of all others, so you spend all your time trying to move performance data between these virtual silos.

To avoid creating a monitoring monster, place the metric streams from all the tools in a common, low-latency, high-performance back-end that lets the performance data drive IT the same way business data drives business decision making. That’s where a service such as Morpheus proves its true value.

Morpheus is a complete cloud application management platform that combines all your vital data assets in a single place so you can quickly find, use, and share custom reports highlighting resource activity and app and server capacity. All system, database, and application logs are collected automatically for near-instant analysis and troubleshooting. The moment you provision a new system, it is set up for uptime monitoring automatically, complete with proactive, customizable alerts. Best of all, Morpheus keeps IT moving at cloud speed without launching your IT budget into the stratosphere.


Why Multi-Tenant Application Architecture Matters in 2017


According to this ZDNet article, the global public cloud market will reach $146 billion in 2017, a $59 billion increase over 2015. A big chunk of this market consists of enterprises with a core product built on top of Multi-Tenant Application Architecture. For example, you may have heard of a little company going by the name of SalesForce.com. Yet despite the clear indications that multi-tenancy has been a game changer in the tech industry, many are uncertain of exactly what makes an application “Multi-Tenant” or why it matters.

Multi-Tenancy Defined

To quote TechTarget, “multi-tenancy is an architecture in which a single instance of a software application serves multiple customers.” Consistent with many other ideas that have led to breakthroughs and exponential growth, at the core of multi-tenancy is the idea of resource maximization. That being said, the idea of resource maximization is not new or unique to multi-tenancy. It’s a rather common objective of most business endeavors to maximize available resources. So what makes multi-tenancy special?

The Problems Multi-Tenant Application Architectures Solve

As discussed in this University of Würzburg whitepaper, colocation data centers, virtualization, and middleware sharing are some examples of resource sharing with similar ambitions of reducing cost while maximizing efficiency. What differentiates multi-tenant application architecture is its effectiveness in achieving the same goal in a scalable and sustainable fashion. In a nutshell, to quote CNCCookbook CEO Bob Warfield, the benefit of multi-tenancy is that “instead of 10 copies of the OS, 10 copies of the DB, and 10 copies of the app, it has 1 OS, 1 DB, and 1 app on the server.”

For most organizations, 10 is quite a conservative estimate. Nonetheless, the takeaway is clear: Multi-Tenant Application Architecture helps optimize the use of hardware, software, and human capital. Larry Aiken makes some astute observations on this topic in this Cloudbook article.

As an alternative to a multi-tenant application, many technology vendors are tempted to enter the market with a solution that simply creates a virtual appliance from existing code, sells a software license, and repeats the process. There are lower entry costs this way, and it seems like a reasonable option for organizations looking to create a cloud offering of software that already exists. However, as the project scales, so do the flaws in this approach. 

Each upgrade of the application will require each customer to upgrade, and the ability to implement tenant management tools and tenant-specific customizations is significantly limited. With multi-tenant architecture, centralized updates and maintenance are possible, and the level of granularity possible using tenant management tools is significantly higher. In fact, in the aforementioned Cloudbook article, OpSource CEO Treb Ryan is cited as indicating a true multi-tenant application can reduce a SaaS provider’s cost of goods sold from 40% to 10%.

The Challenges and Drawbacks of Implementing A Multi-Tenant Application Architecture

While Multi-Tenant Application Architecture is and will continue to be a staple of the industry for quite some time, there are alternate architectures which work better or are easier to implement for a given project. One of the more common reasons is simply that there can be quite the barrier to entry in building a multi-tenant application from scratch. 

There is a knowledge gap for those without the experience, and sometimes it makes more sense to get a minimum viable product out there and learn from your users. While that approach may cost more on the back end, there are more prudent reasons to consider multi-instance applications instead as well. 

This ServiceNow blog post details some of the reasons they find multi-instance to be the superior architecture. Some of the common counter-arguments raised against multi-tenancy boil down to the simple fact that customer data resides in the same application and/or database. 

These arguments center around the idea that despite the vast array of fail-safes, security measures, and encryption techniques available to mitigate risk, the reality is a shared resource does mean there will be some (at least theoretical) security and performance tradeoffs in certain circumstances. Additionally, sometimes one customer may just become big enough that their data warrants their own instance. Even SalesForce.com has somewhat acknowledged this reality by introducing Superpod.

Multi-tenant vs Multi-instance

Making a choice between multi-tenant and multi-instance application architectures will depend almost entirely on your position and the business problems you are trying to solve. As a user, it’s probably best to focus on the end product, meaning evaluating SLAs, functionality, and meeting any relevant requirements for data integrity, security, and uptime as opposed to basing a decision on the underlying architecture. 

As a solution provider, your focus should be on which architecture allows your product to add the most value to the marketplace. Will there be more benefit in your team being able to leverage the extensibility of multi-tenancy or the portability of multi-instance? Taking a step back, why not both? A fairly popular approach is to implement groups of tenants across a number of instances. The focus should always be on delivery of the best product possible. 

Whatever the final decision on architecture is, a key component to ensuring quality and delivery is optimized is following a DevOps philosophy that emphasizes continuous improvement, automation, and monitoring. Tools like Morpheus that provide customizable reporting and elastic scaling help you ensure that your software solution is optimized for today and scalable for tomorrow, whether that means database configuration to accommodate growth, provisioning of a virtual appliance, or anything in between.

In conclusion, Multi-Tenant Application Architecture is an architecture that allows resources to be centralized and leads to benefits in the form of various technological economies of scale. Multi-tenancy has contributed to a disruptive change in the market over the last 10 years and continues to be at the core of many applications today. While there are alternatives and, in practice, applications may be a bit of a hybrid between multiple architectures, multi-tenancy is a core concept of cloud computing and seems likely to be so for the foreseeable future.

Multi-Cloud Orchestration Tips from Morpheus' CTO


There’s an old joke about somebody going into a restaurant, opening a menu, and saying, “Everything looks so good, I think I’ll have ‘em all!” That’s what it can feel like when you look at your options for managing multiple clouds. Make no mistake, multi-cloud is today’s reality for companies large and small.

In the cloud orchestration field, you no longer “select” a particular approach and toolset to the exclusion of all others. For years the difference between two of the most popular orchestration tools, Chef and Puppet, has been that Chef is “imperative” while Puppet is “declarative.” That’s why developers are said to prefer Chef’s recipes, which let them describe the steps required to achieve the desired state (similar to programming), while the operations side favors Puppet’s approach of defining the target state and letting the tool work out the path from the current state to that target (akin to project management).

Just as the walls separating the dev side from the ops side have been removed with the adoption of continuous delivery, the two principal orchestration methods are coming to resemble each other as each is enhanced. CIMI Corp.’s Tom Nolle points out on the Server Side blog that, despite their “radically different architectures,” Chef and Puppet both rely on scripting and the client/server model. Both also consist of repositories of reusable DevOps elements modularized to support multiple clouds, hybrid clouds, and cloud data center environments.

A side-by-side comparison of Chef and Puppet, two popular orchestration tools, shows the rivals have as many similarities as they have differences. Source: OpenSourceForU.com

Giant steps toward ‘infrastructure as code’

To accommodate the application change management that is the heart of continuous delivery, Chef and Puppet are converging: Chef Delivery supports code base management, version control, development collaboration control, and development pipelining; while Puppet’s Code Manager covers the entire development/change cycle, and Node Manager makes it possible to support software-defined networks. The key is for both approaches to be ready ahead of time for any new models or new infrastructures as they arise.

Attempts to future-proof your application operations will center on the concept of “infrastructure as code,” which moves deployment into a separate intermediate layer via an abstract hosting model. The model works with any cloud, multi-cloud, or hybrid environment: to add a new cloud provider, you simply define it in the deployment layer as infrastructure.

Any time you add a layer to the operations stack, you introduce some complexity, which can negate the operational benefits you’re intending to realize. To ensure your software-defined network delivers on its performance and efficiency promises, you have to maximize automation and minimize manual work: use as few abstract hosting models as possible, take advantage of infrastructure-as-code toolkits, and eliminate as many resource dependencies as you can.

Combine Puppet With Docker to Automate Cloud Configuration Management

Some DevOps pros believe that if they use Docker’s container orchestration, they no longer need Puppet or Chef. TechTarget’s Beth Pariseau describes how an Australian health-care provider has turned this idea on its ear. Scott Coulton, the architect of Healthdirect Australia’s Docker-Puppet solution, explains that “Docker does build, ship, run, [and] Puppet is the ship” in which the container code is delivered. Coulton described the process in a recent PuppetConf keynote presentation.

The healthcare company used Puppet automation to harden the Docker REST API, which was then directed to deploy the container infrastructure. The process joined app development and infrastructure as code in a single continuous delivery process, according to Coulton. "If Puppet sees that one of the containers that's part of a service is not running, Puppet will actually send an API call to update the service to make sure it is running," Coulton added.

Forrester Research analyst Robert Stroud states that as containers are used in more complex environments, the need for efficient configuration management will increase. Puppet, Chef, and other orchestration tools will have to coexist with Kubernetes, Docker Swarm, and Mesos. The recently released Docker module for Puppet, created by Gareth Rushgrove, is one such tool, allowing Puppet to communicate with the Docker REST API. According to Coulton, “as long as you write your Ruby code to understand the responses when Puppet runs, it will look for the resource on a cluster of nodes.”

Orchestration will take a front-line role in enterprises as cloud IP traffic represents the lion’s share of data-center traffic, reaching 92 percent by 2020. Source: Gartner, via Equinix

Are you ready for the multi-cloud era?

The future of data management is in the clouds, as evidenced by the findings of research conducted by Cisco Systems that forecasts global cloud IP traffic will almost quadruple between 2015 and 2020 to a total of 14.1 zettabytes. In addition, a recent IDC study reports that 85 percent of enterprises will commit to multi-cloud architectures in 2017, as reported by Tony Bishop on the Equinix blog. Bishop recommends the use of “cloud exchanges” that offer “fast and cost-effective, direct, and secure provisioning of virtualized connections to multiple cloud vendors and services.”

Speed, efficiency, and security are three of the hallmarks of the Morpheus cloud application management platform, which provides one-click provisioning of databases, apps, and app-stack components to public, private, and hybrid clouds of all descriptions. Morpheus makes it easy to add more nodes via the web UI, the command line, or an API call; the database or app cluster is reconfigured automatically to accommodate the new nodes.

As your company’s data assets are dispersed far and wide in multiple clouds, tools and services that let you develop, deploy, maintain, and update critical apps and databases from a single window will become the command center your operation relies on for smooth sailing.

How to Break the Vicious Cycle of VM Overprovisioning


It’s an immutable fact of life for any IT manager: Sometime, somewhere, somehow, something will go wrong. It doesn’t matter whether you or anyone else was responsible. All that matters is you do everything reasonable to prevent failures and be ready to respond quickly when prevention is no longer an option.

Since the first data center opened, managers have relied on one single technique more than any other to avoid system crashes: overprovisioning. The only way to accommodate unpredictable spikes in demand is to build in a cushion that provides the processor power, storage space, and network bandwidth required at peak demand. Actual storage capacity usage may be as low as 33 percent in some organizations, according to Storage Bits’ Robin Harris.

Striking the optimal balance when provisioning VMs is complicated by the natural tendency to respond to what TechTarget’s Stephen J. Bigelow calls “inadvertent resource starvation” by overcompensating. As you might expect, that is exactly the wrong approach to resource optimization. The appropriate response to VM slowdowns is to test workloads continuously to calculate resource levels, both before the workloads are deployed and continually thereafter.

Breaking VM provisioning into its constituent parts

The natural starting point for any provisioning strategy is processors. Whenever you create more vCPUs for allocation to VMs, each vCPU has to be scheduled, waiting for a physical CPU to become available before it can process instructions and data from its VM. Ready times can reach 20 percent as vCPUs are queued until processors are available.

Two ways to give VMs more ready access to physical CPUs are by increasing the CPU shares priority, and by setting CPU reservations for the VM. Workload balancing lets you reduce the number of vCPUs on a server by moving slow-running VMs to servers with more available resources.

Likewise, when you allocate more memory to a VM than it and its applications need, there’s no easy way for the hypervisor to recoup that lost memory. To avoid excessive disk swapping, the hypervisor may use memory ballooning or other aggressive memory-reclamation methods to recover idle memory.

The temptation is to compensate by overprovisioning memory to the VM. Storage presents a similar trap: analyze the logical unit number (LUN) volumes assigned to VMs to determine whether capacity is being used efficiently. With thin provisioning, the actual physical disk capacity can be a fraction of the specified logical volume size. You’ll save money by thin provisioning a 100GB LUN with just 10GB physically allocated, for example, and then adding physical capacity as the volume fills up.

With thin provisioning, a VM configured with 40GB of storage will have only 20GB of that total allocated from the underlying VMFS volume. Source: GOVMLab

Containers can make the overprovisioning problem even worse 

Traditional virtualization models place a hypervisor atop the main OS, where it supports multiple “guest” OSes, each with its own app instances. By contrast, containers allow more efficient virtualization: the Docker Engine runs on the host OS, and virtualized apps run in their own instances above the host. Docker’s simplified architecture allows more containers to fit on the same server, and it lets containers be spun up in seconds rather than minutes.

But there’s a price to pay. Containers create more virtual servers, which are spun up and down in an instant. This draws more power, which generates more heat. More heat means the load on cooling systems is increased. The result could be flash “heat floods” and overprovisioning of infrastructure. IT managers need to be more aware of server loads and performance hiccups in container-centric environments.

How the cloud turns overprovisioning on its head

In organizations of all types and sizes, the same discussion is taking place: “Which data resources do we keep in-house, and which do we relocate to the cloud?” The question belies the complexity involved in managing today’s multi-cloud and hybrid-cloud environments. TechTarget’s Alan R. Earls points out how provisioning cloud services turns the traditional resource-allocation model “on its head.”

For in-house systems, the danger is sizing capacity too low, causing apps to crash for lack of processor power or storage space. In the cloud, the danger is sizing capacity too high, which leads to overpayments and cancels out a primary reason for moving to the cloud in the first place: efficiency. One problem organizations often fail to consider is that legacy apps won’t take advantage of cloud elasticity unless the application load is uniform, so running the app in the cloud may be more expensive than keeping it in-house.

An application that scales only vertically within a server or instance as its load changes will leave managers with only one option: reboot the app onto a larger or smaller instance. The resulting interruption incurs its own costs, increased user dissatisfaction in particular. Overprovisioning is often an attempt to avoid having to reboot these apps.

By contrast, apps that scale horizontally allow more than one instance to be deployed to accommodate the change in load demand. Gartner researcher J. Craig Lowery points out that horizontal scaling allows IT to “identify the optimum building block instance size, with cost being a part of that assessment, and change the number of instances running as the load varies."

In search of the auto-scaling ideal

The Holy Grail of cloud provisioning is auto-scaling, a concept that is central to cloud-native software design. There is no shortage of instrumentation and diagnostic tools that afford deep dives into cloud utilization. The challenge for DevOps teams is to apply the knowledge they gain from such tools into a strategy to redesign or reconfigure apps so they support auto-scaling and other cloud cost-saving features.

An example of an auto-scaling group in AWS ensures at least one instance is always available, the optimal instance capacity is available most of the time, and a max capacity is available to accommodate the worst-case scenario. Source: Auth0
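To make that concrete, here is a hedged sketch of what such a group might look like using the AWS CLI; the group, launch configuration, and subnet names are placeholders, and a real deployment would also attach a CloudWatch alarm to the scaling policy.

    # Create an Auto Scaling group that always keeps 1 instance running,
    # targets 2 under normal load, and never exceeds 4 (worst-case capacity).
    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name web-asg \
      --launch-configuration-name web-launch-config \
      --min-size 1 \
      --desired-capacity 2 \
      --max-size 4 \
      --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

    # A simple scaling policy that adds 2 instances when triggered; in practice
    # it would be invoked by a CloudWatch alarm (for example, on high average CPU).
    aws autoscaling put-scaling-policy \
      --auto-scaling-group-name web-asg \
      --policy-name scale-out \
      --scaling-adjustment 2 \
      --adjustment-type ChangeInCapacity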

You’ll find the best of the cloud’s resource management abilities implemented in the Morpheus cloud application management platform. There is no faster way to provision databases, apps, and app stack components on public, private, or hybrid clouds than by using Morpheus’s intuitive, integrated dashboard. Nodes can be added to databases or apps via a web UI, a command-line interface, or an API call. The databases and app clusters are reconfigured automatically to accommodate the new nodes.

Morpheus serves as the central repository for all your most important data: users can locate, use, and share custom reports highlighting app and server capacity, and all recent activity. System, database, and app logs are collected automatically for all the systems you provision, complete with uptime monitoring configured automatically. There’s no better way to ensure you’re paying for only the cloud resources you need.

How Single Page Applications Influence Page Speed


Given the rise of Single Page Applications (SPA), along with the much wider use of asynchronous HTTP requests in general for web pages, it's more difficult to determine how page loading times affect overall user experience on a website. These newer techniques for delivering content offer a different type of interaction than in the past when a user would load one web page and certain interactions (such as clicking a link) would simply load another web page to deliver the new content.

When resources were simply loaded up front, the overall time for the page to load was easily measurable with a single timing test. As things have progressed, however, this metric alone may be less meaningful for sites or apps that make heavy use of asynchronous calls to retrieve additional content as the user interacts with the page.

Advantages of Single Page Applications

A traditional page load simply loads all of the assets a web page will need up front (though some may be seen more quickly than others). The measure here is simple, though - the load event in JavaScript is triggered when all of the requested assets have loaded. These assets can include the HTML itself, cascading style sheets (CSS), JavaScript code, images, and other media.

A typical HTTP request, which must be done for each asset that needs to be loaded. Source: MDN.

As JavaScript became more and more popular for developing additional interaction for the user without the need to load another page, some further innovations were made. Anything from switching out which image was displayed to showing and hiding pieces of content became commonplace. However, these were often done by having all of the content loaded with the page itself, so the traditional model for page loading time was still effective. 

In fact, on certain sites like Twitter, there is nothing displayed on the screen when the onload event fires in the browser.

Source: Craig Tobe, ConstantContact

Modern websites and apps may load little or no content before the load event occurs. This makes the traditional loading metric look excellent, but it does not tell the entire story of what the user experiences. With the content loaded later, it is those subsequent loading times that determine how fast the site ultimately feels.

Disadvantages of Single Page Applications

It can take several seconds for a mobile browser to receive the first byte on a mobile device and we only have 3 seconds to get the content to the user before up to 40% of them abandon the request.

Source: Matt Shull on DWB

In web development today, many choose to use some form of asynchronous behavior. Part of this is that it can help keep users from loading unnecessary resources until they are wanted or needed, which is certainly helpful for improving initial load times. However, in such cases, the testing cannot stop there, as the loading time of one of the subsequent requests could cause the user as much frustration as a slow initial page load.

For example, it could be handy to wait to load a list of nearby store locations until the user enters some information and requests the list. If done the traditional way, the loading of the subsequent page can be easily tested. However, if this list is loaded on the same page after the initial page load, this additional load time may not be accounted for in how quickly the site appears to load. If it takes more than a couple of seconds, it could cause the user to move on to another site, just as with a slow initial page load. 

As you can see, even though the initial loading time may be excellent, it would also be helpful to be able to test subsequent asynchronous loading times to ensure that all loading times are fast, not just the initial site or app load.

How to Optimize Single Page Applications

In addition to initial page loading, websites and apps that use asynchronous calls will likely need further testing to ensure those additional requests are also loading quickly. The User Timing API, documented on MDN, is a feature in modern web browsers that will allow you to access some information you can use to test the speed of various user interactions by providing you with helpful information and functions to do so.

This API allows you to set up marks that can be measured for the time it takes for any number of asynchronous requests to complete. As you can see, this could come in handy for testing load times that occur after the initial page load and help you provide an even better experience for your users. Indeed, such testing may help discover an issue that could be helped with some load balancing, autoscaling, or other features that are easily set up in the cloud.
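As a rough illustration of how that works, the sketch below wraps an asynchronous request in performance.mark() and performance.measure() calls and reads back the duration; the URL, mark names, and the renderStoreList() helper are hypothetical.

    // Hypothetical example: time a store-locator request made after the initial load.
    performance.mark('stores-fetch-start');

    fetch('/api/stores?zip=90210')
      .then(function (response) { return response.json(); })
      .then(function (stores) {
        renderStoreList(stores); // assumed helper that updates the page
        performance.mark('stores-fetch-end');

        // Create a named measure between the two marks...
        performance.measure('stores-fetch', 'stores-fetch-start', 'stores-fetch-end');

        // ...then read its duration (in milliseconds) for logging or analytics.
        var measure = performance.getEntriesByName('stores-fetch')[0];
        console.log('Store list loaded in ' + measure.duration + ' ms');
      });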

 

The Pros and Cons of Shadow IT


Companies are using up to 15x more cloud services to store critical company data than CIOs were aware of or had authorized, according to a study conducted by Cisco. This statistic illustrates the pervasiveness of shadow IT. Simply put, shadow IT is a reality of doing business. 

With a smartphone full of apps in every pocket, a cloud application a few mouse clicks away, and employees caring more about getting the job done than pleasing IT, it’s to be expected that “unapproved” apps will be used quite often. Is that inherently a bad thing? What’s a CIO to do when confronted with the challenge of addressing shadow IT while maintaining productivity and security? To answer these questions, it's best to consider both the pros and cons of shadow IT.

The Cons of Shadow IT

Shadow IT, by definition, is technology not regulated, provisioned, or formally approved by an organization’s IT team. Given that, there are often concerns about security, effectiveness, and reliability when discussing shadow IT and the use of unauthorized apps. Statistics indicate that these concerns are valid. According to a study by Frost & Sullivan, “You can expect that upwards of 35 percent of all SaaS apps in your company are purchased without oversight.” 

In a world where malware can take down systems in the blink of an eye, one wrong move can put an organization out of compliance with PCI, HIPAA, or Sarbanes–Oxley, and that can be a scary thought. A well-intentioned user can end up doing more harm than good, and at the end of the day IT, and more specifically the CIO, will be on the hook. 

Beyond those potentially business-crippling consequences, there are some sneakier costs associated with shadow IT. This Michaels, Ross, & Cole blog post discusses some of the hidden costs, including overpaying for licenses and investing time and money in the wrong solution. It’s bad enough when the budget takes a hit for software that doesn’t meet corporate standards, but even worse when the software does nothing to solve a business problem. 

That loss of time is something you can never get back. Given all the risks and potential hidden costs associated with shadow IT, it seems like a foregone conclusion that IT teams everywhere should be auditing their networks and saying, “let’s become more strict and stop the use of anything unapproved, no excuses, no exceptions,” right? Not exactly.

The Pros of Shadow IT

The aforementioned Stratecast | Frost & Sullivan study was referring to applications being used by members of the organization trying to get their jobs done. They were using shadow IT for a reason, often one that solved a problem the “approved” corporate program could not. One of the more common motivators for a user of shadow IT to choose an “unapproved” app is that it is more efficient and effective than what the IT department has chosen, and chances are pretty good that the employee hired to play a specific role knows a bit more about the tools of their trade than IT does. 

In a previous post, we talked about these “superusers” and how IT teams can leverage their choices to make decisions that benefit everyone. In a nutshell, modern IT teams should determine what shadow IT applications are being used and why. This doesn’t just apply to “superusers,” but they are a great microcosm of the bigger picture. 

From the perspective of a user, and arguably an objective observer, apps should be judged by their utility to the company, not by their presence on a list of approved apps. Denying talented people access to the best tools possible as a knee-jerk reaction just isn’t good for business or morale.

A Balanced Approach to Shadow IT

Shadow IT has its pros and cons, but what can be done to address the issue? It seems like on one side, IT teams have an incentive to be risk-averse, while users have an incentive to get the job done and step outside of the box to do it if needed. This creates situations where users of shadow IT aren’t quick to ask for approval, especially if the approval process hampers productivity and runs the risk of losing access to an app they need to get a job done. Such circumstances can create a combative relationship that isn’t beneficial to anyone. 

IT teams can address this by creating an environment that breeds trust and cooperation. Finding a way to leverage monitoring, surveys, and good old-fashioned open dialogue to work with users is important. The objective should be to get approved apps implemented where shadow IT just isn’t going to be secure, sustainable, or dependable enough for corporate use and learn where the organization as a whole can benefit from “authorizing” a new app. 

Both sides will need to be open-minded and willing to find compromises that are best for the business. Taking this balanced approach will create a culture much more conducive to productivity and cooperation than simply banning anything currently unapproved outright or turning a blind eye to the situation altogether. Listen to your “superusers” and see if you can come together to bring good apps out of the shadows. As Tracy Cashman, Senior Vice President and Partner at WinterWyman Executive Search, said, “more progressive CIOs know that, given today's technology and the increasing savvy of the business, it's in their best interest to embrace shadow IT.”

Cloud Governance in a Hybrid World: The New Role of IT


This guest post was written by Jeff Budge, VP of Advisory Consulting and Product Management, OneNeck IT Solutions

The Cloud as a Change Agent for IT

The continuing adoption of cloud is changing the role of IT departments and their relationship with external resource providers (technology providers, service providers, technology vendors, etc.). At the same time, end users are adopting a digital work style built on consumer technology that is creating demand for user-friendly IT services available on-demand.

Hybrid cloud infrastructures are starting to dominate and IT departments are being called upon to integrate existing systems with Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) cloud computing models, while delivering user-friendly services to tech-savvy employees. Employees are increasingly confident in seeking out their own cloud resources if enterprise resources are not easily accessible.

The result is a new IT-as-a-Service (ITaaS) model that is rapidly evolving with IT assuming responsibility for brokering business-critical systems to internal departments to ensure increased business agility, process flexibility, and shorter time-to-market. The challenge facing CIOs and internal IT management is how to exploit the advances of hybrid IT and cloud computing without undermining corporate policies and while maintaining governance over the end-to-end environment.

 

The Cloud + The Consumerization of IT

IT is now forced to think strategically about harnessing cloud services in a manner that offers the same easy-to-use, self-service model as consumer applications. IT has to learn from technology consumerization and take the lead in providing accessible, easy-to-use services at low cost.

With the onset of Bring Your Own Device (BYOD) came an employee expectation of the same convenience they enjoy as consumers. The next phase of enterprise evolution will therefore require IT to deliver cloud services in that same self-service model, but deployed specifically for business.

 

Welcome to ITaaS.

The migration to an ITaaS model requires a mix of legacy and new IT systems, with agile development strategies to make the most of existing systems. To succeed, the CIO will need:

  • Cost visibility across all providers and projects
  • Decision and transformation engines to support current and future operations
  • Cloud management that provides a single pane of glass to manage and broker services

The ultimate goal is to create an ITaaS operational model, where the IT department functions as a distinct business unit serving other lines of business within the organization. Various departments have already started to use cloud-based services on their own (aka, Shadow IT), so ITaaS has to be a competitive services model, providing the same services more efficiently and cost-effectively. In fact, IT will need to maintain a competitive relationship with outside vendors, alternately competing for “business” serving company departments as customers and negotiating vendor services to include in its ITaaS catalog.

To provide consumer-like services, the ITaaS strategy will require the IT department to broker internal and external services and offer them in a consumer-like fashion, such as through a transaction portal. IT will have to present available technology solutions in a unified catalog of available services that integrates back-end services, managing assets, policies, pricing, and governance. Whether services are delivered in-house or from external vendors, access will be seamless to end users.

The ITaaS model can transform IT from a cost center to a service center, delivering services while controlling and mandating governance. By providing better services and a better user experience, IT will discourage shopping for alternate solutions and become a one-stop shop for internal technology needs. With ITaaS, IT becomes an in-house managed services provider.

 

ITaaS Requires Some Growing Up

Adopting an ITaaS strategy can’t happen overnight. It will require changing corporate culture as well as repackaging available technology. The analysts at 451 Research outline five levels in the ITaaS Maturity Model:

Level 1: Ad Hoc – Where most enterprise IT organizations function today. They are not thinking about ITaaS, but rather are working with SaaS and perhaps IaaS. They also are suffering from the consequences of consumerization and trying to rein in the adoption of shadow IT.

Level 2: Repeatable – The early stages of ITaaS. IT has created an inventory of services for use within its own department to manage basic processes and provide control over available applications. This still operates at the application layer and makes use of some IaaS brokering.

Level 3: Defined – Employees are using their own devices to procure SaaS solutions. The service catalog is now available to the organization at large in some form and offers a consumer-like self-service model. Note that for most enterprises this won’t be an overnight implementation. There are likely one or two departments that serve as pilots. For example, a portal could be created for onboarding new employees, allowing managers to choose the necessary IT workplace resources.

Level 4: Managed – The IT department metamorphoses from a cost center into a service provider for internal departments. IT services are made available to departments which, in turn, pay for services using some form of chargeback model. The IT department is now the broker of both internal and cloud-based services.

Level 5: Optimized – Departments start requesting customized IT services beyond the catalog. The IT department becomes a managed services consultant, working to define and shape procurement requirements and allocate internal and external resources to create new services.

Note that these five phases are truly evolutionary, not revolutionary, and it is virtually impossible to skip past specific levels. If the IT department is considering an ITaaS model, it has to lay the foundation for brokering services (Levels 1 and 2), and then the organization has to be ready to embrace the role of IT as a service provider (Levels 3, 4, and 5). The model won’t work until the organization understands and accepts the new strategic role that IT is going to play.

 

It Takes a Village

The success of any ITaaS strategy also hinges on the organization’s working relationship with key technology providers. Providing hybrid cloud services and other strategic applications requires partnering with reliable suppliers that can fill the gaps in the ITaaS catalog.

There are basic attributes to consider when assessing any potential technology service provider:

  • They must be reliable, capable and trustworthy
  • They must show an interest in helping the organization succeed
  • They must see beyond the immediate technology needs and suggest ways to realize long-term business goals
  • They need to offer the resources to sustain an ongoing “go-to” working relationship

These are just the basic criteria. When choosing strategic partners, it’s important to look beyond “one-size-fits-all” and partner with vendors that understand your organization and its business goals.

As the IT department evolves to become an in-house managed service provider, external managed service partners need to be prepared to offer a consistent layer of services and management. The goal is to make sure the end user has a consistent experience no matter where the service itself resides. This requires a cultural understanding and alignment with third-party resource providers.

However, before taking the steps to select a managed services partner, the first step should be an internal IT “personality assessment.” This entails a careful evaluation of the IT team to determine what type of IT personality they are striving to achieve. There are essentially three types of IT service personalities, depending on the organization’s ITaaS maturity:

  • Supply Chain – The traditional approach, where IT determines the need, identifies the necessary resources, negotiates the terms, procures or builds the components, and delivers the service.
  • Broker – IT serves as a mediator between strategic service providers and internal business units. IT’s role is to protect operations and support the business by negotiating terms, price, SLAs, etc.
  • Concierge – The supply chain and broker approaches are combined in a best-of-both-worlds model, and IT is in control of the end product. Business units come to IT seeking assistance with a problem, and it is IT’s job to understand the goal and find the resources to provide that service.

Defining the IT personality will make it easier to identify the right strategic partners that are capable of meeting the company’s ongoing needs. Once the internal IT department truly understands their own personality, they are well positioned to choose strategic partners that align on more dimensions than just technical capability.

 

Bringing It All Together

Recognizing the influence of forces such as cloud and the consumerization of IT, along with understanding who the internal IT department is and where they are on the ITaaS maturity model, forms the foundation for true progress in becoming a strategic service provider to the business.

5 Tips for Production Code Readiness


When moving code to production, you always want to ensure things are as thoroughly tested as possible before a production update is put into place. Not having good testing and procedures in place can lead to service outages, broken apps, and lots of unneeded stress for you and your team. 

While getting new features or fixes out quickly is important, it is a good idea to ensure that you take the time to do all of the change control, reviewing, testing, and preparation that will be needed for a production release. Ideally, such a release is something in which you will have a high degree of confidence of being successful and not presenting any new problems for you or your users.

Use Code Branches

When changing the codebase, you should use a tool such as Git to track changes. Such tools provide version control, which allows you to easily roll back a change if something goes wrong. 

Branching with a tool like this gives you the opportunity to keep a “master” branch that should always be tested and working properly, while developers make changes in their own “feature” branches. The individual branches can then be merged into the master branch as they become ready. 

As a result, if a feature branch gets merged into master and something stops working, the changes from the feature branch can be rolled back so that your master branch is again in a working state.
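A minimal sketch of that workflow with Git might look like the following; the branch name and merge-commit identifier are illustrative.

    # Create a feature branch off an up-to-date master
    git checkout master
    git pull origin master
    git checkout -b feature/login-form

    # ...make changes, then commit them on the feature branch
    git add .
    git commit -m "Add login form validation"

    # Merge the finished feature back into master
    git checkout master
    git merge --no-ff feature/login-form

    # If the merge turns out to break something, roll master back to a working state
    git revert -m 1 <merge-commit-sha>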

Use Unit Testing

Unit testing should be used to help ensure particular modules or functions are doing what they are supposed to be doing. For example, if a function should return a positive integer and a unit test shows that it can return a string, then that unexpected result can be fixed so that it does not occur or can be accounted for as one of the expected results.

As you can see, this can help catch potential problems early in the development process, which is always better than finding them in production. It also means that if changes affect your existing code in a way that breaks something, the relevant unit tests will fail and the developer can go back and fix whatever is causing the failures.

Oftentimes, unit tests are run automatically when a developer performs a certain action: making a commit, running a build, or opening a pull request could all be used to trigger the test runner. Thus, you could be unit testing at any or all of the points where changes are potentially being introduced.
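As a minimal sketch of the idea, the test below uses Node's built-in assert module; the addPositive() function and its expectations are made up for illustration and can be run directly with node.

    // sum.test.js -- a hypothetical unit test using Node's built-in assert module.
    const assert = require('assert');

    // The function under test: it should always return a positive number.
    function addPositive(a, b) {
      const result = a + b;
      if (typeof result !== 'number' || result <= 0) {
        throw new Error('addPositive must return a positive number');
      }
      return result;
    }

    // Expected behavior: two positive integers yield their sum.
    assert.strictEqual(addPositive(2, 3), 5);

    // Unexpected input: without the guard above, '2' + 3 would return the string '23',
    // which is exactly the kind of surprise a unit test is meant to catch.
    assert.throws(function () { addPositive('2', 3); }, Error);

    console.log('All tests passed');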

Use Load and QA Testing

You always want to be sure that any changes to your code don’t introduce any speed or other user experience problems. Both load testing and QA (Quality Assurance) testing can be used to help further ensure that everything is in good working order before being deployed to production.

With load testing, you find out at what point your system cannot handle any further simultaneous traffic or transactions such as large database queries. If a change to the code introduces enough of a slowdown from your previous load test, it is likely a good idea to take another look at the potential change to see if anything can be optimized.

QA testing is a way to allow one or more people to use the system as a potential user would. With this, any odd bugs can be found and fixed before an actual user is introduced to them, which helps avoid such things as unexpected hotfixes or rollbacks. 

Use Linting Tools for Non-compiled Languages

Compiled languages ensure that your syntax is checked by a compiler before the code is allowed to run. Non-compiled languages often have no such safeguard, so it is recommended to use some type of linting tool as part of your testing process to ensure that any changed code meets the required standards and to help avoid potential problems.

For example, JavaScript is often minified, but this process can break the code if a developer missed a standard semicolon somewhere. The oversight is easy to make because JavaScript automatically inserts missing semicolons at certain line breaks when the code is parsed. This hides the missing semicolon from a developer testing the unminified code, but it won’t stay hidden once the line breaks are removed during minification for production. A good linting tool can find this, as well as a number of other things that could cause problems if not fixed.

A linting tool working inside of a code editor. Source: Treehouse.
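To make the failure mode concrete, here is a simplified, hypothetical sketch of the classic case: two files that each run fine as separate scripts in development, but break once a build step concatenates and minifies them because of a single missing semicolon.

    // file-a.js -- note the missing semicolon after the closing )()
    var helperA = (function () {
      return { greet: function () { return 'hello from A'; } };
    })()

    // file-b.js -- an immediately invoked setup function
    (function () {
      console.log('setting up module B');
    })();

    // Loaded as separate <script> tags in development, both files parse and run fine.
    // Once the build concatenates and minifies them, the parser effectively sees
    //   var helperA = (function(){...})()(function(){...})();
    // so file-b's leading parenthesis becomes a call on file-a's return value, and the
    // production bundle throws a TypeError that never appeared during development.
    // A linter such as ESLint or JSHint flags the missing semicolon before that happens.
    // (Pasting both snippets into a single file reproduces the broken bundle.)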

Use Code Reviews

Having one or more other developers review the code before merging the changes into the master branch can be another helpful method of avoiding problems when moving to production. A human reviewer can oftentimes find optimizations or potentially bad practices in the code that can be remedied quickly and help ensure that those issues aren’t merged into the master branch. 

When making sure your code is ready for production, all of these techniques can be extremely helpful for finding and fixing any issues before your production release.

 


The Benefits of Converged Infrastructure


With more and more services being moved into the cloud, a new trend has emerged toward the convergence of the various service types, allowing you to have your infrastructure, platform, and software as a service in one centralized place. This can be beneficial, as you can simply use one provider and choose which services you need for your organization.

Multiple Service Offerings

As cloud services have evolved, more and more types of services have become available. There have been companies providing Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), databases, backups, recovery, monitoring, and numerous other offerings. Many times, each of these service types was offered by a different vendor, so those wanting a more converged environment had to get some services from one provider and others from another.

The various services provided by IaaS, PaaS, and SaaS. Source: Silverlight Hack

Recent trends have shown that numerous service providers are moving toward a service model that is a convergence of all of these varying service types. As a result, you will likely have a much easier time on your end as well, since you can work with a single vendor to provide both IaaS and PaaS if you are looking to move both of those responsibilities to the cloud.

Evaporating Distinctions

Where the distinctions between service types used to be more pronounced, and, as mentioned, the services were often offered by different providers, these distinctions have lessened as more and more organizations have moved to using the cloud for more than just one of the services. 

For example, many companies now have both infrastructure and platform in the cloud and use IaaS and PaaS at the same time. This blurs the lines of distinction between them, since both are simply part of the new workflow for these IT teams and the differences are far less pronounced in this environment.

Benefits of Cloud Convergence

While converged infrastructure, in general, has gained quite a bit of momentum, it's been followed by cloud services also moving toward convergence. This is in part due to the advantages both types of convergence offer to IT and management.

Being able to pool IT resources is a great advantage, as teams can work together more easily and with a standard set of tools and procedures. With cloud service convergence, this again allows IT resources to use a single service for all of the various service needs.

Automated provisioning of resources is also eased with the convergence of cloud services. Since one vendor can provide a single and more consistent interface for its users, IT teams can more easily and quickly provision any needed resources since they will already be familiar with the system they need to use to perform the task.

Scaling can also be eased by this convergence. Since companies often have to scale infrastructure, platforms, backup capacity, and numerous other things at around the same time and do so quickly, using a central resource is a much more efficient use of time and resources.

Enterprise Examples of Converged Infrastructure: Microsoft

Microsoft began offering its Azure service as PaaS. This morphed over time into a converged model in which it also made infrastructure such as servers available for provisioning, producing a service that converges PaaS and IaaS. Since the developers Microsoft was targeting with the original service test and launch their platforms on some sort of server infrastructure, those users would likely more than welcome the ability to perform both tasks in a single environment. 

With this type of trend happening in numerous environments, it is only natural that other cloud service providers have done much the same thing, combining multiple service types into a converged service infrastructure that makes management of these resources far easier for IT teams and managers.

Why Is This 30-Year-Old Website Architecture So Popular in 2017?


A popular infrastructure for web applications is the “shared nothing” architecture (sometimes also referred to as “share nothing”). Interestingly enough, even though shared nothing architecture is widely used for web development, the idea was thought of way back in 1986. According to this Wikipedia article, Michael Stonebraker at the University of California, Berkeley used the term in a 1986 database paper.

While the idea has indeed been around for some time, it wasn't until recently that it became a more popular means of deploying systems.

What Is “Shared Nothing” Architecture?

A shared nothing architecture is one in which you have a number of separate nodes that do not share particular resources, most notably disk space and memory, though this can be expanded to include other resources such as databases that also should not be shared.  

Comparison of several architectures: share everything, share disks, and share nothing. Source: SlideShare

Operating under numerous self-sufficient nodes rather than having a single source of particular resources offers several advantages: easier scaling, non-disruptive upgrades, elimination of a single point of failure, and self-healing capabilities.

Benefits of Shared Nothing Architecture

Scaling becomes simpler when things such as disks are not shared. For example, scaling up a single shared disk to get more storage space can lead to enormous problems if things do not go well, as all of the other resources require that disk to be able to do their work. 

On the other hand, if you are using several nodes that do not share the space, scaling up the disk space on any or all of the resources becomes quite a bit easier, with fewer potential problems. If the scaling should fail on one of the resources, the others will still continue to do their work normally. According to Gerardnico, "This architecture is followed by essentially all high-performance, scalable DBMSs, including Teradata, Netezza, Greenplum, as well as several Morpheus integrations. It is also used by most of the high-end e-commerce platforms, including Amazon, Akamai, Yahoo, Google, and Facebook."

Enables Non-disruptive Upgrades

Similar to the scaling advantages, you can use shared nothing architecture to perform non-disruptive upgrades of your services. Instead of incurring downtime while you upgrade an infrastructure with shared resources, you can upgrade one node at a time while the redundant nodes continue to run, so you do not need to shut everything down for the duration of the upgrade.

For example, if you need to upgrade an app, you can do so on each node while the others are running. Since these nodes don’t share disk space or memory, and redundant copies of the app live on separate disks, you can simply update each one in turn without taking everything down. If one of the upgrades fails along the way, it only takes down that single node rather than the entire service. This can make upgrading much less stressful; a minimal rolling-upgrade loop is sketched below.
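The following is a rough sketch of such a rolling upgrade, assuming three hypothetical nodes (node1 through node3) behind a load balancer, a service managed by systemd under the hypothetical name myapp, and a deploy.sh script that installs the new version on a single host; none of these names come from the article.

# Upgrade one node at a time; the remaining nodes keep serving traffic.
for node in node1 node2 node3; do
  ssh "$node" "sudo systemctl stop myapp"      # take only this node out of service
  scp deploy.sh "$node":/tmp/deploy.sh
  ssh "$node" "bash /tmp/deploy.sh"            # install the new app version
  ssh "$node" "sudo systemctl start myapp"     # the node rejoins while the others keep running
done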

Eliminates Single Point of Failure

With shared systems, a single point of failure can take down your site or app entirely. As noted, the ability to have separate systems on separate nodes with redundancy can make things much easier while avoiding the disaster of a single failure causing unexpected downtime.

For instance, the failure of a shared disk could prove quite disastrous to your system. Since it is a single point of failure, none of your other services can run properly until the failed disk is restored. If there is no recent backup, then you may also incur a data loss, which could be small or very large in scope. 

The next step in this scenario is to stand up a new disk from the latest backup you have (or, in really bad cases, from only the basic data you need to run the system from scratch). This will likely take some time, and your services will be down or non-functional until the recovery work is complete.

It is far better to have redundancy across separate nodes, knowing that the other nodes won’t fail because of a disk failure on a single one.

Avoids Unexpected Downtime

Along with all of the other advantages, "shared nothing" architecture allows for some amount of self-healing that can be another line of defense against unexpected downtime. For example, when you have redundant copies of data or databases on different disks, a disk that loses data may be able to recover it when the redundancies are synced. 

Had it instead been a single, shared disk, the data would be lost and downtime would inevitably ensue. As you can see, shared nothing architecture can be very helpful to your organization, and perhaps save you from any number of outages that might otherwise occur when certain resources are shared. Even though the initial idea was introduced more than three decades ago, today's technology makes shared nothing architecture a much more viable option.

 

 

How to Create a Docker Backup With Morpheus


Docker and Morpheus go together like bread and butter. Want proof? Look no further than the process of creating container backups, which are an essential element of Docker management.

Consider this typical scenario: You’ve created a data volume for your container and now want to back up both the changes to the container and the container’s data volume. Backing up the container changes is as easy as committing and pushing the image to Docker Hub, or alternatively to a locally hosted private registry.

That leaves the data volume unprotected because, by definition, the data volume is not committed as part of the image. You need to back up the data volume separately because when you work on a cluster, such as CoreOS, you don’t know where the image is going to run. That’s why you need access to the data volume along with the image.

The manual method for backing up, restoring, and migrating data volumes is presented in this Docker tutorial; that approach stores the volumes locally, along the lines of the sketch below. DZone’s Eran Avidan extends the technique so the backup can be pushed to Docker Hub and restored to any location.
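As a rough sketch of the manual approach, assume a container named dbstore with a data volume mounted at /dbdata (both names are illustrative, following the Docker tutorial’s convention):

# Back up the data volume into a tarball in the current directory:
> docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

# Restore the tarball into the data volume of a new container (dbstore2):
> docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"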

Automate Container Backups in a Flash via the Morpheus Dashboard

If you like entering arcane operators from the command line, you’ll love these manual container backup techniques. Keep in mind, though, that there’s a faster, more efficient way to back up containers that’s just as effective: let Morpheus do the heavy lifting.

Step 1: Add a Docker Host to Any Cloud

As explained in a tutorial on the Morpheus support site, you can add a Docker host to the cloud of your choice in a matter of seconds. Start by choosing Infrastructure on the main Morpheus navigation bar. Select Hosts at the top of the Infrastructure window, and click the “+Container Hosts” button at the top right.

To back up a Docker host to a cloud via Morpheus, navigate to the Infrastructure screen and open the “+Container Hosts” menu. Source: Morpheus

Choose a container host type on the menu, select a group, and then enter data in the five fields: Name, Description, Visibility, Select a Cloud and Enter Tags (optional). Click Next, and then configure the host options by choosing a service plan. Note that the Volume, Memory, and CPU count fields will be visible only if the plan you select has custom options enabled.

Here is where you add and size volumes, set memory size and CPU count, and choose a network. You can also configure the OS username and password, the domain name, and the hostname, which by default is the container name you entered previously. Click Next, and then add any Automation Workflows (optional). Finally, review your settings and click Complete to save them.

Step 2: Add Docker Registry Integration to Public or Private Clouds

Adam Hicks describes in another Morpheus tutorial how simple it is to integrate with a private Docker Registry. (No added configuration is required to use Morpheus to provision images with Docker’s public hub using the public Docker API.)

Select Integrations under the Admin tab of the main navigation bar, and then choose the “+New Integration” button on the right side of the screen. In the Integration window that appears, select Docker Repository in the Type drop-down menu, enter a name and add the private registry API endpoint. Supply a username and password for the registry you’re using, and click the Save Changes button.

Integrate a Docker Registry with a private cloud via the Morpheus “New Integration” dialog box. Source: Morpheus

To provision the integration you just created, choose Docker under Type in the Create Instance dialog, select the registry in the Docker Registry drop-down menu under the Configure tab, and then continue provisioning as you would any Docker container.

Step 3: Manage Backups

Once you’ve added the Docker host and integrated the registry, a backup will be configured and performed automatically for each instance you provision. Morpheus support provides instructions for viewing backups, creating an instance backup, and creating a server backup.

Start by choosing Backups on the main navigation bar. Select Backups on the sub-navigation bar that appears, and click the backup name to view details about it. To create an instance backup, click the Add Backup button, which opens the Create Backup Wizard.

Select the Instance radio button in the wizard, give the backup job a name, and choose the instance you want to back up from the drop-down menu. Click Next, and then enter such pertinent details as the database name, username, password, and container. Click Next again, and then schedule your backup by choosing the days and time. Click Complete to save and activate the backup.

Step 4: Configure and Integrate Backups

To alter or disable a scheduled backup, select the Admin tab on the main navigation bar and choose Backups on the sub-navigation bar. In the Backup Settings window, you can toggle on and off the options for Scheduled Backups, Backup Appliance, and Default Backup Provider. You can also pick your default backup provider via the drop-down menu, and enter the minimum number of successful backups to retain.

Step 5: Provision a Docker Host into Hyper-V

An article on Morpheus support about Hyper-V integration highlights a great timesaver for any Docker installation: the ability to support multiple containers per Docker host. The Add Instance catalog in Provisioning is used to provision virtual machine-based instances.

Start by provisioning the Docker host in Hyper-V by navigating to the Cloud detail page or the Infrastructure Hosts section. Choose the “+Container Host” button to add a Hyper-V Docker host. Morpheus treats the host like any other hypervisor, except that it is used to run containerized images rather than virtualized ones. A green checkmark indicates that the Docker host has been provisioned successfully and is ready for use.

If the provisioning fails, an explanation of the error appears when you click into the relevant host. For example, a network connectivity glitch may be traced to the host’s inability to resolve the Morpheus appliance URL. This setting can be configured in Admin > Settings.

Control Plane Versus Data Plane


The terms control plane and data plane get tossed around quite a bit these days, but it seems that many have a difficult time understanding the key differences between these two terms, with the ambiguity even leading to debate amongst CCIE candidates. This is understandable since the terms are abstractions of a sort, but like many abstractions, they serve an important purpose if you can grasp the core concept.

As Peter Singer put it, "Whatever cannot be said clearly is probably not being thought clearly either." In a nutshell, the control plane is where network signaling, learning, and planning occur and the data plane is where the packets are actually moved between endpoints. This Cisco Learning Network Post comparing the control plane to planning public transportation routes and the data plane to actually moving people in buses and trains is a good reference for those looking to further conceptualize the terms.

With the concepts defined, the reason CIOs and tech teams should concern themselves with these abstractions becomes a bit clearer. Now it’s time to move past the concepts and into the realities of what to do. While the how-tos associated with the data plane may be clear enough, how can one address the challenges of the control plane in modern IT environments that include on-premise physical servers, hypervisors, virtual machines, and various cloud solutions? Starting with a true control plane solution is the first step.

Why Morpheus Is a Control Plane Solution

As mentioned in this Cloud Computing Today Q&A with our CEO, Jeff Drazan, Morpheus is a unique solution that sits in the control plane. Jeff points out that many PaaS solutions lock users into a specific prepackaged environment. This approach can be successful as it achieves the goal of abstracting the underlying infrastructure and serves its purpose in many use cases, but oftentimes the lack of flexibility offered by these solutions can impact business.

What happens when there is a requirement to run services on a variety of platforms ranging from public cloud to on-premise servers? This is where Morpheus stands out. In a single pane of glass, Morpheus offers users the ability to shift between platforms and provision apps in public and private clouds or on-premise servers seamlessly. This additional flexibility and platform agnosticism are what differentiate Morpheus from other cloud solutions.

What that Means for Your Cloud

If you’re part of a modern IT organization, it’s likely that cloud adoption in your corporate environment will only continue to grow in the near future. In fact, Gartner projects that the public cloud services market will grow 18 percent to $246.8 billion in 2017. The continued adoption of cloud services in your organization probably won’t be one clean transition from on-premise infrastructure to one specific cloud-based infrastructure; such a solution is too simplistic for most environments (and that’d make your job way too easy, right?).

On the contrary, you’ll probably face a transition where apps are run in multiple environments and cloud adoption for a given solution means another environment to manage.

These high-paced and varied environments are exactly the instances where Morpheus shines. Using a simple yet powerful GUI, users can provision applications, databases, and other app stack components in seconds.  

Why that Makes Morpheus the Perfect CMP

Because Morpheus resides in the control plane, the Morpheus Control Panel is uniquely capable of mitigating many of the complexities of life in the cloud. If you have ever been tasked with migrating an application from an on-premise server to the cloud, or vice versa, you’ll appreciate the fact that Morpheus can handle the move in minutes.

Coupling this flexibility and ease of use with built-in monitoring, logging, access and role management, a robust API and CLI, and a backup-and-recovery solution makes Morpheus the ideal CMP for a wide variety of use cases. Built from the ground up to solve real-world business problems, Morpheus comes ready to integrate with any aspect of your application stack from day one.

All these features come together elegantly to make Morpheus the premier cloud management platform and a truly holistic solution. Contact us today to see what Morpheus can do for you.

How to Do DevOps Testing with Morpheus


The widespread adoption of agile development methods has transformed the software design timeline into a “time circle.” Everything happens simultaneously: design, test, deploy, maintain. Considering the fundamental change this represents, it’s no wonder companies often struggle to implement continuous integration/continuous delivery for their vital data systems.

The area of agile development that trails all others is testing. Software Development Times cites a recent study by test tool vendor Zephyr that found 70 percent of the small companies surveyed have adopted agile processes, but only 30 percent have automated their testing operations. Another survey sponsored by Sauce Labs (pdf) reports agile adoption has reached nearly 88 percent at organizations of all sizes, yet only 26 percent of the companies have implemented test automation.

DevOps teams have to manage design, test, deploy, and maintain products while keeping in mind the needs of four distinct, diverse groups: 

  • QA staff, who are in charge of use cases, test cases, and test plans 
  • Developers/testers who implement the test cases 
  • DevOps who integrate testing into the continuous-delivery pipeline 
  • Line managers who generate and act on test reports
Addressing the DevOps Testing Needs of Various Stakeholders

In a DZone post, Sergio Sacristan runs down the testing features each distinct segment of the continuous-deployment community requires. At the testing stage, the first two required characteristics are robustness and stability. Testers look for an environment that requires little training to use, that keeps development and maintenance simple, and that is easy to port between applications.

At the execution stage, the key is seamless integration of testing with deployment tools. The test environment must be flexible and adaptable enough to accommodate desktop apps, web apps, mobile apps, cloud services, and varied test scenarios. It should be easy to apply the same test with different parameters, runtimes, and application properties, as in the sketch below. Other considerations are the ability to run tests in parallel, to scale tests without sacrificing performance, and to ensure testing tools are easy to access.
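As a minimal illustration of parameterized, parallel test runs, the snippet below runs the same containerized test suite against two environments at once; the image name, the TARGET_ENV variable, and the environment names are all hypothetical rather than tied to any specific product.

# Run the same test image with different parameters, in parallel.
for env in staging prod-mirror; do
  docker run --rm -e TARGET_ENV="$env" myorg/api-tests &
done
wait   # wait for both runs to finish before collecting reports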

The continuous-delivery component that DevOps is most likely to overlook is the one that's most critical to managers: reporting. The test environment must be capable of sending reports—whether via email or text—that communicate pertinent test results in a timely manner. After all, the primary purpose of testing is to identify and address problems. In addition to complete, up-to-date logs, the report function has to provide access to a history of executions and offer customizations to allow performance to be measured between versions.

Continuous Testing's Goal: Prevention Replaces Detection

Imagine an application deployed to production with zero defects. What many DevOps teams would consider a pipe dream is an achievable goal according to Alex Martins, who explains in a DZone post the important distinctions between continuous testing and test automation. In a nutshell, test automation is a process that is applied at some level to nearly all IT operations. Conversely, continuous testing is a philosophy whose goal is to prevent errors from finding their way into code rather than detecting and correcting errors after the fact.

Martins breaks continuous testing into four "pillars": code quality, pipeline automation, customer experience, and application quality.

Continuous testing attempts to eliminate coding errors before the application has reached the traditional testing phase. Source: Alex Martins, via DZone

Two popular approaches for ensuring code quality are code style guides and regular peer reviews. Both techniques are intended to keep code consistent across present and future dev team members to facilitate maintenance and updates. Martins points out that code quality does not automatically translate into application quality.

Even the highest-quality code will fall flat if it can't travel freely throughout the production environment. The second of the four pillars calls for the pipeline to be automated to allow network traffic to reach its destination with no manual intervention. Whenever a human gets involved, you've created an opportunity for errors to be introduced, such as someone using an incorrect library or applying the wrong configuration files.

Devising a comprehensive, unambiguous test environment requires getting business analysts, developers, testers, and operations staff together so each area understands the needs of all the other areas. Just as the test timeline becomes a test "circle," the formerly siloed departments become a single unified organization working concurrently rather than consecutively.

Morpheus Automates DevOps Testing out of the Box

Traditional approaches to DevOps testing are anything but simple. Are you dealing with virtual machines? Docker containers? A resource service? Or maybe a full Mongo cluster? Now stitch these and other components together in an integrated, continuous test environment. Definitely not simple.

What you need is the ability to wrap a handful of containers together with a half dozen VMs and then throw in a separate set of resource services, all manageable as a single instance. That's the Morpheus difference in a nutshell. 

As explained on the Morpheus support site, an instance can have multiple nodes, each representing a container or a virtual machine, depending on the provisioning engine that created the instance. Other tutorials describe how to create instances using Morpheus' automatic provisioning, and how to edit, delete, and manage Morpheus instances.

Drill Down to the Heart of Morpheus Instances

All four components of the continuous testing model are represented in the Morpheus Instance Details section. The instance info screen includes the name, description, environment, status, group, version, creation date, layout, max memory and storage, and other data. The summary window shows a summary of CPU, memory, storage, IOPS, and network throughput. The deploy screen lets you easily track deployment history.

Morpheus helps automate DevOps testing and deployment by placing all necessary information in easy-to-access instance information screens. Source: Morpheus

Once your instances are configured and deployed, use Morpheus checks to ensure peak performance. Some check types are selected automatically when you provision a service or instance type, such as database checks, web checks, and message checks. You can customize checks to run custom queries, change queue sizes, or adjust severity levels and check intervals.

Finally, use Morpheus policies to make governance and auditing quick, simple, and inexpensive. Policies can be applied to all instances at the Group or Cloud level (Cloud policies will override conflicting Group policies during provisioning). Among the policy types are expiration, host name, instance name, max containers, max cores, max hosts, max memory, max storage, and max VMs.

21 Automated Deployment Tools You Should Know


Bill Gates was quoted as saying, “The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.” The DevOps movement of the last few years provides strong empirical evidence for that statement, and the numbers seem to support it too. An Enterprise Management Associates research report indicates that companies in which “Continuous Delivery” frequency increased by 10% or more were 2.5 times more likely to experience double-digit (≥10%) revenue growth.

Safe to say, there are compelling reasons to do your homework on automation. Maximizing efficiency and shortening the feedback loop are vital to creating and maintaining a competitive edge. To help you get started, here is our list of 21 automated deployment tools you should know.

List of 21 Automated Deployment Tools

Jenkins

One of the leading Continuous Delivery (CD) and Continuous Integration (CI) tools on the market, Jenkins is an automation server with a high level of extensibility and a large community of users. Jenkins forked from Oracle’s Hudson CI in 2011, during a period of public differences of opinion between members of the developer community and Oracle. A small example of driving Jenkins from the command line appears below.
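As a small, hedged illustration of the automation angle, a parameterless Jenkins job can be triggered remotely over its REST API; the host, job name, and credentials below are placeholders, and an API token is assumed to be configured for the user.

# Kick off a build of the hypothetical "my-app" job via the Jenkins REST API.
> curl -X POST "https://jenkins.example.com/job/my-app/build" --user alice:API_TOKEN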

ElectricFlow

ElectricFlow is a release automation tool that offers a free community edition you can run on VirtualBox. ElectricFlow supports a number of plugins as well as a Groovy-based DSL, a CLI, and APIs.

Microsoft Visual Studio

One of the cornerstones of Microsoft’s DevOps offerings is Visual Studio. Visual Studio allows users to define release definitions, run automations, track releases, and more.

Octopus Deploy

Octopus Deploy is built with the intent of automating deployment for .NET applications. You can install Octopus Deploy on a server or host an instance in Azure.

IBM UrbanCode

Purchased by IBM in 2013, UrbanCode automates the deployment to on-premise or cloud environments.

AWS CodeDeploy

Amazon’s automated deployment tool, CodeDeploy, boasts an impressive list of featured customers and platform and language agnosticism.

DeployBot

DeployBot connects with any git repository and allows for manual or automatic deployments to multiple environments. DeployBot offers a myriad of integrations, including the ability to deploy through Slack.

Shippable

Shippable defines its own “Pillars of DevOps,” and its CI platform runs builds on Docker-based containers called minions.

TeamCity

TeamCity is a CI server from JetBrains. TeamCity comes with smart configuration features and has official Docker images for servers and agents.

Bamboo

Bamboo Server is the CI offering from the people at Atlassian, the makers of Jira and Confluence. Bamboo advertises “integrations that matter” and offers a “small teams” package that donates proceeds to the Room to Read charity.

Codar

Codar is a continuous deployment solution from HP. Deployments are triggered using Jenkins.

CircleCI

CircleCI is a CI solution that puts an emphasis on its flexibility, reliability, and speed. CircleCI offers solutions from source to build to deploy and supports a variety of languages and applications.

Gradle

Gradle is a build tool used by some of the biggest names in the industry, such as LinkedIn, Netflix, and Adobe. Gradle uses Groovy build scripts and build-by-convention frameworks, and positions itself as a general-purpose build tool along the same lines as Apache Ant.

Automic

Automic seeks to apply DevOps principles to back-end apps, allowing them to benefit from the same practices that many front-end, web-based apps have adopted over the past few years.

Distelli

Distelli specializes in deploying Kubernetes Clusters anywhere but can be used with any cloud or physical server. According to this TechCrunch article, Distelli secured $2.8 million in Series A funding in December 2015 and was founded by former AWS employee Rahul Singh.

XL Deploy

XL Deploy is an application release automation tool from XebiaLabs that supports a variety of plugins and environments and uses an agentless architecture.

Codeship

Codeship is a hosted CI solution that enables customization through native Docker support.

Open-Source Deployment Tools 

GoCD

A CD server with an emphasis on visualizing workflows, GoCD is an open-source project sponsored by ThoughtWorks Inc.

Capistrano

Capistrano is an open-source deployment tool written in Ruby. The documentation for Capistrano highlights its scriptability and “sane, expressive API.”

Travis CI

Travis CI can be synced to your GitHub account and allows for automated testing and deployment. Travis CI is free for open-source projects.

BuildBot

BuildBot is an open-source Python-based CI framework that describes itself as “Framework with Batteries Included”. BuildBot is geared towards use cases where canned solutions just are not flexible enough.

 

Depending on your use case and environment, one or more of these tools may be right for you. Regardless of which tools you use, a cloud management platform that resides in the control plane, like Morpheus, can serve as the backbone of your holistic DevOps solution. With compatibility and integrations for everything from Docker to Jenkins, Morpheus helps make provisioning and migration an elegant process. Contact us today to learn more about what Morpheus can do for you.

How MSPs Can Meet the Cloud Needs of Their Customers


You hear all this talk about how the cloud changes everything. Not really. The most important things stay the same. If you want to succeed, you need to have a plan. 

Not just any plan, but your plan: the plan that's custom-made to reach your business goals. The trick is to find the cloud service that will turn your plan into action. The service has to be as open, flexible, scalable, and agile as the cloud itself.

For a growing number of companies of all sizes, the services that are best suited to achieving their cloud goals are those offered by managed service providers. The best MSPs focus on knowing what their customers need: cloud apps accessed by LOB managers via one-stop dashboards and programmable APIs.

For MSPs to succeed as cloud providers, they have to reverse their past emphasis on infrastructure (hardware) sales and focus instead on profitable software and services. A model for this transition is Organised Computer Systems Ltd., an HPE reseller that also partners with Microsoft and other cloud services. OCSL provides a mix of cloud services, managed services, and professional services; the company has increased its service revenue by 30 percent and expects services to represent 70 percent of total revenue in three years.

A 'future' cloud service that's available today

The reason for OCSL's service growth is a decision made three years ago to emphasize cloud service provision through partnerships with diverse vendors: AWS, Microsoft Azure, and emerging cloud technologies such as Azure Stack and Morpheus. In Morpheus's case, what OCSL and other cloud services deliver is a taste of what's in store for tomorrow's cloud-centric IT.

Start with the Morpheus dashboard interface that puts all of the most important information about your company's apps and databases in one place. You can see at a glance your overall system status as well as the status of individual apps. There's no faster way to deploy databases, apps, and app stack components than by using Morpheus's one-click provisioning, which lets you provision on any server or public/private/hybrid cloud in just seconds.

The Morpheus instance catalog serves as a one-stop shop for choosing the items you want to provision, and deciding how those items will be pieced together. Source: Morpheus

Flexibility is key for any cloud app management system. Open REST APIs allow you to integrate Morpheus data seamlessly with heterogeneous systems; a sketch of a simple API call appears below. You can also use a standards-based command-line interface to manage your apps and databases. Access and role management is just as straightforward: define roles for teams and individuals based on geography, server groups, or individual apps and databases. Morpheus is the purest way to jumpstart your IT operations into the 21st century.
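As a hypothetical sketch only, a reporting or monitoring system might pull instance data from the Morpheus REST API with a call along these lines; the appliance URL and token are placeholders, and the exact endpoint path and authentication header should be confirmed against the Morpheus API documentation.

# Hypothetical example: list instances via the Morpheus REST API.
> curl -H "Authorization: BEARER $MORPHEUS_API_TOKEN" "https://morpheus.example.com/api/instances"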

Opportunity for MSPs to extend, expand their customer reach

The biggest mistake MSPs make, according to attendees at the CompTIA Annual Membership Meeting in Chicago in late March 2017, is to think they're selling their customers managed services. MSPMentor's T.C. Doyle reports on the meeting in a March 22, 2017, article.

Passportal President Dan Wensley says customers aren't clamoring for cloud computing; they "only want a solution to a business problem." Rather than selling cloud services as add-on point solutions, MSPs should be integrating cloud tools with their core deliverables. Analysts caution that to capitalize on the opportunities the cloud presents, MSPs have to attract more customers than the 50 to 100 that have been typical of transaction businesses in the past.

MSPs that haven't embraced cloud services are leaving money on the table, according to David Molnar, CEO of Dave's Computers. In a March 23, 2017, article on Business Solutions (registration required), Molnar credits the "exploding" popularity of cloud services as a primary driver of recent growth in the MSP sector. Instead of adopting cloud technology directly, companies are turning to MSPs to manage the transition of their IT operations from mostly in-house to mostly cloud-based.

According to a recent survey by Clutch of enterprise IT managers, the primary reasons for adopting cloud services are increased efficiency, security, data storage, and flexibility. Source: Clutch

Other factors driving the increased demand for MSP services, according to Molnar, are a heightened emphasis on security, speed and efficiency in adopting scalable IT operations, compliance with HIPAA and other government regulations, and more predictable budgeting. Perhaps the greatest benefit to companies partnering with MSPs is the ability to streamline internal IT functions while gaining access to expertise in the latest cloud technologies.

Say good-bye to 'one-and-done' MSP services

Just a few years ago, IT was in-house, and MSPs generated the bulk of their revenue by installing Microsoft and other third-party software in data centers large and small. Once the system went live, the MSP's work was essentially done. The shift to the cloud means MSPs have to maintain ongoing relationships with their customers.

Unigma's Kirill Bensonoff writes on MSPMentor that success in offering cloud services "requires ongoing diligence to ensure you find instances, volumes, and other resources that are not being used, and rightsize them at the right times to save your customers money." Bensonoff recommends focusing on specific markets you're knowledgeable about, creating service bundles optimized for your specialties, and always starting with a clear understanding of precisely what the customer needs, today and tomorrow.

When it comes to establishing long-term relationships with SMBs in particular, there's no better way for an MSP to win return business than by offering quality end-user training. Neal Bradbury of Intronis MSP Solutions explains on the Business Solutions blog that users are the weakest link of any security plan.

One way to pitch security-awareness training services to SMBs is by presenting user training as a component of the company's business continuity and data protection plan. Once again, it comes back to helping customers devise a rock-solid plan that's easy to adjust as conditions change. After all, the only thing we can be certain of is that conditions will change.


Hybrid Clouds: Here to Stay or Stopgap Technology?


When something is labeled a "hybrid," the natural inclination is to consider it temporary, a stopgap between what was and what will be. A gas-electric hybrid car, for example, combines old and new technologies. This may be a good choice for drivers today, but as most analysts will tell you, in the automobile industry of the future, gas-powered engines will be dinosaurs.

Will hybrid cloud networks meet the same fate as their four-wheel counterparts? Nearly every cloud forecast paints a rosy picture for hybrid clouds, which combine the efficiency and scalability of public cloud services with the security and control of in-house IT systems. Yet history tells us the hybrid cloud phenomenon is transitory.

Some analysts will tell you this is one of those rare occasions when history is wrong. They claim hybrid cloud's unique combination of performance, affordability, reliability, and security means the technology is here to stay. Other experts believe hybrid cloud promoters overlook many of the approach's built-in shortcomings, especially hybrid cloud's cost premium and its more complicated management.

Today's best cloud approach may not be tomorrow's

There's no denying the growing popularity of the hybrid cloud approach. Research firm ReportsWeb forecasts that 82 percent of enterprises will have a hybrid cloud strategy in place in 2017. The overall hybrid cloud market is projected to experience a compound annual growth rate of 34.2 percent from 2016 through 2022. By 2020, hybrid cloud will be the most popular category of cloud computing, as forecast by research firm Gartner.

Another result of the ReportsWeb survey may be more telling: The percentage of enterprises having established cloud governance policies in place will increase from fewer than 30 percent at the start of 2017 to more than 50 percent by the beginning of 2018. As Datamation's Cynthia Harvey writes in a March 28, 2017, article, changing perceptions about security have played a big role in the growing popularity of public cloud services.

The popularity of hybrid clouds continues to grow, making the hybrid approach the most popular among IT executives by 2019, according to a survey by Saugatuck Technology. Source: Datamation

In the early days of cloud computing, IT managers believed their internal systems were more secure than the cloud. Now, security expertise is a selling point for leading cloud providers: few companies can compete with the talent and experience that Amazon, Microsoft, IBM, Google, and other cloud giants benefit from. In fact, the difficulty in attracting the best developers, security specialists, and other IT pros is a primary reason companies cite for going all in with the public cloud.

The CIO's take on the business case for hybrid cloud

The larger the organization, the more care that must be taken when pinning your company's fortunes to the cloud. Any enterprise CIO knows the board will want a full accounting of where the cost savings will come from -- and when the savings will arrive. Ensono Europe COO Paul Morris writes in a March 30, 2017, article on The Stack that a common reason for adopting hybrid cloud is to mitigate migration costs.

In fact, the feasibility of migrating apps and systems to the cloud often determines which apps stay in-house and which are good candidates for migration to the cloud. You maximize the value of existing systems while avoiding migration costs until current apps start showing their age. Then you're better able to justify the cost of reimagining the systems in cloud-native form.

Increases in revenue and margin, and a reduction in total cost of ownership are the top two key performance indicators used by organizations to gauge the success of their cloud migration projects. Source: IBM

An obstacle to hybrid cloud adoption is the problem of having to monitor and provide access to systems in two distinct environments: the legacy systems running in-house, and the apps you've migrated to the public cloud. The best way to manage the added complexity of managing both aspects of hybrid networks is by using a cloud management console backed up by fast and accurate performance logs.

The Morpheus cloud application management platform delivers the efficiency, reliability, and affordability the cloud is famous for, combined with a world-class interface that puts powerful reporting, logging, and troubleshooting features in users' hands. Morpheus delivers the unified, comprehensive management interface hybrid cloud users are clamoring for.

Cloud value starts and ends with application performance

Observable Networks CEO Bryan writes in a March 27, 2017, article on Data Center Knowledge that most concerns of CIOs about cloud security have been addressed sufficiently to show cloud benefits far outweigh security risks. Now the gravest concern of many IT operations is the fear that their key apps and data systems will underperform on the cloud. Complicating the matter is the difficulty in assessing the app performance you can expect prior to migrating the assets to the cloud.

Ensuring adequate application performance is a key aspect of your cloud migration plan, but as TechTarget's Jeff Byrne explains in a March 2017 article, assessing app performance is more subjective than confirming that security and compliance requirements have been met. The cloud infrastructure configuration, network bandwidth, and traffic volumes are unique and variable. There is no way to anticipate how well a latency sensitive application will run without testing it in a real-world environment under production loads.

One solution is to use an on-premises cloud storage gateway, such as AWS Storage Gateway or Azure StorSimple, to cache frequently accessed data and resources locally. Another alternative is to pay for a premium cloud storage service combined with instances that are optimized for I/O-intensive workloads, or with others that are susceptible to latency, such as CRM, messaging, and transaction databases. The obvious downside is the added expense of such services, but for some apps, the extra cost may lead to savings in support, maintenance, and other areas.

How to Use Netstat for Network Troubleshooting


When debugging network problems on a Linux server, ping and traceroute are often helpful, but you may need further network details on hand to help track down an issue and get it fixed. One such command is netstat, which can offer details on network sockets as well as other helpful information. As with ping and traceroute, you can simply use netstat from the command line and get results quickly.

What is Netstat?
The netstat command in Linux is a very useful tool when dealing with networking issues.

Source: IBM developerWorks

Netstat, short for “network statistics”, is a tool that Linux (as well as other operating systems such as Windows and OS X) can use to display incoming and outgoing network connections. In addition, it can be used to get information on network statistics, protocol statistics, and routing tables.

You can use netstat to find network problems and measure the amount of network traffic, so it can be a really useful tool to help you gather the information you need to solve any outage, slowdown, or bottleneck issues on your network.

Basic Netstat

For a basic listing of all the current connections, you would simply call netstat with the -a option.

> netstat -a
Active Internet connections (servers and established)
Proto   Recv-Q   Send-Q   Local Address       Foreign Address     State      
tcp     0        0        localhost:ipp       *:*                 LISTEN     
tcp6    0        0        ip6-localhost:ipp   [::]:*              LISTEN     
udp     0        0        *:bootpc            *:*                                
udp     0        0        localhost:ntp       *:*                                
udp     0        0        *:ntp               *:*                                
udp6    0        0        ip6-localhost:ntp   [::]:*                             
udp6    0        0        [::]:ntp            [::]:*                             
udp6    0        0        [::]:mdns           [::]:*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  1      [ ACC ]     STREAM     LISTENING     11091    @/tmp/XX
unix  1      [ ACC ]     STREAM     LISTENING     39045    @/tmp/Cx

This provides some basic information on connections from different types of protocols like TCP and UDP, as well as active Unix domain sockets. However, netstat allows you to get more specific information that can be more helpful in debugging.

Filter by Connection Type

Sometimes filtering the results based on the connection type can be useful when trying to find the information you need. For example, if you want to see only the TCP connections, you can add the “t” option in addition to the “a” option, as shown below:

> netstat -at
Active Internet connections (servers and established)
Proto   Recv-Q   Send-Q   Local Address      Foreign Address   State      
tcp     0        0        host:domain        *:*               LISTEN     
tcp     0        0        localhost:ipp      *:*               LISTEN     
tcp     0        0        host.local:45789   host-:http        ESTABLISHED

Similarly, by using netstat -au, you can list only the UDP connections.

Filter by Listening Connections

If you want to only see the connections that are listening, you can do so by using the “l” option and remove the “a” option. Here is an example of this:

> netstat -l
Active Internet connections (only servers)
Proto   Recv-Q   Send-Q   Local Address   Foreign Address   State      
tcp     0        0        localhost:80    0.0.0.0:*         LISTEN     
tcp     0        0        localhost:443   0.0.0.0:*         LISTEN     

As with the “a” option, you can use netstat -lt and netstat -lu to further filter and get only the listening TCP or UDP connections. In this way, you can easily see whether a particular port is open and listening, and thus whether a website or app is up and running as expected. If you also need to know which process owns each listening socket, the example below shows one common combination of options.
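Combining the “t”, “u”, “l”, “n”, and “p” options lists listening TCP and UDP sockets with numeric addresses and the owning process (the “p” option generally requires root privileges). The output below is illustrative rather than taken from a real host.

> sudo netstat -tulpn
Active Internet connections (only servers)
Proto   Recv-Q   Send-Q   Local Address   Foreign Address   State     PID/Program name
tcp     0        0        0.0.0.0:22      0.0.0.0:*         LISTEN    812/sshd
udp     0        0        0.0.0.0:123     0.0.0.0:*                   640/ntpd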

See Network Statistics

If you want a summary of statistics for each protocol, such as packet counts and errors, use the “s” option:

> netstat -s
Ip:
  73419 total packets received
  0 forwarded
  0 incoming packets discarded
  73419 incoming packets delivered
  37098 requests sent out
  45 outgoing packets dropped
Icmp:
  119 ICMP messages received
  0 input ICMP message failed.
  ICMP input histogram:
    destination unreachable: 119
  102 ICMP messages sent
  0 ICMP messages failed
  ICMP output histogram:
    destination unreachable: 102
... OUTPUT TRUNCATED ...

As you can see, this offers some statistics that may be useful to you while debugging, such as total, incoming, and outgoing packets as well as ICMP messages that were received, sent, and failed.

The Drawbacks of Running Containers on Bare Metal Servers


Marketing hype isn't limited to the tech industries by any means, but the tech press certainly cranks up the hype machine to unprecedented levels. Among the recent darlings of the pundits is containerization. 

A prime example of the container-worshiping phenomenon is The VAR Guy's Christopher Tozzi, who lists in an April 3, 2017, article all the ways that Docker saves companies money. From Tozzi's description, you would think Docker can transform lead into gold: it's simpler to maintain, it speeds up delivery of new software, it uses resources more efficiently, and to top it all off, it saves you money.

Quick, where do I sign?

It's a good thing technology customers have a heightened sensitivity to such unadulterated claims of IT nirvana. A reality check reveals that no technology shift as fundamental as containerization is as simple, as fast, or as inexpensive as its backers claim it is. Yes, containers can be a boon to your data-management operations. No, you can't realize the benefits of the technology without doing your due diligence.

Here are some tips for avoiding the pitfalls of implementing containers in multi-cloud and hybrid cloud environments.

Implementing containers on servers: Where are the bottlenecks?

A primary advantage of containers over hypervisor-based virtualization is capacity utilization. Jim O'Reilly writes on TechTarget that because containers use the OS, tools, and other binaries of the server host, you can fit three times the number of containers on the server as VMs. As microservices are adopted more widely, the container capacity advantage over VMs will increase, according to O'Reilly.

Containers achieve their light weight, compared to VMs, by doing without a guest OS. Instead, containers share an OS and applicable bins/libraries. Source: LeaseWeb

In addition, there is typically much less overhead with containers than with VMs, so containers' effective load is actually closer to five times that of hypervisor-based virtualization. This has a negative effect on other system components, however. An example is the impact of the increased loads on CPU caches, which become much less efficient. Obviously, a bigger cache to address the higher volume usually means you have to use bigger servers.

Storage I/O and network connections also face increased demands to accommodate containers' higher instance volumes. Non-volatile dual inline memory modules promise access speeds up to four times those of conventional solid-state drives. NVM Express SSDs have become the medium of choice for local primary storage rather than serial-attached SCSI or SATA. In addition to reducing CPU overhead, an NVMe can manage as many as 64,000 different queues at one time, so it can deliver data directly to containers no matter the number of instances they hold.

Distribute containers on demand among in-house servers or cloud VMs

The fact is, not many companies are ready to make another huge capital investment in big, next-gen servers with a cache layer for fast DRAM, serial connections, and 32GB of cache for each CPU, which is the recommended configuration for running containers on in-house servers. On the Docker Blog, Mike Coleman claims you can have it both ways: implement some container apps on bare metal servers, and others as VMs on cloud servers.

Rather than deciding the appropriate platform for your entire portfolio of applications, base the in-house/cloud decision on the attributes of the specific app: What will users need in terms of performance, scalability, security, reliability, cost, support, integration with other systems, and any number of other factors? Generally speaking, low latency apps do better in-house, while capacity optimization, mixed workloads, and disaster recovery favor virtualization.

VMs are typically heavier and slower than containers, but because they are fully isolated, VMs tend to be more secure than containers' process level isolation. Source: Jaxenter

Other factors affecting the decision are the need to maximize existing investments in infrastructure, whether workloads can share kernels (multitenancy), reliance on APIs that bare metal servers don't accommodate, and licensing costs (bare metal eliminates the need to buy hypervisor licenses, for example).

Amazon ECS enhances container support

A sign of the growing popularity of containers in virtual environments is the decision by Amazon Web Services to embed support for container networking in the EC2 Container Service (ECS). Michael Vizard explains in an April 19, 2017, article on SDxCentral that the new version of AWS's networking software can be attached directly to any container. This allows both stateless and stateful applications to be deployed at scale, according to Amazon ECS general manager Deepak Singh.

The new ECS software will be exposed as an AWS Task that can be invoked to link multiple containers automatically. Any virtual network associated with the container is removed when the container disappears. Singh says AWS sees no need to make containers available on bare metal servers because there are better options for organizations concerned about container performance: graphical processing units, and field programmable gate arrays.

For many IT operations, going all-in with containers running on bare metal servers may be tomorrow's ideal, but VMs are today's reality as the only viable alternative for hybrid and public clouds. Enterprises typically run multiple Linux distributions and a mix of Windows Server and Linux OSes. It follows that to optimize workloads in such mixed environments, you'll run some containers on bare metal and others in VMs.

According to Docker technical evangelist Mike Coleman, licensing schemes tailored to virtualization make VMs more affordable, as TechTarget's Beth Pariseau reports. Coleman points out that companies nearly always begin using containers in a small way and wind up using them in a big way.

Mastering the Art of Container Management


Managing containers in cloud environments would be a lot simpler if there were just one type of container you and your staff needed to deal with. Any technology as multifaceted as containerization comes with a bucket full of new management challenges. The key is to get ahead of the potential pitfalls before you find yourself hip deep in them.

The simplest container scenario is packaging and distributing an existing application as a Docker container: all of the app's dependencies are wrapped into a Docker image, and a single text file (the Dockerfile) explains how to build that image. Each server then runs just that one instance of the container, just as the server would run one instance of the app itself; a minimal sketch of this packaging step appears below.
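The sketch below shows what that packaging step might look like for a hypothetical Python app; the image name, base image, and file names are illustrative rather than drawn from any particular project.

# A minimal Dockerfile that wraps the app and its dependencies into an image.
> cat Dockerfile
FROM python:3.9-slim
COPY . /app
RUN pip install -r /app/requirements.txt
CMD ["python", "/app/main.py"]

# Build the image from the Dockerfile, then run one container instance per server.
> docker build -t myorg/legacy-app:1.0 .
> docker run -d myorg/legacy-app:1.0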

Monitoring these simple containers is the same as monitoring the server, according to TechTarget's Alastair Cooke in an April 2017 article. The app's use of processes and resources can be viewed on the server. There's no need to confirm that the server has all the required components because they're all encapsulated in the Docker image and controlled by the Dockerfile, including the proper Java version and Python libraries.

How containers alter server management tasks

Placing apps in a Docker container means you no longer have to update and patch Java on the server. Instead, you update the Dockerfile and build a new Docker image. In fact, you may not need to install Java on the server at all. You'll have to scan the Dockerfiles and images for vulnerabilities and unsupported components. You may also need to set and enforce policies on the maximum age of Docker images, because dependency versions are set inside the image and can only be updated by creating a new image.

By removing a layer of software from the standard virtualization model, containers promise performance increases of 10 percent to 20 percent. Source: Conetix

Containers can be started and stopped in a fraction of a second, making them a lot faster than VMs. Microservice-enabled apps will use far more containers, most of which have short lifespans. Each of the app's microservices will have its own Docker image that the microservice can scale up or down by creating or stopping containers. Standard server-monitoring tools aren't designed for managing such a fast-changing, dynamic environment.

A new type of monitor is necessary that keeps pace with these quick-change containers. A solution being adopted by a growing number of companies is the Morpheus cloud application management platform. Morpheus's instance catalog includes basic containers and VMs as well as custom services for SQL and NoSQL databases, cache stores, message busses, web servers, and full-fledged applications.

Morpheus support steps you through the process of adding a Docker host to any cloud via the intuitive Morpheus interface. Under Hosts on the main Infrastructure screen, choose one of nine host types from the drop-down menu, select a Group, add basic information about the host in the resulting dialog box, set your Host options, and add Automation Workflows (optional). Once the Container Host is saved, it will start provisioning and will be ready for containers.

Provisioning Container Hosts via the Morpheus cloud app management service is as simple as entering a handful of data in a short series of dialog boxes. Source: Morpheus

Overcoming obstacles to container portability

Since your applications rely on data that lives outside the container, you need to make arrangements for storage. The most common form is local storage on a container cluster, with a storage plug-in added to each container image. For public cloud storage, AWS and Azure use different storage services.

TechTarget's Kurt Marko explains in an April 2017 article that Google Cloud's Kubernetes cluster manager offers more flexibility than Azure or AWS in terms of sharing and persistence after a container restarts. The shortcoming of all three services is that they limit persistent data to only one platform, which complicates application portability.

Another weakness of cloud container storage is inconsistent security policies. Applying and enforcing security and access rules across platforms is never easy. The foundation of container security is the user/group directory and the identity management system that you use to enforce access controls and usage policies. 

Containers or VMs? The factors to consider

The difference between containers and VMs is where the virtualization occurs. VMs use a hypervisor to partition the server below the OS level, so VMs share only hardware. A container's virtualization occurs at the OS level, so containers share the OS and some middleware. Running apps on VMs is more like running them on bare metal servers. Conversely, apps running on containers have to conform to a single software platform.

VMs offer the flexibility of bare metal servers, while containers optimize utilization by requiring fewer resources to be reserved. Source: Jelastic

On the other hand, containers have less overhead than VMs because there's much less duplication of platform software as you deploy and redeploy your apps and components. Not only can you run more apps per server, containers let you deploy and redeploy faster than you can with VMs.

TechTarget's Tom Nolle highlights a shortcoming of containers in hybrid and public clouds: co-hosting. It is considered best practice when deploying an application in containers to co-host all of the app's components to speed up network connections. However, co-hosting makes it more difficult to manage cloud bursting and failover to public cloud resources, which are two common scenarios for hybrid clouds.

Cloud management platforms such as Morpheus show the eroding distinctions between containers and VMs in terms of monitoring in mixed-cloud environments. It will likely take longer to bridge the differences between containers and VMs relating to security and compliance, according to Nolle.

Darknet Busters: Taking a Bite Out of Cybercrime-as-a-Service

The first step in combatting the perpetrators of Internet crimes is to uncover the Darknet in which they operate.

It's getting easier and easier for criminals to infiltrate your company’s network and help themselves to your financial and other sensitive information, and that of your customers. There's a ready market for stolen certificates that make malware look legitimate to antivirus software and other security systems.

The crooks even place orders for stolen account information: One person is shopping for purloined Xbox, GameStop, iTunes, and Target accounts; another is interested only in accounts belonging to Canadian financial institutions. Each stolen record costs from $4 to $10, on average, and customers must buy at least $100 worth of these hijacked accounts. Many of the transactions specify rubles (hint, hint).

Loucif Kharouni, Senior Threat Researcher for security service Damballa, writes in a September 21, 2015, post that the cybercrime economy is thriving on the so-called Darknet, or Dark Web. Criminals now offer cybercrime-as-a-service, allowing anyone with an evil inclination to order up a malware attack, made to order -- no tech experience required.

Criminal sites operate beyond the reach of law enforcement

Sadly, thieves aren't the only criminals profiting from the Darknet. Human traffickers, child pornographers, even murderers are taking advantage of the Internet to commit their heinous crimes, as Dark Reading's Sara Peters reports in a September 16, 2015, article.

Peters cites a report by security firm Bat Blue Networks that claims there are between 200,000 and 400,000 sites on the Darknet. In addition to drug sales and other criminal activities, the sites are home to political dissidents, whistleblowers, and extremists of every description. It's difficult to identify the servers hosting the sites because they are shrouded by virtual private networks and other forms of encryption, according to Bat Blue's researchers.

Most people access the sites using The Onion Router (Tor) anonymizing network. That makes it nearly impossible for law enforcement to identify the criminals operating on the networks, let alone capture and prosecute them. In fact, Bat Blue claims "nation-states" are abetting the criminals, whether knowingly or unknowingly.
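For investigators or researchers who need to reach such a site programmatically, traffic is normally routed through a local Tor client rather than a direct connection. Here is a minimal Python sketch, assuming Tor is listening on its default SOCKS port (9050) and that requests is installed with SOCKS support (pip install requests[socks]):

```python
import requests

# Route the request through the local Tor client's SOCKS proxy. The "socks5h"
# scheme makes DNS resolution happen inside Tor, which is required for
# resolving .onion names.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("https://check.torproject.org/", proxies=proxies, timeout=60)
print("Routed through Tor" if "Congratulations" in resp.text else "Not using Tor")
```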

The Darknet is populated by everyone from public officials to religious extremists, pursuing an equally wide range of purposes. Source: Bat Blue Networks

While hundreds of thousands of sites comprise the Darknet, you won’t find them through the web’s Domain Name System. Instead, each site operates as an anonymous “hidden service” that publishes its descriptor to the Tor network. Rather than getting a domain from a registrar, the sites derive their addresses from self-generated public/private key pairs and use those keys to authenticate one another.

A 16-character hash of the public key, with .onion appended, serves as the address for reaching the hidden service. When a connection is established, keys are exchanged to create an encrypted communication channel. In a typical scenario, the operator installs a Tor client and web server on a laptop, takes the laptop to a public WiFi access point (avoiding the cameras that are prevalent at many such locations), and uses that connection to register the service with the Tor network.
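As a rough illustration of the legacy (version 2) addressing scheme described here, the short Python sketch below derives a .onion address from placeholder key bytes; a real service hashes the DER encoding of its actual RSA public key, and newer version 3 services use a different, 56-character scheme.

```python
import base64
import hashlib

def onion_v2_address(pubkey_der: bytes) -> str:
    """Version-2 .onion address: the base32 encoding of the first 80 bits
    (10 bytes) of the SHA-1 hash of the DER-encoded public key."""
    digest = hashlib.sha1(pubkey_der).digest()[:10]
    return base64.b32encode(digest).decode("ascii").lower() + ".onion"

# Placeholder bytes for illustration only.
print(onion_v2_address(b"example-der-encoded-rsa-public-key"))
# Prints a 16-character base32 string followed by ".onion"
```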

The Tor Project explains the six-step procedure for connecting to a hidden service anonymously and securely via the Tor network (a simplified sketch of the message flow follows the figure below):

  1. Party A builds circuits to select introduction points on the Tor network.
  2. A hidden service descriptor containing the public key and summaries of each introduction point, and signed by the private key, is uploaded to a distributed hash table on the network.
  3. Party B finds the hidden service’s .onion address and downloads the descriptor from the distributed hash table to establish the protected connection to it.
  4. Party B chooses a rendezvous point and a one-time secret, then creates an “introduce” message encrypted to the hidden service's public key; the message includes the address of the rendezvous point and the secret. The message is sent to one of the introduction points for delivery to the hidden service. (This step is shown in the image below.)
  5. The hidden service decrypts the message, finds the rendezvous address and one-time secret, creates a circuit to the rendezvous point, and sends a rendezvous message that contains another one-time secret.
  6. The rendezvous point notifies Party B that the connection has been established, and then Party B and the hidden service pass protected messages back and forth.

In the fourth of the six steps required to establish a protected connection to a hidden service over the Tor network, Party B (Ann) sends an “introduce” message to one of the introduction points created by Party A (Bob). Source: Tor Project
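To make that message flow easier to trace, here is a toy Python model of the six steps. It is not real Tor code: plain dictionaries stand in for circuits and the distributed hash table, and encryption, signing, and decryption are omitted, so only the order and rough content of the messages is sketched.

```python
import hashlib
import os

DHT = {}  # stands in for Tor's distributed hash table of service descriptors


class HiddenService:                      # Party A in the list above
    def __init__(self):
        self.public_key = os.urandom(32)  # placeholder key material
        self.intro_points = ["intro-1", "intro-2", "intro-3"]  # step 1

    def publish_descriptor(self):
        # Step 2: upload a descriptor (public key + introduction points).
        onion = hashlib.sha1(self.public_key).hexdigest()[:16] + ".onion"
        DHT[onion] = {"public_key": self.public_key,
                      "intro_points": self.intro_points}
        return onion

    def handle_introduce(self, msg):
        # Step 5: (decryption omitted) extract the rendezvous point and the
        # one-time secret, then build a circuit to the rendezvous point.
        return {"rendezvous": msg["rendezvous"],
                "client_secret": msg["secret"],
                "service_secret": os.urandom(16)}


class Client:                             # Party B in the list above
    def connect(self, onion_address):
        descriptor = DHT[onion_address]   # step 3: fetch the descriptor
        introduce = {                             # step 4: choose a rendezvous
            "rendezvous": "rendezvous-node-x",    # point and a one-time secret
            "secret": os.urandom(16),             # (encryption to the service's
        }                                         # public key is omitted here)
        return descriptor, introduce


service = HiddenService()
onion = service.publish_descriptor()
descriptor, introduce = Client().connect(onion)
reply = service.handle_introduce(introduce)    # steps 5 and 6: both sides now
print(onion, "meets at", reply["rendezvous"])  # meet at the rendezvous point
```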

Defeating the Darknet starts by removing its cloak of invisibility

Criminals cannot be allowed to operate unfettered in the dark shadows of the Internet. But you can’t arrest what you can’t spot. That’s why the first step in combatting Darknet crime is to shine a light on it. That’s one of the primary goals of the U.S. Defense Advanced Research Projects Agency’s Memex program, which Mark Stockley describes in a February 16, 2015, post on the Sophos Naked Security site.

Memex is intended to support domain-specific searches, as opposed to the broad, general scope of commercial search engines such as Google and Bing. Initially, it targets human trafficking and slavery, but its potential uses extend to the business realm, as Computerworld’s Katherine Noyes reports in a February 13, 2015, article.

For example, a company could use Memex to spot fraud attempts and vet potential partners. More broadly, the ability to search information that isn’t indexed by Google and other commercial search engines could give companies a significant competitive advantage, according to analysts. After all, knowledge is power, and not just for the crooks running amok on the Darknet.
