
DevOps: How to Give Your Business Velocity


When you want to streamline your ability to release or update products quickly, integrating DevOps into your organization can prove to be well worth any changes that need to be made in order to implement the new practices. One of the biggest advantages to DevOps is what is often called velocity - working with both speed and direction. By breaking down barriers between IT and business, this close and speedy collaboration can provide the direction that propels your business forward.

First Order: Collaboration

There is clear evidence to suggest that a DevOps approach to tech development can have a significant impact on the velocity of an IT organization.

Source: Chris Cancialosi, Forbes

When people can collaborate directly, they tend to hash out any potential issues with a project much more quickly. Organizations that are separated by function may find that development on a project moves much more slowly than hoped.

For example, selected business team members might meet with selected IT members at the beginning of a project. They then each go their own way with expectations that may change before their next meeting, which could be after several weeks, months, or longer have passed.

DevOps as the intersection of various departments. Source: Wikipedia

On the other hand, employing DevOps allows for more collaborative work between teams. Employees from each group can meet daily, or team members from one group can be embedded within one or more other groups, so that the different functional groups all work directly with one another every day.

This allows new needs or issues to be communicated right away and differences to be hashed out immediately, rather than waiting for a meeting that could be derailed by a single issue for its entire duration, leaving no time to address other needs and concerns.

DevOps helps the relationship between Development and Operations, the relationship between IT and the business, and the relationship between the company and its customers and employees.

Source: DEVOPS digest

Development with Velocity

DevOps is often implemented when an organization uses the agile method of software development, since agile frequently creates the need for the organizational changes that lead a company to adopt DevOps practices.

The good news is that in recent years there are two healthy and related trends in the Agile community: continuous delivery (CD) and DevOps.

Source: Scott W. Ambler, Dr. Dobbs

Agile development is a method that encourages speed and flexibility in response to an ever-changing set of requirements during software development. Instead of needing to wait for new ideas, this flexibility allows development to continue with little interruption or delay in the process as needs change. This ability to change rapidly also increases the number of releases of a product. Thus new releases and updates happen at a much faster rate, which encourages continuous delivery of a product.

The speed of agile typically requires a toolset that allows for the automation of certain processes so that speed is not hindered by processes that need to be performed manually over and over again. Not only can that be tedious, but it can use up the valuable time your developers or other IT staff could be using to further enhance the product and deliver further releases. Automation allows for many of the processes to be free of manual involvement, which frees up your staff for other work.

Our experience suggests, for instance, that companies can reduce the average number of days required to complete code development and move it into live production from 89 days to 15 days, a mere 17 percent of the original time.

Source: McKinsey & Company

When DevOps is employed alongside agile development to support it, your velocity increases even further, allowing you to make your customers happier, and to do so more quickly!

Total Velocity

While DevOps can be a challenge to get going as it requires a number of organizational changes tailored to your specific business needs, it can be well worth the effort needed to reorganize and regroup. The increased speed of delivery of your products and updates can really help you, especially if it helps you to be first to market on a new idea. 

Not only will you get improved speed, but it can help your staff feel more like they are getting along and are contributing to projects that make it to completion. Seeing something they contributed to in action can be helpful in motivating staff to find ways to streamline processes even further and thus enhance your velocity that much more!

 


How to Reduce Latency on Public Clouds


The cloud offers companies nearly every feature they need for managing their information resources: efficiency, scalability, capacity, affordability, reliability, security, and adaptability. What's missing from this list of cloud benefits is a deal-breaker for many organizations: performance.

A growing number of firms are removing mission-critical applications from the public cloud and returning them to in-house data centers because they need speed that the cloud infrastructure doesn't deliver. In a sponsored article on InfoWorld, HPE Cloud Group VP and Chief Engineer Gary Thome cites Dropbox as the poster child for big-name services that have turned their backs on the public cloud.

The culprit, according to Thome, is time-sharing: Public cloud services may offer unlimited capacity, but their business model relies on capping performance. That's a problem you don't have when you manage your data systems, which you can generally scale up to whatever performance level your apps require.

The public-cloud challenge: Accommodating apps with a low tolerance for latency

Financial systems are the principal category of applications that require instant response to user and system requests. To address public-cloud latency, Thome suggests combining containers with composable infrastructure, which pools compute, storage, and network resources and "self-assembles" dynamically based on the needs of the workload or app.

Four attributes of composable infrastructure are 1) the disaggregation of compute, memory, I/O, and storage; 2) re-aggregation (composition) and orchestration; 3) API-based automation and management, and; 4) matching apps to available resources to optimize performance. Source: Forbes

By controlling the software-defined resources programmatically via a unified API, infrastructure becomes "a single line of code" that is optimized for that specific workload, according to Thome. With the composable infrastructure approach, you lose the public cloud's cost, efficiency, and speed benefits over on-premises data centers.

A more forward-looking approach is to address directly the causes of latency in systems hosted on the public cloud. That's the angle taken by two relatively new technologies: software-defined WANs and availability zones.

A lingua franca for network management

SD-WANs promise simpler network monitoring by accommodating a range of connections, including MPLS, broadband, and LTE. An SD-WAN's primary advantage is connecting multiple cloud services and enterprise networks. The technology reduces latency by choosing the fastest path based on each network's policies and logic. TechTarget's Lee Doyle writes that SD-WANs will become more popular as companies increase their use of SaaS applications such as Salesforce, Google Docs, and Microsoft Office 365.

An expensive alternative to dealing with latency on the public internet is to pay for direct interconnection, which offers direct links between multiple cloud services, telecom carriers, and enterprises. eWeek's Christopher Preimesberger lists seven criteria for comparing dedicated interconnection services to the public internet.

  • Improved performance by eliminating the public internet's points of contention
  • Enhanced hybrid cloud security by maintaining control over proprietary data
  • Better network availability through elimination of WAN connections between data centers and the public cloud
  • More control over costs by avoiding contracts for a set amount of bandwidth, much of which is never used
  • The flexibility of accessing physical and virtual connections through a single portal
  • A broader choice of cloud service providers, without having to enter into individual contracts with each one
  • Easier collaboration with business partners by establishing dedicated links for transferring large files securely, quickly, and efficiently - without the volatility of the public internet

Morpheus: The low-latency alternative to expensive dedicated links

The Morpheus cloud application management system offers a unique approach to guaranteeing high availability for latency-sensitive applications. Jeff Wheeler explains how the Morpheus Appliance's high-availability mode supports deployment in multi-tier environments.

All of Morpheus's components are designed to be distributable to facilitate deployment in distributed clouds and increase uptime. A stand-alone Morpheus configuration includes several tiers: web, application, cache, message queue, search index, and database. Each of these tiers except for cache is distributable and deployable on separate servers; the cache is currently localized to each application server. A shared storage tier contains artifacts and backup objects.

Nginx is used as a reverse proxy for the application tier, as well as for access to the localized package repository required for deploying data nodes and VMs. Source: Morpheus

For optimal performance, avoid crossing WAN boundaries with high latency links. In all other situations, external services can be configured in any cloud provider, on-premises cloud/data center, or virtual environment. The external load balancer that routes requests to a pool of web/app servers can be set to connect to each server via TLS to simplify configuration, but the balancer also supports non-TLS mode to support SSL offloading.

How edge networks increase rather than reduce the strain on cloud bandwidth

Peter Levine, an analyst for Andreessen Horowitz, raised a lot of eyebrows last December with his presentation explaining why he believed cloud computing would soon be replaced by edge networks. Levine reasons that the devices we use every day will soon generate too much data to be accommodated by existing network bandwidth. The only way to make the burgeoning Internet of Things practical is by moving storage and processing to the edge of the network, where the data is initially collected.

There's one element of edge networks Levine fails to address: management. Dan Draper of the financial services firm Vertiv writes in an April 17, 2017, article on Data Center Frontier that edge networks place data and processing in locations that IT departments can't access easily. Depending on the system, there could be thousands or even millions of such remote data points in a typical enterprise network.

According to Draper, the solution to the bandwidth demands of IoT is the creation of an integrated infrastructure. Network nodes will be like those nested Russian dolls, or "matryoshkas," scaling from smart sensors monitoring sewer lines and similar inaccessible spots, all the way up to public cloud installations processing terabytes of data in the blink of an eye.

Draper points out two requirements for such an integrated infrastructure that remain works in progress: enhanced remote monitoring and power management. Services such as the Morpheus cloud application management system give companies a leg up by preparing them for "cloud" computing that extends all the way to the four corners of the earth.

The New Shadow IT: Custom Applications in the Cloud


There's a new kind of shadow IT arising in companies of all types and sizes: user-created cloud applications. This is one form of unauthorized IT that many firms are embracing rather than fighting, albeit cautiously.

The trend is more than just a version of "if you can't beat 'em, join 'em." For many IT managers, it's an acknowledgment that their customers -- line managers and employees -- know what tools they need to get their work done better than anyone else. Cynics might point out that working with rogue developers in their organizations is an admission that shadow IT is now too prevalent to stop.

Dark Reading's Kaushik Narayan writes in an April 7, 2017, article that "the consumerization of IT has spurred a free-for-all in the adoption of cloud services." Narayan claims that CIOs routinely underestimate the number of unauthorized applications being used in their organizations, at times by a factor of 10. One CIO told Narayan there were 100 such apps in place when an examination of the company's network logs put the number at about 1,000.

A 2016 survey of IT professionals by NTT Communications found that 83 percent report employees store company data on unsanctioned cloud services and 71 percent say the practice has been going on for two or more years. Source: CensorNet

The types of applications being developed and implemented outside IT's supervision include HR benefits, code-sharing platforms, and customer service. These apps put sensitive company information -- including payment details, confidential IP, and personally identifiable information -- beyond the control of data security precautions. As Narayan points out, IT departments lack the personnel and resources required to retrofit key data center apps with cloud-specific security.

Tapping 'citizen developers' to reduce application backlog

According to statistics cited by CSO's George V. Hulme in an April 17, 2017, article, 62 percent of enterprises report having "deep app development backlogs," often numbering 10 or more in the dev pipeline. In 76 percent of enterprises, it takes three months or longer to complete an app, extending to one year in 11 percent of companies.

Many organizations are cutting into their app-development workload by enlisting the services of citizen developers among the ranks of business managers and employees. The challenge for IT is to ensure the apps created and deployed by non-developers meet all security, cost, and performance requirements. You don't manage volunteer developers the same way you manage in-house IT staff.

A recent survey by FileMaker reports that improved work processes (83 percent) and greater work satisfaction (48 percent) were the two factors most likely to motivate citizen developers. Source: CIO Insight

Start by making security as easy as possible to bake into key application components likely to be used by citizen developers. An example is to incorporate security services in a simple, accessible API. Once you've identified apps that are in use but were developed outside the IT department, determine what sensitive information each app may contain. The only way to monitor which cloud apps employees are using is by paying close attention to where your data is going.
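To make that concrete, here is a minimal, hypothetical sketch of such a security service: a tiny helper that hides key handling behind two calls so a citizen developer can encrypt data before it ever reaches a cloud storage service. It assumes the third-party cryptography package, and the function names are invented for illustration.

    # Hypothetical helper that hides key handling behind two calls so
    # citizen-developer apps can encrypt data at rest without extra effort.
    # Assumes the third-party "cryptography" package: pip install cryptography
    import os
    from cryptography.fernet import Fernet

    # In practice the key would come from a managed secret store; an
    # environment variable stands in for it here.
    _key = os.environ.get("APP_DATA_KEY", "").encode() or Fernet.generate_key()
    _fernet = Fernet(_key)

    def protect(plaintext: bytes) -> bytes:
        """Encrypt data before it is written to any cloud storage service."""
        return _fernet.encrypt(plaintext)

    def reveal(ciphertext: bytes) -> bytes:
        """Decrypt data read back from storage."""
        return _fernet.decrypt(ciphertext)

    if __name__ == "__main__":
        token = protect(b"customer list - confidential")
        assert reveal(token) == b"customer list - confidential"

A wrapper this small is easy to drop into the templates and SDKs handed to citizen developers, which is the point: the secure path should also be the path of least resistance.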

VMware director of security John Britton suggests four requirements for managing citizen developers:

  • Limit the non-pro developers to Java, JavaScript, or another memory-managed language (they're probably using web-based or mobile apps).
  • Make sure the developers always use encrypted connections to protect data in transit, and that they encrypt all stored data.
  • Provide the developers with mobile SDKs so the apps can be managed remotely for updates, revoking access, and wiping data on a lost or stolen phone.
  • Mentor the developers via an advisory board to help them improve the quality of their applications.

Malware lurks in 'unsanctioned' cloud storage services

Living with the unauthorized use of cloud services by employees can quickly become an unending game of whack-a-mole. Research by cloud security firm Netskope found that the average number of cloud services used by companies increased 4 percent in the fourth quarter of 2016 from the year-earlier period to a total of 1,071. Only 7 percent of the cloud services are considered enterprise-ready, according to Netskope. eWeek's Robert Lemos reports on the study in an April 25, 2017, article.

It's no surprise that use of cloud services to spread malware is on the upswing as well. Netskope estimates that 75 percent of cloud-based malware is categorized as "high severity": 37 percent of cloud threats are a form of backdoor, 14 percent are adware, and 4.2 percent are ransomware. One reason cloud-based malware is forecast to increase is how easy it is for malware to spread via cloud services.

Turning an IT liability into a corporate asset

The mistake some IT departments make is to consider the shadow IT phenomenon as a risk that must be minimized rather than an opportunity that they can capitalize on. For example, you can overcome the resistance of some employees to new technologies by highlighting the productive use of cloud services by their coworkers. This is one of the many benefits of shadow IT described by TechTarget's Kerry Doyle.

The pace of business shows no signs of slacking, which argues in favor of increased use of cloud-based tools to reduce the requirement cycles of application development. The nature of development changes when developers work directly with business departments rather than toiling away in a separate IT division. The developers learn first-hand how the department functions and what employees need to ensure their success. At the same time, line managers and workers gain a better understanding of the compliance, security, and other requirements of internal IT policies.

Doyle writes that once IT departments abandon their traditional role as the sole source for all workplace technology, they are free to expend their scarce resources more productively. IT staff then can become leaders and directors of company-wide cloud initiatives in which users play a more active role. By serving as an "innovation broker," IT enters into partnerships with business departments that lead to streamlined planning, acquisition, implementation, and maintenance of increasingly vital information services.

2 Great Reasons for Making Your Cloud Data Location Aware


Two of the greatest challenges IT departments face when managing cloud data are latency and compliance. By ensuring that the data and apps you place on cloud platforms are designed to know and report their physical location, you make your data caches more efficient, and you meet regulatory requirements for keeping sensitive data resources safe.

Virtualization has transformed nearly every aspect of information management. No longer are the bulk of an organization's data and applications associated with a specific physical location. The foundation of public cloud services is the virtual machine -- a nomad data form that appears whenever it's needed and runs wherever it can do so most efficiently and reliably. The same holds true for VMs' more-compact counterpart, containers.

You may be inclined to infer from the rootlessness of VMs that it no longer matters where a particular data resource resides at any given time. Such an inference could not be further from the truth. It's because of the lack of a permanent home that IT managers need to be tuned into the physical context in which they operate. In particular, two important management concerns cry out for location awareness: latency and compliance.

Location awareness boosts cloud response times

If intelligent caching is important for optimizing the performance of in-house machines, it is downright crucial for ensuring peak response times when caching cloud systems. TechTarget's Marc Staimer explains that cloud storage needs to be aware of its location relative to that of the app presently reading or writing the data. Location awareness is applied via policies designed to ensure frequently accessed data is kept as close as possible to the app or user requesting it.

Keeping data close to the point of consumption can't be done efficiently simply by copying and moving it. For one thing, there's the time and effort involved. You could be talking about terabytes of data for a large operation, and even multiple smaller migrations will take their toll on performance and accessibility. More importantly, the data is likely being accessed by multiple sources, so location awareness needs to support distributed read access from geographically dispersed locations.
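To make the policy idea concrete, here is a minimal, hypothetical sketch of one location-awareness rule: measure round-trip time to a set of candidate regions and read from (or replicate to) whichever region currently answers fastest. The region endpoints are placeholders, not real services.

    # Minimal sketch of a latency-based placement policy: pick the candidate
    # region with the lowest measured TCP connect time. Endpoints are placeholders.
    import socket
    import time

    CANDIDATE_REGIONS = {
        "us-east": ("cache.us-east.example.com", 443),
        "eu-west": ("cache.eu-west.example.com", 443),
        "ap-south": ("cache.ap-south.example.com", 443),
    }

    def measure_rtt(host, port, timeout=1.0):
        """Return the time taken to open a TCP connection, or infinity on failure."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")

    def closest_region():
        """Choose the region this client should read from or replicate toward."""
        return min(CANDIDATE_REGIONS, key=lambda r: measure_rtt(*CANDIDATE_REGIONS[r]))

A real implementation would also weigh access frequency and replication cost, but the principle is the same: the policy, not an administrator, decides where each copy of the data lives.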

An example of maximizing location awareness to enhance cloud elasticity and reduce latency is the Beacon framework that is part of the European Union's Horizon 2020 program. After initial deployment on a private cloud, an application's load may be dispersed to specific geographic locations based on user and resource demand. The new component is placed in the cloud region that's closest to the demand source.

Location aware elasticity is built into the EU's Beacon framework, which is designed to reduce cloud latency by placing data resources in cloud regions nearest to the users and resources accessing the data most often. Source: Beacon Project

Addressing cloud latency via proximity is the most straightforward approach: choose a cloud service that's located in the same region as your data center. As CIO's Anant Jhingran writes in an April 24, 2017, article, the choices are rarely this simple in the real world.

A typical scenario is an enterprise that keeps backends and APIs in on-premise data centers while shifting management and analytics operations to cloud services. Now you're round-tripping APIs from the data center to the cloud. To reduce the resulting latency, some companies use lightweight, federated cloud gateways that transmit analytics and management services asynchronously to the cloud while keeping APIs in the data center.
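The sketch below illustrates that gateway pattern in the simplest possible terms: API calls are answered locally, while analytics events are queued and shipped to the cloud by a background worker so the round trip never sits on the request path. The cloud endpoint and event format are assumptions made for illustration.

    # Toy illustration of a federated gateway: serve API requests locally,
    # ship analytics to the cloud asynchronously on a background thread.
    # The endpoint URL is a placeholder. Requires the "requests" package.
    import queue
    import threading
    import requests

    ANALYTICS_ENDPOINT = "https://analytics.example-cloud.com/ingest"  # placeholder
    _events = queue.Queue()

    def _shipper():
        """Drain queued analytics events and POST them to the cloud."""
        while True:
            event = _events.get()
            try:
                requests.post(ANALYTICS_ENDPOINT, json=event, timeout=5)
            except requests.RequestException:
                pass  # a real gateway would retry, buffer to disk, or alert
            _events.task_done()

    threading.Thread(target=_shipper, daemon=True).start()

    def handle_api_call(payload):
        """Answer the API request locally and record telemetry without blocking."""
        result = {"status": "ok", "echo": payload}  # local backend work goes here
        _events.put({"api": "handle_api_call", "size": len(str(payload))})
        return result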

Confirming residency of cloud data regardless of location changes

The efficiencies that cloud services are famous for are possible only because the data they host can be relocated as required to meet the demands of the moment. This creates a problem for companies that need to keep tabs on just where sensitive data is stored and accessed, primarily to ensure compliance with regulations mandating preset security levels. These regulations include HIPAA in the U.S. and the forthcoming European General Data Protection Regulation (GDPR), which takes effect in May 2018.

Researchers at the National University of Singapore School of Computing have developed a method of verifying the residency of data hosted on any cloud server (pdf). Rather than depend on audits of the servers used by cloud providers to confirm the presence of specific data, the new technique verifies residency of outsourced data by demanding proof that each file is maintained in its entirety on the local drives of a specific cloud server.

Government data residency requirements differ greatly from data transfer requirements and data retention rules; some residency requirements may compromise data protection rather than enhance it. Source: Baker McKenzie Inform

The researchers contend that data residency offers greater assurances than approaches that focus on the retrievability of the data. The technique enhances auditing for compliance with service level agreements because it lets you identify the geographic location of the cloud server a specific file is stored on. That level of identification is not possible with after-the-fact auditing of a cloud service's logs.

Another advantage of the data residency approach is the ability to simultaneously verify replications of data on geographically dispersed cloud servers. The Proof of Data Residency (PoDR) protocol the researchers propose is shown to be accurate (low false acceptance and false rejection rates) and applicable without increasing storage or audit overhead.

From Machine Learning to Superclouds: Competing Visions of Cloud 2.0


Point releases are a grand tradition of the software industry. The practice of designating an update "2.0" is now frequently applied to entire technologies. Perhaps the most famous example of the phenomenon is the christening of "Web 2.0" by Dale Dougherty and Tim O'Reilly in 2003 to mark the arrival of "the web as platform": Users are creators rather than simply consumers of data and services.

It didn't take long for the "2.0" moniker to be applied to cloud computing. References to "Cloud 2.0" appeared as early as 2010 in an InformationWeek article by John Soat, who used the term to describe the shift by organizations to hybrid clouds. As important as the rise of hybrid clouds has been, combining public and private clouds doesn't represent a giant leap in technology.

So where is that fundamental shift in cloud technology justifying the "2.0" designation most likely to reveal itself? Here's an overview of the cloud innovations vying to become the next transformative technology.

Google's take on Cloud 2.0: It's all about machine-learning analytics

The company with the lowest expectation for what it labels Cloud 2.0 is Google, which believes the next great leap in cloud technology will be the transition from straightforward data storage to the provision of cloud-based analytics tools. Computerworld's Sharon Gaudin cites Google cloud business Senior VP Diane Greene as saying CIOs are ready to move beyond simply storing data and running apps in the cloud.

Greene says data analytics tools based on machine learning will generate "incredible value" for companies by providing insights that weren't available to them previously. The tremendous amount of data being generated by businesses is "too expensive, time consuming and unwieldy to analyze" using on-premise systems, according to Gaudin. The bigger the data store, the more effective machine learning can be in answering thorny business problems.

Gaudin quotes an executive for a cloud consultancy who claims Google's analytics tools are perceived to have "much more muscle" than those offered by Amazon Web Services. The challenge for Google, according to the analyst, is to show potential customers the use cases for its analytics and deep learning expertise. Even though companies have more data than they know what to do with, they will hesitate before investing in cloud-based analytics until they are confident of a solid return on that investment in the form of fast and accurate business intelligence.

Companies are expected to balk at adopting Google's machine learning analytics until they are convinced that there are legitimate use cases for the technology. Source: Kunal Dea from Google Cloud Team, via Slideshare

The race to be the top AI platform is wide open

After decades of hype, artificial intelligence is a technology whose time has finally come. Machine learning is a key component of commercial AI implementations. A May 21, 2017, article on Seeking Alpha cites a study by Accenture that found AI could increase productivity by up to 40 percent in 2035. AI wouldn't be possible without the cloud's massively scalable and flexible storage and compute resources. Even the largest enterprises would be challenged to meet AI's appetite for data, power, bandwidth, and other resources by relying solely on in-house hardware.

The Seeking Alpha article points out that Amazon's lead in cloud computing is substantial, but far from insurmountable. The market for cloud services is "still in its infancy," and the three leading AI platforms - Amazon's open machine learning platform, Google's TensorFlow, and Microsoft's Open Cognitive Toolkit and Machine Learning - are all focusing on attracting third-party developers.

AWS's strong lead in cloud services over Microsoft, Google, and IBM is not as commanding in the burgeoning area of AI, where competition for the attention of third-party developers is fierce. Source: Seeking Alpha

Google's TensorFlow appears to have a head start on the AI competition right out of the gate. According to The Verge, TensorFlow is the most popular software of its type on the Github code repository; it was developed by Google for its in-house AI work and was released to developers for free in 2015. A key advantage of TensorFlow is the ability of the two billion Android devices in use each month to be optimized for AI. This puts machine learning and other AI features in the hands of billions of people.

Plenty of 'Superclouds' to choose from

Competition is also heating up for ownership of the term "supercloud." There is already a streaming music service using the moniker, as well as an orchestration platform from cloud service firm Luxoft. That's not to mention the original astronomical use of the term that Wiktionary defines as "a very large cloud of stellar material."

Two very different "supercloud" research projects are pertinent to next-generation cloud technology: one is part of the European Union's Horizon 2020 program, and the other is underway at Cornell University and funded by the National Science Foundation. Both endeavors are aimed at creating a secure multicloud environment.

The EU's Supercloud project is designed to allow users to craft their own security requirements and "instantiate the policies accordingly." Compliance with security policies is enforced automatically by the framework across compute, storage, and network layers. Policy enforcement extends to "provider domains" by leveraging trust models and security mechanisms. The overarching goal of the project is the creation of secure environments that apply to all cloud platforms, and that are user-centric and self-managed.

The EU's Supercloud program defines a "distributed architectural plane" that links user-centric and provider-centric approaches in expanded multicloud environments. Source: Supercloud Project

A different approach to security is at the heart of the Supercloud research being conducted at Cornell and led by Hakim Weatherspoon and Robbert van Renesse. By using "nested virtualization," the project packages virtual machines with the network and compute resources they rely on to facilitate migration of VMs between "multiple underlying and heterogeneous cloud infrastructures." The researchers intend to create a Supercloud manager as well as network and compute abstractions that will allow apps to run across multiple clouds.

The precise form of the next iteration of cloud computing is anybody's guess. However, it's clear that AI and multicloud are technologies poised to take cloud services by storm in the not-too-distant future.

How Machine Learning is Helping Drive Cloud Adoption


An interesting trend that is helping drive cloud adoption is the rise of machine learning. As organizations seek ways to automate more processes, the use of machine learning is increasing to meet those needs. As machine learning has grown, it has driven greater use of the cloud to store and process the massive amounts of data machine learning can require.

What is machine learning?

Since the invention of computers people have been trying to answer the question of whether a computer can ‘learn’...

Source: Andres Munoz, Courant Institute of Mathematical Sciences

Machine learning is a way for computers to "learn" things without needing as much specific programming as a typical application would. Machine learning can help create algorithms that use data either to learn from it or to make various types of predictions based on patterns identified in it. Such learning can be useful for a number of tasks that would otherwise be much more difficult, such as OCR (optical character recognition), email filtering, medical monitoring, text analysis, photo analysis, video analysis, translation, speech recognition, and many others.

For example, email filtering can be done much more simply, since patterns used by spam or scam emails can be identified and allow for a better chance that those types of messages will be moved to a “spam” folder or blocked entirely. This is certainly useful in helping to keep inboxes free of messages most users do not want to sift through on their own every day! 
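As a toy illustration of that spam-filtering example, the sketch below trains a tiny text classifier on a handful of labeled messages and then scores a new one. It assumes the scikit-learn library, and the training data is made up for the example.

    # Toy spam filter: learn word patterns from labeled examples, then
    # predict whether a new message looks like spam. Requires scikit-learn.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = [
        "Win a free prize now, click here",        # spam
        "Limited offer, claim your reward today",  # spam
        "Meeting moved to 3pm, see agenda",        # not spam
        "Here are the notes from yesterday",       # not spam
    ]
    labels = ["spam", "spam", "ham", "ham"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    print(model.predict(["Claim your free reward now"]))  # likely ['spam']

Real spam filters learn from millions of messages rather than four, which is exactly why the storage and compute elasticity of the cloud matters so much for machine learning.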

Machine Learning and Cloud Innovation

Image depicting machine learning as the focal point of the cloud. Source: Forbes

As noted in the image from Forbes, there are a number of things driven by machine learning that also help to drive cloud innovation. Business intelligence, personal assistants, IoT (Internet of Things), bots, and cognitive computing are all brought about by machine learning, which in turn makes the cloud a desirable place to collect, store, analyze, and retrieve the data needed for these various applications.

For instance, IoT is big on connecting machines so that they can communicate with one another and exchange data. Machine learning helps to drive these types of interactions, and using the cloud makes it even easier for machines to exchange data with one another, as there is an easy way to make those connections.

 As a result, the cloud has seen much innovation in its ability to handle this type of data exchange as cloud systems have become more flexible and offer the ability to scale much more easily than with a traditional data center.

Cloud infrastructure and scalability

Communications with remote systems across factories and transportation, real-time data gathering and analytics, and the ability to integrate with enterprise software that drives your business are often requirements of any IoT system that promise value in additional automation, value discovery, cost savings, and new ways to improve customer service.

Source: Tracy Siclair, HP Enterprise

Machine learning and cloud services make an excellent combination, as many cloud services, such as Morpheus, make it easy to provision the resources needed for the collection, storage, and retrieval of large amounts of data. The biggest reason this works is that such cloud services offer both flexibility and scalability.

 A cloud service is typically flexible enough to allow you to provision servers with different specifications, depending on the needs of the various pieces required for the machine learning that needs to occur. For example, if you need different operating systems for different servers, it is as simple as selecting them and spinning up the server. 

 The same applies to provisioning various different databases such as MySQL, MongoDB, and so on - you can simply spin up what you need easily and move on to any setup and programming that needs to be done without worrying about acquiring hardware.

 Scalability is another big reason that machine learning and the cloud work well together. Since machine learning often needs a progressively larger amount of storage space, using a cloud service makes a lot of sense. 

With a standard data center, you can end up spending a great deal up front on what you estimate to be the maximum amount of storage space you will need, with the potential to still need more later. This can be quite costly both in the beginning and as time goes on.

On the other hand, a typical cloud service allows you to purchase only the storage space you need up front, then scale up to more space as you need it. This definitely saves money in the beginning and allows you to add more space as needed rather than simply jumping to an incredibly large amount of space before you need it. As you can see, this would definitely help to drive cloud adoption!

 

The Good, the Bad, and the Ugly Among Redis Pagination Strategies


If you need to use pagination in your Redis app, there are a couple of strategies you can use to achieve the necessary functionality. While pagination can be challenging, a quick overview of each of these techniques should be helpful in making your job of choosing a method and implementing it a little easier. Read on to find out what the options are and the pros and cons of each!

 

In Redis, you have a couple of options from which to choose. You can use the SSCAN command or you can use sorted sets. Each of these has its own advantages, so choose the one that works best for your application and its infrastructure.

Using the SSCAN Command

The SSCAN command is part of a group of commands similar to the regular SCAN command. These include:

  • SCAN - Used to iterate over the set of keys in the current database.
  • SSCAN - Used to iterate over elements of sets.
  • HSCAN - Used to iterate over the fields of hashes and their associated values.
  • ZSCAN - Used to iterate elements of sorted sets and their scores.

Example of scan iteration. Source: Redis.

So, while the regular SCAN command iterates over the database keys, the SSCAN command can iterate over elements of sets. By using the returned SSCAN cursor, you could paginate over a Redis set.

The downside is that you need some way to persist the value of the cursor, and if there are concurrent users this could lead to some odd behavior, since the cursor may not be where it is expected. However, this can be useful for applications where traffic to these paginated areas may be lighter.
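Here is a minimal sketch of cursor-based paging with SSCAN, assuming the redis-py client. Note that COUNT is only a hint, so page sizes can vary slightly, and the returned cursor has to be stored somewhere (in the user's session, for example) between requests.

    # Cursor-based paging over a Redis set with SSCAN (redis-py client).
    import redis

    r = redis.Redis()

    def next_page(key, cursor=0, page_size=20):
        """Return (next_cursor, members); a next_cursor of 0 means iteration is done."""
        # COUNT is a hint, so Redis may return slightly more or fewer members.
        return r.sscan(key, cursor=cursor, count=page_size)

    # Walk through the whole set one "page" at a time.
    cursor = 0
    while True:
        cursor, members = next_page("myset", cursor)
        print(members)
        if cursor == 0:
            break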

Using Sorted Sets

In Redis, sorted sets are a non-repeating collection of strings associated with a score. This score is used to order the set from the smallest to the largest score. This data type allows for fast updating by giving you easy access to elements, even if the elements are in the middle of the set.

An example of sorted set elements. Source: Redis.

To paginate, you can use the ZRANGE command to select a range of elements from a sorted set by rank (the set is kept ordered by score), or ZRANGEBYSCORE to select elements by score. So you could, for example, select elements 0-19, then 20-39, and so on. By programmatically adjusting the range as the user moves through the data, you can achieve the pagination you need for your application.

Since sorted sets and ZRANGE do this task more intuitively than using a scan, it is often the preferred method of pagination, and is easier to implement with multiple users, since you can programmatically keep track of which ZRANGE each user is selecting at any given time.
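A minimal sketch of rank-based paging with ZRANGE, again assuming the redis-py client, looks like this; because page numbers map directly to index ranges, any user can request any page without a stored cursor.

    # Rank-based paging over a Redis sorted set with ZRANGE (redis-py client).
    import redis

    r = redis.Redis()
    PAGE_SIZE = 20

    def get_page(key, page):
        """Return one page of (member, score) pairs, ordered by ascending score."""
        start = page * PAGE_SIZE
        stop = start + PAGE_SIZE - 1  # the ZRANGE stop index is inclusive
        return r.zrange(key, start, stop, withscores=True)

    # Any page can be fetched directly, which makes concurrent users easy to handle.
    print(get_page("leaderboard", 0))  # first page
    print(get_page("leaderboard", 2))  # third page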

In the end, you can choose which method works for your particular situation. If you have a smaller application with less traffic, a scan may work for you. If, however, you need a more robust solution for larger data sets or more highly utilized applications, it may be best to go ahead and use ZRANGE with sorted sets to achieve pagination in your application.

Morpheus helps you get more out of Redis. To find out how Morpheus can save you time, money, and sanity, sign up for a demo now!

Is the Cloud Ready for Speech APIs?


Everybody's talking -- to cloud-connected devices. The question is whether your company has the speech APIs it will need to collect, analyze, and act on valuable speech data.

Many people see natural language processing as the leading edge of artificial intelligence inroads into everyday business activities. The AI-powered voice technologies at the heart of popular virtual assistants such as Apple's Siri, Microsoft's Cortana, Amazon's Alexa, and Samsung's Bixby are considered "less than superhuman," as IT World's Peter Sayer writes in a May 24, 2017, article.

Any weaknesses in the speech AI present in those consumer products are easier for many companies to abide because the systems are also noted for their ability to run on low-power, plain vanilla hardware -- no supercomputers required.

Coming soon to a cloud near you: Automatic speech analysis

Machine voice analysis took a big step forward with the recent general availability of Google's Cloud Speech API, which was first released as an open beta in the summer of 2016. The Cloud Speech API uses the same neural-network technology found in the Google Home and Google Assistant voice products. As InfoQ's Kent Weare writes in a May 6, 2017, article, the API supports services in 80 different languages.

Google product manager Dan Aharon cites three typical human-computer use cases for the Cloud Speech API: mobile, web, and IoT.

Human-computer interactions that speech APIs facilitate include search, commands, messaging, and dictation. Source: Google, via InfoQ

Among the advantages of cloud speech interfaces, according to Aharon, are speed (150 words per minute, compared to 20-40 wpm for typing); interface simplicity; hands-free input; and the increasing popularity of always-listening devices such as Amazon Echo, Google Home, and Google Pixel.
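For a sense of what calling the Cloud Speech API looks like in practice, here is a minimal sketch that assumes a recent version of the google-cloud-speech Python client and configured Google Cloud credentials; the audio file, sample rate, and language settings are placeholders.

    # Minimal transcription request against Google's Cloud Speech API.
    # Assumes: pip install google-cloud-speech, plus configured GCP credentials.
    from google.cloud import speech

    client = speech.SpeechClient()

    with open("customer_call.wav", "rb") as f:  # placeholder audio file
        audio = speech.RecognitionAudio(content=f.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)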

A prime example of a lost voice-based resource for businesses is customer-service phone calls. Interactive Tel CTO Gary Graves says the Cloud Speech API lets the company gather "actionable intelligence" from recordings of customer calls to the service department. In addition to providing managers with tools for improving customer service, the intelligence collected holds employees accountable. Car dealerships have shown the heightened accountability translates directly into increased sales.

Humanizing and streamlining automated voice interactions

Having to navigate through multi-level trees of voice-response systems can make customers pine for the days of long hold times. Twilio recently announced the beta version of its Automated Speech Recognition API that converts speech to text, which lets developers craft applications that respond to callers' natural statements. Compare: "Tell us why you're calling" to "If you want to speak to a support person, say 'support'." Venture Beat's Blair Hanley Frank reports on the beta release in a May 24, 2017, article.

Twilio's ASR is based on the Google Cloud Speech API; it processes 89 different languages and dialects, and it costs from two cents for each 15 seconds of recognition. Also announced by Twilio is the Understand API, which delivers to applications information about the intent of the natural language it processes; it works with Amazon Alexa in addition to Twilio's own Voice and SMS tools.
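To show what the "tell us why you're calling" style looks like in code, here is a hedged sketch of the TwiML a developer might return, using the twilio Python helper library; the webhook path is a placeholder.

    # TwiML that asks the caller an open question and posts the recognized
    # speech to a webhook. Assumes: pip install twilio. The action URL is a placeholder.
    from twilio.twiml.voice_response import Gather, VoiceResponse

    response = VoiceResponse()
    gather = Gather(input="speech", action="/handle-intent", method="POST")
    gather.say("Tell us why you're calling today.")
    response.append(gather)
    # If no speech was captured, fall back to a human agent.
    response.say("Sorry, we didn't catch that. Connecting you to support.")

    print(str(response))  # the XML document returned to Twilio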

Could Amazon's cloud strength be a weakness in voice services?

By comparison, Amazon's March 2017 announcement of the Amazon Connect cloud-based contact center attempts to leverage its built-in integration with Amazon DynamoDB, Amazon Redshift, Amazon Aurora, and other existing AWS services. TechCrunch's Ingrid Lunden writes in a March 28, 2017, article that Amazon hopes to leverage its low costs compared to call centers that are not cloud-based, as well as requiring no up-front costs or long-term contracts.

A key feature of Amazon Connect is the ability to create automatic responses that integrate with Amazon Alexa and other systems based on the Amazon Lex AI service. Rather than being an advantage, the tight integration of Amazon Connect with Lex and other AWS offerings could prove to be problematic for potential customers, according to Zeus Kerravala in an April 25, 2017, post on No Jitter.

Kerravala points out that the "self-service" approach Amazon takes to the deployment of a company's custom voice-response system presents a challenge for developers. Businesses wishing to implement Amazon Connect must first complete a seven-step AWS sign-up process that includes specifying the S3 bucket to be used for storage.

The Amazon Connect contact center requires multiple steps to configure for a specific business once you've completed the initial AWS setup process. Source: Amazon Web Services

Developers aren't the traditional customers for call-center systems, according to Kerravala, and many potential Amazon Connect customers will struggle to find the in-house talent required to put the many AWS pieces together to create an AI-based voice-response system.

The democratization of AI, Microsoft style

In just two months, Microsoft's Cognitive Toolkit open-source deep learning system went from release candidate to version 2.0, as eWeek's Pedro Hernandez reports in a June 2, 2017, article. Formerly called the Computational Network Toolkit (CNTK), the general-availability release features support for the Keras neural network library, whose API is intended to support rapid prototyping by taking a "user-centric" approach. The goal is to allow people with little or no AI experience to include machine learning in their apps.

Microsoft claims its Cognitive Toolkit delivers clear speed advantages over Caffe, Torch, and Google's TensorFlow via efficient scaling in multi-GPU/multi-server settings. Source: Microsoft Developer Network

According to Microsoft, three trends are converging to deliver AI capabilities to everyday business apps:

  • The computational power of cloud computing
  • Enhanced algorithms and machine learning capabilities
  • Access to huge stores of cloud-based data

TechRepublic's Mark Kaelin writes in a May 22, 2017, article that Microsoft's Cognitive Services combine with the company's Azure cloud system to allow implementation of a facial recognition function simply by adding a few lines of code to an existing access-control app. Other "ready made" AI services available via Azure/Cognitive Services mashups are Video API, Translator Speech API, Translator Text API, Recommendations API, and Bing Image Search API; the company promises more such integrated services and APIs in the future.
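As an illustration of the "few lines of code" claim, here is a hedged sketch of a face-detection call to the Cognitive Services Face API over plain REST using the requests package; the endpoint region, subscription key, and image URL are placeholders.

    # Detect faces in an image via the Cognitive Services Face API (REST).
    # Region, key, and image URL below are placeholders. Requires "requests".
    import requests

    ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
    SUBSCRIPTION_KEY = "<your-cognitive-services-key>"

    response = requests.post(
        ENDPOINT,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        params={"returnFaceId": "true"},
        json={"url": "https://example.com/lobby-camera.jpg"},
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())  # a list of detected faces with their IDs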

While it is certain that AI-based applications will change the way people interact with businesses, it is anybody's guess whether the changes will improve those interactions, or make them even more annoying than many automated systems are today. If history is any indication, it will likely be a little of both.


Five Tips for Protecting Cloud Resources from Internal Threats


New approaches are required to secure data in an age of networks without boundaries. Although traditional approaches to internal IT security have been rendered obsolete, many tried-and-true techniques are adaptable to the cloud era. Here are five aspects to consider when crafting your company's data security plan.

If you remain unconvinced of the need to update your internal data security approach to make it cloud-ready, consider the fate of Smyth Jewelers, a retail jewelry chain headquartered in Maryland. Robert Uriarte and Christina Von der Ahe of the Orrick Trade Secrets Group write in a June 16, 2017, article on JD Supra that the company found itself locked out of its Dropbox account after the departure of the lone employee responsible for maintaining the account.

Among the proprietary company documents stored in the account were business plans, vendor details, confidential information about employees, customer lists, purchase histories, and "valuable customer account metrics," according to the authors. All it took to place these vital data resources off limits was for the employee to change the email address associated with the Dropbox account from the person's company email address to his private address.

As they say, hindsight is 20-20. The quandary Smyth Jewelers found itself in was easily preventable. To ensure your organization's cloud data is protected against attacks from the inside, follow these simple steps. As they also say, an ounce of prevention is worth a pound of cure.

Tip #1: Safeguard your cloud account details

The obvious approach to prevent a disgruntled employee from locking down the company's cloud assets is to establish administrative login credentials that IT controls. Another layer of prevention is available by enabling the cloud service's notifications whenever an important system setting changes. To be effective, alerts must reach the right parties in a timely manner.

Further, using a cloud service that provides multiple tiers of access allows you to designate critical documents and resources that receive an added layer of protection. These may be as simple as a folder-level password, or file-level restrictions on viewing, printing, downloading, or editing a document.

Tip #2: Think in terms of 'governance' rather than 'controls' and 'monitoring'

There are few rules enforced by IT departments that employees can't figure out how to break. A better approach for managing information risk in your company is governance, according to Matt Kelly of RadicalCompliance in a June 19, 2017, article on Legaltech News. By focusing on governance, you devise policies for handling data that apply to everyone in the organization. The policies serve as a framework that employees can use for making judgments about information as new, unanticipated risks arise.

Kelly explains the key difference between governance and controls: governance educates users about the reasons why they need to be mindful of the risks associated with the information they handle, while controls are perceived as a fixed set of rules that apply in specific situations.

Organizations encountered an average of 23.2 separate cloud-related threats per month in the fourth quarter of 2016, an 18.4 percent increase from the year-earlier period; the highest single category was insider threats, reported by 93.5 percent of companies. Source: Skyhigh Networks

Typical scenarios of information risk are employees who expose sensitive company data on an insecure cloud app, who collect private information from minors without acquiring their parents' consent, and who destroy data that needs to be preserved for litigation. No set of controls would prevent these occurrences, but in each case, employees mindful of the risks inherent in these situations via governance would know how to respond accordingly.

Tip #3: Implement multifactor authentication, without ticking off users

The single most effective way to prevent unauthorized access to your company's cloud assets is by using two-factor or multifactor authentication. The single most likely way to turn users against you is by implementing multifactor authentication in a way that makes it harder for employees to get their work done. You have to find the middle ground that delivers ample protection but isn't too onerous to workers.
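To show how little code the second factor itself requires, here is a minimal sketch of server-side verification of a time-based one-time password using the third-party pyotp package; how the shared secret is stored and enrolled is deliberately left out.

    # Server-side verification of a time-based one-time password (TOTP).
    # Assumes: pip install pyotp. Secret storage and enrollment are out of scope.
    import pyotp

    # Generated once per user at enrollment and shared with their authenticator app.
    user_secret = pyotp.random_base32()
    totp = pyotp.TOTP(user_secret)

    print("Provisioning URI for the authenticator app:",
          totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

    def second_factor_ok(submitted_code):
        """Return True only if the code matches the current 30-second window."""
        return totp.verify(submitted_code)

The friction users feel comes from how often they are asked for that code, not from the verification itself, which is why pairing a factor like this with SSO (described next) keeps the interruptions to a minimum.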

In a June 21, 2017, article on TechTarget, Ramin Edmond describes the single sign-on (SSO) technique as a way to strengthen authentication without overburdening employees. SSO creates two layers of authentication, but users only need to be authenticated in both layers once to gain access to a range of apps, databases, and documents. Mobile implementations of SSO allow secure access to multiple mobile apps after a single two-factor authentication.

Multifactor authentication and single-sign on will account for larger shares of the global cloud identity access management market in 2020. Source: Allied Market Research

Tip #4: Work with HR to educate and train users about data security policies

Responsibility for crafting data security policies and training employees in the application and enforcement of those policies is shared by the human resources and IT departments. In many companies, IT either takes on too much of the job of employee education to the exclusion of HR or attempts to offload the bulk of the training work to HR.

In a June 21, 2017, article on JD Supra, Jennifer Hodur identifies three areas where HR and IT need to work together:

  • Educate users about how to spot and avoid phishing attempts, ransomware, and other scams
  • Identify and react to potential and actual data breaches
  • Respond consistently to violations of data security policies by employees

Tip #5: Be stingy in granting requests for privileged accounts

Haystax Technology recently conducted a crowd-based survey of 300,000 members of the LinkedIn Information Security Community about their approach to insider threats. Security Intelligence's Rick M. Robinson reports on the survey results in a June 22, 2017, article. Sixty percent of the survey respondents identified privileged users as the source of internal data breaches.

Not all the breaches traced back to a compromised privileged account were malicious in nature. The IT pros surveyed claim negligence and old-fashioned mistakes are the sources of a great number of serious data breaches and data loss. Contractors and temporary workers were identified by 57 percent of the survey respondents as the cause of internal data breaches, whether because they are less loyal or have not been trained in the company's data security policies.

DevOps: How to Achieve Rapid Mobile App Development


When you are launching a mobile app, being able to update it quickly when things happen is imperative, as users can easily move on to another app if yours is not working at its full potential. When you need to develop and update applications rapidly, taking a DevOps approach can be extremely helpful in meeting your goals.

What is DevOps?

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.

Source: The Agile Admin

DevOps is a way of approaching the delivery of applications in such a way as to streamline the development and deployment processes so that business operations, IT management, and developers can all work together in a meaningful way.

In the past, operations and development often worked separately, and software updates might occur a few times a year at most. With the onset of agile development, this process was sped up greatly on the development side, as updates could occur in terms of hours rather than months. However, without the development team being able to work with people on the business side to ensure that updates helped to meet their needs in a timely fashion, it was still difficult to increase the speed at which new apps and updates were released.

With DevOps, selected members from both business operations and development are given the opportunity to work together daily to ensure that not only can updates happen quickly, but that those quick updates are things that are helping to ensure the needs and requirements of the business team are met. 

With the teams working together daily, there is far less chance that something is implemented in a way that makes one team happy but not the other, which often requires additional business planning and development work to complete the release or update. Instead, agreements can be made as things are being developed, and the teams will be far less likely to need a meeting to fix something that one side or the other understood was going to be implemented differently.

Mobile App Development

While mobile apps may sometimes be localized to specific devices and not need much integration with other services, many of these apps not only are needed on multiple devices and platforms but also need backend access to information from one or more of your own databases.

With this in mind, a large number of mobile apps are simply a front end that is tailored to specific platforms and then simply sends and retrieves data through a larger backend system you have set up to handle what is done with the data as it comes and goes. This type of model works well with DevOps, especially when you are able to combine DevOps with the cloud, which can speed up the delivery process even further.

Adding the Cloud

Cloud adoption has increased rapidly over the last few years, and this is due in large part to the advantages the cloud can offer in the way of speeding up and normalizing the way in which infrastructure is set up for the numerous projects that are often being worked on within an organization.

Cloud adoption in 2016. Source: RightScale

For example, obtaining a new server machine in a data center can often take days, weeks, or even months depending on the process and costs involved in setting up a new machine. If costs are high, IT and business must decide whether the new machine can be implemented or not. This means that the project may have to move toward sharing a machine with other applications/databases or being put on hold until the financial needs can be met.

On the other hand, use of the cloud allows you to scale much more easily, and often with less cost. If you use a cloud service such as Morpheus, you have the ability to instantly put additional servers online. If you have a server setup that works for most or all of the projects you will launch, you can save that setup so that you can simply click and have your server up and running in no time! From here, you can easily implement the code needed to run any new applications and make quick updates anytime you need to afterward.

DevOps and the Cloud

With both cloud adoption and DevOps, you can really help to reduce or eliminate shadow IT, which is often used as a method of avoiding the usual business and/or IT restrictions that are in place to ensure apps meet business needs as well as IT security and consistency concerns. An app that is built outside of these restrictions is often placed on borrowed hardware, such as an employee's personal computer or a server shared with another department. These types of setups can easily lead to security breaches via the network, app, or hardware!

With the cloud, you make getting the necessary servers much easier and faster for teams working on a project, so that there is less chance they will want to “work around” it to save time. Also, with business and IT working together daily with DevOps, the individual teams will be far less likely to simply develop and implement something on the side because they don’t want to wait for input from the other team.

With both DevOps and the cloud, mobile and other web applications can be developed and delivered at a much faster rate, making both you and your customers much happier!

How to Prepare for the Next Cloud Outage

A big part of an IT manager's job is asking, "What if...?" As cloud services become more popular, disaster plans are updated to include contingencies for the moment when the company's cloud-hosted data resources become unavailable.

Two recent service failures highlight the damage that can result when a cloud platform experiences an outage:

  • On February 28, 2017, an Amazon employee working on the company's Simple Storage Service (S3) executed a command that the person thought would remove a small number of servers from an S3 subsystem. As HyTrust's Ashwin Krishnan writes in a May 4, 2017, article on Light Reading, the command was entered incorrectly, which resulted in the removal of many more servers than intended.
  • On March 21, 2017, many customers of Microsoft's Azure, Outlook, Hotmail, OneDrive, Skype, and Xbox Live services were either unable to access their accounts, or they experienced severely degraded service. The company traced the outage to a "deployment task" gone awry; rolling back the task remedied the problem.

The proactive approach to mitigating the effects of a cloud failure

If you think the threat of an outage would deter companies from taking advantage of cloud benefits, you obviously haven't been working in IT very long. There's nothing new about ensuring the high availability of crucial apps and data when trouble strikes. On the contrary, keeping systems running through glitches large and small is the IT department's claim to fame.

NS1's Alex Vayl explains in an article on IT Pro Portal that the damage resulting from an outage can be greatly reduced by application developers following best practices. Vayl identifies six keys for designing and deploying high availability applications:

  1. Prioritize the components comprising the app stack based on the project's cost/complexity, the impact of a failure on users, and the likelihood of each component failing.
  2. Minimize the risk that could be introduced by third parties, primarily cloud service providers, by ensuring they follow best practices themselves.
  3. Consider the app components that need to be maintained in private or hybrid clouds due to compliance and regulatory considerations.
  4. When technically feasible, configure critical app components in hybrid clouds that are backed up via a separate cloud service; alternatively, replicate data across at least two zones and automatically reroute traffic if one zone fails (see the sketch after this list).
  5. Implement an intelligent DNS setup that shifts traffic dynamically based on real-time user, network, application, and infrastructure telemetry.
  6. Make sure your DNS isn't a single point of failure by contracting with a managed DNS provider, as well as with a secondary DNS provider.
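As a deliberately simplified illustration of points 4 and 5, the Python sketch below polls a health-check endpoint in each zone and routes traffic to the first zone that responds. The zone names and URLs are hypothetical placeholders; in practice a managed DNS or load-balancing service would make this decision using far richer telemetry.

```python
import urllib.request

# Hypothetical health-check endpoints for the same app deployed in two zones;
# substitute your own URLs.
ENDPOINTS = {
    "zone-east": "https://east.example.com/healthz",
    "zone-west": "https://west.example.com/healthz",
}

def healthy(url, timeout=2):
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_zone():
    """Route traffic to the first healthy zone; fail loudly if none respond."""
    for zone, url in ENDPOINTS.items():
        if healthy(url):
            return zone
    raise RuntimeError("all zones failed their health checks")

if __name__ == "__main__":
    print("routing traffic to:", pick_zone())
```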
The cloud lets you implement abstraction at the infrastructure, platform, or application level, but improving availability via multi-zone, multi-region, or multi-cloud increases management complexity. Source: Cloudify

Redundancy nearly always adds to the cost and complexity of cloud networks. However, the expenses incurred as a result of a prolonged network outage could dwarf your investment in redundant data systems. The best way to minimize the risk of a catastrophic system failure is by eliminating single points of failure. The best way to avoid single points of failure is by investing in redundancy.

In other words, you can pay now for redundancy, or pay later for massive data losses.
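As a rough illustration of that trade-off, the back-of-the-envelope calculation below compares a hypothetical annual redundancy budget against the expected cost of an outage. Every number in it is made up and should be replaced with your own estimates.

```python
# All figures are hypothetical -- substitute your own estimates.
redundancy_cost_per_year = 120_000        # second zone/region, extra tooling, added ops time
outage_probability_per_year = 0.25        # chance of a serious outage in a given year
revenue_lost_per_outage_hour = 100_000
expected_outage_hours = 24

expected_annual_loss = (outage_probability_per_year
                        * revenue_lost_per_outage_hour
                        * expected_outage_hours)

print(f"expected annual outage loss: ${expected_annual_loss:,.0f}")      # $600,000
print(f"annual cost of redundancy:   ${redundancy_cost_per_year:,.0f}")  # $120,000
if expected_annual_loss > redundancy_cost_per_year:
    print("at these numbers, paying now for redundancy is the cheaper bet")
else:
    print("at these numbers, target redundancy at the most critical components only")
```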

Getting a grip on the 'three vectors of control'

Preparing for a cloud service outage isn't much different than getting ready for any system failure, according to HyTrust's Krishnan. No matter the nature of the network, there will always be three pinch points, or "vectors of control," that managers need to master.

The first is scope, which is the number of objects each admin or script is authorized to act upon at a particular time. Using the Microsoft outage as an example, a deployment task's scope would limit the number of containers it could operate on at one time.

The second control vector is privilege, which controls what type of action an admin or script (task) can take on an object. An example of a privilege restriction would be a task that is allowed to launch a container but not to destroy one.

The third control point is the governance model, which implements best practices and your policy for enforcing the scope and privileges described above in a "self-driven" manner. For example, a governance policy would limit the number of containers an admin or script can act on at one time to no more than 100 (scope) while also providing a predefined approval process for exceptions to the policy.
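Here is a minimal Python sketch of how the scope and privilege vectors might be enforced in code under a simple governance policy. The policy values, action names, and the idea of routing exceptions to an approval workflow are hypothetical illustrations, not features of any particular product.

```python
# Hypothetical policy: "scope" caps how many objects one command may touch,
# and "privileges" lists the only actions a task may perform.
POLICY = {
    "scope_limit": 100,                   # max containers per command
    "privileges": {"launch", "restart"},  # note: "destroy" is deliberately absent
}

def authorize(action, object_count, policy=POLICY):
    """Return (allowed, reason); out-of-policy requests go to manual approval."""
    if action not in policy["privileges"]:
        return False, f"privilege check failed: '{action}' is not permitted"
    if object_count > policy["scope_limit"]:
        return False, (f"scope check failed: {object_count} objects exceeds the "
                       f"limit of {policy['scope_limit']}; route to approval workflow")
    return True, "within policy"

# A runaway task that tries to destroy 500 containers fails the privilege check first.
print(authorize("launch", 20))
print(authorize("destroy", 500))
```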

Despite the highly publicized outage of February 28, 2017, AWS has experienced much less downtime than cloud-platform competitors Microsoft and Google. Source: The Information

Multi-cloud's biggest selling points: Backup and redundancy

The major cloud platforms -- Amazon, Microsoft, and Google -- have different attitudes about transferring data between their respective systems. Google's new Cloud Endpoints API is designed to integrate with AWS's Lambda function-as-a-service product, as NetworkWorld's Brandon Butler writes in an April 27, 2017, article. This allows you to use Endpoints to manage API calls associated with a Lambda application.

Among AWS's cloud connectivity products are Direct Connect, Amazon Virtual Private Cloud, and AWS Storage Gateway; coming soon is an easier way to run VMware workloads in AWS clouds. Yet AWS executives continue to downplay the use of multiple clouds, even for backup and redundancy. They insist it's easier, less expensive, and just as effective to split your workloads among various regions of the AWS cloud.

Many analysts counter this argument by pointing out that the future belongs to multi-cloud, primarily because of the approach's built-in redundancy and resiliency in the event of outages -- whether caused by human error or a natural disaster. Butler suggests that one way to hedge your bets as the multi-cloud era begins is to adopt a third-party cloud integration tool rather than to rely on your cloud provider's native tools.

(For a great primer on everything multi-cloud, check out TechRepublic's "Multi-cloud: The Smart Person's Guide," published on May 4, 2017.)

Database - Beginning with Cloud Database As A Service

Note: When we recently launched, we were thrilled to have SQL Guru Pinal Dave give Morpheus a spin. It turns out that he had a great experience, and as he is indeed an SQLAuthority, we thought we'd share his post here as well. Without further delay, Pinal shares his thoughts below:

Pinal Dave

I love my weekend projects. Everybody does different activities on their weekends – like traveling, reading, or just nothing at all. Every weekend I try to do something creative and different in the database world. The goal is that I learn something new, and if I enjoy the learning experience, I share it with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud, and I have good experience in managing them.

I should highlight that today’s applications use multiple databases: SQL for transactions and analytics, NoSQL for documents, in-memory stores for caching, and indexing engines for search. Provisioning and deploying these databases often requires extensive expertise and time. Often these databases are also not deployed on the same infrastructure, which can create unnecessary latency between the application layer and the databases, not to mention the varying quality of service depending on the infrastructure and the service provider where they are deployed.

Moreover, there are additional problems that I have experienced with a traditional database setup when hosted in the cloud:

  • Database provisioning & orchestration
  • Slow speed due to hardware issues
  • Poor Monitoring Tools
  • High Network Latency

Now, if you have great software and an expert network engineer, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of having top-notch experts in the field. The issues above are related to infrastructure, but there are a few more problems related to the software/application side as well.

Here are the top three things which can be problems if you do not have an application expert: 

  • Replication and Clustering
  • Simple provisioning of the hard drive space
  • Automatic Sharding

Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators.

What is different about Morpheus is that it offers a variety of databases, from MySQL, MongoDB, and ElasticSearch to Redis, as a service. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on solid-state drives (SSD) and is designed for high-speed database transactions. In addition, it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases.

Here are a few steps showing how one can get started with Morpheus. Follow along with me. First, go to http://www.gomorpheus.com and register for a new, free account.

Step 1: Signup

It is very simple to sign up for Morpheus.

Step 2: Select your database

I use MySQL for my daily routine, so I selected MySQL. Upon clicking the big red button to add an instance, I was prompted with a dialog for creating a new instance.

Step 3: Create User

Now we just have to create a user in our portal, which we will use to connect to the database hosted at Morpheus. Click on your database instance and it will bring you to the User screen. Here you will once again notice a big red button to create a new user. I created a user with my first name.

Step 4: Configure Your MySQL Client

I used MySQL Workbench and connected to the MySQL instance I had created, using its IP address and the new user.

That’s it! You are now connected to your MySQL instance. You can create your objects just as you would on your local box, and you will have all the features of Morpheus when you are working with your database.

Dashboard

While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature. With Morpheus you also use the same process for provisioning and connecting to the other databases: MongoDB, ElasticSearch, and Redis.

The SQL Vulnerability Hackers Leverage to Steal Your IDs, Passwords, and More

TL;DR: The theft of hundreds of millions of user IDs, passwords, and email addresses was made possible by a database programming technique called dynamic SQL, which makes it easy for hackers to use SQL injection to gain unfettered access to database records. What makes matters worse is that the dynamic SQL vulnerability can be avoided by using one of several simple programming alternatives.

How is it possible for a simple hacking method which has been publicized for as many as 10 years to be used by Russian cybercriminals to amass a database of more than a billion stolen user IDs and passwords? Actually, the total take by the hackers in the SQL injection attacks revealed earlier this month by Hold Security was 1.2 billion IDs and passwords, along with 500 million email addresses, according to an article written by Nicole Perlroth and David Gelles in the August 5, 2014, New York Times.

Massive data breaches suffered by organizations of all sizes in recent years can be traced to a single easily preventable source, according to security experts. In an interview with IT World Canada's Howard Solomon, security researcher Johannes Ullrich of the SANS Institute blames an outdated SQL programming technique that continues to be used by some database developers. The shocker is that blocking such malware attacks is as easy as using two or three lines of code in place of one. Yes, according to Ullrich, it's that simple.

The source of the vulnerability is dynamic SQL, which allows developers to create dynamic database queries that include user-supplied data. The Open Web Application Security Project (OWASP) identifies SQL, OS, LDAP, and other injection flaws as the number one application security risk facing developers. An injection involves untrusted data being sent to an interpreter as part of a command or query. The attacker's data fools the interpreter into executing commands or accessing data without authentication.

According to OWASP, injections are easy for hackers to implement, difficult to discover via testing (but not by examining code), and potentially severely damaging to businesses.

The OWASP SQL Injection Prevention Cheat Sheet provides a primer on SQL injection and includes examples of unsafe and safe string queries in Java, C# .NET, and other languages.

The OWASP cheat sheet contrasts an unsafe Java string query with a safe Java PreparedStatement.

Dynamic SQL lets comments be embedded in a SQL statement by setting them off with hyphens. It also lets multiple SQL statements be strung together, executed in a batch, and used to query metadata from a standard set of system tables, according to Solomon.

Three simple programming approaches to SQL-injection prevention

OWASP describes three techniques that prevent SQL injection attacks. The first is use of prepared statements, which are also referred to as parameterized queries. Developers must first define all the SQL code, and then pass each parameter to the query separately, according to the OWASP's prevention cheat sheet. The database is thus able to distinguish code from data regardless of the user input supplied. A would-be attacker is blocked from changing the original intent of the query by inserting their own SQL commands.
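OWASP's cheat sheet shows the pattern in Java and C#; the sketch below illustrates the same idea with Python's built-in sqlite3 driver, with the table, column, and sample data invented for the demonstration. The parameterized version treats the attacker's payload as plain data rather than as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"   # a classic injection payload

# UNSAFE: the user input is concatenated straight into the SQL text, so the
# payload rewrites the query and every row comes back.
unsafe_rows = conn.execute(
    "SELECT email FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a prepared/parameterized query keeps code and data separate, so the
# payload is treated as a literal name and matches nothing.
safe_rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()

print("unsafe:", unsafe_rows)   # [('alice@example.com',)] -- injection succeeded
print("safe:  ", safe_rows)     # [] -- injection blocked
```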

The second prevention method is to use stored procedures. As with prepared statements, developers first define the SQL code and then pass in the parameters separately. Unlike prepared statements, stored procedures are defined and stored in the database itself, and subsequently called from the application. The only caveat to this prevention approach is that the procedures must not contain dynamic SQL, or if it can't be avoided, then input validation or another technique must be employed to ensure no SQL code can be injected into a dynamically created query.

The last of the three SQL-injection defenses described by OWASP is to escape all user-supplied input. This method is appropriate only when neither prepared statements nor stored procedures can be used, whether because doing so would break the application or render its performance unacceptable. Also, escaping all user-supplied input doesn't guarantee your application won't be vulnerable to a SQL injection attack. That's why OWASP recommends it only as a cost-effective way to retrofit legacy code. 

All databases support one or more character escaping schemes for various types of queries. You could use an appropriate escaping scheme to escape all user-supplied input. This prevents the database from mistaking the user-supplied input for the developer's SQL code, which in turn blocks any SQL injection attempt.

The belt-and-suspenders approach to SQL-injection prevention

Rather than relying on only one layer of defense against a SQL injection attack, OWASP recommends a layered approach via reduced privileges and white list input validation. By minimizing the privileges assigned to each database account in the environment, DBAs can reduce the potential damage incurred by a successful SQL injection breach. Read-only accounts should be granted access only to those portions of database tables they require by creating a specific view for that specific level of access. Database accounts rarely need create or delete access, for example. Likewise, you can restrict the stored procedures certain accounts can execute. Most importantly, according to OWASP, minimize the privileges of the operating system account the database runs under. MySQL and other popular database systems are set with system or root privileges by default, which likely grants more privileges than the account requires.

 

Adopting the database-as-a-service model limits vulnerability

Organizations of all sizes are moving their databases to the cloud and relying on services such as Morpheus to ensure safe, efficient, scalable, and affordable management of their data assets. Morpheus supports MongoDB, MySQL, Redis, ElasticSearch, and other DB engines. The service's real-time monitoring lets you analyze and optimize the performance of database applications.

In addition to 24/7 monitoring of your databases, Morpheus provides automatic backup, restoration, and archiving of your data, which you can access securely via a VPN connection. The databases are stored on Morpheus's solid-state drives for peak performance and reliability.  

Don't Drown Yourself With Big Data: Hadoop May Be Your Lifeline

TL;DR: The tremendous growth predicted for the open-source Hadoop architecture for data analysis is driven by the mind-boggling increase in the amount of structured and unstructured data in organizations, and the need for sophisticated, accessible tools to extract business and market intelligence from the data. New cloud services such as Morpheus let organizations of all sizes realize the potential of Big Data analysis.

The outlook is rosy for Hadoop -- the open-source framework designed to facilitate distributed processing of huge data sets. Hadoop is increasingly attractive to organizations because it delivers the benefits of Big Data while avoiding infrastructure expenses.

A recent report from Allied Market Research concludes that the Hadoop market will realize a compound annual growth rate of 58.2 percent from 2013 to 2020, to a total value of $50.2 billion in 2020, compared to $1.5 billion in 2012.

 

Allied Market Research forecasts a $50.2 billion global market for Hadoop services by the year 2020.

Just how "big" is Big Data? According to IBM, 2.5 quintillion bytes of data are created every day, and 90 percent of all the data in the world was created in the last two years. Realizing the value of this huge information store requires data-analysis tools that are sophisticated enough, cheap enough, and easy enough for companies of all sizes to use.

Many organizations continue to consider their proprietary data too important a resource to store and process off premises. However, cloud services now offer security and availability equivalent to that available for in-house systems. By accessing their databases in the cloud, companies also realize the benefits of affordable and scalable cloud architectures.

The Morpheus database-as-a-service offers the security, high availability, and scalability organizations require for their data-intelligence operations. Performance is maximized through Morpheus's use of 100-percent bare-metal SSD hosting. The service offers ultra-low latency to Amazon Web Services and other peering points and cloud hosting platforms.

 

The Nuts and Bolts of Hadoop for Big Data Analysis

The Hadoop architecture distributes both data storage and processing to all nodes on the network. By placing the small program that processes the data in the node with the much larger data sets, there's no need to stream the data to the processing module. The processor splits its logic between a map and a reduce phase. The Hadoop scheduling and resource management framework executes the map and reduce phases in a cluster environment.
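Hadoop's native processing API is Java, but the split between the map and reduce phases is easy to see in a Hadoop Streaming-style word count sketched in Python below. The file names are illustrative; each script reads from standard input and writes tab-separated key/value pairs to standard output, which is the contract the Streaming harness expects.

```python
# mapper.py -- the map phase: emit (word, 1) for every word it sees
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py -- the reduce phase: sum the counts for each word; the framework
# sorts mapper output by key, so all lines for one word arrive together
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

Run on a cluster, the mapper executes on the nodes that already hold the data blocks, which is exactly the "move the program to the data" idea described above.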

The Hadoop Distributed File System (HDFS) data storage layer uses replicas to overcome node failures and is optimized for sequential reads to support large-scale parallel processing. The market for Hadoop really took off when the framework was extended to support the Amazon Web Services S3 and other cloud-storage file systems.

Adoption of Hadoop in small and midsize organizations has been slow despite the framework's cost and scalability advantages because of the complexity of setting up and running Hadoop clusters. New services do away with much of the complexity by offering Hadoop clusters that are managed and ready to use: there's no need to configure or install any services on the cluster nodes.

 

Netflix data warehouse combines Hadoop and Amazon S3 for infinite scalability

For its petabyte-scale data warehouse, Netflix chose Amazon's Simple Storage Service (S3) over the Hadoop Distributed File System for the cloud-based service's dynamic scalability and limitless data and computational power. Netflix collects data from billions of streaming events from televisions, computers, and mobile devices.

With S3 as its data warehouse, Hadoop clusters with hundreds of nodes can be configured for various workloads, all able to access the same data. Netflix uses Amazon's Elastic MapReduce distribution of Hadoop and has developed its own Hadoop Platform as a Service, which it calls Genie. Genie lets users submit Hadoop, Pig, Hive, and other jobs via RESTful APIs without having to provision new clusters or install new clients.

 

 

The Netflix Hadoop-S3 data warehouse offers unmatched elasticity in terms of data and computing power in a widely distributed network.

There is clearly potential in combining Hadoop and cloud services, as Wired's Marco Visibelli explains in an August 13, 2014, article. Visibelli describes how companies leverage Big Data for forecasting by starting with small projects on Amazon Web Services and scaling up as those projects succeed. For example, a European car manufacturer used Hadoop to combine several supplier databases into a single 15TB database, which saved the company $16 million in two years.

Hadoop opens the door to Big Data for organizations of all sizes. Projects that leverage the scalability, security, accessibility, and affordability of cloud services such as Morpheus's database as a service have a much greater chance of success.

 

No "Buts" About It: The Cloud Is Transforming Your Company's Business Processes

TL;DR: As IT managers gain confidence in the reliability and security of cloud services, it becomes more difficult for them to ignore the cloud's many benefits for all their business's operations. Companies have less hardware to purchase and maintain, they spend only for the storage and processing they need, and they can easily monitor and manage their applications. With the Morpheus database as a service you get all of the above running on a high-availability network that features 24-7 support.

Give any IT manager three wishes and they'll probably wish for three fewer things to worry about. How about 1) having less hardware to buy and manage, 2) having to pay for only the storage and processing you need, and 3) being able to monitor and test applications from a single easy-to-use console?

Knowing the built-in cynicism of many data-center pros, they're likely to scoff at your offer, or at least suspect that it can't be as good as it sounds. That's pretty much the reception cloud services got in the early days, circa 2010.

An indication of IT's growing acceptance of cloud services for mainstream applications is KPMG's annual survey of 650 enterprise executives in 16 countries about their cloud strategies. In the 2011 survey, concerns about data security, privacy, and regulatory compliance were cited as the principal impediments to cloud adoption in large organizations.

According to the results of the most recent KPMG cloud survey, executives now consider cloud integration challenges and control of implementation costs as their two greatest concerns. There's still plenty of fretting among executives about the security of their data in the cloud, however. Intellectual property theft, data loss/privacy, and system availability/business continuity are considered serious problems, according to the survey.

Executives rate such cloud-security challenges as intellectual property theft, data loss, and system availability greater than 4 on a scale of 1 (not serious) to 5 (very serious). Credit: KPMG

Still, security concerns aren't dissuading companies from adopting cloud services. Executives told KPMG that in the next 18 months their organizations planned cloud adoption in such areas as sourcing and procurement; supply chain and logistics; finance, accounting and financial management; business intelligence and analytics; and tax.

Cloud 'migration' is really a 'transformation'

Three business trends are converging to make the cloud an integral part of the modern organization: the need to collect, integrate, and analyze data from all internal operations; the need to develop applications and business processes quickly and inexpensively; and the need to control and monitor the use of data resources that are no longer stored in central repositories.

In a September 2, 2014, article on Forbes.com, Robert LeBlanc explains that cloud services were initially perceived as a way to make operations more efficient and less expensive. But now organizations see the cloud architecture as a way to innovate in all areas of the company. Business managers are turning to cloud services to integrate big data, mobile computing, and social media into their core processes.

 

Mobile and collaboration are leading the transition in organizations away from on-site management and toward cloud platforms. Credit: Ventana Research

George Washington University discovered first-hand the unforeseen benefits of its shift to a cloud-based data strategy. Zaid Shoorbajee describes in the March 3, 2014, GW Hatchet student newspaper how a series of campus-wide outages motivated the university to migrate some operations to cloud services. The switch saved the school $700,000 and allowed its IT staff to focus more on development and less on troubleshooting.

The benefits the school realized from the switch extend far beyond IT, however. Students now have the same "consumer and social experience" they've become accustomed to in their private lives through Google, iTunes, and similar services, according to a university spokesperson.

Four approaches to cloud application integration

Much of the speed, efficiency, and agility of cloud services can be lost when organizations become bogged down in their efforts to adapt legacy applications and processes. In a TechTarget article (registration required), Amy Reichert presents four approaches to cloud application integration. The process is anything but simple, due primarily to the nature of the applications themselves and the need to move data seamlessly and accurately between applications to support business processes.

One of the four techniques is labeled integration platform as a service (iPaaS), in which the cloud service itself provides integration templates featuring such tools as connectors, APIs, and messaging systems. Organizations then customize and modify the templates to meet their specific needs.

In cloud-to-cloud integration, the organization's cloud applications have an integration layer built in to support any required data transformations, as well as encryption and transportation. The cloud-to-integrator-to-cloud model relies on the organization's existing middleware infrastructure to receive, convert, and transport the data between applications.

Finally, the hybrid integration approach keeps individual cloud apps separate but adds an integration component to each. This allows organizations to retain control over the data, maximize its investment in legacy systems, and adopt cloud services at the company's own pace.

Regardless of your organization's strategy for adopting and integrating cloud applications, the Morpheus database as a service can play a key role by providing a flexible, secure, and reliable platform for monitoring and optimizing database applications. Morpheus's SSD-backed infrastructure ensures lightning fast performance, and direct patches into EC2 offer ultra-low latency.

Morpheus protects your data via secure VPC connections and automatic backups, replication, and archiving. The service supports ElasticSearch, MongoDB, MySQL, and Redis, as well as custom storage engines. Create your free database during the beta period.


Why More Is Better with Database Management: The Multicloud Approach


TL;DR: At one time, organizations planning their cloud strategy adopted an either-or approach: Either store and manage data on a secure private cloud, or opt for the database-as-a-service model of the public cloud. Now companies are realizing the benefits of both options by adopting a multicloud strategy that places individual applications on the platform that best suits them.

In IT's never-ending quest to improve database performance and reduce costs, a new tactic has surfaced: multicloud. Rather than process all database queries on either the private cloud or public cloud, shift the processing to the platform best able to handle it in terms of speed and efficiency.

InfoWorld's David Linthicum explains in an August 5, 2014, article that a multicloud architecture "gives those who manage large distributed databases the power to use only the providers who offer the best and most cost-effective service -- or the providers who are best suited to their database-processing needs."

Managing the resulting complexity isn't as daunting as it may sound, according to Linthicum. In fact, a cloud-management system could soon become a requirement for IT departments of all sizes. Product lifecycle management (PLM) expert Oleg Shilovitsky claims in an August 5, 2014, article on BeyondPLM.com that three trends are converging to make distributed database architectures mandatory.

The first trend is the tsunami of data that is overwhelming information systems and pushing traditional database architectures to their physical limits. The second trend is the increasingly distributed nature of organizations, which are adopting a design-anywhere, build-anywhere philosophy. The third trend is the demand among users for ever-faster performance on many different platforms to keep pace with the changes in the marketplace.

Multicloud: More than simply pairing public and private

In a July 12, 2013, article, InfoWorld's Linthicum compared the process of adopting a multicloud strategy to the transition a decade or more ago to distributed internal systems customized to the specific demands of the business. A key to managing the increased complexity of multicloud systems is carefully choosing your service provider to ensure a good fit between their offerings and your company's needs.

Three key considerations in this regard are security, accessibility, and scalability. These are three areas where the Morpheus database-as-a-service shines. In addition to lightning-fast SSD-based infrastructure that increases IOPS by 100 times, Morpheus provides real-time monitoring for identifying and optimizing database queries that are impeding database performance.

Morpheus offers ultra-low latency to leading Internet peering points and cloud hosts. Additionally, fault tolerance, disaster recovery, and automated backups make Morpheus a unique Database as a service. You connect to your databases via secure VPC. Visit the Morpheus site for pricing information or to create a free account during the beta period.

Mixing old and new while maximizing adaptability

Businesses of all types and sizes are emphasizing the ability to shift gears quickly in anticipation of industry trends. No longer can you simply react to market changes: You must be there ahead of the competition.

A principal benefit of the multicloud database architecture is flexibility. In an August 25, 2014, article on Forbes.com, IBM's Jeff Borek highlights the ability of multicloud databases to leverage existing IT infrastructure while realizing the agility, speed, and cost savings of cloud services.

A typical multicloud approach is use of the private cloud as a point-of-control interface to public cloud services. MSPMentor's Michael Brown describes such an architecture in an August 27, 2014, article.

Many companies use a private cloud to ensure regulatory compliance for storing health, financial, and other sensitive data. In such systems, the private cloud may serve as the gateway to the public cloud in a two-tier structure. In addition to providing a single interface for users, the two levels allow applications and processes to be customized for best fit while keeping sensitive data secure.

A multicloud-application prototype: Managing multiple application servers

There's no denying that managing a distributed database system is more complicated than maintaining the standard top-down RDBMS of yesteryear. In a July 23, 2013, article on GitHub, German Ramos Garcia presents a prototype multicloud application development model based on the Hydra service. The model addresses much of the complexity entailed in managing multiple application servers.

The web application is first divided into static elements (images, Javascript, static HTML, etc.), dynamic elements on a backend server, and a database to support the backend servers.

A prototype multicloud application architecture separates static, dynamic, and database-support servers.

The distributed architecture must provide mechanisms for controlling the various servers, balancing traffic between servers, and recovering from failures. It must also control sessions between servers and determine where to store application data.

An alternative approach to multicloud management is presented by Mauricio J. Rojas in a blog post from March 25, 2014. The model Rojas proposes is a mash-up of management tools from many different cloud services.

Management tools for distributed cloud-based databases should focus on user needs and offer best of breed from various providers.

Rojas recommends creating a single set of management components for both the public and private clouds. This allows you to "create the same conditions in both worlds" and move seamlessly between the public and private domains.

In addition to security, important considerations in developing a multicloud management system are auto-scaling and high availability. With the Morpheus database-as-a-service, you're covered in all three areas right out of the box -- even Pinal Dave, the SQL Authority, uses Morpheus. Make Morpheus a key element of your multicloud strategy.

Can A Silicon Valley CTO Save Government Software From Itself?


TL;DR: Following several high-profile development disasters, government IT departments have received a mandate to change their default app-development approach from the traditional top-down model to the agile, iterative, test-centric methodology favored by leading tech companies. While previous efforts to dynamite the entrenched, moribund IT-contracting process have crashed in flames, analysts hold out hope for the new 18F and U.S. Digital Service initiatives. Given the public's complete lack of faith in the government's ability to provide digital services, failure is simply not an option.

Can Silicon Valley save the federal government from itself? That's the goal of former U.S. Chief Technology Officer Todd Park, who relocated to California this summer and set about recruiting top-tier application developers from the most innovative tech companies on the planet to work for the government.

As Wired's Steven Levy reports in an August 28, 2014, article, Park hopes to appeal to developers' sense of patriotism. "America needs you," Levy quotes Park telling a group of engineers at the Mozilla Foundation headquarters. A quick review of recent federal-government IT debacles demonstrates the urgency of Park's appeal.

Start with the $300 million spent over the past six years by the Social Security Administration on a disability-claim filing system that remains unfinished. Then check out the FBI's failed Virtual Case File case-management initiative that had burnt through $600 million before being replaced by the equally troubled Sentinel system, as Jason Bloomberg explains in an August 22, 2012, CIO article.

But the poster child of dysfunctional government app development is HealthCare.gov, which Park was brought in to save after its spectacularly failed launch in October 2013. For their $300 million investment, U.S. taxpayers got a site that took eight seconds to respond to a mouse click and crashed so often that not one of the millions of people visiting the site on its first day of operation was able to complete an application.

Healthcare.gov's performance in the weeks after its launch highlights what can happen when a $300 million development project proceeds with no one in the driver's seat. Credit: The Verge

The dynamite approach to revamping government IT processes

Just months before HealthCare.gov's epic crash-and-burn, Park had established the Presidential Innovation Fellows program to attract tech professionals to six-month assignments with the government. The program was envisioned as a way to seed government agencies with people who could introduce cutting-edge tools and processes to their development efforts. After initial successes with such agencies as Medicare and Veterans Affairs, the group turned its attention to rescuing HealthCare.gov -- and perhaps the entire Affordable Care Act.

The source of the site's problems quickly became obvious: the many independent contractors assigned to portions of the site worked in silos, and no single contractor was responsible to ensure the whole shebang actually worked. Even as the glitches stacked up following the failed launch, contractors continued to work on new "features" because they were contractually required to meet specific goals.

The culprit was the federal contracting process. Bureaucrats farmed out contracts to cronies and insiders, whose only motivation was to be in good position to win the next contract put up for bid, according to Levy. Park's team of fixers was met with resistance at every turn despite being given carte blanche to ignore every rule of government development and procurement.

With persistence and at least one threat of physical force, the ad-hoc team applied a patchwork of monitoring, testing, and debugging tools that got the site operational. By April 2014, HealthCare.gov had achieved its initial goal of signing up 8 million people for medical insurance.

How an agile-development approach could save democracy

The silver lining of the HealthCare.gov debacle is the formation of two new departments charged with bringing an agile approach to government app development. The General Services Administration's 18F was established earlier this year with a mandate to "fail fast" rather than follow the standard government-IT propensity to fail big.

As Tech President's Alex Howard describes in an August 14, 2014, article, 18F is assisting agencies as they develop free, open-source services offered to the public via GitHub and other open-source repositories. Perhaps an even-bigger shift in attitude by government officials is the founding last month of the U.S. Digital Service, which is modeled after a successful U.K. government app-development program.

To help agencies jettison their old development habits in favor of modern approaches, the White House released the Digital Services Playbook that provides 13 "plays" drawn from successful best practices in the private and public sectors. Two of the plays recommend deploying in a flexible hosting environment and automating testing and deployment.

The government's Digital Services Playbook calls for agencies to implement modern development techniques such as flexible hosting and automated testing.

That's precisely where the Morpheus database-as-a-service (DBaaS) fits into the government's plans. Morpheus lets users spin up a new database instance in seconds -- there's no need to wait for lengthy IT approval to procure and provision a new DB. Instead, it's all done in the cloud.

In addition, users' core elastic, scalable, and reliable DB infrastructure is taken care of for them. Developers can focus on building the core functionality of the app rather than having to spend their time making the infrastructure reliable and scalable. Morpheus delivers continuous availability, fault tolerance, failover, and disaster recovery for all databases running on its service. Last but definitely not least, it's cost efficient for users to go with Morpheus: there's no upfront setup cost, and they pay only for actual usage.

The Morpheus cloud database as a service (DBaaS) epitomizes the goals of the government's new agile-development philosophy. The service's real-time monitoring makes continuous testing a fundamental component of database development and management. Morpheus's on-demand scalability ensures that applications have plenty of room to grow without incurring large up-front costs. You get all this plus industry-leading performance, VPN security, and automatic backups, archiving, and replication.

Government IT gets the green light to use cloud app-development services

As groundbreaking as the Digital Services Playbook promises to be for government IT, another publication released at the same time may have an even-greater positive impact on federal agencies. The TechFAR Handbook specifies how government contractors can support an "iterative, customer-driven software development process."

Tech President's Howard quotes Code for America founder Jen Pahlka stating that the handbook makes it clear to government IT staff and contractors alike that "agile development is not only perfectly legal, but [is] in fact the default methodology."

Critics point out that this is not the government's first attempt to make its application development processes more open and transparent. What's different this time is the sense of urgency surrounding efforts such as 18F and the U.S. Digital Service. Pahlka points out that people have lost faith in the government's ability to provide even basic digital services. Pahlka is quoted in a July 21, 2014, Government Technology interview by Colin Wood and Jessica Mulholland as stating, "If government is to regain the trust and faith of the public, we have to make services that work for users the norm, not the exception."

Cloud Database Security, Farms and Restaurants: The Importance of Knowing Your Sources

TL;DR: Securing your company's cloud-based assets starts by applying tried-and-true data-security practices modified to address the unique characteristics of virtual-network environments. Cloud services are slowly gaining the trust of IT managers who are justifiably hesitant to extend the security perimeters to accommodate placing their company's critical business assets in the cloud.

The fast pace of technological change doesn't faze IT pros, who live the axiom "The more things change, the more they stay the same." The solid security principles that have protected data centers for generations apply to securing your organization's assets that reside in the cloud. The key is to anticipate the new threats posed by cloud technology -- and by cyber criminals who now operate with a much higher level of sophistication.

In a September 18, 2014, article, ZDNet's Ram Lakshminarayanan breaks down the cloud-security challenge into four categories: 1) defending against cloud-based attacks by well-funded criminal organizations; 2) unauthorized access and data breaches that use employees' stolen or compromised mobile devices; 3) maintenance and monitoring of cloud-based APIs; and 4) ensuring compliance with the growing number and complexity of government regulations.

IT departments are noted for their deliberate approach to new technologies, and cloud-based data services are no different. According to a survey published this month by the Ponemon Institute of more than 1,000 European data-security practitioners (pdf), 64 percent believe their organization's use of cloud services reduces their ability to protect sensitive information.

The survey, which was sponsored by Netskope, blames much of the distrust on the cloud multiplier effect: IT is challenged to track the increasing number and type of devices connecting to the company's networks, as well as the cloud-hosted software employees are using, and the business-critical applications being used in the "cloud workspace."

Building trust between cloud service providers and their IT customers

No IT department will trust the organization's sensitive data to a service that fails to comply with privacy and data-security regulations. The Ponemon survey indicates that cloud services haven't convinced their potential customers in Europe of their trustworthiness: 72 percent of respondents strongly disagreed, disagreed, or were uncertain whether their cloud-service providers were in full compliance with privacy and data-security laws.

Data-security executives remain leery of cloud services' ability to secure their organization's critical business data. Credit: Ponemon Institute

Even more troubling for cloud service providers is the survey finding that 85 percent of respondents strongly disagreed, disagreed, or weren't sure whether their cloud service would notify them immediately in the event of a data breach that affected their company's confidential information or intellectual property.

The Morpheus database-as-a-service puts data security front and center by offering VPN connections to your databases in addition to online monitoring and support. Your databases are automatically backed up, replicated, and archived on the service's SSD-backed infrastructure.

Morpheus also features market-leading performance, availability, and reliability via direct connections to EC2 and colocation with the fastest peering points available. The service's real-time monitoring lets you identify and optimize the queries that are slowing your database's performance. Visit the Morpheus site for pricing information and to sign up for a free account.

Overcoming concerns about cloud-service security

Watching your data "leave the nest" can be difficult for any IT manager. Yet cloud service providers offer a level of security at least on par with that of their on-premises networks. In a September 15, 2014, article on Automated Trader, Bryson Hopkins points out that Amazon Web Services and Microsoft Azure are two of the many public cloud services that comply with Service Organization Control (SOC), HIPAA, FedRAMP, ISO 27001, and other security standards.

The SANS Institute's Introduction to Securing a Cloud Environment (pdf) explains that despite the cloud's increased "attack surface" when compared with in-house servers, the risk of cloud-based data being breached is actually less than that of losing locally hosted data. Physical and premises security are handled by the cloud service but can be enhanced by applying a layered approach to security that uses virtual firewalls, security gateways, and other techniques.

Cloud services avoid resource contention and other potential problems resulting from multi-tenancy by reprovisioning virtual machines, overprovisioning to crowd out other tenants, and using fully reserved capacities.

Another technique for protecting sensitive data in multi-tenant environments is to isolate networks by configuring virtual switches or virtual LANs. The virtual machine and management traffic must be isolated from each other at the data link layer (layer 2) of the OSI model.

The key to protecting sensitive data in a multi-tenant cloud environment is to isolate virtual machine and management traffic at the data link layer. Credit: SANS Institute

In a June 27, 2014, article on CloudPro, Davey Winder brings the issue of cloud security full circle by highlighting the fact that the core principles are the same as for other forms of data security: an iron-clad policy teamed with encryption. The policy must limit privileged-user access by the service's employees and provide a way for customers to audit the cloud network.

One way to compare in-house data management and cloud-based management is via the farmer-restaurant analogy described in a September 15, 2014, article by Arun Anandasivam on IBM's Thoughts on Cloud site. If you buy your food directly from the farmer, you have a first-hand impression of the person who grew your food, but your options may be limited and you have to do the preparation work. If you buy your food from a restaurant, you likely have a wider selection to choose from and you needn't prepare the meal, but you have less control over the food's path from farm to kitchen, and you have fewer opportunities to determine beforehand whether the food meets your quality requirements.

That's not to say farmers are any more or less trustworthy than restaurants. You use the same senses to ensure you're getting what you paid for, just in different ways. So check out the Morpheus database-as-a-service to see what's on the menu!

Why is Google Analytics so Fast? A Peek Inside

TL;DR: Google Analytics stores a massive amount of statistical data from web sites across the globe. Retrieving reports quickly from such a large amount of data requires Google to use a custom solution that is easily scalable whenever more data needs to be stored.

At Google, any number of applications may need to be added to their infrastructure at any time, and each of these could potentially have extremely heavy workloads. Resource demands such as these can be difficult to meet, especially when there is a limited amount of time to get the required updates implemented.

If Google were to use the typical relational database on a single server node, they would need to upgrade their hardware each time capacity is reached. Given the number of applications being created and the amount of data being used by Google, this type of upgrade could quite possibly be necessary on a daily basis!

The load could also be shared across multiple server nodes, but once more than a few additional nodes are required, the complexity of the system becomes extremely difficult to maintain.

With these things in mind, a standard relational database setup would not be a particularly attractive option due to the difficulty of upgrading and maintaining the system on such a large scale.

Finding a Scalable Solution

To maintain speed and avoid the need for such rapid-fire hardware upgrades, Google uses its own data storage solution, called BigTable. Rather than store data relationally in tables, it stores data as a multi-dimensional sorted map.

This type of implementation falls under a broader heading for data storage, called a key/value store. This method of storage can provide some performance benefits and make the process of scaling much easier.

Information Storage in a Relational Database

Relational databases store each piece of information in a single location, which is typically a column within a table. For a relational database, it is important to normalize the data. This process ensures that there is no duplication of data in other tables or columns.

For example, customer last names should always be stored in a particular column in a particular table. If a customer last name is found in another column or table within the database, then it should be removed and the original column and table should be referenced to retrieve the information.

The downside to this structure is that the database can become quite complex internally. Even a relatively simple query can have a large number of possible paths for execution, and all of these paths must be evaluated at run time to find out which one will be the most optimal. The more complex the database becomes, the more resources will need to be devoted to determining query paths at run time.
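For a feel of what that normalization looks like in practice, here is a small sketch using Python's built-in sqlite3 driver; the table names and values are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Normalized schema: the customer's last name lives in exactly one place, and
# every order refers back to it by id.
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, last_name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Rivera');
    INSERT INTO orders VALUES (10, 1, 42.50), (11, 1, 17.25);
""")

# Reading the name back alongside order data requires a join, one of the many
# access paths the optimizer has to weigh at run time.
rows = conn.execute("""
    SELECT c.last_name, o.total
    FROM orders AS o JOIN customers AS c ON c.id = o.customer_id
""").fetchall()
print(rows)   # [('Rivera', 42.5), ('Rivera', 17.25)]
```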

Information Storage in a Key/Value Store

With a key/value store, duplicate data is acceptable. The idea is to make use of disk space, which can easily and cost-effectively be upgraded (especially when using a cloud), rather than other hardware resources that are more expensive to bring up to speed.

This data duplication is beneficial when it comes to simplifying queries, since related information can be stored together to avoid having numerous potential paths that a query could take to access the needed data.

Instead of using tables like a relational database, key/value stores use domains. A domain is a storage area where data can be placed, but does not require a predefined schema. Pieces of data within a domain are defined by keys, and these keys can have any number of attributes attached to them.

The attributes can simply be string values, but can also be something even more powerful: data types that match up with those of popular programming languages. These could include arrays, objects, integers, floats, Booleans, and other essential data types used in programming.

With key/value stores, the data integrity and logic are handled by the application code (through the use of one or more APIs) rather than by a schema within the database itself. As a result, data retrieval becomes a matter of using the correct programming logic rather than relying on the database optimizer to determine the query path from a large number of possibilities based on the relation it needs to access.
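The following toy Python sketch mimics the domain/key/attribute model with an in-memory dictionary. It is only a stand-in to show the access pattern, not an interface to BigTable or any real key/value store.

```python
# A toy in-memory "domain": no schema, each key maps to a dict of attributes
# whose values can be strings, numbers, lists, or nested objects.
domain = {}

def put(key, **attributes):
    """Create or update a key, merging in whatever attributes are supplied."""
    domain.setdefault(key, {}).update(attributes)

def get(key, attribute=None):
    """Fetch a whole record or a single attribute; no query planner involved."""
    record = domain.get(key, {})
    return record if attribute is None else record.get(attribute)

# Duplicate, denormalized data is acceptable: related values live together
# under one key, so a single lookup answers the question.
put("pageview#2014-09-01#/home", visits=128, country_counts={"US": 90, "DE": 38})
put("pageview#2014-09-01#/home", visits=130)   # update a single attribute

print(get("pageview#2014-09-01#/home"))
print(get("pageview#2014-09-01#/home", "country_counts"))
```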

Getting Results

Google needs to store and retrieve copious amounts of data for many applications; included among them are Google Analytics, Google Maps, Gmail, and its popular web index for searching. In addition, more applications and data stores could be added at any time, making the BigTable key/value store an ideal solution for scalability.

BigTable is Google’s own custom solution, so how can a business obtain a similar performance and scalability boost to give its users a better experience? The good news is that there are other key/value store options available, and some can be run as a service from a cloud. This type of service is easily scalable, since more data storage can easily be purchased as needed on the cloud.

A Key/Value Store Option

There are several options for key/value stores. One of these is MongoDB, a document database that stores information in a JSON-like format. This format is ideal for web applications, since JSON makes it easy to pass data around in a standard format among the various parts of an application that need it.

For example, Mongo is part of the MEAN stack: MongoDB, Express, AngularJS, and NodeJS, a popular setup for programmers developing applications. Each of these pieces of the puzzle sends data to and from one or more of the other pieces. Since everything, including the database, can use the JSON format, passing the data around among the various parts becomes much easier and more standardized.
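As a quick sketch of how that looks from application code, the snippet below uses the third-party pymongo driver; the connection string, database name, and document fields are placeholders you would replace with your own.

```python
# Requires the third-party "pymongo" driver: pip install pymongo
# The connection string is a placeholder -- substitute the host, credentials,
# and database name your provider gives you.
from pymongo import MongoClient

client = MongoClient("mongodb://appuser:secret@db.example.com:27017/analytics")
db = client["analytics"]

# Documents are just JSON-style dictionaries, so the same shape of data can be
# passed between the database, the API layer, and the browser unchanged.
db.pageviews.insert_one({"path": "/home", "country": "US", "visits": 128})

doc = db.pageviews.find_one({"path": "/home"})
print(doc["visits"])
```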

How to Make Use of Mongo

Mongo can be installed and used on various operating systems, including Windows, Linux, and OS X. In this case, the scalability of the database would need to be maintained by adding storage space to the server on which it is installed.

Another option is to use Mongo as a service in the cloud. This allows for easy scalability, since a request can be made to the service provider to increase the available storage space at any time. In this way, new applications or additional data storage needs can be handled quickly and efficiently.

Morpheus is a great option for this service. Mongo is offered, as well as a number of other databases. Using Morpheus, a highly scalable database as a service can be running in no time!

DevOps: The Slow Tsunami That's Transforming IT

TL;DR: Old divisions in IT departments between app development and operations are crashing to the ground as users demand more apps with more features, and right now! By combining agile-development techniques and a hybrid public-private cloud methodology, companies realize the benefits of new technologies and place IT at the center of their operations.

The re-invention of the IT department is well underway. The end result will put technology at the core of every organization.

Gone are the days when IT was perceived as a cost center whose role was to support the company's revenue-generating operations. Today, software is embedded in every facet of the organization, whether the company makes lug nuts or spacecraft, lima beans or Linux distros.

The nexus of the IT transformation is the intersection of three disparate-yet-related trends: the merger of development and operations (DevOps), the wide-scale adoption of agile-development methodologies, and the rise of hybrid public/private clouds.

In a September 12, 2014, article, eWeek's Chris Preimesberger quotes a 2013 study by Puppet Labs indicating the switch to DevOps is well underway: 66 percent of the organizations surveyed had adopted DevOps or planned to do so, and 88 percent of telecoms use or intend to use a DevOps approach. The survey also found that DevOps companies deploy code 30 times more frequently than their traditional counterparts.

Closing the loop that links development and operations

A successful DevOps approach requires a closed loop connecting development and operations via continuous integration and continuous deployment. This entails adoption of an entirely new and fully automated development toolset. Traditional IT systems simply can't support the performance, scalability, and latency requirements of a continuous-deployment mentality. These are the precise areas where cloud architectures shine.
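The loop itself is simple to picture. The TypeScript sketch below stubs out the stages so the shape of the cycle is visible: every commit is built, tested, deployed, and monitored automatically, and the monitoring results feed the next change. The function names are invented for this example; in practice each stage would delegate to a CI server and deployment tooling.

```typescript
// A highly simplified sketch of a continuous integration / continuous
// deployment loop. Each stage is a stub that only logs what a real
// pipeline would do.

async function build(commit: string): Promise<void> {
  console.log(`building ${commit}`);
}

async function test(commit: string): Promise<boolean> {
  console.log(`running the automated test suite for ${commit}`);
  return true; // assume the tests pass in this sketch
}

async function deploy(commit: string): Promise<void> {
  console.log(`deploying ${commit} to production`);
}

async function monitor(commit: string): Promise<void> {
  console.log(`watching error rates and latency for ${commit}`);
}

// The closed loop: every commit flows through the same automated stages,
// and what monitoring reveals shapes the next commit.
async function onCommit(commit: string): Promise<void> {
  await build(commit);
  if (await test(commit)) {
    await deploy(commit);
    await monitor(commit);
  } else {
    console.log(`tests failed; ${commit} never reaches production`);
  }
}

onCommit("abc1234").catch(console.error);
```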

Agile DevOps

Agile development combines with DevOps to create a service-based approach to the provisioning, support, and maintenance of apps. Source: Dev2Ops

For example, the Morpheus database-as-a-service offers ultra-low latency via direct connections into EC2 and colocation at some of the fastest peering points available. You can monitor and optimize your apps in real time and spot trends via custom metrics. Morpheus's support staff and advanced robots monitor your database infrastructure continuously, and custom MongoDB and MySQL storage engines are available.

In addition, you're assured high availability via secure VPC connections to the network, which uses 100-percent bare-metal SSD storage. Visit the Morpheus site for pricing information and to sign up for a free account.

Continuous integration + continuous delivery = continuous testing

Developers steeped in the tradition of delivering complete, finished products have to turn their thinking around 180 degrees. Dr. Dobb's Andrew Binstock explains in a September 16, 2014, article that continuous delivery requires deploying tested, usable apps that are not feature-complete. The proliferation of mobile and web interfaces makes constant tweaks and updates not only possible but preferable.

Pushing out 10 or more updates in a day would have been unheard of in a turn-of-the-millennium IT department. The incessant test-deploy-feedback loop is possible only if developers and operations staff work together to ensure smooth roll-outs and fast, effective responses to the inevitable deployment errors and other problems.

Integrating development and operations so completely requires not just a reorganization of personnel but also a change in management philosophy. However, the benefits of such a holistic approach to IT outweigh the short-term pain of the organizational adjustments required.

A key to smoothing out some of the bumps is use of a hybrid-cloud philosophy that delivers the speed, scalability, and cost advantages of the public cloud while shielding the company's mission-critical applications from the vagaries of third-party platforms. Processor, storage, and network resources can be provisioned quickly as services by using web interfaces and APIs.
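In practice, "provisioned as services" means a single authenticated API call rather than a ticket to the ops team. The sketch below shows the general shape of such a call in TypeScript; the endpoint, payload fields, and token are entirely hypothetical, since every cloud or DBaaS provider defines its own API.

```typescript
// A hypothetical example of provisioning a database instance through a
// provider's REST API. The URL, body, and credential are placeholders only.
// Requires Node 18+ for the built-in fetch.

async function provisionDatabase(): Promise<void> {
  const response = await fetch("https://api.example-cloud.com/v1/databases", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer <api-token>", // placeholder credential
    },
    body: JSON.stringify({ engine: "mongodb", plan: "ssd-small", region: "us-east" }),
  });
  console.log("provisioning request accepted:", response.status);
}

provisionDatabase().catch(console.error);
```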

Seeing apps as a collection of discrete services

Imagine a car that's still drivable with only three of its four wheels in place. That's the idea behind developing applications as a set of discrete services, each of which is able to function independently of the others. Also, the services can be swapped in and out of apps on demand.

This is the "microservice architecture" described by Martin Fowler and James Lewis in a March 25, 2014, blog post. The many services that make up such an app run in their own processes and communicate via an HTTP resource API or other lightweight mechanism. Because there is very little centralized management, the services can be written in different programming languages and can use different storage technologies.

Microservice Architecture

The microservice architecture separates each function of the app into its own service rather than encapsulating all functions in a single process. Source: Martin Fowler

By using services rather than libraries as components, the services can be deployed independently. When a service changes, only that service needs to be redeployed -- with some noteworthy exceptions, such as changes to service interfaces.
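To ground the idea, here is a minimal TypeScript sketch of one such service: a single app function running in its own process behind a small HTTP resource API, so it can be redeployed on its own. It uses only Node's built-in http module, and the route and data are invented for the example.

```typescript
// A tiny standalone "microservice": it owns one piece of functionality
// (inventory lookups) and exposes it over HTTP, so other services can call
// it regardless of their own language or storage choices.
import { createServer } from "node:http";

const inventory: Record<string, number> = { "lug-nut": 500, "linux-distro": 42 };

createServer((req, res) => {
  // GET /stock/<item> returns the current count for that item as JSON.
  const match = req.url?.match(/^\/stock\/([\w-]+)$/);
  if (req.method === "GET" && match) {
    const item = match[1];
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ item, count: inventory[item] ?? 0 }));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(4000, () => console.log("inventory service on :4000"));
```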

No longer are applications "delivered" by developers to users. In the world of DevOps, the team "developing" the app owns it throughout its lifecycle. Thus the "developers" take on the sys-admin and operations support/maintenance roles. Gone are the days of IT working on "projects." Today, all IT staff are working on "products." This cements the position of the company's technology workers at the center of all the organization's operations.
