
No "Buts" About It: The Cloud Is Transforming Your Company's Business Processes


TL;DR: As IT managers gain confidence in the reliability and security of cloud services, it becomes more difficult for them to ignore the cloud's many benefits for all their business's operations. Companies have less hardware to purchase and maintain, they spend only for the storage and processing they need, and they can easily monitor and manage their applications. With the Morpheus database as a service you get all of the above running on a high-availability network that features 24-7 support.

Give any IT manager three wishes and they'll probably wish for three fewer things to worry about. How about 1) having less hardware to buy and manage, 2) having to pay for only the storage and processing you need, and 3) being able to monitor and test applications from a single easy-to-use console?

Given the built-in cynicism of many data-center pros, expect them to scoff at your offer, or at least to suspect it can't be as good as it sounds. That's pretty much the reception cloud services got in the early days, circa 2010.

An indication of IT's growing acceptance of cloud services for mainstream applications is KPMG's annual survey of 650 enterprise executives in 16 countries about their cloud strategies. In the 2011 survey, concerns about data security, privacy, and regulatory compliance were cited as the principal impediments to cloud adoption in large organizations.

According to the results of the most recent KPMG cloud survey, executives now consider cloud integration challenges and control of implementation costs as their two greatest concerns. There's still plenty of fretting among executives about the security of their data in the cloud, however. Intellectual property theft, data loss/privacy, and system availability/business continuity are considered serious problems, according to the survey.

International Cloud Survey

Executives rate cloud-security challenges such as intellectual property theft, data loss, and system availability at greater than 4 on a scale of 1 (not serious) to 5 (very serious). Credit: KPMG

Still, security concerns aren't dissuading companies from adopting cloud services. Executives told KPMG that in the next 18 months their organizations planned cloud adoption in such areas as sourcing and procurement; supply chain and logistics; finance, accounting and financial management; business intelligence and analytics; and tax.

Cloud 'migration' is really a 'transformation'

Three business trends are converging to make the cloud an integral part of the modern organization: the need to collect, integrate, and analyze data from all internal operations; the need to develop applications and business processes quickly and inexpensively; and the need to control and monitor the use of data resources that are no longer stored in central repositories.

In a September 2, 2014, article on Forbes.com, Robert LeBlanc explains that cloud services were initially perceived as a way to make operations more efficient and less expensive. But now organizations see the cloud architecture as a way to innovate in all areas of the company. Business managers are turning to cloud services to integrate big data, mobile computing, and social media into their core processes.

 

BI Deployment Preferences

Mobile and collaboration are leading the transition in organizations away from on-site management and toward cloud platforms. Credit: Ventana Research

George Washington University discovered first-hand the unforeseen benefits of its shift to a cloud-based data strategy. Zaid Shoorbajee describes in the March 3, 2014, GW Hatchet student newspaper how a series of campus-wide outages motivated the university to migrate some operations to cloud services. The switch saved the school $700,000 and allowed its IT staff to focus more on development and less on troubleshooting.

The benefits the school realized from the switch extend far beyond IT, however. Students now have the same "consumer and social experience" they've become accustomed to in their private lives through Google, iTunes, and similar services, according to a university spokesperson.

Four approaches to cloud application integration

Much of the speed, efficiency, and agility of cloud services can be lost when organizations become bogged down in their efforts to adapt legacy applications and processes. In a TechTarget article (registration required), Amy Reichert presents four approaches to cloud application integration. The process is anything but simple, due primarily to the nature of the applications themselves and the need to move data seamlessly and accurately between applications to support business processes.

One of the four techniques is labeled integration platform as a service (iPaaS), in which the cloud service itself provides integration templates featuring such tools as connectors, APIs, and messaging systems. Organizations then customize and modify the templates to meet their specific needs.
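To make the connector idea concrete, here is a minimal sketch of the kind of field-mapping transform an iPaaS template might supply; the field names, mapping rules, and record shown are purely hypothetical and not drawn from Reichert's article.

```python
# Hypothetical sketch of an iPaaS-style connector: a reusable template that
# maps records from a source application's schema to a target application's
# schema. Field names and transform rules are illustrative only.

SOURCE_TO_TARGET = {
    "cust_name": "customerName",
    "cust_email": "email",
    "order_total": "amount",
}

def transform(record: dict) -> dict:
    """Apply the field mapping, dropping any fields the target doesn't recognize."""
    return {target: record[source]
            for source, target in SOURCE_TO_TARGET.items()
            if source in record}

if __name__ == "__main__":
    crm_record = {"cust_name": "Acme Corp", "cust_email": "ap@acme.example", "order_total": 125.0}
    print(transform(crm_record))
    # {'customerName': 'Acme Corp', 'email': 'ap@acme.example', 'amount': 125.0}
```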

In cloud-to-cloud integration, the organization's cloud applications have an integration layer built in to support any required data transformations, as well as encryption and transportation. The cloud-to-integrator-to-cloud model relies on the organization's existing middleware infrastructure to receive, convert, and transport the data between applications.

Finally, the hybrid integration approach keeps individual cloud apps separate but adds an integration component to each. This allows organizations to retain control over their data, maximize their investment in legacy systems, and adopt cloud services at their own pace.

Regardless of your organization's strategy for adopting and integrating cloud applications, the Morpheus database as a service can play a key role by providing a flexible, secure, and reliable platform for monitoring and optimizing database applications. Morpheus's SSD-backed infrastructure ensures lightning-fast performance, and direct connections to EC2 offer ultra-low latency.

Morpheus protects your data via secure VPC connections and automatic backups, replication, and archiving. The service supports ElasticSearch, MongoDB, MySQL, and Redis, as well as custom storage engines. Create your free database during the beta period.


How Database Failure Left Harry Potter and Thousands of Travelers Stranded Outside the US


Daniel Radcliffe. Photograph: Perou/Guardian

TL;DR: As the US State Department coped with a massive database failure, thousands of travelers (and one Harry Potter star) were prevented from entering the United States. Even once the database was brought back online, it worked only in a limited capacity, resulting in extensive backlogs that added days, if not a full week, to wait times for visas and passports. Former U.S. Chief Technology Officer Todd Park wants government IT to move in the direction of open source, cloud-based computing. If you aren’t using SSD-backed cloud database infrastructure, it’s time to catch up.

The U.S. government might be able to afford the database downtime that most IT professionals price at $20,000 per hour minimum (some put that number in the hundreds of thousands), but chances are, most businesses are not equipped to suffer the consequences of database failure.

After the massive freak show that was the first iteration of Healthcare.gov, U.S. Chief Technology Officer (until a few days ago) Todd Park told Wired’s Steven Levy that he’d like to move government IT into the future with the rest of us, employing cloud-based, open source, rapid-iteration computing. He’s approached Silicon Valley tech gurus, imploring them to step in and champion change. And he’s not shy about how dire the situation is: “America needs you! Not a year from now! But right. The. Fuck. Now!”

But that sort of modernization definitely hadn’t happened in Washington by mid-July, when State Department operations worldwide ground to a near standstill after the Consolidated Consular Database (CCD) crashed before their eyes.

As a result, an untold number of travelers were stuck waiting at embassies for authorizations that took, on average, a week longer to deliver than usual. Students, parents adopting new babies abroad, and vacationers alike found themselves trapped across the world from their destinations, all due to a system backup failure.

Database Crash Destroyed State Department Productivity

So what happened here? According to the DOS, an Oracle software upgrade “interfered with the smooth interoperability of redundant nodes.” On July 20, the Department began experiencing problems with the CCD, creating an ever-growing backlog of visa applications. While the applications were not lost, the crash rendered the CCD mostly unusable for at least a week.

The backlog included only applications for non-immigrant visas. While the DOS would not confirm how many travelers were affected by the outage, State Department metrics show that 9 million non-immigrant visas were issued in 2012. Records are not yet available for more recent years, but United Press International reports that the DOS issues 370,000 visas weekly and was operating at less than 50 percent productivity, with a minimally functional system, throughout the second half of July.

Worldwide Nonimmigrant Visa Issuances

Nearly 9 million non-immigrant visas issued in 2012 alone. Backlogs due to database failure can be crippling. Credit: US Department of State

DOS’s Crashed Database Trapped Harry Potter, US Citizens, and Visitors Abroad

Daniel Radcliffe, forever known for his title role in the eight Harry Potter films, was among many people who faced impeded travel after the CCD failure. En route to Comic-Con in San Diego after a brief trip to Toronto for a premiere, even Radcliffe had to wait at the border due to delays in processing his new visa.

But while Radcliffe got an emergency pass, many less famous travelers weren’t so lucky. Several dozen families were living in a Guangzhou, China hotel after being unable to obtain visas for their newly adopted children. One Maryland family of seven was stuck for days, and they weren’t alone. The Washington Post reported that at least 30 other families were waiting, too, unable to return home as the DOS coped with the tech glitch.

Chinese students headed to the States for university studies were also delayed, alongside non-citizen Ireland residents traveling to the US for vacations. The US State Department’s Facebook page shows posts as late as August 22 asking for advice regarding delays in passport issuance.

Businesses Can’t Afford to Rely on Archaic Database Solutions

The Department of State posted an FAQ on August 4 in which it claimed that while it had made “significant progress,” it was still working to restore the Consolidated Consular Database, brought back online July 23 in only a partially functional state, to “full operational capacity.” The department still didn’t know the precise cause of the breakdown and hasn’t issued any statements since the August 4 update.

Needless to say, this debacle has caused a massive headache for the government and for travelers alike. But downtime causes headaches for companies in every industry. An International Data Corporation study reports that 98% of US and 96% of UK companies have to perform file recovery at least once per week. With downtime costing at least $20K per hour for both groups, and often considerably more, it’s imperative that businesses use database solutions that promise quick recovery. 

Average Cost of Unplanned Data Center Downtime

Downtime costs in more ways than one. Credit: Emerson Network Power

Morpheus Backs Up Automatically and Is Lightning Fast

Clearly, few businesses can withstand the kind of downtime from which the State Department continues to recover. You need your database to work quickly and reliably. The Morpheus cloud database-as-a-service offers automatic backups, replication, and archiving. Because it operates via an online console, you’ll never have to worry about losing access to your systems and data. Its SSD-backed infrastructure increases IOPS by over 100 times, making it reliably fast. Direct connection to EC2 vastly reduces latency.

Todd Park wouldn’t want to move government IT to the cloud if he didn’t trust the security. Morpheus is secured by VPN and is safe from interference from the public internet. The Morpheus platform is continually monitored and managed by a dedicated, experienced team, as well as by sophisticated robots made for the job. You can also monitor in real time the queries that could potentially bog down your app performance. Support is available 24 hours a day.

Morpheus works with Elasticsearch, MongoDB, MySQL, and Redis. While Morpheus is in beta, you can try it at no cost. Prices after beta and supported platforms are listed on the Morpheus web site.

Can A Silicon Valley CTO Save Government Software From Itself?


TL;DR: Following several high-profile development disasters, government IT departments have received a mandate to change their default app-development approach from the traditional top-down model to the agile, iterative, test-centric methodology favored by leading tech companies. While previous efforts to dynamite the entrenched, moribund IT-contracting process have crashed in flames, analysts hold out hope for the new 18F and U.S. Digital Service initiatives. Given the public's complete lack of faith in the government's ability to provide digital services, failure is simply not an option.

Can Silicon Valley save the federal government from itself? That's the goal of former U.S. Chief Technology Officer Todd Park, who relocated to California this summer and set about recruiting top-tier application developers from the most innovative tech companies on the planet to work for the government.

As Wired's Steven Levy reports in an August 28, 2014, article, Park hopes to appeal to developers' sense of patriotism. "America needs you," Levy quotes Park telling a group of engineers at the Mozilla Foundation headquarters. A quick review of recent federal-government IT debacles demonstrates the urgency of Park's appeal.

Start with the $300 million spent over the past six years by the Social Security Administration on a disability-claim filing system that remains unfinished. Then check out the FBI's failed Virtual Case File case-management initiative that had burnt through $600 million before being replaced by the equally troubled Sentinel system, as Jason Bloomberg explains in an August 22, 2012, CIO article.

But the poster child of dysfunctional government app development is HealthCare.gov, which Park was brought in to save after its spectacularly failed launch in October 2013. For their $300 million investment, U.S. taxpayers got a site that took eight seconds to respond to a mouse click and crashed so often that not one of the millions of people visiting the site on its first day of operation was able to complete an application.

Healthcare.gov homepage

Healthcare.gov's performance in the weeks after its launch highlights what can happen when a $300 million development project proceeds with no one in the driver's seat. Credit: The Verge

The dynamite approach to revamping government IT processes

Just months before HealthCare.gov's epic crash-and-burn, Park had established the Presidential Innovation Fellows program to attract tech professionals to six-month assignments with the government. The program was envisioned as a way to seed government agencies with people who could introduce cutting-edge tools and processes to their development efforts. After initial successes with such agencies as Medicare and Veterans Affairs, the group turned its attention to rescuing HealthCare.gov -- and perhaps the entire Affordable Care Act.

The source of the site's problems quickly became obvious: the many independent contractors assigned to portions of the site worked in silos, and no single contractor was responsible to ensure the whole shebang actually worked. Even as the glitches stacked up following the failed launch, contractors continued to work on new "features" because they were contractually required to meet specific goals.

The culprit was the federal contracting process. Bureaucrats farmed out contracts to cronies and insiders, whose only motivation was to be in good position to win the next contract put up for bid, according to Levy. Park's team of fixers was met with resistance at every turn despite being given carte blanche to ignore every rule of government development and procurement.

With persistence and at least one threat of physical force, the ad-hoc team applied a patchwork of monitoring, testing, and debugging tools that got the site operational. By April 2014, HealthCare.gov had achieved its initial goal of signing up 8 million people for medical insurance.

How an agile-development approach could save democracy

The silver lining of the HealthCare.gov debacle is the formation of two new departments charged with bringing an agile approach to government app development. The General Services Administration's 18F was established earlier this year with a mandate to "fail fast" rather than follow the standard government-IT propensity to fail big.

As Tech President's Alex Howard describes in an August 14, 2014, article, 18F is assisting agencies as they develop free, open-source services offered to the public via GitHub and other open-source repositories. Perhaps an even bigger shift in attitude by government officials is the founding last month of the U.S. Digital Service, which is modeled after a successful U.K. government app-development program.

To help agencies jettison their old development habits in favor of modern approaches, the White House released the Digital Services Playbook that provides 13 "plays" drawn from successful best practices in the private and public sectors. Two of the plays recommend deploying in a flexible hosting environment and automating testing and deployment.

Digital Service Plays

The government's Digital Services Playbook calls for agencies to implement modern development techniques such as flexible hosting and automated testing.

That's precisely where the Morpheus database-as-a-service (DBaaS) fits into the government's plans. Morpheus lets users spin up a new database instance in seconds -- there's no need to wait through a lengthy IT approval process to procure and provision a new DB; it's all done in the cloud.

In addition, users' core elastic, scalable, and reliable DB infrastructure is taken care of for them. Developers can focus on building the core functionality of the app rather than spending their time making the infrastructure reliable and scalable. Morpheus delivers continuous availability, fault tolerance, failover, and disaster recovery for all databases running on its service. Last but definitely not least, Morpheus is cost-efficient: there's no upfront setup cost, and users pay only for actual usage.

The Morpheus cloud database as a service (DBaaS) epitomizes the goals of the government's new agile-development philosophy. The service's real-time monitoring makes continuous testing a fundamental component of database development and management. Morpheus's on-demand scalability ensures that applications have plenty of room to grow without incurring large up-front costs. You get all this plus industry-leading performance, VPN security, and automatic backups, archiving, and replication.

Government IT gets the green light to use cloud app-development services

As groundbreaking as the Digital Services Playbook promises to be for government IT, another publication released at the same time may have an even greater positive impact on federal agencies. The TechFAR Handbook specifies how government contractors can support an "iterative, customer-driven software development process."

Tech President's Howard quotes Code for America founder Jen Pahlka stating that the handbook makes it clear to government IT staff and contractors alike that "agile development is not only perfectly legal, but [is] in fact the default methodology."

Critics point out that this is not the government's first attempt to make its application development processes more open and transparent. What's different this time is the sense of urgency surrounding efforts such as 18F and the U.S. Digital Service. Pahlka points out that people have lost faith in the government's ability to provide even basic digital services. Pahlka is quoted in a July 21, 2014, Government Technology interview by Colin Wood and Jessica Mulholland as stating, "If government is to regain the trust and faith of the public, we have to make services that work for users the norm, not the exception."

Cloud Database Security, Farms and Restaurants: The Importance of Knowing Your Sources


TL;DR: Securing your company's cloud-based assets starts with applying tried-and-true data-security practices, modified to address the unique characteristics of virtual-network environments. Cloud services are slowly gaining the trust of IT managers who are justifiably hesitant to extend their security perimeter to accommodate critical business assets in the cloud.

The fast pace of technological change doesn't faze IT pros, who live the axiom "The more things change, the more they stay the same." The solid security principles that have protected data centers for generations apply to securing your organization's assets that reside in the cloud. The key is to anticipate the new threats posed by cloud technology -- and by cyber criminals who now operate with a much higher level of sophistication.

In a September 18, 2014, article, ZDNet's Ram Lakshminarayanan breaks down the cloud-security challenge into four categories: 1) defending against cloud-based attacks by well-funded criminal organizations; 2) preventing unauthorized access and data breaches that exploit employees' stolen or compromised mobile devices; 3) maintaining and monitoring cloud-based APIs; and 4) ensuring compliance with the growing number and complexity of government regulations.

IT departments are noted for their deliberate approach to new technologies, and cloud-based data services are no different. According to a Ponemon Institute survey of more than 1,000 European data-security practitioners published this month (pdf), 64 percent believe their organization's use of cloud services reduces its ability to protect sensitive information.

The survey, which was sponsored by Netskope, blames much of the distrust on the cloud multiplier effect: IT is challenged to track the increasing number and type of devices connecting to the company's networks, as well as the cloud-hosted software employees are using, and the business-critical applications being used in the "cloud workspace."

Building trust between cloud service providers and their IT customers

No IT department will trust the organization's sensitive data to a service that fails to comply with privacy and data-security regulations. The Ponemon survey indicates that cloud services haven't convinced their potential customers in Europe of their trustworthiness: 72 percent of respondents strongly disagreed, disagreed, or were uncertain whether their cloud-service providers were in full compliance with privacy and data-security laws.

Data-security executives remain leery of cloud services' ability to secure their organization's critical business data. Credit: Ponemon Institute

Even more troubling for cloud service providers is the survey finding that 85 percent of respondents strongly disagreed, disagreed, or weren't sure whether their cloud service would notify them immediately in the event of a data breach that affected their company's confidential information or intellectual property.

The Morpheus database-as-a-service puts data security front and center by offering VPN connections to your databases in addition to online monitoring and support. Your databases are automatically backed up, replicated, and archived on the service's SSD-backed infrastructure.

Morpheus also features market-leading performance, availability, and reliability via direct connections to EC2 and colocation with the fastest peering points available. The service's real-time monitoring lets you identify and optimize the queries that are slowing your database's performance. Visit the Morpheus site for pricing information and to sign up for a free account.

Overcoming concerns about cloud-service security

Watching your data "leave the nest" can be difficult for any IT manager. Yet cloud service providers offer a level of security at least on par with that of on-premises networks. In a September 15, 2014, article on Automated Trader, Bryson Hopkins points out that Amazon Web Services and Microsoft Azure are two of the many public cloud services that comply with Service Organization Control (SOC), HIPAA, FedRAMP, ISO 27001, and other security standards.

The SANS Institute's Introduction to Securing a Cloud Environment (pdf) explains that despite the cloud's increased "attack surface" when compared with in-house servers, the risk of cloud-based data being breached is actually less than that of losing locally hosted data. Physical and premises security are handled by the cloud service but can be enhanced by applying a layered approach to security that uses virtual firewalls, security gateways, and other techniques.

Cloud services avoid resource contention and other potential problems resulting from multi-tenancy by reprovisioning virtual machines, overprovisioning to crowd out other tenants, and using fully reserved capacities.

Another technique for protecting sensitive data in multi-tenant environments is to isolate networks by configuring virtual switches or virtual LANs. The virtual machine and management traffic must be isolated from each other at the data link layer (layer 2) of the OSI model.

The key to protecting sensitive data in a multi-tenant cloud environment is to isolate virtual machine and management traffic at the data link layer. Credit: SANS Institute

In a June 27, 2014, article on CloudPro, Davey Winder brings the issue of cloud security full circle by highlighting the fact that the core principles are the same as for other forms of data security: an iron-clad policy teamed with encryption. The policy must limit privileged-user access by the service's employees and provide a way for customers to audit the cloud network.

One way to compare in-house data management and cloud-based management is via the farmer-restaurant analogy described in a September 15, 2014, article by Arun Anandasivam on IBM's Thoughts on Cloud site. If you buy your food directly from the farmer, you have a first-hand impression of the person who grew your food, but your options may be limited and you have to do the preparation work. If you buy your food from a restaurant, you likely have a wider selection to choose from and you needn't prepare the meal, but you have less control over the food's path from farm to kitchen, and you have fewer opportunities to determine beforehand whether the food meets your quality requirements.

That's not to say farmers are any more or less trustworthy than restaurants. You use the same senses to ensure you're getting what you paid for, just in different ways. So check out the Morpheus database-as-a-service to see what's on the menu!

DevOps: The Slow Tsunami That's Transforming IT


TL;DR: Old divisions in IT departments between app development and operations are crashing to the ground as users demand more apps with more features, and right now! By combining agile-development techniques and a hybrid public-private cloud methodology, companies realize the benefits of new technologies and place IT at the center of their operations.

The re-invention of the IT department is well underway. The end result will put technology at the core of every organization.

Gone are the days when IT was perceived as a cost center whose role was to support the company's revenue-generating operations. Today, software is embedded in every facet of the organization, whether the company makes lug nuts or spacecraft, lima beans or Linux distros.

The nexus of the IT transformation is the intersection of three disparate-yet-related trends: the merger of development and operations (DevOps), the wide-scale adoption of agile-development methodologies, and the rise of hybrid public/private clouds.

In a September 12, 2014, article, eWeek's Chris Preimesberger quotes a 2013 study by Puppet Labs indicating the switch to DevOps is well underway: 66 percent of the organizations surveyed had adopted DevOps or planned to do so, and 88 percent of telecoms use or intend to use a DevOps approach. The survey also found that DevOps companies deploy code 30 times more frequently than their traditional counterparts.

Closing the loop that links development and operations

A successful DevOps approach requires a closed loop connecting development and operations via continuous integration and continuous deployment. This entails adoption of an entirely new and fully automated development toolset. Traditional IT systems simply can't support the performance, scalability, and latency requirements of a continuous-deployment mentality. These are the precise areas where cloud architectures shine.

Agile DevOps

Agile development combines with DevOps to create a service-based approach to the provisioning, support, and maintenance of apps. Source: Dev2Ops

For example, the Morpheus database-as-a-service offers ultra-low latency via direct connections to EC2 and colocation at some of the fastest peering points available. You can monitor and optimize your apps in real time and spot trends via custom metrics. Morpheus's support staff and advanced robots monitor your database infrastructure continuously, and custom MongoDB and MySQL storage engines are available.

In addition, you're assured high availability via secure VPC connections to the network, which uses 100-percent bare-metal SSD storage. Visit the Morpheus site for pricing information and to sign up for a free account.

Continuous integration + continuous delivery = continuous testing

Developers steeped in the tradition of delivering complete, finished products have to turn their thinking around 180 degrees. Dr. Dobb's Andrew Binstock explains in a September 16, 2014, article that continuous delivery requires deploying tested, usable apps that are not feature-complete. The proliferation of mobile and web interfaces makes constant tweaks and updates not only possible but preferable.

Pushing out 10 or more updates in a day would have been unheard of in a turn-of-the-millennium IT department. The incessant test-deploy-feedback loop is possible only if developers and operations staff work together to ensure smooth roll-outs and fast, effective responses to the inevitable deployment errors and other problems.
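As a rough illustration of what continuous testing can look like in practice, here is a minimal smoke test, written with Python's standard unittest module, of the sort a deployment pipeline might run after every roll-out. The health-check URL and its JSON response format are assumptions made for the example, not a reference to any specific tool.

```python
# A minimal post-deployment smoke test. The endpoint URL and response shape
# are hypothetical; in practice the CI/CD tooling would trigger this suite
# automatically after every release.
import json
import unittest
from urllib import request

HEALTH_URL = "http://localhost:8080/health"  # hypothetical health endpoint

class SmokeTest(unittest.TestCase):
    def test_health_endpoint_reports_ok(self):
        with request.urlopen(HEALTH_URL, timeout=5) as resp:
            self.assertEqual(resp.status, 200)
            body = json.loads(resp.read())
            self.assertEqual(body.get("status"), "ok")

if __name__ == "__main__":
    unittest.main()
```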

Integrating development and operations so completely requires not just a reorganization of personnel but also a change in management philosophy. However, the benefits of such a holistic approach to IT outweigh the short-term pain of the organizational adjustments required.

A key to smoothing out some of the bumps is use of a hybrid-cloud philosophy that delivers the speed, scalability, and cost advantages of the public cloud while shielding the company's mission-critical applications from the vagaries of third-party platforms. Processor, storage, and network resources can be provisioned quickly as services by using web interfaces and APIs.

Seeing apps as a collection of discrete services

Imagine a car that's still drivable with only three of its four wheels in place. That's the idea behind developing applications as a set of discrete services, each of which is able to function independently of the others. Also, the services can be swapped in and out of apps on demand.

This is the "microservice architecture" described by Martin Fowler and James Lewis in a March 25, 2014, blog post. The many services that make up such an app run in their own processes and communicate via an HTTP resource API or other lightweight mechanism. The services can be written in different programming languages and can use various storage technologies because they require very little centralized management.
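As a rough sketch of that idea, the following minimal Python service (standard library only) runs in its own process and exposes a single resource over an HTTP/JSON API, which is all another service needs to know in order to talk to it. The resource name, port, and data are illustrative, not taken from Fowler and Lewis's article.

```python
# A minimal sketch of one microservice: an inventory lookup exposed as an
# HTTP resource that returns JSON. Another service only needs the URL.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY = {"sku-123": {"name": "widget", "in_stock": 42}}  # illustrative data

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/").split("/")[-1]
        item = INVENTORY.get(sku)
        body = json.dumps(item if item else {"error": "not found"}).encode()
        self.send_response(200 if item else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. GET http://localhost:8000/inventory/sku-123
    HTTPServer(("localhost", 8000), InventoryHandler).serve_forever()
```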

Microservice Architecture

The microservice architecture separates each function of the app as a separate service rather than encapsulating all functions in a single process. Source: Martin Fowler

By using services rather than libraries as components, the services can be deployed independently. When a service changes, only that service needs to be redeployed -- with some noteworthy exceptions, such as changes to service interfaces.

No longer are applications "delivered" by developers to users. In the world of DevOps, the team "developing" the app owns it throughout its lifecycle. Thus the "developers" take on the sys-admin and operations support/maintenance roles. Gone are the days of IT working on "projects"; today, all IT staff are working on "products." This cements the position of the company's technology workers at the center of all the organization's operations.

NoSQL Will Protect You From The Onslaught of Data Overload (or a bull charging down an alley)


TL;DR: As the amount of unstructured data being collected by organizations skyrockets, their existing databases come up short: they're too slow, too inflexible, and too expensive. What's needed is a DBMS that isn't constricted by the relational schema, and one that accommodates object-oriented data structures without the complexity and latency of object-relational mapping frameworks. NoSQL (a.k.a. Not Only SQL) provides the flexibility, scalability, and availability required to manage the deluge of unstructured data, albeit with some shortcomings of its own.

Data isn't what it used to be. Gone (or going) is the well-structured model of data stored neatly in tables and rows queried via the data's established relations. Along comes Google, Facebook, Twitter, Amazon, and untold other sources of unstructured data that simply doesn't fit comfortably in a conventional relational database.

That isn't to say RDBMSs are an endangered species. An August 24, 2014, article on TheServerSide.com points out that enterprises continue to prefer SQL databases, primarily for their reliability through compliance with the atomicity, consistency, isolation, and durability (ACID) model. Also, there are plenty of DBAs with relational SQL experience, but far fewer with NoSQL skills.

Still, RDBMSs don't accommodate unstructured data easily -- at least not in their current form. The future is clearly one in which the bulk of data in organizations is unstructured. As far back as June 2011, an IDC study (pdf) predicted that 90 percent of the data generated worldwide in the next decade would be unstructured. How much data are we talking about? Roughly 8,000 exabytes in 2015 -- the equivalent of 8 trillion gigabytes.

Growth in Data

The tremendous growth in the amount of data in the world -- most of which is unstructured -- requires a non-relational approach to management. Credit: IDC

As the IDC report points out, enterprises can no longer afford to "consume IT" as part of their internal infrastructure, but rather as an external service. This is particularly true as cloud services such as the Morpheus database as a service (DBaaS) incorporate the security and reliability required for companies to ensure the safety of their data and regulatory compliance.

By supporting both MongoDB and MySQL, Morpheus offers organizations the flexibility to transition existing databases and their new data stores to the cloud. They can use a single console to monitor their queries in real time to find and remove performance bottlenecks. Connections are secured via VPN, and all data is backed up, replicated, and archived automatically. Morpheus's SSD-backed infrastructure ensures fast connections to data stores, and direct links to EC2 provide ultra-low latency. Visit the Morpheus site for pricing information or to sign up for a free account.

Addressing SQL's scalability problem head-on

A primary shortcoming of SQL is that as the number of transactions being processed goes up, performance goes down. The traditional solution is to add more RDBMS servers, but doing so is expensive, not to mention a management nightmare as optimization and troubleshooting become ever more complex.

With NoSQL, your database scales horizontally rather than vertically. The resulting distributed databases host data on thousands of servers that can be added or removed without affecting performance. Of course, reality is rarely this simple. In a November 20, 2013, article on InformationWeek, Joe Masters Emison explains that high availability is simple to achieve on read-only distributed systems; writing to those systems is much trickier.
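A toy sketch of the horizontal-scaling idea follows: route each key to a shard by hashing it, so capacity grows by adding shards rather than by buying a bigger server. This is a simplification; production systems typically use consistent hashing and replication so that changing the shard count moves as little data as possible.

```python
# Toy hash-based sharding: each key deterministically lands on one shard.
import hashlib

SHARDS = [dict() for _ in range(4)]  # stand-ins for four database servers

def shard_for(key: str) -> dict:
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

def put(key: str, value) -> None:
    shard_for(key)[key] = value

def get(key: str):
    return shard_for(key).get(key)

if __name__ == "__main__":
    for i in range(10):
        put(f"user:{i}", {"visits": i})
    print(get("user:7"), [len(s) for s in SHARDS])  # value, plus per-shard counts
```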

As stated in the CAP theorem (or Brewer theorem, named after Eric Brewer), you can have strict availability, or strict consistency, but not both. NoSQL databases lean toward the availability side, at the expense of consistency. However, distributed databases are getting better at handling timeouts, although there's no way to do so without affecting the database's performance.

Another NoSQL advantage is that it doesn't lock you into a rigid schema the way SQL does. As Jnan Dash explains in a September 18, 2013, article on ZDNet, revisions to the data model can cause performance problems, but rarely do designers know all the facts about the data model before it goes into production. The need for a dynamic data model plays into NoSQL's strength of accommodating changes in markets, changes in the organization, and even changes in technology.

The benefits of NoSQL's data-model flexibility

NoSQL data models are grouped into four general categories: key-value (K-V) stores, document stores, column-oriented stores, and graph databases. Ben Scofield has rated these NoSQL database categories in comparison with relational databases. (Note that there is considerable variation between NoSQL implementations.)

 

The four general NoSQL categories are rated by Ben Scofield in terms of performance, scalability, flexibility, complexity, and functionality. Credit: Wikipedia

The fundamental data model of K-V stores is the associative array, also referred to as a map or dictionary. Each possible key of a key-value pair appears in a collection no more than once. As one of the simplest non-trivial data models, the K-V store is often extended to more powerful ordered models that maintain keys in lexicographic order, among other purposes.
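The few lines of Python below illustrate the associative-array model and how lexicographic key order enables cheap range scans; the key-naming scheme is just an example.

```python
# At heart a K-V store is an associative array; an "ordered" store keeps
# keys in lexicographic order so prefix/range scans are cheap.
store = {}

store["user:1001:name"] = "Ada"
store["user:1001:plan"] = "pro"
store["user:1002:name"] = "Grace"

# Range scan over one user's keys, relying on lexicographic key order.
for key in sorted(store):
    if key.startswith("user:1001:"):
        print(key, "->", store[key])
```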

The documents that make up a document store encapsulate or encode data in standard formats -- typically XML, YAML, or JSON (JavaScript Object Notation), but also binary BSON, PDF, and MS Office formats. Documents in collections are somewhat analogous to records in tables, although the documents in a collection won't necessarily share all fields the way records in a table do.
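For a concrete taste of the document model, here is a short sketch using the pymongo driver; it assumes the pymongo package is installed and a MongoDB instance is reachable at localhost:27017. Note that the two documents deliberately do not share an identical set of fields.

```python
# Two documents in the same collection with different fields -- no schema
# change required. Assumes a local MongoDB instance and the pymongo driver.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
customers = client["shop"]["customers"]

customers.insert_one({"name": "Ada Lovelace", "email": "ada@example.com"})
customers.insert_one({"name": "Grace Hopper", "tags": ["navy", "cobol"], "visits": 12})

print(customers.find_one({"name": "Grace Hopper"}))
```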

A NoSQL column is a key-value pair consisting of a unique name, a value, and a timestamp. The timestamp is used to distinguish valid content from stale content and thus addresses the consistency shortcomings of NoSQL. Columns in distributed databases don't need the uniformity of columns in relational databases because NoSQL "rows" aren't tied to "tables," which exist only conceptually in NoSQL.
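Here is a toy illustration of how that timestamp lets a store reconcile conflicting writes with a last-write-wins rule; real column stores use considerably more sophisticated conflict resolution, and the values shown are invented.

```python
# Last-write-wins reconciliation: keep whichever version of a column
# carries the newer timestamp. Purely illustrative.
def make_column(name, value, timestamp):
    return {"name": name, "value": value, "timestamp": timestamp}

def merge(existing, incoming):
    """Keep whichever version of the column is newer."""
    return incoming if incoming["timestamp"] > existing["timestamp"] else existing

replica = make_column("email", "old@example.com", timestamp=100)
update = make_column("email", "new@example.com", timestamp=105)
replica = merge(replica, update)
print(replica["value"])  # new@example.com
```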

Graph databases use nodes, edges, and properties to represent and store data without need of an index. Instead, each database element has a pointer to adjacent elements. Nodes can represent people, businesses, accounts, or other trackable items. Properties are data elements that pertain to the nodes, such as "age" for a person. Edges connect nodes to other nodes and to properties; they represent the relationships between the elements. Most of the analysis is done via the edges.
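The sketch below models that structure with plain Python dictionaries: nodes carry properties, typed edges connect nodes, and queries walk the edges directly rather than consulting an index. The data and relationship names are invented for illustration.

```python
# A minimal in-memory picture of the graph model: nodes, properties, edges.
nodes = {
    "alice": {"type": "person", "age": 34},
    "bob":   {"type": "person", "age": 41},
    "acme":  {"type": "business"},
}
edges = [
    ("alice", "works_at", "acme"),
    ("bob", "works_at", "acme"),
    ("alice", "knows", "bob"),
]

def neighbors(node, relationship):
    """Follow edges of one type outward from a node."""
    return [dst for src, rel, dst in edges if src == node and rel == relationship]

print(neighbors("alice", "knows"))                                # ['bob']
print([n for n in nodes if "acme" in neighbors(n, "works_at")])   # who works at acme
```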

Once you've separated the NoSQL hype from the reality, it becomes clear that there's plenty of room in the database environments of the future for NoSQL and SQL alike. Oracle, Microsoft, and other leading SQL providers have already added NoSQL extensions to their products, as InfoWorld's Eric Knorr explains in an August 25, 2014, article. And with DBaaS services such as Morpheus, you get the best of both worlds: MongoDB for your NoSQL needs, and MySQL for your RDBMs. It's always nice to have options!

How Is Google Analytics So Damn Fast?


TL;DR: Google Analytics stores a massive amount of statistical data from web sites across the globe. Retrieving reports quickly from such a large amount of data requires Google to use a custom solution that is easily scalable whenever more data needs to be stored.

At Google, any number of applications may need to be added to their infrastructure at any time, and each of these could potentially have extremely heavy workloads. Resource demands such as these can be difficult to meet, especially when there is a limited amount of time to get the required updates implemented.

If Google were to use a typical relational database on a single server node, it would need to upgrade its hardware each time capacity was reached. Given the number of applications being created and the amount of data being used by Google, this type of upgrade could quite possibly be necessary on a daily basis!

The load could also be shared across multiple server nodes, but once more than a few additional nodes are required, the complexity of the system becomes extremely difficult to maintain.

With these things in mind, a standard relational database setup would not be a particularly attractive option due to the difficulty of upgrading and maintaining the system on such a large scale.

Finding a Scalable Solution

In order to maintain speed and ensure that such incredibly quick hardware upgrades are not necessary, Google uses its own data storage solution called BigTable. Rather than store data relationally in tables, it stores data as a multi-dimensional sorted map.

This type of implementation falls under a broader heading for data storage, called a key/value store. This method of storage can provide some performance benefits and make the process of scaling much easier.
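A toy model of such a multi-dimensional sorted map, keyed by (row, column, timestamp), might look like the following; it is only an illustration of the general idea, not Google's implementation, and the row keys and values are made up.

```python
# BigTable-style idea in miniature: cells keyed by (row key, column, timestamp),
# read back in sorted key order so a row's cells and versions sit together.
cells = {}

def put(row, column, value, timestamp):
    cells[(row, column, timestamp)] = value

put("com.example/index.html", "pageviews", 1042, 20141001)
put("com.example/index.html", "pageviews", 1187, 20141002)
put("com.example/about.html", "pageviews", 311, 20141002)

for key in sorted(cells):
    print(key, "->", cells[key])
```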

Information Storage in a Relational Database

Relational databases store each piece of information in a single location, which is typically a column within a table. For a relational database, it is important to normalize the data. This process ensures that there is no duplication of data in other tables or columns.

For example, customer last names should always be stored in a particular column in a particular table. If a customer last name is found in another column or table within the database, then it should be removed and the original column and table should be referenced to retrieve the information.

The downside to this structure is that the database can become quite complex internally. Even a relatively simple query can have a large number of possible paths for execution, and all of these paths must be evaluated at run time to find out which one will be the most optimal. The more complex the database becomes, the more resources will need to be devoted to determining query paths at run time.
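The snippet below, using Python's built-in sqlite3 module, shows this in miniature under an assumed two-table schema: the customer's last name lives in exactly one table, a join reassembles the related data, and EXPLAIN QUERY PLAN reveals the execution path the engine chose on the application's behalf.

```python
# Normalized relational layout: names stored once, joins reassemble data,
# and the database engine (not the application) chooses the query path.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, last_name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Lovelace'), (2, 'Hopper');
    INSERT INTO orders VALUES (10, 1, 99.50), (11, 2, 12.00), (12, 1, 45.25);
""")

query = """SELECT c.last_name, SUM(o.total)
           FROM customers c JOIN orders o ON o.customer_id = c.id
           GROUP BY c.last_name"""
print(db.execute(query).fetchall())

# The engine decides at run time how to execute the join:
for row in db.execute("EXPLAIN QUERY PLAN " + query):
    print(row)
```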

 

Information Storage in a Key/Value Store

With a key/value store, duplicate data is acceptable. The idea is to make use of disk space, which can easily and cost-effectively be upgraded (especially when using a cloud), rather than other hardware resources that are more expensive to bring up to speed.

This data duplication is beneficial when it comes to simplifying queries, since related information can be stored together to avoid having numerous potential paths that a query could take to access the needed data.

Instead of using tables like a relational database, key/value stores use domains. A domain is a storage area where data can be placed, but does not require a predefined schema. Pieces of data within a domain are defined by keys, and these keys can have any number of attributes attached to them.

The attributes can simply be string values, but can also be something even more powerful: data types that match up with those of popular programming languages. These could include arrays, objects, integers, floats, Booleans, and other essential data types used in programming.

With key/value stores, data integrity and logic are handled by the application code (through the use of one or more APIs) rather than by a schema within the database itself. As a result, data retrieval becomes a matter of using the correct programming logic rather than relying on the database optimizer to determine the query path from a large number of possibilities based on the relations it needs to access.
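A minimal sketch of that arrangement follows: a schema-less "domain" is just a map of keys to attribute sets, and the integrity check lives in application code. The names, attributes, and validation rule are invented for illustration.

```python
# Schema-less domain: keys map to whatever attributes the application attaches;
# integrity checks live in application code instead of a database schema.
users = {}  # the "domain"

def save_user(key: str, attributes: dict) -> None:
    # Application-level integrity check, in place of a database schema.
    if "email" not in attributes or "@" not in attributes["email"]:
        raise ValueError("a valid email attribute is required")
    users[key] = attributes

save_user("user:1001", {"email": "ada@example.com", "age": 36, "tags": ["admin"]})
save_user("user:1002", {"email": "grace@example.com", "active": True})  # different attributes, no schema change
print(users["user:1001"])
```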

Data Access

How data access differs between a relational database and a key/value database. Source: readwrite

Getting Results

Google needs to store and retrieve copious amounts of data for many applications, among them Google Analytics, Google Maps, Gmail, and its popular web search index. In addition, more applications and data stores could be added at any time, making the BigTable key/value store an ideal solution for scalability.

BigTable is Google’s own custom solution, so how can a business obtain a similar performance and scalability boost to give its users a better experience? The good news is that there are other key/value store options available, and some can be run as a service from a cloud. This type of service is easily scalable, since more data storage can easily be purchased as needed on the cloud.

A Key/Value Store Option

There are several options for key/value stores. One of these is Mongo (MongoDB), a document database that stores information in a JSON-like format. This format is ideal for web applications, since JSON makes it easy to pass data around in a standard format among the various parts of an application that need it.

For example, Mongo is part of the MEAN stack -- Mongo, Express, AngularJS, and NodeJS -- a popular setup for programmers developing applications. Each of these pieces of the puzzle sends data to and from one or more of the others. Since everything, including the database, can use the JSON format, passing data among the various parts becomes much easier and more standardized.

MySQL vs. MongoDB

How MySQL and Mongo perform the same tasks. Source: Rick Osborne

How to Make Use of Mongo

Mongo can be installed and used on various operating systems, including Windows, Linux, and OS X. In this case, the scalability of the database would need to be maintained by adding storage space to the server on which it is installed.

Another option is to use Mongo as a service in the cloud. This allows for easy scalability, since a request can be made to the service provider to increase the available storage space at any time. In this way, new applications or additional data storage needs can be handled quickly and efficiently.

Morpheus is a great option for this service, offering Mongo as a highly scalable service in the cloud: Users of Morpheus get three shared nodes, full replica sets, and can seamlessly provision MongoDB instances. In addition, all of this runs on a high-performance, Solid State Drive (SSD) infrastructure, making it a very reliable data storage medium. Using Morpheus, a highly scalable database as a service can be running in no time!

The Key to Distributed Database Performance: Scalability


TL;DR: The realities of modern corporate networks make the move to distributed database architectures inevitable. How do you leverage the stability and security of traditional relational database designs while making the transition to distributed environments? One key consideration is to ensure your cloud databases are scalable enough to deliver the technology's cost and performance benefits.

Your conventional relational DBMS works without a hitch (mostly), yet you're pressured to convert it to a distributed database that scales horizontally in the cloud. Why? Your customers and users not only expect new capabilities, they need them to do their jobs. Topping the list of requirements is scalability.

David Maitland points out in an October 7, 2014, article on Bobsguide.com that startups in particular have to be prepared to see the demands on their databases expand from hundreds of requests per day to millions -- and back again -- in a very short time. Non-relational databases have the flexibility to grow and contract almost instantaneously as traffic patterns fluctuate. The key is managing the transition to scalable architectures.

Availability defines a distributed database

A truly distributed database is more than an RDBMS with one master and multiple slave nodes. One with multiple masters, or write nodes, definitely qualifies as distributed because it's all about availability: if one master fails, the system automatically rolls over to the next and the write is recorded. InformationWeek's Joe Masters Emison explains the distinction in a November 20, 2013, article.

The Evolving Web Paradigm

The evolution of database technology points to a "federated" database that is document and graph based, as well as globally queryable. Source: JeffSayre.com

The CAP theorem states that you can have strict availability or strict consistency, but not both. It happens all the time: a system is instructed to write different information to the same record at the same time. You can either stop writing (no availability) or write two different records (no consistency). In the real world, everything falls between these two extremes: business processes favor high availability first and deal with inconsistencies later.

Kyle Kingsbury's Call Me Maybe project measured the ability of distributed NoSQL databases to handle network partitions in real-world conflict situations. InformationWeek's Joe Masters Emison describes the project in a September 5, 2013, article. The upshot is that distributed databases fail -- as all databases sometimes do -- but they do so less cleanly than single-node databases, so tracking and correcting the resulting data loss requires asking a new set of questions.

The Morpheus database-as-a-service (DBaaS) delivers the flexibility modern databases require while ensuring the performance and security IT managers require. Morpheus provides the reliability of 100% bare-metal SSD hosting on a high-availability network with ultra-low latency to major peering points and cloud hosts. You can optimize queries in real time and analyze key database metrics.

Morpheus supports heterogeneous ElasticSearch, MongoDB, MySQL, and Redis databases. Visit the Morpheus site for pricing information or to sign up for a free trial account.

Securing distributed databases is also more complex, and not just because the data resides in multiple physical and virtual locations. As with most new technologies, the initial emphasis is on features rather than safety. Also, as the databases are used in production settings, unforeseen security concerns are more likely to be addressed as they arise. (The upside of this equation is that because the databases are more obscure, they present a smaller profile to the bad guys.)

The advent of the self-aware app

Databases are now designed to monitor their connections, available bandwidth, and other environmental factors. When demand surges, such as during the holiday shopping season, the database automatically puts more cloud servers online to handle the increased demand, and similarly puts them offline when demand returns to normal.

This on-demand flexibility relies on the cloud service's APIs, whether they use proprietary API calls or open-source technology such as OpenStack. Today's container-based architectures, such as Docker, encapsulate all resources required to run the app, including frameworks and libraries.
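A hedged sketch of such a scaling loop appears below. The FakeCloudAPI stand-in, its methods, and the utilization thresholds are all hypothetical; a real implementation would call the provider's actual API (OpenStack, for instance) and act on real metrics rather than random ones.

```python
# Hypothetical autoscaling loop: add servers under heavy load, remove them
# when demand returns to normal. FakeCloudAPI is a stand-in for a real
# provider client; thresholds are illustrative only.
import random
import time

SCALE_UP_AT = 0.80    # add a server above 80% average utilization
SCALE_DOWN_AT = 0.30  # remove one below 30% average utilization

class FakeCloudAPI:
    """Demonstration stand-in for a real cloud provider client."""
    def __init__(self):
        self.servers = 2
    def average_utilization(self):
        return random.random()   # a real client would report measured load
    def add_server(self):
        self.servers += 1
    def remove_server(self):
        self.servers -= 1

def autoscale_once(cloud):
    load = cloud.average_utilization()
    if load > SCALE_UP_AT:
        cloud.add_server()
    elif load < SCALE_DOWN_AT and cloud.servers > 1:
        cloud.remove_server()
    return load, cloud.servers

if __name__ == "__main__":
    cloud = FakeCloudAPI()
    for _ in range(5):            # in production this loop runs continuously
        print(autoscale_once(cloud))
        time.sleep(0.1)
```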


Contain(er) Yourself: Separating Docker Hype from the Tech's Reality


TL;DR: Even jaded IT veterans are sitting up and taking notice of the potential benefits of Docker's microservice model of app development, deployment, and maintenance. By containerizing the entire runtime environment, Docker ensures apps will function smoothly on any platform. By separating app components at such a granular level, Docker lets you apply patches and updates seamlessly without having to shut down the entire app.

The tech industry is noted for its incredibly short "next-big-thing" cycles. After all, "hype" and "tech" go together like mashed potatoes and gravy. Every now and then, one of these over-heated breakthroughs actually lives up to all the blather.

Docker is a Linux-based development environment designed to make it easy to create distributed applications. The Docker Engine is the packaging tool that containerizes all resources comprising the app's runtime environment. The Docker Hub is a cloud service for sharing application "artifacts" via public and private repositories, and automating the build pipeline.

Because Docker lets developers ship their code in a self-contained runtime environment, their apps run on any platform without the portability glitches that often drive sysadmins crazy when a program hiccups on a platform other than the one on which it was created.

How Docker out-virtualizes VMs

It's natural to compare Docker's microservice-based containers to virtual machines. As Lucas Carlson explains in a September 30, 2014, article on VentureBeat, you can fit 10 to 100 times as many containers on a server as VMs. More importantly, there's no need for the hypervisor intermediation layer required to manage VMs on physical hardware, as Docker VP of Services James Turnbull describes in a July 9, 2014, interview with Jodi Biddle on OpenSource.com.

Virtual Machines and Docker

Docker containers are faster and more efficient than virtual machines in part because they require no guest OS or separate hypervisor management layer. Source: Docker

Because Docker offers virtualization at the operating-system level, containers run in user space atop the OS kernel, according to Turnbull, which makes them incredibly lightweight and fast. Carlson's September 14, 2014, article on JavaWorld compares Docker development to a big Lego set of other people's containers that you can combine without worrying about incompatibilities.

You get many of the same plug-and-play capabilities when you choose to host your apps with the Morpheus cloud database-as-a-service (DBaaS). Morpheus lets you provision, deploy, and host MySQL, MongoDB, Redis, and ElasticSearch on a single dashboard. The service's SSD-based infrastructure and automatic daily backups ensure the reliability and accessibility your data requires.

Morpheus deploys all your database instances with a free full replica set, and the service's single-click DB provisioning allows you to bring up a new instance of any SQL, NoSQL, or in-memory database in seconds. Visit the Morpheus site for pricing information or to sign up for a free trial account.

Services such as Morpheus deliver the promise of burgeoning technologies such as Docker while allowing you to preserve your investment in existing database technologies. In a time of industry transition, it's great to know you can get the best of both worlds, minus the departmental upheaval.

The New Reality: Microservices Apply the Internet Model to App Development


TL;DR: As software becomes the force driving industries of all types and sizes, the nature of app development and management is changing fundamentally. Gone are the days of centralized control via complex, interdependent, hierarchical architectures. Welcome to the Internet model of software: small pieces, loosely joined via the microservice architecture. At the forefront of the new software model are business managers, who base software-design decisions on existing and future business processes.

Anyone who works in technology knows change is constant. But change is also hard -- especially the kind of transformational change presently occurring in the software business with the arrival of the microservices model of app development, deployment, and maintenance. As usual, not everybody gets it.

Considering how revolutionary the microservices approach to software design is, the misconceptions surrounding the technology are understandable. Diginomica's Phil Wainewright gets to the heart of the problem in a September 30, 2014, article. When Wainewright scanned the agenda for an upcoming conference on the software-defined enterprise, he was flabbergasted to see all the focus on activities within the data center: virtualization, containerization, and software-defined storage and networking.

As Wainewright points out, the last thing you want to do is add a layer of "efficient flexibility underneath a brittle and antiquated business infrastructure." That's the approach that doomed the service-oriented architectures of a decade ago. Instead, the data center must be perceived as merely one component of a configurable and extensible software-defined enterprise. The foundation of tomorrow's networks is simple, easily exchangeable microservices that permeate the organization rather than residing in a single, central repository.

Microservices complete the transition from tightly coupled components through SOA's loose coupling to complete decoupling to facilitate continuous delivery. Source: PricewaterhouseCoopers

To paraphrase a time-worn axiom, if you love your software, let it go. The company's business managers must drive the decisions about technology spending based on what they know of the organization's goals and assets.

Microservices: fine-grained, stateless, self-contained

Like SOA, microservices are designed to be more responsive and adaptable to business processes and needs. What doomed SOA approaches was the complexity they added to systems management by applying a middleware layer to software development and deployment. As ZDNet's Joe McKendrick explains in a September 30, 2014, article, the philosophy underlying microservices is to keep it simple.

The services are generally constructed using Node.js or other Web-oriented languages, or in functional languages such as Scala or the Lisp dialect Clojure, according to PricewaterhouseCoopers analysts Galen Gruman and Alan Morrison in their comprehensive microservices-architecture overview. Another defining characteristic is that microservices are a perfect fit for the APIs and RESTful services that are increasingly the basis for enterprise functions.

Microservice architectures are distinguished from service-oriented architectures in nearly every way. Source: PricewaterhouseCoopers

In the modern business world, "rapid" development simply isn't fast enough. The goal for app developers is continuous delivery of patches, updates, and enhancements. The discrete, self-contained, and loosely coupled nature of microservices allows them to be swapped out or ignored without affecting the performance of the application.

The March 25, 2014, microservices overview written by Martin Fowler and James Lewis provides perhaps the most in-depth examination of the technology. Even more important than the technical aspects of the microservices approach is the organizational changes the technology represents. In particular, development shifts from a project model, where the "team" hands off the end result and disbands, to a product model, where the people who build the app take ownership of it: "You build it, you run it."

The same development-maintenance integration is evident in the Morpheus database as a service, which allows you to provision, deploy, and host MySQL, MongoDB, Redis, and Elasticsearch databases using a single, simple console. The ability to spin up instances for elastic scalability based on the demands of a given moment, whether growing rapidly or shrinking marginally, means that your instances will be far more productive and efficient. In addition to residing on high-performance solid-state drives, your databases are provisioned with free live replicas for fault tolerance and failover. Visit the Morpheus site to create a free account.

How to Handle Huge Database Tables



Design your huge database tables to ensure they can handle queries without slowing the database's performance to a crawl.

TL;DR: Get a jump on query optimization in your databases by designing tables with speed in mind. This entails choosing the best data types for table fields, choosing the correct fields to index, and knowing when and how to split your tables. It also helps to be able to distinguish table partitioning from sharding.

It's a problem as old as databases themselves: large tables slow query performance. Out of this relatively straightforward problem has sprung an industry of indexing, tuning, and optimizing methodologies. The big question is, Which approach is best for your database system?

For MySQL databases in particular, query performance starts with the design of the table itself. Justin Ellingwood explains the basics of query optimization in MySQL and MariaDB in a Digital Ocean article from November 11, 2013, and updated on May 30, 2014.

For example, data elements that will be updated frequently should be in their own table to prevent the query cache from being dumped and rebuilt repeatedly. Generally speaking, the smaller the table, the faster the updates.

Similarly, limiting data sizes up front avoids wasted storage space. For example, use the "enum" type rather than "varchar" when a string field has only a limited number of valid entries.
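A minimal sketch of both ideas, using hypothetical table and column names: the constantly changing view counter lives in its own small table so its updates don't repeatedly invalidate cached query results for the main table, and the status field uses ENUM rather than VARCHAR because it has only a few valid values.

```sql
CREATE TABLE articles (
  id     INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  title  VARCHAR(200) NOT NULL,
  status ENUM('draft', 'published', 'archived') NOT NULL DEFAULT 'draft'
);

-- Frequently updated counters kept in a separate, smaller table
CREATE TABLE article_stats (
  article_id INT UNSIGNED PRIMARY KEY,
  view_count INT UNSIGNED NOT NULL DEFAULT 0,
  FOREIGN KEY (article_id) REFERENCES articles (id)
);
```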

There's more than one way to 'split' a table

Generally speaking, the bigger the database table, the longer it takes to access and modify. Unfortunately, database performance optimization isn't as simple as dividing big tables into several smaller ones. Michael Tocker describes 10 ways to improve the speed of large MySQL tables in an October 24, 2013, post on his Master MySQL blog.

One of the 10 methods is to use partitioning to reduce the size of indexes by creating several "tables" out of one, which keeps each index smaller and minimizes lock contention. Tocker also recommends using InnoDB rather than MyISAM even though MyISAM can be faster at inserts to the end of a table. MyISAM's table locking restricts updates and deletes, and its use of a single lock to protect the key buffer when loading or removing data from disk causes contention.
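As an illustration, here is a hedged sketch of range partitioning on a hypothetical logging table; each partition carries its own, smaller index, which is the effect described above.

```sql
CREATE TABLE page_views (
  id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  viewed_at DATETIME NOT NULL,
  url       VARCHAR(255) NOT NULL,
  PRIMARY KEY (id, viewed_at)  -- the partitioning column must appear in every unique key
) ENGINE=InnoDB
PARTITION BY RANGE (YEAR(viewed_at)) (
  PARTITION p2013 VALUES LESS THAN (2014),
  PARTITION p2014 VALUES LESS THAN (2015),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```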

Much confusion surrounds the concept of database table partitioning, particularly how partitioning is distinguished from sharding. When the question was posed on Quora, Mosaic CTO Tony Bako explained that partitioning divides logical data elements into multiple entities to improve performance, availability, and maintainability.

Conversely, sharding is a form of horizontal partitioning that creates replicas of the schema and then divides the data stored in each shard by the shard key. This requires that DBAs distribute load and space evenly across shards based on data-access patterns and space considerations.

Sharding uses horizontal partitioning to store data in physically separate databases; here a user table is sharded by values in the "s_age" field. Source: CUBRID

With the Morpheus database-as-a-service (DBaaS) you can monitor your MySQL, MongoDB, Redis, and ElasticSearch databases via a single dashboard. Morpheus lets you bring up a new instance of any SQL, NoSQL, or in-memory database with a single click. Automatic daily backups and free live replica sets for each provisioned database ensure that your data is secure.

In addition, database performance is optimized via Morpheus's SSD-backed infrastructure and direct patching into EC2 for ultra-low latency. Visit the Morpheus site for pricing information or to create a free account.

Morpheus Lessons: Best Practices for Upgrading MySQL


TL;DR: Thinking about upgrading your MySQL database? There are factors to consider and best practices to follow to help ensure the process goes as smoothly as possible. You will need to determine whether an upgrade is necessary, whether it is a minor or major upgrade, and what changes to query syntax, results, and performance it may bring.

Do You Need to Upgrade?

Whether to upgrade comes down to weighing risk against reward. Any upgrade carries the risk of breaking functionality or, in the worst case, losing data. On the other hand, you may be running into bugs that are fixed in a later release, suffering performance problems, or growing concerned about the security of the database as the current version ages. Any of these factors can make an upgrade necessary, so you will need to follow some best practices to mitigate as much risk as possible.

An example MySQL setup. Source: Programming Notes

Will the Upgrade be Minor or Major?

A minor upgrade is typically one where there is a small change in the third release number. For example, upgrading version 5.1.22 to 5.1.25 would be considered a minor upgrade. As long as the difference is relatively small, the risk to upgrade will be relatively low.

A major upgrade, on the other hand, involves a change in the second or the first number. For example, upgrading version 5.1.22 to 5.3.1 or 4.1.3 to 5.1.0 would usually be considered a major upgrade. In such cases, the risk becomes higher because more changes to the system have been implemented.

Consider the Changes

Before upgrading, it is best to examine the changes that have been made between the two versions. Changes to query syntax or the results of queries can cause your application to have erroneous data, errors, or even stop working. It is important to know what changes will need to be made in your queries to ensure that your system continues to function after the upgrade takes place.

An upgrade can also increase or decrease performance, depending on what has changed and on the system MySQL is running on. If a decrease in performance is likely, consider carefully whether this is the right time to upgrade.

Single-thread performance comparison. Source: Percona

Performing the Upgrade

Typically, the best practice when upgrading is to follow this procedure:

  • Dump your user grant data
  • Dump your regular data
  • Restore your regular data in the new version
  • Restore your user grant data in the new version

By doing this, you significantly reduce the risk of losing data, since you will have backup dump files. In addition, because you are using MySQL's dump and restore, the restored data will be written in the format of the new MySQL version, which helps mitigate compatibility issues.

Easy Upgrades

If you want to upgrade even more easily, consider using a database as a service in the cloud. Such services make it easy to provision, replicate and archive your database, and make upgrading easier via the use of available tools.

One such service is Morpheus, which offers not only MySQL, but also lets you use MongoDB, ElasticSearch, or Redis. In addition, all databases are deployed on a high performance infrastructure with Solid State Drives and are automatically backed up, replicated, and archived. So, take a look at pricing information or open a free account today to begin taking advantage of this service!

Password Encryption: Keeping Hackers from Obtaining Passwords in Your Database


TL;DR: When dealing with a user password, you want to be very careful in how this information is saved. Passwords stored in plain text within your database are a serious security risk both to you and your users, especially if your business is working with any of your users' financial or personal information. To keep from saving passwords in plain text, you can encrypt them using a salt and a hashing algorithm.

Plain Text Password Problems

While storing plain-text passwords can be handy when making prototypes and testing various systems, they can be disastrous when used in a production database. If an attacker somehow gains access to the database and its records, the hacker now can instantly make use of every user account. The reason: the passwords are all right there in plain text for the taking!

Back in 2006, the discussion site Reddit had a backup copy of its database stolen. Unfortunately, all of the passwords were stored in plain text. The person who had the data could easily have taken over any of the accounts in the backup database by using the user names and passwords it contained.

This may not seem like a major problem for a discussion forum. If the administrator and moderator passwords were changed quickly, the intruder likely would only be able to post spam or other types of messages the user would not normally write. However, these same users may have used the same login information for other tasks, such as online banking or credit card accounts. This would indeed be a problem for the user once a hacker had access to such an account!

Plain text passwords are not a game, they are a security risk! Source: MacTrast

Salting and Hashing a Password

To avoid having plain-text passwords in your database, you need to store a value that has been altered in a way that will be very difficult to crack. The first step is to add a salt, which is a random string that is added to the password. This value can be either prepended or appended to the password, and should be long in order to provide the best security.

After the password is salted, it should then be hashed. Hashing takes the salted password and turns it into a string of characters that can be stored in the database instead of the plain-text password. There are a number of hashing algorithms to choose from, such as SHA-256 and SHA-512.
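A minimal illustration using MySQL's built-in SHA2() function, with hypothetical table and column names. In production you would typically hash in the application layer with a deliberately slow algorithm such as bcrypt or PBKDF2, since a single salted SHA-256 hash can still be brute-forced quickly.

```sql
CREATE TABLE users (
  id            INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  username      VARCHAR(64) NOT NULL UNIQUE,
  salt          CHAR(32)    NOT NULL,
  password_hash CHAR(64)    NOT NULL  -- SHA-256 yields 64 hex characters
);

-- Store a salted hash instead of the plain-text password
SET @salt = MD5(RAND());  -- simplistic random salt, for illustration only
INSERT INTO users (username, salt, password_hash)
VALUES ('alice', @salt, SHA2(CONCAT(@salt, 'correct horse battery staple'), 256));

-- Verify a login attempt by recomputing the hash with the stored salt
SELECT id FROM users
WHERE username = 'alice'
  AND password_hash = SHA2(CONCAT(salt, 'correct horse battery staple'), 256);
```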

While implementing salted password hashing takes more time up front, it could save your users from having their passwords exposed or stolen. It is definitely a worthwhile safeguard for the people using your services.

An example of password creation and verification with salting and hashing in place. Source: PacketLife

Further Protection

Another way to help protect your users is to make sure the database itself is secure. Keeping the database on site may be difficult for your business, but there are companies that offer databases as a service in the cloud.

One such company is Morpheus, which includes VPN connections to databases and online monitoring to help keep your database secure. In addition, databases are backed up, replicated, and archived automatically on an SSD-backed infrastructure. So, give Morpheus a try and get a secure, reliable database for your business!

Making Software Development Simpler: Look for Repeatable Results, Reusable APIs and DBaaS


TL;DR: Software is complex -- to design, develop, deliver, and maintain. Everybody knows that, right? New app-development approaches and fundamental changes to the way businesses of all types operate are challenging the belief that meeting customers' software needs requires an army of specialists working in a tightly managed hierarchy. Focusing on repeatable results and reusable APIs helps take the complexity out of the development process.

What's holding up software development? Seven out of 10 software development teams have workers in different locations, and four out of five are bogged down by having to accommodate legacy systems. Life does not need to be like this. The rapidly expanding capabilities of cloud-based technologies and external services (like our very own Database-As-A-Service) allow developers to focus more time on application development. The end result: better software products. 

The results of the 2014 State of the IT Union survey are presented in a September 9, 2014, article in Dr. Dobb's Journal. Among the findings are that 58 percent of development teams are comprised of 10 or fewer people, while 36 percent work in groups of 11 to 50 developers. In addition, 70 percent of the teams have members working in different geographic locations, but that drops to 61 percent for agile development teams.

A primary contributor to the complexity of software development projects is the need to accommodate legacy software and data sources: 83 percent of the survey respondents reported having to deal with "technical debt" (obsolete hardware and software), which increases risk and development time. Software's inherent complexity is exacerbated by the realities of the modern organization: teams working continents apart, dealing with a tangled web of regulations and requirements, while adapting to new technologies that are turning development on its head.

The survey indicates that agile development projects are more likely to succeed because they focus on repeatable results rather than processes. It also highlights the importance of flexibility in managing software projects, each of which is as unique as the product it delivers.

Successful agile development requires discipline

Organizations are realizing the benefits of agile development, but often in piecemeal fashion as they are forced to accommodate legacy systems. There's more to agile development than new tools and processes, however. As Ben Linders points out in an October 16, 2014, article on InfoQ, the key to success for agile teams is discipline.

The misconception is that agile development operates without a single methodology. In fact, it is even more important to adhere to the framework the team has selected -- whether SCRUM, Kanban, Extreme Programming (XP), Lean, Agile Modeling, or another -- than it is when using traditional waterfall development techniques.

The keys to successfully managing an agile development team have little to do with technology and much to do with communication. Source: CodeProject

Focusing on APIs helps future-proof your apps

Imagine building the connections to your software before you build the software itself. That's the API-first approach some companies are taking in developing their products. Tech Target's Crystal Bedell describes the API-first approach to software development in an October 2014 article.

Bedell quotes Jeff Kaplan, a market analyst for ThinkStrategies, who sees APIs as the foundation for interoperability. In fact, your app's ability to integrate with the universe of platforms and environments is the source of much of its value.

Another benefit of an API-centric development strategy is the separation of all the functional components of the app, according to Progress Software's Matt Robinson. As new standards arise, you can reuse the components and adapt them to specific external services.

The Morpheus database-as-a-service also future-proofs your apps by being the first service to support SQL, NoSQL, and in-memory databases. You can provision, host, and deploy MySQL, MongoDB, Redis, and ElasticSearch using a simple, single-click interface. Visit the Morpheus site now to create a free account.

The Three Most Important Considerations in Selecting a MongoDB Shard Key


TL;DR: The efficient operation of your MongoDB database depends on which field in the documents you designate as the shard key. Since you have to select the shard key up front and can't change it later, you need to give the choice due consideration. For query-focused apps, the key should be limited to one or a few shards; for apps that entail a lot of scaling between clusters, create a key that writes efficiently.

The outlook is rosy for MongoDB, the most popular NoSQL DBMS. Research and Markets' March 2014 report entitled Global NoSQL Market 2014-2018 predicts that the overall NoSQL market will grow at a compound annual rate of 53 percent between 2013 and 2018. Much of the increase will be driven by increased use of big data in organizations of all sizes, according to the report.

Topping the list of MongoDB's advantages over relational databases are efficiency, easy scalability, and "deep query-ability," as Tutorialspoint's MongoDB Tutorial describes it. As usual, there's a catch: MongoDB's efficient data storage, scaling, and querying depend on sharding, and sharding depends on the careful selection of a shard key.

As the MongoDB Manual explains, every document in a collection has an indexed field or compound indexed field that determines how the collection's documents are distributed among a cluster's shards. Sharding allows the database to scale horizontally across commodity servers, which costs less than scaling vertically by adding processors, memory, and storage.

A mini-shard-key-selection vocabulary

As a sharded MongoDB collection grows, MongoDB splits its documents into chunks based on ranges of values in the shard key. Keep in mind that once you choose a shard key, you're stuck with it: you can't change it later.

The characteristic that makes a chunk easy to divide is cardinality. The MongoDB Manual recommends that your shard keys have a high degree of randomness to ensure the cluster's write operations are distributed evenly, which is referred to as write scaling. Conversely, when a field has a high degree of randomness, it becomes a challenge to target specific shards. By using a shard key that is tied to a single shard, queries run much more efficiently; this is called query isolation.

When a collection doesn't have a field suitable to use as a shard key, a compound shard key can be used, or a field can be added to serve as the key.

Choice of shard key depends on the nature of the collection

How do you know which field to use as the shard key? A post by Goran Zugic from May 2014 explains the three types of sharding MongoDB supports:

  • Range-based sharding splits collections based on shard key value.
  • Hash-based sharding determines hash values based on field values in the shard key.
  • Tag-aware sharding ties shard key values to specific shards and is commonly used for location-based applications.

The primary consideration when deciding which shard key to designate is how the collection will be used. Zugic presents it as a balancing act between query isolation and write scaling: the former is preferred when queries are routed to one shard or a small number of shards; the latter when efficient scaling of clusters between servers is paramount.

MongoDB's balancer ensures that all shards hold roughly the same number of chunks, as Conrad Irwin describes in a March 2014 post on the BugSnag site. Irwin lists three factors that determine choice of shard key:

  • Distribution of reads and writes: split reads evenly across all replica sets to scale working set size linearly among several shards, and to avoid writing to a single machine in a cluster.
  • Chunk size: make sure your shard key isn't used by so many documents that your chunks grow too large to move between shards.
  • Query hits: if your queries have to hit too many servers, latency increases, so craft your keys so queries run as efficiently as possible.

Irwin provides two examples. The simplest approach is to use a hash of the _id of your documents:

Source: BugSnag

In addition to distributing reads and writes efficiently, the technique guarantees that each document will have its own shard key, which maximizes chunk-ability.

The other example groups related documents in the index by project while also applying a hash to distinguish shard keys:

Source: BugSnag

A mini-decision tree for shard-key selection might look like this:

  • Hash the _id if there isn't a good candidate to serve as a grouping key in your application.
  • If there is a good grouping-key candidate in the app, go with it and use the _id to prevent your chunks from getting too big.
  • Be sure to distribute reads and writes evenly with whichever key you use to avoid sending all queries to the same machine.

This and other aspects of optimizing MongoDB databases can be handled through a single dashboard via the Morpheus database-as-a-service (DBaaS). Morpheus lets you provision, deploy, and host heterogeneous MySQL, MongoDB, Redis, and Elasticsearch databases. It is the first and only DBaaS that supports SQL, NoSQL, and in-memory databases. Visit the Morpheus site to sign up for a free account!


"Too Many Connections": How to Increase the MySQL Connection Count To Avoid This Problem


If your MySQL server's connection limit is set too low, your users will begin to receive a "Too many connections" error while trying to use your service. To fix this, you can increase the maximum number of connections the database allows, but there are some things to take into consideration before simply ramping up this number.

Items to Consider

Before you increase the connections limit, you will want to ensure that the machine on which the database is housed can handle the additional workload. The maximum number of connections that can be supported depends on the following variables:

  • The available RAM – The system will need to have enough RAM to handle the additional workload.
  • The thread library quality of the platform - This will vary based on the platform. For example, Windows can be limited by the POSIX compatibility layer it uses (though the limit no longer applies to MySQL v5.5 and up). However, there remain memory-usage concerns depending on the architecture (x86 vs. x64) and how much memory can be consumed per application process.
  • The required response time - Increasing the number could increase the amount of time it takes to respond to requests. This should be tested to ensure it meets your needs before going into production.
  • The amount of RAM used per connection - Again, RAM is important, so you will need to know if the RAM used per connection will overload the system or not.
  • The workload required for each connection - The workload will also factor in to what system resources are needed to handle the additional connections.

Another issue to consider is that you may also need to increase the open-files limit so that enough file handles are available for the additional connections.

Checking the Connection Limit

To see the current connection limit, run the SHOW VARIABLES command from the MySQL command line or from any of the available MySQL tools, such as phpMyAdmin. It returns a nicely formatted result showing the current value of max_connections.
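A minimal example of that check; the value in the commented result is illustrative (151 is the default in recent MySQL 5.x releases).

```sql
SHOW VARIABLES LIKE 'max_connections';

-- Example result:
-- +-----------------+-------+
-- | Variable_name   | Value |
-- +-----------------+-------+
-- | max_connections | 151   |
-- +-----------------+-------+
```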

Increasing the Connection Limit

To increase the global number of connections temporarily, you can run a SET GLOBAL statement from the MySQL command line.
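A minimal example; 500 is an arbitrary illustrative value, and the statement requires an account with the SUPER privilege.

```sql
-- Takes effect immediately, but lasts only until the server restarts
SET GLOBAL max_connections = 500;
```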

If you want to make the increase permanent, you will need to edit the my.cnf configuration file. Its location varies by operating system (Linux systems often store it in the /etc folder, for example). Open the file and add a line that sets max_connections to the number you want, as in the following example.
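A sketch of the relevant my.cnf lines; the setting belongs under the [mysqld] group, and 500 is again an illustrative value.

```ini
[mysqld]
max_connections = 500
```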

The next time you restart MySQL, the new setting will take effect and will remain in place until it is changed again.

Easily Scale a MySQL Database

Instead of worrying about these settings on your own system, you could opt to use a service like Morpheus, which offers databases as a service on the cloud. With Morpheus, you can easily and quickly set up your choice of several databases (including MySQL, MongoDB, Redis, and Elasticsearch).

In addition, MySQL and Redis have automatic backups, and each database instance is replicated, archived, and deployed on a high-performance infrastructure with solid-state drives. You can start a free account today to begin taking advantage of this service!

Your Options for Optimizing the Performance of MySQL Databases


A database can never be too optimized, and DBAs will never be completely satisfied with the performance of their creations. As your MySQL databases grow in size and complexity, taking full advantage of the optimizing tools built into the MySQL Workbench becomes increasingly important.

DBAs have something in common with NASCAR pit crew chiefs: No matter how well your MySQL database is performing, there's always a little voice in your head telling you, "I can make it go faster."

Of course, you can go overboard trying to fine-tune your database's performance. In reality, most database tweaking is done to address a particular performance glitch or to prevent the system from bogging down as the database grows in size and complexity.

One of the tools in the MySQL Workbench for optimizing your database is the Performance Dashboard. When you mouse over a graph or other element in the dashboard, you get a snapshot of server, network, and InnoDB metrics.

The Performance Dashboard in the MySQL Workbench provides at-a-glance views of key metrics of network traffic, server activity, and InnoDB storage. Source: MySQL.com

Other built-in optimization tools are Performance Reports for analyzing IO hotspots, high-cost SQL statements, Wait statistics, and InnoDB engine metrics; Visual Explain Plans that offer graphical views of SQL statement execution; and Query Statistics that report on client timing, network latency, server execution timing, index use, rows scanned, joins, temporary storage use, and other operations.

A maintenance release of the MySQL Workbench, version 6.2.4, was announced on November 20, 2014, and is described on the MySQL Workbench Team Blog. Among the new features in MySQL Workbench 6.2 are a spatial data viewer for graphing data sets with GEOMETRY data; enhanced Fabric Cluster connectivity; and a Metadata Locks View for finding and troubleshooting threads that are blocked or stuck waiting on a lock.

Peering deeper into your database's operation

One of the performance enhancements in MySQL 5.7 is the new Cost Model, as Marcin Szalowicz explains in a September 25, 2014, post on the MySQL Workbench blog. For example, Visual Explain's interface has been improved to facilitate optimizing query performance.

MySQL 5.7's Visual Explain interface now provides more insight for improving the query processing of your database. Source: MySQL.com

The new query results panel centralizes information about result sets, including Result Grid, Form Editor, Field Types, Query Stats, Spatial Viewer, and both traditional and Visual Execution Plans. Also new is the File > Run SQL Script option that makes it easy to execute huge SQL script files.

Attempts to optimize SQL tables automatically via the OPTIMIZE TABLE command often go nowhere. A post from March 2011 on Stack Overflow demonstrates that you may end up with slower performance and more storage space used rather than less. The best approach is to use "mysqlcheck" at the command line:

Run "mysqlcheck" at the command line to optimize a single database or all databases at once. Source: Stack Overflow

Alternatively, you could run a PHP script to optimize all the tables in a database:

A PHP script can be used to optimize all the tables in a database at one time. Source: Stack Overflow

A follow-up to the above post on DBA StackExchange points out that MySQL Workbench has a "hidden" maintenance tool called the Schema Inspector that opens an editor area in which you can inspect and tweak several pages at once.

What is evident from these exchanges is that database optimization remains a continuous process, even with the arrival of new tools and techniques. A principal advantage of the Morpheus database-as-a-service (DBaaS) is the use of a single dashboard to access statistics about all your MySQL, MongoDB, Redis, and ElasticSearch databases.

With Morpheus you can provision, deploy, and host SQL, NoSQL, and in-memory databases with a single click. The service supports a range of tools for connecting, configuring, and managing your databases, and automated backups for MySQL and Redis.

Visit the Morpheus site to create a free account. Database optimization has never been simpler!

MySQL's Index Hints Can Improve Query Performance, But Only If You Avoid the 'Gotchas'


In most cases, MySQL's optimizer chooses the fastest index option for queries automatically, but now and then it may hit a snag that slows your database queries to a crawl. You can use one of the three index hints -- USE INDEX, IGNORE INDEX, or FORCE INDEX -- to specify which indexes the optimizer uses or doesn't use. However, there are many limitations to using the hints, and most query-processing problems can be resolved by making things simpler rather than by making them more complicated.

The right index makes all the difference in the performance of a database server. Indexes let your queries focus on the rows that matter, and they allow you to set your preferred search order. Covering indexes (also called index-only queries) speed things up by responding to database queries without having to access data in the tables themselves.
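A small illustration of a covering index on a hypothetical orders table; because the index contains every column the query needs, MySQL can answer the query from the index alone.

```sql
CREATE TABLE orders (
  id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  customer_id INT UNSIGNED NOT NULL,
  total       DECIMAL(10,2) NOT NULL,
  created_at  DATETIME NOT NULL
);

CREATE INDEX idx_orders_customer_total ON orders (customer_id, total);

-- Answered entirely from the index ("Using index" appears in EXPLAIN output)
SELECT customer_id, total FROM orders WHERE customer_id = 42;
```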

Unfortunately, MySQL's optimizer doesn't always choose the most efficient query plan. As the MySQL Manual explains, you can view the optimizer's statement execution plan by preceding the SELECT statement with the keyword EXPLAIN. When the plan shows the optimizer making a poor choice, you can use index hints to specify which indexes it should or shouldn't use for the query.

The three syntax options for hints are USE INDEX, IGNORE INDEX, and FORCE INDEX: The first instructs MySQL to use only the index listed; the second prevents MySQL from using the indexes listed; and the third has the same effect as the first option, but with the added limitation that table scans occur only when none of the given indexes can be used.

MySQL's index_hint syntax lets you specify the index to be used to process a particular query. Source: MySQL Reference Manual

Why use FORCE INDEX at all? There may be times when you want to keep table scans to an absolute minimum. Any database is likely to field some queries that can't be satisfied without having to access some data residing only in the table, and outside any index.

To modify the index hint, apply a FOR clause: FOR JOIN, FOR ORDER BY, or FOR GROUP BY. The first applies hints only when MySQL is choosing how to find table rows or process joins, while the second and third apply the hints only when sorting or grouping rows, respectively. Note that whenever a covering index is used to access the table, the optimizer ignores attempts by the ORDER BY and GROUP BY modifiers to have it ignore the covering index.
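Continuing with the hypothetical orders table from above (the index names are likewise assumptions), here is a short sketch of the hint syntax; running EXPLAIN shows whether the optimizer actually honors the hint.

```sql
CREATE INDEX idx_orders_created_at ON orders (created_at);

-- Restrict the optimizer to a single index for this query
EXPLAIN SELECT customer_id, total
FROM orders USE INDEX (idx_orders_customer_total)
WHERE customer_id = 42;

-- Force an index, but only for sorting
SELECT customer_id, total
FROM orders FORCE INDEX FOR ORDER BY (idx_orders_created_at)
ORDER BY created_at DESC
LIMIT 10;
```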

Are you sure you need that index hint?

Once you get the hang of using index hints to improve query performance, you may be tempted to overuse the technique. In a Stack Overflow post from July 2013, a developer wasn't able to get MySQL to list his preferred index when he ran EXPLAIN. He was looking for a way to force MySQL to use that specific index for performance tests.

In this case, it was posited that no index hint was needed. Instead, he could just change the order of the specified indexes so that the left-most column in the index is used for row restriction. (While this approach is preferred in most situations, the particulars of the database in question made this solution impractical.)

SQL performance guru Markus Winand classifies optimizer hints as either restricting or supporting. Restricting hints Winand uses only reluctantly because they create potential points of failure in the future: a new index could be added that the optimizer can't access, or an object name used as a parameter could be changed at some point.

Supporting hints add some useful information that make the optimizer run better, but Winand claims such hints are rare. For example, the query may be asking for only a specified number of rows in a result, so the optimizer can choose an index that would be impractical to run on all rows.

Troubleshooting the performance of your databases doesn't get simpler than using the point-and-click interface of the Morpheus database-as-a-service (DBaaS). You can provision, deploy, and host heterogeneous MySQL, MongoDB, Redis, and ElasticSearch databases via Morpheus's single dashboard. Each database instance includes a free full replica set and daily backups.

Morpheus is the first and only DBaaS to support SQL, NoSQL, and in-memory databases. The service lets you use a range of tools for monitoring and optimizing your databases. Visit the Morpheus site to create a free account.

A New Twist to Active Archiving Adds Cloud Storage to the Mix


Companies large and small are taking a fresh look at their data archives, particularly how to convert them into active archives that deliver business intelligence while simultaneously reducing infrastructure costs. A new approach combines tape-to-NAS, or tNAS, with cloud storage to take advantage of tape's write speed and local availability, and also the cloud's cost savings, efficiency, and reliability.

Archival storage has long been the ugly step-sister of information management. You create data archives because you have to, whether to comply with government regulations or as your backup of last resort. About the only time you would need to access an archive is in response to an emergency.

Data archives were perceived as both a time sink for the people who have to create and manage the old data, and as a hardware expense because you have to pay for all those tape drives (usually) or disk drives (occasionally). Throw in the need to maintain a remote location to store the archive and you've got a major money hole in your IT department's budget.

This way of looking at your company's data archive went out with baggy jeans and flip phones. Today's active archives bear little resemblance to the dusty racks of tapes tucked into even-dustier closets of some backwater remote facility.

The two primary factors driving the adoption of active archiving are the need to extract useful business intelligence from the archives (thus treating the archive as a valuable resource); and the need to reduce storage costs generally and hardware purchases specifically.

Advances in tape-storage technology, such as Linear Tape Open (LTO) generations 6, 7, and beyond, promise to extend tape's lifespan, as IT Jungle's Alex Woodie explains in a September 15, 2014, article. However, companies are increasingly using a mix of tape, disk (solid state and magnetic), and cloud storage to create their active archives.

Tape as a frontend to your cloud-based active archive

Before your company trusts its archive to cloud storage, you have to consider worst-case scenarios: What if you can't access your data? What if uploads and downloads are too slow? What if the storage provider goes out of business or otherwise places your data at risk?

To address these and other possibilities, Simon Watkins of the Active Archive Alliance proposes using tape-to-NAS (tNAS) as a frontend to a cloud-based active archive. In a December 1, 2014, article on the Alliance's blog, Watkins describes a tape-library tNAS that runs NAS gateway software and stores data in the Linear Tape File System (LTFS) format.

The tNAS approach addresses bandwidth congestion by configuring the cloud as a tNAS tier: data is written quickly to tape first and then transferred to the cloud archive when bandwidth is available. The tape copy also means you always have an up-to-date version of your data should the cloud archive become unavailable for any reason, and it simplifies transferring your archive to another cloud service.

A white paper published by HP in October 2014 presents a tNAS architecture that is able to replicate the archive concurrently to both tape and cloud storage. The simultaneous cloud/tape replication can be configured as a mirror or as tiers.

 

This tNAS design combines tape and cloud archival storage and supports concurrent replication. Source: HP

To mirror the tape and cloud replication, place both the cloud and tape volumes behind the cache, designating one primary and the other secondary. Data is sent from the cache to both volumes either at a threshold you set or when the cache becomes full.

Tape-cloud tiering takes advantage of tape's fast write speeds and is best when performance is paramount. In this model, tape is always the primary archive, and users are denied access to the cloud archive.

With the Morpheus database-as-a-service (DBaaS), replication of your MySQL, MongoDB, Redis, and ElasticSearch databases is automatic -- and free. Morpheus is the first and only DBaaS that supports SQL, NoSQL, and in-memory databases.

Morpheus lets you monitor all your databases from a single dashboard. The service's SSD-backed infrastructure ensures high availability and reliability. Visit the Morpheus site to create a free account.

Cloud-based Disaster Recovery: Data Security Without Breaking the Bank


The necessity of having a rock-solid disaster-recovery plan in place has been made abundantly clear by recent high-profile data breaches. Advances in cloud-based DR allow organizations of all sizes to ensure they'll be up and running quickly after whatever disaster may happen their way.

It just got a lot easier to convince senior management at your company that they should allocate some funds for implementation of an iron-clad disaster-recovery program. That may be one of the few silver linings of the data breach that now threatens to bring down Sony Pictures Entertainment.

It has always been a challenge for IT managers to make a business case for disaster-recovery spending. Computing UK's Mark Worts explains in a December 1, 2014, article that because DR is all about mitigating risk, senior executives strive to minimize upfront costs and avoid long-term contracts. Cloud-based DR addresses both of these concerns: it is inexpensive to implement, and it allows companies to pay for only the resources they require right here, right now.

Small and midsized businesses, and departments within enterprises are in the best position to benefit from cloud-based DR, according to Worts. Because of their complex, distributed infrastructures, it can be challenging for enterprises to realize reasonable recovery time objectives (RTO) and recovery point objectives (RPO) relying primarily on cloud DR services.

Components of a cloud-based DR configuration

Researchers from the University of Massachusetts and AT&T Labs developed a model for a low-cost cloud-based DR service (PDF) that has the potential to enhance business continuity over existing methods. The model depends on warm standby replicas (standby servers are available but take minutes to get running) rather than hot standby (synchronous replication for immediate availability) or cold standby (standby servers are not available right away, so recovery may take hours or days).

The first challenge is for the system to know when a failure has occurred; transient failures or network segmentation can trigger false alarms, for example. Cloud services can help detect system failures by monitoring across distributed networks. The system must also know when to fall back once the primary system has been restored.

 

A low-cost cloud-based disaster recovery system configured with three web servers and one database at the primary site. Source: University of Massachusetts

Using the RUBiS benchmark application, the researchers demonstrate that their cloud-based approach offers significant cost savings over use of a colocation facility. For example, only one "small" virtual machine is required to run the DR server in the cloud's replication mode, while colocation DR entails provisioning four "large" servers to run the application during failover.

 

The cloud-based RUBiS DR solution is much less expensive to operate than a colocation approach for a typical database server implementation. Source: University of Massachusetts

A key cloud-DR advantage: Run your apps remotely

The traditional approaches to disaster recovery usually entail tape storage in some musty, offsite facility. Few organizations can afford the luxury of dual data centers, which duplicate all data and IT operations automatically and offer immediate failover. The modern approach to DR takes advantage of cloud services' ability to replicate instances of virtual machines, as TechTarget's Andrew Reichman describes in a November 2014 article.

By combining compute resources with the stored data, cloud DR services let you run your critical applications in the cloud while your primary facilities are restored. SIOS Technology's Jerry Melnick points out in a December 10, 2014, EnterpriseTech post that business-critical applications such as SQL Server, Oracle, and SAP do not tolerate downtime, data loss, or performance slowdowns.

It's possible to transfer the application failover of locally managed server clusters to their cloud counterparts by using SANless clustering software to synchronize storage in cloud cluster nodes. In such instances, efficient synchronous or asynchronous replication creates virtualized storage with the characteristics of SAN failover software.

Failover protection is a paramount feature of the Morpheus database-as-a-service (DBaaS). Morpheus includes a free full replica set with every database instance you create. The service supports MySQL, MongoDB, Redis, and ElasticSearch databases; it is the first and only DBaaS that works with SQL, NoSQL, and in-memory databases.

With Morpheus's single-click provisioning, you can monitor all your databases via a single dashboard. Automatic daily backups are provided for MySQL and Redis databases, and your data is safely stored on the service's SSD-backed infrastructure. Visit the Morpheus site to create a free account.
