
How Did MongoDB Get Its Name?


How MongoDB got its name

Curious how MongoDB got its name? Here's your quick history lesson for the day. 

Example of a MongoDB query. Source: MongoDB.

The company behind MongoDB

MongoDB was originally developed by MongoDB, Inc., which at the time (2007) was named 10gen. The company was founded by Dwight Merriman, Kevin P. Ryan, and Eliot Horowitz, all of whom had previously worked together at DoubleClick.

At first, 10gen wanted to build an open-source platform as a service. The company wanted all of the components of its software to be completely open-source, but could not find a database that met its needs and provided the type of scalability required for the applications it was building.

The platform 10gen was working on was named Babble and was going to be similar to the Google App Engine. As it turned out, there wasn't a big market for Babble, but users and non-users alike agreed that the database 10gen had created to accompany the platform was excellent and said they would be happy to use it on its own.

While originally simply dubbed "p", the database was officially named MongoDB, with "Mongo" being short for the word humongous. Given the input 10gen had received about MongoDB, the company decided it would indeed be best to scrap the Babble project and release MongoDB on its own as an open-source database platform in 2009.

By 2012, 10gen ranked ninth on the Wall Street Journal's "The Next Big Thing 2012" list and had six offices around the world. In 2013, 10gen renamed itself MongoDB, Inc. to strengthen the association with its popular primary product.

The impact of MongoDB

As time went on, MongoDB moved up the ranks to become the most popular database for document stores, and the fourth most popular database system overall. It is used by highly successful companies such as eBay, Adobe, LinkedIn, Foursquare, McAfee, and Shutterfly.

It is also used by software developers as part of the MEAN stack, which includes MongoDB (database), Express (web app framework), AngularJS (MVC JavaScript front-end framework), and Node.js (platform for server-side apps). Part of this stack's popularity is that JavaScript and JSON/BSON notation can be used across every layer, allowing developers to move through and develop within each piece of the stack with ease.

The MEAN stack. Source: modernweb.

All in all, MongoDB can be an excellent choice for a database for your applications, especially if you deal with large amounts of data that will continually expand over time!

To see how Morpheus can help you get more out of your MongoDB, sign up for a demo today!


Hosting For Freelance Developers: PaaS, VPS, Cloud, And More


By Nermin Hajdarbegovic, Technical Editor at Toptal

At a glance, the hosting industry may not appear exciting, but it's the grunts in data centres the world over that keep our industry going. They are, quite literally, the backbone of the Internet, and as such they make everything possible: from e-commerce sites to smart mobile apps for our latest toys. The heavy lifting is done in boring data centres, not on our flashy smartphones and wafer-thin notebooks.

Whether you’re creating a virtual storefront, deploying an app, or simply doing some third-party testing and development, chances are you need some server muscle. The good news is that there is a lot to choose from. The hosting industry may not be loud or exciting, but it never sleeps; it’s a dog-eat-dog world, with cutthroat pricing, a lot of innovation behind the scenes, and cyclical hardware updates. Cloud, IaaS and PaaS have changed the way many developers and businesses operate, and these are relatively recent innovations.

In this post I will look at some hosting basics from the perspective of a freelance developer: what to choose and what to stay away from. Why single out freelance software engineers? Well, because many need their own dev environment, while at the same time working with various clients. Unfortunately, this also means that they usually have no say when it comes to deployment. For example, it’s the client’s decision how and where a particular web app will be hosted, and a freelancer hired on a short-term basis usually has no say in the decision. This is a management issue, so I will not address it in this post other than to say that even freelancers need to be aware of the options out there. Their hands may be tied, but in some cases clients will ask for their input, and software engineers should help them make an informed decision. Earlier this week, we covered one way of blurring the line between development and operations: DevOps. In case you missed that post, I urge you to check it out and see why DevOps integration can have an impact on hosting as well.

Luckily, the hosting industry tries to cater to dev demand, so many hosting companies offer plans tailored for developers. But wait, aren’t all web hosting plans just as good for developers as these “developer” plans? Is this just clever marketing and a cheap SEO trick?

Filtering Out the Noise 

So, how does one go about finding the right hosting plan? Google is the obvious place to start, so I tried searching for “hosting for developers.” By now, you can probably see where I am going with this. That particular search yielded 85 million results and enough ads to make Google shareholders pop open a bottle of champagne.


If you’re a software engineer looking for good hosting, it’s not a good idea to google for answers. Here’s why.

There is a very good reason for this, and I reached out to some hosting specialists to get a better idea of what goes on behind the scenes.

Adam Wood, Web Hosting Expert and Author of Ultimate Guide to Web Hosting explained: 

“Stay away from Googling ‘hosting for developers.’ That shows you hosts that have spent a lot of money on SEO, not a lot of energy on building an excellent platform.” 

Wood confirmed what most of us knew already: A lot of “hosting for developers” plans are marketing gimmicks. However, he stressed that they often offer perfectly fine hosting plans in their own right.

“The ‘hosting’ is real, the ‘for developers’ part is just marketing,” he added.

Although Wood works for hosting review site WhoIsHostingThis, he believes developers searching for a new host should rely on more than online searches.

Instead of resorting to Google, your best bet for finding the perfect plan for your dev needs is word of mouth and old-fashioned research:

  • Check out major tech blogs from developers using the same stack as you.
  • Reach out to the community and ask for advice.
  • Take a closer look at hosting plans offered by your current host. Look for rapid deployment tools, integration with other developer tools, testing support, and so on.
  • Make sure you have clear needs and priorities; there’s no room for ambiguity.
  • Base your decision on up-to-date information.
Small Hosts May Have Trouble Keeping Up

But what about the hundreds of thousands of hosting plans tailored for developers? Well, they’re really not special and in most cases you can get a similar level of service and support on a “plain Jane” hosting plan.

Is there even a need for these small and inexpensive plans? Yes, there is. Although seasoned veterans probably won’t use them, they are still a piece of the puzzle, allowing small developers, hobbyists and students to hone their skills on the cheap, using shared hosting plans that cost less than a gym membership. Nobody is going to host a few local hobby sites on AWS, and kids designing their first WordPress sites won’t get a VPS. In most cases, they will use the cheapest option out there.

Cheap, shared hosting plans are the bread and butter of many hosting outfits, so you can get one from an industry leader, or a tiny, regional host. The trouble with small hosts is that most of them rely on conventional reseller hosting or re-packaging cloud hosting from AWS and other cloud giants. These plans are then marketed as shared hosting plans, VPS plans, or reseller plans.

Bottom line: If something goes wrong with your small reseller plan, who are you going to call in the middle of the night?

Small hosts are fading and this is more or less an irreversible trend. Data centres are insanely capital-intensive; they’re the Internet equivalent of power stations, they keep getting bigger and more efficient, while at the same time competing to offer lower pricing and superior service. This obviously involves a lot of investment, from huge facilities with excellent on-site security and support through air-conditioning, redundant power supply and amazingly expensive Internet infrastructure. On top of that, hosts need a steady stream of cutting edge hardware. Flagship Xeons and SAS SSDs don’t come cheap.

There is simply no room for small players in the data centre game.

Small resellers still have a role to play, usually by offering niche services or localisation, such as local support in languages the big hosts don’t cover. However, most of these niches and potential advantages don’t mean a whole lot for the average developer.

The PaaS Revolution

Less than a decade ago, the industry revolved around dedicated and shared hosting, and I don’t think I need to explain what they are and how they work.

Cloud services entered the fray a few years ago, offering unprecedented reliability and scalability. The latest industry trends offer a number of exciting possibilities for developers in the form of developer-centric Platform-as-a-Service (PaaS) offerings.


PaaS is the new black for many developers. How does it compare to traditional hosting?

Most developers are already familiar with big PaaS services like Heroku, Pantheon, and OpenShift. Many of these providers began life as platforms for a specific framework or application. For example, Heroku was a Ruby-on-Rails host, while Pantheon was a Drupal managed-hosting provider that later expanded to WordPress.

PaaS services can be viewed as the next logical step in the evolution of managed hosting. However, unlike managed hosting, PaaS is geared almost exclusively toward developers. This means PaaS services are tailored to meet the needs of individual developers and teams. It’s not simply about hosting; PaaS is all about integrating into a team’s preferred workflow by incorporating a number of features designed to boost productivity. PaaS providers usually offer a host of useful features:

  • Integration with other developer tools like GitHub.
  • Support for Continuous Integration (CI) tools like Drone.io, Jenkins, and Travis CI.
  • Multiple, clonable environments for development, testing, beta, and production.
  • Support for various automated testing suites.

Best of all, many PaaS providers offer free developer accounts. Heroku and Pantheon both allow developers to sample the platform, thus encouraging them to use it for projects later on. In addition, if one of these experimental projects takes off, developers are likely to remain on the platform. 

It’s clever marketing, and it’s also an offer a lot of developers can’t afford to ignore. PaaS is here to stay and if you haven’t taken the plunge yet, perhaps it is time to do a little research and see what’s out there.

Traditional Hosting And Cloud Offerings

Dedicated and shared hosting aren’t going anywhere. They were the mainstays of web hosting for two decades and they’re still going strong. A lot of businesses rely on dedicated servers or VPS servers for their everyday operations. Some businesses choose to use cloud or PaaS for specific tasks, alongside their existing server infrastructure.

In some situations, PaaS can prove prohibitively expensive, but powerful dedicated servers don’t come cheap, either. The good news is that PaaS can give you a good idea of the sort of resources you will need before you decide to move to a dedicated server. Further, PaaS services tend to offer better support than managed VPS servers or dedicated servers.

Of course, all this is subjective and depends on your requirements and budget.


PaaS, dedicated servers, VPS plans, or your own slice of the Cloud. What should a freelance software engineer choose?

Call me old-fashioned, but I still believe dedicated servers are the best way of hosting most stuff. However, this only applies to mature projects; development is a whole other ball game. Managed dedicated servers offer exceptional reliability and good levels of support, along with good value for money.

Properly used, dedicated servers and PaaS can speed up deployment as well, as Adam Wood explains:

“I can spin up a new Ruby-on-Rails app on Heroku in a matter of minutes. Doing the same thing on AWS takes me half a day, and I constantly feel like I’m about to break something.”

Cloud services are inherently more efficient than dedicated hardware because you only use the resources you need at any given time. For example, if you are operating a service that gets most of its traffic during office hours (from users in the Americas), your dedicated server will be underutilised for 12 to 16 hours. Despite this obvious efficiency gap, dedicated servers can still end up cheaper than cloud solutions. In addition, customers can customise and upgrade them the way they see fit.

Cloud is catching up, but dedicated servers will still be around for years to come. They’re obviously not a good solution for individual developers, but are for a lot of businesses. VPS plans cost a lot less than dedicated servers and are easily within the reach of individual developers, even though they don’t offer the same level of freedom as dedicated servers.

What Does This Mean For Freelancers?

The good news is that most freelance software engineers don’t need to worry about every hosting option out there. While it’s true that different clients have different ways of doing things, in most cases it’s the client’s problem rather than yours.

This does not mean that different hosting choices have no implications for freelancers; they do, but they are limited. It is always a good idea to familiarise yourself with the infrastructure before getting on board a project, but there is not much to worry about. Most new hosting services were developed to make developers’ lives easier and keep them focused on their side of the project. One of the positive side-effects of PaaS and cloud adoption is increasing standardisation; most stacks are mature and enjoy wide adoption, so there’s not a lot that can go wrong.

Besides, you can’t do anything about the client’s choice of infrastructure, for better or for worse. But what about your own server environment?

There is no one-size-fits-all solution; it all depends on your requirements, your stack, and your budget. PaaS services are gaining popularity, but they might not be a great solution for developers on a tight budget, or those who don’t need a hosting environment every day. For many freelancers and small, independent developers, VPS is still the way to go. Depending on what you do, an entry-level managed dedicated server is an option, and if you do small turnkey web projects, you may even consider some reseller packages. 

The fact that big hosting companies continue to compete for developers’ business is, ultimately, a good thing. It means they’re forced to roll out timely updates and offer better support across all hosting packages in order to remain competitive. They are not really competing with PaaS and cloud services, but they still want a slice of the pie.

Remember how PaaS providers offer developers various incentives to get on board, just so they could get their business in the long run? It could be argued that conventional hosting companies are trying to do the same by luring novice developers to their platform, hoping that they will be loyal customers and use their servers to host a couple of dozen projects a few years down the road.

The Future Of Hosting

Although the hosting industry may not appear as vibrant and innovative as other tech sectors, this is not entirely fair. Of course, it will always look bland and unexciting compared to some fast-paced sectors, but we’re talking about infrastructure, not some sort of get-rich-quick scheme.

The hosting industry is changing, and it is innovative. It just takes a bit longer to deploy new technology, that’s all. For example, a logistics company probably changes its company smartphones every year or two, but its delivery vehicles aren’t updated nearly as often, yet they’re the backbone of the business.

Let’s take a quick look at some hosting industry trends that are becoming relevant from a software development perspective:

  • Continued development and growth of cloud and PaaS services.
  • Evolution of managed hosting into quasi-PaaS services.
  • Increasing integration with industry-standard tools.
  • New hardware that might make dedicated servers cheaper.

Cloud and PaaS services will continue to mature and grow. More importantly, as competition heats up, prices should come down. The possibility of integrating various development tools and features into affordable hosting plans will continue to make them attractive from a financial perspective. Moving up on the price scale, managed hosting could also evolve to encompass some features and services offered by PaaS. If you’re interested in hosting industry trends, I suggest you check out this Forbes compilation of cloud market forecasts for 2015 and beyond.

Dedicated servers will never be cheap, at least not compared to shared and VPS plans. However, they are getting cheaper, and they could get a boost in the form of frugal and inexpensive ARM hardware. ARM-based processors tend to offer superior efficiency compared to x86 processors, yet they are relatively cheap to develop and deploy. Some flagship smartphones ship with quad-core chips, based on 64-bit Cortex-A57 CPU cores, and the same cores are coming to ARM-based server processors.

As a chip geek, I could go on, but we intend to take an in-depth look at the emerging field of ARM servers in one of our upcoming blog posts, so if you’re interested, stay tuned.

This article originally appeared on the Toptal blog: https://www.toptal.com/it/hosting-for-freelance-developers-paas

To try out Morpheus' leading PaaS offering, sign up for a free demo here.

What is Data Logging?

Data logging is one of the most important tools available to IT pros. So, do you know what it is?

 

Data logging is often talked about as a helpful tool that you can use when trying to maintain your various servers, databases, and other systems that go into an application. So, what is data logging and what does it do that helps you maintain your applications more easily?

Data Logging Defined

Generally speaking, data logging is the recording of data over a period of time by a computer system or a special standalone device that can be tailored to a specific use case. The recorded data can then be retrieved and analyzed to help determine whether things ran smoothly while the data was being recorded, and to help identify what happened if there were any issues that need further attention. Standalone data loggers are used in many familiar settings to gather information such as weather conditions, traffic conditions, and wildlife data. These devices make it possible for data recording to take place around the clock and automatically, without the need for a person to be present with the data logger.

A data logger for a weather station. Source: Wikipedia.

For instance, when performing wildlife research, it can be beneficial to have such automated logging, as wildlife may behave differently when one or more humans are present. For the purposes of application monitoring, data logging records information pertinent to the maintenance of the infrastructure that is required for an application to run.

How Data Logging Helps With App Maintenance

When maintaining apps, it is always helpful to know when and where something went wrong. In many cases, such logging can help you avoid problems by alerting you that an issue may arise soon (a server beginning to respond slowly, for instance). Data logging can also help you keep track of statistics over time, such as the overall uptime, the uptime of specific servers, average response time, and other data that can help you tweak your applications for optimum uptime and performance.
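To make this concrete, here is a minimal sketch (the URL and thresholds are hypothetical, and this is not Morpheus functionality) of a Python script that logs the response time of a health-check endpoint and records a warning when responses turn slow:

import logging
import time
import urllib.request

# Log timestamped entries to a file for later analysis.
logging.basicConfig(filename="uptime.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def check(url, slow_threshold=2.0):
    start = time.time()
    try:
        urllib.request.urlopen(url, timeout=10)
        elapsed = time.time() - start
        logging.info("UP %s response_time=%.3fs", url, elapsed)
        if elapsed > slow_threshold:
            # An early-warning signal before an outright outage.
            logging.warning("SLOW %s took %.3fs", url, elapsed)
    except OSError as exc:
        logging.error("DOWN %s error=%s", url, exc)

check("http://example.com")

Run on a schedule (cron, for instance), a log like this yields uptime and average response-time figures over any window you care about.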

Morpheus and Monitoring

If you are looking for a monitoring system with excellent data logging and analysis reports, you should give Morpheus a try. With Morpheus, data logging is automatic as you provision servers and apps. Using the available tools, you can monitor the various parts of your system to keep track of uptime, response time, and to be alerted if an issue does arise.

 

 

The Morpheus interface is clean and easy to use. Source: Morpheus.

Morpheus also allows you to provision apps in a single click and provides ease of use for developers with APIs and a CLI. In addition, backups are automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. Sign up for a demo and we'll let you try out Morpheus for free today.

The Good, the Bad, and the Ugly Among Redis Pagination Strategies


If you need to use pagination in your Redis app, there are a couple of strategies you can use to achieve the necessary functionality. While pagination can be challenging, a quick overview of each of these techniques, and the pros and cons of each, should make choosing and implementing a method a little easier.

 

In Redis, you have a couple of options from which to choose. You can use the SSCAN command or you can use sorted sets. Each of these has its own advantages, so choose the one that works best for your application and its infrastructure.

Using the SSCAN Command

The SSCAN command is part of a group of commands similar to the regular SCAN command. These include:

  • SCAN - Used to iterate over the set of keys in the current database.
  • SSCAN - Used to iterate over elements of sets.
  • HSCAN - Used to iterate over the fields of hashes and their associated values.
  • ZSCAN - Used to iterate elements of sorted sets and their scores.

Example of scan iteration. Source: Redis.

So, while the regular SCAN command iterates over the database keys, the SSCAN command can iterate over elements of sets. By using the returned SSCAN cursor, you could paginate over a Redis set.
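As a brief sketch using the redis-py client (the key name and page size here are arbitrary), each SSCAN call returns a new cursor along with a batch of members, and iteration is complete when the cursor comes back as 0:

import redis

r = redis.Redis()
cursor = 0  # a cursor of 0 both starts and ends the iteration
while True:
    cursor, members = r.sscan("myset", cursor=cursor, count=20)
    print(members)   # one "page" of set members
    if cursor == 0:  # Redis signals completion with cursor 0
        break

Note that COUNT is only a hint to Redis, so the size of each batch can vary.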

The downside is that you need some way to persist the value of the cursor, and if there are concurrent users this could lead to some odd behavior, since the cursor may not be where it is expected. However, this can be useful for applications where traffic to these paginated areas may be lighter.

Using Sorted Sets

In Redis, a sorted set is a non-repeating collection of strings, each associated with a score. The score is used to order the set from the smallest to the largest value. This data type allows for fast updates and gives you easy access to elements, even if the elements are in the middle of the set.

An example of sorted set elements. Source: Redis.

To paginate, you can use the ZRANGE command to select a range of elements in a sorted set based on their scores. So, you could, for example, select scores from 1-20, 21-40, and so on. By programmatically adjusting the range as the user moves through the data, you can achieve the pagination you need for your application.
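A minimal redis-py sketch (the key and page size are invented for the example): page n of a fixed page size maps directly onto an index range in the sorted set:

import redis

r = redis.Redis()

def get_page(key, page, per_page=20):
    start = page * per_page
    stop = start + per_page - 1        # ZRANGE bounds are inclusive
    return r.zrange(key, start, stop)  # elements come back ordered by score

first_page = get_page("leaderboard", 0)
second_page = get_page("leaderboard", 1)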

Since sorted sets and ZRANGE do this task more intuitively than using a scan, it is often the preferred method of pagination, and is easier to implement with multiple users, since you can programmatically keep track of which ZRANGE each user is selecting at any given time.

In the end, you can choose which method works for your particular situation. If you have a smaller application with less traffic, a scan may work for you. If, however, you need a more robust solution for larger data sets or more heavily utilized applications, it may be best to use ZRANGE with sorted sets to achieve pagination in your application.

Morpheus helps you get more out of Redis. To find out how Morpheus can save you time, money, and sanity, sign up for a demo now!

Using DNS to Debug Downtime


 

At times, a web app or web site may appear to be down even when the server it is on is functioning properly. When this happens, it is important to know where the issue resides, as it may be easy to fix, or may require a lot of work or contacting others. One of the things to check when a site is in this state is whether the DNS entry is up to date and pointing visitors to the proper server for your site or app.

What is DNS?

DNS stands for Domain Name System. It is the system that allows a typical URL, such as http://gomorpheus.com, to point to the server on which the actual web site or app resides. Once a computer finds the DNS information it needs for mapping a base URL to a server address, it will remember it for a period of time, until its TTL (Time To Live) has been reached.
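As a quick sanity check of what a name currently resolves to from a given machine, a couple of lines of Python will do (the domain and the expected address here are only examples):

import socket

expected = "204.168.130.100"  # hypothetical address your server actually lives at
actual = socket.gethostbyname("gomorpheus.com")
print("DNS OK" if actual == expected else "Stale or wrong DNS entry: " + actual)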

How DNS can contribute to downtime

DNS can contribute to downtime in several ways:

  1. The DNS server has the wrong information stored about the server to which the domain should be pointed. For example, the server is actually at the IP address 204.168.130.100, but the DNS entry has the server at 204.168.120.100. Here, changing the entry to the proper address will fix the situation.
  2. The DNS server is down. In such a case, computers that do not have the DNS information cached cannot reach the DNS server to look up the proper address. This will require getting your DNS server back up and running, or contacting the proper people to do this if it is not your server.
  3. The changes haven’t propagated to cached copies yet. Since computers cache DNS information in both the operating system and the browser, a stale cache can keep pointing at the old address.

If the user is affected by number three above, there are a couple of things to try:

  1. Have the user close the web browser, reopen it, and try again. Browsers have a tendency to cache DNS information, so this may solve the issue.
  2. Have the user clear the DNS cache on their operating system. This can be done from a shell; for example, the commands to do this in Windows and OS X are shown below:

# Windows:
ipconfig /flushdns

# OS X:
sudo killall -HUP mDNSResponder

Examples of clearing the DNS cache


Monitoring with Morpheus

Do you want to be notified when your site or app is having issues? If you are looking for a monitoring system with excellent data logging and analysis reports, you should give Morpheus a try. With Morpheus, data logging is automatic as you provision servers and apps. Using the available tools, you can monitor the various parts of your system to keep track of uptime, response time, and to be alerted if an issue does arise.


The Morpheus interface is clean and easy to use. 

Morpheus allows you to provision apps in a single click, and provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. So, why not register an account or try out Morpheus for free today?

10 Most Common Web Security Vulnerabilities


By Gergely Kalman, Security Specialist at Toptal

For all too many companies, it’s not until after a breach has occurred that web security becomes a priority. During my years working as an IT Security professional, I have seen time and time again how obscure the world of IT Security is to so many of my fellow programmers.

An effective approach to IT security must, by definition, be proactive and defensive. Toward that end, this post is aimed at sparking a security mindset, hopefully injecting the reader with a healthy dose of paranoia.

In particular, this guide focuses on 10 common and significant web security pitfalls to be aware of, including recommendations on how they can be avoided. The focus is on the Top 10 Web Vulnerabilities identified by the Open Web Application Security Project (OWASP), an international, non-profit organization whose goal is to improve software security across the globe.


A little web security primer before we start – authentication and authorization  

When speaking with other programmers and IT professionals, I often encounter confusion regarding the distinction between authorization and authentication. And of course, the fact that the abbreviation auth is often used for both helps aggravate this common confusion. This confusion is so common that maybe this issue should be included in this post as “Common Web Vulnerability Zero”.

So before we proceed, let’s clarify the distinction between these two terms:

  • Authentication: Verifying that a person is (or at least appears to be) a specific user, since he/she has correctly provided their security credentials (password, answers to security questions, fingerprint scan, etc.).
  • Authorization: Confirming that a particular user has access to a specific resource or is granted permission to perform a particular action.

Stated another way, authentication is knowing who an entity is, while authorization is knowing what a given entity can do. 

Common Mistake #1: Injection flaws

Injection flaws result from a classic failure to filter untrusted input. It can happen when you pass unfiltered data to the SQL server (SQL injection), to the browser (XSS – we’ll talk about this later), to the LDAP server (LDAP injection), or anywhere else. The problem here is that the attacker can inject commands to these entities, resulting in loss of data and hijacking clients’ browsers. 

Anything that your application receives from untrusted sources must be filtered, preferably according to a whitelist. You should almost never use a blacklist, as getting that right is very hard and usually easy to bypass. Antivirus software products typically provide stellar examples of failing blacklists. Pattern matching does not work. 

Prevention: The good news is that protecting against injection is “simply” a matter of filtering your input properly and thinking about whether an input can be trusted. But the bad news is that all input needs to be properly filtered, unless it can unquestionably be trusted (but the saying “never say never” does come to mind here).

In a system with 1,000 inputs, for example, successfully filtering 999 of them is not sufficient, as this still leaves one field that can serve as the Achilles heel that brings down your system. And you might think that putting an SQL query result into another query is a good idea, as the database is trusted, but if the perimeter is not, the input comes indirectly from guys with malintent. This is called Second Order SQL Injection, in case you’re interested.

Since filtering is pretty hard to do right (like crypto), what I usually advise is to rely on your framework’s filtering functions: they are proven to work and are thoroughly scrutinized. If you do not use frameworks, you really need to think hard about whether not using them really makes sense in your environment. 99% of the time it does not.
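As a concrete illustration (the table and values are invented for the example), the difference between unsafe string concatenation and the parameterized queries that framework and driver filtering functions rely on looks like this in Python:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
user_input = "'; DROP TABLE users; --"  # attacker-controlled value

# UNSAFE: the input becomes part of the SQL text itself.
# conn.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

# SAFE: the driver passes the value separately; it is never parsed as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)  # [] - the malicious string is treated as plain data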

Common Mistake #2: Broken Authentication

This is a collection of multiple problems that might occur during broken authentication, but they don’t all stem from the same root cause.

Assuming that anyone still wants to roll their own authentication code in 2014 (what are you thinking??), I advise against it. It is extremely hard to get right, and there are a myriad of possible pitfalls, just to mention a few:

  1. The URL might contain the session id and leak it in the referer header to someone else.
  2. The passwords might not be encrypted either in storage or transit.
  3. The session ids might be predictable, thus gaining access is trivial.
  4. Session fixation might be possible.
  5. Session hijacking might be possible, timeouts not implemented right or using HTTP (no SSL), etc…

Prevention: The most straightforward way to avoid this web security vulnerability is to use a framework. You might be able to implement this correctly, but the former is much easier. In case you do want to roll your own code, be extremely paranoid and educate yourself on what the pitfalls are. There are quite a few.

Common Mistake #3: Cross Site Scripting (XSS)

This is a fairly widespread input sanitization failure (essentially a special case of common mistake #1). An attacker gives your web application JavaScript tags on input. When this input is returned to the user unsanitized, the user’s browser will execute it. It can be as simple as crafting a link and persuading a user to click it, or it can be something much more sinister. On page load the script runs and, for example, can be used to post your cookies to the attacker.

Prevention: There’s a simple web security solution: don’t return HTML tags to the client. This has the added benefit of defending against HTML injection, a similar attack whereby the attacker injects plain HTML content (such as images or loud invisible flash players) – not high-impact but surely annoying (“please make it stop!”). Usually, the workaround is simply converting all HTML entities, so that <script> is returned as &lt;script&gt;. The other often-employed method of sanitization is using regular expressions to strip away HTML tags on < and >, but this is dangerous, as a lot of browsers will interpret severely broken HTML just fine. Better to convert all characters to their escaped counterparts.
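In Python, for instance, the standard library’s html.escape performs exactly this entity conversion (a minimal sketch):

from html import escape

user_input = '<script>alert(document.cookie)</script>'
safe = escape(user_input)
print(safe)  # &lt;script&gt;alert(document.cookie)&lt;/script&gt;
# The browser now renders the tag as literal text instead of executing it.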

Common Mistake #4: Insecure Direct Object References

This is a classic case of trusting user input and paying the price in a resulting security vulnerability. A direct object reference means that an internal object such as a file or database key is exposed to the user. The problem with this is that the attacker can provide this reference and, if authorization is either not enforced (or is broken), the attacker can access or do things that they should be precluded from.

For example, the code has a download.php module that reads and lets the user download files, using a CGI parameter to specify the file name (e.g. download.php?file=something.txt). Either by mistake or due to laziness, the developer omitted authorization from the code. The attacker can now use this to download any system files that the user running PHP has access to, like the application code itself or other data left lying around on the server, like backups. Uh-oh.

Another common vulnerability example is a password reset function that relies on user input to determine whose password we’re resetting. After clicking the valid URL, an attacker can just modify the username field in the URL to say something like “admin”.

Incidentally, both of these examples are things I myself have seen appearing often “in the wild”.

Prevention: Perform user authorization properly and consistently, and whitelist the choices. More often than not though, the whole problem can be avoided by storing data internally and not relying on it being passed from the client via CGI parameters. Session variables in most frameworks are well suited for this purpose.
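One hedged sketch of that advice in Python (identifiers and paths invented): expose only opaque keys that index into a server-side whitelist, rather than letting the client name files directly:

# Server-side map from opaque identifiers to real paths.
DOWNLOADABLE = {
    "report-2015": "files/annual_report_2015.pdf",
    "brochure": "files/brochure.pdf",
}

def download(file_key):
    path = DOWNLOADABLE.get(file_key)
    if path is None:
        raise PermissionError("unknown or unauthorized file")
    with open(path, "rb") as f:
        return f.read()

The client only ever sees "report-2015", never a path, so a download.php?file=../../etc/passwd style request has nothing to grab onto.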

Common Mistake #5: Security misconfiguration

In my experience, web servers and applications that have been misconfigured are way more common than those that have been configured properly. Perhaps this is because there is no shortage of ways to screw up. Some examples:

  1. Running the application with debug enabled in production.
  2. Having directory listing enabled on the server, which leaks valuable information.
  3. Running outdated software (think WordPress plugins, old PhpMyAdmin).
  4. Having unnecessary services running on the machine.
  5. Not changing default keys and passwords. (Happens way more frequently than you’d believe!)
  6. Revealing error handling information to the attackers, such as stack traces.

Prevention: Have a good (preferably automated) “build and deploy” process, which can run tests on deploy. The poor man’s security misconfiguration solution is post-commit hooks, to prevent the code from going out with default passwords and/or development stuff built in.

Common Mistake #6: Sensitive data exposure

This web security vulnerability is about crypto and resource protection. Sensitive data should be encrypted at all times, including in transit and at rest. No exceptions. Credit card information and user passwords should never travel or be stored unencrypted, and passwords should always be hashed. Obviously the crypto/hashing algorithm must not be a weak one – when in doubt, use AES (256 bits and up) and RSA (2048 bits and up).

And while it goes without saying that session IDs and sensitive data should not be traveling in the URLs and sensitive cookies should have the secure flag on, this is very important and cannot be over-emphasized.

Prevention:

  • In transit: Use HTTPS with a proper certificate and PFS (Perfect Forward Secrecy). Do not accept anything over non-HTTPS connections. Have the secure flag on cookies.
  • In storage: This is harder. First and foremost, you need to lower your exposure. If you don’t need sensitive data, shred it. Data you don’t have can’t be stolen. Do not store credit card information ever, as you probably don’t want to have to deal with being PCI compliant. Sign up with a payment processor such as Stripe or Braintree. Second, if you have sensitive data that you actually do need, store it encrypted and make sure all passwords are hashed. For hashing, use of bcrypt is recommended (a short sketch follows this list). If you don’t use bcrypt, educate yourself on salting and rainbow tables.
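A minimal Python sketch using the third-party bcrypt package (the password is obviously just an example):

import bcrypt

password = b"correct horse battery staple"

# gensalt() embeds a random per-password salt and a work factor in the hash.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())

# At login time, compare the candidate against the stored hash.
assert bcrypt.checkpw(password, hashed)
assert not bcrypt.checkpw(b"wrong guess", hashed)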

And at the risk of stating the obvious, do not store the encryption keys next to the protected data. That’s like storing your bike with a lock that has the key in it. Protect your backups with encryption and keep your keys very private. And of course, don’t lose the keys!

Common Mistake #7: Missing function level access control

This is simply an authorization failure. It means that when a function is called on the server, proper authorization was not performed. A lot of times, developers rely on the fact that the server side generated the UI and they think that the functionality that is not supplied by the server cannot be accessed by the client. It is not as simple as that, as an attacker can always forge requests to the “hidden” functionality and will not be deterred by the fact that the UI doesn’t make this functionality easily accessible. Imagine there’s an /admin panel, and the button is only present in the UI if the user is actually an admin. Nothing keeps an attacker from discovering this functionality and misusing it if authorization is missing.

Prevention: On the server side, authorization must always be done. Yes, always. No exceptions, or vulnerabilities will result in serious problems.

Common Mistake #8: Cross Site Request Forgery (CSRF)

This is a nice example of a confused deputy attack whereby the browser is fooled by some other party into misusing its authority. A 3rd party site, for example, can make the user’s browser misuse its authority to do something for the attacker.

In the case of CSRF, a 3rd party site issues requests to the target site (e.g., your bank) using your browser with your cookies / session. If you are logged in on one tab on your bank’s homepage, for example, and they are vulnerable to this attack, another tab can make your browser misuse its credentials on the attacker’s behalf, resulting in the confused deputy problem. The deputy is the browser that misuses its authority (session cookies) to do something the attacker instructs it to do.

Consider this example:

Attacker Alice wants to lighten target Todd’s wallet by transferring some of his money to her. Todd’s bank is vulnerable to CSRF. To send money, Todd has to access the following URL:

http://example.com/app/transferFunds?amount=1500&destinationAccount=4673243243

After this URL is opened, a success page is presented to Todd, and the transfer is done. Alice also knows that Todd frequently visits a site under her control at blog.aliceisawesome.com, where she places the following snippet:

<img src="http://example.com/app/transferFunds?amount=1500&destinationAccount=4673243243" width="0" height="0" />

Upon visiting Alice’s website, Todd’s browser thinks that Alice links to an image, and automatically issues an HTTP GET request to fetch the “picture”, but this actually instructs Todd’s bank to transfer $1500 to Alice.

Incidentally, in addition to demonstrating the CSRF vulnerability, this example also demonstrates altering the server state with an idempotent HTTP GET request, which is itself a serious vulnerability. HTTP GET requests must be idempotent (safe), meaning that they cannot alter the resource which is accessed. Never, ever, ever use idempotent methods to change the server state.

Fun fact: CSRF is also the method people used for cookie-stuffing in the past until affiliates got wiser.

Prevention: Store a secret token in a hidden form field which is inaccessible from the 3rd party site. You of course always have to verify this hidden field. Some sites ask for your password as well when modifying sensitive settings (like your password reminder email, for example), although I’d suspect this is there to prevent the misuse of your abandoned sessions (in an internet cafe for example).
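A framework-agnostic sketch of the token approach in Python (the function and session names are invented; real frameworks ship this built in): generate an unguessable per-session token, embed it in the form as a hidden field, and compare in constant time on submission:

import hmac
import secrets

def issue_csrf_token(session):
    # One unguessable token per session, embedded in a hidden form field.
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def verify_csrf_token(session, submitted_token):
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, submitted_token or "")

session = {}
token = issue_csrf_token(session)
assert verify_csrf_token(session, token)
assert not verify_csrf_token(session, "forged-value")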

Common Mistake #9: Using components with known vulnerabilities

The title says it all. I’d again classify this as more of a maintenance/deployment issue. Before incorporating new code, do some research, possibly some auditing. Using code that you got from a random person on GitHub or some forum might be very convenient, but is not without risk of serious web security vulnerability.

I have seen many instances, for example, where sites got owned (i.e., where an outsider gains administrative access to a system), not because the programmers were stupid, but because 3rd party software remained unpatched for years in production. This is happening all the time with WordPress plugins, for example. If you think they will not find your hidden phpmyadmin installation, let me introduce you to dirbuster.

The lesson here is that software development does not end when the application is deployed. There has to be documentation, tests, and plans on how to maintain and keep it updated, especially if it contains 3rd party or open source components.

Prevention:

  • Exercise caution. Beyond obviously using caution when using such components, do not be a copy-paste coder. Carefully inspect the piece of code you are about to put into your software, as it might be broken beyond repair (or in some cases, intentionally malicious).
  • Stay up-to-date. Make sure you are using the latest versions of everything that you trust, and have a plan to update them regularly. At the very least, subscribe to a newsletter announcing new security vulnerabilities in the product.
Common Mistake #10: Unvalidated redirects and forwards

This is once again an input filtering issue. Suppose that the target site has a redirect.php module that takes a URL as a GET parameter. Manipulating the parameter can create a URL on targetsite.com that redirects the browser to malwareinstall.com. When the user sees the link, they will see targetsite.com/blahblahblah, which the user thinks is trusted and is safe to click. Little do they know that this will actually transfer them onto a malware drop (or any other malicious) page. Alternatively, the attacker might redirect the browser to targetsite.com/deleteprofile?confirm=1.

It is worth mentioning that stuffing unsanitized user-defined input into an HTTP header might lead to header injection, which is pretty bad.

Prevention: Options include:

  • Don’t do redirects at all (they are seldom necessary).
  • Have a static list of valid locations to redirect to.
  • Whitelist the user-defined parameter, but this can be tricky (a sketch follows this list).
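As a sketch of the second and third options combined (host names invented), validate the target against a static whitelist and fall back to a safe local page:

from urllib.parse import urlparse

# Hypothetical whitelist of hosts we are willing to redirect to.
ALLOWED_REDIRECT_HOSTS = {"targetsite.com", "www.targetsite.com"}

def safe_redirect_target(url):
    host = urlparse(url).netloc.split(":")[0].lower()
    if host in ALLOWED_REDIRECT_HOSTS:
        return url
    return "/"  # refuse the redirect; send the user somewhere safe

print(safe_redirect_target("http://targetsite.com/welcome"))   # passes through
print(safe_redirect_target("http://malwareinstall.com/drop"))  # falls back to /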
Epilogue

I hope that I have managed to tickle your brain a little bit with this post and to introduce a healthy dose of paranoia and web security vulnerability awareness.

The core takeaway here is that age-old software practices exist for a reason, and what applied back in the day for buffer overflows still applies to pickled strings in Python today. Security helps you write correct(er) programs, which all programmers should aspire to.

Please use this knowledge responsibly, and don’t test pages without permission!

For more information and more specific attacks, have a look at: https://www.owasp.org/index.php/Category:Attack.

This post originally appeared in the Toptal blog: https://www.toptal.com/security/10-most-common-web-security-vulnerabilities

To see how Morpheus can help you get more out of your infrastructure, sign up for a demo today!

How to Manage App Uptime Like a Boss


Find out how you can more easily manage your app’s uptime.

 

Keeping your apps up and running with a good uptime rate can be a tedious task of monitoring servers, logs, databases, and more. If an incident does occur, you have to go through piece by piece to find out where the problem is, and then work on fixing it. All of this takes precious time away from you and your staff, and increases the amount of time your app is unavailable to users while you find and fix the issue. For example, an estimated $4.4 billion is being spent on election campaigns this year. If something goes down in one of these campaigns, the results can be catastrophic: the difference between sitting in the White House and watching from home.


 

Average cost of downtime. Source: Disaster Recovery Journal.

Save Time with PaaS

A Platform as a Service (PaaS) can help you organize and streamline your app management, making it far easier for you and your staff to meet uptime requirements and demands. A good example of this is the Morpheus service, which makes locating and reporting on issues quick and simple.

 

The user-friendly Morpheus interface. Source: Morpheus

With Morpheus, you can provision apps and databases in real time on public, private, and hybrid clouds and spin up databases, apps, environments and more with a few simple clicks. You can use the monitoring service to keep track of overall app uptime and response time, while also tracking the vital statistics for each individual piece of your app.

Keep track of your database server, app server, and any other piece of your app 24/7. With all of this information at your fingertips, you may be able to prevent a number of incidents just by checking into any apps that are responding slower than usual. Being able to fix an issue before there is any outage can certainly ease the burden on you and your staff that downtime brings.

Morpheus also provides automatic logging and backups of your systems. Look through easy-to-use reports on your systems to find out where issues occurred. For backups, you can determine the backup time and frequency to ensure you get the backups you need when you need them. Instead of worrying about whether you have a recent backup, let Morpheus ensure you have one when it is needed most! Downtime is minimized, and you get to keep your peace of mind knowing you can restore from a backup right away.

In addition to all of this, Morpheus takes care of infrastructure, setup, scaling and more, and sends you alerts when your attention is needed, thus giving you more freedom to take care of other business. With all of these features, why not give Morpheus a try today?

How to Measure the ROI of Your Cloud Spend

Quantifying the cost of cloud ownership is no simple task. Use these tips to measure the ROI of your cloud spend.


Getting an accurate assessment of the total costs of public, private, and hybrid clouds requires thinking outside the invoices. 

Quantifying Cloud Cost of Ownership

The big question facing organizations of all types and sizes is this: “Which is more cost-effective for hosting our apps and data, in-house or cloud?” While it may not be possible to achieve a truly apples-to-apples comparison of the two options, a careful cost accounting can be the key to achieving the optimal balance of public, private, and hybrid cloud alternatives.

If accountants ruled the world, all business decisions would come down to one number: maximum profit for the current quarter or year. In fact, basing your company’s strategy solely on short-term financial returns is one of the fastest ways to sink it. Yet so many IT decision makers look to a single magical, mystical (some would say mythical) figure when planning their tech purchases: total cost of ownership, or TCO.

Determining TCO is particularly elusive when assessing cloud alternatives to in-house development and management of an organization’s apps and data. How do you quantify accurately the benefits of faster time to market, for example? Or faster and simpler application updates? Or an IT staff that’s more engaged in its work? These are some of the questions posed by Gigaom Research’s David S. Linthicum in a May 9, 2014, article.

Linthicum points out that while tools such as Amazon Web Services’ TCO calculator, Google’s TCO Pricing Calculator, and the collection of cost calculators at The Cloud Calculator help you understand the “simple costs and benefits” of using cloud services, they exclude many of the most important aspects of the to-cloud-or-not-to-cloud decision.

The AWS TCO Calculator is intended to provide customers with an accurate cost comparison of public-cloud services vs. on-premises infrastructure. Source: AWS Official Blog

The most glaring shortcoming of TCO calculators is their one-size-fits-all nature. By failing to consider the unique qualities of your company – your business processes, the skill level of your staff, your existing investment in hardware, software, and facilities – the calculators present only a part of the big picture.

An even-greater challenge for organizations, according to Linthicum, is how to quantify the value of agility and faster time to market. By including these more-nebulous benefits in the TCO calculation, the most cost-effective choice may be the cloud even when a traditional hardware-software-facilities TCO analysis gives the edge to in-house systems. Linthicum recommends seven aspects to include in your “living model”:

  1. The cost of “sidelining” the assets in your existing infrastructure
  2. The amount of training for existing staff and hiring of new staff needed to acquire the requisite cloud skills
  3. The cost of migrating your apps and data to the cloud, and the degree of re-engineering the migration will require
  4. The cost of using public cloud services over an extended time, including potential changes in operational workload
  5. The value the company places on faster time to market, faster updates for apps and data, and the ability to respond faster to changing business conditions
  6. The savings resulting from reduced capital expenditures in the future, which often boils down to opex vs. capex
  7. The risk of the potential failure to comply with rules and regulations governing particular industries, such as healthcare, insurance, and financial services
Boiling down the public cloud vs. private cloud equation

As tricky as it can be to get an accurate read on the total cost of public cloud services, calculating the many expenses entailed in operating a private cloud setup can leave experienced IT planners scratching their heads. In a September 9, 2014, article on ZDNet, Intel’s Ram Lakshminarayanan outlines four areas where the costs of public and private clouds differ.

While public cloud services don’t usually charge software licensing fees, their per-hour compute instance fees are likely to be higher for proprietary OSes such as Microsoft Windows than for Linux and other open-source systems. By contrast, private cloud software vendors often charge a licensing fee for their virtualization software based on the number of CPUs or CPU cores it requires. (Use of OpenStack and other open-source private cloud software does not entail a licensing fee.)

The biggest cost difference between public and private cloud setups is in infrastructure. Private clouds require upfront expenditures for compute, network, and storage hardware, as well as ongoing costs for power, cooling, and other infrastructure. Public cloud services charge based on pro-rata, per-hour use, although their rates cover the providers’ hardware and facilities costs.
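As a toy illustration of that trade-off (every figure here is invented for the example), the break-even arithmetic between an upfront private-cloud purchase and pay-per-hour public-cloud instances is straightforward:

# Hypothetical figures, for illustration only.
server_capex = 12000.0  # upfront hardware cost per server
yearly_opex = 3000.0    # power, cooling, support per server per year
cloud_rate = 0.50       # public-cloud cost per instance-hour
hours_per_year = 24 * 365

for years in (1, 3, 5):
    on_prem = server_capex + yearly_opex * years
    cloud = cloud_rate * hours_per_year * years
    cheaper = "cloud" if cloud < on_prem else "on-prem"
    print("%dy: on-prem $%.0f vs cloud $%.0f -> %s"
          % (years, on_prem, cloud, cheaper))

The crossover shifts dramatically once utilization is factored in, which is why part-time workloads tend to favor the cloud.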

Likewise, support costs that are built into public cloud rates are a separate line item for most private clouds and must be negotiated separately in most cases. Finally, IT staff training must be considered both an upfront and continuing cost that will likely be higher for private clouds than their public counterparts, which benefit from straightforward dashboard interfaces designed for end users. A prime example is the Morpheus application management service, which features an intuitive UI for provisioning databases, apps, and app stack components on private, public, and hybrid clouds in just seconds.

Hybrid clouds in particular depend on cost management

The Goldilocks cloud solution – the one that is “just right” for a great number of small and large organizations – is the hybrid approach that mixes public and private components. This allows the companies to benefit from the public cloud’s efficiency and cost savings while protecting critical data assets in a private cloud. Accurate cost accounting is imperative to ensure both sides of the cloud equation are applied to best advantage for your firm’s data needs.

CFO’s Penny Collen writes in a March 12, 2015, column that TCO for hybrid clouds shouldn’t be approached from a project-life-cycle perspective, but rather as a big-picture view of all operational costs. That’s the only way to get an accurate assessment of all your provisioning alternatives, according to Collen. She identifies four areas in which specific costs must be identified:

  1. Hardware acquisition (asset records, purchase orders, vendor price lists)
  2. Hardware maintenance (as a percentage of acquisition costs and maintenance averages)
  3. Software (based on historic hardware/software ratios in your company)
  4. Infrastructure and tech support (connectivity, facilities, disaster recovery, administration)
One-time costs include the following:
  1. Design
  2. Architecture
  3. Data migration
  4. Data porting
  5. Data cleansing and archiving
  6. User and technical support training
  7. Standardization, upgrades, and customization

Categories in cloud cost accounting include servers, storage, software, labor, networking, facilities, and support. Source: CFO

Cloud-specific costs that must be part of the analysis include the following:

  1. The need for more network capacity
  2. Vendor fees (primarily for public cloud services)
  3. Administration of cloud invoices
  4. Management of relationships with cloud vendors

Last but not least, consider fees related to the cancellation terms stipulated in the cloud service’s contract. Also factor in the cost of migrating to an alternative cloud provider to avoid being squeezed by vendor lock-in.

To find out how Morpheus' PaaS solution can help you avoid cloud lock-in to save time and money, download the use case here.


How Important Is It for Your Apps to Be ‘Cloud Native’?

$
0
0

Note: This article was written by Morpheus Data's CTO Brian Wheeler and was originally published by DevOps.com. To see the original article, click here.

There’s plenty of life left in a great number of the applications that companies have been relying on and profiting from for years. The future may belong to cloud-native apps that deliver the performance and efficiency benefits of microservices and containers, but in the here-and-now, organizations are looking for ways to combine the best of the old with the promise of the new.

With continuous development, you could argue that the software industry has eliminated the concept of product lifecycle altogether. Yet every IT manager knows that some of the applications currently used by their organization will likely be running long after they have hung up their compiler.

These legacy applications have proven their worth many times over. Some are one-offs that easily accommodate and are accommodated by new systems as they are implemented. Some are so good at what they do they epitomize “If it ain’t broke, don’t fix it.” In the mass migration of apps and data to the cloud, it’s natural that such grand old apps would come under scrutiny as candidates for replacement by their cloud-native counterparts.

In a December 8, 2015, article on The Stack, Red Hat’s Gordon Haff examines the results of a recent IDC survey of enterprise DevOps teams, which found that 83 percent anticipate having to support legacy applications and infrastructure through 2019.

It may seem counterintuitive, but the organizations that have gone furthest in adopting “distributed, scale-out, microservices-based applications” are twice as likely to delay converting existing apps to cloud architectures.

What characteristics make an application ‘cloud-native’?

The simplest definition of a cloud-native application is one that is “decoupled from the physical infrastructure,” as TechTarget’s Sean M. Kerner writes in a December 2015 article. In addition to being designed from the ground up to run in the cloud – whether private or public – the app must also scale on demand, both up and down, and across nodes.

Cloud-native apps are fault-tolerant, built to scale, and designed to process requests asynchronously, using queues to decouple functionality. Source: Munish Kumar Gupta, via Slideshare.

Another feature that is quickly becoming standard on cloud-native apps is the use of Docker containers, which facilitates deployment on public Amazon Web Services (AWS) clouds and private OpenStack clouds, among other options. A more formal definition of “cloud native” will likely arrive soon from the Cloud Native Computing Foundation, an effort sponsored by the Linux Foundation and others. InfoWorld’s Serdar Yegulalp describes the foundation’s goals in a December 17, 2015, article.

Containers are a key component of the CNCF’s proposed standard for applications and services that run natively on public and private clouds, and that move seamlessly between the two environments. Rather than writing specs on paper first and then implementing the code based on the specs, CNCF will create the code for its reference implementations. Google, a CNCF member, has donated its Kubernetes container-orchestration system and stands to benefit via the foundation’s support for private and hybrid clouds, a technology the company currently lacks.

The Continuing Evolution of Cloud-Native

It’s tempting to limit the definition of a “cloud-native” app to those two primary characteristics: decoupled from the infrastructure; and scalable up, down, and across nodes. If this approach strikes you as too simplistic, you’re not alone. InformationWeek’s Charles Babcock explains in a July 30, 2015, article that the key to cloud-native is microservices – which he refers to as “discrete application services” – running in their own containers and connected via a network to create custom applications. The result is an environment for “user-driven systems” comprised of standardized parts and governed by “standardized deployment and operational procedures.”

The combination of microservices and containers pushes down the app stack and simplifies deployment in computing environments of all types. Source: Richard Harvey of Ngineered, via SlideShare

The payoff for companies is still far off, according to Babcock, but ultimately cloud-native apps will keep firms focused on their customers as the software landscape shifts around them. This translates into a competitive advantage via continuous deployment and delivery of “fresh” software. There’s no better bridge connecting today’s still-valuable legacy apps with tomorrow’s user-driven software environment than Morpheus Data. Click here to sign up for a demo now. 

Tips for Documenting REST APIs

$
0
0

Documenting-REST-API

Having a REST API available to interact with your system can be a great asset and can help get more and more developers to use your service. However, incomplete or hard-to-read documentation can make it difficult for people to make use of your API, so good documentation is a must if you want to get as many people up and running with it as possible. So, what are some things that can be done?

Include as much helpful information as possible

This probably goes without saying, but you can help developers immensely by providing helpful, straightforward information that lets people know what to expect when they make an API call. For example, here are some key pieces of information you may wish to include in your documentation:

  1. Title: It is a good idea to lead with a meaningful title that describes the action that will be taken. Leading with the URL can be confusing, as it is less likely to convey the action to be taken quickly. For example, a title could be “Get all widgets” or “Post new message”.

  2. URL: List the path that will be used to make a call. Variable parameters are signified by a colon. Some examples:
    /widgets
    /widgets/:id
    /widgets?type=:type

  3. Request type: Be sure to include what type of request needs to be made – GET, POST, PUT or DELETE.

  4. Parameters: Include any parameters that are available, and whether each is required or optional.

  5. Expected responses: What responses should the user expect on success or failure? For example, the API might return 200 on success or 401 if the request is unauthorized.

Of course, these are just the basics. You may wish to add special notes or provide examples of what will be posted or returned for each route, all of which will prove helpful for developers making use of your API!
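
As an illustration, a complete entry for one route of the hypothetical widgets API above might look like this (all values are invented for the example):

Title: Get a single widget
URL: /widgets/:id
Request type: GET
Parameters: id (required) – the numeric ID of the widget to retrieve
Success response: 200, with the widget returned as a JSON object
Error responses: 404 if no widget matches :id; 401 if the request is unauthorized
Notes: The returned object uses the same fields as “Get all widgets”.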

Be consistent

In addition to providing the right information, it is a good idea to be consistent in its presentation. Be sure to use a consistent template for displaying the documentation for each route (commonly a layout that looks visually like a table). Also, use consistent naming conventions.

For example, try to avoid using two terms that refer to the same thing, such as using the terms “users” and “members” to refer to the users of the system. If they are two different entities in your system, then be sure to make the distinction clear in your documentation, as it will help avoid confusion and make your API easier to use!

SaaS with Morpheus

Do you need to deploy APIs, apps, databases, and other infrastructure quickly? Try out Morpheus, which provides Software as a Service with excellent flexibility.

Morpheus-Product-Dashboard
The Morpheus interface is clean and easy to use. Source: Morpheus.

Morpheus allows you to provision apps in a single click, and provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. So, why not register an account or sign up for a Morpheus demo today?

IT Managers! What's Keeping You Up At Night?

$
0
0
Do you know what keeps IT managers awake at night? I'll give you a hint: it's no small list. If any of these sound familiar to you, take heart – you're not alone...
  • Workers going through your IT budget like it’s bottomless
  • Employees spending nearly their entire workday doing maintenance tasks
  • One-off apps and services that turn into money pits
  • Getting stuck with a huge investment in outdated technology
  • Users leaking the organization’s sensitive, valuable data assets into unsupported, unsecured cloud services
  • The need to support legacy systems that some division or department continues to rely on – and refuses to let go of
  • And that old, insomnia-inducing standby: bad data

Is it any wonder IT managers are losing sleep like never before? The problems seem to go on forever, but fortunately, new solutions arrive in the nick of time (we hope). Here’s a look at how two IT pros solved their biggest IT and DevOps problems that robbed them of their REMs.

Automation and alerts minimize the impact of disruptions

Brent Gray, CTO of Dupray Inc., seller of steam cleaners and steam irons in six countries, shares his favorite solutions to common IT glitches:

You know what keeps me up at night? Putting in long, inefficient man-hours towards IT maintenance/upkeep problems. My team used to get majorly bogged down by this.

How have we gotten around that? Automation.

I used to be heavily invested in the day-to-day IT operations of the company, and would often sink a significant amount of time on the minor, micro details…. We decided to pivot to macro. To allow this pivot to occur, we absolutely needed to ensure that the day-to-day operations were sufficiently automated to allow us to not worry about them. Our first step was to hire the appropriate employees … who could develop the tools we needed to automate our processes. As a concrete example, we switched from the OpenERP to the much more robust Netsuite ERP.

Our most crucial processes are now integrated into our Netsuite ERP. We have automatic shipping, automatic relaying of invoices, specific tax calculations, inventory assessments, inventory restocking and transfers, automated priority customer service, automated bid adjustments for Adwords, PLAs, Bing, Amazon, Facebook, Twitter, and Instagram advertising (minimum oversight).

We have slight automation that pulls data from Google Analytics and gives us advanced analysis. We have alerts that tell us where we need to focus attention, where our brand is being mentioned, and by whom. We have automated fraud detection that has saved us thousands of dollars.

Obviously, these processes all required trial and error. Every situation will have an odd case or two that our software can’t handle. These weird exceptions must be discovered before they can be integrated safely without hanging our code.

Moreover, these processes are generally extremely complex. We are a highly multinational corporation and consequently, we deal with significantly different laws, taxes, rules, models, and cultural mores. Creating, finding, or using tools are all exceedingly complex. Yet, when we see a repetition of orders or demands, we believe that with enough investment in automation, a process could be created which takes away the need for human oversight.

Best tool to help us with IT maintenance tasks:

Alerts. Our business automation notifies the right employee when something is not working properly, or something requires attention. For example, one of our programs notifies the phones of our employees when multiple emails are being sent by the same sender. This usually indicates a pressing issue or an unsatisfied customer who needs to be contacted immediately.

The biggest challenge remains: Bridging old technologies with new ones

It isn’t often a company can start afresh with their IT operations. In nearly every case, there are systems in use that are vital yet prove difficult or impossible to migrate or upgrade. Frank S. Nestore of technical consulting company Mathtech identifies three areas that must be addressed when adopting cloud technologies:

I am an IT consultant who works with companies to help them transform how they operate and use technology. A lot of it is modernizing legacy computer systems. Along with that comes a lot of recurring problems related to standalone “pet” systems, cultural change management (getting folks to use new systems rather than their old, manual but comfortable ones), and incorrect or incomplete data. 

 

You can’t predict every condition your data and network are likely to encounter. But, with a little planning – and a rock-solid cloud management platform – you can be confident that your company will keep rolling through the many little glitches, and, most importantly, you won't find yourself losing any more sleep over hopeless IT and DevOps problems. 

Continuous Delivery: The Holy Grail of Cloud App Management

$
0
0

The stepwise approach to software development no longer cuts it, not just because it’s too slow and too expensive, but most importantly because it fails to provide users with the mobile-ready, agile apps they need to achieve their organization’s goals.

In IT departments large and small, you can hear the sound of silos crashing to the ground. No longer do development teams, operations teams, or any other subset of a company’s technical operations work in isolation. The goal is to get everyone in the organization – from CTO to weekend help-desk operator – thinking end-to-end. Projects no longer have start and end dates. Timelines become nebulous, more flexible, and better able to adjust to fast-changing business conditions.

Welcome to the world of continuous integration, a key component of which is continuous delivery. No longer do development, testing, deployment, and maintenance happen as distinct steps in a predictable sequence. Now the process is a loop in which everything is happening at once, and everyone is working in a single team: managers, developers, operators, and end users participate at each stage of the software process.

Chris Shayan's Continuous Delivery Maturity Matrix

The continuous delivery maturity matrix devised by Chris Shayan presents five stages of implementation: from novice to expert. Source: Chris Shayan, Atlassian

It’s no surprise the switch to continuous integration/continuous delivery has met with some resistance. In a February 1, 2016, post, Software Development Times’ Christina Mulligan writes that continuous delivery requires that developers “learn new skills, work in new environments and work at a new pace.” What’s the payoff? A closer connection to the customer, which ultimately translates into higher-quality software created faster and more efficiently.

How does continuous delivery make good on such promises? Think small. The development pipeline is populated with smaller releases whose failures occur more quickly and are easier to discover and fix. By failing faster and smaller, you’re able to try more new things without worrying about long-lasting effects; any negative feedback arrives almost instantly, and changes can be made nearly as fast.

Continuous delivery requires automated testing and monitoring

The enemy of continuous integration/continuous delivery is anything that creates a bottleneck. The most likely place to encounter a bottleneck in the process is change management. According to CA Technologies executive Aruna Ravichandran, teams need to establish the value of every change based on user feedback, and they must make sure they’re always working on the most valuable components in a way that reduces risk. 

According to Mulligan, making CI/CD work requires two things: a bottom-up approach driven by developers who are inventive and curious; and the orchestration, testing, automation, container, and other tools designed for small, agile applications. Counter-intuitively, bottom-up development only works when it is supported at the top. The CTO is going to insist on the auditing, traceability, reporting, and insight that are standard components of traditional IT operations.

According to a recent Continuous Delivery Survey, few organizations currently meet the three criteria of continuous delivery:

  • The teams are confident that their code is in a shippable state after changes are made and any tests are completed.
  • Their teams perform pushbutton or fully automated deployments of code changes to any environment.
  • All stakeholders have visibility into the software delivery environment.

Martin Fowler's continuous-delivery process

Martin Fowler’s description of a successful continuous-delivery process entails making sure software is always deployable, near-instant feedback on production readiness, and pushbutton deployments of any version to any platform. Source: Martin Fowler

IT Pro Portal’s John Esposito reports on the survey results in an April 6, 2016, article. Among the findings are that only 18 percent of the 600 IT professionals who responded to the survey believe their organization achieves all three continuous-delivery goals. The 41 percent of respondents who stated they have some continuous-delivery projects in place represents a 9 percent decrease from the number recorded in the previous survey.

The three most-common obstacles cited by respondents in their efforts to realize continuous delivery are a lack of corporate culture (57 percent), insufficient time to implement continuous delivery (51 percent), and lack of required skills (49 percent). It’s no surprise that lack of corporate culture was cited more often by people at large organizations, and lack of time was the chief deterrent reported by respondents from small companies.

It’s easy for IT managers to grasp the benefits of migrating their apps and databases to the cloud. Yet many IT pros continue to manage their cloud-based systems the same way they managed their in-house hardware. Logicworks’ Jason McKay writes in a March 14, 2016, article on The Whir that treating each server instance and network configuration as a “snowflake” duplicates an in-house management approach that is unsuitable to the cloud. Soon your cloud operations are suffering from the same unnecessary overhead that plagued your on-premises servers: slow updates, poor visibility, and cumbersome troubleshooting.

By contrast, treating infrastructure as code introduces such healthy software practices as easy accommodation of the inevitable requirements changes; frequent updates; automatic, continual testing; and more time spent coding than is spent documenting and putting out fires. McKay lists six characteristics of a healthy cloud-management strategy:

  1. Think modular: Separate cloud functions into independent modules that fail independently and are easy to reuse and reconfigure.
  2. Use a configuration tool such as Puppet or Ansible to keep your infrastructure and configuration separate.
  3. Use cloud templates and configuration scripts as the central documentation for the cloud environment’s current state, and for all of its previous states.
  4. Favor standardized tools and practices over custom ones to promote best practices while also encouraging flexibility and agility.
  5. Maintain a constant iterative state that repeatedly cycles through develop, deploy, test, learn.
  6. Simulate the worst-case scenarios you can imagine so you can fail (and recover) early and often.
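
To make the first and third points concrete, here is a toy sketch of infrastructure treated as code, in JavaScript. All module names and sizes are hypothetical; a real deployment would feed a desired-state description like this to a configuration or orchestration tool rather than simply printing it.

// A declarative, modular description of a small environment. Kept in version
// control, the file itself documents the environment's current and past states.
const infrastructure = {
  webTier:  { instances: 3, size: 'small', healthCheck: '/status' },
  apiTier:  { instances: 2, size: 'medium', healthCheck: '/ping' },
  database: { engine: 'mysql', replicas: 2 }
};

// Each module can fail, be reused, or be reconfigured independently; a deploy
// step would reconcile the running resources against this desired state.
for (const [name, spec] of Object.entries(infrastructure)) {
  console.log('module ' + name + ': ' + JSON.stringify(spec));
}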

One of the greatest challenges to adopting a cloud-first approach to managing your company’s information assets is replacing the long-standing distinction between hardware and software with a singular view of apps and infrastructure as code that is in constant motion. The entire network – the apps, data, and platform they run on – is perpetually and simultaneously being developed, deployed, tested, and updated. As usual, the only constant is change.

MSSQL Server Now Available on Linux - What You Need To Know

$
0
0

 

Linux-Morpheus

In an unprecedented move, Microsoft announced on March 7, 2016, that it would provide a version of MSSQL Server for Linux in addition to Windows Server. This is big news for those who use Linux but also want to use the MSSQL database due to preference or familiarity with the database.

Microsoft also mentioned in the announcement some other new features that will be in the newest version of the software:

  • New security encryption capabilities – Data can now be encrypted either at rest, in motion, or in memory.
  • In-memory database performance increases.
  • Business Intelligence on an increased number of devices – such as iOS devices, Android devices, and Windows Phones.

This announcement is big news for developers familiar with MSSQL Server who may wish to use it now that it is not restricted to a Windows Server environment.

How does this affect the marketplace?

It appears with this move that Microsoft is trying to gain market share in a place it has not been able to before (Linux), where Oracle has reigned supreme as far as commercial databases go. It is unlikely that those who prefer open-source databases such as PostgreSQL or MariaDB will want to switch, but the move may convince those who were using those databases only due to the unavailability of MSSQL Server on Linux to switch. Also, those using other commercial databases may decide to make the switch now that they can.

One other reason for this move is that more and more businesses are moving much or all of their app architecture into the cloud. With SaaS (Software as a Service) becoming quite popular, it would be more difficult for Microsoft to retain customers who move to the cloud, where Linux servers are a very popular option. Giving customers the option to continue using MSSQL Server in a Linux environment is a huge move that may help them retain customers as well as acquire new ones as databases continue to be moved into the cloud.

Databases and monitoring with Morpheus

If you are going to provision a database for your apps, a good way to go about it is to use Morpheus, a PaaS solution which allows you to provision databases, including SQL and NoSQL databases, as well as apps and servers with a user-friendly interface. In addition to this, Morpheus can monitor your apps, servers, and databases. With Morpheus, data logging is automatic as you provision servers and apps. Using the available tools, you can monitor the various parts of your system to keep track of uptime, response time, and to be alerted if an issue does arise.

Morpheus-Product-Dashboard
The Morpheus interface is clean and easy to use. Source: Morpheus.

Morpheus allows you to provision apps in a single click, and provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. If you want to learn more click here for an exclusive demo.

 

Tips for Becoming a JavaScript Developer

$
0
0

Learn-Javascript-Developer

If you are looking to become a JavaScript developer, there are a number of things to consider before jumping into the fray. Whether you develop solely on the front end (for a web browser or other client) or server-side using Node.js, you will need to learn a number of things that will help you on your path to becoming a professional JavaScript developer.

Learn the Basics

First, you will need to learn the basics of the language. Beyond that, most projects (whether in Node or on the front end) require you to have at minimum a familiarity with – and more often than not expertise in – the other major building blocks of web pages: HTML and CSS.

The image below shows how HTML, CSS, and JavaScript all cross paths for a front end developer. You will likely need to have a good understanding of all three, since JavaScript uses and interacts with elements from the other two technologies. For example, JavaScript can be used to identify an HTML element that needs updating, or to change the CSS styling of that HTML element.

Javascript-Developer
The various specialties within front end development. Source: SlideShare.net.

So, you will probably want to learn HTML and CSS, then move into learning the basics of JavaScript once you have the other two languages tackled.
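
As a tiny example of that interplay, the JavaScript below finds an HTML element and changes both its content and its CSS styling. The element id and class name are hypothetical; they assume a page that defines them.

// Find the HTML element with id="greeting" (hypothetical markup).
const greeting = document.getElementById('greeting');

// Update its content, then change its CSS styling from JavaScript.
greeting.textContent = 'Hello from JavaScript!';
greeting.style.color = 'darkblue';
greeting.classList.add('highlighted'); // assumes a .highlighted rule in your stylesheet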

Learn the Web Browsers and the Console

When it comes to developing on the front end, it is essential to know how to test in all of the web browsers that may need to be supported for a given project. This could include not only the latest version of any given browser (e.g., Chrome, Firefox, IE, Safari), but also older versions that may require additional testing or workarounds to make things work.

With this, it is helpful to know how to use the developer tools/consoles that come with each browser, so that you can identify and debug problems more quickly when they arise.
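
For example, the console object that every major browser exposes can take you a long way when debugging (the widget data here is made up):

// Log a message, an object, and tabular data to the browser console.
const widget = { id: 1, name: 'Example widget' };
console.log('Loaded widget:', widget);
console.table([widget]); // renders the array of objects as a table

// console.error makes failures stand out, complete with a stack trace.
console.error('Something went wrong fetching widgets');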

Example of a web browser console. Source: Mozilla.

Learn Related Libraries and Technology

While knowing the language itself is great, you will almost surely find it helpful to understand some related libraries. For example, jQuery is used in a vast number of programs and apps to help alleviate many of the browser inconsistencies you may face. Knowing how to use it can be helpful not only for those advantages, but also for being able to transition that same code back to vanilla JavaScript if a project moves away from using it.

In addition, learning a technology such as Node.js, which allows I/O on the server with JavaScript, can be quite an addition to your résumé. This can allow you (along with learning to use and interact with a database) to transition into becoming a back-end or full-stack developer if you desire. Needless to say, JavaScript offers numerous opportunities should you choose to become a JavaScript developer!

Get PaaS with Morpheus

Looking for a good PaaS service to run your apps? Whether you are using JavaScript and its syntax throughout, or combining it with other languages, why not try out Morpheus, which makes it easy for you to develop in either case. You can provision databases, servers, and more quickly, and have your app up and running in no time! Using the available tools, you can also monitor the various parts of your system to keep track of uptime, response time, and to be alerted if an issue does arise.

Morpheus allows you to provision apps in a single click, and provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. So, why not grab your Morpheus demo now?

SQL Database Hacks Using AS and ORDER BY

$
0
0

This post originally appeared in Solutions Review. The original article can be found here.

Morpheus Makes Sense of Your SQL Databases

Morpheus Data’s CTO contributes this SQL how-to, which walks users through some advanced SQL hacks guaranteed to save time for anyone who uses SQL regularly. Morpheus allows users to provision apps in a single click, and provides ease of use for developers with APIs and a CLI. Backups are automatic, and users can have redundancy as needed to avoid potentially long waits for disaster recovery to take place.

SQL Database Hacks Using AS and ORDER BY

By Brian Wheeler, CTO | Morpheus Data

When retrieving data from an SQL database, it is often handy to be able to reference a column with a shorter or more meaningful name when you make use of the results. It is also often very helpful if the results are ordered in ascending or descending order by the content of a particular column. Both can be done in an SQL query using AS and ORDER BY.

How to use AS

In SQL, the AS keyword can be used to make referencing a column easier. For example, you may be querying a table with one or more long column names, like the following table:

Table: orders

+----+-------------------------+-------------------------+
| id | number_of_rocks_ordered | number_of_socks_ordered |
+----+-------------------------+-------------------------+
| 1  | 2                       | 1                       |
| 2  | 3                       | 3                       |
+----+-------------------------+-------------------------+

To make the column names easier to deal with in clauses, subqueries, or on the programming side, you can use AS in your query to provide a shorter alias to use later, as in the following example:

SELECT id, number_of_rocks_ordered AS rocks FROM orders;

Now, the column can simply be referred to as ‘rocks’ later on.

The results will look like the following:

+----+-------+
| id | rocks |
+----+-------+
| 1  | 2     |
| 2  | 3     |
+----+-------+

This can be especially helpful when providing an API for programmers, as the shorter alias can be returned from the API route to make for a little less typing on the programming side.
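
To illustrate that point, here is a minimal Node.js sketch; the query function is a stub standing in for whatever database client your application actually uses:

// Stub standing in for a real database client's query function.
const query = async (sql) => [{ id: 1, rocks: 2 }, { id: 2, rocks: 3 }];

query('SELECT id, number_of_rocks_ordered AS rocks FROM orders')
  .then((rows) => {
    // Thanks to the alias, each row exposes the short property name 'rocks'
    // instead of the unwieldy 'number_of_rocks_ordered'.
    rows.forEach((row) => console.log('Order ' + row.id + ': ' + row.rocks + ' rocks'));
  });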

How to use ORDER BY

The ORDER BY command can be used to order the results of a query by a particular column. So, using the table from above, you could decide to order the results by the number of rocks ordered, from most to least. To do this, add ORDER BY, followed by the column name or alias, followed by ASC (for ascending) or DESC (for descending), as in the following example:

SELECT
  id,
  number_of_rocks_ordered
FROM
  orders
ORDER BY
  number_of_rocks_ordered DESC;

Since you wanted the number of rocks ordered from most to least, DESC was used. This produces the following set of results:

+----+-------------------------+
| id | number_of_rocks_ordered |
+----+-------------------------+
| 2  | 3                       |
| 1  | 2                       |
+----+-------------------------+

Combining AS and ORDER BY

Since you can use an alias created by the AS keyword in a later clause, you can combine AS and ORDER BY in this example to produce a result set that is both ordered from most rocks ordered to least and labeled with the shorter column name ‘rocks’. Here is the query to do this:

SELECT
  id,
  number_of_rocks_ordered AS rocks
FROM
  orders
ORDER BY
  rocks DESC;

Notice that the ‘rocks’ alias is used in the ORDER BY clause, making the clause a little more concise. The result set from using both AS and ORDER BY looks like this:

+----+-------+
| id | rocks |
+----+-------+
| 2  | 3     |
| 1  | 2     |
+----+-------+

Now the result set is both ordered and uses a shorter name for the number of rocks ordered!

Databases and monitoring with Morpheus

Morpheus is a SaaS solution which allows you to provision databases, including SQL and NoSQL databases, as well as apps and servers with a user-friendly interface. In addition to this, Morpheus can monitor your apps, servers, and databases. With Morpheus, data logging is automatic as you provision servers and apps. Using the available tools, you can monitor the various parts of your system to keep track of uptime, response time, and to be alerted if an issue does arise.


The Benefits of Serverless Applications in a Data-Rich World

$
0
0

Serverless applications are the source of much confusion of late. For one thing, they’re not “serverless” at all. In fact, it’s more accurate to refer to them as “multiserver apps” since their components are distributed among cloud servers far and wide, and assembled on demand.

This raises the question, “Do you know where your app is?” In today’s microservices-based computing environments, the question becomes meaningless. Exactly where particular pieces of a full-blown application live matters much less than the network’s ability to retrieve and assemble the required components accurately, and in a timely manner, when and where they’re needed.

Such widely distributed applications demand a new approach to development, deployment, maintenance, and updates. Forbes’ Janakiram writes in a March 22, 2016, article that the growing interest in serverless apps is driven by two trends: mobility, and the Internet of Things. In both cases, applications need to follow the IFTTT model: respond in an instant as current circumstances dictate in terms of location, time, input, and context.

A serverless framework meets the needs of such on-the-spot delivery of services to users via microservice-based apps. The framework consists of a distributed repository of code snippets that assemble themselves in response to an external event – the “if this, then that” trigger.

How serverless apps can benefit businesses of all types

In a February 26, 2016, post, InformationWeek’s Charles Babcock describes scenarios in which the serverless approach meets a real-time business need. A customer placing an order wants confirmation that a critical part is available before committing to the purchase. A business analyst uploads a JavaScript snippet (a microservice) to AWS that adds a parts-confirmation box to the order form. Selecting the box fires the snippet that triggers the required inventory check via the AWS Lambda cloud service, which is based on a serverless framework.

With Lambda, the task is completed in short order, without involving another application or requiring that a virtual server be spun up. In addition to costing only a few cents’ worth of AWS runtime, the snippet scales automatically to accommodate changes in demand. Any number of a business’s microservices would reside in the public cloud and would be called on when needed. The business receives the benefits of its applications without being responsible for any in-house servers – that’s what makes the applications “serverless.”
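
A snippet like the one Babcock describes can be as small as a single Lambda handler. The sketch below is a hypothetical Node.js illustration; the inventory lookup is stubbed, and the partNumber field is a placeholder rather than a real inventory API.

// Hypothetical AWS Lambda handler (Node.js runtime) for a parts-confirmation check.
exports.handler = (event, context, callback) => {
  // Parse the part number sent when the order form's confirmation box is selected.
  const partNumber = JSON.parse(event.body).partNumber;

  // Stubbed inventory lookup -- a real handler would query an inventory datastore.
  const result = { partNumber, inStock: true };

  // Return an HTTP-style response to the caller (e.g., via Amazon API Gateway).
  callback(null, { statusCode: 200, body: JSON.stringify(result) });
};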

AWS Lambda introduces on-demand, instantaneous, and short-lived application deployment. (Credit: Adrian Cockcroft, Battery Ventures)

What stays in-house is the development team – the brains of the operation. Developers are responsible for ensuring that the apps are accessible, and that they run smoothly on all devices. The biggest difference between mobile apps and their web-delivered counterparts is in where the bulk of the application logic resides. On tablets and phones, much of the app logic lives on the device, whereas Web apps put their logic on the Internet server.

Whereas web apps get by with a thin user interface inside the browser, mobile apps rely on the device to ensure a pleasurable user experience: the phone or tablet hosts the business logic, and the Internet data center’s server provides the bandwidth. A big upside for developers is that the cloud service provider is responsible for the database server, the messaging system, and security functions.

When a serverless approach may not be the best choice

Running your app with 99.99 percent availability and no servers to maintain sounds too good to be true, which naturally causes seasoned developers to think, “There’s gotta be a catch.” In a March 18, 2015, post, tech blogger Hartley Brody examines some of the impediments to creating serverless apps.

Topping the list is the need for a centralized datastore across multiple clients. Such situations require a database server and application logic that runs on top of it. Possible workarounds are to use a third-party API such as Firebase or Parse, or to rely on an API from your company’s CRM system or other internal data store. Brody also recommends looking for a way to do without the database, if users don’t need an account. This limits the amount of data you need to collect and store.

Serverless apps can also be stymied by the need to integrate with third parties for storing data, sending messages, user analytics, and other operations. Whenever you must authenticate users, you need a server to avoid sending API keys and other credentials to clients. Similarly, apps that rely on sensitive, private business logic should run that logic on a secure server.

Another shortcoming of the serverless approach is the lack of information following an application crash. A 503 occurring on a web server generates a pretty comprehensive stack trace from the server logs, but when a JavaScript app fails on a client, you’re left completely in the dark. That means you have to anticipate all the ways the app could crash or otherwise misbehave beforehand, and devise a way to communicate what happened back to your system, and to the user.

Finally, JavaScript can be overwhelmed when tasked with processing large amounts of data. On a web server, you can move some of that processing offline, then store the result and notify the user when it’s available. The Web Worker API can help prevent overtaxing a JavaScript app.
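
As a minimal sketch of that idea, the worker below is built inline from a Blob so the example stays self-contained; the summing step merely stands in for genuinely heavy processing.

// Offload heavy data processing to a Web Worker so the UI thread stays responsive.
const workerSource = `
  onmessage = (e) => {
    // Stand-in for genuinely heavy data processing.
    const total = e.data.reduce((sum, n) => sum + n, 0);
    postMessage(total);
  };
`;
const worker = new Worker(URL.createObjectURL(new Blob([workerSource])));
worker.onmessage = (e) => console.log('Processed off the main thread:', e.data);
worker.postMessage([1, 2, 3, 4, 5]);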

Forecast for serverless: Sunny but variable

In his March 22, 2016, post listing the five top serverless frameworks, Forbes’ Janakiram puts AWS Lambda in the top spot for its integration with other AWS services and its support for mobile and IoT developers. For voice applications in particular, AWS Lambda’s seamless integration with the Alexa Skills Kit makes it a good choice for Amazon Echo voice-activated apps.

AWS Lambda integrates smoothly with other Amazon services, such as Kinesis Stream, Kinesis Firehose, and RedShift to create a serverless ETL pipeline, as in this example. Credit: Abhishek Tiwari

Google’s recently announced Cloud Functions will be hosted as containers running on the Google Compute Engine, and while it will support only Node.js initially, other languages will likely be added over time. An obvious advantage of Google Cloud Functions is built-in API support for Gmail, Maps, Cloud Messaging, and other Google services.

Another likely competitor in serverless frameworks is Iron.io’s Project Kratos, which runs existing AWS Lambda functions packaged as Docker containers in public, private, and mixed cloud environments. IBM positions its OpenWhisk project as an open-source alternative to AWS Lambda; OpenWhisk supports Node.js, runs snippets written in Swift, is integrated with the IBM Bluemix PaaS environment based on Cloud Foundry, and integrates with third-party services via WebHooks.

While Microsoft has not yet joined Amazon, Google, and IBM in offering a serverless framework, the Azure WebJobs runtime environment can be used to invoke code snippets on demand or via scheduled cron jobs. Analysts expect a complete serverless framework from Microsoft based on the company’s Service Fabric PaaS environment.

BigTec UK Adds Multi-Cloud Orchestration to its Web Scale Software-Defined Datacentre Solution Stack with Morpheus Data

$
0
0

BigTec and Morpheus Data partnership announcement

This post originally appeared on the BigTec blog. To see the original post, click here.  

Datacentre transformation VAD targets matrix of fast-growing hybrid cloud and DevOps opportunities with new vendor partner

BigTec UK has signed a distribution agreement with Morpheus Data, bringing the US-based vendor’s infrastructure-agnostic cloud management and orchestration platform into its reference architecture for the web-scale software defined datacentre (SDDC). Morpheus has chosen BigTec to propel its 2016 expansion plans into the EMEA region, providing new reseller partner opportunities to satisfy demand for enterprise DevOps and hybrid cloud management, as part of its 100% channel-only approach.

“Morpheus empowers IT directors to take control over their entire hybrid, multi-cloud environments, delivering complete visibility, management and analytics so that – for the first time – the circumstances of location, infrastructure and choice of hypervisor are irrelevant to driving value, business agility and good governance,” said Jason Dance, managing director at BigTec UK. “It’s a keystone at the top of our solution stack, because without it capitalising on the software-defined, virtual IT cloud revolution risks being an expensive and time-consuming management headache. Assured compliance, improved DevOps, lower infrastructure costs, more IT automation, stamping out of ‘shadow IT’ – these are just some of the requirements that reseller partners can address with Morpheus’ single management console for sprawling hybrid environments.”

BigTec UK is committing product management and sales resources to Morpheus’ expansion, as well as comprehensive marketing, value-added services and logistical support, to recruit partners and deliver opportunities. The companies’ first objective is to build out a dedicated channel community of the right partners, and deliver rapid sales growth.

“BigTec have an unmatched grasp and understanding of the next generation hybrid and hyper converged datacentre. We value their experience with our contemporaries and their depth and breadth of expertise as a value-added distributor,” said Jeff Drazan, chairman of Morpheus Data.

The Morpheus platform is a game-changing technology for partners, enabling them to bring a hybrid cloud tool to their customers and enter conversations where typical hardware/software partners have been unsuccessful. This allows them to transform their own businesses as well as support transformation and modernisation within their clients.

PaaS is Amazingly Helpful, but at What Point Do You Need It?

$
0
0

 When do you need PaaS

Platform as a Service, or PaaS, can be extremely helpful when it comes to developing and maintaining your applications. The question is: when should you consider using PaaS for your apps?

As your company grows, the usefulness of PaaS grows with it: it can save you quite a bit of time on maintenance and help keep your architecture consistent. Taking a look at the primary advantages of using PaaS is a helpful way to decide if your company is ready to make the move now.

Reduces Setup Costs While Maintaining Control

Setting up servers in the cloud is simple: you provision them as needed. Rather than worrying about buying servers and trying to integrate the new physical server into your infrastructure, you can simply provision a new server using PaaS. This leaves the hardware worries to the PaaS provider, allowing you to concentrate on configuration and management of the new server.

While the physical setup pains are eliminated, you still get to configure and maintain the servers the way you need to, allowing you the control needed to develop your applications with the necessary tools in place.

Scalable as Needed

database scaling

The difference between vertical and horizontal scaling. 

Not sure how quickly your app will grow? 

When you are purchasing the servers on your own, you have to consider this more closely, as you will need the infrastructure to handle any scaling that needs to happen as your application and number of users grow. Whether you choose a vertical or horizontal scaling strategy, this can be expensive and requires additional setup time to implement.

On the other hand, if you use a PaaS solution, you can simply scale as needed, without worrying about possibly setting up or purchasing additional hardware. PaaS can provide a good cost-savings here, as scaling can be done automatically with many PaaS services. 

This allows you to pay as needed for scaling, rather than providing resources before they are necessary. Another advantage is that you again do not need to set up additional hardware on your end, saving you and your staff setup time.

Decreases Maintenance Time

With PaaS, your maintenance time can be decreased dramatically, as you do not need to keep as much physical hardware maintained. Instead, the PaaS provider keeps things in working order while you work on configuring and developing your applications.

In the end, switching to PaaS is a good idea when you get to the point that setup, scaling, maintenance, or all three become too costly or time-consuming for your company. If resources are thin, PaaS can be an extremely helpful solution to keep things moving forward while the demand for your services increases.

Get PaaS with Morpheus

Looking for a good PaaS service to run your apps? Why not try out Morpheus, which makes provisioning, scaling, and maintenance a breeze. You can provision databases, servers quickly, and have your app up and running in no time! Using the available tools, you can also monitor the various parts of your system to keep track of uptime, response time, and to be alerted if an issue does arise.

Morpheus Data dashboard

The Morpheus interface is clean and easy to use. Source: Morpheus.

Morpheus allows you to provision apps in a single click, and provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. To grab a demo click here. 

Load Balancing: How to Quickly Boost the Performance of Your Apps

$
0
0
Load balancing

 

Load balancing can help boost the performance of your website and applications, as it allows more people to use your systems at the same time. Instead of all of the traffic going to a single destination, you can direct traffic to various machines so that no single machine becomes bogged down with handling all of the requests. This can be quite helpful when your systems begin to receive lots of traffic!

What is load balancing?

Load balancing is a method of handling a large amount of traffic to a system (web site, web app, database, etc.) in a way that keeps a single server or machine from becoming overloaded, which can greatly slow down your service or even cause it to become unable to handle the traffic at all. Since extremely slow load times and down time can be especially costly to a business, finding a way to avoid these issues is incredibly important.

Load balancing helps alleviate traffic congestion by using one or more load balancers that direct the traffic to one of multiple possible machines, all of which have a copy of the content that would otherwise sit on a single machine. On a single machine, a large amount of traffic would begin to slow response times pretty quickly. With load balancing, however, each request can be directed to the appropriate machine based on your needs, keeping the load on any individual machine – and therefore response times – to a minimum.

Server load balancing

An example of load balancing using servers (server array). Source: Tractionet.

Deciding how to direct requests

With load balancing, you have to decide to which machine a new request is directed. This involves at least a couple of considerations. The two biggest ones are whether or not a machine is responding and the amount of load on the machine.

To determine whether or not a machine is responding, you could use a simple PING, but this isn't necessarily reliable and doesn't offer many options. Oftentimes a more advanced service check is used instead, which can determine information such as whether or not a particular service is working on the machine. This allows the load balancer to continue to direct traffic to a machine that may have one service down but another working: it can direct traffic to that machine only if the request is for a working service, and move on to another machine if the request is for a service that is not working.

To determine load, different measures can be used, from the current connection count on a machine to the actual response time of the machine to requests. A good load-balancing algorithm should take both availability and load into account when deciding how to direct traffic. The actual algorithm will depend on your needs, but getting the right load balancing in place can definitely have a great effect on your response times!
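
Here is a minimal sketch of those two considerations in JavaScript. The backend addresses are hypothetical, and in practice the healthy flag would be updated by periodic service checks rather than hard-coded.

// Each machine tracks whether it responded to its last service check
// and how many connections it is currently handling.
const backends = [
  { host: '10.0.0.1', healthy: true,  connections: 12 },
  { host: '10.0.0.2', healthy: true,  connections: 3 },
  { host: '10.0.0.3', healthy: false, connections: 0 } // failed its service check
];

function pickBackend() {
  // Consider only machines that are responding...
  const available = backends.filter((b) => b.healthy);
  if (available.length === 0) throw new Error('No healthy backends available');
  // ...and among those, direct the request to the least-loaded one.
  return available.reduce((least, b) => (b.connections < least.connections ? b : least));
}

const target = pickBackend();
target.connections += 1;
console.log('Routing request to ' + target.host);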

Get PaaS with Morpheus

Looking for a good PaaS service to run your apps and provide you with load balancing options? Why not try out Morpheus, which makes provisioning, scaling, and maintenance a breeze. You can provision databases, servers quickly, and have your app up and running in no time! Using the available tools, you can also monitor the various parts of your system to keep track of uptime, response time, and to be alerted if an issue does arise.

Morpheus-Product-Dashboard

The Morpheus interface is clean and easy to use. Source: Morpheus.

Morpheus allows you to provision apps in a single click, and provides ease of use for developers with APIs and a CLI. In addition, backups are also automatic, and you can have redundancy as needed to avoid potentially long waits for disaster recovery to take place. So, why not grab your own Morpheus demo today?

Matching Storage Model to Data Structure in Mixed Database Environments

$
0
0
Mixed database environments

Not so long ago, data storage was limited to three physical forms: direct-attached storage (DAS) such as traditional standbys SCSI and SATA disk drives; storage-area networks (SAN) that cluster disks into logical units (LUN) on servers accessed via a network; and network-attached storage (NAS) that allows concurrent access to the disk clusters via file-system protocols that abstract storage from physical machines to virtual machines.

NAS abstraction was a precursor to the real game-changer in storage technology: virtualization. As Brandon Salmon explains in a January 20, 2015, article on InfoWorld’s New Tech Forum, virtualization abstracts physical storage into virtual disks. A hypervisor creates an emulated hardware environment for each virtual machine: processor, memory, and storage. Just as local disks are perceived as part of the physical computer, virtual disks are part of the virtual machine rather than independent objects. When the VM is deleted, the virtual disk is deleted along with it.

Virtual environments such as VMware vSphere, Microsoft Hyper-V, Red Hat Enterprise Virtualization, and Xen platforms use a virtual-disk model. The I/O from a virtual machine goes to software in the hypervisor rather than to hardware via a device bus. This means the protocol used by the VM to communicate with the hypervisor doesn’t have to match the protocol used by the hypervisor to communicate with the storage. The storage model that is exposed upward to the VM and administrator is separated from the storage protocol used by the hypervisor to store the data.

Being able to mix and match storage models and storage protocols, and to switch protocols dynamically without affecting VMs, provides administrators with an unprecedented level of flexibility. As Salmon points out, the storage protocol is no longer application-specific and functionally dependent on the app; it is now part of the infrastructure and is chosen based on cost and performance.

Cloud services take storage abstraction to new levels

In the cloud, the entire storage stack is virtualized, so the application is completely separate from the infrastructure. One cloud storage model is instance storage, which is used the same way as virtual disks and can be implemented as DAS (ephemeral storage) or as more-reliable NAS or volume storage.

Volume storage is a hybrid of instance storage and SAN that is a primary unit of storage rather than a complete VM. This means you can detach a volume from one VM and attach it to another. In terms of scale and abstraction, volume storage is more like a file than a logical unit; it is considered reliable enough for storing user data. Because volume storage is a model rather than a protocol, it runs atop NFS and such block protocols as iSCSI.

Object storage, such as Amazon’s S3, offers a single logical name space across an entire region, but its “eventual consistency” means that not all users will get the same answers to their requests at any given time. Many cloud apps are designed to leverage this nearly unlimited name space to realize scale and cost advantages over NAS.

Typically, object stores are geared to use over high-latency WAN links that benefit from a simplified collection of data operations. Objects are listed in a bucket, read in their entirety, and have their data replaced with entirely new data. Conversely, NAS lets apps read and write small blocks within a file, change file sizes, and move files between directories, among other management operations.

The advantage of object stores is gigantic namespaces that can extend across great distances inexpensively and reliably. Object storage is popular with cloud-native apps for storing images, static web content, backups, and customer files. It’s not a good choice for NAS workloads requiring strong consistency, nor as a replacement for instance or volume storage, which offer strong consistency, small block updates, and write-intensive, random workloads.
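
That whole-object model shows up directly in an object store's API. Here is a minimal sketch using the AWS SDK for JavaScript; the bucket and key names are hypothetical.

// Object-store access pattern: list a bucket, read an object in its entirety,
// then replace its data wholesale -- no small-block, in-place updates as with NAS.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function rewriteReport() {
  // Objects are listed in a bucket...
  const listing = await s3.listObjectsV2({ Bucket: 'example-bucket' }).promise();
  console.log(listing.Contents.map((o) => o.Key));

  // ...read in their entirety...
  const obj = await s3.getObject({ Bucket: 'example-bucket', Key: 'report.csv' }).promise();

  // ...and replaced with entirely new data.
  await s3.putObject({
    Bucket: 'example-bucket',
    Key: 'report.csv',
    Body: obj.Body.toString().toUpperCase()
  }).promise();
}

rewriteReport();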

Types of Cloud Storage

The three most common types of cloud storage – instance, volume, and object – are each suited to a particular storage infrastructure. Source: InfoWorld

Consider storage models when selecting a DBMS

Choice of storage model is generally dictated by the database environment: development operations favor simple storage models supporting lightweight prototyping, which helps explain the continuing popularity of relational DBMSs such as Oracle, MySQL, and SQL Server. IT Pro Portal’s John Esposito writes in an April 21, 2016, article that DBMS selection in production environments, where non-developer specialists typically manage data stores, is based on factors other than the optimal combination of data processing, storage, and retrieval mechanisms.

Conversely, non-production environments tend to be more amenable to NoSQL and other non-traditional DBMSs, where databases can be optimized for best structural fit and ease of access. A primary example is MongoDB, which features a static-schema-free document orientation, a document format similar to the popular JSON, and a wide range of connectors. This makes the systems easy to set up in terms of data modeling, and well suited to applications that aren’t particularly data-intensive.

A growing trend among developers is polyglot persistence, in which an application uses more than one storage model. Esposito posits that the near parity between applications using one storage model and those using two indicates that developers are looking to match persistence with the data structures requiring persistence.

Graph structures, which store most information in nodes and edges, don’t match well with the tabular structure of RDBMSs, which rely on data residing in columns and rows. Still, it is worthwhile to store data naturally modeled as a graph in an RDBMS because the relational model is time-tested, it is widely popular with developers and DBAs, and many powerful object-relational mappers are available to facilitate accessing relational data from application code.

Hybrid storage: Having your cake and eating it, too?

Despite the continuing enhancements in cloud storage security, performance, and reliability, companies still hesitate to go all-in on the cloud. Topping the list of concerns about cloud storage are the three IT standbys: security, compliance, and latency. Many companies adopt hybrid cloud storage as a way to combine the cost benefits and scalability of the cloud with the reliability and safety of in-house networks.

Hybrid Cloud Storage

Hybrid cloud storage combines the efficiency and scalability of the public cloud with the security and performance of the private cloud. Source: TechTarget

In a May 22, 2016, article on TechCrunch, Don Basile describes four different storage models intended to address the astonishing increase in the amount of data forecast to flood organizations in coming years. Further complicating future data-storage needs is the variety of data types tomorrow’s information will use.

• Hybrid data storage combines the scalability and cost-effectiveness of the cloud with the safety of storing sensitive data in-house.

• Flash data storage, which is increasingly common in consumer devices, is being enhanced to meet the big-data needs of enterprises, as evident in Pure Storage’s FlashBlade box that can store as much as 16 petabytes of data (the company expects to double that amount in a single box by the end of 2017).

• Intelligent Software Designed Storage (I-SDS) replaces the traditional proprietary hardware stacks with storage infrastructure managed and automated by intelligent software, offering more cost-efficiency and faster response times.

• Cold storage archiving takes advantage of slower-moving, less-expensive commodity disks used to store data that isn’t accessed very often, while “hot” data that is accessed more frequently is stored on faster, more expensive flash drives.
