
How to Use Netstat for Network Troubleshooting


When debugging network problems on a Linux server, ping and traceroute are often helpful, but you may need further network details on hand to track down an issue and get it fixed. One command that provides those details is netstat, which can show you the state of network sockets along with other helpful information. As with ping and traceroute, you can simply run netstat from the command line and get results quickly.

What is Netstat?
The netstat command in Linux is a very useful tool when dealing with networking issues.

Source: IBM developerWorks

Netstat, short for "network statistics", is a tool that Linux (as well as other operating systems such as Windows and OS X) can use to display incoming and outgoing network connections. It can also report network statistics, protocol statistics, and routing tables.

You can use netstat to find network problems and measure the amount of traffic on the network, making it a really useful tool for gathering the information you need to resolve an outage, slowdown, or bottleneck on your network.

Basic Netstat

For a basic listing of all the current connections, you would simply call netstat with the -a option.

> netstat -a
Active Internet connections (servers and established)
Proto   Recv-Q   Send-Q   Local Address       Foreign Address     State      
tcp     0        0        localhost:ipp       *:*                 LISTEN     
tcp6    0        0        ip6-localhost:ipp   [::]:*              LISTEN     
udp     0        0        *:bootpc            *:*                                
udp     0        0        localhost:ntp       *:*                                
udp     0        0        *:ntp               *:*                                
udp6    0        0        ip6-localhost:ntp   [::]:*                             
udp6    0        0        [::]:ntp            [::]:*                             
udp6    0        0        [::]:mdns           [::]:*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  1      [ ACC ]     STREAM     LISTENING     11091    @/tmp/XX
unix  1      [ ACC ]     STREAM     LISTENING     39045    @/tmp/Cx

This provides some basic information on connections from different types of protocols like TCP and UDP, as well as active Unix domain sockets. However, netstat allows you to get more specific information that can be more helpful in debugging.

Filter by Connection Type

Sometimes filtering the results based on the connection type can be useful when trying to find the information you need. For example, if you want to see only the TCP connections, you can add the “t” option in addition to the “a” option, as shown below:

> netstat -at
Active Internet connections (servers and established)
Proto   Recv-Q   Send-Q   Local Address      Foreign Address   State      
tcp     0        0        host:domain        *:*               LISTEN     
tcp     0        0        localhost:ipp      *:*               LISTEN     
tcp     0        0        host.local:45789   host-:http        ESTABLISHED

Similarly, by using netstat -au, you can list only the UDP connections.
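For example, a run of netstat -au might produce output like the following (the hosts and services shown are illustrative, echoing the UDP lines from the earlier listing):

> netstat -au
Active Internet connections (servers and established)
Proto   Recv-Q   Send-Q   Local Address       Foreign Address   State
udp     0        0        *:bootpc            *:*
udp     0        0        localhost:ntp       *:*
udp6    0        0        [::]:ntp            [::]:*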

Filter by Listening Connections

If you want to see only the connections that are listening, you can do so by using the "l" option in place of the "a" option. Here is an example of this:

> netstat -l
Active Internet connections (only servers)
Proto   Recv-Q   Send-Q   Local Address   Foreign Address   State      
tcp     0        0        localhost:80    0.0.0.0:*         LISTEN     
tcp     0        0        localhost:443   0.0.0.0:*         LISTEN     

As with the "a" option, you can use netstat -lt and netstat -lu to further filter and get only the listening TCP or UDP connections. This way, you can easily see whether a particular port is open and listening, and thus whether a website or app can be up and running as expected.
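For instance, to check whether anything is listening on port 80, you could combine the numeric listening output with grep. This is a hypothetical check; the port and the output shown are illustrative:

> netstat -ltn | grep ':80'
tcp     0        0        0.0.0.0:80      0.0.0.0:*         LISTEN

Here -l restricts the output to listening sockets, -t limits it to TCP, and -n shows numeric ports instead of service names, which makes the grep pattern reliable.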

See Network Statistics
> netstat -s
Ip:
  73419 total packets received
  0 forwarded
  0 incoming packets discarded
  73419 incoming packets delivered
  37098 requests sent out
  45 outgoing packets dropped
Icmp:
  119 ICMP messages received
  0 input ICMP message failed.
  ICMP input histogram:
    destination unreachable: 119
  102 ICMP messages sent
  0 ICMP messages failed
  ICMP output histogram:
    destination unreachable: 102
... OUTPUT TRUNCATED ...

As you can see, this offers some statistics that may be useful to you while debugging, such as total, incoming, and outgoing packets as well as ICMP messages that were received, sent, and failed.
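Netstat can also display the kernel routing table mentioned earlier, which is handy when you suspect a misconfigured gateway. The output below is a hypothetical example; your gateway, networks, and interface names will differ:

> netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         192.168.1.1     0.0.0.0         UG        0 0          0 eth0
192.168.1.0     *               255.255.255.0   U         0 0          0 eth0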


The Drawbacks of Running Containers on Bare Metal Servers


Marketing hype isn't limited to the tech industries by any means, but the tech press certainly cranks up the hype machine to unprecedented levels. Among the recent darlings of the pundits is containerization. 

A prime example of the container worshiping phenomenon is The VAR Guy's Christopher Tozzi, who lists in an April 3, 2017, article all the ways that Docker saves companies money. From Tozzi's description, you would think Docker can transform lead into gold: it's simpler to maintain, it speeds up delivery of new software, it uses resources more efficiently, and to top it all off, Docker saves you money.

Quick, where do I sign?

It's a good thing technology customers have a heightened sensitivity to such unadulterated claims of IT nirvana. A reality check reveals that no technology shift as fundamental as containerization is as simple, as fast, or as inexpensive as its backers claim it is. Yes, containers can be a boon to your data-management operations. No, you can't realize the benefits of the technology without doing your due diligence.

Here are some tips for avoiding the pitfalls of implementing containers in multi-cloud and hybrid cloud environments.

Implementing containers on servers: Where are the bottlenecks?

A primary advantage of containers over hypervisor-based virtualization is capacity utilization. Jim O'Reilly writes on TechTarget that because containers use the OS, tools, and other binaries of the server host, you can fit three times the number of containers on the server as VMs. As microservices are adopted more widely, the container capacity advantage over VMs will increase, according to O'Reilly.

Containers achieve their light weight, compared to VMs, by doing without a guest OS. Instead, containers share an OS and applicable bins/libraries. Source: LeaseWeb

In addition, there is typically much less overhead with containers than with VMs, so containers' effective load is actually closer to five times that of hypervisor-based virtualization. This has a negative effect on other system components, however. An example is the impact of the increased loads on CPU caches, which become much less efficient. Obviously, a bigger cache to address the higher volume usually means you have to use bigger servers.

Storage I/O and network connections also face increased demands to accommodate containers' higher instance volumes. Non-volatile dual inline memory modules promise access speeds up to four times those of conventional solid-state drives, and NVM Express SSDs have become the medium of choice for local primary storage rather than serial-attached SCSI or SATA. In addition to reducing CPU overhead, an NVMe drive can manage as many as 64,000 queues at one time, so it can deliver data directly to containers no matter how many instances a server is hosting.

Distribute containers on demand among in-house servers or cloud VMs

The fact is, not many companies are ready to make another huge capital investment in big, next-gen servers with a cache layer for fast DRAM, serial connections, and 32GB of cache for each CPU, which is the recommended configuration for running containers on in-house servers. On the Docker Blog, Mike Coleman claims you can have it both ways: implement some container apps on bare metal servers, and others as VMs on cloud servers.

Rather than deciding the appropriate platform for your entire portfolio of applications, base the in-house/cloud decision on the attributes of the specific app: What will users need in terms of performance, scalability, security, reliability, cost, support, integration with other systems, and any number of other factors? Generally speaking, low latency apps do better in-house, while capacity optimization, mixed workloads, and disaster recovery favor virtualization.

VMs are typically heavier and slower than containers, but because they are fully isolated, VMs tend to be more secure than containers' process level isolation. Source: Jaxenter

Other factors affecting the decision are the need to maximize existing investments in infrastructure, whether workloads can share kernels (multitenancy), reliance on APIs that bare metal servers don't accommodate, and licensing costs (bare metal eliminates the need to buy hypervisor licenses, for example).

Amazon ECS enhances container support

A sign of the growing popularity of containers in virtual environments is the decision by Amazon Web Services to embed support for container networking in the EC2 Container Service (ECS). Michael Vizard explains in an April 19, 2017, article on SDxCentral that the new version of AWS's networking software can be attached directly to any container. This allows both stateless and stateful applications to be deployed at scale, according to Amazon ECS general manager Deepak Singh.

The new ECS software will be exposed as an AWS Task that can be invoked to link multiple containers automatically. Any virtual network associated with the container is removed when the container disappears. Singh says AWS sees no need to make containers available on bare metal servers because there are better options for organizations concerned about container performance: graphics processing units and field-programmable gate arrays.

For many IT operations, going all-in with containers running on bare metal servers may be tomorrow's ideal, but VMs are today's reality as the only viable alternative for hybrid and public clouds. Enterprises typically run multiple Linux distributions and a mix of Windows Server and Linux OSes. It follows that to optimize workloads in such mixed environments, you'll run some containers on bare metal and others in VMs.

According to Docker technical evangelist Mike Coleman, licensing schemes tailored to virtualization make VMs more affordable, as TechTarget's Beth Pariseau reports. Coleman points out that companies nearly always begin using containers in a small way and wind up using them in a big way.

Mastering the Art of Container Management


Managing containers in cloud environments would be a lot simpler if there were just one type of container you and your staff needed to deal with. Any technology as multifaceted as containerization comes with a bucket full of new management challenges. The key is to get ahead of the potential pitfalls before you find yourself hip deep in them.

The simplest container scenario is packaging and distributing an existing application as a Docker container: all of the app's dependencies are wrapped into a Docker image, and a single text file (the Dockerfile) is added to explain how to create the image. Each server then runs a single instance of that container, just as it would run one instance of the app itself.
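As a minimal sketch of that workflow (the image name and the presence of a Dockerfile in the current directory are illustrative assumptions, not details from the article), building and running such a container might look like this:

> docker build -t legacy-app:1.0 .                  # build an image from the Dockerfile in the current directory
> docker run -d --name legacy-app legacy-app:1.0    # start the single container instance this server will run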

Monitoring these simple containers is the same as monitoring the server, according to TechTarget's Alastair Cooke in an April 2017 article. The app's use of processes and resources can be viewed on the server. There's no need to confirm that the server has all the required components because they're all encapsulated in the Docker image and controlled by the Dockerfile, including the proper Java version and Python libraries.

How containers alter server management tasks

Placing apps in a Docker container means you no longer have to update and patch Java on the server. Instead, you update the Dockerfile and build a new Docker image. In fact, you may not need to install Java on the server at all. You will, however, have to scan your Dockerfiles and images for vulnerabilities and unsupported components. You may also need to set and enforce policies relating to the maximum age of Docker images, because dependency versions are set inside the image and can only be updated by creating a new image.
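A hedged sketch of that patch cycle, assuming an image and container both named myapp: instead of patching Java on the host, you refresh the base image, rebuild, and redeploy.

> docker build --pull -t myapp:1.1 .    # --pull refreshes the base image so the rebuild picks up patched layers
> docker stop myapp && docker rm myapp  # retire the container running the old image
> docker run -d --name myapp myapp:1.1  # redeploy from the rebuilt image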

By removing a layer of software from the standard virtualization model, containers promise performance increases of 10 percent to 20 percent. Source: Conetix

Containers can be started and stopped in a fraction of a second, making them a lot faster than VMs. Microservice-enabled apps will use far more containers, most of which have short lifespans. Each of the app's microservices will have its own Docker image that the microservice can scale up or down by creating or stopping containers. Standard server-monitoring tools aren't designed for managing such a fast-changing, dynamic environment.
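You can see that startup speed for yourself with a quick, illustrative test (this assumes Docker and a locally cached alpine image; the timing shown is representative, not a measured result):

> time docker run --rm alpine true      # start a throwaway container that exits immediately
real    0m0.6s
user    0m0.0s
sys     0m0.0s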

A new type of monitoring is needed to keep pace with these quick-changing containers. A solution being adopted by a growing number of companies is the Morpheus cloud application management platform. Morpheus's instance catalog includes basic containers and VMs as well as custom services for SQL and NoSQL databases, cache stores, message buses, web servers, and full-fledged applications.

Morpheus support steps you through the process of adding a Docker host to any cloud via the intuitive Morpheus interface. Under Hosts on the main Infrastructure screen, choose one of nine Host types from the drop-down menu, select a Group, add basic information about the host in the resulting dialog box, set your Host options, and add Automation Workflows (optional). Once the Container Host is saved, it will start provisioning and will be ready for containers.

Provisioning Container Hosts via the Morpheus cloud app management service is as simple as entering a handful of data in a short series of dialog boxes. Source: Morpheus

Overcoming obstacles to container portability

Since your applications rely on data that lives outside the container, you need to make arrangements for storage. The most common form is local storage on a container cluster and addition of a storage plug-in to each container image. For public cloud storage, AWS and Azure use different storage services.

TechTarget's Kurt Marko explains in an April 2017 article that Google Cloud's Kubernetes cluster manager offers more flexibility than Azure or AWS in terms of sharing and persistence after a container restarts. The shortcoming of all three services is that they limit persistent data to only one platform, which complicates application portability.

Another weakness of cloud container storage is inconsistent security policies. Applying and enforcing security and access rules across platforms is never easy. The foundation of container security is the user/group directory and the identity management system that you use to enforce access controls and usage policies. 

Containers or VMs? The factors to consider

The difference between containers and VMs is where the virtualization occurs. VMs use a hypervisor to partition the server below the OS level, so VMs share only hardware. A container's virtualization occurs at the OS level, so containers share the OS and some middleware. Running apps on VMs is more like running them on bare metal servers. Conversely, apps running on containers have to conform to a single software platform.

VMs offer the flexibility of bare metal servers, while containers optimize utilization by requiring fewer resources to be reserved. Source: Jelastic

On the other hand, containers have less overhead than VMs because there's much less duplication of platform software as you deploy and redeploy your apps and components. Not only can you run more apps per server, but containers also let you deploy and redeploy faster than you can with VMs.

TechTarget's Tom Nolle highlights a shortcoming of containers in hybrid and public clouds: co-hosting. It is considered best practice when deploying an application in containers to co-host all of the app's components to speed up network connections. However, co-hosting makes it more difficult to manage cloud bursting and failover to public cloud resources, which are two common scenarios for hybrid clouds.

Cloud management platforms such as Morpheus show the eroding distinctions between containers and VMs in terms of monitoring in mixed-cloud environments. It will likely take longer to bridge the differences between containers and VMs relating to security and compliance, according to Nolle.

DevOps: How to Give Your Business Velocity


When you want to streamline your ability to release or update products quickly, integrating DevOps into your organization can prove to be well worth any changes that need to be made in order to implement the new practices. One of the biggest advantages to DevOps is what is often called velocity - working with both speed and direction. By breaking down barriers between IT and business, this close and speedy collaboration can provide the direction that propels your business forward.

First Order: Collaboration

There is clear evidence to suggest that a DevOps approach to tech development can have a significant impact on the velocity of an IT organization.

Source: Chris Cancialosi, Forbes

When people can collaborate directly, they tend to hash out any potential issues with a project much more quickly. Organizations that are separated by function may find that development on a project moves much more slowly than hoped. 

For example, selected business team members might meet with selected IT members at the beginning of a project. Each side then goes its own way with expectations that may change before the next meeting, which could be weeks, months, or even longer away.

DevOps depicted as the intersection of various departments. Source: Wikipedia

On the other hand, employing DevOps allows for more collaborative work between teams. Employees from each group can meet daily, or team members from one group can be embedded within one or more other groups, so that the different functional groups all work directly with one another every day. 

This allows new needs or issues to be communicated right away and differences to be hashed out immediately, rather than waiting for a meeting that a single issue could derail for its entire duration, leaving no time to address other needs and concerns.

DevOps helps the relationship between Development and Operations, the relationship between IT and the business, and the relationship between the company and its customers and employees.

Source: DEVOPS digest

Development with Velocity

DevOps is often implemented when an organization uses the agile method of software development, since agile's demands frequently drive the organizational changes that lead a company to adopt DevOps practices in the first place.

The good news is that in recent years there are two healthy and related trends in the Agile community: continuous delivery (CD) and DevOps.

Source: Scott W. Ambler, Dr. Dobbs

Agile development is a method that encourages speed and flexibility in response to an ever-changing set of requirements during software development. Rather than waiting on new ideas, this flexibility allows development to continue with little interruption or delay as needs change. The ability to change rapidly also increases the number of releases of a product, so new releases and updates ship at a much faster rate, which encourages continuous delivery.

The speed of agile typically requires a toolset that automates certain processes so that progress is not hindered by tasks performed manually over and over again. Not only is that tedious, it uses up valuable time your developers or other IT staff could spend further enhancing the product and delivering additional releases. Automation removes the manual involvement from many of these processes and frees up your staff for other work.

Our experience suggests, for instance, that companies can reduce the average number of days required to complete code development and move it into live production from 89 days to 15 days, a mere 17 percent of the original time.

Source: McKinsey & Company

When DevOps is employed alongside agile development to support it, your velocity increases even further, letting you make your customers happier and do so more quickly!

Total Velocity

While DevOps can be a challenge to get going as it requires a number of organizational changes tailored to your specific business needs, it can be well worth the effort needed to reorganize and regroup. The increased speed of delivery of your products and updates can really help you, especially if it helps you to be first to market on a new idea. 

Not only will you get improved speed, but your staff will also feel more like they are working well together and contributing to projects that make it to completion. Seeing something they contributed to in action can motivate staff to find ways to streamline processes even further and enhance your velocity that much more!

 

How to Reduce Latency on Public Clouds


The cloud offers companies nearly every feature they need for managing their information resources: efficiency, scalability, capacity, affordability, reliability, security, and adaptability. What's missing from this list of cloud benefits is a deal-breaker for many organizations: performance.

A growing number of firms are removing mission-critical applications from the public cloud and returning them to in-house data centers because they need speed that the cloud infrastructure doesn't deliver. In a sponsored article on InfoWorld, HPE Cloud Group VP and Chief Engineer Gary Thome cites Dropbox as the poster child for big-name services that have turned their backs on the public cloud.

The culprit, according to Thome, is time-sharing: Public cloud services may offer unlimited capacity, but their business model relies on capping performance. That's a problem you don't have when you manage your data systems, which you can generally scale up to whatever performance level your apps require.

The public-cloud challenge: Accommodating apps with a low tolerance for latency

Financial systems are the principal category of applications that require instant response to user and system requests. To address public-cloud latency, Thome suggests combining containers with composable infrastructure, which pools compute, storage, and network resources and "self-assembles" dynamically based on the needs of the workload or app.

Four attributes of composable infrastructure are 1) the disaggregation of compute, memory, I/O, and storage; 2) re-aggregation (composition) and orchestration; 3) API-based automation and management; and 4) matching apps to available resources to optimize performance. Source: Forbes

By controlling the software-defined resources programmatically via a unified API, infrastructure becomes "a single line of code" that is optimized for that specific workload, according to Thome. With the composable infrastructure approach, you lose the public cloud's cost, efficiency, and speed benefits over on-premises data centers.

A more forward-looking approach is to address directly the causes of latency in systems hosted on the public cloud. That's the angle taken by two relatively new technologies: software-defined WANs and availability zones.

A lingua franca for network management

SD-WANs promise simpler network monitoring by accommodating a range of connections, including MPLS, broadband, and LTE. An SD-WAN's primary advantage is connecting multiple cloud services and enterprise networks. The technology reduces latency by choosing the fastest path based on each network's policies and logic. TechTarget's Lee Doyle writes that SD-WANs will become more popular as companies increase their use of SaaS applications such as Salesforce, Google Docs, and Microsoft Office 365.

An expensive alternative to dealing with latency on the public internet is to pay for direct interconnection, which offers direct links between multiple cloud services, telecom carriers, and enterprises. eWeek's Christopher Preimesberger lists seven criteria for comparing dedicated interconnection services to the public internet.

  • Improved performance by eliminating the public internet's points of contention
  • Enhanced hybrid cloud security by maintaining control over proprietary data
  • Better network availability through elimination of WAN connections between data centers and the public cloud
  • More control over costs by avoiding contracts for a set amount of bandwidth, much of which is never used
  • The flexibility of accessing physical and virtual connections through a single portal
  • A broader choice of cloud service providers, without having to enter into individual contracts with each one
  • Easier collaboration with business partners by establishing dedicated links for transferring large files securely, quickly, and efficiently - without the volatility of the public internet

Morpheus: The low-latency alternative to expensive dedicated links

The Morpheus cloud application management system offers a unique approach to guaranteeing high availability for latency-sensitive applications. Jeff Wheeler explains how the Morpheus Appliance's high-availability mode supports deployment in multi-tier environments.

All of Morpheus's components are designed to be distributable to facilitate deployment in distributed clouds and increase uptime. A stand-alone Morpheus configuration includes several tiers: web, application, cache, message queue, search index, and database. Each of these tiers except for cache is distributable and deployable on separate servers; the cache is currently localized to each application server. A shared storage tier contains artifacts and backup objects.

Nginx is used as a reverse proxy for the application tier, as well as for access to the localized package repository required for deploying data nodes and VMs. Source: Morpheus

For optimal performance, avoid crossing WAN boundaries with high latency links. In all other situations, external services can be configured in any cloud provider, on-premises cloud/data center, or virtual environment. The external load balancer that routes requests to a pool of web/app servers can be set to connect to each server via TLS to simplify configuration, but the balancer also supports non-TLS mode to support SSL offloading.

How edge networks increase rather than reduce the strain on cloud bandwidth

Peter Levine, an analyst for Andreessen Horowitz, raised a lot of eyebrows last December with his presentation explaining why he believed cloud computing would soon be replaced by edge networks. Levine reasons that the devices we use every day will soon generate too much data to be accommodated by existing network bandwidth. The only way to make the burgeoning Internet of Things practical is by moving storage and processing to the edge of the network, where the data is initially collected.

There's one element of edge networks Levine fails to address: management. Dan Draper of data center infrastructure provider Vertiv writes in an April 17, 2017, article on Data Center Frontier that edge networks place data and processing in locations that IT departments can't access easily. Depending on the system, there could be thousands or even millions of such remote data points in a typical enterprise network.

According to Draper, the solution to the bandwidth demands of IoT is the creation of an integrated infrastructure. Network nodes will be like those nested Russian dolls, or "matryoshkas," scaling from smart sensors monitoring sewer lines and similar inaccessible spots, all the way up to public cloud installations processing terabytes of data in the blink of an eye.

Draper points out two requirements for such an integrated infrastructure that remain works in progress: enhanced remote monitoring and power management. Services such as the Morpheus cloud application management system give companies a leg up by preparing them for "cloud" computing that extends all the way to the four corners of the earth.

The New Shadow IT: Custom Applications in the Cloud


There's a new kind of shadow IT arising in companies of all types and sizes: user-created cloud applications. This is one form of unauthorized IT that many firms are embracing rather than fighting, albeit cautiously.

The trend is more than just a version of "if you can't beat 'em, join 'em." For many IT managers, it's an acknowledgment that their customers -- line managers and employees -- know what tools they need to get their work done better than anyone else. Cynics might point out that working with rogue developers in their organizations is an admission that shadow IT is now too prevalent to stop.

Dark Reading's Kaushik Narayan writes in an April 7, 2017, article that "the consumerization of IT has spurred a free-for-all in the adoption of cloud services." Narayan claims that CIOs routinely underestimate the number of unauthorized applications being used in their organizations, at times by a factor of 10. One CIO told Narayan there were 100 such apps in place when an examination of the company's network logs put the number at about 1,000.

A 2016 survey of IT professionals by NTT Communications found that 83 percent report employees store company data on unsanctioned cloud services and 71 percent say the practice has been going on for two or more years. Source: CensorNet

The types of applications being developed and implemented outside IT's supervision include HR benefits, code-sharing platforms, and customer service. These apps put sensitive company information, including payment details, confidential IP, and personally identifiable information, beyond the control of data security precautions. As Narayan points out, IT departments lack the personnel and resources required to retrofit key data center apps with cloud-specific security.

Tapping 'citizen developers' to reduce application backlog

According to statistics cited by CSO's George V. Hulme in an April 17, 2017, article, 62 percent of enterprises report having "deep app development backlogs," often numbering 10 or more in the dev pipeline. In 76 percent of enterprises, it takes three months or longer to complete an app, extending to one year in 11 percent of companies.

Many organizations are cutting into their app-development workload by enlisting the services of citizen developers among the ranks of business managers and employees. The challenge for IT is to ensure the apps created and deployed by non-developers meet all security, cost, and performance requirements. You don't manage volunteer developers the same way you manage in-house IT staff.

A recent survey by FileMaker reports that improved work processes (83 percent) and greater work satisfaction (48 percent) were the two factors most likely to motivate citizen developers. Source: CIO Insight

Start by making security as easy as possible to bake into the key application components citizen developers are likely to use; one example is exposing security services through a simple, accessible API. Once you've identified apps that are in use but were developed outside the IT department, determine what sensitive information each app may handle. The only way to monitor which cloud apps employees are using is to pay close attention to where your data is going.

VMware director of security John Britton suggests four requirements for managing citizen developers:

  • Limit the non-pro developers to Java, JavaScript, or another memory-managed language (they're probably using web-based or mobile apps).
  • Make sure the developers always use encrypted connections to protect data in transit, and that they encrypt all stored data.
  • Provide the developers with mobile SDKs so the apps can be managed remotely for updates, revoking access, and wiping data on a lost or stolen phone.
  • Mentor the developers via an advisory board to help them improve the quality of their applications.

Malware lurks in 'unsanctioned' cloud storage services

Living with the unauthorized use of cloud services by employees can quickly become an unending game of whack-a-mole. Research by cloud security firm Netskope found that the average number of cloud services used by companies increased 4 percent in the fourth quarter of 2016 from the year-earlier period to a total of 1,071. Only 7 percent of the cloud services are considered enterprise-ready, according to Netskope. Eweek's Robert Lemos reports on the study in an April 25, 2017, article.

It's no surprise that use of cloud services to spread malware is on the upswing as well. Netskope estimates that 75 percent of cloud-based malware is categorized as "high severity": 37 percent of cloud threats are a form of backdoor, 14 percent are adware, and 4.2 percent are ransomware. One reason cloud-based malware is forecast to increase is how easy it is for malware to spread via cloud services.

Turning an IT liability into a corporate asset

The mistake some IT departments make is to consider the shadow IT phenomenon as a risk that must be minimized rather than an opportunity that they can capitalize on. For example, you can overcome the resistance of some employees to new technologies by highlighting the productive use of cloud services by their coworkers. This is one of the many benefits of shadow IT described by TechTarget's Kerry Doyle.

The pace of business shows no signs of slackening, which argues in favor of increased use of cloud-based tools to shorten application development cycles. The nature of development changes when developers work directly with business departments rather than toiling away in a separate IT division. The developers learn first-hand how the department functions and what employees need to ensure their success. At the same time, line managers and workers gain a better understanding of the compliance, security, and other requirements of internal IT policies.

Doyle writes that once IT departments abandon their traditional role as the sole source for all workplace technology, they are free to expend their scarce resources more productively. IT staff then can become leaders and directors of company-wide cloud initiatives in which users play a more active role. By serving as an "innovation broker," IT enters into partnerships with business departments that lead to streamlined planning, acquisition, implementation, and maintenance of increasingly vital information services.

2 Great Reasons for Making Your Cloud Data Location Aware


Two of the greatest challenges IT departments face when managing cloud data are latency and compliance. By ensuring that the data and apps you place on cloud platforms are designed to know and report their physical location, you make your data caches more efficient, and you meet regulatory requirements for keeping sensitive data resources safe.

Virtualization has transformed nearly every aspect of information management. No longer are the bulk of an organization's data and applications associated with a specific physical location. The foundation of public cloud services is the virtual machine -- a nomadic form of data that appears whenever it's needed and runs wherever it can do so most efficiently and reliably. The same holds true for VMs' more compact counterpart, the container.

You may be inclined to infer from the rootlessness of VMs that it no longer matters where a particular data resource resides at any given time. Such an inference could not be further from the truth. It's because of the lack of a permanent home that IT managers need to be tuned into the physical context in which they operate. In particular, two important management concerns cry out for location awareness: latency and compliance.

Location awareness boosts cloud response times

If intelligent caching is important for optimizing the performance of in-house machines, it is downright crucial for ensuring peak response times when caching cloud systems. TechTarget's Marc Staimer explains that cloud storage needs to be aware of its location relative to that of the app presently reading or writing the data. Location awareness is applied via policies designed to ensure frequently accessed data is kept as close as possible to the app or user requesting it.

Keeping data close to the point of consumption can't be done efficiently simply by copying and moving it. For one thing, there's the time and effort involved. You could be talking about terabytes of data for a large operation, and even multiple smaller migrations will take their toll on performance and accessibility. More importantly, the data is likely being accessed by multiple sources, so location awareness needs to support dispersed distributed read access.

An example of maximizing location awareness to enhance cloud elasticity and reduce latency is the Beacon framework that is part of the European Union's Horizon 2020 program. After initial deployment on a private cloud, an application's load may be dispersed to specific geographic locations based on user and resource demand. The new component is placed in the cloud region that's closest to the demand source.

Location aware elasticity is built into the EU's Beacon framework, which is designed to reduce cloud latency by placing data resources in cloud regions nearest to the users and resources accessing the data most often. Source: Beacon Project

Addressing cloud latency via proximity is the most straightforward approach: choose a cloud service that's located in the same region as your data center. As CIO's Anant Jhingran writes in an April 24, 2017, article, the choices are rarely this simple in the real world.

A typical scenario is an enterprise that keeps backends and APIs in on-premise data centers while shifting management and analytics operations to cloud services. Now you're round-tripping APIs from the data center to the cloud. To reduce the resulting latency, some companies use lightweight, federated cloud gateways that transmit analytics and management services asynchronously to the cloud while keeping APIs in the data center.

Confirming residency of cloud data regardless of location changes

The efficiencies that cloud services are famous for are possible only because the data they host can be relocated as required to meet the demands of the moment. This creates a problem for companies that need to keep tabs on just where sensitive data is stored and accessed, primarily to ensure compliance with regulations mandating preset security levels. These regulations include HIPAA in the U.S. and the forthcoming European General Data Protection Regulation (GDPR), which takes effect in May 2018.

Researchers at the National University of Singapore School of Computing have developed a method of verifying the residency of data hosted on any cloud server (pdf). Rather than depend on audits of the servers used by cloud providers to confirm the presence of specific data, the new technique verifies residency of outsourced data by demanding proof that each file is maintained in its entirety on the local drives of a specific cloud server.

Government data residency requirements differ greatly from data transfer requirements and data retention rules; some residency requirements may compromise data protection rather than enhance it. Source: Baker McKenzie Inform

The researchers contend that data residency offers greater assurances than approaches that focus on the retrievability of the data. The technique enhances auditing for compliance with service level agreements because it lets you identify the geographic location of the cloud server a specific file is stored on. That level of identification is not possible with after-the-fact auditing of a cloud service's logs.

Another advantage of the data residency approach is the ability to verify simultaneously replications of data on geographically dispersed cloud servers. The Proof of Data Residency (PoDR) protocol the researchers propose is shown to be accurate (low false acceptance and false rejection rates) and applicable without increasing storage or audit overhead.

From Machine Learning to Superclouds: Competing Visions of Cloud 2.0


Point releases are a grand tradition of the software industry. The practice of designating an update "2.0" is now frequently applied to entire technologies. Perhaps the most famous example of the phenomenon is the christening of "Web 2.0" by Dale Dougherty and Tim O'Reilly in 2003 to mark the arrival of "the web as platform": Users are creators rather than simply consumers of data and services.

It didn't take long for the "2.0" moniker to be applied to cloud computing. References to "Cloud 2.0" appeared as early as 2010 in an InformationWeek article by John Soat, who used the term to describe the shift by organizations to hybrid clouds. As important as the rise of hybrid clouds has been, combining public and private clouds doesn't represent a giant leap in technology.

So where is that fundamental shift in cloud technology justifying the "2.0" designation most likely to reveal itself? Here's an overview of the cloud innovations vying to become the next transformative technology.

Google's take on Cloud 2.0: It's all about machine-learning analytics

The company with the lowest expectation for what it labels Cloud 2.0 is Google, which believes the next great leap in cloud technology will be the transition from straightforward data storage to the provision of cloud-based analytics tools. Computerworld's Sharon Gaudin cites Google cloud business Senior VP Diane Greene as saying CIOs are ready to move beyond simply storing data and running apps in the cloud.

Greene says data analytics tools based on machine learning will generate "incredible value" for companies by providing insights that weren't available to them previously. The tremendous amount of data being generated by businesses is "too expensive, time consuming and unwieldy to analyze" using on-premise systems, according to Gaudin. The bigger the data store, the more effective machine learning can be in answering thorny business problems.

Gaudin quotes an executive for a cloud consultancy who claims Google's analytics tools are perceived to have "much more muscle" than those offered by Amazon Web Services. The challenge for Google, according to the analyst, is to show potential customers the use cases for its analytics and deep learning expertise. Even though companies have more data than they know what to do with, they will hesitate before investing in cloud-based analytics until they are confident of a solid return on that investment in the form of fast and accurate business intelligence.

Companies are expected to balk at adopting Google's machine learning analytics until they are convinced that there are legitimate use cases for the technology. Source: Kunal Dea from Google Cloud Team, via Slideshare

The race to be the top AI platform is wide open

After decades of hype, artificial intelligence is a technology whose time has finally come. Machine learning is a key component of commercial AI implementations. A May 21, 2017, article on Seeking Alpha cites a study by Accenture that found AI could increase productivity by up to 40 percent in 2035. AI wouldn't be possible without the cloud's massively scalable and flexible storage and compute resources. Even the largest enterprises would be challenged to meet AI's appetite for data, power, bandwidth, and other resources by relying solely on in-house hardware.

The Seeking Alpha article points out that Amazon's big lead in cloud computing is substantial, but it is far from insurmountable. The market for cloud services is "still in its infancy," and the three leading AI platforms - Amazon's open machine learning platform, Google's TensorFlow, and Microsoft's Open Cognitive Toolkit and Machine Learning - are all focusing on attracting third-party developers.

AWS's strong lead in cloud services over Microsoft, Google, and IBM is not as commanding in the burgeoning area of AI, where competition for the attention of third-party developers is fierce. Source: Seeking Alpha

Google's TensorFlow appears to have a head start on the AI competition right out of the gate. According to The Verge, TensorFlow is the most popular software of its type on the GitHub code repository; it was developed by Google for its in-house AI work and was released to developers for free in 2015. A key advantage of TensorFlow is that the two billion Android devices in use each month can be optimized for AI, putting machine learning and other AI features in the hands of billions of people.

Plenty of 'Superclouds' to choose from

Competition is also heating up for ownership of the term "supercloud." There is already a streaming music service using the moniker, as well as an orchestration platform from cloud service firm Luxoft. That's not to mention the original astronomical use of the term that Wiktionary defines as "a very large cloud of stellar material."

Two very different "supercloud" research projects are pertinent to next-generation cloud technology: one is part of the European Union's Horizon 2020 program, and the other is underway at Cornell University and funded by the National Science Foundation. Both endeavors are aimed at creating a secure multicloud environment.

The EU's Supercloud project is designed to allow users to craft their own security requirements and "instantiate the policies accordingly." Compliance with security policies is enforced automatically by the framework across compute, storage, and network layers. Policy enforcement extends to "provider domains" by leveraging trust models and security mechanisms. The overarching goal of the project is the creation of secure environments that apply to all cloud platforms, and that are user-centric and self-managed.

The EU's Supercloud program defines a "distributed architectural plane" that links user-centric and provider-centric approaches in expanded multicloud environments. Source: Supercloud Project

A different approach to security is at the heart of the Supercloud research being conducted at Cornell and led by Hakim Weatherspoon and Robbert van Renesse. By using "nested virtualization," the project packages virtual machines with the network and compute resources they rely on to facilitate migration of VMs between "multiple underlying and heterogeneous cloud infrastructures." The researchers intend to create a Supercloud manager as well as network and compute abstractions that will allow apps to run across multiple clouds.

The precise form of the next iteration of cloud computing is anybody's guess. However, it's clear that AI and multicloud are technologies poised to take cloud services by storm in the not-too-distant future.


How Machine Learning is Helping Drive Cloud Adoption


An interesting trend helping to drive cloud adoption is the rise of machine learning. As organizations seek ways to automate more processes, their use of machine learning grows to meet those needs, and that growth in turn drives greater use of the cloud to store and process the massive amounts of data machine learning can require. 

What is machine learning?

Since the invention of computers people have been trying to answer the question of whether a computer can ‘learn’...

Source: Andres Munoz, Courant Institute of Mathematical Sciences

Machine learning is a way for computers to "learn" without requiring as much explicit programming as a typical application. Machine learning algorithms use data either to learn from it directly or to make various types of predictions based on patterns identified in the data. Such learning is useful for many tasks that would otherwise be much more difficult, such as OCR (optical character recognition), email filtering, medical monitoring, text analysis, photo and video analysis, translation, and speech recognition.

For example, email filtering becomes much simpler, since the patterns used by spam or scam emails can be identified, giving a better chance that those messages will be moved to a "spam" folder or blocked entirely. This is certainly useful in helping to keep inboxes free of messages most users do not want to sift through on their own every day! 

Machine Learning and Cloud Innovation

Image depicting machine learning as the focal point of the cloud. Source: Forbes

As noted in the image from Forbes, a number of capabilities driven by machine learning also help to drive cloud innovation. Business intelligence, personal assistants, IoT (Internet of Things), bots, and cognitive computing are all enabled by machine learning, which in turn makes the cloud a desirable place to collect, store, analyze, and retrieve the data these applications need.

For instance, IoT is all about connecting machines so that they can communicate with one another and exchange data. Machine learning helps drive these types of interactions, and using the cloud makes it even easier for machines to exchange data, since the cloud provides a ready-made way to make those connections. 

As a result, the cloud has seen considerable innovation in its ability to handle this type of data exchange, as cloud systems have become more flexible and can scale much more easily than a traditional data center.

Cloud infrastructure and scalability

Communications with remote systems across factories and transportation, real-time data gathering and analytics, and the ability to integrate with enterprise software that drives your business are often requirements of any IoT system that promise value in additional automation, value discovery, cost savings, and new ways to improve customer service.

Source: Tracy Siclair, HP Enterprise

Machine learning and cloud services make an excellent combination, as many cloud services, such as Morpheus, make it easy to provision the resources needed for the collection, storage, and retrieval of large amounts of data. The biggest reason this works is that such cloud services offer both flexibility and scalability.

A cloud service is typically flexible enough to let you provision servers with different specifications, depending on the needs of the various pieces of your machine learning workload. For example, if you need different operating systems for different servers, it is as simple as selecting them and spinning up the servers. 

The same applies to provisioning different databases such as MySQL, MongoDB, and so on - you can simply spin up what you need and move on to any setup and programming that needs to be done, without worrying about acquiring hardware.

 Scalability is another big reason that machine learning and the cloud work well together. Since machine learning often needs a progressively larger amount of storage space, using a cloud service makes a lot of sense. 

With a standard data center, you can end up spending a great deal up front on what you estimate to be the maximum amount of storage space you will need, with the potential to still need more later. This can be quite costly both in the beginning and as time goes on.

On the other hand, a typical cloud service allows you to purchase only the storage space you need up front, then scale up to more space as you need it. This definitely saves money in the beginning and allows you to add more space as needed rather than simply jumping to an incredibly large amount of space before you need it. As you can see, this would definitely help to drive cloud adoption!

 

Is the Cloud Ready for Speech APIs?


Everybody's talking -- to cloud-connected devices. The question is whether your company has the speech APIs it will need to collect, analyze, and act on valuable speech data.

Many people see natural language processing as the leading edge of artificial intelligence inroads into everyday business activities. The AI-powered voice technologies at the heart of popular virtual assistants such as Apple's Siri, Microsoft's Cortana, Amazon's Alexa, and Samsung's Bixby are considered "less than superhuman," as IT World's Peter Sayer writes in a May 24, 2017, article.

Any weaknesses in the speech AI present in those consumer products are easier for many companies to abide because the systems are also noted for their ability to run on low-power, plain vanilla hardware -- no supercomputers required.

Coming soon to a cloud near you: Automatic speech analysis

Machine voice analysis took a big step forward with the recent general availability of Google's Cloud Speech API, which was first released as an open beta in the summer of 2016. The Cloud Speech API uses the same neural-network technology found in the Google Home and Google Assistant voice products. As InfoQ's Kent Weare writes in a May 6, 2017, article, the API supports services in 80 different languages.

Google product manager Dan Aharon cites three typical human-computer use cases for the Cloud Speech API: mobile, web, and IoT.

Human-computer interactions that speech APIs facilitate include search, commands, messaging, and dictation. Source: Google, via InfoQ

Among the advantages of cloud speech interfaces, according to Aharon, are speed (150 words per minute, compared to 20-40 wpm for typing); interface simplicity; hands-free input; and the increasing popularity of always-listening devices such as Amazon Echo, Google Home, and Google Pixel.

A prime example of an untapped voice-based resource for businesses is customer-service phone calls. Interactive Tel CTO Gary Graves says the Cloud Speech API lets the company gather "actionable intelligence" from recordings of customer calls to the service department. In addition to providing managers with tools for improving customer service, the intelligence collected holds employees accountable. Car dealerships have shown that the heightened accountability translates directly into increased sales.
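As a rough sketch of how a recorded call might be submitted for transcription (the endpoint follows Google's published v1 REST pattern; the API key, sample rate, and storage bucket path are placeholders, not details from the article):

> curl -s -X POST "https://speech.googleapis.com/v1/speech:recognize?key=YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
          "config": {"encoding": "LINEAR16", "sampleRateHertz": 8000, "languageCode": "en-US"},
          "audio": {"uri": "gs://example-bucket/service-call.raw"}
        }'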

Humanizing and streamlining automated voice interactions

Having to navigate through multi-level trees of voice-response systems can make customers pine for the days of long hold times. Twilio recently announced the beta version of its Automated Speech Recognition API that converts speech to text, which lets developers craft applications that respond to callers' natural statements. Compare: "Tell us why you're calling" to "If you want to speak to a support person, say 'support'." Venture Beat's Blair Hanley Frank reports on the beta release in a May 24, 2017, article.

Twilio's ASR is based on the Google Cloud Speech API; it processes 89 different languages and dialects, and it costs two cents for each 15 seconds of recognition. Twilio also announced its Understand API, which delivers to applications information about the intent of the natural language it processes; the Understand API works with Amazon Alexa in addition to Twilio's own Voice and SMS tools.

Could Amazon's cloud strength be a weakness in voice services?

By comparison, Amazon's March 2017 announcement of the Amazon Connect cloud-based contact center attempts to leverage its built-in integration with Amazon DynamoDB, Amazon Redshift, Amazon Aurora, and other existing AWS services. TechCrunch's Ingrid Lunden writes in a March 28, 2017, article that Amazon hopes to leverage its low costs compared to call centers that are not cloud-based, as well as requiring no up-front costs or long-term contracts.

A key feature of Amazon Connect is the ability to create automatic responses that integrate with Amazon Alexa and other systems based on the Amazon Lex AI service. Rather than being an advantage, the tight integration of Amazon Connect with Lex and other AWS offerings could prove to be problematic for potential customers, according to Zeus Kerravala in an April 25, 2017, post on No Jitter.

Kerravala points out that the "self-service" approach Amazon takes to the deployment of a company's custom voice-response system presents a challenge for developers. Businesses wishing to implement Amazon Connect must first complete a seven-step AWS sign-up process that includes specifying the S3 bucket to be used for storage.

The Amazon Connect contact center requires multiple steps to configure for a specific business once you've completed the initial AWS setup process. Source: Amazon Web Services

Developers aren't the traditional customers for call-center systems, according to Kerravala, and many potential Amazon Connect customers will struggle to find the in-house talent required to put the many AWS pieces together to create an AI-based voice-response system.

The democratization of AI, Microsoft style

In just two months, Microsoft's Cognitive Toolkit open-source deep learning system went from release candidate to version 2.0, as eWeek's Pedro Hernandez reports in a June 2, 2017, article. Formerly called the Computational Network Toolkit (CNTK), the general-availability release features support for the Keras neural network library, whose API is intended to support rapid prototyping by taking a "user-centric" approach. The goal is to allow people with little or no AI experience to include machine learning in their apps.
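
The appeal of the Keras approach is how few lines a working prototype takes. The sketch below is a generic, illustrative model trained on random stand-in data; with the Cognitive Toolkit installed, Keras can use CNTK as its backend, though nothing here is specific to Microsoft's products.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# A tiny binary classifier: 20 input features, one hidden layer, one output.
model = Sequential([
    Dense(32, activation="relu", input_shape=(20,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random placeholder data, just to show the end-to-end workflow.
x = np.random.rand(100, 20)
y = np.random.randint(2, size=(100, 1))
model.fit(x, y, epochs=2, batch_size=16, verbose=0)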

Microsoft claims its Cognitive Toolkit delivers clear speed advantages over Caffe, Torch, and Google's TensorFlow via efficient scaling in multi-GPU/multi-server settings. Source: Microsoft Developer Network

According to Microsoft, three trends are converging to deliver AI capabilities to everyday business apps:

  • The computational power of cloud computing
  • Enhanced algorithms and machine learning capabilities
  • Access to huge stores of cloud-based data

TechRepublic's Mark Kaelin writes in a May 22, 2017, article that Microsoft's Cognitive Services combine with the company's Azure cloud system to allow implementation of a facial recognition function simply by adding a few lines of code to an existing access-control app. Other "ready made" AI services available via Azure/Cognitive Services mashups are Video API, Translator Speech API, Translator Text API, Recommendations API, and Bing Image Search API; the company promises more such integrated services and APIs in the future.

While it is certain that AI-based applications will change the way people interact with businesses, it is anybody's guess whether the changes will improve those interactions, or make them even more annoying than many automated systems are today. If history is any indication, it will likely be a little of both.

Five Tips for Protecting Cloud Resources from Internal Threats


New approaches are required to secure data in an age of networks without boundaries. Although traditional approaches to internal IT security have been rendered obsolete, many tried-and-true techniques are adaptable to the cloud era. Here are five aspects to consider when crafting your company's data security plan.

If you remain unconvinced of the need to update your internal data security approach to make it cloud-ready, consider the fate of Smyth Jewelers, a retail jewelry chain headquartered in Maryland. Robert Uriarte and Christina Von der Ahe of the Orrick Trade Secrets Group write in a June 16, 2017, article on JD Supra that the company found itself locked out of its Dropbox account after the departure of the lone employee responsible for maintaining the account.

Among the proprietary company documents stored in the account were business plans, vendor details, confidential information about employees, customer lists, purchase histories, and "valuable customer account metrics," according to the authors. All it took to place these vital data resources off limits was for the employee to change the email address associated with the Dropbox account from the person's company email address to his private address.

As they say, hindsight is 20-20, and the quandary Smyth Jewelers found itself in was easily preventable. To ensure your organization's cloud data is protected against attacks from the inside, follow these simple steps; as they also say, an ounce of prevention is worth a pound of cure.

Tip #1: Safeguard your cloud account details

The obvious way to prevent a disgruntled employee from locking down the company's cloud assets is to establish administrative login credentials that IT controls. Another layer of prevention is to enable the cloud service's notifications whenever an important system setting changes. To be effective, those alerts must reach the right parties in a timely manner.

Further, using a cloud service that provides multiple tiers of access allows you to designate critical documents and resources that receive an added layer of protection. These protections may be as simple as a folder-level password, or file-level restrictions on viewing, printing, downloading, or editing a document.

Tip #2: Think in terms of 'governance' rather than 'controls' and 'monitoring'

There are few rules enforced by IT departments that employees can't figure out how to break. A better approach for managing information risk in your company is governance, according to Matt Kelly of RadicalCompliance in a June 19, 2017, article on Legaltech News. By focusing on governance, you devise policies for handling data that apply to everyone in the organization. The policies serve as a framework that employees can use for making judgments about information as new, unanticipated risks arise.

Kelly explains the key difference between governance and controls: governance educates users about why they need to be mindful of the risks associated with the information they handle, while controls are perceived as a fixed set of rules that apply only in specific situations.

Organizations encountered an average of 23.2 separate cloud-related threats per month in the fourth quarter of 2016, an 18.4 percent increase from the year-earlier period; the highest single category was insider threats, reported by 93.5 percent of companies. Source: Skyhigh Networks

Typical scenarios of information risk include employees who expose sensitive company data on an insecure cloud app, who collect private information from minors without their parents' consent, or who destroy data that must be preserved for litigation. No set of controls would prevent all of these occurrences, but employees made mindful of these risks through governance would know how to respond in each case.

Tip #3: Implement multifactor authentication, without ticking off users

The single most effective way to prevent unauthorized access to your company's cloud assets is by using two-factor or multifactor authentication. The single most likely way to turn users against you is by implementing multifactor authentication in a way that makes it harder for employees to get their work done. You have to find the middle ground that delivers ample protection but isn't too onerous to workers.

In a June 21, 2017, article on TechTarget, Ramin Edmond describes single sign-on (SSO) as a way to strengthen authentication without overburdening employees. SSO creates two layers of authentication, but users need to authenticate in both layers only once to gain access to a range of apps, databases, and documents. Mobile implementations of SSO allow secure access to multiple mobile apps after a single two-factor authentication.
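
The second factor itself is often a time-based one-time password (TOTP). Here is a minimal sketch of that piece, assuming the pyotp library: the server stores a per-user secret at enrollment and later verifies the six-digit code from the user's authenticator app.

import pyotp

# Generated once per user at enrollment and shared via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                   # what the user's authenticator app displays
print("Valid:", totp.verify(code))  # server-side check at login time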

Multifactor authentication and single sign-on will account for larger shares of the global cloud identity access management market in 2020. Source: Allied Market Research

Tip #4: Work with HR to educate and train users about data security policies

Responsibility for crafting data security policies and training employees in the application and enforcement of those policies is shared by the human resources and IT departments. In many companies, IT either takes on too much of the job of employee education, to the exclusion of HR, or attempts to offload the bulk of the training work to HR.

In a June 21, 2017, article on JD Supra, Jennifer Hodur identifies three areas where HR and IT need to work together:

  • Educate users about how to spot and avoid phishing attempts, ransomware, and other scams
  • Identify and react to potential and actual data breaches
  • Respond consistently to violations of data security policies by employees

Tip #5: Be stingy in granting requests for privileged accounts

Haystax Technology recently conducted a crowd-based survey of 300,000 members of the LinkedIn Information Security Community about their approach to insider threats. Security Intelligence's Rick M. Robinson reports on the survey results in a June 22, 2017, article. Sixty percent of the survey respondents identified privileged users as the source of internal data breaches.

Not all the breaches traced back to a compromised privileged account were malicious in nature. The IT pros surveyed claim negligence and old-fashioned mistakes are the sources of a great number of serious data breaches and data loss. Contractors and temporary workers were identified by 57 percent of the survey respondents as the cause of internal data breaches, whether because they are less loyal or have not been trained in the company's data security policies.

DevOps: How to Achieve Rapid Mobile App Development


When you are launching a mobile app, being able to update it quickly when problems arise is imperative; users can easily move on to another app if yours is not working at its full potential. When you need to develop and update applications rapidly, taking a DevOps approach can be extremely helpful in meeting your goals.

What is DevOps?

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.

Source: The Agile Admin

DevOps is a way of approaching the delivery of applications in such a way as to streamline the development and deployment processes so that business operations, IT management, and developers can all work together in a meaningful way. 

In the past, operations and development often worked separately, and software updates might occur a few times a year at most. With the onset of agile development, this process was sped up greatly on the development side, as updates could occur in terms of hours rather than months. However, without the development team being able to work with people on the business side to ensure that updates helped to meet their needs in a timely fashion, it was still difficult to increase the speed at which new apps and updates were released.

With DevOps, selected members from both business operations and development are given the opportunity to work together daily to ensure that not only can updates happen quickly, but that those quick updates are things that are helping to ensure the needs and requirements of the business team are met. 

With the teams working together daily, there is far less chance that something is implemented in a way that satisfies one team but not the other -- a situation that often requires additional business planning and development work to complete the release or update. Instead, agreements can be made as things are being developed, and the teams are far less likely to need follow-up meetings to reconcile differing understandings of what was supposed to be implemented.

Mobile App Development

While mobile apps may sometimes be localized to specific devices and not need much integration with other services, many of these apps not only are needed on multiple devices and platforms but also need access to backend data from one or more of your own databases.

With this in mind, a large number of mobile apps are essentially a front end tailored to each platform that sends and retrieves data through a larger backend system, which handles what is done with the data as it comes and goes. This model works well with DevOps, especially when you can combine DevOps with the cloud, which speeds up the delivery process even further.
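
As a minimal sketch of that model, the Flask endpoint below stands in for the shared backend: any mobile or web front end can POST new records to it and GET them back. The route, payload, and in-memory "database" are purely illustrative.

from flask import Flask, jsonify, request

app = Flask(__name__)
ORDERS = []  # stand-in for a real database

@app.route("/api/orders", methods=["GET", "POST"])
def orders():
    # POST: store whatever JSON the client sends; GET: return everything stored.
    if request.method == "POST":
        ORDERS.append(request.get_json(force=True))
        return jsonify({"status": "created"}), 201
    return jsonify(ORDERS)

if __name__ == "__main__":
    app.run()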

Adding the Cloud

Cloud adoption has increased rapidly over the last few years, and this is due in large part to the advantages the cloud can offer in the way of speeding up and normalizing the way in which infrastructure is set up for the numerous projects that are often being worked on within an organization.

Cloud adoption in 2016. Source: RightScale

For example, obtaining a new server machine in a data center can often take days, weeks, or even months depending on the process and costs involved in setting up a new machine. If costs are high, IT and business must decide whether the new machine can be implemented or not. This means that the project may have to move toward sharing a machine with other applications/databases or being put on hold until the financial needs can be met.

On the other hand, use of the cloud allows you to scale much more easily, and often with less cost. If you use a cloud service such as Morpheus, you have the ability to instantly put additional servers online. If you have a server setup that works for most or all of the projects you will launch, you can save that setup so that you can simply click and have your server up and running in no time! From here, you can easily implement the code needed to run any new applications and make quick updates anytime you need to afterward.

DevOps and the Cloud

With both cloud adoption and DevOps, you can really help to reduce or eliminate shadow IT, which is often used as a method of avoiding the usual business and/or IT restrictions that are in place to ensure apps meet business needs as well as IT security and consistency requirements. An app that is built outside of these restrictions is often placed on borrowed hardware, such as an employee's personal computer or a server shared with another department. These types of setups can easily lead to security breaches via the network, the app, or the hardware!

With the cloud, you make getting the necessary servers much easier and faster for teams working on a project, so that there is less chance they will want to “work around” it to save time. Also, with business and IT working together daily with DevOps, the individual teams will be far less likely to simply develop and implement something on the side because they don’t want to wait for input from the other team.

With both DevOps and the cloud, mobile and other web applications can be developed and delivered at a much faster rate, making both you and your customers much happier!

How to Prepare for the Next Cloud Outage


A big part of an IT manager's job is asking, "What if...?" As cloud services become more popular, disaster plans are updated to include contingencies for the moment when the company's cloud-hosted data resources become unavailable.

Two recent service failures highlight the damage that can result when a cloud platform experiences an outage:

  • On February 28, 2017, an Amazon employee working on the company's Simple Storage Service (S3) executed a command that the person thought would remove a small number of servers from an S3 subsystem. As HyTrust's Ashwin Krishnan writes in a May 4, 2017, article on Light Reading, the command was entered incorrectly, which resulted in the removal of many more servers than intended.
  • On March 21, 2017, many customers of Microsoft's Azure, Outlook, Hotmail, OneDrive, Skype, and Xbox Live services were either unable to access their accounts, or they experienced severely degraded service. The company traced the outage to a "deployment task" gone awry; rolling back the task remedied the problem.

The proactive approach to mitigating the effects of a cloud failure

If you think the threat of an outage would deter companies from taking advantage of cloud benefits, you obviously haven't been working in IT very long. There's nothing new about ensuring the high availability of crucial apps and data when trouble strikes. On the contrary, keeping systems running through glitches large and small is the IT department's claim to fame.

NS1's Alex Vayl explains in an article on IT Pro Portal that the damage resulting from an outage can be greatly reduced by application developers following best practices. Vayl identifies six keys for designing and deploying high availability applications:

  1. Prioritize the components comprising the app stack based on the project's cost/complexity, the impact of a failure on users, and the likelihood of each component failing.
  2. Minimize the risk that could be introduced by third parties, primarily cloud service providers, by ensuring they follow best practices themselves.
  3. Consider the app components that need to be maintained in private or hybrid clouds due to compliance and regulatory considerations.
  4. When technically feasible, configure critical app components in hybrid clouds that are backed up via a separate cloud service; alternatively, replicate data across at least two zones and automatically reroute traffic if one zone fails.
  5. Implement an intelligent DNS setup that shifts traffic dynamically based on real-time user, network, application, and infrastructure telemetry (a minimal health-check sketch follows this list).
  6. Make sure your DNS isn't a single point of failure by contracting with a managed DNS provider, as well as with a secondary DNS provider.
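
The sketch below illustrates the health-check logic behind items 4 through 6: probe each zone's endpoint and steer traffic away from any zone that stops responding. The zone URLs are placeholders, and update_dns_weight() is a hypothetical stand-in for whatever API your managed DNS provider exposes.

import requests

ZONES = {
    "us-east": "https://us-east.example.com/health",
    "us-west": "https://us-west.example.com/health",
}

def healthy(url, timeout=2):
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

def update_dns_weight(zone, weight):
    # Hypothetical: replace with your DNS provider's API call.
    print(f"setting {zone} routing weight to {weight}")

for zone, url in ZONES.items():
    update_dns_weight(zone, 100 if healthy(url) else 0)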

"

The cloud lets you implement abstraction at the infrastructure, platform, or application level, but improving availability via multi-zone, multi-region, or multi-cloud increases management complexity. Source: Cloudify

Redundancy nearly always adds to the cost and complexity of cloud networks. However, the expenses incurred as a result of a prolonged network outage could dwarf your investment in redundant data systems. The best way to minimize the risk of a catastrophic system failure is by eliminating single points of failure. The best way to avoid single points of failure is by investing in redundancy.

In other words, you can pay now for redundancy, or pay later for massive data losses.

Getting a grip on the 'three vectors of control'

Preparing for a cloud service outage isn't much different than getting ready for any system failure, according to HyTrust's Krishnan. No matter the nature of the network, there will always be three pinch points, or "vectors of control," that managers need to master.

The first is scope, which is the number of objects each admin or script is authorized to act upon at a particular time. Using the Microsoft outage as an example, a deployment task's scope would limit the number of containers it could operate on at one time.

The second control vector is privilege, which controls what type of action an admin or script (task) can take on an object. An example of a privilege restriction would be a task that is allowed to launch a container but not to destroy one.

The third control point is the governance model, which implements best practices and your policy for enforcing the scope and privileges described above in a "self-driven" manner. For example, a governance policy would limit the number of containers an admin or script can act on at one time to no more than 100 (scope) while also providing a predefined approval process for exceptions to the policy.
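
The sketch below is not any vendor's API; it is a toy illustration of how the three vectors might be enforced in code before a script acts on cloud objects: privilege limits which actions a role may take, scope caps how many objects one command may touch, and governance defines the exception path.

SCOPE_LIMIT = 100  # governance policy: no more than 100 objects per action
PRIVILEGES = {
    "deploy-task": {"launch"},        # may launch containers, never destroy them
    "admin": {"launch", "destroy"},
}

def guarded_action(role, action, targets, approved_exception=False):
    if action not in PRIVILEGES.get(role, set()):
        raise PermissionError(f"{role} is not privileged to {action}")
    if len(targets) > SCOPE_LIMIT and not approved_exception:
        raise RuntimeError("scope exceeded; governance approval required")
    for target in targets:
        print(f"{action} {target}")

# A deployment task can launch a handful of containers...
guarded_action("deploy-task", "launch", [f"container-{i}" for i in range(10)])
# ...but destroying anything, or touching thousands of objects, would be refused.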

Despite the highly publicized outage of February 28, 2017, AWS has experienced much less downtime than cloud-platform competitors Microsoft and Google. Source: The Information

Multi-cloud's biggest selling points: Backup and redundancy

The major cloud platforms -- Amazon, Microsoft, and Google -- have different attitudes about transferring data between their respective systems. Google's new Cloud Endpoints API is designed to integrate with AWS's Lambda function-as-a-service product, as NetworkWorld's Brandon Butler writes in an April 27, 2017, article. This allows you to use Endpoints to manage API calls associated with a Lambda application.

Among AWS's cloud connectivity products are Direct Connect, Amazon Virtual Private Cloud, and AWS Storage Gateway; coming soon is an easier way to run VMware workloads in AWS clouds. Yet AWS executives continue to downplay the use of multiple clouds, even for backup and redundancy. They insist it's easier, less expensive, and just as effective to split your workloads among various regions of the AWS cloud.

Many analysts counter this argument by pointing out the future belongs to multi-cloud, primarily because of the approach's built-in redundancy and resiliency in the event of outages -- whether caused by human error or a natural disaster. Butler suggests that one way to hedge your bets as the multi-cloud era begins is to adopt a third-party cloud integration tool rather than to rely on your cloud provider's native tools.

(For a great primer on everything multi-cloud, check out TechRepublic's "Multi-cloud: The Smart Person's Guide," published on May 4, 2017.)

10 Big Data Myths Exploded


If a little bit of data is good, then a lot of data must be great, right? That's like saying if a cool breeze feels nice on a warm summer day, then a tornado will make you feel ecstatic.

Perhaps a better analogy for big data is a high-spirited champion racehorse: With the proper training and a talented jockey up, the thoroughbred can set course records, but minus the training and rider, the powerful animal would never make it into the starting gate.

To ensure your organization's big data plans stay on track, you need to dispel these 10 common misconceptions about the technology.

1. Big data simply means 'lots of data': At its core, big data describes how structured or unstructured data combine with social media analytics, IoT data, and other external sources to tell a "bigger story." That story may be a macro description of an organization's operation or a big-picture view that can't be captured using traditional analytic methods. A simple measure of the volume of data involved is insignificant from an intelligence-gathering perspective.

2. Big data needs to be clean as a whistle: In the world of business analytics, there is no such thing as "too fast." Conversely, in the IT world, there is no such thing as "garbage in, gold out." Just how clean is your data? One way to find out is to run your analytics app, which can identify weaknesses in your data collections. Once those weaknesses are addressed, run the analytics again to highlight the "cleaned up" areas.

3. All the human analysts will be replaced by machine algorithms: The recommendations of data scientists are not always implemented by the business managers on the front lines. Industry executive Arijit Sengupta states in a TechRepublic article that the proposals are often more difficult to put in place than the scientists project. However, relying too much on machine-learning algorithms can be just as challenging. Sengupta says machine algorithms tell you what to do, but they don't explain why you're doing it. That makes it difficult to integrate analytics with the rest of the company's strategic planning.

Predictive algorithms range from relatively simple linear algorithms to more sophisticated tree-based algorithms, and finally to extremely complex neural networks. Source: Dataiku, via Dataconomy

4. Data lakes are a thing: According to Toyota Research Institute data scientist Jim Adler, the huge storage repositories that some IT managers envision housing massive amounts of structured and unstructured data simply don't exist. Organizations don't indiscriminately dump all their data into one shared pool. The data is "carefully curated" in a department silo that encourages "focused expertise," Adler states. This is the only way to deliver the transparency and accountability required for compliance and other governance needs.

5. Algorithms are infallible prognosticators: Not long ago, there was a great deal of hype about the Google Flu Trends project, which claimed to predict the location of influenza outbreaks faster and more accurately than the U.S. Centers for Disease Control and other health information services. As the New Yorker's Michele Nijhuis writes in a June 3, 2017, article, it was thought that people's searches for flu-related terms would accurately predict the regions with impending outbreaks. In fact, simply charting local temperatures turned out to be a more accurate forecasting method.

Google's flu-prediction algorithm fell into a common big data trap: it surfaced meaningless correlations, such as connecting high school basketball games and flu outbreaks because both occur during the winter. When data mining operates on a massive data set, it is more likely to turn up relationships that are statistically significant yet entirely pointless. An example is linking the divorce rate in Maine with U.S. per capita consumption of margarine: there is indeed a "statistically significant" relationship between the two numbers, despite the lack of any real-world significance.
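
The trap is easy to reproduce. In the toy example below, two made-up series that both follow the seasons correlate strongly and "significantly" even though neither has anything to do with the other; the numbers are fabricated purely for illustration.

import numpy as np
from scipy.stats import pearsonr

months = np.arange(24)
seasonal = np.cos(2 * np.pi * months / 12)  # shared winter/summer cycle

# Two unrelated quantities that both happen to peak in winter.
flu_searches = 50 + 30 * seasonal + np.random.normal(0, 3, 24)
basketball_games = 20 + 15 * seasonal + np.random.normal(0, 2, 24)

r, p = pearsonr(flu_searches, basketball_games)
print(f"r = {r:.2f}, p = {p:.4f}")  # strong and "significant," yet meaningless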

6. You can't run big data apps on virtualized infrastructure: When "big data" first appeared on people's radar screens about 10 years ago, it was synonymous with Apache Hadoop. As VMware's Justin Murray writes in a May 12, 2017, article on Inside Big Data, the term now encompasses a range of technologies, from NoSQL (MongoDB, Apache Cassandra) to Apache Spark.

Critics previously questioned the performance of Hadoop on virtual machines, but Murray points out that Hadoop scales on VMs with performance comparable to bare metal, and it utilizes cluster resources more efficiently. Murray also blows up the misconception that the basic features of VMs require a storage area network (SAN). In fact, vendors frequently recommend direct attached storage, which offers better performance and lower costs.

7. Machine learning is synonymous with artificial intelligence: The gap between an algorithm that recognizes patterns in massive amounts of data and one that is able to formulate a logical conclusion based on the data patterns is more like a chasm. ITProPortal's Vineet Jain writes in a May 26, 2017, article that machine learning uses statistical interpretation to generate predictive models. This is the technology behind the algorithms that predict what a person is likely to buy based on past purchases, or what music they may like based on their listening history.

As clever as these algorithms may be, they are a far cry from achieving the goal of artificial intelligence, which is to duplicate human decision-making processes. Statistics-based predictions lack the reasoning, judgment, and imagination of humans. In this sense, machine learning may be considered a necessary precursor of true AI. Even the most sophisticated AI systems to date, such as IBM's Watson, can't provide the insights into big data that human data scientists deliver.

8. Most big data projects meet at least half their goals: IT managers know that no data-analysis project is 100-percent successful. When the projects involve big data, the success rates plummet, as shown by the results of a recent survey by NewVantage Partners (pdf). While 95 percent of the business leaders surveyed said their companies had engaged in a big data project over the past five years, only 48.4 percent of the projects had achieved "measurable results."

In fact, big data projects rarely get past the pilot stage, according to the results of Gartner research released in October 2016. The Gartner survey found that only 15 percent of big data implementations are ever deployed to production, which is relatively unchanged from the 14 percent success rate reported in the previous year's survey.

NewVantage Partners' Big Data Executive Survey indicates that fewer than half of all big data projects realize their goals, and "cultural" changes are the most difficult to achieve. Source: Data Informed

9. The rise of big data will reduce demand for data engineers: If a goal of your organization's big data initiatives is to minimize the need for data scientists, you may be in for an unpleasant surprise. The 2017 Robert Half Technology Salary Guide indicates that annual salaries for data engineers have jumped to an average between $130,000 and $196,000, while salaries for data scientists are now between $116,000 and $163,500 on average, and salaries for business intelligence analysts currently average from $118,000 to $138,750.

10. Employees and line managers will embrace big data with open arms: The NewVantage Partners survey found that 85.5 percent of the companies participating are committed to creating a "data-driven culture." However, the overall success rate of new data initiatives is only 37.1 percent. The three obstacles cited most often by these companies are insufficient organizational alignment (42.6 percent), lack of middle management adoption and understanding (41 percent), and business resistance or lack of understanding (41 percent).

The future may belong to big data, but realizing the technology's benefits will require a great deal of good old-fashioned hard work -- of the human variety.

When is it a Good Time to Implement DevOps?


When deciding to implement DevOps, you may not be sure when the best time to get started will be. Will other teams and individuals within your organization get on board immediately, or will it take some time and convincing? Can you get support from the top of the chain, and can they assist you in pushing forward a new way of operating so that people feel comfortable making the change? Is it imperative to make the change due to technical needs such as cloud offerings? All of these things can affect when the best time to implement DevOps is, so you will want to carefully decide what the best approach will be.

Where do people currently stand?

The main difference between doing DevOps right and otherwise is how the people interact with each other.  Shared knowledge, common goals, and desire to succeed are all traits of organizations wanting to do it right.

Source: Necco Ceresani for Xebia Labs

If everyone is already on board, then obviously it is a good time to go ahead, as long as you have the necessary infrastructure and plan in place to do so. This, of course, is rarely the case unless you are in a particularly small organization, where teams are less separated from one another and you can convey your message to all affected parties much more quickly.

If you don't have enough support yet, put together a solid plan with plenty of input from others to help get more people on board with the idea. The additional considerations below can also help you gain further support.

Can you get support from the top?

A critical and necessary factor is being able to get support from the top. If your leadership is not on board, it is unlikely you will be able to move forward with the implementation until they are. Again, if you have a good plan with input from others within the organization to support your proposal, you will likely be able to get leadership on board much more easily.

Also, if leadership does not want to take an active role in getting people on board, it can certainly slow things down, as there may be some who feel no urgency to make any changes without a little support from the top. So, do your best to get the full support from leadership as this can get things moving forward much more quickly.

Can you make the change comfortable for others?

The most successful implementations start from practices you already know and use, scaled to organizational level and consistently improved with best practices. Big bangs – where everything is restarted from scratch – are bound to cause disruption and distraction.

Source: Mike Dunham for Scio

Whether or not you have support from the top, making others feel comfortable with the idea of changing things can go a long way toward getting a large number of people and team leaders on board quickly.

For example, if you simply show each team how DevOps would benefit them and save them time before implementation, then you stand a better chance of gaining their support. If you simply demand that teams start implementing new procedures on a specific date, you are likely to get much more negative results instead.

If you have leadership on board already, this can be enhanced if they will take part in this process and assist in showing how such a change will benefit each individual, each team, and the organization overall.

Is the change needed as soon as possible?

Of course, if there is a major need to change as quickly as possible, such as security issues with the current processes, then you may be able to convince others to get on board by explaining the issues that are causing this change to be necessary and timely. In such a case, it may be necessary to make the change anyway, so you will definitely want support from the top if this approach must be taken.

When doing so, as mentioned, you should already have a plan in place for the transition and get people as much information, training, and assistance as possible so that they are able to take an active part in the process. Many times, if people understand that they are helping to solve a potentially serious problem, they will be more willing to lend a supporting hand.

In the end, just be sure you have as many of these bases covered as you possibly can before deciding to make the switch. The more comfortable everyone is with making the change, the better a time it is to begin implementing DevOps in your organization!

 

 


What Does the Future Hold for DBAs?


If any IT job can be considered secure, you would think it would be that of a database administrator. After all, the U.S. Bureau of Labor Statistics Occupational Outlook Handbook forecasts an 11 percent increase in DBA employment from 2014 to 2024. That's a faster rate than the average for all occupations, and just a tick below the 12 percent growth rate projected by the agency for all computer jobs.

Before any of you DBAs out there get too comfortable about your employment prospects, keep in mind that the database skillset continues the transformation triggered by the rise of cloud computing and database as a service (DBaaS). According to the BLS numbers, DBA employment at cloud service providers will grow by 26 percent in the decade ending in 2024.

The shifting emphasis to cloud databases hasn't had a great impact on the skills companies look for in their new DBAs. While database administration is one of the top ten "hot skills" identified in Computerworld's Forecast 2017 survey of in-demand tech skills, SQL programmers continue to be the most sought-after group. The survey found that 25 percent of companies plan to hire DBAs this year.

IT managers place database administration as one of the top 10 hot skills in 2017: 25 percent of the companies surveyed plan to hire a DBA in 2017. Source: Computerworld

One change noted in the 2017 survey is an increased focus on DBAs who "understand the user experience," according to Michelle Beveridge, CIO for adventure travel firm Intrepid Group. DBAs need to look beyond data rules, mandatory input requirements, and data structures to consider first and foremost the business processes behind the data collection. Finding DBAs with these skills will continue to be a challenge, Beveridge states.

Slow pace of change in languages, tools benefits experienced DBAs, programmers

A look at the most recent DB-Engines ranking of DBMSs by popularity indicates the staying power of traditional relational databases: Oracle, MySQL, and Microsoft SQL Server continue to dominate the rankings, as they have for years. As of the June 2017 numbers, the PostgreSQL relational DBMS took over the fourth spot (year-to-year) from the MongoDB document store, while Cassandra was rated eighth (down one from the year-earlier ranking), and Redis was ninth (up one place from the June 2016 scores).

The continuing popularity of old favorites is also evident in the June 2017 RedMonk Programming Language Rankings, which are based on rankings from GitHub and Stack Overflow. JavaScript and Java have held down the top two spots since the inception of the rankings in 2012. Python, PHP, and C# have traded the third, fourth, and fifth positions on the list for almost as long. Another long-time favorite, C++, is holding steady in the sixth position, while Ruby dropped to eighth after peaking in the fourth spot in the third quarter of 2013.

The most upwardly mobile language on the list is Kotlin, which is ranked only 46th, but the language has moved up from number 65 in the December 2016 rankings. Most of the bump is attributed to Google's decision in May 2017 to make Kotlin the company's alternative to Swift, which is currently rated 11th. The language is expected to continue its climb up the rankings as more Android developers experiment with Kotlin as they create new apps.

Enterprise repositories find their way to the public cloud

Data warehouses represent one of the last bastions of in-house data centers. A new class of public cloud data repositories is challenging the belief that warehouses need to reside on the premises. TechTarget's Trevor Jones writes in a June 8, 2017, article that services such as Amazon Redshift, Google Cloud Platform BigQuery, and Microsoft Azure SQL Data Warehouse provide greater abstraction and integration with related services. This makes it simpler for managers to explore the organization's deep pools of data alongside traditional data warehouses.

Choosing a database depends in large part on the size of the data you need to accommodate - the more data you have, the more likely you'll need a non-relational database. Source: Stephen Levin, via Segment

The goal of such services is to enhance business intelligence by tapping a range of cloud services hosting structured and unstructured data. However, the challenges in realizing this goal are formidable, particularly for enterprises. Much of a company's existing structured data must be cleaned or rewritten for the transition to cloud platforms, and it isn't unusual for enterprises to have workloads in several different cloud services.

One company in the process of transitioning to a cloud data warehouse is the New York Times, which previously built its own Hadoop cluster and used data warehouses from Informatica, Oracle, AWS, and other vendors. This setup left much of the company's data "too siloed and too technical," according to Jones. The Times is now transitioning to Google Cloud Platform as the sole receptacle for all of its warehoused data, primarily as a way to put powerful analytical tools in the hands of users.

Laying the groundwork for real-time analytics

A technology likely to have a great impact on DBAs in coming years is real-time analytics, which is also called streaming analytics. Dataversity defines stream processing as analyzing and acting on data in real time by applying continuous queries. Applications connect to external data sources, integrate analytics in the app "flow," and update external databases with the processed information.

While descriptive, predictive, and prescriptive analytics perform batch analysis on historical data, streaming analytics evaluate and visualize data in real time. This facilitates operational decision-making related to business processes, transactions, and production. It also allows current and historical data to be reported at the same time, which makes it possible to display via a dashboard changes in transactional data sets in real time.
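
A continuous query can be boiled down to a very small pattern: keep a sliding window over the event stream and recompute the aggregate as each event arrives. The sketch below shows that pattern in plain Python with invented transaction amounts; production systems would hand this job to Spark Streaming, Flink, or a similar engine.

import time
from collections import deque

WINDOW_SECONDS = 60
window = deque()  # (timestamp, amount) pairs inside the current window

def on_event(amount, now=None):
    now = now if now is not None else time.time()
    window.append((now, amount))
    # Drop events that have aged out of the one-minute window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    total = sum(amount for _, amount in window)
    print(f"rolling 60-second total: {total:.2f}")  # push to a dashboard

for amount in (12.50, 8.99, 110.00):
    on_event(amount)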

The components of a real-time streaming analytics architecture include real-time and historical data combined in an event engine from which real-time actions are taken and displayed on a dashboard. Source: Paul Stanton, via MarTech Advisor

Many obstacles must be overcome to realize the benefits of real-time analytics. For example, a great number of organizations rely on Hadoop for analyzing their large stores of historical data, but Hadoop can't accommodate streaming, real-time data. Alternatives include MongoDB, Apache Flink, Apache Samza, Spark Streaming, and Storm. Also, real-time data flows are likely to overwhelm existing business processes, and the costs of faulty analysis increase exponentially.

The more insights you can gain from your organization's data resources, and the faster those insights can be applied to business decisions, the more value you can squeeze out of your information systems. Putting practical business intelligence in the hands of managers when and where they need it is the reward that lets DBAs know they're contributing directly to their company's success.

Cloud and AI: A Work in Progress, with Unlimited Upside


Cloud computing and artificial intelligence: Two technologies that were made for each other.

 

Well, not quite. To date, cloud-based AI remains a work in progress. The long-term forecast remains rosy as the pieces of the AI puzzle fall into place. What is uncertain is which of the most promising AI platforms will realize the technology's potential. Here's a look at where the AI platform leaders are today, and how they are likely to stack up in the future.

 

Amazon's big lead in the cloud gives it a leg up in the AI sweepstakes

 

If the history of the technology industry teaches one lesson, it's that an early lead in a new product category can evaporate faster than you can say "Netscape" or "MySpace." Still, no company appears to be ready to challenge the substantial lead Amazon AWS has in the cloud. If you think this fact will prevent competitors from taking on the company in AI, well, you don't know tech.

 

The recent joint announcement by Amazon founder and CEO Jeff Bezos and Microsoft CEO Satya Nadella that their AI-based voice assistants -- Amazon's Alexa and Microsoft's Cortana -- would work together caught the attention of the press. TechCrunch's Natasha Lomas writes in an August 30, 2017, article that despite the clunkiness of one voice assistant calling the other, the deal brings together Cortana's business and productivity focus with Alexa's emphasis on consumer e-commerce and entertainment.

 

In an August 30, 2017, interview with the New York Times' Nick Wingfield, Bezos states that people will turn to multiple voice assistants for information, depending on the subject matter. That's why it makes sense for Alexa to depend on Cortana when someone needs to access data that resides in Outlook rather than build a direct link for Alexa to the Microsoft productivity app.

 

Yet the Amazon AI projects getting all the press are the "showy ones," according to Bezos in an interview with Internet Association CEO Michael Beckerman. As GeekWire's Todd Bishop reports in a May 6, 2017, article, Bezos states that machine learning and AI will usher in a "renaissance" and "golden age" in which these "advanced techniques" will be accessible to all organizations, "even if they don’t have the current class of expertise that’s required."

 

The Amazon AI portfolio: Something for everyone

 

The cloud leader has assembled a formidable lineup of AI services to offer its customers. Topping the list is the AWS Deep Learning AMI (Amazon Machine Image) that simplifies the process of creating "managed, auto-scaling clusters of GPUs for training and inference at any scale." The product is pre-installed with Apache MXNet, TensorFlow, Caffe2 (and Caffe), Theano, Torch, Microsoft Cognitive Toolkit, Keras, and other deep learning tools and drivers, according to the company.

 

Amazon's AI offerings are separated into three layers to address the varying requirements and expertise levels of its business customers. Source: Amazon, via TechRepublic

 

The AWS Deep Learning CloudFormation template uses the Amazon Deep Learning AMI to facilitate setting up a distributed deep learning cluster, including the EC2 instances and other AWS resources the cluster requires. Three API-driven AI services are designed to let developers add AI features to their apps via API calls:

 

  • Amazon Lex uses Alexa's automatic speech recognition (ASR) and natural language understanding (NLU) to add "conversational interfaces," or chatbots, to your apps
  • Amazon Polly converts text to "lifelike" speech in more than two dozen languages and in both male and female voices (a minimal Polly call through boto3 is sketched after this list)
  • Amazon Rekognition uses the image-analysis features of Amazon Prime Photos to give apps the ability to identify objects, scenes, and faces in images; it can also compare faces between images
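
As referenced in the Polly item above, the API-driven services are meant to be a few lines of code away. The sketch below calls Polly through boto3; the text, voice, and output file are illustrative, and AWS credentials and region are assumed to be configured in the environment.

import boto3

polly = boto3.client("polly")

result = polly.synthesize_speech(
    Text="Your order has shipped.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The response streams back the synthesized audio.
with open("speech.mp3", "wb") as f:
    f.write(result["AudioStream"].read())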

 

Two other Amazon AI tools are intended to help developers and data scientists create models that can be deployed and managed with minimum overhead. Amazon Machine Learning offers visualization tools and wizards that allow machine learning models to be devised without having to deal with complex algorithms. Apache Spark on Amazon EMR (Elastic MapReduce) serves up a managed Hadoop framework for processing data-intensive apps across EC2 instances.

 

IBM Watson: Off to a slow start, but still a top contender

 

The consensus is that IBM Watson is over-hyped. Maybe Watson's critics are simply impatient. Artificial intelligence requires a great deal of data. The most efficient place for that data to reside is in the cloud. As the cloud grows, it provides AI systems such as Watson with the space they need to blossom. At present, much of the deep data Watson needs to thrive remains missing in action.

 

In a June 28, 2017, article, Fortune's Barb Darrow explains that Watson isn't a single service, which confounds some customers. Instead, Watson is "a set of technologies that need[] to be stitched together at [the customer's] site." You're buying "a big integration project" rather than a product, according to Darrow.

 

Much can be learned from what appear to be Watson failures, Darrow writes. For example, MIT Tech Review found that the cause of M.D. Anderson Cancer Center's decision to cancel a collaboration with Watson Health was a shortage of the data Watson needs to be "trained."

 

The Register's Shaun Nichols, in a July 19, 2017, article, points out another shortcoming of IBM's AI platform. Watson is noted for being "tricky to use compared to the competition." More importantly, IBM is "losing the war for AI talent," which reduces the chances that the company will be able to develop competitive AI products in the future.

 

In an August 25, 2017, article, TechTarget's Mekhala Roy quotes Ruchir Puri, IBM Watson chief architect: "AI and the cloud are two sides of the same coin." Because of cloud services, more developers have access to AI in a "very consumable way," according to Puri. This includes middle school students using Watson as part of a robotics project, for example. 

 

As AI tools reach a broader swath of users, Watson stands poised to serve as the AI platform for the masses... eventually.

 

Google Cloud Machine Learning Engine and DeepMind

 

Google's initial foray into providing businesses with a platform for their AI initiatives is the Google Cloud Machine Learning Engine, which the company describes as a managed service for building "machine learning models that work on any type of data, of any size." The service depends on the AI-learning TensorFlow framework that also underpins Google Photos, Google Cloud Speech, and other products from the company.

 

Like IBM's Watson, Google Cloud Machine Learning Engine is a collection of parts that customers assemble as their needs dictate. As TechTarget's Kathleen Casey reports in a July 2017 article, the service's four primary components are a REST API, the gcloud command-line tool, the Google Cloud Platform Console, and the Google Cloud Datalab. The ability to integrate with such Google services as Google Cloud Dataflow and Google Cloud Storage adds the processor and storage elements required to create machine-learning apps.

 

The four leading cloud machine-learning vendors take very different approaches in their attempts to dominate the burgeoning field of cloud AI. Source: TechTarget

 

NetworkWorld's Gary Eastwood writes in a May 8, 2017, article that in addition to making TensorFlow open source, Google provides businesses that aren't able to create their own custom machine-learning models with "pretrained models." According to the Next Platform's Jeffrey Burt in a March 9, 2017, article, Google hopes to leverage its cloud AI initiatives to close the gap between it and the two current cloud leaders, Amazon AWS and Microsoft Azure.

 

No one doubts Google's commitment to ruling AI in the cloud. As Burt points out, the company has invested $30 billion in its cloud strategy. Eric Schmidt, executive chairman of Google's Alphabet parent, is quoted as saying Google has "the money, the means, and the commitment to pull off a new platform of computation globally for everyone who needs it. Please don’t attempt to duplicate this. You have better uses [for] your money.”

 

Microsoft Project Brainwave/Cognitive Toolkit

 

Making AI more accessible is a common theme among the platform contenders. In this regard, Microsoft has credentials its AI competitors lack. Microsoft calls its new Project Brainwave a "deep learning acceleration platform," according to TechRadar's Darren Allen in an August 23, 2017, article. Using Intel’s Stratix 10 field programmable gate array hardware accelerator, the company promises "ultra-low latency" data transfers and "effectively" real-time AI calculations.

 

Project Brainwave supports Microsoft's Cognitive Toolkit and Google's TensorFlow; support for Azure customers is expected soon, according to the company. In an August 25, 2017, post in TechNewsWorld, Richard Adhikari describes Project Brainwave's three primary layers:

 

  • High-performance, distributed system architecture
  • Deep neural network (DNN) engine synthesized onto field programmable gate arrays
  • Compiler and runtime for low-friction deployment of trained models

 

According to Adhikari, Microsoft's AI platform is designed to benefit from the company's Project Catapult "massive FPGA infrastructure" that has been added in recent years to Azure and Bing. It's worth noting that Microsoft chose to use FPGAs rather than chips optimized for a specific set of algorithms, which is the approach Google took with its Tensor Processing Unit. Forbes' Aaron Tilley writes in an August 25, 2017, article that this gives Microsoft more flexibility, which is an important feature considering the fast pace of change in deep learning.

 

Seeking Alpha's Motek Moyen agrees that Project Brainwave's flexibility is an advantage over Google's TPU. In an August 29, 2017, article, Moyen states that future-proofing its AI platform will allow Microsoft to challenge Amazon's AWS as the number one AI provider. While Amazon maintains its lead in the IaaS space, Microsoft holds the top spot in the enterprise SaaS category.

 

Amazon's tremendous lead in the market for cloud infrastructure services could be challenged by Microsoft's Project Brainwave and other AI-based platforms. Source: Synergy Research Group, via Seeking Alpha

 

In the AI horserace, don't rule out late-closing Facebook, Apple

 

When members of Facebook's AI Research (FAIR) team spoke at a recent NVidia GPU tech conference in San Jose, CA, they focused on the work being done at the company to advance the field in general, particularly "longer-term academic problems," according to the New Stack's TC Currie in an August 21, 2017, article. However, there are at least two areas where Facebook intends to "productize" their research: language translation and image classification.

 

Facebook's primary advantage in AI research is the incredible volume of data it has to work with. The training models the FAIR team develops may have millions of parameters to weight and bias. The resulting dataset may be "tens of terabytes" in size, according to Currie. In addition to the multi-terabyte datasets, the company's deep neural net (DNN) training must support computer vision models requiring from 5 to more than 100 exa-FLOPS (floating point operations per second) of compute and billions of I/O operations per second.

 

Until recently, Apple wasn't saying much about its AI plans. As the Wall Street Journal's Tripp Mickle points out in a September 3, 2017, article, Apple's historic penchant for secrecy may work against its AI efforts because researchers prize the opportunity to publish and discuss their AI work openly. Working in Apple's favor is the ability of researchers to influence the consumer products that will put AI into the hands of millions, perhaps billions of people.

 

While it often feels like AI and other paradigm-shifting technologies come on us in a rush, the reality is that the trip from the lab to the street is long and slow, filled with many twists and turns. And like the fabled contest between the tortoise and the hare, the race doesn't always go to the swiftest.

Implementing Monitoring to Encourage DevOps Support


When you are looking to implement DevOps in your organization, one starting point that may help you gain support for implementation is to get a monitoring service up and running for your website, web apps, or both.

Being able to show management that improvements could be made to performance, customer experience, and sales could give you an avenue in which you can offer DevOps implementation as a method of making the improvements quickly and easily, thus leading to further gains in all of these areas.

So, what could monitoring data have in it that may be good examples of where improvements could be made to your IT infrastructure?

Website and app performance

Amazon experienced 1 % loss in revenue per 100 ms site load delay

Source: Paessler Blog

The performance of your website or app can make a very big difference when it comes to your organization’s bottom line. Any sort of delay or bottleneck can cost you sales. For instance, a potential customer that has to wait for your website to load for more than a couple of seconds is very likely to leave and pursue a purchase elsewhere.

A slow response when trying to make a purchase could have the same end result, as the customer may feel the site or app is down because it is not responding in a timely manner. It could also result in refund requests from customers whose transactions did go through but who never saw confirmation that the purchase was successful.

These types of issues could be caused by your network, lack of site or app optimization, a slow database server or connection, or a number of other things. Having an automatic monitoring solution in place can help you pinpoint the origin of the problem and give you insight into how DevOps could be helpful in solving the issue.
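
At its simplest, such monitoring is a probe that runs on a schedule, times each request, and flags anything slow or unreachable. The sketch below shows the idea with the requests library; the URLs and the two-second threshold are placeholders, and a real monitoring service adds alerting, history, and far more detail.

import requests

THRESHOLD_SECONDS = 2.0
PAGES = [
    "https://www.example.com/",
    "https://www.example.com/checkout",
]

for url in PAGES:
    try:
        resp = requests.get(url, timeout=10)
        elapsed = resp.elapsed.total_seconds()
        flag = " [SLOW]" if elapsed > THRESHOLD_SECONDS else ""
        print(f"{url}: HTTP {resp.status_code} in {elapsed:.2f}s{flag}")
    except requests.RequestException as exc:
        print(f"{url}: DOWN ({exc})")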

Customer experience

As mentioned before, performance is a key factor in customer experience, but not the only one. A monitoring service could prove helpful in finding that potential customers are getting held up on specific pages or interactions for some reason.

For example, users may be having trouble finding the button that adds an item to their cart, or the control that finalizes a purchase. Any number of interactions could cause a bottleneck: searches that take too long, menus whose controls the user cannot find, content that is only available on mouseover (an action mobile devices don't support), and many other possibilities.

Again, the implementation of a monitoring service, in addition to user testing, can be extremely helpful in determining where users are having trouble interacting with your website or web app.

Customer conversions

Walmart increased conversion rate by 2 % for every 1 second of load time improvement

Source: Paessler Blog

As noted above, customer conversions are the end result of the other issues. As you clean up things such as load time (the quote above shows how much a single second of improvement in load time helped Walmart in this area), you should also see a notable improvement in customer conversions!

If users find that your site or app loads quickly and that they can find the interactive controls they need to gather and purchase items, they will be far more likely to complete purchases than they would otherwise. This can be a great help in convincing management that DevOps could be extremely helpful to the organization in order to address the issues at hand and convert more sales.

DevOps as a solution

DevOps needs (and in effect, creates) a culture of knowledge and information sharing, that leads to collaboration between the various teams. The practice of DevOps principles not only is beneficial for the performance of software development and operations, but it also has a very positive impact on the web service development and quality assurance performance.

Source: Cigniti

Since DevOps implementation helps drive collaboration, it is a great solution for optimizing your organization’s website or web apps against the varied issues users may encounter. As noted above, these could be anything: performance, database issues, development issues, or design issues.


Oftentimes, there is room for improvement in more than one of these areas, and DevOps implementation allows your teams to work together smoothly and quickly to address all of the issues simultaneously. Not only will monitoring help you find the issues, but you can then show how DevOps is the best solution for your team to tackle these issues in tandem!

 

New Hybrid Cloud Approaches Teach Old Data Centers New Tricks


Data centers are evolving from walled-in, self-contained hardware and software systems into cloud gateways. The platforms companies relied on just a decade ago are now destined for computer museums. As the cloud transition continues, the roles of in-house IT will be very different, but just as vital to the organization's success as ever.

 

Oracle/mainframe-type legacy systems have short-term value in the "pragmatic" hybrid clouds described by David Linthicum in a September 27, 2017, post on the Doppler. In the long run, however, the trend favoring OpEx (cloud services) over CapEx (data centers) is unstoppable. The cloud offers functionality and economy that no data center can match.

 

Still, there will always be some role for centralized, in-house processing of the company's important data assets. IT managers are left to ponder what tomorrow's data centers will look like, and how the in-house installations will leverage the cloud to deliver the solutions their customers require.

 

Spending on hybrid clouds in the U.S. will nearly quadruple between 2014 and 2021. Source: Statista, via Insight

 

Two disparate networks, one unified workflow

 

The mantra of any organization planning a hybrid cloud setup is "Do the work once." Trend Micro's Mark Nunnikhoven writes in an April 19, 2017, article that this ideal is nearly impossible to achieve due to the very different natures of in-house and cloud architectures. In reconciling the manual processes and siloed data of on-premises networks with the seamless, automated workflows of cloud services, some duplication of effort is unavoidable.

 

The key is to minimize your reliance on parallel operations: one process run in-house, another run for the same purpose in the cloud. For example, a web server in the data center should run identically to its counterparts deployed and running in the cloud. The first step in implementing a unified workflow is choosing tools that are "born in the cloud," according to Nunnikhoven. The most important of these relate to orchestration, monitoring/analytics, security, and continuous integration/continuous delivery (CI/CD).

 

Along with a single set of tools, managing a hybrid network requires visibility into workloads, paired with automated delivery of solutions to users. In both workload visibility and process automation, cloud services are ahead of their data center counterparts. As with the choice of cloud-based monitoring tools, using cloud services to manage workloads in the cloud and on premises gives you greater insight and improved efficiency.

 

The components of a Cloud Customer Hybrid Integration Architecture encompass the public network, a cloud provider network, and an enterprise network. Source: Cloud Standards Customer Council

 

Process automation is facilitated by orchestration tools that let you automate the operating system, the application, and security configuration. Wrapping these components in a single package makes one-click deployment possible, along with other time- and resource-saving techniques. Convincing staff of the benefits of applying cloud tools and techniques to the management of in-house systems is often the most difficult obstacle in the transition from traditional data centers to cloud services.
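
As a rough sketch of that "single package" idea -- every host, registry, image, and port below is hypothetical, the script assumes a Debian-style host with Docker and ufw installed, and a real orchestration tool would add rollback and auditing -- a one-step deployment might patch the operating system, roll out the application, and apply the security rules in a single run:

#!/bin/sh
# Hypothetical one-step deployment: OS updates, app rollout, and firewall
# rules wrapped into one repeatable action.
set -e

# Operating system: apply pending updates
apt-get update && apt-get -y upgrade

# Application: replace the running web container with the latest image
docker pull registry.example.com/shop/web:latest
docker rm -f shop-web 2>/dev/null || true
docker run -d --name shop-web -p 80:8080 registry.example.com/shop/web:latest

# Security: open only the port the service needs
ufw allow 80/tcp
ufw --force enable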

 

The view of hybrid clouds from the data center out

 

Role reversals are not uncommon in the tech industry. However, it took decades to switch from centralized mainframes to decentralized PC networks and back to centralized (and virtualized) cloud servers. The changes today happen at a speed that the most agile of IT operations find difficult to keep pace with. Kim Stevenson, VP and General Manager of data center infrastructure at Lenovo, believes the IT department's role is now more important than ever. As Stevenson explains in an October 5, 2017, article on the Stack, the days of simply keeping pace with tech changes are over. Today, IT must drive change in the company.

 

The only way to deliver the data-driven tools and resources business managers need is by partnering with the people on the front lines of the business, working with them directly to make deploying and managing apps smooth, simple, and quick. As the focus of IT shifts from inside-out to outside-in, the company's success depends increasingly on business models that support "engineering future-defined products" using software-defined facilities that deliver business solutions nearly on demand.

 

Data center evolution becomes a hybrid-cloud revolution

 

Standards published by the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) define four data center tiers, from a basic modified server room with a single, non-redundant distribution path (tier one) to a facility with redundant capacity components and multiple independent distribution paths (tier four). A tier-one data center offers limited protection against physical events, while a tier-four implementation protects against nearly all physical events and supports concurrent maintainability, so a single fault will not cause downtime.

 

These standards addressed the needs of traditional client-server architectures characterized by north-south traffic, but they come up short when applied to server virtualization's primarily east-west data traffic. Network World's Zeus Kerravala writes in a September 25, 2017, article that the changes this fundamental shift will cause in data centers have only begun.

"

 

Hybrid cloud dominates the future plans of enterprise IT departments: 85 percent have a multi-cloud strategy, and the vast majority of those strategies involve hybrid clouds. Source: RightScale 2017 State of the Cloud Report

 

Kerravala cites studies conducted by ZK Research that found 80 percent of companies will rely on hybrid public-private clouds for their data-processing needs. Software-defined networks (SDNs) are the key to achieving the agility that future data-management tasks will require, enabling hyper-converged infrastructures (HCIs) that provide the servers, storage, and networking modern applications rely on.

 

Two technologies that make HCIs possible are containerization and micro-segmentation. The former packages an entire runtime environment so it can be created and destroyed almost instantaneously; the latter supports the predominantly east-west data traffic by creating fine-grained secure zones around workloads, so traffic between them does not have to be routed back through centralized firewalls, intrusion prevention tools, and other perimeter security components.
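
The containerization half of that claim is easy to demonstrate with Docker's standard CLI (the container and image names below are just examples): a complete runtime environment comes up and is torn down again in moments.

> docker run -d --rm --name demo-web nginx:alpine    # full runtime environment, up in seconds once the image is cached
> docker exec demo-web nginx -v                      # inspect the running environment
> docker stop demo-web                               # stopped and removed almost as quickly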

 

Laying the groundwork for the cloud-first data center

 

IT executives are eternal optimists. The latest proof is evident in the results of a recent Intel survey of data security managers. Eighty percent of the executives report their organizations have adopted a cloud-first strategy that they expect to take no more than one year to implement. What's striking is that the same survey conducted one year earlier reported the same figures. Obviously, cloud-first initiatives are taking longer to put in place than expected.

 

Sometimes, an unyielding "cloud first" policy results in square pegs being forced into round holes. TechTarget's Alan R. Earls writes in a May 2017 article that the people charged with implementing their companies' cloud-first plans often can't explain what benefits they expect to realize as a result. Public clouds aren't the answer for every application: depending on the organization's experience with cloud services and the availability of cloud resources, some apps simply work better, or run more reliably and efficiently, in-house.

 

An application portfolio assessment identifies the systems that belong in the cloud, particularly those that take the best advantage of containers and microservices. For most companies, the data center of the future will operate as the private end of an all-encompassing hybrid cloud.

Cloud Compliance from A to Z


When it comes to regulatory compliance, some IT departments are in need of an attitude adjustment. Confirming your organization complies with rules governing the protection of sensitive data is no longer an onerous task to be avoided at all costs. Forward-thinking companies now see their cloud compliance efforts as the cornerstone of achieving the cost, performance, efficiency, and security benefits of cloud services.

 

There's no mystery about why companies large, small, and in between are adopting multi-cloud strategies. The three primary reasons cited by Gigaom's Tim Crawford in an October 19, 2017, article are to prevent being locked into a single cloud provider; to take advantage of the best of breed in specific service and tool categories; and to evaluate several competing cloud services prior to focusing on and building around a single service.

 

Many companies taking the multi-cloud route are surprised to discover that dealing with multiple cloud services facilitates compliance. For example, few industries take compliance more seriously than financial services. In an October 6, 2017, post on Data Center Knowledge, Sean Finnerty cites a 2015 SANS survey (pdf) that found 71 percent of financial firms choose to keep compliance controls in-house rather than on public cloud services.

 

In fact, a multi-cloud setup provides the logging, control, classification, redundancy, and maintenance required to maintain compliance with regulations such as the Payment Card Industry Data Security Standard (PCI DSS), the Gramm-Leach-Bliley Act (GLBA), and Europe's General Data Protection Regulation (GDPR).

 

Governance: The key to unlocking all the cloud's benefits

 

According to Finnerty, virtual environments streamline regulatory compliance by offering high availability, scalability, and multilayer security that lets you apply protections with pinpoint accuracy. Datamation's Christine Taylor goes even further, explaining in an April 13, 2017, article that the goal of cloud governance is to manage IT processes "to receive maximum value from cloud computing investments."

 

The most popular method used by companies to ensure cloud services meet compliance requirements is contract clauses (48 percent), followed by SLAs (44 percent) and audits (8 percent). Source: SANS Institute

 

Investing the time and resources required to implement a business-wide cloud governance initiative allows companies to realize "significant cost savings" from their management processes and cloud frameworks, according to Taylor. Four important points to raise with cloud service providers are listed below:

 

  1. Primary responsibility for compliance with any government regulations that apply to data stored in the cloud rests with the IT department. However, cloud providers share some responsibility for regulations applying to privacy and to the storage of sensitive data.
  2. The service level agreement must include recovery assurance that guarantees data availability and durability, as well as compliance with data-retention requirements.
  3. Confirm the physical security of the cloud service as well as the strength of the service's digital defenses. Request the most recent annual audit on data storage compliance as well as segmentation information in multi-tenant environments.
  4. For cross-border investigations, verify compliance with national and regional privacy laws, and ensure sensitive data can be "culled" for storage in safe regions.

Compliance enhances efficiency of cloud implementations

 

According to the results of a 2016 survey of enterprise IT pros by research firm Clutch, the standard receiving the most attention from data managers is that of the Cloud Security Alliance (CSA), which 39 percent deem necessary for their cloud data. Clutch's Andrew Miller writes in an article on the Whir that the CSA's Certificate of Cloud Security Knowledge demonstrates that a provider is knowledgeable in the design, construction, and maintenance of cloud environments. The CSA's Security, Trust & Assurance Registry (STAR) program raises the bar even higher, offering the "toughest set of cloud security regulations in the industry," according to Miller.

 

Cloud Security Alliance certification is the standard cited most often by enterprise IT departments as a requirement before they will commit their sensitive data to a cloud service. Source: Clutch

 

The next most popular cloud standard, required by 23 percent of the enterprise IT managers surveyed, is the International Organization for Standardization's ISO/IEC 27018:2014, which governs the storage and processing of personally identifiable information (PII). Cloud services that have earned certification for all of ISO 27001 are considered to be compliant with ISO/IEC 27018:2014.

 

Compliance with the U.S. Food and Drug Administration's regulations pertaining to electronic records (21 CFR Part 11: Electronic Records; Electronic Signatures) was cited by 23 percent of the IT managers surveyed by Clutch as a requirement for their cloud service providers. In addition, 18 percent of enterprise IT departments insist on compliance with the standards of the Content Delivery & Security Association (CDSA). This entails internal risk assessments by cloud services, documentation of security systems and processes, review by a CDSA auditor, a surveillance audit after six months, and an annual audit to maintain certification.

 

Challenges of confirming HIPAA and FedRAMP compliance by cloud services

 

While the data-protection requirements built into the Health Insurance Portability and Accountability Act of 1996 (HIPAA) affect only organizations that conduct electronic healthcare transactions, a violation of the HIPAA rules can result in a multi-million-dollar fine. The maximum penalty for a single violation is $1.5 million, and a breach involving thousands of patient records, such as the one reported by New York-Presbyterian Hospital and Columbia University in 2014, can put an organization on the hook for a fine as high as $4.8 million.

 

Cloud providers may claim to be HIPAA-compliant, but as Clutch's Miller points out, the U.S. Department of Health and Human Services (HHS), which oversees HIPAA, does not certify providers. Organizations are responsible for ensuring their applications are HIPAA-compliant, using the HHS's own audit protocols.

 

Federal government agencies adopting cloud services must ensure their providers comply with the Federal Risk and Authorization Management Program (FedRAMP), a framework that uses third-party assessment organizations (3PAO) to confirm that a cloud service's data safety measures are transparent and consistent with the agencies' security policies. Ramona Adams writes in a September 29, 2017, post on ExecutiveGov that the new FedRAMP Tailored baseline is designed to make it faster and easier for agencies to authorize "low-impact software-as-a-service platforms."

 

The massive compliance headache on the horizon: GDPR

 

The calendar may still say "2017," but many IT managers -- and all of those whose companies operate in Europe -- are thinking long and hard about May 2018. That's when the European Union's General Data Protection Regulation (GDPR) takes effect. Adam Shepherd writes in an October 12, 2017, article on the UK site CloudPro that companies in the U.S. are most likely to be "caught by surprise" by the GDPR's penalties for non-compliance.

 

The GDPR sets fines for failing to protect sensitive consumer data as high as 20 million euros or 4 percent of the firm's "global annual turnover," whichever amount is higher. Shepherd quotes Box executive David Benjamin as stating that had the Equifax breach occurred when GDPR was in effect, it would have made Equifax subject to a fine of $60 million or greater because the breach affected hundreds of thousands of British citizens.
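
To see how the higher-of-the-two rule works, consider a hypothetical firm with annual global turnover of 2 billion euros: 4 percent of turnover comes to 80 million euros, which exceeds the 20-million-euro floor, so 80 million euros would be the maximum fine. Only very small firms would ever hit the flat 20-million-euro ceiling first.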

 

A report released recently by research firm Netskope on the GDPR readiness of cloud services found that only 24.6 percent of the cloud services currently used by enterprises received a GDPR-readiness rating of "high." Factors considered in determining the services' readiness include the location of cloud-stored data, the level of encryption, and the specifics of data processing agreements.
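
Some of these factors can be verified directly. For example, if part of the data in question lives in Amazon S3, the standard AWS CLI can report where a bucket is stored and whether default encryption at rest is configured (the bucket name below is a placeholder):

> aws s3api get-bucket-location --bucket example-customer-data
> aws s3api get-bucket-encryption --bucket example-customer-data

If the second command reports that no server-side encryption configuration exists, data in that bucket is not encrypted at rest by default, which is exactly the kind of gap the Netskope report highlights.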

 

Of the 75 percent of cloud services not yet in compliance with GDPR, 67.1 percent do not specify in their terms of service that customers own the data, 80.4 percent don't encrypt data at rest, and 41.9 percent replicate data in geographically dispersed data centers. Source: Netskope, via Gigabit

 

A one-stop resource for companies preparing for GDPR compliance is the Information Security Forum's GDPR Implementation Guide, which takes a two-phase approach:

 

  • Phase A entails preparation via personal data discovery, compliance status determination, and definition of the scope of the company's GDPR program.
  • Phase B covers implementation of GDPR requirements so as to demonstrate sufficient levels of GDPR compliance.

It's time for organizations to move cloud compliance off the back burner. Careful planning of your compliance efforts will result in faster, more efficient IT operations while also minimizing the risk of a costly data breach. Think of your compliance efforts as the best form of insurance -- the type that can prevent a disaster with the potential to put your company out of business.
