
Definitive Cloud Migration Checklist For Planning Your Move

Embracing the cloud may be a cost-effective business solution, but moving data from one platform to another can be an intimidating step for technology leaders.

Ensuring smooth integration between the cloud and traditional infrastructure is one of the top challenges for CIOs. Data migrations do involve a certain degree of risk. Downtime and data loss are two critical scenarios to be aware of before starting the process.

Given the possible consequences, it is worth having a practical plan in place. We have created a useful strategy checklist for cloud migration.


1. Create a Cloud Migration Checklist

Before you start reaping the benefits of cloud computing, you first need to understand the potential migration challenges that may arise.

Only then can you develop a checklist or plan that will ensure minimal downtime and ensure a smooth transition.

There are many challenges involved with the decision to move from on-premise architecture to the cloud. Finding a cloud technology provider that can meet your needs is the first one. After that, everything comes down to planning each step.

The migration itself is the tricky part, since some of your company's data might be unavailable during the move. You may also have to take your in-house servers temporarily offline. To minimize any negative consequences, every step should be determined ahead of time.

That said, you need to remain willing to change or rewrite the plan as necessary if something puts your applications and data at risk.

2. Which Cloud Solution To Choose: Public, Private, or Hybrid?

Public Cloud

A public cloud provides services and infrastructure off-site through the internet. While public clouds offer the best opportunity for efficiency through shared resources, they come with a higher risk of vulnerabilities and security breaches.

Public clouds make the most sense when you need to develop and test application code, collaborate on projects, or add incremental capacity. Be sure to address security concerns in advance so that they don't turn into expensive issues in the future.

Private Cloud

A private cloud provides services and infrastructure on a private network. The allure of a private cloud is the complete control over security and your system.

Private clouds are ideal when security is of the utmost importance, especially if the stored information contains sensitive data. They are also the best cloud choice if your company operates in an industry that must adhere to stringent compliance or security measures.

Hybrid Cloud

A hybrid cloud is a combination of both public and private options.

Separating your data throughout a hybrid cloud allows you to operate in the environment which best suits each need. The drawback, of course, is the challenge of managing different platforms and tracking multiple security infrastructures.

A hybrid cloud is the best option for you if your business is using a SaaS application but wants to have the comfort of upgraded security.

3. Communication and Planning Are Key

Of course, you should not forget your employees when coming up with a cloud migration project plan. There are psychological barriers that employees must work through.

Some employees, especially older ones who do not entirely trust this mysterious "cloud," might be tough to convince. Be prepared to spend some time teaching them how the new infrastructure will work and assuring them that they will not notice much of a difference.

Not everyone trusts the cloud, particularly those who are used to physical storage drives and everything that they entail. They – not the actual cloud service that you use – might be one of your most substantial migration challenges.

Other factors that go into a successful cloud migration roadmap are testing, runtime environments, and integration points. Some issues can occur if the cloud-based information does not adequately populate your company’s operating software. Such scenarios can have a severe impact on your business and are a crucial reason to test everything.

A good cloud migration plan considers all of these things, from cost management and employee productivity to operating system stability and database security. Yes, your stored data has specific security needs, especially when its administration is partly entrusted to an outside company.

When coming up with and implementing your cloud migration system, remember to take all of these things into account. Otherwise, you may come across some additional hurdles that will make things tougher or even slow down the entire process.


4. Establish Security Policies When Migrating To The Cloud

Before you begin your migration to the cloud, you need to be aware of the related security and regulatory requirements.

There are numerous regulations that you must follow when moving to the cloud. These are particularly important if your business is in healthcare or payment processing. In this case, one of the challenges is working with your provider on ensuring your architecture complies with government regulations.

Another security concern is identity and access management for cloud data. Only a designated group in your company should have access to that information, to minimize the risk of a breach.
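
As an illustration, access to migrated data can be restricted to a designated group before any request reaches the cloud store. The sketch below is a minimal, hypothetical role-based check in Python; the role names and resource labels are assumptions for illustration, not part of any cloud provider's API.

```python
# Minimal role-based access check (hypothetical roles and resources).
# Real deployments would use the cloud provider's IAM service instead.

ROLE_PERMISSIONS = {
    "migration-admin": {"customer-records", "billing-data"},
    "analyst": {"billing-data"},
}

def can_access(role: str, resource: str) -> bool:
    """Return True only if the role is explicitly granted the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    print(can_access("analyst", "customer-records"))          # False: deny by default
    print(can_access("migration-admin", "customer-records"))  # True
```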

Whether your company needs to follow HIPAA Compliance laws, protect financial information or even keep your proprietary systems private, security is one of the main points your cloud migration checklist needs to address.

Not only does the data in the cloud need to be stored securely, but the application migration strategy should keep it safe as well. No one who is not supposed to have that information – hackers included – should be able to access it during the migration process. And once the business data is in the cloud, it needs to be kept safe when it is not in use.

It needs to be encrypted according to the highest standards to be able to resist breaches. Whether it resides in a private or public cloud environment, encrypting your data and applications is essential to keeping your business data safe.
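
For instance, symmetric encryption at rest can be as simple as the following Python sketch, which uses the widely available `cryptography` package; treating that package as your toolchain is an assumption, not a requirement of any particular provider.

```python
# Encrypt a file's contents at rest before uploading to cloud storage.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store this key in a secrets manager, not with the data
cipher = Fernet(key)

plaintext = b"customer records to migrate"
token = cipher.encrypt(plaintext)  # authenticated, AES-based encryption

# Later, after downloading from the cloud:
assert cipher.decrypt(token) == plaintext
```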

Many third-party cloud server companies have their security measures in place and can make additional changes to meet your needs. The continued investments in security by both providers and business users have a positive impact on how the cloud is perceived.

According to recent reports, the share of organizations citing security concerns fell from 29% to 25% last year. While this is a positive trend for both business and the cloud industry, security is still a sensitive issue that needs to stay in focus.

5. Plan for Efficient Resource Management

Many businesses fail to realize that the cloud often requires them to introduce new IT management roles.

Once configuration and cloud monitoring tools are set up, many tasks shift to the cloud provider, while a number of roles stay in-house. Staffing those roles often involves hiring an entirely new set of talents.

Employees who previously managed physical servers may not be the best ones to deal with the cloud.

There might be migration challenges that are over their heads. In fact, you will probably find that the third-party company that you contracted to handle your migration needs is the one who should be handling that segment of your IT needs.

This situation is something else your employees may have to get used to: calling an outside provider when something happens and they cannot get the information they need.

While you should not get rid of your IT department altogether, you will have to shift some of its functions to adjust to the new architecture.

However, there is another type of cloud migration resource management that you might have overlooked – physical resource management.

When you have a company server, you have to have enough electricity to power it securely. You need a cold room to keep the computers in, and even some precautionary measures in place to ensure that sudden power surges will not harm the system. These measures cost quite a bit of money in upkeep.

When you use a third-party data center, you no longer have to worry about these things. The provider manages the servers and is in place to help with your cloud migration. Moreover, it can assist you with any further business needs you may have. It can provide you with additional hardware, remote technical assistance, or even set up a disaster recovery site for you.

These possibilities often make the cloud pay for itself.

According to a TechTarget survey of 1,037 IT professionals, companies spend around 31% of their IT budgets on cloud services. This figure continues to increase as businesses keep discovering the potential of the cloud.


6. Calculate Your ROI

Cloud migration is not inexpensive. You need to pay for the cloud server space and the engineering involved in moving and storing your data.

However, although this appears to be one of the many migration challenges, it is not. As cloud storage has become more popular, its costs have fallen. The return on investment (ROI) for cloud storage also makes the price worthwhile.

According to a survey conducted in September 2017, 82% of organizations found that their cloud migration met or exceeded their ROI expectations. Another study showed that costs still often run slightly higher than planned.

In that study, 58% of respondents spent more on cloud migration than planned. The ROI is not necessarily affected, as they may still save money in the long run even if the original migration challenges sent them over budget.

One of the reasons companies see a positive ROI is that they no longer have to maintain their own server farm. Keeping a physical server system running consumes a significant amount of utilities, due to the need to keep it powered and cool.

You will also need employees to keep the system architecture up to date and to troubleshoot any problems. With a cloud server, these expenses go away. There are other advantages to using a third-party server company, including the fact that these businesses help you with cloud migration and all of the other details.

The survey included some additional data, including the fact that most respondents – 68% of them – accepted the help of their contracted cloud storage company to handle the migration. An overwhelming majority also used the service to help them develop and implement a cloud migration plan.

Companies are not afraid to turn to the experts when it comes to this type of IT service. Not everyone knows everything, so it is essential to know when to reach out with questions or when implementing a new service.

Final Thoughts on Cloud Migration Planning

If you’re still considering the next steps for your cloud migration, the tactics outlined above should help you move forward. A migration checklist is the foundation for your success and should be your first step.

Cloud migration is not a simple task. However, by understanding and preparing for the challenges, you can migrate successfully.

Remember to evaluate what is best for your company and move forward with a trusted provider.



NSX-V vs NSX-T: Discover the Key Differences

Virtualization has changed the way data centers are built. Modern data centers run hypervisors on physical servers and hardware to host virtual machines. Virtualizing these functions enhances the flexibility, cost-effectiveness, and scalability of the data center. VMware is a leader in the virtualization platform market; its platform allows multiple virtual machines to run on a single physical machine.

One of the most important elements of each data center, including virtualized ones, is the network. Companies that require large or complex network configurations prefer using software-defined networking (SDN).

Software-defined networking (SDN) is an architecture designed to make networks agile and flexible. It improves network control by giving companies and service providers the ability to respond and adapt rapidly to changing technical requirements. It's a dynamic technology in the world of virtualization.

VMware

In the virtualization market, VMware is one of the biggest names, offering a wide range of products spanning workstation virtualization, network virtualization, and security platforms. Its network virtualization product, VMware NSX, comes in two variants: NSX-V and NSX-T.

In this article, we explore VMware NSX and examine some differences between VMware NSX-V and VMware NSX-T.


What is NSX?

NSX refers to a specialized software-defined networking solution offered by VMware. Its main function is to provide virtualized networking to its users. NSX Manager is the centralized component of NSX, which is used for the management of networks. NSX also provides essential security measures to ensure that the virtualization process is safe and secure.

Businesses whose networks are rapidly growing in scale and complexity need greater visibility and management power. That modernization can be achieved by implementing a top-grade data center SDN solution with agile controls; SDN empowers this vision by centralizing and automating management and control.

What is NSX-T?

NSX-T by VMware offers an agile software-defined infrastructure for building cloud-native application environments. It aims to provide automation and operational simplicity for networking and security.

NSX-T supports multiple clouds, multi-hypervisor environments, and bare-metal workloads. It also supports cloud-native applications, providing a network virtualization stack for OpenStack, Kubernetes, KVM, and Docker, as well as AWS native workloads. It can be deployed without a vCenter Server and is adapted for heterogeneous compute systems. NSX-T is considered the future of VMware.

What is NSX-V?

NSX-V architecture features deployment reconfiguration, rapid provisioning, and destruction of on-demand virtual networks. It integrates with VMware vSphere and is specific to vSphere hypervisor environments. This design utilizes the vSphere distributed switch, allowing a single virtual switch to connect multiple hosts in a cluster.


NSX Components

The primary components of VMware NSX are the NSX Manager, NSX Controllers, and NSX Edge gateways.

NSX Manager is the primary component used to manage networks, from a private data center to native public clouds. With NSX-V, the NSX Manager works with a single vCenter Server. With NSX-T, the NSX Manager can be deployed as an ESXi VM or KVM VM, or through NSX Cloud. The NSX-T Manager runs on the Ubuntu operating system, while the NSX-V Manager runs on Photon OS. The NSX Controller is the central hub: it controls all logical switches within a network and maintains information about all virtual machines, VXLANs, and hosts.

NSX Edge

NSX Edge is a gateway service that allows VMs access to physical and virtual networks. It can be installed as a services gateway or as a distributed virtual router and provides the following services: Firewalls, Load Balancing, Dynamic routing, Dynamic Host Configuration Protocol (DHCP), Network Address Translation (NAT), and Virtual Private Network (VPN).

NSX Controllers

The NSX Controller is a distributed state management system that controls virtual networks and overlay transport tunnels. It is deployed as a VM on KVM or ESXi hypervisors. It monitors and controls all logical switches within the network and manages information about VMs, VXLANs, switches, and hosts. Structured as a cluster of three controller nodes, it ensures data redundancy if one NSX Controller node malfunctions or fails.

Features of NSX

There are many similar features and capabilities for both NSX types. These include:

  • Distributed routing
  • API-driven automation
  • Detailed monitoring and statistics
  • Software-based overlay
  • Enhanced user interface

There are many differences as well. For example, NSX-T is cloud-oriented and not tied to any specific platform or hypervisor, while NSX-V offers tight integration with vSphere and uses a manual process to configure the IP addressing scheme for network segments. The APIs also differ between NSX-V and NSX-T.
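
To illustrate the API-driven side of NSX-T, the sketch below lists logical switches through the NSX-T Manager REST API using Python's `requests` library. The endpoint shown matches the NSX-T management-plane API as commonly documented, but treat the URL, credentials, and certificate handling as assumptions to verify against your NSX version.

```python
# List logical switches via the NSX-T Manager REST API (sketch).
# Requires: pip install requests
import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # hypothetical address
AUTH = ("admin", "password")                     # use real credentials or cert auth in practice

resp = requests.get(
    f"{NSX_MANAGER}/api/v1/logical-switches",
    auth=AUTH,
    verify=False,  # lab use only; verify the Manager's certificate in production
)
resp.raise_for_status()

for switch in resp.json().get("results", []):
    print(switch["id"], switch.get("display_name"))
```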

To better understand these concepts, view the VMware NSX-V vs NSX-T table below.

VMware NSX-V vs NSX-T – Feature Comparison

Basic Functions
  • NSX-V: Offers rich features such as deployment reconfiguration and rapid provisioning and destruction of on-demand virtual networks. Utilizes the vSphere distributed switch, allowing a single virtual switch to connect multiple hosts in a cluster.
  • NSX-T: Provides an agile software-defined infrastructure for building cloud-native application environments, with operational simplicity for networking and security. Multiple clouds, multi-hypervisor environments, and bare-metal workloads are all supported.

Origins
  • NSX-V: Originally released in 2012 and built around the VMware vSphere environment.
  • NSX-T: Emerged later from the same NSX family, designed to address some of the use cases not covered by NSX-V.

Coverage
  • NSX-V: Designed solely for on-premises (physical network) vSphere deployments. A single NSX-V Manager works with only one VMware vCenter Server instance, and it applies only to VMware virtual machines. This leaves a significant coverage gap for organizations and businesses using hybrid infrastructure models.
  • NSX-T: Extends coverage to multi-hypervisor environments, containers, public clouds, and bare-metal servers. Because it is decoupled from VMware's hypervisor platform, it can incorporate agents to perform micro-segmentation even on non-VMware platforms. Its limitations include some feature gaps, and it leaves out certain micro-segmentation solutions such as Guardicore Centra.

Working with NSX Manager
  • NSX-V: Works with only one vCenter Server; runs on Photon OS.
  • NSX-T: Can be deployed as an ESXi VM or KVM VM, or through NSX Cloud; runs on the Ubuntu operating system.

Deployment
  • NSX-V: Requires registration with VMware, as the NSX Manager must be registered. The NSX Manager calls for extra NSX Controllers for deployment.
  • NSX-T: Requires the ESXi hosts or transport nodes to be registered first. The NSX Manager acts as a standalone solution, and users must configure the N-VDS, including the uplink.

Routing
  • NSX-V: Uses network edge security and gateway services to isolate virtualized networks. NSX Edge is installed both as a logical distributed router and as an edge services gateway.
  • NSX-T: Routing is designed for cloud environments, multi-cloud use, and multi-tenancy use cases.

Overlay encapsulation protocol
  • NSX-V: VXLAN
  • NSX-T: GENEVE, a more advanced protocol

Logical switch replication modes
  • NSX-V: Unicast, Multicast, Hybrid
  • NSX-T: Unicast (two-tier or head replication)

Virtual switches used
  • NSX-V: vSphere Distributed Switch (VDS)
  • NSX-T: N-VDS, based on Open vSwitch (OVS) or VDS

Two-tier distributed routing
  • NSX-V: Not available
  • NSX-T: Available

ARP suppression
  • NSX-V: Available
  • NSX-T: Available

Integration for traffic inspection
  • NSX-V: Available
  • NSX-T: Not available

Configuring the IP addressing scheme for network segments
  • NSX-V: Manual
  • NSX-T: Automatic

Kernel-level distributed firewall
  • NSX-V: Available
  • NSX-T: Available

Deployment Options

The process of deployment looks quite similar for both, yet there are many differences between the NSX-V and NSX-T features. Here are some critical differences in deployment:

  • With NSX-V, there is a requirement to register with VMware. An NSX Manager needs to be registered.
  • NSX-T allows pointing the NSX-T solution to the VMware vCenter for registering the ESXi hosts or Transport Nodes.
  • NSX-V Manager is a standalone appliance that calls for extra NSX Controllers to be deployed separately.
  • NSX-T integrates the controller functionality into the NSX Manager virtual appliance, making the NSX-T Manager a combined appliance.
  • NSX-T has an extra configuration of N-VDS which should be completed. This includes the uplink.

Routing

The differences in routing between NSX-T and NSX-V are evident. NSX-T is designed for cloud and multi-cloud environments and for multi-tenancy use cases, which require support for multi-tier routing.

NSX-V features network edge security and gateway services, which can isolate virtualized networks. NSX Edge is installed as a logical distributed router. It is also installed as an edge services gateway.

Choosing between NSX-V and NSX-T

The major differences, as seen in the table above, help us understand the variables in NSX-V vs. NSX-T systems. One is closely associated with the VMware ecosystem; the other is unrestricted, not focused on any specific platform or hypervisor. To identify for whom each product is best, take into consideration how each option will be used and where it will run:

Choosing NSX-V:

  • NSX-V is recommended when a customer already has a virtualized application in the data center and wants to add network virtualization for that existing application.
  • NSX-V also suits customers who value its several tightly integrated vSphere features.
Use Cases For NSX-V:
  • Security – secure end-user, DMZ anywhere
  • Application continuity – disaster recovery, multi-data-center pooling, cross-cloud

Choosing NSX-T:

  • NSX-T is recommended when a customer wants to build modern applications on platforms such as Pivotal Cloud Foundry or OpenShift, partly due to the vSphere enrollment support (migration coordinator) it provides.
  • Multiple types of hypervisors are in use.
  • There are network interfaces to modern applications.
  • You are using multi-cloud and cloud networking applications.
  • You are using a variety of environments.
Use Cases For NSX-T:
  • Security – micro-segmentation
  • Automation – automating IT, developer cloud, multi-tenant infrastructure

Note: VMware NSX-V and NSX-T have many distinct features, totally different code bases, and cater to different use cases.

Conclusion: VMware’s NSX Options Provide a Strong Network Virtualization Platform

NSX-T and NSX-V both solve many virtualization issues, offer full feature sets, and provide an agile and secure environment. NSX-V is the proven and original software-defined solution. It is best if you need a network virtualization platform for existing applications.

NSX-T is the way of the future. It provides you with all of the necessary tools for moving your data, no matter the underlying physical network, and helps you adjust to the constant change in applications.

The choice you make depends on which NSX features meet your business needs. What do you use or prefer? Contact us for more information on NSX-T pricing and NSX-V to NSX-T migration. Keep reading our blog to learn more about different tools and how to find the best-suited solutions for your networking requirements.



Cloud-Native Application Architecture: The Future of Development?

Cloud-native application architecture allows software development and IT operations to work together in a faster, modern environment.

Cloud-native design is about how applications are built, packaged, and distributed, rather than where they are created and stored. When creating these applications, you retain complete control and have the final say in the process.

Even if you are not currently hosting your application in the cloud, this article may influence how you develop modern applications moving forward. Read on to find out what cloud-native is, how it works, and its future implications.

What is Cloud-Native?

In application terms, cloud-native means container-based environments: apps packaged as microservices. Cloud-native technologies build applications as collections of services that are packaged, deployed, and managed together on cloud infrastructure, using DevOps processes that provide uninterrupted delivery workflows. These microservices form an architectural approach designed around smaller, bundled applications.


What is a Cloud-Native Architecture?

Cloud-native architecture is built specifically to run in the cloud.

Cloud-native apps start as software packaged into containers. Containers run in a virtualized environment, isolated from the underlying host, which makes them independent and portable. You can run your design through test systems to see how it behaves, and once you've tested it, you can edit it to add or remove options.

Cloud-native development allows you to build and update applications quickly while improving quality and reducing risk. It's efficient, responsive, and scalable. These are fault-tolerant apps that can run anywhere: in public or private environments, or in hybrid clouds. You can test and build your application until it is precisely how you want it to be. For development aspects in which you are not an expert, you can easily outsource the work.

The architecture of your system can be built up from microservices. These services let you set up the smaller parts of your app individually, instead of reworking the entire app all at once. More specifically, with DevOps and containers, applications become easier to update and release: a collection of loosely coupled microservices can be upgraded piece by piece, instead of waiting for one significant release that takes more time and effort.
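
As a concrete example, a single microservice in this style can be only a few lines. The sketch below uses Python with Flask (an assumed choice of framework) to expose one small, independently deployable service that would typically be packaged into its own container:

```python
# A minimal microservice: one small service, independently built and deployed.
# Requires: pip install flask
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Orchestrators typically poll an endpoint like this one.
    return jsonify(status="ok")

@app.route("/orders/<order_id>")
def get_order(order_id: str):
    # In a real system this would query a datastore owned by this service.
    return jsonify(order_id=order_id, status="pending")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Because the service owns one narrow responsibility, it can be rebuilt, tested, and redeployed without touching the rest of the application.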

Lastly, you'll want to make sure your application has access to the elasticity of the cloud. This elasticity allows your developers to push code to production much faster than in traditional server-based models. You can move and scale your app's resources at any time.

What Are The Characteristics of Cloud Native Applications?

Now that you know the basics about cloud-native apps, here are a few design principles to discuss with your developer during the development stages:

Develop With The Best Languages And Frameworks

Each service of a cloud-native application can be built using the best language and framework for the job. Make sure you can choose which language and framework suit your apps best.

Build With APIs For Collaboration & Interaction

Find out whether you'll be using fine-grained, API-driven services for interaction and collaboration between your apps, with different protocols for different parts of the app. For example, Google's open-source remote procedure call framework, gRPC, is commonly used for communication between internal services.

Agile DevOps & Automation

Confirm that your app can be fully automated; automation is what makes large applications manageable.

Consider how your application is defined through policies such as CPU and storage quotas and network rules. The difference between you and a traditional IT department when it comes to these policies is that, as the owner, you have access to everything; the department doesn't.

Managing your app through DevOps gives it its own independent lifecycle. Look at how different pipelines can work together to deliver and manage your application.

Building Cloud-Native Applications

Application development will differ from developer to developer, depending on their skills and capabilities. Common to most cloud-native apps are the following characteristics, which are added in during the development process.

  • Updates – Your app will always be available and up to date.
  • Multitenancy – The app will work in a virtual space, sharing resources with other applications.
  • Downtime – If a cloud provider has an outage, another data center can pick up where it left off.
  • Automation – Speed and agility rely on audited, reliable, and proven processes that are repeated as needed.
  • Languages – Cloud-native apps are typically written in web-centric languages such as HTML, CSS, JavaScript, Node.js, Java, .NET, PHP, Ruby, Python, and Go, rather than in lower-level languages like C/C++.
  • Statelessness – Cloud-native apps are loosely coupled and not tied to specific infrastructure. State is stored in a database or another external entity, so instances stay easy to find and replace (see the sketch after this list).
  • Designed modularly – Microservices run the functions of your app; they can be shut off when not needed or updated one section at a time, rather than shutting down the entire app.
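
To make the statelessness point concrete, here is a minimal Python sketch in which the service instance keeps no session data of its own; everything lives in an external store. A plain dictionary stands in for a real database or cache such as Redis, which is an assumption for illustration only.

```python
# Stateless service sketch: any instance can serve any request because
# session state lives in an external store, not in the process.
external_store = {}  # stand-in for a real database or cache

def handle_request(session_id: str, item: str) -> list:
    """Add an item to a user's cart; state is read from and written back externally."""
    cart = external_store.get(session_id, [])
    cart.append(item)
    external_store[session_id] = cart
    return cart

# Two calls, conceptually served by two different instances, see the same state:
handle_request("user-42", "book")
print(handle_request("user-42", "pen"))  # ['book', 'pen']
```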

The Future of Cloud-Native Technologies

Cloud-native has already proven, through its efficiencies, that it is the future of software development. By 2025, an estimated 80% of enterprise apps will be cloud-native or in the process of becoming so. IT departments are already switching to cloud-native development to save money and keep their designs secure off-site, safe from competitors.

Adopting now will save you the hassle of doing it later, when it will be more expensive.

By switching over to cloud-native apps, you'll be able to see first-hand what they have to offer and benefit from running them for years to come. Now that you know how to take advantage of new types of infrastructure, you can continue to improve by giving your app developers the tools they need. Go cloud-native and get the benefits of flexible, scalable, reusable apps that use the best container and cloud technology available.

What to Look for When Outsourcing Cloud Native Apps

During the planning process, many companies decide to hire a freelancer to help develop and execute a cloud-native strategy. It pays off to have a developer experienced in speeding up application development and organizing compute resources across different environments. It can save you time, money, and a lot of frustration.

When looking for an application developer, remember to take these things into consideration:

  • Trust – Ensure that they will keep your information safe and secure
  • Quality – Have they produced and provided high quality services to other businesses?
  • Price – You don't want to overspend when creating your own apps. Compare prices and services to keep costs down

Cloud-native development helps your company derive more value from hybrid cloud architecture. It's important to partner with a company or contractor that has experience and a great track record.

Go cloud-native and partner with PhoenixNAP Global IT Services. Contact us today for more information.



What is a Security Operations Center (SOC)? Best Practices, Benefits, & Framework

In this article you will learn:

  • Understand what a Security Operations Center is and how active detection and response prevent data breaches.
  • Six pillars of modern security operations you can’t afford to overlook.
  • Forward-thinking SOC best practices that keep an eye on the future of cybersecurity, including an overview and comparison of current framework models.
  • Discover why your organization needs to implement a security program based on advanced threat intelligence.
  • In-house or outsource to a managed security provider? We help you decide.


The average total cost of a data breach in 2018 was $3.86 million. As businesses grow increasingly reliant on technology, cybersecurity is becoming a more critical concern.

Cloud security can be a challenge, particularly for small to medium-sized businesses that don’t have a dedicated security team on-staff. The good news is that there is a viable option available for companies looking for a better way to manage security risks – security operations centers (SOCs).

In this article, we'll take a closer look at what SOCs are and the benefits they offer. We will also look at how businesses of all sizes can take advantage of SOCs for data protection.

 


What is a Security Operations Center?

A security operations center is a team of cybersecurity professionals dedicated to preventing data breaches and other cybersecurity threats. The goal of a SOC is to monitor, detect, investigate, and respond to all types of cyber threats around the clock.

Team members make use of a wide range of technological solutions and processes. These include security information and event management (SIEM) systems, firewalls, breach detection, intrusion detection, and probes. SOC teams use these tools to continuously scan the network for threats and weaknesses and to address them before they turn into severe issues.
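
A toy version of what such monitoring tools do can be sketched in a few lines of Python: scan authentication logs and flag source addresses with repeated failures. The log format and threshold below are assumptions for illustration, not any specific SIEM's behavior.

```python
# Flag IP addresses with repeated failed logins (toy SIEM-style rule).
import re
from collections import Counter

THRESHOLD = 5
PATTERN = re.compile(r"FAILED LOGIN .* from (\d+\.\d+\.\d+\.\d+)")

log_lines = [
    "2019-03-01T10:00:01 FAILED LOGIN user=root from 203.0.113.7",
    "2019-03-01T10:00:02 FAILED LOGIN user=admin from 203.0.113.7",
    # ... in practice, stream these from syslog or a log shipper
]

failures = Counter(
    m.group(1) for line in log_lines if (m := PATTERN.search(line))
)

for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip}")
```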

It may help to think of a SOC as an IT department that is focused solely on security as opposed to network maintenance and other IT tasks.


6 Pillars of Modern SOC Operations

Companies can choose to build a security operations center in-house or outsource to a managed security service provider (MSSP) that offers SOC services. For small to medium-sized businesses that lack the resources to develop their own detection and response team, outsourcing to a SOC service provider is often the most cost-effective option.

Through the six pillars of security operations, you can develop a comprehensive approach to cybersecurity.

    • Establishing Asset Awareness

      The first objective is asset discovery. The tools, technologies, hardware, and software that make up these assets may differ from company to company, and it is vital for the team to develop a thorough awareness of the assets that they have available for identifying and preventing security issues.

    • Preventive Security Monitoring

      When it comes to cybersecurity, prevention is always going to be more effective than reaction. Rather than responding to threats as they happen, a SOC will work to monitor a network around-the-clock. By doing so, they can detect malicious activities and prevent them before they can cause any severe damage.

    • Keeping Records of Activity and Communications

      In the event of a security incident, SOC analysts need to be able to retrace activity and communications on a network to find out what went wrong. To do this, the team is tasked with detailed log management of all the activity and communications that take place on a network.


  • Ranking Security Alerts

    When security incidents do occur, the incident response team works to triage their severity. This enables a SOC to prioritize its focus on preventing and responding to the alerts that are especially serious or dangerous to the business (a minimal triage sketch follows this list).

  • Modifying Defenses

    Effective cybersecurity is a process of continuous improvement. To keep up with the ever-changing landscape of cyber threats, a security operations center works to continually adapt and modify a network’s defenses on an ongoing, as-needed basis.

  • Maintaining Compliance

    In 2019, there are more compliance regulations and mandatory protective measures regarding cybersecurity than ever before. In addition to threat management, a security operations center must also protect the business from legal trouble by ensuring that it always complies with the latest security regulations.
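
The ranking pillar above lends itself to a simple illustration: score each alert from a few weighted factors and work the queue from the top. The factors and weights in this Python sketch are assumptions; real SOCs tune scoring to their own environment.

```python
# Toy alert triage: rank alerts by a weighted severity score.
SEVERITY = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def score(alert: dict) -> int:
    s = SEVERITY[alert["severity"]]
    if alert.get("asset_critical"):   # alert touches a crown-jewel asset
        s += 5
    if alert.get("active_exploit"):   # exploitation observed in the wild
        s += 3
    return s

alerts = [
    {"id": 1, "severity": "medium", "asset_critical": True},
    {"id": 2, "severity": "critical", "active_exploit": True},
    {"id": 3, "severity": "low"},
]

# Work the queue from the highest score down:
for alert in sorted(alerts, key=score, reverse=True):
    print(alert["id"], score(alert))
```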

Security Operations Center Best Practices

As you go about building a SOC for your organization, it is essential to keep an eye on what the future of cybersecurity holds in store. Doing so allows you to develop practices that will secure the future.

SOC Best Practices Include:

Widening the Focus of Information Security
Cloud computing has given rise to a wide range of new cloud-based processes. It has also dramatically expanded the virtual infrastructure of most organizations. At the same time, other technological advancements such as the internet of things have become more prevalent. This means that organizations are more connected to the cloud than ever before. However, it also means that they are more exposed to threats than ever before. As you go about building a SOC, it is crucial to widen the scope of cybersecurity to continually secure new processes and technologies as they come into use.

Expanding Data Intake
When it comes to cybersecurity, collecting data can often prove incredibly valuable. Gathering data on security incidents enables a security operations center to put those incidents into the proper context. It also allows them to identify the source of the problem better. Moving forward, an increased focus on collecting more data and organizing it in a meaningful way will be critical for SOCs.

Improved Data Analysis
Collecting more data is only valuable if you can thoroughly analyze it and draw conclusions from it. Therefore, an essential SOC best practice to implement is a more in-depth and more comprehensive analysis of the data that you have available. Focusing on better data security analysis will empower your SOC team to make more informed decisions regarding the security of your network.

Take Advantage of Security Automation
Cybersecurity is becoming increasingly automated. Using DevSecOps best practices to automate tedious and time-consuming security tasks frees your team to focus its time and energy on more critical work. As cybersecurity automation continues to advance, organizations need to focus on building SOCs that are designed to take advantage of the benefits automation offers.
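
One tedious task that is easy to automate is checking TLS certificate expiry across your public endpoints. The following Python sketch uses only the standard library; the host list is a placeholder assumption to replace with your own inventory.

```python
# Automated check: days until each host's TLS certificate expires.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

for host in ["example.com"]:  # replace with your own endpoint inventory
    print(host, days_until_expiry(host), "days until certificate expiry")
```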

Security Operations Center Roles and Responsibilities

A security operations center is made up of a number of individual team members. Each team member has unique duties. The specific team members that comprise the incident response team may vary. Common positions – along with their roles and responsibilities – that you will find in a security team include:

  • SOC Manager

    The manager is the head of the team. They are responsible for managing the team, setting budgets and agendas, and reporting to executive managers within the organization.

  • Security Analyst

    A security analyst is responsible for organizing and interpreting security data from SOC reports and audits. They also provide real-time risk management, vulnerability assessment, and security intelligence, offering insight into the state of the organization's preparedness.

  • Forensic Investigator

    In the event of an incident, the forensic investigator is responsible for analyzing the incident to collect data, evidence, and behavior analytics.

  • Incident Responder

    Incident responders are the first to be notified when security alerts happen. They are then responsible for performing an initial evaluation and threat assessment of the alert.

  • Compliance Auditor

    The compliance auditor is responsible for ensuring that all processes carried out by the team are done so in a way that complies with regulatory standards.


SOC Organizational Models

Not all SOCs are structured under the same organizational model. Security operations center processes and procedures vary based on many factors, including your unique security needs.

Organizational models of security operations centers include:

  • Internal SOC
    An internal SOC is an in-house team comprised of security and IT professionals who work within the organization. Internal team members can be spread throughout other departments. They can also comprise their own department dedicated to security.
  • Internal Virtual SOC
    An internal virtual SOC comprises part-time security professionals who work remotely. Team members are primarily responsible for reacting to security threats when they receive an alert.
  • Co-Managed SOC
    A co-managed SOC is a team of security professionals who work alongside a third-party cybersecurity service provider. This organizational model essentially combines a semi-dedicated in-house team with a third-party SOC service provider for a co-managed approach to cybersecurity.
  • Command SOC
    Command SOCs are responsible for overseeing and coordinating other SOCs within the organization. They are typically only found in organizations large enough to have multiple in-house SOCs.
  • Fusion SOC
    A fusion SOC is designed to oversee the efforts of the organization’s larger IT team. Their objective is to guide and assist the IT team on matters of security.
  • Outsourced Virtual SOC
    An outsourced virtual SOC is made up of team members that work remotely. Rather than working directly for the organization, though, an outsourced virtual SOC is a third-party service. Outsourced virtual SOCs provide security services to organizations that do not have an in-house security operations center team on-staff.

Take Advantage of the Benefits Offered by a SOC

Faced with ever-changing security threats, a security operations center is one of the most beneficial resources organizations have available. Having a team of dedicated information security professionals monitoring your network, detecting security threats, and working to bolster your defenses can go a long way toward keeping your sensitive data secure.

If you would like to learn more about the benefits offered by a security operations center team and the options that are available for your organization, we invite you to contact us today.



HIPAA Compliant Cloud Storage Solutions: Maintain Healthcare Compliance

Hospitals, clinics, and other health organizations have had a bumpy road towards cloud adoption over the past few years. The implied security risks of using the public cloud or working with a third-party service provider considerably delayed cloud adoption in the healthcare industry.

Even today, when 84% of healthcare organizations use cloud services, the question of choosing the right HIPAA compliant cloud provider can be a headache.

All healthcare providers whose clients' data is stored in the U.S. are subject to a set of regulations known as HIPAA.

Today, any organization that handles confidential patient data needs to abide by HIPAA storage requirements.

What is HIPAA Compliance?

HIPAA standards protect health data. Any vendor working with a healthcare organization or business handling health files must abide by the HIPAA privacy rules, as must many ancillary industries whose members have access to medical and patient data. This is where HIPAA compliant cloud storage plays a significant role.

The U.S. Department of Health and Human Services (HHS) "issued the Privacy Rule to implement the requirement of the Health Insurance Portability and Accountability Act (HIPAA) of 1996." The Privacy Rule addresses patients' "electronic protected health information" and how organizations, or "HIPAA covered entities," subject to the Privacy Rule must comply.

Most healthcare institutions use some form of electronic devices to provide medical care. This means that information no longer resides on a paper chart, but on a computer or in the cloud. Unlike general businesses or most commercial entities, healthcare institutions are legally obliged to employ the most reliable data backup practices.

So, how does this affect their choice of a cloud provider?

When planning their move to cloud computing, health care institutions need to ensure their vendor meets specific security criteria.

These criteria translate into requirements and thresholds that a company must meet and maintain to become HIPAA-ready. These come down to a set of certifications, SOC auditing and reporting, encryption levels, and physical security features.

HIPAA cloud storage solutions should work to make becoming compliant simple and straightforward. This way, healthcare organizations have one less thing to worry about and can focus on improving their critical processes.


HIPAA Cloud Storage and Data Backup Requirements

A cloud service provider doing business with a company operating under the HIPAA-HITECH act rules is considered a business associate. As such, it must show that it operates within cloud compliance standards and follows all relevant regulations. Although the vendor does not directly handle patient information, it does receive, manage, and store Protected Health Information (PHI). This fact alone makes it responsible for protecting that information according to HIPAA-HITECH act guidelines.

Being HIPAA compliant means implementing all of the rules and regulations that the Act proposes. Any vendor offering services that are subject to the act must provide documentation as proof of its conformity. This documentation needs to be sent not only to its clients but also to the Office for Civil Rights (OCR). The OCR is a sub-agency of the U.S. Department of Health and Human Services that promotes equal access to healthcare and human services programs.

Healthcare industry organizations looking to work with a HIPAA Compliant cloud storage provider should request proof of compliance to protect themselves. If the provider follows all standards, it should have no qualms about sharing the appropriate documentation with you.

HIPAA requirements for cloud hosting organizations are the same as the requirements for business associates. They fall into three distinct categories: administrative, physical, and technical safeguards.

  • Administrative Safeguards: These types of safeguards are transparent policies that outline how the business will comply from an operational standpoint. The operations can include managing security risk assessments, appropriate procedures, disaster and emergency response, and managing passwords.
  • Physical Safeguards: Physical safeguards are usually systems that are in place to protect customer data. They might include proper storage, data backup, and appropriate disposal of media at a data center. Important security precautions for facilities where hardware or software storage devices reside are also a part of this category.
  • Technical Safeguards: This group of safeguards refers to technical features implemented to minimize data risk and maximize protection. Requiring unique login information, auto-logoff policies, and authentication for PHI access are just some of the technical safeguards that should be in place (a minimal audit-logging sketch follows this list).
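
As a small illustration of a technical safeguard, the Python sketch below wraps any function that touches PHI with an audit log entry recording who accessed what and when. The function and field names are hypothetical, chosen only for the example.

```python
# Technical-safeguard sketch: audit-log every access to PHI.
import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi.audit")

def audited(func):
    @functools.wraps(func)
    def wrapper(user_id: str, record_id: str, *args, **kwargs):
        audit_log.info(
            "user=%s accessed record=%s via %s at %s",
            user_id, record_id, func.__name__,
            datetime.now(timezone.utc).isoformat(),
        )
        return func(user_id, record_id, *args, **kwargs)
    return wrapper

@audited
def read_patient_record(user_id: str, record_id: str) -> dict:
    return {"record_id": record_id, "diagnosis": "..."}  # placeholder lookup

read_patient_record("dr-jones", "patient-001")
```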


What Makes a HIPAA Certified Cloud Provider Compliant?

Providing HIPAA compliant file storage hardware or software is not as simple as flipping a switch. It takes a tremendous amount of time and effort for a company to become compliant.

The critical element to look for in a HIPAA certified cloud storage provider is its willingness to make a Business Associate Agreement. Known as a BAA, this agreement is completed between two parties planning to transmit, process, or receive PHI. Its primary purpose is to protect both parties from any legal repercussions resulting in the misuse of protected health information.

A BAA must not add to, subtract from, or contradict the overall standards of HIPAA. However, if both parties agree, supplementing specific terminology is acceptable. There are also some core terms that make up the groundwork for a compliant business associate agreement and must remain for the contract to be considered legally binding.

The level of encryption enabled by the cloud provider needs proper attention. The company should encrypt files not only in transit but also at rest. The Advanced Encryption Standard (AES) is the minimum level of encryption it should use for file storage and sharing. AES succeeded the Data Encryption Standard (DES) and was standardized by the National Institute of Standards and Technology (NIST) in 2001. It is a modern encryption algorithm that offers strong defense against a range of security incidents.
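
For reference, authenticated AES encryption of a record at rest looks roughly like this in Python, using the `cryptography` package's AES-GCM primitive. The key handling here is deliberately simplified and is an assumption for illustration, not HIPAA guidance.

```python
# AES-256-GCM sketch: encrypt a PHI record at rest with authentication.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # keep in a key management service
nonce = os.urandom(12)                     # unique per message, never reused
aesgcm = AESGCM(key)

record = b'{"patient_id": "001", "diagnosis": "..."}'
ciphertext = aesgcm.encrypt(nonce, record, b"patient-001")

# Decryption fails loudly if the ciphertext or metadata was tampered with:
assert aesgcm.decrypt(nonce, ciphertext, b"patient-001") == record
```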


Selecting a Compliant Cloud Storage Vendor

When choosing a HIPAA compliant provider, look for HIPAA web hosting that meets the measures outlined in the previous section. Make sure you ask about their data storage security practices to learn how secure your PHI will be.

Does the potential vendor offer a service level agreement?

An SLA contract indicates guaranteed response times to threats, typically within a twenty-four-hour window. As a company that transmits PHI, you need to know how quickly the provider can notify you in the event of an incident. The faster you receive a breach notification, the more efficiently you can respond.

Don’t forget that the storage of electronic cloud-based medical records should be in a secure data center.

What are the security measures in place in case of an incident? How is access to the facility determined? Ask for a detailed outline of how they implement and enforce physical security. Check how they respond in the event of a data breach. Make sure you get all the relevant details before you bring your data to risk.

Your selected vendor should also have a Disaster Recovery and Continuity Plan in place.

A continuity plan will anticipate loss due to natural disasters, data breaches, and other unforeseen incidents. It will also provide the necessary processes and procedures if or when such events occur. Concerning data loss prevention best practices, it is also essential to determine how often the proposed method undergoes rigorous testing.

Healthcare Medical Records Security – How can I be Sure?

Cloud providers that take compliance seriously will ensure their certifications are current. There are several ways to check if they follow standards and relevant regulations.

One way is to audit your potential provider using an independent party. Auditing will bring any possible risks to your attention and reveal the vendor's security tactics. Providers of cloud storage for medical records must regularly audit their systems and environments for security threats to remain compliant. The term 'regularly' is not defined by the act, so it is essential to request documentation and information on at least a quarterly basis. You should also ensure you have constant access to reports and documentation detailing the most recent audit.

Another way to determine whether the company is compliant is to assess the qualifications of its employees. All staff need to be educated on the most current standards and familiarized with specific safeguards. Only with these in place can organizations achieve compliance.

Ask your potential vendor tough questions. Anyone with access to PHI needs appropriate training on secure data transmission methods. Training needs to include the ability to securely encrypt patient information no matter where it is stored.

A HIPAA compliant company will not ask you for a backdoor to access your data or permission to bypass your access management protocols. Such vendors recognize the risk of requiring additional authentication or access points. Compromising access to authentication protocols and password requirements is a serious violation and should never happen.


Cloud Backup & Storage Frequently Asked Questions

Ask potential cloud vendors which method they use to evaluate your HIPAA compliance.

Is a HIPAA policy template available for use? Does the provider offer guidance and feedback on compliance? How are they ensuring that you are up to date and aware of security rules and regulations? Do they offer HIPAA compliant email?

Does the company have full-time employees on-premise?

Having a presence on site and available around the clock is a mechanism to ensure advanced security. An available representative makes PHI security more reliable and guarantees a quick response if needed. It also gives you peace of mind knowing that the company in charge of your data protection is thoroughly versed in the required standards.

The right provider should also be quick to adapt to the changes and inform you of anything that directly affects your PHI or your access to it.

Data deletion is a crucial component in choosing the appropriate HIPAA business associate. How long is the information kept before being purged? How is data leakage prevented when servers are taken out of commission or erased? Is the data provided to you before deletion? The act offers no guidelines concerning the required retention length, so it is an agreement you and your provider must reach together.

Determine how well your potential provider is versed in HIPAA regulations. Cloud companies often fail to follow the latest regulatory changes, so look for one with a consistent dedication to compliance.

Shop around. Do not be content with the first quote.

Many companies tout their HIPAA security, only for customers to discover that they fall short of the mark. Do your research, ask questions, and determine which vendor best suits your needs.

HIPAA-Compliant Cloud Storage is Critical

When it comes to protecting medical records in the cloud, phoenixNAP will support your efforts with the highest service quality, security, and dependability.

We provide a selection of data centers which offer state-of-the-art protection for your medical files. With scalable cloud solutions, a 100% uptime guarantee, and unmatched disaster recovery, you can rest assured that your infrastructure is compliant.

HIPAA certifications can be confusing, complicated, and stressful.

You need to be able to trust your cloud provider to keep your files safe. phoenixNAP Global IT Services will give you the freedom to focus your attention on other areas of your business while ensuring the protection of your covered entities and business associates.



Cloud Security Tips to Reduce Security Risks, Threats, & Vulnerabilities

Do you assume that your data in the cloud is backed up and safe from threats? Not so fast.

With a record number of cybersecurity attacks taking place in 2018, it is clear that all data is under threat.

Everyone always thinks “It cannot happen to me.” The reality is, no network is 100% safe from hackers.

According to Kaspersky Lab, ransomware attacks rose by over 250% in 2018 and continue to trend in a frightening direction. Following the advice presented here is the best insurance policy against the crippling effects of a significant data loss in the cloud.

How do you start securing your data in the cloud? What are the best practices to keep your data protected in the cloud?  How safe is cloud computing?

To help you jump-start your security strategy, we invited experts to share their advice on Cloud Security Risks and Threats.

Key Takeaways From Our Experts on Cloud Protection & Security Threats

  • Accept that it may only be a matter of time before someone breaches your defenses; plan for it.
  • Do not assume your data in the cloud is backed up.
  • Enable two-factor authentication and IP-location restrictions for access to cloud applications (see the TOTP sketch after this list).
  • Leverage encryption. Encrypt data at rest.
  • The human element is among the biggest threats to your security.
  • Implement a robust change control process, with weekly patch management cycle.
  • Maintain offline copies of your data in the event your cloud data is destroyed or held for ransom.
  • Contract with a 24×7 security monitoring service.
  • Have a security incident response plan.
  • Utilize advanced firewall technology, including web application firewalls (WAFs).
  • Take advantage of application services, layering, and micro-segmentation.
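
To show how little machinery two-factor authentication requires, here is a minimal RFC 6238 TOTP generator in pure standard-library Python. It is a sketch for understanding the mechanism, using a well-known demo Base32 secret, not a replacement for a vetted authenticator library.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app would show
```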

1. Maintain Availability In The Cloud

Dustin Albertson, Senior Cloud Solutions Architect at Veeam

When most people think about the topic of cloud-based security, they tend to think about Networking, Firewalls, Endpoint security, etc. Amazon defines cloud security as:

Security in the cloud is much like security in your on-premises data centers – only without the costs of maintaining facilities and hardware. In the cloud, you do not have to manage physical servers or storage devices. Instead, you use software-based security tools to monitor and protect the flow of information into and out of your cloud resources.

But one often overlooked risk is maintaining availability. What I mean by that is more than just geo-redundancy or hardware redundancy; I am referring to making sure that your data and applications are covered. The cloud is not some magical place where all your worries disappear; it is often a place where your fears can multiply more easily and cheaply. Having a robust data protection strategy is key. Veeam has long preached the "3-2-1 Rule," coined by Peter Krogh.

The rule states that you should have three copies of your data, storing them on two different media, and keeping one offsite. The one offsite is usually in the “cloud,” but what about when you are already in the cloud?

This is where I see most cloud issues arise: when people are already in the cloud, they tend to store their data in that same cloud. This is why it is important to have a detailed strategy when moving to the cloud. Leverage tools like Veeam agents to protect cloud workloads and Cloud Connect to send backups offsite, maintaining availability outside of the same datacenter or cloud. Don’t assume that it is the provider’s job to protect your data, because it is not.
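As a minimal sketch of that offsite step, the snippet below copies a backup to two independent S3-compatible destinations. This is a generic illustration rather than Veeam’s tooling, and the bucket names and endpoint URL are hypothetical placeholders.

    # Sketch: keep one backup copy offsite, per the 3-2-1 rule.
    # Bucket names and the offsite endpoint are hypothetical.
    import boto3

    def store_backup_offsite(backup_path: str, key: str) -> None:
        # Copy 1 lives with the primary cloud provider.
        primary = boto3.client("s3")
        primary.upload_file(backup_path, "primary-backups", key)

        # Copy 2 goes to a second, independent S3-compatible provider,
        # so losing the primary cloud does not lose the backup.
        # (A real setup would pass separate credentials for this client.)
        offsite = boto3.client(
            "s3",
            endpoint_url="https://s3.offsite-provider.example",  # hypothetical
        )
        offsite.upload_file(backup_path, "offsite-backups", key)

    store_backup_offsite("/var/backups/db.tar.gz", "db/2019-01-01.tar.gz")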

2. Cloud Migration is Outpacing The Evolution of Security Controls

Salvatore Stolfo, CTO of Allure Security

According to a new survey conducted by ESG, 75% of organizations said that at least 20% of their sensitive data stored in public clouds is insufficiently secured. Also, 81% of those surveyed believe that on-premise data security is more mature than public cloud data security.

Yet, businesses are migrating to the cloud faster than ever to maximize organizational benefits: an estimated 83% of business workloads will be in the cloud by 2020, according to LogicMonitor’s Cloud Vision 2020 report. What we have is an increasingly urgent situation in which organizations are migrating their sensitive data to the cloud for productivity purposes at a faster rate than security controls are evolving to protect that data.

Companies must look at solutions that control access to data within cloud shares based on the level of permission that user has, but they must also have the means to be alerted when that data is being accessed in unusual or suspicious ways, even by what appears to be a trusted user.

Remember that many hacks and insider leaks come from bad actors with stolen, legitimate credentials that allow them to move freely around a cloud share in search of valuable data to steal. Deception documents, called decoys, can be an excellent tool for detecting this. Decoys can alert security teams to unusual behaviors in the early stages of a cloud security breach, and can even fool a would-be cyber thief into thinking they have stolen something of value when, in reality, it’s a highly convincing fake document. Then there is the question of having control over documents even after they have been lifted out of the cloud share.

This is where many security solutions start to break down. Once a file has been downloaded from a cloud repository, how can you track where it travels and who looks at it? There must be more investment in technologies such as geofencing and telemetry to solve this.

3. Minimize Cloud Computing Threats and Vulnerabilities With a Security Plan

Nic O’Donovan, Solutions Architect and Cloud Specialist with VMware

The Hybrid cloud continues to grow in popularity with the enterprise – mainly as the speed of deployment, scalability, and cost savings become more attractive to business. We continue to see infrastructure rapidly evolving into the cloud, which means security must develop at a similar pace. It is essential for the enterprise to work with a Cloud Service Provider who has a reliable approach to security in the cloud.

This means the partnership with your Cloud Provider is becoming increasingly important as you work together to understand and implement a security plan to keep your data secure.

Security controls like multi-factor authentication and data encryption, along with the level of compliance you require, are all areas to focus on while building your security plan.

4. Never Stop Learning About Your Greatest Vulnerabilities

Isaac Kohen, Founder and CEO of Teramind

More and more companies are falling victim to breaches in the cloud, and it often comes down to cloud misconfiguration and employee negligence.

1. The greatest threats to data security are your employees. Negligent or malicious, employees are one of the top causes of malware infections and data loss. Malware attacks and phishing emails dominate the news because they are ‘easy’ ways for hackers to access data. Through social engineering, malicious criminals can ‘trick’ employees into handing over passwords and credentials to critical business and enterprise data systems. Ways to prevent this: an effective employee training program and employee monitoring that actively probes the system.

2. Never stop learning. In an industry that is continuously changing and adapting, it is important to stay updated on the latest trends and vulnerabilities. For example, with the Internet of Things (IoT), we are only starting to see the ‘tip of the iceberg’ when it comes to protecting data across growing numbers of wi-fi connections and online data storage services. There is more to come in this story, and it will have a direct impact on small businesses in the future.

3. Research and understand how the storage works, then educate. We’ve heard the stories: when data is exposed through the cloud, it is often due to misconfiguration of cloud settings. Employees need to understand the security nature of the application and that settings can easily be tampered with and switched ‘on,’ exposing data externally. Build security awareness through training programs.

4. Limit your access points. A common cause of cloud exposure is an employee with access mistakenly enabling global permissions, exposing the data to an open connection. To mitigate this, understand who and what has access to the data cloud (all access points) and monitor those connections thoroughly.

5. Monitor the systems, progressively and thoroughly. For long-term protection of data in the cloud, use a user-analytics and monitoring platform to detect breaches faster. Monitoring and user analytics streamline data and create a standard ‘profile’ of each user, employee, and computer. These analytics are integrated with, and follow, your most crucial data deposits, which you as the administrator designate in the detection software. When specific cloud data is tampered with, moved, or breached, the system will “ping” an administrator immediately, indicating a change in character.

5. Consider Hybrid Solutions

Michael V.N. Hall, Director of Operations for Turbot

There are several vital things to understand about security in the cloud:

1. Passwords are power – 80% of all password breaches could have been prevented by multi-factor authentication: by verifying your identity via a text message to your phone or an email to your account, you can be alerted when someone is trying to access your details.

One of the biggest culprits at the moment is weakened credentials: passwords, passkeys, and passphrases stolen through phishing scams, keylogging, and brute-force attacks.

Passphrases are the new passwords. Random, easy-to-remember passphrases are much better than passwords, as they tend to be longer and more complicated.

MyDonkeysEatCheese47 is a complicated passphrase and, unless you’re a donkey owner or a cheese-maker, unrelated to you. Remember to make use of upper- and lowercase letters as well as the full range of punctuation.
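As a toy illustration of that advice, a passphrase generator can assemble random dictionary words with a cryptographically secure random source; a real implementation should draw from a large wordlist (for example, the EFF diceware list) rather than the tiny sample below.

    # Sketch: generate a random passphrase from a wordlist.
    # The wordlist here is deliberately tiny and illustrative only.
    import secrets

    WORDS = ["donkey", "cheese", "harbor", "maple",
             "violet", "quartz", "tiger", "pillow"]

    def make_passphrase(n_words: int = 4) -> str:
        # secrets.choice draws from a cryptographically secure source.
        words = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
        return "".join(words) + str(secrets.randbelow(100))

    print(make_passphrase())  # e.g., "MapleTigerQuartzViolet47"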

2. Keep in touch with your hosting provider. Choose the right hosting provider – a reputable company with high-security standards in place. Communicate with them regularly as frequent interaction allows you to keep abreast of any changes or developing issues.

3. Consider a hybrid solution. Hybrid solutions allow for secure, static systems to store critical data in-house while at the same time opening up lower priority data to the greater versatility of the cloud.

6. Learn How Cloud Security Systems Work

Tom DeSot, CIO of Digital Defense, Inc.

Businesses need to evaluate cloud computing security risks and benefits, and educate themselves on what it means to move into the cloud, before taking that big leap away from running systems in their own datacenter.

All too often I have seen a business migrate to the cloud without a plan or any knowledge about what it means to them and the security of their systems.  They need to recognize that their software will be “living” on shared systems with other customers so if there is a breach of another customer’s platform, it may be possible for the attacker to compromise their system as well.

Likewise, cloud customers need to understand where their data will be stored: whether it will reside only in the US, or whether the provider replicates it to other systems on different continents. This may cause a real issue if the information is something sensitive like PII or information protected under HIPAA or another regulatory statute. Lastly, the cloud customer needs to pay close attention to the Service Level Agreements (SLAs) the cloud provider adheres to and ensure that they mirror its own.

Moving to the cloud is a great way to free up computing resources and ensure uptime, but I always advise my clients to make a move in small steps so that they have time to gain an appreciation for what it means to be “in the cloud.”

7. Do Your Due Diligence In Securing the Cloud

Ken Stasiak, CEO of SecureState

Understand the type of data that you are putting into the cloud and the mandated security requirements around that data.

Once a business has an idea of the type of data they are looking to store in the cloud, they should have a firm understanding of the level of due diligence that is required when assessing different cloud providers. For example, if you are choosing a cloud service provider to host your Protected Health Information (PHI), you should require an assessment of security standards and HIPAA compliance before moving any data into the cloud.

Some good questions to ask when evaluating whether a cloud service provider is a fit for an organization concerned with securing that data include: Do you perform regular SOC audits and assessments? How do you protect against malicious activity? Do you conduct background checks on all employees? What types of systems do you have in place for employee monitoring, access determination, and audit trails?

8. Set up Access Controls and Security Permissions

Michael R. Durante, President of Tie National, LLC.

While the cloud is a growing force in computing for its flexibility for scaling to meet the needs of a business and to increase collaboration across locations, it also raises security concerns with its potential for exposing vulnerabilities relatively out of your control.

For example, BYOD can be a challenge to secure if users are not regularly applying security patches and updates. The number one tip I would offer is to make the best use of available access controls.

Businesses need to utilize access controls to limit security permissions to only the actions related to each employee’s job functions. By limiting access, businesses ensure critical files are available only to the staff who need them, reducing the chances of their exposure to the wrong parties. This control also makes it easier to revoke access rights immediately upon termination of employment, safeguarding sensitive content no matter where the former employee attempts to access it from.
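A minimal sketch of that kind of control is a role-to-permission map consulted before every action; the role and permission names here are hypothetical.

    # Sketch: role-based access control tied to job functions.
    # Roles and permissions below are hypothetical examples.
    ROLE_PERMISSIONS = {
        "accounting": {"read:invoices", "write:invoices"},
        "support":    {"read:tickets", "write:tickets"},
        "auditor":    {"read:invoices", "read:tickets"},
    }

    def is_allowed(role: str, action: str) -> bool:
        # Unknown roles (e.g., a terminated employee removed from
        # the map) get no access at all.
        return action in ROLE_PERMISSIONS.get(role, set())

    assert is_allowed("auditor", "read:invoices")
    assert not is_allowed("support", "read:invoices")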

9. Understand the Pedigree and Processes of the Supplier or Vendor

Paul Evans, CEO of Redstor

The use of cloud technologies has allowed businesses of all sizes to drive performance improvements and gain efficiency with more remote working, higher availability and more flexibility.

However, with an increasing number of disparate systems deployed and so many cloud suppliers and software to choose from, retaining control over data security can become challenging. When looking to implement a cloud service, it is essential to thoroughly understand the pedigree and processes of the supplier/vendor who will provide the service. Industry standard security certifications are a great place to start. Suppliers who have an ISO 27001 certification have proven that they have met international information security management standards and should be held in higher regard than those without.

Gaining a full understanding of where your data will reside geographically, who will have access to it, and whether it will be encrypted is key to being able to protect it. It is also important to know what the supplier’s processes are in the event of a data breach or loss, or if there is downtime. Acceptable downtime should be set out in contracted Service Level Agreements (SLAs), which should be financially backed to provide reassurance.

For organizations looking to utilize cloud platforms, there are cloud security threats to be aware of: Who will have access to the data? Where is the data stored? Is my data encrypted? For the most part, cloud platforms can answer these questions and offer high levels of security. Organizations utilizing the cloud need to ensure they are aware of the data protection laws and regulations that affect their data and gain an accurate understanding of their contractual agreements with cloud providers. How is data protected? Many regulations and industry standards give guidance on the best way to store sensitive data.

Keeping unsecured or unencrypted copies of data can put it at higher risk. Gaining knowledge of security levels of cloud services is vital.

What are the retention policies, and do I have a backup? Cloud platforms can have widely varied uses, and this can cause (or prevent) issues. If data is stored in a cloud platform, it could be vulnerable to cloud security risks such as ransomware or corruption, so ensuring that multiple copies of data are retained or backed up can prevent this. Verifying these processes are in place improves the security of an organization’s cloud platforms and gives an understanding of where any risk could come from.

10. Use Strong Passwords and Multi-factor Authentication

Fred Reck, InnoTek Computer Consulting

Ensure that you require strong passwords for all cloud users, and preferably use multi-factor authentication.

According to the 2017 Verizon Data Breach Investigations Report, 81% of all hacking-related breaches leveraged stolen and/or weak passwords. One of the most significant benefits of the cloud is the ability to access company data from anywhere in the world on any device. On the flip side, from a security standpoint, anyone (aka the “bad guys”) with a username and password can potentially access the business’s data. Forcing users to create strong passwords makes it vastly more difficult for hackers to use a brute-force attack (guessing the password by trying many combinations of random characters).

In addition to secure passwords, many cloud services today can utilize an employee’s cell phone as the secondary, physical security authentication piece in a multi-factor strategy, making this accessible and affordable for an organization to implement. Users would not only need to know the password but would need physical access to their cell phone to access their account.

Lastly, consider implementing a feature that locks a user’s account after a predetermined number of unsuccessful logins.
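A minimal sketch of such a lockout, assuming in-memory counters for illustration; a production version would persist the state and add a timed unlock.

    # Sketch: lock an account after repeated failed logins.
    MAX_ATTEMPTS = 5
    failed_attempts = {}
    locked = set()

    def record_login(username: str, success: bool) -> bool:
        """Record a login attempt; return True if the account is locked."""
        if success:
            failed_attempts.pop(username, None)  # reset the counter
            return username in locked
        failed_attempts[username] = failed_attempts.get(username, 0) + 1
        if failed_attempts[username] >= MAX_ATTEMPTS:
            locked.add(username)
        return username in locked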

11. Enable IP-location Lockdown

Chris Byrne, Co-founder and CEO of Sensorpro

Companies should enable two-factor authentication and IP-location lockdown for access to the cloud applications they use.

With 2FA, you add another challenge to the usual email/password combination, such as a code sent by text message. With IP lockdown, you can ring-fence access from your office IP or the IPs of remote workers. If the platform does not support this, consider asking your provider to enable it.
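A minimal sketch of the IP ring-fence, using Python’s standard ipaddress module; the office and remote-worker ranges below are hypothetical documentation addresses.

    # Sketch: allow logins only from approved networks.
    from ipaddress import ip_address, ip_network

    ALLOWED_NETWORKS = [
        ip_network("203.0.113.0/24"),   # office egress range (example)
        ip_network("198.51.100.7/32"),  # remote worker's static IP (example)
    ]

    def ip_is_allowed(client_ip: str) -> bool:
        addr = ip_address(client_ip)
        return any(addr in net for net in ALLOWED_NETWORKS)

    assert ip_is_allowed("203.0.113.42")
    assert not ip_is_allowed("192.0.2.1")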

Regarding actual cloud platform provision, look for a data-at-rest encryption option. At some point, this will become as ubiquitous as https (SSL/TLS). Should the unthinkable happen and data end up in the wrong hands, i.e., a device gets stolen or forgotten on a train, then data-at-rest encryption is the last line of defense preventing anyone from accessing your data without the right encryption keys. Even if attackers manage to steal the data, they cannot use it. This, for example, would have ameliorated the recent Equifax breach.

12. Cloud Storage Security Solutions With VPNs

Eric Schlissel, President and CEO of GeekTek

Use VPNs (virtual private networks) whenever you connect to the cloud. VPNs are often used to semi-anonymize web traffic, usually by viewers who are geoblocked from streaming services such as Netflix USA or BBC iPlayer. They also provide a crucial layer of security for any device connecting to your cloud. Without a VPN, any potential intruder with a packet sniffer could determine which members were accessing your cloud account and potentially gain access to their login credentials.

Encrypt data at rest. If for any reason a user account is compromised on your public, private, or hybrid cloud, the difference between data in plaintext and in encrypted format can be measured in hundreds of thousands of dollars: specifically, $229,000, the average cost of a cyber attack reported by respondents to a survey conducted by the insurance company Hiscox. As recent events have shown, encrypting and decrypting this data will prove far less painful than enduring the alternative.

Use two-factor authentication and single sign-on for all cloud-based accounts. Google, Facebook, and PayPal all utilize two-factor authentication, which requires the user to input a unique software-generated code into a form before signing into his/her account. Whether or not your business aspires to their stature, it can and should emulate this core component of their security strategy. Single sign-on simplifies access management, so one pair of user credentials signs the employee into all accounts. This way, system administrators only have one account to delete rather than several that can be forgotten and later re-accessed by the former employee.

13. Beware of the Human Element Risk

Steven J.J. Weisman, Lawyer and Professor at Bentley University

To paraphrase Shakespeare, the fault is not in the cloud; the responsibility is in us.

Storing sensitive data in the cloud is a good option for data security on many levels. However, regardless of how secure a technology may be, the human element will always present a potential security danger to be exploited by cybercriminals. Many past cloud security breaches have proven to be due not to security lapses in the cloud technology, but rather to the actions of individual users of the cloud.

They have unknowingly provided their usernames and passwords to cybercriminals who, through spear phishing emails, phone calls, or text messages, persuade people to give up the critical information necessary to access the cloud account.

The best way to avoid this problem, along with better educating employees to recognize and prevent spear phishing, is to use two-factor authentication, such as having a one-time code sent to the employee’s cell phone whenever anyone attempts to access the cloud account.

14. Ensure Data Retrieval From A Cloud Vendor

Bob Herman, Co-Founder and President of IT Tropolis.

1. Two-factor authentication protects against account fraud. Many users fall victim to email phishing attempts where bad actors dupe them into entering their login information on a fake website. The bad actor can then log in to the real site as the victim and do all sorts of damage, depending on the site application and the user’s access. 2FA ensures a second code must be entered when logging into the application, usually a code sent to the user’s phone.

2. Ensuring you own your data and can retrieve it in the event you no longer want to do business with the cloud vendor is imperative. Most legitimate cloud vendors specify in their terms that the customer owns their data. Next, confirm you can extract or export the data in some usable format, or that the cloud vendor will provide it to you on request.

15. Real Time and Continuous Monitoring

Sam Bisbee, Chief Security Officer at Threat Stack

1. Create Real-Time Security Observability & Continuous Systems Monitoring

While monitoring is essential in any data environment, it’s critical to emphasize that changes in modern cloud environments, especially those of SaaS environments, tend to occur more frequently; their impacts are felt immediately.

The results can be dramatic because of the nature of elastic infrastructure. At any time, someone’s accidental or malicious actions could severely impact the security of your development, production, or test systems.

Running a modern infrastructure without real-time security observability and continuous monitoring is like flying blind. You have no insight into what’s happening in your environment, and no way to start immediate mitigation when an issue arises. You need to monitor application and host-based access to understand the state of your application over time.

  • Monitor systems for manual user actions. This is especially important in the current DevOps world, where engineers are likely to have access to production. It’s possible they are managing systems using manual tasks, so use this as an opportunity to identify processes that are suited for automation.
  • Track application performance over time to help detect anomalies. Understanding “who did what and when” is fundamental to investigating changes occurring in your environment (a minimal anomaly-detection sketch follows this list).
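As a toy illustration of anomaly detection on a monitored metric, a trailing-window z-score flags values that deviate sharply from the recent baseline; real monitoring platforms use far richer models, but the principle is the same.

    # Sketch: flag metric samples (e.g., request latency in ms) that sit
    # more than `threshold` standard deviations from the trailing window.
    from statistics import mean, stdev

    def find_anomalies(samples, window=20, threshold=3.0):
        anomalies = []
        for i in range(window, len(samples)):
            baseline = samples[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma and abs(samples[i] - mu) / sigma > threshold:
                anomalies.append((i, samples[i]))
        return anomalies

    latencies = [100.0 + (i % 5) for i in range(50)] + [400.0]
    print(find_anomalies(latencies))  # flags the 400 ms spike at index 50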

2. Set & Continuously Monitor Configuration Settings

Security configurations in cloud environments such as Amazon Direct Connect can be complicated, and it is easy to inadvertently leave access to your systems and data open to the world, as has been proven by all the recent stories about S3 leaks.

Given the changeable (and sometimes volatile) nature of SaaS environments, where services can be created and removed in real time on an ongoing basis, failure to configure services appropriately, and failure to monitor settings can jeopardize security. Ultimately, this will erode the trust that customers are placing in you to protect their data.

By setting configurations against an established baseline and continuously monitoring them, you can avoid problems when setting up services, and you can detect and respond to configuration issues more quickly when they occur.
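A minimal sketch of that baseline comparison, with hypothetical settings; in practice the baseline would live in version control and the current values would come from the provider’s APIs.

    # Sketch: detect configuration drift against an approved baseline.
    BASELINE = {
        "s3_block_public_access": True,
        "require_mfa": True,
        "allowed_ssh_cidr": "203.0.113.0/24",  # example range
    }

    def detect_drift(current: dict) -> dict:
        """Return {setting: (expected, actual)} for every drifted value."""
        return {
            key: (expected, current.get(key))
            for key, expected in BASELINE.items()
            if current.get(key) != expected
        }

    live = {"s3_block_public_access": False, "require_mfa": True,
            "allowed_ssh_cidr": "203.0.113.0/24"}
    print(detect_drift(live))  # {'s3_block_public_access': (True, False)}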

3. Align Security & Operations Priorities for Cloud Security Solutions and Infrastructure

Good security is indistinguishable from proper operations. Too often these teams are at odds inside an organization. Security is sometimes seen as slowing down the business, overly focused on policing the activities of Dev and Ops teams. But security can be a business enabler.

Leveraging automated testing tools, security controls, and monitoring across network management, user access, infrastructure configuration, and application-layer vulnerability management will drive the business forward, reducing risk across the attack surface and maintaining operational availability.

16. Use Auditing Tools to Secure Data In the Cloud

Jeremy Vance, US Cloud

1. Use an auditing tool so that you know everything you have in the cloud and everything your users are using in it. You can’t secure data that you don’t know about.

2. In addition to finding out what services are being run on your network, find out how and why those services are being used, by whom and when.

3. Make that auditing process a routine part of your network monitoring, not just a one-time event. If you don’t have the bandwidth for that, outsource the auditing routine to a qualified third party like US Cloud.

17. Most Breaches Start At Simple Unsecured Points

Marcus Turner, Chief Architect & CTO at Enola Labs

The cloud is very secure, but to ensure you are keeping company data secure it is important to configure the cloud properly.

For AWS specifically, AWS Config is the tool best suited to do this. AWS, when configured the right way, is one of the most secure cloud computing environments in the world. However, most data breaches are not hackers leveraging complex programs to get access to critical data; rather, it’s the simple unsecured points, the low-hanging fruit, that make company data vulnerable.

Even with the best cloud security, human error is often to blame for the most critical gap or breach in protection. Having routines to continuously validate configuration accuracy is the most underused and under-appreciated way of keeping company data secure in the cloud.

18. Ask Your Cloud Vendor Key Security Questions

Brandan Keaveny, Ed.D., Founder of Data Ethics LLC

When exploring the possibilities of moving to a cloud-based solution, you should ensure adequate supports are in place should a breach occur. Make sure you ask the following questions before signing an agreement with a cloud-based provider:

Question: How many third-parties does the provider use to facilitate their service?

Reason for question (Reason): Processes and documentation will need to be updated to include procedural safeguards and coordination with the cloud-based solution. Additionally, the level of security provided by the cloud-based provider should be clearly understood. Increased levels of security may need to be added to meet privacy and security requirements for the data being stored.

Question: How will you be notified if a breach of their systems occurs and will they assist your company in the notification of your clients/customers?

Reason: Adding a cloud-based solution to the storage of your data also adds a new time dimension to factor into the notification requirements that may apply to your data should a breach occur. These timing factors should be incorporated into breach notification procedures and privacy policies.

When switching to the cloud from a locally hosted solution, your security risk assessment process needs to be updated. Before making the switch, a risk assessment should take place to understand the current state of the integrity of the data that will be migrated.

Additionally, research should be done to review how data will be transferred to the cloud environment. Questions to consider include:

Question: Is your data ready for transport?

Reason: The time to conduct a data quality assessment is before migrating data to a cloud-based solution rather than after the fact.

Question: Will this transfer be facilitated by the cloud provider?

Reason: It is important to understand the security parameters that are in place for the transfer of data to the cloud provider, especially when considering large data sets.

19. Secure Your Cloud Account Beyond the Password

Contributed by the team at Dexter Edward

Secure the cloud account itself. All the protection on a server/OS/application won’t help if anyone can take over the controls.

  • Use a strong and secure password on the account and 2-factor authentication.
  • Rotate cloud keys/credentials routinely (a minimal rotation sketch follows this list).
  • Use IP whitelists.
  • Use role-based access on any associated cloud keys/credentials.
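A minimal sketch of routine key rotation against the AWS IAM API via boto3; error handling, secret distribution, and the switch-over window are omitted, and since AWS allows at most two active keys per user, this assumes one existing key.

    # Sketch: rotate an IAM user's access key.
    import boto3

    def rotate_access_key(username: str) -> dict:
        iam = boto3.client("iam")
        old_keys = iam.list_access_keys(UserName=username)["AccessKeyMetadata"]

        # Create the replacement first; in practice, retire the old keys
        # only after applications have switched over (immediate deletion
        # is shown here just to keep the sketch short).
        new_key = iam.create_access_key(UserName=username)["AccessKey"]
        for key in old_keys:
            iam.delete_access_key(UserName=username,
                                  AccessKeyId=key["AccessKeyId"])
        return new_key  # contains AccessKeyId and SecretAccessKey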

Secure access to the compute instances in the cloud.

  • Use firewalls provided by the cloud providers.
  • Use secure SSH keys for any devices that require login access.
  • Require a password for administrative tasks.
  • Construct your application to operate without root privilege.
  • Ensure your applications use encryption for any communications outside the cloud.
  • Use authentication before establishing public communications.

Use as much of the private cloud network as you can.

  • Avoid binding services to all public networks.
  • Use the private network to isolate even your login access (VPN is an option).

Take advantage of monitoring, file auditing, and intrusion detection when offered by cloud providers.

  • The cloud is made to move – use this feature to change up the network location.
  • Turn off instances when not in use.
  • Keep daily images so you can move the servers/applications around the internet more frequently.

20. Consider Implementing Managed Virtual Desktops

Michael Abboud, CEO and Founder of TetherView

Natural disasters, mixed with cyber threats, data breaches, hardware problems, and the human factor, increase the risk that a business will experience some type of costly outage or disruption.

Moving towards managed virtual desktops delivered via a private cloud provides a unique opportunity for organizations to reduce costs and provide secure remote access to staff, while supporting business continuity initiatives and mitigating the risk of downtime.

Taking advantage of standby virtual desktops, a proper business continuity solution provides businesses with the foundation for security and compliance.

The deployment of virtual desktops provides users with the flexibility to work remotely via a fully functional browser-based environment, while simultaneously allowing IT departments to centrally manage endpoints and lock down business-critical data. Performance, security, and compliance are unaffected.

Standby virtual desktops come pre-configured and are ready to be deployed instantaneously, allowing your team to remain “business as usual” during a sudden disaster.

In addition to this, you should ensure regular data audits and backups.

If you don’t know what is in your cloud, now is the time to find out. It’s essential to frequently audit your data and ensure everything is backed up. You’ll also want to consider who has access to this data. Former employees, or those who no longer need access, should have their permissions revoked.

It’s also important to use the latest security measures, such as multi-factor authentication and default encryption. Always keep your employees up to speed on these measures and train them to spot potential threats so that they know how to deal with them right away.

21. Be Aware of a Provider’s Security Policies

Jeff Bittner, Founder and President of Exit Technologies

Many, if not most, businesses will continue to expand in the cloud while relying on on-premise infrastructure for a variety of reasons, ranging from simple cost/benefit advantages to reluctance to entrust key mission-critical data or systems to third-party cloud service providers. Keeping track of which assets are where in this hybrid environment can be tricky and can result in security gaps.

Responsibility for security in the cloud is shared between the service provider and the subscriber. So, the subscriber needs to be aware not only of the service provider’s security policies, but also such mundane matters as hardware refresh cycles.

Cyber attackers have become adept at finding and exploiting gaps in older operating systems and applications that may be obsolete, or which are no longer updated. Now, with the disclosure of the Spectre and Meltdown vulnerabilities, we also have to worry about threats that could exploit errors or oversights hard-coded at the chip level.

Hardware such as servers and PCs has a limited life cycle, but often businesses will continue to operate these systems after vendors begin to withdraw support and discontinue firmware and software updates needed to counter new security threats.

In addition to being aware of what their cloud provider is doing, businesses must keep track of their own assets and refresh or decommission them as needed. When computer systems are repurposed for non-critical purposes, it is too easy for them to fall outside of risk management and security oversight.

22. Encrypt Backups Before Sending to the Cloud

Mikkel Wilson, CTO at Oblivious.io

1. File metadata should be secured just as vigilantly as the data itself. Even if an attacker can’t get at the data you’ve stored in the cloud, if they can get, say, all the filenames and file sizes, you’ve leaked important information. For example, if you’re a lawyer and you reveal that you have a file called “michael_cohen_hush_money_payouts.xls” and it’s 15mb in size, this may raise questions you’d rather not answer.

2. Encrypt your backups *before* you upload them to the cloud. Backups are a high-value target for attackers. Many companies, even ones with their own data centers, will store backups in cloud environments like Amazon S3. They’ll even turn on the encryption features of S3. Unfortunately, Amazon stores the encryption keys right along with the data. It’s like locking your car and leaving the keys on the hood.

23. Know Where Your Data Resides To Reduce Cloud Threats

Vikas Aditya, Founder of QuikFynd Inc.

Be aware of where your data is stored these days, so that you can proactively identify any data that may be at risk of a breach.

These days, data is stored in multiple cloud locations and applications, in addition to storage devices within the business. Companies are adopting cloud storage services such as Google Drive, Dropbox, OneDrive, etc., and online software services for all kinds of business processes. This has led to vast fragmentation of company data, and often managers have no idea where all the data may be.

For example, a confidential financial report may end up in cloud storage because devices automatically sync with the cloud, or a sensitive business conversation may happen in a cloud-based messaging service such as Slack. While cloud companies have all the right intentions to keep their customers’ data safe, they are also prime targets, because hackers get a better ROI from targeting services where they can potentially access data for millions of subscribers.

So, what should a company do?

While they will continue to adopt cloud services, and their data will end up in many, many locations, they can use search and data organization tools that show what data exists in these services. Using full-text search capabilities, they can then very quickly find out whether any of this information would pose a risk to the company if breached. You cannot protect something if you do not even know where it is. More importantly, you will not even know if it is stolen. So, companies looking to protect their business data need to take steps at least to be aware of where all their information is.

24. Patch Your Systems Regularly To Avoid Cloud Vulnerabilities

Adam Stern, CEO of Infinitely Virtual

Business users are not defenseless, even in the wake of recent attacks on cloud computing like WannaCry or Petya/NotPetya.

The best antidote is patch management. It is always sound practice to keep systems and servers up to date with patches – it is the shortest path to peace of mind. Indeed, “patch management consciousness” needs to be part of an overarching mantra that security is a process, not an event — a mindset, not a matter of checking boxes and moving on. Vigilance should be everyone’s default mode.

Spam is no one’s friend; be wary of emails from unknown sources – and that means not opening them. Every small and midsize business wins by placing strategic emphasis on security protections, with technologies like clustered firewalls and intrusion detection and prevention systems (IDPS).

25. Security Processes Need Enforcement as Staff Often Fail to Realize the Risk

Murad Mordukhay, CEO of Qencode

1. Security as a Priority

Enforcing security measures can become difficult when working against deadlines or on complex new features. In an attempt to drive their products forward, teams often bend the rules outlined in their own security process without realizing the risk they are creating for their company. A well-thought-out security process needs to be well enforced in order to achieve its goal of keeping your data protected. Companies that make cloud security a priority in their product development process drastically reduce their exposure to lost data and security threats.

2. Passwords & Encryption

Two important parts of securing your data in the cloud are passwords and encryption.

Poor password management is the most significant opportunity for bad actors to access and gain control of company data. This is usually accomplished through social engineering techniques (like phishing emails), mostly due to poor employee education. Proper employee training and email monitoring processes go a long way toward preventing password exposure. Additionally, passwords need to be long and include numbers, letters, and symbols. Passwords should never be written down, shared in email, or posted in chat and ticket comments. An additional layer of data protection is achieved through encryption. If your data is stored in the cloud for long periods, it should be encrypted locally before you send it up. This makes the data practically inaccessible in the unlikely event it is compromised.

26. Enable Two-factor Authentication

Tim Platt, VP of IT Business Services at Virtual Operations, LLC

For the best cloud server security, we prefer to see Two Factor Authentication (also known as 2FA, multi-factor authentication, or two-step authentication) used wherever possible.

What is this? 2FA combines “something you know” with “something you have.” If you need to supply both a password and a unique code sent to your smartphone via text, then you have both of those things. Even if someone knows your password, they still can’t get into your account; they would have to know your password and have access to your cell phone. Not impossible, but you have just made it dramatically more difficult for them to hack your account, and they will look elsewhere for an easier target. As an example, iCloud and Gmail support 2FA – two services very popular with business users. I recommend everyone use it.

Why is this important for cloud security?

Because cloud services are often not protected by a firewall or another mechanism that controls where the service can be accessed from, 2FA is an excellent additional layer of security. I should mention as well that some services, such as Salesforce, have a very efficient, easy-to-use implementation of 2FA that isn’t a significant burden on the user.
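A minimal sketch of the server side of a TOTP-style second factor, assuming the third-party pyotp library; enrollment, secret storage, and rate limiting are out of scope here.

    # Sketch: verify a time-based one-time password (TOTP).
    import pyotp

    # Generated once at enrollment and stored server-side per user; the
    # user loads the same secret into an authenticator app.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    def verify_second_factor(submitted_code: str) -> bool:
        # valid_window=1 tolerates one 30-second step of clock skew.
        return totp.verify(submitted_code, valid_window=1)

    print(verify_second_factor(totp.now()))  # True for the current code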

27. Do Not Assume Your Data in the Cloud is Backed-Up

Mike Potter, CEO & Co-Founder at Rewind

Backing up data that’s in the cloud: There’s a big misconception about how cloud-based platforms (e.g., Shopify, QuickBooks Online, Mailchimp, WordPress) are backed up. Typically, cloud-based apps maintain a disaster recovery backup of the entire platform. If something were to happen to their servers, they would try to recover everyone’s data to the last backup. However, as a user, you don’t have access to their backup to restore your own data.

This means that you risk having to manually undo unwanted changes or permanently losing data if:

  • A 3rd-party app integrated into your account causes problems.
  • You need to undo a series of changes.
  • You or someone on your team makes a mistake.
  • A disgruntled employee or contractor deletes data maliciously.

Having access to a secondary backup of your cloud accounts gives you greater control and freedom over your own data. If something were to happen to the vendor’s servers, or within your individual account, being able to quickly recover your data could save you thousands of dollars in lost revenue, repair costs, and time.

28. Minimize and Verify File Permissions

Randolph Morris, Founder & CTO at Releventure

1. If you are using a cloud-based server, ensure you are monitoring for and patching the Spectre vulnerability and its variations. Cloud servers are especially vulnerable. This vulnerability can bypass cloud security measures, including encryption, for data that is being processed at the time the vulnerability is exploited.

2. Review and tighten up file access for each service. Too often, accounts with full access are used to ensure software ‘works’ because they had permission issues in the past. If possible, each service should use its own account, restricted to accessing only what is vital, with the minimum required permissions.

29. When Securing Files in the Cloud, Encrypt Data Locally First

Brandon Ackroyd, Founder and Mobile Security Expert at Tiger Mobiles

Most cloud storage users assume such services use their own encryption, and they do; Dropbox, for example, uses an excellent encryption system for files.

The problem, however, is that because you’re not the one encrypting, you don’t hold the decryption key either. Dropbox holds the decryption key, so anyone with that same key can decrypt your data. Decryption happens automatically when you are logged into the Dropbox system, so anyone who accesses your account, e.g., via hacking, can also get your now-unencrypted data.

The solution is to encrypt your files and data, using an encryption application or software, before sending them to your chosen cloud storage service.
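A minimal sketch of that local-encryption step, assuming the cryptography package’s Fernet recipe (AES-based authenticated encryption); the filename is a placeholder and the upload itself is left abstract.

    # Sketch: encrypt a file locally before uploading it anywhere.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this key OUT of the cloud account
    fernet = Fernet(key)

    with open("report.pdf", "rb") as f:
        ciphertext = fernet.encrypt(f.read())

    with open("report.pdf.enc", "wb") as f:
        f.write(ciphertext)       # upload this file, not the original

    # After downloading report.pdf.enc later:
    # plaintext = fernet.decrypt(ciphertext)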

30. Exposed Buckets in AWS S3 are Vulnerable

Todd Bernhard, Product Marketing Manager at CloudCheckr

1. The most common and publicized data breaches of the past year or so have been due to giving the public read access to AWS S3 storage buckets. The default configuration is indeed private, but people tend to make changes, forget about them, and then put confidential data in those exposed buckets.
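A minimal sketch of auditing for that misconfiguration with boto3, flagging buckets whose “block public access” settings are missing or not fully enabled; treat it as a starting point rather than a complete audit.

    # Sketch: list S3 buckets whose public-access block is absent or partial.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)
            exposed = not all(cfg["PublicAccessBlockConfiguration"].values())
        except ClientError:
            # No public-access-block configuration at all: review it.
            exposed = True
        if exposed:
            print(f"Review bucket: {name}")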

2. Encrypt data, both in transit and at rest. In a data center, end users, servers, and application servers might all be in the same building. By contrast, with the cloud, all traffic goes over the Internet, so you need to encrypt data as it moves around in public. It’s like the difference between mailing a letter in an envelope and sending a postcard, whose contents anyone who handles it can read.

31. Use the Gold-standard of Encryption

Jeff Capone, CEO of SecureCircle

There’s a false sense of privacy being felt by businesses using cloud-based services like Gmail and Dropbox to communicate and share information. Because these services are cloud-based and accessible by password, it’s automatically assumed that the communications and files being shared are secure and private. The reality is – they aren’t.

One way organizations can secure their data is by using new encryption methods such as end-to-end encryption for email and file sharing. It’s considered the “gold standard” method, with no central points of attack, meaning it protects user data even when the server is breached.

These advanced encryption methods will be most useful for businesses when used in conjunction with well-aligned internal policies. For example, decentralizing access to data when possible, minimizing or eliminating accounts with privileged access, and carefully considering the risks when deciding to share data or use SaaS services.

32. Have Comprehensive Access Controls in Place

Randy Battat, Founder and CEO, PreVeil

All cloud providers have the capability of establishing access controls to your data. This is essentially a listing of those who have access to the data. Ensure that “anonymous” access is disabled and that you have provided access only to those authenticated accounts that need access.

Besides that, you should utilize encryption to ensure your data stays protected and away from prying eyes. There is a multitude of options available depending on your cloud provider. Balance the utility of accessing data with the need to protect it; some methods are more secure than others, like utilizing a client-side key and encryption process. Then, even if someone has access to the data (see point #1), they only have access to the encrypted version and must still have a key to decrypt it.

Ensure continuous compliance with your governance policies. Once you have implemented the items above and laid out your myriad other security and protection standards, ensure that you remain in compliance with your policies. As many organizations have experienced with cloud data breaches, the risk is not with the cloud provider platform; it’s what their staff does with the platform. Ensure compliance by monitoring for changes or, better yet, implement tools to monitor the cloud with automated corrective actions should your environment experience configuration drift.

33. 5 Fundamentals to Keep Data Secure in the Cloud

David Gugick, VP of Product Management at CloudBerry

  • Perform penetration testing to ensure any vulnerabilities are detected and corrected.
  • Use a firewall to create a private network to keep unauthorized users out.
  • Encrypt data using AES encryption and rotate keys to ensure data is protected both in transit and at rest.
  • Log and monitor to track who is doing what with data.
  • Use identity and access control to restrict access, and the type of access, to only the users and groups who need it.

34. Ensure a Secure Multi-Tenant Environment

Anthony Dezilva, CISO at phoenixNAP

When we think of the cloud, we think of two things: cost savings, due to efficiencies gained by using a shared infrastructure, and cloud storage security risk.

Although many published breaches are attributed to misconfiguration of cloud-based environments, I would be surprised if this number were higher than the reported breaches of non-cloud-based environments.

The best cloud service providers have a vested interest in creating a secure multi-tenant environment. Their aggregate spending on creating these environments is far more significant than most companies’ IT budgets, let alone their security budgets. Therefore, I would argue that a cloud environment configured correctly provides a far higher level of security than anything a small to medium-sized business can create on-prem.

Furthermore, in an environment where security talent is in grave shortage, there is no way an organization can find, let alone afford, the security talent it needs. The next best thing is to create a business relationship with a provider that not only has a strong, secure infrastructure but also provides cloud security monitoring solutions.

Cloud Computing Threats and Vulnerabilities: Need to know

  • Architect your solution as you would in any on-prem design process;
  • Take advantage of application services layering and micro-segmentation;
  • Use transaction processing layers with strict ACLs that control inter-process communication. Use PKI infrastructure to authenticate and encrypt inter-process communication.
  • Utilize advanced firewall technology, including WAFs (Web Application Firewalls), to front-end web-based applications and minimize the impact of vulnerabilities in underlying software;
  • Leverage encryption right down to the record level;
  • Accept that it is only a matter of time before someone breaches your defenses; plan for it. Architect all systems to minimize the impact should it happen.
  • A flat network is never okay!
  • Implement a robust change control process with a weekly patch management cycle;
  • Maintain offline copies of your data to mitigate the risk of cloud service collapse or a malicious attack that wipes your cloud environment;
  • Contract with 24×7 security monitoring services that have an incident response component.


man looking out at threats in cloud security

Cloud Storage Security: How Secure is Your Data in The Cloud?

Data is moving to the cloud at a record pace.

Cloud-based solutions are increasingly in demand around the world. These solutions include everything from secure data storage to entire business processes.

A Definition Of Cloud Storage Security

Cloud-based internet security is an outsourced solution for storing data. Instead of saving data on local hard drives, users store data on Internet-connected servers. Data centers manage these servers to keep the data safe and secure to access.

Enterprises turn to cloud storage solutions to solve a variety of problems. Small businesses use the cloud to cut costs. IT specialists turn to the cloud as the best way to store sensitive data.

Any time you access files stored remotely, you are accessing a cloud.

Email is a prime example. Most email users don’t bother saving emails to their devices because those devices are connected to the Internet.

Learn about cloud storage security and how to take steps to secure your cloud servers.

Types of Cloud: Public, Private, Hybrid

There are three types of cloud solutions.

Each of these offers a unique combination of advantages and drawbacks:

Public Cloud: These services offer accessibility and baseline security, best suited for unstructured data like files in folders. Most users don’t get a great deal of customized attention from public cloud providers, but this option is affordable.

Private Cloud: Private cloud hosting services are on-premises solutions. Users assert unlimited control over the system. Private cloud storage is more expensive, because the owner manages and maintains the physical hardware.

Hybrid Cloud: Many companies choose to keep high-volume files on the public cloud and sensitive data on a private cloud. This hybrid approach strikes a balance between affordability and customization.

types of clouds to secure include private public and hybrid

How Secure is Cloud Storage?

All files stored on secure cloud servers benefit from an enhanced level of security.

The security credential most users are familiar with is the password. Cloud storage security vendors secure data using other means as well.

Some of these include:

Advanced Firewalls: All firewall types inspect data packets in transit. Simple ones examine only the source and destination addresses. Advanced ones verify packet content integrity and then map packet contents to known security threats.

Intrusion Detection: Online secure storage can serve many users at the same time. Successful cloud security systems rely on identifying when someone tries to break into the system. Multiple levels of detection ensure cloud vendors can even stop intruders who break past the network’s initial defenses.

Event Logging: Event logs help security analysts understand threats. These logs record network actions. Analysts use this data to build a narrative concerning network events. This helps them predict and prevent security breaches.

Internal Firewalls: Not all accounts should have complete access to data stored in the cloud. Limiting secure cloud access through internal firewalls boosts security. This ensures that even a compromised account cannot gain full access.

Encryption: Encryption keeps data safe from unauthorized users. If an attacker steals an encrypted file, access is denied without the secret key; the data is worthless to anyone who does not have it.

Physical Security: Cloud data centers are highly secure. Certified data centers have 24-hour monitoring, fingerprint locks, and armed guards. These places are more secure than almost all on-site data centers.

Different cloud vendors use different approaches for each of these factors. For instance, some cloud storage systems withhold encryption keys from their users, while others give users their own encryption keys.

Best-in-class cloud infrastructure relies on striking the ideal balance between access and security. If you trust users with their own keys, they may accidentally give the keys to an unauthorized person.

There are many different ways to structure a cloud security framework. The user must follow security guidelines when using the cloud.

For a security system to be complete, users must adhere to a security awareness training program. Even the most advanced security system cannot compensate for negligent users.

man looking for cyber security certifications in the IT industry

Cloud Data Security Risks

Security breaches are rarely caused by poor cloud data protection. More than 40% of data security breaches occur due to employee error. Improve user security to make cloud storage more secure.

Many factors contribute to user security in the cloud storage system.

Many of these focus on employee training:

Authentication: Weak passwords are the most common enterprise security vulnerability. Many employees write their passwords down on paper, which defeats the purpose. Multi-factor authentication can solve this problem.

Awareness: In the modern office, every job is a cybersecurity job. Employees must know why security is so important and be trained in security awareness. Users must know how criminals break into enterprise systems. Users must prepare responses to the most common attack vectors.

Phishing Protection:  Phishing scams remain the most common cyber attack vector. These attacks attempt to compromise user emails and passwords. Then, attackers can move through business systems to obtain access to more sensitive files.

Breach Drills: Simulating data breaches can help employees identify and prevent phishing attacks. Users can also improve response times when real breaches occur. This establishes protocols for handling suspicious activity and gives feedback to users.

Measurement: The results of data breach drills must inform future performance. Practice only makes perfect if analysts measure the results and find ways to improve upon them. Quantify the results of simulation drills and employee training to maximize the security of cloud storage.

Cloud Storage Security Issues: Educate Employees

Employee education helps enterprises successfully protect cloud data. Employee users often do not know how cloud computing works.

Explain cloud storage security to your employees by answering the following questions:

Where Is the Cloud Located?

Cloud storage data is located in remote data centers. These can be anywhere on the planet. Cloud vendors often store the same data in multiple places. This is called redundancy.

How is Cloud Storage Different from Local Storage?

Cloud vendors use the Internet to transfer data from a secure data center to employee devices. Cloud storage data is available everywhere.

How Much Data Can the Cloud Store?

Storage in the cloud is virtually unlimited; local drive space is not. Bandwidth – the amount of data a network can transmit per second – is usually the limiting factor. A high-volume, low-bandwidth cloud service will run too slowly for meaningful work.

Does The Cloud Save Money?

Most companies invest in cloud storage to save money compared to on-site storage. Improved connectivity cuts costs. Cloud services can also save money in disaster recovery situations.

Is the Cloud Secure and Private?

Professional cloud storage comes with state-of-the-art security. Users must follow the vendor’s security guidelines. Negligent use can compromise even the best protection.

Cloud Storage Security Best Practices

Cloud storage providers store files redundantly. This means copying files to different physical servers.

Cloud vendors place these servers far away from one another. A natural disaster could destroy one data center without affecting another one hundreds of miles away.

Consider a fire breaking out in an office building. If the structure contains paper files, those files will be the first to burn. If the office’s electronic equipment melts, the file backups will be gone, too.

If the office saves its documents in the cloud, this is not a problem. Copies of every file exist in multiple data centers located throughout the region. The office can move into a building with Internet access and continue working.

Redundancy makes cloud storage platforms highly resistant to failure, while on-site data storage is far riskier. Large cloud vendors use economies of scale to keep user data intact. These vendors measure hard drive failure rates and compensate for them through redundancy.
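As a toy illustration of why redundancy works: if one copy fails in a year with probability p, then k independent copies all fail with probability p to the power k. The 2% figure below is illustrative, not a vendor statistic.

    # Sketch: independent copies multiply down the chance of total loss.
    p = 0.02  # assumed annual failure probability of a single copy
    for k in (1, 2, 3):
        print(f"{k} copy(ies): {p**k:.6%} chance of losing every copy in a year")
    # 1 copy(ies): 2.000000% ... 3 copy(ies): 0.000800%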

Even without redundancy, only a small percentage of cloud vendor hard drives fail. These companies rely on storage for their entire income, so they take every precaution to ensure users’ data remains safe.

Cloud vendors invest in new technology. Advances improve security measures in cloud computing. New equipment improves results.

This makes cloud storage an excellent option for securing data against cybercrime. With a properly configured cloud solution in place, even ransomware poses no real threat. You can wipe the affected computers and start fresh. Disaster recovery planning is a critical aspect of cloud storage security.

Invest in Cloud Storage Security

Executives who invest in cloud storage need qualified cloud maintenance and management expertise. This is especially true for cloud security.

Have a reputable managed security services provider evaluate your data storage and security needs today.


cloud versus colocation options for hosting

Colocation vs Cloud Computing: Best Choice For Your Organization?

In today’s technology space, companies are opting to migrate from on-premises hardware to hosted solutions.

Every business wants the optimal cohesion between the best technology available and a cost-effective solution. Identifying the unique hosting needs of the business is crucial.

This decision is often driven by overhead cost but can extend into security opportunities, redundancy, disaster recovery, and many other factors. Both colocation providers and the cloud offer hosted computing solutions where data storage and processing happen offsite in a data center.

To cater to the multitude of business sizes, data centers offer a wide range of customizable solutions. In this article, we are going to compare colocation to cloud computing services.

What is Cloud Computing?

Under a typical cloud service model, a data center delivers computing services directly to its customer through the Internet. The customer pays based on the usage of computing resources, much in the same way homeowners pay monthly bills for using water and electricity.
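A toy bill calculation shows the utility-style model; the rates and usage figures below are made up for illustration and do not reflect any vendor’s pricing:

```python
# Utility-style metered billing: pay only for measured consumption.
# Rates and usage below are illustrative assumptions.
rates = {"vcpu_hour": 0.04, "gb_ram_hour": 0.005, "gb_stored_month": 0.02}
usage = {"vcpu_hour": 2 * 720,      # 2 vCPUs running all month
         "gb_ram_hour": 8 * 720,    # 8 GB RAM running all month
         "gb_stored_month": 200}    # 200 GB of storage

bill = sum(rates[item] * usage[item] for item in rates)
print(f"Monthly bill: ${bill:.2f}")  # $90.40
```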

In cloud computing, the service provider takes total responsibility for developing, deploying, maintaining, and securing its network architecture and usually implements shared responsibility models to keep customer data safe.

What is Colocation?

Colocation is when a business places its own server in a third-party data center and uses that facility’s infrastructure and bandwidth to process data.

The key difference here is that the business retains ownership of its server software and physical hardware. It simply uses the colocation data center’s superior infrastructure to gain more bandwidth and better security.

Colocation providers often offer server management and maintenance agreements as separate services for a monthly fee. These can be valuable when businesses can’t afford to send IT specialists to and from the colocation facility on a regular basis.

Comparing Colocation & The Cloud

The decision between colocation vs. cloud computing is not a mutually exclusive one.

It is entirely feasible for companies to pick different solutions for completing various tasks.

For example, an organization may host most of its daily processing systems on a public cloud server, but host its mission-critical databases on its own server. Deploying that server on-site would be expensive and insecure, so the company will look for a colocation facility that can house and maintain its most crucial equipment.

This means that the decision between colocation and cloud hosting services is one that executives and IT professionals have to make based on each asset within the corporate structure. Merely migrating everything to a colocation facility or a cloud service provider often means missing out on critical opportunities to implement synergistic solutions.

How to Weigh the Benefits and Drawbacks for Individual IT Assets

Off-premise IT solutions like cloud hosting and colocation offer significant IT savings compared to expensive, difficult-to-maintain on-premises alternatives.

However, it takes a higher degree of clarity to determine where individual IT assets should go.

In many cases, this decision depends on the specific objectives that company stakeholders wish to obtain from particular tasks and processes.

It also relies on the motive for migrating to an off-premises solution in the first place: whether the goal is security and compliance, better connectivity, or superior business continuity.

1. Security

Both cloud hosting and colocation data centers offer greater security than on-premises solutions. Although executives often cite security concerns as one of the primary reasons holding them back from hosted services, well-managed cloud computing is typically more secure than on-premises infrastructure.

Entrusting your company data to a third party may seem like a poor security move. However, dedicated managed service providers are better equipped to handle security issues. Service providers have resources and talent explicitly allocated to cybersecurity, which means they can identify threats more quickly and mitigate risks more comprehensively than in-house IT specialists.

When it comes to cloud infrastructure, the data security benefits are only as good as the service provider’s reputation. Reputable cloud hosting vendors have robust, multi-layered security frameworks in place and are willing to demonstrate their resilience.

A colocation strategy can be even better from a security perspective, but only if you have the knowledge, expertise, and resources necessary to implement a competitive security solution in-house.

Ideally, a colocation facility can take care of the security framework’s physical and infrastructural elements while your team operates a remote security operations center to cover the rest.

2. Compliance

Cloud storage can make compliance much more manageable for organizations that struggle to keep up with continually evolving demands placed on them by regulators. A reputable cloud service provider can offer on-demand compliance, shifting software and hardware packages to meet regulatory requirements on the fly. Often, the end-user doesn’t even notice the difference.

In highly regulated industries, such as healthcare with its HIPAA compliance requirements, the situation may be more delicate. Organizations that operate in these fields need to establish clear service level agreements that stipulate which party is responsible for regulatory compliance and where their respective obligations begin and end.

The same is true for colocation partners.

If your business is essentially renting space in a data center and installing your server there, you have to establish responsibility for compliance concerns from the beginning.

In most situations, this means that the colocation provider will take responsibility for the physical and hardware-related aspects of the compliance framework. Your team will be responsible for the software-oriented elements of compliance. This can be important when dealing with new or recently changed regulatory frameworks like Europe’s GDPR.

3. Connectivity

One of the primary benefits of moving computing processes into a data center environment is the ability to enjoy better and more comprehensive connectivity. This is one of the areas where well-respected data centers invest heavily, providing their clients with best-in-class bandwidth, connection speed, and reliability.

On-prem solutions often lack state-of-the-art network infrastructure. Even those that enjoy state-of-the-art connectivity today soon face obsolescence as technology steadily advances.

Managed cloud computing agreements typically include clauses for updating system hardware and software in response to advances in the field. Cloud service providers have economic incentives to update their network hardware and connectivity devices since their infrastructure is the service they offer customers.

Colocation is an elegant way to maximize the throughput of a well-configured server. It allows a company to use optimal bandwidth – thanks to the colocation facility’s state-of-the-art infrastructure – without having to continually deploy, implement, and maintain updates to on-premises system architecture.

Both colocation and cloud computing also provide unique benefits to businesses looking for hosting in specific geographic areas. You can minimize page load and processing times by reducing the physical distance between users and the servers they need to access.
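Distance alone puts a floor under latency, since signals in fiber travel at roughly two-thirds the speed of light. The sketch below computes that lower bound; real routes add switching and queuing delay on top:

```python
# Lower bound on round-trip time from geography alone.
# Assumes a straight fiber path at ~200,000 km/s (about 2/3 c).
SPEED_IN_FIBER_KM_PER_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

print(min_rtt_ms(100))    # nearby data center: ~1 ms
print(min_rtt_ms(8000))   # intercontinental: ~80 ms
```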

4. Backup and Disaster Recovery

Backup and disaster recovery, whether through colocation or the cloud, is a definite value contributor that only comprehensive managed service providers offer. Creating, deploying, and maintaining redundant business continuity solutions is one of the most important things that any business or institution can do.

Colocation and cloud computing providers offer significant cost savings for backup and disaster recovery as built-in services. Businesses and end users have come to expect disaster recovery solutions as standard features.

But not all disaster recovery solutions enjoy the same degree of quality and resilience. Data centers that offer business continuity solutions also need to invest in top-of-the-line infrastructure to make those solutions usable.

If your business has to put its disaster recovery plan to the test, you want to know that you have enough bandwidth to potentially run your entire business off of a backup system indefinitely.

IT Asset Migration To The Cloud Or Colocation Data Center

IT professionals choosing between colocation vs. cloud need to carefully assess their technology environment to determine which solution represents the best value for their data and processes.

For example, existing legacy system infrastructure can play a significant role in this decision. If you already own your servers and they can reasonably be expected to perform for several additional years, colocation can represent significant value compared to replacing aging hardware.
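A quick back-of-the-envelope comparison can frame this decision. Every figure below is an illustrative assumption, not a quote:

```python
# Toy comparison: keep owned servers in colocation for their
# remaining life vs. a comparable cloud footprint.
# All figures are illustrative assumptions.
colo_monthly = 600        # rack space, power, bandwidth
cloud_monthly = 1_400     # equivalent IaaS capacity
remaining_life_months = 36

print(f"Colocation: ${colo_monthly * remaining_life_months:,}")   # $21,600
print(f"Cloud:      ${cloud_monthly * remaining_life_months:,}")  # $50,400
```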

Determining the best option for migrating your IT assets requires expert consultation with experienced colocation and cloud computing specialists. Next-generation data management and network infrastructure can hugely improve cost savings for your business if implemented with the input of a qualified data center.

Find out whether colocation or cloud computing is the best option for your business. Have one of our experts assess your IT environment today.


What is a Bare Metal Hypervisor? A Comprehensive Guide

Are you looking for a highly scalable, flexible and fast solution for your IT backbone?

Understanding the difference between bare metal or virtualized environments will allow you to make an informed decision.

Take the time here to master the basics:

What are the requirements for your project regarding performance, density, and compliance? The answers will determine your deployment strategy, including the ability to run virtualized environments.

What is Bare Metal?

Bare Metal is just another way of saying “Dedicated Server.”

This is a single-tenant environment with direct access to the underlying hardware and no hypervisor overhead. Bare metal can support many kinds of operating systems running directly on that hardware.

The term bare metal refers to direct access to the hardware, including the ability to leverage all of its specific features, which would not be accessible with a Type 1 or Type 2 hypervisor that merely emulates that environment through virtualization.

What is a Bare Metal Hypervisor?

A bare metal hypervisor, or Type 1 hypervisor, is virtualization software that is installed directly on hardware.

At its core, the hypervisor acts as the host operating system.

It virtualizes the underlying hardware components so that guest systems function as if they had direct access to the hardware. The hypervisor enables a computer to separate its operating system from its physical hardware. From this position, the hypervisor allows a single physical host to operate many virtual units.
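In practice, administrators usually talk to a Type 1 host through a management API rather than a console on the box. Here is a minimal sketch using the libvirt Python bindings, assuming a local KVM/QEMU host and the libvirt-python package:

```python
# Enumerate the virtual machines a hypervisor host is running.
# Assumes a local KVM/QEMU host and the libvirt-python package.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
    print(f"{dom.name()}: {status}")
conn.close()
```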

It allows many clients to be housed on the same server. Server virtualization allows for a much denser deployment at the cost of some overhead and a limited ability to leverage all hardware features.

Each client will experience a simulation of its own dedicated server. However, the physical resources of the server, such as CPU cycles, memory, and network bandwidth, are shared among all tenants on the server.

The hypervisor is all about flexibility and scalability. Hypervisors allow for much denser utilization of hardware, especially in situations where not all physical resources are being used. Virtualization can run on top of an underlying OS but does not require one, especially for data center production workloads. Data centers deploy hypervisors on top of bare metal servers, not within an OS.

The type of image that a virtual environment creates also determines the performance of a hypervisor.

Microsoft, Citrix, and VMware offer the three most popular hypervisor systems. Their Hyper-V, XenServer, and ESX brands, respectively, represent the majority of the hypervisor market today.

Who is Bare Metal Ideal For?

The bare metal environment works well for many types of workloads regardless of company size.

Enterprise data centers require granular resource and access management, a high level of security, and the ability to scale. Single-tenant environments can perform better and do not run the risk of “noisy neighbors.” There is also less security risk due to physical isolation.

What are the Major Features of Bare Metal?

Bare metal servers are dedicated to one client and are never physically shared with more than one customer. If that client chooses to run a virtualized platform on top of it, they create a multitenant environment themselves. Bare metal is often the most streamlined way to command resources.

With bare metal, clients can avoid what is known as the “noisy neighbor effect” that is present in the hypervisor environment.

These servers can run equally well in individually owned data centers or in colocation facilities operated by IT service providers/IaaS providers. A business also has the option to rent a bare metal server easily on a subscription from a managed service provider.

The primary advantage of a bare metal environment is its separation.

The system does not need to run inside of any other operating system. However, it still provides all of the necessary services to the virtual environments.

multi-tenant server vs single tenant server

What Are The Benefits Of Bare Metal?

Even without bare metal, tenants receive isolation and security within the traditional hypervisor infrastructure. However, the “noisy neighbor” effect may still exist.

If one physical server is overloaded with requests or consumption from one of the tenants on the server, isolation becomes a disadvantage. The bare metal environment completely avoids this situation.

Bare metal also gives administrators the option to increase resources through the ability to add new hardware.

  • Lower overhead costs – Bare metal incurs less overhead than virtualization platforms because no hypervisor layer takes processing power from the server. With less overhead, the responsiveness and the overall speed of a solution improve. Bare metal also allows for more hardware customization, which can improve speed and responsiveness.
  • Cost effective for data transfer – Bare metal providers often offer much more cost-effective approaches to outbound data transfer. Dedicated server environments could potentially provide several terabytes of free data transfer. All else being equal, virtualized environments would not be able to match these initial offers. However, these scenarios are dependent upon server offers and partnerships and never guaranteed.
  • Flexible deployment – Server configurations can be incredibly precise. Depending on your workload, you may be able to mix bare metal and virtual environments.
  • QoS – Quality of Service agreements help eliminate the “noisy neighbor” problem, which does not occur in a bare metal environment. This can be considered a financial advantage as well as a technical one: if something goes wrong, a client has someone to hold accountable on paper. However, as with any SLA, this may vary on a case-by-case basis.
  • Greater security – Organizations that are very security sensitive may worry about falling short of regulatory compliance standards in a multitenant hypervisor environment. This is one of the most common reasons some companies are reluctant to move to cloud computing. Bare metal servers make it possible to create an entirely physical separation of resources. Remember, virtualization does not mean less security by default; security is an incredibly complex and broad topic, and there are many factors involved.

What Are The Benefits Of Bare Metal Hypervisors?

You may not need the elite performance of a single-tenant bare metal server. Your company may be able to better utilize resources by using a hypervisor. Hypervisors have many benefits, even when compared to the highly efficient and scalable bare metal solution.

Choose a hypervisor when you have a dynamic workload and do not need absolute cutting-edge performance. Workloads that need to be spun up and run for only a short period before they are turned off are perfect for this environment.

  • Backup and protection – Virtual machines are much easier to back up and protect than traditional applications. Before an application can be backed up, it must be paused first. This process is very time consuming, and it may cause the app substantial downtime. A virtual machine’s memory space, by contrast, can be captured quickly and easily using a snapshot tool, and the snapshot can be saved to disk in moments (see the snapshot sketch after this list). Every snapshot that is taken can be recalled, providing recovery and restoration of lost or damaged data to a user on demand.
  • Improved hardware utilization – A bare metal server may only play host to a single application and operating system. A hypervisor uses much more of the available resources from the network to host multiple VM instances. Each of these instances can run an entirely independent application and operating system on the same physical system.
  • Improved mobility – The structure of the VM makes it very mobile because it is an independent entity separate from the underlying hardware. A VM can be migrated between any remote or local virtual servers that have enough available resources. This can be done at any point in time with effectively no disruption. This occurs so often that it has a buzzword: live migration. That said, a virtual machine can be moved to the same hypervisor environment on a different underlying infrastructure as long as it can run the hypervisor. Ultimate mobility is achieved with containerization.
  • Adequate security – Virtual instances created by a hypervisor are isolated logically from each other, even if they are not separated physically. Although they may be on the same physical server, they do not have any fundamental knowledge of each other. If one is attacked or suffers an error, the problem does not move directly to another. Although the noisy neighbor effect may occur, hypervisors are incredibly secure even though they are not physically dedicated to a single client.
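Here is a minimal sketch of the snapshot idea mentioned above, again using the libvirt Python bindings. The domain name “app-server” is hypothetical, and the storage is assumed to support snapshots (e.g., qcow2-backed):

```python
# Take a point-in-time snapshot of a running VM via libvirt.
# "app-server" is a hypothetical domain name; assumes
# snapshot-capable (e.g., qcow2-backed) storage.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("app-server")

snapshot_xml = """
<domainsnapshot>
  <name>pre-maintenance</name>
  <description>Quick snapshot before patching</description>
</domainsnapshot>
"""
snap = dom.snapshotCreateXML(snapshot_xml, 0)
print("Created snapshot:", snap.getName())
conn.close()
```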

type 1 bare metal hypervisor vs standard hosting

Making the Best Decision for Your Project

Every situation is different, and each requires looking at all available solutions. In the end, there is no definite answer for a bare metal server with a native workload versus bare metal with a hypervisor and virtualized workloads. Both options have their advantages and disadvantages, so it comes down to making sure that all the needs of the business are met.

Once you have evaluated both, the decision comes down to what your team is most comfortable with and what best fits your needs. Testing both systems is recommended to validate performance as well as how each impacts your infrastructure and your service management.

With the proper understanding of security, scalability, and flexibility, you should be primed with enough tools to narrow down your decision. With some guidance and testing, a bare metal type 1 hypervisor could be the solution your business has been looking for.


Data Security In Cloud Computing: How Secure Is Your Data?

This article is an expert-level account of our security services by phoenixNAP’s own Anthony Dezilva. Anthony is a 25-year industry veteran with a background in virtualization and security. He is the Product Manager for Security Services at phoenixNAP.

Leadership and Partnership In Cloud Security

Definitions are critical; essential even. The term “leadership”, for example, is defined simply by Google dictionary, as “The action of leading a group of people or an organization”. At phoenixNAP, leading in our industry is part of our DNA and culture. We define leadership as creating innovative, reliable, cost-optimized, and world-class solutions that our customers can easily consume.

In that vein, the term “Cloud Infrastructure” (or its predecessor “Cloud Computing“) tends to represent multiple different scenarios and solutions, drummed up by overzealous marketing teams. Without a clear definition, clarity around the terms is convoluted at best. “Cloud Security,” however, is more often described as representing concerns around data confidentiality, privacy, regulatory compliance, recovery, disaster recovery, and even vendor viability. We aim to bring clarity, specificity, and trust into this space through our Data Security Cloud solutions.

The Road Ahead: The Security Landscape

According to Heng & Kim (2016) of Gartner, by 2020, 60% of businesses will suffer a failure of some sort, directly attributed to their internal IT team’s inability to manage risk effectively. 87% of nearly 1200 global C-Level executives surveyed by E&Y say they needed 50% more funding to deal with the increased threat landscape. Compound that problem by the fact that we are facing a global skills shortage in technology and security services. These issues directly impact the ability of organizations to maintain and retain their Information Technology and now their Cybersecurity staff.

While the industry prepares for this potential security epidemic, predictions state that a consolidation of the vast number of security services providers is going to take place, along with an increased focus and reliance on automation and machine learning tools. Despite public concern, this may not be such a bad thing. The growing sophistication of these tools, the ability to perform analytics and correlation in many dimensions, and the automation capabilities, could create efficiencies or potentially, advancements in our defensive capabilities.

Industry-leading providers in this space are not standing idly by. As such a provider, phoenixNAP is at the forefront of many initiatives, ranging from local to international. For example, we believe it is critical to foster an interest in the field in children as young as grade school. Working with industry organizations, we sponsor events and take leadership roles to support curriculum development and awareness. We are leading efforts in threat intelligence sharing and the use of disparate dark web data sources to create predictive analysis that can be operationalized for early threat vector identification. Additionally, we have partnered with the United States Armed Forces and the U.S. Department of Veterans Affairs to provide pathways for interested service members, with a low barrier of entry and a dedicated support system, so that they can successfully transition into cyber roles as civilians.

“Leadership,” we view as our social responsibility and our contribution to enhancing the security posture of our market segment.

Why is this relevant to security in the cloud?

A Gartner study from 2015 predicted a 16% year-over-year annual growth rate. The reality is that as we approach the 2020 mark, we see a 32% increase in IT spending on cloud services. That same study identified that about 40% of IT budgets are now allocated to cloud or SaaS-related services.

“These growing statistics are relevant because this is going to influence your existing cloud strategy dramatically, or if you don’t have one, this should alert you that you will soon require one.”

Secure Solutions From Our Unique Perspective

It is safe to assume you are already in the cloud, or you are going there. Our focus is to educate on what we believe are the most significant components of a secure cloud infrastructure and how these components complement and support the security needs of modern business. Just as path-goal theory emphasizes the importance of the relationship to goal achievement, as a technology service provider we believe in partnering with our customers and going the extra mile to become mutually trusted advisors in product creation and sustenance. The cloud is in your not-too-distant future. Let us keep you safe and secure, and guide you along the way.

At phoenixNAP, we have a unique perspective. As an infrastructure provider, we offer a service portfolio of complementary tools and services to provide organizations with holistic, secure, cloud-based solutions. With that in mind, we identified a gap in the small- and medium-sized business (SMB) space, where barriers to entry put cutting-edge technology such as this out of reach. We knew what we had to do: we developed the tools to give these businesses access to a world-class secure cloud-based solution offering that met and supported their regulatory needs. We set the bar on performance, recoverability, business continuity, security, and now compliance pretty high. Our passion for small to medium-sized businesses and dedication to security is why we built the Data Security Cloud, our aspiration to create the world’s most secure cloud offering.

We wanted to build a solution that would be the Gold Standard in security, yet entirely accessible to everyone. For that to happen, we needed to commoditize the traditionally consultative security services offerings and offer them at an affordable OpEx cost structure. That is exactly what we did.

Cloud Security is a Shared Responsibility

The 2017 Cloud Adoption Survey found that 90.5% of respondents believe that Cloud Computing is the future of IT. As many as 50.5% of these respondents still identified security as a concern. Of those concerns, the following areas were of particular interest:

    • Data and application integration challenges
    • Regulatory compliance challenges (54% indicated PCI compliance requirements)
    • Worries over “lock-in” due to proprietary public cloud platforms
    • Mistrust of large cloud providers
    • Cost

We architected our solution from the ground up with these perspectives in mind. We identified that we needed to monitor, actively defend, and resource a Security Operations Center to respond to incidents 24×7 globally. We designed a solution where we partner with each of our customers to share in the responsibility of protecting their environment. Ultimately, this strategy contributes to protecting the privacy and confidentiality of their own customers’ privileged, financial, healthcare, and personal/demographic data. We set out to design a system that empowers you to meet your security posture goals.

Our challenge, as we saw it, was to commoditize and demystify the types of security in cloud computing. We have invested significant resources in integrating tools and pushed vendors to transition from a traditional CapEx cost model to an OpEx pay-as-you-grow model. Ultimately, this strategy enables pricing structures that are favorable for this market segment and removes any barrier of entry, so that our customers can access the same tools and techniques formerly reserved for the enterprise space.

What are Cloud Services?

When speaking of Cloud Services, we have to define the context of:

Private Cloud

    • A Private Cloud typically represents the virtualization solution you have in-house, or one you or your organization may host in a data center colocation.
    • By aggregating multiple workloads onto a single host and optimizing the use of idle compute time, the Private Cloud takes advantage of the resource overprovisioning inherent in a bare metal hypervisor platform.
    • You own your Private Cloud. It is technically in your facility, under your operational control. Confidence in the security controls is therefore high, yet dependent on the skills and competency of the operators and their ability to keep up with proper security hygiene.
    • The challenge, however, is that you still have to procure and maintain the hardware, software, licensing, contingency planning (backup and business continuity), and even the human resources described above, including the organizational overhead to continuously develop and manage these resources (training, HR, medical/dental plans, etc.).


Public Cloud

    • A public cloud is an environment where a service provider makes virtualization infrastructure available for resources such as virtual machines, applications, and/or storage. These resources are open to general public consumption over the internet. The public cloud typically operates under a pay-per-use model, where the customer pays only for what they have subscribed and/or committed to.
    • We can categorize public cloud further as:
      • Software-as-a-Service (SaaS). A great example of SaaS is Microsoft’s Office 365. Although you can use a lot of the tools via the internet browser itself, you can also download the client-facing software, while all the real work happens within the cloud environment.
      • Platform-as-a-Service (PaaS). A solution where the cloud provider delivers hardware and software tools, typically in an OpEx model.
      • Infrastructure-as-a-Service (IaaS). When we refer to the public cloud, this is typically the service most people mean. A typical scenario is when you visit a website and order a virtual Windows Server with X processors, Y RAM, and Z storage. At phoenixNAP, we offer this style of service. Once provisioned, you install IIS and WordPress, you upload your site, and now you have an internet-facing server for your website. Consumers drawn to this model are typically cost-conscious and attempting to create their solution with the least expenditure. Things like an internet-facing firewall could be overlooked or entirely skipped. Strong system architecture practices, such as creating separate workloads for web platforms and database/storage platforms (with an internal firewall), may also suffer. What might be obvious at this point is that this is one of the areas of intense focus when we created our solutions.
    • Our value proposition is that this type of cloud platform reduces the need for the organization to invest in and maintain its on-premise infrastructure, resources, or even annual service contracts. Although this will reduce resource needs, it will not eliminate them. Most licensing costs are included via the provider and are available at significantly reduced price points through the provider’s economies of scale, so you are likely to get some of the best pricing possible.

The following table contrasts the shifting cost allocation model:

Traditional IT

Asset Costs

    • Server Hardware
    • Storage Hardware
    • Networking Hardware
    • Software Licensing

Labor Costs to Maintain Infrastructure
Physical Data Center Costs

    • Power
    • Cooling
    • Security
    • Insurance

Outsourcing/Consulting Costs
Communications/Network Costs

Public Cloud

Virtual Infrastructure Costs

    • Server Costs
      • vProcs
      • vRAM
      • vStorage
    • Software License Costs
    • Professional Services
    • Bandwidth Costs
    • Managed Services Costs

Hybrid Cloud

    • Consider the Hybrid Cloud as a fusion between the Private and Public Cloud. The desired goal is for workloads in both of these environments to communicate with each other, including the ability to move these workloads seamlessly between the two platforms.
    • Though this is also possible in the other scenarios, in the case of the Hybrid Cloud it is typical to see a public cloud environment configured like an on-premise environment. This scenario could have proper North-South traffic segmentation and, in rarer cases, proper East-West traffic segmentation facilitated by either virtual firewall appliances or, most recently, VMware NSX-based micro-segmentation technology (see the sketch after this list).
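Conceptually, micro-segmentation reduces East-West traffic to an explicit allow-list. The sketch below models that policy check in plain Python; the tier names and ports are illustrative assumptions, and this is not NSX syntax:

```python
# East-West micro-segmentation as an allow-list: traffic between
# workload tiers is denied unless explicitly permitted.
# Tier names and ports are illustrative assumptions.
ALLOWED_FLOWS = {
    ("web", "app"): {8443},   # web tier may call the app tier
    ("app", "db"): {5432},    # app tier may query the database
}

def is_allowed(src_tier: str, dst_tier: str, port: int) -> bool:
    return port in ALLOWED_FLOWS.get((src_tier, dst_tier), set())

print(is_allowed("web", "app", 8443))  # True
print(is_allowed("web", "db", 5432))   # False: no direct web-to-db path
```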

What Role Do Control Frameworks Play?

Control Frameworks are outlines of best practices: a strong, defined set of processes and controls that help the provider adhere to a proper security posture. That posture can be evaluated, audited, and reported on, especially when subject to regulatory requirements verified by an audit process. What this means to a consumer is that the provider has built a standards-based solution that is consistent with the industry. They have not cut corners; they have made the effort to create a quality product that is reliable and interoperable should you need to port in or port out components of your infrastructure. A standards-based approach by the provider can also be leveraged for your own regulatory compliance needs, as it may address components on your checklist that you can assign to the provider.

Partnering With the Best

Market share numbers are a quantitative measure; although subject to a margin of error, they are still statistically sound. Intel and VMware are clear leaders and global innovators in this space. Product superiority, a qualitative measure, is a crucial asset when integrating components to create innovative solutions in a highly demanding space. At phoenixNAP, we are proud of our ongoing partnerships and proud to develop products with these partners. We believe in the value of co-branded solutions that innovate yet create stable platforms thanks to longevity and leadership in the space.

Developing our Data Security Cloud (DSC) product offering, we had the pleasure of working with the latest generation of Intel chipsets and early release VMware product code. We architected and implemented with next-generation tools and techniques, not bound by the legacy of the previous solutions or methodologies.

We incorporated VMware’s vRealize Suite and vCloud Director technologies into a world-class solution. At phoenixNAP, we not only want to empower our customers to manage their operational tasks themselves; by using the industry-standard VMware platform, we can also create hybrid cloud solutions between their on-premise and Data Security Cloud implementations.

Starting Fresh

As we set out to design a secure cloud service offering, we chose not to be influenced by legacy. Starting with a whole new networking platform based on software-defined networking, we created and built a flexible, scalable solution incorporating micro-segmentation and data isolation best practices. We designed this level of flexibility and control throughout the entire virtualization platform stack and the interconnecting communications fabric.

Design Methodology

We drew upon our extensive background in meeting compliance goals, incorporating a framework approach, using industry best practices, and anticipating the needs and limitations inherent in achieving industry and compliance certifications such as PCI, HIPAA, and ISO 27002 (coming soon). We designed a flexible yet secure architecture, supplemented by a VMware LogInsight log collection and aggregation platform that streams security-related incidents to a LogRhythm SIEM, monitored by our 24×7 Security Operations Center (SOC).

We Proved It

What better way to prove that we achieved our goals in a security standard than to have the most respected organizations validate and certify us? We had TrustedSec evaluate our environment and attest that it met their expectations. However, we didn’t stop at achieving compliance alone. As security professionals, we also audited our environment ourselves, going over and beyond the regulatory standards. We designed our framework around a “no compromise approach” and our fundamental philosophy of “doing the right thing” from a technical and security perspective, as proven by our PCI certification of this secure cloud platform.

The Launch of our Security Services Offering

After years of extensive testing and feedback from our customers, we built our Security Risk Management and Incident Response capabilities into a service offering available to our entire customer base. We enhanced our Security Operations through the integration of advanced Security Orchestration and automated testing tools, and through strategic partnerships with public and private Information Sharing and Analysis Centers (ISACs). Enhanced by our ability to gather threat vector data globally, in real time, from our own systems, member organizations, and the dark web, we use unique enrichment techniques to build predictive profiles of these threat actor communities, with the goal of creating actionable intelligence and early warning systems to support our defensive posture.

What this means is that we are building advanced tools to detect threats before they impact your business. We are using these tools to take preventative action to protect customer networks under our watch, actions that could let the latest threat pass you by without catching you in its wake.

Layered Approach to Creating a Secure Cloud Infrastructure

Proven Base

phoenixNAP has a long and proven history in designing, developing, and operating innovative infrastructure solutions. With a parent company in the financial transactions sector, we have extensive knowledge and expertise in the secure operations of these critical solutions. As an operator of global data center facilities, we have established a trustworthy reputation and operational process, to support the needs of our diverse and vast client base.

Our certifications in SOC-1, SOC-2, and SOC-3 establish a baseline for physical and logical access control, data security, and business continuity management and procedures. Our Type II designation verifies these capabilities in practice. Our PCI-DSS certification establishes our commitment and credibility to “doing the right thing” to create an environment that exemplifies your concerns for the highest level of security posture.

Redundant Global Communication Fabric

At phoenixNAP, we believe that every customer deserves the highest form of security and protection. At our most consumer level, our customers benefit from an Internet service riding on top of a six-carrier blended connection, with technologies such as DDoS mitigation built into the communication fabric. Every one of our customers receives this exceptional level of protection out of the box. Piggy-backing on our data center availability expertise, we designed a meshed switching fabric that is as resilient as it is fast, eliminating single points of failure and giving us the confidence to offer a 100% Service Level Availability (SLA) guarantee.

Highly Scalable Hardware Platform

“A new platform that represents the largest Data Center Platform advancement in a decade”

Lisa Spellman – Intel VP/GM of Xeon and Datacenter

Secure at the Foundation

    • Root of trust module (TPM)
    • Built-in instruction sets for verification (Intel TXT)
    • Fast, high-quality random number generator (RDSEED)
    • Firmware assurance (BIOS Guard)

Built-in Ecosystem

    • Efficient provisioning and initialization (Intel PTE)
    • Scalable management with policy enforcement (Intel CIT)
    • Direct integration with HyTrust and VMware, etc.

A New Level of Trust

    • Secure, Enterprise Key Management
    • Trusted connectivity
    • Remote attestation of the secure platform
    • Compliance and measurement at the core

Designed around the latest Intel Xeon processor technology, alongside our extensive expertise in managing highly scalable workloads in our other cloud offerings, we built a computing platform that achieved 1.59X performance gains over previous generations. These increases are passed down to our customers’ workloads, providing them with better performance and a higher density environment to optimize their existing investment, without any capital outlay and, in most cases, without any additional OpEx commitments.

Advanced Hypervisor Technology

We built this on a foundational commitment to VMware and to integrating the latest tools and techniques that empower our customers to do what they need, whenever they need it.

Using Hybrid Cloud Extender, we can help customers bridge the network gaps to hosted cloud services while maintaining network access and control. Tools like VMware NSX allow for the creation of logical security policies that can be applied to a virtual machine regardless of location (cloud or on-premise). The integration of the latest Intel Cloud Integrity Toolkit allows for platform security with unmatched data protection and compliance capabilities.

Our vRealize Suite and vCloud Director integration is no different. We provide our customers with direct access to the tools they need to manage and protect their hybrid cloud environments effectively. In the event the customer wishes to engage phoenixNAP to perform some of these tasks, we offer Managed Services through our NOC and 3rd party support network.

Segmented Components

Experience has taught us how to identify and prevent repeat mistakes, even those made by strategic or competitive partners in the industry segment. One of those lessons learned is the best practice of sectioning and separating the “Management” compute platform from the “User” compute platform. Segmentation significantly minimizes the impact of a “support system” crash, or even a heavy operational workload, on the entire computing environment. By creating flexible and innovative opportunities, we train our teams to reflect, communicate, and enhance their experiences, creating knowledgeable and savvy operators who can step into the batter’s box ready to do what’s asked of them.

Threat Management

We believe that we have created a state-of-the-art infrastructure solution with world-class security and functionality. However, the solution is still dependent on a human operator, one who, based on skill or training, could be the weakest link. We therefore engage in continuous education, primarily through our various industry engagements and leadership efforts. This service offering is designed to be a high-touch environment using a zero-trust methodology. A customer who is unable to deal with the elements of an incident can have us engage on their behalf and resolve the contention.

If all else fails and the environment is breached, we rely on pre-contracted third-party Incident Responders that deploy rapidly. The proper handling of a cybersecurity incident response requires a crisis communication component: one or more individuals trained in handling the details of the situation and interfacing with the public and law enforcement, grounded in concepts of psychology so they can be sensitive and supportive to the various victim groups involved.

As we bundle backup and recovery as a core service in our offerings, we can make service restoration decisions based on the risk of overwriting data vs. extended downtime. Using the cloud environment to our advantage, we can isolate systems, and deploy parallel systems to restore the service, while preserving the impacted server for further forensic analysis by law enforcement.

It’s All About the Layers

Hierarchy of Virtual Environment Security Technologies

Security solutions are designed around defense in depth. If one layer is compromised, the defense process escalates the tools and techniques to the next tier. We believe that a layered approach such as this creates a secure and stable solution that can easily be scaled laterally as the needs and customer base grow.

Why Does This All Matter?

In one of his articles in the CISO Playbook series, Steve Riley challenges IT leaders not to worry that migration to the cloud may require relinquishing total control but encourages them to embrace a new mindset. This mindset is focused on identity management, data protection, and workload performance.

The primary driver is likely the cost savings achieved from consolidation and the transfer of responsibility to a service provider.

    • Converting CapEx expenditures to OpEx ones can surely improve cash flow for those in the SMB market space.
    • Reducing technical overhead by eliminating roles that are no longer required can free up far more operating capital, and refocusing core resources on core competencies creates business advantages in the areas that are important to the organization.

According to Gartner, the benefits of cloud migration include the following:

    • Shorter project times: Cloud IaaS is a strong approach for trial and error, offering the speed required to test the business model success.
    • Broader geographic distribution: The global distribution of cloud IaaS enables applications to be deployed to other regions quickly.
    • Agility and scalability: The resource is pay-as-you-go. If an application is designed correctly, then it is simple to scale the capability in a short period.
    • Increased application availability: As described, we have demonstrated the highest levels of security and reliability. If you have the right application design, you can develop application availability accordingly.

What’s Fueling the Cloud-First Strategy?

Many organizations are adopting a cloud-first strategy, where they default to a cloud-based solution and consider other options only when the cloud proves unsuitable or infeasible. Factors driving this trend include:

    • Reduced infrastructure and operational costs. Between reduced capital expenditures, the elasticity of cloud services, lower overall software costs, and a potential reduction in IT staff, organizations report approximately 14% in savings.
    • Flexibility and scalability to support business agility. Agility means the ability to bring new solutions to market quickly, to control costs, to leverage different types of services, and to adapt flexibly to market conditions.
    • Cloud services tend to use the latest in innovation. Being able to leverage the high rate of innovation in this space, an organization can benefit by incorporating it as part of their business strategy.
    • A cloud-first strategy can drive business growth through a supportive ecosystem.

Things to Consider

Not every workload is appropriate or destined for cloud-based compute platforms. The scoping part of any cloud migration project should start by identifying and selecting workloads that can easily be migrated to and implemented in a multi-tenant cloud environment.

The customer needs to understand the profile and characteristics of their workloads. For many years, we would never have considered moving database workloads off of physical hardware. Similarly, high-I/O or hardware-timer-reliant workloads (such as GPS or real-time event processing) may be sensitive to being in a shared, multi-tenant compute environment.

    • More importantly, cloud services predominantly revolve around x86-based server platforms. Therefore, workloads that rely on other processor architectures, specialized secondary processing units, or even dongles do not make ideal cloud candidates.

In contrast, cloud-based infrastructure allows for:

    • Business Agility – for rapid deployment, and even rapid transition from one platform to another, with low transition costs.
    • Device Choice – The flexibility to deploy, tear down, and redeploy various device configurations in a matter of clicks.
    • Collaboration – Cloud providers typically provide an expert-level helpdesk, with direct access to a community of experts that can support your needs.

There are many reasons to consider a hybrid strategy where you combine workloads. What needs to stay on bare-metal can remain on bare metal servers, either in your facility or a colocation facility such as ours, while staying connected to the cloud platform via a cross-connect, gaining the benefits of both scenarios.

Cloud computing security consists of a broad set of concerns. It is not limited to data confidentiality alone, but includes concerns for privacy, regulatory compliance, continuity and recovery, and even vendor viability. Staying secure in the cloud is, however, a “shared responsibility.” It requires partnerships, especially between the customer and their infrastructure service provider. Nobody needs to be convinced that data breaches are frequent and often due to management or operator neglect. Customers are becoming tired of their data being disclosed and then used against them. Most recently, breached data has been abused via email-based threat vectors, where the bad actor quotes a breached user ID and password to convince the target recipient, behind a mask of perceived authenticity, to perform an undesired action.

Any organization that accepts Personally Identifiable Information (PII) from its customer base establishes with that customer an implied social contract to protect that information. At phoenixNAP, we have demonstrated leadership in the infrastructure space on a global scale through partnerships with customers, solution aggregators, and resellers. We have created innovative solutions to meet the unique challenges faced by businesses, going above and beyond to achieve the goals desired by the target organization.

Notes from the Author: Elements of a Strong Security Strategy

Over the years, I have learned many important lessons when it comes to creating solutions that are secure and reliable. Here are some final thoughts to ponder.

    • There is no substitute for strong architecture. Get it right and you have a stable foundation to build upon. Get it wrong and you will play whack-a-mole for the rest of that life-cycle.
    • Have detailed documentation. Implement policies and procedures that make sense. Documentation that supports the business process. Security policy cannot burden the users. If it does, it just becomes a target for shadow IT. It needs to be supportive of the existing process while implementing the control it absolutely needs. A little control is better than no control due to a workaround.
    • Plan for a breach, plan to be down, plan for an alien invasion. If you plan for it, you won’t be caught in a state of panic when something undoubtedly happens. The more off-the-beaten-path a scenario seems, the better you can adapt when real-life scenarios arise.
    • You can’t protect what you don’t know you have. Asset management is the best thing you can do for your security posture. If it’s meant to be there: document it. If it’s not meant to be there: make certain that you have a mechanism to detect and isolate it. Even to find out who put it there, why and when.
    • Now that you know what you have: monitor it. Get to know what normal behavior is. Get to know its “baseline.”
    • Use that baseline as a comparative gauge to detect anomalies. Is this system showing inconsistent behavior? (A minimal sketch of this idea follows this list.)
    • Investigate. Have the capability to see the alert triggered by that inconsistent behavior. Are you a 24/7 operation? Can you afford to ignore that indicator until the morning? Will your stakeholders, including your customers, accept your ability to detect and respond within the Service Level Agreement (SLA) you extend to them? Can you support the resourcing needed for a 24/7 operation, or do you need to outsource the Threat Management component, at least in a coverage extension model? The best SIEM tools are useless without someone actioning the alerts as soon as they pop up. Machine learning helps; however, it cannot yet replace the operator.
    • Mitigate the problem or be able to recover the environment. Understand what your Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) are. Do your current solutions meet those goals? Can those same goals be met if you have to recover into a facility across the country, with no availability from your current staff due to the crisis being faced? How will you communicate with your customers? Do you have a crisis communicator and incident handler as part of the response team?
    • Take your lessons learned, improve the process and do it all over again.
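As promised above, here is a minimal sketch of baseline-driven anomaly detection. A real SIEM uses far richer analytics, but the principle is the same; the sample data is illustrative:

```python
# Flag behavior that deviates sharply from a learned baseline,
# using a simple z-score over historical samples.
from statistics import mean, stdev

baseline_logins_per_hour = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]

def is_anomalous(observed, history, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma

print(is_anomalous(11, baseline_logins_per_hour))   # False: a normal hour
print(is_anomalous(95, baseline_logins_per_hour))   # True: investigate
```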

No single vendor can provide you with a “silver bullet.” Any vendor that tells you otherwise is someone you should shy away from. Every customer’s needs are unique, and each situation takes a unique blend of solutions to be effective. Hence the value of a vast network of partner relationships, providing you with the solutions you need without trying to make you fit into one vendor’s offerings.

The offer is always on the table. At phoenixNAP, we will gladly take your call to discuss your concerns in this area and provide our thoughts on the topic of interest. Promoting and supporting properly secured environments is part of our social responsibility. It is part of our DNA and the core philosophy for building products in this segment. Let us be your partner in this journey.

Use of Reference Architectures

One of the benefits of a cloud-based, secure infrastructure such as our Data Security Cloud is the ability to implement battle-tested reference architectures that, in some cases, go above and beyond what is possible in a physical environment.

In what we would consider an extreme case, such an architecture creates multiple layers of security, with various gateways to get to the prized databases that most bad actors are after. Let’s not ignore the bad actors who want to take control of the web infrastructure to infect visitors with malicious payloads; however, the real prize sits inside those databases in the form of PII, PHI, or PCI data. While the various levels of defensive components are designed to make it difficult for bad actors to storm the castle, 24×7 Threat Monitoring will catch the repeated attempts and anomalous behavior, triggering an investigation and response activity.

Through a robust combination of tools, technology, services, and a cost model that supports the needs of the SMB space, we believe we have demonstrated our leadership. More importantly, we have created a solution that will benefit you, our SMB customer: a complete security solution that you can take forward as you further define your cloud strategy.

Our Promise

We have assembled a world-class team of highly experienced and skilled leaders who are passionate about cloud security. As global thought leaders, we design for the world and implement locally. We create sustainable solutions, understanding a customer’s appetite and limited budget. Let us show you how we can benefit your goals through our solution offerings, keeping with our promise to “do the right thing” by finding the best solution for you.

Get Started with Data Security in Cloud Computing Today

Contact phoenixNAP today.

Complete the form below and our experts will contact you within 24 hours.


Bare Metal vs. Virtualization: What Performs Better?

Because of the sheer volume of solutions out there, it is very challenging to generalize and provide a universally truthful answer to which is better: a bare metal or a cloud solution. When you also throw all the various use cases into the equation, you get a mishmash of advice.

However, we can all agree that each option has its advantages and disadvantages. In this article, I will try to provide an overview of strong points and shortcomings of both bare metal and cloud environments, with a single use case performance comparison.

Let us start with the pros and cons of a bare metal server.

Data Crunching

Advanced Features of Bare Metal

Dollar for dollar, bare metal servers can process more data than any other solution. Just imagine 28 cores working their way through data, as smoothly as a knife through butter.

Of course, there are exceptions. If you are running a single-threaded application, it does not matter how many cores you throw at it; you will not see any benefit. To have your applications running in an optimal environment, you need to make sure you have chosen the right solution.
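Amdahl’s law makes this concrete: the serial fraction of a workload caps the benefit of adding cores. A quick sketch with illustrative parallel fractions:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work and n is the core count.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

for p in (0.0, 0.5, 0.95):
    print(f"p={p}: 28 cores -> {amdahl_speedup(p, 28):.1f}x speedup")
# p=0.0 -> 1.0x (single-threaded: extra cores are wasted)
# p=0.5 -> ~1.9x, p=0.95 -> ~11.9x
```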

Single-tenant environment

It is kind of soothing to know that at any given point in time, 100 percent of a server’s resources are at your disposal. A bare metal or dedicated server is your private environment in that you can configure it any way you want.

In comparison, pick the wrong vendor for your public cloud solution, and you will feel like you are using public transportation – things will go slow, and you will not get anything done in time.

Another critical point is security, of course. Again, I will use the public transportation metaphor. Bare metal servers are like having your own car, in the sense that you are isolated from the outside world, safe in the knowledge that nobody can get in. Public cloud, on the other hand, can be like riding the bus: you never know who is getting on and whether someone will try to do something harmful.

Raw power and reliability

Picture Grand Tour host Jeremy Clarkson screaming "power" at the top of his lungs. Well, that is what you get with bare metal. It is fully customizable, ranging from state-of-the-art powerhouses to inexpensive low-powered machines. If you are looking into dedicated bare metal, you are in it for the raw power of the latest and greatest CPUs, such as the Intel® Xeon® Scalable processor family. I am thinking something along the lines of a dual Intel® Xeon® Gold 6142 (32 × 2.6 GHz) 6-bay machine with 512 GB of ECC DDR4 and additional SSD storage. As I always say, if you want to overtake your competitors, you stand slim chances of doing it in a Ford Pinto.

Customization opportunities

You are the one who builds the configuration from the ground up and selects each component, so it is more than evident that bare metal provides ample room for customization. Besides hardware resources, you can have any operating system, control panel, software option, or control-panel add-on you like. You can even run your own virtualized environment with a hypervisor.

That brings us to an essential point.

Bare Metal Provisioning

You need to know what you are doing

Well, you or your IT team. Either way, provisioning a bare metal server demands knowledge, careful planning, diligent management, and a clear understanding of your requirements. However, a lot of the maintenance can be outsourced. Our Managed Services for bare metal offer a full suite of services that complement your IT team.

Security

We have reached a point where we do not have to go through last year's statistics; we can all agree that ransomware and numerous other cyber threats are all around us.

When it comes to security, dedicated single-tenant servers are as safe as it gets. In a single-tenant environment, each server is under the control of an individual client. The only way the hardware itself can be compromised is if somebody breaks into the data center with the intention of damaging or stealing data.

But given that bare metal data centers have top-notch security nowadays, nobody is getting in.

You can also deploy a bare metal backup and restore solution. This adds to the overall efficiency of your data security strategy and will keep your workloads safe in case of disaster.

GPU

Cloud solutions that offer any significant GPU power are sparse. With bare metal, it is easier to find the right GPU solution to work alongside your server's CPU.

Ultimately, it can be paired with a hypervisor.

You can put a multitude of operating systems on top of a bare metal server, including a hosted hypervisor: an operating system used to create VMs within a physical server. A hypervisor, unlike a general-purpose OS, cannot run applications natively. Instead, it allows you to divide the workload into separate VMs, giving you the flexibility and scalability of a virtualized environment.
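As a rough illustration of managing the VMs carved out of a single physical box, here is a minimal sketch using the libvirt Python bindings. It assumes the libvirt-python package and a local KVM/QEMU hypervisor, which is one common setup rather than the only option.

```python
# List the virtual machines defined on a local hypervisor host.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        # info() returns [state, max_mem_KiB, mem_KiB, vcpus, cpu_time_ns]
        _state, _max_mem, mem_kib, vcpus, _cpu_time = dom.info()
        running = "running" if dom.isActive() else "stopped"
        print(f"{dom.name()}: {running}, {vcpus} vCPUs, {mem_kib // 1024} MiB")
finally:
    conn.close()
```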

Compared to VMs, bare metal is time-consuming to provision

Plan wisely as deploying a physical server is not as fast as getting a VM powered up.

PhoenixNAP, in most cases, deploys a server within 4 hours, provided that your order does not contain any special instructions or require an onboard RAID configuration. That is fast, but not cloud fast.

How You Can Benefit from Deploying a Virtual Machine

Quick setup

Whenever you need four additional servers to support your e-commerce store promotion, they can be deployed in a matter of seconds on a cloud platform. Need virtual servers to run multiple applications or test a new feature?

No problem, it can be done instantly. Periodic testing of large apps is made possible because VMs can be automatically created, used as test machines, and then discarded.

Virtualization

Flexibility & scalability

Thanks to the hypervisor layer, bare metal cloud instances are as flexible and scalable as it gets. Moving things from one VM to another, resizing a VM, and dividing a dynamic workload between several VMs is straightforward. When it comes to scalability and elasticity, that is pretty much all you need. This is one of the critical differences between a bare metal server and virtualization. It is also the reason why a hosted virtualization hypervisor is a popular solution for businesses of all sizes.

Bringing a new level of flexibility, hypervisor technologies are allowing for more efficient IT resource planning. For example, organizations can distribute workloads based on their use. This is especially useful for modern applications that have spikes in resource usage. At the same time, older legacy apps tend to run on a single machine only and would require adapting and recoding to reap the benefits of cloud environments.

A good idea is to define a procedure for determining where your apps should run, or at least cover the basics, such as defining the storage, security, and performance requirements of the applications you intend to run. Some vendors offer free trials of up to 30 days, which gives you more than enough time to test the environment and the resources provisioned.

Move around freely

When it comes to migrating data and just moving things around, VMs are the better option. Migrating or even getting a new VM up and running can be done in a matter of minutes.

Easy to manage

Virtualized environments are more easily managed than bare metal servers. With solutions such as VMware vSphere and VMware ESXi, setting up a virtual environment does not take more than a few hours. Your provider carries part of the responsibility for your VMs, so you do not need an entire IT team to manage them.

You need adequate management tools, i.e., a virtual machine manager, and a trusted provider to ensure your virtual applications run smoothly and securely. If need be, you can install guest operating systems alongside the host operating system to better control your resources. Organizations can make use of guest operating systems by running them on VMs used for testing, without giving those VMs direct access to host OS resources.

Reduced costs

Besides scalability, this is the main reason why everything is going cloud. As it is so easy to manage and scale cloud resources, it is easier to scale your costs as well.

High latency

Cloud computing environments are more prone to latency for various reasons. For one thing, if VMs are on separate networks, packet delays can result. In a cloud environment, you do not have a direct connection to the physical hardware; there is a hypervisor layer between your app and the physical resources. Thus, chances are that VMs will suffer higher latency than apps running directly on a bare metal server. Furthermore, performance bottlenecks may occur due to the sheer number of tenants.
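If you want to quantify that gap yourself, one quick approach is to measure connection latency from both environments against the same target. A minimal standard-library sketch, with a placeholder hostname:

```python
# Average TCP connection-setup latency to a remote service.
import socket
import time

def connect_latency_ms(host: str, port: int, samples: int = 5) -> float:
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass  # only connection setup time matters here
        total += time.perf_counter() - start
    return total / samples * 1000

print(f"avg connect latency: {connect_latency_ms('example.com', 443):.1f} ms")
```

Run the same script from a VM and from a bare metal host and compare the averages.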

Security

Public clouds offer weaker security, considering that there can be numerous tenants on a single server. However, the cloud as a solution is getting increasingly better at data protection. Just a couple of months ago, phoenixNAP launched its Data Security Cloud, giving the market an entirely new cloud security model.

Easy to mix and match

With some cloud solutions, users can use both single-tenant and multi-tenant resources in a single environment. The best thing about it is that, in most cases, this can be easily implemented and can provide additional value to your cloud environment.

Whenever you are in the market for a cloud hosting solution, find one that supports hybrid environments. You might be just a small business right now, but you never know when or why you would find such a solution VERY useful.

A virtual environment is ideal for:

  1. E-commerce
  2. SaaS
  3. Testing new features
  4. Enterprise resource planning (ERP) solutions

Performance Testing

Bare Metal Server vs. Virtualization

CenturyLink did an interesting performance test running Kubernetes for container creation.

Two clusters were created: one based on a bare metal environment, the other made up of virtual machines. The test measured network latency and local CPU utilization.

As might have been expected, running things on a bare metal server produced roughly a third of the latency of the cluster comprised of virtual machines. Furthermore, at certain times, CPU utilization was considerably higher on the VMs than on the dedicated bare metal server.

So what does this mean?

First of all, if you are running data-crunching apps that can significantly benefit from direct access to physical hardware, a bare metal server should be your first choice. It comes out the winner with its lower latency and lower CPU utilization, consequently providing faster results and more data output.

Can we say that bare metal is the best option out there? No. This is just one performance analysis that emphasizes one specific use case. Cloud workloads can be moved around freely, are more flexible and scalable, tend to cost less, and are more easily maintained. But they also tend to offer less performance and weaker security.

Conclusion

Ultimately, there is no right answer. Each option has its strengths and weaknesses, and it all comes down to what your organization needs. For many, finding middle ground might be the way to go.

For example, enterprises should seek out solutions that combine the strengths of both worlds and look into hybrid cloud environments, which bridge the gap between public and private cloud resources. This option is excellent if you have already invested in infrastructure and do not want to see that money wasted, but also want to make use of the cloud’s flexibility and scalability. In an in-house and outsourced hybrid cloud option, a part of your workload will be maintained on internal systems, while other computing workloads are outsourced to external cloud systems.

In conclusion, a single solution that works for everyone does not exist. If you are running an organization with diverse projects, consider a hybrid environment that combines bare metal and cloud hosting to maximize your ROI.



What is Cloud Computing in Simple Terms? Definition & Examples

Did you know that the monthly cost of running a basic web application was about $150,000 in 2000?

Cloud computing has brought it down to less than $1,000 a month.



What Is Cloud Security & What Are the Benefits?

When adopting cloud technology, security is one of the most critical issues.

Many organizations still fear that their data is not secure in a cloud environment.

Companies want to apply the same level of security to their cloud systems as their internal resources. It is essential to understand and identify the challenges of outsourcing data protection in the cloud.


What is Cloud Security?

Cloud security is a set of control-based safeguards and technology protection designed to protect resources stored online from leakage, theft, or data loss.

This protection encompasses cloud infrastructure, applications, and data. Security applications operate as software in the cloud using a Software as a Service (SaaS) model.

Topics that fall under the umbrella of security in the cloud include:

  • Data center security
  • Access control
  • Threat prevention
  • Threat detection
  • Threat mitigation
  • Redundancy
  • Legal compliance
  • Security policy

How Do You Manage Security in the Cloud?

Cloud service providers use a combination of methods to protect your data.

Firewalls are a mainstay of cloud architecture. They protect the perimeter of your network and your end users, and they also safeguard traffic between different apps stored in the cloud.

Access controls protect data by allowing you to set access lists for different assets. For instance, you might allow specific employees application access while restricting others. A general rule is to give employees access to only the tools they need to do their jobs. By maintaining strict access control, you can keep critical documents from malicious insiders or hackers with stolen credentials.
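The least-privilege rule boils down to an explicit allow list with deny-by-default behavior. A minimal sketch, with illustrative user and tool names:

```python
# Deny-by-default access control: a user may only use explicitly listed tools.
ACCESS_LIST = {
    "alice": {"crm", "billing"},
    "bob": {"crm"},
}

def can_access(user: str, tool: str) -> bool:
    # Unknown users and unlisted tools are rejected by default.
    return tool in ACCESS_LIST.get(user, set())

print(can_access("bob", "crm"))      # True
print(can_access("bob", "billing"))  # False: not needed for his job
```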

Cloud providers take steps to protect data that is in transit. Data security methods include virtual private networks, encryption, and masking. Virtual private networks (VPNs) allow remote employees to connect to corporate networks and accommodate tablets and smartphones for remote access.

Data masking obscures identifiable information, such as names. This maintains data integrity by keeping important information private. With data masking, a medical company can share data without violating HIPAA laws, for example.
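One common lightweight way to mask a field, sketched below, is to replace it with a salted one-way hash: records stay consistent for analysis, but the original value is never exposed. The record layout here is illustrative, and a real deployment would manage the salt as a secret.

```python
# Mask an identifiable field with a salted one-way hash.
import hashlib

def mask(value: str, salt: str = "per-dataset-secret") -> str:
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

record = {"name": "Jane Doe", "diagnosis": "J45.20"}
shared = {**record, "name": mask(record["name"])}
print(shared)  # the diagnosis is intact, the name is no longer identifiable
```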

Threat intelligence spots security threats and ranks them in order of importance. This feature helps you protect mission-critical assets from threats.

Disaster recovery is key to security since it helps you recover data that is lost or stolen.

While not a security component per se, your cloud services provider may need to comply with data storage regulations. Some countries require that data must be stored within their country. If your country has this requirement, you need to verify that a cloud provider has data centers in your country.

What are the Benefits of a Cloud Security System?

Now that you understand how cloud computing security operates, explore the ways it benefits your business.

Cloud-based security systems protect your business from the top threats to modern systems: malware, ransomware, and DDoS attacks.

Malware and Ransomware Breaches

Malware poses a severe threat to businesses.

Over 90 percent of malware comes via email. It is often so convincing that employees download malware without realizing it. Once downloaded, the malicious software installs itself on your network, where it may steal files or damage content.

Ransomware is a form of malware that hijacks your data and demands a financial ransom. Companies wind up paying the ransom because they need their data back.

Data redundancy provided by the cloud offers an alternative to paying ransom for your data. You can get back what was stolen with minimal service interruption.

Many cloud data security solutions identify malware and ransomware. Firewalls, spam filters, and identity management help with this. This keeps malicious email out of employee inboxes.

DDoS Protection

In a DDoS (distributed denial of service) attack, your system is flooded with requests. Your website becomes slow to load and eventually crashes when the number of requests becomes too much to handle.

DDoS attacks come with serious side effects. Every minute your website is inaccessible, you lose money.

Half of the companies that suffer DDoS attacks lose $10,000 to $100,000. Many businesses suffer from reputation damage when customers lose faith in the brand. If confidential customer data is lost in a DDoS attack, you could face legal challenges.

Given the severity of these side effects, it is no wonder that some companies close after DDoS attacks. Consider that one recent DDoS attack lasted for 12 days, and you get a sense of the importance of protection.

Cloud security services actively monitor the cloud to identify and defend against attacks. With real-time alerts, your cloud provider can take steps to secure your systems while the attack is still unfolding.
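Much of that monitoring reduces to watching request rates for anomalies. Here is a minimal sketch of a sliding-window counter that flags a flooding source; the window and threshold are illustrative values, not a recommendation:

```python
# Flag source IPs whose request rate exceeds a threshold within a window.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 1000  # requests per window treated as anomalous

hits: dict[str, deque] = defaultdict(deque)

def record_request(source_ip: str) -> bool:
    """Record one request; return True if the source looks like a flood."""
    now = time.monotonic()
    q = hits[source_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:  # drop entries outside the window
        q.popleft()
    return len(q) > THRESHOLD
```

A real DDoS defense is far more involved (distributed sources, upstream scrubbing), but the principle of rate-based anomaly detection is the same.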

Threat Detection

Security for cloud computing provides advanced threat detection using endpoint scanning for threats at the device level. Endpoint scanning increases security for devices that access your network.

Computing Security Considerations Require Team Effort

Cloud partners offer clear advantages over in-house data storage. Economies of scale allow a cloud service to invest in the latest security solutions, such as machine learning. As cloud solutions are scalable, your business can purchase what you need with the ability to upgrade at any time.

Now that you know what cloud security is, you have a better understanding of how service providers keep your big data safe.

Remember, a strong security policy should outline what strategies the service uses. You should ask questions to compare and ensure that you are protecting your critical business resources.



What Is Cloud Monitoring? Benefits and Best Practices

Cloud monitoring is a suite of tools and processes that reviews and monitors cloud computing resources for optimal workflow.

Manual or automated monitoring and management techniques ensure the availability and performance of websites, servers, applications, and other cloud infrastructure. Continual evaluation of resource levels, server response times, speed, and availability helps predict vulnerabilities and potential issues before they arise.

Cloud Monitoring Strategy As An Expansion of Infrastructure

Web servers and networks have continued to become more complicated, and companies found themselves needing a better way to monitor their resources.

Monitoring tools were developed to keep track of things like hard drive usage, switch and router efficiency, processor/RAM performance, and vulnerabilities. These are all useful metrics, but many of these management tools fall short of the needs of cloud computing.

Another similar toolset, often used by network administrators, is configuration management. This includes user controls like group policies and security protocols such as firewalls and two-factor authentication. These work from a preconfigured system, built on anticipated use and threats. When a problem occurs, however, they can be slow to respond: the issue must first be detected, the policy adjusted, and then the change implemented. The delay of manually logging and reviewing can bog this process down even further.

A cloud monitor uses the advantages of virtualization to overcome many of these challenges. Most cloud functions run as software in constructed virtual environments. Because of this, monitoring and management applications can be built into the fabric of that environment, including cloud resource management and security.


The Structure of Cloud Monitoring Solutions

Consider the growing range of as-a-service offerings: Software, Platform, and Infrastructure. Each of these services runs in a virtual server space in the cloud. For example, Security as a Service lives in a hosted cloud space in a data center, and users connect remotely over the internet. In the case of cloud platform services, an entire virtual server is created in the cloud. A virtual server might span several real-world servers and hard drives, yet it can host hundreds of individual virtual computers for users to connect to.

As these services exist in a secure environment, there is a layer of insulation between the real-world monitoring and cloud-based monitoring.

Just as a network monitoring application can be installed on a Local Area Network (LAN) to watch network traffic, monitoring software can be deployed within the cloud environment. Instead of examining hard drives or network switches, monitoring apps in the cloud track resources across multiple devices and locations.
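A minimal sketch of such an in-environment agent, assuming the third-party psutil package: it samples the same basic metrics a traditional infrastructure monitor would, but can be dropped onto every VM or instance in the environment.

```python
# Sample basic host metrics for a monitoring backend.
import psutil

def sample() -> dict:
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

print(sample())  # in practice, ship this to your monitoring backend
```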

One important feature of cloud server monitoring is that it provides more access and reporting ability than traditional infrastructure monitors.


Types of Cloud-Based Monitoring of Servers & Their Benefits

Website Monitoring: A website is a set of files stored on a computer, which in turn sends those files to other computers over a network.

The host can be a local computer on your network, or remotely hosted by a cloud services provider. Some of the essential metrics for website monitoring include traffic, availability, and resource usage. For managing a website as a business asset, other parameters include user experience, search availability, and time on page. There are several ways this monitoring can be implemented and acted on. A monitoring solution that tracks visitors might indicate that the “time on page” metric is low, suggesting a need for more useful content. A sudden spike in traffic could mean a cyber attack. Having this data available in real-time helps a business adjust its strategy to serve customer needs better.
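A basic availability probe needs nothing beyond the standard library. A minimal sketch, with a placeholder URL, that records the two core metrics named above, whether the site answered and how quickly:

```python
# Probe a website and report its HTTP status and response time.
import time
import urllib.error
import urllib.request

def probe(url: str) -> dict:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code   # the server answered, but with an error
    except OSError:
        status = None       # unreachable counts as downtime
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"url": url, "status": status, "response_ms": round(elapsed_ms, 1)}

print(probe("https://example.com"))
```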

Virtual Machine Monitoring: A virtual machine is a simulation of a computer within a computer. This is often scaled out in Infrastructure as a Service (IaaS), where a virtual server hosts several virtual desktops for users to connect to. A monitoring application can track users and traffic, as well as the infrastructure and the status of each machine. This offers the benefits of traditional IT infrastructure monitoring, with the added benefits of cloud monitoring solutions. From a management perspective, tracking employee productivity and resource allocation can be important metrics for virtual machines.

Database Monitoring: Many cloud applications rely on databases, such as the popular SQL Server database. In addition to the previous benefits, a database monitor can track queries and data integrity, and it can monitor connections to the database to show real-time usage data. Tracking database access requests can also help improve security. Resource usage and responsiveness, for example, can show whether there is a need for upgraded equipment. Even a simple uptime detector can be useful if your database has a history of instability: knowing the precise moment a database goes down improves resolution response time.
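A minimal sketch of such an uptime detector: a periodic TCP probe of the database port that timestamps the precise moment the service stops answering. The host and port are placeholders for your own database.

```python
# Log the exact moments a database port stops and resumes answering.
import socket
import time
from datetime import datetime

def watch(host: str, port: int, interval: int = 30) -> None:
    up = True
    while True:
        try:
            socket.create_connection((host, port), timeout=3).close()
            if not up:
                print(f"{datetime.now().isoformat()} database back up")
            up = True
        except OSError:
            if up:
                print(f"{datetime.now().isoformat()} database DOWN")
            up = False
        time.sleep(interval)

# watch("db.internal.example", 5432)  # e.g., a PostgreSQL instance
```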

Virtual Network: This technology creates software versions of network components such as routers, firewalls, and load balancers. Because they are defined in software, integrated monitoring tools can give you a wealth of data about their operation. For example, if one virtual router is continuously overwhelmed with traffic, the network can be adjusted to compensate. Instead of replacing hardware, the virtualized infrastructure adapts to optimize the flow of data. Monitoring tools can also analyze user behavior to detect and resolve intrusions or inefficiencies.

Cloud Storage:  Secure cloud storage combines multiple storage devices into a single virtual storage space.

Cloud storage monitoring tracks multiple analytics simultaneously. More than that, cloud storage is often used to host SaaS and IaaS solutions. In these applications, monitoring can be configured to track performance metrics, processes, users, databases, and available storage. This data is used to focus on features that users find helpful or to fix bugs that disrupt functionality.


Best Practices For Monitoring

Decide which metrics are most critical. There are many customizable cloud monitoring solutions, so take an inventory of the assets you are using and map out the data you would like to collect. This helps you make informed decisions about which cloud monitoring software best fits your needs, and it gives you an advantage when implementing a monitoring plan. For example, an application developer might want to know which features are used the most, or the least. As they update, they may scrap features that are not popular in favor of features that are, or use application performance monitoring to make sure users have a good experience.

Automate the monitoring. One compelling feature is scripting: monitoring and reporting can be scripted to run automatically. Since cloud functions are virtual, it is easy to build software monitoring into the fabric of the cloud application. Even logging and red-flag events can be automated, sending a notice when problems are detected. For example, an email notification might be sent if unauthorized access is detected or if resource usage exceeds a threshold.
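A minimal sketch of that kind of red-flag automation, assuming the psutil package and a reachable SMTP relay (the addresses are placeholders): when a disk-usage threshold is crossed, a notice goes out with no human in the loop.

```python
# Email an alert when disk usage crosses a threshold.
import smtplib
from email.message import EmailMessage

import psutil

THRESHOLD_PERCENT = 90

def check_and_alert() -> None:
    usage = psutil.disk_usage("/").percent
    if usage > THRESHOLD_PERCENT:
        msg = EmailMessage()
        msg["Subject"] = f"ALERT: disk usage at {usage:.0f}%"
        msg["From"] = "monitor@example.com"
        msg["To"] = "ops@example.com"
        msg.set_content("Resource usage exceeded the configured threshold.")
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

check_and_alert()  # schedule via cron or a monitoring loop
```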

Consider the security of cloud-based applications. Many users believe that their data is less secure on a remote cloud server than on a local device. While it is true that data centers present a tempting target for hackers, they also have better resources: modern data centers invest in top-tier security technology and personnel, which offers a significant advantage over end users. With that said, it is still crucial for cloud users to be mindful of cloud security.

While data centers offer protection for the hardware and infrastructure, it is important to exercise good end-user security habits. Proper data security protocols like two-factor authentication and strong firewalls are a good start. Monitoring can supplement that first line of defense by tracking usage within the virtual space. This helps detect vulnerabilities by reporting habits that might create security gaps. It also helps by recognizing unusual behavior patterns, which can identify and resolve a data breach.


Final Thoughts: Cloud Based Monitoring

Given the virtual nature of cloud computing, the infrastructure for cloud monitoring applications is already in place. For a reasonable up-front investment of time and money, monitoring applications can deliver a wealth of actionable data. This data gives businesses insight into which digital strategies are more effective than others, and it can identify costly and ineffective services as well.

It is worth looking at application monitoring to report on how your cloud resources are being used. There may be room for improvement.



SECaaS: Why Security as a Service is a Trend To Watch

Your company is facing new cybersecurity threats daily. Learn how Security as a Service (SECaaS) efficiently protects your business.

The cybersecurity threat landscape is rapidly expanding. Technology professionals are fending off attacks from all directions.

The lack of security expertise in many organizations is a challenge that is not going away anytime soon.

CIOs and CSOs have quickly realized that creating custom solutions is often too slow and expensive.

They now realize that managed security service providers, or MSSPs, are the best way to maintain protection. Software as a Service (SaaS) is becoming a more comfortable concept for many technology professionals.

What is Security as a Service?

SECaaS is a way to outsource complex security needs to experts in the field while allowing internal IT and security teams to focus on core business competencies.

Not long ago, security was considered a specialization that needed to stay in-house. Most technology professionals spent only a small portion of their time ensuring that backups always ran, the perimeter was secure, and firewalls were in place. There was a relatively black-and-white view of security, with a more inward focus. Antivirus software offers only basic protection; it is not enough to secure against today's threats.

Fast forward to today, where risks are mounting from all directions. Data assets spend a significant portion of their life in transit, both within and outside the organization. New software platforms are being introduced on a weekly, if not daily, timeline at many organizations. It is more difficult than ever to maintain a secure perimeter and accessible data while staying competitive and agile.


Threat Protection from All Sides

Today's business users are savvier about accessing secure information. Yet many are less aware of the ways they could be opening their networks to external attacks.

This causes a nightmare for system administrators and security professionals alike as they attempt to batten down the hatches and keep their information truly secure. Advanced threats from external actors, who launch malware and direct attacks at a rate of thousands per day, are a constant challenge.

The drive towards accessibility of data and platforms at all times causes a constant tension between business users and technology teams. Security technologists seek to lock down internal networks at the same time users are clamoring for the ability to bring their own device to work.

There is a significant shift in today’s workforce towards the ability to work whenever and wherever the individual happens to be.

This makes it crucial that technology teams can provide a great user experience without placing too many hurdles in the way of productivity.

When business users hit an obstacle, they are likely to come up with an unacceptable workaround that is less secure than the CSO would like. Account requirements too prohibitive?

No problem. Users will simply share their usernames and passwords with internal and external parties, providing easy access to confidential information. And these are only the internal threats. External forces are constantly banging on your digital doors, looking for a point of weakness they can exploit.

Cybercriminals are active throughout the world, and no business is immune to this threat. Damage from cybercrime is set to exceed $6 trillion annually by 2021, doubling the impact from just 2015.

The amount of wealth changing hands due to cybercrime is astronomical. This is a heavy incentive both for businesses to become more secure and for criminals to continue their activity. Spending on cybersecurity is also rising at a rapid rate and is expected to continue that trend for quite some time. However, businesses are struggling to find or train individuals with the wide spectrum of skills required to combat cyberterrorism.


Benefits of Security as a Service

SECaaS has a variety of benefits for today's businesses, including providing a full suite of managed cloud computing services.

Staffing shortages in information security fields are beginning to hit critical levels.

Mid-size and smaller businesses are unlikely to have the budget to hire these professionals, and IT leaders anticipate that this issue will get worse before it improves. Technology budgets are feeling the strain, yet businesses need to innovate to stay abreast of the competition.

The costs involved with maintaining, updating, patching, and installing software are very high, and there are additional requirements to scale platforms and secure data storage on demand. These are all areas where cloud-based security provides a measure of relief for strained IT departments.

Managed cloud SECaaS businesses have the luxury of investing in the best the industry has to offer, from platforms to professionals. Subscribers gain access to a squad of highly trained security experts using the best tools available on the market today and tomorrow. These security as a service providers are often able to deploy new technology more rapidly and securely than a single organization could.

Automating Manual Tasks

Having someone continually review your business logs to ensure software and data are still secure is probably not a good use of time. However, SECaaS platforms can monitor your entire employee base while also balancing endpoint management.

Results are delivered back in real time, with automated alerts triggered when unusual activity is logged. Completing these tasks automatically allows trained technology professionals to focus on efforts that move the business forward, while much of the protection happens behind the scenes. Benchmarking, contextual analytics, and cognitive insights give workers quick access to items that may be questionable, letting work proceed without the drudgery behind the scenes.

Reducing Complexity Levels

Does your information technology team have at least a day each week to study updates and apply patches to your systems? If not, your business may be a prime candidate for security as a service.

It is becoming nearly impossible for any IT team to stay updated on all platforms, see how their security needs interact with the other platforms in use, and then apply the appropriate patches. Many organizations require layers of protection due to the storage of personally identifiable information (PII), which adds to the level of complexity.

Protecting Against New Threats

Cybercriminals are always looking for new ways to attack a large number of systems at once. Global ransomware damage costs are in the billions of dollars, and an attack will occur approximately every 14 seconds by 2020.

Industry insiders such as Warren Buffett state that cyber attacks are the worst problem faced by humankind, even worse than nuclear weapons. The upfront cost of paying a ransom is only the tip of the iceberg when it comes to the damage caused. Businesses are finding hundreds of thousands of dollars in direct and indirect costs associated with regaining access to their information and software.


Examples of Security as a Service Providers Offerings

Traditional managed providers are enhancing their security offerings to include incident management, mobile and endpoint management, web and network security threats, and more.

SECaaS is a sub-category of SaaS and continues to be of interest to businesses of all sizes as complexity levels rise.

Today’s security as a service vendors go beyond the traditional central management console and include:

  • Security analysis: Review current industry standards and audit whether your organization is in compliance.
  • Performance balancing with cloud monitoring tools: Guard against a situation where a particular application or data pathway is unbalancing the infrastructure.
  • Email monitoring: Security tools to detect and block malicious emails, including spam and malware.
  • Data encryption: Your data in transit is much more secure with the addition of cryptographic ciphers (see the sketch after this list).
  • Web security: Web application firewall management that monitors and blocks threats from the web in real time.
  • Business continuity: Effective management of short-term outages with minimal impact to customers and users.
  • Disaster recovery: Multiple redundancies and regional backups offer a quick path to resuming operations in the event of a disaster.
  • Data loss prevention: DLP best practices include tracking and review of data that is in transit or in storage, with additional tools to verify data security.
  • Access and identity management: Everything from password to user management and verification tools.
  • Intrusion Management: Fast notifications of unauthorized access, using machine learning and pattern recognition for detection.
  • Compliance: Knowledge of your specific industry and how to manage compliance issues.
  • Security Information and Event Management (SIEM): Log and event information is aggregated and shown in an actionable format.
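As a concrete illustration of the data encryption item above, here is a minimal sketch using the third-party cryptography package's Fernet recipe for symmetric authenticated encryption. How the two ends share the key is out of scope here.

```python
# Encrypt data so it can cross an untrusted network safely.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret between sender and receiver
cipher = Fernet(key)

token = cipher.encrypt(b"customer record: Jane Doe, card ending 4242")
print(token)                  # safe to transmit; tampered tokens fail to decrypt
print(cipher.decrypt(token))  # the original bytes, authenticated and decrypted
```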

While offerings from security as a service companies may differ, these are some of the critical needs for external security management platforms.

Once you have a firm grasp of what can be offered, here’s how you can evaluate vendor partners based on the unique needs of your business.


Evaluating SECaaS Providers

Security has come to the forefront as businesses continue to rely on partners for activities ranging from infrastructure support to data networks. This shift in how organizations view information risk makes it challenging to evaluate whether a potential cloud computing solution is a fit.

The total cost of ownership (TCO) for working with a SECaaS partner should represent significant savings for your organization. This is especially important when you balance against performing these activities internally. Evaluate total costs by looking at the expense of hiring information security professionals, building adequate infrastructure and reporting dashboards for monitoring. Be sure you fully disclose items such as total web traffic, the number of domains and data sources and other key metrics when requesting estimates.

The level of support provided, guaranteed uptime, and SLAs are also essential statistics. Your vendor should be able to give you detailed information on the speed of disaster recovery, along with how quickly infiltrations are identified and issues resolved. A disaster is the least likely scenario, so also review the time needed to address simple problems, such as a user who is locked out of their account or adding a new individual to your network. A full security program will allow your managed service provider to pinpoint problems quickly.

It is critical that the solution you select works with the business systems already in use. Secure cloud solutions are often easier to transition between than on-premise options. It is better to work with a single vendor for as many cloud services as possible: this allows for bundled pricing and can enhance how well software packages work together.

Your team can monitor system health and data protection with real-time dashboards and reporting. This is valuable whether or not a vendor is also overseeing the threat detection process: you improve the internal comfort level of your team while giving ready access to the individuals most familiar with the systems. This availability of data will keep everything working smoothly. Be sure that your vendor understands how to provide actionable insight and makes recommendations for improving your web security. Access is always a concern.

Evaluating core IT security strategy factors helps keep your organization's goals aligned. A proactive SECaaS vendor-partner adds value to the business by providing DDoS protection, risk management, and more.

Security Challenges for Today's CIOs and CSOs Are Real

Hackers target businesses of all sizes for ransomware and phishing attacks. Staying vigilant is no longer enough.

Today’s sophisticated environment requires proactive action taken regularly with the addition of advanced activity monitoring. Keeping all of this expertise in-house can be overly expensive. The costs involved with creating quality audits and control processes can also be quite high.

Security in the cloud offers the best of both worlds.

Learn more about our security as a service.  Request a free initial consultation with the experts at PhoenixNAP.


Data Center Tier Classification Levels Explained (Tier 1, 2, 3, 4)

When it comes to web hosting, your choice of data center type is as important as your choice of server.

The right server in the wrong location means lousy performance. Fortunately, there is a system in place for your business to make the most informed decision.

You might think to yourself that all data centers must be alike, save for a few localized differences or independent security measures. You would be quite far from the truth in this assumption. Choosing a data center solution that is right for your business is much more straightforward once you understand the concept of “Tiers.”

Tiers, or levels, are ways to differentiate the requirements of each type of data center operator, with a focus on redundant components, critical load distribution paths, cooling, and many other specifications. As it stands now, there are four tiers, and as you would expect, they are defined precisely.

The Global Data Center Authority’s “Uptime Institute” is responsible for the proprietary “Tier Standard System.”

Uptime is the most critical metric in web hosting, though not the only one. The rating system defines a benchmark for the data center industry.

Most experts agree that the standardized system has been well received. Here we take a look at the tiers from levels 1 through 4 (often displayed with Roman Numerals as I through IV). We will also discuss what to look for when examining data center power and infrastructure for your business.


What Are Data Center Tier Ratings?

The classification levels of data centers represent a certification of design. A tier is another way of saying “level of service.”

The 4 tiers of data centers are:

  • Tier 1 Data Center
  • Tier 2 Data Center
  • Tier 3 Data Center
  • Tier 4 Data Center

The Uptime Institute does not disclose exactly how it defines its tiers, though the most important metrics are made public. These include redundant electrical paths for power, uptime guarantees, cooling capacity, and concurrent maintainability, to name a few.

Background of Data Center Tiers & Levels

The Telecommunications Industry Association (TIA) created the first set of standards for data centers in 2005.

The Uptime Institute standard was formed separately from the TIA standard. The Institute also differs from the TIA in its specialization in data centers, whereas TIA standards can apply to many different aspects of the IT industry.

The Uptime Institute most recently revised its certification process in July of 2015.

It was discovered that data centers without official rankings were claiming the Institute had certified them. Much of the controversy involved the Tier III and Tier IV rankings.

Design elements still make a difference but are not as heavily weighted. Any classification that was based solely on design is now no longer listed on the Uptime Institute website.

The percentages for each metric remain a secret of the Institute.

The Uptime Institute Chief Operating Officer addressed “Efficient IT” in a press release. The release stated that day-to-day operations for a data center now count towards rankings.

The Institute has also created an “Efficient IT Stamp of Approval” for data centers that produce efficient outcomes.

There are two levels of Efficient IT certification:

    • Approved Status – Data centers that achieve this status are already in compliance with previous Uptime Institute standards. The stamp of approval continues for two years. After the certification expires, the center must be re-evaluated to receive another two-year accreditation.
    • Activated Status – Activation means that the Institute has observed a data center moving towards higher efficiency. The Activated status is only good for a year. If a data center has not achieved efficiency excellence, it may still be awarded Activated status upon a new evaluation.

Data Center Tiers 1, 2, 3, 4 Explained

A tier 1 data center can be little more than a powered warehouse; they are not required to be very sophisticated. On the other end of the spectrum is a tier 4 data center, which gives its clients a guarantee of uptime and 2N (two times the amount required for operation) redundancy for cooling, power, and infrastructure. These standards will protect most companies. Level IV clients usually never even hear about issues at the data center thanks to these redundancies. These standards show just how reliable top-tier systems are.

Tier 2 colocation data centers are more robust than Tier I centers, but Tier II does not have the sophisticated performance hardware of higher tiers. For instance, level III and IV data centers require dual power inputs; level II does not. Level II gives clients a customizable balance between cost management and performance.

A tier 3 data center can perform repairs without any notable service disruption. Another way to define a level III provider is that they offer an N+1 (the amount required for operation plus a backup) availability for clients. As with any technology product, unplanned maintenance may still cause a problem in a level III provider. In short, level III is even tolerant of some faults.

Tier 4 data centers are considered "fault tolerant." Unplanned maintenance does not stop the flow of data to a Tier IV data center; day-to-day operations continue regardless of any support taking place.

As you would expect, each tier has the characteristics of the levels below them. A Tier II provider, for example, will always be more reliable than a Tier I.

Availability According To Data Center Tiers

Availability guarantees for the underlying hardware break down by tier as follows (the sketch after this list shows how these percentages translate into annual downtime):

  • Tier 1 – 99.671% guaranteed availability
  • Tier 2 – 99.741% guaranteed availability
  • Tier 3 – 99.982% guaranteed availability
  • Tier 4 – 99.995% guaranteed availability
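These percentages map directly onto maximum annual downtime, which is where the hour figures quoted in the tier descriptions below come from. A quick back-of-the-envelope check:

```python
# Convert guaranteed availability into maximum downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for tier, availability in {1: 99.671, 2: 99.741, 3: 99.982, 4: 99.995}.items():
    downtime_min = (100 - availability) / 100 * MINUTES_PER_YEAR
    print(f"Tier {tier}: up to {downtime_min / 60:.1f} hours "
          f"({downtime_min:.0f} minutes) of downtime per year")
```

Tier 1 works out to about 28.8 hours a year, Tier 3 to about 1.6 hours, and Tier 4 to roughly 26 minutes, matching the figures in the sections that follow.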


What is a Tier 4 Data Center?

To be defined as Tier 4, a data center must adhere to the following:

    • Zero single points of failure. Tier IV providers have redundancies for every process and data protection stream. No single outage or error can shut down the system.
    • 99.995% uptime per annum. This is the level with the highest guaranteed uptime. It must be maintained for a center to keep its Tier IV ranking.
    • 2N+1 infrastructure (two times the amount required for operation plus a backup). 2N+1 is another way of saying “fully redundant.”
    • No more than 26.3 minutes of downtime per annum as a maximum figure. Providers must allow for some downtime for optimized mechanical operations; however, this annual downtime does not affect customer-facing operations.
    • 96-hour power outage protection. A level IV infrastructure must have at least 96 hours of independent power to qualify at this tier. This power must not be connected to any outside source and is entirely proprietary. Some centers may have more.

Tier IV is considered an enterprise-level service. Companies without international reach and consistently high web traffic do not usually require Tier IV facilities. Tier IV has approximately twice the site infrastructure of a Tier III location.

If you need to host mission-critical servers, this is the level to use. Tier IV data centers ensure the safety of your business regardless of any mechanical failures. You will have backup systems for cooling, power, data storage, and network links. Data Center Security is compartmentalized with biometric access controls. Full fault tolerance keeps any problems from ever slowing down your business. This is true even if you host less critical servers in other tier levels.

This tier also ensures optimized efficiency. Your servers are housed in the most physically advantageous locations. This drastically extends the life of your hardware. If the temperature and humidity are kept consistent, you gain a great deal of efficiency. Even the backups and dual power sources are treated like primaries. You experience no downtime if you have to use one of these protections unexpectedly.

Of course, Tier IV colocation is also the most expensive choice. This is why this level is dominated by international brands with consistently high levels of traffic or processing demands.

What is a Tier 3 Data Center?

To be defined as Tier 3, a data center must adhere to the following:

    • N+1 (the amount required for operation plus a backup) fault tolerance. A Tier III provider can undergo routine maintenance without a hiccup in operations. Unplanned maintenance and emergencies may cause problems that affect the system. Problems may potentially affect customer-facing operations.
    • 72 hours of protection from power outages. This provider must have at least three days of exclusive power. This power cannot connect to any outside source.
    • No more than 1.6 hours of downtime per annum. This downtime is allowed for purposes of maintenance and overwhelming emergency issues.
    • 99.982% uptime. This is the minimum amount of uptime that a level 3 provider can produce. The redundancies help to protect this number even if a system suffers unexpected issues.

Companies using Tier III providers are often growing companies or businesses larger than the average SMB (small to medium business). Most data center companies ranked by the Uptime Institute hold a level III ranking.

Tier III gives you most of the features of a Tier IV infrastructure without some of the elite protections. For instance, you gain the advantage of dual power sources and redundant cooling. Your network streams are fully backed up. If your business does not need to compete on an international level against elite brands, this is a highly competitive tier.

Are you concerned with efficiency?

Level III should be the lowest you go. Guaranteed uptime is slightly less than Tier IV, and the system is not entirely fault tolerant. If you do not expect to be targeted by malicious hackers or competitors, you may not need to move any higher than level III.

Tier III is also less expensive than IV. You may choose this tier due to budget constraints with a plan to expand into a higher level later.

What is a Tier 2 Data Center?

To be defined as Tier 2, a data center must adhere to the following:

    • No more than 22 hours of downtime per annum. There is a considerable jump between levels II and III regarding downtime. Redundancy is one of the primary reasons for this.
    • 99.741% uptime per annum. This is the minimum amount of uptime that this provider can produce in a year.
    • Partial cooling and multiple power redundancies. A Tier II provider does not enjoy redundancy in all areas of operation. The most critical aspects of its mechanical structure receive priority. These two aspects are power and cooling distribution. Redundancy in these areas is only partial. No part of the system is fault tolerant.

Tier II data centers are often targeted at SMB-sized business clients. There are more guarantees of efficiency than in a level I system, and Tier II providers are also able to handle more clients.

Small business servers typically use this level. There is a massive decline in features from level III to level II; the utility is fundamentally different. If your business prioritizes redundant capacity components, then you may want to look at this level of infrastructure.

Companies with small-business levels of web traffic are best suited for this tier. It is significantly less expensive than Tier III in most cases.

What is a Tier 1 Data Center?

To be defined as Data Center Tier 1, a data center must adhere to the following:

    • No more than 28.8 hours of downtime per annum. These facilities are allowed the highest amount of downtime of any level.
    • Zero redundancy. This level of facility does not have redundancy in any part of its operations, and carries no redundancy guarantees for power or cooling within the certification process.
    • 99.671% uptime per annum. This is the lowest amount of uptime that a facility graded by the Uptime Institute can produce.

If you are a small business, then Tier I may be your ideal solution, as you are presumably looking for a cost-effective option. These centers do not have many of the features that larger centers have, although they may include a generator or a backup cooling system.

Tier I infrastructure is designed for startup companies that need a colocation data center. This is the most budget-conscious option for a business. Your infrastructure consists of a single uplink, a single path for power, and non-redundant servers.

Be sure that your location managers are dedicated to physical security before committing to a Tier I facility. You may also want to check the temperature and humidity of the building. A building that is appropriately maintained can avert many mechanical problems; this is especially true as facilities age. If you plan on staying in this tier for an extended time, this is an essential check.


Data Center Classification Standards: Choosing the Right Tier

Data centers are not required to receive a Tier Classification System Ranking to do business.

Having a specific tier ranking helps legitimize a center's services, but it is not strictly required. Of the centers that have an official classification, the majority are considered enterprise-level facilities.

When searching for a data center, make sure that any ranking you see comes directly from the Uptime Institute. Many companies use Uptime Institute ranking standards for their internal standardizations. However, this does not mean that the Institute has vetted them personally.

Definitions may even be "interpreted" in some cases, though this is likely a rarity. It is best to research thoroughly when choosing a data center and to validate all accredited certifications.

Earning an official ranking from the Uptime Institute is difficult. There is no guarantee that an investment in a center will warrant a specific classification. This is especially important to consider between the Tier III and Tier IV ranking. The investment in building out a Tier IV level facility is quite substantial. Tier III centers are often much cheaper to build and maintain.

That said, the clientele that requires a Tier IV facility will also have the budget to sustain residence. Just remember not to rely entirely on classification, as the system is ultimately a pay-to-play certification.

Next, read our Data Center Migration Checklist with the best practices before making the move!

PhoenixNAP is an industry-leading global services company. Our flagship facility, the connectivity hub of the Southwest in North America, meets or beats the requirements of a Tier 3+ rated facility, with all systems greater than N+1 and concurrently maintainable.

Although we have not engaged the Uptime Institute to certify our design, we welcome our customers and partners to put us to the test!


Choose the Best Cloud Service Provider: 12 Things to Know!

Choosing a cloud service provider is without question more involved than choosing the first result from a Google search. Each business has different requirements, customizations, and financial responsibilities.

It is crucial that the chosen service meet and exceed the business's expectations.

That said, what does a business need to look for when searching for a cloud service?

The use of cloud and cloud services differs from one client to the next. The right vendor will always vary, though there are common categories that help narrow down what your business requires.

Below is a handy guide to help you navigate the plethora of options available to you and your business within the cloud server hosting industry.


Is the Cloud Server User Interface Actually Usable?

The user interface does not often receive the attention it should. An efficient and user-friendly interface goes a long way toward letting more people work on server-based tasks.

In the past, these seemingly trivial actions could require a full IT department to manage.

AWS Direct Connect, for example, uses a somewhat cumbersome user interface. This could make it difficult for a business to perform menial tasks without a dedicated IT team. The point of a simple and effective UI is that you, as a company, need to access your data at all times.

A business needs to be able to access its internal and client data from anywhere; that is the beauty of the cloud. Having a simple UI allows for access from virtually anywhere, at all times, from varying devices just by logging in through the service provider’s client portal. Since it is web-based, using a smartphone, a laptop, or a tablet should not pose a problem.

How Does a Service Level Agreement (SLA) Work?

Cloud service agreements can often appear complicated, and they are not helped by a lack of industry standards for how they are constructed or defined. For SLAs in particular, many providers turn what could be a simple, straightforward agreement into unnecessarily complicated or, worse, deliberately misleading language.

Having the technical proficiency and knowledge of terms can help decipher much of the complicated information, though it is often more reasonable to partner with a provider that offers transparency.

Most SLAs are split into two groups. The first is a conventional set of terms and conditions. This is a standard document provided to every applicant with the service provider. These types of forms are usually available online and agreed upon digitally.

The next part of the agreement is a customized contract between the client and vendor. Willingness to offer specific customization depends on each provider and should be part of the decision-making process of choosing the ideal solution.

The majority of these customizable SLAs are for large, enterprise contracts. There are times when a smaller business may attempt to negotiate exclusive agreements and built-in provisions within its contract.

Regularly challenge service providers that appear prepared to offer flexible terms. Ask them to provide details on how they plan to support any requested customization, who is responsible for the modification, and what steps are in place to administer it. Always remember the main components to cover in an SLA: service level objectives, remediation policies and penalties/incentives related to those objectives, and any exclusions or caveats.

SLA with a data center

Documentation, Provisioning and Account Set-Up

Service level agreements and related contracts are often broken down into four points of interest, with additional sub-sections as needed for customization. These four points are legal protections, service delivery, business terms, and data assurance.

The legal protections portion of the SLA should cover limitation of liability, warranties, indemnification, and intellectual property rights. Customers are often wary of offering up too much information, to avoid any potential exposure in the event of a breach, while the vendor will want to limit its liability in the event of a claim.

Service delivery often varies depending on the size of the cloud computing service provider. The rule of thumb is to always look for a precise definition of all services and deliverables. Make sure you are crystal clear on all of the service company’s responsibilities relating to their offered services (service management, provisioning, delivery, monitoring, support, escalations, etc.).

The business terms will include points around publicity, insurance policies, specific business policies, operational reviews, fees, and commercial terms.

Within the business terms, specifics with regards to the contract need to include how, or to what extent, the service provider can unilaterally change the terms of the service or contract.

To prevent abrupt increases in billing, it is crucial that the SLA be adhered to, without changes, during the course of the agreed-upon term.

The last point of emphasis is data policies and protection. The data assurance portion of an SLA will include detailed information covering data management, data conversion, data security, and ownership and use rights. It is essential to think long-term with any cloud storage provider and review data conversion policies to understand how transferable data may be if you decide to leave.

Reliability and Performance Metrics To Look For

There are numerous techniques for measuring the reliability of cloud server solutions.

First, validate the performance of the cloud infrastructure provider against their SLAs for the last 6-12 months. Often, a service provider will publish this information publicly; others should supply it if asked.

Here’s the thing though: don’t expect complete perfection. Server downtime is to be expected, and no solution will have a perfect record.

For more information on acceptable levels of downtime, research the differences between Data Center Tiers 1, 2, 3 and 4. What’s valuable about these reports is how the company responded to the downtime. Also, verify that all of the monitoring and reporting tools work with your existing management and reporting systems.
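To put published availability figures in perspective, it helps to translate an SLA percentage into the downtime it permits. Below is a minimal Python sketch; the tier percentages are commonly cited uptime targets, so confirm the exact numbers in your provider’s SLA.

```python
# Minimal sketch: translate an SLA availability percentage into the
# downtime it permits per year. Tier percentages are commonly cited
# uptime targets; verify them against your provider's actual SLA.
HOURS_PER_YEAR = 24 * 365

tiers = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

for tier, availability in tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{tier}: {availability}% uptime allows "
          f"~{downtime_hours:.1f} hours of downtime per year")
```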

Accurate and detailed reporting on the reliability of the network provides a valuable window into what to expect when working with the service providers.

Confirm with the provider that they have an established, documented, and proven process for handling any planned and unplanned downtime. They should have documentation and procedures in place on their communication practices with customers during an outage. It is best that these processes include timelines, threat level assessments, and overall prioritization.

Ensure you have documentation covering all remedies and liability limitations offered when issues arise.

Is Disaster Recovery Important?

Beyond network reliability, a client needs to consider each vendor’s cloud disaster recovery protocols.

These days, data centers work to build their facilities in locations as free from natural disasters as possible, mitigating that risk wherever they can. However, problems can still arise, and expectations have to be set in case something goes wrong. These expectations can include backup and redundancy protocols, restoration, data source scheduling, and integrity checks, to name a few.

The disaster recovery protocol also needs to define which roles both the client and the service provider are responsible for. All roles and responsibilities for any escalation process must be documented, as your company may be the one to implement some of these processes.

Additional risk insurance is always a smart idea to help cover the potential costs associated with data recovery (when aspects of recovery fall under the jurisdiction of the client).

data security with a lock

What Should I Know About Data Security?

Protecting data preserves a business and its clients from data theft. Securing data in the cloud affects both the direct client and those the client conducts business with.

Validate the cloud vendor’s different levels of systems and data security measures. Also, look into the capabilities of the security operations and security governance processes. The provider’s security controls should support your policies and procedures for security.

It is always a smart option to ask for the provider’s internal security audit reports, as well as incident reports and evidence of remedial actions for any issues that may have occurred.

Network Infrastructure and Data Center Location

The location of a service provider’s data center is crucial, as it dramatically affects cost, latency, and risk.

As mentioned previously, having a location where natural disasters are rare is always desirable. These areas are often remote enough that the cost of services can be lower than in a robust urban area.

Location of the data center also affects network latency. The closer a business location to the data center, the lower the latency and the faster data reaches the client. Therefore, a company based in Los Angeles will receive its data from a Phoenix-based data center faster than a data center located in Amsterdam.

For businesses that require more of a global presence, utilizing data centers around the world for distribution and redundancy is always an option. When looking for your ideal provider, it is worth inquiring how many locations globally they offer.

a woman providing tech support

What If I Need Tech Support?

Tech support can be the bane of your existence and the cause of insurmountable frustration if it is not handled well by the provider. Making sure that the provider you choose has reliable, actionable, and efficient support is essential.

If an issue arises, the longer a problem festers, the higher the risk of security threats or worse: a damaged reputation. Clients may become frustrated with a business if they are unable to access their accounts or contact the company. This could wreak havoc on many levels if issues are not resolved quickly.

Some data centers and service providers offer resources tailored to technical problems that can go as far as to include 24/7 on-call service.

Tech support is a vital part of the selection process for a CSP. You want to feel at ease with your data and services, and a reliable support system is critical.

Business Health of Service Provider

Technical, logistical, and security considerations aside, it is essential to take a look at the operational capabilities of cloud service providers. It is crucial to research your final CSP options’ financial health, reputation, and overall profile.

It is necessary to perform due diligence to validate that the service provider is in a healthy financial position and responsible enough to maintain business through the agreed-upon service term. If the provider were to run into financial troubles during your term, it could cause unrecoverable damage to your company.

Investigate whether the provider has any past or current legal problems by researching the company and requesting information directly. Asking about potential changes within the corporate structure (such as acquisitions or mergers) is another helpful line of inquiry. Remember, this does not have to be a doom-and-gloom conversation; an acquisition could benefit the services and support you are offered down the line.

The background of major decision-makers within the cloud computing providers can be a useful roadmap for identifying trends and future potential issues.

Certifications and Standards

When searching for a cloud service provider, it’s always wise to validate the current best practices and technological understanding that they represent.

One way to do this is to see what certifications a provider has earned and how often they renew them. This shows not only how up-to-date and detail-oriented a provider is, but also how in tune with industry standards they are. While these criteria may not determine which service provider you choose, they can be beneficial in shortlisting potential suppliers.

There are many different standards and certifications that a service provider can acquire. It depends entirely on the organization, the levels of security, the other clientele a vendor works with, and numerous other conditions. Some standards bodies to become familiar with in your search are the DMTF, ETSI, ISO, Open Grid Forum, GICTF, SNIA, Open Cloud Consortium, Cloud Standards Customer Council, NIST, OASIS, IEEE, and IETF.

More than just a lengthy repertoire of certifications, you want to keep an eye out for structured processes, strong knowledge management, effective data control, and service status visibility. It is also worth researching how the provider intends to stay current with new certifications and expanding technology.

Cloud security standards exist as a separate facet, and certifications are awarded by different organizations. The primary security criteria include CSA, SSAE, PCI, ISO/IEC, COBIT, HIPAA, Cyber Essentials, ISAE 3402, and GDPR.

Operational standards are a third category to consider when seeking out certifications. These include ISO, ITIL, IFPUG, CIF, DMTF, COBIT, TOGAF 9, MOF, TM Forum, and FitSM.

Cloud and secure data storage providers should be proud of their earned certifications and display them on their websites. If certification badges are not present, it is easy enough to inquire about current certifications.

Service Dependencies and Partnerships

Service providers often rely on different vendors for hardware and services. It is necessary to consider the various vendors and how each impacts a company’s cloud and data server experience.

Validating the provider’s relationships with vendors is essential. Keeping an eye on their accreditation levels and technical capabilities is also a useful practice.

Think about whether the services of these vendors fit into a broader ecosystem of other services that might complement or support them. Some vendors may connect more easily with IaaS, SaaS, or PaaS cloud services. There could be some overlap or pre-configured services that your business could see as a benefit.

Knowing the partnerships a provider has, and whether it uses one or several of the three cloud service models, is helpful. It illustrates whether the service partner is the best fit for the ultimate goals of the business.

cloud service provider migrations to a new one

IT Cloud Services Migration Support and Exit Planning

When searching for your ideal partner, take care to look at the long-term strategy.

In the event you ever decide to move your services, or you grow too large for a provider’s capabilities, the last thing you want to run into is a scenario called vendor lock-in. This is a situation in which you, as a customer using a product or service, cannot easily transition to a competitor. It often arises when a provider uses proprietary technologies that end up being incompatible with other providers’ platforms.

There are specific terms to keep an eye out for when comparing cloud providers and data centers. Some examples of vendor lock-in technology include:

    • CSP compatible application architecture
    • Proprietary secure cloud management tools
    • Customized geographic diversity
    • Proprietary cloud APIs
    • Personalized cloud Web services (e.g., Database)
    • Premium configurations
    • Custom configurations
    • Data controls and applications access
    • Secure data formats (not standardized)
    • Service density with one provider

Choosing a provider that offers standard services, without relying on tailor-made systems, will reduce long-term pain points and help you avoid vendor lock-in.

Always remember to have a clear cloud migration strategy planned out, even before the beginning of your relationship. Transitioning to a new provider is not always smooth or seamless, so it is worth finding out about their processes before signing a contract.

Furthermore, consider how you will access your data, what state it will be in, and for how long the provider will keep it after you have moved on.

Takeaways On Cloud-Based Computing Vendors

Deciding between business cloud server providers seems like a daunting task.

With the right strategy and talking points, a business can find the right solution for a service provider in no time.

Remember to think long-term to avoid any potential for vendor lock-in. Avoid the use of proprietary technologies and build a defined exit strategy to prevent any possible headaches down the line.

Spend the time necessary to build workable and executable SLAs with contractual terms. A detailed SLA is the primary form of assurance you have that the services will be delivered and supported as agreed.

With the right research and continued vigilance about what your business requires, finding the perfect solution is possible for everyone.


cloud hosting vs dedicated comparison

Cloud vs Dedicated Server: Which Is Best For Your Business?

Your business has a website. Your company might, in fact, be that website. That site needs to be hosted somewhere that has reliable uptime, doesn’t cost a fortune, and loads lightning fast.

Picking the perfect web host has far-reaching implications for a business. One constant does remain: every company needs a website and a fast server to host it on.

Even a one-second difference in page response can cost a company 7% of its customers.

In July 2018, Google will be updating its algorithm to include page speed as a ranking factor. Consider the implications if consumers are leaving your pages due to load time and your rankings are suffering as a result.

Load time is just one of many examples of the importance of web hosting and its impact on the company bottom line. The web host a company chooses is vitally important.

To understand the importance of web hosting servers, let’s break down the differences between the two major types of service offered: cloud hosting and dedicated servers.

Both have their advantages and disadvantages that may become more relevant to a company that is on a budget, facing time constraints or looking to expand. Here are the definitions and differences that you need to know.

The Cloud Ecosystem

The cloud is a technology that allows a virtually unlimited number of servers to act as a single entity. When information is stored “in the cloud,” it is held in a virtual space that can draw resources from different physical hubs strategically located around the world.

These hubs are actual servers, often in data center facilities, that connect through their ability to share resources in virtual space. This is the cloud.

Cloud servers use clustered filesystems such as Ceph or a large Storage Area Network (SAN) to allocate storage resources. Hosted and virtual machine data are accommodated through decentralization. In the event of a failure, this environment can easily migrate its state.

A hypervisor is also installed to handle how different sizes of cloud servers are partitioned. It also manages the allocation of physical resources to each cloud server including processor cores, RAM and storage space.
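As a rough illustration of that partitioning, the sketch below models a physical host’s capacity and carves out cloud server allocations from it. Everything here is hypothetical; real hypervisors schedule and oversubscribe resources in far more sophisticated ways.

```python
# Hypothetical sketch of hypervisor-style partitioning: a physical
# host's resources are divided among virtual cloud servers.
from dataclasses import dataclass

@dataclass
class Host:
    cores: int
    ram_gb: int
    storage_gb: int

    def allocate(self, cores: int, ram_gb: int, storage_gb: int) -> dict:
        """Reserve resources for one cloud server, if capacity allows."""
        if cores > self.cores or ram_gb > self.ram_gb or storage_gb > self.storage_gb:
            raise ValueError("Insufficient capacity on this host")
        self.cores -= cores
        self.ram_gb -= ram_gb
        self.storage_gb -= storage_gb
        return {"cores": cores, "ram_gb": ram_gb, "storage_gb": storage_gb}

host = Host(cores=32, ram_gb=256, storage_gb=4000)
vm_small = host.allocate(2, 8, 100)    # a small cloud server
vm_large = host.allocate(8, 64, 1000)  # a larger one
print(host)  # capacity the hypervisor can still hand out
```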

hosting service that provides server management with a man in front of screen

Dedicated Hosting Environment

The dedicated server hosting ecosystem does not make use of virtual technology.  All resources are based on the capabilities and limitations of a single piece of physical hardware.

The term ‘dedicated’ comes from the fact that it is isolated from any other virtual space around it based on hardware. The hardware is built specifically to provide industry-leading performance, speed, durability and most importantly, reliability.

What is a Cloud Server and How Does it Work?

In simple terms, cloud server hosting is a virtualized hosting platform.

Hardware known as bare metal servers provide the base level support for many cloud servers.  A public cloud is made up of multiple bare metal servers, usually kept in a secure colocation data center. Each of these physical servers plays host to numerous virtual servers.

A virtual server can be created in a matter of seconds, quite literally. It can also be dismissed as quickly when it is no longer needed. Sending resources to a virtual server is a simple matter as well, requiring no in-depth hardware modifications. Flexibility is one of the primary advantages of cloud hosting, and it is a characteristic that is essential to the idea of the cloud server.

Within a single cloud, there can be multiple web servers providing resources to the same virtual space. Although each physical unit may be a bare metal server, the virtual space is what clients are paying for and ultimately using. Clients do not access the operating system of any of the base units.

What is Dedicated Server Hosting?

Dedicated hosting places just a single client on a physical server.

All of the resources of that server are available to that specific client that rents or buys the physical hardware. Resources are customized to the needs of the client, including storage, RAM, bandwidth load, and type of processor. Dedicated hosting servers are the most powerful machines on the market and often contain multiple processors.

A single client may require a cluster of servers. This cluster is known as a “private cloud.” 

The cluster is built on virtual technology, with the many dedicated servers all contributing to a single virtual location. The resources that are in the virtual space are only available to one client, however.

Mixing Cloud and Dedicated Servers – the Hybrid Cloud

An increasingly popular configuration that many companies are using is called a hybrid cloud. A hybrid cloud uses dedicated and cloud hosting solutions. A hybrid may also mix private and public cloud servers with colocated servers. This configuration allows for multiple variations on the customization side which is attractive to businesses that have specific needs or budgetary constraints.

One of the most popular hybrid cloud configurations is to use dedicated servers for back-end applications. The power of these servers creates the most robust environment for data storage and movement. The front-end is hosted on cloud servers. This configuration works well for Software as a Service (SaaS) applications, which require flexibility and scalability depending on customer-facing metrics.

selecting the right IT vendor for cloud services

Cloud Servers and Dedicated Servers – the Similarities

At their core, both dedicated and cloud servers perform the same necessary actions. Both solutions can conduct the following applications:

  • store information
  • receive requests for that information
  • process requests for information
  • return information to the user who requested it.

Cloud servers and dedicated servers also differ from shared hosting and Virtual Private Server (VPS) hosting. Due to their increasingly sophisticated structure, cloud and dedicated solutions outpace shared/VPS solutions in the following areas:

  • Processing large amounts of traffic without lag or performance hiccups.
  • Receiving, processing and returning information to users with industry standard response times.
  • Protecting the fidelity of the data stored.
  • Ensuring the stability of web applications.

The current generation of cloud hosting solutions and dedicated servers have the general ability to support nearly any service or application. They can be managed using similar back-end tools, and both solutions can run on similar software. The difference is in the performance.

Matching the proper solution to an application can save businesses money, improve scalability and flexibility, and help maximize resource utilization.

The Difference Between Dedicated Servers and Cloud Computing

The differences between cloud hosting and dedicated servers become most apparent when comparing performance, scalability, migration, administration,  operations, and pricing.

scalability of data centers

Performance

Dedicated servers are usually the most desired choice for a company that is looking for fast processing and retrieval of information. Since they process data locally, they do not experience a great deal of lag when performing these functions.

This performance speed is especially important in industries where every 1/10th of a second counts, such as ecommerce.

Cloud servers must go through the SAN to access data, which takes the process through the back end of the infrastructure. The request must also route through the hypervisor. This extra processing adds a level of latency that cannot be fully eliminated.

Processors in dedicated servers are entirely devoted to the host website or application. Unless all of the processing power is used at once (which is highly unlikely), they do not need to queue requests. This makes dedicated servers an excellent choice for companies with CPU-intensive load-balancing functions. In a cloud environment, processor cores require management to keep performance from degrading. The current generation of hypervisors cannot manage requests without an added level of latency.

Dedicated servers are entirely tied to the host site or application, which prevents throttling across the overall environment. This level of dedication makes networking a simple function compared to the cloud hosting environment.

In the cloud, sharing the physical network incurs a significant risk of bandwidth throttling. If more than one tenant is using the same network simultaneously, both tenants may experience a myriad of adverse effects. Hosting providers give many cloud-based tenants the option to upgrade to a dedicated Network Interface Card (NIC).

This option is often reserved for clients who are bumping up against the maximum bandwidth available on the network. NICs can be expensive, but companies often find they are worth the extra cost.

Scale Your Business Hosting Needs

Dedicated hosting scales differently than cloud-based servers. The physical hardware is limited by the number of Direct-Attached Storage (DAS) arrays or drive bays available on the server.

A dedicated server may be able to add a drive to an already open bay through an underlying Logical Volume Manager (LVM) filesystem, a RAID controller, and an associated battery. DAS arrays, however, are more difficult to hot swap.

In contrast, cloud server storage is easily expandable (and contractible). Because the SAN is off the host, the cloud server does not have to be part of the interaction to provision more storage space. Expanding storage in the cloud environment does not incur any downtime.

Changing processors in a dedicated server also takes more time and resources and is difficult to do without maintenance downtime. A website hosted on a single server that requires additional processing capability needs either a total migration or networking with another server.

disaster recovery and business continuity in the cloud

Migration

Both dedicated and cloud hosting solutions can achieve seamless migration. Migration within the dedicated environment requires more planning. To perform a seamless migration, the new solution must keep both future and current growth in mind. A full-scale plan should be created.

In most cases, the old and new solutions should run concurrently until the new server is completely ready to take over. It is also advisable to maintain the older servers as a backup until the new solution can be adequately tested.

Server Management: Administration and Operations

Dedicated servers may require a company to monitor its own hardware, so in-house staff must understand systems administration more closely. A company will also need a deep understanding of its load profile to keep data storage requirements within the proper range.

Scaling, upgrades, and maintenance is a joint effort between client and provider that must be carefully engineered to keep downtime to a minimum.

Cloud servers are easier to administer. Scalability is faster, with much less of an impact on operations.

Where dedicated platforms require planning to estimate server requirements accurately, the cloud platforms require planning to work around the potential limitations that you may face.

cloud hosting service server management

Cloud vs Server Cost Comparison

Cloud servers ordinarily have a lower entry cost than dedicated servers. However, cloud servers tend to lose this advantage as a company scales and requires more resources.

There are also features that can increase the cost of both solutions.

For instance, running a cloud server through a dedicated network interface can be quite expensive. 

A benefit of dedicated servers is that they can be upgraded with more memory, additional network cards, and Non-Volatile Memory Express (NVMe) disks, improving capabilities at the expense of a company’s hardware budget.

Cloud servers are typically billed on a monthly OpEx model, while physical servers are usually CapEx expenditures. Owning the hardware allows you to oversubscribe your resources without additional cost, and the capital expenditure may be written off over a three-year period.
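To make the OpEx-versus-CapEx trade-off concrete, a back-of-the-envelope comparison like the sketch below can help. Every figure here is invented for illustration; substitute your own quotes.

```python
# Hypothetical comparison of monthly cloud spend (OpEx) versus a
# dedicated server purchase (CapEx) written off over three years.
# All prices are made up for illustration.
cloud_monthly = 450        # monthly cloud server bill (OpEx)
dedicated_price = 9_000    # upfront dedicated server cost (CapEx)
writeoff_years = 3

cloud_total = cloud_monthly * 12 * writeoff_years
dedicated_monthly_equiv = dedicated_price / (12 * writeoff_years)

print(f"Cloud over {writeoff_years} years: ${cloud_total:,}")
print(f"Dedicated, amortized: ${dedicated_monthly_equiv:,.0f}/month")
```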

cloud vs dedicated hosting

Making a Choice: Cloud Servers vs Dedicated Servers

Matching the needs of your business to the configuration is the most crucial aspect of choosing between computing platforms.  

The chosen platform needs to complement your operating procedures and be both scalable and cost-effective. These variables are critical when selecting between a cloud and a dedicated server solution.

With a dedicated server, you are also not able to take advantage of new technological benefits as rapidly as you would in a cloud environment, since upgrades are tied to the physical hardware you own.

The value proposition of consolidating workloads rests on historical evidence suggesting that most server workloads use only a fraction of the physical resources available to them over an extended period. By combining workloads on a single hardware platform, one can optimize the capital expenditure on that platform. This is the model cloud service providers use to create cheaper computing resources.

A dedicated server provides access to raw performance, both processor and data storage. Depending on the computing workload, this may be necessary.

There is a place for both. Utilizing a hybrid strategy, an organization can run processor-intensive workloads on dedicated systems while running scalable workloads on cloud infrastructure, taking advantage of the strengths of each platform.

With the current maturity of cloud orchestration tools, and the ability to cross-connect into cloud environments, an organization can have multiple workloads in various environments. Additionally, it can run physical infrastructure that interacts with these cloud services.

Which should you choose? Selecting one over the other based on a single metric is a mistake.

Consider the following:

  • The advantages of each solution.
  • The current needs of your business.
  • Your future scalability needs.

Have Questions?

Not sure which hosting option to choose for your business needs? Contact one of our hosting service experts.


Professional Data Storage

Secure Data Storage Solution: 6 Rules to Making the Right Choice

As your business grows, so does your need for secured professional data storage.  

Your digital database expands every day with each email you send and receive, each new customer you acquire, and each new project you complete. As your company adopts new business systems and applications, creates more files, and generates new database records, it needs more space for storing this data.

The trend of massive digital data generation is affecting every business. According to analyst reports, the demand for data storage worldwide reached nearly 15,000 exabytes last year. With such an impressive figure, it is clear why choosing a professional storage solution is a frequent challenge in the business world.

What companies are looking for in a data storage solution

The rapidly growing data volume is only one of the challenges businesses are facing. As you compile more files, you also need better data protection methods. Securing mission-critical files and databases is a number one priority for today’s businesses that are increasingly exposed to cyber attacks.

You also want to ensure the data is accessible to your teams at any point. Whether they are working remotely or using multiple devices to access business documents, you need to provide them with easy and secure access to your company’s file system.

These are just some of the reasons why choosing secure data storage can be a tough task. When you add cost considerations to these reasons, the issue becomes even more complicated.

Most business execs do not understand storage access methods, performance, redundancy, risk, backup, and disaster recovery. This makes things much more difficult for IT administrators who need to justify the cost of additional storage requirements.

So why is storage so challenging to tackle and manage?  

Most small businesses have limited storage systems, lacking the ability to expand as their needs grow. Their IT departments are left to deal with the challenge of handling high costs of storage along with the cost of security systems and software licenses.

Larger businesses, on the other hand, have an issue of finding a solution that is both flexible and secure. This is especially important for companies operating in regulated industries such as Financial Services, Government, and Healthcare.

Whatever the focus of your business, your quest for a perfect professional data storage solution may get complicated. 

1. Assess your current and future data storage needs

a folder with a secure data storage

The first rule businesses should address is their current and future data storage needs.  

Do you know the minimum storage requirements for your applications, device drivers, etc.?  Of the space you have left, do you have enough to sustain business needs for the next five years?  

If you are unsure, assess the amount of storage you have now and compare it to what you will need in five years. Sure, you can restrict the size of your employees’ inboxes and the amount of storage they can use on the company shared drive. However, how long will your business be able to sustain these restrictions? You will reach a point where your business outgrows its data storage.

As you continue to add new customers and prospective clients to your customer relationship management (CRM) database, you can expect an exponential need for more storage. Even if you take precautionary measures such as removing duplicate entries in your CRM and performing routine data cleanup, your need for additional storage will continue to grow. As your applications require updates and patches, and as you continue to add new apps to your business, the storage needed to house all of it will keep growing.

2. Consider storage functionality that you need

After you assess your current and future needs, data storage functionality is the next thing to consider. Although it is a fundamental aspect, it is easily overlooked. After all, what function does data storage perform anyway?

You should have already answered the question of why you are purchasing storage by this point. Typically, the goal is to lower IT costs, improve productivity, or support business expansion. Instead of having to buy physical servers or add hard drives that you have to maintain, you can centralize your data storage and management in the cloud.

The cloud would help you increase network performance and make data more accessible to your employees. Moreover, it will make your critical assets available in case of a system failure.  These are just some of the factors that should drive you toward the optimal solution for your needs.

You will need to determine whether a shared public cloud would suit your needs well or whether you should consider a private option. Both have their advantages and are tailored for businesses with different needs. If your idea is to share less sensitive information in the public cloud, you may not need to invest significantly in data storage expansion. Dedicated and more secure storage options, which can meet the highest storage security and compliance needs, may be more expensive.

This is why you need to ask yourself what it is that you need right now and what goals you want to achieve in the future. The answers to these questions also provide a starting point for deciding which type of storage solution is right for your business.

If you cannot determine the storage function, you can assume that a dedicated solution is not necessary. Many small businesses do not need dedicated server providers anyway.

However, it all depends on where you forecast your business will be in a few years. If your organization relies on building a large customer base, consider mapping out how many customers or prospects you will have and how much storage each data record requires. Multiply that by the number of records you plan to have to calculate a rough estimate of the necessary storage, as in the sketch below.
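The sketch below shows one way to run that estimate. All of the figures are hypothetical; replace them with your own record counts and measured record sizes.

```python
# Hypothetical estimate of future CRM storage needs. Replace the
# figures with your own record counts and measured record sizes.
records_today = 50_000       # current customer records
annual_growth_rate = 0.30    # expected yearly growth in records
avg_record_kb = 250          # average size of one record, in KB
years = 5

records_future = records_today * (1 + annual_growth_rate) ** years
storage_gb = records_future * avg_record_kb / (1024 * 1024)
print(f"~{records_future:,.0f} records -> ~{storage_gb:.1f} GB in {years} years")
```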

Best way to store sensitive data

3. Redefine your information security processes

Data security is a vital issue to address when choosing and implementing a storage solution. Without a sound storage security strategy in place, you risk losing your sensitive data. With the frequency of data breaches becoming more and more alarming, you should integrate security solutions into each step of your data management process.

Many businesses risk losing data stored on their infrastructure due to platform vulnerabilities or poor security management practices. This is especially true for companies using public or hybrid cloud solutions, where a third-party vendor carries part of the responsibility for data security.

While the cloud is not inherently insecure, a lack of storage security best practices and specialized data security software makes your cloud data more vulnerable. To protect data adequately, you need to implement information security best practices on multiple levels in your company.

This involves training your employees on the best practices of cybersecurity, implementing new physical security procedures, hiring data scientists, and developing disaster recovery plans. If your data is stored on multiple platforms or with different providers, this may become a complicated issue, so you need to consider it before you make your choice.

You should keep the operational aspects of security in mind when choosing data storage, such as security devices, security administration, and data monitoring. Is your data encrypted both at rest and in transit?

Data Encryption

Just because cloud storage can be vulnerable doesn’t mean your data should be. Understand where your data is stored, how it is transferred, and who has access to the keys. For instance, what would an outage mean to your business? Do you have a valid SSL certificate? Does your CA have a good reputation? Some of the most recent major outages occurred because SSL certificates had expired.
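Certificate expiry is easy to monitor programmatically. The snippet below is a minimal sketch using Python’s standard library; the hostname is a placeholder, and a production check would run on a schedule and alert well before the expiry date.

```python
# Minimal sketch: check how many days remain before a server's
# TLS/SSL certificate expires. The hostname is a placeholder.
import socket
import ssl
import time

def days_until_cert_expiry(host: str, port: int = 443) -> float:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400  # seconds per day

print(f"Certificate expires in {days_until_cert_expiry('example.com'):.0f} days")
```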

In addition, consider the type of data you back up. Sensitive data should be encrypted and secured separately from non-sensitive data. Many businesses use the hybrid cloud to ensure their critical data is stored on a hardened platform and protected by multiple types of data security measures.

You also need to enforce a strict data usage and storage policy company-wide. Employees should become aware of the sensitive nature of their customer information, as well as the best ways to protect data. With comprehensive security training, your employees can become the best guardians of your critical files. 

4. Data backup and deduplication options

Another rule to consider when selecting a professional data storage solution is deduplication.  

This is the process of identifying unique data segments by comparing them with previously stored data. With automated backups, the same data set can be saved continuously once deduplication is complete. Why save and back up duplicate data in the first place? The deduplication process saves only the unique data, in a compressed format.

Deduplication reduces your storage requirements by eliminating redundant data. This also helps improve processing speed by reducing the server workload. Additionally, deduplication reduces the amount of data you have to manage and shortens data recovery times.

Imagine the processing power you expend sifting through gigabytes upon gigabytes of duplicate data, not to mention the confusion over which files are relevant. Another reason deduplication is essential: without it, you could end up paying for more storage than you need. Eliminating duplicate data may save you money because you will not have to scale up your data storage as quickly.

You may find that deduplication frees up storage space you are already paying for. You can use this newly found space for applications or other storage needs.

Deduplication is a method of decluttering folders and databases. Depending on your data, it can be performed manually or automatically. A good first step is to find tools that seek out similar data or files, because you may not be able to spot duplication easily. Once you find a duplicate, determine whether you need it and delete it if not. The sketch below illustrates the underlying idea.
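To illustrate the basic mechanism, the sketch below deduplicates fixed-size chunks by hashing them and storing each unique chunk only once. This is a simplification; production systems typically use variable-size chunking and far more elaborate indexing.

```python
# Minimal sketch of hash-based deduplication: split data into fixed-size
# chunks, hash each one, and store only chunks not seen before.
import hashlib

CHUNK_SIZE = 4096  # bytes; real systems often use variable-size chunks

def deduplicate(data: bytes, store: dict) -> list:
    """Store unique chunks; return the hash list that rebuilds the data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # only unique data is saved
            store[digest] = chunk
        recipe.append(digest)
    return recipe

store = {}
recipe = deduplicate(b"hello world" * 10_000, store)
print(f"{len(recipe)} chunks referenced, {len(store)} unique chunks stored")
```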

5. Compare speed and capacity of different solutions

Once you have chosen a storage option, you can determine the performance and capacity you need. Capacity is easy to determine and the most obvious metric. Performance is easy to explain but hard to quantify; you may have a hard time determining the bandwidth, latency, and burst speeds you need.

General Data Protection Regulation Meeting

While there is a debate among IT professionals about processor speed versus storage, all you care about as a business owner is the performance of the storage you are paying for. In this case, you may wish to do a little research on which processors can yield the best performance for data storage. If you have selected a shared storage solution, find out what processors the storage provider uses.

You do not need a complete understanding of processor speeds. However, consider this: a dual- or quad-core 2.8 GHz processor is generally better for storage workloads than a single-core 3.4 GHz processor. Two cores can run two programs simultaneously at 2.8 GHz each, while the single 3.4 GHz core must split its processing time between them, effectively giving each program the equivalent of about 1.7 GHz. In addition to processor speed, memory speed should be adequately matched as well.

6. Find a provider on which you can rely

If considering moving to or buying additional shared storage in the cloud, consider the reliability behind it. You need to choose a credible vendor or a data center provider and ensure the service level agreement (SLA) is tailored to your needs. 

A Service Level Agreement should list the acceptable amount of downtime, reliability, redundancy, and disaster recovery you should expect from a shared storage solution. You also need to consider your provider’s data security methods and data security technologies. This would give you peace of mind considering the availability of your data even in case of a disaster. 

You should have your IT administrator chime in on this one, because reliability often means the difference between waiting hours or days for recovery in the event of a catastrophic failure. Even if you do not think you will need to access your data storage solution hourly, daily, weekly, or monthly, you need to ensure it is there when you need it.

The concepts of availability and redundancy are equally important. You should not think of storage as just a typical server. In almost all cases, data storage solutions are built and managed on enterprise servers that share the same classes of physical components. Small businesses should look at a mid- to high-end storage provider to support their lower-end servers. Regardless of the size of your company or of the servers your data storage resides on, all principles of reliability apply. You will have to weigh reliability and security risks and determine the best choice for your business.

For example, do you plan on using this storage for legacy data you might only access once a quarter or once a year? In this case, the reliability of storage will not be as critical as the data your employees need to access daily and hourly.

Conclusion: Finding a Secured Provider Of Data Storage

In summary, your need for professional data storage will grow along with your business. So will your need for a comprehensive and up-to-date security strategy.

To overcome this challenge, you need to perform an initial assessment of your current and future data storage needs and research storage vendors and security options. Once you have a clear picture of the functions and needs of your storage platform, you should consider how you can secure it adequately.

Building a security architecture that meets all your needs for flexibility and scalability may turn out to be a complicated task. Cloud computing does offer flexibility, but you still need strong security and data management strategies to maintain the highest level of safety for your data. This is why choosing a secure storage option is an essential part of a company’s digital transformation strategy.

With the right solution, you can optimize all your critical processes. By following the tips outlined in this article, you increase your chances of making a great decision.


disaster recovery in the cloud explained

What is Cloud Disaster Recovery? 9 Key Benefits

Your business data is under constant threat of attack or data loss.

Malicious code, hackers, natural disasters, and even your employees can wipe out an entire server filled with critical files without anyone noticing until it is too late.

Are you willing to fully accept all these risks?

What is cloud disaster recovery?

Cloud-based storage and recovery solutions enable you to back up and restore your business-critical files in case they are compromised.

Thanks to its high flexibility, cloud technology enables efficient disaster recovery regardless of the type or intensity of the workload. Data is stored in a secured cloud environment architected for high availability. The service is available on demand, which enables organizations of all sizes to tailor DR solutions to their needs.

As opposed to traditional solutions, cloud-based disaster recovery is easy to set up and manage. Businesses no longer need to waste hours transferring backup data from their in-house servers or tape drives to recover after a disaster. The cloud automates these processes, ensuring fast and error-free data recovery.

hardware failure vs power loss

Always be prepared with proper data security

As companies continue to add new hardware and software applications and services to their daily processes, related security risks increase.  Disasters can occur at any moment and leave a business devastated by massive data loss. When you consider how much they can cost, it is clear why it makes sense to create a data backup and recovery plan. 

Disaster recovery statistics show that 98% of organizations surveyed indicate that a single hour of downtime can cost their business over $100,000. Any amount of downtime can cost a business tens of thousands to hundreds of thousands of dollars in man-hour labor spent recovering or redoing lost work. In some cases, an 8-hour downtime window can cost a small company up to $20,000, with losses for large enterprises running far higher.

Considering these figures, it is clear why every second of service or system interruption counts, and what the actual value of having a disaster recovery plan in place is.

Cloud recovery helps businesses bounce back from natural disasters, cyber attacks, ransomware, and other threats that can render all files useless in an instant. Just by minimizing the time needed to bring workloads back online, it directly lowers the cost of a system failure.

Although most companies and their IT departments are aware of the risk, few make an effort to implement disaster recovery until it is too late. Now, let us take a more in-depth look at how it can translate into business benefits.

man standing in front of a rack of servers in a cloud data center

Benefits of a cloud-based disaster recovery solution

One of the most significant advantages of cloud-based options over standard disaster recovery management is their cost-efficiency.  Traditional backup involves setting up physical servers at a remote location, which can be costly. The cloud, on the other hand, enables you to outsource as many hardware and software resources as you need while paying only for what you use. 

When considering the cost of disaster recovery, it is essential to think beyond the actual price of the solution.

Just think about how much it would cost not to have it. Small companies can choose a service plan that fits their budget, and implementation does not require additional maintenance costs or new IT hires. Your provider handles all the technical activities, so you do not have to worry about them.

Another benefit of cloud-based technology is its reliability. Service providers maintain redundant data centers, which ensures maximum availability of your data. It also makes it possible for your backups to be restored faster than would be the case with traditional DR.

Workload migration and failover in cloud-based environments can take only a few minutes. With traditional recovery solutions, this time frame is usually longer, since failover involves physical servers set up in a remote location. Depending on the amount of data you need to back up, you can also choose to migrate data in phases.

Cloud backup services offer a high degree of scalability. Compared to physical systems, cloud backup capacity is virtually unlimited. As organizations grow, their systems can grow with them. All you need to do is extend your service plan with your provider and get additional resources as the need arises.

disaster recovery and business continuity in the cloud

Failover and failback capabilities in the cloud

When it comes to business-critical data, cloud data backup and recovery provides the most reliable business continuity and failback option.

During a data outage, workloads are automatically shifted to a different location and restarted from there. This process is called failover, and it is initiated when the primary systems experience an issue. After the issues on the original location are resolved, the workloads are failed back to it. This is done using professional disaster recovery and replication tools, which are available from the data center and infrastructure-as-a-service providers. 
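Conceptually, the logic looks something like the sketch below. The endpoints and the health check are hypothetical, and in practice your provider’s replication tooling drives the process; this is only meant to show the failover and failback decision points.

```python
# Conceptual sketch of failover/failback decision logic. The URLs and
# the health check are placeholders; real DR tooling handles this.
import time
import urllib.request

PRIMARY = "https://primary.example.com/health"   # placeholder URLs
RECOVERY = "https://dr-site.example.com/health"

def healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

active = PRIMARY
while True:  # monitoring loop; a real daemon would also log and alert
    if active == PRIMARY and not healthy(PRIMARY):
        print("Primary down -> failing over to the recovery site")
        active = RECOVERY
    elif active == RECOVERY and healthy(PRIMARY):
        print("Primary restored -> failing back")
        active = PRIMARY
    time.sleep(30)  # poll interval
```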

Although failover and failback activities can be automated in the cloud, businesses should regularly run tests on designated network locations to make sure there is no impact to live or production network data.

When establishing the data set in a disaster recovery solution, you can select data, virtual machine images, or full applications to fail over. This process may take a while, and this is why organizations need to discuss every step of it with their data center provider. 

man looking for cyber security certifications in the IT industry

Disaster Recovery as a Service (DRaaS)

Part of a cloud disaster recovery plan might include Disaster Recovery as a Service (DRaaS), which is designed to help organizations protect themselves against the loss of critical business data.

These solutions require a business to first understand what it needs from the service.

A business might identify a general pool of data that needs to be backed up and how often it should be backed up. Further, companies should determine the level of effort they are willing to invest in backing up the data as part of disaster recovery. Once a company clarifies the requirements, it can look for DRaaS providers to suit its needs.

How cloud computing backup and recovery is evolving

With cyber attacks and system failures becoming more commonplace, companies are increasingly turning to disaster recovery in the cloud.

As the demand grows, providers continue improving their offerings. Recent reports suggest that the market for backup and DR cloud services is on the rise with a growing number of solutions being offered to companies of different sizes.

The increase in demand also illustrates a greater awareness of the value of these services. Cyber attacks and system failures occur daily, and businesses are justifiably concerned about the safety of their data. They need an option that can protect their data in the diverse scenarios that put their daily operations at risk.

Studies have also found that the principal cause of downtime is power outages. This means that no matter how many copies of your files you keep in-house, they can all be lost if the power goes out. With cloud-based DRaaS, your data is saved remotely with reliable power sources. In most cases, cloud services distribute data across different power grids, ensuring sufficient redundancy.

Many older services relied on physical backups at offsite locations. Offsite backups are expensive and inefficient, as they involve duplicating physical equipment at another location or maintaining a combination of on-premises and physical backups.

Cloud Service Level Agreements

Service level agreements (SLAs) hold cloud computing disaster recovery providers responsible for all maintenance and management of services rendered. They also include details on recourse and penalty for any failures to deliver promised services.

For example, an SLA can ensure that disaster recovery providers reimburse their clients with service credits in the event of a service outage, or if data cannot be recovered after a disaster. Customers can then apply these credits toward their monthly bill or another service offered by the DR provider, although the credits will rarely make up for the entire loss a business experiences from delayed cloud recovery.

An SLA also includes guaranteed uptime, recovery point, and recovery time goals. The recovery time goal, for example, can be anywhere from an hour to 24 hours or more, depending on the amount of data to be backed up and recovered. More specifically, these goals are defined in terms of RTOs and RPOs, which are essential concepts in disaster recovery.

The recovery time objective (RTO) is the maximum acceptable period for applications to be down. The recovery point objective (RPO) is the maximum acceptable amount of data loss, measured as the time between the last backup and a disaster. Based on these two criteria, companies define their needs and can choose an adequate solution.
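A worked example helps: with an RPO of 15 minutes, backups or replication snapshots must run at least every 15 minutes, since the worst-case data loss is everything written since the last backup. The sketch below checks a schedule against hypothetical objectives.

```python
# Hypothetical sketch: verify that a backup interval satisfies an RPO,
# i.e., the maximum tolerable window of data loss.
from datetime import timedelta

rpo = timedelta(minutes=15)              # max tolerable data loss
rto = timedelta(hours=1)                 # max tolerable downtime
backup_interval = timedelta(minutes=10)  # how often backups run

worst_case_loss = backup_interval        # data written since last backup
assert worst_case_loss <= rpo, "Backup interval violates the RPO"
print(f"OK: worst-case loss {worst_case_loss} is within the RPO of {rpo}; "
      f"recovery itself must then complete within the RTO of {rto}")
```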

man drawing an image of a cloud with the words disaster recovery

Define Your Recovery Requirements 

A large part of any cloud backup and disaster recovery plan is the amount of bandwidth and network capacity required to perform failover.  

A sufficient analysis of how to make data available when needed is essential for choosing the best fit for a company. Part of the consideration should be whether the network and bandwidth capacity can handle redirecting all users to the cloud at once.

Another consideration for hybrid environments is how to restore data from the cloud to an on-premises data center network, and how long that will take. Backup sets for recovery will also need to be designed as part of any disaster recovery solution.

When defining these requirements, RTOs and RPOs play a major role.  Both of these goals are included as part of the business impact analysis.  

Recovery points are the points in time to which data must be recoverable. They determine backup frequency based on how the data is used. For instance, information and files that are frequently updated might have a recovery point of a few minutes, while less essential data might only need to be recoverable to within a few hours.

Both recovery time and recovery point objectives represent the impact on the bottom line.  The smaller these values are, the higher the cost of the DRaaS.

Part of the recovery time and recovery point planning should include a schedule for automated backups. Keep in mind the difference in the time required to back up data versus applications, and either create two schedules or note the differences in one schedule.

Cloud Disaster Recovery Management

Creating a custom cloud backup & disaster recovery plan

There is no magic blueprint for backup and disaster recovery solutions. Each company must learn the industry’s best practices and determine the essential workloads required to continue operations after a data loss or other catastrophe.

The overall principle used to derive an IT recovery plan is triage. This is the process of identifying and prioritizing services, applications, and data, and determining the amount of downtime each can sustain before the disaster significantly impacts business operations. These efforts include developing a set of recovery time objectives that will define what type of solution a business needs.

By identifying essential resources and appropriate downtime and recovery, a business has a solid foundation for a cloud DR solution.  

All critical applications and data must be included in this blueprint. On the other hand, to minimize costs and ensure a fast recovery when the strategy is put into practice, a business should remove all irrelevant applications and data. 

After the applications and data are identified and prioritized, the recovery time objectives are defined. The most cost-effective way of achieving these goals may be to use a separate method for each application and service.

Some businesses may require separate methods for data and for applications running in their private or public cloud environments. Most likely, this scenario calls for different means of protecting application clusters and data, each with parallel recovery time objectives.

Once the design for disaster recovery is final, periodic tests should be performed to ensure it works as needed. Many companies have backups in place but are not sure how to use them when they need them. This is why you need to test both internal and external procedures regularly and even update them as needed. 

A general recommendation is to test your systems on an annual basis, carefully following each step of the outlined process. However, in companies that have dynamic multi-cloud strategies or those that are expanding at an unsteady pace, these tests may need to be performed even more frequently. As new systems or infrastructure upgrades are implemented, the disaster recovery plan should be updated to reflect the changes.  

It is also important to use a cloud monitoring tool.

selecting the right IT vendor for cloud services

Options for disaster data recovery in the cloud 

Data centers offer varying options businesses can choose from for data protection.

Managed applications are popular components of a cloud disaster recovery strategy. In this case, both primary production data and backup copies are stored in the cloud and managed by a provider. This allows companies to reap the cloud’s benefits in a usage-based model while moving away from dependency on on-premises backups.

A managed or hosted recovery solution brings you a comprehensive cloud-based platform with the hardware and software needed to support your operations. With this option, data and applications remain on-premises; only the data is backed up to the cloud infrastructure and restored as needed. Such a solution is more cost-effective than a traditional option such as local, offsite data backup. However, the recovery process for applications may be slow.

Some application vendors may already offer cloud backup services. Businesses should check with their vendors if this is an option to make the implementation as easy as possible. Another viable option is to back up to and restore from the cloud infrastructure. Data is restored to virtual machines in the cloud rather than on-premises servers, requiring cloud storage and cloud computing resources.  

The restore process can be executed when a disaster strikes, or it can be recurring. Recurring backups ensure data is kept up to date through resource sharing and are essential when recovery objectives are short.

For applications and data with short or aggressive objectives, replication to virtual machines in the cloud is a viable DRaaS service. By replicating to the cloud, you can ensure data and applications are protected both in the cloud and on-premises.  

Replication is viable both from cloud VM to cloud VM and from on-premises systems to cloud VMs. Replication products targeting VMs are typically based on continuous data protection.

DR in the cloud

Getting Started With Cloud Disaster Recovery

After a business has determined which type of recovery solution they want, the next step is to make an overview of the options available with different providers and data centers.  

The key to finding a solution that suits the business needs is discussing options with multiple service providers.  

Many vendors offer variations in their pricing packages, which may include a certain number of users, application backup, data backup, and backup frequency.

The only efficient way to choose a managed cloud backup and disaster recovery provider is to assess your needs adequately. Discuss those needs with stakeholders across all departments to identify the critical data and applications needed to ensure business continuity.

Determine recovery time and point objectives and create a schedule with appropriate downtime for data and applications.  Next, consider the budget allotted for disaster recovery. 

Examine various options to find the best one for your business.