
How AI and Voice Technology are Changing Business

When the first version of Siri came out, it struggled to understand natural language: speech patterns, expressions, colloquialisms, and different accents all had to be reduced to computational algorithms.

Voice technology has improved extensively over the last few years, largely thanks to the implementation of Artificial Intelligence (AI), which has made it far more adaptive and efficient.

This article focuses on the impact that AI and voice technology have on the businesses that provide voice technology services.

AI and Voice Technology

The human brain is complex. Despite this, there are limits to what it can do. For a programmer to think of all the possible eventualities is impractical at best. In traditional software development, developers instruct software on what function to execute and in which circumstances.

It’s a time-consuming and monotonous process. It is not uncommon for developers to make small mistakes that become noticeable bugs once a program is released.

With AI, developers instruct the software on how to function and learn. As the AI algorithm learns, it finds ways to make the process more efficient. Because AI can process data a lot faster than we can, it can come up with innovative solutions based on the previous examples that it accesses.

The revolution of voice tech powered by AI is dramatically changing the way many businesses work. AI, in essence, is little more than a smart algorithm. What makes it different from other algorithms is its ability to learn. We are now moving from a model of programming to teaching.

Traditionally, programmers write code to tell the algorithm how to behave from start to finish. Now programmers can dispense with tedious processes. All they need to do is to teach the program the tasks it needs to perform.

The Rise of AI and Voice Technology

Voice assistants can now do a lot more than just run searches. They can help you book appointments and flights, play music, take notes, and much more. Apple offers Siri, Microsoft has Cortana, Amazon uses Alexa, and Google created Google Assistant. With so many choices and uses, is it any wonder that 40% of us use voice tech daily?


They’re also now able to understand not only the question you’re asking but the general context. This ability allows voice tech to offer better results.

Before voice interfaces, communication with computers happened via typing or graphical interfaces. Now, sites and applications can harness the power of smart voice technologies to enhance their services in ways previously unimagined. It’s the reason voice-compatible products are on the rise.

Search engines have also had to keep up: optimization used to target text-based search queries only, but as voice assistant technology advances, that is starting to change. In 2019, Amazon sold over 100 million Alexa-enabled devices, including Echo and third-party gadgets.

According to Google, 20% of all searches are voice, and by 2020 that number could rise to 50%. For businesses looking to grow, voice technology is therefore one major area to consider, as global voice commerce is expected to be worth $40B by 2022.

How Voice Technology Works

Voice technology requires two different interfaces. The first is between the end-user and the endpoint device in use. The second is between the endpoint device and the server.

It’s the server that contains the “personality” of your voice assistant. Whether on a bare metal server or in the cloud, voice technology is powered by computational resources. The server is where all the AI’s background processes run, despite giving you the feeling that the voice assistant “lives” on your devices.


That feeling seems logical, considering how fast your assistant answers your questions. The truth is that your phone alone doesn’t have the processing power or space required to run the full AI program. That’s why your assistant is inaccessible when the internet is down.

How Does AI in Voice Technology Work?

Say, for example, that you want to search for more information on a particular country. You simply voice your request, which is then relayed to the server. That’s when AI takes over. It uses machine learning algorithms to run searches across millions of sites to find the precise information that you need.

To find the best possible information for you, the AI must also analyze each site very quickly. This rapid analysis enables it to determine whether or not the website pertains to the search query and how credible the information is.

If the site is deemed worthy, it shows up in search results. Otherwise, the AI discards it.

The AI goes one step further and watches how you react. Did you navigate off the site straight away? If so, the technology takes it as a sign that the site didn’t match the search term. When someone else uses similar search terms in the future, AI remembers that and refines its results.
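To make the feedback mechanism concrete, here is a toy Python sketch of a click-feedback loop. It is purely illustrative: the scores, thresholds, and sites are hypothetical, not any assistant's actual ranking algorithm.

```python
# Toy relevance-feedback loop: illustrative only, not a real ranking system.
results = {
    "site-a.example": 0.70,  # initial relevance scores for one query
    "site-b.example": 0.65,
}

def record_visit(site, dwell_seconds, scores):
    """Nudge a site's score based on how long the user stayed on it."""
    if dwell_seconds < 5:        # navigated off straight away:
        scores[site] -= 0.05     # treat it as a poor match for the query
    else:
        scores[site] += 0.05     # engaged: likely a good match

record_visit("site-a.example", dwell_seconds=2.0, scores=results)
record_visit("site-b.example", dwell_seconds=45.0, scores=results)

# A future user issuing a similar query now sees site-b ranked first.
print(sorted(results, key=results.get, reverse=True))
```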

Over time, as the AI learns more and more, it becomes more capable of producing accurate results. At the same time, the AI learns all about your preferences. Unless you say otherwise, it’ll focus on search results in the area or country where you live. It determines what music you like, what settings you prefer, and makes recommendations. This intelligent programming allows that simple voice assistant to improve its performance every time you use it.

Learn how Artificial Intelligence automates procedures in ITOps - What is AIOps.

Servers Power Artificial Intelligence

Connectivity issues, the program’s speed, and the ability of servers to manage all this information are all concerns of voice technology providers.

Companies that offer these services need to run enterprise-level servers capable of storing large amounts of data and processing it at high speed. The alternative is off-premises cloud computing from third-party providers, which reduces overhead costs and increases the growth potential of your services and applications.

Alexa and Siri are complex programs, but why would they need so much space on a server? After all, they’re individual programs; how much space could they need? That’s where it becomes tricky.

According to Statista, in 2019, there were 3.25 billion active virtual assistant users globally. Forecasts say that the number will be 8 billion by the end of 2023.

The assistant adapts to the needs of each user. That essentially means that it has to adjust to a possible 3.25 billion permutations of the underlying system. The algorithm learns as it goes, so all that information must pass through the servers.

Each user expects their personal settings to be stored. So the servers must accommodate not only the new information coming in but also retain the old.

This ever-growing capacity is why popular providers run large server farms. This is where the on-premise versus cloud computing debate takes on greater meaning.


Takeaway

Without the computational advances made in AI, voice technology would not be possible. The permutations in the data alone would be too much for humans to handle.

Artificial intelligence is redefining voice-enabled applications across a variety of businesses. Voice technology is a natural fit for AI and will keep improving as machine learning grows.

The incorporation of AI-driven voice technology in the cloud can provide fast processing and improve businesses dramatically. Businesses can have voice assistants that handle customer care and simultaneously learn from those interactions, teaching themselves how to serve clients better.



How to Leverage Object Storage with Veeam Backup for Office 365

Introduction

The phoenixNAP Managed Backup for Microsoft Office 365 solution, powered by Veeam, has gained popularity amongst Managed Service Providers and Office 365 administrators in recent years.

Following the publication of our KB article, How To Install & Configure Veeam Backup For Office 365, we wanted to shed light on how one can leverage Object Storage as a target to offload bulk Office 365 backup data. Object Storage support was introduced in Veeam Backup for Office 365 v4, released in November 2019, and has significantly increased the product’s ability to offload backup data to cloud providers.

Unlike other Office 365 backup products, VBO offers the flexibility to be deployed in different scenarios: on-premises, as a hybrid cloud solution, or as a cloud service. phoenixNAP has now made it easier for Office 365 tenants to leverage Object Storage, and for MSPs to increase margins as part of their Managed Backup service offerings. Its simple deployment, lower storage cost, and ability to scale infinitely have made Veeam Backup for Office 365 a top performer amongst its peers.

In this article, we discuss the importance of backing up Office 365 data, briefly explain Object Storage architecture, and present the steps required to configure Object Storage as a backup repository for Veeam Backup for Office 365.

You may have different considerations in how the product should be configured. Nonetheless, this blog focuses on leveraging Object Storage as a backup target for Office 365 data. Since Veeam Backup for Office 365 can be hosted in many ways, this blog remains deployment-neutral, as the process required to add an Object Storage target repository is common to all deployment models.


Why Should We Backup Office 365?

A misconception that frequently surfaces when discussing Office 365 backup is the idea that, since Office 365 data resides in the Microsoft cloud, it is already being taken care of. To some extent it is: Microsoft goes a long way to keep the service highly available and provides some data retention capabilities. However, Microsoft makes it clear that, per the Shared Responsibility Model and GDPR regulation, the data owner/controller remains responsible for Office 365 data. And even if that weren’t the case, would you really want to place all the eggs in one basket?

Office 365 is not just limited to email communication via Exchange Online; it is also the service used for SharePoint Online, OneDrive, and Teams, which organizations commonly use to store important corporate data, collaborate, and support their distributed remote workforce. At phoenixNAP, we’re here to help you leverage Veeam Backup for Office 365 and assist you in:

  • Recovering from accidental deletion
  • Overcoming retention policy gaps
  • Fighting internal and external security threats
  • Meeting legal and compliance requirements

This is all the more reason to opt for Veeam Backup for Office 365 and leverage phoenixNAP Object Storage to secure and maintain a solid DRaaS strategy as part of your Data Protection Plan.


Object Storage

What is object storage?

Object Storage is a type of data storage architecture that is best used to store significant amounts of unstructured data. Whereas File Storage keeps data in a hierarchy that retains the original structure but is complex to scale and expensive to maintain, Object Storage stores data as objects, each typically made up of the data itself, a variable amount of metadata, and a unique identifier. This makes it a smart and cost-effective way to store data.
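To make the object model concrete, here is a minimal Python sketch that writes one object, with metadata, to an S3-compatible endpoint using boto3. The endpoint, bucket, key, and metadata are placeholder assumptions; this shows generic S3 usage, not VBO's internal storage format.

```python
import boto3

# Connect to an S3-compatible Object Storage endpoint (placeholder values).
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# An object = the data itself + metadata + a unique key within a bucket.
s3.put_object(
    Bucket="office365-backups",
    Key="backups/2020-05-01/mailbox-0001.blob",                # unique identifier
    Body=b"<compressed backup data>",                          # the data itself
    Metadata={"source": "exchange-online", "job": "nightly"},  # the metadata
)
```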

When using object storage, the backup data itself is transferred to and kept in Object Storage, while metadata and cache reside locally. Large chunks of data are first compressed and then saved to Object Storage; this process is handled by the Backup Proxy server and allows for a smarter way to store data. The cache is aimed at reducing expensive operations, especially reads and writes to and from object storage repositories. With its help, Veeam Explorer can open backups in Object Storage and use metadata to obtain the structure of the backup data objects, letting the end-user navigate through backup data without needing to download any of it.

In this article, we focus on how Object Storage is used as a target for VBO backups, but one must point out that, as the picture below shows, other Veeam products are also able to interface with Object Storage as a backup repository.

[Image: Veeam products interfacing with an Object Storage backup repository]

Why should we consider using it?

With the right infrastructure and continuous upkeep, Office 365 administrators and MSPs can design an on-premises Object Storage repository to directly store or offload O365 backup data as needed. But to fully achieve and consume all its benefits, Object Storage in the cloud is the ideal destination for Office 365 backups, thanks to its simpler deployment, unlimited scalability, and lower costs:

  • Simple Deployment
    As noted further down in this article, the steps required to set up an Object Storage repository in the cloud are straightforward. With a few prerequisites and proper planning, one can have this repository up and running in no time by following a simple wizard to create an Object Storage repository and present it as a backup repository.
  • Easily Scalable
    While the ability to scale and design VBO server roles as needed is already a great benefit, the ability to offload to Object Storage at a cloud provider makes handling backup data growth easier and highly redundant.
  • Lower Cost Capabilities
    An object-based architecture is the most effective way for organizations to store large amounts of data. Since it utilizes a flat architecture, it consumes disk space more efficiently, delivering a relatively low cost without the overhead of traditional file architectures. Additionally, with the help of retention policies and storage limits, VBO provides great ways to keep costs under control.

Veeam Backup for Microsoft Office 365 is licensed per user account and supports a variety of licensing options, such as Subscription or Rental licenses. To use Object Storage as a backup target, a storage account from a cloud service provider is required; other than that, feel free to start using it!

VBO Deployment Models

For the benefit of this article, we won’t dig into the various deployment models for VBO in much detail, but you ought to know about them when opting for the product.

VBO can run in on-premises, private cloud, and public cloud environments. O365 tenants have the flexibility to choose from different designs based on their current requirements and host VBO wherever they deem right. In any scenario, a local primary backup repository is required, as this will be the direct storage repository for backups. Object Storage can then be leveraged to offload bulk backup data to a cheaper and safer storage solution provided by a cloud service provider like phoenixNAP, to further achieve disaster recovery objectives and data protection.

In some instances, it might be required to run and store VBO in different infrastructures for full disaster recovery (DR) purposes. Both O365 tenants and MSPs are able to leverage the power of the cloud by collaborating with a VCSP like phoenixNAP to provide them the ability to host and store VBO into a completely different infrastructure while providing self-service restore capabilities to end-users. For MSPs, this is a great way to increase revenue by offering managed backup service plans for clients.

The prerequisites and components work very similarly in each environment, so the following Object Storage configuration is generally the same for every type of deployment.

[Image: VBO deployment models]

Configuring Object Storage in Veeam Backup for Office 365

As explained in the previous section, although there are different ways to deploy VBO, the procedure to configure and set up an Object Storage repository is much the same in every case. Hence, no specific attention will be given to a particular deployment model during the following configuration walk-through.

This section assumes that the initial configuration, listed under “Already completed” below, has been accomplished, putting you in a position to set up Object Storage as a repository, configure the local repository, secure Object Storage, and restore backup data.

Already completed:

  • Defined policy-based settings and retention requirements according to your Data Protection Plan and service costs
  • Object Storage cloud account details and credentials in hand
  • Office 365 prerequisite configurations to connect with VBO
  • Hosted and deployed VBO
  • Installed and licensed VBO
  • Created an Organization in VBO

Covered in this article:

  • Adding an S3 Compatible Object Storage Repository*
  • Adding a Local Backup Repository
  • Securing Object Storage
  • Restoring Backup Data

* When opting for Object Storage, it is a suggested best practice to set up the S3 Object Storage configuration in advance. This will come in handy when you are asked for the Object Storage repository option while adding the Local Backup Repository.

Adding S3 Compatible Object Storage Repository

Step 1. Launch New Object Storage Repository Wizard

Right-click Object Storage Repositories, select Add object storage.

Step 2. Specify Object Storage Repository Name

Enter a Name for the Object Storage Repository and optionally a Description. Click Next.

Step 3. Select Object Storage Type

On the new Object storage type page, select S3 Compatible (phoenixNAP compatible). Click Next.

Step 4. Specify Object Storage Service Point and Account

Specify the Service Point and the Datacenter region. Click Add to specify the credentials to connect with your cloud account.

If you already have a credentials record that was configured beforehand, select the record from the drop-down list. Otherwise, click Add and provide your access and secret keys, as described in Adding S3-Compatible Access Key. You can also click Manage cloud accounts to manage existing credentials records.

Enter the Access key, the Secret key, and a Description. Click OK to confirm.

Step 5. Specify Object Storage Bucket

Finalize by selecting the Bucket to use, then click Browse to specify the folder in which to store the backups. Click New folder to create a new folder, and click OK to confirm.

Clicking Advanced lets you specify a storage consumption soft limit to keep costs under control; this acts as the global retention storage policy for Object Storage. As a best practice, set this value lower than the Object Storage amount you’re entitled to from the cloud provider, to leave room for additional service data.

Click OK followed by Finish.

Adding Local Backup Repository

Step 1. Launch New Backup Repository Wizard

Open the Backup Infrastructure view.

In the inventory pane, select the Backup Repositories node.

On the Backup Repository tab, click Add Repository on the ribbon.

Alternatively, in the inventory pane, right-click the Backup Repositories node and select Add backup repository.

Step 2. Specify Backup Repository Name

Specify Backup Repository Name and Description then click Next.

Step 3. Specify Backup Proxy Server

When planning to extend a backup repository with object storage, this directory will only include a cache consisting of metadata. The actual data will be compressed and backed up directly to object storage that you specify in the next step.

Specify the Backup Proxy to use and the Path to the location to store the backups. Click Next.

Step 4. Specify Object Storage Repository

At this step of the wizard, you can optionally extend a backup repository with object storage to back up data directly to the cloud.

To extend a backup repository with object storage, do the following:

  1. Select the Offload backup data to the object storage checkbox.
  2. In the drop-down list, select an object storage repository to which you want to offload your data.
    Make sure that an object storage repository has been added to your environment in advance. Otherwise, click Add and follow the steps of the wizard, as described in Adding Object Storage Repositories.
  3. To offload data in encrypted form, select Encrypt data uploaded to object storage and provide a password.

Step 5. Specify Retention Policy Settings

At this step of the wizard, specify retention policy settings.

Depending on how retention policies are configured, obsolete restore points are automatically removed from Object Storage by VBO. A service task calculates the age of offloaded restore points; when that age exceeds the specified retention period, the task automatically purges the obsolete restore points from Object Storage.
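The gist of such a clean-up task can be sketched in a few lines of Python. This is a simplified illustration of age-based retention with hypothetical restore points, not VBO's actual service code.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # example: keep restore points for one year

# Hypothetical offloaded restore points and their creation times.
restore_points = {
    "rp-2019-01-15": datetime(2019, 1, 15),
    "rp-2020-04-01": datetime(2020, 4, 1),
}

def obsolete(points, now):
    """Return the restore points whose age exceeds the retention period."""
    return [name for name, created in points.items() if now - created > RETENTION]

for name in obsolete(restore_points, now=datetime(2020, 5, 1)):
    del restore_points[name]  # a real task would also delete the stored objects
    print("purged", name)
```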

  • In the Retention policy drop-down list, specify how long your data should be stored in a backup repository.
  • Choose a retention type:
    • Item-level retention. Select this type if you want to keep an item until its creation time or last modification time is within the retention coverage.
    • Snapshot-based retention. Select this type if you want to keep an item until its latest restore point is within the retention coverage.
  • Click Advanced to specify when to apply a retention policy. You can select to apply it on a daily or monthly basis. For more information, see Configuring Advanced Settings.

Configuring Advanced Settings

After you click Advanced, the Advanced Settings dialog appears in which you can select either of the following options:

  • Daily at:
    Select this option if you want a retention policy to be applied on a daily basis and choose the time and day.
  • Monthly at:
    Select this option if you want a retention policy to be applied on a monthly basis and choose the time and day, which can be the first, second, third, fourth or even the last one in the month.

Securing Object Storage

To ensure backup data is kept safe from any possible vulnerabilities, one must secure the backup application itself and its communication channels. Veeam has made this possible by continuously implementing key security measures to address and mitigate possible threats, while providing some great security functionality for interfacing with Object Storage.

VBO v4 provides the same level of protection for your data regardless of the deployment model used. Communications between VBO components are always encrypted, and all communication between Microsoft Office 365 and VBO is encrypted by default. When using object storage, data can additionally be protected with optional encryption at rest.

VBO v4 also introduces a Cloud Credential Manager which lets us create and maintain a solid list of credentials provided by any of the Cloud Service Providers. These records allow us to connect with the Object Storage provider to store and offload backup data. Credentials will consist of access and secret keys and work with any S3-Compatible Object Storage.

Password Manager lets us manage encryption passwords with ease. One can create passwords to protect encryption keys that are used to encrypt data being transferred to object storage repositories. To encrypt data, VBO uses the AES-256 specification.
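For a sense of what AES-256 looks like in practice, here is a minimal Python sketch using the `cryptography` package's AES-GCM primitive. It illustrates the specification VBO uses, not Veeam's actual implementation, and the key handling is deliberately simplified.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # a 256-bit AES key
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must be unique for every encryption with this key
plaintext = b"backup block destined for an object storage repository"

# Encrypt before upload: only ciphertext leaves the local environment.
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# A restore decrypts with the same key and nonce.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```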

Watch one of our experts speak about the importance of Keeping a Tight Grip on Office 365 Security While Working Remotely.

Restoring from Object Storage

Restoring backup data from Object Storage is just as easy as restoring from any traditional storage repository. As explained earlier in this article, Veeam Explorer is the tool used to open and navigate through backups without the need to download any data.

Veeam Explorer uses metadata to obtain the structure of the backup data objects; once backup data has been identified for restore, you may choose any of the available restore options as required. When leveraging Object Storage in the cloud, one can also host Veeam Explorer locally and use it to restore Office 365 backup data from the cloud.

Where Does phoenixNAP Come into Play?

For more information, please look at our product pages and use the form to request additional details or send an e-mail to sales@phoenixnap.com 

 

Abbreviations Table

DRaaS: Disaster Recovery as a Service
GDPR: General Data Protection Regulation
MSP: Managed Service Provider
O365: Microsoft Office 365
VBO: Veeam Backup for Office 365
VCC: Veeam Cloud Connect
VCSP: Veeam Cloud & Service Provider



Test Driven vs Behavior Driven Development: Key Differences

Test-driven development (TDD) and Behavior-driven development (BDD) are both test-first approaches to Software Development. They share common concepts and paradigms, rooted in the same philosophies. In this article, we will highlight the commonalities, differences, pros, and cons of both approaches.

What is Test-driven development (TDD)?

Test-driven development (TDD) is a software development process that relies on the repetition of a short development cycle: requirements turn into very specific test cases. The code is written to make the test pass. Finally, the code is refactored and improved to ensure code quality and eliminate any technical debt. This cycle is well-known as the Red-Green-Refactor cycle.

What is Behavior-driven development (BDD)?

Behavior-driven development (BDD) is a software development process that encourages collaboration among all parties involved in a project’s delivery. It encourages the definition and formalization of a system’s behavior in a common language understood by all parties and uses this definition as the seed for a TDD based process.

[Image: Diagram comparing Test-Driven Development and Behavior-Driven Development]

Key Differences Between TDD and BDD

  • Focus. TDD: Delivery of a functional feature. BDD: Delivering on expected system behavior.
  • Approach. TDD: Bottom-up or top-down (Acceptance-Test-Driven Development). BDD: Top-down.
  • Starting Point. TDD: A test case. BDD: A user story/scenario.
  • Participants. TDD: Technical team. BDD: All team members, including the client.
  • Language. TDD: Programming language. BDD: Lingua franca.
  • Process. TDD: Lean, iterative. BDD: Lean, iterative.
  • Delivers. TDD: A functioning system that meets our test criteria. BDD: A system that behaves as expected, plus a test suite that describes the system’s behavior in common human language.
  • Avoids. TDD: Over-engineering, low test coverage, and low-value tests. BDD: Deviation from intended system behavior.
  • Brittleness. TDD: A change in implementation can result in changes to the test suite. BDD: The test suite only needs to change if the system behavior is required to change.
  • Difficulty of Implementation. TDD: Relatively simple for bottom-up, more difficult for top-down. BDD: A bigger learning curve for all parties involved.

Test-Driven Development (TDD)

In TDD, we have the well-known Red-Green-Refactor cycle. We start with a failing test (red) and implement as little code as necessary to make it pass (green). This process is also known as Test-First Development. TDD also adds a Refactor stage, which is equally important to overall success.

The TDD approach was discovered (or perhaps rediscovered) by Kent Beck, one of the pioneers of Unit Testing and later TDD, Agile Software Development, and eventually Extreme Programming.

The diagram below does an excellent job of giving an easily digestible overview of the process. However, the beauty is in the details. Before delving into each individual stage, we must also discuss two high-level approaches towards TDD, namely bottom-up and top-down TDD.

Figure 1: TDD’s Red-Green-Refactor Cycle

 

Bottom-Up TDD

The idea behind Bottom-Up TDD, also known as Inside-Out TDD, is to build functionality iteratively, focusing on one entity at a time, solidifying its behavior before moving on to other entities and other layers.

We start by writing unit-level tests and proceeding with their implementation, then move on to writing higher-level tests that aggregate the functionality of the lower-level ones, create an implementation of said aggregate test, and so on. By building up, layer by layer, we eventually reach a stage where the aggregate test is an acceptance-level test, one that hopefully falls in line with the requested functionality. This makes Bottom-Up TDD a highly developer-centric approach, mainly intended to make the developer’s life easier.

Pros:

  • Focus is on one functional entity at a time
  • Functional entities are easy to identify
  • High-level vision not required to start
  • Helps parallelization

Cons:

  • Delays the integration stage
  • The amount of behavior an entity needs to expose is unclear
  • High risk of entities not interacting correctly with each other, thus requiring refactors
  • Business logic possibly spread across multiple entities, making it unclear and difficult to test

Top-Down TDD

Top-Down TDD is also known as Outside-In TDD or Acceptance-Test-Driven Development (ATDD). It takes the opposite approach: we start building the system iteratively, adding more detail to the implementation and breaking it down into smaller entities as refactoring opportunities become evident.

We start by writing an acceptance-level test and proceed with a minimal implementation, also done incrementally. Before creating any new entity or method, a test at the appropriate level must come first. We thus iteratively refine the solution until it solves the problem that kicked off the whole exercise: the acceptance test.

This makes Top-Down TDD a more business/customer-centric approach. It is more challenging to get right, as it relies heavily on good communication between the customer and the team, and it requires good citizenship from the developer, since each next iterative step needs careful consideration. The process speeds up over time, but it does have a learning curve. However, the benefits far outweigh any negatives: collaboration between customer and team takes center stage, and the result is a system with very well-defined behavior, clearly defined flows, a focus on integrating first, and a very predictable workflow and outcome.

Pros:

  • Focus is on one user-requested scenario at a time
  • Flow is easy to identify
  • Focus is on integration rather than implementation details
  • The amount of behavior an entity needs to expose is clear
  • User requirements, system design, and implementation details are all clearly reflected in the test suite
  • Predictable

Cons:

  • Critical to get the acceptance test right, thus requiring collaborative discussion between business/user/customer and team
  • Relies on stubbing, mocking, and/or test doubles
  • Slower start, as the flow is identified through multiple iterations
  • More limited parallelization opportunities until a skeleton system starts to emerge

The Red-Green-Refactor Life Cycle

Armed with the above-discussed high-level vision of how we can approach TDD, we are free to delve deeper into the three core stages of the Red-Green-Refactor flow.

Red

We start by writing a single test, execute it (thus having it fail) and only then move to the implementation of that test. Writing the correct test is crucial here, as is agreeing on the layer of testing that we are trying to achieve. Will this be an acceptance level test or a unit level test? This choice is the chief delineation between bottom-up and top-down TDD.

Green

During the Green-stage, we must create an implementation to make the test defined in the Red stage pass. The implementation should be the most minimal implementation possible, making the test pass and nothing more. Run the test and watch it pass.

Creating the most minimal implementation possible is often the challenge here as a developer may be inclined, through force of habit, to embellish the implementation right off the bat. This result is undesirable as it will create technical baggage that, over time, will make refactoring more expensive and potentially skew the system based on refactoring cost. By keeping each implementation step as small as possible, we further highlight the iterative nature of the process we are trying to implement. This feature is what will grant us agility.

Another key aspect is that the Red-stage, i.e., the tests, is what drives the Green-stage. There should be no implementation that is not driven by a very specific test. If we are following a bottom-up approach, this pretty much comes naturally. However, if we’re adopting a top-down approach, then we must be a bit more conscientious and make sure to create further tests as the implementation takes shape, thus moving from acceptance level tests to unit-level tests.

Refactor

The Refactor-stage is the third pillar of TDD. Here the objective is to revisit and improve on the implementation. The implementation is optimized, code quality is improved, and redundancy eliminated.

Refactoring can have a negative connotation for many, being perceived as a pure cost, fixing something improperly done the first time around. This perception originates in more traditional workflows where refactoring is primarily done only when necessary, typically when the amount of technical baggage reaches untenable levels, thus resulting in a lengthy, expensive, refactoring effort.

Here, however, refactoring is an intrinsic part of the workflow and is performed iteratively. This flexibility dramatically reduces the cost of refactoring. The code is not entirely reworked. Instead, it is slowly evolving. Moreover, the refactored code is, by definition, covered by a test. A test that has already passed in a previous iteration of the code. Thus, refactoring can be done with confidence, resulting in further speed-up. Moreover, this iterative approach to improvement of the codebase allows for emergent design, which drastically reduces the risk of over-engineering the problem.

It is of critical importance that behavior should not change, and we do not add extra functionality during the Refactor-stage. This process allows refactoring to be done with extreme confidence and agility as the relevant code is, by definition, already covered by a test.
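As a minimal illustration of one pass through the cycle, consider the following Python sketch with pytest-style assertions. The add function and its test are invented for this example; the point is the order of the steps, marked in the comments.

```python
# Red: write a single failing test first. At this point add() does not
# exist yet, so running the test suite fails.
def test_add():
    assert add(2, 3) == 5

# Green: the most minimal implementation that makes the test pass,
# and nothing more.
def add(a, b):
    return a + b

# Refactor: improve the implementation without changing behavior.
# The already-passing test keeps us honest; it must still pass afterwards.
def add(*numbers):  # generalized to any number of operands
    return sum(numbers)
```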

[Image: The testing pyramid, with manual session testing at the top]

Behavior-Driven Development (BDD)

As previously discussed, TDD (or bottom-up TDD) is a developer-centric approach aimed at producing a better code-base and a better test suite, while ATDD is more customer-centric and aimed at producing a better solution overall. We can consider Behavior-Driven Development the next logical progression from ATDD. Dan North’s experiences with TDD and ATDD led him to propose the BDD concept, the idea being to bring together the best aspects of TDD and ATDD while eliminating the pain points he had identified in the two approaches. What he identified was that descriptive test names were helpful and that testing behavior was much more valuable than functional testing.

Dan North does a great job of succinctly describing BDD as “Using examples at multiple levels to create shared understanding and surface certainty to deliver software that matters.”

Some key points here:

  • What we care about is the system’s behavior
  • It is much more valuable to test behavior than to test the specific functional implementation details
  • Use a common language/notation to develop a shared understanding of the expected and existing behavior across domain experts, developers, testers, stakeholders, etc.
  • We achieve Surface Certainty when everyone can understand the behavior of the system, what has already been implemented and what is being implemented and the system is guaranteed to satisfy the described behaviors

BDD puts the onus even more on fruitful collaboration between the customer and the team. It becomes even more critical to define the system’s behavior correctly, thus resulting in the correct behavioral tests. A common pitfall here is to make assumptions about how the system will go about implementing a behavior. The result is a test tainted with implementation detail, making it a functional test and not a real behavioral test. This error is something we want to avoid.

The value of a behavioral test is that it tests the system without caring how it achieves its results. This means a behavioral test should not change over time, unless the behavior itself needs to change as part of a feature request. The cost-benefit over functional testing is significant, as functional tests are often so tightly coupled with the implementation that a refactor of the code involves a refactor of the test as well.

However, the more substantial benefit is the retention of Surface Certainty. In a functional test, a code-refactor may also require a test-refactor, inevitably resulting in a loss of confidence. Should the test fail, we are not sure what the cause might be: the code, the test, or both. Even if the test passes, we cannot be confident that the previous behavior has been retained. All we know is that the test matches the implementation. This result is of low value because, ultimately, what the customer cares about is the behavior of the system. Thus, it is the behavior of the system that we need to test and guarantee.

A BDD-based approach should result in full test coverage where the behavioral tests fully describe the system’s behavior to all parties using a common language. Contrast this with functional testing, where even full coverage gives no guarantee that the system satisfies the customer’s needs, and where the risk and cost of refactoring the test suite itself only grow with more coverage. Of course, working top-down from behavioral tests to more functional tests will give the Surface Certainty benefits of behavioral testing plus the developer-focused benefits of functional testing, while curbing the cost and risk of functional tests, since they’re only used where appropriate.

In comparing TDD and BDD directly, the main changes are that:

  • The decision of what to test is simplified; we need to test the behavior
  • We leverage a common language which short-circuits another layer of communication and streamlines the effort; the user stories as defined by the stakeholders are the test cases

An ecosystem of frameworks and tools has emerged to allow common-language-based collaboration across teams, and to integrate and execute such behavior specifications as tests using industry-standard tooling. Examples include Cucumber, JBehave, and Fitnesse, to name a few.
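As a small, hedged illustration of the style these tools enable, here is a sketch using Python's behave framework; the scenario and step implementations are invented for this example. In Cucumber or JBehave, the same scenario text would bind to Ruby or Java steps instead.

```python
# features/steps/login_steps.py -- hypothetical step definitions for behave.
#
# The matching feature file (features/login.feature) would read:
#
#   Feature: Account access
#     Scenario: Registered user logs in
#       Given a registered user "alice"
#       When she logs in with the correct password
#       Then she sees her dashboard
#
from behave import given, when, then

@given('a registered user "{name}"')
def step_registered_user(context, name):
    context.user = {"name": name, "password": "correct-horse"}

@when("she logs in with the correct password")
def step_login(context):
    # Asserts behavior (login succeeds), not how the system implements it.
    context.logged_in = context.user["password"] == "correct-horse"

@then("she sees her dashboard")
def step_dashboard(context):
    assert context.logged_in
```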


The Right Tool for the Job

As we have seen, TDD and BDD are not really in direct competition with each other. Consider BDD as a further evolution of TDD and ATDD, which brings more of a Customer-focus and further emphasizes communication between the customer and the Technical team at all stages of the process. The result of this is a system that behaves as expected by all parties involved, together with a test suite describing the entirety of the system’s many behaviors in a human-readable fashion that everyone has access to and can easily understand. This system, in turn, provides a very high level of confidence in not only the implemented system but in future changes, refactors, and maintenance of the system.

At the same time, BDD is based heavily on the TDD process, with a few key changes. While the customer or particular members of the team may primarily be involved with the top-most level of the system, other team members like developers and QA engineers would organically shift from a BDD to a TDD model as they work their way in a top-down fashion.

We expect the following key benefits:

  • Bringing pain forward
  • Onus on collaboration between customer and team
  • A common language shared between customer and team, leading to shared understanding
  • Imposes a lean, iterative process
  • Guarantee the delivery of software that not only works but works as defined
  • Avoid over-engineering through emergent design, thus achieving desired results via the most minimal solution possible
  • Surface Certainty allows for fast and confident code refactors
  • Tests have innate value vs. creating tests simply to meet an arbitrary code coverage threshold
  • Tests are living documentation that fully describes the behavior of the system

There are also scenarios where BDD might not be a suitable option, for example when the system in question is very technical and perhaps not customer-facing at all. In such cases, the requirements are bound more tightly to functionality than to behavior, making TDD a possibly better fit.

Adopting TDD or BDD?

Ultimately, the question should not be whether to adopt TDD or BDD, but which approach is best for the task at hand. Quite often, the answer to that question will be both. As more people are involved in more significant projects, it will become self-evident that both approaches are needed at different levels and at various times throughout the project’s lifecycle. TDD will give structure and confidence to the technical team, while BDD will facilitate and emphasize communication between all involved parties and ultimately deliver a product that meets the customer’s expectations and offers the Surface Certainty required to confidently evolve the product in the future.

As is often the case, there is no magic bullet here. What we have instead is a couple of very valid approaches. Knowledge of both will allow teams to determine the best method based on the needs of the project. Further experience and fluidity of execution will enable the team to use all the tools in its toolbox as the need arises throughout the project’s lifecycle, thus achieving the best possible business outcome. To find out how this applies to your business, talk to one of our experts today.



Why Carrier-Neutral Data Centers are Key to Reduce WAN Costs

Every year, the telecom industry invests hundreds of billions in network expansion, and that spending is set to rise by 2%-4% in 2020. Not surprisingly, the outcome is predictable: bandwidth prices keep falling.

As Telegeography reported, several factors accelerated this phenomenon in recent years. Major cloud providers like Google, Amazon, Microsoft, and Facebook have altered the industry by building their own massive global fiber capacity while scaling back their purchases from telecom carriers. These companies have simultaneously driven global fiber supply up and demand down. Technology advances, like 100 Gbps bit rates, have also contributed to the persistent erosion of costs.

The result is bandwidth prices that have never been lower. And the advent of Software-Defined WAN (SD-WAN) makes it simpler than ever to prioritize traffic between costly private networks and cheaper Internet bandwidth.


This period should be the best of times for enterprise network architects, but that is not necessarily the case.

Many factors conspire against buyers who seek to lower costs for the corporate WAN, including:

  • Telecom contracts that are typically long-term and inflexible
  • Competition that is often limited to a handful of major carriers
  • Few choices for local access and Internet at corporate locations
  • The tremendous effort required to change providers, meaning incumbents have all the leverage

The largest telcos, companies like AT&T and Verizon, are trapped by their high prices. Protecting their revenue base makes these companies reluctant adopters of SD-WAN and Internet-based solutions.

So how can organizations drive down spending on the corporate WAN, while boosting performance?

As in most markets, the essential answer is: Competition.

The most competitive marketplaces for telecom services in the world are Carrier-Neutral Data Centers (CNDCs). Think about all the choices: long-haul networks, local access, Internet providers, storage, compute, SaaS, etc. CNDCs offer a wide array of networking options, and the carriers realize that competitors are just a cross-connect away.

How much savings are available? Enough to make it worthwhile for many large regional, national, and global companies. In one report, Forrester interviewed customers of Equinix, the largest retail colocation company, and found that they saved an average of 40% on bandwidth costs and reduced cloud connectivity and network traffic costs by 60%-70%.

The key is to leverage CNDCs as regional network hubs, rather than the traditional model of hubbing connectivity out of internal corporate data centers.

CNDCs like to remind the market that they offer much more than racks and power; these sites can offer performance benefits as well. Internet connectivity is often superior, and many CNDCs offer private cloud gateways that improve latency and security.

But the cost savings alone should be enough to justify most deployments. To see how you can benefit, contact one of our experts today.



Extend Your Development Workstation with Vagrant & Ansible

The mention of Vagrant in the title might have led you to believe that this is yet another article about the power of sharing application environments as one does with code, or about how Vagrant is a great facilitator of that approach. However, plenty of content about that topic already exists, and by now its benefits are widely known. Instead, we will describe our experience in putting Vagrant to use in a somewhat unusual way.

A Novel Idea

The idea is to extend a developer workstation running Windows to support running a Linux kernel in a VM, and to make the bridge between the two as seamless as possible. Our motivation was to eliminate certain pain points or restrictions in development brought about by the choice of OS for the developer’s local workstation, be it a requirement at an organizational level, regulatory enforcement, or anything else that might or might not be under the developer’s control.

This was not the only approach we evaluated: we also considered shifting work entirely to a guest OS on a VM, using Docker containers, and leveraging Cygwin. And yes, the possibility of replacing the host OS was also challenged. However, we found that the way the technologies came together in this approach can be quite powerful.

We’ll take this opportunity to communicate some of the lessons learned and limitations of the approach and share some ideas of how certain problems can be solved.


Why Vagrant?

The problem we were trying to solve, and the concept of how we tried to solve it, does not necessarily depend on Vagrant. In fact, the idea was based on having a virtual machine (VM) deployed on a local hypervisor. Running the VM locally might seem dubious at first thought; however, as we found out, it gives us certain advantages that allow us to create a better experience for the developer by creating an extension to the workstation.

We opted for VirtualBox as a virtualization provider, primarily because of our familiarity with the tool, and this is where Vagrant comes into play. Vagrant is one of the tools that make up the open-source HashiCorp Suite, which is aimed at solving the different challenges in automating infrastructure provisioning.

In particular, Vagrant is concerned with managing VM environments in the development phase. (Note: for production environments, there are other tools in the same suite that are more suitable for the job, most notably Terraform and Packer.) Vagrant is based on configuration as code, which implies that an environment can be easily shared between team members and that changes are version-controlled and easily tracked, making the resultant product (the environment) consistently repeatable. Vagrant is opinionated, so declaring an environment and its configuration becomes concise, which makes it easy to write and understand.

Why Ansible?

After settling on Vagrant for our solution and enjoying the automated production of the VM, the next step was to find a way to provision that VM in a manner that marries the principles advertised by Vagrant.

We do not recommend having Vagrant spin up the VMs in an environment and then manually installing and configuring your system’s dependencies. In Vagrant, provisioners are core, and there are plenty from which you can choose. In our case, as long as our provisioning remained simple, we stuck with Shell (Vagrant simply uploads scripts to the guest OS and executes them).

Soon after, it became obvious that this approach would not scale well, with the scripts becoming too verbose. The biggest pain point was that developers needed to write scripts in a way that favored idempotency: steps are commonly added to the configuration over time, and re-provisioning everything from scratch each time would be overkill.

At this point, we decided to use Ansible. Ansible, by RedHat, is another open-source automation tool, built around the idea of managing the execution of plays using a playbook, where a play can be thought of as a list of tasks mapped against a group of hosts in an environment.

These plays should ideally be idempotent, although that is not always possible, and again the entire configuration one would write is declared as code in YAML. The biggest win of this strategy is that the heavy lifting is done by the community, which provides Ansible Modules: configurable Python scripts that perform specific tasks, for virtually anything one might want to do. Installing dependencies and configuring the guest according to industry standards becomes very easy and concise, without requiring the developer to go into the nitty-gritty details, since modules are in general highly opinionated. All of these concepts combine perfectly with the principles of Vagrant, and integration between the two works like a charm.

There was one major challenge to overcome in setting up the two to work together. Our host machine runs Windows, and although Ansible is adding more support for managing Windows targets with time, it simply does not run from a Windows control machine. This leaves us with two options: having a further environment which can act as the Ansible controller or the simpler approach of having the guest VM running Ansible to provision itself.

The drawback of this approach is that it pollutes the target environment. We were willing to compromise on this, as the alternative was cumbersome. Vagrant lets you achieve it by simply replacing the provisioner identifier: changing ansible to ansible_local automatically installs the required Ansible binaries and dependencies on the guest for you.


File Sharing

One of the cornerstones we wanted to achieve was the possibility of making the local workspace available from within the guest OS, so that the tooling which makes up a working environment is readily available to run builds inside the guest. There are plenty of options for solving this problem, and they vary depending on the use case. The simplest approach is to rely on VirtualBox’s file-sharing functionality, which gives near-instant, two-way syncing, and setting it up is a one-liner in the VagrantFile.

The main objective here was to share code repositories with the guest, but it can also come in handy to replicate configuration for some of the other tooling. For instance, one might find it useful to configure file sharing for Maven’s user settings file, the entire local repository, local certificates for authentication, etc.

Port Forwarding

VirtualBox’s networking options were a powerful ally for us. There are a number of options for creating private networks (when you have more than one VM) or exposing the VM on the same network as the host. It was sufficient for us to rely on a host-only network (i.e., the VM is reachable only from the host) and then have a number of ports configured for forwarding through simple NAT.

The major benefit of this is that you do not need to keep changing software configuration depending on whether it is executing locally or inside the guest. All of this can be achieved in Vagrant by writing one line of configuration code, and the NATting can be configured in either direction (host to guest or guest to host).

Bringing it together

Having defined the foundation for our solution, let’s now briefly go through what we needed to implement all of this. You will see that for the most part, it requires minimal configuration to reach our target.

The first part of the puzzle is the Vagrantfile, in which we define the base image for the guest OS (we went with CentOS 7), the resources we want to allocate (memory, vCPUs, storage), file shares, networking details, and provisioning.

Figure 1: File structure of the solution
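The gist of such a Vagrantfile might look like the sketch below. The box name, resource sizes, share paths, and ports are illustrative assumptions rather than our exact configuration.

```ruby
# Vagrantfile (sketch): CentOS 7 guest with a file share and port forwarding.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096  # resources allocated to the guest
    vb.cpus   = 2
  end

  # Two-way VirtualBox file share: the host workspace visible in the guest.
  config.vm.synced_folder "C:/Workspace", "/vagrant/workspace"

  # Host-only network plus simple NAT port forwarding (host to guest).
  config.vm.network "private_network", type: "dhcp"
  config.vm.network "forwarded_port", guest: 8080, host: 8080
end
```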

Note that the Vagrant plugin `vagrant-vbguest` was useful to automatically determine the appropriate version of VirtualBox’s Guest Additions binaries for the specified guest OS and install them. We also opted to configure Vagrant to prefer the binaries bundled within itself for functionality such as SSH (VAGRANT_PREFER_SYSTEM_BIN set to 0), rather than rely on the software already installed on the host. We found that this allowed for a simpler and more repeatable setup process.

The second major part of the work was integrating Ansible to provision the VM. For this we opted to leverage Vagrant’s ansible_local that works by installing Ansible in the guest on the fly and running provisioning locally.

Now, all that is required is to provide an Ansible playbook.yml file and here one would define any particular configuration or software that needs to be set up on the guest OS.

Figure 2: Configuration of Ansible as provisioner in the VagrantFile

We went a step further and leveraged third-party Ansible roles instead of reinventing the wheel and having to deal with the development and ongoing maintenance costs.

Ansible Galaxy is an online repository of such roles made available by the community; you install them by means of the ansible-galaxy command.

Since Vagrant abstracts away the installation and invocation of Ansible, we need to rely on Vagrant to make sure these roles are installed and made available when executing the playbook. This is achieved through the galaxy_command parameter. The most elegant way is to provide a requirements.yml file with the list of roles needed and have it passed to the ansible-galaxy command. Finally, we need to make sure that the Ansible files are made available to the guest OS through a file share (by default, the directory of the VagrantFile is shared) and that the paths to them are relative to /vagrant.
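Pulling the last few paragraphs together, the provisioner block might look roughly like this; the file names are assumptions for illustration:

```ruby
# Vagrantfile excerpt (sketch): Ansible installed and run inside the guest.
config.vm.provision "ansible_local" do |ansible|
  ansible.playbook         = "playbook.yml"      # path relative to /vagrant
  ansible.galaxy_role_file = "requirements.yml"  # third-party roles to install
  # ansible.galaxy_command can be overridden if the default
  # ansible-galaxy invocation needs tweaking.
end
```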

Building a seamless experience…BAT to the rescue

We were pursuing a solution that makes it as easy as possible to jump from working locally to working inside the VM. If possible, we also wanted to be able to make this switch without having to move through different windows.

For this reason, we wrote a couple of utility batch scripts that made the process much easier. We wanted to leverage the fact that our entire workspace directory was synced with the guest VM. This allowed us to infer the path in the workspace on the guest from the current location on the host. For example, if on our host we are at C:\Workspace\ProjectX and the workspace is mapped to /vagrant/workspace, then we wanted the ability to easily run a command in /vagrant/workspace/ProjectX without having to jump through hoops.

To do this we placed a script on our path that would take a command and execute it in the appropriate directory using Vagrant’s command flag. The great thing about this trick is that it allowed us to trigger builds on the guest with Maven through the IDE by specifying a custom build command.

Figure 3: Illustrating how the path is resolved on the guest

Figure 4: Running a command in the guest against files in the local workspace
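A rough sketch of such a helper is shown below, using the hypothetical share mapping from the example above.

```bat
@echo off
rem runvm.cmd (sketch): run a command inside the VM, in the guest directory
rem corresponding to the current Windows directory.
set "GUEST_PATH=%CD%"
rem Map the workspace share, then flip backslashes to forward slashes.
set "GUEST_PATH=%GUEST_PATH:C:\Workspace=/vagrant/workspace%"
set "GUEST_PATH=%GUEST_PATH:\=/%"
rem Run the given command (all arguments) in that directory over SSH.
vagrant ssh -c "cd %GUEST_PATH% && %*"
```

Invoked from C:\Workspace\ProjectX, `runvm.cmd mvn clean install` would run the Maven build in /vagrant/workspace/ProjectX inside the guest.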

We also added the ability for the same script to SSH into the VM directly in the path corresponding to the current location on the host. To do this, on VM provisioning we set up a file share that lets us sync the bashrc in the vagrant user’s home folder, which allows us to cd into the desired path (derived on the fly) on the guest upon login.

Finally, since a good developer is an efficient developer, we also wanted the ability to manage the VM from anywhere, so that if, for instance, we have not yet launched the VM, we do not need to keep navigating to the directory hosting the VagrantFile.

This is standard Vagrant functionality, made possible by setting the %VAGRANT_CWD% variable. What we added on top is the ability to define it permanently in a dedicated user variable and to set it up only when we want to manage this particular environment.

Figure 5: Spinning up the VM from an arbitrary path
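
As a sketch, assuming the VagrantFile lives in C:\Workspace\vm-env (a hypothetical path), this could look as follows:

REM One-time: store the environment's location in a dedicated user variable
setx DEV_VM_HOME "C:\Workspace\vm-env"

REM In any new shell, point Vagrant at it only when managing this environment
set "VAGRANT_CWD=%DEV_VM_HOME%"
vagrant up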

File I/O performance

In the course of testing out the solution, we encountered a few limitations that we think are relevant to mention.

The problems revolved around the file-sharing mechanism. Although there are a number of options available, the approach might not be a fit for situations that require intensive file I/O. We first tried to set up a plain VirtualBox file share, and this was a good starting point: it works without requiring much configuration and syncs two ways instantaneously, which is great in most cases.

We hit the first wall as soon as we tried running a front-end build using NPM, which relies on creating soft links for common dependency packages. Soft-linking requires a specific privilege to be granted on the Windows host and, even then, it does not work very well. We tried to get around the issue by using rsync, which by default only syncs changes in one direction and runs on demand. There are ways to make it poll for changes, and bi-directionality could theoretically be set up by configuring each direction separately.

However, this creates a race condition with the risk of having changes reversed or lost. Another option, SMB shares, required a bit more work to set up and ultimately was not performant enough for our needs.

In the end, we found a way to make the NPM build run without using soft links, which allowed us to revert to the native VirtualBox file share. The first caveat was that this required changes in our source-code repository, which is not ideal. Also, due to the huge number of dependencies involved in a typical NPM-based front-end build, the intense file I/O was causing locks on the file share, slowing down performance.


Conclusions

The aim was to extend a workstation running Windows by also running a Linux kernel, making it as easy as possible to manage and switch between working in either environment. The end result of our efforts turned out to be a very convenient solution in certain situations.

Our setup was particularly helpful when you need to run applications in an environment similar to production, or when you want to run certain development tooling that is easier to install and configure on a Linux host. We have shown how, with the help of tools like Vagrant and Ansible, it is easy to create a setup that can be shared and recreated consistently, whilst keeping the configuration concise.

From a performance point of view, the solution worked very well for computationally demanding tasks. However, the same cannot be said for situations requiring intensive file I/O, due to the overhead of synchronization.

For more expert insights, check out the rest of our knowledge base, and bookmark the site to stay updated weekly.



38 Cyber Security Conferences to Attend in 2020

Global cybersecurity safeguards the infrastructure of enterprises and economies, protecting the prosperity and well-being of citizens worldwide. As IoT (Internet of Things) devices expand rapidly and the connectivity and usage of cloud services increase, cyber-related incidents such as hacking, data breaches, and infrastructure tampering become common.

Global cybersecurity conferences are a chance for stakeholders to address these issues and formulate strategies to defend against attacks and disseminate knowledge on new cybersecurity policies and procedures.

Benefits of Attending a Cyber Security Conference in 2020:

  • Networking with peers
  • Education on new technologies
  • Outreach
  • New strategies
  • Pricing information
  • Giving back and knowledge-sharing
  • Discovering new talent
  • Case studies

Here is a list of the top 38 cybersecurity conferences to attend in 2020. As conference details for later in the year become confirmed, bookmark the page and check back for the latest info.


1. National Privacy and Data Governance Congress

The National Privacy and Data Governance Congress is an occasion to explore critical problems at the intersection of privacy, law, security, access, and data governance. The event brings together specialists from academia, industry, and government who are involved with compliance, data governance, privacy, security, and access within organizations.

The conference is extensive but provides ample time for delegates to ask questions, receive candid responses, and participate in meaningful discussions with hosts, peers, and decision-makers.

2. NextGen SCADA Europe 2020

The 6th Annual NextGen SCADA Europe exhibition and networking conference is back due to high demand. It is a dedicated forum providing the content depth and networking emphasis you need to make critical decisions, such as upgrading your SCADA infrastructure to better meet the needs of the digital grid.

Over three intensive days, you can take part in 20+ utility case studies covering critical subjects such as integration, system architecture, cybersecurity, and functionality. Enroll now to gain insight into why these studies are the newest buzz in cyber circles.

3. Sans Security East

  • Date: February 1 - 8, 2020
  • Location: New Orleans, Louisiana, United States
  • Cost: Different prices for different courses. Most courses cost $7,020. An online course is available.
  • https://www.sans.org/event/security-east-2020

Jump-start the New Year with one of the first training seminars of 2020. Attend SANS Security East 2020 in New Orleans for an opportunity to learn new cybersecurity best practices from the world's top experts. This training experience is designed to help you advance your career.

SANS' training is unmatched in the industry. The organization provides passionate instructors who are prominent industry specialists and practitioners, and their applied knowledge adds significance to the syllabus. These skilled instructors ensure you will be able to apply what you learn immediately. At this conference, you can pick from over twenty information security courses prepared by first-rate mentors.

4. International Conference on Cyber Security and Connected Technologies (ICCSCT)

  • Date: February 3 - 4, 2020
  • Location: Tokyo, Japan
  • Cost: $260 - $465
  • https://iccsct.coreconferences.com

The 2020 ICCSCT is a leading research conference focused on presenting new developments in cybersecurity. The event takes place every year to provide a stage for individuals to share opinions and experiences.

The International Conference on Cyber Security and Connected Technologies centers on the numerous newly emerging areas of cybersecurity and connected technologies.

5. Manusec Europe: Cyber Security for Critical Manufacturing

As the industrial sector continues to adopt advancements in technology, it becomes vulnerable to an assortment of cyber threats. To have the best tools to tackle cyber threats in the twenty-first century, organizations must get employees at all levels to cooperate and institute best-practice strategies to guard vital assets.

This event will bridge the gap between senior corporate IT and process control professionals, allowing teams to discuss critical issues and challenges, as well as debate cybersecurity best-practice guidelines.

6. The European Information Security Summit

This organization, known as TEISS, is currently one of the leading and most wide-ranging cybersecurity meetings in Europe. It features parallel sessions on the following four streams:

  • Threat Landscape stream
  • Culture and Education stream
  • Plenary stream
  • CISOs & Strategic stream

Join over 500 specialists in the cybersecurity industry and take advantage of different seminars.

7. Gartner Data & Analytics Summit

Data and analytics are conquering all trades as they become the core of the digital transformation. To endure and flourish in the digital age, having a grasp on data and analytics is critical for industry players.

Gartner is currently the global leader among IT conference providers. You, too, can benefit from its research, exclusive insight, and unmatched peer networking. Reinvent your role and your business by joining the 50,000+ attendees who walk away from this event annually, armed with a better understanding and the right tools to make their organization a leader in its industry.

8. Holyrood Connect’s Cyber Security

With cyber threats accelerating in frequency, organizations must protect themselves from the potentially catastrophic consequences of security breaches.

At a time when merely being aware of security threats is not enough, Holyrood's annual cybersecurity conference will explore the latest developments, emerging threats, and current practice.

Join relevant stakeholders, experts, and peers as they map out the next steps to reinforce defenses, improve readiness, and maintain cyber resilience.

Critical issues to be addressed:

  • Up-to-date briefing on the latest in cybersecurity practice and policy
  • Expert analysis of the emerging threat landscape both at home and abroad
  • Good practice and innovation in preventing, detecting and responding to cyber attacks
  • Developing a whole-organization approach to cyber hygiene – improving awareness, culture, and behavior
  • Horizon scanning: cybersecurity and emerging technology

9. 3rd Annual Internet of Things India Expo

The Internet of Things is a business transformation vital to government, companies, and consumers, reshaping all industries. The IoT India Expo will concentrate on the Internet of Things ecosystem, encompassing central bodies, software, services, and hardware. Distinct areas of focus for this symposium will be:

  • Industrial IoT
  • Smart Appliances
  • Robotics
  • Cybersecurity
  • Smart Solutions
  • System Integrators
  • Smart Wearable Devices
  • AI
  • Cloud Computing
  • Sensors
  • Data Analytics
  • And much more…

10. BSides Tampa

  • Category: General Cyber Security Conference
  • Date: February 20, 2020
  • Location: Tampa, Florida, United States
  • Cost: General Admission - $50; Discount for specific parties like Teachers, Military and Students
  • https://bsidestampa.net

The BSides Tampa conference focuses on offering attendees the latest information on security research, development, and uses. The conference is held yearly in Tampa and features various demonstrations and presentations from the best available in industry and academia.

Attendees have the opportunity to attend numerous training classes throughout the conference. These classes provide individual technical courses on subjects ranging from penetration testing and security exploitation to cybersecurity certifications.

11. RSA Conference, the USA

The ease of joining the digital space opens up the risk of real-world cyber dangers. The 2020 RSA Conference focuses on managing the cyber threats that prominent organizations, agencies, and businesses are facing.

This event is famous in the US and also runs in Abu Dhabi, Europe, and Singapore. The RSA Conference is renowned for being one of the leading information security conferences held each year, and its real objective is to put the dedication its leaders place into study and research to practical use.

12. RSA Conference, Cryptographers’ Track

More than 40,000 industry-leading specialists attend the event, the premier industry gathering for the security business. As part of one of the industry-leading cybersecurity events of 2020, the Cryptographers' Track is the venue for scientific papers on cryptography. It is a fantastic place for the broader cryptologic community to get in touch with attendees from government, industry, and academia.


13. Hack New York City 2020

  • Date: March 6 - 8, 2020
  • Location: Manhattan, New York, United States
  • Cost: Free
  • https://hacknyu.org

Hack NYC is about sharing ideas on how we can improve our daily cybersecurity practices and overall economic resilience. The threat of attacks targeting Critical National Infrastructure is real, as the provisions supporting businesses and communities face common weaknesses and an evolving threat landscape.

Hack NYC places its emphasis on planning for, and resistance to, the real potential of kinetic cyber attacks. Be part of crucial solutions and help mitigate risks aimed at Critical National Infrastructure.

14. Healthcare Information and Management Systems Society (HIMSS)

Over 40,000 health IT specialists, executives, vendors, and clinicians come together from all over the globe for the yearly HIMSS exhibition and conference. Outstanding education, first-class speakers, cutting-edge health IT products, and influential networking are trademarks of this symposium. Over three hundred education programs feature discussions and workshops, leadership meetings, keynotes, and an entire day of pre-conference seminars.

15. 14th International Conference on Cyber Warfare and Security

The 14th Annual ICCWS is an occasion for academics, consultants, military personnel, and practitioners worldwide to explore new methods of fighting data threats. Cybersecurity conferences like this one offer the opportunity to improve information systems security and share concepts.

New risks arising from migrating to the cloud and social networking are a growing focus for the research community, and the sessions are designed to cover these matters in particular. Join the gathering of crucial players as ICCWS uniquely addresses cyber warfare, security, and information warfare.

16. PROTECT International Exhibition and Conference on Security and Safety

Leverage International (Consultants) Inc. arranges this annual international conference and exhibition on safety and security. PROTECT was launched in 2005 by the Anti-Terrorism Council and is the only government-private sector partnership conference series in the Philippines dedicated to protection and security. It features a global exhibition, a high-level symposium, free hands-on workshops, and networking opportunities.

17. TROOPERS20

The TROOPERS20 conference is a two-day event providing hands-on experience, discussions of current IT security matters, and networking with attendees as well as speakers.

During the two-day seminar, you can expect discussions on numerous issues. There are also practical demonstrations on the latest research and attack methods to bring the topics closer to the participants.

The conference also includes discussions about cyber defense and risk management, as well as relevant demonstrations of InfoSec management matters.

18. 27th International Workshop on Fast Software Encryption

The 2020 Fast Software Encryption conference, arranged by the International Association for Cryptologic Research (IACR), will take place in Athens, Greece. FSE is widely renowned as the world's prominent event in the field of symmetric cryptology.

This event will cover many topics, both practical and theoretical, including the design and analysis of block ciphers, stream ciphers, encryption schemes, hash functions, message authentication codes, authenticated encryption schemes, cryptanalysis and evaluation tools, and secure implementations.

The IACR has been organizing FSE events since 2002 and is an international organization with over 1,600 members that brings researchers in cryptology together.

The conference concentrates on fast and secure primitives for symmetric cryptography, covering the design and analysis of:

  • Block ciphers
  • Encryption schemes
  • Stream ciphers
  • Authenticated encryption schemes
  • Hash functions
  • Message authentication codes
  • Cryptanalysis techniques
  • Evaluation tools

It will also address problems and solutions concerning their secure implementation.

19. Vienna Cyber Security Week 2020

Austrian non-governmental affiliates, international governmental entities, and the Energypact Foundation present this year's conference. Its focus is to connect significant global investors through discussion and collaboration in the discipline of cybersecurity, with a primarily analytical structure and an emphasis on the energy sector.

20. Cyber Security for Critical Assets (CS4CA) the USA

  • Date: March 24 – 25, 2020
  • Location: Houston, Texas, United States
  • Cost: $1,699 - $2,999
  • https://usa.cs4ca.com

The yearly CS4CA features two dedicated tracks, for OT and IT security leaders respectively, to enhance their professional areas of focus. The discussions are intended to tackle some of the most common problems that both sets of specialists share.

Each track is curated by a group of industry-prominent professionals to be as relevant, up-to-date, and detailed as possible across the two days. Throughout the conference, you can expect opportunities to take relevant tests, get inspired by some of the world's prominent cybersecurity visionaries, and network with colleagues.

21. World Cyber Security Congress 2020

The World Cyber Security Congress is an established international conference that attracts CISOs as well as other cybersecurity specialists from various sectors. Its panel of 150+ outstanding speakers represents a wide range of verticals, such as:

  • Finance
  • Retail
  • Government
  • Critical Infrastructure
  • Transport
  • Healthcare
  • Telecoms
  • Educational Services.

The World Cyber Security Congress is a senior-level meeting created with Heads of Data Analytics, Heads of Risk and Compliance, CIOs, and CTOs, as well as CISOs and Heads of Information Security, in mind.

22. InfoSec World 2020

The 2020 InfoSec World conference will feature over one hundred industry specialists providing applied, practical instruction on a wide array of security matters. The conference offers an opportunity for security specialists to explore and examine concepts with colleagues.

Throughout the conference, attendees will have plenty of opportunities to learn from this world-class platform headed by the industry's prominent specialists. They will also have a chance to earn 47 CPE credits over the course of the week or attend New Tech Lab sessions presented in real-life scenarios. Attendees can also participate in a virtual career fair, joining via tablet or at the fair section of the Expo.

Lastly, attendees can take advantage of the Disney resort with associates and guests.

23. 19th Annual Security Conference

The 19th Annual Security Conference provides an opportunity for discussions on security, assurance, and privacy that improve the understanding of current events and encourage future dialogues related to cybersecurity.

The 2020 security conference is part of the Vegas Conferences, organized by:

  • University of Houston, USA
  • University of Arkansas, USA



24. World Border Security Congress

The 2020 World Border Security Congress is planned by Torch Marketing. The conference will cover subjects such as perimeter surveillance methods and schemes, including:

  • The Latest Threats and Challenges at the Border
  • Continuing efforts against foreign terrorist fighters, irregular migration, and human trafficking
  • Capacity Building and Training in Border and Migration Management
  • Securing the Littoral Border: Understanding Threats and Challenges for Maritime Borders
  • Pre-Travel Risk Assessment and Trusted Travellers
  • The developing role of Biometrics in identity management & document fraud
  • Smuggling & Trade in Illicit Goods, Antiquities and Endangered Species
  • The Future Trends and Approach to Alternatives for Securing Borders

Join global leaders as they discuss issues surrounding improvements to the defense and administration of protracted land borders.

25. Black Hat Asia 2020

The sharpest industry professionals and scholars will come together for a four-day event at the 2020 Black Hat Asia. The event comprises two days of intense applied training followed by two days of the latest research and vulnerability discoveries. The Black Hat Executive Summit offers CISOs and other cybersecurity executives an opportunity to hear from a variety of industry experts who are helping to shape the next generation of information security strategy.

Black Hat delivers attendees practical lessons on subjects such as:

  • Wider-ranging Offensive Security
  • Penetration Testing
  • Analyzing Automotive Electrical Systems
  • Mobile Application Automation Testing Tools and Security
  • Infrastructure Hacking

These practical attack and defense lessons, created entirely for Black Hat Asia and prepared by some of the prominent specialists in the industry, are available to you. They all share the objective of understanding and protecting tomorrow's information security environment.

26. ASIS Europe

This event's purpose is to assist security professionals in finding the best ways to assess risks and act accordingly; not through legal arrangements or disclaimers, but by having the risk owner and user make educated decisions.

CONFERENCE
For aspiring and established leaders in need of a comprehensive learning experience, including masterclasses, executive sessions, keynotes, and a show pass.

CLASSROOM TRAINING
For managers and team members seeking to gain focused, practical skills with precise learning outcomes. Show pass included.

27.

The 2020 forum will feature keynote speaker Katie Arrington, chief information security officer at the Office of the Assistant Secretary of Defense for Acquisition and a 2020 Wash100 Award recipient. The themes to be addressed are the Cybersecurity Maturity Model Certification (CMMC) timeline, how the certification process could change, and the functionality of the newly established CMMC accreditation body. Learn about the impact the DoD's CMMC will have on supply chain security, cybersecurity practices, and other aspects of the federal market.

28. CyberCentral 2020

The 2020 CyberCentral conference is a two-day event where participants collaborate with a global community of like-minded cybersecurity enthusiasts. Attendance is limited, which allows participants to walk away revitalized with resilient human-to-human networks, instead of with a pile of brochures and business cards.

29. Infiltrate Security Conference

Infiltrate is an in-depth technical conference focused on offensive security issues. Innovative researchers come together to evaluate and share experiences with the latest technological strategies that you cannot find elsewhere. Infiltrate is the leading event for those focused on the technical aspects of offensive security, such as:

  • Reverse-Engineering
  • Modern Wi-Fi Hacking
  • Linux Kernel Exploitation
  • IoT Exploit Development
  • Vulnerability Research

Infiltrate avoids policy discussions and elaborate presentations in favor of hard-hitting, thought-provoking technical topics.

30. Industrial Control Systems (ICS) Cyber Security Conference

The ICS Cyber Security Conference is an event where ICS users, vendors, system security providers, and government representatives meet to discuss industry trends. The convention's goal is to analyze the latest threats and their causes and to offer effective solutions for businesses of different sizes.

31. QuBit 2020 Cybersecurity Conference

The 2020 QuBit Cybersecurity Conference aims to bring up-to-date knowledge from the western world to the cyber community of Central Europe. It also hopes to aid the circulation of security materials, such as IT and internet tools, that are now available to over two billion individuals internationally.

QuBit offers you a unique way to meet advanced and elite individuals with impressive backgrounds in the information security industry. Connect with QuBit today and discover the latest innovations and ideas that are paving tomorrow's industry platform.

32. CSO50 Conference + Awards

The yearly CSO50 Awards recognize fifty organizations, and their employees, for security projects or initiatives that demonstrate outstanding business value. The award-winning organizations are revealed in a special announcement and have their projects summarized in an editorial on csoonline.com.

Attending this conference is one of the best ways to boost your employees' and colleagues' esteem, as it gathers some of the industry's top thought leaders. It can also be an excellent recruiting venue for those seeking new cybersecurity talent. Another benefit is that project winners are asked to present their projects at the CSO Conference + Awards.

Team members of winning projects are also offered complimentary registration to the conference. Lastly, the winning companies are announced at the CSO50 Awards dinner and invited on stage to accept their awards.

33. 15th Annual Secure360 Twin Cities

The 2020 Secure360 Twin Cities conference is a seminar for education in comprehensive security and risk management. This event offers collaboration and education for your whole team.

Secure360 concentrates on the following:

  • Risk and Compliance
  • Governance
  • Information Security
  • Professional Development
  • Continuity Management
  • Business Continuity
  • Physical Security

Key speakers will cover topics such as “Leading from any seat: Stories from the cockpit & lessons from the Grit Project” and “Information Warfare: How our phones, newspapers, and minds have become the battlefield.”

34. THOTCON 0x9

THOTCON 0x9 is a yearly hacking seminar that takes place in Chicago, Illinois. More than 1,300 specialists and speakers from all over the world attend the event each year.

THOTCON is a non-profit, non-commercial event offering the best conference conceivable on a restricted budget. When you attend a THOTCON event, you will experience one of the best information security conferences in the world, combined with a uniquely casual and social atmosphere.

35. CyberTech Asia

CyberTech Asia 2020 provides attendees with innovative and unique opportunities to gain insight into the latest developments and solutions presented by the global cyber community.

The conference's central focus is networking, reinforcing relationships, and establishing new contacts. CyberTech also delivers an incredible stage for B2B collaboration. CyberTech Asia will bring together the following:

  • Leading Multinational Corporations
  • Start-ups
  • Corporate and Private Investors
  • Specialists
  • Clients
  • SMBs
  • Venture capital firms

36. The 18th International Conference on Applied Cryptography and Network Security

ACNS is an annual conference focusing on current advances in applied cryptography and its application to systems and network security. The objective of the event is to showcase academic research alongside advances in engineering and practical frontiers.

The Computer Security group at Sapienza University is organizing the conference. The proceedings of ACNS 2020 are to be published by Springer in the LNCS series.

37. ToorCamp 2020

ToorCamp, the American hacker camp, first took place in 2009 at the Titan-1 missile silo in Washington State. The next two events occurred in 2012 and 2014 on the Washington coast. At these camps, you are encouraged to show off projects, share ideas, and collaborate with the technology specialists attending the event.

38. 20th International Conference on Cyber Security Exercises (ICCSE)

The 2020 International Conference on Cyber Security Exercises focuses on bringing together prominent researchers, experts, and professors to discuss and share their knowledge and research on all aspects of cybersecurity. This year's focus is on the fields of cybersecurity and security engineering.

Don’t Miss Out!

When you attend security conferences, you benefit in multiple ways. You learn from specialists, and you can take advantage of peer-to-peer discussions for professional improvement.

Most importantly, attending seminars presents the opportunity for you to obtain the information you need to tackle cyber attacks. Every minute of the day, there is someone somewhere creating the next cyber threat that could shut your business down.

Take advantage of learning how to stay one step ahead by getting your company and its employees ready and prepared for the next threat. It is no longer a matter of if, but a matter of when.

Are you ready?



IPv4 vs IPv6: Understanding the Differences and Looking Ahead

As the Internet of Things (IoT) continues to grow exponentially, more devices connect online daily. There has been fear that, at some point, addresses would just run out. This conjecture is starting to come true.

Have no fear; the Internet is not coming to an end. There is a solution to the problem of diminishing IPv4 addresses. We will provide information on how more addresses can be created, and outline the main issues that need to be tackled to keep up with the growth of IoT by adopting IPv6.

We also examine how Internet Protocol version 6 (IPv6) compares to Internet Protocol version 4 (IPv4), the role each plays in the Internet's future and evolution, and how the newer version of the protocol is superior to the older IPv4.

How an IP Address Works

IP stands for “Internet Protocol,” referring to a set of rules which govern how data packets are transmitted across the Internet.

Information online or traffic flows across networks using unique addresses. Every device connected to the Internet or computer network gets a numerical label assigned to it, an IP address that is used to identify it as a destination for communication.

Your IP identifies your device on a particular network. It serves as your device's ID in a technical format; networks combine IP with TCP (Transmission Control Protocol) to enable virtual connections between a source and a destination. Without a unique IP address, your device could not attempt communication.


IP addresses standardize the way different machines interact with each other. They trade data packets, which refer to encapsulated bits of data that play a crucial part in loading webpages, emails, instant messaging, and other applications which involve data transfer.

Several components allow traffic to flow across the Internet. At the point of origin, data is packaged into an envelope called a “datagram.” A datagram is a packet of data and part of the Internet Protocol, or IP.

A full network stack is required to transport data across the Internet. The IP is just one part of that stack. The stack can be broken down into four layers, with the Application component at the top and the Datalink at the bottom.

Stack:

  • Application – HTTP, FTP, POP3, SMTP
  • Transport – TCP, UDP
  • Networking – IP, ICMP
  • Datalink – Ethernet, ARP

As a user of the Internet, you're probably quite familiar with the application layer. It's the one you interact with daily. Anytime you want to visit a website, you type http://[web address], where HTTP belongs to the application layer.

If you use an email application, then at some point you set up an email account in it and likely came across POP3 or SMTP during the configuration process. POP3 stands for Post Office Protocol 3 and is a standard method of receiving email: it collects and retains email for you until pickup.

From the above stack, you can see that IP is part of the networking layer. IP came into existence back in 1982 as part of ARPANET. IPv1 through IPv3 were experimental versions, and IPv4 is the first version of IP to be used publicly, the world over.

IPv4 Explained

IPv4, or Internet Protocol version 4, is a widely used protocol for data communication over many kinds of networks. It is the fourth revision of the Internet Protocol, developed as a connectionless protocol for use in packet-switched networks such as Ethernet. Its primary responsibility is to provide logical connections between network devices, which includes providing identification for every device.

IPv4 is based on the best-effort model, which guarantees neither delivery nor the avoidance of duplicate delivery; those guarantees are handled by the upper-layer transport protocol, such as the Transmission Control Protocol (TCP). IPv4 is flexible and can be configured automatically or manually on a range of different devices, depending on the type of network.

Technology behind IPv4

IPv4 is specified and defined in the Internet Engineering Task Force (IETF) publication RFC 791 and operates at the network layer of the packet-switched OSI model. It uses a total of five classes of 32-bit addresses: A, B, C, D, and E. Classes A, B, and C have different bit lengths for addressing network hosts, Class D is used for multicasting, and the remaining Class E is reserved for future use.

Subnet Mask of Class A – 255.0.0.0 or /8

Subnet Mask of Class B – 255.255.0.0 or /16

Subnet Mask of Class C – 255.255.255.0 or /24

Example: The network 192.168.0.0 with a /16 subnet mask can use addresses ranging from 192.168.0.0 to 192.168.255.255. It's important to note that the address 192.168.255.255 is reserved for broadcast within that network. In total, IPv4 can assign host addresses to a maximum of 2^32 endpoints.

IP addresses follow a standard, decimal notation format:

171.30.2.5

The above number is a unique 32-bit logical address. This setup means there can be up to 4.3 billion unique addresses. Each of the four groups of numbers is 8 bits, and every 8-bit group is called an octet. Each number can range from 0 to 255: at 0, all bits are set to 0, and at 255, all bits are set to 1. The binary form of the above IP address is 10101011.00011110.00000010.00000101.
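
For readers who want to verify the conversion themselves, here is a small, self-contained Java sketch (illustrative, not part of the original article) that reproduces the binary form above using the standard library:

import java.net.InetAddress;

public class DottedDecimalDemo {
    public static void main(String[] args) throws Exception {
        // Parse the dotted-decimal address from the example above
        byte[] octets = InetAddress.getByName("171.30.2.5").getAddress();
        StringBuilder binary = new StringBuilder();
        for (byte octet : octets) {
            // Mask with 0xFF so the signed byte prints as an unsigned 0-255 value
            String bits = Integer.toBinaryString(octet & 0xFF);
            binary.append(String.format("%8s", bits).replace(' ', '0')).append('.');
        }
        binary.setLength(binary.length() - 1); // drop the trailing dot
        System.out.println(binary); // prints 10101011.00011110.00000010.00000101
    }
}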

Even with 4.3 billion possible addresses, that's not nearly enough to accommodate all currently connected devices. Devices are far more than just desktops now: there are smartphones, hotspots, IoT devices, smart speakers, cameras, and more. The list keeps growing as technology progresses, and so does the number of devices.


Future of IPv4

IPv4 addresses are set to finally run out, making IPv6 deployment the only viable long-term solution for the growth of the Internet.

In October 2019, RIPE NCC, one of the five Regional Internet Registries, responsible for assigning IP addresses to Internet Service Providers (ISPs) in over 80 nations, announced that only one million IPv4 addresses were left. Due to these limitations, IPv6 has been introduced as a standardized solution, offering a 128-bit address length that can define up to 2^128 nodes.

Recovered addresses will only be assigned via a waiting list, which means that only a couple of hundred thousand addresses can be allotted per year; not nearly enough to cover the several million that global networks require today. The consequence is that network operators will be forced to rely on expensive and complicated workarounds for the shortage of available addresses. The countdown to zero addresses means enterprises worldwide have to take stock of their IP resources, find interim solutions, and prepare for IPv6 deployment to overcome the inevitable run-out.

In the interim, one popular solution to bridge over to IPv6 deployment is Carrier-Grade Network Address Translation (CGNAT). This technology prolongs the use of IPv4 addresses by allowing a single IP address to be shared across thousands of devices. It only plugs the hole in the meantime, as CGNAT cannot scale indefinitely. Every added device creates another layer of NAT, which increases its workload and complexity and thereby raises the chances of the CGNAT failing. When that happens, thousands of users are affected and cannot be quickly put back online.

Another commonly used workaround is IPv4 address trading: a market for buying and selling IPv4 addresses that are no longer needed or used. It's a risky play, since prices are dictated by supply and demand, and maintaining the status quo can become a complicated and expensive process.

IPv4 scarcity remains a massive concern for network operators. The Internet won’t break, but it is at a breaking point since networks will only find it harder and harder to scale infrastructure for growth. IPv4 exhaustion goes back to 2012 when the Internet Assigned Numbers Authority (IANA) allotted the last IPv4 addresses to RIPE NCC. The long-anticipated run-out has been planned for by the technical community, and that’s where IPv6 comes in.

How is IPv6 different?

Internet Protocol version 6, or IPv6, is the newest version of the Internet Protocol, used for carrying data in packets from a source to a destination across various networks. IPv6 is considered an enhanced version of the older IPv4 protocol, as it supports a significantly larger number of nodes.

IPv6 allows up to 2^128 possible nodes, or addresses. It is also referred to as the Internet Protocol next generation, or IPng. IPv6 addresses are written in hexadecimal as eight colon-separated 16-bit groups, providing far greater scalability. Launched for production use on June 6, 2012 (World IPv6 Launch), it was also designed to do away with broadcast addresses altogether, relying on multicast instead, unlike its predecessor.
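
To make the size difference tangible, the following short Java sketch (illustrative only) parses one address of each family and prints its length in bits; 2001:db8::1 comes from the prefix reserved for documentation examples:

import java.net.InetAddress;

public class AddressSizeDemo {
    public static void main(String[] args) throws Exception {
        // Literal addresses are parsed directly; no DNS lookup is performed
        InetAddress v4 = InetAddress.getByName("171.30.2.5");
        InetAddress v6 = InetAddress.getByName("2001:db8::1");
        System.out.println("IPv4: " + v4.getAddress().length * 8 + " bits"); // 32 bits
        System.out.println("IPv6: " + v6.getAddress().length * 8 + " bits"); // 128 bits
    }
}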


Comparing Difference Between IPv4 and IPv6

Now that you know more about IPv4 and IPv6 in detail, we can summarize the differences between the two protocols. Each has its own benefits and drawbacks.

Points of difference:

  • Compatibility with mobile devices: IPv4 uses dot-decimal notation, which makes it less suitable for mobile networks; IPv6 uses colon-separated hexadecimal notation, which is better suited to mobile networks.
  • Mapping: IPv4 uses the Address Resolution Protocol (ARP) to map to MAC addresses; IPv6 uses the Neighbor Discovery Protocol.
  • Dynamic Host Configuration Server: with IPv4, clients must contact a DHCP server whenever they connect to a network; with IPv6, clients are given permanent addresses and are not required to contact any particular server.
  • Internet Protocol Security: optional for IPv4; mandatory for IPv6.
  • Optional fields: present in IPv4; absent in IPv6, where extension headers are available instead.
  • Local subnet group management: IPv4 uses the Internet Group Management Protocol (IGMP); IPv6 uses Multicast Listener Discovery (MLD).
  • IP-to-MAC resolution: IPv4 uses broadcast ARP; IPv6 uses multicast Neighbor Solicitation.
  • Address configuration: IPv4 is configured manually or via DHCP; IPv6 uses stateless address autoconfiguration via the Internet Control Message Protocol (ICMPv6) or DHCPv6.
  • DNS records: IPv4 uses Address (A) records; IPv6 uses Address (AAAA) records.
  • Packet header: IPv4 does not identify packet flow for QoS handling and includes a checksum; IPv6 uses the Flow Label field to specify packet flow for QoS handling.
  • Packet fragmentation: in IPv4, both routers and sending hosts may fragment packets; in IPv6, fragmentation is done only by the sender.
  • Packet size: the minimum packet size is 576 bytes for IPv4 and 1,280 bytes for IPv6.
  • Security: IPv4 depends mostly on applications; IPv6 has its own security protocol, IPsec.
  • Mobility and interoperability: IPv4 network topologies are relatively constrained, which restricts mobility and interoperability; IPv6 provides mobility and interoperability capabilities embedded in network devices.
  • SNMP: support is included in IPv4 but not in IPv6.
  • Address mask: used in IPv4 to separate the network portion from the host portion; not used in IPv6.
  • Address features: IPv4 uses Network Address Translation, which allows a single NAT address to mask thousands of non-routable addresses; IPv6 allows direct addressing because of the vast address space.
  • Network configuration: IPv4 networks are configured manually or with DHCP; IPv6 has autoconfiguration capabilities.
  • Routing Information Protocol: supported by IPv4; IPv6 does not support RIP.
  • Variable Length Subnet Mask: supported by IPv4; not supported by IPv6.
  • Configuration: a newly installed IPv4 system must be configured before it can communicate with other systems; with IPv6, configuration is optional.
  • Number of classes: IPv4 has five classes, from A to E; IPv6 has no class structure and allows a virtually unlimited number of addresses.
  • Types of addresses: IPv4 uses unicast, broadcast, and multicast; IPv6 uses unicast, multicast, and anycast.
  • Checksum field: present in the IPv4 header; not present in IPv6.
  • Header length: 20 bytes for IPv4; 40 bytes for IPv6.
  • Number of header fields: 12 for IPv4; 8 for IPv6.
  • Address method: IPv4 addresses are numeric; IPv6 addresses are alphanumeric (hexadecimal).
  • Address size: 32-bit for IPv4; 128-bit for IPv6.

Pros and Cons of using IPv6

IPv6 addresses all the technical shortcomings present in IPv4. The key difference is that it offers a 128-bit (16-byte) address, making the address pool around 340 undecillion (340 trillion trillion trillion) addresses.

It's significantly larger than the address size provided by IPv4, since an IPv6 address is made up of eight groups of characters, each 16 bits long. The sheer size underlines why networks should adopt IPv6 sooner rather than later. Yet making the move has so far been a tough sell. Network operators find working with IPv4 familiar and are probably taking a 'wait and see' approach to decide how to handle their IP situation. They might think they have enough IPv4 addresses for the near future, but sticking with IPv4 will get progressively harder.

An example of the advantage of IPv6 over IPv4 is not having to share an IP: each of your devices can get a dedicated address. With IPv4, a group of computers that want to share a single public IP needs to use a NAT, and to access one of those computers directly you need complex configurations such as port forwarding and firewall alterations. With IPv6, which has plenty of addresses to go around, computers can be reached publicly without additional configuration, saving resources.

Future of IPv6 adoption

The future adoption of IPv6 largely depends on how many ISPs and mobile carriers, along with large enterprises, cloud providers, and data centers, are willing to migrate, and on how they will migrate their data. IPv4 and IPv6 can coexist on parallel networks, so there are no significant incentives for entities such as ISPs to vigorously pursue IPv6 over IPv4, especially since upgrading costs a considerable amount of time and money.

Despite the price tag, the digital world is slowly moving away from the older IPv4 model into the more efficient IPv6. The long-term benefits outlined in this article that IPv6 provides are worth the investment.

Adoption still has a long way to go, but IPv6 allows for new possibilities for network configurations on a massive scale. It's efficient and innovative, and it reduces dependency on the increasingly challenging and expensive IPv4 market.

Not preparing for the move is short-sighted and risky for networks. Smart businesses are embracing the efficiency, innovation, and flexibility of IPv6 right now. Be ready for exponential Internet growth and next-generation technologies as they come online and enhance your business.

IPv4 exhaustion will spur IPv6 adoption forward, so what are you waiting for? To find out how to adopt IPv6 for your business, give us a call today.



When is Microservice Architecture the Way to Go?

Choosing and designing the correct architecture for a system is critical. One must ensure that quality-of-service requirements and non-functional requirements, such as maintainability, extensibility, and testability, can be met.

Microservice architecture has become a recurrent choice in modern ecosystems since companies adopted Agile and DevOps. While not a de facto choice, it is one of the preferred options when dealing with extensively growing systems where a monolithic architecture is no longer feasible to maintain. Keeping components service-oriented and loosely coupled keeps continuous development and release cycles going, which drives businesses to constantly test and upgrade their software.

The main prerequisites that call for such an architecture are:

  • Domain-Driven Design
  • Continuous Delivery and DevOps Culture
  • Failure Isolation
  • Decentralization

It has the following benefits:

  • Team ownership
  • Frequent releases
  • Easier maintenance
  • Easier upgrades to newer versions
  • Technology agnostic

It has the following cons:

  • Microservice-to-microservice communication requires additional mechanisms
  • Increasing the number of services increases the overall system complexity

The more distributed and complex the architecture is, the more challenging it is to ensure that the system can be expanded and maintained while controlling cost and risk. One business transaction might involve multiple combinations of protocols and technologies, and it is not just about the use cases but also about operating them. When adopting Agile and DevOps approaches, one should balance flexibility against functionality, aiming for continuous revision and testing.


The Importance of Testing Strategies in Relation to Microservices

Adopting DevOps in an organization aims to eliminate isolated departments and move towards one overall team. This move specifically seeks to improve the relationship and processes between the software team and the operations team. Delivering at a faster rate also means ensuring continuous testing as part of the software delivery pipeline. Deploying daily (in some cases even every couple of hours) is one of the main targets for fast end-to-end business solution delivery. Reliability and security must be kept in mind here, and this is where testing comes in.

The inclusion of test-driven development is the only way to achieve genuine confidence that code is production-ready. Valid test cases add value to the system since they validate and document the system itself. Apart from that, good code coverage encourages improvements and assists during refactoring.

Microservices architecture decentralizes communication channels, which makes testing more complicated, but it is not an insurmountable problem. A team owning a microservice should not be afraid to introduce changes because they might break existing client applications. Manual testing is very inefficient, considering that continuous integration and continuous deployment are current best practice. DevOps engineers should include automated tests in their development workflow: write tests, add or refactor code, and run tests.

Common Microservice Testing Methods

The test pyramid is a simple concept that helps us identify the effort required when writing tests at different levels, with the number of tests decreasing as test granularity becomes coarser. It also applies when considering continuous testing for microservices.

Figure 1: The test pyramid (Based on the diagram in Microservices Patterns by Chris Richardson)

To make the topic more concrete, we will tackle the testing of a sample microservice using Spring Boot and Java. Microservice architectures, by construction, are more complicated than a monolithic architecture. Nonetheless, we will keep the focus on the types of tests and not on the architecture. Our snippets are based on a minimal project composed of one API-driven microservice owning a data store that uses MongoDB.

Unit tests

Unit tests should be the majority of tests since they are fast, reliable, and easy to maintain. These tests are also called white-box tests. The engineer implementing them is familiar with the logic and is writing the test to validate the module specifications and check the quality of code.

The focus of these tests is a small part of the system in isolation, i.e., the Class Under Test (CUT). The Single Responsibility Principle is a good guideline on how to manage code relating to functionality.

The most common form of a unit test is a “solitary unit test.” It does not cross any boundaries and does not need any collaborators apart from the CUT.

As outlined by Bill Caputo, databases, messaging channels, and other systems are the boundaries; any additional class used or required by the CUT is a collaborator. A unit test should never cross a boundary. When making use of collaborators, one is writing a "sociable unit test." Using mocks for the dependencies of the CUT is a way to test sociable code with a "solitary unit test."
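
As a minimal illustration, here is a sketch of such a solitary unit test: the repository collaborator is mocked, so the test never crosses the database boundary. The service class, its markComplete method, and the isComplete getter are hypothetical names assumed for this example; the DailyTask constructor mirrors the one used in the snippets later in this article.

import static org.assertj.core.api.Assertions.assertThat;

import java.util.Optional;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class DailyTaskServiceTest {

	// Collaborator: mocked, so no boundary is crossed
	@Mock
	private DailyTaskRepository dailyTaskRepository;

	// The Class Under Test, with the mock injected
	@InjectMocks
	private DailyTaskService dailyTaskService;

	@Test
	public void markComplete_setsTheCompletionFlag() {
		Mockito.when(dailyTaskRepository.findById("1")).thenReturn(
			Optional.of(new DailyTask("1", "Test", "Description", false, true)));
		// Echo back whatever entity the service saves
		Mockito.when(dailyTaskRepository.save(Mockito.any(DailyTask.class)))
			.thenAnswer(invocation -> invocation.getArgument(0));

		DailyTask result = dailyTaskService.markComplete("1");

		// Assert on observable behavior, not on implementation details
		assertThat(result.isComplete()).isTrue();
	}
}

Note that the assertions verify the end state of the task rather than the interactions with the repository, which keeps the test aligned with the behavior-focused approach discussed below.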

In traditional software development models, developer testing was not yet widely adopted, and testing happened completely out of sync with development. Achieving a high code coverage rating was considered a key indicator of test suite confidence.

With the introduction of Agile and short iterative cycles, it's evident that previous test models no longer work. Frequent changes are continuously expected. It is much more critical to test observable behavior than to have all code paths covered. Unit tests should be about assertions rather than code coverage, because the aim is to verify that the logic is working as expected.

It is useless to have a component with loads of tests and a high percentage of code coverage when the tests lack proper assertions. Applying a more Behavior-Driven Development (BDD) approach ensures that tests verify the end state and that the behavior matches the requirements set by the business. The advantage of focused tests with a well-defined scope is that it becomes easier to identify the cause of a failure. BDD tests give us higher confidence that a failure was caused by a change in feature behavior. Tests that focus on code coverage instead offer less confidence, since there is a higher risk that a failure is merely a repercussion of changes to the tests themselves, tied to implementation details of specific code paths.

Tests should follow Martin Fowler's advice in Refactoring: Improving the Design of Existing Code, Second Edition (Kent Beck and Martin Fowler, Addison-Wesley, 2018): focus on testing observable behavior rather than implementation details.

Another reason to focus less on minor implementation details is refactoring. During refactoring, unit tests should be there to give us confidence, not to slow down work. A change in the implementation of a collaborator might cause a test failure, which can make tests harder to maintain. It is highly recommended to keep sociable unit tests to a minimum, especially when such tests might slow down the development life cycle to the point that they end up ignored. An excellent situation for a sociable unit test is negative testing, especially when dealing with behavior verification.

Integration tests

One of the most significant challenges with microservices is testing their interaction with the rest of the infrastructure services, i.e., the boundaries that the particular CUT depends on, such as databases or other services. The test pyramid clearly shows that there should be fewer integration tests than unit tests, but more than component and end-to-end tests. These other types of tests can be slower, harder to write and maintain, and quite fragile compared to unit tests. Crossing boundaries might have an impact on performance and execution time due to network and database access; still, such tests are indispensable, especially in the DevOps culture.

In a Continuous Deployment scope, narrow integration tests are favored over broad integration tests. The latter are very close to end-to-end tests, requiring the actual services to be running rather than using test doubles to exercise the code interactions. The main goal is to build manageable operative tests in a fast, easy, and resilient fashion. Integration tests focus on the interaction of the CUT with one service at a time. Our focus is on narrow integration tests, which verify that the interaction between a pair of services behaves as expected, where a service can be either an infrastructure service or any other service.

Persistence tests

A somewhat controversial type of test targets the persistence layer, with the primary aim of testing queries and their effect on test data. One option is the use of in-memory databases. Some might consider a test using an in-memory database a sociable unit test, since it is self-contained, idempotent, and fast. The test runs against a database created with the desired configuration; after the test runs and assertions are verified, the data store is automatically scrubbed once the JVM exits, due to its ephemeral nature. Keep in mind that a connection to a different service is still happening, so this is considered a narrow integration test. In a Test-Driven Development (TDD) approach, such tests are essential, since test suites should run within seconds. In-memory databases are a valid trade-off to ensure that tests are kept as fast as possible and not ignored in the long run.

@Before
public void setup() throws Exception {
	try {
		// This will download the version of mongo marked as production. One should
		// always mention the version that is currently being used by the SUT
		String ip = "localhost";
		int port = 27017;

		IMongodConfig mongodConfig = new MongodConfigBuilder()
			.version(Version.Main.PRODUCTION)
			.net(new Net(ip, port, Network.localhostIsIPv6()))
			.build();

		MongodStarter starter = MongodStarter.getDefaultInstance();
		mongodExecutable = starter.prepare(mongodConfig);
		mongodExecutable.start();

	} catch (IOException e) {
		e.printStackTrace();
	}
}

Snippet 1: Installation and startup of the In-memory MongoDB

The above is not a full integration test, since an in-memory database does not behave exactly like the production database server. It is therefore not a replica of the "real" mongo server, which would be the case if one opted for broad integration tests.

Another option for persistence integration tests is broad tests running against an actual database server, or against containers. Containers ease the pain since a database can be provisioned on request, rather than kept on a fixed server. Keep in mind that such tests are time-consuming, and categorizing tests is a possible solution. Since these tests depend on another service running apart from the CUT, they are considered system tests. These tests are still essential, and by using categories, one can better determine when specific tests should run to get the best balance between cost and value. For example, during the development cycle, one might run only the narrow integration tests using the in-memory database, while nightly builds could also run tests falling under a category such as broad integration tests.

@Category(FastIntegration.class)
@RunWith(SpringRunner.class)
@DataMongoTest
public class DailyTaskRepositoryInMemoryIntegrationTest {
	. . . 
}

@Category(SlowIntegration.class)
@RunWith(SpringRunner.class)
@DataMongoTest(excludeAutoConfiguration = EmbeddedMongoAutoConfiguration.class)
public class DailyTaskRepositoryIntegrationTest {
   ...
}

Snippet 2: Using categories to differentiate the types of integration tests

Consumer-driven tests

Inter-Process Communication (IPC) mechanisms are a central aspect of distributed systems based on a microservices architecture, and this setup raises various complications when creating test suites. In addition, in an Agile team, changes are continuously in progress, including changes to APIs or events. No matter which IPC mechanism the system uses, there is a contract between any two services. There are various types of contracts, depending on the mechanism chosen: when using APIs, the contract is the HTTP request and response, while in an event-based system, the contract is the domain event itself.

A primary goal when testing microservices is to ensure those contracts are well defined and stable at any point in time. In a TDD top-down approach, these are the first tests to be covered. A fundamental integration test ensures that the consumer gets quick feedback as soon as a client no longer matches the real state of the producer it is talking to.

These tests should be part of the regular deployment pipeline. Their failure makes consumers aware that a change has occurred on the producer side and that changes are required on their end to restore consistency. Consumer-driven contract testing targets exactly this use case, without the need to write intricate end-to-end tests.

The following is a sample of a contract verifier generated by the spring-cloud-contract plugin.

@Test
public void validate_add_New_Task() throws Exception {
	// given:
	MockMvcRequestSpecification request = given()
		.header("Content-Type", "application/json;charset=UTF-8")
		.body("{\"taskName\":\"newTask\",\"taskDescription\":\"newDescription\",\"isComplete\":false,\"isUrgent\":true}");

	// when:
	ResponseOptions response = given().spec(request).post("/tasks");

	// then:
	assertThat(response.statusCode()).isEqualTo(200);
	assertThat(response.header("Content-Type")).isEqualTo("application/json;charset=UTF-8");
	// and:
	DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
	assertThatJson(parsedJson).field("['taskName']").isEqualTo("newTask");
	assertThatJson(parsedJson).field("['isUrgent']").isEqualTo(true);
	assertThatJson(parsedJson).field("['isComplete']").isEqualTo(false);
	assertThatJson(parsedJson).field("['id']").isEqualTo("3");
	assertThatJson(parsedJson).field("['taskDescription']").isEqualTo("newDescription");
}

Snippet 3: Contract Verifier auto-generated by the spring-cloud-contract plugin
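
The verifier above is generated from a contract definition kept on the producer side (by default under src/test/resources/contracts). A minimal sketch of what the corresponding Groovy DSL contract might look like, with values assumed to mirror the generated test:

org.springframework.cloud.contract.spec.Contract.make {
	request {
		method 'POST'
		url '/tasks'
		headers {
			header('Content-Type', 'application/json;charset=UTF-8')
		}
		body(
			taskName: 'newTask',
			taskDescription: 'newDescription',
			isComplete: false,
			isUrgent: true
		)
	}
	response {
		status 200
		headers {
			header('Content-Type', 'application/json;charset=UTF-8')
		}
		body(
			id: '3',
			taskName: 'newTask',
			taskDescription: 'newDescription',
			isComplete: false,
			isUrgent: true
		)
	}
}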

A base class written on the producer side uses the standalone setup to instruct what kind of response to expect for the various types of requests. The packaged collection of stubs is published so that all consumers can pull them into their implementations. Complexity arises when multiple consumers make use of the same contract; the producer therefore needs a global view of the service contracts required.

@RunWith(SpringRunner.class)
@SpringBootTest
public class ContractBaseClass {

	@Autowired
	private DailyTaskController taskController;

	@MockBean
	private DailyTaskRepository dailyTaskRepository;

	@Before
	public void before() {
		RestAssuredMockMvc.standaloneSetup(this.taskController);
		Mockito.when(this.dailyTaskRepository.findById("1"))
			.thenReturn(Optional.of(new DailyTask("1", "Test", "Description", false, null)));

		...

		Mockito.when(this.dailyTaskRepository.save(
				new DailyTask(null, "newTask", "newDescription", false, true)))
			.thenReturn(new DailyTask("3", "newTask", "newDescription", false, true));
	}
}

Snippet 4: The producer’s BaseClass defining the response expected for each request

On the consumer side, by including the spring-cloud-starter-contract-stub-runner dependency, we configured the test to use the stubs binary. The test runs against the stubs generated by the producer, using either a pinned version or always the latest, depending on the configuration. The stub artifact links the client with the producer to ensure that both are working from the same contract. Any change is reflected in those tests, so the consumer can identify whether the producer has changed.

@SpringBootTest(classes = TodayAskApplication.class)
@RunWith(SpringRunner.class)
@AutoConfigureStubRunner(ids = "com.cwie.arch:today:+:stubs:8080", stubsMode = StubRunnerProperties.StubsMode.LOCAL)
public class TodayClientStubTest {
	...
	@Test
	public void addTask_expectNewTaskResponse() {
		Task newTask = todayClient.createTask(
				new Task(null, "newTask", "newDescription", false, true));
		BDDAssertions.then(newTask).isNotNull();
		BDDAssertions.then(newTask.getId()).isEqualTo("3");
		...
	}
}

Snippet 5: Consumer injecting the stub version defined by the producer

Such integration tests verify that a provider’s API is still in line with the consumers’ expectations. By contrast, when unit testing an API client with mocks, we stub the API and mock its behavior ourselves. From the consumer’s point of view, those tests only ensure that the client matches our own expectations; if the producer changes the API, they will not fail. It is therefore imperative to be clear about what each test is covering.

// the response we expect is represented in the task1.json file
private Resource taskOne = new ClassPathResource("task1.json");

@Autowired
private TodayClient todayClient;

@Test
public void createNewTask_expectTaskIsCreated() {
	WireMock.stubFor(WireMock.post(WireMock.urlMatching("/tasks"))
		.willReturn(WireMock.aResponse()
			.withHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_UTF8_VALUE)
			.withStatus(HttpStatus.OK.value())
			.withBody(transformResourceJsonToString(taskOne))));

	Task tasks = todayClient.createTask(new Task(null, "runUTest", "Run Test", false, true));
	BDDAssertions.then(tasks.getId()).isEqualTo("1");
}

Snippet 6: A consumer test doing assertions on its own defined response

Component tests

A microservices architecture can grow fast, so the component under test might integrate with multiple other components and infrastructure services. Until now, we have covered white-box testing with unit tests and narrow integration tests that exercise the CUT crossing a boundary to integrate with another service.

The fastest type of component testing is the in-process approach, where, with the help of test doubles and in-memory data stores, testing remains within process boundaries. The main disadvantage is that the deployable production service is not fully tested; the component has to be wired differently for the test. The preferred method is out-of-process component testing. These tests resemble end-to-end tests, but with all external collaborators swapped out for test doubles; they exercise the fully deployed artifact over real network calls. The test is responsible for properly configuring any external services as stubs.

@Ignore
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { TodayConfiguration.class, TodayIntegrationApplication.class,
		CloudFoundryClientConfiguration.class })
public class BaseFunctionalitySteps {

	@Autowired
	private CloudFoundryOperations cf;

	private static File manifest = new File(".\\manifest.yml");

	@Autowired
	private TodayClient client;

	// Any stubs required
	...

	public void setup() {
		cf.applications().pushManifest(PushApplicationManifestRequest.builder()
			.manifest(ApplicationManifestUtils.read(manifest.toPath()).get(0))
			.build()).block();
	}

	...

	// Any calls required by tests
	public void requestForAllTasks() {
		this.client.getTodoTasks();
	}
}

Snippet 7: Deployment of the manifest on CloudFoundry and any calls required by tests

Cloud Foundry is one option for container-based testing architectures. “It is an open-source cloud application platform that makes it faster and easier to build, test, deploy, and scale applications.” The following manifest.yml defines the configuration of all applications in the system. It is used to deploy the actual service in its production-ready format to the Pivotal organization’s space, where a MongoDB service matching the production version is already set up.

---
applications:
- name: today
  instances: 1
  path: ../today/target/today-0.0.1-SNAPSHOT.jar 
  memory: 1024M
  routes:
  - route: today.cfapps.io
  services:
  - mongo-it

Snippet 8: Deployment of one instance of the service depending on mongo service

When opting for the out-of-process approach, keep in mind that actual boundaries are under test, so tests end up slower due to network and database interactions. It is ideal to keep those test suites in a separate module so that they can run separately, at a different maven stage, rather than in the usual ‘test’ phase.

Since the emphasis of the tests is on the component itself, tests cover the primary responsibilities of the component while purposefully neglecting any other part of the system.

Cucumber, a software tool that supports Behavior-Driven Development, is one option for defining such behavioral tests. With its plain-language parser, Gherkin, it ensures that customers can easily understand all the tests described. The following Cucumber feature file ensures that our component implementation matches the business requirements for that particular feature.

Feature: Tasks

Scenario: Retrieving one task from list
 Given the component is running
 And the data consists of one or more tasks
 When user requests for task x
 Then the correct task x is returned

Scenario: Retrieving all tasks
 Given the data consists of one or more tasks
 When user requests for all tasks
 Then all tasks in database are returned

Scenario: Negative Test
 Given the component is not running
 When user requests for task x
 Then the request fails with response 404

Snippet 9: A feature file defining BDD tests
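
A feature file needs matching step definitions to drive the deployed component. A minimal sketch of what these could look like, assuming the TodayClient from the earlier snippets and a hypothetical getTask lookup method:

public class TaskStepDefinitions {

	@Autowired
	private TodayClient client; // REST client pointing at the deployed component

	private Task result;

	@Given("^the data consists of one or more tasks$")
	public void theDataConsistsOfOneOrMoreTasks() {
		// seed the data through the component's own API
		client.createTask(new Task(null, "x", "a task used by the scenario", false, false));
	}

	@When("^user requests for task x$")
	public void userRequestsForTaskX() {
		result = client.getTask("x"); // hypothetical lookup method
	}

	@Then("^the correct task x is returned$")
	public void theCorrectTaskXIsReturned() {
		BDDAssertions.then(result).isNotNull();
		BDDAssertions.then(result.getTaskName()).isEqualTo("x");
	}
}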

End-to-end tests

Similar to component tests, the aim of these end-to-end tests is not to achieve code coverage but to ensure that the system meets the requested business scenarios. The difference is that in end-to-end testing, all components are up and running during the test.

As per the testing pyramid diagram, the number of end-to-end tests decreases further, taking into consideration the slowness they might cause. The first step is to have the setup running, and for this example, we will be leveraging Docker.

version: '3.7'
services:
    today-app:
        image: today-app:1
        container_name: "today-app"
        build:
          context: ./
          dockerfile: DockerFile
        environment:
           - SPRING_DATA_MONGODB_HOST=mongodb
        volumes:
          - /data/today-app
        ports:
          - "8082:8080"
        links:
          - mongodb
        depends_on:
          - mongodb

    mongodb:
        image: mongo:3.2
        container_name: "mongodb"
        restart: always
        environment:
           - AUTH=no
           - MONGO_DATA_DIR=/data/db
           - MONGO_LOG_DIR=/dev/log
        volumes:
           - ./data:/data
        ports:
           - 27017:27017
        command: mongod --smallfiles --logpath=/dev/null # --quiet

Snippet 10: The docker.yml definition used to deploy the defined service and the specified version of mongo as containers

As with component tests, it makes sense to keep end-to-end tests in a separate module and run them in different phases. The exec-maven-plugin was used to deploy all required components, execute our tests, and finally clean up and tear down our test environment.
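
A minimal sketch of such a configuration, assuming docker-compose is the command being wrapped around the integration-test phase (the plugin coordinates are standard; the phases, ids, and arguments shown are assumptions):

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>exec-maven-plugin</artifactId>
	<executions>
		<execution>
			<id>start-containers</id>
			<phase>pre-integration-test</phase>
			<goals><goal>exec</goal></goals>
			<configuration>
				<executable>docker-compose</executable>
				<arguments>
					<argument>up</argument>
					<argument>-d</argument>
				</arguments>
			</configuration>
		</execution>
		<execution>
			<id>stop-containers</id>
			<phase>post-integration-test</phase>
			<goals><goal>exec</goal></goals>
			<configuration>
				<executable>docker-compose</executable>
				<arguments>
					<argument>down</argument>
				</arguments>
			</configuration>
		</execution>
	</executions>
</plugin>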

Snippet 11: Using exec-maven-plugin executions with docker commands to prepare for tests and clean-up after tests

Since this is a broad-stack test, a smaller selection of tests per feature is executed, chosen based on perceived business risk. The previous types of tests covered low-level details; these verify whether a user story matches its acceptance criteria. A failure here should immediately stop a release, as it might cause severe business repercussions.

Conclusion

Handoff-centric testing often ends up being a very long process, taking weeks until all bugs are identified, fixed, and a new deployment readied. Feedback arrives only after a release is made, which makes the lifespan of a version our quickest possible turnaround time.

The continuous testing approach ensures immediate feedback: the DevOps engineer knows right away whether the implemented feature is production-ready, depending on the outcome of the tests. From unit tests up to end-to-end tests, all of them help speed up that assessment.

A microservices architecture enables faster rollouts to production since it is domain-driven, ensures failure isolation, and increases ownership. When multiple teams work on the same project, that is another reason to adopt such an architecture: it keeps teams independent so they do not interfere with each other’s work.

Improve testability by moving toward continuous testing. Each microservice has a well-defined domain, and its scope should be limited to one actor. The test cases applied are specific and more concise, and tests are isolated, facilitating releases and faster deployments.

Following the TDD approach, no code is written until there is a failing test, and confidence grows as iterative implementation turns that test green. Testing thus happens in parallel with the actual implementation, and all the tests mentioned above are executed before changes reach a staging environment, where the focus switches to more exhaustive testing such as load testing.

Agile, DevOps, and continuous delivery all require continuous testing. The key benefit is the immediate feedback produced by automated tests; without it, defects can degrade the user experience and carry high-risk business consequences. For more information about continuous testing, contact phoenixNAP today.

What is Cloud Computing Data Security

Definitive Cloud Migration Checklist For Planning Your Move

Embracing the cloud may be a cost-effective business solution, but moving data from one platform to another can be an intimidating step for technology leaders.

Ensuring smooth integration between the cloud and traditional infrastructure is one of the top challenges for CIOs. Data migrations do involve a certain degree of risk. Downtime and data loss are two critical scenarios to be aware of before starting the process.

Given the possible consequences, it is worth having a practical plan in place. We have created a useful strategy checklist for cloud migration.

planning your move with a cloud migration checklist

1. Create a Cloud Migration Checklist

Before you start reaping the benefits of cloud computing, you first need to understand the potential migration challenges that may arise.

Only then can you develop a checklist or plan that will ensure minimal downtime and ensure a smooth transition.

There are many challenges involved with the decision to move from on-premise architecture to the cloud. Finding a cloud technology provider that can meet your needs is the first one. After that, everything comes down to planning each step.

The migration itself is the tricky part, since some of your company’s data might be unavailable during the move. You may also have to take your in-house servers temporarily offline. To minimize any negative consequences, every step should be determined ahead of time.

With that said, you need to remain willing to change the plan or rewrite it as necessary in case something puts your applications and data at risk.

2. Which Cloud Solution To Choose: Public, Private, or Hybrid?

Public Cloud

A public cloud provides services and infrastructure off-site through the internet. While public clouds offer the best opportunity for efficiency through shared resources, they come with a higher risk of vulnerabilities and security breaches.

Public clouds make the most sense when you need to develop and test application code, collaborate on projects, or add incremental capacity. Be sure to address security concerns in advance so that they don’t turn into expensive issues later.

Private Cloud

A private cloud provides services and infrastructure on a private network. The allure of a private cloud is the complete control over security and your system.

Private clouds are ideal when security is of the utmost importance, especially if the stored information contains sensitive data. They are also the best choice if your company is in an industry that must adhere to stringent compliance or security measures.

Hybrid Cloud

A hybrid cloud is a combination of both public and private options.

Separating your data throughout a hybrid cloud allows you to operate in the environment which best suits each need. The drawback, of course, is the challenge of managing different platforms and tracking multiple security infrastructures.

A hybrid cloud is the best option for you if your business is using a SaaS application but wants to have the comfort of upgraded security.

3. Communication and Planning Are Key

Of course, you should not forget your employees when coming up with a cloud migration project plan. There are psychological barriers that employees must work through.

Some employees, especially older ones who do not entirely trust this mysterious “cloud,” might be tough to convince. Be prepared to spend some time teaching them about how the new infrastructure will work and assure them they will not notice much of a difference.

Not everyone trusts the cloud, particularly those who are used to physical storage drives and everything that they entail. They – not the actual cloud service that you use – might be one of your most substantial migration challenges.

Other factors that go into a successful cloud migration roadmap are testing, runtime environments, and integration points. Some issues can occur if the cloud-based information does not adequately populate your company’s operating software. Such scenarios can have a severe impact on your business and are a crucial reason to test everything.

A good cloud migration plan considers all of these things, from cost management and employee productivity to operating system stability and database security. Your stored data has real security needs, especially when its administration is partly entrusted to an outside company.

When coming up with and implementing your cloud migration system, remember to take all of these things into account. Otherwise, you may come across some additional hurdles that will make things tougher or even slow down the entire process.

meeting to go over cloud migration strategy

4. Establish Security Policies When Migrating To The Cloud

Before you begin your migration to the cloud, you need to be aware of the related security and regulatory requirements.

There are numerous regulations that you must follow when moving to the cloud. These are particularly important if your business is in healthcare or payment processing. In this case, one of the challenges is working with your provider on ensuring your architecture complies with government regulations.

Another security issue includes identity and access management to cloud data. Only a designated group in your company needs to have access to that information to minimize the risks of a breach.

Whether your company needs to follow HIPAA Compliance laws, protect financial information or even keep your proprietary systems private, security is one of the main points your cloud migration checklist needs to address.

Not only does the data in the cloud need to be stored securely, but the application migration strategy should keep it safe as well. No one – hackers included – who are not supposed to have it should be able to access that information during the migration process. Plus, once the business data is in the cloud, it needs to be kept safe when it is not in use.

It needs to be encrypted according to the highest standards to be able to resist breaches. Whether it resides in a private or public cloud environment, encrypting your data and applications is essential to keeping your business data safe.

Many third-party cloud server companies have their security measures in place and can make additional changes to meet your needs. The continued investments in security by both providers and business users have a positive impact on how the cloud is perceived.

According to recent reports, security concerns fell from 29% to 25% last year. While this is a positive trend in both business and cloud industries, security is still a sensitive issue that needs to be in focus.

5. Plan for Efficient Resource Management

Most businesses find it hard to realize that the cloud often requires them to introduce new IT management roles.

With a set configuration and cloud monitoring tools, many tasks switch to the cloud provider, while a number of roles stay in-house. That often involves hiring an entirely new set of talents.

Employees who previously managed physical servers may not be the best ones to deal with the cloud.

There might be migration challenges that are over their heads. In fact, you will probably find that the third-party company that you contracted to handle your migration needs is the one who should be handling that segment of your IT needs.

This situation is something else that your employees may have to get used to – calling when something happens, and they cannot get the information that they need.

While you should not get rid of your IT department altogether, you will have to change some of their functions over to adjust to the new architecture.

However, there is another type of cloud migration resource management that you might have overlooked – physical resource management.

When you have a company server, you have to have enough electricity to power it securely. You need a cold room to keep the computers in, and even some precautionary measures in place to ensure that sudden power surges will not harm the system. These measures cost quite a bit of money in upkeep.

When you use a third-party data center, you no longer have to worry about these things. The provider manages the servers and is in place to help with your cloud migration. Moreover, it can assist you with any further business needs you may have. It can provide you with additional hardware, remote technical assistance, or even set up a disaster recovery site for you.

These possibilities often make the cloud pay for itself.

According to a survey of 1,037 IT professionals by TechTarget, companies spend around 31% of their IT budgets on cloud services. This figure continues to increase as businesses keep discovering the potential of the cloud.

cost savings from moving to cloud

6. Calculate Your ROI

Cloud migration is not inexpensive. You need to pay for the cloud server space and the engineering involved in moving and storing your data.

However, although this appears to be one of the many migration challenges, it is not. As cloud storage has become popular, its costs have fallen, and the Return on Investment (ROI) for cloud storage makes the price worthwhile.

According to a survey conducted in September 2017, 82% of organizations found that their cloud migrations met or exceeded their ROI expectations. Another study showed that costs still run slightly higher than planned.

In that study, 58% of respondents spent more on cloud migration than planned. The ROI is not necessarily affected, as they may still have saved money in the long run, even if the original migration challenges sent them over budget.

One of the reasons people see a positive ROI is that they no longer have to maintain their current server farm. Keeping a physical server system running consumes considerable utilities, since it must be kept powered and cooled.

You will also need employees to keep the system architecture up to date and troubleshoot any problems. With a cloud server, these expenses go away. There are other advantages to using a third-party server company, including the fact that these businesses help you with cloud migration and all of the other details.

The survey offered some additional data, including the fact that most respondents – 68% of them – accepted the help of their contracted cloud storage company to handle the migration. An overwhelming majority also used the service to help them come up with and implement a cloud migration plan.

Companies are not afraid to turn to the experts when it comes to this type of IT service. Not everyone knows everything, so it is essential to know when to reach out with questions or when implementing a new service.

Final Thoughts on Cloud Migration Planning

If you’re still considering the next steps for your cloud migration, the tactics outlined above should help you move forward. A migration checklist is the foundation for your success and should be your first step.

Cloud migration is not a simple task. However, by understanding and preparing for the challenges, you can migrate successfully.

Remember to evaluate what is best for your company and move forward with a trusted provider.


man protecting against insider threats

Insider Threats: Types & Attack Detection CISOs Need to Know For Prevention

In this article you will learn:

  • All CISOs need to understand that their biggest asset – people – can also be their most significant risk.
  • Insider threats are increasing for enterprises across all industry sectors. Threats can come from anyone with access to sensitive data.
  • Be prepared to mitigate your risk with active insider threat detection and prevention.


What is an Insider Threat?

Insider threats are defined as cybersecurity threats that come from within your own company. It may be an employee, a vendor, or even an ex-employee. Anyone who has valid access to your network can be an insider threat.

Dealing with insider threats isn’t easy since the people you trust with your data and systems are the ones responsible for them.

definition of an insider threat

Types of Insider Threats

There are three types of insider threats: compromised users, careless users, and malicious users.

different types of insider threats to be aware of

Compromised Employees or Vendors

Compromised employees or vendors are the most important type of insider threat you’ll face, because neither you nor they know they are compromised. It can happen when an employee grants access to an attacker by clicking on a phishing link in an email. These are also the most common insider threats.

Careless Employees

Careless employees or vendors can become targets for attackers. Leaving a computer or terminal unlocked for a few minutes can be enough for one to gain access.

Granting DBA permissions to regular users (or worse, using software system accounts) to do IT work is another example of careless insider behavior.

Malicious Insider

Malicious attackers can take any shape or form. They usually have legitimate user access to the system and willfully extract data or Intellectual Property. Since they are involved with the attack, they can also cover up their tracks. That makes detection even more difficult.


Detecting Insider Threats

Most of the security tools used today – firewalls, endpoint scanning, anti-phishing tools – try to stop legitimate users from being compromised. Compromised users are also the most common type of breach, so it makes sense that so much effort goes into stopping them.

The other two profiles aren’t as easy to deal with. With careless behavior, knowing whether a system event was valid is almost impossible. Network and security admins probably don’t know the context behind an application’s behavior, so they won’t notice anything suspicious until it’s too late.

Malicious attackers, similarly, know the ins and outs of your company’s security system, giving them a good chance of getting away undetected.

The most significant issues with detecting insider threats are:

1. Legitimate Users

The nature of the threat is what makes it so hard to prevent. With the actor using their authentic login profile, no immediate warning is triggered. Accessing large files or databases infrequently may be a valid part of their day-to-day job requirements.

2. System and Software Context

For the security team to know that something terrible is happening, they need to know what something bad looks like. This isn’t easy, as business units are usually the experts when it comes to their software. Without the right context, detecting a real insider threat from the security operations center is almost impossible.

3. Post Login Activities

Keeping track of every user’s activities after they’ve logged in to the system is a lot of work. In some cases, raw logs need to be checked, and each event studied. Even with Machine Learning (ML) tools, this can still be a lot of work. It could also lead to many false positives being reported, adding noise to the problem.

what to look for with an Inside attack

Indicators of Insider Attacks

Detecting attacks is still possible. Some signs are easy to spot and take action on.

Common indicators of insider threats are:

  • Unexplained financial gain
  • Abuse by service accounts
  • Multiple failed logins
  • Incorrect software access requests
  • Large data or file transfers

Using systems and tools that look for these items can help raise the alarm for an attack, while regular (daily) endpoint scans will ensure workstations stay clean of viruses and malware.

Identifying Breaches in the System

Identifying breaches starts with the security team understanding normal behavior.


Normal behavior should be mapped down to the lowest level of access and activity. The logs should include the user’s ID, the workstation’s IP address, the accessed server’s IP, the employee’s department, and the software used.

Additionally, knowing what database was accessed, which schemas and tables read, and what other SQL operations were performed, will help the security team identify breaches.
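
As an illustration, a structured audit log entry capturing those fields might look like this (all field names and values are examples only):

{
	"userId": "jsmith",
	"department": "finance",
	"workstationIp": "10.20.4.31",
	"serverIp": "10.20.8.5",
	"application": "reporting-suite",
	"database": "customers",
	"schema": "sales",
	"tables": ["accounts", "transactions"],
	"sqlOperation": "SELECT",
	"timestamp": "2019-11-04T02:17:33Z"
}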

Detect Insider Threats with Machine Learning

One area where machine learning gives a massive ROI is in network threat detection. Although it isn’t magic, it can highlight where to point your resources.

By providing the system’s state and behavioral information to a machine learning algorithm, unusual and suspect actions can be identified quickly. Information like user and connection types, role access and application rights, and working times and access patterns can promptly be passed to ML applications.

Knowing what falls outside of the above normal system state can be done by mapping the following into the alert process:

  • Listing table access rights per app.
  • Specifying service account credentials and schemas used.
  • Monitoring the usual data storage locations.

Prevent Insider Threats With Threat Scoring

Correlating the above types of information allows you to create a threat score for each user activity. Coupled with the user’s credentials, this lets you alert the security team soon after a breach is found.
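
As a toy sketch only, such a score could be built from weighted versions of the indicators listed earlier, with the weights and alert threshold tuned per organization:

public class ThreatScorer {

	// signals correlated from logs; the fields and weights are illustrative
	static class UserActivity {
		int failedLogins;
		long bytesTransferred;
		boolean outsideUsualHours;
		boolean unusualAccessRequest;
	}

	static int score(UserActivity a) {
		int score = 0;
		if (a.failedLogins > 3) score += 25;                  // multiple failed logins
		if (a.bytesTransferred > 1_000_000_000L) score += 30; // large data or file transfers
		if (a.outsideUsualHours) score += 20;                 // access outside normal patterns
		if (a.unusualAccessRequest) score += 25;              // incorrect software access requests
		return score; // e.g., alert the security team when the score exceeds 60
	}
}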

Using this type of analytics is new to the industry. Early implementations have been successful in helping companies gain the edge on their rivals.

Vendors are starting to offer custom Security Risk Management solutions that include:

  • Behavior analytics
  • Threat intelligence
  • Anomaly detection
  • Predictive alerts

Statistics on Insider Threats

33% of organizations have faced an insider threat incident. (Source: SANS)

Two out of three insider incidents result from contractor or employee negligence. (Source: Ponemon Institute)

69% of organizations have experienced an attempted or successful threat or corruption of data in the last 12 months. (Source: Accenture)

It takes an average of 72 days to contain an insider threat.

Take a Proactive Approach to Insider Threats

Using historical data can help you quickly build risk profiles for each of your users. Mapping their daily interactions with the data you manage will let you know where high-risk profiles are. This will allow you to proactively engage in the areas where you have the biggest concerns.

Although any point in the network poses a risk, elevated access rights have the highest potential for abuse. Implementing key indicator monitoring on these user profiles with Active Directory policies will reduce the amount of risk you face.

Auditing exiting employees – ensuring their credentials are revoked and that they do not leave with company data – is also vital. Nearly 70% of outgoing employees admit to taking some data with them out the door. If their credentials are left intact, you may as well leave the door open for them. Privileged access management is a great way to manage user access.

Although unintended insider threats remain the biggest concern, it’s the malicious ones that can cause the worst disaster.


2020 Cybersecurity Trends: 31 Experts on Current Issues

This article was updated in December 2019.

According to expert estimates, we are trending for another record-breaking year for data breaches.

Is your company prepared?

Cybersecurity continues to be a hot topic in both media and business. The reasons are evident – the last two years saw consistent growth in cyber breaches with 2018 hitting a new record high. Namely, the recent 2018 Annual Data Breach Year-End Review by Identity Theft Resource Center revealed a 44.7 percent growth in the number of cyber incidents compared to 2016.

Developing at this pace, cybercrime threatens to become even more devastating for businesses in the years to follow. For companies across the globe, this strengthens the imperative to implement advanced data security strategies. To do so efficiently, they need to understand the most significant threats to their data.

Below are some expert predictions regarding business data security to help you prepare for a new year of cybercrime. Coming from industry experts, these insights will help you protect your data and secure your business long term. Read them through and reconsider your current practices. Is your cybersecurity strategy missing anything? 

We are thankful to everyone who participated, and we appreciate the opportunity to collaborate with such great minds. We hope you will find the tips listed below helpful and inspiring to prepare your business for another year of cyber incidents.

1. Privileged account misuse

Csaba Krasznay, a security evangelist, believes that in 2020, privileged account misuse will continue to be the biggest threat to the security industry. He suggests that organizations should start to mitigate the threats using the following strategies:

An increased focus on user behavior analytics over IT assets.

Historically, IT security has mainly been focused on securing IT components, such as data, related processes, IT services, servers, networks, etc. However, if the user is the weakest link in the IT security chain, organizations should place more emphasis on identity and access management.

The implementation of a higher degree of automation through machine learning.

AI-based analysis of behavioral biometric data will be the next major trend in cybersecurity and data protection. Sophisticated machine learning algorithms can build up a profile of a user’s typical behavior, identify unusual patterns of activity and highlight potential threats in real-time before they have a chance to materialize. By automatically detecting suspicious data, the whole security process becomes more efficient, obviating the need for a painstaking manual review of log data.

Csaba Krasznay, Security Evangelist, Balabit

Csaba Krasznay is Balabit’s Security Evangelist and an Assistant Professor at the National University of Public Service in Budapest, Hungary. He is responsible for the vision and strategy of Balabit’s Privileged Access Management solutions.

2. Insider cyber security threats and inadequate security strategies

Assuming that you will be able to stop all breaches.

Too much emphasis and investment are focused on protecting the endpoints and connected devices on the network with the goal of preventing all breaches. It is time to acknowledge that even the most experienced security team cannot possibly keep all cybercriminals out – and insider threats will always be a challenge. Instead, there must be a shift toward active defense. This mindset will give the victims of hackers a pathway towards preventing more damage. The question should not be: “How can I make sure our systems are never penetrated?” Instead, the questions to ask are:

“When a hacker penetrates the network, what will he be able to access? How can we make sure the hacker can’t open, share, print or download any sensitive files?”

Entrusting encryption as your savior.

In 2020, we will see lots of investments in encryption and other data security technologies. Buyer beware. Encryption products, although crucial in many contexts and notoriously hard to use, will fail to stop the problem of data loss. Keys will be lost or stolen, at times by the companies who generate them. Users will be confounded by managing their own keys, which is hard to do when also trying to control one’s passwords.

Bad actors within your company.

Employees are one of the top cybersecurity risks to organizations, merely by clicking malicious URLs or bypassing security controls, however unintentionally. The resulting frustration festers into a paternalistic, us-vs-them attitude between security operations center teams and the rest of the organization.

Try googling “there’s no patch for stupidity,” or “people are the weakest link in the cybersecurity chain.” They have become the rallying cries for not knowing how to deal with what the sec pros dub “the human element” as though it were a zoonotic disease. Users will continue to be a weak link in the chain in 2020, but the problem is that experts are pretty bad at figuring out why.

Dr. Salvatore Stolfo, Chief Technology Officer, Allure Security

Dr. Salvatore Stolfo is a professor of Artificial Intelligence at Columbia University. He has been granted over 47 patents and has published over 230 papers and books in the areas of parallel computing, AI knowledge-based systems, data mining, computer security and intrusion detection systems.

3. The use of machine learning for hacking attempts

Stolen customer data almost inevitably leads to increases in the overall volume of chargebacks, so we work closely with partners to help clients mitigate that risk. One of the biggest overall threats I am seeing is that hackers and fraudsters are more and more using our own technology against us.

Take machine learning, for example. With the ability to process mass amounts of data and adjust algorithms on the fly, we can detect suspicious behavior faster, and with increasingly higher accuracy.

However, criminals are doing the same thing. They use machine learning to calculate defenses, feed false information to detection programs, and the like.

I also believe internal threats, disgruntled employees, for example, will continue to grow. Externally, I do not doubt that instances of ransomware will increase, probably dramatically: fraudsters have shown that such attacks WORK—and are profitable—so there is no reason to believe they will decrease.

Monica Eaton-Cardone, Co-founder and COO, Chargebacks911

Monica Eaton-Cardone is an international entrepreneur, speaker, author, and industry thought leader. She is the co-founder and COO of Chargebacks911, a global risk mitigation firm helping online merchants optimize chargeback management globally through offices in North America, Europe, and Asia.

4. Organized hacking efforts

Gregory Morawietz suggests that in 2020, one of the most significant threats will be organized efforts: more attacks from state-backed hackers, along with large-scale social attacks that try to influence political and current events.

When it comes to his advice on how businesses should prepare, Morawietz suggests:

Buy a firewall, have a security policy, keep strong passwords and treat your employees fairly and with respect.

Gregory Morawietz

Gregory Morawietz, IT Security Specialist, Single Point of Contact

Gregory Morawietz is a cloud and IT Security Specialist with over twenty years of network and security experience. He has worked with hundreds of firms on improving IT environments, architecting cloud environments, consulting, and integrating technology for the enterprise network.

5. Ransomware and zero-day attacks

Ransomware should be close to the top of everyone’s cybersecurity trends list. Disgruntled employees or former employees will still launch attacks. We will see more zero-day attacks as the market for vulnerabilities heats up.

What should businesses do to prepare?

Busy business leaders need to take these six catchy words to heart:

  • Care and share to be prepared.
  • Care enough about cyber-security to invest in it, and share what you learn with other good guys.
  • Level the playing field because the bad guys already know about your security operations.

Greg Scott

Greg Scott, Senior Technical Account Manager, Infrasupport Corporation

Greg Scott is the author of Bullseye Breach, a cybersecurity book disguised as fiction that tells the story of how elements of the Russian mob penetrated retailer Bullseye Stores and stole 40 million customer credit card numbers.

6. Lack of cybersecurity talent

One of the top cybersecurity trends in 2020 will be a lack of cybersecurity professionals. We are still in a position where almost half of the vacancies go unfilled, and a lack of staff means a lack of solutions to simple problems. Applying basic levels of protection in smaller businesses, or running training and awareness programs in larger companies, requires human resources and can make a big difference against everyday threats.

Karla Jobling, MD, BeecherMadden

Karla Jobling is MD of BeecherMadden. She has recruited for information security positions for over ten years, managing client requirements in the USA, Europe, and the UK.

7. Inadequate cyber hygiene

In 2017, we saw the widespread impact of the Petya and WannaCry attacks, both of which were a direct result of businesses failing to do the basics of cyber hygiene.

The fact is cyber hygiene was the problem ten years ago. Cyber hygiene was the problem (in flashing lights with horns blaring) this year. I am completely confident it will be a problem again in 2020. Enterprises find it incredibly difficult to demonstrate active control over their cyber hygiene and thus efficiently remediate top cybersecurity risks: the larger the organization, the more challenging it is to maintain the ‘basics,’ such as identifying assets, updating and patching software, running standard controls, and educating users.

Given that 80% of all cyber security threats could be stopped by addressing the issue of cyber hygiene, it needs to continue to be a key focus for security teams around the globe.

Nik Whitfield, Computer Scientist, Jones Consulting (UK) Ltd

Nik Whitfield is a noted computer scientist and cybersecurity technology entrepreneur. He founded Panaseer in 2014, a cybersecurity software company that gives businesses unparalleled visibility and insight into their cybersecurity weaknesses.

8. Trending types of cyber security threats

Internet Of Things

Using “smart” devices for malicious activity like mining for bitcoins or DDoS attacks will become more commonplace. These threats come from everywhere, but they can be avoided!

Corporate Espionage

Undetected hacks that leave things operating as usual, but are actually siphoning off critical data. Again, these threats come from everywhere, including insiders within or closely associated with an organization. This type of risk can be mitigated by going back to the basics and getting a third-party evaluation.

You don’t know what you don’t know

Having a blind trust in cloud companies and assuming that the protections they implement are for you/your company’s best interests. Only YOU are responsible for YOUR security.

Cybersecurity Trends expert

Chadd Kappenman, CISO, SMS AZ

Chad Kappenman is Chief Information Security Officer (CISO) at SMS AZ , a local Arizona company that enables small and medium-sized businesses to be proactive about their security efforts.

9. More advanced hacking technologies

Cybercriminals are incredibly sophisticated and developing ways to “listen in” now, not just to grab credit card numbers shown in text files. Software already exists that can “tap” a voice call and understand it has heard a credit card number, expiration date, or a unique code. It can transpose that data, store it, and sell it within seconds.

With active listening in the gaming space, for example, a cybercriminal could target young people who are completely unaware of the threat. What they are saying can be turned into valuable information, not just to steal identities or money, but to find future human trafficking victims. These technologies will become even more advanced.

Patrick Joggerst, Executive Vice President of Business Development, Ribbon Communications

Patrick Joggerst is the Executive Vice President of Business Development for Ribbon Communications, a secure real-time communications company. Previously, Patrick was EVP of Global Sales & Marketing for GENBAND.

10. Improperly secured cloud data

In 2020, we expect to see “more of the same.” Ransomware is very lucrative for cyber-criminals. It’s perhaps the easiest cybercrime to monetize because the criminals are taking payments directly from the victims. We advise companies to double down on basic security measures. These include a layered defense such as firewall with URL and malicious site blocking, filtered DNS, segmented networks, and security clients (anti-virus and anti-malware). But most of all, employee awareness and training is always the best ROI.

Secondly, expect more data breaches. 2018 was perhaps a record year for publicized data breaches – both in number and in scope. We advise companies to revisit all their stores of information and ensure they have got the proper controls and encryption – encryption at rest, encryption in transit, etc. This is another area where an employee error can overcome the best technology defenses. So employee security training awareness programs are also critical.

Lastly, there were quite a few instances of improperly secured cloud data in 2018. A lot of “MongoDB” databases with default admin credentials and cloud storage buckets were left wide open. This will continue into 2020. Companies need to perform regular SOC audits and reports on their access controls and settings on cloud services. The cloud doesn’t make security issues go away. In some respects, it increases the “attack surface.”

Timothy Platt Security threat analyst

Timothy Platt, VP of IT Business Services, Virtual Operations, LLC

Tim Platt has almost 25 years of experience in multiple areas of technology including programming, networking, databases, cloud computing, security, and project management. He currently works at Virtual Operations, LLC, providing technology consulting in the Orlando, FL area.

11. Weak passwords continue to be a trend in cybersecurity

This year, companies and consumers were plagued by massive cyber attacks and security breaches, from WannaCry to Equifax. In 2020, companies will have to do a lot to win back trust and ensure a safer experience for the customers they serve.

We have all read the tips on how to secure a website, but one misguided argument encourages individuals to create stronger passwords. What if the solution is to rid the world of passwords altogether?

As the former Worldwide Fraud Director of American Express and CEO of Trusona, cybersecurity expert Ori Eisen has dedicated his life to fighting crime online. Working with other notable influencers like Frank Abagnale (the former con man played by Leonardo DiCaprio in Catch Me If You Can), Eisen is on a mission to protect businesses and consumers across the globe by replacing static usernames and passwords with secure identity authentication, thus eliminating threats of organized cybercrime and rampant malware. Eisen hopes companies will continue to make the jump towards a password-less future.

12. Cyber-Skills Gap: We Are ALL the Problem

Cybersecurity training is everyone’s responsibility. While online training isn’t the golden arrow for the massive, industry-wide skills gap, it does intertwine security in the culture of the organization and raise awareness and culpability at all levels. As an employee, don’t let anyone tell you there’s no budget for continued training. Make your case on how it is beneficial for you and the organization. Here are five diligence practices that organizations can put in place before the ball drops:

  • If your business depends on the internet in any way, get a third-party DDoS protection service for business continuity.
  • Classify your digital assets immediately, and just as quickly fortify the highest risk areas first.
  • Find the hidden threats, get them out, and don’t let them back in. A defensive security approach will only get you so far.
  • Programmatic vulnerability scanning software can identify a substantial number of holes in their defenses, and when found, the organization must make plans to continuously and expeditiously patch their systems. Rule of thumb: There are no excuses.
  • AI-based malware prevention should be the de facto standard on all endpoints, not traditional signature-based antivirus. 

Attacks will happen

Nation-state attackers continue to challenge the stability and safety of our critical infrastructure. Criminals are opportunistic and gladly enter unlocked doors, especially since companies continue to disregard their fiduciary responsibility to invest in protecting themselves from cyber attacks. Because of this, we will see an increased number of attacks; they will be successful, and they will be public. Additionally, massive Denial of Service (DoS) attacks will increase and cripple businesses and the internet itself.

Kathie Miley, Chief Operating Officer, Cybrary

As the COO, Kathie Miley brings more than 20 years of experience to help design and implement company business strategies, plans, and procedures, oversee daily operations of the company’s sales and marketing efforts, assist company leadership in strategic ventures, and manage relationships with all business customers, partners, and vendors.

13. Ransomware becoming more sophisticated

With data breaches and leaks on the radar of every industry, leaders are looking to cybersecurity experts for guidance more than ever.

The top IT Security Threats we expect to see include an increasing number of more sophisticated ransomware attacks that are difficult, if not impossible to detect. In response, leading IT professionals will place more emphasis not only on endpoint security but also on corporate data-protection.

For many government-based organizations, tech startups, and research labs, breaches can mean exposed vital and sensitive data. Although the cloud is a looming entity in the enterprise, it is estimated that half of all data lives on endpoint devices. We will see an increase in large-enterprise attacks costing hundreds of millions of dollars in revenue. Additionally, hackers will press for increased ransoms due to easier information access.

To top off the evolution of ransomware, we’ll continue to see Petya-grade attacks threaten businesses and evolve into tools for hackers to leverage in 2020. With this in mind, we need to question the ability for an organization or business to protect itself. The only way companies can solve this is by adopting and streamlining evolving technology.

Ian Pratt, President and co-founder, Bromium

Ian Pratt is Co-Founder and President at Bromium, where he is focused on the continued rapid growth of the business through delivering the superb security provided by Bromium’s products to mainstream Enterprises.

14. New technologies will create new loopholes

With the rise of Bitcoin, Ethereum, and other cryptocurrencies, many businesses and corporations started exploring blockchain technology. It is estimated that more than 50% of corporations are expecting to integrate with this technology sometime this year.

However, with new technologies comes a valuable opportunity for cybercriminals. We have already started witnessing this, as news comes out every other week of cybercriminals hacking into cryptocurrency exchanges and corporations that use this technology. This is expected to continue heavily in 2020, with more criminals and hackers finding similar opportunities.

Businesses and corporations that choose to adopt such an early-stage technology also run the risk of attracting similar attacks. To prepare for such threats, businesses that plan on using blockchain technologies should focus heavily on building the right security infrastructure to protect themselves from hackers taking advantage of the technology’s vulnerabilities at this stage.

David Kosmayer, CEO and Founder, Bookmark Your Life Inc.

David Kosmayer is CEO and Founder of Bookmark Website Builder, an AI-powered website builder disrupting the website design industry. David created his first company at 22, just out of college.

15. Smartphone risks

Enterprise

For several years now, cybersecurity has been a top priority for businesses of all sizes and industries. And yet, nearly every month another massive data breach takes place, leaving businesses and their customers highly vulnerable.

Even the most established organizations with ample resources are not safe (take Verizon’s or Chipotle’s recent breaches, for instance), and worse, cybercrime levels are only continuing to rise. The first attack (which is inevitable) of 2020 will set the tone for the year.

Consumer devices

Any individual who owns a smartphone or laptop needs a way to protect themselves against the ramifications of identity fraud should their personal information become compromised. Savvy consumers who are paying attention might agree that relying solely on businesses to protect one’s personal information is naive and no longer enough. Given the realities of our increasingly complex, digital world, it behooves consumers to work to protect their privacy on their own.

Establishing company-wide security policies.

All it takes is one employee clicking an insecure link, and your server is no longer secure. Implement a policy to keep employees informed of the latest scams and educate them on how to stay vigilant and avoid downloading information from emails they do not recognize. Highlight the fact that their participation will boost efforts to keep an eye out for fraud and attacks.

Consumers can get the right cyber insurance.

The loss of sales caused by cybercrime has been reported to cost SMEs nearly $21,000. That could put a business under. Cyber insurance can lessen the financial blow of a cyber attack and give your business the support it needs to get back on track. Some business insurance policies may include limited coverage against cyber attacks compared to a standalone cyber insurance policy. It is imperative to speak with a licensed insurance agent with cyber insurance experience to understand the proper type of coverage your specific business needs.

Keith Moore, CEO, Coverhound

Keith Moore is the CEO of insurance technology leader CoverHound® and the Founder & CEO of CyberPolicy™, both of which are based in San Francisco, California.

16. Black market demand for personal information continues to surge

Seeing as we’re in the midst of two giant data breaches with Equifax and Uber, I expect us to see much of the same in 2020. A person’s identity, such as their SSN or credit card information, is extremely valuable. As long as people on the black market keep purchasing stolen info and identities, hackers will continue to attack large data stores and take people’s information. Luckily, the implementation of blockchain technology could mitigate much of this issue, but widespread adoption is still a long way off. Also, hackers always seem to find ways around the newest data security, anyway.

Evan Tarver, Fit Small Business

Evan Tarver is a staff writer at Fit Small Business, specializing in Small Business Finance. He is also a fiction author and screenwriter.

17. Email phishing

Researchers in the second half of 2017 have been finding more and more flaws in the way email clients deal with fraudulent emails. There have been further weaknesses discovered in email protocols themselves.

Moreover, automated tools that make it nearly impossible to detect fraudulent emails have recently been published. Phishing is already one of the most difficult attack vectors to defend against, and this will only become more difficult. Businesses should focus on training their staff to prepare for more fake emails and to spot fakes using clues in the email.

Pieter Van Iperen, Founder, Code Defenders

Pieter VanIperen is a Founding Member of Code Defenders, a collective that protects the long tail of the internet, an Adjunct Professor of Code Security at NYU, a Certified Pen Testing Engineer (Ethical Hacker) and a Certified Secure Web Application Engineer. 

18. Continued evolution of malware trends

Since 2017 was hallmarked by a record number of hacks of major data stores, like Equifax and Verizon, David believes the focus should be put on storing and protecting precious data in a place where it can’t be tampered with or altered: an immutable bucket.

According to David, the biggest mistake IT people make is worrying about making their data hack-proof rather than focusing on storing it someplace safe. Nothing is completely hack-proof, but lost data can certainly kill a business. Data stored in an immutable bucket cannot be altered or deleted: if a virus attempts to take over your data and encrypt it, the attempt simply fails with an error message saying the data cannot be altered. If the victims of this year’s breaches had put their data into immutable buckets, it would still be there in perfect condition, because neither a person nor a piece of software could alter the content. If you have sensitive business data, it is worth putting it into an immutable bucket and making it immune to ransomware and other threats.
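
An immutable bucket maps to what S3-compatible object stores (Wasabi included) expose as Object Lock. As a minimal sketch, assuming boto3 with credentials already configured and an illustrative bucket name, write-once retention looks roughly like this:

```python
# Hedged sketch: write-once storage via the S3 Object Lock API (boto3).
# Bucket name and retention period are illustrative; check your provider's
# docs, since details (e.g., region configuration) vary.
import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="backups-immutable", ObjectLockEnabledForBucket=True)

# COMPLIANCE mode: no one, not even the root account, can delete or
# overwrite locked object versions until the retention period expires.
s3.put_object_lock_configuration(
    Bucket="backups-immutable",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},
    },
)

# Anything written now is protected: ransomware that later tries to
# encrypt or delete the object gets an access-denied error instead.
s3.put_object(Bucket="backups-immutable", Key="db-dump.sql", Body=b"...")
```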

David Friend, Co-founder and CEO, Wasabi

David has been a successful tech entrepreneur for more than 30 years. David co-founded Carbonite, one of the world’s leading cloud backup companies, and five other companies including Computer Pictures Corporation, Pilot Software, Faxnet, and Sonexis.

19. Increased reliance on convenience services

On the edge of another year, the wrath of cybersecurity threats continues. Given the breaches in 2017, such as Equifax, Sonic, FAFSA, and Verizon, we are going to keep feeling the repercussions of identity theft and ransomware. The nation needs to prepare for when and how this personal information is going to be used against us, and individuals need to be careful about what they are doing online. The busier our lives get, the more we rely on convenience services such as Uber, DocuSign, and America’s JobLink, but unfortunately, these come at the cost of potential identity theft.

What should businesses do to prepare?

Businesses need to stop looking for cybersecurity professionals in the wrong places and using outdated ways of hiring employees. We find that many companies lack an understanding of potential cyber threats and are unfamiliar with the state of the cybersecurity landscape. Therefore, they know no better than to rely on a resume instead of asking a potential employee for validated proof of their skills. This is the main reason the National Cyber League started providing NCL Scouting Reports. The report reflects a candidate’s cybersecurity skills growth, and cybersecurity students are getting jobs because it shows employers their skills have been tested and validated.

Dan Manson, National Cyber League Commissioner, Professor in Computer Information Systems (CIS) at California State Polytechnic University, Pomona (Cal Poly Pomona).

Dr. Manson has taught Information Systems Auditing, Internet Security and Computer Forensics in the College of Business Administration Computer Information Systems undergraduate and Master of Science programs. Dr. Manson has also served as the CIS Department Chair and Campus Information Security Officer.

20. Increased Attacks on emerging blockchain solutions

Based on the past two years, 2020 may very well see a ‘next phase’ of attacker activity that should have CISOs on high alert:

  • Acceleration of data breaches targeting individual information, similar to those we have seen throughout the last year, such as Equifax, the 198 million US voter registration breach, the IRS taxpayer information, and the ongoing medical information breaches
  • New attacks upon individuals or entire systems as a result of the information mined from these breached records, or the use of it for identity theft or spoofing to access higher-profile assets or objectives
  • Increased attacks upon Bitcoin and emerging blockchain solutions, driven by the high financial motivation, the assertion that these systems offer stronger security, and the resulting confidence that organizations place in them
  • More social engineering, which has become the top-ranked attack vector, along with identity theft as one of the top crimes in the US. The information obtained from the breaches of 2017 will give attackers substantial insight into how best to compromise the employees of organizations in their personal lives, or how to gain access to government or business assets through them, including those with privileged access.

Organizations should stay vigilant and double down on employee education and awareness, increase controls on identity and access, and improve audit trails and their frequency. Most importantly, they need to employ tools that implement advanced anomaly detection methods to determine when information and systems are being accessed inappropriately.

Monika Goldberg, Executive Director, ShieldX Networks

Monika Goldberg is a dynamic executive who brings over 25 years of industry experience from leadership roles at infrastructure and security companies such as Intel Security, McAfee, Cisco, HP, and NetApp. She currently serves as Executive Director at ShieldX Networks, a Gartner Cool Vendor that she helped groom.

21. Network endpoints becoming increasingly difficult to secure

Data security failures and cyber attacks such as the Equifax, Yahoo and OPM breaches demonstrate the extent and diversity of security challenges IT professionals are facing around the world.

The increased usage of laptops, smartphones and IoT devices all represent network endpoints that are increasingly difficult to secure, as most employees are always connected via multiple devices. In 2020, with the growing complexities of endpoint security, emphasis will be placed on tracking and managing how users access corporate data across each of their devices. When analyzing the flow of data for threats and vulnerabilities, powerful search and analytic tools can then deliver necessary, actionable intelligence.

Rob Juncker, Senior Vice President, Product Development at Code42

As senior vice president of product development, Rob leads Code42’s software development and delivery teams. He brings more than 20 years of security, cloud, mobile, and IT management experience to Code42.

22. Sophisticated cyber attacks within your infrastructure

No organization is ever 100% secure. Detecting and stopping sophisticated cyber attacks that have bypassed traditional perimeter security systems and are now active within your infrastructure should be on your top-three list of 2020 security priorities.

Security teams will need to factor in a slew of unforeseen threats next year, including those from bad actors scanning the Dark Web in search of the newest attack tools.

Increasingly, security and IT teams are collaborating to address these stealthy attacks before they do real damage. This includes the use of IT infrastructure and security solutions that work together. Leveraging new technologies such as AI-based machine learning, analytics and UEBA can be extremely useful to improve attack discovery and decrease attack dwell times, as well as to send alerts which activate automated or manual enforcement actions that suspend potential attacks until they can be thoroughly investigated.
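
As one illustrative stand-in for the anomaly detection Larry describes (not any vendor’s product), a model such as scikit-learn’s Isolation Forest can score per-user activity against a learned baseline; the feature set and threshold below are hypothetical:

```python
# Sketch: flag anomalous user behavior with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per user-day: [logins, failed logins, MB downloaded, off-hours events]
baseline = np.random.default_rng(0).poisson(lam=[8, 1, 40, 2], size=(500, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

today = np.array([[9, 2, 45, 3],       # looks like a normal day
                  [7, 30, 900, 25]])   # failed logins + bulk download spike
flags = model.predict(today)           # 1 = normal, -1 = anomaly

for row, flag in zip(today, flags):
    if flag == -1:
        print(f"alert: suspend and investigate activity {row.tolist()}")
```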

Larry Lunetta, Vice President of Marketing for Security Solutions, Aruba, a Hewlett Packard Enterprise company

In his current role as Vice President, Security Solutions Marketing for Aruba, a Hewlett Packard Enterprise company, Larry manages the positioning, messaging and product marketing for the portfolio of security products and solutions that Aruba brings to market.

23. Cybersecurity threat: Advanced email phishing attacks like Mailsploit

While it is all but universally accepted that email phishing will remain the primary attack vector in 2020, recently discovered vulnerabilities such as Mailsploit, an exploit designed to spoof an email sender’s name to bypass DMARC, present substantial challenges for organizations’ phishing mitigation and email security.
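
A small, hedged illustration of the DMARC side of this (not IRONSCALES’ product): querying a domain’s published policy with the dnspython library. A weak or missing policy (p=none) is exactly what sender-spoofing exploits lean on:

```python
# Sketch: look up a domain's DMARC policy record via DNS.
import dns.resolver

def dmarc_policy(domain: str) -> str:
    """Return the DMARC TXT record for `domain`, or a warning string."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return "no DMARC record published"
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return "no DMARC record published"

print(dmarc_policy("example.com"))  # e.g. "v=DMARC1; p=reject; ..."
```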

To reduce the risk of spear-phishing, spoofing and impersonation vulnerabilities, organizations should consider implementing the following steps:

  • Augmenting the representation of senders inside the email client by learning true sender indicators and scoring sender reputation through visual cues and metadata associated with every email
  • Integrating automatic, smart, real-time email scanning into multiple anti-virus and sandbox solutions so forensics can be performed on any suspicious emails, whether detected or reported
  • Allowing quick reporting via an augmented email experience, thus helping the user make better decisions

Eyal Benishti, Founder & CEO, IRONSCALES

Eyal Benishti is a veteran malware researcher, co-founder and CEO of IRONSCALES, the world’s first phishing prevention, detection and response provider.

24. Outdated equipment

2017 was a year of technical innovation, and that includes innovative cybercrime as well. We’ve seen ransomware evolve in unexpected ways, becoming a malicious enterprise operation. With vulnerabilities like KRACK infiltrating the standards we once thought secure, it’s more important than ever for businesses to make sure their equipment is up to date. Regular updates and security patches are essential!

What should businesses do to prepare?

Employee security training is equally important, especially when it comes to phishing scams. As with the advances in malware, cybercriminals are getting smarter about sneaking past the safeguards that keep them at bay. The recent cyber attacks mimicking PayPal and Netflix, services we frequently use in our personal lives, remind us to be wary of any email that hits your or your employees’ inboxes. Employee training and education serve as a critical barrier against these kinds of attacks, protecting against new cyber threats in the coming months. It only takes a single failure due to lack of proper training to take down an entire network.

Amy O. Anderson, Principal, Anderson Technologies

Amy O. Anderson is Principal of Anderson Technologies, a St. Louis IT company that optimizes technology to meet the demands of small and mid-sized businesses. For over 20 years, Anderson Technologies has provided the IT solutions that firms need to be competitive in today’s marketplace.

25. The development of AI and automation

The development of artificial intelligence and automation is the most imminent and dangerous trending threat that we’ll see in 2020. Artificial intelligence has already been weaponized, automating the process of malware dissemination and data retrieval. Machine learning has already been used to combat AI cyber attacks, but companies both large and small will be hit hard if they don’t adapt.

Harrison Brady, Communications Specialist, Frontier Communications

26. Mass growth of digital technologies 

Mass adoption of digital technology has contributed to a wider dissemination of data. Environments which hold Personally Identifiable Information (PII) are constantly under external attack. If the information is stored online, one can assume it will be compromised.

Businesses will require a strong data governance strategy, framework, and controls to mitigate this risk, especially given the increased corporate use of social media tools and technology.

In addition to this, the rise of cloud-based technology platforms such as Amazon and Salesforce with an increased need for continuous delivery will bring new threats of unauthorized access by developers and third parties to production environments. These threats need to be balanced with the increasing demand for continuous delivery in a disruptive technology environment.

The focus of cybersecurity is moving toward controlling what matters rather than controlling everything, and toward working out ways to achieve the desired outcomes rather than locking everything down.

The increase in the volume and sophistication of ransomware attacks and cyber terrorism is crippling the global economy. Ransomware could severely impact organizations globally wherever the threat is not mitigated. Businesses need to take this threat seriously if they are to avoid falling victim to attacks similar to the May 2017 WannaCry ransomware cryptoworm.

Felicity Cooper, Head of Technology Risk at the Commonwealth Bank of Australia

Felicity Cooper is an expert in risk management solutions – acting as General Manager responsible for Line 1 Technology Risk across Enterprise Services since May 2016, and as Head of Technology Risk, Retail and Wealth, at the Commonwealth Bank (CBA) for the last four years.

27. Crypto-jacking

Crypto-jacking activity has been exploding, and we will undoubtedly see more threats in 2020, particularly as the value of cryptocurrencies escalates. Secondly, the cybercriminal underground will continue to evolve and grow further this year. Apart from that, there is a very strong chance that state-sponsored attacks will increase immensely.

With cyber attacks on the upsurge, every industry has become a target. However, by becoming proactive about cybersecurity and employing innovative security strategies and tools, along with spreading awareness about the epidemic, organizations can indeed enhance their security against countless threats and avoid expensive data breaches. Many big organizations are improving their IT systems, but we need to do more. We have more devices, more data, more threats, more sophisticated attacks, and more attackers. We must band together as an industry to push in the opposite direction: towards blazing-fast solutions on a massive scale. That is our only hope. And over the next decade, organizations that promise results without speed or scale will perish, as they should.

Kashif Yaqoob, Brand Strategist, Ivacy  

A digital security and privacy enthusiast working at Ivacy, with a focus on developing sustainable brands in an increasingly complex media landscape.

28. Cultural inertia grows as a cybersecurity threat

One of the most significant cybersecurity trends will be cultural inertia. Not moving forward because you are not sure how to get started, or because you take the stance that “security is important, but not a priority,” will most likely mean that your company becomes the next headline.

2017 marked yet another year of massive breaches. Yahoo and Equifax topped the charts, but there were, unfortunately, plenty of other incidents that punctuate the fact that security is not yet a top priority for many companies. If security priorities are not first, they are last. Security initiatives need to be embedded into overall programs and objectives, not an afterthought or a periodic exercise.

Unfortunately, I fear that there will continue to be substantial security breaches and issues in 2020, especially as more IoT devices flood the market. This will result in more regulatory discussions, which I hope actually help increase resiliency.

Mike Kail, CTO and co-founder, CYBRIC

Mike previously served as CIO at Yahoo and VP of IT operations at Netflix and has more than 25 years of IT operations and technology leadership experience. He also currently serves as a technical and strategic advisor to a number of technology companies.

29. Advanced persistent threats gaining more AI capability in 2020

One of the biggest cybersecurity trends we will see in 2020 is the improvement of techniques and services that already exist. For example, social engineering will continue to get better, ransomware will continue to evolve, exploit-based attacks will continue to grow faster, and newly patched vulnerabilities will be exploited ever more quickly.

Secondly, we might see more artificial intelligence (AI) malware, which can think in different ways and is self-aware. Watch out for advanced persistent threats, as we might see them gain more AI capability in the new year. We will also notice that issues with IoT will grow and continue to be a problem.

What should businesses do to prepare?

Start doing something. Don’t wait until the last minute to take action. Begin following NIST guidelines as a resource for technological advancement and security, and implement those guidelines to mitigate risk. If you do not understand them, then work with a security expert, or partner with someone who does, to ensure that you are compliant and have the proper tools in place. You do not need the latest technology, anti-malware, or sandbox to prepare for these threats. Instead, figure out where the gaps are in your security posture and learn how you can better monitor, manage, and fill in those gaps.

Matt Corney, Chief Technology Officer, Nuspire Networks

Matt Corney is chief technology officer at Nuspire Networks, bringing over 20 years of data security experience to the company. As CTO, Corney oversees the management of Nuspire’s SIEM solutions as well as the overall creation, maintenance, and updating of the company’s current and future product portfolio.

30. Misconfiguration of permissions on Cloud resources

The most impactful threat to companies will be the misconfiguration of permissions on cloud resources. As both small companies and large swaths of the Fortune 500 move to the cloud, security practitioners will need to relearn how to restrict access and permissions to data. The model is closer to Active Directory: powerful, but with a steep learning curve before IT staff can confidently monitor and restrict access.
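
One narrow, hedged example of what relearning those restrictions looks like in practice: auditing S3 bucket ACLs for grants to “everyone.” This is a single check, not a full cloud-permissions audit, and it assumes boto3 with read-only credentials:

```python
# Sketch: find S3 buckets whose ACLs grant access to public groups.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
            print(f'{bucket["Name"]}: {grant["Permission"]} granted publicly')
```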

2020 has been dubbed the year of Kubernetes and container orchestration in production. Expect attackers to start paying attention to Docker and Kubernetes for post-exploitation fun. As Brad Geesaman presented a few weeks ago at KubeCon, you need to harden your instance of Kubernetes on most public clouds and also monitor it.

We expect attackers to start looking for privileged containers on Docker Hub and to start abusing the Kubernetes and Docker APIs. Expect this to become an issue after containers running web applications get exploited, while the rest of the Kubernetes world upgrades to newer and safer versions of Kubernetes.
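
A minimal sketch of the monitoring side, assuming the official Kubernetes Python client and a kubeconfig with read access: walk every pod and flag privileged containers, one of the things attackers will be hunting for:

```python
# Sketch: list privileged containers across the cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for container in pod.spec.containers:
        sc = container.security_context
        if sc and sc.privileged:
            print(f"privileged: {pod.metadata.namespace}/"
                  f"{pod.metadata.name} -> {container.name}")
```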

Expect this to be the year that someone backdoors popular container images on a container registry.

The last prediction is not a shocker: expect that IoT devices will continue to be the launching point for DDoS attacks, and that 2020 will be the year these become more sustained attacks against infrastructure like GitHub and Dyn.

Pete Markowsky, Co-founder and Principal Engineer, Capsule8

Pete Markowsky has been involved with information security and application development since first working with Northeastern University in 2001. He has worked across the security industry from .edu to .mil in roles such as development, security engineer, risk analyst and principal security researcher.

31. State-sponsored attacks and massive IoT device hacks

State-sponsored cyber attacks

The more steps we take towards computerizing our lives, the more room there is for cyber attacks from foreign governments, targeting everything from the economy to national defense. Recently reported Russian interference into the election process perfectly demonstrates how even democracy itself can be affected.

Massive hacks of IoT devices

The Internet of Things (IoT) is a rapidly growing cybersecurity trend. The number of IoT devices is set to outnumber the human population by 2020, and most of them are easily hackable! Taking into account how easy it is to hack most of these devices and how devastating IoT-powered DDoS attacks can be, we will see even more significant attacks and breaches in 2020.

Cryptojacking

With Bitcoin and other cryptocurrencies becoming a substitute for traditional money, and with cryptocurrency prices rapidly rising, many malicious actors have turned their attention to hacking popular websites to hijack people’s devices to mine cryptocurrency.

Businesses can prepare by revising their data security policies and investing more in cybersecurity protection. 2020 is the right time to start using AI-powered cybersecurity solutions. Although nothing can guarantee 100% protection, using such technologies can dramatically lower the chance of a data breach, no matter which industry you are in or how big your company is.

George Tatar, Founder and CEO, Akruto, Inc.

George founded Akruto, Inc. in 2010 to help customers keep their private information safe and readily available wherever they go. 

32. The lack of urgency and concern around data breaches

The lack of urgency and concern around data breaches continues to increase, with significant incidents only dominating news cycles for a few days or a week at most. Consumers have become entirely numb to security issues and having your credit card information stolen is expected, rather than surprising.

Looking ahead to the cybersecurity trends of 2020, the public will either continue to tune out current cyber threats or something significant will happen to wake people up to the issue and have them take security seriously. In addition to the general public becoming more aware in the wake of a significant event, companies will begin to make consumer education a more substantial part of their business model.

Neill Feather, President, SiteLock

Neill Feather is the president of SiteLock, the leading provider of website security solutions for business. At SiteLock, Neill leads the company’s approach to 360-degree domain security by providing industry analysis and utilizing rapidly evolving data sets related to security and hacking trends.



9 Social Media Security Best Practices To Prevent Data Breaches

Employees love to use social networks at work. Security awareness training on the dangers of social media is critical.

For example, an Instagram leak was discovered that let hackers scrape the emails, phone numbers, and other sensitive contact data of millions of user accounts.

Many high profile users were affected by the hack. While this only meant changing phone numbers or addresses for many, others were affected in a much more profound way.

This information became prime material for social engineering attacks on other personal and business related accounts.

What can be done to address social media security concerns in the workplace? Much.

The purpose of implementing a social media security strategy is to enable staff to do their job without compromising security.

Social media security tips and best practices

1. Appoint a Social Media Security Officer

Of course, a system administrator already has enough on their plate without adding constant worry about social media as well! Delegate the task of social media security to another employee.

They should check in on company social media accounts and make sure everyone is following security best practices. The social media security officer can also assist in educating employees on security issues and regularly test to make sure they retain what they’ve learned.

2. Limit Private Company Information On Social Networks

If the company goes on a retreat, you or others may be tempted to upload photos and posts about it on the company’s social media. Advertising that everyone is away may tip off hackers that now is the right time to attack the company’s network and/or servers.

For this reason, company vacations should not be mentioned on social media until everyone is back at work, so everyone can enjoy vacation time instead of panicking over a security breach. Save the vacation photo sharing for your return.

3. Train Employees on Social Media Security Best Practices

Employees need to be trained to keep personal information private. Sometimes the weakest link is the employees themselves, and malicious criminals know this. This is why sometimes the target isn’t the social media accounts, but the employees behind them.

Employees’ personal information is far from useless to attackers. It can be used to reset the passwords on not only their social media accounts but possibly company-related accounts as well. This is why it is vital that employees understand that under no circumstances should they give this information out to anyone.

Test employees regularly to make sure they know how to deal with phishing and scams. Put posters around workspace areas to remind them how to keep private information and data safe. Keep training employees regularly on social engineering techniques to keep the knowledge fresh in their minds.

4. Check Company Account Privacy Settings

Some social media platforms reset privacy settings every time the platform gets updated. Other times, someone may change a privacy setting by accident. Malware may even reach a company account undetected via an authorized user’s account and change the security settings.

Since you never know when a security setting may get changed, it is vital to check these settings regularly. If anything seems out of place, make sure that all settings are as they should be. A misplaced security setting can lead to much public embarrassment for the company, or worse, the company account may become compromised and hacked.

5. Stay Up To Date

Significant risks can be reduced by ensuring software is up to date. While it may be tempting to slack off on updates, in the long run it will save more time and money to keep company software updated regularly.

6. Safe Use of Social Media With Two Factor Authentication

The best strategy starts with password security. Always enable two-factor authentication.

Biometrics may help make the transition less painful. Facial recognition and fingerprint scanners have become common on many laptops and mobile devices. With the proper training, employees will be comfortable and may even find two-factor authentication easier than the old system of using static passwords.
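
For a concrete feel of app-based two-factor authentication, here is a minimal sketch using the pyotp library (our choice for illustration; the account name and issuer are placeholders):

```python
# Sketch: time-based one-time passwords (TOTP) as a second factor.
import pyotp

secret = pyotp.random_base32()   # generated once, stored server-side per user
totp = pyotp.TOTP(secret)

# Shown to the employee once (usually as a QR code) to enroll their app.
print(totp.provisioning_uri(name="employee@example.com",
                            issuer_name="CompanySocial"))

# At login, the static password alone is no longer enough.
entered = input("6-digit code: ")
print("access granted" if totp.verify(entered) else "access denied")
```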

7. Perform Security Audits on Company Accounts

  • Security settings — Have there been any recent platform updates that require the security settings to be changed?
  • User access — Do any users need their account access removed? Do any users need account access granted?
  • User publishing privileges — Do any users need their publishing privileges revoked? Do any users need publishing privileges granted?
  • Recent security threats — Are there any current security threats reported in the news that affect the company’s account? If so, have the company’s account and network been appropriately patched? Have malicious sites been blacklisted?

8. Secure All Devices

Mobile devices are typically the most insecure devices on any network.

Ensure all devices are protected. This includes implementing:

  • Anti-virus software: Everyone should be using anti-virus software that scans every application for malware as it is downloaded and installed, since malware can hijack social media accounts.
  • Firewall or VPN: Employees should be using a firewall or a secured VPN for both mobile and Wi-Fi access to stay protected against hacking attempts.
  • Encryption: Phone data should be encrypted so that, if the phone is stolen, the data is not compromised.
  • Secure passwords: Strong, secure passwords cannot be stressed enough when using social media. Every administrator knows how difficult it is to get employees to use unique, secure passwords. A company password manager can be a solution: a one-click way to create safe, unique, encrypted passwords (see the generator sketch after this list).
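
As a minimal sketch of what a password manager automates behind that one click, Python’s secrets module (cryptographically secure, unlike random) can generate a unique, high-entropy password per account:

```python
# Sketch: generate strong, unique passwords.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password of letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # one per account; never reuse across networks
```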

9. Social Media Management Platforms

Another way to make social media management easier is to use a management platform, such as Hootsuite or Buffer, that consolidates all the company accounts in one place.

Social Media Security Awareness Checklist

  • Start by developing a social media policy.
  • Don’t advertise company vacation time. Doing so can announce the right time to launch a cyber attack.
  • Be proactive with network security on all devices and networks. This includes cell phones, and it also means keeping social media off the company’s business network.
  • Use multi-factor authentication methods, so that if a password does get compromised, the user’s account stays secure.
  • Be Aware. Stay aware of current security vulnerabilities that are relevant to your company’s network and devices, and keep them well patched and secured against these vulnerabilities.
  • Teach employees about social media security threats with consistent training and security awareness programs.
  • Make sure employees learn how to identify phishing emails, and stay alert when clicking on email links.
  • Use social media management software to track company accounts.
  • Keep personal information private. Hackers are always looking for a way to get personal information that can open the door to gaining account access.

Mitigate Social Media Security Risks

As the Instagram hack taught us all, responsibility for securing company data rests squarely with the system administrator.

With all this taken into account, your company should be well protected against most social media vulnerabilities. The safest policy of all is that no social media should ever be used on the company’s business network.

Take control of your social media space today!



51 Best Cyber Security Blogs To Follow For 2020

This article was updated in 2019.

The world of internet security is continuously evolving.

To some, it may feel like an impossible task to keep up with everything relevant to the industry. You may have a well-curated Twitter feed or a few security industry blogs you trust and depend on. However, there is more out there.

We have put together a definitive list of 51 Internet Security Blogs to keep you and your business informed and on top of all the latest trends.

Cybersecurity Blogs You Should Be Following & Reading

1. Krebs on Security

In 2001, a Chinese hacking group took over Brian Krebs’s entire home network, sparking an intense and lasting interest in global security. Brian’s daily blog, KrebsOnSecurity.com, is dedicated to investigative stories and happenings within the computer security and cybercrime industry.

His most recent investigations cover data breaches at Facebook and hackers taking over online CPA accounts. You will also find many categories covering safety news and alerts, cyber-criminals, privacy breaches, and the latest global threats.

2. Schneier on Security

The man behind the blog Schneier on Security, Bruce Schneier, is the “go-to” expert when it comes to online threats and understanding the masterminds that create them. His specialty is internet security, alarms, hacking, and patching. Expect to find plenty of information on consumer safety as well as industrial safety tips on his blog.

3. Tao Security

Blogging since 2003, Richard Bejtlich will introduce you to a multitude of unique topics on defending Western interests from intruders. Tao Security, and therefore Richard, devotes his career to promoting Network Security Monitoring, detecting and responding to digital threats to help global organizations.

Thanks to his extensive background in malicious attacks on business networks and the cyber-criminal world, Richard’s specialty is Chinese online crime. His interest in this field stems from the vast number of network attacks that originate in China.

4. Graham Cluley

Graham Cluley, an independent computer security analyst and public speaker, started his blog to teach and instruct others on online safety. Cluley’s blog is no stranger to articles on tech giants and their breaches of data and trust, such as the latest Facebook controversy.

Cluley has a stellar reputation as a public speaker and an independent analyst. He has been in the business since the early ’90s, which is why his articles reach over 130k readers. Cluley played a definitive role in the early days of McAfee, Sophos, and Dr Solomon’s Anti-Virus Toolkit for Windows.

5. Troy Hunt

Troy Hunt’s security blog is unique in that he not only writes some of the most recognizable material on internet security, but also offers workshops and videos on his site. Troy’s specialty is online security and how to protect yourself from hacks.

There are many features of his site that others seem to miss. His articles and classes are informative, and his knowledge of cybersecurity is what makes him a leader in the industry. Be sure to check Troy’s social media accounts for the latest updates as well.

6. Security Affairs

There are many reasons why Security Affairs won the “2016 Best European Personal Security” blog award. This platform offers everything an IT security professional, or anyone interested in cybersecurity, needs: informative articles, updates, and information on cybercrime. Pierluigi Paganini is an ethical hacker, author, writer, and cybercrime analyst.

Cyberwarfare, data breaches, digital ID, and hacking are the core of the knowledge Paganini shares with his audience. However, you can find much more information surrounding intelligence, the deep web, cybercrime, and the Internet of Things discussed on this security blog.

Beyond the insights in his articles, you can find individual interviews with hackers and useful cybersecurity intel.

7. Architect Security

Architect Security is the blog of April C. Wright. April does more than speak and produce content; she teaches through her site. April is an author, teacher, community leader and hacker with over 25 years of breaking, fixing, making, and defending worldwide critical connections and communications.

On her website, you can find articles centered on Risk Management, hackers, personal privacy, and information safety. Her teaching is often related to the simple steps you can take that lead to a safer world. Writing about her experience as a hacker doing all she can for the greater good is what makes her blog unique.

8. Dark Reading Blog

Dark Reading is so much more than a blog: it is a multitude of IT specialists, coming together to make the cyber world a safer place. The community specializes in Vulnerabilities and Threats, Security Management and Analytics, Risk Management and Compliance.

This informative site is a one-stop shop for anyone interested in cybersecurity and all the areas in between. With its articles, videos, webinars, library, radio shows, and many contributors, Dark Reading is one of the most widely read cybersecurity news sites.

Founded in 2010, Dark Reading offers comprehensive news and information with a focus on IT security. The aim of its articles is to help IT and infosec professionals manage the balance between user access and data backup strategies.

9. Hacker Combat

The Hacker Combat community is a reliable source for learning about the latest developments in the cybersecurity world. Hear what security experts have to say and employ those tips in safeguarding your enterprises from various evolving IT security threats.

HackerCombat covers everything from IT security to hacking-related news, and also provides expert analysis and forums where anything related to IT security can be discussed. The community also serves as an ideal platform for promoting start-ups, organizes events, and helps everyday people as well as security geeks.

10. CSO Online

CSO (from IDG) covers a broad range of risk management and safety topics on their blog. Research, analysis, and news focusing on loss prevention, identity, and access management are the main features of the site. You can also expect to find current information on business continuity and cyber and information security.

11. PCMag’s Security Watch

It should be no surprise to anyone in the IT industry that PCMag offers one of the most informative and recognizable cybersecurity blogs. The site offers much more than Neil Rubenking’s wit and style; his articles also contain an astounding amount of knowledge.

PCMag is the leader in rigorous, lab-based comparative reviews of internet products, business technology, consumer electronics, and much more. On this cybersecurity blog, you will find sharp analysis and detailed reports on everything cyber-safety related.

12. Paul’s Security Weekly

Winner of six RSA Best Security Blogger awards and the Social Security Award for Best IT Security Podcast, Paul’s Security Weekly is about more than cocktails and live internet TV. You will find plenty of entertaining and engaging podcasts, interviews, hacking information, security news, and much more on his site.

Hacking is Paul Asadoorian’s primary interest, and covering security news and related topics on his blogs and podcast is how he does his part to help keep the internet safe.

13. Forbes

Forbes has been a leader in media for many years, so it is not surprising that the company’s security blog ranks high on our list.

The Forbes cybersecurity blog offers reports on the latest online vulnerabilities, real-time coverage of cybersecurity, reliable tools and contributions, and authoritative analysis.

14. SC Magazine

SC Media uses its blog and platform to arm information security professionals, offering technical, unbiased business information on current cyber situations. Such information is necessary to meet the continuous security challenges IT professionals face.

The team also provides multiple channels for risk management and compliance positions that reinforce overall business strategies. SC Magazine also offers automated testing results for mobile devices, web and cloud security, and email safety.

The blog provides data analysis to defeat immediate internet security and network threats, as well as pertinent technical information.

15. The Hacker News

The Hacker News delivers the latest in detailed coverage of current and future trends in the information technology industry. It also features posts on the online activity shaping cyber protection and on how cybercrime evolves.

THN’s specialty is technology, security news updates, and hacking. The THN blog is an award winner and a leading source for IT professionals.

16. Security Week

Security Week is one platform where multiple leaders in the cyber sector come together to offer their professional insight. Identity and Access, Incident Response, Risk Management, and Malware are the unique specialty topics of the site.

Mobile Security and Network Protection are also specialized categories the blog offers. You can expect news, insight, and analysis from the cyber sector experts that stay up-to-date with all the latest news and threats for IT professionals to follow.

17. The Last Watchdog

Byron V. Acohido is a Pulitzer Prize-winning journalist and the editor of The Last Watchdog, a leader in cybersecurity blogging.

Byron’s specialties are videos, articles, podcasts, and stellar coverage of complex security and privacy issues, distilled for an intelligent audience.

18. Privacy Paradox from Lawfare

Privacy Paradox from Lawfare is a blog centered on difficult national security choices. How far should we go for privacy? This blog covers privacy and the law: hardcore arguments between data protection and data collection, and the war that continues to rage over “the right to be left alone.”

19. The Register

The Register is the go-to blog for IT professionals for articles, news, and updates. The material on the site will appeal to database administrators, software engineers, sysadmins, network managers, and even CIOs. The blog covers the problems these professionals face on a daily basis, such as hardware, software, IT protection, and networking.

The site has over 9 million visitors each month and a significant following on social media.

20. Zero Day

Zero Day blog is the powerhouse destination for IT professionals wanting answers to business-related tech problems. With a steady flow of current events, there are always avenues to find new opportunities to learn.

ZDNet features include ongoing research, peer feedback, editorial analysis, and Webcasts. Other educating features are white papers, photo galleries, video, blogs and current security news daily surrounding the IT industry.

21. Help Net Security

Since 1988, Help Net Security has focused on information security. They are a team of online safety and protection consultants who explore a vast range of content and solve technical internet protection challenges, including management concerns and issues across every department of a business.

On the blog, you will find all the latest articles and information related to the IT industry, including security events, reviews, hot topics, and expert opinions. The team does not merely cater to people looking for breaking news.

Contributors to the site often include leaders within their respective industries, with both hands-on and technical experience. It is through their years of experience that each member provides the most seasoned advice to readers.

Divisions of the blog include malware, white papers, information, events, news, and newsletters.

22. Tech World

Techworld, located in London, is a privately held company with nearly 200 employees. The primary focus is on digital disruption: a space where entrepreneurship and innovation intersect with business technology.

The organization offers the latest views and news articles on the site, including:

    • Innovation
    • Startups
    • Developers
    • Disruptive technology
    • The Impact digital disruption has on society and UK businesses

Techworld provides an exceptional blend of features, analysis, and expert advice on apps, mobile, social media, and cloud technology. They offer information on wearable tech, e-commerce, IT startups, AI, and drones.

Techworld offers detailed information on a variety of topics on their blog. The team highlights the effects of 3rd platform technologies on a variety of industries which include, automotive, public sector, and travel. Other departments include retail and leisure, digital and creative, financial services, and much more.

Techworld has been a leader in the industry of business technology publishing for over fifteen years.
Since 2003, the Security section has dedicated itself to zero-day exploits, the latest malware threats, and analysis and tutorials.

23. IT Security Guru


IT Security Guru has been eating, sleeping and breathing IT security since 2012. The team’s goal is to make IT security exciting and digestible through its posts. The site offers a daily news digest of all the best and latest breaking stories that take place around the world in IT protection and safety.

They make it simple to find the latest events, saving you from having to search the web for the most recent news and happenings.
For the latest news headlines, IT Security Guru is the site to pay attention to.

24. Network Computing

Network Computing is the team that connects the dots between how technology affects a business, its network, applications, and architectural approach. With the worlds of storage, networking, and data colliding, Network Computing finds it no surprise that everything IT professionals need to know is changing.

Viewing IT operations through silos is no longer valid. The significant stakeholders, whether application groups, server groups, storage groups, or networking groups, must remain connected. Staying abreast of all things IT is a full-time job.

Here is where Network Computing comes into play. They take the approach of offering unembellished opinion and analysis from peers on the latest architectures and technologies. The team provides real-world understanding from professionals who are designing, implementing, and managing IT.

The content on the blog primarily focuses on enterprise infrastructure systems and cloud technology, discussing how to deliver services and applications in an expanding threat environment, alongside news, business coverage, and expert advice.

25. Infosecurity Magazine


Infosecurity is an infosec blog that dedicates itself to the technology and strategy of information protection and safety. The articles on the site deliver critical technical and business information for IT. Daily features and news are always updated and available.

Infosecurity remains dedicated to serving the Information protection community whether it be face-to-face, online or in print.

With more than a decade of experience, Infosecurity Magazine concentrates not only on security topics but also on valuable insights and strategy content on trending topics. Its educational approach is reason enough to follow the site.

26. Peerlyst

Peerlyst is a unique, data-driven, trusted peer network established for tech professionals. The blog’s success comes from members connecting with peers and discussing outcomes. The primary purpose is to help members make sounder purchase decisions about the products vital to their business needs.

Peerlyst saves tech management time and resources by making it simple to find and compare alternatives to security products. Peers also discuss online protection and safety concerns, ask questions, get breaking news, and debate the hottest hacking and safety reports.

Users can subscribe to ongoing updates from vendors covering over 5,000 industry products. The company brings the convenience of CNET, Yelp, and LinkedIn to IT professionals. Thanks to its unique information-extraction algorithms, Peerlyst can deliver quality interactive resources for free, along with orderly product comparisons and information.

The site also delivers peer-generated product reviews and ratings, covering user experience with product execution, documentation, support, ease of management, and performance.

27. Security Boulevard

SBN (the Security Bloggers Network) has over 300 member blogs and is still growing. For over a decade, the website has promoted and distributed many leading bloggers in the security industry. In addition to member content, Security Boulevard features original content from many of the most distinguished leaders and journalists in the industry.

On the site, you will find video and audio content featured via SBN’s chat and TV channels. There are over 5k members on their Facebook page, as well as a strong reputation on LinkedIn. On each media outlet, you will find a community that provides ample useful resources.

In addition to blog posts, you can find useful resources pertaining to threats and data breaches, current cybersecurity news, a library of safety and protection resources, and numerous educational resources.

28. Stay Safe Online

The National Cyber Security Alliance (NCSA) powers this blog to build secure public/private partnerships and execute broad-reaching awareness efforts. It is a reliable source that gives businesses the information they need to stay safe.

They provide information that helps keep users, their systems, their organizations, and their sensitive data secure online, and they offer knowledge that encourages a culture of cybersecurity. Their mission is not only to provide insight that keeps users safe, but also to share useful ideas and tips on privacy through fresh content.

Founded in 2001, the NCSA continues to expand and make great strides in empowering and supporting digital citizens to use the web safely and securely. The site offers numerous tools and features for users to protect themselves, their networks and computers, and the digital assets all users share.

29. Virus Bulletin


In 1989, Virus Bulletin hit the market as a magazine whose priority was to provide PC users with a general source of knowledge on viruses: how to prevent, detect, and remove them, and how to recover programs and data after an attack.

Virus Bulletin became the leading specialist publication covering the field of malware and viruses. Today, the site is a complete security information gateway and certification blog, offering users independent intelligence.

They use that intelligence to cover the latest developments in the global threat landscape. The team at Virus Bulletin also conducts bi-monthly certification of anti-spam and anti-malware products and runs a comprehensive IT security conference.

The site has the support of an Advisory Board comprising global anti-spam and anti-malware experts. VB’s prime concern has always been editorial independence; from the very first issue, their goal and accomplishment has been to cut through all the hype.

They remain uninfluenced by marketing babble and sales pitches to this day. What makes the blog successful is that it arms users with the information necessary to stay current and up-to-date. The latest developments in the anti-malware industry are always available.

30. Bleeping Computer

Bleeping Computer launched in February 2004 and hasn’t looked back since. It is an expanding free site with all types of posts, providing information on computer assistance, security, and technical questions asked by novice computer users. It offers a discussion forum and a free web-based community for anyone seeking help and knowledge. The group’s mission is to help you enjoy your computer instead of hating it.

Bleeping Computer is a fantastic platform for technical support that provides self-education tools. You would benefit from taking the time to read their forums, cybersecurity guides, and tutorials, not just the blog posts.

31. IT Security Blog

IT Security is a small but powerful independent organization with no connection to any publisher, doctrine, dogma, or vendor. The sole purpose of the blog is to offer and discuss security information in a new and challenging manner.

Each post is strictly the view and opinion of its author; IT Security is a channel for multiple viewpoints. Rather than government propaganda, unchallenged marketing, or recycled press releases, you get hard facts.

IT Security offers an evaluation of the problems behind the news, so that you, the reader, gain a better understanding of events and a broader perspective on cyber, national, and international affairs.
The site owners understand that not everyone will agree on the topics covered, so they offer readers a section to voice their opinions. There is no editorial section or editor; you will only find independent ideas and thoughts.

32. GBHackers on Security

GBHackers is an online cybersecurity platform that provides up-to-date information on the IT industry.

The mission of the blog is to build a secure community within the cyber world. GBH aims to secure, educate, update, and keep the digital community in a protected zone. The platform shares the latest in hacking tutorials, security updates, and hacking news, with the goal of remaining one step ahead of the future of cybersecurity.

33. BetaNews

BetaNews is one of the leading online sources of technology news and analysis. The blog maintains its reputation for original content as an online publishing leader, with a target audience of decision makers and IT professionals.

BetaNews takes pride in being among the most influential internet sources for current IT news and analysis. You can also find the group on Facebook and LinkedIn.

34. State of Security (Tripwire)


Tripwire, located in Portland, Oregon, began its journey in 1997. The company is privately owned, with 201-500 employees, and specializes in vulnerability management, compliance automation software, and infosec.

Tripwire is a leader in offering the following:

    • IT operations and compliance solutions
    • Security options
    • Solutions for industrial businesses
    • Solutions for government agencies and service providers

Tripwire bases its solutions on business context, profound endpoint intelligence, and high-reliability asset visibility. The articles posted on the blog are not aimed at beginners.

What you will learn from the posts is how, when combined, these solutions align and automate IT operations and security. The portfolio of enterprise-class solutions includes policy and configuration management, file integrity monitoring, log management, vulnerability management, and reporting and analytics.

On the Tripwire website, you will find information on security insights, trends, and security news.

35. Naked Security

Naked Security comes from Sophos, a public company with thousands of employees, located in Abingdon, Oxfordshire, and established in 1985.

Naked Security is cyber protection and safety made simple. With over 100 million active users in 150 countries depending on Sophos solutions, the blog must stay up to date, because users rely on the total online safety and protection solutions Sophos offers.

Those solutions are what make the group a leader in protection against data loss and sophisticated threats, and that is apparent in their articles. The team at Sophos began introducing encryption and antivirus products nearly three decades ago. SophosLabs, a worldwide network of threat intelligence centers, backs Naked Security.

36. F-Secure Safe & Savvy Blog

F-Secure is a powerhouse when it comes to cybersecurity. For over three decades, F-Secure has propelled innovations in internet security, defending a multitude of companies and millions of individuals along the way.

The group at F-Secure shields consumers and businesses through unrivaled experience and excellent protection, detection, and response, with thorough information on data breaches, widespread ransomware infections, and advanced cyber attacks.
Live Security is F-Secure’s singular approach, combining human expertise with the power of machine learning; that work and more takes place in their world-renowned security labs.

F-Secure’s security experts have contributed to more cybercrime scene investigations in Europe than any other business in the field. Their products are available through more than 200 mobile and broadband operators all over the world, plus thousands of resellers.

These reasons are what make their blog one of the most sought-after portals: incredible attention to detail in every post and outstanding delivery of information and products.

37. Hot for Security, Powered by Bitdefender

Hot for Security is powered by Bitdefender’s passion for being the most trusted cybersecurity tech provider globally. What that means is they are continuously anticipating, innovating, and going that extra mile to produce stellar information for their blog.

Bitdefender offers robust security information that you can always rely on. The company has been a trusted provider for all security needs since 2001. Its products protect half a billion users, and its blog offers a wealth of knowledge.

Bitdefender has won multiple awards for its excellent service to today’s digital users. The mission of the company is to deliver market-changing security technology to all users and companies globally.

They have a massive team of innovators developing state-of-the-art technology that dramatically improves the customer’s security experience. Bitdefender employs over 1,300 people, including a team of over 600 researchers and engineers.

It is easy to see why Bitdefender is in a league of its own as a global, innovative IT security vendor providing vital information all over the globe.

38. Malwarebytes Labs

Malwarebytes Labs is located in Santa Clara, CA, and made its way onto the digital scene in 2008. It is a privately held company with 500-1,000 employees.

It is no secret that Malwarebytes is a leader in trusted online safety and protection. The product gives organizations a final-stage defense and remediation capability against cybercriminals.

Their focus is to protect digital users against threats, ransomware, and malware that routinely escape detection by conventional antivirus solutions. A multitude of businesses relies on Malwarebytes as their official protection software.

In 2008, Marcin Kleczynski, founder and CEO of Malwarebytes, created his company to develop the best protection and disinfection solutions in the world, with the purpose of combating the most harmful Internet threats.

That initial step is where their award-winning information site began. Marcin wanted an outlet to communicate with users as well as offer crucial security information. On the blog, you will find articles that cover a wealth of information.

Marcin has received multiple awards for his excellent service and for products used around the world, including "CEO of the Year."

38. We Live Security

We Live Security is the blog of ESET, which began in 1992 when a group of friends founded a business with the purpose of protecting people's data. The same idea is still alive today. ESET is one of the pioneers of the IT security industry and the creator of the award-winning NOD32 proactive detection technology.

The company's headquarters remains in Bratislava, Slovak Republic, and it is a privately held business with 1,001-5,000 employees. Their online protection and safety solutions protect over 100 million digital users, including firms and consumers in over 180 countries. ESET aims to enable individuals to have and enjoy safer technology.

39. Threatpost

Threatpost is an excellent independent news site and blog covering business security and IT. Countless professionals rely on Threatpost for the latest information, as do leading media groups.

Such groups include MSNBC, USA Today, The Wall Street Journal, The New York Times, and National Public Radio. Media outlets such as these refer to Threatpost as an authoritative source for updates on information security.
The award-winning editorial team produces high-impact and unique content for the blog, including videos, news, feature reports, and more. They are known worldwide for breaking compelling original stories and offering expert opinion on critical news gathered from other sources.

The team at Threatpost also earns recognition for engaging readers in discussions of why and how these events matter.

40. Kaspersky Labs Secure List

Located in Moscow, Kaspersky Labs originated in 1997 and is a privately held company with 1,001-5,000 employees. The group's specialty is IT security software development. On their blog, you will find articles and information surrounding the IT security industry.
You will also find current and fresh information through their posts, which are kept up-to-date with topics, news, discussions, feedback, and much more.

When it comes to endpoint protection solutions, Kaspersky Labs is the largest privately held vendor in the world. The organization ranks among the top four vendors offering security for endpoint users and is a leader in providing information worldwide.
During its two-decade history, the company has remained a leading innovator in IT protection and safety. The group provides effective digital security solutions for large enterprises, consumers, and SMBs.

Kaspersky Lab is a United Kingdom registered entity. However, it operates in nearly 200 territories and countries across the globe. The group protects over 400 million worldwide users and has over 150,000 subscribers to their site.

41. Symantec Blogs

Symantec Corporation is a leader in cybersecurity around the world. The organization assists individuals, governments, and organizations in securing their most vital data wherever it exists.

Symantec Corporation is headquartered in Mountain View, CA, and began its journey in 1982. It is a public company with over 10,000 employees.

Companies around the world rely on Symantec for integrated solutions defending against high-level attacks across endpoints. Businesses and individuals also seek the company's help with infrastructure and cloud defense.

Over 50 million people rely on stellar products such as Norton and LifeLock to protect their digital lives and businesses across every device. You can expect to find articles about the latest online threats and solutions.

The featured stories, usually written by freelance security and technology journalists, cover topics from around the globe. Each examines a vast variety of security issues from numerous angles and perspectives.

42. ZoneAlarm Blog

The ZoneAlarm blog comes from one of the most recognized cyber safety vendors of security products, ZoneAlarm. The site offers crucial information on online security and malware defense to protect millions of devices, including PCs.

By utilizing the writers' experience with malware topics, the site publishes practical security tips, malware alerts, and the latest news and events in the IT field.

ZoneAlarm made its entrance into the digital security industry in 1997. Since then, the group has become a leader in solutions that protect millions. On the blog, you can expect to find articles covering various types of cyberattacks.

You will also have access to a world leader that provides state-of-the-art protection with award-winning products.

43. McAfee Security Blog

McAfee is one of the world's leading independent cybersecurity companies. The organization continues to be inspired by the power of people working together, and it develops solutions that make life safer worldwide.

The blog offers the latest techniques and tips for experts. The information through various posts helps keep you and your business up-to-date at all times. On the site, you will always find the most recent updates, malware trends, online threats and news of the online environment.

For the business world, McAfee aids in designing holistic cyber safety environments that work smarter, not harder.

44. Microsoft Secure Blog

Get the latest on how you can benefit from new approaches to infrastructure across the data center and the cloud. Microsoft’s mission with their blog is no different from their products: to empower every business and person around the globe.

New on the scene as of January 2018, the Microsoft Secure blog gathers all of Microsoft's security blogs on one website. On the site, you can expect to find masses of technical information, cybersecurity news, and updates for Azure, Windows, and Office 365.

You can also look forward to industry trends, cyber protection and safety guidance, product updates and much more. You will always find experts, engineers, and Windows Defender researchers coming together as a global team to deliver the most excellent information.

Microsoft aims to stay grounded in both its present and future products and education. The world today is cloud-first and mobile-first, and the team at Microsoft aims to transform their business to meet needs and demands globally.

As an organization that does business in 170 countries with over 114,000 employees, Microsoft needs to conform to the needs of the world. What's their end goal? To fulfill the mission of assisting business owners and their organizations to achieve more.

45. SpiderLabs by Trustwave

Trustwave is an organization that aids other firms with the information on its blog, covering topics such as protecting data, reducing security risk, and fighting cybercrime. With managed security services and a team of online safety and protection experts, you can expect stellar articles on their site.

Trustwave also teams with researchers and ethical hackers to help enable businesses. The goal is to offer an outlet that helps them transform the way they handle their compliance programs and information safety.

The researchers and investigators at Trustwave, via their SpiderLabs blog, provide the latest in technology news, updates, and critical malware alerts. You can also find articles on safety and the internet. The team gathers vital information through testing and research to ensure they offer the best advice and products.

They publish content and safety studies to combat cyber-criminals and online hackers. Trustwave's headquarters is in Chicago, and they have customers in 96 countries.

46. Dell SecureWorks

SecureWorks is a global cybersecurity leader that helps keep businesses safe throughout today's digitally connected world. The company is based in North America, started in 1999, and offers information on protection and safety services on its site.

In 2016, the company went public. On their Internet Security blog, you will find information and the latest news for users and IT professionals who want to remain in the know about malware attacks and online threats.

There are three divisions of topics you will find on the site: Fundamentals, Leadership, and Threats and Defenses. SecureWorks combines artificial intelligence with visibility across its thousands of clients to build a platform that provides useful insights.

You will find masses of information covering security breaches, malware activity in real time, and emerging threats on their blog.

47. Trend Micro Simple Security

Trend Micro Incorporated is a leader in global cloud security. They aim to establish a safer world for exchanging digital information via threat management and content security solutions. With over two decades of experience, they deliver top-ranked client, server, and cloud-based security news to their readers.

The writers provide information on how to prevent new threats and protect data in virtualized, physical, and cloud environments.

Their information website provides expert insight from over 1,000 experts in the industry.

The topics include:

    • Cybersecurity Industry News
    • Research and Analysis
    • Cloud Security and Data Safety
    • Benefits of Hypervisors
    • Privacy Protection
    • Threat Intelligence
    • Physical Security Blog Articles

48. ThreatTrack

ThreatTrack is one of the best information security websites, keeping the world up-to-date on the latest developments and innovations within the IT industry. That includes software vulnerabilities, cybercriminal activity, and security exploits.

On their site, you will find all the information your company needs through articles on all the topics discussed above.

49. Sucuri Security

Sucuri is a leading provider of web-based integrity monitoring, malware detection, and removal solutions. The company's web monitoring solution assists over 400,000 websites worldwide. Their job is to clean up your website.

In simple terms, if your site gets blacklisted, infected with malware, or hacked, Sucuri can fix the problem. The information security blog is run by the same company, and the two gentlemen who maintain it keep fresh, educated, and high-quality informative content on the blog continuously.

Sucuri is an excellent digital source for learning about web malware infections, emerging vulnerabilities, and website security.

50. AlienVault

AlienVault is the leader in providing unified community-powered threat intelligence and security management to the mid-market.

AlienVault's platform enables businesses with limited resources to succeed. The goal is to simplify and accelerate their ability to detect and respond to the expanding landscape of cyber-threats. AlienVault's blog consists of current and fresh news on cybersecurity, with expert advice on simplifying management and compliance and on emerging global threats.

51. CIO

CIO, from IDG, offers award-winning resources and content for IT leaders. The CIO portfolio specializes in giving business technology leaders insight and analysis on IT trends and a keen understanding of its role in achieving business goals.

The expertise of CIO covers security suite reviews, encryption, firewalls, ad blockers, spam blockers, and price comparisons of all the top-brand products. The blog offers videos, a digital magazine, and a newsletter.

Keep up with Online Cybersecurity Publications

These top cybersecurity blogs all focus on technology and tools for fighting cybercrime and making your online experience safe. Each offers something unique, but all include in-depth coverage and insight into the world of cyber dangers and protection.

Use this guide to direct your news monitoring and your hardware and software research to protect your company's security. Every day, there is the possibility of an attack or breach, so it is best to stay on top of the latest news and trends.

Take advantage of the knowledge and years of experience each of these blogs offers to stay on top of the happenings that matter to the modern information security professional.


How to Create Strong Passwords

9 Strong Password Ideas For Greater Protection

For your online accounts, passwords are the weakest point at any level of security. If someone accesses your credentials, your content and your vital information are at risk.

Although most websites today offer extra security protection, anyone who retrieves or guesses your password can easily bypass other security measures that most sites have in place.

That person can make any changes to your online accounts, make purchases, or otherwise manipulate your data. Always have your data backed up just in case.

Selecting a secure password is crucial because let’s face it, our entire life is now spent in the digital universe: social media, banking, email, shopping, and more.

Many people have the terrible habit of using the same passwords across multiple accounts. It may be easier to remember, but if there is a security vulnerability on one account, everything could be compromised.

important password ideas to keep hackers away

Passwords are Your Digital Keys

Your sign-on details are the digital keys to all your personal information and the best way to keep your company information safe. Make sure to keep your passwords away from third parties so that they stay private.

While many small-time cybercriminals attempt to hack into email accounts and social networks, they often have darker and more malicious goals. They’re usually after information from personal finances such as credit card details and bank account info, or business accounts to either directly line their pockets or attempt to extort an individual or business.

The two significant security risks are insecure password practices and shared accounts. This involves using the same password for personal and business apps, reusing passwords across multiple apps, sharing passwords with other employees, and storing passwords insecurely.

The point here is that a robust and secure password is all that could potentially stand between you and pesky cybercriminals.

How Can Your Password Be Compromised?

Outside of spyware and phishing attacks, there are numerous techniques that hackers use to crack your passwords.

One strategy hackers use to gain access is straight-up guessing your password. They can do this by looking at your security questions, your social media presence, or any other information about you that is available online. That is why it is vital not to include any personal information in your passwords.

Another tactic hackers use is to run a password cracker. Using brute force, a password cracker tries combination after combination until it breaks the password and gains access to the account. We've all seen this in the movies, but it's worth noting that this is not just a Hollywood special effect.

The less complex and shorter your password is, the faster the tool can produce the correct combination of characters. The longer and more complex your passwords are, the less likely the hacker is to use a brute-force technique, due to the extended amount of time it would take the software to figure them out.

Instead, they will put in place a method called a “dictionary attack.” Here is where a program will cycle through common words people use in passwords.
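
To put some numbers behind these two points, consider the raw arithmetic of a brute-force search. The Python sketch below is illustrative only: the guess rate is an assumed figure, and real attack speeds vary enormously with hardware and with how the passwords are stored.

    # Rough worst-case brute-force time for different password styles.
    GUESSES_PER_SECOND = 10_000_000_000  # assumed rate; purely illustrative

    def worst_case_years(charset_size: int, length: int) -> float:
        """Years needed to try every possible combination."""
        combinations = charset_size ** length
        return combinations / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)

    # 8 lowercase letters: exhausted in about 21 seconds at this rate.
    print(worst_case_years(26, 8))

    # 12 characters drawn from ~94 printable symbols: ~1.5 million years.
    print(worst_case_years(94, 12))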

Strong Passwords are one of the best ways to start

Strong Password Ideas and Tips with Great Examples

  • Make sure you use a minimum of ten characters. That is where it can get tricky. As previously noted, you should avoid using personal information or your pet's information; those are the first things hackers will try to exploit. In determining your password strength, pay close attention to two significant details: length and complexity. Long, complex passwords are tough to crack. To create complex but memorable passwords, use different types of characters: a mixture of lowercase and uppercase letters, symbols, and numbers.

  • Do not use directly identifiable information. Those trying to hack into your accounts may already know personal details such as your phone number, birthday, and address. They will use that information to more easily guess your password.

  • Use a unique password for each separate account. If you use the same password across multiple accounts, then even with the strongest password possible, once one account is compromised, all of them are. The recommended best practice is to create a list of strong password ideas and use a different one for each online account, keeping your list somewhere safe.

  • Avoid common dictionary words. This mistake is the toughest one to avoid. The temptation is always there to use ordinary, everyday dictionary words, and in fact the most common password used today is "password." Avoid plain dictionary words as well as combinations of words. For instance, "Home" is a bad password, and "Blue Home" isn't an improvement either. A capable hacker will have a dictionary-based tool that cracks this type of password. If you must use a single word, misspell it as best as you can or substitute numbers for letters. Better yet, use a word or phrase and mix it with shortcuts, nicknames, and acronyms. Shortcuts, abbreviations, and upper- and lowercase letters provide easy-to-remember but secure passwords.

For example:

    • “Pass Go and collect $200” – p@$$GOandCLCt$200
    • “Humpty Dumpty sat on a wall” – humTdumt$@t0nAwa11
    • “It is raining cats and dogs!” – 1tsrAIn1NGcts&DGS!

You can also incorporate emoticons; emoticons are the text format of emojis, commonly seen as various “faces.”

You may also find it easier to remember a sentence for your password if it refers to something familiar to you but complex for others, such as: “The first house I ever lived in was 601 Lake Street. Rent was $300 per month.” You could use “TfhIeliw601lS.Rw$3pm.” You took the first letters of each word and created a powerful password with 21 characters.

If you want to reuse a base password across numerous accounts, this technique is particularly useful, as it keeps them easy to remember. Even though, as already mentioned, you really should use separate passwords, you can customize each one per account. Using the same phrase as above, “Humpty Dumpty sat on a wall,” we created a secure and reliable password, and now you can customize it for your Amazon, Netflix, or Google accounts:

Here are good password examples using this technique.

    • AMZn+humTdumt$@t0nAwa11
    • humTdumt$@t0nAwa11@gOoGL
    • humTdumt$@t0nAwa114netFLX
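
If you would rather not invent passwords by hand, the rules above are easy to automate. Here is a minimal sketch using Python's standard secrets module; the 14-character default is just a sensible assumption, not a universal rule.

    import secrets
    import string

    def generate_password(length: int = 14) -> str:
        """Random password containing lower, upper, digit, and symbol."""
        if length < 10:
            raise ValueError("Use at least ten characters.")
        alphabet = string.ascii_letters + string.digits + string.punctuation
        while True:
            candidate = "".join(secrets.choice(alphabet) for _ in range(length))
            # Only accept candidates that include every character class.
            if (any(c.islower() for c in candidate)
                    and any(c.isupper() for c in candidate)
                    and any(c.isdigit() for c in candidate)
                    and any(c in string.punctuation for c in candidate)):
                return candidate

    print(generate_password())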

Weak Passwords to Avoid

Everyone is guilty of creating easy-to-guess passwords at some point in their digital life. You might feel confident that when you chose “3248575,” no one would figure out that it is your phone number. The examples below are weak passwords that at first appear strong, but once you look a little closer, you realize what is missing.

A brief explanation of what makes these bad choices follows each:

    • 5404464785: Numbers like these quickly reveal someone’s phone number. This strategy breaks two basic rules: it uses personal information and contains only numbers.
    • Marchl101977: The birthday password. Even though this password contains a combination of numbers with small and capital letters and is over ten characters long, it is a disaster waiting to happen. It breaks the rules by starting with a standard dictionary word, using personal information, and lacking special characters.
    • P@ssword234: At first, this password may appear to meet the basics. It has over ten characters, contains special characters and numbers, mixes the letters, and includes no personally identifiable information. However, it is still weak: replacing letters with symbols is easy to guess, and it ends in the standard sequential pattern “234.”

example of the most insecure passwords

What is Two-Factor Authentication?

“Multi-factor authentication” in the digital world is simply an extra layer of security. As common as it may seem in the technology industry, if you ask around, you will find that not everyone knows about two-factor authentication. What’s even more interesting is that many people who don’t understand the term may very well be using it every day.

As mentioned throughout this blog, standard cybersecurity solutions and procedures only require a username and password. With such simplicity, criminals score by the millions.

Two-factor authentication, also known as 2FA, TFA, or two-step verification, is a process that requires not just a username and password, but also something that only that user has on them.

That could be a document or piece of information only they should know or immediately have on hand, like a token of some type. Using this technique makes it difficult for cybercriminals to gain access and steal the identity or personal information of that person.

Many people do not understand this type of security and may not recognize it even though they use it on a daily basis. When you use a hardware token issued by your bank together with your card and PIN to complete internet banking transactions, you're using 2FA.

They are merely utilizing the benefits of multi-factor authentication: using something they have plus something they know. Putting this process to use can indeed help lower the number of identity theft cases on the web, as well as phishing via email, because it takes more than supplying mere name and password details. See our article on preventing ransomware for more information.
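
The most common flavor of 2FA, the rotating six-digit code, is surprisingly simple under the hood: the server and the user's device share a secret and derive the same code from the current time. Here is a minimal sketch using the third-party pyotp library (pip install pyotp); treat it as an illustration, not a production setup.

    import pyotp

    # The secret is generated once and shared with the user's
    # authenticator app, usually via a QR code.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Both sides derive the same 6-digit code from secret + current time.
    code = totp.now()
    print("Current code:", code)
    print("Server accepts it:", totp.verify(code))  # True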

There are downsides, however. New hardware tokens, which come from the bank in the form of card readers or key fobs and must be ordered, may slow business down while customers wait to gain access to their own private data through this authentication procedure.

The tokens are easily lost because they are small, which causes problems for everyone when a customer calls in requesting new ones. Tokenless authentication follows the same procedure, except there are no tokens involved. It is faster and less expensive to establish and maintain across numerous networks.

Managing Passwords the Easy Way

Implementing enterprise password management helps small and large businesses keep their information sound. No matter how many employees you have, they need help protecting the passwords that run your business and your private life.

A password manager helps you generate strong passwords and remembers each one for you. However, if you do choose this route, you will still need to create and remember one secure master password.

With the masses of websites for which you have accounts, there is no logical way to easily remember each one. Trying to remember every single password (and where you wrote it down) without duplicating one or resorting to an easy-to-read pattern is where the trouble starts.

Here is where password managers make life more comfortable, as long as you can create and remember one strong master password. The good news is, that is the last one you will need to worry about, no matter how many accounts you have.

The Truth about Browser-Based Managers

Web browsers – Safari, Firefox, Chrome, and others – each have integrated password managers.

No browser can compete with a dedicated solution. For one, Internet Explorer and Chrome store your passwords in an unencrypted form on your computer.

People can easily access password files and view them unless you encrypt your hard drive. Mozilla Firefox has a "master password" feature: with one single master password, you can encrypt your saved passwords, and the browser then stores them in an encrypted format on your computer.

However, the Firefox password manager is not the perfect solution, either. The interface does not help you generate random passwords, and it lacks various features such as cross-platform syncing.

There are three dedicated password management platforms that stand out above the rest. Each is a reliable option, and the one you choose will depend on what is most important to you.

The important part is remembering that you need to use genuinely random words for a secure passphrase. For example, "cat in the hat" would make a horrible passphrase because it is a common phrase and makes sense. "My beautiful red car" is horrible for the same reason.

However, something such as "correct kid donor housewife" or "Whitehorse staring sugar invisible" is an example of a randomized passphrase. The words make no sense together and are in no grammatically correct order, which is fantastic. Managers also allow users to store other data types in a secure form, everything from secure notes to credit card numbers.
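
As for generating such a random passphrase yourself, here is a minimal Python sketch. The tiny word list is only a placeholder; a real generator should draw from a large dictionary, such as the EFF Diceware lists.

    import secrets

    # Placeholder word list; use a dictionary of thousands of words in practice.
    WORDS = ["correct", "kid", "donor", "housewife", "whitehorse",
             "staring", "sugar", "invisible", "lantern", "gravel"]

    def passphrase(word_count: int = 4) -> str:
        """Pick cryptographically random words with no grammatical order."""
        return " ".join(secrets.choice(WORDS) for _ in range(word_count))

    print(passphrase())  # e.g. "gravel sugar kid lantern"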

In Closing, Stay Secure and Protected

We are experiencing times when passwords you can remember are not enough to keep yourself and your company safe. If you suspect criminal mischief with your account, immediately change your passwords.

Doing so only takes a minute, while restoring your personal life and your company's financial records and history after a breach can be devastating. Follow the steps listed above for selecting a strong, unique password to establish and maintain safe accounts, secure email, and personal information. If your password is easy to remember, it is probably not secure.


man at desk looking at Disaster Recovery Statistics

2020 Disaster Recovery Statistics That Will Shock Business Owners

This article was updated in December 2019.

Data loss can be chilling and has serious financial implications. Downtime can occur at any time, caused by something as small as an employee opening an infected email or as significant as a natural disaster.

Yet, 75% of small businesses have no disaster recovery plan in place.

We have compiled an interesting mix of disaster recovery statistics from a variety of sources, from technology companies to mainstream media. Think of a disaster recovery plan as a lifeboat for your business.

Hardware failure is the number one cause of data loss and/or downtime.

According to Dynamic Technologies, hardware failures cause 45% of total unplanned downtime, followed by the loss of power (35%), software failure (34%), data corruption (24%), external security breaches (23%), and accidental user error (20%).

17 more startling Disaster Recovery Facts & Stats

1. 93% of companies without Disaster Recovery who suffer a major data disaster are out of business within one year.

2. 96% of companies with a trusted backup and disaster recovery plan were able to survive ransomware attacks.

3. More than 50% of companies experienced a downtime event in the past five years that lasted longer than a full workday.

Recovering From A Disaster Is Expensive

When your business experiences downtime, there is a cost associated with that event. This dollar amount can be tough to pin down, as it includes direct expenses such as recovery labor and equipment replacement, but also indirect costs such as lost business opportunities.

The cost can be staggering:

4. Corero Network Security found that organizations spend up to $50,000 dealing with a denial of service attack. Preventing DDoS attacks is critical.

5. Estimates are that unplanned downtime can cost up to $17,244 per minute, with a low-end estimate of $926 per minute.

6. On average, businesses lose over $100,000 per ransomware incident due to downtime and recovery costs. (source: CNN)

7. 40-60% of small businesses that lose access to operational systems and data without a DR plan close their doors forever. Companies that can recover do so at a much higher cost and over a more extended timeframe than companies that had a formal backup and disaster recovery (BDR) plan in place.

8. 96% of businesses with a disaster recovery solution in place fully recover operations.

disaster recovery stat showing 90% of businesses will fail

Numbers Behind Security Breaches and Attacks

9. In a 2017 survey of 580 attendees of the Black Hat security conference in Las Vegas, more than half of the respondents revealed that their organizations had been the target of cyber attacks; 20% of those were ransomware attacks.

10. Two-thirds of the individuals responding to the survey believe that a significant security breach will occur at their organization in the next year.

11. More than 50% of businesses don't have the budget to recover from an attack.

The Human Element Of Data Loss

Cybercriminals often utilize a human-based method of bypassing security, such as increasingly-sophisticated phishing attacks.

12. Human error is the number one cause of security and data breaches, responsible for 52 percent of incidents.

13. Cybersecurity training for new employees is critical. Only 52% receive cybersecurity policy training once a year.

14. The painful reality is that malware can successfully bypass anti-spam email filters, which are mostly ineffective against a targeted malware attack. It was reported that in 2018, malware attacks increased by 25 percent.

man drawing an image of a cloud with the words disaster recovery

Evolving Security Threat Matrix

15. By 2021, cybercrimes will cost $6 trillion per year worldwide.

16. Cybersecurity spending is on the rise; reaching $96 billion in 2018.

17. Cryptojacking attacks are increasing by over 8000% as miners exploit the computing power of unsuspecting victims.

Don’t Become a Disaster Recovery Statistic

The good news is that with adequate planning, you can minimize the costs regarding time and lost sales that are associated with disaster recovery.

Backing up and securing your data and systems and having the capability to maintain business as usual in the face of a disaster is no longer a luxury, it is a necessity. Understanding how to put a disaster recovery plan in place is essential. Read our recent article on data breach statistics for 2020.


an employee securing a website from a hacker

Creating a Secure Website: Simple Guide to Website Security

Experts predicted that in 2019, business websites would fall prey to ransomware attacks at the rate of one site every 14 seconds.

In 2018, the damage to websites attacked by cybercriminals exceeded 5 billion dollars.

Every year, these attacks grow in size, and before you know it, it could be your website that is affected.

Why You Need To Keep Your Website Secure

Every website is potentially vulnerable to these attacks.

You need to keep yours safe. An unsecured site can be compromised and your customers' data stolen. This can lead to lost revenue, costly website coding repairs, and many other problems.

You can protect your website from hackers. We’ll start off with a few basic descriptions of the types of attacks that you might encounter. This is followed by the eleven tips to secure your website.

website security with a lock

Potential Web Attacks/What To Prepare For

Whaling / Spear-Phishing

Phishing attacks are used to trick people into giving away their personal information, such as a social security number or bank account PIN. These attacks aim at broad audiences in hopes of fooling as many people as they can. Typically, phishing is done by email.

For example, a hacker sends out an email that looks like it comes from a bank, causing the recipient to click on the link in a panic. That link takes the person to what looks like their standard banking site, but it is a site designed only to look like the real one. Someone who falls for the trick and fills out the form on that site accidentally gives away their information.

Spear-phishing is similar, but it targets one specific person, not a lot of people in general. Hackers choose a particular target and then try to get them to give away their sensitive information.

Whaling is similar to spear-phishing, except that in this case a critical executive at a company is targeted. That person is called a "whale" due to their influence and power. Hackers try to lure in whales, hoping to gain high-level access to company websites and bank accounts.

Server-Side Ransomware

Ransomware hits everyone from the average computer-user to those who operate websites.

These attacks consist of a hacker taking control of a computer and refusing to allow the user access to even the most basic commands. Server-side ransomware works similarly, except the hacker gains control of a website server. Access to every website on that server is lost until the hackers are overridden or have their demands met.

IoT Vulnerabilities

IoT stands for Internet of Things. The term refers to the large number of devices, such as smartphones and tablets, that connect to the internet and access sites.

The main IoT vulnerabilities are privacy issues, unreliable mobile interfaces, and inadequate mobile security. All of these stem from websites that don’t have the right protective measures installed or those that aren’t optimized for mobile devices. Hackers can take advantage of these issues and use them to gain access to your website.

Securing Your Website, The First Steps

Protecting your website from being hacked can be achieved with a simple 11-step process.

1. Use Secure Passwords

The best website security starts with a secure password. The backend (the developer side) of every website is password-protected. Although it's tempting to use an easy-to-remember password, don't.

Instead, pick something that is extremely secure and tough for anyone but you to figure out. A good rule of thumb for passwords is to include a mix of capital letters, punctuation, and numbers, or use a strong password created by a password manager. Never use something that is easy to guess. This goes for everyone in your organization.

2. Be Careful When Opening Emails

Many phishing attacks appear in emails. Hackers also send viruses via email. Every one of your employees (including you) needs to be careful when opening emails from people you don’t know, especially if those emails have an attachment. Spam guards aren’t infallible. A hacker can compromise website security with a virus, wreaking havoc on your website.

Even attachments that are scanned and declared to be “clean” can still contain harmful viruses. Train your employees to use security precautions when opening emails with attachments.

3. Install Software Updates

Manufacturers keep operating systems and software running efficiently with regular updates. It can be tempting to push those updates aside to save time. After all, many of them require a complete system restart and some installation time which eats into productivity. This is a dangerous practice, as those updates contain crucial new security patches. You need to install these updates as they are available to keep your entire system secure.

businessman sitting on a secure safe

4. Use a Secure Website Hosting Service

Your web hosting service plays a vital role in the security of every website under their jurisdiction. Choose yours wisely.

Before you build or move your site to a host, ask them about their security platform. The best hosts work with or hire experts in the internet security field. They understand the importance of ensuring that their customers' websites aren't vulnerable to attack.

Make sure they include a backup option. You could lose valuable information due to a hacker. It is easier to rebuild your site from a backup than it is from scratch.

Managed options are also available, such as Security as a Service (SECaaS).

5. An SSL Certificate Keeps Information Protected

The letters in "https" stand for Hypertext Transfer Protocol Secure. Any webpage that uses this protocol encrypts the traffic between the browser and the server, keeping it protected in transit. Any page that contains a login or asks for payment information needs to use this secure protocol. With that said, it is possible to set up your entire website using HTTPS.

Google has started marking sites that do not use SSL certificates or encrypt data as "not secure" in the Chrome browser.

credit cards being stolen online with phishing tactics

6. Secure Folder Permissions

Websites consist of folders and files that contain every piece of information necessary to make your site work properly. All of these live on your web server. Without the right privacy protections and security measures, anyone with the right skills can get in and see this information.

Prevent this from happening by assigning security permissions to those files and folders. Go to your website’s file manager and change the file attributes.

In the section for "numeric values," set the permissions to these options (a scripted version follows the list below):

  • 644 for individual files
  • 755 for directories
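
If your site has many files, setting these by hand is tedious. As promised above, here is a minimal Python sketch that walks a web root and applies the conventional permissions; the path is an assumption, so adjust it for your own host before running anything like this.

    import os

    WEB_ROOT = "/var/www/html"  # assumed path; change for your server

    for dirpath, dirnames, filenames in os.walk(WEB_ROOT):
        # 755: owner has full access, everyone else can read and traverse.
        os.chmod(dirpath, 0o755)
        for name in filenames:
            # 644: owner can read/write, everyone else is read-only.
            os.chmod(os.path.join(dirpath, name), 0o644)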

7. Run Regular Website Security Checks

A good security check can identify any potential issues with your website. Use a web monitoring service to automate this. You need to run a test on your site’s programming every week (at minimum). Monitoring services have programs that make this easy to do.

Once you receive the report, pay close attention to the findings. These are all of the vulnerabilities on your site. The report should contain details on them. It may even classify them according to threat level. Start with the most harmful and then fix these issues.

8. Update Website Platforms And Scripts

We already covered the importance of keeping your computer software up to date. The same is true of your web hosting platform and your plugins and scripts, such as JavaScript.

If you use WordPress, ensure that you are running the most updated version. If you are not, update your version by clicking on the button on the upper left side of the screen. It is imperative to keep a WordPress site current to avoid any potential threats.

For people who don't use WordPress, check your web host's dashboard for updates. Many of them will let you know which version of their software you're running and keep you informed of any security patches.

You also need to check your plugins and tools.

Most WordPress plugins are created by third-party companies (or individuals). Although they are safe for the most part, you are relying on those third parties to keep their security parameters up to date. Set aside time to check for plugin updates at least once a week, and keep an eye out for anything that may seem strange, such as a plugin that ceases to work correctly. This could be a sign that it's compromised.

important password ideas to keep hackers away

9. Install Security Plugins

There are several options here, depending on what type of website you run. For those based on WordPress, there are specific WordPress security plugins that provide additional protection. Examples include Bulletproof Security and iThemes Security. If your site is not on WordPress, protect it with a program like SiteLock.

Security plugins prevent hackers from infiltrating your site. Even the most up-to-date hosting platforms have some vulnerabilities. These plugins ensure that no one can take advantage of them.

SiteLock monitors your site continually looking for malware and viruses. It also closes those vulnerable loopholes, providing additional security updates.

10. Watch Out For XSS Attacks

XSS is cross-site scripting. An XSS attack is when a hacker inserts malicious code into your website, which can change its information or even steal user information. How do they get in? It’s as simple as adding some code in a blog comment.

Prevent XSS attacks by adding a CSP header to your website's responses. CSP stands for Content Security Policy. It limits which JavaScript can run on your website, keeping foreign and potentially contaminated scripts from executing. Set it so that only the JavaScript added to the page by you or your web developer runs.
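
How you attach the header depends on your server, but the idea is the same everywhere. As one hedged illustration, here is what it could look like in a small Python application built on the third-party Flask framework (pip install flask):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_csp_header(response):
        # Allow only scripts served from this site itself; inline scripts
        # and foreign scripts (e.g., one injected via a blog comment)
        # will be refused by the browser.
        response.headers["Content-Security-Policy"] = "script-src 'self'"
        return response

    @app.route("/")
    def index():
        return "Hello, protected page!"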

11. Beware of SQL Injection

SQL stands for Structured Query Language. It’s a type of code that manages and allows people to search for information in databases.

Here’s an example of an SQL attack: if you have a search form on your website, people can enter terms to look for specific information. Now imagine that, instead of a search term, someone submits code designed to mess up your database.

That code can delete information and make it tough for the website to find what it needs to run. Hackers get in through URL parameters and web form fields and wreak havoc. Keep this from happening by setting up parameterized queries and making sure to create secure forms.
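
To make "parameterized queries" concrete, here is a minimal Python sketch using the standard library's sqlite3 module; the table and the hostile input are invented for the demonstration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE articles (title TEXT)")
    conn.execute("INSERT INTO articles VALUES ('Securing your website')")

    # Hostile input, as it might arrive from a search form.
    search_term = "'; DROP TABLE articles; --"

    # The ? placeholder makes the driver treat the input strictly as data,
    # so the injected SQL is never executed.
    rows = conn.execute(
        "SELECT title FROM articles WHERE title LIKE ?",
        (f"%{search_term}%",),
    ).fetchall()
    print(rows)  # [] -- no match, and the table is still intact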

learn how to secure a website before ransomware hits

Now You Know How To Secure a Website from Hackers

Hopefully, now you understand the importance of creating a secure website. You also understand the eleven necessary steps to follow to prevent hackers from gaining access to its code and elements.

Leaving your website vulnerable to hackers can destroy your livelihood, especially if you run a web-based business. All it takes is one lapse, and years of client information can be compromised. This makes your company look bad and creates negative press attention. You'll lose customers, many of whom may not come back.

Don’t allow this scenario to happen. Instead, focus on website security using the tips presented here.


vcloud director work being conducted

How to Integrate Keycloak SSO with Duo 2FA Into vCloud

This article is a first-hand account of lab-based testing to configure Keycloak SSO with Duo 2FA into VMware’s vCloud Director. All testing and documentation was completed by phoenixNAP’s own Joe Benga. Joe is our trusted Enterprise Architect for cloud, infrastructure, and networking technologies.

I have grown fond of Keycloak as a product. I find it to be a strong Identity and Access Management (IAM) solution.

I wanted to be able to leverage Keycloak in my lab for VMware’s vCloud Director (vCD), and test Active Directory (AD) integration with two-factor authentication (2FA) support.

In this first-hand account, I list the steps I took in a lab environment to provide Security Assertion Markup Language (SAML) integration with Keycloak at an organizational level. Additionally, I explain how I provide Duo Security 2FA to the front-facing portal.

As always, these are just basic steps. Please keep your company's security and best practices in mind.

Keycloak

For this experiment, we are fortunate that Keycloak has a pre-built Docker image to jumpstart everything.

With a few simple commands, I was able to get a core product up and running on a CentOS 7 Virtual Machine (VM). At this point, I decided that instead of running everything standalone, I would back it with a Postgres container. For your experiment, you can decide what you need for your own lab, with different options provided in the info link below. Note how the shared network name and the credentials tie the commands below to each other.

For full info: https://hub.docker.com/r/jboss/keycloak/

Prerequisites: Docker CE, Firewall settings (if required)

    1. Create a shared user network for Database and Keycloak container
      docker network create keycloak-network
    2. Start the DB container (optional; I went with Postgres)
      docker run -d --name postgres --net keycloak-network -e POSTGRES_DB=keycloak -e POSTGRES_USER=keycloak -e POSTGRES_PASSWORD=password postgres
    3. Start Keycloak
      docker run -p 8080:8080 --name keycloak --net keycloak-network -e KEYCLOAK_USER=username -e KEYCLOAK_PASSWORD=password jboss/keycloak
    4. Once you have completed these steps, you should be able to log into Keycloak at the URL below (the username and password are the KEYCLOAK_USER and KEYCLOAK_PASSWORD values defined in step 3)
      http://ip_addr:8080/auth/admin/

Note: For production use, you will want to consider using a secure secret storage service for handling credentials passed to the container.

Keycloak vCloud Director Configuration

    1. Create a new realm based on your vCloud Org’s name
      Select the drop-down arrow next to Master and select Add Realm. Note: If you are not going to leverage SSL, then once your realm is created, navigate to the Login tab and set Require SSL to none
    2. While in Keycloak, create a local user to use as a test. (Later, we will leverage an Active Directory user.) Under our Realm in the left pane, navigate to Manage > Users. In the right pane, select Add user
    3. Enter the info for the user. Make sure to include an email address, as we will specify email as the Name ID Format in this walkthrough. Then select Save

add username in Keycloak

    4. Create a password for the user. Navigate to the Credentials tab. Enter and confirm the new password and deselect Temporary. Then click Reset Password
    5. Now that we have your realm and user, we will need to grab your org's metadata. Log into your org, navigate to Administration in the left pane, and under Settings > Federation select the Metadata link. Download the spring_saml_metadata.xml file. This file provides the certificate and config that can easily be imported into Keycloak's client setup

    6. In Keycloak, navigate to the left pane under Configure > Clients and select Create in the right pane
    7. Click on Select File and import the spring_saml_metadata.xml that was just downloaded. Select Save
    8. Navigate to the Installation tab, and in the Format Option select SAML Metadata IDPSSODescriptor, then copy or download the text that shows up in the dialog box

installation tab

    9. In vCD, navigate to Administration. Then in the left pane under Settings > Federation, select Use SAML Identity Provider and copy or upload the SAML Metadata IDPSSODescriptor info from above. Then click Apply. Note: Depending on how you are doing your network and setup, you may need to adjust the IP info in the metadata XML manually
    10. Import the Keycloak user. In the right pane under Members > Users, select the Import Users icon. Then enter the email address of the user we created above and select the desired vCD role. Note: During this step, I also include the Active Directory user email that I will be using later.

    11. Log into your vCloud Director org, and you will be redirected to the Keycloak realm login screen.

We now have our Identity Manager application providing authentication for our vCloud Director portal. Next, we will sync up with the Active Directory underlying this setup through a Duo proxy and show how to leverage 2FA. I will also show how to add OTP to any user to quickly leverage Google Authenticator.

Integrating Duo into Keycloak

Prerequisites:

    • Active Directory already running
    • Duo Auth Proxy Running (https://duo.com/docs/authproxy-overview) and connected to Active Directory

Note: My Duo Proxy Configuration

DUO Proxy Configuration

    1. Under our Realm in the left pane, navigate to Configure > User Federation. In the right pane, add an LDAP provider.
    2. Set your Edit Mode and Vendor: Active Directory (this is due to my Duo proxy connecting to a Windows 2012 AD). Then set the Connection URL, User DN, Auth Type, Bind DN, and Bind Credentials. Set Connection Search Scope: Subtree (unless you are working at the same level) and Cache Policy: No Cache (we want the request to hit the proxy for every request)

    3. Select Save, and then select Synchronize All Users
    4. If you didn't pre-add your Duo user, add it now by repeating the vCD user-import step above
    5. In the vCD login screen, use the email of the backing AD account that is leveraging Duo. Please note that the login screen will not display any prompt that it is awaiting 2FA approval. The screen will just appear stuck until you approve the login on your 2FA device.

Adding One Time Password (OTP)

Use this process to add an OTP quickly.

    1. Under our Realm in the left pane, navigate to Configure > Authentication. Select the OTP Policy tab and configure your settings. Note: you may need to adjust your look-ahead window to accommodate mismatched time settings on your servers; a small illustration of this window follows below.

configuration authentication
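
To make the look-ahead note concrete, here is a small sketch of how a TOTP check can tolerate clock drift, using the third-party pyotp library as a stand-in. Keycloak's own setting lives in the OTP Policy screen above; this is purely an illustration of the concept.

    import pyotp

    totp = pyotp.TOTP(pyotp.random_base32())
    code = totp.now()

    print(totp.verify(code))                  # exact 30-second time step
    print(totp.verify(code, valid_window=1))  # also accepts +/- one step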

    2. In the left pane, navigate to Manage > Users and select the local test user we created earlier.
    3. Under the Details tab, in the Required User Actions config, select Configure OTP, and select Save

    4. Log into vCD with this user. Before we can access vCloud Director, we need to set up a 2FA authenticator. This can be done from the screen shown below.

mobile authenticator setup

    5. You can leverage any OTP app, such as Duo or Google Authenticator. Once you verify, you will be directed to your vCD org.

The next time you log in, you will be presented with the following prompt:

blogtest log in screen

Enter your code from the app, and you will be good to go.


man not watching his Cloud applications and services

What Is Cloud Monitoring? Benefits and Best Practices

Cloud monitoring is a suite of tools and processes that reviews and monitors cloud computing resources for optimal workflow.

Manual or automated monitoring and management techniques ensure the availability and performance of websites, servers, applications, and other cloud infrastructure. Monitoring continually evaluates resource levels, server response times, speed, and availability, and predicts potential vulnerabilities and future issues before they arise.

Cloud Monitoring Strategy As An Expansion of Infrastructure

Web servers and networks have continued to become more complicated. Companies found themselves needing a better way to monitor their resources.

Cloud monitoring tools were developed to keep track of things like hard drive usage, switch and router efficiency, and processor/RAM performance. These tools are excellent at surfacing hardware problems and vulnerabilities, but many of them fall short of the needs of cloud computing.

Another similar toolset, often used by network administrators, is configuration management. This includes user controls like group policies and security protocols such as firewalls and two-factor authentication. These work from a preconfigured system, which is built on anticipated use and threats. However, when a problem occurs, they can be slow to respond: the issue must first be detected, the policy adjusted, and then the change implemented. The delay of manually logging and reviewing can bog this process down even further.

A cloud monitor uses the advantages of virtualization to overcome many of these challenges. Most cloud functions run as software in constructed virtual environments. Because of this, monitoring and management applications can be built into the fabric of that environment, including cloud resource management and security.

Cloud Monitoring service

The Structure of Cloud Monitoring Solutions

Consider the growing range of as-a-service offerings: Software, Platform, and Infrastructure (SaaS, PaaS, and IaaS). Each of these services runs in a virtual server space in the cloud. For example, Security as a Service lives in a hosted cloud space in a data center, and users connect to it remotely over the internet. In the case of cloud platform services, an entire virtual server is created in the cloud. A virtual server might span several real-world servers and hard drives, but it can host hundreds of individual virtual computers for users to connect to.

As these services exist in a secure environment, there is a layer of insulation between the real-world monitoring and cloud-based monitoring.

Just as a network monitoring application is capable of being installed on a Local Area Network (LAN) to watch network traffic, monitoring software can be deployed within the cloud environment. Instead of examining hard drives or network switches, monitoring apps in the cloud track resources across multiple devices and locations.

One important feature of cloud server monitoring is that it provides more access and reporting ability than traditional infrastructure monitors.

diagram showing What Is Cloud Monitoring

Types of Cloud-Based Monitoring of Servers & Their Benefits

Website Monitoring: A website is a set of files stored on a computer, which in turn sends those files to other computers over a network.

The host can be a local computer on your network, or remotely hosted by a cloud services provider. Some of the essential metrics for website monitoring include traffic, availability, and resource usage. For managing a website as a business asset, other parameters include user experience, search availability, and time on page. There are several ways this monitoring can be implemented and acted on. A monitoring solution that tracks visitors might indicate that the “time on page” metric is low, suggesting a need for more useful content. A sudden spike in traffic could mean a cyber attack. Having this data available in real-time helps a business adjust its strategy to serve customer needs better.
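
Even a very small script can cover the availability and response-time side of this. Here is a minimal sketch using only the Python standard library; the URL is a placeholder for your own site:

    import time
    import urllib.request

    URL = "https://example.com/"  # placeholder; point this at your site

    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            elapsed = time.monotonic() - start
            print(f"{URL} -> HTTP {resp.status} in {elapsed:.2f}s")
    except Exception as exc:
        print(f"{URL} is DOWN: {exc}")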

Virtual Machine Monitoring: A virtual machine is a simulation of a computer within a computer. This is often scaled out in Infrastructure as a Service (IaaS), where a virtual server hosts several virtual desktops for users to connect to. A monitoring application can track users and traffic, as well as infrastructure and the status of each machine. This offers the benefits of traditional IT infrastructure monitoring, with the added benefits of cloud monitoring solutions. From a management perspective, tracking employee productivity and resource allocation can be important metrics for virtual machines.

Database Monitoring:  Many cloud applications rely on databases, such as the popular SQL server database. In addition to the previous benefits, a database monitor can also track queries and data integrity. It can also help to monitor connections to the database to show real-time usage data. Tracking database access requests can also help improve security.  For example, resource usage and responsiveness can show if there’s a need for upgraded equipment. Even a simple uptime detector can be useful if your database has a history of instability. Knowing the precise moment a database goes down can improve resolution response time.
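
A bare-bones uptime detector can be as simple as checking whether the database port accepts a TCP connection. In the Python sketch below, the host and port are placeholders for your environment (5432 is the usual PostgreSQL port):

    import socket

    DB_HOST, DB_PORT = "db.example.internal", 5432  # placeholders

    try:
        socket.create_connection((DB_HOST, DB_PORT), timeout=3).close()
        print("Database port is reachable.")
    except OSError as exc:
        print(f"Database appears down: {exc}")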

Virtual Network Monitoring: This technology creates software versions of network tech, such as routers, firewalls, and load balancers. As they are built in software, integrated monitoring tools can give you a wealth of data about their operation. For example, if one virtual router is continuously overwhelmed with traffic, the network can be adjusted to compensate. Instead of replacing hardware, virtualization infrastructure easily adapts to optimize the flow of data. Also, monitoring tools analyze user behavior to detect and resolve intrusions or inefficiencies.

Cloud Storage Monitoring: Secure cloud storage combines multiple storage devices into a single virtual storage space.

A cloud storage monitor tracks multiple analytics simultaneously. More than that, cloud storage is often used to host SaaS and IaaS solutions, where monitoring can be configured to track performance metrics, processes, users, databases, and available storage. This data is used to focus on features that users find helpful or to fix bugs that disrupt functionality.

company meeting to plan IT strategy

Best Practices For Monitoring

Decide what metrics are most critical. There are many customizable cloud monitoring solutions. Take an inventory of the assets you are using. Then map out the data you would like to collect. This helps to make informed decisions about which cloud monitoring software best fits your needs. It also gives you an advantage when moving to implement a monitoring plan. For example, an application developer might want to know which features are used the most, or the least. As they update, they may scrap features that aren’t popular in favor of features that are. Or, they may use application performance monitoring to make sure they have a good user experience.

Automate the monitoring. One compelling feature is scripting: monitoring and reporting can be scripted to run automatically. Since cloud functions are virtual, it's easy to weave monitoring software into the fabric of the cloud application. Even logging and red-flag events can be automated to send a notice when problems are detected. For example, an email notification might be sent if unauthorized access is detected or if resource usage exceeds a threshold.
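
As a hedged illustration of such a scripted check, here is a minimal Python sketch using the third-party psutil library (pip install psutil). The thresholds are assumptions, and in practice the alert would go to email or a chat channel rather than standard output.

    import psutil

    DISK_LIMIT = 90.0  # percent; illustrative threshold
    CPU_LIMIT = 85.0   # percent; illustrative threshold

    disk = psutil.disk_usage("/").percent
    cpu = psutil.cpu_percent(interval=1)

    if disk > DISK_LIMIT:
        print(f"ALERT: disk usage at {disk:.0f}%")
    if cpu > CPU_LIMIT:
        print(f"ALERT: CPU usage at {cpu:.0f}%")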

Consider the security of cloud-based applications. Many users believe their data is less secure on a remote cloud server than on a local device. While it is true that data centers present a tempting target for hackers, they also have better resources. Modern data centers invest in top-tier security technology and personnel, an advantage few end users can match. With that said, it’s still crucial for cloud users to be mindful of cloud security.

While data centers offer protection for the hardware and infrastructure, it’s important to exercise good end-user security habits. Proper data security protocols like two-factor authentication and strong firewalls are a good start. Monitoring can supplement that first line of defense by tracking usage within the virtual space. This helps detect vulnerabilities by reporting habits that might create security gaps. It also helps by recognizing unusual behavior patterns, which can help identify and contain a data breach.


Final Thoughts: Cloud-Based Monitoring

Because cloud computing is virtual by nature, the infrastructure for cloud monitoring applications is already in place. For a reasonable up-front investment of time and money, monitoring applications can deliver a wealth of actionable data. This data gives businesses insight into which digital strategies are more effective than others. It can identify costly and ineffective services as well.

It is worth looking at application monitoring to report on how your cloud resources are being used. There may be room for improvement.



What is High Availability Architecture? Why is it Important?

Achieving business continuity is a primary concern for modern organizations. Downtime can cause significant financial impact and, in some cases, irrecoverable data loss.

The solution to avoiding service disruption and unplanned downtime is employing a high availability architecture.

Because nearly every business depends on the Internet, every minute of downtime counts. That is why company computers and servers must stay operational at all times.

Whether you choose to house your own IT infrastructure or opt for a hosted solution in a data center, high availability must be the first thing to consider when setting up your IT environment.

High Availability Definition

A highly available architecture involves multiple components working together to ensure uninterrupted service during a specific period. This also includes the response time to users’ requests. Namely, available systems have to be not only online, but also responsive.

Implementing a cloud computing architecture that enables this is key to ensuring the continuous operation of critical applications and services. They stay online and responsive even when various component failures occur or when a system is under high stress.

Highly available systems include the capability to recover from unexpected events in the shortest time possible. By moving the processes to backup components, these systems minimize downtime or eliminate it. This usually requires constant maintenance, monitoring, and initial in-depth tests to confirm that there are no weak points.

High availability environments include complex server clusters with system software for continuous monitoring of the system’s performance. The top priority is to avoid unplanned equipment downtime. If a piece of hardware fails, it must not cause a complete halt of service during the production time.

Staying operational without interruptions is especially crucial for large organizations. In such settings, a few lost minutes can lead to a loss of reputation, customers, and thousands of dollars. Highly available computer systems tolerate glitches as long as the level of usability does not impact business operations.

A highly available infrastructure has the following traits:

  • Hardware redundancy
  • Software and application redundancy
  • Data redundancy
  • No single points of failure


How To Calculate High Availability Uptime Percentage?

Availability is measured by how much time a specific system stays fully operational during a particular period, usually a year.

It is expressed as a percentage. Note that uptime is not necessarily the same as availability. A system may be up and running, but not available to users, due to network or load balancing issues, for example.

Uptime is usually expressed using a grading system of “nines” of availability.

If you decide to go for a hosted solution, this will be defined in the Service Level Agreement (SLA). A grade of “one nine” means that the guaranteed availability is 90%. Today, most organizations and businesses require at least “three nines,” i.e., 99.9% availability.

Businesses have different availability needs. Those that need to remain operational around the clock throughout the year will aim for “five nines,” or 99.999% uptime. It may seem like 0.1% does not make that much of a difference. However, when you convert this to hours and minutes, the numbers are significant.

Refer to the table of nines to see the maximum downtime per year every grade involves:

Availability Level | Maximum Downtime per Year | Downtime per Day
One Nine: 90% | 36.5 days | 2.4 hours
Two Nines: 99% | 3.65 days | 14.4 minutes
Three Nines: 99.9% | 8.76 hours | 86.4 seconds
Four Nines: 99.99% | 52.6 minutes | 8.6 seconds
Five Nines: 99.999% | 5.25 minutes | 0.86 seconds
Six Nines: 99.9999% | 31.5 seconds | 86.4 milliseconds

As the table shows, the difference between 99% and 99.9% is substantial.

Note that at the lower grades, downtime is measured in days per year, not hours or minutes. The higher you go on the availability scale, the more the service will cost.

How to calculate downtime? It is essential to measure downtime for every component that may affect the proper functioning of part of the system, or the system as a whole. Scheduled system maintenance must be a part of the availability measurements. Such planned downtime also halts your business, so factor it in when setting up your IT environment.
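The arithmetic itself is straightforward: subtract the availability grade from 100% and apply the remainder to the period you care about. Below is a minimal Python sketch of that calculation, reproducing a couple of rows from the table of nines above:

```python
# Turning an availability percentage into maximum allowed downtime.
SECONDS_PER_YEAR = 365 * 24 * 3600
SECONDS_PER_DAY = 24 * 3600

def max_downtime(availability_percent):
    """Return (seconds per year, seconds per day) of allowed downtime."""
    unavailable = 1 - availability_percent / 100
    return unavailable * SECONDS_PER_YEAR, unavailable * SECONDS_PER_DAY

for label, pct in [("Three Nines", 99.9), ("Five Nines", 99.999)]:
    per_year, per_day = max_downtime(pct)
    print(f"{label} ({pct}%): {per_year / 60:.1f} min/year, {per_day:.2f} s/day")
```

Running this prints 525.6 minutes (8.76 hours) per year for three nines and about 5.3 minutes per year for five nines, matching the table.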

As you can tell, 100% availability level does not appear in the table.

Simply put, no system is entirely failsafe. Additionally, the switch to backup components always takes some amount of time, be it milliseconds, minutes, or hours.

How to Achieve High Availability


Businesses looking to implement high availability solutions need to understand the multiple components and requirements necessary for a system to qualify as highly available. To ensure business continuity and operability, critical applications and services need to run around the clock. Best practices for achieving high availability involve certain conditions that need to be met. Here are four steps to achieving 99.999% reliability and uptime.

1. Eliminate Single Points of Failure

The critical element of high availability systems is eliminating single points of failure by achieving redundancy on all levels. No matter if there is a natural disaster, a hardware or power failure, IT infrastructures must have backup components to replace the failed system.

There are different levels of component redundancy. The most common of them are:

  • The N+1 model includes the amount of equipment (referred to as ‘N’) needed to keep the system up, plus one independent backup for each component in case a failure occurs. An example would be an additional power supply for an application server, but this can be any other IT component. This model is usually active/passive: backup components are on standby, waiting to take over when a failure happens. N+1 redundancy can also be active/active, in which case backup components work even when the primary components function correctly. Note that the N+1 model is not a fully redundant system.
  • The N+2 model is similar to N+1. The difference is that the system can withstand the simultaneous failure of two identical components. This should be enough to keep most organizations up and running in the high nines.
  • The 2N model contains double the amount of every individual component necessary to run the system. The advantage of this model is that you do not have to take into consideration whether there was a failure of a single component or the whole system. You can move the operations entirely to the backup components.
  • The 2N+1 model provides the same level of availability and redundancy as 2N with the addition of another component for improved protection.

The ultimate redundancy is achieved through geographic redundancy.

That is the only protection against natural disasters and other events that cause a complete outage. In this case, servers are distributed over multiple locations in different areas.

The sites should be placed in separate cities, countries, or even continents. That way, they are entirely independent. If a catastrophic failure happens in one location, another would be able to pick up and keep the business running.

This type of redundancy tends to be extremely costly. The wisest decision is to go for a hosted solution from one of the providers with data centers located around the globe.

Next to power outages, network failures represent one of the most common causes of business downtime.

For that reason, the network must be designed to stay up 24/7/365. To get as close as possible to 100% network uptime, there have to be alternate network paths, each with redundant enterprise-grade switches and routers.

2. Data Backup and Recovery

Data safety is one of the biggest concerns for every business. A high availability system must have sound data protection and disaster recovery plans.

An absolute must is to have proper backups. Another critical capability is recovering quickly in case of data loss, corruption, or complete storage failure. If your business requires low recovery time objectives (RTOs) and recovery point objectives (RPOs) and you cannot afford to lose data, the best option to consider is data replication. There are many backup plans to choose from, depending on your business size, requirements, and budget.

Data backup and replication go hand in hand with IT high availability. Both should be carefully planned. Creating full backups on a redundant infrastructure is vital for ensuring data resilience and must not be overlooked.

3. Automatic Failover with Failure Detection

In a highly available, redundant IT infrastructure, the system needs to instantly redirect requests to a backup system in case of a failure. This is called failover. Early failure detections are essential for improving failover times and ensuring maximum systems availability.

One of the software solutions we recommend for high availability is Carbonite Availability. It is suitable for any infrastructure, whether it is virtual or physical.

For fast and flexible cloud-based infrastructure failover and failback, you can turn to Cloud Replication for Veeam. The failover process applies to either a whole system or any of its parts that may fail. Whenever a component fails or a web server stops responding, failover must be seamless and occur in real-time.

The process looks like this (a minimal code sketch follows the steps):

  1. There is Machine 1 with its clone Machine 2, usually referred to as Hot Spare.
  2. Machine 2 continually monitors the status of Machine 1 for any issues.
  3. Machine 1 encounters an issue. It fails or shuts down due to any number of reasons.
  4. Machine 2 automatically comes online. Every request is now routed to Machine 2 instead of Machine 1. This happens without any impact to end users. They are not even aware there are any issues with Machine 1.
  5. When the issue with the failed component is fixed, Machine 1 and Machine 2 resume their initial roles.
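As a rough illustration of steps 1 through 5, the Python sketch below has a watcher poll the primary’s health and switch routing to the hot spare on failure, then fail back once the primary recovers. The hostnames and interval are placeholders; real failover systems also replicate state and guard against split-brain scenarios.

```python
# Minimal hot-spare failover loop: a watcher polls Machine 1 and routes
# traffic to Machine 2 while the primary is down. Hostnames are placeholders.
import socket
import time

PRIMARY = ("machine1.example.com", 80)    # Machine 1
HOT_SPARE = ("machine2.example.com", 80)  # Machine 2, the clone
active = PRIMARY

def healthy(addr, timeout=3):
    """A health check: can we open a TCP connection to the machine?"""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

while True:
    if active == PRIMARY and not healthy(PRIMARY):
        active = HOT_SPARE   # step 4: requests now go to Machine 2
        print("Primary down, failing over to hot spare")
    elif active == HOT_SPARE and healthy(PRIMARY):
        active = PRIMARY     # step 5: roles are restored after the fix
        print("Primary restored, failing back")
    time.sleep(5)
```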

The duration of the failover process depends on how complicated the system is. In many cases, it will take a couple of minutes. However, it can also take several hours.

Planning for high availability must be based on all these considerations to deliver the best results. Each system component needs to be in line with the ultimate goal of achieving 99.999 percent availability and improving failover times.

4. Load Balancing

A load balancer can be a hardware device or a software solution. Its purpose is to distribute applications or network traffic across multiple servers and components. The goal is to improve overall operational performance and reliability.

It optimizes the use of computing and network resources by efficiently managing loads and continuously monitoring the health of the backend servers.

How does a load balancer decide which server to select?

Many different methods can be used to distribute load across a server pool. Choosing one for your workloads depends on multiple factors, including the type of application being served, the status of the network, the health of the backend servers, and the volume of incoming requests.

Some of the most common load balancing algorithms are listed below; a short code sketch of each follows the list:

  • Round Robin. With Round Robin, the load balancer directs requests to the first server in line, moves down the list to the last one, and then starts from the beginning again. This method is easy to implement and widely used. However, it does not take into account that servers may have different hardware configurations, so some may become overloaded faster than others.
  • Least Connection. In this case, the load balancer selects the server with the fewest active connections. When a request comes in, the load balancer does not assign it to the next server on the list, as with Round Robin; instead, it looks for the server with the fewest current connections. The least connection method is especially useful for avoiding overloaded web servers when sessions last a long time.
  • Source IP hash. This algorithm determines which server to select according to the source IP address of the request. The load balancer creates a unique hash key from the source and destination IP addresses, which lets it consistently direct a given user’s requests to the same server.
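To make the three methods concrete, here is a minimal Python sketch of each selection rule. The backend addresses are hypothetical, the connection counts would be maintained by a real balancer, and the IP-hash variant hashes only the source address for brevity.

```python
# Sketches of the three load balancing selection methods described above.
import hashlib
from itertools import cycle

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical backend pool

# Round Robin: hand requests to servers in a fixed rotation.
_rotation = cycle(SERVERS)

def pick_round_robin():
    return next(_rotation)

# Least Connection: pick the server with the fewest active connections.
active_connections = {server: 0 for server in SERVERS}

def pick_least_connection():
    return min(active_connections, key=active_connections.get)

# Source IP hash: the same client IP always maps to the same server.
def pick_ip_hash(source_ip):
    digest = hashlib.sha256(source_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(pick_round_robin())           # 10.0.0.1
print(pick_least_connection())      # 10.0.0.1 (all counts equal here)
print(pick_ip_hash("203.0.113.7"))  # deterministic per client IP
```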

Load balancers indeed play a prominent role in achieving a highly available infrastructure. However, merely having a load balancer does not mean that you have a high system availability.

If a configuration with a load balancer only routes the traffic to decrease the load on a single machine, that does not make a system highly available.

By implementing redundancy for the load balancer itself, you can eliminate it as a single point of failure.


In Closing: Implement High Availability Architecture

No matter what size and type of business you run, any kind of service downtime can be costly without a cloud disaster recovery solution.

Even worse, it can permanently damage your reputation. By applying the best practices listed above, you can reduce the risk of losing your data and minimize the chance of production environment issues.

Your chances of being offline are higher without a high availability system.

From that perspective, the cost of downtime dramatically surpasses the cost of a well-designed IT infrastructure. In recent years, hosted and cloud computing solutions have become more popular than in-house ones, mainly because they reduce IT costs and add flexibility.

No matter which solution you go for, the benefits of a high availability system are numerous:

  • You save money and time as there is no need to rebuild lost data due to storage or other system failures. In some cases, it is impossible to recover your data after an outage. That can have a disastrous impact on your business.
  • Less downtime means less impact on users and clients. If your availability is measured in five nines, that means almost no service disruption. This leads to better productivity of your employees and guarantees customer satisfaction.
  • The performance of your applications and services will be improved.
  • You avoid the fines and penalties that come with failing to meet contractual SLAs due to a server issue.



Small Business Servers: Do You Really Need The Best Performance?

The server that you choose makes all the difference in your business efficiency. Depending on your decision, it could also thwart or aid your ability to expand. Selecting the best server for your small business does not have to be a hassle. There are standards you can count on to make the right choice.

First, let’s go over the essential functions of a business server. Then, we will touch on the operating systems that drive servers and make them perform. These are two of the most critical aspects of choosing a server so that you can make an informed decision for your business.


Understanding The Functions Of A Small Business Server

Small to Medium Businesses need servers that scale to their needs. You should not overpay for resources you do not use, though bundling features into a solution is a typical sales practice. You should also have adequate hardware to deal with unexpected traffic spikes. Once a marketing program starts working, you will likely see improvements in your online traffic. You need a server that can handle these increases.

Here are the most common uses of a server in the small business environment:

    • Secure email hosting. Startups may begin with Gmail, Yahoo, or Mail.com, but it’s best to transition to a domain-specific email service quickly. SMBs should upgrade and think more deeply about security and their digital reputation.
    • Hosting eCommerce. The right server provides secure and efficient commercial transactions. Your company must protect the personal and financial information of its clients. You may be held legally responsible for the unprotected information. Hosting eCommerce securely is essential.
    • Hosting a website. You want your content to be available to your audience. Your web host determines the speed and efficiency of your site. Your web host can also make a tremendous difference in your search engine ranking.
    • Hosting applications. Hosting apps on a remote server reduces your need for new hardware. Instead of purchasing equipment to store in-house, you can rent it through the cloud. Some common uses for internal apps include employee management, CRM, planning and invoice management SaaS apps.
    • Creating a virtual server environment. Does your business cover multiple brands? You may need a virtual server interface. Do your employees work remotely? You may need virtual desktops. Your small business server hosting can make this happen.
    • Data backup. You increase data storage security for your business when you back up to the cloud. If something unexpected happens, you can quickly reload a saved instance of your business. There is very little lag when this process is used efficiently. You may not even have to inform your customers that anything went wrong.
    • Storing documents. Storing documents is essential for business continuity and data protection. It also aids in disaster and data recovery strategies. You can also enable employees to work remotely.

A remote server can deliver many other services to an SMB. Powerful small business servers can support all of these services at the same time and more. Many companies will use separate servers for each feature. This makes it easier for a company to expand digitally.

Your Operating System

The operating system is of vital importance. Imagine if your home PC ran on an OS that you couldn’t work with and ultimately did not like to use. You would quickly look for one that worked better. Think of a server operating system in the same way.

Server software requires a specialized OS. It’s not often that you’ll see the same operating system on your desktop as in a server. There may be similarities, but ultimately the functionality will be different.

Here are your main choices when it comes to selecting the best server OS for small business:

    • Linux. Better known as a server OS than as a desktop one, Linux is made to serve many users simultaneously. It comes in many distributions that combine a full OS with a package manager, which means a faster install and smoother operations. The most popular Linux distributions include Debian, CentOS, and Ubuntu.
    • Windows OS. Microsoft names its server OS after its desktop operating system, but rest assured they are dramatically different in functionality. Microsoft Windows Server includes apps that support virtualization, security, and the IIS web server.

Linux is more popular than Windows on the server side. Linux is free in many cases. It is also efficient and less vulnerable to attack. Linux can support many of the most popular open-source software options, and many Linux software packages are also free.

Windows offers a more pleasing graphical user interface (GUI) for server management, while Linux requires learning command-line syntax. Additionally, many business owners prefer Microsoft to complement their current base of applications. These applications often include Active Directory, MS SQL, and SharePoint. These Microsoft-branded programs run far more efficiently on their native platform.

Support is also a reason companies lean toward Microsoft, which is known for years of helpful and responsive support. Because Linux is open source, your support options are researching on your own through online message boards or contacting your Linux distributor.


Dedicated vs. Cloud Servers

Once you’ve settled on an operating system, you need to decide on your server’s hardware infrastructure (to be fair, these choices are often made simultaneously). Your two main options are a dedicated server and a cloud-based server. Both are complete, self-contained environments. However, the underlying hardware of each server type is used differently.

The Cloud For Small Business

Think of a cloud server as a piece of a dedicated server, in a way. From a customer perspective, you will receive similar benefits. However, you actually share the physical space with other clients. The thing is, you’ll never know that you’re sharing space or how many other people you’re sharing it with unless a major issue comes up.

The cloud is composed of virtual machines that run on top of enterprise-grade dedicated servers. A dedicated server can host multiple virtual servers and provide a virtual environment for each client.

Power

Each virtual server functions as a slice of a dedicated server. The virtual cloud server is always weaker than its bare metal base, but a virtual server can still be incredibly powerful.

Cloud servers can receive resources from many dedicated servers, creating a virtual space that can match the capability of a dedicated environment. The total power available in the cloud depends on the physical servers providing the resources.

Cloud servers have more than enough power to handle the needs of a small business. Multiple servers work together to manage various companies at the same time.

Resources

The cloud can host websites, applications, file sharing, email, and eCommerce. The primary functions are the same as on a dedicated server, though due to hardware differences, speed may suffer in a cloud environment.

There is a built-in latency that slows down all virtual servers regardless of their infrastructure. The extra layer of virtual processing between the base OS and outside requests for data requires more time regardless of the total resources that are available.

Efficiency

Even the best cloud provider will almost always lag behind a moderate dedicated server. That said, most SMBs do not require a dedicated server, and even those that might will often find one so cost-prohibitive that the minimal difference in speed and performance does not justify the hit to the business’s finances.

Cloud server hosting infrastructure is efficient enough to handle the needs of most small businesses and with minimal customer engagement. The company managing the server is just as important as the infrastructure. Make sure that you choose a reputable web hosting company with strong storage solutions.

Scalability

The virtual server is much easier to scale from the perspective of the client. Scalability, outside of cost, is the main advantage of the cloud over a dedicated server.

There must be available resources to allow a business to scale its server. The one disadvantage of the virtual space is that in a live environment, clients compete for resources. If many clients experience unexpected traffic spikes at once, the server may experience the “noisy neighbor” effect: there may not be enough resources to go around, which can cause problems for multiple clients. Thankfully, this situation is often mitigated by a reputable host that actively manages traffic across the network.

Scalability is also a perk for a growing business looking to expand its entire environment. There is no need for downtime to add resources to the virtual space, which is extremely attractive for companies. Adding cloud storage is an incredibly simple task. Many hosting companies are also investing heavily in machine learning architecture to utilize resources better.

Speed

The cloud can be slower than dedicated solutions because of the built-in lag. Speed in the world of virtual servers is a resource that can become volatile. If too many clients are vying for the same resources, the cloud slows down. Dedicated resources will always be faster, but with advances in technology and a reliable hosting provider, these limitations matter less and less.

Cloud Server Pricing

Cloud servers are less expensive than dedicated business servers. There are many clients on a single piece of hardware. Each client only pays for a fraction of the resources on the physical server.

There are also fewer resources allocated to each client. In return, the price is much lower.


The Dedicated Server

The dedicated server is yours and yours alone, though it will most likely live inside of a data center. You don’t share it with another company, you don’t have to worry about another company hogging your resources, and you’re entirely responsible for it. Providers can service many clients simultaneously. When you rent a dedicated server, you are reserving your own dedicated space for your business. There are many advantages to this configuration though it’s not always ideal for every small business.

Power

Dedicated servers are the most powerful server option. The purpose of a dedicated server is to provide its client with more resources than it will ever need. Physical server hardware is difficult to upgrade without downtime, which results in large hardware racks offering potentially limitless resources and efficiency, sometimes for a customer that may never fully utilize that power.

A dedicated server could potentially contain dozens of processors, host hundreds of terabytes of data, and serve thousands of users simultaneously. It’s likely to have many different storage options, large hard drives (configurable in hot-swappable compartments), powerful graphics cards, and much more. You can also maintain the infrastructure of a complex eCommerce platform along with hundreds of concurrent applications.

Resources

Dedicated servers far outclass cloud servers concerning straight-up resources. However, dedicated servers are more difficult to upgrade. It is best to include all needed requirements within the original infrastructure build of the server. Dedicated servers are much less flexible than the cloud server in this regard.

Even a moderately featured dedicated server can support most SMB database and application trees.

Efficiency

A dedicated server is built to be highly effective for a single client. The client accesses the OS directly, with no lag generated by additional layers of processing. The result is a very streamlined system. Dedicated servers are less stressed during peak traffic times and stay efficient for longer periods than cloud servers.

Scalability

A dedicated business server can be scaled by adding hardware, but upgrading a dedicated box is much more difficult than scaling in the cloud. For that (and many other) reasons, companies don’t often invest in a dedicated server as a short-term solution. Most businesses look for a server that can scale with them over time without drastic hardware changes.

Speed

The dedicated server is built to be fast. There are no extra layers of processing between the operating system and requests for data. There is no built-in latency. Additionally, data streams within the dedicated environment are genuinely isolated. All resources are allocated in one direction. There is never a risk of a loss of resources.

Price

Here’s the thing about dedicated servers: they are much more expensive than cloud-based options. As previously mentioned, dedicated server clients are paying for the use of all available hardware resources. Dedicated clients receive full use of these resources regardless of the outside environment. Resources can never be assigned away from the client.


Dedicated Server Options

The dedicated server has benefits over the cloud server regarding power, resources, efficiency, and speed. However, the virtual server is the more flexible and scalable option. Virtual servers are also much less expensive. Does your business need high-cost power or low-cost flexibility?

Dedicated servers often have more resources available than a small business requires. A small business may intend to scale in the near future and believe that a dedicated server is the way to go. Consider the following before committing.

The business world is a volatile one. Web traffic is not guaranteed. Just because a company experiences a spike in traffic one month does not mean that the traffic will last. You may not have the ability to accurately forecast future traffic. Companies may need more data before being able to commit to a dedicated server and stay out of the red.

Small businesses with a definite plan may need a dedicated server. This is especially true if an industry is expanding along with a company. If a company expects a consistent audience, then it may require a dedicated server.

In most cases, dedicated servers are reserved for enterprise-level companies. SMBs usually have less volume and therefore fewer requirements for space and power.

The flexibility and scalability of the cloud environment may be more important than the immediate power and the efficiency of the dedicated server. The elasticity of the virtual world matches the volatility of the business world.

Cloud servers can emulate a lower-level dedicated server. However, resources allocated to a client in the virtual world can be taken away, and speed and efficiency can suffer under certain circumstances. Multiple clients may experience the noisy neighbor effect, which can be temporary or sustained depending on the number of clients using the underlying physical server.


Servers For Small Business: Your Next Move

The average virtual server can handle the needs of most SMBs. There are usually enough resources to scale up without experiencing the noisy neighbor effect.

The dedicated server is much more expensive. Many small and medium-sized companies are on a strict budget. A growing company can benefit from a timely evolution into a dedicated server environment if they end up requiring it.

As a company, you want to do your research and understand the market before making a decision. Strive to create a cost-effective, long-term relationship with the right service provider. This partnership will allow your business to grow and prosper. Your industry is competitive and stressful. Make sure that your server solutions aren’t adding to that stress, so that you can do what you do best: run your business!