IT Strategy – phoenixNAP Blog https://devtest.phoenixnap.com/blog

How AI and Voice Technology are Changing Business https://devtest.phoenixnap.com/blog/voice-technology Thu, 06 Aug 2020

When the first version of Siri came out, it struggled to understand natural language patterns, expressions, colloquialisms, and accents, all of which had to be synthesized into computational algorithms.

Voice technology has improved considerably over the last few years, largely thanks to the implementation of Artificial Intelligence (AI). AI has made voice technology much more adaptive and efficient.

This article focuses on the impact that AI and voice technology have on businesses that provide voice-enabled services.

AI and Voice Technology

The human brain is complex. Despite this, there are limits to what it can do. For a programmer to think of all the possible eventualities is impractical at best. In traditional software development, developers instruct software on what function to execute and in which circumstances.

It’s a time-consuming and monotonous process. It is not uncommon for developers to make small mistakes that become noticeable bugs once a program is released.

With AI, developers instruct the software on how to function and learn. As the AI algorithm learns, it finds ways to make the process more efficient. Because AI can process data a lot faster than we can, it can come up with innovative solutions based on the previous examples that it accesses.

The revolution of voice tech powered by AI is dramatically changing the way many businesses work. AI, in essence, is little more than a smart algorithm. What makes it different from other algorithms is its ability to learn. We are now moving from a model of programming to teaching.

Traditionally, programmers write code to tell the algorithm how to behave from start to finish. Now programmers can dispense with tedious processes. All they need to do is to teach the program the tasks it needs to perform.

The Rise of AI and Voice Technology

Voice assistants can now do a lot more than just run searches. They can help you book appointments, flights, play music, take notes, and much more. Apple offers Siri, Microsoft has Cortana, Amazon uses Alexa, and Google created Google Assistant. With so many choices and usages, is it any wonder that 40% of us use voice tech daily?

[Image: voice technology AI diagram]

They’re also now able to understand not only the question you’re asking but the general context. This ability allows voice tech to offer better results.

Before voice technology, communication with computers happened via typing or graphical interfaces. Now, sites and applications can harness the power of smart voice technologies to enhance their services in ways previously unimagined. That is why voice-compatible products are on the rise.

Search engines have also had to keep up, since search optimization used to target text-based queries only. As voice assistant technology advances, that is starting to change. In 2019, Amazon sold over 100 million devices with Alexa built in, including Echo and third-party gadgets.

According to Google, 20% of all searches are voice searches, and by 2020 that number could rise to 50%. For businesses looking to grow, voice technology is therefore one major area to consider, as global voice commerce is expected to be worth $40B by 2022.

How Voice Technology Works

Voice technology requires two different interfaces. The first is between the end-user and the endpoint device in use. The second is between the endpoint device and the server.

It’s the server that contains the “personality” of your voice assistant. Whether it runs on a bare metal server or in the cloud, voice technology is powered by computational resources. The server is where all the AI’s background processes run, even though it feels as if the voice assistant “lives” on your device.

[Image: diagram of voice recognition]

That illusion seems plausible, considering how fast your assistant answers your questions. The truth is that your phone alone doesn't have the required processing power or storage to run the full AI program. That's why your assistant is inaccessible when the internet is down.
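
To make the split concrete, here is a minimal sketch of the device-side half of those two interfaces: the client only records audio and forwards it to the backend. The endpoint URL and response format are hypothetical stand-ins, not a real assistant API.

```python
# Illustrative sketch of the device/server split described above.
# The endpoint and response fields are placeholders, not a real assistant API.
import requests

def ask_assistant(audio_path: str) -> str:
    """Send recorded audio to the assistant backend and return its answer."""
    with open(audio_path, "rb") as audio:
        # The device's only job: capture audio and hand it to the server.
        response = requests.post(
            "https://assistant.example.com/v1/query",  # placeholder endpoint
            files={"audio": audio},
            timeout=10,
        )
    response.raise_for_status()
    # Speech recognition, intent parsing, and the search itself all ran server-side.
    return response.json()["answer_text"]

if __name__ == "__main__":
    print(ask_assistant("question.wav"))
```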

How Does AI in Voice Technology Work?

Say, for example, that you want to search for more information on a particular country. You simply voice your request. Your request then relays to the server. That’s when AI takes over. It uses machine learning algorithms to run searches across millions of sites to find the precise information that you need.

To find the best possible information for you, the AI must also analyze each site very quickly. This rapid analysis enables it to determine whether or not the website pertains to the search query and how credible the information is.

If the site is deemed worthy, it shows up in search results. Otherwise, the AI discards it.

The AI goes one step further and watches how you react. Did you navigate off the site straight away? If so, the technology takes it as a sign that the site didn’t match the search term. When someone else uses similar search terms in the future, AI remembers that and refines its results.

Over time, as the AI learns more and more, it becomes more capable of producing accurate results. At the same time, the AI learns all about your preferences. Unless you say otherwise, it’ll focus on search results in the area or country where you live. It determines what music you like, what settings you prefer, and makes recommendations. This intelligent programming allows that simple voice assistant to improve its performance every time you use it.

Learn how Artificial Intelligence automates procedures in ITOps – What is AIOps.

Servers Power Artificial Intelligence

Connectivity issues, the program’s speed, and the ability of servers to manage all this information are all concerns of voice technology providers.

Companies that offer these services need to run enterprise-level servers capable of storing large amounts of data and processing it at high speed. The alternative is off-premises cloud computing from third-party providers, which reduces overhead costs and increases the growth potential of your services and applications.

[Image: How servers and AI power voice technology]

Alexa and Siri are complex programs, but why would they need so much space on a server? After all, they're individual programs; how much space could they need? That's where it becomes tricky.

According to Statista, in 2019, there were 3.25 billion active virtual assistant users globally. Forecasts say that the number will be 8 billion by the end of 2023.

The assistant adapts to the needs of each user. That essentially means that it has to adjust to a possible 3.25 billion permutations of the underlying system. The algorithm learns as it goes, so all that information must pass through the servers.

Each user will expect their personal settings to be stored. So the servers must accommodate not only the new information but also retain the old.

This ever-growing capacity is why popular providers run large server farms. This is where the on-premise versus cloud computing debate takes on greater meaning.


Takeaway

Without the computational advances made in AI, voice technology would not be possible. The permutations in the data alone would be too much for humans to handle.

Artificial intelligence is redefining voice-enabled applications in a variety of businesses. Voice technology pairs naturally with AI and will keep improving as machine learning matures.

Incorporating AI-powered voice technology in the cloud provides fast processing and can improve businesses dramatically. Businesses can have voice assistants that handle customer care and simultaneously learn from those interactions, teaching themselves how to serve clients better.

How to Leverage Object Storage with Veeam Backup Office 365 https://devtest.phoenixnap.com/blog/object-storage-with-veeam-backup-office-365 Tue, 19 May 2020

Introduction

The phoenixNAP Managed Backup for Microsoft Office 365 solution, powered by Veeam, has gained popularity amongst Managed Service Providers and Office 365 administrators in recent years.

Following the publication of our KB article, How To Install & Configure Veeam Backup For Office 365, we wanted to shed light on how one can leverage Object Storage as a target to offload bulk Office 365 backup data. Object Storage support was introduced in Veeam Backup for Office 365 v4, released in November 2019, and has significantly increased the product's ability to offload backup data to cloud providers.

Unlike other Office 365 backup products, VBO offers the flexibility to be deployed in different scenarios: on-premises, as a hybrid cloud solution, or as a cloud service. phoenixNAP has now made it easier for Office 365 tenants to leverage Object Storage, and for MSPs to increase margins as part of their Managed Backup service offerings. Its simple deployment, lower storage cost, and ability to scale infinitely have made Veeam Backup for Office 365 a top performer amongst its peers.

In this article, we discuss the importance of backing up Office 365 data, briefly explain the Object Storage architecture, and present the steps required to configure Object Storage as a backup repository for Veeam Backup for Office 365.

You may have different considerations in how the product should be configured. Nonetheless, this blog focuses on leveraging Object Storage as a backup target for Office 365 data. Since Veeam Backup for Office 365 can be hosted in many ways, this blog remains deployment-neutral, as the process required to add an Object Storage target repository is common to all deployment models.


Why Should We Back Up Office 365?

A misconception that frequently surfaces when mentioning Office 365 backup is the idea that, since Office 365 data resides in the Microsoft cloud, the data is already being taken care of. To some extent it is: Microsoft goes a long way to keep the service highly available and provides some data retention capabilities. However, Microsoft makes it clear that, as per the Shared Responsibility Model and GDPR regulation, the data owner/controller is still the one responsible for Office 365 data. And even if that were not the case, would you really want to place all the eggs in one basket?

Office 365 is not limited to email communication (Exchange Online); it is also the service behind SharePoint Online, OneDrive, and Teams, which organizations commonly use to store important corporate data, collaborate, and support their distributed remote workforce. At phoenixNAP, we're here to help you leverage Veeam Backup for Office 365 and assist you to:

  • Recover from accidental deletion
  • Overcome retention policy gaps
  • Fight internal and external security threats
  • Meet legal and compliance requirements

This further reinforces why you should opt for Veeam Backup for Office 365 and leverage phoenixNAP Object Storage to secure and maintain a solid DRaaS offering as part of your Data Protection Plan.


Object Storage

What is object storage?

Object Storage is a data storage architecture best suited to storing large amounts of unstructured data. File Storage keeps data in a hierarchy that preserves the original structure but is complex to scale and expensive to maintain. Object Storage instead stores data as objects, each typically made up of the data itself, a variable amount of metadata, and a unique identifier, which makes it a smart and cost-effective way to store data.
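
To make the object model concrete, the sketch below writes a single object, its data, metadata, and unique key, to an S3-compatible endpoint with boto3. The endpoint, bucket, and credentials are placeholders, not phoenixNAP-specific values.

```python
# Sketch: one object = data + metadata + a unique key, on any S3-compatible store.
# The endpoint, bucket name, and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-object-storage.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The object's unique identifier within the bucket.
key = "backups/office365/exchange/2020-05-19/chunk-0001"

s3.put_object(
    Bucket="example-backup-bucket",
    Key=key,
    Body=b"<compressed backup chunk>",
    # A variable amount of metadata travels with the object itself.
    Metadata={"source": "vbo", "organization": "example-org", "workload": "exchange"},
)
```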

Large chunks of backup data are first compressed and then saved to Object Storage; this process is handled by the Backup Proxy server and allows for a smarter way to store data. When using object storage, metadata and cache both reside locally, while the backup data itself is transferred to and stored in Object Storage. The cache helps reduce cost-expensive operations, especially when reading and writing data to and from object storage repositories. With the help of the cache, Veeam Explorer can open backups in Object Storage and use metadata to obtain the structure of the backup data objects. This benefit allows the end-user to navigate through backup data without needing to download any of it from Object Storage.

In this article, we focus on how Object Storage is used as a target for VBO backups, but it is worth pointing out that, as shown in the picture below, other Veeam products can also interface with Object Storage as a backup repository.

[Image: Veeam backup repository]

Why should we consider using it?

With the right infrastructure and continuous upkeep, Office 365 administrators and MSPs can design an on-premises Object Storage repository to directly store or offload O365 backup data as needed. However, to fully realize all its benefits, Object Storage in the cloud is the ideal destination for Office 365 backups thanks to its simpler deployment, unlimited scalability, and lower costs:

  • Simple Deployment
    As noted further down in this article, the steps required to set up an Object Storage repository in the cloud are straightforward. With a few necessary prerequisites and proper planning, one can have this repository up and running in no time by following a simple wizard to create an Object Storage repository and present it as a backup repository.
  • Easily Scalable
    While the ability to scale and design VBO server roles as needed is already a great benefit, the ability to offload to Object Storage at a cloud provider makes it easier to absorb backup data growth while remaining highly redundant.
  • Lower Cost Capabilities
    An object-based architecture is the most effective way for organizations to store large amounts of data. Since it utilizes a flat architecture, it consumes disk space more efficiently, thus benefiting from a relatively low cost without the overhead of traditional file architectures. Additionally, with the help of retention policies and storage limits, VBO provides great ways to keep costs under control.

Veeam Backup for Microsoft Office 365 is licensed per user account and supports a variety of licensing options, such as subscription or rental-based licenses. To use Object Storage as a backup target, a storage account from a cloud service provider is required, but other than that, you are free to start using it.

VBO Deployment Models

For the purposes of this article, we won't dig into the various deployment models that exist for VBO in much detail, but you should be aware of the options available when opting for VBO.

VBO can run in on-premises, private cloud, and public cloud environments. O365 tenants have the flexibility to choose from different designs based on their current requirements and host VBO wherever they deem right. In any scenario, a local primary backup repository is required, as this is the direct storage repository for backups. Object Storage can then be leveraged to offload bulk backup data to a cheaper and safer storage solution provided by a cloud service provider like phoenixNAP, further supporting disaster recovery objectives and data protection.

In some instances, it might be required to run and store VBO in different infrastructures for full disaster recovery (DR) purposes. Both O365 tenants and MSPs are able to leverage the power of the cloud by collaborating with a VCSP like phoenixNAP to provide them the ability to host and store VBO into a completely different infrastructure while providing self-service restore capabilities to end-users. For MSPs, this is a great way to increase revenue by offering managed backup service plans for clients.

The prerequisites and how these components work for each environment are very similar, hence for the benefit of this article the following Object Storage configuration is generally the same for each type of deployment.

[Image: Veeam Backup for Office 365 deployment models]


Configuring Object Storage in Veeam Backup for Office 365

As explained in the previous section, although there are different ways to deploy VBO, the procedure to configure and set up an Object Storage repository is quite similar in every case; hence no specific attention will be given to a particular deployment model during the following configuration walk-through.

This section assumes that the initial configuration items marked as completed below have already been accomplished, leaving us in a position to set up Object Storage as a repository, configure the local repository, secure Object Storage, and restore backup data.

  Completed so far:
  • Defined policy-based settings and retention requirements according to the Data Protection Plan and service costs
  • Object Storage cloud account details and credentials in hand
  • Office 365 prerequisite configurations to connect with VBO
  • Hosted and deployed VBO
  • Installed and licensed VBO
  • Created an Organization in VBO

  Covered below:
  • Adding an S3 Compatible Object Storage Repository*
  • Adding a Local Backup Repository
  • Securing Object Storage
  • Restoring Backup Data

* When opting for Object Storage, it is a suggested best practice to set up the S3 Object Storage configuration in advance; this will come in handy when you are prompted for an Object Storage repository option while adding the Local Backup Repository.

Adding S3 Compatible Object Storage Repository

Step 1. Launch New Object Storage Repository Wizard

Right-click Object Storage Repositories, select Add object storage.

Step 2. Specify Object Storage Repository Name

Enter a Name for the Object Storage Repository and optionally a Description. Click Next.

Step 3. Select Object Storage Type

On the new Object storage type page, select S3 Compatible (phoenixNAP compatible). Click Next.

Step 4. Specify Object Storage Service Point and Account

Specify the Service Point and the Datacenter region. Click Add to specify the credentials to connect with your cloud account.

If you already have a credentials record that was configured beforehand, select the record from the drop-down list. Otherwise, click Add and provide your access and secret keys, as described in Adding S3-Compatible Access Key. You can also click Manage cloud accounts to manage existing credentials records.

Enter the Access key, the Secret key, and a Description. Click OK to confirm.

Step 5. Specify Object Storage Bucket

Finalize by selecting the Bucket to use, then click Browse to specify the folder to store the backups. Click New folder to create a new folder and click OK to confirm.

Clicking Advanced lets you specify a storage consumption soft limit to keep costs under control; this acts as the global retention storage policy for Object Storage. As a best practice, this consumption value should be lower than the Object Storage amount you're entitled to from the cloud provider, in order to leave room for additional service data.

Click OK followed by Finish.
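
Before stepping through the wizard, it can save time to confirm that the service point, access keys, and bucket from Steps 4 and 5 are actually reachable. Here is a minimal pre-flight check with boto3, where every value is a placeholder:

```python
# Quick pre-flight check of the S3-compatible account used in Steps 4 and 5.
# The service point, keys, and bucket name are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-object-storage.com",  # Service Point
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

try:
    # Verifies that the bucket exists and the credentials can access it.
    s3.head_bucket(Bucket="example-backup-bucket")
    print("Bucket reachable - safe to add it in the VBO wizard.")
except ClientError as err:
    print(f"Check the endpoint, credentials, or bucket name: {err}")
```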

Adding Local Backup Repository

Step 1. Launch New Backup Repository Wizard

Open the Backup Infrastructure view.

In the inventory pane, select the Backup Repositories node.

On the Backup Repository tab, click Add Repository on the ribbon.

Alternatively, in the inventory pane, right-click the Backup Repositories node and select Add backup repository.

Step 2. Specify Backup Repository Name

Specify Backup Repository Name and Description then click Next.

Step 3. Specify Backup Proxy Server

When planning to extend a backup repository with object storage, this directory will only include a cache consisting of metadata. The actual data will be compressed and backed up directly to object storage that you specify in the next step.

Specify the Backup Proxy to use and the Path to the location to store the backups. Click Next.

Step 4. Specify Object Storage Repository

At this step of the wizard, you can optionally extend a backup repository with object storage to back up data directly to the cloud.

To extend a backup repository with object storage, do the following:

  1. Select the Offload backup data to the object storage checkbox.
  2. In the drop-down list, select an object storage repository to which you want to offload your data.
    Make sure that an object storage repository has been added to your environment in advance. Otherwise, click Add and follow the steps of the wizard, as described in Adding Object Storage Repositories.
  3. To encrypt offloaded data, select Encrypt data uploaded to object storage and provide a password.

Step 5. Specify Retention Policy Settings

At this step of the wizard, specify retention policy settings.

Depending on how retention policies are configured, obsolete restore points are automatically removed from Object Storage by VBO. A service task calculates the age of offloaded restore points; when that age exceeds the specified retention period, the task automatically purges the obsolete restore points from Object Storage.

  • In the Retention policy drop-down list, specify how long your data should be stored in a backup repository.
  • Choose a retention type:
    • Item-level retention.
      Select this type if you want to keep an item until its creation time or last modification time is within the retention coverage.
    • Snapshot-based retention.
      Select this type if you want to keep an item until its latest restore point is within the retention coverage.
  • Click Advanced to specify when to apply a retention policy. You can select to apply it on a daily or monthly basis. For more information, see Configuring Advanced Settings.
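
Conceptually, the purge task described above boils down to comparing each offloaded restore point's age with the retention window; the following rough sketch illustrates the idea and is not VBO's actual implementation:

```python
# Conceptual sketch of the retention check described above, not VBO's code.
from datetime import datetime, timedelta

def obsolete_restore_points(restore_points, retention_days, now=None):
    """Return the names of restore points whose age exceeds the retention period."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [name for name, created in restore_points if created < cutoff]

# Example: a 3-year retention policy applied to two offloaded restore points.
points = [
    ("exchange-2017-01-01", datetime(2017, 1, 1)),
    ("exchange-2020-05-01", datetime(2020, 5, 1)),
]
print(obsolete_restore_points(points, retention_days=3 * 365))  # only the 2017 point
```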

Configuring Advanced Settings

After you click Advanced, the Advanced Settings dialog appears in which you can select either of the following options:

  • Daily at:
    Select this option if you want a retention policy to be applied on a daily basis and choose the time and day.
  • Monthly at:
    Select this option if you want a retention policy to be applied on a monthly basis and choose the time and day, which can be the first, second, third, fourth or even the last one in the month.

Securing Object Storage

To keep backup data safe and secure from any possible vulnerabilities, one must secure the backup application itself and its communication channels. Veeam has made this possible by continuously implementing key security measures to address and mitigate possible threats, while providing some great security functionality for interfacing with Object Storage.

VBO v4 provides the same level of protection for your data irrespective of the deployment model used. Communications between VBO components are always encrypted, and all communication between Microsoft Office 365 and VBO is encrypted by default. When using object storage, data can additionally be protected with optional encryption at rest.

VBO v4 also introduces a Cloud Credential Manager, which lets us create and maintain a list of credentials for the supported Cloud Service Providers. These records allow us to connect with the Object Storage provider to store and offload backup data. Credentials consist of access and secret keys and work with any S3-compatible Object Storage.

Password Manager lets us manage encryption passwords with ease. One can create passwords to protect encryption keys that are used to encrypt data being transferred to object storage repositories. To encrypt data, VBO uses the AES-256 specification.

Watch one of our experts speak about the importance of Keeping a Tight Grip on Office 365 Security While Working Remotely.

Restoring from Object Storage

Restoring backup data from Object Storage is just as easy as restoring from any traditional storage repository. As explained earlier in this article, Veeam Explorer is the tool used to open and navigate through backups without the need to download any of the data.

Veeam Explorer uses metadata to obtain the structure of the backup data objects, and once backup data has been identified for restore, you may choose any of the available restore options as required. When leveraging Object Storage in the cloud, one can also host Veeam Explorer locally and use it to restore Office 365 backup data from the cloud.

Where Does phoenixNAP Come into Play?

For more information, please look at our product pages and use the form to request additional details or send an e-mail to sales@phoenixnap.com 

 

Abbreviations Table

DRaaS – Disaster Recovery as a Service
GDPR – General Data Protection Regulation
MSP – Managed Service Provider
O365 – Microsoft Office 365
VBO – Veeam Backup for Office 365
VCC – Veeam Cloud Connect
VCSP – Veeam Cloud & Service Provider

Test Driven vs Behavior Driven Development: Key Differences https://devtest.phoenixnap.com/blog/tdd-vs-bdd Mon, 27 Apr 2020

Test-driven development (TDD) and Behavior-driven development (BDD) are both test-first approaches to Software Development. They share common concepts and paradigms, rooted in the same philosophies. In this article, we will highlight the commonalities, differences, pros, and cons of both approaches.

What is Test-driven development (TDD)

Test-driven development (TDD) is a software development process that relies on the repetition of a short development cycle: requirements turn into very specific test cases. The code is written to make the test pass. Finally, the code is refactored and improved to ensure code quality and eliminate any technical debt. This cycle is well-known as the Red-Green-Refactor cycle.

What is Behavior-driven development (BDD)

Behavior-driven development (BDD) is a software development process that encourages collaboration among all parties involved in a project’s delivery. It encourages the definition and formalization of a system’s behavior in a common language understood by all parties and uses this definition as the seed for a TDD based process.

[Image: diagram comparing Test-Driven Development and Behavior-Driven Development]

Key Differences Between TDD and BDD

  • Focus. TDD: delivery of a functional feature. BDD: delivering on expected system behavior.
  • Approach. TDD: bottom-up or top-down (Acceptance-Test-Driven Development). BDD: top-down.
  • Starting point. TDD: a test case. BDD: a user story/scenario.
  • Participants. TDD: the technical team. BDD: all team members, including the client.
  • Language. TDD: a programming language. BDD: a lingua franca.
  • Process. TDD: lean, iterative. BDD: lean, iterative.
  • Delivers. TDD: a functioning system that meets our test criteria. BDD: a system that behaves as expected and a test suite that describes the system's behavior in human common language.
  • Avoids. TDD: over-engineering, low test coverage, and low-value tests. BDD: deviation from intended system behavior.
  • Brittleness. TDD: a change in implementation can result in changes to the test suite. BDD: the test suite only needs to change if the system behavior is required to change.
  • Difficulty of implementation. TDD: relatively simple for bottom-up, more difficult for top-down. BDD: a bigger learning curve for all parties involved.

Test-Driven Development (TDD)

In TDD, we have the well-known Red-Green-Refactor cycle. We start with a failing test (red) and implement as little code as necessary to make it pass (green). This process is also known as Test-First Development. TDD also adds a Refactor stage, which is equally important to overall success.

The TDD approach was discovered (or perhaps rediscovered) by Kent Beck, one of the pioneers of Unit Testing and later TDD, Agile Software Development, and eventually Extreme Programming.

The diagram below does an excellent job of giving an easily digestible overview of the process. However, the beauty is in the details. Before delving into each individual stage, we must also discuss two high-level approaches towards TDD, namely bottom-up and top-down TDD.

Figure 1: TDD’s Red-Green-Refactor Cycle

 

Bottom-Up TDD

The idea behind Bottom-Up TDD, also known as Inside-Out TDD, is to build functionality iteratively, focusing on one entity at a time, solidifying its behavior before moving on to other entities and other layers.

We start by writing unit-level tests, proceeding with their implementation, and then moving on to writing higher-level tests that aggregate the functionalities of the lower-level tests, creating an implementation of said aggregate test, and so on. By building up, layer by layer, we eventually get to a stage where the aggregate test is an acceptance-level test, one that hopefully falls in line with the requested functionality. This makes it a highly developer-centric approach, mainly aimed at making the developer's life easier.

Pros:
  • Focus is on one functional entity at a time
  • Functional entities are easy to identify
  • A high-level vision is not required to start
  • Helps parallelization

Cons:
  • Delays the integration stage
  • The amount of behavior an entity needs to expose is unclear
  • High risk of entities not interacting correctly with each other, thus requiring refactors
  • Business logic may be spread across multiple entities, making it unclear and difficult to test

Top-Down TDD

Top-Down TDD is also known as Outside-In TDD or Acceptance-Test-Driven Development (ATDD). It takes the opposite approach: we start building the system from the outside, iteratively adding more detail to the implementation and iteratively breaking it down into smaller entities as refactoring opportunities become evident.

We start by writing an acceptance-level test and proceed with a minimal implementation. The implementation, too, is built incrementally: before creating any new entity or method, it needs to be preceded by a test at the appropriate level. We thus iteratively refine the solution until it solves the problem that kicked off the whole exercise, that is, the acceptance test.

This setup makes Top-Down TDD a more Business/Customer-centric approach. It is more challenging to get right, as it relies heavily on good communication between the customer and the team. It also requires good citizenship from the developer, as the next iterative step needs careful consideration. The process speeds up over time but does have a learning curve. However, the benefits far outweigh any negatives. This approach results in the collaboration between customer and team taking center stage, a system with very well-defined behavior, clearly defined flows, a focus on integrating first, and a very predictable workflow and outcome.

Pros:
  • Focus is on one user-requested scenario at a time
  • The flow is easy to identify
  • Focus is on integration rather than implementation details
  • The amount of behavior an entity needs to expose is clear
  • User requirements, system design, and implementation details are all clearly reflected in the test suite
  • Predictable

Cons:
  • Critical to get the Assertion-Test right, thus requiring collaborative discussion between the business/user/customer and the team
  • Relies on stubbing, mocking, and/or test doubles
  • Slower start, as the flow is identified through multiple iterations
  • More limited parallelization opportunities until a skeleton system starts to emerge

The Red-Green-Refactor Life Cycle

Armed with the above-discussed high-level vision of how we can approach TDD, we are free to delve deeper into the three core stages of the Red-Green-Refactor flow.

Red

We start by writing a single test, execute it (thus having it fail) and only then move to the implementation of that test. Writing the correct test is crucial here, as is agreeing on the layer of testing that we are trying to achieve. Will this be an acceptance level test or a unit level test? This choice is the chief delineation between bottom-up and top-down TDD.

Green

During the Green-stage, we must create an implementation to make the test defined in the Red stage pass. The implementation should be the most minimal implementation possible, making the test pass and nothing more. Run the test and watch it pass.

Creating the most minimal implementation possible is often the challenge here as a developer may be inclined, through force of habit, to embellish the implementation right off the bat. This result is undesirable as it will create technical baggage that, over time, will make refactoring more expensive and potentially skew the system based on refactoring cost. By keeping each implementation step as small as possible, we further highlight the iterative nature of the process we are trying to implement. This feature is what will grant us agility.

Another key aspect is that the Red-stage, i.e., the tests, is what drives the Green-stage. There should be no implementation that is not driven by a very specific test. If we are following a bottom-up approach, this pretty much comes naturally. However, if we’re adopting a top-down approach, then we must be a bit more conscientious and make sure to create further tests as the implementation takes shape, thus moving from acceptance level tests to unit-level tests.
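
As a minimal sketch of a single Red-Green iteration, here is an invented example in Python's unittest; the function and figures are purely illustrative and not tied to any particular project:

```python
# Red: this test is written first and fails because discounted_price does not exist yet.
import unittest

def discounted_price(price, percent):
    # Green: the most minimal implementation that makes the test pass.
    # Validation, rounding rules, and similar refinements are left to the
    # Refactor stage, performed under the protection of the passing test below.
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertAlmostEqual(discounted_price(200.0, 10), 180.0)

if __name__ == "__main__":
    unittest.main()
```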

Refactor

The Refactor-stage is the third pillar of TDD. Here the objective is to revisit and improve on the implementation. The implementation is optimized, code quality is improved, and redundancy eliminated.

Refactoring can have a negative connotation for many, being perceived as a pure cost, fixing something improperly done the first time around. This perception originates in more traditional workflows where refactoring is primarily done only when necessary, typically when the amount of technical baggage reaches untenable levels, thus resulting in a lengthy, expensive, refactoring effort.

Here, however, refactoring is an intrinsic part of the workflow and is performed iteratively. This flexibility dramatically reduces the cost of refactoring. The code is not entirely reworked. Instead, it is slowly evolving. Moreover, the refactored code is, by definition, covered by a test. A test that has already passed in a previous iteration of the code. Thus, refactoring can be done with confidence, resulting in further speed-up. Moreover, this iterative approach to improvement of the codebase allows for emergent design, which drastically reduces the risk of over-engineering the problem.

It is of critical importance that behavior should not change, and we do not add extra functionality during the Refactor-stage. This process allows refactoring to be done with extreme confidence and agility as the relevant code is, by definition, already covered by a test.

[Image: the testing pyramid, with manual session-based testing at the top]

Behavior-Driven Development (BDD)

As previously discussed, TDD (or bottom-up TDD) is a developer-centric approach aimed at producing a better code-base and a better test suite. In contrast, ATDD is more Customer-centric and aimed at producing a better solution overall. We can consider Behavior-Driven Development the next logical progression from ATDD. Dan North's experiences with TDD and ATDD led him to propose the BDD concept, whose aim is to bring together the best aspects of TDD and ATDD while eliminating the pain points he identified in the two approaches. What he identified was that it was helpful to have descriptive test names and that testing behavior was much more valuable than functional testing.

Dan North does a great job of succinctly describing BDD as “Using examples at multiple levels to create shared understanding and surface certainty to deliver software that matters.”

Some key points here:

  • What we care about is the system’s behavior
  • It is much more valuable to test behavior than to test the specific functional implementation details
  • Use a common language/notation to develop a shared understanding of the expected and existing behavior across domain experts, developers, testers, stakeholders, etc.
  • We achieve Surface Certainty when everyone can understand the behavior of the system, what has already been implemented and what is being implemented and the system is guaranteed to satisfy the described behaviors

BDD puts the onus even more on fruitful collaboration between the customer and the team. It becomes even more critical to define the system's behavior correctly, thus resulting in the correct behavioral tests. A common pitfall here is to make assumptions about how the system will go about implementing a behavior. That mistake produces a test tainted with implementation detail, making it a functional test and not a real behavioral test, which is something we want to avoid.

The value of a behavioral test is that it tests the system without caring about how it achieves its results. This means that a behavioral test should not change over time, unless the behavior itself needs to change as part of a feature request. The cost-benefit over functional testing is significant, as functional tests are often so tightly coupled with the implementation that a refactor of the code involves a refactor of the test as well.

However, the more substantial benefit is the retention of Surface Certainty. In a functional test, a code-refactor may also require a test-refactor, inevitably resulting in a loss of confidence. Should the test fail, we are not sure what the cause might be: the code, the test, or both. Even if the test passes, we cannot be confident that the previous behavior has been retained. All we know is that the test matches the implementation. This result is of low value because, ultimately, what the customer cares about is the behavior of the system. Thus, it is the behavior of the system that we need to test and guarantee.

A BDD-based approach should result in full test coverage, where the behavioral tests fully describe the system's behavior to all parties using a common language. Contrast this with functional testing, where even full coverage gives no guarantee that the system satisfies the customer's needs, and where the risk and cost of refactoring the test suite itself only grow with more coverage. Of course, leveraging both, by working top-down from behavioral tests to more functional tests, gives the Surface Certainty benefits of behavioral testing plus the developer-focused benefits of functional testing, while curbing the cost and risk of functional tests since they are only used where appropriate.

In comparing TDD and BDD directly, the main changes are that:

  • The decision of what to test is simplified; we need to test the behavior
  • We leverage a common language which short-circuits another layer of communication and streamlines the effort; the user stories as defined by the stakeholders are the test cases

An ecosystem of frameworks and tools has emerged to allow for common-language-based collaboration across teams, as well as the integration and execution of such behavior descriptions as tests using industry-standard tooling. Examples include Cucumber, JBehave, and Fitnesse, to name a few.
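
To sketch how such tooling ties a plain-language scenario to executable steps, here is an illustrative example using the Python behave library (one option in the same family as the frameworks above); the scenario text, step wording, and Cart class are all invented for this example:

```python
# steps/ordering_steps.py - behave step definitions for a Gherkin scenario such as:
#
#   Scenario: Free shipping over the threshold
#     Given a cart worth 120 euros
#     When the customer checks out
#     Then shipping is free
#
# The scenario, step wording, and Cart class are illustrative assumptions.
from behave import given, when, then

class Cart:
    def __init__(self, total):
        self.total = total
        self.shipping = None

    def checkout(self):
        # Behavior under test: orders of 100 or more ship for free.
        self.shipping = 0 if self.total >= 100 else 5

@given("a cart worth {amount:d} euros")
def step_cart(context, amount):
    context.cart = Cart(amount)

@when("the customer checks out")
def step_checkout(context):
    context.cart.checkout()

@then("shipping is free")
def step_free_shipping(context):
    assert context.cart.shipping == 0
```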

[Image: behavior-driven development diagram]

The Right Tool for the Job

As we have seen, TDD and BDD are not really in direct competition with each other. Consider BDD as a further evolution of TDD and ATDD, which brings more of a Customer-focus and further emphasizes communication between the customer and the Technical team at all stages of the process. The result of this is a system that behaves as expected by all parties involved, together with a test suite describing the entirety of the system’s many behaviors in a human-readable fashion that everyone has access to and can easily understand. This system, in turn, provides a very high level of confidence in not only the implemented system but in future changes, refactors, and maintenance of the system.

At the same time, BDD is based heavily on the TDD process, with a few key changes. While the customer or particular members of the team may primarily be involved with the top-most level of the system, other team members like developers and QA engineers would organically shift from a BDD to a TDD model as they work their way in a top-down fashion.

We expect the following key benefits:

  • Bringing pain forward
  • Onus on collaboration between customer and team
  • A common language shared between customer and team, leading to a shared understanding
  • Imposes a lean, iterative process
  • Guarantee the delivery of software that not only works but works as defined
  • Avoid over-engineering through emergent design, thus achieving desired results via the most minimal solution possible
  • Surface Certainty allows for fast and confident code refactors
  • Tests have innate value, versus creating tests simply to meet an arbitrary code coverage threshold
  • Tests are living documentation that fully describes the behavior of the system

There are also scenarios where BDD might not be a suitable option, for example where the system in question is very technical and perhaps not customer-facing at all. In such cases, the requirements are bound more tightly to the functionality than to behavior, making TDD a possibly better fit.

Adopting TDD or BDD?

Ultimately, the question should not be whether to adopt TDD or BDD, but which approach is best for the task at hand. Quite often, the answer will be both. As more people are involved in more significant projects, it becomes self-evident that both approaches are needed at different levels and at various times throughout the project's lifecycle. TDD gives structure and confidence to the technical team, while BDD facilitates and emphasizes communication between all involved parties and ultimately delivers a product that meets the customer's expectations and offers the Surface Certainty required to confidently evolve the product in the future.

As is often the case, there is no magic bullet here. What we have instead is a couple of very valid approaches. Knowledge of both will allow teams to determine the best method based on the needs of the project. Further experience and fluidity of execution will enable the team to use all the tools in its toolbox as the need arises throughout the project’s lifecycle, thus achieving the best possible business outcome. To find out how this applies to your business, talk to one of our experts today.

Why Carrier-Neutral Data Centers are Key to Reduce WAN Costs https://devtest.phoenixnap.com/blog/lower-wan-costs Thu, 27 Feb 2020

Every year, the telecom industry invests hundreds of billions of dollars in network expansion, an amount expected to rise by 2%-4% in 2020. Not surprisingly, the outcome is predictable: bandwidth prices keep falling.

As Telegeography reported, several factors accelerated this phenomenon in recent years. Major cloud providers like Google, Amazon, Microsoft, and Facebook have altered the industry by building their own massive global fiber capacity while scaling back their purchases from telecom carriers. These companies have simultaneously driven global fiber supply up and demand down. Technology advances, like 100 Gbps bit rates, have also contributed to the persistent erosion of costs.

The result is bandwidth prices that have never been lower. And the advent of software-defined WAN (SD-WAN) makes it simpler than ever to prioritize traffic between costly private networks and cheaper Internet bandwidth.


This period should be the best of times for enterprise network architects, but that is not necessarily the case.

Many factors conspire against buyers who seek to lower costs for the corporate WAN, including:

  • Telecom contracts that are typically long-term and inflexible
  • Competition that is often limited to a handful of major carriers
  • Few choices for local access and Internet at corporate locations
  • The tremendous effort required to change providers, which means incumbents have all the leverage

The largest telcos, companies like AT&T and Verizon, become trapped by their high prices. Protecting their revenue base makes these companies reluctant adopters of SD-WAN and Internet-based solutions.

So how can organizations drive down spending on the corporate WAN, while boosting performance?

As in most markets, the essential answer is: Competition.

The most competitive marketplaces for telecom services in the world are Carrier-Neutral Data Centers (CNDCs). Think about all the choices: long-haul networks, local access, Internet providers, storage, compute, SaaS, and more. CNDCs offer a wide array of networking options, and the carriers realize that competitors are just a cross-connect away.

How much savings are available? Enough to make it worthwhile for many large regional, national, and global companies. In one report, Forrester interviewed customers of Equinix, the largest retail colocation company, and found that they saved an average of 40% on bandwidth costs and reduced cloud connectivity and network traffic costs by 60%-70%.

The key is to leverage CNDCs as regional network hubs, rather than the traditional model of hubbing connectivity out of internal corporate data centers.

CNDCs like to remind the market that they offer much more than racks and power as these sites can offer performance benefits as well. Internet connectivity is often superior, and many CNDCs offer private cloud gateways that improve latency and security.

But the cost savings alone should be enough to justify most deployments. To see how you can benefit, contact one of our experts today.

Extend Your Development Workstation with Vagrant & Ansible https://devtest.phoenixnap.com/blog/development-environment-vagrant-ansible Fri, 14 Feb 2020

The mention of Vagrant in the title might have led you to believe that this is yet another article about the power of sharing application environments the way one shares code, or about how Vagrant is a great facilitator for that approach. However, plenty of content about that topic already exists, and by now its benefits are widely known. Instead, we will describe our experience in putting Vagrant to use in a somewhat unusual way.

A Novel Idea

The idea is to extend a developer workstation running Windows to support running a Linux kernel in a VM, and to make the bridge between the two as seamless as possible. Our motivation was to eliminate certain pain points or restrictions in development brought about by the choice of OS for the developer's local workstation, be it an organizational requirement, regulatory enforcement, or anything else that might or might not be under the developer's control.

This was not the only approach we evaluated: we also considered shifting work entirely to a guest OS on a VM, using Docker containers, and leveraging Cygwin. And yes, the possibility of replacing the host OS was also challenged. However, we found that the way the technologies came together in this approach can be quite powerful.

We’ll take this opportunity to communicate some of the lessons learned and limitations of the approach and share some ideas of how certain problems can be solved.


Why Vagrant?

The problem that we were trying to solve and the concept of how we tried to do it does not necessarily depend on Vagrant. In fact, the idea was based on having a virtual machine (VM) deployed on a local hypervisor. Running the VM locally might seem dubious at first thought. However, as we found out, this gives us certain advantages that allow us to create a better experience for the developer by creating an extension to the workstation.

We opted to go for VirtualBox as a virtualization provider primarily because of our familiarity with the tool and this is where Vagrant comes into play. Vagrant is one of the tools that make up the open-source HashiCorp Suite, which is aimed at solving the different challenges in automating infrastructure provisioning.

In particular, Vagrant is concerned with managing VM environments in the development phase. Note that for production environments there are other tools in the same suite that are more suitable for the job, most notably Terraform and Packer, which are likewise based on configuration as code. Configuration as code implies that an environment can be easily shared between team members and that changes are version controlled and can be tracked easily, making the resultant product (the environment) consistently repeatable. Vagrant is opinionated, and therefore declaring an environment and its configuration becomes concise, which makes it easy to write and understand.

Why Ansible?

After settling on using Vagrant for our solution and enjoying the automated production of the VM; the next step was to find a way to provision that VM in a way that marries the principles advertised by Vagrant.

We do not recommend having Vagrant spinning up the VMs in an environment and then manually installing and configuring the dependencies for your system. In Vagrant, provisioners are core and there are plenty from which you can choose. In our case, as long as our provisioning remained simple we stuck with using Shell (Vagrant simply uploads scripts to the guest OS and executes them).

Soon after, it became obvious that that approach would not scale well and that the scripts were becoming too verbose. The biggest pain point was that developers would need to write the scripts in a way that favored idempotency, because steps commonly need to be added to the configuration, and re-provisioning everything from scratch each time is overkill.

At this point, we decided to use Ansible. Ansible, by Red Hat, is another open-source automation tool built around the idea of managing the execution of plays via a playbook, where a play can be thought of as a list of tasks mapped against a group of hosts in an environment.

These plays should ideally be idempotent, which is not always possible, and again the entire configuration one writes is declared as code in YAML. The biggest win achieved with this strategy is that the heavy lifting is done by the community, which provides Ansible Modules, configurable Python scripts that perform specific tasks, for virtually anything one might want to do. Installing dependencies and configuring the guest according to industry standards becomes very easy and concise, without requiring the developer to go into the nitty-gritty details, since modules are in general highly opinionated. All of these concepts combine perfectly with the principles behind Vagrant, and the integration between the two works like a charm.

There was one major challenge to overcome in setting up the two to work together. Our host machine runs Windows, and although Ansible is adding more support for managing Windows targets with time, it simply does not run from a Windows control machine. This leaves us with two options: having a further environment which can act as the Ansible controller or the simpler approach of having the guest VM running Ansible to provision itself.

The drawback of this approach is that one would be polluting the target environment. We were willing to compromise on this, as the alternative was cumbersome. Vagrant allows you to achieve this by simply replacing the provisioner identifier: changing it from ansible to ansible_local makes Vagrant automatically install the required Ansible binaries and dependencies on the guest for you.


File Sharing

One of the cornerstones we wanted was the possibility to make the local workspace available from within the guest OS, so that the tooling which makes up a working environment is readily available to run builds inside the guest. There are plenty of options for solving this problem, and they vary depending on the use case. The simplest approach is to rely on VirtualBox's file-sharing functionality, which gives near-instant, two-way syncing, and setting it up is a one-liner in the Vagrantfile.

The main objective here was to share code repositories with the guest. It can also come in handy to replicate configuration for some of the other tooling. For instance, one might find it useful to configure file sharing for Maven's user settings file, the entire local repository, local certificates for authentication, and so on.

Port Forwarding

VirtualBox's networking options were a powerful ally for us. There are a number of options for creating private networks (when you have more than one VM) or exposing the VM on the same network as the host. It was sufficient for us to rely on a host-only network (i.e., the VM is reachable only from the host) and then have a number of ports configured for forwarding through simple NAT.

The major benefit of this is that you do not need to keep changing configuration for software, whether it is executing locally or inside the guest. All of this can be achieved in Vagrant by writing one line of configuration code. This NATting can be configured in either direction (host to guest or guest to host).
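
For reference, here is a minimal sketch of what those one-liners look like in a Vagrantfile; the box, paths, and ports are illustrative, not the exact values from our setup:

```ruby
# Minimal Vagrantfile sketch: box, paths, and ports are illustrative only.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  # One-liner file share: expose the host workspace inside the guest.
  config.vm.synced_folder "C:/Workspace", "/vagrant/workspace"

  # One-liner NAT port forward: reach a service running in the guest on 8080
  # via localhost:8080 on the host.
  config.vm.network "forwarded_port", guest: 8080, host: 8080
end
```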

Bringing it together

Having defined the foundation for our solution, let’s now briefly go through what we needed to implement all of this. You will see that for the most part, it requires minimal configuration to reach our target.

The first part of the puzzle is the Vagrantfile, in which we define the base image for the guest OS (we went with CentOS 7), the resources we want to allocate (memory, vCPUs, storage), file shares, networking details, and provisioning.

Figure 1: File structure of the solution

Note that the Vagrant plugin `vagrant-vbguest` was useful for automatically determining the appropriate version of the VirtualBox Guest Additions binaries for the specified guest OS and installing them. We also opted to configure Vagrant to prefer the binaries bundled within itself for functionality such as SSH (VAGRANT_PREFER_SYSTEM_BIN set to 0) rather than rely on the software already installed on the host. We found that this allowed for a simpler and more repeatable setup process.

The second major part of the work was integrating Ansible to provision the VM. For this we opted to leverage Vagrant's ansible_local provisioner, which works by installing Ansible in the guest on the fly and running provisioning locally.

Now, all that is required is to provide an Ansible playbook.yml file, in which one defines any particular configuration or software that needs to be set up on the guest OS.
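
A minimal sketch of such a playbook; the packages and the Galaxy role named here are illustrative assumptions rather than our exact provisioning:

```yaml
# playbook.yml sketch: package names and the third-party role are illustrative.
- hosts: all
  become: true
  roles:
    - geerlingguy.java        # example role pulled from Ansible Galaxy
  tasks:
    - name: Install common development tooling
      yum:
        name:
          - git
          - maven
        state: present
```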

Figure 2: Configuration of Ansible as provisioner in the VagrantFile

We went a step further and leveraged third-party Ansible roles instead of reinventing the wheel and having to deal with the development and ongoing maintenance costs.

Ansible Galaxy is an online repository of such roles made available by the community; you install them by means of the ansible-galaxy command.

Since Vagrant abstracts away the installation and invocation of Ansible, we need to rely on Vagrant to make sure that these roles are installed and made available when executing the playbook. This is achieved through the galaxy_command parameter. The most elegant way to achieve this is to provide a requirements.yml file with the list of roles needed and have it passed to the ansible-galaxy command. Finally, we need to make sure that the Ansible files are made available to the guest OS through a file share (by default, the directory of the Vagrantfile is shared) and that the paths to them are relative to /vagrant.
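
Putting those pieces together, the provisioning portion of the Vagrantfile might look roughly like the sketch below; the file names are assumptions and must sit next to the Vagrantfile so they end up under /vagrant:

```ruby
# Sketch of the ansible_local provisioning block; file names are assumptions.
Vagrant.configure("2") do |config|
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook         = "playbook.yml"
    # Have ansible-galaxy install the third-party roles listed in
    # requirements.yml (e.g. "- src: geerlingguy.java") before the play runs;
    # the galaxy_command option can further customize how ansible-galaxy is invoked.
    ansible.galaxy_role_file = "requirements.yml"
  end
end
```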

Building a seamless experience…BAT to the rescue

We were pursuing a solution that makes it as easy as possible to jump from working locally to working inside the VM. If possible, we also wanted to be able to make this switch without having to move through different windows.

For this reason, we wrote a couple of utility batch scripts that made the process much easier. We wanted to leverage the fact that our entire workspace directory was synced with the guest VM. This allowed us to infer the path in the workspace on the guest from the current location on the host. For example, if on our host we are at C:\Workspace\ProjectX and the workspace is mapped to /vagrant/workspace, then we wanted the ability to easily run a command in /vagrant/workspace/projectx without having to jump through hoops.

To do this we placed a script on our path that would take a command and execute it in the appropriate directory using Vagrant’s command flag. The great thing about this trick is that it allowed us to trigger builds on the guest with Maven through the IDE by specifying a custom build command.

Figure 3: Illustrating how the path is resolved on the guest

Figure 4: Running a command in the guest against files in the local workspace

We also added the ability to the same script to SSH into the VM directly in the path corresponding to the current location on the host. To do this, during VM provisioning we set up a file share that lets us sync the bashrc in the vagrant user's home folder, which allows us to cd into the desired path (derived on the fly) on the guest upon login.

Finally, since a good developer is an efficient developer, we also wanted the ability to manage the VM from anywhere, so that if, for instance, we have not yet launched the VM, we do not need to keep navigating to the directory hosting the Vagrantfile.

This is standard Vagrant functionality made possible by setting the %VAGRANT_CWD% variable. What we added on top is the ability to define it permanently in a dedicated user variable and set it only when we want to manage this particular environment.

Figure 5: Spinning up the VM from an arbitrary path
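In practice the flow is something along these lines; the variable name MY_VM_HOME is an arbitrary placeholder for the dedicated user variable mentioned above:

rem Stored once as a user environment variable pointing at the directory with the Vagrantfile
setx MY_VM_HOME "C:\VMs\dev-env"

rem Later, from any directory, point Vagrant at that environment and manage it
set VAGRANT_CWD=%MY_VM_HOME%
vagrant up
vagrant ssh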

File I/O performance

In the course of testing out the solution, we encountered a few limitations that we think are relevant to mention.

The problems revolved around the file-sharing mechanism. Although there are a number of options available, the approach might not be a fit for situations that require intensive file I/O. We first set up a plain VirtualBox file share, which was a good starting point: it works with very little configuration and syncs both ways instantaneously, which is great in most cases.

We hit the first wall as soon as we tried running a front-end build using NPM, which relies on creating soft-links for common dependency packages. Soft-linking requires a specific privilege to be granted on the Windows host and, even then, it does not work very well. We tried working around the issue by using rsync, which by default only syncs changes in one direction and runs on demand. There are ways to make it poll for changes, and bi-directionality could theoretically be set up by configuring each direction separately.

However, this creates a race-condition with the risk of having changes reversed or data loss. Another option, SMB shares, required a bit more work to set up and ultimately was not performant enough for our needs.

In the end, we found a way to make the NPM build run without soft-links, which allowed us to revert to the native VirtualBox file share. The first caveat was that this required changes in our source-code repository, which is not ideal. Also, due to the huge number of dependencies involved in a typical NPM-based front-end build, the intense file I/O was causing locks on the file share and slowing down performance.

workstation remote

Conclusions

The aim was to extend a workstation running Windows by also running a Linux kernel, and to make it as easy as possible to manage and switch between working in either environment. The end result turned out to be a very convenient solution in certain situations.

Our setup was particularly helpful when you need to run applications in an environment similar to production, or when you want to run development tooling that is easier to install and configure on a Linux host. We have shown how, with the help of tools like Vagrant and Ansible, it is easy to create a setup that can be shared and recreated consistently while keeping the configuration concise.

From a performance point of view, the solution worked very well for computationally demanding tasks. However, the same cannot be said for situations that required intensive file I/O, due to the overhead of synchronization.

For more knowledge-based information, check out what our experts have to say. Bookmark the site to stay updated weekly.

38 Cyber Security Conferences to Attend in 2020 https://devtest.phoenixnap.com/blog/cybersecurity-conferences Tue, 28 Jan 2020 23:53:51 +0000 https://devtest.phoenixnap.com/blog/?p=76118

Global cybersecurity ensures the infrastructure of global enterprises and economies, safeguarding the prosperity and well-being of citizens worldwide. As IoT (Internet of Things) devices rapidly expand, and connectivity and usage of cloud services increases, cyber-related incidents such as hacking, data breaches, and infrastructure tampering become common. 

Global cybersecurity conferences are a chance for stakeholders to address these issues and formulate strategies to defend against attacks and disseminate knowledge on new cybersecurity policies and procedures.

Benefits of Attending a Cyber Security Conference in 2020:

  • Networking with peers
  • Education on new technologies
  • Outreach
  • New strategies
  • Pricing information
  • Giving back and knowledge-sharing
  • Discovering new talent
  • Case studies

Here is a list of the top 38 cybersecurity conferences to attend in 2020. As details for conferences later in the year are confirmed, bookmark the page and check back for the latest info.

woman giving a speech on stage at security conference

1. National Privacy and Data Governance Congress

The National Privacy and Data Governance Congress is an occasion to explore critical problems at the intersection of privacy, law, security, access, and data governance. The event brings together specialists from academia, industry, and government who are involved with compliance, data governance, privacy, security, and access within their organizations.

The conference is extensive yet provides adequate time for delegates to raise questions, receive candid responses, and take part in meaningful discussions with hosts, peers, and decision-makers.

2. NextGen SCADA Europe 2020

The 6th Annual NextGen SCADA Europe exhibition and networking conference is back by popular demand. It is a dedicated forum that provides the content depth and networking emphasis you need to make critical decisions, such as upgrading your SCADA infrastructure to better meet the needs of the digital grid.

Over three intensive days, you can take part in 20+ utility case studies covering critical subjects such as integration, system architecture, cybersecurity, and functionality. Enroll now to gain insight into why these topics are generating so much buzz in the cyber community.

3. Sans Security East

  • Date: February 1 – 8, 2020
  • Location: New Orleans, Louisiana, United States
  • Cost: Different prices for different courses. Most courses cost $7,020. An online course is available.
  • https://www.sans.org/event/security-east-2020

Jump-start the New Year with one of the first training seminars of 2020. Attend the SANS Security East 2020 in New Orleans for an opportunity to learn new cybersecurity best practices for 2020 from the world’s top experts. This training experience is to assist you in progressing your career.

SANS' training is unrivaled in the industry. The organization provides passionate instructors who are prominent industry specialists and practitioners, and their applied knowledge adds real value to the syllabus. These skilled instructors ensure you will be able to put what you learn to use immediately. At this conference, you can pick from over twenty information security courses prepared by first-rate mentors.

4. International Conference on Cyber Security and Connected Technologies (ICCSCT)

  • Date: February 3 – 4, 2020
  • Location: Tokyo, Japan
  • Cost: $260 – $465
  • https://iccsct.coreconferences.com

The 2020 ICCSCT is a leading research session focused on presenting new developments in cybersecurity. The seminar happens every year to make a perfect stage for individuals to share opinions and experiences.

The International Conference on Cyber Security and Connected Technologies centers on the numerous freshly forthcoming parts of cybersecurity and connected technologies.

5. Manusec Europe: Cyber Security for Critical Manufacturing

As the industrial sector continues to adopt advancements in technology, it becomes vulnerable to an assortment of cyber threats. To have the best tools to tackle cyber threats in the twenty-first century, organizations must involve employees at all levels, getting them to cooperate and institute best-practice strategies to guard vital assets.

This event will bridge the gap between the corporate IT senior level and process control professionals. Such practices will allow teams to discuss critical issues and challenges, as well as to debate cyber security best practice guidelines.

6. The European Information Security Summit

This organization, known as TEISS, is currently one of the leading and most wide-ranging cybersecurity meetings in Europe. It features parallel sessions on the following four streams:

  • Threat Landscape stream
  • Culture and Education stream
  • Plenary stream
  • CISOs & Strategic stream

Join over 500 specialists in the cybersecurity industry and take advantage of different seminars.

7. Gartner Data & Analytics Summit

Data and analytics are conquering all trades as they become the core of the digital transformation. To endure and flourish in the digital age, having a grasp on data and analytics is critical for industry players.

Gartner is currently the global leader in IT conference providers. You, too, can benefit from our research, exclusive insight, and unmatched peer networking. Reinvent your role and your business by joining the 50,000+ attendees that walk away from this seminar annually, armed with a better understanding and the right tools to make their organization a leader in their industry.

8. Holyrood Connect’s Cyber Security

With cyber threats accelerating in frequency, organizations must protect themselves from the potentially catastrophic consequences due to security breaches.

In a time when merely being aware of security threats is not enough, Holyrood's annual cybersecurity conference will examine the latest developments, emerging threats, and current practice.

Join relevant stakeholders, experts, and peers as they research the next steps to reinforce defenses, improve readiness, and maintain cyber resilience.

Critical issues to be addressed:

  • Up-to-date briefing on the latest in cybersecurity practice and policy
  • Expert analysis of the emerging threat landscape both at home and abroad
  • Good practice and innovation in preventing, detecting and responding to cyber attacks
  • Developing a whole-organization approach to cyber hygiene – improving awareness, culture, and behavior
  • Horizon scanning: cybersecurity and emerging technology

9. 3rd Annual Internet of Things India Expo

The Internet of Things is a business transformation vital to government, companies, and consumers, and it is reshaping every industry. The third edition of the IoT India Expo will concentrate on the Internet of Things ecosystem, including central bodies, software, services, and hardware. Particular focus areas for this expo will be:

  • Industrial IoT
  • Smart Appliances
  • Robotics
  • Cybersecurity
  • Smart Solutions
  • System Integrators
  • Smart Wearable Devices
  • AI
  • Cloud Computing
  • Sensors
  • Data Analytics
  • And much more…

10. BSides Tampa

  • Category: General Cyber Security Conference
  • Date: February 20, 2020
  • Location: Tampa, Florida, United States
  • Cost: General Admission – $50; Discount for specific parties like Teachers, Military and Students
  • https://bsidestampa.net

The BSides Tampa conference focuses on offering attendees the latest information on security research, development, and uses. The conference is held yearly in Tampa and features various demonstrations and presentations from the best available in industry and academia.

Attendees have the opportunity to join numerous training classes throughout the conference. These classes provide individual technical courses on subjects ranging from penetration testing and security exploitation to cybersecurity certifications.

11. RSA Conference, the USA

The ease of joining the digital space opens up the risk of real-world cyber dangers. The 2020 RSA Conference focuses on managing the cyber threats that prominent organizations, agencies, and businesses are facing.

This event runs in the US as well as in Abu Dhabi, Europe, and Singapore. The RSA Conference is renowned as one of the leading information security conferences held each year, channeling the energy its leaders put into study and research.

12. RSA Conference, Cryptographers’ Track

More than 40,000 industry-leading specialists attend the event, which is the premier industry showcase for the security business. As part of one of the industry-leading cybersecurity events, the 2020 Cryptographers' Track is a venue for scientific papers on cryptography. It is a fantastic place for the broader cryptologic community to get in touch with attendees from government, industry, and academia.

presentations at a cyber security seminar

13. Hack New York City 2020

  • Date: March 6 – 8, 2020
  • Location: Manhattan, New York, United States
  • Cost: Free
  • https://hacknyu.org

Hack NYC is about sharing ideas on how we can improve our daily cybersecurity practices and overall economic resilience. The threat of attacks targeting Critical National Infrastructure is real, as the provisions supporting businesses and communities face common weaknesses and an evolving threat.

Hack NYC emphasizes planning for, and resistance to, the real potential of kinetic cyberattack. Be part of crucial solutions and help mitigate risks aimed at Critical National Infrastructure.

14. Healthcare Information and Management Systems Society (HIMSS)

Over 40,000 health IT specialists, executives, vendors, and clinicians come together from all over the globe for the yearly HIMSS exhibition and seminar. Outstanding teaching, first-class speakers, front-line health IT merchandise, and influential networking are trademarks of this symposium. Over three hundred instruction programs feature discussions and workspaces, front-runner meetings, keynotes, and an entire day of pre-conference seminars.

15. 14th International Conference on Cyber Warfare and Security

The 14th Annual ICCWS is an occasion for academics, consultants, military personnel, and practitioners worldwide to explore new methods of fighting data threats. Cybersecurity conferences like this one offer the opportunity to improve information systems security and share ideas.

New risks arising from migration to the cloud and social networking are a growing focus for the research community, and the sessions are designed to cover these matters in particular. Join this gathering of key players as ICCWS uniquely addresses cyber warfare, security, and information warfare.

16. PROTECT International Exhibition and Conference on Security and Safety

Leverage International (Consultants) Inc. organizes this annual international conference and exhibition on safety and security. PROTECT was first held in 2005 by the Anti-Terrorism Council, and it is the only government-private sector partnership series in the Philippines dedicated to protection and security. It features an international exhibition, a high-level conference, free hands-on workshops, and networking opportunities.

17. TROOPERS20

The TROOPERS20 conference is a two-day event providing hands-on involvement, discussions of current IT security matters, and networking with attendees as well as speakers.

During the two-day seminar, you can expect discussions on numerous issues. There are also practical demonstrations on the latest research and attack methods to bring the topics closer to the participants.

The conference also includes discussions about cyber defense and risk management, as well as relevant demonstrations of InfoSec management matters.

18. 27th International Workshop on Fast Software Encryption

The 2020 Fast Software Encryption conference, arranged by the International Association for Cryptologic Research (IACR), will take place in Athens, Greece. FSE is widely recognized as the leading global event in the field of symmetric cryptology.

This event will cover many topics, both practical and theoretical, including the design and analysis of block ciphers, stream ciphers, encryption schemes, hash functions, message authentication codes, authenticated encryption schemes, cryptanalysis, and evaluation tools, and secure implementations.

The IACR has been organizing FSE workshops since 2002 and is an international organization with over 1,600 members that brings cryptology researchers together.

The conference concentrates on fast and secure primitives for symmetric cryptography, covering the design and analysis of:

  • Block ciphers
  • Stream ciphers
  • Encryption schemes
  • Authenticated encryption schemes
  • Hash functions
  • Message authentication codes
  • Cryptanalysis and evaluation tools

as well as issues and solutions concerning their secure implementation.

19. Vienna Cyber Security Week 2020

Austrian non-governmental partners, international governmental entities, and the Energypact Foundation present this year's conference. Its aim is to engage significant global stakeholders in discussion and collaboration on cybersecurity, with a particular emphasis on the energy sector.

20. Cyber Security for Critical Assets (CS4CA) the USA

  • Date: March 24 – 25, 2020
  • Location: Houston, Texas, United States
  • Cost: $1,699 – $2,999
  • https://usa.cs4ca.com

The yearly CS4CA features two dedicated streams, for OT and IT, allowing representatives to focus on their professional areas of interest. The discussions aim to tackle some of the most common problems that connect both sets of specialists.

Each stream is curated by a group of industry-leading professionals to be as relevant, up-to-date, and detailed as possible across the two days. Throughout this conference, you can expect opportunities to take relevant tests, get inspired by some of the world's prominent cybersecurity visionaries, and network with colleagues.

21. World Cyber Security Congress 2020

The World Cyber Security Congress is a leading international conference that attracts CISOs as well as other cybersecurity specialists from various sectors. Its panel of 150+ outstanding speakers represents a wide range of verticals, such as:

  • Finance
  • Retail
  • Government
  • Critical Infrastructure
  • Transport
  • Healthcare
  • Telecoms
  • Educational Services.

The World Cyber Security Congress is a senior-level meeting created with Data Analytics, Heads of Risk and Compliance, CIOs, and CTOs, as well as CISOs and Heads of Information of Security in mind.

22. InfoSec World 2020

The 2020 InfoSec World Seminar will feature over one hundred industry specialists to provide applied and practical instructions on a wide array of security matters. The 2020 conference offers an opportunity for security specialists to research and examine concepts with colleagues.

Throughout the conference, attendees will have plenty of opportunities to learn from this world-class platform headed by the industry's prominent specialists. They will also have the chance to earn up to 47 CPE credits over the week or attend New Tech Lab sessions built around real-life scenarios. Attendees can also take part in a virtual career fair, joining via their tablet or at the fair section of the expo.

Lastly, attendees can take advantage of the Disney resort with colleagues and guests.

23. 19th Annual Security Conference

The 19th Annual Security Conference provides an opportunity for discussions on security, assurance, and privacy that improve the understanding of current events but also encourage future dialogues related to cybersecurity.

The 2020 security conference is part of the Vegas Conferences, organized by:

  • University of Houston, USA
  • University of Arkansas, USA


IT training at a cyber security seminar

24. World Border Security Congress

The 2020 World Border Security Congress is a conference organized by Torch Marketing. The event will cover subjects such as perimeter surveillance methods and systems, and will include:

  • The Latest Threats and Challenges at the Border
  • Continuing efforts against foreign terrorist fighters, irregular migration, and human trafficking
  • Capacity Building and Training in Border and Migration Management
  • Securing the Littoral Border: Understanding Threats and Challenges for Maritime Borders
  • Pre-Travel Risk Assessment and Trusted Travellers
  • The developing role of Biometrics in identity management & document fraud
  • Smuggling & Trade in Illicit Goods, Antiquities and Endangered Species
  • The Future Trends and Approach to Alternatives for Securing Borders

Join global leaders as they discuss issues surrounding improvements to the defense and management of extended land borders.

25. Black Hat Asia 2020

The sharpest industry professionals and scholars will come together for a four-day event at Black Hat Asia 2020. The event features two days of intense hands-on training followed by two days of the latest research and vulnerability disclosures. The Black Hat Executive Summit offers CISOs and other cybersecurity executives an opportunity to hear from a variety of industry experts who are helping to shape the next generation of information security strategy.

Black Hat delivers attendees practical lessons on subjects such as:

  • Wider-ranging Offensive Security
  • Penetration Testing
  • Analyzing Automotive Electrical Systems
  • Mobile Application Automation Testing Tools and Security
  • Infrastructure Hacking

These practical attack and defense lessons created entirely for Black Hat Asia and prepared by some of the prominent specialists in the industry are available to you. They each share the objective of distinguishing and protecting tomorrow’s information security environment.

26. ASIS Europe

This event's purpose is to assist security professionals in finding the best ways to assess risks and act accordingly – not through legal arrangements or disclaimers, but by having the risk owner and user make educated decisions.

CONFERENCE
For aspiring and established leaders in need of a comprehensive learning experience, including masterclasses, executive sessions, keynotes, and show pass.

CLASSROOM TRAINING
For managers and team members who are seeking to gain attentive, practical skills with precise learning outcomes. Show pass included.

27.

The 2020 forum will feature keynote speaker Katie Arrington, chief information security officer at the Office of the Assistant Secretary of Defense for Acquisition and a 2020 Wash100 Award recipient. The themes to be addressed include the Cybersecurity Maturity Model Certification (CMMC) timeline, how the certification process could change, and the functionality of the newly established CMMC accreditation body. Learn about the impact the DoD's CMMC will have on supply chain security, cybersecurity practices, and other aspects of the federal market.

28. CyberCentral 2020

The 2020 CyberCentral conference is a two-day event where participants collaborate with a global community of compatible cybersecurity enthusiasts. This event is a limited occasion, which allows its participants to walk away revitalized with resilient H2H networks, instead of with a lot of brochures and business cards.

29. Infiltrate Security Conference

  • Date: April 19 – 24, 2020
  • Location: Miami Beach, Florida, United States
  • Cost: $1,800 – $2,200
  • https://infiltratecon.com

Infiltrate is an in-depth technical conference focused on offensive security. Innovative researchers come together to evaluate and share experiences regarding the latest technological strategies that you cannot find elsewhere. Infiltrate is the leading event for those focused on the technical aspects of offensive security, such as:

  • Reverse-Engineering
  • Modern Wi-Fi Hacking
  • Linux Kernel Exploitation
  • IoT Exploit Development
  • Vulnerability Research

Infiltrate avoids policy and elaborate presentations in favor of just diehard thought-provoking technical topics.

30. Industrial Control Systems (ICS) Cyber Security Conference

The ICS Cyber Security Conference is an event where ICS users, vendors, system security providers, and government representatives meet to discuss industry trends. The convention’s goal is to analyze the latest threats and their causes to offer effective solutions for businesses of different sizes.

31. QuBit 2020 Cybersecurity Conference

The 2020 QuBit Cybersecurity Conference aims to provide up-to-date data to the cyber community of Central Europe from the western realm. Also, it hopes to aid in the circulation of security materials such as IT and internet tools that are now available to over two billion individuals internationally.

QuBit offers you a unique way to meet advanced and elite individuals with impressive backgrounds in the information security industry. Connect with QuBit today and discover the latest revolutions and ideas that are paving tomorrow's industry platform.

32. CSO50 Conference + Awards

The yearly CSO50 Awards acknowledge fifty businesses, and their employees, for security projects or initiatives that demonstrate exceptional business value. The award-winning organizations are revealed in a special announcement and have their projects summarized in an editorial on csoonline.com.

Attending this conference is one of the best ways to boost your employees' and colleagues' morale, as it gathers some of the industry's top thought leaders. It can also be an excellent recruiting tool for those seeking new cybersecurity talent. Another benefit is that project winners are invited to present their projects at the CSO Conference + Awards.

Team members of winning projects are also offered complimentary registrations to the conference. Lastly, the winning companies are announced at the CSO50 Awards dinner and called on stage to accept their awards.

33. 15th Annual Secure360 Twin Cities

The 2020 Secure360 Twin Cities conference is an educational event covering comprehensive security and risk management. It offers collaboration and training for your whole team.

Secure360 concentrates on the following:

  • Risk and Compliance
  • Governance
  • Information Security
  • Professional Development
  • Continuity Management
  • Business Continuity
  • Physical Security

Key speakers will cover topics such as “Leading from any seat: Stories from the cockpit & lessons from the Grit Project” and “Information Warfare: How our phones, newspapers, and minds have become the battlefield.”

34. THOTCON 0x9

THOTCON 0x9 is a yearly hacking seminar that takes place in Chicago, Illinois. More than 1,300 specialists and speakers from all over the world attend the event each year.

THOTCON is a non-profit, non-commercial event offering the best conference possible on a limited budget. When you attend a THOTCON event, you will experience one of the best information security conferences in the world, combined with a uniquely casual and social atmosphere.

35. CyberTech Asia

CyberTech Asia 2020 provides attendees with unique opportunities to gain insight into the latest innovations and solutions presented by the global cyber community.

The conference's central focus is on networking, reinforcing existing relationships, and establishing new contacts. CyberTech also delivers an exceptional stage for B2B collaboration. CyberTech Asia will bring together the following:

  • Leading Multinational Corporations
  • Start-ups
  • Corporate and Private Investors
  • Specialists
  • Clients
  • SMBs
  • Venture capital firms

36. The 18th International Conference on Applied Cryptography and Network Security

ACNS is an annual conference concentrating on current advances in the fields of applied cryptography and its application to systems and network security. The objective of this workshop is to showcase academic research alongside advances in engineering and practice.

The Computer Security group is organizing the conference at Sapienza University. The proceedings of ACNS 2020 are to be published by Springer in the LNCS series.

37. ToorCamp 2020

ToorCamp, the American hacker camp, was first held in 2009 at the Titan-1 missile silo in Washington State. The next two events took place in 2012 and 2014 on the Washington coast. At these camps, you are encouraged to show off projects, share ideas, and collaborate with the technology specialists attending the event.

38. 20th International Conference on Cyber Security Exercises (ICCSE)

The 2020 International Conference on Cyber Security Exercises focuses on bringing together prominent researchers, practitioners, and professors to discuss and share their knowledge and research on every aspect of cybersecurity. This year's focus is on the fields of cybersecurity and security engineering.

Don’t Miss Out!

When you attend security conferences, you benefit in multiple ways. Specialists teach you. You can take advantage of colleague-to-colleague discussions for professional improvement.

Most importantly, attending seminars presents the opportunity for you to obtain the information you need to tackle cyber attacks. Every minute of the day, there is someone somewhere creating the next cyber threat that could shut your business down.

Take advantage of the opportunity to learn how to stay one step ahead by getting your company and its employees ready for the next threat. It is no longer a matter of if, but when.

Are you ready?

IPv4 vs IPv6: Understanding the Differences and Looking Ahead https://devtest.phoenixnap.com/blog/ipv4-vs-ipv6 Tue, 17 Dec 2019 14:53:17 +0000 https://devtest.phoenixnap.com/blog/?p=75406

As the Internet of Things (IoT) continues to grow exponentially, more devices connect online daily. There has been fear that, at some point, addresses would just run out. This conjecture is starting to come true.

Have no fear; the Internet is not coming to an end. There is a solution to the problem of diminishing IPv4 addresses. We will provide information on how more addresses can be created, and outline the main issues that need to be tackled to keep up with the growth of IoT by adopting IPv6.

We also examine how Internet Protocol version 6 (IPv6) compares to Internet Protocol version 4 (IPv4), the role each plays in the Internet's future and evolution, and how the newer version of the protocol improves on IPv4.

How an IP Address Works

IP stands for “Internet Protocol,” referring to a set of rules which govern how data packets are transmitted across the Internet.

Information online or traffic flows across networks using unique addresses. Every device connected to the Internet or computer network gets a numerical label assigned to it, an IP address that is used to identify it as a destination for communication.

Your IP identifies your device on a particular network. It acts as a technical ID on networks that combine IP with TCP (Transmission Control Protocol), enabling virtual connections between a source and a destination. Without a unique IP address, your device could not attempt communication.

ipv4 vs ipv6 adoption

IP addresses standardize the way different machines interact with each other. They trade data packets, which refer to encapsulated bits of data that play a crucial part in loading webpages, emails, instant messaging, and other applications which involve data transfer.

Several components allow traffic to flow across the Internet. At the point of origin, data is packaged into an envelope called a "datagram," which is a packet of data and part of the Internet Protocol, or IP.

A full network stack is required to transport data across the Internet. The IP is just one part of that stack. The stack can be broken down into four layers, with the Application component at the top and the Datalink at the bottom.

Stack:

  • Application – HTTP, FTP, POP3, SMTP
  • Transport – TCP, UDP
  • Networking – IP, ICMP
  • Datalink – Ethernet, ARP

As a user of the Internet, you're probably quite familiar with the application layer. It's the one you interact with daily. Anytime you want to visit a website, you type in http://[web address], which is handled by the application layer.

Are you using an email application? At some point then, you would have set up an email account in that application, and likely came across POP3 or SMTP during the configuration process. POP3 stands for Post Office Protocol 3 and is a standard method of receiving an email. It collects and retains email for you until picked up.

From the above stack, you can see that the IP is part of the networking layer. IPs came into existence back in 1982 as part of ARPANET. IPv1 through IPv3 were experimental versions. IPv4 is the first version of IP used publicly, the world over.

IPv4 Explained

IPv4, or Internet Protocol version 4, is a widely used protocol for data communication over many kinds of networks. It is the fourth revision of the Internet Protocol and was developed as a connectionless protocol for use in packet-switched networks such as Ethernet. Its primary responsibility is to provide logical connections between network devices, which includes providing identification for every device.

IPv4 is based on the best-effort model, which guarantees neither delivery nor avoidance of duplicate delivery; those guarantees are left to an upper-layer transport protocol such as the Transmission Control Protocol (TCP). IPv4 is flexible and can be configured automatically or manually for a range of different devices, depending on the type of network.

Technology behind IPv4

IPv4 is specified and defined in the Internet Engineering Task Force's (IETF) publication RFC 791 and operates at the network layer of the OSI model. Classful addressing divides the 32-bit address space into five classes: A, B, C, D, and E. Of these, classes A, B, and C use a different split between the network and host portions of the address, Class D is used for multicasting, and Class E is reserved for future use.

Subnet Mask of Class A – 255.0.0.0 or /8

Subnet Mask of Class B – 255.255.0.0 or /16

Subnet Mask of Class C – 255.255.255.0 or /24

Example: The network 192.168.0.0 with a /16 subnet mask spans addresses ranging from 192.168.0.0 to 192.168.255.255. It's important to note that the address 192.168.255.255 is reserved for broadcast within that network. Overall, IPv4 can address a maximum of 2^32 endpoints.

IP addresses follow a standard, decimal notation format:

171.30.2.5

The above number is a unique 32-bit logical address, which means there can be up to about 4.3 billion unique addresses. Each of the four groups of numbers is 8 bits long; every group of 8 bits is called an octet. Each number can range from 0 to 255: at 0, all bits are set to 0, and at 255, all bits are set to 1. The binary form of the above IP address is 10101011.00011110.00000010.00000101.
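As a quick illustration of the 32-bit versus 128-bit contrast discussed in this article, the following sketch uses Java's standard java.net.InetAddress API to print the raw byte lengths of an IPv4 and an IPv6 address, and the dotted-binary form shown above (the IPv6 address is an arbitrary documentation example):

import java.net.InetAddress;

public class AddressSizes {
    public static void main(String[] args) throws Exception {
        // IPv4: four octets = 32 bits
        InetAddress v4 = InetAddress.getByName("171.30.2.5");
        // IPv6: sixteen bytes = 128 bits
        InetAddress v6 = InetAddress.getByName("2001:db8::1");

        System.out.println("IPv4 bytes: " + v4.getAddress().length);  // prints 4
        System.out.println("IPv6 bytes: " + v6.getAddress().length);  // prints 16

        // Print the IPv4 address in the dotted-binary form used above
        StringBuilder binary = new StringBuilder();
        for (byte b : v4.getAddress()) {
            binary.append(String.format("%8s", Integer.toBinaryString(b & 0xFF)).replace(' ', '0'))
                  .append('.');
        }
        System.out.println(binary.substring(0, binary.length() - 1)); // 10101011.00011110.00000010.00000101
    }
}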

Even with 4.3 billion possible addresses, that’s not nearly enough to accommodate all of the currently connected devices. Device types are far more than just desktops. Now there are smartphones, hotspots, IoT, smart speakers, cameras, etc. The list keeps proliferating as technology progresses, and in turn, so do the number of devices.

the past and future of ipv4 and ipv6

Future of IPv4

IPv4 addresses are set to finally run out, making IPv6 deployment the only viable solution left for the long-term growth of the Internet.

In October 2019, RIPE NCC, one of five Regional Internet Registries, which is responsible for assigning IP addresses to Internet Service Providers (ISPs) in over 80 nations, announced that only one million IPv4 addresses were left. Due to these limitations, IPv6 has been introduced as a standardized solution offering a 128-bit address length that can define up to 2^128 nodes.

Recovered addresses will only be assigned via a waiting list, which means only a couple of hundred thousand addresses can be allotted per year, not nearly enough to cover the several million that global networks require today. The consequence is that network operators will be forced to rely on expensive and complicated workarounds for the shortage of available addresses. The countdown to zero addresses means enterprises worldwide have to take stock of their IP resources, find interim solutions, and prepare for IPv6 deployment to overcome the inevitable run-out.

In the interim, one popular solution to bridge the gap to IPv6 deployment is Carrier-Grade Network Address Translation (CGNAT). This technology prolongs the use of IPv4 addresses by allowing a single IP address to be shared across thousands of devices. It only plugs the hole for the time being, as CGNAT cannot scale indefinitely: every added device increases the NAT's workload and complexity, and thereby raises the chance of the CGNAT failing. When that happens, thousands of users are impacted and cannot be quickly put back online.

One more commonly-used workaround is IPv4 address trading. This is a market for selling and buying IPv4 addresses that are no longer needed or used. It’s a risky play since prices are dictated by supply and demand, and it can become a complicated and expensive process to maintain the status quo.

IPv4 scarcity remains a massive concern for network operators. The Internet won’t break, but it is at a breaking point since networks will only find it harder and harder to scale infrastructure for growth. IPv4 exhaustion goes back to 2012 when the Internet Assigned Numbers Authority (IANA) allotted the last IPv4 addresses to RIPE NCC. The long-anticipated run-out has been planned for by the technical community, and that’s where IPv6 comes in.

How is IPv6 different?

Internet Protocol version 6, or IPv6, is the newest version of the Internet Protocol, used for carrying data in packets from a source to a destination across various networks. IPv6 is considered an enhanced version of the older IPv4 protocol, as it supports a significantly larger number of nodes than its predecessor.

IPv6 allows up to 2^128 possible addresses. It is also referred to as Internet Protocol next generation, or IPng. Its addresses are written in hexadecimal as eight groups of 16 bits, providing far greater scalability. Launched globally on June 6, 2012 (World IPv6 Launch), it was also designed to do away with broadcast addresses altogether, relying on multicast instead, unlike its predecessor.

comparing difference between ipv4 and ipv6

Comparing Difference Between IPv4 and IPv6

Now that you know more about IPv4 and IPv6 in detail, we can summarize the key differences between the two protocols point by point. Each has its own benefits and drawbacks.

Points of difference between IPv4 and IPv6:

  • Compatibility with mobile devices – IPv4: uses dot-decimal notation, which makes it less suitable for mobile networks. IPv6: uses colon-separated hexadecimal notation, which makes it better suited to mobile networks.
  • Mapping – IPv4: the Address Resolution Protocol (ARP) is used to map to MAC addresses. IPv6: the Neighbor Discovery Protocol is used to map to MAC addresses.
  • Dynamic Host Configuration Server – IPv4: clients must contact a DHCP server when connecting to a network. IPv6: clients are given permanent addresses and are not required to contact any particular server.
  • Internet Protocol Security (IPsec) – IPv4: optional. IPv6: mandatory.
  • Optional fields – IPv4: present. IPv6: absent; extension headers are available instead.
  • Local subnet group management – IPv4: uses the Internet Group Management Protocol (IGMP). IPv6: uses Multicast Listener Discovery (MLD).
  • IP to MAC resolution – IPv4: broadcast ARP. IPv6: multicast Neighbor Solicitation.
  • Address configuration – IPv4: done manually or via DHCP. IPv6: uses stateless address autoconfiguration (via ICMPv6) or DHCPv6.
  • DNS records – IPv4: Address (A) records. IPv6: Address (AAAA) records.
  • Packet header – IPv4: packet flow for QoS handling is not identified; the header includes a checksum. IPv6: the Flow Label field identifies packet flow for QoS handling.
  • Packet fragmentation – IPv4: allowed at routers and by sending hosts. IPv6: performed by the sending host only.
  • Minimum packet size – IPv4: 576 bytes. IPv6: 1,280 bytes.
  • Security – IPv4: depends mostly on the application. IPv6: has its own security protocol, IPsec.
  • Mobility and interoperability – IPv4: relatively constrained network topologies restrict mobility and interoperability. IPv6: mobility and interoperability capabilities are embedded in network devices.
  • SNMP – IPv4: support included. IPv6: not supported.
  • Address mask – IPv4: used to separate the network portion from the host portion. IPv6: not used.
  • Address features – IPv4: Network Address Translation (NAT) allows a single NAT address to mask thousands of non-routable addresses. IPv6: direct addressing is possible because of the vast address space.
  • Network configuration – IPv4: networks are configured manually or with DHCP. IPv6: has autoconfiguration capabilities.
  • Routing Information Protocol (RIP) – IPv4: supported. IPv6: not supported.
  • Fragmentation – IPv4: done by sending hosts and forwarding routers. IPv6: done only by the sender.
  • Variable-Length Subnet Mask (VLSM) – IPv4: supported. IPv6: not supported.
  • Configuration – IPv4: a newly installed system must be configured before it can communicate with other systems. IPv6: configuration is optional.
  • Number of classes – IPv4: five classes, A to E. IPv6: no address classes; allows a virtually unlimited number of addresses.
  • Types of addresses – IPv4: unicast, broadcast, and multicast. IPv6: unicast, multicast, and anycast.
  • Checksum field – IPv4: present in the header. IPv6: not present.
  • Header length – IPv4: 20 bytes. IPv6: 40 bytes.
  • Number of header fields – IPv4: 12. IPv6: 8.
  • Address method – IPv4: numeric address. IPv6: alphanumeric (hexadecimal) address.
  • Address size – IPv4: 32-bit. IPv6: 128-bit.

Pros and Cons of using IPv6

IPv6 addresses the technical shortcomings present in IPv4. The key difference is that it offers a 128-bit, or 16-byte, address, making the address pool around 340 undecillion (340 trillion trillion trillion).

It's significantly larger than the address size provided by IPv4, since it's made up of eight groups of 16 bits each. The sheer size underlines why networks should adopt IPv6 sooner rather than later, yet making the move has so far been a tough sell. Network operators find working with IPv4 familiar and are probably taking a 'wait and see' approach to their IP situation. They might think they have enough IPv4 addresses for the near future, but sticking with IPv4 will get progressively harder.

An example of the advantage of IPv6 over IPv4 is not having to share an IP and getting a dedicated address for your devices. With IPv4, a group of computers that wants to share a single public IP needs to use NAT, and to access one of those computers directly you need complex configuration such as port forwarding and firewall alterations. With IPv6, which has plenty of addresses to go around, computers can be reached publicly without additional configuration, saving resources.

Future of IPv6 adoption

The future adoption of IPv6 largely depends on the number of ISPs and mobile carriers, along with large enterprises, cloud providers, and data centers willing to migrate, and how they will migrate their data. IPv4 and IPv6 can coexist on parallel networks. So, there are no significant incentives for entities such as ISPs to vigorously pursue IPv6 options instead of IPv4, especially since it costs a considerable amount of time and money to upgrade.

Despite the price tag, the digital world is slowly moving away from the older IPv4 model into the more efficient IPv6. The long-term benefits outlined in this article that IPv6 provides are worth the investment.

Adoption still has a long way to go, but only IPv6 allows for new possibilities for network configuration on a massive scale. It's efficient and innovative, and it reduces dependency on the increasingly challenging and expensive IPv4 market.

Not preparing for the move is short-sighted and risky for networks. Smart businesses are embracing the efficiency, innovation, and flexibility of IPv6 right now. Be ready for exponential Internet growth and next-generation technologies as they come online and enhance your business.

IPv4 exhaustion will spur IPv6 adoption forward, so what are you waiting for? To find out how to adopt IPv6 for your business, give us a call today.

Microservices: Importance of Continuous Testing with Examples https://devtest.phoenixnap.com/blog/microservices-continuous-testing Mon, 28 Oct 2019 13:01:11 +0000 https://devtest.phoenixnap.com/blog/?p=74880

When is Microservice Architecture the Way to Go?

Choosing and designing the correct architecture for a system is critical. One must ensure the quality of service requirements and the handling of non-functional requirements, such as maintainability, extensibility, and testability.

Microservice architecture has become a recurrent choice in modern ecosystems since companies adopted Agile and DevOps. While not a de facto choice, it is one of the preferred options when dealing with systems that are growing extensively and where a monolithic architecture is no longer feasible to maintain. Keeping components service-oriented and loosely coupled keeps continuous development and release cycles going, which drives businesses to constantly test and upgrade their software.

The main prerequisites that call for such an architecture are:

  • Domain-Driven Design
  • Continuous Delivery and DevOps Culture
  • Failure Isolation
  • Decentralization

It has the following benefits:

  • Team ownership
  • Frequent releases
  • Easier maintenance
  • Easier upgrades to newer versions
  • Technology agnostic

It has the following cons:

  • Added complexity of microservice-to-microservice communication mechanisms
  • Increasing the number of services increases the overall system complexity

The more distributed and complex the architecture is, the more challenging it is to ensure that the system can be expanded and maintained while controlling cost and risk. One business transaction might involve multiple combinations of protocols and technologies. It is not just about the use cases but also about the operations around them. When adopting Agile and DevOps approaches, one should find a balance between flexibility and functionality, aiming for continuous revision and testing.

microservices testing strategies

The Importance of Testing Strategies in Relation to Microservices

Adopting DevOps in an organization aims to eliminate the various isolated departments and move towards one overall team. This move seeks to specifically improve the relationships and processes between the software team and the operations team. Delivering at a faster rate also means ensuring that there is continuous testing as part of the software delivery pipeline. Deploying daily (and in some cases even every couple of hours) is one of the main targets for fast end-to-end business solution delivery. Reliability and security must be kept in mind here, and this is where testing comes in.

The inclusion of test-driven development is the only way to achieve genuine confidence that code is production-ready. Valid test cases add value to the system since they validate and document the system itself. Apart from that, good code coverage encourages improvements and assists during refactoring.

Microservices architecture decentralizes communication channels, which makes testing more complicated, but it is not an insurmountable problem. A team owning a microservice should not be afraid to introduce changes just because they might break existing client applications. Manual testing is very inefficient, considering that continuous integration and continuous deployment is the current best practice. DevOps engineers should make sure to include automated tests in their development workflow: write tests, add/refactor code, and run tests.

Common Microservice Testing Methods

The test pyramid is an easy concept that helps us identify the effort required when writing tests, and where the number of tests should decrease if granularity decreases. It also applies when considering continuous testing for microservices.

microservice testing
Figure 1: The test pyramid (Based on the diagram in Microservices Patterns by Chris Richardson)

To make the topic more concrete, we will tackle the testing of a sample microservice using Spring Boot and Java. Microservice architectures, by construct, are more complicated than monolithic architecture. Nonetheless, we will keep the focus on the type of tests and not on the architecture. Our snippets are based on a minimal project composed of one API-driven microservice owning a data store using MongoDB.

Unit tests

Unit tests should be the majority of tests since they are fast, reliable, and easy to maintain. These tests are also called white-box tests. The engineer implementing them is familiar with the logic and is writing the test to validate the module specifications and check the quality of code.

The focus of these tests is a small part of the system in isolation, i.e., the Class Under Test (CUT). The Single Responsibility Principle is a good guideline on how to manage code relating to functionality.

The most common form of a unit test is a “solitary unit test.” It does not cross any boundaries and does not need any collaborators apart from the CUT.

As outlined by Bill Caputo, databases, messaging channels, or other systems are the boundaries; any additional class used or required by the CUT is a collaborator. A unit test should never cross a boundary. When making use of collaborators, one is writing a "sociable unit test." Using mocks for the dependencies used by the CUT is a way to test sociable code with a "solitary unit test."
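To make this concrete, below is a minimal sketch of a solitary unit test in the style of the sample project: the collaborator (a repository) is mocked so no boundary is crossed. The class names, method signatures, and DailyTask constructor are illustrative assumptions rather than the project's actual code, and JUnit 4, Mockito, and AssertJ are assumed to be on the classpath.

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Optional;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class DailyTaskServiceTest {

    @Mock
    private DailyTaskRepository taskRepository;   // collaborator replaced by a mock, so no boundary is crossed

    @InjectMocks
    private DailyTaskService taskService;         // the Class Under Test

    @Test
    public void completeTask_marksTaskAsComplete() {
        // given: the repository returns a pending task (hypothetical DailyTask constructor)
        DailyTask task = new DailyTask("1", "newTask", "newDescription", false, true);
        when(taskRepository.findById("1")).thenReturn(Optional.of(task));

        // when: the CUT is exercised
        DailyTask result = taskService.completeTask("1");

        // then: observable behavior is asserted, not implementation details
        assertThat(result.isComplete()).isTrue();
        verify(taskRepository).save(result);
    }
}

Structuring the test as given/when/then also keeps it aligned with the BDD approach discussed below: the assertions verify the end state rather than the code paths taken.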

In traditional software development models, developer testing was not yet widely adopted, and testing happened completely out of sync with development. Achieving a high code coverage rating was considered a key indicator of test suite confidence.

With the introduction of Agile and short iterative cycles, it’s evident now that previous test models no longer work. Frequent changes are expected continuously. It is much more critical to test observable behavior rather than having all code paths covered. Unit tests should be more about assertions than code coverage because the aim is to verify that the logic is working as expected.

It is useless to have a component with loads of tests and a high percentage of code coverage when the tests do not have proper assertions. Applying a more Behavior-Driven Development (BDD) approach ensures that tests verify the end state and that the behavior matches the requirements set by the business. The advantage of focused tests with a well-defined scope is that it becomes easier to identify the cause of failure. BDD tests give us higher confidence that a failure was a consequence of a change in feature behavior. Tests that focus mainly on code coverage cannot offer as much confidence, since there is a higher risk that a failure is merely a repercussion of changes to the tests themselves, tied to implementation details of the code paths.

Tests should follow Martin Fowler’s suggestion when he stated the following (in Refactoring: Improving the Design of Existing Code, Second Edition. Kent Beck, and Martin Fowler. Addison-Wesley. 2018):

Another reason to focus less on minor implementation details is refactoring. During refactoring, unit tests should be there to give us confidence and not slow down work. A change in the implementation of a collaborator might result in a test failure, which can make tests harder to maintain. It is therefore highly recommended to keep sociable unit tests to a minimum, especially when such tests might slow down the development life cycle to the point where they end up ignored. An excellent situation in which to include a sociable unit test is negative testing, especially when dealing with behavior verification.

Integration tests

One of the most significant challenges with microservices is testing their interaction with the rest of the infrastructure services, i.e., the boundaries that the particular CUT depends on, such as databases or other services. The test pyramid clearly shows that integration tests should be fewer than unit tests but more numerous than component and end-to-end tests. These other types of tests might be slower, harder to write and maintain, and quite fragile when compared to unit tests. Crossing boundaries has an impact on performance and execution time due to network and database access; still, such tests are indispensable, especially in a DevOps culture.

In a Continuous Deployment scope, narrow integration tests are favored over broad integration tests. The latter are very close to end-to-end tests, since they require the actual service to be running rather than using a test double of that service to test the code interactions. The main goal is to build manageable, operative tests in a fast, easy, and resilient fashion. Integration tests focus on the interaction of the CUT with one service at a time, and our focus here is on narrow integration tests. This verifies that the interaction between a pair of services behaves as expected, where the counterpart can be an infrastructure service or any other service.

Persistence tests

A somewhat controversial type of test is the persistence-layer test, whose primary aim is to exercise the queries and their effect on test data. One option is the use of in-memory databases. Some might consider the use of an in-memory database a sociable unit test, since it is self-contained, idempotent, and fast. The test runs against the database created with the desired configuration. After the test runs and assertions are verified, the data store is automatically scrubbed once the JVM exits, due to its ephemeral nature. Keep in mind that there is still a connection happening to a different service, so it is considered a narrow integration test. In a Test-Driven Development (TDD) approach, such tests are essential since test suites should run within seconds. In-memory databases are a valid trade-off to ensure that tests are kept as fast as possible and not ignored in the long run.

@Before
public void setup() throws Exception {
    try {
        // This downloads the version of MongoDB marked as production. One should
        // always specify the version that is currently being used by the SUT.
        String ip = "localhost";
        int port = 27017;

        IMongodConfig mongodConfig = new MongodConfigBuilder()
                .version(Version.Main.PRODUCTION)
                .net(new Net(ip, port, Network.localhostIsIPv6()))
                .build();

        // mongodExecutable is a MongodExecutable field of the test class
        MongodStarter starter = MongodStarter.getDefaultInstance();
        mongodExecutable = starter.prepare(mongodConfig);
        mongodExecutable.start();

    } catch (IOException e) {
        e.printStackTrace();
    }
}

Snippet 1: Installation and startup of the In-memory MongoDB

The above is not a full integration test since an in-memory database does not behave exactly as the production database server. Therefore, it is not a replica for the “real” mongo server, which would be the case if one opts for broad integration tests.

Another option for persistence integration tests is to have broad tests running connected to an actual database server or with the use of containers. Containers ease the pain since, on request, one provisions the database, compared to having a fixed server. Keep in mind such tests are time-consuming, and categorizing tests is a possible solution. Since these tests depend on another service running apart from the CUT, it’s considered a system test. These tests are still essential, and by using categories, one can better determine when specific tests should run to get the best balance between cost and value. For example, during the development cycle, one might run only the narrow integration tests using the in-memory database. Nightly builds could also run tests falling under a category such as broad integration tests.

@Category(FastIntegration.class)
@RunWith(SpringRunner.class)
@DataMongoTest
public class DailyTaskRepositoryInMemoryIntegrationTest {
	. . . 
}

@Category(SlowIntegration.class)
@RunWith(SpringRunner.class)
@DataMongoTest(excludeAutoConfiguration = EmbeddedMongoAutoConfiguration.class)
public class DailyTaskRepositoryIntegrationTest {
   ...
}

Snippet 2: Using categories to differentiate the types of integration tests
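
Snippet 2 elides the actual test bodies. As a minimal sketch, assuming DailyTaskRepository is a Spring Data repository and reusing the DailyTask constructor order (id, name, description, isComplete, isUrgent) that appears in later snippets, the body of the in-memory variant could look something like this:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.data.mongo.DataMongoTest;
import org.springframework.test.context.junit4.SpringRunner;

@Category(FastIntegration.class)
@RunWith(SpringRunner.class)
@DataMongoTest
public class DailyTaskRepositoryInMemoryIntegrationTest {

	@Autowired
	private DailyTaskRepository dailyTaskRepository;

	@Test
	public void saveTask_canBeReadBackById() {
		// id is null so the data store generates one on save
		DailyTask saved = dailyTaskRepository.save(
				new DailyTask(null, "newTask", "newDescription", false, true));

		// the persisted entity is retrievable through the generated id
		assertThat(dailyTaskRepository.findById(saved.getId())).isPresent();
	}
}

With de.flapdoodle.embed.mongo on the test classpath, @DataMongoTest auto-configures the embedded instance, which is why the second class in Snippet 2 explicitly excludes EmbeddedMongoAutoConfiguration in order to reach a real server.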

Consumer-driven tests

Inter-Process Communication (IPC) mechanisms are a central aspect of distributed systems based on a microservices architecture, and they raise various complications when creating test suites. On top of that, in an Agile team, changes are continuously in progress, including changes to APIs or events. Whichever IPC mechanism the system uses, there is a contract between any two services. The type of contract depends on the mechanism chosen: with APIs, the contract is the HTTP request and response, while in an event-based system, the contract is the domain event itself.

A primary goal when testing microservices is to ensure that those contracts are well defined and stable at any point in time. In a top-down TDD approach, these are the first tests to be covered. A fundamental integration test gives the consumer quick feedback as soon as a client no longer matches the real state of the producer it is talking to.

These tests should be part of the regular deployment pipeline. A failure makes the consumers aware that a change has occurred on the producer side and that changes are required on their side to regain consistency. Consumer-driven contract testing targets exactly this use case, without the need to write intricate end-to-end tests.

The following is a sample of a contract verifier generated by the spring-cloud-contract plugin.

@Test
public void validate_add_New_Task() throws Exception {
  // given:
   MockMvcRequestSpecification request = given()
		.header("Content-Type", "application/json;charset=UTF-8")
		.body("{\"taskName\":\"newTask\",\"taskDescription\":\"newDescription\",\"isComplete\":false,\"isUrgent\":true}");

  // when:
   ResponseOptions response = given().spec(request).post("/tasks");

  // then:
   assertThat(response.statusCode()).isEqualTo(200);
   assertThat(response.header("Content-Type")).isEqualTo("application/json;charset=UTF-8");
  // and:
   DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
   assertThatJson(parsedJson).field("['taskName']").isEqualTo("newTask");
   assertThatJson(parsedJson).field("['isUrgent']").isEqualTo(true);
   assertThatJson(parsedJson).field("['isComplete']").isEqualTo(false);
   assertThatJson(parsedJson).field("['id']").isEqualTo("3");
   assertThatJson(parsedJson).field("['taskDescription']").isEqualTo("newDescription");
}

Snippet 3: Contract Verifier auto-generated by the spring-cloud-contract plugin

A base class written on the producer side instructs the generated verifier what kind of response to expect for the various types of requests, using the standalone setup. The packaged collection of stubs is made available to all consumers so that they can pull them into their own implementation. Complexity arises when multiple consumers use the same contract; the producer therefore needs a global view of the service contracts required.

@RunWith(SpringRunner.class)
@SpringBootTest
public class ContractBaseClass {

	@Autowired
	private DailyTaskController taskController;

	@MockBean
	private DailyTaskRepository dailyTaskRepository;

	@Before
	public void before() {
		RestAssuredMockMvc.standaloneSetup(this.taskController);
		Mockito.when(this.dailyTaskRepository.findById("1"))
			.thenReturn(Optional.of(new DailyTask("1", "Test", "Description", false, null)));

		. . .

		Mockito.when(this.dailyTaskRepository.save(
				new DailyTask(null, "newTask", "newDescription", false, true)))
			.thenReturn(new DailyTask("3", "newTask", "newDescription", false, true));
	}
}

Snippet 4: The producer’s BaseClass defining the response expected for each request

On the consumer side, with the spring-cloud-starter-contract-stub-runner dependency included, we configure the test to use the stubs binary. The test runs against the stubs generated by the producer, using either a specific version or always the latest one, as configured. The stub artifact links the client with the producer to ensure that both are working from the same contract. Any change that occurs is reflected in those tests, so the consumer can identify whether the producer has changed.

@SpringBootTest(classes = TodayAskApplication.class)
@RunWith(SpringRunner.class)
@AutoConfigureStubRunner(ids = "com.cwie.arch:today:+:stubs:8080", stubsMode = StubRunnerProperties.StubsMode.LOCAL)
public class TodayClientStubTest {
 	 . . .
	@Test
	public void addTask_expectNewTaskResponse () {
		Task newTask = todayClient.createTask(
new Task(null, "newTask", "newDescription", false, true));
		BDDAssertions.then(newTask).isNotNull();
		BDDAssertions.then(newTask.getId()).isEqualTo("3");
		. . . 
		
	}
}

Snippet 5: Consumer injecting the stub version defined by the producer

Such integration tests verify that a provider's API is still in line with the consumers' expectations. With mocked unit tests for APIs, by contrast, we would stub the APIs and mock their behavior. From a consumer point of view, those tests only ensure that the client matches our own expectations; it is essential to note that if the producer changes its API, they will not fail. It is therefore imperative to define exactly what each test is covering.

// the response we expect is represented in the task1.json file
private Resource taskOne = new ClassPathResource("task1.json");

@Autowired
private TodayClient todayClient;

@Test
public void createNewTask_expectTaskIsCreated() {
	// stub the producer endpoint so the client call is served by WireMock
	WireMock.stubFor(WireMock.post(WireMock.urlMatching("/tasks"))
		.willReturn(WireMock.aResponse()
			.withHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_UTF8_VALUE)
			.withStatus(HttpStatus.OK.value())
			.withBody(transformResourceJsonToString(taskOne))));

	Task task = todayClient.createTask(new Task(null, "runUTest", "Run Test", false, true));
	BDDAssertions.then(task.getId()).isEqualTo("1");
}

Snippet 6: A consumer test doing assertions on its own defined response

Component tests

A microservices architecture can grow fast, so the component under test might integrate with multiple other components and multiple infrastructure services. Until now, we have covered white-box testing with unit tests, and narrow integration tests that exercise the CUT crossing a boundary to integrate with another service.

The fastest type of component testing is the in-process approach, where, with the use of test doubles and in-memory data stores, testing remains within process boundaries. The main disadvantage of this approach is that the deployable production service is not fully tested; on the contrary, the component has to be wired differently for the test. The preferred method is out-of-process component testing. These tests are like end-to-end tests, but with all external collaborators replaced by test doubles; this way, the fully deployed artifact is exercised using real network calls. The test is responsible for properly configuring any external services as stubs.

@Ignore
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { TodayConfiguration.class, TodayIntegrationApplication.class,
		CloudFoundryClientConfiguration.class })
public class BaseFunctionalitySteps {

	@Autowired
	private CloudFoundryOperations cf;

	private static File manifest = new File(".\\manifest.yml");

	@Autowired
	private TodayClient client;

	// Any stubs required
	. . .

	public void setup() {
		cf.applications().pushManifest(PushApplicationManifestRequest.builder()
				.manifest(ApplicationManifestUtils.read(manifest.toPath()).get(0))
				.build())
			.block();
	}

	. . .

	// Any calls required by tests
	public void requestForAllTasks() {
		this.client.getTodoTasks();
	}
}

Snippet 7: Deployment of the manifest on Cloud Foundry and any calls required by tests

Cloud Foundry is one option for a container-based testing architecture: “an open-source cloud application platform that makes it faster and easier to build, test, deploy, and scale applications.” The following manifest.yml defines the configuration of all applications in the system. It is used to deploy the actual service, in its production-ready format, to the Pivotal organization's space, where a MongoDB service matching the production version is already set up.

---
applications:
- name: today
  instances: 1
  path: ../today/target/today-0.0.1-SNAPSHOT.jar 
  memory: 1024M
  routes:
  - route: today.cfapps.io
  services:
  - mongo-it

Snippet 8: Deployment of one instance of the service depending on mongo service

When opting for the out-of-process approach, keep in mind that real boundaries are under test, so tests end up slower because of network and database interactions. Ideally, these test suites live in a separate module so that they can run at a different Maven stage instead of the usual ‘test’ phase.

Since the emphasis of the tests is on the component itself, tests cover the primary responsibilities of the component while purposefully neglecting any other part of the system.

Cucumber, a software tool that supports Behavior-Driven Development, is one option for defining such behavioral tests. With its plain-language parser, Gherkin, it ensures that customers can easily understand all of the tests described. The following Cucumber feature file verifies that our component implementation matches the business requirements for that particular feature.

Feature: Tasks

Scenario: Retrieving one task from list
 Given the component is running
 And the data consists of one or more tasks
 When user requests for task x
 Then the correct task x is returned

Scenario: Retrieving all lists
 Given the data consists of one or more tasks
 When user requests for all tasks
 Then all tasks in database are returned

Scenario: Negative Test
 Given the component is not running
 When user requests for task x
 Then the request fails with response 404

Snippet 9: A feature file defining BDD tests
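
As a rough sketch of the glue behind such a feature file, the step definitions below map the “Retrieving all lists” scenario onto the TodayClient used in Snippet 7. The io.cucumber.java.en annotations assume a recent cucumber-java dependency, the @Autowired wiring assumes cucumber-spring is on the classpath, and the List<Task> return type of getTodoTasks() is an assumption rather than something shown earlier:

import static org.assertj.core.api.Assertions.assertThat;

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class TaskSteps {

	// assumes cucumber-spring glue exposes the same context as BaseFunctionalitySteps
	@Autowired
	private TodayClient client;

	private List<Task> tasks;

	@Given("the data consists of one or more tasks")
	public void dataConsistsOfTasks() {
		// test data is assumed to be seeded before the scenario, e.g. by the deployment step
	}

	@When("user requests for all tasks")
	public void userRequestsForAllTasks() {
		tasks = client.getTodoTasks();
	}

	@Then("all tasks in database are returned")
	public void allTasksInDatabaseAreReturned() {
		assertThat(tasks).isNotEmpty();
	}
}

Each annotation value must match the Gherkin text exactly, which is what keeps the feature file readable by the business while the steps still execute real calls against the deployed component.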

End-to-end tests

Similar to component tests, the aim of end-to-end tests is not to maximize code coverage but to ensure that the system meets the business scenarios requested. The difference is that in end-to-end testing, all components are up and running during the test.

As the testing pyramid diagram shows, the number of end-to-end tests decreases further, given how slow they can be. The first step is to have the setup running; for this example, we will be leveraging Docker.

version: '3.7'
services:
    today-app:
        image: today-app:1
        container_name: "today-app"
        build:
          context: ./
          dockerfile: DockerFile
        environment:
           - SPRING_DATA_MONGODB_HOST=mongodb
        volumes:
          - /data/today-app
        ports:
          - "8082:8080"
        links:
          - mongodb
        depends_on:
          - mongodb

    mongodb:
        image: mongo:3.2
        container_name: "mongodb"
        restart: always
        environment:
           - AUTH=no
           - MONGO_DATA_DIR=/data/db
           - MONGO_LOG_DIR=/dev/log
        volumes:
           - ./data:/data
        ports:
           - 27017:27017
        command: mongod --smallfiles --logpath=/dev/null # --quiet

Snippet 10: The docker-compose definition used to deploy the defined service and the specified version of MongoDB as containers

As with component tests, it makes sense to keep end-to-end tests in a separate module and run them in a different phase. The exec-maven-plugin was used to deploy all required components, execute our tests, and finally clean up and tear down the test environment.

Snippet 11: Using exec-maven-plugin executions with docker commands to prepare for tests and clean-up after tests

Since these are broad-stack tests, a smaller selection of tests is executed per feature, selected according to perceived business risk. The previous types of tests covered the low-level details; here the question is whether a user story matches its acceptance criteria. A failure at this level should also immediately stop a release, as it might cause severe business repercussions.

Conclusion

Handoff-centric testing often ends up being a very long process, taking up to weeks until all bugs are identified, fixed, and a new deployment readied. Feedback is only received after a release is made, so the lifespan of a version becomes our quickest possible turnaround time.

The continuous testing approach ensures immediate feedback: the DevOps engineer knows right away whether the implemented feature is production-ready, depending on the outcome of the tests run. From unit tests up to end-to-end tests, they all help speed up that assessment.

A microservices architecture helps create faster rollouts to production since it is domain-driven. It ensures failure isolation and increases ownership. When multiple teams work on the same project, that is another reason to adopt such an architecture: it keeps teams independent so they do not interfere with each other's work.

Moving toward continuous testing improves testability. Each microservice has a well-defined domain, and its scope should be limited to one actor. The test cases applied are specific and more concise, and tests are isolated, which facilitates releases and faster deployments.

Following the TDD approach, no code is written unless a failing test calls for it, and confidence grows as each iterative implementation turns that failing test green. This means testing happens in parallel with the actual implementation, and all of the tests mentioned above are executed before changes reach a staging environment. Continuous testing keeps evolving until the change enters the next release stage, a staging environment, where the focus switches to more exhaustive testing such as load testing.

Agile, DevOps, and continuous delivery require continuous testing. The key benefit is the immediate feedback produced by automated tests; issues caught late do not just hurt the user experience, they also carry high-risk business consequences. For more information about continuous testing, contact phoenixNAP today.

Definitive Cloud Migration Checklist For Planning Your Move https://devtest.phoenixnap.com/blog/cloud-migration-checklist Sat, 19 Oct 2019 02:48:01 +0000 https://devtest.phoenixnap.com/blog/?p=66180

Embracing the cloud may be a cost-effective business solution, but moving data from one platform to another can be an intimidating step for technology leaders.

Ensuring smooth integration between the cloud and traditional infrastructure is one of the top challenges for CIOs. Data migrations do involve a certain degree of risk. Downtime and data loss are two critical scenarios to be aware of before starting the process.

Given the possible consequences, it is worth having a practical plan in place. We have created a useful strategy checklist for cloud migration.

planning your move with a cloud migration checklist

1. Create a Cloud Migration Checklist

Before you start reaping the benefits of cloud computing, you first need to understand the potential migration challenges that may arise.

Only then can you develop a checklist or plan that will minimize downtime and ensure a smooth transition.

There are many challenges involved with the decision to move from on-premise architecture to the cloud. Finding a cloud technology provider that can meet your needs is the first one. After that, everything comes down to planning each step.

The migration itself is the tricky part, since some of your company's data might be unavailable during the move. You may also have to take your in-house servers temporarily offline. To minimize any negative consequences, every step should be determined ahead of time.

With that said, you need to remain willing to change the plan or rewrite it as necessary if something puts your applications and data at risk.

2. Which Cloud Solution Should You Choose: Public, Private, or Hybrid?

Public Cloud

A public cloud provides services and infrastructure off-site through the internet. While public clouds offer the best opportunity for efficiency through shared resources, they also come with a higher risk of vulnerabilities and security breaches.

Public clouds make the most sense when you need to develop and test application code, collaborate on projects, or add incremental capacity. Be sure to address security concerns in advance so that they do not turn into expensive issues in the future.

Private Cloud

A private cloud provides services and infrastructure on a private network. The allure of a private cloud is the complete control over security and your system.

Private clouds are ideal when security is of the utmost importance, especially if the stored information contains sensitive data. They are also the best cloud choice if your company is in an industry that must adhere to stringent compliance or security measures.

Hybrid Cloud

A hybrid cloud is a combination of both public and private options.

Separating your data across a hybrid cloud allows you to operate each workload in the environment that best suits it. The drawback, of course, is the challenge of managing different platforms and tracking multiple security infrastructures.

A hybrid cloud is the best option if your business uses a SaaS application but wants the comfort of upgraded security.

3. Communication and Planning Are Key

Of course, you should not forget your employees when coming up with a cloud migration project plan. There are psychological barriers that employees must work through.

Some employees, especially those who do not entirely trust this mysterious “cloud,” might be tough to convince. Be prepared to spend some time teaching them how the new infrastructure will work and assuring them that they will not notice much of a difference.

Not everyone trusts the cloud, particularly people who are used to physical storage drives and everything they entail. They, rather than the actual cloud service you use, might be one of your most substantial migration challenges.

Other factors that go into a successful cloud migration roadmap are testing, runtime environments, and integration points. Some issues can occur if the cloud-based information does not adequately populate your company’s operating software. Such scenarios can have a severe impact on your business and are a crucial reason to test everything.

A good cloud migration plan considers all of these things, from cost management and employee productivity to operating system stability and database security. Yes, your stored data has security needs, especially when its administration is partly entrusted to an outside company.

When coming up with and implementing your cloud migration system, remember to take all of these things into account. Otherwise, you may come across some additional hurdles that will make things tougher or even slow down the entire process.

meeting to go over cloud migration strategy

4. Establish Security Policies When Migrating To The Cloud

Before you begin your migration to the cloud, you need to be aware of the related security and regulatory requirements.

There are numerous regulations that you must follow when moving to the cloud. These are particularly important if your business is in healthcare or payment processing. In this case, one of the challenges is working with your provider on ensuring your architecture complies with government regulations.

Another security issue is identity and access management for cloud data. Only a designated group in your company should have access to that information, to minimize the risk of a breach.

Whether your company needs to follow HIPAA Compliance laws, protect financial information or even keep your proprietary systems private, security is one of the main points your cloud migration checklist needs to address.

Not only does the data in the cloud need to be stored securely, but the application migration strategy should keep it safe as well. No one who is not supposed to have that information, hackers included, should be able to access it during the migration process. And once the business data is in the cloud, it needs to be kept safe when it is not in use.

It needs to be encrypted according to the highest standards to resist breaches. Whether it resides in a private or public cloud environment, encrypting your data and applications is essential to keeping your business data safe.

Many third-party cloud server companies have their security measures in place and can make additional changes to meet your needs. The continued investments in security by both providers and business users have a positive impact on how the cloud is perceived.

According to recent reports, security concerns fell from 29% to 25% last year. While this is a positive trend in both business and cloud industries, security is still a sensitive issue that needs to be in focus.

5. Plan for Efficient Resource Management

Most businesses find it hard to realize that the cloud often requires them to introduce new IT management roles.

With the right configuration and cloud monitoring tools in place, many tasks shift to the cloud provider, while a number of roles stay in-house. Filling those roles often involves hiring an entirely new set of talent.

Employees who previously managed physical servers may not be the best ones to deal with the cloud.

There might be migration challenges that are over their heads. In fact, you will probably find that the third-party company you contracted to handle the migration is the one that should be handling that segment of your IT needs.

This is something else your employees may have to get used to: calling for support when something happens and they cannot get the information they need.

While you should not get rid of your IT department altogether, you will have to shift some of its functions to fit the new architecture.

However, there is another type of cloud migration resource management that you might have overlooked – physical resource management.

When you have a company server, you have to have enough electricity to power it securely. You need a cold room to keep the computers in, and even some precautionary measures in place to ensure that sudden power surges will not harm the system. These measures cost quite a bit of money in upkeep.

When you use a third-party data center, you no longer have to worry about these things. The provider manages the servers and is in place to help with your cloud migration. Moreover, it can assist you with any further business needs you may have. It can provide you with additional hardware, remote technical assistance, or even set up a disaster recovery site for you.

These possibilities often make the cloud pay for itself.

According to a survey of 1,037 IT professionals by TechTarget, companies dedicate around 31% of their IT budgets to cloud services. This figure continues to increase as businesses keep discovering the potential of the cloud.

cost savings from moving to cloud

6. Calculate Your ROI

Cloud migration is not inexpensive. You need to pay for the cloud server space and the engineering involved in moving and storing your data.

However, although this appears to be one of the many migration challenges, it is not. As cloud storage has become more popular, its costs have been falling, and the return on investment (ROI) for cloud storage makes the price worthwhile.

According to a survey conducted in September 2017, 82% of organizations found that their cloud migration met or exceeded their ROI expectations. Another study showed that costs still tend to come in slightly higher than planned.

In that study, 58% of respondents spent more on cloud migration than planned. Their ROI is not necessarily affected, as they may still save money in the long run even if the original migration challenges sent them over budget.

One of the reasons people see a positive ROI is that they no longer have to maintain their own server farm. Keeping a physical server system running uses up quite a few utilities, due to the need to keep it powered and cool.

You will also need employees to keep the system architecture up to date and troubleshoot any problems. With a cloud server, these expenses go away. There are other advantages to using a third party server company, including the fact that these businesses help you with cloud migration and all of the other details.

The survey included some additional data, including the fact that most respondents, 68% of them, accepted the help of their contracted cloud storage company to handle the migration. An overwhelming majority also used the service to help them create and implement a cloud migration plan.

Companies are not afraid to turn to the experts when it comes to this type of IT service. Not everyone knows everything, so it is essential to know when to reach out with questions or when implementing a new service.

Final Thoughts on Cloud Migration Planning

If you’re still considering the next steps for your cloud migration, the tactics outlined above should help you move forward. A migration checklist is the foundation for your success and should be your first step.

Cloud migration is not a simple task. However, by understanding and preparing for the challenges, you can migrate successfully.

Remember to evaluate what is best for your company and move forward with a trusted provider.

Insider Threats: Types & Attack Detection CISO’s Need to Know For Prevention https://devtest.phoenixnap.com/blog/insider-threats Thu, 18 Apr 2019 16:28:47 +0000 https://devtest.phoenixnap.com/blog/?p=73512

In this article you will learn:

  • All CISO’s need to understand your biggest asset, people, can also your most significant risk.
  • Insider threats are increasing for enterprises across all industry sectors. Threats can come from anyone with access to sensitive data.
  • Be prepared to mitigate your risk with active insider threat detection and prevention.


What is an Insider Threat?

Insider threats are cybersecurity threats that come from within your own company. The insider may be an employee, a vendor, or even an ex-employee; anyone with valid access to your network can be an insider threat.

Dealing with insider threats isn’t easy since the people you trust with your data and systems are the ones responsible for them.

definition of an insider threat

Types of Insider Threats

There are three types of insider threats: compromised users, careless users, and malicious users.

different types of insider threats to be aware of

Compromised Employees or Vendors

Compromised employees or vendors are the most important type of insider threat you will face, because neither you nor they know they are compromised. It can happen when an employee grants access to an attacker by clicking on a phishing link in an email. These are the most common types of insider threats.

Careless Employees

Careless employees or vendors can become targets for attackers. Leaving a computer or terminal unlocked for a few minutes can be enough for an attacker to gain access.

Granting DBA permissions to regular users (or worse, using shared software system accounts) to do IT work is another example of a careless insider threat.

Malicious Insider

Malicious attackers can take any shape or form. They usually have legitimate user access to the system and willfully extract data or intellectual property. Since they are involved with the attack, they can also cover their tracks, which makes detection even more difficult.

 

Detecting Insider Threats

Most of the security tools used today try to stop legitimate users from being compromised. This includes firewalls, endpoint scanning, and anti-phishing tools. Compromised users are also behind the most common types of breaches, so it makes sense that so much effort goes into stopping them.

The other two profiles are not as easy to deal with. With careless behavior, knowing whether a given system event was valid is almost impossible. Network and security admins probably do not know the context behind an application's behavior, so they will not notice anything suspicious until it is too late.

Malicious attackers, similarly, know the ins and outs of your company's security system, giving them a good chance of getting away undetected.

The most significant issues with detecting insider threats are:

1. Legitimate Users

The nature of the threat is what makes it so hard to prevent. With the actor using their authentic login profile, no immediate warning is triggered. Accessing large files or databases infrequently may be a valid part of their day-to-day job requirements.

2. System and Software Context

For the security team to know that something bad is happening, they need to know what something bad looks like. This is not easy, as business units are usually the experts when it comes to their own software. Without the right context, detecting a real insider threat from the security operations center is almost impossible.

3. Post Login Activities

Keeping track of every user’s activities after they’ve logged in to the system is a lot of work. In some cases, raw logs need to be checked, and each event studied. Even with Machine Learning (ML) tools, this can still be a lot of work. It could also lead to many false positives being reported, adding noise to the problem.

what to look for with an Inside attack

Indicators of Insider Attacks

Detecting attacks is still possible. Some signs are easy to spot and take action on.

Common indicators of insider threats are:

  • Unexplained financial gain.
  • Abuse of service accounts.
  • Multiple failed logins.
  • Incorrect software access requests.
  • Large data or file transfers.

Using systems and tools that look for these indicators can help raise the alarm during an attack, while regular (daily) endpoint scans ensure workstations stay clean of viruses and malware.

Identifying Breaches in the System

Identifying breaches starts with the security team understanding normal behavior.

 

Normal behavior should be mapped down to the lowest level of access and activity. The logs should include the user's ID, workstation IP address, the accessed server's IP, the employee's department, and the software used.

Additionally, knowing what database was accessed, which schemas and tables read, and what other SQL operations were performed, will help the security team identify breaches.

Detect Insider Threats with Machine Learning

One area where machine learning gives a massive ROI is in network threat detection. Although it isn’t magic, it can highlight where to point your resources.

By providing the system's state and behavioral information to a machine learning algorithm, anomalous and suspicious actions can be identified quickly. Information such as user and connection types, role access and application rights, working times, and access patterns can promptly be passed to ML applications.

Detecting what falls outside of the normal system state described above can be done by mapping the following into the alert process:

  • Listing table access rights per app.
  • Specifying service account credentials and schemas used.
  • Monitoring the usual data storage locations.

Prevent Insider Threats With Threat Scoring

Correlating the above types of information allows you to create a threat score for each user activity. Couple that with the user's credentials, and you can alert the security team soon after a breach occurs.
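
As a toy illustration rather than a production scoring model, the correlation can be as simple as a weighted sum over the indicators listed earlier, raising an alert once a single user activity crosses a threshold; the class name, weights, indicator labels, and threshold below are all illustrative assumptions:

import java.util.List;
import java.util.Map;

public class ThreatScorer {

	// illustrative weights per indicator; real values would be tuned from historical data
	private static final Map<String, Integer> WEIGHTS = Map.of(
			"failedLogins", 15,
			"serviceAccountAbuse", 30,
			"unusualAccessRequest", 20,
			"largeDataTransfer", 25,
			"offHoursActivity", 10);

	private static final int ALERT_THRESHOLD = 50;

	// sums the weights of the indicators observed for a single user activity
	public int score(List<String> observedIndicators) {
		return observedIndicators.stream()
				.mapToInt(indicator -> WEIGHTS.getOrDefault(indicator, 0))
				.sum();
	}

	// a score at or above the threshold should be routed to the security team
	public boolean shouldAlert(List<String> observedIndicators) {
		return score(observedIndicators) >= ALERT_THRESHOLD;
	}

	public static void main(String[] args) {
		ThreatScorer scorer = new ThreatScorer();
		// failed logins plus a large transfer: 15 + 25 = 40, below the threshold
		System.out.println(scorer.shouldAlert(List.of("failedLogins", "largeDataTransfer")));
		// adding service account abuse pushes the score to 70 and triggers an alert
		System.out.println(scorer.shouldAlert(List.of("failedLogins", "largeDataTransfer", "serviceAccountAbuse")));
	}
}

In practice, the weights would come from historical data or an ML model, and the score would be combined with the user's credentials and role context before paging the security team.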

Using this type of analytics is still new to the industry, but early implementations have been successful in helping companies gain an edge over their rivals.

Vendors are starting to offer custom Security Risk Management solutions that include:

  • Behavior analytics
  • Threat intelligence
  • Anomaly detection
  • Predictive alerts

Statistics on Insider Threats

33% of organizations have faced an insider threat incident. (Source: SANS)

Two out of three insider incidents happen from contractor or employee negligence. (Source: Ponemon Institute)

69% of organizations have experienced an attempted or successful threat or corruption of data in the last 12 months. (Source: Accenture)

It takes an average of 72 days to contain an insider threat.

Take a Proactive Approach to Insider Threats

Using historical data can help you quickly build risk profiles for each of your users. Mapping their daily interactions with the data you manage will show you where the high-risk profiles are, allowing you to proactively engage in the areas of biggest concern.

Although any point in the network poses a risk, elevated access rights have the highest potential for abuse. Implementing key indicator monitoring on these user profiles with active directory policies will reduce the amount of risk you face.

Auditing exiting employees, ensuring their credentials are revoked, and making sure they do not leave with company data is also vital. Nearly 70% of outgoing employees admit to taking some data with them out the door; if their credentials are also left intact, you may as well leave the door open for them. Privileged access management is a great way to manage user access.

Although unintended insider threats remain the biggest concern, it’s the malicious ones that can cause the worst disaster.
